OpenVZ is a lightweight, secure, high-performance virtualization system for Linux. See also the [http://openvz.org OpenVZ Website].

=== Introduction ===
(Broomfield, 03/14/2013)

OpenVZ is an open-source virtualization system for Linux. It is kernel- and container-based, not hypervisor-based: the kernel is shared between the host and its guests. Guests may run any Linux distribution compatible with the kernel running on the host.

The major benefits of OpenVZ are:
* Low overhead - almost no memory overhead for the host or guests
* High performance - disk and I/O throughput are approximately the same as on bare metal
* Security of host and guests
* Easy creation and teardown of instances, with networking, in seconds and with only a few commands
* Creation, loading, testing, and destruction of instances are easily scripted for testing software and systems
* With 'expect', scripting can easily be extended to run root commands inside the instance

The major limitations of OpenVZ are:
* All guests must run Linux
* All guests share the kernel used by the host - you can't experiment with different kernels for different guests

How many containers? You can have at least 20. For more, you'll need to dig in deeper than this wiki page.

The Systems and Tools group is using OpenVZ in the lab and on various projects. In most cases, OpenVZ is being run on CentOS 6.3, but in some cases it runs on various versions of Fedora. The author is running OpenVZ on a 2.4 GHz quad-processor machine with 8 GB of RAM, which can easily support 20 instances that are not RAM-greedy and still run applications on the host.

==== Beware... of the OpenVZ PDF Doc ====
The PDF manual available from the OpenVZ website is very out of date as of this writing.

==== Uncategorized Factoids ====
Container numbers (CT or CTID) are mapped to the host system's UIDs (user IDs). CTIDs 0-100 are reserved, so container IDs start at 101 and increase from there.
Since I have installed only on clean systems, I can't say what happens if you try to create a container with an ID number that already exists as a host UID.

==== The Author ====
This page was, at one point, posted on multiple wikis. This (wa2iac.com) is now the authoritative version (2/18/2015). The author is the award-winning super-dude known across the Galaxy and with callsign WA2IAC.

=== Quick-Start User Guide ===
The good news is that routine tasks like creating, starting, stopping, and destroying a container - along with administering it - are all very easy and require very little typing in OpenVZ. Network setup for containers providing services to the network is automatic. This implies that you must be disciplined in allocating and documenting the IPs used by containers. With power comes the requirement for responsibility!

==== Create a Container ====
===== Simple Example =====
Become root on the host machine. Once the defaults are set, creating a container is as simple as:
<pre>
# vzctl create 101
# vzctl start 101
</pre>
... where '101' is the CT number, which must be greater than 100. To admire your new creation:
<pre>
# vzlist
      CTID      NPROC STATUS    IP_ADDR         HOSTNAME
       101         17 running   -               -
</pre>
In the future, you'll want to use the vzlist command to help you pick an unused CTID. To "login" to your new creation (as root):
<pre>
# vzctl enter 101
entered into CT 101
# exit
logout
exited from CT 101
</pre>
Before we move on... where do the files go? Run 'ls /vz/private' and you'll see a directory "101"; that's where the file structure for container #101 lives on the host machine.

Now, let's destroy that container, as it was just a simple example. A running container must be stopped before it can be destroyed:
<pre>
# vzctl stop 101
# vzctl destroy 101
</pre>
Pretty simple! Now let's start flipping switches and twisting knobs...
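The advice above about using vzlist to pick an unused CTID is easy to script. Here is a minimal sketch; the helper name (next_ctid) is the author's invention, and it reads a column of CTIDs (as produced by 'vzlist -a -H -o ctid') on stdin:

```shell
# next_ctid: read existing CTIDs (one per line) on stdin and print an
# unused one. CTIDs 0-100 are reserved, so an empty list yields 101.
next_ctid() {
    awk 'BEGIN { max = 100 }
         NF { if ($1 + 0 > max) max = $1 + 0 }
         END { print max + 1 }'
}

# As root on the host, feed it the CTID column of vzlist:
#   CTID=$(vzlist -a -H -o ctid | next_ctid)
#   vzctl create "$CTID"
```

This simply takes the highest existing CTID plus one; a gap-filling strategy would also work, but sequential IDs are easier to document.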
===== A More Complex Example =====
To specify the distro and config of the CT, first view the config templates and distro templates available:
<pre>
# ls /etc/sysconfig/vz-scripts
# ls /vz/template/cache
</pre>
Select a template and distro, then create the CT (container) with them as follows:
<pre>
# vzctl create 101 --ostemplate centos-6-x86_64 --config basic
# vzctl start 101
# vzctl enter 101
</pre>
Note that '.tar.gz' was not included in the ostemplate specification, and for the config arg, the "ve-" prefix and "-sample" suffix were not included. Note also that 101 is the CT ID, and it's above 100. All CT numbers of 100 and below are reserved!

The defaults for these parameters can be set in /etc/sysconfig/vz, so the host system administrator should set appropriate defaults there.

And what about that '--config' arg? Look in ''/etc/vz/conf'' to see what's available. It's a link to ''/etc/sysconfig/vz-scripts'', as mentioned above. You can start with one of those templates and create your own.

Getting on the network isn't too hard. The key command is:
<pre>
# vzctl set 101 --ipadd 1.2.3.4 --nameserver 5.6.7.8 --save
</pre>

===== Example Demonstrating Configuration Controls =====
Here is a more practical example, demonstrating some of the controls available. If you're going to rebuild the server, make a script. To roll the same basic config over and over, use command-line substitution so the container ID can be specified as an argument.
<pre>
vzctl create 103 --ostemplate centos-6-x86_64 --config basic
vzctl set 103 --ipadd 10.1.38.45 --nameserver 10.63.255.1 --save
vzctl set 103 --ram 10G --onboot yes --save
vzctl set 103 --cpus 4 --save
vzctl set 103 --diskspace 20G --save
vzctl start 103
vzctl enter 103
</pre>

===== Networking Hints =====
Remember to provide connectivity for the IP address aliases you are creating on the physical host. While experimenting, you may wish to turn off iptables to avoid frustration. Don't forget to create rules and turn it on again soon!
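The configuration-controls example above can be wrapped in a small script that takes the container ID and IP as arguments, as the text suggests. This is a sketch only: the helper names (valid_ctid, make_ct) are the author's invention, and the template, resource limits, and nameserver are carried over from the example - adjust to taste.

```shell
# valid_ctid: accept only numeric container IDs above the reserved 0-100 range.
valid_ctid() {
    case "$1" in
        ''|*[!0-9]*) return 1 ;;
    esac
    [ "$1" -gt 100 ]
}

# make_ct: roll the same basic config for any CTID/IP pair.
make_ct() {
    ctid=$1 ip=$2
    valid_ctid "$ctid" || { echo "CTID must be a number above 100" >&2; return 1; }
    vzctl create "$ctid" --ostemplate centos-6-x86_64 --config basic
    vzctl set "$ctid" --ipadd "$ip" --nameserver 10.63.255.1 --save
    vzctl set "$ctid" --ram 10G --onboot yes --save
    vzctl set "$ctid" --cpus 4 --save
    vzctl set "$ctid" --diskspace 20G --save
    vzctl start "$ctid"
}

# As root on the host:
#   make_ct 103 10.1.38.45
```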
=== Installation on CentOS 6.3 ===
This section describes the installation process for CentOS 6.3.

==== Install Repos ====
The OpenVZ repos have been restructured in recent releases and made much simpler. There's not much advantage to establishing a local repo. Download the openvz.repo file into your /etc/yum.repos.d/ directory, and import the OpenVZ GPG key used for signing RPM packages. This can be achieved with the following commands, as root:
<pre>
wget -P /etc/yum.repos.d/ http://download.openvz.org/openvz.repo
rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ
</pre>

==== Install Kernel ====
Run the following command:
<pre>
# yum install vzkernel
</pre>
Before answering 'y', check that the arch is correct. If there is an issue, consider the "yum install [o]vzkernel[-flavor]" variants.

==== Configuring ====
Please make sure the following steps are performed before rebooting into the OpenVZ kernel.

===== /etc/sysctl.conf =====
There are a number of kernel parameters that should be set for OpenVZ to work correctly. These parameters are stored in the /etc/sysctl.conf file. Here are the relevant portions of the file; please edit accordingly.
<pre>
# On Hardware Node we generally need
# packet forwarding enabled and proxy arp disabled
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.default.proxy_arp = 0

# Enables source route verification
net.ipv4.conf.all.rp_filter = 1

# Enables the magic-sysrq key
kernel.sysrq = 1

# We do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0
</pre>

===== CentOS 6.4 sysctl.conf Example =====
Here is an example of a currently used CentOS 6.4 sysctl.conf file, but don't just paste this in blindly! YMMV! This is intended only as an example of a completed edit.
<pre>
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled.
</pre>
<pre>
# See sysctl(8) and sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.default.proxy_arp = 0

# We do not want all our interfaces to send redirects
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.all.send_redirects = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 1

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# Controls the default maximum size of a message queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
</pre>

===== SELinux Configuration =====
SELinux should be disabled. To that effect, put the following line in /etc/sysconfig/selinux:
<pre>
SELINUX=disabled
</pre>

==== Reboot into OpenVZ kernel ====
Grub has automatically set up the OpenVZ kernel as the kernel to boot. The pre-existing kernel has been demoted but is still present and bootable; see /boot/grub/grub.conf. Now reboot the machine and choose "OpenVZ" on the boot loader menu. If the OpenVZ kernel has booted successfully, proceed to installing the user-level tools for OpenVZ.

==== Installing the utilities ====
OpenVZ needs some user-level tools installed.
Those are:
* '''vzctl''': a utility to control OpenVZ containers (create, destroy, start, stop, set parameters, etc.)
* '''vzquota''': a utility to manage quotas for containers; mostly used indirectly (by vzctl)
<pre>
# yum install vzctl vzquota
</pre>
If on the x86_64 platform, you would probably want to:
<pre>
# yum install vzctl.x86_64 vzquota.x86_64
</pre>
When all the tools are installed, start the OpenVZ subsystem.

==== Starting OpenVZ ====
As root, execute the following command:
<pre>
# /sbin/service vz start
</pre>
This will load all the needed OpenVZ kernel modules. This script also starts all containers marked to be auto-started on machine boot (there aren't any yet). During the next reboot, this script should be executed automatically.

==== Installing OS template caches ====
An OS template cache is a Linux distribution installed into a container and then packed into a gzipped tarball. Using such a cache, a new container can be created in a matter of minutes. Download precreated template caches from Downloads » Templates » Precreated, directly from download.openvz.org/template/precreated, or from one of the mirrors. Another possible source is another OpenVZ host nearby. You probably don't need them all, so you may want to be selective. Put those tarballs as-is (no unpacking needed) into the /vz/template/cache/ directory (for Debian, this is /var/lib/vz/template/cache/). Here's an example to get a Fedora template onto OpenVZ running on CentOS (note that wget cannot expand wildcards over HTTP, so specify the exact tarball name as listed on the download server):
<pre>
$ cd /vz/template/cache
$ wget http://download.openvz.org/template/precreated/fedora-17-x86_64.tar.gz
</pre>

==== Next Steps ====
OpenVZ is now set up on your machine. To load the OpenVZ kernel by default, edit the "default" line in the /boot/grub/grub.conf file to point to the OpenVZ kernel. For example, if the OpenVZ kernel is the first kernel mentioned in the file, set "default=0". See man grub.conf for more details.
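As noted in the complex-example section, the ostemplate argument passed to vzctl is the cache filename minus its '.tar.gz' suffix, so a tiny helper keeps downloading and container creation consistent. A sketch (the helper name is the author's invention, and the template filename is only an example of what the precreated directory offers):

```shell
# template_name: map a template cache tarball to the name vzctl expects,
# i.e. strip any leading path and the trailing '.tar.gz'.
template_name() {
    basename "$1" .tar.gz
}

# Example use (as root, paths per this guide; Debian uses
# /var/lib/vz/template/cache instead):
#   cd /vz/template/cache
#   wget http://download.openvz.org/template/precreated/centos-6-x86_64.tar.gz
#   vzctl create 101 --ostemplate "$(template_name centos-6-x86_64.tar.gz)"
```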
==== Source Reference(s) ====
The instructions provided here are specifically for CentOS 6.3 using yum with connectivity to the public Internet. For other situations, refer to the source: http://openvz.org/Quick_installation

=== Live Migration ===
OpenVZ takes the operational advantages of virtualization to a new level by allowing live migration of virtual guests from one host to another without so much as dropping TCP connections. This gives you total freedom to do whatever you want with hardware without impacting operations (if you have enough hardware!). This has been successfully tested and utilized in the lab. The entire process - movement of filesystems, all configuration, network configuration, and the special measures for live migration - is done with a single command and takes about a minute.

Here are the short instructions for doing a live migration, using two example hosts:
* OpenVZ host 1: server.example.com, IP address 192.168.0.100
* OpenVZ host 2: server2.example.com, IP address 192.168.0.101

To verify the "liveness" of the migration: in a terminal window, log into the virtual machine and run a process such as "ping google.com" or "top" so the guest's connectivity can be monitored throughout the migration.

Not outlined here: the source host must have ssh keys for passwordless root login to the destination (at least for the duration of the migration, which is about a minute). Also, the "modprobe" command may be needed to load OpenVZ modules that are not yet loaded and are only needed for migration. See the source documentation: http://www.howtoforge.com/how-to-do-live-migration-of-openvz-containers

Log on to the source host as root:
<pre>
# vzmigrate --online {ip address} {guest ID}
</pre>
That's it! The guest is moved, and its remains on the source server are deleted and cleaned up. The guest disappears from 'vzlist' on the source host and reappears in 'vzlist' on the target host.

=== Advanced Configuration ===
This section does not pretend to be comprehensive.
At this point, it's assumed the reader has taken a look at the OpenVZ website. The author is simply sharing information about issues encountered here.

Any network configuration that requires bridging will need access to the 'brctl' command, which is contained in the 'bridge-utils' package:
<pre>
# yum install bridge-utils
</pre>

==== Allowing Containers to Grab IPs via DHCP ====
This requires installing a bridge so that the CT (container) can issue a DHCP request and receive a lease. The procedure is abstracted from information available on the OpenVZ site. Source information: http://openvz.org/Common_Networking_HOWTOs#DHCP_supplied_addresses

==== Contrib Distros ====
There are "minimal" configuration distros that may be of use, available in the 'contrib' distro directory. Many of those in the RedHat family do not include an install of 'yum'.
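As a rough sketch of the bridge setup that the DHCP section above points to: the bridge and interface names (vzbr0, eth0, veth101.0) are assumptions for illustration, and the linked HOWTO remains the authoritative procedure. Beware that moving the host's uplink into a bridge will interrupt host networking unless the host's IP is moved to the bridge.

```shell
# Give CT 101 a veth interface; the host-side device appears as veth101.0.
vzctl set 101 --netif_add eth0 --save

# Build a bridge on the host and join the uplink and the veth to it.
brctl addbr vzbr0
brctl addif vzbr0 eth0
brctl addif vzbr0 veth101.0
ifconfig vzbr0 up

# Inside the container, request a lease as usual:
#   vzctl enter 101
#   dhclient eth0
```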