Session on containers at NRSC, Shadnagar

Table of Contents

1 Introduction

This document contains reference material for a session on containers at NRSC, Shadnagar, by Saurabh Barjatiya. The goal of the session is to explain containers theoretically, followed by a hands-on demo/lab session.

2 Containers vs. VMs

Most people are now aware of Virtual Machines (VMs). Hence, it is easier to understand containers by comparing them with VMs.

| No. | Feature | Containers | VMs | Remarks |
|-----+---------+------------+-----+---------|
| 1 | Fully isolated OS where root or administrator privileges can be given without risk to other guests (containers or VMs) | Yes | Yes | |
| 2 | Light weight | Yes | No | Approx. 10 containers = 1 VM |
| 3 | Share resources (CPU, RAM, disk, etc.) with host and other guests | Yes | Yes | |
| 4 | Allotted resources (esp. RAM, disk) are reserved and cannot be used by other guests or host | No | Yes | For VMs, assuming thick disks |
| 5 | Any OS including Windows can be run in guest | No | Yes | |
| 6 | Multiple kernels are run and kernel versions can differ from the host | No | Yes | |
| 7 | There is overhead of virtualization as virtual hardware needs to be emulated | No | Yes | |
| 8 | Time taken to create from existing image, in seconds | 3-4 | 180-240 | On a normal laptop |
| 9 | Typical boot time in seconds | 5-10 | 45-60 | OS dependent |
| 10 | Can be a full network citizen including its own MAC, IP address and routing | Yes | Yes | Both use software bridges |
| 11 | Any services can be hosted | Depends | Mostly | Complex services which require creation of devices (eg VPN) or direct access to hardware (eg GPU) are either trickier or not possible in containers |
| 12 | Security isolation between root user of host and root users of guests is adequate | No | Yes | Assuming root user of host is not directly accessing bridge or guest disk files at base level |

3 Different types of containers

3.1 OpenVZ

OpenVZ is a very feature-rich container system developed almost 10 years ago. OpenVZ requires a specific kernel to be installed, which is available only for CentOS 6. Since OpenVZ needs kernel modifications, it was not carried forward to CentOS 7: there are security concerns with such modifications, as well as the maintenance overhead of porting OpenVZ to each new kernel version.

For OpenVZ how-to's please refer to

Note that this would require CentOS 6. It is not recommended to use OpenVZ for any new production work; it should be consulted only to see how OpenVZ containers used to work in the past.

3.2 lxc

Since containers (or Solaris zones) were fairly successful, it was decided to add these features to the mainstream Linux kernel. chroot jails already isolate processes at the file-system level. Mechanisms to limit CPU and RAM for isolated processes are provided by cgroups. Network isolation can be done using software bridges. Process and device isolation can be done by having a separate /dev and /proc which are not identical to the host's /dev and /proc.
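These kernel building blocks can be inspected on any modern Linux host without root. A minimal read-only sketch (paths assume a typical Linux with /proc and cgroups mounted):

```shell
# Read-only look at the isolation primitives lxc combines (no root needed):
ls /proc/self/ns               # namespaces of the current process: mnt, pid, net, uts, ipc, ...
ls /sys/fs/cgroup 2>/dev/null | head   # cgroup hierarchy used to limit CPU, RAM, etc.
```

Every process already belongs to one namespace of each kind; a container is simply a process tree placed in fresh namespaces with cgroup limits applied.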

Thus, by a combination of techniques we can create an isolated process which has CPU, RAM, etc. limits and its own network stack. To facilitate this, the kernel was also extended (similar to what the OpenVZ modifications did) to support multiple guests (containers) in terms of open files, process trees, network stacks, etc., giving the illusion of a fully independent guest OS.
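lxc 1.x exposes such cgroup limits directly in the per-container config file. An illustrative sketch (the keys are lxc 1.x config options; the limit values are arbitrary examples):

```shell
# Illustrative lines in /var/lib/lxc/<container-name>/config:
lxc.cgroup.memory.limit_in_bytes = 512M   # cap container RAM at 512 MB
lxc.cgroup.cpu.shares = 512               # half the default CPU weight (1024)
lxc.cgroup.cpuset.cpus = 0-1              # restrict container to CPUs 0 and 1
```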

Unlike chroot, where we execute a specific process inside the chroot jail, in the case of containers the entire OS filesystem is available under the container root (/) folder and the container is started using its init process. Thus, containers are very close to a fully independent OS on a host or VM.

lxc works on all mainstream Linux flavors. lxc on top of CentOS 7 will be used for the hands-on demo/lab.

3.3 docker

docker capitalizes on containers to promote a micro-service architecture. This encourages setting up separate micro-services for each application. Each of these micro-services can run in a different docker container.

docker initially used lxc. Newer versions of docker have their own container abstraction library.

docker also exploits the fact that the initial container OS images are all the same. The OS is then modified by running certain commands or by changing the file-system structure. docker keeps a record of these commands or file-system changes in a diff format. This diff, along with the original OS image, is enough to recreate the modified container image.
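The diff-based layering idea can be sketched with plain files and the standard diff/patch tools. This is only an analogy, not docker's actual image format; all file names below are made up:

```shell
work=$(mktemp -d); cd "$work"          # scratch area for the sketch
mkdir -p base modified
echo "app v1" > base/app.conf          # the shared "base image"
cp base/app.conf modified/app.conf
echo "tuned=yes" >> modified/app.conf  # changes made on top of the base
# Record only the changes, docker-layer style (diff exits 1 when files differ):
diff -ruN base modified > layer.patch || true
# Base + diff is enough to rebuild the modified image:
cp -r base rebuilt
patch -s -p1 -d rebuilt < layer.patch
diff -r rebuilt modified && echo "rebuilt matches modified"
```

Shipping only `layer.patch` instead of a full copy of `modified` is what keeps a repository of many similar images small.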

This allows docker to keep a repository of a large number of container images in a small space, as only diff information is stored. docker also provides an API and server infrastructure to upload these images to a central server and then download them on other hosts.

This repository approach allows building a software marketplace of docker images where software comes pre-installed and pre-configured. docker users can simply specify the software name and get one or more docker containers which together allow use of the software without having to install or configure it. This is similar to the appliance model of sharing VM images with pre-installed software.

For docker how-to's please refer to

Please note that in the above how-to's docker containers typically run only one process; they do not run a full-fledged OS, in contrast to OpenVZ or lxc. These docker containers also do not have a full network stack. Port forwarding is used as part of docker commands to allow container processes to talk to each other and to the base host.

4 Hands on exercise/demo on lxc

4.1 Network bridging

Software bridges are necessary for lxc containers to become full network citizens. To create a software bridge (eg br0) associated with a physical interface (eg eth0), refer to the Network Bridging KB article.
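On CentOS 7 the bridge is typically defined through ifcfg files. An illustrative sketch (the interface names and IP address are examples, adjust to your network):

```shell
# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-eth0 (physical NIC enslaved to br0)
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0
```

After editing these files, restart networking for br0 to take over the IP previously held by eth0.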

The same bridge can also be used by VMs. For other types of networking please refer to

4.2 Installing lxc

Install lxc using:

yum -y install epel-release
yum -y install debootstrap perl libvirt libcap-devel libcgroup wget bridge-utils
yum -y install lxc lxc-templates lxc-extra   
systemctl start lxc.service
systemctl enable lxc.service
systemctl status lxc.service   

4.3 Create lxc containers

After lxc is installed containers can be created and started using:

lxc-create -n centos1 -t centos
cat /var/lib/lxc/centos1/tmp_root_pass   

vim /var/lib/lxc/centos1/config  
#Update lxc.include = /usr/share/lxc/config/centos.common.conf to fedora.common.conf

lxc-start -n centos1 -d
lxc-console -n centos1 -t 0

lxc-stop -n centos1
lxc-clone centos1 centos2  #Container must be stopped
lxc-destroy -n centos1  #Container must be stopped

For more detailed container creation information, including creating debian or ubuntu containers, refer to

4.4 Take lxc container snapshot

4.5 Attach lxc container to network bridge

For attaching an lxc container to a network bridge, refer to
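The attachment is done via the network stanza of the container's config file. An illustrative lxc 1.x sketch (the bridge name br0 is an example):

```shell
# In /var/lib/lxc/<container-name>/config:
lxc.network.type = veth        # virtual ethernet pair between host and container
lxc.network.link = br0         # host-side end is plugged into bridge br0
lxc.network.flags = up         # bring the interface up on container start
lxc.network.hwaddr = 00:16:3e:xx:xx:xx  # lxc replaces each x with a random hex digit
```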

4.6 Work with networked containers

Perform at least the following operations on a networked container:

  1. Check container ip using 'ip addr show'
  2. Verify that sshd is running using 'systemctl status sshd'
  3. SSH to container from other terminal and verify that network connections work as expected.
  4. Install and run web server using:
    yum -y install httpd

    The above installation will most likely fail. Then refer to and fix the issue

    Then create a test index.html file and start the web server using:

    echo "Basic web server" > /var/www/html/index.html
    systemctl start httpd

    Finally, access the page from a web browser on the base host using the container IP.

    Do this over the ssh connection and not on the console; console copy-paste might have issues.

  5. You might also want to learn about 100% CPU usage issue linked at
  6. Install and run mariadb using:
    yum -y install mariadb
    yum -y install mariadb-server
    systemctl start mariadb
    mysql
    > create database test1;
    > use test1;
    > create table test2 (name varchar(50), mobile varchar(50));
    > insert into test2 values('saurabh','12345 67890');
    > select * from test2;
    > \q
    shutdown -h now
  7. From the base host try:
    lxc-info -n <container-name>
  8. Clone container and verify table in cloned container using:
    lxc-clone <container-name> <clone-name>
    lxc-start -n <clone-name> -d
    lxc-console -n <clone-name> -t 0
    #Login with same username and password as original container
    systemctl start mariadb
    mysql
    > use test1;
    > select * from test2;
    > \q
    Ctrl-a q  #Detach from console
  9. Take container snapshot and restore it later to validate snapshot feature:
    lxc-snapshot -n <container-name>  #Container must be stopped
    ls /var/lib/lxcsnaps/
    ls /var/lib/lxcsnaps/<container-name>
    cat /var/lib/lxcsnaps/<container-name>/config
    cat /var/lib/lxcsnaps/<container-name>/ts
    lxc-start -n <container-name> -d
    lxc-console -n <container-name> -t 0
    systemctl start mariadb
    mysql
    > drop database test1;
    > show databases;
    > \q
    shutdown -h now
    lxc-snapshot -L -n <container-name>
    lxc-snapshot -r snap0 -n <container-name>
    lxc-start -n <container-name> -d
    lxc-console -n <container-name> -t 0
    systemctl start mariadb
    mysql
    > show databases;
    > use test1;
    > select * from test2;
    > \q
    shutdown -h now
  10. List all containers using:

    lxc-ls

    and list only running containers using:

    lxc-ls --running
  11. Look at lxc container usage using:
  12. Look at information using:
    lxc-info -n <container-name>

    for both running and stopped containers

  13. Try to freeze container using:
    lxc-freeze -n <container-name>
    ping <container-ip>
    ssh <container-ip>
    #from other terminal
    lxc-console -n <container-name> -t 0
    #from other terminal
    lxc-unfreeze -n <container-name>

Date: 2018-05-23 Wed

Author: Saurabh Barjatiya
