Containers in UEC
Support containers as a type of host in UEC. This should be useful for Ensemble, for
testing UEC under EC2, and for implementing very light-weight clusters on older hardware.
Blueprint information
- Status: Complete
- Approver: Robbie Williamson
- Priority: High
- Drafter: Chuck Short
- Direction: Approved
- Assignee: Chuck Short
- Definition: Approved
- Series goal: Accepted for natty
- Implementation: Implemented
- Milestone target: natty-alpha-3
- Started by: Chuck Short
- Completed by: Robbie Williamson
Whiteboard
== UDS Session ==
Use cases:
* support old hardware
* support testing with ensemble
* support on arm
* high density
Challenges:
* libvirt containers can't be killed? (verify)
* (scott) the same image should work on both a container and a native host
* we'll need foundations support
* (should have been brought up in fine-tuning session)
* Look at /var/lib/
* udev doesn't work in a container. No one is quite sure why. Scott recalls that it has to do with a block device not needing to be mounted, so some required setup that udev needs to do doesn't get done.
* The lucid kernel did not support tagged sysfs, so all containers saw the host's network interfaces in sysfs. Udev must not run in a container on a lucid host.
* As of maverick, udev may safely run inside a container.
* If you plug in a usb stick, the container's udev will try to auto-mount it. With the default devices cgroup settings in lxc, it will not be allowed to.
* When the container boots, the default udev logging settings will cause some error messages to be printed on the container's console. These can be safely ignored. They can be turned off in udev's config.
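The devices cgroup restriction mentioned above comes from the container's lxc configuration, which typically denies all devices and then whitelists a handful of character devices. A USB stick's block device is not on the allow list, so the automount attempt fails. A rough sketch of such default entries (container name and exact device list are illustrative, not a verbatim copy of the shipped template):

```
# /var/lib/lxc/<name>/config (illustrative)
lxc.cgroup.devices.deny  = a           # deny all devices by default
lxc.cgroup.devices.allow = c 1:3 rwm   # /dev/null
lxc.cgroup.devices.allow = c 1:5 rwm   # /dev/zero
lxc.cgroup.devices.allow = c 1:9 rwm   # /dev/urandom
lxc.cgroup.devices.allow = c 5:0 rwm   # /dev/tty
lxc.cgroup.devices.allow = c 5:1 rwm   # /dev/console
lxc.cgroup.devices.allow = c 136:* rwm # /dev/pts/*
```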
ACTION: For the Natty cycle, have udev's startup configuration detect the kernel version and start udev only if it is new enough.
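The check this action item proposes could be sketched in shell along these lines. The 2.6.35 cutoff is an assumption standing in for "the first kernel with tagged sysfs"; the real job would use whichever version the kernel team confirms.

```shell
#!/bin/sh
# Start udev only if the running kernel is new enough for tagged
# sysfs. The required version below is an assumption for illustration.
required=2.6.35
current=$(uname -r | cut -d- -f1)
# sort -V orders version strings; if the smaller of the two is the
# required version, the current kernel is at least that new.
oldest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$oldest" = "$required" ]; then
    echo "kernel $current: starting udev"
else
    echo "kernel $current: too old, not starting udev in a container"
fi
```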
ACTION: Find out why the default upstart config files don't work and come up with some that do. The objective is to have one set of conf files that covers both metal and container use cases.
* the container gets launched with the base fs (plus devpts, I believe) mounted, so some things simply don't need to run.
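One way to get a single set of conf files covering both metal and container use, per the note above, is for the job itself to become a no-op when its work is already done, rather than shipping container-specific variants. A hypothetical upstart job following that pattern (job name, condition, and the use of init's environment to detect lxc are all assumptions):

```
# /etc/init/example-mount.conf (hypothetical)
description "mount extra filesystems, unless already provided"
start on startup
task
script
    # In an lxc container the base filesystems arrive pre-mounted,
    # so there is nothing left for this job to do.
    grep -q container= /proc/1/environ && exit 0
    mount -a
end script
```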
ACTION: Code and test patches for Eucalyptus. This involves both adding LXC as a target and adding the parametric data that would allow the scheduler to select targets that can run the image.
ACTION: For UEC, code and test the patches.
ACTION: Investigate increasing the UEC VM-per-processor setting to some default value > 1
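For Eucalyptus-based UEC, the per-node core count that the scheduler sees is set in eucalyptus.conf on each node controller; MAX_CORES is the option name, but the value below is only an illustrative starting point for the investigation, not a recommendation:

```
# /etc/eucalyptus/eucalyptus.conf on each node controller
# Report more cores than physically present so the scheduler can
# pack more than one instance per processor (value is illustrative).
MAX_CORES="4"
```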
ACTION: Get Platform people involved in the udev problem, particularly whether udev can discover that it is running in a container and act accordingly.
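A minimal sketch of how udev (or any early-boot tool) could discover it is inside a container, assuming the lxc convention of placing a `container=lxc` variable in the container init's environment:

```shell
#!/bin/sh
# /proc/1/environ is NUL-separated; convert NULs to newlines and look
# for the "container" variable that lxc sets for the container's init
# (an lxc convention; treated as an assumption here).
if tr '\0' '\n' < /proc/1/environ 2>/dev/null | grep -q '^container=lxc$'; then
    echo "running inside an lxc container"
else
    echo "running on the metal (or container type unknown)"
fi
```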
Work Items alpha-3:
[zul] Submit UEC containers blueprint for inclusion into OpenStack: DONE
[zul] Hook in connection for lxc to OpenStack: DONE
[zul] Setup the base for the container: DONE
[zul] Setup the EC2 image as a container: DONE
[zul] Configure basic LXC container networking: DONE
[zul] Start LXC container in OpenStack: DONE
[zul] Stop LXC container in OpenStack: DONE
[zul] Write testsuite for LXC container in OpenStack: DONE
[zul] Submit upstream into OpenStack: DONE