Juju: automated testing of charms
Charms need to be tested by an automated test runner on each commit to the charm store. Each charm's use of each interface must be tested against any charm that implements the other side of that interface.
Blueprint information
- Status:
- Complete
- Approver:
- Antonio Rosales
- Priority:
- High
- Drafter:
- None
- Direction:
- Needs approval
- Assignee:
- Clint Byrum
- Definition:
- Approved
- Series goal:
- Accepted for precise
- Implementation:
- Implemented
- Milestone target:
- ubuntu-12.04-beta-2
- Started by:
- Clint Byrum
- Completed by:
- Robbie Williamson
Whiteboard
Status: Spec nearing final approval. Merge proposals submitted against juju to implement some parts of the wrapper inside juju itself: http://
Work Items:
[clint-fewbar] write spec for charm testing facility: DONE
[mark-mims] implement specified testing framework (phase1): DONE
implement specified testing framework (phase2) (blocked on merge of http://
[mark-mims] deploy testing framework for use with local provider (all phase1 tests and charm graph tests are done w/ local provider): DONE
[mark-mims] deploy testing framework for use against ec2 (this is an out-of-cycle activity): POSTPONED
[mark-mims] deploy testing framework for use against canonistack (this is an out-of-cycle activity): POSTPONED
deploy testing framework for use against orchestra (managing VMs instead of machines): POSTPONED
write charm tests for mysql: POSTPONED
[clint-fewbar] write charm tests for haproxy: POSTPONED
[clint-fewbar] write charm tests for wordpress: POSTPONED
[mark-mims] write charm tests for hadoop: POSTPONED
[james-page] add openstack tests: DONE
[mark-mims] jenkins charm to spawn basic charm tests: DONE
[mark-mims] basic charm tests... just test install hooks for now: DONE
Session notes:
Requirements of automated testing of charms:
* LET'S KEEP IT SIMPLE! :-)
* Detect breakage of a charm relating to an interface
* Identification of individual change which breaks a given relationship
* Maybe implement tests that mock a relation to ensure implementers are compliant
* Test dependent charms when a provider charm changes
* Run tests NxN across providers and requirers so all permutations are sane (_very_ expensive, probably impossible)
* Run testing against multiple environment providers (EC2/OpenStack/
* Notify maintainers when a charm breaks, rather than making them poll
* Verify idempotency of hooks
* Tricky to _verify_, and not an enforced convention at the moment, so it is not clear this can be checked automatically
* be able to specify multiple scenarios
* Functional tests in fact exercise multiple charms. Should they sit within the
charms, or outside, since they exercise the whole graph?
* The place for these composed tests seems to be the stack
* As much data as possible should be collected about the running tests so that a broken
charm can be debugged and fixed.
* Provide rich artifacts for failure analysis
* Ideally tests will be run in "lock step" mode, so that breaking charms can be individually
identified, but this is hard because changes may be co-dependent
* It would be nice to have interface-specific tests that can run against any charms that
implement such interfaces. In addition to working as tests, this is also a pragmatic
way to document the interface.
* support gerrit-like topics? (What's that? :-) i.e., change-sets (across different branches)
* We need a way to know which charms trigger which tests
* Keep it simple
* James mentioned he'd like to have that done by Alpha 1 (December) so that he can take
that into account for the OpenStack testing effort.
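The idempotency requirement above can at least be smoke-tested by running a hook twice and requiring both runs to succeed. A minimal sketch, assuming a standard charm hook layout; the charm name "examplecharm" and the hook's contents are illustrative, not a real charm:

```shell
# Sketch: verify a hook is safe to re-run by executing it twice.
# "examplecharm" and its install hook are illustrative stand-ins.
set -e
mkdir -p examplecharm/hooks
cat > examplecharm/hooks/install <<'EOF'
#!/bin/sh
# An idempotent install hook: re-creating the marker is harmless.
mkdir -p ./examplecharm-state
touch ./examplecharm-state/installed
EOF
chmod +x examplecharm/hooks/install

examplecharm/hooks/install   # first run
examplecharm/hooks/install   # second run must also exit zero
echo "install hook survived a second run"
```

Note this only catches hooks that crash on re-run; it cannot prove the hook left the system in an identical state, which is why the notes above call full verification tricky.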
ACTIONS:
[niemeyer] spec
[james-page] add openstack tests
The proposal below was rejected as too complicated (kept for posterity).
Proposal:
Each charm has a tests directory
Under tests, you have executables:
__install__ -- test to run after charm is installed with no relations
Then two directories:
provides/
requires/
These directories have a directory underneath for each interface provided/required. Those directories contain executables to run.
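Under this proposal, a charm providing the "http" interface and requiring "mysql" might lay out its tests directory as follows (interface names and the stub __install__ script are illustrative):

```shell
# Illustrative tests/ layout for the proposed scheme; "mycharm" and the
# http/mysql interfaces are examples, not taken from a real charm.
mkdir -p mycharm/tests/provides/http
mkdir -p mycharm/tests/requires/mysql
cat > mycharm/tests/__install__ <<'EOF'
#!/bin/sh
# Runs after the charm is deployed with no relations.
# Exit non-zero to signal FAIL to the runner.
exit 0
EOF
chmod +x mycharm/tests/__install__
ls -R mycharm/tests
```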
The test runner follows this method:
    deploy charm
    wait for "installed" status
    run __install__ script; FAIL if it exits non-zero
    destroy service
    for interface in provides ; do
        calculate graph of all charms in store which require interface and all of its dependency combinations
        deploy requiring charm w/ dependencies and providing service
        add-relation between requiring/providing
        for test in provides/interface ; do
            run test with name of deployed requiring service
    for interface in requires ; do
        repeat process above with provides/requires transposed
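The provides-side loop of the runner could be sketched as below. The juju deploy and add-relation steps are elided as comments, and the charm name "samplecharm" with its single stub test are illustrative assumptions, not part of the spec:

```shell
# Minimal sketch of the provides-side runner loop from the proposal.
# Deployment steps are stubbed out; "samplecharm" is illustrative.
charm=samplecharm
mkdir -p "$charm/tests/provides/http"
printf '#!/bin/sh\nexit 0\n' > "$charm/tests/provides/http/smoke"
chmod +x "$charm/tests/provides/http/smoke"

status=PASS
for iface_dir in "$charm"/tests/provides/*/; do
    # Real runner: deploy each store charm requiring this interface
    # (with its dependencies), add-relation, then run each test with
    # the name of the deployed requiring service.
    for t in "$iface_dir"*; do
        [ -x "$t" ] || continue
        "$t" || status=FAIL
    done
done
echo "result: $status"
```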
Each commit to any branch in the charm store will queue a run with only that change applied (none made after it) and record pass/fail.