Desktop automated tests
Leverage the ATK framework to implement a set of automated test routines for the desktop.
Agenda:
* Choice of technology - Evaluate Accerciser, Macaroon, and Dogtail. Criteria: ease of use, availability (is Dogtail still broken?), and compatibility with other test projects. (A minimal Dogtail sketch follows this list.)
* Test structure - How long should a test be? How should multiple tests be chained together? (See the unittest sketch after this list.)
* Generating tests:
  * Identify 5 target apps from the default desktop, record tests for them, and clean up the results as needed. Document the steps involved.
  * Encourage community contributions following the established procedure.
  * Investigate auto-generated tests (model-based or random walk; see the random-walk sketch after this list).
* Test output
  * What should the result of a test be, and how should it be evaluated for a pass/fail rating?
  * Can we test for proper functionality at this point, or should that be phase 2?
* Policy for work-arounds
  * Some applications may require a work-around if ATK support is incomplete; such limitations should be noted, and a bug should be filed and tracked.
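As a point of reference for the technology evaluation above, a minimal Dogtail-style script is sketched below. This is an illustration only: gedit is just an example target, and the API names (run, root.application, child, typeText) follow Dogtail's documented interface but may vary between releases.

    # Minimal smoke-test sketch over the AT-SPI tree via Dogtail.
    # gedit is an illustrative target; API names assume the documented
    # dogtail interface and may differ between releases.
    from dogtail.utils import run
    from dogtail.tree import root

    run('gedit')                      # launch and wait for it on the a11y bus
    app = root.application('gedit')   # locate it in the accessible desktop tree

    # Type into the document area and verify the text actually landed.
    text_area = app.child(roleName='text')
    text_area.typeText('hello world')
    assert 'hello world' in text_area.text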
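On test structure and pass/fail evaluation, one plausible convention (a sketch, not a settled decision) is to keep each test short and independent and let a standard runner such as Python's unittest chain them and aggregate results:

    # Sketch: short, independent test cases chained by a unittest runner.
    # Pass/fail reduces to assertions against the accessible tree; the
    # runner's exit status (0 = all passed) can feed a test report.
    import unittest
    from dogtail.utils import run
    from dogtail.tree import root

    class GeditSmokeTest(unittest.TestCase):
        def setUp(self):
            run('gedit')                        # fresh app instance per test
            self.app = root.application('gedit')

        def test_window_opens(self):
            self.assertTrue(self.app.children)  # app exposed something on the bus

        def test_typing(self):
            area = self.app.child(roleName='text')
            area.typeText('abc')
            self.assertTrue('abc' in area.text)

    if __name__ == '__main__':
        unittest.main()

Keeping tests independent means a single failure does not poison the rest of the chain, and the runner's summary maps directly onto a per-test pass/fail rating.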
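For the auto-generated tests item, a random walk over the accessible widget tree could look like the sketch below. It is purely illustrative (random_walk is a hypothetical helper): a real harness would also need to dismiss dialogs, avoid destructive actions such as deleting files, and recover from crashes.

    # Illustrative random walk: repeatedly pick a visible push button
    # from the app's accessible tree, click it, and log the step.
    import random
    from dogtail.tree import root
    from dogtail.predicate import GenericPredicate

    def random_walk(app_name, steps=50):
        app = root.application(app_name)
        for i in range(steps):
            buttons = [b for b in app.findChildren(GenericPredicate(roleName='push button'))
                       if b.showing]
            if not buttons:
                break                          # nothing clickable left; stop the walk
            target = random.choice(buttons)
            print('step %d: clicking %r' % (i + 1, target.name))
            target.click()

    random_walk('gedit', steps=20)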
Blueprint information
- Status: Started
- Approver: Henrik Nilsen Omma
- Priority: High
- Drafter: None
- Direction: Approved
- Assignee: Ara Pulido
- Definition: Approved
- Series goal: Accepted for intrepid
- Implementation: Beta Available
- Milestone target: None
- Started by: Henrik Nilsen Omma
- Completed by:
Whiteboard
2007-10-09 kamion: This could also cover automated testing of desktop installs using ubiquity's automation framework and maybe VMware.
2007-11-16: Changing priority to High; it's important, but not a critical part of the release itself.
2008-06-12 heno: Assigning to Ara and targeting for Intrepid
Work Items
Dependency tree