Expanding QA community coverage
As always, we need to examine how we test and how we maintain the tools we use for testing. Let's talk about how the underlying platform is changing and what adjustments we might need to make as a team. In addition, let's talk about growing into new areas and about encouraging and working with other teams.
Blueprint information
- Status:
- Not started
- Approver:
- Jono Bacon
- Priority:
- Undefined
- Drafter:
- None
- Direction:
- Needs approval
- Assignee:
- Nicholas Skaggs
- Definition:
- New
- Series goal:
- Accepted for saucy
- Implementation:
- Unknown
- Milestone target:
- None
- Started by
- Completed by
Whiteboard
Please add notes/ideas/
Maintenance Tasks for saucy:
review testcases
do we have good testing coverage for all of our images?
do we have proper coverage for all of our default applications?
if not, do we have bugs filed for the missing pieces?
convert autopilot testcases to autopilot 1.3 (see the sketch after this list)
see this session/blueprint for more details on autopilot plans: http://
Refresh QA contact list and schedule meetups
Need to ensure all flavor QA leads meet together and understand what is available and how they can leverage and help each other
Review current toolchain
what tools do we use?
Any issues preventing us from using these tools in the future?
QATracker
TestDrive
Autopilot
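As a concrete reference for the autopilot 1.3 conversion task above, here is a minimal sketch of a testcase written against the 1.3 API (AutopilotTestCase, Eventually). The application name and widget type are hypothetical placeholders, not taken from any of our suites:

    # Minimal autopilot 1.3-style testcase sketch. 'gnome-calculator'
    # and 'GtkWindow' are placeholders for the app under test.
    from autopilot.testcase import AutopilotTestCase
    from autopilot.matchers import Eventually
    from testtools.matchers import Equals

    class ExampleAppTests(AutopilotTestCase):

        def setUp(self):
            super(ExampleAppTests, self).setUp()
            # launch_test_application starts the app and returns a
            # proxy object the test can introspect.
            self.app = self.launch_test_application('gnome-calculator')

        def test_main_window_is_shown(self):
            window = self.app.select_single('GtkWindow')
            # Eventually() retries the match until it passes or times
            # out, avoiding racy sleeps in the test.
            self.assertThat(window.visible, Eventually(Equals(True)))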
Discussion Points:
Increasing adoption of the manual testcases project (https:/
Can we push beyond our current default applications?
Can we increase the depth of our testcases?
Look at getting all default apps covered, then add depth before going beyond
Potential new areas for testing
ubuntu touch?
http://
Server
make sure the tests work, clarify ambiguous sections, and make sure they are reproducible
Potentially create automated testing? (a minimal sketch follows this list)
What do we plan to provide QA for?
Images
Packages (a specific list to focus on, or something broader?)
Hardware (HEXR plans? laptop-testing tracker plans?)
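To make the "automated testing" question for server concrete, here is a minimal smoke-test sketch, assuming a provisioned test machine reachable over the network; the hostname is a hypothetical placeholder, not part of any existing rig:

    # Minimal server smoke test: verify a freshly installed test
    # machine answers on SSH. TEST_HOST is a hypothetical placeholder.
    import socket
    import unittest

    TEST_HOST = 'server-under-test.local'
    SSH_PORT = 22

    class ServerSmokeTest(unittest.TestCase):

        def test_ssh_is_listening(self):
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.settimeout(10)
            try:
                # connect_ex returns 0 on a successful TCP connection
                self.assertEqual(sock.connect_ex((TEST_HOST, SSH_PORT)), 0)
            finally:
                sock.close()

    if __name__ == '__main__':
        unittest.main()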
Potential tools to examine/adopt
what tools might be interesting to adopt?
errors.ubuntu.com
could this help drive our focus or provide value to us?
definitely; we can focus on testing the apps with the most issues
reports.
see http://
cadence testing
how did it work out last time?
See idea below on testing new packages
We could set targets for cadence testing based on newly released packages from the -changes list, or by utilizing data from reports.
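As a rough illustration of picking cadence targets from upload activity, a script along these lines could rank packages by upload count; the mbox path and the subject-line pattern are assumptions about the -changes list format, not verified against it:

    # Rank packages by upload count from a saved -changes archive to
    # suggest cadence-testing targets. The subject pattern
    # '[ubuntu/saucy] <package> <version>' is an assumption.
    import mailbox
    import re
    from collections import Counter

    SUBJECT = re.compile(r'\[ubuntu/saucy[^\]]*\]\s+(?P<package>\S+)\s+\S+')

    def top_upload_targets(mbox_path, count=5):
        uploads = Counter()
        for message in mailbox.mbox(mbox_path):
            match = SUBJECT.search(message.get('Subject', ''))
            if match:
                uploads[match.group('package')] += 1
        return uploads.most_common(count)

    if __name__ == '__main__':
        # 'saucy-changes.mbox' is a hypothetical saved list archive
        for package, n in top_upload_targets('saucy-changes.mbox'):
            print('%s: %d uploads' % (package, n))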
Flavors
Plans for manual testing (cadence, alpha/beta, something else?)?
kubuntu - traditional alpha, beta
lubuntu - traditional alpha, beta
ubuntu studio - beta only, and targeted testing before betas
xubuntu - discussing test plans tonight (14/5)
edubuntu - mostly traditional alpha, beta?
ubuntu kylin - ?
ubuntu gnome - ?
Mythbuntu - ?
Any plans for automated testing?
kubuntu - ?
lubuntu - ?
ubuntu studio - Only plans so far, but there is interest
xubuntu - discussing test plans tonight (14/5)
edubuntu - ?
ubuntu kylin - ?
ubuntu gnome - ?
Mythbuntu - ?
Ideas:
Including all post-install tests in autopilot (e.g. xubuntu's)
attend autopilot test session linked above
arm, arm, arm: clarify how to run tests on QEMU; the guides are outdated
need someone with
evaluate the possibility of sending reliable testers some kind of board for hardware tests (Panda Board, etc.)
we do this now, depending on needs and hardware available. Ping balloons if interested
[crhrabal] Let's have deeper QA testing of experimental new packages like the newest Unity, HUD 2, the 100 Scopes Project, Smart Scopes, Mir, experimental kernels, and the Ubuntu Touch Core Apps. Basically, allow new feature elements to get more QA tracking and create a much larger group of testcases, possibly on some form of QA tracking like iso.qa.ubuntu.com.
I'd like to refer to Nicholas Skaggs' blog, which covers the quality perceptions survey from many months back:
http://
I think that by addressing feature elements and in-development package testing early, we can address these user statistics. When asked "What Does Quality Mean To You?", most users said "Quality means the default desktop and applications should work without crashing." When asked "What's the biggest problem with quality in Ubuntu right now?", most users chose "Applications are always crashing."
Pushing more testing onto feature elements and experimental packages will ensure that new features and packages get the coverage they need and ship with better quality.
[smartboyhw] Here's my idea: Use the current Packages QA Tracker (https:/
We can track packages according to the package version, like the Netboot ISO does (debian-installer). People who have subscribed to the package testcases can then test whenever there is a new version.
Each week (or cadence week) we tell people: "Hey, this week we want to test this specific app, but there are other packages you can test too!" This can strengthen package testing.
+1 from noskcaj
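A minimal sketch of the per-version tracking described above, keying results on (package, version) so a new upload automatically becomes a fresh test target; the data shapes are hypothetical, not the actual QATracker schema:

    # Results keyed per (package, version): a new upload starts an
    # empty result set, as with Netboot/debian-installer tracking.
    from collections import defaultdict

    # (package, version) -> list of (tester, verdict) tuples
    results = defaultdict(list)

    def record_result(package, version, tester, verdict):
        results[(package, version)].append((tester, verdict))

    def needs_testing(package, version):
        # A version with no reports yet is an open test target.
        return not results[(package, version)]

    record_result('unity', '7.0.1', 'smartboyhw', 'passed')
    print(needs_testing('unity', '7.0.1'))  # False: already covered
    print(needs_testing('unity', '7.0.2'))  # True: new upload, retest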
Work Items
Work items for ubuntu-13.07:
[nskaggs] Review server testcases, and bring them up to speed for a "non-server" person to be able to perform: DONE
[phillw] Review server testcases, and bring them up to speed for a "non-server" person to be able to perform: DONE
[svwilliams] Review server testcases, and bring them up to speed for a "non-server" person to be able to perform: DONE
[carla-sella] Review server testcases, and bring them up to speed for a "non-server" person to be able to perform: TODO
[javier-lopez] Review server testcases, and bring them up to speed for a "non-server" person to be able to perform: DONE
[javier-lopez] search what default apps don't have a manual testcase and open reports: DONE
[nskaggs] Schedule meeting with QA leads to discuss testing plans for next cycle: DONE
Work items for ubuntu-13.06:
[nskaggs] Arrange for live demo on staging site of proposed cadence testing changes: DONE
[crhrabal] Arrange for live demo on staging site of proposed cadence testing changes: DONE
[nskaggs] explore average number of uploads for a package for cadence testing, to determine feasibility of changes to cadence testing: DONE
[nskaggs] start ml discussion with errors.ubuntu.com folks: DONE
[nskaggs] convert autopilot testcases in the autopilot testcases project to autopilot 1.3: DONE
Work items for ubuntu-13.05:
[nskaggs] Do manual testcase review: DONE
Work items:
[phillw] Expand the lubuntu testcases: TODO
[nskaggs] probe the mailing list for volunteers to emulate ARM and PPC using QEMU for testing: TODO
Dependency tree
* Blueprints in grey have been implemented.