Usability testing and reporting of usability issues
This session focuses on improving approaches to wide-scale usability testing and research, as well as improving how usability issues are reported through Launchpad.
Blueprint information
- Status: Started
- Approver: Ivanka Majic
- Priority: Undefined
- Drafter: Charline Poirier
- Direction: Needs approval
- Assignee: Charline Poirier
- Definition: Drafting
- Series goal: None
- Implementation: Started
- Milestone target: ubuntu-10.10
- Started by: Ivanka Majic
- Completed by:
Whiteboard
We will be exploring ways to:
1. Share documents
2. Share processes
3. Report reliable findings in a distributed way
What we do now
* Different ways to involve users' input
* Icon testing for Launchpad
* Ethnographic research: for Social from the Start, feeding into the design
* Card sorts for the Software Centre: looking at classification and how we can make it so people can find applications
* Usability testing: showing people an interface, asking them to try it out, seeing what they can do and where they fail
* We have been thinking about the way we use these methodologies; we have been using them in a very traditional way, where they are designed to help designers make decisions. We have started to challenge our methodologies and look at how we can feed findings into the open-source community. We did some work with Empathy and showed them a traditional report to look at how we can make it more dynamic, and how to communicate in a way that makes it more useful.
* We are also looking at having user advocates in other projects
* We would like to look at ways of making the gathering of data more distributed
Quick look at the findings
* 11 participants new to Ubuntu
* We had them look at the interface in the context of tasks they would normally do
* They downloaded photos, played music and looked at documents
* When new users come to Ubuntu:
* Mental models (preconceptions)
* It is the difference between existing mental models and what we do that makes Ubuntu who we are
* Conceptions of the Ubuntu community
* Usability liability
* Important to manage first impressions
"How can I make this mine? I want to use this tool, this tool has to be mine"
* Desktop background
* Screensaver
* Photographs
We sit beside them, we have a script, we guide them and we ask them clarifying questions. This is more exploratory than testing: we avoid leading them. We want to understand.
* We are testing _the software_, not testing _the participant_.
* New question: can we put clips of you online?
** Opposite to traditional "don't worry, this won't be seen" approach
** During session, only document facts, don't draw conclusions
*** Not always suitable to share the transcript, e.g. it includes introductory rapport
** Avoid leading questions, but can be useful to extend conversation
People are experts in their own needs and what they think
How can we do it in a distributed way?
From Benjamin: GTK testing
== Testing ==
* Developers observing testing of their own software
* Requires self-awareness
* Body language
* "Echo chamber", tendency to hear what you want
* Tendency to /explain/ the interface
== Situation ==
* Apple/Microsoft have lots of usability experts
* Ubuntu has the power of the community (Crowdsourcing)
* GIMP/Launchpad ... any one user only uses 5% of the interface, but
* Overall, the whole interface does get used across different groups (so it is not possible to remove parts of it)
== Work items ==
* Mozilla Test Pilot:
* Collection of clicks/heatmaps (a rough sketch of this kind of aggregation follows below)
* => Statistics to back the process/decision
* How-to for automated data collection (e.g. legal issues)
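A rough sketch of the click/heatmap idea follows; this is not an existing Ubuntu or Test Pilot API, and the event format, bin size and function name are assumptions for illustration only.
{{{
# Hypothetical sketch: aggregating opted-in click events into a coarse
# heatmap, in the spirit of Mozilla Test Pilot.  The (x, y) event format
# and 50-pixel bin size are assumptions, not an existing API.
from collections import Counter

BIN = 50  # pixels per heatmap cell (assumed granularity)

def heatmap(click_events):
    """Count clicks per BINxBIN screen cell from (x, y) pairs."""
    cells = Counter()
    for x, y in click_events:
        cells[(x // BIN, y // BIN)] += 1
    return cells

# Example: three clicks, two of which land in the same cell.
print(heatmap([(120, 40), (130, 45), (600, 300)]))
}}}
Aggregated counts like these are the kind of statistics that could back a design decision, provided the collection meets the policy points below.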
=== Framework for automated data collection ===
* Privacy
* Publishing findings
* Data should be /viable/ ... e.g. a change requires a control group for useful comparison
* Steps
** The _type_ of data (an illustrative example record follows below)
** Technical requirements, research requirements
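To make the question of the _type_ of data concrete, here is a hypothetical example of a minimal, non-identifying event record a collection framework might emit; the field names and granularity are assumptions, not an agreed schema.
{{{
# Hypothetical sketch of the *type* of data a collection framework might
# emit: coarse, non-identifying fields only.  Field names are illustrative.
import time

def make_event(widget, action, session_id):
    """Build a minimal event record; session_id is a random per-session
    token, never a machine or user identifier."""
    return {
        "widget": widget,                  # e.g. "software-centre/search-box"
        "action": action,                  # e.g. "click", "focus"
        "hour": int(time.time() // 3600),  # timestamp coarsened to the hour
        "session": session_id,
    }

print(make_event("software-centre/search-box", "click", "a1b2c3"))
}}}
Keeping the record this coarse is one way of addressing the privacy and publishing points above.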
=== Statistics Gathering Policy ===
Previous draft/proposal for an Ubuntu Statistics Gathering policy:
Gathering statistics is good---one of the best ways to innovate is to
understand how a system or interface works presently, and thus what areas
are inefficient and could benefit from further work or improvement.
Often the group tasked with, or desiring, a statistics-gathering
implementation may not be immediately familiar with the intricacies of
statistical analysis, processes suitable for technical deployment within
Ubuntu, or the wider social and potential image impacts upon the Ubuntu
platform and its community of supporters (paid or unpaid).
Ubuntu users *love* testing, particularly of experimental bleeding edge
features, and *absolutely crave* the opportunity to help gather data that
supports the improvement of the Ubuntu platform, when asked to do so.
Experiments can deliver rewards when they are technically sane, publicly
pre-announced, well-designed statistics-gathering exercises that
proactively involve Ubuntu users.
The following are examples of statistics-gathering exercises that have been
held, or are in use, within the Ubuntu platform:
Active:
Popcon (on-going)
Hardware Database (HWDB, slipping)
"multisearch" (9.10 alpha 3/4)
Passive:
HTTP/mirror log analysis
NTP request analysis
I perceive that a Statistics Gathering policy might be based upon:
1. Data gathered must be statistically viable:
1.1. There must be a control group. Either:
1.1a. The behaviour/interface being measured must remain unchanged;
1.1b. Or, if new experimental behaviour is being rolled out, it must only be
deployed to a percentage of the enrolled users (see the sketch after this
policy list).
1.2. The group must be meaningfully large.
2. Statistics gathering must be *technically viable*
2.1. Must be in line with Debian/Ubuntu best practice
2.1.1 As of 2009, Popularity Contest (popcon) over HTTP is an example of
best practice.
2.2. Must be easy to disable.
2.3. Pre-documented procedure for removal from Ubuntu archive/system
2.3.1. Any altered configuration setting must be restored
3. Active statistics gathering must be *socially viable*:
3.1. Should be opt-in
3.2. Should be anonymous
3.3. Be clearly pre-announced/publicised:
3.3.1. Explanation of how the method works (layman's and technical)
3.3.2. Have clearly defined goals
3.3.3. Have explained public benefit (follow-on intentions)
3.3.3.1. Where the published summary/finding can be found
4. Have finite, clear deployment start/stop timelines:
4.1. Maximum one month, or one alpha cycle (whichever is shorter)
4.2. A case must be made for (short/indefinite) extension, demonstrating:
4.2.1. Statistical value of continuation
4.2.2. Technical conformity
4.2.3. Positive community impact (news coverage/community feedback)
5. Permission must be obtained for deployment into the archive:
5.1. Must be obtained from the Technical Board
5.2. A wiki page providing 24 hour technical and press point-of-contact
must be provided
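As a sketch of how points 1.1b (percentage rollout with a control group) and 3.1 (opt-in) could fit together, the following assumes a hypothetical opt-in configuration key, a per-session token and a 10% rollout; none of these are existing Ubuntu settings.
{{{
# Hypothetical sketch combining points 1.1b and 3.1: opt-in enrolment plus
# a deterministic split so only a fixed percentage of enrolled users see
# the experimental behaviour.  The config key and percentage are assumptions.
import hashlib

EXPERIMENT_PCT = 10  # roll the change out to 10% of enrolled users (assumed)

def opted_in(config):
    """Point 3.1: collection only runs if the user has opted in."""
    return config.get("statistics.opt_in", False)

def in_experimental_group(session_token):
    """Hash a per-session token (not a machine id) into 0-99 and compare
    against the rollout percentage; the rest stay in the control group."""
    digest = hashlib.sha256(session_token.encode()).hexdigest()
    return int(digest, 16) % 100 < EXPERIMENT_PCT

config = {"statistics.opt_in": True}
if opted_in(config):
    group = "experimental" if in_experimental_group("a1b2c3") else "control"
    print("session assigned to the", group, "group")
}}}
Because the split is deterministic per token, the same enrolled session always lands in the same group, which keeps the control/experimental comparison in point 1.1 meaningful.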
The above is perhaps just documented common sense. I do not see a need for
passive systems to be covered (or even to have attention drawn to them); and
neither do I see a need to duplicate any areas already covered by a relevant
Privacy Policy.
I believe *every single* statistics gathering exercise can be positive for
the Ubuntu community, when following the guidelines above; e.g. a sanely
designed experiment which *involves* the community, without risk of creating
friction.
Work items:
[charlinepoirier] write up a clear how-to for setting up and conducting qualitative research: TODO
[ivanka] write up requirements: first research and then technical: TODO
[godbyk] write up requirements: first research and then technical: TODO
[humphreybc] create a Launchpad project for this: TODO
[sladen] contribute to statistics gathering policy document: TODO
[ivanka] to investigate a specific application to test this out on first (maybe shotwell?): DONE
Shotwell it is!