Completing HUD 2.0

Registered by Thomas Strehl

Remaining work items to get HUD functionally and feature complete for phablet.

Blueprint information

Status: Needs approval
Series goal: Not started
Milestone target:
Completed by: Ted Gould

Related branches

== Introduction ==

We have several things left to do to take HUD 2.0 from demoware in the current tablet/phone images to production-ready. This blueprint is about completing those things and making sure that everything is integrated in a reasonable manner.

Current state of HUD 2.0:

- still somewhat in a demoware state from the phone demo
- some patches to Unity Compiz for it
- voice support is working

It can evolve to provide interactive help for commands in an application: illustrations, videos, a wiki page…
Which format do we see for the text part? HTML? UI changes?
-> no clear decision as of now.

Ted has some concerns about HTML. Oren would prefer a wiki-like format (markup).
Translations need to be taken into account.

It seems that embedding a WebKit view in the HUD QML would be a solution for this support.
We can use translations with WebKit, and also support GNOME-style help formats like Mallard.

Voice support:
We published a build based on Sphinx, an open-source speech recognition engine. It's not in Ubuntu yet. Julius is optional, so we can ship HUD 2.0 without it at first.
The support is built from WAV files. English has a really high number of samples. How can we get that for other languages? (Even if the barrier to entry is low: it's a Java applet.)
A phone app to get people on board?

New source packages for training acoustic models:
 <-- NEW PACKAGE sphinx3
 <-- NEW PACKAGE sphinxtrain

New acoustic model:
 <-- NEW PACKAGE sphinx-voxforge-en
Currently this is trained offline on a personal computer, resulting in a source package of around 30 MB (binary package ~20 MB), which essentially just repackages a tar.
See PPA for current results.

We might like to have the Voxforge corpus (~100 hours of WAV files) included in the source package, and train the model on a build server. This would result in:
 * Source package size: ~8 GB
 * Binary package size: ~21 MB
 * Build time on an 8-core, 16 GB machine with very fast disks: 12+ hours
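For a sense of how a corpus of this kind is sized, here is a hypothetical stdlib-only sketch (not part of the actual sphinxtrain packaging; the helper name is an assumption) that tallies the total duration of a set of WAV files:

```python
import wave

def total_wav_hours(paths):
    """Sum the durations of the given WAV files, in hours.

    Hypothetical helper for estimating corpus size; not part of the
    real training scripts.
    """
    seconds = 0.0
    for path in paths:
        with wave.open(path, "rb") as w:
            # duration of one file = frames / sample rate
            seconds += w.getnframes() / float(w.getframerate())
    return seconds / 3600.0
```

Running this over a Voxforge checkout would confirm the ~100-hour figure before committing to the 8 GB source package.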

There is also a new training script package: <-- NEW PACKAGE sphinx-voxforge-en
This should also get into raring. It's a simple Python-based package.
Currently I have it building nightlies to:

Semantic database for features:
- a set of keywords that can hint at a particular command (a keyword can be a synonym used in other comparable software for that action)
- command descriptions that abstract what a command does
-> both of those are contextually surfaced in the UI where there is a match. We need documentation on how developers should use them. What is a good keyword? What is a good description?

Alternative activations of a HUD command? Like keyboard shortcuts.
-> we already have that internally (keywords, descriptions or shortcuts)

We only show one line of the description for a given command; maybe that's not enough. Oren thinks we should have a priority policy about which one comes first and which ones should get higher scores.
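As an illustration of such a priority policy, here is a minimal stdlib-only Python sketch. The command table, the weights, and the fuzzy matcher are all assumptions for illustration, not the real HUD service's scoring:

```python
from difflib import SequenceMatcher

# Hypothetical command table: name, synonym keywords, one-line description.
COMMANDS = [
    {"name": "Save", "keywords": ["write", "store"],
     "description": "Save the current document"},
    {"name": "Quit", "keywords": ["exit", "close"],
     "description": "Quit the application"},
]

def _ratio(query, text):
    """Fuzzy similarity between the query and a candidate string."""
    return SequenceMatcher(None, query.lower(), text.lower()).ratio()

def score(query, command):
    """Score a query against one command.

    Assumed priority policy: matches on the command name outrank
    keyword matches, which outrank matches on the description.
    """
    name = _ratio(query, command["name"])
    keyword = max((_ratio(query, k) for k in command["keywords"]), default=0.0)
    desc = _ratio(query, command["description"])
    return 3 * name + 2 * keyword + desc

def lookup(query):
    """Return command names, best match first."""
    ranked = sorted(COMMANDS, key=lambda c: score(query, c), reverse=True)
    return [c["name"] for c in ranked]
```

A typed query like "exit" would then surface Quit ahead of Save via its keyword list, even though "exit" never appears in the Quit menu item itself.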

Should the HUD use the SDK widgets or custom ones?
-> Oren thinks we should use the SDK widgets without any modifications. Those should then be put in the SDK itself (less duplication, better integration with shell/application interactions).
Pickers, for instance, are currently missing from the SDK.

== March ==
 * Clean up client/app API to have proper encapsulation
 * Finish second iteration of the API for QML app developers
 * Improve tests suites and increase code coverage
 * Move away from custom usage database to desktop wide solution
 * Migrate voice to use the Sphinx voice engine exclusively
 * Build voice acoustic models from community-contributed Voxforge data
 * Add a voice test suite based on captured audio

== April ==
 * Support for applications having multiple visual contexts throughout the stack
 * Highlighting of items based on search results
 * Port to MIR
 * Add additional parameterized widget support (to be defined)
 * Support "bubble up" of background actions that come from background applications
 * Increase documentation of keywords feature to allow for community contributions of keywords

So, at the end of this project (April) HUD 2.0 will have:
* Fully documented and tested public C and Qt/QML APIs for application developers
* Support for parameterized actions
* Support for actions exported by background applications
* Voice recognition
* Toolbar support including default actions (quit, copy, paste, etc)
* Providing a list of applications supporting a queried action
* Support for action alias names to make actions more intuitive


Work Items

[thostr] add acceptance criteria for each milestone: DONE
[ted] Make a bunch of sub-blueprints: DONE
[didrocks] new packages? (see above): TODO
[didrocks] figure out about the sphinx/julius story: TODO
[oreneeshy] design a phone app to get language samples (see above): POSTPONED
[ivanka] define which components are likely to be used most frequently in the apps we want to target, and which are missing from the SDK itself: POSTPONED
[ivanka] extend the app definition to specify which apps should include the HUD (the telephony app doesn't need it, for instance): POSTPONED
[pete-woods] Finish test suite (libhud) (3d): INPROGRESS
[pete-woods] Finish test suite (libhud-client) (3d): DONE
[kaijanmaki] Review API with the SDK team, adjust (5d): INPROGRESS
[kaijanmaki] Context Support for application descriptions (libhud-qt) (3d): TODO
[kaijanmaki] Add support for pinging usage on action use (3d): TODO
[ted] Architecture documentation (2d): INPROGRESS
[ted] Provide support for Unity-Compiz (2d): DONE
[ted] (autotools) can't "make -j": DONE
[ted] Add column accessors to hide magic numbers (libhud-client) (2d): DONE
[ted] Support for additional base toolbar items (service) (2d): TODO
[ted] libhud subclasses for parameterized widget (3d): TODO
[ted] Fix action description types to have direct actions (1d): TODO
[ted] Context Support for application descriptions (libhud) (3d): INPROGRESS
[ted] Support for switching contexts from application (libhud) (3d): INPROGRESS
[ted] Create way to notify HUD of action usage outside of HUD (service, libhud) (2d): INPROGRESS
[kaijanmaki] Document API appropriately (5d): INPROGRESS
[kaijanmaki] Integrate documentation into SDK documentation (5d): BLOCKED
[kaijanmaki] Support for additional parameterized widgets (10d): INPROGRESS
[kaijanmaki] Support for additional base toolbar items (libhud-qt) (3d): TODO
[kaijanmaki] Integrate quit action into standard application object (1d): BLOCKED
[kaijanmaki] libhud-qt-dev package (1d): TODO
[ted] Mark highlighting of entries based on search (4d): TODO
[ted] Support background app actions (bubble up) (3d): BLOCKED
[ted] Allow application publisher to have application wide actions without a window ID (2d): TODO
[ted] Context support for application descriptions (service) (10d): TODO
[ted] Port to MIR application focus API (3d): TODO
[ted] Port to MIR application lifecycle API (3d): TODO
[ted] Use MIR to manage our own lifecycle (2d): TODO
[pete-woods] Update the HUD wiki with this info so app developers use the right keywords; guidelines (2d): TODO
[pete-woods] Also ensure that there is clear guidance for translators for the keywords (2d): TODO
[pete-woods] Move Qt plugin to be maintained by non-shell (3d): TODO
[pete-woods] Base toolbar completion (libhud-client) (3d): TODO
[pete-woods] Support base classes to contain parameterized names (libhud-client) (4d): TODO
[pete-woods] Expand service test coverage (3d): TODO

Dependency tree

* Blueprints in grey have been implemented.