Interact by voice with OSHIP-based applications
Some scenarios:
- Medical professionals walking around the clinic: each patient's bed has its own touch screen showing that patient's EHR. The caregiver speaks into her mobile phone to enter data into fields that show up on the screen; the phone can also serve as a pointing device. There is no need to carry around tablets/slates (still too big), and no other tokens are needed for authorization/identification since the phone is personal (loss/theft still needs to be thought about). WebSockets let several clients work on the same web page at the same time (see the sketches after these scenarios).
- The phone can help in dictation sessions.
- Doctor-patient encounters can be recorded to provide a 'black box' mechanism for medico-legal reasons, much like the black box in an aircraft.
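As a rough illustration of the multi-client idea, the sketch below relays every message a client sends to every connected client, so a field dictated on the phone could appear on the bedside screen immediately. It assumes the third-party Python websockets package (version 10 or later); the host, port and the idea of relaying raw messages are illustrative assumptions, not part of this blueprint.

    import asyncio
    import websockets

    # All currently connected clients (bedside screens, caregivers' phones, ...).
    CLIENTS = set()

    async def handler(websocket):
        # Register the client, then broadcast each incoming message to every
        # connected client, so an entry made on the phone shows up on the
        # bedside screen right away.
        CLIENTS.add(websocket)
        try:
            async for message in websocket:
                websockets.broadcast(CLIENTS, message)
        finally:
            CLIENTS.remove(websocket)

    async def main():
        async with websockets.serve(handler, "0.0.0.0", 8765):
            await asyncio.Future()  # run until cancelled

    if __name__ == "__main__":
        asyncio.run(main())

In a real deployment the server would of course authenticate the phone and scope messages to a single patient record rather than broadcasting to everyone.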
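For the dictation scenario, a minimal speech-to-text sketch follows. The blueprint does not name a toolkit, so the Python SpeechRecognition package with the CMU PocketSphinx backend and the file name dictation.wav are placeholder assumptions.

    import speech_recognition as sr

    recognizer = sr.Recognizer()

    # Transcribe a recorded dictation fragment (e.g. uploaded from the phone).
    with sr.AudioFile("dictation.wav") as source:
        audio = recognizer.record(source)

    # Offline recognition via CMU PocketSphinx; a networked recognizer could be
    # substituted if offline processing is not required.
    text = recognizer.recognize_sphinx(audio)
    print(text)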
Blueprint information
- Status:
- Started
- Approver:
- OSHIP Development Team
- Priority:
- Undefined
- Drafter:
- Roger Erens
- Direction:
- Needs approval
- Assignee:
- Roger Erens
- Definition:
- Discussion
- Series goal:
- None
- Implementation:
- Needs Infrastructure
- Milestone target:
- None
- Started by
- Tim Cook
- Completed by
Related branches
Related bugs
Sprints
Whiteboard
If you have the motivation, time and tools, then I suggest that you start a branch to work on this. It would be a good thing to have the APIs in place. But really, what is the priority?
Collecting links here:
Most promising open source speech recognition toolkit:
http://
python:
http://
http://
Android:
http://
iPhone related:
http://
http://
WebSockets:
http://
Work Items
Dependency tree