Improving audio testing
Discussion on improving testing of audio devices, focusing on how to monitor the system for changes to lessen the burden on human verifiers.
Current audio testing is largely limited to built-in and external speakers/headphones and microphones. We test for functionality before and after suspend, and we run some simple checks of volume levels on these devices.
The aim of this blueprint is to present several other aspects we would like to test and to get feedback on three questions: whether they are useful; whether an existing test already covers the functionality, making the new one redundant; and whether each aspect can be tested by poking at and/or inspecting system data, or genuinely needs interaction from the user to be tested in a valid way.
The work on this blueprint will first benefit test developers, who will have a clear picture of the scope of the tests that need to be implemented. Ultimately, though, the benefit goes to testers, who will be able to get more thorough and accurate test results with a minimum of manual verification and interaction.
Blueprint information
- Status:
- Complete
- Approver:
- Ara Pulido
- Priority:
- Undefined
- Drafter:
- Daniel Manrique
- Direction:
- Needs approval
- Assignee:
- None
- Definition:
- Approved
- Series goal:
- None
- Implementation:
- Informational
- Milestone target:
- None
- Started by:
- Zygmunt Krynicki
- Completed by:
- Zygmunt Krynicki
Whiteboard
- Whether it's valid to change e.g. the profile and port and then check levels. We've observed that when a new device (e.g. headphones) gets plugged in, the port auto-switches and so do the levels, so testing without the actual device may be invalid (see the pactl sketch after this list).
- How best to test plug detection. Is jack state something the driver exposes, so that we can just look at alsa_info data to determine whether detection works at all, or do we absolutely need the user to plug devices in and out (see the jack-state sketch below)?
- Can we "synthesize" plug detection behavior (simulating what happens when a device gets plugged in), and is this a valid stand-in for actually plugging in a device? This may mean the difference between a fully automated test and one that requires a human (see the uinput sketch below).
- Regarding levels, what should we test: ALSA levels or PulseAudio ones? We may even need to test both (see the mixer comparison sketch below). The issue here is testing something that matters to a driver developer (i.e. closer to the hardware) versus something the user will interact with (the higher layers). For instance, if ALSA works perfectly, can we decide the hardware is "supported" and grant certification, deferring any remaining audio issues to the higher layers as "software" issues?
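
For the profile/port question, a minimal sketch of what an automated check could look like, assuming a PulseAudio system with pactl available. The card, sink, profile, and port names below are hypothetical placeholders; a real test would discover them with `pactl list cards` and `pactl list sinks`:

```python
#!/usr/bin/env python3
"""Sketch: switch profile/port with pactl, then read back the sink volume."""
import subprocess

CARD = "alsa_card.pci-0000_00_1b.0"                    # hypothetical card name
PROFILE = "output:analog-stereo"                       # hypothetical profile
SINK = "alsa_output.pci-0000_00_1b.0.analog-stereo"    # hypothetical sink
PORT = "analog-output-headphones"                      # hypothetical port


def pactl(*args):
    """Run pactl and return its stdout."""
    return subprocess.run(("pactl",) + args, check=True,
                          capture_output=True, text=True).stdout


# Force the profile and port, as plugging in headphones would.
pactl("set-card-profile", CARD, PROFILE)
pactl("set-sink-port", SINK, PORT)

# Read back the sink volume lines (skipping "Base Volume").
# Whether levels observed this way match what a real hot-plug produces
# is exactly the open question in the whiteboard item above.
for line in pactl("list", "sinks").splitlines():
    if "Volume:" in line and "Base" not in line:
        print(line.strip())
```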
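
For plug detection, one thing we can check without any user interaction is whether the driver exposes jack-state controls at all; HDA drivers commonly publish read-only boolean kcontrols named like "Headphone Jack", and the same information ends up in alsa_info output. A jack-state sketch, assuming amixer is available and card 0 is the card under test (control names vary per driver):

```python
#!/usr/bin/env python3
"""Sketch: check whether the driver exposes jack-detection state."""
import subprocess

out = subprocess.run(["amixer", "-c", "0", "contents"],
                     check=True, capture_output=True, text=True).stdout

# amixer prints one "numid=..." record per control; keep those whose
# name ends in "Jack" (e.g. name='Headphone Jack').
jacks = [chunk for chunk in out.split("numid=") if "Jack'" in chunk]
if not jacks:
    print("No jack-detection controls exposed; a manual test may be needed.")
for chunk in jacks:
    name = chunk.split("name='")[1].split("'")[0]
    plugged = "values=on" in chunk   # boolean controls report on/off
    print(f"{name}: {'plugged' if plugged else 'unplugged'}")
```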
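
For synthesizing plug events, a heavily hedged uinput sketch using the python-evdev library (requires root or access to /dev/uinput). The caveat matters: this creates a new virtual input device and only exercises the input-event layer; it does not touch the codec's jack-sense pin, so PulseAudio will not re-route the real card's port in response. Whether that makes it a valid stand-in is exactly the open question:

```python
#!/usr/bin/env python3
"""Sketch: emit a synthetic headphone-insert switch event via uinput."""
import time
from evdev import UInput, ecodes

# Declare a virtual device capable of the headphone-insert switch.
capabilities = {ecodes.EV_SW: [ecodes.SW_HEADPHONE_INSERT]}
with UInput(capabilities, name="virtual-jack") as ui:
    ui.write(ecodes.EV_SW, ecodes.SW_HEADPHONE_INSERT, 1)  # "plug in"
    ui.syn()
    time.sleep(1)
    ui.write(ecodes.EV_SW, ecodes.SW_HEADPHONE_INSERT, 0)  # "unplug"
    ui.syn()
```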
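
Finally, for the ALSA-versus-PulseAudio levels question, a mixer comparison sketch that reads the volume at both layers so a test could report on each separately. The "Master" mixer control and the sink-output parsing are assumptions; neither is universal across drivers:

```python
#!/usr/bin/env python3
"""Sketch: read volume at both layers, ALSA (amixer) and PulseAudio (pactl)."""
import re
import subprocess


def run(cmd):
    """Run a command and return its stdout."""
    return subprocess.run(cmd, check=True, capture_output=True,
                          text=True).stdout


# ALSA layer: percentage reported by the "Master" mixer control.
alsa_out = run(["amixer", "get", "Master"])
alsa_pct = re.search(r"\[(\d+)%\]", alsa_out)
print("ALSA Master:", alsa_pct.group(1) + "%" if alsa_pct else "not found")

# PulseAudio layer: first volume percentage reported for a sink.
pa_out = run(["pactl", "list", "sinks"])
pa_pct = re.search(r"Volume:.*?(\d+)%", pa_out)
print("PulseAudio sink:", pa_pct.group(1) + "%" if pa_pct else "not found")
```

A certification test could flag the case where the two layers disagree, which speaks directly to the "driver issue vs. software issue" distinction raised above.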