Capturing system and app performance metrics during autopilot test runs
Our current daily quality efforts are focused on functional testing of
the overall (Unity) stack. That is, we are executing a large number of
autopilot tests in an automated manner to verify that any change within
the overall ecosystem does not break the user facing functionality.
We could leverage the existing daily quality setup even more to record a multitude of metrics describing the runtime characteristics of the overall system.
Blueprint information
- Status: Not started
- Approver: Loïc Minier
- Priority: Undefined
- Drafter: Thomas Voß
- Direction: Needs approval
- Assignee: None
- Definition: Drafting
- Series goal: None
- Implementation: Unknown
- Milestone target: None
Whiteboard
From my perspective, though, we could leverage the existing daily quality
setup even more to record a multitude of metrics describing the runtime
characteristics of the overall system. We could then use the
captured data for in-depth analysis of the overall system's performance
characteristics, or even focus on application-specific metrics,
e.g., the average latency of input event delivery. To this end, we would
need to have a system in place that allows us to (remotely) harvest
measurements from a multitude of different sources and that is easily
integrable with applications, so that they can export their
own specific measurements.
The following scenario is considered for evaluation purposes:
* Start Unity guest session
* Set up the tracing tool under evaluation to capture D-Bus session traffic
* Set up the tracing tool under evaluation to capture system/kernel characteristics
* Execute autopilot test suite
* Record trace during execution
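The steps above can be sketched as an ordered command sequence. The concrete invocations below (an LTTng kernel tracing session around a `dbus-monitor` capture and an autopilot run) are illustrative assumptions for one of the tools under evaluation, not a prescribed setup.

```python
def scenario_commands(session="autopilot-run"):
    """Ordered command lines for the evaluation scenario.

    Tool names, session name, and event selection are illustrative
    assumptions; a real evaluation would tune these per tool.
    """
    return [
        ["dbus-monitor", "--session"],                      # capture D-Bus session traffic
        ["lttng", "create", session],                       # set up a kernel tracing session
        ["lttng", "enable-event", "--kernel", "sched_switch"],
        ["lttng", "start"],                                 # record trace during execution
        ["autopilot", "run", "unity"],                      # execute the autopilot test suite
        ["lttng", "stop"],
        ["lttng", "destroy"],                               # finalize and tear down the session
    ]
```

The D-Bus capture would run in the background for the duration of the test suite; the list only fixes the ordering of the steps.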
The tracing tool (or tool suite) should support developers and QA people alike, by offering developers a way to inject app-specific measurements into the trace.
Ideally, we would like to be able to inject power measurement results into the trace, too.
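One possible shape of such an app-facing export API, sketched in Python: the function name and the line-oriented JSON record format are assumptions for illustration, not the interface of any of the tools under evaluation (a real integration would hand the value to the tracing backend instead).

```python
import json
import time

def export_measurement(sink, name, value, unit):
    """Write one timestamped measurement as a JSON line to sink.

    `sink` is any writable text stream; the record layout is an
    illustrative assumption for this sketch.
    """
    record = {"ts": time.time(), "name": name, "value": value, "unit": unit}
    sink.write(json.dumps(record) + "\n")

# An application could then report, e.g., input-event delivery latency:
#   export_measurement(trace_sink, "input_event_latency", 4.2, "ms")
```

Power measurement results could be injected through the same path, as just another named, unit-tagged measurement source.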
Work items:
* Evaluate Performance Co-Pilot: DONE
* Evaluate collectd: INPROGRESS
* Evaluate LTTng: INPROGRESS
* Evaluate SystemTap: INPROGRESS