Collect all benchmark results and post them per LAVA run
Why?
ARM would like to track the benchmarks we run on a per-baseline basis so that comparisons can be made across baselines. To do this efficiently, all of the benchmark results (and other data) that get produced should be collected into a single artifact that can be read, processed and displayed.
Context?
This is part of the larger ARM benchmarking effort.
What gets produced?
Most likely a single JSON file containing all of the results (the output from https:/
Where will the work get put?
Ideally this work gets built as a module that can live on the target itself or as a host script.
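The host-script variant could be sketched roughly as follows. This is only an illustration under assumptions: the function name, the one-JSON-file-per-benchmark layout, and the metric-dict shape are all hypothetical; the real format will depend on what LAVA and the individual benchmarks actually emit.

```python
import json
from pathlib import Path

def collect_results(result_dir, out_file):
    """Merge every per-benchmark JSON file in result_dir into one file.

    Assumed (hypothetical) layout: each input file holds a dict of
    metric name -> value; the merged output maps the benchmark name
    (the file stem) to that dict.
    """
    merged = {}
    for path in sorted(Path(result_dir).glob("*.json")):
        with open(path) as fh:
            merged[path.stem] = json.load(fh)
    # Write the single "thing" that LAVA (or anything else) can fetch.
    with open(out_file, "w") as fh:
        json.dump(merged, fh, indent=2)
    return merged
```

The same logic could run on the target instead; only the paths would change.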
Blueprint information
- Status:
- Not started
- Approver:
- Zach Pfeffer
- Priority:
- Medium
- Drafter:
- Bernhard Rosenkraenzer
- Direction:
- Approved
- Assignee:
- None
- Definition:
- Approved
- Series goal:
- Accepted for future
- Implementation:
- Unknown
- Milestone target:
- backlog
- Started by
- Completed by
Related branches
Related bugs
Sprints
Whiteboard
Notes:
[2012/7/2 pfefferz] Please put notes here.
[2012/7/23 pfefferz] Since JB landed mid-cycle, all of Bero's work was put on hold.
Meta:
Headline: Benchmark results can be posted per LAVA run
Acceptance: 1. All data is co-located in a common file or archive. 2. The archive or file can be fetched with wget and processed by LAVA.
Roadmap id: CARD-134
Work Items
Work items:
[vishalbhoj] Zip the PNGs and email them to the set of subscribers interested in the results: TODO
Extract benchmark results from LAVA: TODO
Put them in a unified CSV (or similar) file: TODO
Generate gnuplot graphic from the data: TODO
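The "unified CSV" and "gnuplot graphic" work items above could look roughly like this. A minimal sketch only: the input dict shape, column layout, and gnuplot plotting style are assumptions, not the agreed format.

```python
import csv

def write_unified_csv(results, out_file):
    """Flatten {benchmark: {metric: value}} into one CSV.

    `results` is the merged dict described in "What gets produced?"
    (a hypothetical shape -- adjust to whatever LAVA actually exports).
    """
    with open(out_file, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["benchmark", "metric", "value"])
        for bench, metrics in sorted(results.items()):
            for metric, value in sorted(metrics.items()):
                writer.writerow([bench, metric, value])

def write_gnuplot_script(csv_file, png_file, script_file):
    """Emit a minimal gnuplot script that plots the CSV's value column
    as a histogram, labelled by benchmark name."""
    script = (
        "set terminal png\n"
        f"set output '{png_file}'\n"
        "set datafile separator ','\n"
        "set style data histogram\n"
        "set style fill solid\n"
        # every ::1 skips the CSV header row
        f"plot '{csv_file}' every ::1 using 3:xtic(1) title 'value'\n"
    )
    with open(script_file, "w") as fh:
        fh.write(script)
```

Running `gnuplot` on the emitted script would then produce the PNG that the first work item zips and mails out.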