Benchmarking the toolchain
We run benchmarks internally using a mix of custom scripts and boards. This work should be cleaned up and taken over by the Infrastructure group. The runs should be largely automatic, continuous, and archived, with basic daily reporting. We need to discuss what to benchmark, which platforms to run on, how to ensure consistency across runs, and how to present the results.
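As a rough illustration of what an automatic, archived run could look like, the sketch below times a single benchmark invocation and stores the result in a timestamped directory. The archive path, board name, toolchain label, and the run-benchmark.sh wrapper are illustrative assumptions, not the scripts we actually use.

    #!/usr/bin/env python3
    # Sketch of one automated, archived benchmark run. The benchmark command,
    # board name, and archive layout are illustrative assumptions.
    import datetime
    import json
    import pathlib
    import subprocess
    import time

    ARCHIVE = pathlib.Path("/srv/benchmarks/results")  # assumed archive root
    BOARD = "panda-01"                                  # assumed board name
    TOOLCHAIN = "gcc-linaro-4.8-2013.08"                # assumed release label

    def run_once(command):
        """Run one benchmark command and return its wall-clock time in seconds."""
        start = time.monotonic()
        subprocess.run(command, check=True)
        return time.monotonic() - start

    def main():
        stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
        outdir = ARCHIVE / TOOLCHAIN / BOARD / stamp
        outdir.mkdir(parents=True, exist_ok=True)
        # A hypothetical wrapper script stands in for the real SPEC/EEMBC harness.
        elapsed = run_once(["./run-benchmark.sh"])
        record = {"board": BOARD, "toolchain": TOOLCHAIN,
                  "timestamp": stamp, "seconds": elapsed}
        (outdir / "result.json").write_text(json.dumps(record, indent=2))

    if __name__ == "__main__":
        main()

Run from cron or a similar scheduler, one such invocation per board and toolchain gives the continuous, archived record described above.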
Blueprint information
- Status:
- Complete
- Approver:
- Michael Hope
- Priority:
- Not
- Drafter:
- Michael Hope
- Direction:
- Needs approval
- Assignee:
- None
- Definition:
- Approved
- Series goal:
- Accepted for trunk
- Implementation:
- Informational
- Milestone target:
- backlog
- Started by:
- Matthew Gretton-Dann
- Completed by:
- Matthew Gretton-Dann
Whiteboard
[2013-08-22 matthew-gretton-dann]
[michaelh1] Deferring as this is too broad and chunks are split into other blueprints.
Work items:
Automate SPEC: DONE
Automate EEMBC: DONE
Get permission to use the reference platform: INPROGRESS
Document the reference platform: TODO
Benchmark all gcc-linaro releases: TODO
Benchmark interesting non-Linaro releases: TODO
Validate and justify methods: TODO
Document methods: TODO
Automate the statistics: TODO (see the sketch after this list)
Automate graphing: TODO
Produce a first internal report: TODO
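A minimal sketch of the statistics and graphing items, assuming each toolchain's results are archived as a JSON file mapping benchmark names to run times in seconds. It reports the geometric mean speedup of a candidate toolchain over a baseline (the conventional summary for SPEC-style results) and writes a bar chart. File names and data layout are assumptions.

    # Sketch of "automate the statistics" and "automate graphing".
    import json
    import math
    import pathlib

    import matplotlib
    matplotlib.use("Agg")       # headless rendering so it can run from cron
    import matplotlib.pyplot as plt

    def load_times(path):
        """Load {benchmark: seconds} from an assumed JSON results file."""
        return json.loads(pathlib.Path(path).read_text())

    def geomean(values):
        """Geometric mean of a list of positive ratios."""
        return math.exp(sum(math.log(v) for v in values) / len(values))

    baseline = load_times("results/gcc-4.7-baseline.json")   # assumed path
    candidate = load_times("results/gcc-linaro-4.8.json")    # assumed path

    # Speedup > 1.0 means the candidate is faster than the baseline.
    speedups = {b: baseline[b] / candidate[b] for b in baseline if b in candidate}
    print("geometric mean speedup:", round(geomean(list(speedups.values())), 3))

    plt.bar(range(len(speedups)), list(speedups.values()))
    plt.xticks(range(len(speedups)), list(speedups.keys()), rotation=90)
    plt.axhline(1.0, color="grey")                           # parity line
    plt.ylabel("speedup vs. baseline")
    plt.tight_layout()
    plt.savefig("speedups.png")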
Work items (public view):
Decide on sharable benchmarks: TODO
Design layout: TODO
Discuss with infrastructure team: TODO
Generate initial one-page reports: TODO (see the sketch after this list)
Update with each release: TODO
Extend to include each commit: TODO
Track upstream as well: TODO
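One possible shape for the initial one-page reports, assuming one JSON file of per-benchmark run times per release under a results/ directory; the script writes a short plain-text summary for each archived release. Paths, file names, and the report format are assumptions for illustration only.

    # Sketch of "generate initial one-page reports".
    import json
    import pathlib

    RESULTS = pathlib.Path("results")   # assumed layout: results/<release>.json
    REPORTS = pathlib.Path("reports")

    def write_report(release):
        """Write a one-page plain-text summary for a single release."""
        times = json.loads((RESULTS / f"{release}.json").read_text())
        lines = [f"Benchmark summary for {release}", ""]
        for bench, seconds in sorted(times.items()):
            lines.append(f"{bench:24s} {seconds:8.1f} s")
        REPORTS.mkdir(exist_ok=True)
        (REPORTS / f"{release}.txt").write_text("\n".join(lines) + "\n")

    for path in RESULTS.glob("*.json"):   # one report per archived release
        write_report(path.stem)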