Current documentation for “Linaro Toolchain Benchmarks”

To classify a blueprint as documentation, set its Implementation status to “Informational”. When the blueprint's Definition status is marked “Approved”, it will appear in this listing.

Add EEMBC Network for Linaro Toolchain Benchmarks
We now have a license for EEMBC Network. Add it to cbuild.
Add SPEC 2006 for Linaro Toolchain Benchmarks
Chris from LLVM prefers SPEC 2006. We don't run it currently because some of the tests require more than 1 GB of RAM, and swapping during a test is bad. We have a license for SPEC 2006. Harness it up similarly to SPEC 2000, disable the benchmarks with excessive memory requirements, add it to cbuild, and add it to our regular runs.
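The pruning step could be a small filter in the harness. A minimal sketch, assuming results are selected before invoking runspec; the skip list below is a placeholder — the real one must come from measuring each benchmark's memory footprint on our boards:

```python
# Placeholder skip list: benchmarks assumed to need more than 1 GB of
# RAM on the target boards. Replace with measured figures before use.
HIGH_MEMORY = {"429.mcf", "403.gcc"}

def runnable(benchmarks, skip=HIGH_MEMORY):
    """Filter out benchmarks we cannot run on 1 GB boards."""
    return [b for b in benchmarks if b not in skip]

int2006 = ["400.perlbench", "403.gcc", "429.mcf", "445.gobmk", "456.hmmer"]
selected = runnable(int2006)
# The harness would then invoke SPEC with only the selected benchmarks,
# along the lines of:
#   runspec --config=linaro-arm --noreportable <selected benchmarks>
print(" ".join(selected))
```

The exclusion happens in the wrapper rather than in the SPEC config, so the same skip list can be reused when boards with more memory become available.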
Trunk reporting for Linaro Toolchain Benchmarks
We now build and benchmark trunk and the latest release branch each week. Add a graph and an automatic email on regressions.
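The regression check could be as simple as comparing the latest weekly scores against the previous run. A minimal sketch, assuming results are kept as benchmark-to-score mappings where higher is better; the 5 % threshold is illustrative, not a tuned value:

```python
THRESHOLD = 0.05  # flag score drops of more than 5 % (illustrative)

def find_regressions(previous, current, threshold=THRESHOLD):
    """Return {benchmark: fractional_drop} for benchmarks that regressed."""
    regressions = {}
    for name, old in previous.items():
        new = current.get(name)
        if new is None:
            continue  # benchmark not run this week
        drop = (old - new) / old
        if drop > threshold:
            regressions[name] = drop
    return regressions

previous = {"gzip": 100.0, "vpr": 80.0, "mcf": 50.0}
current = {"gzip": 101.0, "vpr": 70.0, "mcf": 50.5}
print(find_regressions(previous, current))
```

A cron job could run this after each weekly build and send the returned dictionary by mail (e.g. via Python's standard smtplib) whenever it is non-empty.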
Report benchmark results in LAVA for Linaro Toolchain Benchmarks
Use LAVA for reporting and visualization of toolchain benchmark results. This is the first step towards using LAVA to automate the toolchain benchmarks. Use the available reporting tools/APIs in LAVA for storing and visualizing benchmark results. All other steps for building, running and extracting results will be ...
Firefox as a toolchain benchmark for Linaro Toolchain Benchmarks
Set up a working cross-compilation environment for Firefox. Do a first benchmark. Present the results to the group and evaluate whether the results/findings are useful. If Firefox is too hard to cross-compile, then switch to a WebKit-based browser: Chromium or QtWebKit. Investigate if Mozilla can share their test con...
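As a starting point, the cross build could be driven by a mozconfig. A sketch under assumptions — the target triplet, object directory, and toolchain prefix are hypothetical and should be checked against the Mozilla build documentation and our actual toolchain:

```shell
# .mozconfig sketch (hypothetical triplet/paths; verify against the
# Mozilla build documentation before use)
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-arm
ac_add_options --target=arm-linux-gnueabi
ac_add_options --enable-optimize
ac_add_options --disable-debug
ac_add_options --disable-tests
export CC=arm-linux-gnueabi-gcc
export CXX=arm-linux-gnueabi-g++
```

If configure stalls on a host-only dependency, that would be an early signal to evaluate the Chromium or QtWebKit fallback instead.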
Benchmarking the toolchain for Linaro Toolchain Benchmarks
We do benchmarks internally using a mix of custom scripts and boards. This should be cleaned up and taken over by the Infrastructure group. The runs should be mainly automatic, continuous, archived, and have basic daily reporting. Discuss what to benchmark, the platforms to run on, how to ensure consistency...

6 blueprint(s) listed.