Tempest-based benchmark scenario
The major goal is to reuse the large number of already existing Tempest tests for benchmarking. To do this we should add something like a TempestScenario class with a couple of methods:
def all(self):
    # just run all tests from Tempest, one by one
    pass

def set(self, set_name):
    # run, one by one, the tests from the named set
    # (sets are hardcoded inside Rally)
    pass

def random_(self, set_name):
    # run a random test from the set with name set_name
    pass

def specific_regex(self, regex):
    # run all tests whose names match the regexp
    pass

def single_test(self, test_name):
    # run the test with name test_name
    pass

def list_of_test(self, test_names):
    # run, one by one, the tests from the test_names list
    pass

def random_(self, test_names):
    # run a random test from test_names
    pass
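The dispatch logic described above can be sketched as a small self-contained class. This is only an illustration of the proposed API, not Rally's actual implementation: the class name follows the draft, but `TEST_SETS`, the `runner` hook, and the `random_test` method name are hypothetical (the draft's `random_` names are truncated), and a real runner would launch Tempest instead of recording test names.

```python
import random
import re

# Hypothetical hardcoded sets, standing in for the sets "hardcoded inside Rally".
TEST_SETS = {
    "smoke": ["test_boot_server", "test_list_images"],
    "network": ["test_create_network"],
}


class TempestScenario:
    def __init__(self, available_tests, runner=None):
        self.available_tests = list(available_tests)
        self.launched = []
        # A real runner would invoke Tempest; by default we just record names.
        self.runner = runner or self.launched.append

    def all(self):
        # run every available test, one by one
        for test in self.available_tests:
            self.runner(test)

    def set(self, set_name):
        # run, one by one, the tests from a hardcoded set
        for test in TEST_SETS[set_name]:
            self.runner(test)

    def specific_regex(self, regex):
        # run all tests whose names match the regexp
        pattern = re.compile(regex)
        for test in self.available_tests:
            if pattern.search(test):
                self.runner(test)

    def single_test(self, test_name):
        # run a single named test
        self.runner(test_name)

    def list_of_test(self, test_names):
        # run, one by one, the tests from an explicit list
        for test in test_names:
            self.runner(test)

    def random_test(self, test_names):
        # run one randomly chosen test (hypothetical name for the draft's random_)
        self.runner(random.choice(test_names))


scenario = TempestScenario(
    ["test_boot_server", "test_list_images", "test_create_network"])
scenario.set("smoke")
scenario.specific_regex("image")
```

After the two calls above, `scenario.launched` holds the smoke set followed by the regexp match, which shows how each method reduces to selecting test names and handing them to a single runner hook.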
Blueprint information
- Status: Complete
- Approver: Boris Pavlovic
- Priority: High
- Drafter: Boris Pavlovic
- Direction: Approved
- Assignee: Andrey Kurilin
- Definition: Approved
- Series goal: None
- Implementation: Implemented
- Milestone target: None
- Started by: Boris Pavlovic
- Completed by: Boris Pavlovic
Related bugs
Bug #1316986: tempest test validator doesn't work for tests from `tempest.thirdparty.*` (Fix Released)
Whiteboard
Gerrit topic: https:/
Addressed by: https:/
Add benchmark for tempest. Part 1
Addressed by: https:/
Add benchmark for tempest. Part 2
Work Items
Work items:
TempestContext implementation: DONE
TempestScenario implementation: DONE
- all: DONE
- set: DONE
- specific_regex: DONE
- single_test: DONE
- list_of_test: DONE