System tests for fuel upgrading feature
Feature Lead: Andrey Sledzinskiy
Mandatory Design Reviewers: A. Urlapova, T. Leontovich, Evgeniy Li, A. Panchenko
Developers: A. Urlapova, A. Sledzinskiy
QA: A. Urlapova, A. Sledzinskiy
The scenarios to automate are:
Pre-condition for Case 1:
System test successfully passed - the environment was deployed and all checks passed
Case 1 - upgrade a deployed environment (run if the 'Upgrade_...' variable is set to true):
(These steps will be added to all tests after the environment is deployed and all checks have passed.)
1. Check there is no 'upgrade_...'
2. Run upgrade - download the tarball to the master node, extract the files, and run the upgrade script
3. Check that the cluster still exists and there is 'upgrade_...'
4. Run OSTF - all tests pass
5. Run 'docker images' and 'docker ps' on the master node - the upgraded images and containers are shown, and all containers are active (see the sketch after this list)
6. Check with 'ls -l /etc/supervisord.d/' that the 'current' symlink points to the upgraded version's directory
7. Check that 'dockerctl shell' works fine - it is possible to shell into all of the containers
8. Check that 'dockerctl destroy/stop/start' work properly
9. Check that there were no changes in permissions
10. Check that syslog gets a HUP (analyze rsyslog)
11. For some tests, additional nodes can be added; the cluster is then re-deployed and OSTF is run again
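A minimal sketch of how the recurring container and symlink checks (steps 5-6) could be automated over SSH; the master address, credentials, helper function, and container names are illustrative assumptions, not values from this blueprint:

    import paramiko

    MASTER_IP = '10.20.0.2'   # typical lab default, adjust per environment
    EXPECTED = ['nailgun', 'astute', 'cobbler', 'mcollective',
                'ostf', 'postgres', 'rabbitmq', 'rsyslog']  # assumed names

    def run(client, cmd):
        # Execute a command on the master node, return (exit code, stdout).
        _, stdout, _ = client.exec_command(cmd)
        rc = stdout.channel.recv_exit_status()
        return rc, stdout.read().decode()

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(MASTER_IP, username='root', password='r00tme')

    # Step 5: every expected container should appear among the running ones.
    rc, ps_out = run(client, 'docker ps')
    assert rc == 0, 'docker ps failed on the master node'
    for name in EXPECTED:
        assert name in ps_out, 'container %s is not running after upgrade' % name

    # Step 6: the "current" symlink under /etc/supervisord.d/ must resolve.
    rc, target = run(client, 'readlink -e /etc/supervisord.d/current')
    assert rc == 0 and target.strip(), 'current symlink missing or dangling'

    client.close()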
Pre-condition for Case 2:
Steps from Case 1 successfully passed
Case 2 - roll back an upgraded deployed environment (run if the 'Rollback' variable is set to true):
1. Execute the following commands on the master node (see the sketch after this case):
- rm /etc/supervisord.d/...
- ln -s /etc/supervisord.d/...
- rm /etc/fuel/...
- ln -s /etc/fuel/...
- /etc/init.d/...
- docker stop $(docker ps -q)
- /etc/init.d/...
2. Check that the cluster still exists and the release version changed back to the previous one
3. Run OSTF - all tests pass
4. Run 'docker images' and 'docker ps' on the master node - the images and containers of the previous version are shown, and all containers are active
5. Check with 'ls -l /etc/supervisord.d/' that the 'current' symlink points to the previous version's directory
6. Check that 'dockerctl shell' works fine - it is possible to shell into all of the containers
7. Check that 'dockerctl destroy/stop/start' work properly
8. Check that syslog gets a HUP (analyze rsyslog)
9. Check that there were no changes in permissions
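One plausible reconstruction of the truncated rollback command sequence from step 1, replayed over SSH; the exact paths and the previous-version value are assumptions based on the symlink layout referenced in the checks above, not verbatim from this blueprint:

    import paramiko

    PREVIOUS = '5.0.1'   # hypothetical previous release number
    # The paths below are an assumed reconstruction of the truncated commands.
    ROLLBACK_CMDS = [
        'rm /etc/supervisord.d/current',
        'ln -s /etc/supervisord.d/%s /etc/supervisord.d/current' % PREVIOUS,
        'rm /etc/fuel/version.yaml',
        'ln -s /etc/fuel/%s/version.yaml /etc/fuel/version.yaml' % PREVIOUS,
        '/etc/init.d/supervisord stop',
        'docker stop $(docker ps -q)',
        '/etc/init.d/supervisord start',
    ]

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect('10.20.0.2', username='root', password='r00tme')
    for cmd in ROLLBACK_CMDS:
        _, stdout, _ = client.exec_command(cmd)
        rc = stdout.channel.recv_exit_status()
        assert rc == 0, 'rollback command failed: %s (rc=%d)' % (cmd, rc)
    client.close()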
Pre-condition for Case 3:
The system-test pre-condition passed successfully; a cluster is created and updated but not deployed
Case 3 - upgrade a not yet deployed environment (run if the 'Upgrade_...' variable is set to true):
1. Check there is no 'upgrade_...'
2. Run upgrade - download the tarball to the master node, extract the files, and run the upgrade script (see the sketch after this list)
3. Check that the cluster still exists and there is 'upgrade_...'
4. Run 'docker images' and 'docker ps' on the master node - the upgraded images and containers are shown, and all containers are active
5. Check with 'ls -l /etc/supervisord.d/' that the 'current' symlink points to the upgraded version's directory
6. Check that 'dockerctl shell' works fine - it is possible to shell into all of the containers
7. Check that 'dockerctl destroy/stop/start' work properly
8. Check that there were no changes in permissions
9. Check that syslog gets a HUP (analyze rsyslog)
10. For some tests, add additional nodes
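A sketch of step 2 (download the tarball, extract it, run the upgrade script); the tarball URL, working paths, and script name are placeholders, not the blueprint's actual values:

    import paramiko

    TARBALL_URL = 'http://example.com/fuel-upgrade.tar.gz'  # placeholder URL

    def run(client, cmd):
        # Run a command on the master node and fail loudly on a non-zero exit.
        _, stdout, _ = client.exec_command(cmd)
        rc = stdout.channel.recv_exit_status()
        assert rc == 0, 'command failed: %s (rc=%d)' % (cmd, rc)

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect('10.20.0.2', username='root', password='r00tme')
    run(client, 'wget -O /var/tmp/upgrade.tar.gz %s' % TARBALL_URL)
    run(client, 'mkdir -p /var/tmp/upgrade && '
                'tar -xzf /var/tmp/upgrade.tar.gz -C /var/tmp/upgrade')
    run(client, 'cd /var/tmp/upgrade && ./upgrade.sh')  # assumed script name
    client.close()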
Pre-condition for Case 4:
Steps 1-2 of Case 3 successfully passed
Case 4 - roll back an upgraded, not yet deployed environment (run if the 'Rollback_...' variable is set to true):
1. Execute the same commands on the master node as in Case 2, step 1 (see the sketch after Case 2)
2. Check that the cluster still exists and the release version changed back to the previous one (see the sketch after this case)
3. Run 'docker images' and 'docker ps' on the master node - the images and containers of the previous version are shown, and all containers are active
4. Check with 'ls -l /etc/supervisord.d/' that the 'current' symlink points to the previous version's directory
5. Check that 'dockerctl shell' works fine - it is possible to shell into all of the containers
6. Check that 'dockerctl destroy/stop/start' work properly
7. Check that there were no changes in permissions
8. Check that syslog gets a HUP (analyze rsyslog)
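A sketch of the step 2 verification through the Nailgun REST API; the port, the unauthenticated access, the field names, and the version string are assumptions that may not match every Fuel setup:

    import requests

    NAILGUN = 'http://10.20.0.2:8000'        # assumed Nailgun endpoint
    PREVIOUS_VERSION = '2014.1.1-5.0.1'      # hypothetical previous version

    # The cluster must still exist after the rollback.
    clusters = requests.get(NAILGUN + '/api/clusters').json()
    assert clusters, 'cluster disappeared after rollback'

    # Its release version should have changed back to the previous one.
    release_id = clusters[0]['release_id']
    release = requests.get('%s/api/releases/%d' % (NAILGUN, release_id)).json()
    assert release['version'] == PREVIOUS_VERSION, \
        'unexpected release version after rollback: %s' % release['version']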
Pre-condition for Case 5:
Case 3 successfully passed
Case 5 - roll back an upgraded deployed environment (run if the 'Rollback' variable is set to true):
1. Execute the same commands on the master node as in Case 2, step 1 (see the sketch after Case 2)
2. Check that the cluster still exists and the release version changed back to the previous one
3. Run 'docker images' and 'docker ps' on the master node - the images and containers of the previous version are shown, and all containers are active
4. Check with 'ls -l /etc/supervisord.d/' that the 'current' symlink points to the previous version's directory
5. Check that 'dockerctl shell' works fine - it is possible to shell into all of the containers (see the sketch after this case)
6. Check that 'dockerctl destroy/stop/start' work properly
7. Check that there were no changes in permissions
8. Check that syslog gets a HUP (analyze rsyslog)
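A sketch of the recurring 'dockerctl shell' check from step 5; it assumes 'dockerctl list' prints one application name per line, which may differ between releases:

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect('10.20.0.2', username='root', password='r00tme')

    # Ask dockerctl which applications it manages (output format assumed).
    _, stdout, _ = client.exec_command('dockerctl list')
    apps = stdout.read().decode().split()
    assert apps, 'dockerctl list returned nothing'

    # Shelling into a container should work for every application.
    for app in apps:
        _, stdout, _ = client.exec_command('dockerctl shell %s echo ok' % app)
        rc = stdout.channel.recv_exit_status()
        assert rc == 0, 'cannot shell into the %s container' % app
    client.close()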
Case 6 - restart nodes after upgrade (probably should be added to test_failover.py):
1. Install master
2. Deploy a simple cluster - simple mode, 1 controller, 1 compute, 1 cinder
3. Run upgrade (download tarball to master, extract files, run script with upgrade)
4. Restart all nodes
5. Check that the nodes don't boot back into bootstrap - they stay deployed (see the sketch below)
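A sketch of step 5, polling the (assumed) Nailgun nodes endpoint until the rebooted nodes come back; a node that fell back to bootstrap would re-register as 'discover' rather than stay 'ready':

    import time
    import requests

    NAILGUN = 'http://10.20.0.2:8000'   # assumed Nailgun endpoint

    # Give the nodes up to 15 minutes to come back online after the restart.
    deadline = time.time() + 15 * 60
    nodes = []
    while time.time() < deadline:
        nodes = requests.get(NAILGUN + '/api/nodes').json()
        if nodes and all(n['online'] for n in nodes):
            break
        time.sleep(30)

    # A re-bootstrapped node would show up as 'discover' instead of 'ready'.
    for n in nodes:
        assert n['status'] == 'ready', \
            'node %s is %r, looks re-bootstrapped' % (n['name'], n['status'])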
Case 7 - upgrade during deployment (probably should be added to test_failover.py):
1. Install master
2. Run deployment of an HA cluster - 3 controllers, 1 compute, 1 cinder
3. Run the upgrade during deployment
4. Check that the deployment stops (see the sketch after this case)
5. Check that the cluster still exists and there is 'upgrade_...'
6. Run OSTF - all tests pass
7. Run 'docker images' and 'docker ps' on the master node - the upgraded images and containers are shown, and all containers are active
8. Check with 'ls -l /etc/supervisord.d/' that the 'current' symlink points to the upgraded version's directory
9. Check that 'dockerctl shell' works fine - it is possible to shell into all of the containers
10. Check that 'dockerctl destroy/stop/start' work properly
11. Check that there were no changes in permissions
12. Re-deploy cluster
13. Run OSTF
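A sketch of steps 2-4; the Nailgun endpoints, the cluster id, and the upgrade invocation are assumptions rather than the blueprint's exact code:

    import time
    import paramiko
    import requests

    NAILGUN = 'http://10.20.0.2:8000'   # assumed Nailgun endpoint
    CLUSTER_ID = 1                      # assumed id of the HA cluster

    def run_upgrade_on_master():
        # Hypothetical: reuse the Case 3 sequence (download, extract, run).
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect('10.20.0.2', username='root', password='r00tme')
        client.exec_command('cd /var/tmp/upgrade && ./upgrade.sh')  # assumed
        client.close()

    # Step 2: trigger deployment; Nailgun returns the deployment task.
    task = requests.put('%s/api/clusters/%d/changes' % (NAILGUN, CLUSTER_ID)).json()

    # Step 3: start the upgrade while the deployment is still running.
    run_upgrade_on_master()

    # Step 4: the deployment task should leave the 'running' state.
    status = 'running'
    for _ in range(60):
        status = requests.get('%s/api/tasks/%d' % (NAILGUN, task['id'])).json()['status']
        if status != 'running':
            break
        time.sleep(10)
    assert status != 'running', 'deployment did not stop after the upgrade'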
Case 8 - coexistence of environments with different fuel versions (probably should be added to test_failover.py):
1. Install master
2. Create a simple cluster - simple mode, 1 controller, 1 compute, 1 cinder
3. Deploy cluster
4. Run OSTF
5. Upgrade fuel
6. Create a new simple cluster with default values - simple mode, 1 controller, 1 compute, 1 cinder (see the sketch after this case)
7. Deploy the new cluster
8. Run OSTF for the new cluster
9. Add 1 cinder node to old cluster and re-deploy
10. Run OSTF for old cluster
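A sketch of steps 6-7; the payload fields and the way the new release is selected are assumptions based on the Nailgun API and may differ between Fuel versions:

    import requests

    NAILGUN = 'http://10.20.0.2:8000'   # assumed Nailgun endpoint

    # Step 6: pick the newest release registered by the upgrade (assumed to be
    # the one with the highest id) and create a default cluster against it.
    releases = requests.get(NAILGUN + '/api/releases').json()
    new_release = max(releases, key=lambda r: r['id'])
    cluster = requests.post(NAILGUN + '/api/clusters', json={
        'name': 'cluster_after_upgrade',       # illustrative name
        'release_id': new_release['id'],       # field name assumed
        'mode': 'multinode',                   # "simple" mode in Fuel 5.x terms
    }).json()

    # ...assign 1 controller, 1 compute and 1 cinder node to the cluster here...

    # Step 7: deploy the new cluster (returns a task to poll until completion).
    task = requests.put('%s/api/clusters/%d/changes' % (NAILGUN, cluster['id'])).json()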
Blueprint information
- Status: Complete
- Approver: Nastya Urlapova
- Priority: Essential
- Drafter: Andrey Sledzinskiy
- Direction: Approved
- Assignee: Andrey Sledzinskiy
- Definition: Approved
- Series goal: Accepted for 5.1.x
- Implementation: Implemented
- Milestone target: 5.1
- Started by: Nastya Urlapova
- Completed by: Nastya Urlapova
Gerrit topic: https:/
Addressed by: https:/
Create tests for upgrade feature