[EDP] Add an engine for a Spark standalone deployment
Sahara needs an EDP implementation that runs on clusters created with the Spark plugin. This implementation should include the three basic EDP functions:
- run_job()
- get_job_status()
- cancel_job()
The Spark plugin creates "Spark standalone" deployments which use the native scheduler, not Yarn or Mesos. Therefore the EDP implementation must use only facilities provided natively by Spark and Linux.
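To illustrate the constraint above, here is a minimal sketch of what such an engine could look like. It is not Sahara's actual implementation: the class name, job bookkeeping, and status strings are hypothetical, and a real engine would run a `spark-submit` command over SSH on the cluster's master node. The sketch launches processes locally, but the mechanisms shown are exactly the "native Spark and Linux" facilities the blueprint calls for: launching a child process, polling its exit status, and cancelling with a plain POSIX signal rather than a YARN or Mesos API.

```python
import signal
import subprocess

class SparkStandaloneEngine:
    """Hypothetical sketch of an EDP engine for Spark standalone mode.

    A real engine would execute these commands on the master node,
    e.g. ["spark-submit", "--master", "spark://master:7077", app_jar].
    """

    def __init__(self):
        self.procs = {}  # job_id -> subprocess.Popen handle

    def run_job(self, job_id, command):
        # Launch the job as an ordinary Linux process; no YARN/Mesos.
        proc = subprocess.Popen(command)
        self.procs[job_id] = proc
        return proc.pid

    def get_job_status(self, job_id):
        # poll() returns None while the process is still running,
        # otherwise the exit code.
        rc = self.procs[job_id].poll()
        if rc is None:
            return "RUNNING"
        return "SUCCEEDED" if rc == 0 else "DONEWITHERROR"

    def cancel_job(self, job_id):
        # Cancellation is a plain SIGTERM to the process group leader.
        proc = self.procs[job_id]
        if proc.poll() is None:
            proc.send_signal(signal.SIGTERM)
            proc.wait()
        return self.get_job_status(job_id)
```

The status strings are placeholders for whatever mapping the engine defines between Linux exit codes and EDP job states; the key point is that every operation above needs only `subprocess` and signals.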
Blueprint information
- Status: Complete
- Approver: Sergey Lukjanov
- Priority: High
- Drafter: Trevor McKay
- Direction: Approved
- Assignee: Trevor McKay
- Definition: Approved
- Series goal: Accepted for juno
- Implementation: Implemented
- Milestone target: 2014.2
- Started by: Sergey Lukjanov
- Completed by: Sergey Lukjanov
Whiteboard
Waiting for a spec.
Gerrit topic: https:/
Addressed by: https:/
[EDP] Add an engine for a Spark standalone deployment
Addressed by: https:/
Implement EDP for a Spark standalone cluster