Aggregated HTTP metrics
Currently, the metrics extracted from logs generate a large number of messages in the Heka pipeline, which has several drawbacks:
* This can completely block Heka, at which point all data processing is wedged.
* The issue has been mitigated (at least it has not been observed since) by increasing the poolsize configuration of Heka (log_collector) from 100 to 200, at the cost of a larger memory footprint for the process (+100 MB).
* The metric generation rate depends directly on the log entries (generated by API calls), which implies:
  * An "unpredictable" load on both the Heka processes and InfluxDB.
  * The impossibility of determining a predictable retention period (defined by the buffer size of the InfluxDB output).
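The poolsize mitigation described above lives in the global `[hekad]` section of the collector's TOML configuration. A minimal sketch, showing only the setting mentioned in the text (all other options omitted):

```toml
# Global hekad section of the log_collector configuration.
# poolsize bounds the number of messages that can exist in the
# pipeline at once; raising it from the default 100 to 200 avoids
# the observed blocking, at the cost of roughly 100 MB of extra
# memory for the process.
[hekad]
poolsize = 200
```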
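The blueprint's remedy is to emit aggregated HTTP metrics at a fixed interval instead of one metric per log entry, which decouples the metric rate from the API call rate. A minimal sketch of that idea (the function name, sample shape, and statistics are illustrative assumptions, not the collector's actual implementation):

```python
from collections import defaultdict

def aggregate_http_metrics(samples, interval=10):
    """Group (timestamp, service, response_time) samples into fixed
    time buckets and emit one aggregated metric per (bucket, service).

    The output rate is bounded by the number of buckets and services,
    regardless of how many API calls (log entries) occurred.
    """
    buckets = defaultdict(list)
    for ts, service, response_time in samples:
        # Align the timestamp to the start of its interval.
        bucket_start = ts // interval * interval
        buckets[(bucket_start, service)].append(response_time)

    aggregated = []
    for (start, service), times in sorted(buckets.items()):
        aggregated.append({
            "time": start,
            "service": service,
            "count": len(times),
            "min": min(times),
            "max": max(times),
            "avg": sum(times) / len(times),
        })
    return aggregated
```

With a fixed interval, both the load on InfluxDB and the retention period implied by the output buffer size become predictable, since the number of emitted messages per interval is bounded.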
Blueprint information
- Status:
- Complete
- Approver:
- None
- Priority:
- High
- Drafter:
- Swann Croiset
- Direction:
- Approved
- Assignee:
- Swann Croiset
- Definition:
- Approved
- Series goal:
- Accepted for 0.10
- Implementation:
- Implemented
- Milestone target:
- 0.10.0
- Started by:
- Patrick Petit
- Completed by:
- Swann Croiset
Whiteboard
Gerrit topic: https:/
Addressed by: https:/
Modify multivalue_metric implementation
Addressed by: https:/
Emit aggregated HTTP metrics
Addressed by: https:/
Separate the (L)og of the LMA collector
Addressed by: https:/
Add keep_alive configurations for TCP input/output plugins
Addressed by: https:/
Remove dashboard configuration by the heka module
Addressed by: https:/
Avoid to inject common tags twice for log_messages metrics
Addressed by: https:/
Add metric TCP decoder for the metric_collector
Addressed by: https:/
Update dashboards to use aggregated HTTP metrics