Implement ways to track the performance of zeitgeist
One of the future tasks of Zeitgeist development should be to implement ways to measure the performance of the various components of Zeitgeist. This includes questions like:
* How much memory does Zeitgeist consume? What are the most memory-intensive parts of Zeitgeist (SQLite, cached objects, etc.)? How does memory consumption evolve over time?
* How fast are DBus requests? What is their 'response time'?
* How fast are the SQL queries? In which ways can we optimize them?
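As a starting point, the SQL and memory questions above could be answered with a small instrumentation helper. A minimal sketch (the `timed_query` helper and the `event` table are hypothetical, not part of the real Zeitgeist schema), assuming Python with the standard `sqlite3`, `time` and `resource` modules:

```python
import sqlite3
import time
import resource

def timed_query(conn, sql, params=()):
    """Run a query and report its wall-clock time (hypothetical helper)."""
    start = time.time()
    rows = conn.execute(sql, params).fetchall()
    elapsed = time.time() - start
    print("%.3f ms  %s" % (elapsed * 1000, sql))
    return rows

# Toy database standing in for the real Zeitgeist schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (id INTEGER PRIMARY KEY, subject TEXT)")
conn.executemany("INSERT INTO event (subject) VALUES (?)",
                 [("file:///tmp/%d" % i,) for i in range(1000)])

timed_query(conn, "SELECT COUNT(*) FROM event")

# Peak resident memory of this process so far; note the unit is
# kilobytes on Linux but bytes on macOS.
print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
```

Collecting such per-query timings over a real session would show which queries dominate and whether they change as the database grows.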
Blueprint information
- Status:
- Started
- Approver:
- Zeitgeist Framework Team
- Priority:
- High
- Drafter:
- Markus Korn
- Direction:
- Approved
- Assignee:
- Markus Korn
- Definition:
- Approved
- Series goal:
- Accepted for 0.7
- Implementation:
- Good progress
- Milestone target:
- 0.7.0
- Started by:
- Seif Lotfy
- Completed by:
Related branches
Related bugs
Bug #624310: Large requests increase memory usage considerably | Invalid |
Bug #632363: Slow queries: SQL indexes not used | Fix Released |
Sprints
Whiteboard
Here is a little snippet of a discussion on the IRC channel
-------
<thekorn> seif__: what's wrong about calling FindEvents 7 times?
<thekorn> I don't think dbus roundtrips are bad
<thekorn> or expensive (performance wise)
<seif__> it's more performant
<seif__> like in GAJ
<seif__> we want to draw the bottom bar
<seif__> instead of calling 90 times
<seif__> i call once
<seif__> can i get things categorized
<thekorn> there was even a talk at GUADEC last year, where a telepathy guy said: "do as many roundtrips as possible, instead of putting everything into one big method"
<thekorn> seif__: yeah, but: there will be 90 times more data to send, and it will take 90 times more time
<RainCT> do we have any benchmarks for this?
<thekorn> which roughly means 90 times more possible timeouts
<thekorn> RainCT: no
<thekorn> we are pretty bad at doing benchmarks
<thekorn> and having scientific data of how much time we spend in sql, python, dbus etc
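The "do we have any benchmarks?" question from the discussion above could be answered with a simple call-timing decorator wrapped around the client-side request methods. A hedged sketch (the `profiled` decorator and the `find_events` stand-in are hypothetical; a real benchmark would wrap the actual FindEvents DBus call):

```python
import time
import functools

def profiled(func):
    """Record the wall-clock duration of every call (hypothetical helper)."""
    durations = []
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return func(*args, **kwargs)
        finally:
            durations.append(time.time() - start)
    wrapper.durations = durations
    return wrapper

@profiled
def find_events(templates):
    # Stand-in for a real FindEvents request; a benchmark would
    # perform the DBus call here instead.
    return list(templates)

# Mirror the "calling FindEvents 7 times" case from the discussion.
for i in range(7):
    find_events(range(10))

print("calls: %d, total: %.6f s" % (len(find_events.durations),
                                    sum(find_events.durations)))
```

Comparing the totals for "7 small calls" versus "1 big call" on real data would settle the roundtrip-versus-batch argument with numbers instead of intuition.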