Replace adaptive hash index partitioning with multi-heap hash tables
It looks like the idea behind the AHI partitioning patch has been implemented upstream in a more general form: each hash table created by ha_create_func() may have multiple heaps protected by separate rwlocks/mutexes. The feature was originally implemented to split the buffer pool page hash, but was disabled later, so in the 5.6 code base it is only available in UNIV_DEBUG or UNIV_PERF_DEBUG builds, and even there it is used only for the page hash, not for the AHI tables.
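To make the mechanism concrete, below is a minimal self-contained sketch of the multi-heap design. The names and layouts are condensed approximations invented for this sketch, not the actual InnoDB declarations: a single shared cells array, plus a set of rwlocks (each of which, in InnoDB, also owns a memory heap for hash nodes) selected from the fold (hash) value.

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical condensed form of the upstream multi-heap hash table:
   one shared cells array plus n_sync_obj rwlocks (in InnoDB each sync
   object also owns a mem_heap_t for allocating hash nodes). */
typedef struct hash_cell { void *node; } hash_cell_t;

typedef struct hash_table {
    hash_cell_t      *cells;      /* single shared cells array */
    size_t            n_cells;
    size_t            n_sync_obj; /* number of partitions, power of two */
    pthread_rwlock_t *locks;      /* one rwlock per partition */
} hash_table_t;

/* Pick the partition lock from the hash value, not from an index ID. */
static size_t hash_get_lock_no(const hash_table_t *t, size_t fold)
{
    return fold & (t->n_sync_obj - 1);
}

static hash_table_t *hash_create(size_t n_cells, size_t n_sync_obj)
{
    hash_table_t *t = malloc(sizeof(*t));
    t->cells = calloc(n_cells, sizeof(hash_cell_t));
    t->n_cells = n_cells;
    t->n_sync_obj = n_sync_obj;
    t->locks = malloc(n_sync_obj * sizeof(pthread_rwlock_t));
    for (size_t i = 0; i < n_sync_obj; i++)
        pthread_rwlock_init(&t->locks[i], NULL);
    return t;
}
```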
By utilizing that feature for AHI tables we get:
1. Less code to maintain, as most of the helper functions and logic have already been implemented for the multi-heap hash tables.
2. Finer lock granularity: with multi-heap hash tables, entries are partitioned by hash value rather than by index ID (as in the AHI partitioning implementation), which makes it possible, for example, to read and build AHI entries for the same index concurrently (see the sketch after this list).
3. Lower memory footprint: unlike AHI partitioning, which creates a separate hash table per partition (each with a fixed number of cells, many of which are likely to go unused), multi-heap tables share a single cells array.
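Continuing the sketch above (same hypothetical types), points 2 and 3 look roughly like this: lookups S-latch and inserts X-latch only the partition selected by the fold value, while the cells array itself is shared by all partitions, so two threads working on the same index but different folds proceed in parallel.

```c
/* A reader and a writer that hash to different partitions run in
   parallel even when they touch the same index, because the lock is
   chosen from the fold value; per-index-ID partitioning would
   serialize them on a single table's latch. */
static void *hash_search(hash_table_t *t, size_t fold)
{
    size_t lock_no = hash_get_lock_no(t, fold);
    pthread_rwlock_rdlock(&t->locks[lock_no]);  /* S-latch one partition */
    void *node = t->cells[fold % t->n_cells].node;
    pthread_rwlock_unlock(&t->locks[lock_no]);
    return node;
}

static void hash_insert(hash_table_t *t, size_t fold, void *node)
{
    size_t lock_no = hash_get_lock_no(t, fold);
    pthread_rwlock_wrlock(&t->locks[lock_no]);  /* X-latch one partition */
    /* The real code chains nodes allocated from the partition's heap;
       a single cell pointer keeps the sketch short. */
    t->cells[fold % t->n_cells].node = node;
    pthread_rwlock_unlock(&t->locks[lock_no]);
}
```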
With some further modifications to the hash table API it is possible to implement automatic partitioning: increase the number of heaps in the AHI hash table as necessary, so that each index gets its own heap and its own lock (reset/destroyed when the corresponding index memory object is destroyed), as sketched below. This would essentially make the AHI partitioning feature and the innodb_adaptive_hash_index_partitions option redundant.
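A rough illustration of what such automatic partitioning could look like; the names (ahi_slots_t, ahi_assign_slot, ahi_release_slot) are invented for this sketch, and the latch that would protect the slot bookkeeping itself is omitted for brevity:

```c
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative only: each index claims a dedicated lock (and, in the
   real thing, heap) slot on first use; the arrays grow on demand. */
typedef struct ahi_slots {
    uint64_t         *index_ids; /* index_ids[i] owns slot i; 0 = free */
    pthread_rwlock_t *locks;
    size_t            n_slots;
} ahi_slots_t;

static size_t ahi_assign_slot(ahi_slots_t *s, uint64_t index_id)
{
    size_t free_slot = s->n_slots;
    for (size_t i = 0; i < s->n_slots; i++) {
        if (s->index_ids[i] == index_id)
            return i;                    /* index already owns a slot */
        if (s->index_ids[i] == 0 && free_slot == s->n_slots)
            free_slot = i;               /* remember first free slot */
    }
    if (free_slot == s->n_slots) {       /* no free slot: grow arrays */
        size_t n = s->n_slots ? 2 * s->n_slots : 4;
        s->index_ids = realloc(s->index_ids, n * sizeof(*s->index_ids));
        s->locks     = realloc(s->locks,     n * sizeof(*s->locks));
        for (size_t i = s->n_slots; i < n; i++) {
            s->index_ids[i] = 0;
            pthread_rwlock_init(&s->locks[i], NULL);
        }
        s->n_slots = n;
    }
    s->index_ids[free_slot] = index_id;
    return free_slot;
}

/* Called when the index memory object is destroyed: the slot (and its
   heap, in the real implementation) is reset for reuse. */
static void ahi_release_slot(ahi_slots_t *s, uint64_t index_id)
{
    for (size_t i = 0; i < s->n_slots; i++)
        if (s->index_ids[i] == index_id)
            s->index_ids[i] = 0;
}
```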
Blueprint information
- Status: Not started
- Approver: None
- Priority: Medium
- Drafter: None
- Direction: Approved
- Assignee: None
- Definition: Approved
- Series goal: Accepted for 5.6
- Implementation: Unknown
- Milestone target: None
- Started by:
- Completed by: