path: root/src/wrapped/generated/wrappedatomictypes.h
author: Chi-Kuan Chiu <105687635+devarajabc@users.noreply.github.com> 2025-04-22 17:51:57 +0800
committer: GitHub <noreply@github.com> 2025-04-22 11:51:57 +0200
commit: 75aaf7aa64274dad5f2644b2ea247d438b78b586 (patch)
tree: 1e3cd67df4c4f22c72d5f5c27dbabee2898e1e68 /src/wrapped/generated/wrappedatomictypes.h
parent: 39a66e25eecb9e0c449bfc868ff781ae6ee16ca1 (diff)
[RBTREE] Cache boundary nodes and remove `add_range()` (#2557)
* Cache leftmost and rightmost node

Add two fields to `rbtree`: `lefter` and `righter`, to cache the
leftmost and rightmost nodes respectively. This eliminates the need for
O(log N) traversals in `rb_get_lefter()` and `rb_get_righter()`.

Motivated by the Linux kernel's use of cached pointers for `rb_first()`
and `rb_last()`, this change improves efficiency of boundary queries by
replacing repeated tree walks with direct pointer dereference.
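The idea above can be sketched in C. This is a minimal illustration, not Box64's actual rbtree implementation: the node layout, the `start` key, and the helper names other than `lefter`/`righter`/`rb_get_lefter()` are assumptions made for the example.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative node/tree layout (not Box64's real structures). */
typedef struct rbnode {
    struct rbnode *left, *right;
    uintptr_t start;            /* ordering key */
} rbnode;

typedef struct rbtree {
    rbnode *root;
    rbnode *lefter;             /* cached leftmost node  */
    rbnode *righter;            /* cached rightmost node */
} rbtree;

/* The O(log N) walk that the cache makes unnecessary. */
static rbnode *walk_leftmost(rbnode *n) {
    while (n && n->left) n = n->left;
    return n;
}

/* On insertion, one comparison per boundary keeps the cache fresh:
   a "lightweight cache update (single assignment)". */
static void update_cache_on_insert(rbtree *t, rbnode *n) {
    if (!t->lefter  || n->start < t->lefter->start)  t->lefter  = n;
    if (!t->righter || n->start > t->righter->start) t->righter = n;
}

/* The boundary query becomes a direct pointer dereference. */
static rbnode *rb_get_lefter(rbtree *t)  { return t->lefter;  }
static rbnode *rb_get_righter(rbtree *t) { return t->righter; }
```

Deletion of a cached boundary node would still need one `walk_leftmost()`-style traversal to refresh the cache, which matches the counts reported below: many cheap updates, a smaller number of unavoidable walks.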

Experiment: running `chess.exe` with Box64 + Wine (#2511)
- ~3,500 insertions into the tree
- 607 lightweight cache updates (single assignment)
- 397 full tree traversals avoided

This reduces runtime overhead for boundary checks at a small memory
cost (+2 pointers per tree). The benefit is expected to grow in larger
or more dynamic workloads.

Ref: https://docs.kernel.org/core-api/rbtree.html

* Remove redundant add_range() wrapper

The function `add_range()` was only called when `tree->root == NULL`.
In such cases, the while-loop inside `add_range()` never runs,
resulting in a call to `add_range_next_to()` with `prev == NULL`.

Replaced it with direct calls to `add_range_next_to(tree, NULL, ...)`.
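The degenerate case can be demonstrated with a simplified reconstruction. The structures and the recording variable here are hypothetical, and `add_range_next_to()` is reduced to its `prev` argument; the point is only that the loop body never executes when `root == NULL`, so `prev` is always NULL at the call site.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the real tree types. */
typedef struct node { struct node *left, *right; } node;
typedef struct tree { node *root; } tree;

/* Records the prev argument so the behavior can be checked;
   the real function would link the new range next to prev. */
static node *g_last_prev;

static void add_range_next_to(tree *t, node *prev) {
    (void)t;
    g_last_prev = prev;
}

/* Sketch of the removed wrapper: with root == NULL the while-loop
   never runs, so this always collapses to
   add_range_next_to(tree, NULL, ...). */
static void add_range(tree *t) {
    node *cur = t->root, *prev = NULL;
    while (cur) {
        prev = cur;
        cur = cur->left;
    }
    add_range_next_to(t, prev);
}
```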
Diffstat (limited to 'src/wrapped/generated/wrappedatomictypes.h')
0 files changed, 0 insertions, 0 deletions