| author | Christian Krinitsin <mail@krinitsin.com> | 2025-07-17 09:10:43 +0200 |
|---|---|---|
| committer | Christian Krinitsin <mail@krinitsin.com> | 2025-07-17 09:10:43 +0200 |
| commit | f2ec263023649e596c5076df32c2d328bc9393d2 (patch) | |
| tree | 5dd86caab46e552bd2e62bf9c4fb1a7504a44db4 /results/scraper/fex/1619 | |
| parent | 63d2e9d409831aa8582787234cae4741847504b7 (diff) | |
Diffstat (limited to 'results/scraper/fex/1619')
| -rw-r--r-- | results/scraper/fex/1619 | 10 |
1 file changed, 10 insertions, 0 deletions
diff --git a/results/scraper/fex/1619 b/results/scraper/fex/1619
new file mode 100644
index 000000000..bbaddc1f1
--- /dev/null
+++ b/results/scraper/fex/1619
@@ -0,0 +1,10 @@
+Support Apple TSO mode enable bit
+Once the kernel has some sort of interface for enabling this flag (prctl?, arch_prctl?), wire this up.
+
+This should be as simple as changing the TSO IR ops to fall through to the "non-atomic" variants and adding a flag to the code cache config.
+M1/M1X is already significantly faster than Snapdragon even without this hardware feature enabled, so it would just be an improvement on already fast hardware.
+
+TODO: Is there a way to make non-coherent load-stores happen while this TSO flag is still enabled? For example, loads from our context, TLS, and the stack accesses that we already convert to non-TSO.
+Would need someone with hardware to test. Worst case we always eat the TSO cost, which isn't terrible.
+
+TODO: Hopefully the kernel interface is per thread, so our helper threads don't pay the TSO cost, since they don't need it.
\ No newline at end of file