author	Christian Krinitsin <mail@krinitsin.com>	2025-07-17 09:10:43 +0200
committer	Christian Krinitsin <mail@krinitsin.com>	2025-07-17 09:10:43 +0200
commit	f2ec263023649e596c5076df32c2d328bc9393d2 (patch)
tree	5dd86caab46e552bd2e62bf9c4fb1a7504a44db4 /results/scraper/fex/585
parent	63d2e9d409831aa8582787234cae4741847504b7 (diff)
download	qemu-analysis-main.tar.gz, qemu-analysis-main.zip

add downloaded fex bug-reports (HEAD, main)
Diffstat (limited to 'results/scraper/fex/585')
-rw-r--r--	results/scraper/fex/585	14
1 file changed, 14 insertions(+), 0 deletions(-)
diff --git a/results/scraper/fex/585 b/results/scraper/fex/585
new file mode 100644
index 000000000..84dfcd464
--- /dev/null
+++ b/results/scraper/fex/585
@@ -0,0 +1,14 @@
+Optimize Long divide and long remainder on AArch64 JIT
+We currently call out to a helper routine for LDIV, LUDIV, LREM, and LUREM IR ops.
+This is because AArch64, unlike x86, has no native support for dividing a 128-bit value by a 64-bit divisor.
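
Concretely, x86-64's DIV divides the 128-bit value in RDX:RAX by a 64-bit operand, while AArch64's UDIV/SDIV are strictly 64-by-64. A minimal C++ sketch of the unsigned semantics the helper has to provide (using the compiler's `__int128` extension; the function name is illustrative, not FEX's):

```cpp
#include <cstdint>

// Unsigned 128/64 divide with the semantics of x86-64 DIV r/m64:
// the dividend is hi:lo (RDX:RAX), the quotient lands in lo (RAX) and
// the remainder in hi (RDX). Real DIV raises #DE when the quotient
// overflows 64 bits; this sketch assumes it fits.
void Div128by64(uint64_t& hi, uint64_t& lo, uint64_t divisor) {
  unsigned __int128 dividend =
      (static_cast<unsigned __int128>(hi) << 64) | lo;
  lo = static_cast<uint64_t>(dividend / divisor);  // quotient
  hi = static_cast<uint64_t>(dividend % divisor);  // remainder
}
```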
+
+### Step 1
+A good first step would probably be to check whether the top 64 bits of the dividend are zero for unsigned divide, branch to the helper only if they aren't, and do a fast 64-bit divide in that case.
+Then for signed, check that the top 64 bits are all copies of bit 63 of the lower source (sbfx + cmp) and take the fast path there.
+The same applies to the remainder ops.
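
The checks above can be sketched in plain C++ (illustrative names, not FEX's JIT code; in the emitted AArch64 these would be roughly a cbnz for the unsigned case and an sbfx + cmp + b.ne for the signed one):

```cpp
#include <cstdint>

// Unsigned fast path: if the top 64 bits of the dividend are zero, a
// plain 64-by-64 UDIV produces the same result as the 128/64 helper.
bool CanFastUnsignedDivide(uint64_t hi) {
  return hi == 0;
}

// Signed fast path: valid when the upper 64 bits are exactly the sign
// extension of bit 63 of the lower half, i.e. the dividend already fits
// in a signed 64-bit value. This is what the sbfx + cmp sequence tests.
bool CanFastSignedDivide(uint64_t hi, uint64_t lo) {
  uint64_t signExt = static_cast<uint64_t>(static_cast<int64_t>(lo) >> 63);
  return hi == signExt;
}
```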
+
+### Step 2
+After that, the next step would probably be to inline the full long divide/remainder for the cases where it is actually needed.
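
An inlined slow path would expand to something like the classic restoring shift-subtract loop. A hypothetical C++ model (assuming, as x86 DIV does in its non-faulting case, that the divisor is larger than the high half so the 64-bit quotient cannot overflow):

```cpp
#include <cstdint>

// Restoring shift-subtract 128/64 unsigned division, the kind of
// sequence an inlined long divide could emit. Precondition: divisor > hi,
// so the quotient fits in 64 bits (x86 DIV would raise #DE otherwise).
void UDiv128Slow(uint64_t hi, uint64_t lo, uint64_t divisor,
                 uint64_t& quotient, uint64_t& remainder) {
  uint64_t rem = hi;  // rem < divisor by the precondition
  uint64_t quot = 0;
  for (int i = 63; i >= 0; --i) {
    // Shift the next dividend bit into rem; 'carry' is the bit shifted
    // out the top, making rem effectively 65 bits wide for the compare.
    bool carry = rem >> 63;
    rem = (rem << 1) | ((lo >> i) & 1);
    if (carry || rem >= divisor) {
      rem -= divisor;
      quot |= 1ULL << i;
    }
  }
  quotient = quot;
  remainder = rem;
}
```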
+
+### Step 3
+We should also have an optimization pass that downgrades long divide/remainder to regular divide and remainder when the top bits of the result are discarded, or when the incoming values were zext/sext before the op.
\ No newline at end of file
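
A toy model of the condition such a pass could test on the upper-half source of the dividend (hypothetical IR shapes, not FEX's actual IR):

```cpp
// Toy IR node for illustration only; a real pass would walk FEX's IR
// and also check how the op's result is consumed.
struct Value {
  enum Kind { ConstZero, ZextFrom64, SextFrom64, Other } kind;
};

// A 128/64 LUDIV/LUREM can become a plain 64-bit UDIV/UREM when the
// upper half of the dividend is known zero (a zero constant or the
// high part of a zext), since the fast path is then always taken.
bool CanDowngradeUnsigned(const Value& upper) {
  return upper.kind == Value::ConstZero || upper.kind == Value::ZextFrom64;
}

// LDIV/LREM can be downgraded when the upper half is the sign
// extension of the lower half (the high part of an sext).
bool CanDowngradeSigned(const Value& upper) {
  return upper.kind == Value::SextFrom64;
}
```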