-rw-r--r--  .gitmodules                               |   9
-rw-r--r--  README.md                                 | 113
-rw-r--r--  flake.lock                                |  98
-rw-r--r--  flake.nix                                 |  52
m---------  miasm                                     |   0
-rw-r--r--  pyproject.toml                            |   4
m---------  qemu                                      |   0
-rw-r--r--  reproducers/issue-1373.c                  |   6
-rw-r--r--  reproducers/issue-1376.c                  |   5
-rw-r--r--  reproducers/issue-1377.c                  |  30
-rw-r--r--  reproducers/issue-1832422.c               |   3
-rw-r--r--  reproducers/issue-1861404.c               |  29
-rw-r--r--  reproducers/issue-2495.c                  |  32
m---------  rr                                        |   0
-rw-r--r--  src/focaccia/arch/aarch64.py              |   4
-rw-r--r--  src/focaccia/arch/arch.py                 |  19
-rw-r--r--  src/focaccia/arch/x86.py                  |  13
-rw-r--r--  src/focaccia/compare.py                   |  34
-rw-r--r--  src/focaccia/deterministic.py             | 226
-rw-r--r--  src/focaccia/lldb_target.py               | 180
-rw-r--r--  src/focaccia/miasm_util.py                |   6
-rw-r--r--  src/focaccia/parser.py                    |   1
-rw-r--r--  src/focaccia/symbolic.py                  | 486
-rw-r--r--  src/focaccia/tools/_qemu_tool.py          | 123
-rwxr-xr-x  src/focaccia/tools/capture_transforms.py  |  70
-rwxr-xr-x  src/focaccia/tools/validate_qemu.py       | 128
-rwxr-xr-x  src/focaccia/tools/validation_server.py   |  66
-rw-r--r--  src/focaccia/trace.py                     |  27
-rw-r--r--  src/focaccia/utils.py                     |  33
-rw-r--r--  uv.lock                                   | 115
30 files changed, 1509 insertions(+), 403 deletions(-)
diff --git a/.gitmodules b/.gitmodules
index d74c6a5..32af123 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -2,3 +2,12 @@
 	path = miasm
 	url = git@github.com:taugoust/miasm.git
 
+[submodule "qemu"]
+	path = qemu
+	url = git@github.com:TUM-DSE/focaccia-qemu.git
+	branch = ta/focaccia
+
+[submodule "rr"]
+	path = rr
+	url = git@github.com:rr-debugger/rr.git
+
diff --git a/README.md b/README.md
index 68033d9..443ec1a 100644
--- a/README.md
+++ b/README.md
@@ -1,26 +1,15 @@
 # Focaccia
 
-This repository contains initial code for comprehensive testing of binary
-translators.
+This repository contains the source code for Focaccia, a comprehensive validator for CPU emulators
+and binary translators.
 
 ## Requirements
 
-For Python dependencies, see the `requirements.txt`. We also require at least LLDB version 17 for `fs_base`/`gs_base`
-register support.
+Python dependencies are managed via `pyproject.toml` and uv. We provide first-class support for Nix
+via our flake, which integrates with the uv-managed Python environment through uv2nix.
 
-I had to compile LLDB myself; these are the steps I had to take (you also need swig version >= 4):
-
-```
-git clone https://github.com/llvm/llvm-project <llvm-path>
-cd <llvm-path>
-cmake -S llvm -B build -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_PROJECTS="clang;lldb" -DLLDB_ENABLE_PYTHON=TRUE -DLLDB_ENABLE_SWIG=TRUE
-cmake --build build/ --parallel $(nproc)
-
-# Add the built LLDB python bindings to your PYTHONPATH:
-PYTHONPATH="$PYTHONPATH:$(./build/bin/lldb -P)"
-```
-
-It will take a while to compile.
+We do not officially support any other build system, but Focaccia is known to work in various other
+environments as long as its Python dependencies are provided.
 
 ## How To Use
 
@@ -34,21 +23,60 @@ A number of additional tools are included to simplify use when validating QEMU:
 ```bash
 capture-transforms -o oracle.trace bug.out
 qemu-x86_64 -g 12345 bug.out &
-validate-qemu --symb-trace oracle.trace localhost 12345
+validate-qemu --symb-trace oracle.trace --remote localhost:12345
 ```
 
-Alternatively if you have access to the focaccia QEMU plugin:
+The above workflow works for reproducing most QEMU bugs but cannot handle the following two cases:
+
+1. Optimization bugs
+
+2. Bugs in non-deterministic programs
+
+We provide alternative approaches for both cases below; note, however, that non-deterministic
+programs can currently only be traced, not validated against QEMU.
+
+### QEMU optimization bugs
+
+When a bug is suspected to be an optimization bug, you can use the Focaccia QEMU plugin. The plugin,
+along with its matching QEMU version, is exposed as the `qemu-plugin` package in the Nix flake.
+
+It is used as follows:
 
 ```bash
-validation_server.py --symb-trace oracle.trace --use-socket=/tmp/focaccia.sock --guest_arch=<arch>
+validate-qemu --symb-trace oracle.trace --use-socket=/tmp/focaccia.sock --guest_arch=<arch>
 ```
-After you see `Listening for QEMU Plugin connection at /tmp/focaccia.sock...` you can start QEMU like this:
+
+Once the server prints `Listening for QEMU Plugin connection at /tmp/focaccia.sock...`, QEMU can be
+started in debug mode:
+
 ```bash
-qemu-<arch> [-one-insn-per-tb] --plugin build/contrib/plugins/libfocaccia.so <bug.out>
+qemu-<arch> [-one-insn-per-tb] --plugin result/lib/plugins/libfocaccia.so bug.out
 ```
 
+Note: the above workflow assumes that you used `nix build .#qemu-plugin` to build the plugin under
+`result`.
+
 Using this workflow, Focaccia can determine whether a mistranslation occurred in that particular QEMU run.
 
+Focaccia also supports tracing non-deterministic programs using the rr debugger, with a similar
+workflow:
+
+```bash
+rr record -o bug.rr.out bug.out
+rr replay -s 12345 bug.rr.out
+capture-transforms --remote localhost:12345 --deterministic-log bug.rr.out -o oracle.trace bug.out
+```
+
+Note: the `rr replay` call prints the correct binary name to use when invoking `capture-transforms`;
+it also prints program output. As such, it should be run separately as a foreground process.
+
+Note: `rr record` may fail on Zen and Zen+ AMD CPUs. Recording can usually still proceed when the
+`-F` flag is passed, but keep in mind that replaying may occasionally fail on such CPUs.
+
+Note: we currently do not support validating such programs on QEMU.
+
 ### Box64
 
 For validating Box64, we create the oracle and test traces and compare them
@@ -72,31 +100,34 @@ The `tools/` directory contains additional utility scripts to work with focaccia
 
 The following files belong to a rough framework for the snapshot comparison engine:
 
- - `focaccia/snapshot.py`: Structures used to work with snapshots. The `ProgramState` class is our primary
-representation of program snapshots.
+ - `focaccia/snapshot.py`: Structures used to work with snapshots. The `ProgramState` class is our
+                           primary representation of program snapshots.
 
  - `focaccia/compare.py`: The central algorithms that work on snapshots.
 
- - `focaccia/arch/`: Abstractions over different processor architectures. Currently we have x86 and aarch64.
+ - `focaccia/arch/`: Abstractions over different processor architectures. Currently we have x86 and
+                     aarch64.
 
 ### Concolic execution
 
 The following files belong to a prototype of a data-dependency generator based on symbolic
 execution:
 
- - `focaccia/symbolic.py`: Algorithms and data structures to compute and manipulate symbolic program transformations.
-This handles the symbolic part of "concolic" execution.
+ - `focaccia/symbolic.py`: Algorithms and data structures to compute and manipulate symbolic program
+                           transformations. This handles the symbolic part of "concolic" execution.
 
- - `focaccia/lldb_target.py`: Tools for executing a program concretely and tracking its execution using
-[LLDB](https://lldb.llvm.org/). This handles the concrete part of "concolic" execution.
+ - `focaccia/lldb_target.py`: Tools for executing a program concretely and tracking its execution
+                              using [LLDB](https://lldb.llvm.org/). This handles the concrete part
+                              of "concolic" execution.
 
- - `focaccia/miasm_util.py`: Tools to evaluate Miasm's symbolic expressions based on a concrete state. Ties the symbolic
-and concrete parts together into "concolic" execution.
+ - `focaccia/miasm_util.py`: Tools to evaluate Miasm's symbolic expressions based on a concrete
+                             state. Ties the symbolic and concrete parts together into "concolic"
+                             execution.
 
 ### Helpers
 
- - `focaccia/parser.py`: Utilities for parsing logs from Arancini and QEMU, as well as serializing/deserializing to/from
-our own log format.
+ - `focaccia/parser.py`: Utilities for parsing logs from Arancini and QEMU, as well as
+                         serializing/deserializing to/from our own log format.
 
  - `focaccia/match.py`: Algorithms for trace matching.
 
@@ -104,14 +135,16 @@ our own log format.
 
 To add support for an architecture <arch>, do the following:
 
- - Add a file `focaccia/arch/<arch>.py`. This module declares the architecture's description, such as register names and
-an architecture class. The convention is to declare state flags (e.g. flags in RFLAGS for x86) as separate registers.
+ - Add a file `focaccia/arch/<arch>.py`. This module declares the architecture's description, such
+   as register names and an architecture class. The convention is to declare state flags (e.g. flags
+   in RFLAGS for x86) as separate registers.
 
  - Add the class to the `supported_architectures` dict in `focaccia/arch/__init__.py`.
 
- - Depending on Miasm's support for <arch>, add register name aliases to the `MiasmSymbolResolver.miasm_flag_aliases`
-dict in `focaccia/miasm_util.py`.
+ - Depending on Miasm's support for <arch>, add register name aliases to the
+   `MiasmSymbolResolver.miasm_flag_aliases` dict in `focaccia/miasm_util.py`.
+
+ - Depending on the existence of a flags register in <arch>, implement conversion from the flags
+   register's value to values of single logical flags (e.g. implement the operation `RFLAGS['OF']`)
+   in the respective concrete targets (LLDB, GDB, ...).
 
- - Depending on the existence of a flags register in <arch>, implement conversion from the flags register's value to
-values of single logical flags (e.g. implement the operation `RFLAGS['OF']`) in the respective concrete targets (LLDB,
-GDB, ...).
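The porting steps above can be sketched with a hypothetical minimal architecture module. All class, register, and dict names below are illustrative stand-ins; Focaccia's real base class in `focaccia/arch/arch.py` has more hooks than shown here:

```python
# Hypothetical sketch of focaccia/arch/<arch>.py for a made-up architecture.
class Arch:
    archname = 'base'

    def is_instr_syscall(self, instr: str) -> bool:
        return False

    def is_instr_uarch_dep(self, instr: str) -> bool:
        return False


class ArchExample(Arch):
    archname = 'example'
    # Convention: state flags are declared as separate registers.
    regnames = ['PC', 'SP', 'FLAG_ZERO', 'FLAG_CARRY']

    def is_instr_syscall(self, instr: str) -> bool:
        return instr.upper().startswith('SYSCALL')


# Registered analogously to the supported_architectures dict in
# focaccia/arch/__init__.py.
supported_architectures = {'example': ArchExample()}
```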
diff --git a/flake.lock b/flake.lock
index 0343a0a..a542c41 100644
--- a/flake.lock
+++ b/flake.lock
@@ -1,5 +1,37 @@
 {
   "nodes": {
+    "berkeley-softfloat-3": {
+      "flake": false,
+      "locked": {
+        "lastModified": 1741391053,
+        "narHash": "sha256-TO1DhvUMd2iP5gvY9Hqy9Oas0Da7lD0oRVPBlfAzc90=",
+        "owner": "qemu-project",
+        "repo": "berkeley-softfloat-3",
+        "rev": "a0c6494cdc11865811dec815d5c0049fba9d82a8",
+        "type": "gitlab"
+      },
+      "original": {
+        "owner": "qemu-project",
+        "repo": "berkeley-softfloat-3",
+        "type": "gitlab"
+      }
+    },
+    "berkeley-testfloat-3": {
+      "flake": false,
+      "locked": {
+        "lastModified": 1689946593,
+        "narHash": "sha256-inQAeYlmuiRtZm37xK9ypBltCJ+ycyvIeIYZK8a+RYU=",
+        "owner": "qemu-project",
+        "repo": "berkeley-testfloat-3",
+        "rev": "e7af9751d9f9fd3b47911f51a5cfd08af256a9ab",
+        "type": "gitlab"
+      },
+      "original": {
+        "owner": "qemu-project",
+        "repo": "berkeley-testfloat-3",
+        "type": "gitlab"
+      }
+    },
     "flake-utils": {
       "inputs": {
         "systems": "systems"
@@ -18,6 +50,24 @@
         "type": "github"
       }
     },
+    "flake-utils_2": {
+      "inputs": {
+        "systems": "systems_2"
+      },
+      "locked": {
+        "lastModified": 1731533236,
+        "narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
+        "owner": "numtide",
+        "repo": "flake-utils",
+        "rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
+        "type": "github"
+      },
+      "original": {
+        "owner": "numtide",
+        "repo": "flake-utils",
+        "type": "github"
+      }
+    },
     "nixpkgs": {
       "locked": {
         "lastModified": 1749285348,
@@ -34,19 +84,19 @@
         "type": "github"
       }
     },
-    "nixpkgs-qemu-60": {
+    "nixpkgs_2": {
       "locked": {
-        "lastModified": 1632168163,
-        "narHash": "sha256-iS3pBopSl0a2jAXuK/o0L+S86B9v9rnErsJHkNSdZRs=",
-        "owner": "nixos",
+        "lastModified": 1759831965,
+        "narHash": "sha256-vgPm2xjOmKdZ0xKA6yLXPJpjOtQPHfaZDRtH+47XEBo=",
+        "owner": "NixOS",
         "repo": "nixpkgs",
-        "rev": "f8f124009497b3f9908f395d2533a990feee1de8",
+        "rev": "c9b6fb798541223bbb396d287d16f43520250518",
         "type": "github"
       },
       "original": {
-        "owner": "nixos",
+        "owner": "NixOS",
+        "ref": "nixos-unstable",
         "repo": "nixpkgs",
-        "rev": "f8f124009497b3f9908f395d2533a990feee1de8",
         "type": "github"
       }
     },
@@ -96,13 +146,30 @@
         "type": "github"
       }
     },
+    "qemu-submodule": {
+      "inputs": {
+        "berkeley-softfloat-3": "berkeley-softfloat-3",
+        "berkeley-testfloat-3": "berkeley-testfloat-3",
+        "flake-utils": "flake-utils_2",
+        "nixpkgs": "nixpkgs_2"
+      },
+      "locked": {
+        "path": "qemu/",
+        "type": "path"
+      },
+      "original": {
+        "path": "qemu/",
+        "type": "path"
+      },
+      "parent": []
+    },
     "root": {
       "inputs": {
         "flake-utils": "flake-utils",
         "nixpkgs": "nixpkgs",
-        "nixpkgs-qemu-60": "nixpkgs-qemu-60",
         "pyproject-build-systems": "pyproject-build-systems",
         "pyproject-nix": "pyproject-nix",
+        "qemu-submodule": "qemu-submodule",
         "uv2nix": "uv2nix"
       }
     },
@@ -121,6 +188,21 @@
         "type": "github"
       }
     },
+    "systems_2": {
+      "locked": {
+        "lastModified": 1681028828,
+        "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
+        "owner": "nix-systems",
+        "repo": "default",
+        "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
+        "type": "github"
+      },
+      "original": {
+        "owner": "nix-systems",
+        "repo": "default",
+        "type": "github"
+      }
+    },
     "uv2nix": {
       "inputs": {
         "nixpkgs": [
diff --git a/flake.nix b/flake.nix
index fccabba..a1420ba 100644
--- a/flake.nix
+++ b/flake.nix
@@ -6,8 +6,6 @@
 
 		nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
 
-		nixpkgs-qemu-60.url = "github:nixos/nixpkgs/f8f124009497b3f9908f395d2533a990feee1de8";
-
 		flake-utils.url = "github:numtide/flake-utils";
 
 		pyproject-nix = {
@@ -27,21 +25,24 @@
 			inputs.nixpkgs.follows = "nixpkgs";
 			inputs.pyproject-nix.follows = "pyproject-nix";
 		};
+
+		qemu-submodule = {
+			url = "path:qemu/";
+			flake = true;
+		};
 	};
 
-	outputs = inputs@{
-		self,
+	outputs = {
 		uv2nix,
 		nixpkgs,
 		flake-utils,
 		pyproject-nix,
 		pyproject-build-systems,
+		qemu-submodule,
 		...
 	}:
 	flake-utils.lib.eachSystem [ "x86_64-linux" "aarch64-linux" ] (system:
 	let
-		qemu-60 = inputs.nixpkgs-qemu-60.qemu;
-
 		# Refine nixpkgs used in flake to system arch
 		pkgs = import nixpkgs {
 			inherit system;
@@ -248,6 +249,11 @@
 		'';
 
 		gdbInternal = pkgs.gdb.override { python3 = python; };
+		rr = pkgs.rr.overrideAttrs (old: {
+			pname = "focaccia-rr";
+			version = "git";
+			src = ./rr;
+		});
 	in rec {
 		# Default package just builds Focaccia
 		packages = rec {
@@ -271,6 +277,8 @@
 				];
 			});
 
+			qemu-plugin = qemu-submodule.packages.${system}.default;
+
 			default = focaccia;
 		};
 
@@ -302,7 +310,6 @@
 
 			validate-qemu = {
 				type = "app";
-				# program = "${packages.focaccia}/bin/validate-qemu";
 				program = let
 					wrapper = pkgs.writeShellScriptBin "validate-qemu" ''
 						exec ${packages.focaccia}/bin/validate-qemu --gdb "${gdbInternal}/bin/gdb" "$@"
@@ -318,7 +325,8 @@
 				type = "app";
 				program = "${pkgs.writeShellScriptBin "uv-sync" ''
 					set -euo pipefail
-					exec ${pkgs.uv}/bin/uv sync
+					${pkgs.uv}/bin/uv sync
+					sed -i '/riscv/d' uv.lock
 				''}/bin/uv-sync";
 				meta = {
 					description = "Sync uv python packages";
@@ -351,6 +359,19 @@
 					packages.dev
 					musl-pkgs.gcc
 					musl-pkgs.pkg-config
+				];
+
+				hardeningDisable = [ "pie" ];
+
+				env = uvEnv;
+				shellHook = uvShellHook;
+			};
+
+			musl-box64 = pkgs.mkShell {
+				packages = [
+					packages.dev
+					musl-pkgs.gcc
+					musl-pkgs.pkg-config
+					box64-patched
 				];
 
@@ -364,6 +385,21 @@
                   export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${zydis-shared-object}/lib
                 '';
 			};
+
+			musl-extra = pkgs.mkShell {
+				packages = [
+					packages.dev
+					rr
+					musl-pkgs.gcc
+					pkgs.capnproto
+					musl-pkgs.pkg-config
+				];
+
+				hardeningDisable = [ "pie" ];
+
+				env = uvEnv;
+				shellHook = uvShellHook;
+			};
 		};
 
 		checks = {
diff --git a/miasm b/miasm
-Subproject 2aee2e1313847cfbf88acd07143456e51989860
+Subproject 083c88f096d1b654069eff874356df7b2ecd460
diff --git a/pyproject.toml b/pyproject.toml
index 9e52dfb..5cc1e29 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -14,6 +14,8 @@ authors = [
 dependencies = [
 	"cffi",
 	"miasm",
+	"brotli",
+	"pycapnp",
 	"setuptools",
 	"cpuid @ git+https://github.com/taugoust/cpuid.py.git@master",
 ]
@@ -29,8 +31,8 @@ dev = [
 [project.scripts]
 focaccia = "focaccia.cli:main"
 convert = "focaccia.tools.convert:main"
-capture-transforms = "focaccia.tools.capture_transforms:main"
 validate-qemu = "focaccia.tools.validate_qemu:main"
+capture-transforms = "focaccia.tools.capture_transforms:main"
 
 [build-system]
 requires = ["hatchling"]
diff --git a/qemu b/qemu
new file mode 160000
+Subproject 3b2a0fb80eb9b6b5f216fa69069e66210466f5e
diff --git a/reproducers/issue-1373.c b/reproducers/issue-1373.c
new file mode 100644
index 0000000..b9f100e
--- /dev/null
+++ b/reproducers/issue-1373.c
@@ -0,0 +1,6 @@
+void main() {
+    asm("push 512; popfq;");
+    asm("mov rax, 0xffffffff84fdbf24");
+    asm("mov rbx, 0xb197d26043bec15d");
+    asm("adox eax, ebx");
+}
diff --git a/reproducers/issue-1376.c b/reproducers/issue-1376.c
new file mode 100644
index 0000000..8611c95
--- /dev/null
+++ b/reproducers/issue-1376.c
@@ -0,0 +1,5 @@
+void main() {
+    asm("mov rax, 0xa02e698e741f5a6a");
+    asm("mov rbx, 0x20959ddd7a0aef");
+    asm("lsl ax, bx");
+}
diff --git a/reproducers/issue-1377.c b/reproducers/issue-1377.c
new file mode 100644
index 0000000..b6b1309
--- /dev/null
+++ b/reproducers/issue-1377.c
@@ -0,0 +1,30 @@
+#include<stdio.h>
+#include<sys/mman.h>
+__attribute__((naked,noinline)) void* f(void* dst, void* p) {
+  __asm__(
+    "\n  pushq   %rbp"
+    "\n  movq    %rsp, %rbp"
+    "\n  movq    %rdi, %rax"
+    "\n  movq    $0x0, (%rdi)"
+    "\n  movl    $0x140a, (%rdi)         # imm = 0x140A"
+    "\n  movb    $0x4, 0x5(%rdi)"
+    "\n  cvtps2pd        (%rsi), %xmm0"
+    "\n  movups  %xmm0, 0x8(%rdi)"
+    "\n  cvtps2pd        0x8(%rsi), %xmm0"
+    "\n  movups  %xmm0, 0x18(%rdi)"
+    "\n  popq    %rbp"
+    "\n  retq"
+  );
+}
+int main() {
+  char dst[1000];
+  int page = 4096;
+  char* buf = mmap(NULL, page*2, PROT_READ, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
+  // mprotect(buf+page, page, 0);
+  
+  float* src = (float*)(buf+0x40);
+  printf("src: %p\n", src);
+  
+  void* r = f(dst, src);
+  printf("res: %p\n", r);
+}
diff --git a/reproducers/issue-1832422.c b/reproducers/issue-1832422.c
new file mode 100644
index 0000000..108b661
--- /dev/null
+++ b/reproducers/issue-1832422.c
@@ -0,0 +1,3 @@
+void main() {
+    asm("cmppd xmm0,xmm0,0xd1");
+}
diff --git a/reproducers/issue-1861404.c b/reproducers/issue-1861404.c
new file mode 100644
index 0000000..c83dbc2
--- /dev/null
+++ b/reproducers/issue-1861404.c
@@ -0,0 +1,29 @@
+#include <stdio.h>
+#include <string.h>
+
+#define YMM_SIZE (32) // bytes
+
+void hex_dump(unsigned char *data, unsigned int len) {
+    for(unsigned int i=0; i<len; i++) {
+        printf("%02X ", data[i]);
+    }
+    printf("\n");
+}
+
+void set_ymm0(unsigned char m[YMM_SIZE]) {
+}
+
+void get_ymm0(unsigned char m[YMM_SIZE]) {
+    __asm__ __volatile__ ("vmovdqu %%ymm0, (%0);"::"r"(m):);
+}
+
+int main() {
+    unsigned char src[YMM_SIZE] = {0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07,0x08,0x09,0x0a,0x0b,0x0c,0x0d,0x0e,0x0f,0x10,0x11,0x12,0x13,0x14,0x15,0x16,0x17,0x18,0x19,0x1a,0x1b,0x1c,0x1d,0x1e,0x1f};
+    unsigned char dst[YMM_SIZE] = {0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00};
+
+    __asm__ __volatile__ ("vmovdqu (%0), %%ymm0;"::"r"(src):);
+
+    hex_dump(dst, YMM_SIZE);
+
+    return 0;
+}
diff --git a/reproducers/issue-2495.c b/reproducers/issue-2495.c
new file mode 100644
index 0000000..3648c1a
--- /dev/null
+++ b/reproducers/issue-2495.c
@@ -0,0 +1,32 @@
+#include <stdint.h>
+#include <stdio.h>
+#include <string.h>
+
+uint8_t i_R8[8] = { 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0 };
+uint8_t i_MM0[8] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
+uint8_t o_R8[8];
+
+void __attribute__ ((noinline)) show_state() {
+    printf("R8: ");
+    for (int i = 0; i < 8; i++) {
+        printf("%02x ", o_R8[i]);
+    }
+    printf("\n");
+}
+
+void __attribute__ ((noinline)) run() {
+    __asm__ (
+        ".intel_syntax noprefix\n"
+        "mov r8, qword ptr [rip + i_R8]\n"
+        "movq mm0, qword ptr [rip + i_MM0]\n"
+        ".byte 0x4f, 0x0f, 0x7e, 0xc0\n"
+        "mov qword ptr [rip + o_R8], r8\n"
+        ".att_syntax\n"
+    );
+}
+
+int main(int argc, char **argv) {
+    run();
+    show_state();
+    return 0;
+}
diff --git a/rr b/rr
new file mode 160000
+Subproject 7fe1e367c2b4e0df7647e020c20d66827badfad
diff --git a/src/focaccia/arch/aarch64.py b/src/focaccia/arch/aarch64.py
index 0e7d98c..76f7bd4 100644
--- a/src/focaccia/arch/aarch64.py
+++ b/src/focaccia/arch/aarch64.py
@@ -179,3 +179,7 @@ class ArchAArch64(Arch):
             from . import aarch64_dczid as dczid
             return dczid.read
         return None
+
+    def is_instr_syscall(self, instr: str) -> bool:
+        return instr.upper().startswith('SVC')
+
diff --git a/src/focaccia/arch/arch.py b/src/focaccia/arch/arch.py
index 234b0d9..c220a3b 100644
--- a/src/focaccia/arch/arch.py
+++ b/src/focaccia/arch/arch.py
@@ -80,7 +80,7 @@ class Arch():
         return self._accessors.get(_regname, None)
 
     def get_reg_reader(self, regname: str) -> Callable[[], int] | None:
-        """Read a register directly from Focaccia
+        """Read a register directly from Focaccia.
 
         :param name: The register to read.
         :return: The register value.
@@ -90,8 +90,25 @@ class Arch():
         """
         return None
 
+    def is_instr_uarch_dep(self, instr: str) -> bool:
+        """Return true if an instruction is microarchitecture-dependent.
+
+        :param instr: The instruction.
+        :return: True if the instruction is microarchitecture-dependent.
+
+        Microarchitecture-dependent instructions may produce different output on
+        different microarchitectures without that being an error. Such instructions
+        usually report feature support; an output difference is an error only if
+        the program relies on the particular microarchitectural features.
+        """
+        return False
+
+    def is_instr_syscall(self, instr: str) -> bool:
+        """Return true if an instruction is a system call."""
+        return False
+
     def __eq__(self, other):
         return self.archname == other.archname
 
     def __repr__(self) -> str:
         return self.archname
+
diff --git a/src/focaccia/arch/x86.py b/src/focaccia/arch/x86.py
index 809055e..a5d29f5 100644
--- a/src/focaccia/arch/x86.py
+++ b/src/focaccia/arch/x86.py
@@ -202,3 +202,16 @@ class ArchX86(Arch):
 
         # Apply custom register alias rules
         return regname_aliases.get(name.upper(), None)
+
+    def is_instr_uarch_dep(self, instr: str) -> bool:
+        if "XGETBV" in instr.upper():
+            return True
+        return False
+
+    def is_instr_syscall(self, instr: str) -> bool:
+        if instr.upper().startswith("SYSCALL"):
+            return True
+        if instr.upper().startswith("INT"):
+            return True
+        return False
+
diff --git a/src/focaccia/compare.py b/src/focaccia/compare.py
index 13f965c..4fea451 100644
--- a/src/focaccia/compare.py
+++ b/src/focaccia/compare.py
@@ -131,7 +131,8 @@ def compare_simple(test_states: list[ProgramState],
 
 def _find_register_errors(txl_from: ProgramState,
                           txl_to: ProgramState,
-                          transform_truth: SymbolicTransform) \
+                          transform_truth: SymbolicTransform,
+                          is_uarch_dep: bool = False) \
         -> list[Error]:
     """Find errors in register values.
 
@@ -155,6 +156,14 @@ def _find_register_errors(txl_from: ProgramState,
         )]
     except RegisterAccessError as err:
         s, e = transform_truth.range
+        if is_uarch_dep:
+            return [Error(ErrorTypes.INCOMPLETE,
+                          f'Register transformations {hex(s)} -> {hex(e)} depend'
+                          f' on the value of microarchitecture-dependent register'
+                          f' {err.regname}, which is not set in the tested state.'
+                          f' Incorrect or missing values for such registers are'
+                          f' errors only if the program relies on them; they'
+                          f' generally denote microarchitectural feature support.')]
         return [Error(ErrorTypes.INCOMPLETE,
                       f'Register transformations {hex(s)} -> {hex(e)} depend'
                       f' on the value of register {err.regname}, which is not'
@@ -172,10 +181,21 @@ def _find_register_errors(txl_from: ProgramState,
             continue
 
         if txl_val != truth_val:
-            errors.append(Error(ErrorTypes.CONFIRMED,
-                                f'Content of register {regname} is false.'
-                                f' Expected value: {hex(truth_val)}, actual'
-                                f' value in the translation: {hex(txl_val)}.'))
+            if is_uarch_dep:
+                errors.append(
+                    Error(
+                        ErrorTypes.POSSIBLE,
+                        f"Content of microarchitecture-specific register {regname}"
+                        f" differs; this is an error only if the register is relied"
+                        f" upon. Expected value: {hex(truth_val)}, actual value in"
+                        f" the translation: {hex(txl_val)}.",
+                    )
+                )
+            else:
+                errors.append(Error(ErrorTypes.CONFIRMED,
+                                    f'Content of register {regname} is false.'
+                                    f' Expected value: {hex(truth_val)}, actual'
+                                    f' value in the translation: {hex(txl_val)}.'))
     return errors
 
 def _find_memory_errors(txl_from: ProgramState,
@@ -252,8 +272,10 @@ def _find_errors_symbolic(txl_from: ProgramState,
     to_pc = txl_to.read_register('PC')
     assert((from_pc, to_pc) == transform_truth.range)
 
+    is_uarch_dep = txl_from.arch.is_instr_uarch_dep(transform_truth.instructions[0].to_string())
+
     errors = []
-    errors.extend(_find_register_errors(txl_from, txl_to, transform_truth))
+    errors.extend(_find_register_errors(txl_from, txl_to, transform_truth, is_uarch_dep))
     errors.extend(_find_memory_errors(txl_from, txl_to, transform_truth))
 
     return errors
diff --git a/src/focaccia/deterministic.py b/src/focaccia/deterministic.py
new file mode 100644
index 0000000..5a2b411
--- /dev/null
+++ b/src/focaccia/deterministic.py
@@ -0,0 +1,226 @@
+"""Parsing of JSON files containing snapshot data."""
+
+import os
+import itertools
+from typing import Union, Iterable
+
+import brotli
+
+from .arch import Arch
+from .snapshot import ReadableProgramState
+
+try:
+    import capnp
+    rr_trace = capnp.load(file_name='./rr/src/rr_trace.capnp',
+                          imports=[os.path.dirname(p) for p in capnp.__path__])
+except Exception as e:
+    print(f'Cannot load RR trace loader: {e}')
+    exit(2)
+
+Frame = rr_trace.Frame
+TaskEvent = rr_trace.TaskEvent
+MMap = rr_trace.MMap
+SerializedObject = Union[Frame, TaskEvent, MMap]
+
+def parse_x64_registers(enc_regs: bytes, signed: bool=False) -> dict[str, int]:
+    idx = 0
+    def parse_reg():
+        nonlocal idx
+        enc_reg = enc_regs[idx:(idx := idx + 8)]
+        return int.from_bytes(enc_reg, byteorder='little', signed=signed)
+
+    regs = {}
+
+    regs['r15'] = parse_reg()
+    regs['r14'] = parse_reg()
+    regs['r13'] = parse_reg()
+    regs['r12'] = parse_reg()
+    regs['rbp'] = parse_reg()
+    regs['rbx'] = parse_reg()
+
+    # r11 is unreliable: parsed but ignored
+    parse_reg()
+
+    regs['r10'] = parse_reg()
+    regs['r9'] = parse_reg()
+    regs['r8'] = parse_reg()
+
+    regs['rax'] = parse_reg()
+
+    # rcx is unreliable: parsed but ignored
+    parse_reg()
+
+    regs['rdx'] = parse_reg()
+    regs['rsi'] = parse_reg()
+    regs['rdi'] = parse_reg()
+
+    regs['orig_rax'] = parse_reg()
+
+    regs['rip'] = parse_reg()
+    regs['cs'] = parse_reg()
+
+    # eflags is unreliable: parsed but ignored
+    parse_reg()
+
+    regs['rsp'] = parse_reg()
+    regs['ss'] = parse_reg()
+    # Order follows the kernel's x86-64 user_regs_struct: fs_base precedes
+    # gs_base, then the segment selectors ds, es, fs, gs.
+    regs['fs_base'] = parse_reg()
+    regs['gs_base'] = parse_reg()
+    regs['ds'] = parse_reg()
+    regs['es'] = parse_reg()
+    regs['fs'] = parse_reg()
+    regs['gs'] = parse_reg()
+
+    return regs
+
+def parse_aarch64_registers(enc_regs: bytes, order: str='little', signed: bool=False) -> dict[str, int]:
+    idx = 0
+    def parse_reg():
+        nonlocal idx
+        enc_reg = enc_regs[idx:(idx := idx + 8)]
+        return int.from_bytes(enc_reg, byteorder=order, signed=signed)
+
+    regnames = []
+    for i in range(31):  # x0..x30; aarch64 user_pt_regs has 31 GP registers
+        regnames.append(f'x{i}')
+    regnames.append('sp')
+    regnames.append('pc')
+    regnames.append('cpsr')
+
+    regs = {}
+    for name in regnames:
+        regs[name] = parse_reg()
+
+    return regs
+
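Both decoders above read consecutive fixed-width little-endian words out of a raw register block. The technique in isolation (register names here are placeholders, not rr's layout):

```python
def parse_words(raw: bytes, names: list[str], width: int = 8) -> dict[str, int]:
    # Decode len(names) consecutive little-endian words of `width` bytes each.
    assert len(raw) >= width * len(names), 'register block too short'
    return {name: int.from_bytes(raw[i * width:(i + 1) * width], 'little')
            for i, name in enumerate(names)}

raw = (0xdeadbeef).to_bytes(8, 'little') + (0x42).to_bytes(8, 'little')
regs = parse_words(raw, ['pc', 'sp'])
```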
+class Event:
+    def __init__(self,
+                 pc: int,
+                 tid: int,
+                 arch: Arch,
+                 event_type: str,
+                 registers: dict[str, int],
+                 memory_writes: dict[int, int]):
+        self.pc = pc
+        self.tid = tid
+        self.arch = arch
+        self.event_type = event_type
+
+        self.registers = registers
+        self.mem_writes = memory_writes
+
+    def match(self, pc: int, target: ReadableProgramState) -> bool:
+        # TODO: match the rest of the state to be sure
+        if self.pc == pc:
+            for reg, value in self.registers.items():
+                if value == self.pc:
+                    continue
+                if target.read_register(reg) != value:
+                    print(f'Failed match for {reg}: {hex(value)} != {hex(target.read_register(reg))}')
+                    return False
+            return True
+        return False
+
+    def __repr__(self) -> str:
+        reg_repr = ''
+        for reg, value in self.registers.items():
+            reg_repr += f'{reg} = {hex(value)}\n'
+
+        mem_write_repr = ''
+        for addr, size in self.mem_writes.items():
+            mem_write_repr += f'{hex(addr)}:{hex(addr+size)}\n'
+
+        repr_str = f'Thread {hex(self.tid)} executed event {self.event_type} at {hex(self.pc)}\n'
+        repr_str += f'Register set:\n{reg_repr}'
+        
+        if len(self.mem_writes):
+            repr_str += f'\nMemory writes:\n{mem_write_repr}'
+
+        return repr_str
+
+class DeterministicLog:
+    def __init__(self, log_dir: str):
+        self.base_directory = log_dir
+
+    def events_file(self) -> str:
+        return os.path.join(self.base_directory, 'events')
+
+    def tasks_file(self) -> str:
+        return os.path.join(self.base_directory, 'tasks')
+
+    def mmaps_file(self) -> str:
+        return os.path.join(self.base_directory, 'mmaps')
+
+    def _read(self, file, obj: SerializedObject) -> list[SerializedObject]:
+        data = bytearray()
+        objects = []
+        with open(file, 'rb') as f:
+            while True:
+                header = f.read(8)
+                if not header:
+                    break
+                if len(header) != 8:
+                    raise Exception('Malformed deterministic log: truncated chunk header')
+                compressed_len = int.from_bytes(header[:4], byteorder='little')
+                uncompressed_len = int.from_bytes(header[4:], byteorder='little')
+
+                chunk = f.read(compressed_len)
+                if len(chunk) != compressed_len:
+                    raise Exception('Malformed deterministic log: truncated chunk')
+
+                chunk = brotli.decompress(chunk)
+                if len(chunk) != uncompressed_len:
+                    raise Exception(f'Malformed deterministic log: uncompressed chunk is not'
+                                    f' equal to reported length {hex(uncompressed_len)}')
+                data.extend(chunk)
+
+            for deser in obj.read_multiple_bytes_packed(data):
+                objects.append(deser)
+            return objects
+
+    def raw_events(self) -> list[SerializedObject]:
+        return self._read(self.events_file(), Frame)
+
+    def raw_tasks(self) -> list[SerializedObject]:
+        return self._read(self.tasks_file(), TaskEvent)
+
+    def raw_mmaps(self) -> list[SerializedObject]:
+        return self._read(self.mmaps_file(), MMap)
+
+    def events(self) -> list[Event]:
+        def parse_registers(event: Frame) -> tuple[int, dict[str, int]]:
+            arch = event.arch
+            if arch == rr_trace.Arch.x8664:
+                regs = parse_x64_registers(event.registers.raw)
+                return regs['rip'], regs
+            if arch == rr_trace.Arch.aarch64:
+                regs = parse_aarch64_registers(event.registers.raw)
+                return regs['pc'], regs
+            raise NotImplementedError(f'Unable to parse registers for architecture {arch}')
+
+        def parse_memory_writes(event: Frame) -> dict[int, int]:
+            writes = {}
+            for raw_write in event.memWrites:
+                writes[int(raw_write.addr)] = int(raw_write.size)
+            return writes
+
+        events = []
+        raw_events = self.raw_events()
+        for raw_event in raw_events:
+            pc, registers = parse_registers(raw_event)
+            mem_writes = parse_memory_writes(raw_event)
+
+            event_type = raw_event.event.which()
+            if event_type == 'syscall' and raw_event.arch == rr_trace.Arch.x8664:
+                # On entry: substitute orig_rax for RAX
+                if raw_event.event.syscall.state == rr_trace.SyscallState.entering:
+                    registers['rax'] = registers['orig_rax']
+                del registers['orig_rax']
+
+            event = Event(pc,
+                          raw_event.tid,
+                          raw_event.arch,
+                          event_type,
+                          registers, mem_writes)
+            events.append(event)
+
+        return events
+
diff --git a/src/focaccia/lldb_target.py b/src/focaccia/lldb_target.py
index c5042d5..940b3d9 100644
--- a/src/focaccia/lldb_target.py
+++ b/src/focaccia/lldb_target.py
@@ -1,10 +1,16 @@
 import os
+import logging
 
 import lldb
 
 from .arch import supported_architectures
 from .snapshot import ProgramState
 
+logger = logging.getLogger('focaccia-lldb-target')
+debug = logger.debug
+info = logger.info
+warn = logger.warning
+
 class MemoryMap:
     """Description of a range of mapped memory.
 
@@ -44,49 +50,53 @@ class LLDBConcreteTarget:
         x86.archname: x86.decompose_rflags,
     }
 
+    register_retries = {
+        aarch64.archname: {},
+        x86.archname: {
+            "rflags": ["eflags"]
+        }
+    }
+
     def __init__(self,
-                 executable: str,
-                 argv: list[str] = [],
-                 envp: list[str] | None = None):
+                 debugger: lldb.SBDebugger,
+                 target: lldb.SBTarget,
+                 process: lldb.SBProcess):
         """Construct an LLDB concrete target. Stop at entry.
 
-        :param argv: List of arguements. Does NOT include the conventional
-                     executable name as the first entry.
-        :param envp: List of environment entries. Defaults to current
-                     `os.environ` if `None`.
-        :raises RuntimeError: If the process is unable to launch.
+        :param debugger: LLDB SBDebugger object representing an initialized debug session.
+        :param target: LLDB SBTarget object representing an initialized target for the debugger.
+        :param process: LLDB SBProcess object representing an initialized process (either local or remote).
         """
-        if envp is None:
-            envp = [f'{k}={v}' for k, v in os.environ.items()]
+        self.debugger = debugger
+        self.target = target
+        self.process = process
 
-        self.debugger = lldb.SBDebugger.Create()
-        self.debugger.SetAsync(False)
-        self.target = self.debugger.CreateTargetWithFileAndArch(executable,
-                                                                lldb.LLDB_ARCH_DEFAULT)
         self.module = self.target.FindModule(self.target.GetExecutable())
         self.interpreter = self.debugger.GetCommandInterpreter()
 
         # Set up objects for process execution
-        self.error = lldb.SBError()
         self.listener = self.debugger.GetListener()
-        self.process = self.target.Launch(self.listener,
-                                          argv, envp,        # argv, envp
-                                          None, None, None,  # stdin, stdout, stderr
-                                          None,              # working directory
-                                          0,
-                                          True, self.error)
-        if not self.process.IsValid():
-            raise RuntimeError(f'[In LLDBConcreteTarget.__init__]: Failed to'
-                               f' launch process.')
 
         # Determine current arch
-        self.archname = self.target.GetPlatform().GetTriple().split('-')[0]
-        if self.archname not in supported_architectures:
-            err = f'LLDBConcreteTarget: Architecture {self.archname} is not' \
+        self.archname = self.determine_arch()
+        self.arch = supported_architectures[self.archname]
+
+    def determine_arch(self):
+        archname = self.target.GetPlatform().GetTriple().split('-')[0]
+        if archname not in supported_architectures:
+            err = f'LLDBConcreteTarget: Architecture {archname} is not' \
                   f' supported by Focaccia.'
             print(f'[ERROR] {err}')
             raise NotImplementedError(err)
-        self.arch = supported_architectures[self.archname]
+        return archname
+
+    def determine_name(self) -> str:
+        return self.process.GetTarget().GetExecutable().fullpath
+
+    def determine_arguments(self) -> list[str]:
+        launch_info = self.target.GetLaunchInfo()
+        argc = launch_info.GetNumArguments()
+        return [launch_info.GetArgumentAtIndex(i) for i in range(argc)]
 
     def is_exited(self):
         """Signals whether the concrete process has exited.
@@ -99,21 +109,24 @@ class LLDBConcreteTarget:
         """Continue execution of the concrete process."""
         state = self.process.GetState()
         if state == lldb.eStateExited:
-            raise RuntimeError(f'Tried to resume process execution, but the'
-                               f' process has already exited.')
-        assert(state == lldb.eStateStopped)
+            raise RuntimeError('Tried to resume process execution, but the'
+                               ' process has already exited.')
         self.process.Continue()
 
     def step(self):
         """Step forward by a single instruction."""
-        thread: lldb.SBThread = self.process.GetThreadAtIndex(0)
+        thread: lldb.SBThread = self.process.GetSelectedThread()
         thread.StepInstruction(False)
 
     def run_until(self, address: int) -> None:
         """Continue execution until the address is arrived, ignores other breakpoints"""
         bp = self.target.BreakpointCreateByAddress(address)
-        while self.read_register("pc") != address:
+        while True:
             self.run()
+            if self.is_exited():
+                break
+            if self.read_register('pc') == address:
+                break
         self.target.BreakpointDelete(bp.GetID())
 
     def record_snapshot(self) -> ProgramState:
@@ -148,12 +161,20 @@ class LLDBConcreteTarget:
         :raise ConcreteRegisterError: If no register with the specified name
                                       can be found.
         """
-        frame = self.process.GetThreadAtIndex(0).GetFrameAtIndex(0)
-        reg = frame.FindRegister(regname)
+        debug(f'Accessing register {regname}')
+
+        frame = self.process.GetSelectedThread().GetFrameAtIndex(0)
+
+        retry_list = self.register_retries[self.archname].get(regname, [])
+        error_msg = f'[In LLDBConcreteTarget._get_register]: Register {regname} not found'
+
+        reg = None
+        for name in [regname, *retry_list]:
+            reg = frame.FindRegister(name)
+            if reg.IsValid():
+                break
         if not reg.IsValid():
-            raise ConcreteRegisterError(
-                f'[In LLDBConcreteTarget._get_register]: Register {regname}'
-                f' not found.')
+            raise ConcreteRegisterError(error_msg)
         return reg
 
     def read_flags(self) -> dict[str, int | bool]:
@@ -223,7 +244,7 @@ class LLDBConcreteTarget:
                 f'[In LLDBConcreteTarget.write_register]: Unable to set'
                 f' {regname} to value {hex(value)}!')
 
-    def read_memory(self, addr, size):
+    def read_memory(self, addr: int, size: int) -> bytes:
         """Read bytes from memory.
 
         :raise ConcreteMemoryError: If unable to read `size` bytes from `addr`.
@@ -238,7 +259,7 @@ class LLDBConcreteTarget:
         else:
             return bytes(reversed(content))
 
-    def write_memory(self, addr, value: bytes):
+    def write_memory(self, addr: int, value: bytes):
         """Write bytes to memory.
 
         :raise ConcreteMemoryError: If unable to write at `addr`.
@@ -316,3 +337,82 @@ class LLDBConcreteTarget:
                 if s.GetStartAddress().GetLoadAddress(self.target) > addr:
                     addr = s.GetEndAddress().GetLoadAddress(self.target)
         return addr
+
+    def get_disassembly(self, addr: int) -> str:
+        inst: lldb.SBInstruction = self.target.ReadInstructions(lldb.SBAddress(addr, self.target), 1, 'intel')[0]
+        mnemonic: str = inst.GetMnemonic(self.target).upper()
+        operands: str = inst.GetOperands(self.target).upper()
+        operands = operands.replace("0X", "0x")
+        return f'{mnemonic} {operands}'
+
+    def get_disassembly_bytes(self, addr: int):
+        error = lldb.SBError()
+        buf = self.process.ReadMemory(addr, 64, error)
+        inst = self.target.GetInstructions(lldb.SBAddress(addr, self.target), buf)[0]
+        return inst.GetData(self.target).ReadRawData(error, 0, inst.GetByteSize())
+
+    def get_instruction_size(self, addr: int) -> int:
+        inst = self.target.ReadInstructions(lldb.SBAddress(addr, self.target), 1, 'intel')[0]
+        return inst.GetByteSize()
+
+    def get_current_tid(self) -> int:
+        thread: lldb.SBThread = self.process.GetSelectedThread()
+        return thread.GetThreadID()
+
+class LLDBLocalTarget(LLDBConcreteTarget):
+    def __init__(self,
+                 executable: str,
+                 argv: list[str] = [],
+                 envp: list[str] | None = None):
+        """Construct an LLDB local target. Stop at entry.
+
+        :param executable: Name of executable to run under LLDB.
+        :param argv: List of arguments. Does NOT include the conventional
+                     executable name as the first entry.
+        :param envp: List of environment entries. Defaults to current
+                     `os.environ` if `None`.
+        :raises RuntimeError: If the process is unable to launch.
+        """
+        if envp is None:
+            envp = [f'{k}={v}' for k, v in os.environ.items()]
+
+        debugger = lldb.SBDebugger.Create()
+        debugger.SetAsync(False)
+        target = debugger.CreateTargetWithFileAndArch(executable, lldb.LLDB_ARCH_DEFAULT)
+
+        # Set up objects for process execution
+        error = lldb.SBError()
+        process = target.Launch(debugger.GetListener(),
+                                argv, envp,        # argv, envp
+                                None, None, None,  # stdin, stdout, stderr
+                                None,              # working directory
+                                0,
+                                True, error)
+
+        if not process.IsValid():
+            raise RuntimeError(f'Failed to launch LLDB target: {error.GetCString()}')
+
+        super().__init__(debugger, target, process)
+
+class LLDBRemoteTarget(LLDBConcreteTarget):
+    def __init__(self, remote: str, executable: str | None = None):
+        """Construct an LLDB remote target. Stop at entry.
+
+        :param remote: String of the form <remote_name>:<port> (e.g. localhost:12345).
+        :param executable: Optional path to a local copy of the executable, used to
+                           create the LLDB target and resolve symbols.
+        :raises RuntimeError: If failing to attach to a remote debug session.
+        """
+        debugger = lldb.SBDebugger.Create()
+        debugger.SetAsync(False)
+        target = debugger.CreateTarget(executable)
+
+        # Set up objects for process execution
+        error = lldb.SBError()
+        process = target.ConnectRemote(debugger.GetListener(),
+                                       f'connect://{remote}',
+                                       None,
+                                       error)
+        if not process.IsValid():
+            raise RuntimeError(f'Failed to connect via LLDB to remote target: {error.GetCString()}')
+
+        super().__init__(debugger, target, process)
+
diff --git a/src/focaccia/miasm_util.py b/src/focaccia/miasm_util.py
index 2cbcc52..c9dc4e5 100644
--- a/src/focaccia/miasm_util.py
+++ b/src/focaccia/miasm_util.py
@@ -155,11 +155,7 @@ class MiasmSymbolResolver:
         return regname
 
     def resolve_register(self, regname: str) -> int | None:
-        try:
-            return self._state.read_register(self._miasm_to_regname(regname))
-        except RegisterAccessError as err:
-            print(f'Not a register: {regname} ({err})')
-            return None
+        return self._state.read_register(self._miasm_to_regname(regname))
 
     def resolve_memory(self, addr: int, size: int) -> bytes | None:
         try:
diff --git a/src/focaccia/parser.py b/src/focaccia/parser.py
index e9e5e0c..c157a36 100644
--- a/src/focaccia/parser.py
+++ b/src/focaccia/parser.py
@@ -201,3 +201,4 @@ def parse_box64(stream: TextIO, arch: Arch) -> Trace[ProgramState]:
                 states[-1].set_register(regname, int(value, 16))
 
     return Trace(states, _make_unknown_env())
+
diff --git a/src/focaccia/symbolic.py b/src/focaccia/symbolic.py
index 7b6098e..2a66a26 100644
--- a/src/focaccia/symbolic.py
+++ b/src/focaccia/symbolic.py
@@ -1,11 +1,12 @@
-"""Tools and utilities for symbolic execution with Miasm."""
+"""Tools and utilities for symbolic execution with Miasm."""
 
 from __future__ import annotations
-from typing import Iterable
-import logging
+
 import sys
+import logging
+
+from pathlib import Path
 
-from miasm.analysis.binary import ContainerELF
 from miasm.analysis.machine import Machine
 from miasm.core.cpu import instruction as miasm_instr
 from miasm.core.locationdb import LocationDB
@@ -14,20 +15,29 @@ from miasm.ir.ir import Lifter
 from miasm.ir.symbexec import SymbolicExecutionEngine
 
 from .arch import Arch, supported_architectures
-from .lldb_target import LLDBConcreteTarget, \
-                         ConcreteRegisterError, \
-                         ConcreteMemoryError
+from .lldb_target import (
+    LLDBConcreteTarget,
+    LLDBLocalTarget,
+    LLDBRemoteTarget,
+    ConcreteRegisterError,
+    ConcreteMemoryError,
+)
 from .miasm_util import MiasmSymbolResolver, eval_expr, make_machine
-from .snapshot import ProgramState, ReadableProgramState, \
-                      RegisterAccessError, MemoryAccessError
+from .snapshot import ReadableProgramState, RegisterAccessError, MemoryAccessError
 from .trace import Trace, TraceEnvironment
+from .utils import timebound, TimeoutError
 
 logger = logging.getLogger('focaccia-symbolic')
+debug = logger.debug
+info = logger.info
 warn = logger.warn
 
 # Disable Miasm's disassembly logger
 logging.getLogger('asmblock').setLevel(logging.CRITICAL)
 
+class ValidationError(Exception):
+    pass
+
 def eval_symbol(symbol: Expr, conc_state: ReadableProgramState) -> int:
     """Evaluate a symbol based on a concrete reference state.
 
@@ -52,8 +62,8 @@ def eval_symbol(symbol: Expr, conc_state: ReadableProgramState) -> int:
             return self._state.read_memory(addr, size)
 
         def resolve_location(self, loc):
-            raise ValueError(f'[In eval_symbol]: Unable to evaluate symbols'
-                             f' that contain IR location expressions.')
+            raise ValueError('[In eval_symbol]: Unable to evaluate symbols'
+                             ' that contain IR location expressions.')
 
     res = eval_expr(symbol, ConcreteStateWrapper(conc_state))
 
@@ -61,7 +71,8 @@ def eval_symbol(symbol: Expr, conc_state: ReadableProgramState) -> int:
     # but ExprLocs are disallowed by the
     # ConcreteStateWrapper
     if not isinstance(res, ExprInt):
-        raise Exception(f'{res} from symbol {symbol} is not an instance of ExprInt but only ExprInt can be evaluated')
+        raise Exception(f'{res} from symbol {symbol} is not an instance of ExprInt'
+                        f' but only ExprInt can be evaluated')
     return int(res)
 
 class Instruction:
@@ -116,18 +127,24 @@ class Instruction:
 class SymbolicTransform:
     """A symbolic transformation mapping one program state to another."""
     def __init__(self,
+                 tid: int,
                  transform: dict[Expr, Expr],
                  instrs: list[Instruction],
                  arch: Arch,
                  from_addr: int,
                  to_addr: int):
         """
-        :param state: The symbolic transformation in the form of a SimState
-                      object.
-        :param first_inst: An instruction address. The transformation
-                           represents the modifications to the program state
-                           performed by this instruction.
+        :param tid: The thread ID that executed the instructions effecting the transformation.
+        :param transform: A map of input symbolic expressions and output symbolic expressions.
+        :param instrs: A list of instructions. The transformation
+                       represents the collective modifications to the program state
+                       performed by these instructions.
+        :param arch: The architecture of the symbolic transformation.
+        :param from_addr: The address of the first instruction effecting the symbolic
+                          transformation.
+        :param to_addr: The address at which the program counter points after the last
+                        instruction in the list has executed.
         """
+        self.tid = tid
         self.arch = arch
 
         self.addr = from_addr
@@ -376,15 +393,16 @@ class SymbolicTransform:
             try:
                 return Instruction.from_string(text, arch, offset=0, length=length)
             except Exception as err:
-                warn(f'[In SymbolicTransform.from_json] Unable to parse'
-                     f' instruction string "{text}": {err}.')
-                return None
+                # Note: from None disables chaining in traceback
+                raise ValueError(f'[In SymbolicTransform.from_json] Unable to parse'
+                                 f' instruction string "{text}": {err}.') from None
 
+        tid = int(data['tid'])
         arch = supported_architectures[data['arch']]
         start_addr = int(data['from_addr'])
         end_addr = int(data['to_addr'])
 
-        t = SymbolicTransform({}, [], arch, start_addr, end_addr)
+        t = SymbolicTransform(tid, {}, [], arch, start_addr, end_addr)
         t.changed_regs = { name: parse(val) for name, val in data['regs'].items() }
         t.changed_mem = { parse(addr): parse(val) for addr, val in data['mem'].items() }
         instrs = [decode_inst(b, arch) for b in data['instructions']]
@@ -404,15 +422,15 @@ class SymbolicTransform:
             try:
                 return [inst.length, inst.to_string()]
             except Exception as err:
-                warn(f'[In SymbolicTransform.to_json] Unable to serialize'
-                     f' "{inst}" as string: {err}. This instruction will not'
-                     f' be serialized.')
-                return None
+                # Note: from None disables chaining in traceback
+                raise Exception(f'[In SymbolicTransform.to_json] Unable to serialize'
+                                f' "{inst}" as string: {err}') from None
 
         instrs = [encode_inst(inst) for inst in self.instructions]
         instrs = [inst for inst in instrs if inst is not None]
         return {
             'arch': self.arch.archname,
+            'tid': self.tid,
             'from_addr': self.range[0],
             'to_addr': self.range[1],
             'instructions': instrs,
@@ -422,7 +440,7 @@ class SymbolicTransform:
 
     def __repr__(self) -> str:
         start, end = self.range
-        res = f'Symbolic state transformation {hex(start)} -> {hex(end)}:\n'
+        res = f'Symbolic state transformation [{self.tid}] {hex(start)} -> {hex(end)}:\n'
         res += '  [Symbols]\n'
         for reg, expr in self.changed_regs.items():
             res += f'    {reg:6s} = {expr}\n'
@@ -445,8 +463,8 @@ class MemoryBinstream:
 
     def __getitem__(self, key: int | slice):
         if isinstance(key, slice):
-            return self._state.read_memory(key.start, key.stop - key.start)
-        return self._state.read_memory(key, 1)
+            return self._state.read_instructions(key.start, key.stop - key.start)
+        return self._state.read_instructions(key, 1)
 
 class DisassemblyContext:
     def __init__(self, target: ReadableProgramState):
@@ -463,9 +481,14 @@ class DisassemblyContext:
         self.mdis.follow_call = True
         self.lifter = self.machine.lifter(self.loc_db)
 
+    def disassemble(self, address: int) -> Instruction:
+        instr = self.mdis.dis_instr(address)
+        return Instruction(instr, self.machine, self.arch, self.loc_db)
+
 def run_instruction(instr: miasm_instr,
                     conc_state: MiasmSymbolResolver,
-                    lifter: Lifter) \
+                    lifter: Lifter,
+                    force: bool = False) \
         -> tuple[ExprInt | None, dict[Expr, Expr]]:
     """Compute the symbolic equation of a single instruction.
 
@@ -578,8 +601,12 @@ def run_instruction(instr: miasm_instr,
         loc = lifter.add_instr_to_ircfg(instr, ircfg, None, False)
         assert(isinstance(loc, Expr) or isinstance(loc, LocKey))
     except NotImplementedError as err:
-        warn(f'[WARNING] Unable to lift instruction {instr}: {err}. Skipping.')
-        return None, {}  # Create an empty transform for the instruction
+        msg = f'Unable to lift instruction {instr}: {err}'
+        if force:
+            warn(f'{msg}. Skipping')
+            return None, {}
+        else:
+            raise Exception(msg)
 
     # Execute instruction symbolically
     new_pc, modified = execute_location(loc)
@@ -587,114 +614,317 @@ def run_instruction(instr: miasm_instr,
 
     return new_pc, modified
 
-class _LLDBConcreteState(ReadableProgramState):
-    """A wrapper around `LLDBConcreteTarget` that provides access via a
-    `ReadableProgramState` interface. Reads values directly from an LLDB
-    target. This saves us the trouble of recording a full program state, and
-    allows us instead to read values from LLDB on demand.
-    """
+class SpeculativeTracer(ReadableProgramState):
     def __init__(self, target: LLDBConcreteTarget):
         super().__init__(target.arch)
-        self._target = target
+        self.target = target
+        self.pc = target.read_register('pc')
+        self.speculative_pc: int | None = None
+        self.speculative_count: int = 0
+
+        self.read_cache = {}
+
+    def speculate(self, new_pc: int | None):
+        self.read_cache.clear()
+        if new_pc is None:
+            self.progress_execution()
+            self.target.step()
+            self.pc = self.target.read_register('pc')
+            self.speculative_pc = None
+            self.speculative_count = 0
+            return
+
+        new_pc = int(new_pc)
+        self.speculative_pc = new_pc
+        self.speculative_count += 1
+
+    def progress_execution(self) -> None:
+        if self.speculative_pc is not None and self.speculative_count != 0:
+            debug(f'Updating PC to {hex(self.speculative_pc)}')
+            if self.speculative_count == 1:
+                self.target.step()
+            else:
+                self.target.run_until(self.speculative_pc)
+
+            self.pc = self.speculative_pc
+            self.speculative_pc = None
+            self.speculative_count = 0
+
+            self.read_cache.clear()
+
+    def run_until(self, addr: int):
+        if self.speculative_pc is not None:
+            raise Exception('Attempting manual execution with speculative execution enabled')
+        self.target.run_until(addr)
+        self.pc = addr
+
+    def step(self):
+        self.progress_execution()
+        if self.target.is_exited():
+            return
+        self.target.step()
+        self.pc = self.target.read_register('pc')
+
+    def _cache(self, name: str, value):
+        self.read_cache[name] = value
+        return value
+
+    def read_pc(self) -> int:
+        if self.speculative_pc is not None:
+            return self.speculative_pc
+        return self.pc
+
+    def read_flags(self) -> dict[str, int | bool]:
+        if 'flags' in self.read_cache:
+            return self.read_cache['flags']
+        self.progress_execution()
+        return self._cache('flags', self.target.read_flags())
 
     def read_register(self, reg: str) -> int:
         regname = self.arch.to_regname(reg)
         if regname is None:
             raise RegisterAccessError(reg, f'Not a register name: {reg}')
 
-        try:
-            return self._target.read_register(regname)
-        except ConcreteRegisterError:
-            raise RegisterAccessError(regname, '')
+        if regname in self.read_cache:
+            return self.read_cache[regname]
+
+        self.progress_execution()
+        return self._cache(regname, self.target.read_register(regname))
+
+    def write_register(self, regname: str, value: int):
+        self.progress_execution()
+        # A write may change aliased registers (e.g. eax/rax) and flags, so
+        # invalidate the whole read cache.
+        self.read_cache.clear()
+        self.target.write_register(regname, value)
+
+    def read_instructions(self, addr: int, size: int) -> bytes:
+        return self.target.read_memory(addr, size)
 
     def read_memory(self, addr: int, size: int) -> bytes:
-        try:
-            return self._target.read_memory(addr, size)
-        except ConcreteMemoryError:
-            raise MemoryAccessError(addr, size, 'Unable to read memory from LLDB.')
-
-def collect_symbolic_trace(env: TraceEnvironment,
-                           start_addr: int | None = None
-                           ) -> Trace[SymbolicTransform]:
-    """Execute a program and compute state transformations between executed
-    instructions.
-
-    :param binary: The binary to trace.
-    :param args:   Arguments to the program.
+        self.progress_execution()
+        cache_name = f'{addr}_{size}'
+        if cache_name in self.read_cache:
+            return self.read_cache[cache_name]
+        return self._cache(cache_name, self.target.read_memory(addr, size))
+
+    def write_memory(self, addr: int, value: bytes):
+        self.progress_execution()
+        # Cached reads are keyed by address/size ranges that may overlap this
+        # write, so invalidate the whole cache.
+        self.read_cache.clear()
+        self.target.write_memory(addr, value)
+
+    def __getattr__(self, name: str):
+        return getattr(self.target, name)
+
+class SymbolicTracer:
+    """A symbolic tracer that uses `LLDBConcreteTarget` with Miasm to execute a
+    program concretely while collecting its symbolic transforms.
     """
-    binary = env.binary_name
-
-    # Set up concrete reference state
-    target = LLDBConcreteTarget(binary, env.argv, env.envp)
-    if start_addr is not None:
-        target.run_until(start_addr)
-    lldb_state = _LLDBConcreteState(target)
-
-    ctx = DisassemblyContext(lldb_state)
-    arch = ctx.arch
-
-    # Trace concolically
-    strace: list[SymbolicTransform] = []
-    while not target.is_exited():
-        pc = target.read_register('pc')
-
-        # Disassemble instruction at the current PC
-        try:
-            instr = ctx.mdis.dis_instr(pc)
-        except:
-            err = sys.exc_info()[1]
-            warn(f'Unable to disassemble instruction at {hex(pc)}: {err}.'
-                 f' Skipping.')
-            target.step()
-            continue
-
-        # Run instruction
-        conc_state = MiasmSymbolResolver(lldb_state, ctx.loc_db)
-        new_pc, modified = run_instruction(instr, conc_state, ctx.lifter)
-
-        # Create symbolic transform
-        instruction = Instruction(instr, ctx.machine, ctx.arch, ctx.loc_db)
-        if new_pc is None:
-            new_pc = pc + instruction.length
-        else:
-            new_pc = int(new_pc)
-        transform = SymbolicTransform(modified, [instruction], arch, pc, new_pc)
-        strace.append(transform)
-
-        # Predict next concrete state.
-        # We verify the symbolic execution backend on the fly for some
-        # additional protection from bugs in the backend.
-        if env.cross_validate:
-            predicted_regs = transform.eval_register_transforms(lldb_state)
-            predicted_mems = transform.eval_memory_transforms(lldb_state)
-
-        # Step forward
-        target.step()
-        if target.is_exited():
-            break
-
+    def __init__(self,
+                 env: TraceEnvironment,
+                 remote: str | None = None,
+                 force: bool = False,
+                 cross_validate: bool = False):
+        self.env = env
+        self.force = force
+        self.remote = remote
+        self.cross_validate = cross_validate
+        self.target = SpeculativeTracer(self.create_debug_target())
+
+        self.nondet_events = self.env.detlog.events()
+        self.next_event: int | None = None
+
+    def create_debug_target(self) -> LLDBConcreteTarget:
+        binary = self.env.binary_name
+        if self.remote is None:
+            debug(f'Launching local debug target {binary} {self.env.argv}')
+            debug(f'Environment: {self.env}')
+            return LLDBLocalTarget(binary, self.env.argv, self.env.envp)
+
+        debug(f'Connecting to remote debug target {self.remote}')
+        target = LLDBRemoteTarget(self.remote, binary)
+
+        module_name = target.determine_name()
+        binary = str(Path(self.env.binary_name).resolve())
+        if binary != module_name:
+            warn(f'Discovered binary name {module_name} differs from specified name {binary}')
+
+        return target
+
+    def predict_next_state(self, instruction: Instruction, transform: SymbolicTransform):
+        debug(f'Evaluating register and memory transforms for {instruction} to cross-validate')
+        predicted_regs = transform.eval_register_transforms(self.target)
+        predicted_mems = transform.eval_memory_transforms(self.target)
+        return predicted_regs, predicted_mems
+
+    def validate(self,
+                 instruction: Instruction,
+                 transform: SymbolicTransform,
+                 predicted_regs: dict[str, int],
+                 predicted_mems: dict[int, bytes]):
         # Verify last generated transform by comparing concrete state against
         # predicted values.
-        assert(len(strace) > 0)
-        if env.cross_validate:
-            for reg, val in predicted_regs.items():
-                conc_val = lldb_state.read_register(reg)
-                if conc_val != val:
-                    warn(f'Symbolic execution backend generated false equation for'
-                         f' [{hex(instruction.addr)}]: {instruction}:'
-                         f' Predicted {reg} = {hex(val)}, but the'
-                         f' concrete state has value {reg} = {hex(conc_val)}.'
-                         f'\nFaulty transformation: {transform}')
-            for addr, data in predicted_mems.items():
-                conc_data = lldb_state.read_memory(addr, len(data))
-                if conc_data != data:
-                    warn(f'Symbolic execution backend generated false equation for'
-                         f' [{hex(instruction.addr)}]: {instruction}: Predicted'
-                         f' mem[{hex(addr)}:{hex(addr+len(data))}] = {data},'
-                         f' but the concrete state has value'
-                         f' mem[{hex(addr)}:{hex(addr+len(data))}] = {conc_data}.'
-                         f'\nFaulty transformation: {transform}')
-                    raise Exception()
-
-    return Trace(strace, env)
+        if self.target.is_exited():
+            return
+
+        debug('Cross-validating symbolic transforms by comparing actual to predicted values')
+        for reg, val in predicted_regs.items():
+            conc_val = self.target.read_register(reg)
+            if conc_val != val:
+                raise ValidationError(f'Symbolic execution backend generated false equation for'
+                                      f' [{hex(instruction.addr)}]: {instruction}:'
+                                      f' Predicted {reg} = {hex(val)}, but the'
+                                      f' concrete state has value {reg} = {hex(conc_val)}.'
+                                      f'\nFaulty transformation: {transform}')
+        for addr, data in predicted_mems.items():
+            conc_data = self.target.read_memory(addr, len(data))
+            if conc_data != data:
+                raise ValidationError(f'Symbolic execution backend generated false equation for'
+                                      f' [{hex(instruction.addr)}]: {instruction}: Predicted'
+                                      f' mem[{hex(addr)}:{hex(addr+len(data))}] = {data},'
+                                      f' but the concrete state has value'
+                                      f' mem[{hex(addr)}:{hex(addr+len(data))}] = {conc_data}.'
+                                      f'\nFaulty transformation: {transform}')
+
+    def progress_event(self) -> None:
+        if (self.next_event + 1) < len(self.nondet_events):
+            self.next_event += 1
+            debug(f'Next event to handle at index {self.next_event}')
+        else:
+            self.next_event = None
+
+    def post_event(self) -> None:
+        if self.next_event is not None:
+            if self.nondet_events[self.next_event].pc == 0:
+                # Exit sequence
+                debug('Completed exit event')
+                self.target.run()
+
+            debug(f'Completed handling event at index {self.next_event}')
+            self.progress_event()
+
+    def is_stepping_instr(self, pc: int, instruction: Instruction) -> bool:
+        if self.nondet_events:
+            pc = pc + instruction.length # detlog reports next pc for each event
+            if self.next_event is not None and self.nondet_events[self.next_event].match(pc, self.target):
+                debug('Current instruction matches next event; stepping through it')
+                self.progress_event()
+                return True
+        else:
+            if self.target.arch.is_instr_syscall(str(instruction)):
+                return True
+        return False
+
+    def progress(self, new_pc, step: bool = False) -> int | None:
+        self.target.speculate(new_pc)
+        if step:
+            self.target.progress_execution()
+            if self.target.is_exited():
+                return None
+        return self.target.read_pc()
+
+    def trace(self, time_limit: int | None = None) -> Trace[SymbolicTransform]:
+        """Execute a program and compute state transformations between executed
+        instructions.
+
+        :param start_addr: Address from which to start tracing.
+        :param stop_addr: Address until which to trace.
+        """
+        # Set up concrete reference state
+        if self.env.start_address is not None:
+            self.target.run_until(self.env.start_address)
+
+        for i in range(len(self.nondet_events)):
+            if self.nondet_events[i].pc == self.target.read_pc():
+                self.next_event = i + 1
+                if self.next_event >= len(self.nondet_events):
+                    self.next_event = None  # all recorded events already handled
+                    break
+
+                debug(f'Starting from event {self.nondet_events[i]} onwards')
+                break
+
+        ctx = DisassemblyContext(self.target)
+        arch = ctx.arch
+
+        if logger.isEnabledFor(logging.DEBUG):
+            debug('Tracing program with the following non-deterministic events')
+            for event in self.nondet_events:
+                debug(event)
+
+        # Trace concolically
+        strace: list[SymbolicTransform] = []
+        while not self.target.is_exited():
+            pc = self.target.read_pc()
+
+            if self.env.stop_address is not None and pc == self.env.stop_address:
+                break
+
+            assert(pc != 0)
+
+            # Disassemble instruction at the current PC
+            tid = self.target.get_current_tid()
+            alt_disas: str | None = None
+            try:
+                instruction = ctx.disassemble(pc)
+                info(f'[{tid}] Disassembled instruction {instruction} at {hex(pc)}')
+            except Exception as err:
+                # Try to recover by using the LLDB disassembly instead
+                try:
+                    alt_disas = self.target.get_disassembly(pc)
+                    instruction = Instruction.from_string(alt_disas, ctx.arch, pc,
+                                                          self.target.get_instruction_size(pc))
+                    info(f'[{tid}] Disassembled instruction {instruction} at {hex(pc)}')
+                except Exception:
+                    if self.force:
+                        if alt_disas:
+                            warn(f'[{tid}] Unable to handle instruction {alt_disas} at {hex(pc)}'
+                                 f' in Miasm. Skipping.')
+                        else:
+                            warn(f'[{tid}] Unable to disassemble instruction at {hex(pc)}: {err}.'
+                                 f' Skipping.')
+                        self.target.step()
+                        continue
+                    raise # forward exception
+
+            is_event = self.is_stepping_instr(pc, instruction)
+
+            # Run instruction
+            conc_state = MiasmSymbolResolver(self.target, ctx.loc_db)
+
+            try:
+                new_pc, modified = timebound(time_limit, run_instruction,
+                                             instruction.instr, conc_state, ctx.lifter)
+            except TimeoutError:
+                warn(f'Running instruction {instruction} took longer than {time_limit} seconds. Skipping.')
+                new_pc, modified = None, {}
+
+            if self.cross_validate and new_pc:
+                # Predict next concrete state.
+                # We verify the symbolic execution backend on the fly for some
+                # additional protection from bugs in the backend.
+                new_pc = int(new_pc)
+                transform = SymbolicTransform(tid, modified, [instruction], arch, pc, new_pc)
+                pred_regs, pred_mems = self.predict_next_state(instruction, transform)
+                self.progress(new_pc, step=is_event)
+
+                try:
+                    self.validate(instruction, transform, pred_regs, pred_mems)
+                except ValidationError as e:
+                    if self.force:
+                        warn(f'Cross-validation failed: {e}')
+                        continue
+                    raise
+            else:
+                new_pc = self.progress(new_pc, step=is_event)
+                if new_pc is None:
+                    transform = SymbolicTransform(tid, modified, [instruction], arch, pc, 0)
+                    strace.append(transform)
+                    continue # we're done
+                transform = SymbolicTransform(tid, modified, [instruction], arch, pc, new_pc)
+
+            strace.append(transform)
+
+            if is_event:
+                self.post_event()
+
+        return Trace(strace, self.env)
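The cross-validation loop above follows a predict/step/compare pattern: evaluate the symbolic transform against the current concrete state, step the target, then compare predicted and actual values. A minimal sketch of that pattern, with `FakeTarget` and a plain dict standing in (hypothetically) for the real `SpeculativeTracer` and `SymbolicTransform`:

```python
# Minimal sketch of the predict/step/compare cross-validation idea.
# FakeTarget and the predicted_regs dict are invented stand-ins for the
# real SpeculativeTracer and SymbolicTransform classes.

class ValidationError(Exception):
    pass

class FakeTarget:
    """Concrete state after executing some instruction."""
    def __init__(self):
        self.regs = {'rax': 5}

    def read_register(self, reg: str) -> int:
        return self.regs[reg]

def cross_validate(target: FakeTarget, predicted_regs: dict[str, int]):
    # Compare each symbolically predicted value against the concrete state;
    # a mismatch indicates a bug in the symbolic execution backend.
    for reg, val in predicted_regs.items():
        conc = target.read_register(reg)
        if conc != val:
            raise ValidationError(
                f'Predicted {reg} = {hex(val)}, concrete {reg} = {hex(conc)}')

cross_validate(FakeTarget(), {'rax': 5})   # matches: no error raised
try:
    cross_validate(FakeTarget(), {'rax': 6})  # mismatch: backend bug detected
except ValidationError as e:
    print('caught:', e)
```

As in the tracer above, a mismatch is fatal unless `--force` downgrades it to a warning.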
 
diff --git a/src/focaccia/tools/_qemu_tool.py b/src/focaccia/tools/_qemu_tool.py
index 706a9fe..02d150b 100644
--- a/src/focaccia/tools/_qemu_tool.py
+++ b/src/focaccia/tools/_qemu_tool.py
@@ -7,12 +7,13 @@ work to do.
 """
 
 import gdb
+import logging
 import traceback
 from typing import Iterable
 
 import focaccia.parser as parser
 from focaccia.arch import supported_architectures, Arch
-from focaccia.compare import compare_symbolic
+from focaccia.compare import compare_symbolic, Error, ErrorTypes
 from focaccia.snapshot import ProgramState, ReadableProgramState, \
                               RegisterAccessError, MemoryAccessError
 from focaccia.symbolic import SymbolicTransform, eval_symbol, ExprMem
@@ -21,6 +22,20 @@ from focaccia.utils import print_result
 
 from validate_qemu import make_argparser, verbosity
 
+logger = logging.getLogger('focaccia-qemu-validator')
+debug = logger.debug
+info = logger.info
+warn = logger.warning
+
+qemu_crash = {
+    'crashed': False,
+    'pc': None,
+    'txl': None,
+    'ref': None,
+    'errors': [Error(ErrorTypes.CONFIRMED, 'QEMU crashed')],
+    'snap': None,
+}
+
 class GDBProgramState(ReadableProgramState):
     from focaccia.arch import aarch64, x86
 
@@ -106,11 +121,11 @@ class GDBProgramState(ReadableProgramState):
             raise MemoryAccessError(addr, size, str(err))
 
 class GDBServerStateIterator:
-    def __init__(self, address: str, port: int):
+    def __init__(self, remote: str):
         gdb.execute('set pagination 0')
         gdb.execute('set sysroot')
         gdb.execute('set python print-stack full') # enable complete Python tracebacks
-        gdb.execute(f'target remote {address}:{port}')
+        gdb.execute(f'target remote {remote}')
         self._process = gdb.selected_inferior()
         self._first_next = True
 
@@ -148,6 +163,12 @@ class GDBServerStateIterator:
 
         return GDBProgramState(self._process, gdb.selected_frame(), self.arch)
 
+    def run_until(self, addr: int):
+        breakpoint = gdb.Breakpoint(f'*{addr:#x}')
+        gdb.execute('continue')
+        breakpoint.delete()
+        return GDBProgramState(self._process, gdb.selected_frame(), self.arch)
+
 def record_minimal_snapshot(prev_state: ReadableProgramState,
                             cur_state: ReadableProgramState,
                             prev_transform: SymbolicTransform,
@@ -220,7 +241,9 @@ def record_minimal_snapshot(prev_state: ReadableProgramState,
     return state
 
 def collect_conc_trace(gdb: GDBServerStateIterator, \
-                       strace: list[SymbolicTransform]) \
+                       strace: list[SymbolicTransform],
+                       start_addr: int | None = None,
+                       stop_addr: int | None = None) \
         -> tuple[list[ProgramState], list[SymbolicTransform]]:
     """Collect a trace of concrete states from GDB.
 
@@ -252,28 +275,44 @@ def collect_conc_trace(gdb: GDBServerStateIterator, \
     cur_state = next(state_iter)
     symb_i = 0
 
+    # Skip to start
+    try:
+        pc = cur_state.read_register('pc')
+        if start_addr and pc != start_addr:
+            info(f'Tracing QEMU from starting address: {hex(start_addr)}')
+            cur_state = state_iter.run_until(start_addr)
+    except Exception as e:
+        if start_addr:
+            raise Exception(f'Unable to reach start address {hex(start_addr)}: {e}')
+        raise Exception(f'Unable to trace: {e}')
+
     # An online trace matching algorithm.
     while True:
         try:
             pc = cur_state.read_register('pc')
 
             while pc != strace[symb_i].addr:
+                info(f'PC {hex(pc)} does not match next symbolic reference {hex(strace[symb_i].addr)}')
+
                 next_i = find_index(strace[symb_i+1:], pc, lambda t: t.addr)
 
                 # Drop the concrete state if no address in the symbolic trace
                 # matches
                 if next_i is None:
-                    print(f'Warning: Dropping concrete state {hex(pc)}, as no'
-                          f' matching instruction can be found in the symbolic'
-                          f' reference trace.')
+                    warn(f'Dropping concrete state {hex(pc)}, as no'
+                         f' matching instruction can be found in the symbolic'
+                         f' reference trace.')
                     cur_state = next(state_iter)
                     pc = cur_state.read_register('pc')
                     continue
 
                 # Otherwise, jump to the next matching symbolic state
                 symb_i += next_i + 1
+                if symb_i >= len(strace):
+                    break
 
             assert(cur_state.read_register('pc') == strace[symb_i].addr)
+            info(f'Validating instruction at address {hex(pc)}')
             states.append(record_minimal_snapshot(
                 states[-1] if states else cur_state,
                 cur_state,
@@ -282,33 +321,42 @@ def collect_conc_trace(gdb: GDBServerStateIterator, \
             matched_transforms.append(strace[symb_i])
             cur_state = next(state_iter)
             symb_i += 1
+            if symb_i >= len(strace):
+                break
         except StopIteration:
+            # TODO: These two conditions may test for the same case
+            if stop_addr and pc != stop_addr:
+                raise Exception(f'QEMU stopped at {hex(pc)} before reaching the stop address'
+                                f' {hex(stop_addr)}')
+            if symb_i+1 < len(strace):
+                qemu_crash["crashed"] = True
+                qemu_crash["pc"] = strace[symb_i].addr
+                qemu_crash["ref"] = strace[symb_i]
+                qemu_crash["snap"] = states[-1]
             break
         except Exception as e:
             print(traceback.format_exc())
             raise e
 
+    # Note: this may occur when symbolic traces were gathered with a stop address
+    if symb_i >= len(strace):
+        warn(f'QEMU executed more states than native execution: {symb_i} vs {len(strace)-1}')
+        
     return states, matched_transforms
 
 def main():
-    prog = make_argparser()
-    prog.add_argument('hostname',
-                      help='The hostname at which to find the GDB server.')
-    prog.add_argument('port',
-                      type=int,
-                      help='The port at which to find the GDB server.')
-
-    args = prog.parse_args()
-
-    gdbserver_addr = 'localhost'
-    gdbserver_port = args.port
+    args = make_argparser().parse_args()
+    
+    logging_level = getattr(logging, args.error_level.upper(), logging.INFO)
+    logging.basicConfig(level=logging_level, force=True)
 
     try:
-        gdb_server = GDBServerStateIterator(gdbserver_addr, gdbserver_port)
-    except:
-        raise Exception(f'Unable to perform basic GDB setup')
+        gdb_server = GDBServerStateIterator(args.remote)
+    except Exception as e:
+        raise Exception(f'Unable to perform basic GDB setup: {e}')
 
     try:
+        executable: str | None = None
         if args.executable is None:
             executable = gdb_server.binary
         else:
@@ -317,39 +365,50 @@ def main():
         argv = []  # QEMU's GDB stub does not support 'info proc cmdline'
         envp = []  # Can't get the remote target's environment
         env = TraceEnvironment(executable, argv, envp, '?')
-    except:
-        raise Exception(f'Unable to create trace environment for executable {executable}')
+    except Exception as e:
+        raise Exception(f'Unable to create trace environment for executable {executable}: {e}')
 
     # Read pre-computed symbolic trace
     try:
         with open(args.symb_trace, 'r') as strace:
             symb_transforms = parser.parse_transformations(strace)
-    except:
-        raise Exception('Failed to parse state transformations from native trace')
+    except Exception as e:
+        raise Exception(f'Failed to parse state transformations from native trace: {e}')
 
     # Use symbolic trace to collect concrete trace from QEMU
     try:
         conc_states, matched_transforms = collect_conc_trace(
             gdb_server,
-            symb_transforms.states)
-    except:
-        raise Exception(f'Failed to collect concolic trace from QEMU')
+            symb_transforms.states,
+            symb_transforms.env.start_address,
+            symb_transforms.env.stop_address)
+    except Exception as e:
+        raise Exception(f'Failed to collect concolic trace from QEMU: {e}')
 
     # Verify and print result
     if not args.quiet:
         try:
             res = compare_symbolic(conc_states, matched_transforms)
+            if qemu_crash["crashed"]:
+                res.append({
+                    'pc': qemu_crash["pc"],
+                    'txl': None,
+                    'ref': qemu_crash["ref"],
+                    'errors': qemu_crash["errors"],
+                    'snap': qemu_crash["snap"],
+                })
             print_result(res, verbosity[args.error_level])
-        except:
-            raise Exception('Error occured when comparing with symbolic equations')
+        except Exception as e:
+            raise Exception(f'Error occurred when comparing with symbolic equations: {e}')
 
     if args.output:
         from focaccia.parser import serialize_snapshots
         try:
             with open(args.output, 'w') as file:
                 serialize_snapshots(Trace(conc_states, env), file)
-        except:
-            raise Exception(f'Unable to serialize snapshots to file {args.output}')
+        except Exception as e:
+            raise Exception(f'Unable to serialize snapshots to file {args.output}: {e}')
 
 if __name__ == "__main__":
     main()
+
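The `collect_conc_trace` changes above refine an online trace-matching algorithm: the stream of concrete PCs from QEMU is aligned against the addresses of the symbolic reference trace, skipping ahead where possible and dropping concrete states that have no symbolic counterpart. A self-contained sketch of the matching core (the traces below are invented examples; the real code matches against `SymbolicTransform.addr`):

```python
# Sketch of the online trace-matching loop used by collect_conc_trace.
# The concrete PC stream and symbolic address list are invented examples.

def find_index(seq, value, key):
    """Return the index of the first element whose key equals value, else None."""
    for i, item in enumerate(seq):
        if key(item) == value:
            return i
    return None

def match_traces(concrete_pcs: list[int],
                 symbolic_addrs: list[int]) -> list[tuple[int, int]]:
    matched = []
    symb_i = 0
    for pc in concrete_pcs:
        if symb_i >= len(symbolic_addrs):
            break  # symbolic reference trace exhausted
        if pc != symbolic_addrs[symb_i]:
            # Look ahead in the symbolic trace for a matching address
            next_i = find_index(symbolic_addrs[symb_i + 1:], pc, lambda a: a)
            if next_i is None:
                continue  # drop the unmatched concrete state
            symb_i += next_i + 1
        matched.append((pc, symb_i))
        symb_i += 1
    return matched

# QEMU executed 0x10, an extra 0x14 with no symbolic match, then 0x20.
print(match_traces([0x10, 0x14, 0x20], [0x10, 0x20, 0x30]))  # → [(16, 0), (32, 1)]
```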
diff --git a/src/focaccia/tools/capture_transforms.py b/src/focaccia/tools/capture_transforms.py
index 6ef0eaa..1208156 100755
--- a/src/focaccia/tools/capture_transforms.py
+++ b/src/focaccia/tools/capture_transforms.py
@@ -1,9 +1,11 @@
 #!/usr/bin/env python3
 
+import sys
 import argparse
+import logging
 
 from focaccia import parser, utils
-from focaccia.symbolic import collect_symbolic_trace
+from focaccia.symbolic import SymbolicTracer
 from focaccia.trace import TraceEnvironment
 
 def main():
@@ -20,12 +22,70 @@ def main():
                       default=False,
                       action='store_true',
                       help='Cross-validate symbolic equations with concrete values')
+    prog.add_argument('-r', '--remote',
+                      default=False,
+                      help='Remote target to trace (e.g. 127.0.0.1:12345)')
+    prog.add_argument('-l', '--deterministic-log',
+                      help='Path of the directory storing the deterministic log produced by RR')
+    prog.add_argument('--log-level',
+                      help='Set the logging level')
+    prog.add_argument('--force',
+                      default=False,
+                      action='store_true',
+                      help='Force Focaccia to continue tracing even when something goes wrong')
+    prog.add_argument('--debug',
+                      default=False,
+                      action='store_true',
+                      help='Capture transforms in debug mode to identify errors in Focaccia itself')
+    prog.add_argument('--start-address',
+                      default=None,
+                      type=utils.to_int,
+                      help='Set a starting address from which to collect the symbolic trace')
+    prog.add_argument('--stop-address',
+                      default=None,
+                      type=utils.to_int,
+                      help='Set a final address up until which to collect the symbolic trace')
+    prog.add_argument('--insn-time-limit',
+                      default=None,
+                      type=utils.to_num,
+                      help='Set a time limit for executing an instruction symbolically; skip'
+                           ' the instruction when the limit is exceeded')
     args = prog.parse_args()
 
-    env = TraceEnvironment(args.binary, args.args, args.cross_validate, utils.get_envp())
-    trace = collect_symbolic_trace(env, None)
+    if args.debug:
+        logging.basicConfig(level=logging.DEBUG) # may be overridden by --log-level
+
+    # Set default logging level
+    if args.log_level:
+        level = getattr(logging, args.log_level.upper(), logging.INFO)
+        logging.basicConfig(level=level, force=True)
+    else:
+        logging.basicConfig(level=logging.INFO)
+
+    detlog = None
+    if args.deterministic_log:
+        from focaccia.deterministic import DeterministicLog
+        detlog = DeterministicLog(args.deterministic_log)
+    else:
+        class NullDeterministicLog:
+            def __init__(self): pass
+            def events_file(self): return None
+            def tasks_file(self): return None
+            def mmaps_file(self): return None
+            def events(self): return []
+            def tasks(self): return []
+            def mmaps(self): return []
+        detlog = NullDeterministicLog()
+
+    env = TraceEnvironment(args.binary, args.args, utils.get_envp(), 
+                           nondeterminism_log=detlog,
+                           start_address=args.start_address,
+                           stop_address=args.stop_address)
+    tracer = SymbolicTracer(env, remote=args.remote, cross_validate=args.debug,
+                            force=args.force)
+
+    trace = tracer.trace(time_limit=args.insn_time_limit)
+
     with open(args.output, 'w') as file:
         parser.serialize_transformations(trace, file)
 
-if __name__ == "__main__":
-    main()
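The inline `NullDeterministicLog` above is a null-object: when no RR log is given, callers can still iterate `detlog.events()` without sprinkling `None` checks. A minimal sketch of the pattern (`count_events` is a hypothetical caller, not a Focaccia function):

```python
# Null-object sketch: the stub mirrors the DeterministicLog interface so
# callers never have to branch on `detlog is None`. count_events is an
# invented example caller.

class NullDeterministicLog:
    def events(self) -> list: return []
    def tasks(self) -> list: return []
    def mmaps(self) -> list: return []

def count_events(detlog) -> int:
    # Works identically for a real log and for the null object.
    return len(detlog.events())

print(count_events(NullDeterministicLog()))  # → 0
```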
diff --git a/src/focaccia/tools/validate_qemu.py b/src/focaccia/tools/validate_qemu.py
index 2b7e65c..e834a6d 100755
--- a/src/focaccia/tools/validate_qemu.py
+++ b/src/focaccia/tools/validate_qemu.py
@@ -6,6 +6,8 @@ Spawn GDB, connect to QEMU's GDB server, and read test states from that.
 We need two scripts (this one and the primary `qemu_tool.py`) because we can't
 pass arguments to scripts executed via `gdb -x <script>`.
 
+Alternatively, we connect to the Focaccia QEMU plugin when a socket is given.
+
 This script (`validate_qemu.py`) is the one the user interfaces with. It
 eventually calls `execv` to spawn a GDB process that calls the main
 `qemu_tool.py` script; `python validate_qemu.py` essentially behaves as if
@@ -16,11 +18,14 @@ necessary logic to pass them to `qemu_tool.py`.
 
 import os
 import sys
+import logging
 import argparse
 import sysconfig
 import subprocess
 
 from focaccia.compare import ErrorTypes
+from focaccia.arch import supported_architectures
+from focaccia.tools.validation_server import start_validation_server
 
 verbosity = {
     'info':    ErrorTypes.INFO,
@@ -50,6 +55,7 @@ memory, and stepping forward by single instructions.
                       action='store_true',
                       help='Don\'t print a verification result.')
     prog.add_argument('-o', '--output',
+                      type=str,
                       help='If specified with a file name, the recorded'
                            ' emulator states will be written to that file.')
     prog.add_argument('--error-level',
@@ -59,6 +65,23 @@ memory, and stepping forward by single instructions.
                       default=None,
                       help='The executable executed under QEMU, overrides the auto-detected' \
                             'executable')
+    prog.add_argument('--use-socket',
+                      type=str,
+                      nargs='?',
+                      const='/tmp/focaccia.sock',
+                      help='Use QEMU plugin interface given by socket instead of GDB')
+    prog.add_argument('--guest-arch',
+                      type=str,
+                      choices=supported_architectures.keys(),
+                      help='Architecture of the emulated guest'
+                           ' (only required when using --use-socket)')
+    prog.add_argument('--remote',
+                      type=str,
+                      help='The hostname:port pair at which to find a QEMU GDB server.')
+    prog.add_argument('--gdb', 
+                      type=str,
+                      default='gdb',
+                      help='GDB binary to invoke.')
     return prog
 
 def quoted(s: str) -> str:
@@ -71,50 +94,65 @@ def try_remove(l: list, v):
         pass
 
 def main():
-    prog = make_argparser()
-    prog.add_argument('--gdb', default='gdb',
-                      help='GDB binary to invoke.')
-    args = prog.parse_args()
-
-    script_dirname = os.path.dirname(__file__)
-    qemu_tool_path = os.path.join(script_dirname, '_qemu_tool.py')
-
-    # We have to remove all arguments we don't want to pass to the qemu tool
-    # manually here. Not nice, but what can you do..
-    argv = sys.argv
-    try_remove(argv, '--gdb')
-    try_remove(argv, args.gdb)
-
-    # Assemble the argv array passed to the qemu tool. GDB does not have a
-    # mechanism to pass arguments to a script that it executes, so we
-    # overwrite `sys.argv` manually before invoking the script.
-    argv_str = f'[{", ".join(quoted(a) for a in argv)}]'
-    path_str = f'[{", ".join(quoted(s) for s in sys.path)}]'
-
-    paths = sysconfig.get_paths()
-    candidates = [paths["purelib"], paths["platlib"]]
-    entries = [p for p in candidates if p and os.path.isdir(p)]
-    venv_site = entries[0]
+    argparser = make_argparser()
+    args = argparser.parse_args()
+
+    # Get environment
     env = os.environ.copy()
-    env["PYTHONPATH"] = ','.join([script_dirname, venv_site])
-
-    print(f"GDB started with Python Path: {env['PYTHONPATH']}")
-    gdb_cmd = [
-        args.gdb,
-        '-nx',  # Don't parse any .gdbinits
-        '--batch',
-        '-ex', f'py import sys',
-        '-ex', f'py sys.argv = {argv_str}',
-        '-ex', f'py sys.path = {path_str}',
-        "-ex", f"py import site; site.addsitedir({venv_site!r})",
-        "-ex", f"py import site; site.addsitedir({script_dirname!r})",
-        '-x', qemu_tool_path
-    ]
-    proc = subprocess.Popen(gdb_cmd, env=env)
-
-    ret = proc.wait()
-    exit(ret)
-
-if __name__ == "__main__":
-    main()
+
+    # Differentiate between the QEMU GDB server and QEMU plugin interfaces
+    if args.use_socket:
+        if not args.guest_arch:
+            argparser.error('--guest-arch is required when --use-socket is specified')
+
+        logging_level = getattr(logging, args.error_level.upper(), logging.INFO)
+        logging.basicConfig(level=logging_level, force=True)
+
+        # QEMU plugin interface
+        start_validation_server(args.symb_trace,
+                                args.output,
+                                args.use_socket,
+                                args.guest_arch,
+                                env,
+                                verbosity[args.error_level],
+                                args.quiet)
+    else:
+        # QEMU GDB interface
+        script_dirname = os.path.dirname(__file__)
+        qemu_tool_path = os.path.join(script_dirname, '_qemu_tool.py')
+
+        # We have to remove all arguments we don't want to pass to the qemu tool
+        # manually here. Not nice, but what can you do..
+        argv = sys.argv
+        try_remove(argv, '--gdb')
+        try_remove(argv, args.gdb)
+
+        # Assemble the argv array passed to the qemu tool. GDB does not have a
+        # mechanism to pass arguments to a script that it executes, so we
+        # overwrite `sys.argv` manually before invoking the script.
+        argv_str = f'[{", ".join(quoted(a) for a in argv)}]'
+        path_str = f'[{", ".join(quoted(s) for s in sys.path)}]'
+
+        paths = sysconfig.get_paths()
+        candidates = [paths["purelib"], paths["platlib"]]
+        entries = [p for p in candidates if p and os.path.isdir(p)]
+        venv_site = entries[0]
+        env["PYTHONPATH"] = ','.join([script_dirname, venv_site])
+
+        print(f"GDB started with Python Path: {env['PYTHONPATH']}")
+        gdb_cmd = [
+            args.gdb,
+            '-nx',  # Don't parse any .gdbinits
+            '--batch',
+            '-ex',  'py import sys',
+            '-ex', f'py sys.argv = {argv_str}',
+            '-ex', f'py sys.path = {path_str}',
+            "-ex", f'py import site; site.addsitedir({venv_site!r})',
+            "-ex", f'py import site; site.addsitedir({script_dirname!r})',
+            '-x', qemu_tool_path
+        ]
+        proc = subprocess.Popen(gdb_cmd, env=env)
+
+        ret = proc.wait()
+        exit(ret)
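The GDB branch above relies on a workaround worth spelling out: GDB has no mechanism to pass arguments to a script run with `gdb -x`, so the tool overwrites `sys.argv` via `-ex 'py …'` commands before the script executes. A reduced sketch of how that command line is assembled (the argv values are invented):

```python
# Sketch of smuggling arguments into a `gdb -x <script>` invocation by
# rewriting sys.argv with -ex commands. The argv contents are invented.

def quoted(s: str) -> str:
    # Wrap each argument in single quotes so it survives as a Python literal.
    return f"'{s}'"

argv = ['qemu_tool.py', '--error-level', 'info']
argv_str = f'[{", ".join(quoted(a) for a in argv)}]'

gdb_cmd = [
    'gdb',
    '-nx',        # don't parse any .gdbinits
    '--batch',
    '-ex',  'py import sys',
    '-ex', f'py sys.argv = {argv_str}',  # script sees these as its arguments
    '-x', 'qemu_tool.py',
]
print(argv_str)  # → ['qemu_tool.py', '--error-level', 'info']
```

This only works for arguments that are safe to embed as Python string literals, which is why the real tool strips GDB-only flags like `--gdb` from the forwarded argv first.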
 
diff --git a/src/focaccia/tools/validation_server.py b/src/focaccia/tools/validation_server.py
index b87048a..db33ff3 100755
--- a/src/focaccia/tools/validation_server.py
+++ b/src/focaccia/tools/validation_server.py
@@ -1,25 +1,26 @@
 #! /usr/bin/env python3
 
+import os
+import socket
+import struct
+import logging
 from typing import Iterable
 
 import focaccia.parser as parser
 from focaccia.arch import supported_architectures, Arch
-from focaccia.compare import compare_symbolic
-from focaccia.snapshot import ProgramState, ReadableProgramState, \
-                              RegisterAccessError, MemoryAccessError
+from focaccia.compare import compare_symbolic, ErrorTypes
+from focaccia.snapshot import ProgramState, RegisterAccessError, MemoryAccessError
 from focaccia.symbolic import SymbolicTransform, eval_symbol, ExprMem
-from focaccia.trace import Trace, TraceEnvironment
+from focaccia.trace import Trace
 from focaccia.utils import print_result
 
-from validate_qemu import make_argparser, verbosity
 
-import socket
-import struct
-import os
+logger = logging.getLogger('focaccia-qemu-validation-server')
+debug = logger.debug
+info = logger.info
+warn = logger.warning
 
 
-SOCK_PATH = '/tmp/focaccia.sock'
-
 def endian_fmt(endianness: str) -> str:
     if endianness == 'little':
         return '<'
@@ -124,9 +125,8 @@ class PluginProgramState(ProgramState):
                 raise StopIteration
 
             if len(resp) < 180:
-                print(f'Invalid response length: {len(resp)}')
-                print(f'Response: {resp}')
-                return 0
+                raise RegisterAccessError(reg, f'Invalid response length when reading {reg}: {len(resp)}'
+                                          f' for response {resp}')
 
             val, size = unmk_register(resp, self.arch.endianness)
 
@@ -283,7 +283,7 @@ def record_minimal_snapshot(prev_state: ProgramState,
                 regval = cur_state.read_register(regname)
                 out_state.set_register(regname, regval)
             except RegisterAccessError:
-                pass
+                out_state.set_register(regname, 0)
         for mem in mems:
             assert(mem.size % 8 == 0)
             addr = eval_symbol(mem.ptr, prev_state)
@@ -377,27 +377,20 @@ def collect_conc_trace(qemu: PluginStateIterator, \
     return states, matched_transforms
 
 
-def main():
-    prog = make_argparser()
-    prog.add_argument('--use-socket',
-                      required=True,
-                      type=str,
-                      help='Use QEMU Plugin interface at <socket> instead of GDB')
-    prog.add_argument('--guest_arch',
-                      required=True,
-                      type=str,
-                      help='Architecture of the guest being emulated. (Only required when using --use-socket)')
-
-    args = prog.parse_args()
-
+def start_validation_server(symb_trace: str,
+                            output: str,
+                            socket: str,
+                            guest_arch: str,
+                            env,
+                            verbosity: ErrorTypes,
+                            is_quiet: bool = False):
     # Read pre-computed symbolic trace
-    with open(args.symb_trace, 'r') as strace:
+    with open(symb_trace, 'r') as strace:
         symb_transforms = parser.parse_transformations(strace)
 
-    arch = supported_architectures.get(args.guest_arch)
-    sock_path = args.use_socket
+    arch = supported_architectures.get(guest_arch)
 
-    qemu = PluginStateIterator(sock_path, arch)
+    qemu = PluginStateIterator(socket, arch)
 
     # Use symbolic trace to collect concrete trace from QEMU
     conc_states, matched_transforms = collect_conc_trace(
@@ -405,15 +398,12 @@ def main():
         symb_transforms.states)
 
     # Verify and print result
-    if not args.quiet:
+    if not is_quiet:
         res = compare_symbolic(conc_states, matched_transforms)
-        print_result(res, verbosity[args.error_level])
+        print_result(res, verbosity)
 
-    if args.output:
+    if output:
         from focaccia.parser import serialize_snapshots
-        with open(args.output, 'w') as file:
+        with open(output, 'w') as file:
             serialize_snapshots(Trace(conc_states, env), file)
 
-if __name__ == "__main__":
-    main()
-
diff --git a/src/focaccia/trace.py b/src/focaccia/trace.py
index c72d90f..14c475b 100644
--- a/src/focaccia/trace.py
+++ b/src/focaccia/trace.py
@@ -10,14 +10,18 @@ class TraceEnvironment:
     def __init__(self,
                  binary: str,
                  argv: list[str],
-                 cross_validate: bool,
                  envp: list[str],
-                 binary_hash: str | None = None):
+                 binary_hash: str | None = None,
+                 nondeterminism_log = None,
+                 start_address: int | None = None,
+                 stop_address:  int | None = None):
         self.argv = argv
         self.envp = envp
         self.binary_name = binary
-        self.cross_validate = cross_validate
-        if binary_hash is None:
+        self.detlog = nondeterminism_log
+        self.start_address = start_address
+        self.stop_address = stop_address
+        if binary_hash is None and self.binary_name is not None:
             self.binary_hash = file_hash(binary)
         else:
             self.binary_hash = binary_hash
@@ -28,9 +32,11 @@ class TraceEnvironment:
         return cls(
             json['binary_name'],
             json['argv'],
-            json['cross_validate'],
             json['envp'],
             json['binary_hash'],
+            None,
+            json['start_address'],
+            json['stop_address']
         )
 
     def to_json(self) -> dict:
@@ -39,8 +45,9 @@ class TraceEnvironment:
             'binary_name': self.binary_name,
             'binary_hash': self.binary_hash,
             'argv': self.argv,
-            'cross_validate': self.cross_validate,
             'envp': self.envp,
+            'start_address': self.start_address,
+            'stop_address': self.stop_address
         }
 
     def __eq__(self, other: object) -> bool:
@@ -50,21 +57,21 @@ class TraceEnvironment:
         return self.binary_name == other.binary_name \
             and self.binary_hash == other.binary_hash \
             and self.argv == other.argv \
-            and self.cross_validate == other.cross_validate \
             and self.envp == other.envp
 
     def __repr__(self) -> str:
         return f'{self.binary_name} {" ".join(self.argv)}' \
                f'\n   bin-hash={self.binary_hash}' \
-               f'\n   options=cross-validate' \
-               f'\n   envp={repr(self.envp)}'
+               f'\n   envp={repr(self.envp)}' \
+               f'\n   start_address={self.start_address}' \
+               f'\n   stop_address={self.stop_address}'
 
 class Trace(Generic[T]):
     def __init__(self,
                  trace_states: list[T],
                  env: TraceEnvironment):
-        self.states = trace_states
         self.env = env
+        self.states = trace_states
 
     def __len__(self) -> int:
         return len(self.states)
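The reworked `TraceEnvironment` above serializes `start_address`/`stop_address` instead of `cross_validate`, and the nondeterminism log is deliberately not serialized. A minimal stand-alone sketch of that round trip (re-declaring a pared-down class for illustration rather than importing `focaccia.trace`, and taking `binary_hash` as given instead of recomputing it):

```python
class TraceEnvironment:
    """Pared-down stand-in mirroring the fields serialized above."""

    def __init__(self, binary, argv, envp, binary_hash=None,
                 nondeterminism_log=None,
                 start_address=None, stop_address=None):
        self.binary_name = binary
        self.argv = argv
        self.envp = envp
        self.binary_hash = binary_hash
        self.detlog = nondeterminism_log
        self.start_address = start_address
        self.stop_address = stop_address

    def to_json(self):
        return {
            'binary_name': self.binary_name,
            'binary_hash': self.binary_hash,
            'argv': self.argv,
            'envp': self.envp,
            'start_address': self.start_address,
            'stop_address': self.stop_address,
        }

    @classmethod
    def from_json(cls, json):
        # The nondeterminism log is not part of the JSON form,
        # so it always comes back as None after a round trip.
        return cls(json['binary_name'], json['argv'], json['envp'],
                   json['binary_hash'], None,
                   json['start_address'], json['stop_address'])

env = TraceEnvironment('/bin/true', [], [], 'deadbeef',
                       start_address=0x401000, stop_address=0x401fff)
restored = TraceEnvironment.from_json(env.to_json())
assert restored.start_address == 0x401000
assert restored.detlog is None
```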
diff --git a/src/focaccia/utils.py b/src/focaccia/utils.py
index c4f6a74..c648d41 100644
--- a/src/focaccia/utils.py
+++ b/src/focaccia/utils.py
@@ -1,9 +1,10 @@
 from __future__ import annotations
 
-import ctypes
 import os
-import shutil
 import sys
+import shutil
+import ctypes
+import signal
 from functools import total_ordering
 from hashlib import sha256
 
@@ -114,3 +115,31 @@ def print_result(result, min_severity: ErrorSeverity):
           f' (showing {min_severity} and higher).')
     print('#' * 60)
     print()
+
+def to_int(value: str) -> int:
+    return int(value, 0)
+
+def to_num(value: str) -> int | float:
+    try:
+        return int(value, 0)
+    except ValueError:
+        return float(value)
+
+class TimeoutError(Exception):
+    pass
+
+def timebound(timeout: int | float | None, func, *args, **kwargs):
+    if timeout is None:
+        return func(*args, **kwargs)
+
+    def _handle_timeout(signum, frame):
+        raise TimeoutError(f'Function exceeded {timeout} second limit')
+
+    old_handler = signal.signal(signal.SIGALRM, _handle_timeout)
+    signal.setitimer(signal.ITIMER_REAL, timeout)
+    try:
+        return func(*args, **kwargs)
+    finally:
+        signal.setitimer(signal.ITIMER_REAL, 0)
+        signal.signal(signal.SIGALRM, old_handler)
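The `timebound` helper added above bounds a call's wall-clock time with a real-timer alarm; this only works on the main thread of a Unix process, since `SIGALRM` is delivered there. A self-contained sketch mirroring the helper, with a usage example:

```python
import signal
import time

class TimeoutError(Exception):
    # Shadows the builtin, as in the helper above.
    pass

def timebound(timeout, func, *args, **kwargs):
    """Run func(*args, **kwargs), aborting with TimeoutError after
    `timeout` seconds (Unix, main thread only)."""
    if timeout is None:
        return func(*args, **kwargs)

    def _handle_timeout(signum, frame):
        raise TimeoutError(f'Function exceeded {timeout} second limit')

    # Install the alarm handler and arm a one-shot real-time timer.
    old_handler = signal.signal(signal.SIGALRM, _handle_timeout)
    signal.setitimer(signal.ITIMER_REAL, timeout)
    try:
        return func(*args, **kwargs)
    finally:
        # Disarm the timer and restore the previous handler even if
        # the call raised.
        signal.setitimer(signal.ITIMER_REAL, 0)
        signal.signal(signal.SIGALRM, old_handler)

# A fast call completes normally...
assert timebound(1.0, lambda: 40 + 2) == 42

# ...while a call that sleeps past the limit is interrupted.
try:
    timebound(0.1, time.sleep, 5)
except TimeoutError:
    pass
```

Restoring the old handler in the `finally` block matters: without it, a later unrelated `SIGALRM` would invoke the stale handler and raise a spurious `TimeoutError`.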
+
diff --git a/uv.lock b/uv.lock
index 767163d..a9ea1d1 100644
--- a/uv.lock
+++ b/uv.lock
@@ -12,7 +12,7 @@ supported-markers = [
 
 [[package]]
 name = "black"
-version = "25.1.0"
+version = "25.9.0"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "click", marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or (platform_machine == 'x86_64' and sys_platform == 'linux')" },
@@ -20,11 +20,29 @@ dependencies = [
     { name = "packaging", marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or (platform_machine == 'x86_64' and sys_platform == 'linux')" },
     { name = "pathspec", marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or (platform_machine == 'x86_64' and sys_platform == 'linux')" },
     { name = "platformdirs", marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or (platform_machine == 'x86_64' and sys_platform == 'linux')" },
+    { name = "pytokens", marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or (platform_machine == 'x86_64' and sys_platform == 'linux')" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/94/49/26a7b0f3f35da4b5a65f081943b7bcd22d7002f5f0fb8098ec1ff21cb6ef/black-25.1.0.tar.gz", hash = "sha256:33496d5cd1222ad73391352b4ae8da15253c5de89b93a80b3e2c8d9a19ec2666", size = 649449, upload-time = "2025-01-29T04:15:40.373Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/4b/43/20b5c90612d7bdb2bdbcceeb53d588acca3bb8f0e4c5d5c751a2c8fdd55a/black-25.9.0.tar.gz", hash = "sha256:0474bca9a0dd1b51791fcc507a4e02078a1c63f6d4e4ae5544b9848c7adfb619", size = 648393, upload-time = "2025-09-19T00:27:37.758Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/6f/22/b99efca33f1f3a1d2552c714b1e1b5ae92efac6c43e790ad539a163d1754/black-25.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3b48735872ec535027d979e8dcb20bf4f70b5ac75a8ea99f127c106a7d7aba9f", size = 1783816, upload-time = "2025-01-29T04:18:33.823Z" },
-    { url = "https://files.pythonhosted.org/packages/09/71/54e999902aed72baf26bca0d50781b01838251a462612966e9fc4891eadd/black-25.1.0-py3-none-any.whl", hash = "sha256:95e8176dae143ba9097f351d174fdaf0ccd29efb414b362ae3fd72bf0f710717", size = 207646, upload-time = "2025-01-29T04:15:38.082Z" },
+    { url = "https://files.pythonhosted.org/packages/84/67/6db6dff1ebc8965fd7661498aea0da5d7301074b85bba8606a28f47ede4d/black-25.9.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9101ee58ddc2442199a25cb648d46ba22cd580b00ca4b44234a324e3ec7a0f7e", size = 1655619, upload-time = "2025-09-19T00:30:49.241Z" },
+    { url = "https://files.pythonhosted.org/packages/1b/46/863c90dcd3f9d41b109b7f19032ae0db021f0b2a81482ba0a1e28c84de86/black-25.9.0-py3-none-any.whl", hash = "sha256:474b34c1342cdc157d307b56c4c65bce916480c4a8f6551fdc6bf9b486a7c4ae", size = 203363, upload-time = "2025-09-19T00:27:35.724Z" },
+]
+
+[[package]]
+name = "brotli"
+version = "1.1.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/2f/c2/f9e977608bdf958650638c3f1e28f85a1b075f075ebbe77db8555463787b/Brotli-1.1.0.tar.gz", hash = "sha256:81de08ac11bcb85841e440c13611c00b67d3bf82698314928d0b676362546724", size = 7372270, upload-time = "2023-09-07T14:05:41.643Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/e5/18/c18c32ecea41b6c0004e15606e274006366fe19436b6adccc1ae7b2e50c2/Brotli-1.1.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:70051525001750221daa10907c77830bc889cb6d865cc0b813d9db7fefc21451", size = 2906505, upload-time = "2023-09-07T14:04:01.327Z" },
+    { url = "https://files.pythonhosted.org/packages/08/c8/69ec0496b1ada7569b62d85893d928e865df29b90736558d6c98c2031208/Brotli-1.1.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7f4bf76817c14aa98cc6697ac02f3972cb8c3da93e9ef16b9c66573a68014f91", size = 2944152, upload-time = "2023-09-07T14:04:03.033Z" },
+    { url = "https://files.pythonhosted.org/packages/ab/fb/0517cea182219d6768113a38167ef6d4eb157a033178cc938033a552ed6d/Brotli-1.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d0c5516f0aed654134a2fc936325cc2e642f8a0e096d075209672eb321cff408", size = 2919252, upload-time = "2023-09-07T14:04:04.675Z" },
+    { url = "https://files.pythonhosted.org/packages/55/ac/bd280708d9c5ebdbf9de01459e625a3e3803cce0784f47d633562cf40e83/Brotli-1.1.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:4ed11165dd45ce798d99a136808a794a748d5dc38511303239d4e2363c0695dc", size = 2914304, upload-time = "2023-09-07T14:04:08.668Z" },
+    { url = "https://files.pythonhosted.org/packages/c7/4e/91b8256dfe99c407f174924b65a01f5305e303f486cc7a2e8a5d43c8bec3/Brotli-1.1.0-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:7e4c4629ddad63006efa0ef968c8e4751c5868ff0b1c5c40f76524e894c50248", size = 2938751, upload-time = "2023-09-07T14:04:12.875Z" },
+    { url = "https://files.pythonhosted.org/packages/5a/a6/e2a39a5d3b412938362bbbeba5af904092bf3f95b867b4a3eb856104074e/Brotli-1.1.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:861bf317735688269936f755fa136a99d1ed526883859f86e41a5d43c61d8966", size = 2933757, upload-time = "2023-09-07T14:04:14.551Z" },
+    { url = "https://files.pythonhosted.org/packages/13/f0/358354786280a509482e0e77c1a5459e439766597d280f28cb097642fc26/Brotli-1.1.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:87a3044c3a35055527ac75e419dfa9f4f3667a1e887ee80360589eb8c90aabb9", size = 2936146, upload-time = "2024-10-18T12:32:27.257Z" },
+    { url = "https://files.pythonhosted.org/packages/ad/cf/0eaa0585c4077d3c2d1edf322d8e97aabf317941d3a72d7b3ad8bce004b0/Brotli-1.1.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:ca63e1890ede90b2e4454f9a65135a4d387a4585ff8282bb72964fab893f2111", size = 3035102, upload-time = "2024-10-18T12:32:31.371Z" },
+    { url = "https://files.pythonhosted.org/packages/d8/63/1c1585b2aa554fe6dbce30f0c18bdbc877fa9a1bf5ff17677d9cca0ac122/Brotli-1.1.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:e79e6520141d792237c70bcd7a3b122d00f2613769ae0cb61c52e89fd3443839", size = 2930029, upload-time = "2024-10-18T12:32:33.293Z" },
 ]
 
 [[package]]
@@ -46,11 +64,11 @@ wheels = [
 
 [[package]]
 name = "click"
-version = "8.2.1"
+version = "8.3.0"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/60/6c/8ca2efa64cf75a977a0d7fac081354553ebe483345c734fb6b6515d96bbc/click-8.2.1.tar.gz", hash = "sha256:27c491cc05d968d271d5a1db13e3b5a184636d9d930f148c50b038f0d0646202", size = 286342, upload-time = "2025-05-20T23:19:49.832Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/46/61/de6cd827efad202d7057d93e0fed9294b96952e188f7384832791c7b2254/click-8.3.0.tar.gz", hash = "sha256:e7b8232224eba16f4ebe410c25ced9f7875cb5f3263ffc93cc3e8da705e229c4", size = 276943, upload-time = "2025-09-18T17:32:23.696Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/85/32/10bb5764d90a8eee674e9dc6f4db6a0ab47c8c4d0d83c27f7c39ac415a4d/click-8.2.1-py3-none-any.whl", hash = "sha256:61a3265b914e850b85317d0b3109c7f8cd35a670f963866005d6ef1d5175a12b", size = 102215, upload-time = "2025-05-20T23:19:47.796Z" },
+    { url = "https://files.pythonhosted.org/packages/db/d3/9dcc0f5797f070ec8edf30fbadfb200e71d9db6b84d211e3b2085a7589a0/click-8.3.0-py3-none-any.whl", hash = "sha256:9b9f285302c6e3064f4330c05f05b81945b2a39544279343e6e7c5f27a9baddc", size = 107295, upload-time = "2025-09-18T17:32:22.42Z" },
 ]
 
 [[package]]
@@ -66,9 +84,11 @@ name = "focaccia"
 version = "0.1.0"
 source = { editable = "." }
 dependencies = [
+    { name = "brotli", marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or (platform_machine == 'x86_64' and sys_platform == 'linux')" },
     { name = "cffi", marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or (platform_machine == 'x86_64' and sys_platform == 'linux')" },
     { name = "cpuid", marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or (platform_machine == 'x86_64' and sys_platform == 'linux')" },
     { name = "miasm", marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or (platform_machine == 'x86_64' and sys_platform == 'linux')" },
+    { name = "pycapnp", marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or (platform_machine == 'x86_64' and sys_platform == 'linux')" },
     { name = "setuptools", marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or (platform_machine == 'x86_64' and sys_platform == 'linux')" },
 ]
 
@@ -83,9 +103,11 @@ dev = [
 [package.metadata]
 requires-dist = [
     { name = "black", marker = "extra == 'dev'" },
+    { name = "brotli" },
     { name = "cffi" },
     { name = "cpuid", git = "https://github.com/taugoust/cpuid.py.git?rev=master" },
     { name = "miasm", directory = "miasm" },
+    { name = "pycapnp" },
     { name = "pyright", marker = "extra == 'dev'" },
     { name = "pytest", marker = "extra == 'dev'" },
     { name = "ruff", marker = "extra == 'dev'" },
@@ -104,11 +126,11 @@ wheels = [
 
 [[package]]
 name = "iniconfig"
-version = "2.1.0"
+version = "2.3.0"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/f2/97/ebf4da567aa6827c909642694d71c9fcf53e5b504f2d96afea02718862f3/iniconfig-2.1.0.tar.gz", hash = "sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7", size = 4793, upload-time = "2025-03-19T20:09:59.721Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/2c/e1/e6716421ea10d38022b952c159d5161ca1193197fb744506875fbb87ea7b/iniconfig-2.1.0-py3-none-any.whl", hash = "sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760", size = 6050, upload-time = "2025-03-19T20:10:01.071Z" },
+    { url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" },
 ]
 
 [[package]]
@@ -169,11 +191,11 @@ wheels = [
 
 [[package]]
 name = "platformdirs"
-version = "4.4.0"
+version = "4.5.0"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/23/e8/21db9c9987b0e728855bd57bff6984f67952bea55d6f75e055c46b5383e8/platformdirs-4.4.0.tar.gz", hash = "sha256:ca753cf4d81dc309bc67b0ea38fd15dc97bc30ce419a7f58d13eb3bf14c4febf", size = 21634, upload-time = "2025-08-26T14:32:04.268Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/61/33/9611380c2bdb1225fdef633e2a9610622310fed35ab11dac9620972ee088/platformdirs-4.5.0.tar.gz", hash = "sha256:70ddccdd7c99fc5942e9fc25636a8b34d04c24b335100223152c2803e4063312", size = 21632, upload-time = "2025-10-08T17:44:48.791Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/40/4b/2028861e724d3bd36227adfa20d3fd24c3fc6d52032f4a93c133be5d17ce/platformdirs-4.4.0-py3-none-any.whl", hash = "sha256:abd01743f24e5287cd7a5db3752faf1a2d65353f38ec26d98e25a6db65958c85", size = 18654, upload-time = "2025-08-26T14:32:02.735Z" },
+    { url = "https://files.pythonhosted.org/packages/73/cb/ac7874b3e5d58441674fb70742e6c374b28b0c7cb988d37d991cde47166c/platformdirs-4.5.0-py3-none-any.whl", hash = "sha256:e578a81bb873cbb89a41fcc904c7ef523cc18284b7e3b3ccf06aca1403b7ebd3", size = 18651, upload-time = "2025-10-08T17:44:47.223Z" },
 ]
 
 [[package]]
@@ -186,6 +208,22 @@ wheels = [
 ]
 
 [[package]]
+name = "pycapnp"
+version = "2.2.1"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/b2/ae/60ab226053f34c59c1d8be77ccfe28db4281796d8640cfcaf9bfcf235189/pycapnp-2.2.1.tar.gz", hash = "sha256:a76432667bfb1e56b69fcc5bca4be7c72e1ac0f6ab57583857dc5d5293045451", size = 709104, upload-time = "2025-10-23T14:39:31.283Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/ef/0f/d4e2e67ee42858f9c675a640a6d427e40a10d005c3de109b3d6cbaa9d3c9/pycapnp-2.2.1-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:c41e93efec53f5adc18f72dd4f2f4790c4f28d907155c5a8975fdc073a752edb", size = 5301839, upload-time = "2025-10-23T14:38:02.602Z" },
+    { url = "https://files.pythonhosted.org/packages/91/51/b7029514b16a4668c1fa45cde1870531e325897e06e8b60655e7ff45c891/pycapnp-2.2.1-cp312-cp312-manylinux_2_28_ppc64le.whl", hash = "sha256:40548abb12ce7441945d6d5c2da3c72e789eaf5e8eddc374d0450191a8f55c98", size = 5671773, upload-time = "2025-10-23T14:38:06.307Z" },
+    { url = "https://files.pythonhosted.org/packages/e1/1e/324657e0aaf68cb5a055f1685f2386321349f8290cc81b31244654242b8c/pycapnp-2.2.1-cp312-cp312-manylinux_2_28_s390x.whl", hash = "sha256:95046f1069237fcc07c72b1a4604916e338ea54c8fe411bdd7cc09f1ded9aed4", size = 5807994, upload-time = "2025-10-23T14:38:07.762Z" },
+    { url = "https://files.pythonhosted.org/packages/ac/4c/3fa5a6b13123ede108d552ee777d2cf7c4da3dd4e680f17501af703401ef/pycapnp-2.2.1-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:66714dbca707a4cdd5e60f97cd0f3d2c88ad3c56b05a42b6887397ccdc0ee5b1", size = 5530885, upload-time = "2025-10-23T14:38:10.559Z" },
+    { url = "https://files.pythonhosted.org/packages/b4/ed/dd089607dfd7aff2e4585355908d8728cf020ebb76986306f7102b3dd974/pycapnp-2.2.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:cea2543ad9edc15d22f4449074d7d729a10521c164a5a3c33cfbd37bc083606a", size = 6156151, upload-time = "2025-10-23T14:38:12.079Z" },
+    { url = "https://files.pythonhosted.org/packages/c9/09/ce66e5f488022731412b9b11131381e64f807e1a9ed4024abcc05db68e76/pycapnp-2.2.1-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:ec778ae022e5330f38cbd2b28f8f77c3687ea65432c3d0d1c51272976d49c904", size = 6606665, upload-time = "2025-10-23T14:38:16.009Z" },
+    { url = "https://files.pythonhosted.org/packages/d7/aa/a79190132b7e0d43525f5d591818059860f72986ba7b8c223774a92c3926/pycapnp-2.2.1-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:eb4d87aad63ce63fd84a205af691098459cc1b40ffabe9c4cba5ab49c6b3ad47", size = 6711383, upload-time = "2025-10-23T14:38:17.698Z" },
+    { url = "https://files.pythonhosted.org/packages/7a/90/cefa1cf873eb4523b25674bf9c5458c28d8df72e086d7aa66c2a289442bd/pycapnp-2.2.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:9340463c40d83c9fcc45d8706692b04f604dcd729a6b8d802700dd2a0a425281", size = 6452073, upload-time = "2025-10-23T14:38:19.177Z" },
+]
+
+[[package]]
 name = "pycparser"
 version = "2.23"
 source = { registry = "https://pypi.org/simple" }
@@ -205,29 +243,29 @@ wheels = [
 
 [[package]]
 name = "pyparsing"
-version = "3.2.3"
+version = "3.2.5"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/bb/22/f1129e69d94ffff626bdb5c835506b3a5b4f3d070f17ea295e12c2c6f60f/pyparsing-3.2.3.tar.gz", hash = "sha256:b9c13f1ab8b3b542f72e28f634bad4de758ab3ce4546e4301970ad6fa77c38be", size = 1088608, upload-time = "2025-03-25T05:01:28.114Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/f2/a5/181488fc2b9d093e3972d2a472855aae8a03f000592dbfce716a512b3359/pyparsing-3.2.5.tar.gz", hash = "sha256:2df8d5b7b2802ef88e8d016a2eb9c7aeaa923529cd251ed0fe4608275d4105b6", size = 1099274, upload-time = "2025-09-21T04:11:06.277Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/05/e7/df2285f3d08fee213f2d041540fa4fc9ca6c2d44cf36d3a035bf2a8d2bcc/pyparsing-3.2.3-py3-none-any.whl", hash = "sha256:a749938e02d6fd0b59b356ca504a24982314bb090c383e3cf201c95ef7e2bfcf", size = 111120, upload-time = "2025-03-25T05:01:24.908Z" },
+    { url = "https://files.pythonhosted.org/packages/10/5e/1aa9a93198c6b64513c9d7752de7422c06402de6600a8767da1524f9570b/pyparsing-3.2.5-py3-none-any.whl", hash = "sha256:e38a4f02064cf41fe6593d328d0512495ad1f3d8a91c4f73fc401b3079a59a5e", size = 113890, upload-time = "2025-09-21T04:11:04.117Z" },
 ]
 
 [[package]]
 name = "pyright"
-version = "1.1.404"
+version = "1.1.407"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "nodeenv", marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or (platform_machine == 'x86_64' and sys_platform == 'linux')" },
     { name = "typing-extensions", marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or (platform_machine == 'x86_64' and sys_platform == 'linux')" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/e2/6e/026be64c43af681d5632722acd100b06d3d39f383ec382ff50a71a6d5bce/pyright-1.1.404.tar.gz", hash = "sha256:455e881a558ca6be9ecca0b30ce08aa78343ecc031d37a198ffa9a7a1abeb63e", size = 4065679, upload-time = "2025-08-20T18:46:14.029Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/a6/1b/0aa08ee42948b61745ac5b5b5ccaec4669e8884b53d31c8ec20b2fcd6b6f/pyright-1.1.407.tar.gz", hash = "sha256:099674dba5c10489832d4a4b2d302636152a9a42d317986c38474c76fe562262", size = 4122872, upload-time = "2025-10-24T23:17:15.145Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/84/30/89aa7f7d7a875bbb9a577d4b1dc5a3e404e3d2ae2657354808e905e358e0/pyright-1.1.404-py3-none-any.whl", hash = "sha256:c7b7ff1fdb7219c643079e4c3e7d4125f0dafcc19d253b47e898d130ea426419", size = 5902951, upload-time = "2025-08-20T18:46:12.096Z" },
+    { url = "https://files.pythonhosted.org/packages/dc/93/b69052907d032b00c40cb656d21438ec00b3a471733de137a3f65a49a0a0/pyright-1.1.407-py3-none-any.whl", hash = "sha256:6dd419f54fcc13f03b52285796d65e639786373f433e243f8b94cf93a7444d21", size = 5997008, upload-time = "2025-10-24T23:17:13.159Z" },
 ]
 
 [[package]]
 name = "pytest"
-version = "8.4.1"
+version = "8.4.2"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "iniconfig", marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or (platform_machine == 'x86_64' and sys_platform == 'linux')" },
@@ -235,27 +273,36 @@ dependencies = [
     { name = "pluggy", marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or (platform_machine == 'x86_64' and sys_platform == 'linux')" },
     { name = "pygments", marker = "(platform_machine == 'aarch64' and sys_platform == 'linux') or (platform_machine == 'x86_64' and sys_platform == 'linux')" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/08/ba/45911d754e8eba3d5a841a5ce61a65a685ff1798421ac054f85aa8747dfb/pytest-8.4.1.tar.gz", hash = "sha256:7c67fd69174877359ed9371ec3af8a3d2b04741818c51e5e99cc1742251fa93c", size = 1517714, upload-time = "2025-06-18T05:48:06.109Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/a3/5c/00a0e072241553e1a7496d638deababa67c5058571567b92a7eaa258397c/pytest-8.4.2.tar.gz", hash = "sha256:86c0d0b93306b961d58d62a4db4879f27fe25513d4b969df351abdddb3c30e01", size = 1519618, upload-time = "2025-09-04T14:34:22.711Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/a8/a4/20da314d277121d6534b3a980b29035dcd51e6744bd79075a6ce8fa4eb8d/pytest-8.4.2-py3-none-any.whl", hash = "sha256:872f880de3fc3a5bdc88a11b39c9710c3497a547cfa9320bc3c5e62fbf272e79", size = 365750, upload-time = "2025-09-04T14:34:20.226Z" },
+]
+
+[[package]]
+name = "pytokens"
+version = "0.2.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/d4/c2/dbadcdddb412a267585459142bfd7cc241e6276db69339353ae6e241ab2b/pytokens-0.2.0.tar.gz", hash = "sha256:532d6421364e5869ea57a9523bf385f02586d4662acbcc0342afd69511b4dd43", size = 15368, upload-time = "2025-10-15T08:02:42.738Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/29/16/c8a903f4c4dffe7a12843191437d7cd8e32751d5de349d45d3fe69544e87/pytest-8.4.1-py3-none-any.whl", hash = "sha256:539c70ba6fcead8e78eebbf1115e8b589e7565830d7d006a8723f19ac8a0afb7", size = 365474, upload-time = "2025-06-18T05:48:03.955Z" },
+    { url = "https://files.pythonhosted.org/packages/89/5a/c269ea6b348b6f2c32686635df89f32dbe05df1088dd4579302a6f8f99af/pytokens-0.2.0-py3-none-any.whl", hash = "sha256:74d4b318c67f4295c13782ddd9abcb7e297ec5630ad060eb90abf7ebbefe59f8", size = 12038, upload-time = "2025-10-15T08:02:41.694Z" },
 ]
 
 [[package]]
 name = "ruff"
-version = "0.12.10"
+version = "0.14.2"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/3b/eb/8c073deb376e46ae767f4961390d17545e8535921d2f65101720ed8bd434/ruff-0.12.10.tar.gz", hash = "sha256:189ab65149d11ea69a2d775343adf5f49bb2426fc4780f65ee33b423ad2e47f9", size = 5310076, upload-time = "2025-08-21T18:23:22.595Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/ee/34/8218a19b2055b80601e8fd201ec723c74c7fe1ca06d525a43ed07b6d8e85/ruff-0.14.2.tar.gz", hash = "sha256:98da787668f239313d9c902ca7c523fe11b8ec3f39345553a51b25abc4629c96", size = 5539663, upload-time = "2025-10-23T19:37:00.956Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/24/e7/560d049d15585d6c201f9eeacd2fd130def3741323e5ccf123786e0e3c95/ruff-0.12.10-py3-none-linux_armv6l.whl", hash = "sha256:8b593cb0fb55cc8692dac7b06deb29afda78c721c7ccfed22db941201b7b8f7b", size = 11935161, upload-time = "2025-08-21T18:22:26.965Z" },
-    { url = "https://files.pythonhosted.org/packages/12/ad/44f606d243f744a75adc432275217296095101f83f966842063d78eee2d3/ruff-0.12.10-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:822d9677b560f1fdeab69b89d1f444bf5459da4aa04e06e766cf0121771ab844", size = 12092276, upload-time = "2025-08-21T18:22:36.764Z" },
-    { url = "https://files.pythonhosted.org/packages/06/1f/ed6c265e199568010197909b25c896d66e4ef2c5e1c3808caf461f6f3579/ruff-0.12.10-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:37b4a64f4062a50c75019c61c7017ff598cb444984b638511f48539d3a1c98db", size = 11734700, upload-time = "2025-08-21T18:22:39.822Z" },
-    { url = "https://files.pythonhosted.org/packages/02/9e/39369e6ac7f2a1848f22fb0b00b690492f20811a1ac5c1fd1d2798329263/ruff-0.12.10-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:059e863ea3a9ade41407ad71c1de2badfbe01539117f38f763ba42a1206f7559", size = 14436642, upload-time = "2025-08-21T18:22:45.612Z" },
-    { url = "https://files.pythonhosted.org/packages/e3/03/5da8cad4b0d5242a936eb203b58318016db44f5c5d351b07e3f5e211bb89/ruff-0.12.10-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1bef6161e297c68908b7218fa6e0e93e99a286e5ed9653d4be71e687dff101cf", size = 13859107, upload-time = "2025-08-21T18:22:48.886Z" },
-    { url = "https://files.pythonhosted.org/packages/19/19/dd7273b69bf7f93a070c9cec9494a94048325ad18fdcf50114f07e6bf417/ruff-0.12.10-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4f1345fbf8fb0531cd722285b5f15af49b2932742fc96b633e883da8d841896b", size = 12886521, upload-time = "2025-08-21T18:22:51.567Z" },
-    { url = "https://files.pythonhosted.org/packages/c0/1d/b4207ec35e7babaee62c462769e77457e26eb853fbdc877af29417033333/ruff-0.12.10-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1f68433c4fbc63efbfa3ba5db31727db229fa4e61000f452c540474b03de52a9", size = 13097528, upload-time = "2025-08-21T18:22:54.609Z" },
-    { url = "https://files.pythonhosted.org/packages/12/8c/9e6660007fb10189ccb78a02b41691288038e51e4788bf49b0a60f740604/ruff-0.12.10-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:f3fc21178cd44c98142ae7590f42ddcb587b8e09a3b849cbc84edb62ee95de60", size = 11896759, upload-time = "2025-08-21T18:23:00.473Z" },
-    { url = "https://files.pythonhosted.org/packages/67/4c/6d092bb99ea9ea6ebda817a0e7ad886f42a58b4501a7e27cd97371d0ba54/ruff-0.12.10-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:7d1a4e0bdfafcd2e3e235ecf50bf0176f74dd37902f241588ae1f6c827a36c56", size = 11701463, upload-time = "2025-08-21T18:23:03.211Z" },
-    { url = "https://files.pythonhosted.org/packages/ad/37/63a9c788bbe0b0850611669ec6b8589838faf2f4f959647f2d3e320383ae/ruff-0.12.10-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:ae479e1a18b439c59138f066ae79cc0f3ee250712a873d00dbafadaad9481e5b", size = 13164356, upload-time = "2025-08-21T18:23:10.225Z" },
+    { url = "https://files.pythonhosted.org/packages/16/dd/23eb2db5ad9acae7c845700493b72d3ae214dce0b226f27df89216110f2b/ruff-0.14.2-py3-none-linux_armv6l.whl", hash = "sha256:7cbe4e593505bdec5884c2d0a4d791a90301bc23e49a6b1eb642dd85ef9c64f1", size = 12533390, upload-time = "2025-10-23T19:36:18.044Z" },
+    { url = "https://files.pythonhosted.org/packages/15/8b/c44cf7fe6e59ab24a9d939493a11030b503bdc2a16622cede8b7b1df0114/ruff-0.14.2-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3d0bbeffb8d9f4fccf7b5198d566d0bad99a9cb622f1fc3467af96cb8773c9e3", size = 12358285, upload-time = "2025-10-23T19:36:26.979Z" },
+    { url = "https://files.pythonhosted.org/packages/45/01/47701b26254267ef40369aea3acb62a7b23e921c27372d127e0f3af48092/ruff-0.14.2-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7047f0c5a713a401e43a88d36843d9c83a19c584e63d664474675620aaa634a8", size = 12303832, upload-time = "2025-10-23T19:36:29.192Z" },
+    { url = "https://files.pythonhosted.org/packages/27/4c/0860a79ce6fd4c709ac01173f76f929d53f59748d0dcdd662519835dae43/ruff-0.14.2-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:1c505b389e19c57a317cf4b42db824e2fca96ffb3d86766c1c9f8b96d32048a7", size = 14512649, upload-time = "2025-10-23T19:36:33.915Z" },
+    { url = "https://files.pythonhosted.org/packages/7f/7f/d365de998069720a3abfc250ddd876fc4b81a403a766c74ff9bde15b5378/ruff-0.14.2-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a307fc45ebd887b3f26b36d9326bb70bf69b01561950cdcc6c0bdf7bb8e0f7cc", size = 14088182, upload-time = "2025-10-23T19:36:36.983Z" },
+    { url = "https://files.pythonhosted.org/packages/6c/ea/d8e3e6b209162000a7be1faa41b0a0c16a133010311edc3329753cc6596a/ruff-0.14.2-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:61ae91a32c853172f832c2f40bd05fd69f491db7289fb85a9b941ebdd549781a", size = 13599516, upload-time = "2025-10-23T19:36:39.208Z" },
+    { url = "https://files.pythonhosted.org/packages/fa/ea/c7810322086db68989fb20a8d5221dd3b79e49e396b01badca07b433ab45/ruff-0.14.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc1967e40286f63ee23c615e8e7e98098dedc7301568bd88991f6e544d8ae096", size = 13272690, upload-time = "2025-10-23T19:36:41.453Z" },
+    { url = "https://files.pythonhosted.org/packages/59/a1/1f25f8301e13751c30895092485fada29076e5e14264bdacc37202e85d24/ruff-0.14.2-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:e681c5bc777de5af898decdcb6ba3321d0d466f4cb43c3e7cc2c3b4e7b843a05", size = 12266116, upload-time = "2025-10-23T19:36:45.625Z" },
+    { url = "https://files.pythonhosted.org/packages/5c/fa/0029bfc9ce16ae78164e6923ef392e5f173b793b26cc39aa1d8b366cf9dc/ruff-0.14.2-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:e21be42d72e224736f0c992cdb9959a2fa53c7e943b97ef5d081e13170e3ffc5", size = 12281345, upload-time = "2025-10-23T19:36:47.618Z" },
+    { url = "https://files.pythonhosted.org/packages/a4/7f/638f54b43f3d4e48c6a68062794e5b367ddac778051806b9e235dfb7aa81/ruff-0.14.2-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:5ca36b4cb4db3067a3b24444463ceea5565ea78b95fe9a07ca7cb7fd16948770", size = 13371610, upload-time = "2025-10-23T19:36:51.882Z" },
 ]
 
 [[package]]