OpenWrt on RPi:
Hacking with Frida (Part II)


You just fired up an old Linux-based appliance. Once upon a time it was widely deployed, but in recent years it has fallen by the wayside: Vendor support? Neglected. Software updates? Nonexistent.

Of course, you want to figure out how the underpinnings work, perform vulnerability research on it, or red-team it for a cybersecurity effort.

You have basic local access… but the root file system is largely stripped of any useful tools.

You’re thinking how nice it would be to have modern tools deployed on the system – GDB, Python, a better shell, and Frida – but you want them “on the side” and not affecting the running system.

I feel your pain, reader.

This is quite a common scenario. Commercial consumer devices and industrial counterparts most often have an end-of-life (EOL) with respect to support, software upgrades, and security patches. Device manufacturers could be acquired or go out of business, and their products stop getting the TLC they deserve. There could be CVE bulletins and n-days from years back that are still applicable to such systems, and they could be ripe for exploitation.

However, it could be challenging to build said software for such targets; you’d inevitably face considerations such as toolchain availability, libc compatibility (musl vs. glibc), and support for an aging CPU architecture.

If you’ve stumbled into a situation similar to the above, please read on, since this article will present a hands-on recipe with a potential solution. (Plus, an additional twist of hacking with Raspberry Pi.)

Don’t miss the prequel

This post assumes you’ve read my Part I: Porting Frida to an Unsupported Platform – be sure to start there if you haven’t yet!

The following write-up will augment Part I, providing you with a more comprehensive solution to the effort of running Frida on an unsupported target – while alleviating your EOL pains.

Let’s build a toolset for your hacking target

Over the course of this article we will produce the following artifacts: a custom cross toolchain, a full Python 3.13 distribution, a Frida build with Python bindings, and a deployable tarball packaging it all for the target.

I’ve used this scheme for a variety of targets, including iPhone/iOS, a proprietary network appliance, and more. Is it the most elegant or configurable framework? Maybe not, but it’s sufficient in the laid-out context.

Now, let’s turn our attention to RPi running OpenWrt.

OpenWrt, Raspberry Pi 1 + host

Continuing the journey started in Part I

In Part I, we used a virtualized 32-bit x86 OpenWrt target. I selected this platform for simplicity, as the main focus of the article was an exploration into porting Frida onto an unsupported target. Since it’s always more fun to run on dedicated hardware, my initial idea for Part II meant procuring a vintage DD-WRT (or similar) device from eBay.

Eventually, I decided to make use of yet another somewhat contrived target: the Raspberry Pi 1. It’s cheap, and very representative of the hardware you might find inside an old network appliance. Also, the RPi Broadcom chipset is supported by the same OpenWrt version used in Part I.

Just to be clear, for the purposes of this article, please view the RPi target as if it were some sort of obsolete appliance – abandoned in its current state, with no vendor updates in sight.

Homage to the venerable ARM1176JZF-S

Raspberry Pi 1 sports the Broadcom BCM2835 chip (of the BCM2708 family), which implements the ARM1176JZF-S core – a real celebrity in ARM lore. While not the first MMU-equipped ARM core, it was the one that launched ARM into ubiquity, having been used in the original 2007 iPhone via the Samsung S5L8900 chipset.

OpenWrt and host

Accessing the OpenWrt download archives, you’ll find the same 17.01.2 version used in Part I available for RPi 1.

Download the OpenWrt image

And for reference, I use Ubuntu 24.04 as the host platform throughout this exercise.

More #FridaGoals

Specifically, our goals are to provide the latest Frida with its Python bindings, plus supporting tools (Python, GDB, a better shell) deployed “on the side” without affecting the running system.

Firing up the RPi

If you make use of the available USB ports with ethernet adapters, the RPi could actually serve as a home router.

Normally, you might run Raspberry Pi OS, but naturally we want to run the identified old OpenWrt software. Let’s start by downloading the image here. Plug a microSD card into your host computer (use a USB adapter, if necessary) and pinpoint the device node:

user@asus:~/dev/blog/archive$ lsblk
NAME    	MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS

sdb       	8:16   0   3.6T  0 disk
└─sdb1    	8:17   0   3.6T  0 part /media/quadone
sdc       	8:32   1  29.8G  0 disk
└─sdc1    	8:33   1  29.8G  0 part
nvme1n1 	259:0	0   1.8T  0 disk

In my case, the device is /dev/sdc (naturally, it will differ on your host). Now, let’s write the downloaded image onto the raw device, overwriting the existing partition table (see OpenWrt instructions):

user@asus:~/dev/blog/archive$ gunzip -c \
lede-17.01.2-brcm2708-bcm2708-rpi-ext4-sdcard.img.gz | \
sudo dd of=/dev/sdc bs=2M; sync
0+9087 records in
0+9087 records out
297795584 bytes (298 MB, 284 MiB) copied, 63.6225 s, 4.7 MB/s
user@asus:~/dev/blog/archive$ sudo fdisk -l /dev/sdc
Disk /dev/sdc: 28.79 GiB, 30908350464 bytes, 60367872 sectors
...
Device 	Boot Start	End Sectors  Size Id Type
/dev/sdc1  * 	8192  49151   40960   20M  c W95 FAT32 (LBA)
/dev/sdc2   	57344 581631  524288  256M 83 Linux

The image is not even 300 MB, and thus is far smaller than our – and most – SD cards. Therefore, we want to expand both the partition and file system to utilize the full size.

To grow the second partition to the maximum size of the disk, use fdisk (or similar), and expand the filesystem with resize2fs. (See one of many tutorials on how to do this.)

For good measure, run fsck.ext4 and tune2fs -O^resize_inode on the second partition as well.
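Since these steps are destructive on a real device, you can rehearse the filesystem-growing part against a file-backed image first. Below, demo.img is a throwaway stand-in for the card’s second partition (the fdisk partition-table step has no equivalent on a bare file, so it’s omitted); on the actual card you’d point these tools at /dev/sdX2:

```shell
# Throwaway file-backed stand-in -- nothing here touches /dev/sdX
truncate -s 64M demo.img     # a small filesystem, as laid down by dd
mkfs.ext4 -q -F demo.img     # format it
truncate -s 128M demo.img    # the "card" is really twice as big
e2fsck -f -p demo.img        # resize2fs wants a clean, recently-checked fs
resize2fs demo.img           # grow the filesystem to fill the new size
tune2fs -l demo.img | grep 'Block count'
```

Once the block count matches the new size, the same sequence (with the real device node and an fdisk step first) applies to the SD card.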

At this point, it’s easiest to use the serial console for the initial network configuration, and we assume that the network you’re connecting to has DHCP service (again, see OpenWrt instructions):

root@LEDE:/# uci set network.lan.proto=dhcp                                	 
root@LEDE:/# uci commit                                                    	 
root@LEDE:/# /etc/init.d/network restart

Now you have SSH access (as in Part I). You can follow the same procedure for putting your key on the target and creating a host openwrt SSH alias.
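The Part I procedure boils down to an entry like this in ~/.ssh/config on the host (the address shown is a placeholder; use whatever lease DHCP handed your RPi):

```
# ~/.ssh/config -- "openwrt" alias for the RPi target
Host openwrt
    # Placeholder address: substitute your RPi's DHCP lease
    HostName 192.168.1.130
    User root
```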

One-stop GitHub repo

Wishing you had everything you need to replicate the builds and the experiments laid out in this article? Well, consider your wish granted, as it’s all available in our GitHub repository:

Visit our repo ⇢

Let’s clone the repo and take a look:

user@asus:~/dev/blog$ git clone git@github.com:Zetier/frida-musl.git
...
user@asus:~/dev/blog$ cd frida-musl; ll
total 144
...
drwxrwxr-x 2 user user 4096 Mar 18 22:07 cross
drwxrwxr-x 4 user user 4096 Mar 18 22:07 frida
-rw-rw-r-- 1 user user 1938 Mar 10 19:36 Makefile
drwxrwxr-x 4 user user 4096 Mar 18 22:07 python
...
drwxrwxr-x 2 user user 4096 Mar 18 22:06 tool-install

In essence, the folders contain makefiles to build the toolchain, libraries, and Frida tools. Two components I’d initially like to highlight are the cross toolchain and Frida itself.

Not surprisingly, you have to build the toolchain ahead of Frida – and make sure your environment’s PATH includes the toolchain’s installation bin directory.

Each component is self-contained within one directory.

Custom toolchain for ARMv6kz with musl libc

As we have elaborated on earlier, the toolchain must target the ARMv6kz architecture (tuned for the ARM1176JZF-S with hard-float VFPv2), link against a musl libc matching the target’s version, and use a modern GCC.

This toolchain is one of a kind, combining the latest GCC with quite an old musl libc, which targets a vintage 32-bit ARM architecture.

We’re leveraging this GitHub project, which facilitates satisfying all the above goals:

musl-cross-make

Our cross/makefile will clone this project once invoked, but before that, we’ll inspect the cross/config.mak, which is our custom toolchain configuration.

We observe the following:

TARGET = arm-linux-musleabihf
GCC_CONFIG += --with-arch=armv6kz --with-tune=arm1176jzf-s \
--with-fpu=vfpv2 --with-float=hard
OUTPUT = $(PWD)/../install
GCC_VER = 14.2.0
MUSL_VER = 1.1.16

And we define the following: TARGET, the hard-float ARM/musl triple; GCC_CONFIG, the tuning flags for the RPi 1’s ARM1176JZF-S core and VFPv2 FPU; OUTPUT, the toolchain installation directory; and GCC_VER/MUSL_VER, the compiler and libc versions.

Everything else stays default, as per the author’s recommendations. The only thing you might want to change is the OUTPUT installation directory. The build is complex due to multiple stages, but no need to worry about that. Depending on your host computer, the build could take anywhere between 5 to 30 minutes, so some patience is required.

Build as:

user@asus:~/dev/blog/frida-musl$ cd cross && make
...
Cloning into 'musl-cross-make'...
...
tar xf ../dnld/musl-cross-make-6f3701d08137496d5aac479e3a3977b5ae993c1f.tar.xz
cp -a config.mak musl-cross-make
cd musl-cross-make && make -j 20 >> 
../make-musl-cross-make-6f3701d08137496d5aac479e3a3977b5ae993c1f.log 2>&1
cd musl-cross-make && make install >> 
../make-musl-cross-make-6f3701d08137496d5aac479e3a3977b5ae993c1f.log 2>&1
user@asus:~/dev/blog/frida-musl/cross$

Congratulations, you have an oven-fresh custom toolchain installed under cross/install.

Finally, add the bin path to your environment profile, as per your distribution’s best practices. On Ubuntu 24.04, I usually add the below into /etc/profile.d/toolchain-custom.sh. That way, it’s added once you’ve begun your login session. For earlier Ubuntu, add the same line in ~/.profile.

export PATH="$PATH:<OUTPUT>/cross/install/bin"

For good measure, you can verify that your compiler functions correctly (as outlined in Part I).

Frida, Python + other select tools

Remember, this repository is not meant to compete with Buildroot, OpenEmbedded, or Yocto, which are comprehensive solutions to build a complete embedded Linux system, including kernel, boot loaders, and rootfs. The intention of this repository is only to build a few select tools for deployment onto an existing target, and to overcome the identified challenges building them.

Host prerequisites

For time’s sake, I won’t list the exact set of packages required on your host machine; just follow your distribution’s normal recommendations for setting it up for development activities. Some packages have unusual dependencies, such as texinfo for documentation. Therefore, if a package fails to build, just inspect the log file created and identify what went wrong. The solution, more often than not, is simply to install another host package.

That said, I’ll point out a few special prerequisites, which do not fall into the simple “missing package” category:

Python 3.13

The default Python installation on my host is version 3.12, but we’re building a full Python interpreter for version 3.13. Experience has shown that life becomes oh-so-much simpler if the host also has the target version of Python installed. Fortunately, on Ubuntu, this is easily accomplished by installing the deadsnakes PPA, followed by the 3.13 main interpreter and virtual environment.

$ sudo add-apt-repository ppa:deadsnakes/ppa
$ sudo apt-get update
$ sudo apt-get install python3.13 python3.13-venv

TL;DR:

Type make.

The recursive top-level Makefile will build all the libraries and tools in the dependency order required. Once done, the make install command will create a tarball for target deployment.

user@asus:~/dev/blog/frida-musl$ make && make install && ll pack*
-rw-rw-r-- 1 user user 212976753 Mar 18 23:11 pack-arm-82811fc-main.tar.gz
user@asus:~/dev/blog/frida-musl$

Slightly longer TL;DR

The top-level Makefile will build the libraries first, then the tools that depend on them, and finally Frida itself. It starts with a set of common libraries that popular tools frequently link against.

Subsequent packages built can now compile and link against these libraries.

The build

All components adhere to a few shared conventions: each fetches its sources into dnld/, logs its build output to a dedicated make-*.log file, and installs its results into the common staging area.

And holistically, the build proceeds as summarized in the TL;DR above.

Ever-increasing list of “good-to-haves”

During the course of putting this article together, I did add some tools not really necessary for the eventual goal of getting Frida running. However, they were useful for verifying functionality; among them is a modern shell (bash).

I left them in the build, as they might be useful for people trying to replicate this.

Python

As usual with everything related to Python, there are numerous ways and tools for accomplishing things. This is very much true when it comes to cross-compiling for an architecture other than the host’s. The main Python package itself, with its many C-language modules, might not cause too much head-scratching, but adding any platform-specific third-party modules surely will.

First, we’ll jump ahead a bit and talk about the Frida python prerequisites. Not only do we need a full distribution of Python for Frida, but we also need some third-party packages added. If you’re running on a normal Linux host machine, you’d normally add these packages to your Python environment with python -m pip install <package name>, and the package manager pip downloads and installs the appropriate package from PyPI.

With our use case, we don’t want to be dependent on network access, as we might conduct our research in an air-gapped environment. As such, we do want to include a number of third-party packages at build time, which satisfy the Frida requirements: colorama, prompt_toolkit, pygments, wcwidth, and websockets.

Therefore, the sources for our Python build will include the main Python tarball from https://www.python.org/ plus the above third-party libraries. We’ll build and install the main Python build to the staging area as usual and install the third-party libraries into the pack/lib/python3.13/site-packages directory. That way, they’re ready to go without having to pip install the wheels on the target. However, the built wheels are also copied into pack/share for good measure.
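The mechanics behind “ready to go without pip” are simple: anything dropped into a directory on Python’s module path imports with no package manager involved. A tiny stand-alone demo (hello_mod and the pack-demo directory are hypothetical stand-ins for the real site-packages payload):

```shell
# Drop a module straight into a site-packages-style dir -- no pip anywhere
mkdir -p pack-demo/site-packages
cat > pack-demo/site-packages/hello_mod.py <<'EOF'
GREETING = "hi from site-packages"
EOF
# The interpreter finds it purely via the module search path
PYTHONPATH=pack-demo/site-packages python3 -c 'import hello_mod; print(hello_mod.GREETING)'
```

On the target, pack/lib/python3.13/site-packages plays the same role, already on the bundled interpreter’s default path.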

Verifying the Python build

Building the main Python tarball is somewhat straightforward. However, please note that when prerequisite libraries are missing, the consequence is usually just that the corresponding standard-library modules aren’t built – and a module that fails to build most often fails silently. Therefore, it’s important to inspect the configuration and build log files and verify that (almost) all modules were built as well (some are teed up for deprecation, or irrelevant for your use case).

First, inspect the configuration log file configure_Python-3.13.2.log towards the end:

checking for stdlib extension module _multiprocessing... yes
...
checking for stdlib extension module _dbm... missing
checking for stdlib extension module _gdbm... yes
...
checking for stdlib extension module _tkinter... missing
...

Only two modules will be excluded, and neither is critical: _dbm is obsolete, and we won’t do any Tk scripting.

Next, inspect the build log make-default-Python-3.13.2.log towards the end:

The necessary bits to build these optional modules were not found:
_dbm                  	_tkinter                                    	 
To find the necessary bits, look in configure.ac and config.log.

Checked 112 modules (33 built-in, 76 shared, 1 n/a on linux-arm, 0 disabled, 2 missing, 0 failed on import)

We’re all good; the log confirms that, as expected, only two modules were excluded from the build. (And be sure to check the final module count.)
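If you want to script this sanity check rather than eyeball it, a grep against the build log does the trick. The demo log below stands in for the real make-default-Python-3.13.2.log:

```shell
# Stand-in for the real build log, containing its summary line
cat > make-demo.log <<'EOF'
Checked 112 modules (33 built-in, 76 shared, 1 n/a on linux-arm, 0 disabled, 2 missing, 0 failed on import)
EOF
# Fail loudly if any module failed to import after the build
grep -q '0 failed on import' make-demo.log && echo "python build OK"
```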

Finally, we want to verify that the third-party modules were built successfully:

user@asus:~/dev/blog/frida-musl/python$ grep Success make-third-party.log
Successfully built colorama-0.4.6-py2.py3-none-any.whl
Successfully built prompt_toolkit-3.0.50-py3-none-any.whl
Successfully built pygments-2.19.1-py3-none-any.whl
Successfully built wcwidth-0.2.13-py2.py3-none-any.whl
Successfully built websockets-15.0.1-cp313-cp313-linux_x86_64.whl

Frida

And we save the most complicated tool build for last!

Much ground was already covered in Part I, and the same troubleshooting steps still apply here. Basically, we’ll augment the Part I build with full Python support. And please recall the additional module prerequisites that we’ve already taken care of in the Python environment. Note: Some of the patches required in Part I are no longer necessary – the good Frida author accepted my pull requests, so they’re now in mainline.

The mission includes one target-specific fix-up: the frida-tools launcher scripts get host Python shebang paths baked in at build time, so a Makefile rule rewrites them to point at the target installation:

	@cd $(FRDDIR)/build/subprojects/frida-tools/scripts && \
	for f in $$(find . -type f -name "frida*"); do \
		echo Fixing python path in $$f; \
		sed -i -e 's|^#!/usr/bin/python.*$$|#!$(CMN_TGTINST)/bin/$(CMN_PYEXE)|' \
			-e 's|$(CMN_INSTALL)|$(CMN_TGTINST)|' \
			$$f; \
	done
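Outside of make, the same rewrite can be exercised stand-alone. Here, frida-demo-script and the /root/pack path are hypothetical stand-ins for the real launcher scripts and $(CMN_TGTINST):

```shell
# A fake launcher script with a host shebang baked in
printf '#!/usr/bin/python3\nprint("hi")\n' > frida-demo-script
# Rewrite the shebang to the target's interpreter, exactly as the Makefile rule does
sed -i 's|^#!/usr/bin/python.*$|#!/root/pack/bin/python3.13|' frida-demo-script
head -1 frida-demo-script
```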

Verifying the Frida build

After the make command finishes, you can inspect the log file make-frida-16.6.6.log and look for the following key items:

  Subprojects (for host machine)
	frida-core 	: YES
	frida-gum  	: YES
	frida-python   : YES
	frida-tools	: YES

We’re building everything:

...
[250/254] Generating subprojects/frida-core/inject/frida-inject with a custom command
[251/254] Generating subprojects/frida-core/portal/frida-portal with a custom command
[252/254] Generating subprojects/frida-core/server/frida-server with a custom command
[253/254] Compiling C object subprojects/frida-python/frida/_frida/_frida.abi3.so.p/extension.c.o
[254/254] Linking target subprojects/frida-python/frida/_frida/_frida.abi3.so

All targets were successfully built!

Tarball

As shown in the TL;DR above, invoking the top-level make install command generates a compressed tarball of the pack/tool-install folder.

user@asus:~/dev/blog/frida-musl$ make install && ll pack*
...
tar cf - pack | gzip > ../pack-arm-82811fc-main.tar.gz
make[1]: Leaving directory '/home/user/dev/blog/frida-musl/tool-install'
Tar took: 0:18.70 m:s
-rw-rw-r-- 1 user user 212976753 Mar 18 23:35 pack-arm-82811fc-main.tar.gz
user@asus:~/dev/blog/frida-musl$

The pack tarball filename includes the architecture, git short commit, and git branch for uniqueness purposes.
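That naming scheme can be sketched in a few lines of shell. The fallbacks are my own addition so the snippet runs even outside a git checkout:

```shell
# Compose a pack name like pack-arm-<commit>-<branch>.tar.gz
ARCH=arm
COMMIT=$(git rev-parse --short HEAD 2>/dev/null || echo unknown)
BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo unknown)
echo "pack-${ARCH}-${COMMIT}-${BRANCH}.tar.gz"
```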

At this point, I don’t seek to minimize the footprint in any meaningful way: I do not strip, all installed artifacts are included, even the static libraries, etc. Suffice it to say, there’s plenty of room to shrink the tarball – but that’s a mission for another day.

Run on RPi

So, we have our compressed tarball in hand. Now we just need to transfer it to the target, log into the target, and uncompress the fresh tarball.

user@asus:~/dev/blog/frida-musl$ scp pack-* openwrt:
pack-arm-82811fc-main.tar.gz                 	100%  203MB   1.9MB/s   01:44    
user@asus:~/dev/blog/frida-musl$ ssh openwrt
BusyBox v1.25.1 () built-in shell (ash)
...
root@LEDE:~# pwd
/root
root@LEDE:~# gunzip -c pack-arm-82811fc-main.tar.gz | tar xf -
root@LEDE:~# ll
...
drwxrwxr-x   10 1000 	1000      	4096 Mar 19 03:45 pack/
-rw-r--r--	1 root 	root 	212976753 Mar 19 03:40 pack-arm-82811fc-main.tar.gz
root@LEDE:~#

As we see, everything lands in its intended target location under /root/pack.

We’ve gotten acquainted with frida-inject while exploring its functionality in Part I. Therefore, we won’t dwell on it here, leaving it as a reader exercise.

What if Frida could make it even more convenient for the user to trace function invocations, program JavaScript manipulations, etc.?

Well, welcome the frida-tools utilities, such as frida (the REPL), frida-trace, and frida-ps.

The above commands and friends (see https://frida.re/docs/home) are all implemented in Python.

Let’s target the exact same daemon as in Part I – but this time, we make use of frida-trace:

root@LEDE:~# ./pack/bin/bash
root@LEDE:~# which frida-trace
/root/pack/bin/frida-trace
root@LEDE:~# frida-trace --decorate -i "read*" -i "send*" uhttpd
Instrumenting...                                                   	 
readahead: Auto-generated handler at "/root/__handlers__/libc.so/readahead.js"
readv: Auto-generated handler at "/root/__handlers__/libc.so/readv.js"
readdir64_r: Auto-generated handler at "/root/__handlers__/libc.so/readdir64_r.js"
readlinkat: Auto-generated handler at "/root/__handlers__/libc.so/readlinkat.js"
readdir64: Auto-generated handler at "/root/__handlers__/libc.so/readdir64.js"
readlink: Auto-generated handler at "/root/__handlers__/libc.so/readlink.js"
read: Auto-generated handler at "/root/__handlers__/libc.so/read.js"
sendfile: Auto-generated handler at "/root/__handlers__/libc.so/sendfile.js"
sendmmsg: Auto-generated handler at "/root/__handlers__/libc.so/sendmmsg.js"
send: Auto-generated handler at "/root/__handlers__/libc.so/send.js"
sendmsg: Auto-generated handler at "/root/__handlers__/libc.so/sendmsg.js"
sendto: Auto-generated handler at "/root/__handlers__/libc.so/sendto.js"
Started tracing 12 functions. Web UI available at http://localhost:33153/
       	/* TID 0x13d */
1102972 ms  read() [libc.so]
1102993 ms  read() [libc.so]
1104089 ms  read() [libc.so]

This is very interesting. Unpacking the above: frida-trace attached to the running uhttpd process, auto-generated a JavaScript handler stub for each of the 12 matched libc functions, and immediately began logging live read() calls from the daemon.

Let’s inspect the generated handler for read():

root@LEDE:~# cd __handlers__/libc.so/
root@LEDE:~/__handlers__/libc.so# cat read.js
/*
 * Auto-generated by Frida. Please modify to match the signature of read.
 * This stub is currently auto-generated from manpages when available.
 *
 * For full API reference, see: https://frida.re/docs/javascript-api/
 */

defineHandler({
  onEnter(log, args, state) {
	log('read() [libc.so]');
  },

  onLeave(log, retval, state) {
  }
});
root@LEDE:~/__handlers__/libc.so#

Now you can experiment with adding more elaborate tracing or live data manipulation. Your imagination is the only limit.
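For example, here is one (hypothetical) richer version of the handler, overwriting the generated stub so it logs the file descriptor on entry and the byte count on return; frida-trace picks up edits to handler files on the fly:

```shell
# Replace the auto-generated read() handler with a more informative one
mkdir -p __handlers__/libc.so
cat > __handlers__/libc.so/read.js <<'EOF'
defineHandler({
  onEnter(log, args, state) {
    // read(int fd, void *buf, size_t count)
    this.fd = args[0].toInt32();
    log('read(fd=' + this.fd + ') [libc.so]');
  },

  onLeave(log, retval, state) {
    log('  => returned ' + retval.toInt32() + ' bytes');
  }
});
EOF
```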

Mission Accomplished: Frida runs on RPi with OpenWrt

We succeeded in getting the latest version of Frida, with Python bindings, to run on a 20-year-old ARM core with musl-based system software. This project was not without challenges, package-dependency chasing, and the usual tinkering, but most of the temporary roadblocks were omitted from this write-up for length’s sake.

An interested reader should have no major problems replicating this build, as all the necessary build scripts reside in our public GitHub repository.

I’d personally like to extend my gratitude to Ole, the creator of Frida, for making a very interesting and useful piece of software publicly available.

The Raspberry Pi 1, with its ARM11 core, is by no means the first Linux-capable chip. ARM9TDMI and ARM9E(J)S implementations were already running Linux in the early 2000s – at a blistering speed of around 200 MHz.

Over to you…

It would be interesting to see whether any readers rise to the occasion of porting Frida & friends to the earliest ARM possible.

If so, forking my repository and making the necessary changes would be a good starting point. Off the top of my head, you’d need to change the toolchain build to target the ARMv4T or ARMv5TE architecture, respectively. Apart from that, the rest might just work. Since ARM9 chips have fewer resources, one potential problem could be RAM size, but again, there are many opportunities to minimize the storage and RAM footprint not explored in this article.

Let our team know how it goes if you venture down this route – we love hearing what readers learn when they go on an engineering adventure.

Illustration by Inkinetic Studios.
