Building for ARM with Docker – packaging libcamera-apps for Alpine on armv7

In this article we look at compiling applications for different CPU architectures by using Docker to emulate them on an x86 host, with a working example of packaging libcamera-apps for Alpine Linux on an armv7 device (a Raspberry Pi Zero) and an armv8 device (a Raspberry Pi Zero 2).

Background

Recently the availability of diverse mainstream CPU architectures has increased, moving beyond niche use-cases and servers to home computers, laptops, single-board computers (SBCs) like the hugely popular Raspberry Pi, and more.

ARM has become particularly popular in recent years, powering a wide range of devices from cheap embedded hardware to expensive, high-performance Apple Macs and servers.

While this variety of architectures brings many advantages, including lower costs and better performance per watt, it comes with a potential challenge: building applications that run on multiple architectures.

The Challenge

Many desktop computers and laptops (devices) are still “Intel-based”, using an x86 CPU architecture which has been dominant for decades.

In some languages, code is compiled to a native binary (aka application) that can be executed directly on the target device without a runtime. Programs written in lower-level languages like C and C++ are built into native binaries, whereas many higher-level languages like JavaScript, Java, C#, Python and Ruby rely on a runtime to execute.

The runtime-based languages still require a binary to execute them, and for these it is the runtime that must be compiled for the target architecture. With the higher-level languages, the code can be written once and deployed as-is to a target device, provided there is a suitable runtime for the device’s architecture.

With the lower-level languages, the applications need compiling directly for the target device architectures, which means multiple versions may need to be shipped when supporting a range of devices.
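
You can see this on any Linux machine: the file command reports the architecture a native binary was compiled for. On a typical x86 (64-bit) host the output looks something like this (exact details will vary by system):

❯ file /bin/ls
/bin/ls: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, ...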

It is often easiest to compile for a target architecture on a device with that architecture; however, this is not always practical. The Pi Zero is a low-spec device, which means intensive tasks can take a long time to complete or, in the worst case, exhaust the available resources (e.g. memory). It also has an older ARM architecture (armv7) which is unlikely to be found in more powerful, resource-rich devices.

In situations like this it can be helpful to build on a more powerful device, emulating the target device architecture, even though the emulation will incur a performance hit.

For a recent project we needed to compile an Alpine Linux package for the Pi Zero’s armv7 architecture, which was slow and tricky to do on the device itself due to its performance limitations and Alpine’s default diskless storage mode, where the available space is constrained by the device’s memory.

The Approach

The following steps have been tested with Docker 24 on an x86 (64-bit) host running Ubuntu 23.04.

Docker is now capable of emulating architectures other than that of the host device using QEMU.

QEMU is a free and open-source emulator. It emulates a computer’s processor through dynamic binary translation and provides a set of different hardware and device models for the machine, enabling it to run a variety of guest operating systems.

To check the supported platforms on your system run docker buildx ls:

❯ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS  BUILDKIT             PLATFORMS
default * docker                                      
  default default         running v0.11.6+0a15675913b7 linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/amd64/v4, linux/386

If you do not see ARM listed, as in the example above, then it’s likely that the QEMU dependencies are missing. They can be installed with the following command:

❯ sudo apt-get install qemu-system binfmt-support qemu-user-static
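
Alternatively, though not the route we took here, the QEMU handlers can be registered through Docker itself using the tonistiigi/binfmt image:

❯ docker run --privileged --rm tonistiigi/binfmt --install all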

Afterwards, Docker should report more platforms:

❯ docker buildx ls                                               
NAME/NODE DRIVER/ENDPOINT STATUS  BUILDKIT             PLATFORMS
default * docker                                      
  default default         running v0.11.6+0a15675913b7 linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/amd64/v4, linux/386, linux/arm64, linux/riscv64, linux/ppc64, linux/ppc64le, linux/s390x, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6

As the example above shows, QEMU enables building for a wide range of platforms, including but not limited to ARM.

Quick Tests

The following commands run Ubuntu under different ARM variants, using the uname command to report the architecture.

❯ docker run --rm -ti --platform linux/arm64 ubuntu:latest uname -m
aarch64

❯ docker run --rm -ti --platform linux/arm/v7 ubuntu:latest uname -m
armv7l

Anything we run in a container will execute under the specified platform, with QEMU emulating it as required.

Note that some examples show declaring the platform as part of the image name; this does still work, but also throws a warning:

❯ docker run --rm -t arm64v8/ubuntu uname -m
WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v4) and no specific platform was requested
aarch64
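
The same mechanism extends beyond docker run to image builds. As a quick illustration (myimage is a hypothetical tag, and a Dockerfile is assumed to exist in the current directory), buildx can target a non-native platform, or several at once if the builder supports multi-platform output (e.g. the docker-container driver):

❯ docker buildx build --platform linux/arm/v7 -t myimage:armv7 .

❯ docker buildx create --use
❯ docker buildx build --platform linux/arm/v7,linux/arm64 -t myimage:multi .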

Building libcamera-apps

The commands to build libcamera-apps can be run directly on a Raspberry Pi if it has enough resources, but we opted to run them on a more powerful host to speed up the compilation process.

The commands to build the package (on Alpine Linux) are as follows:

# Install/configure dependencies
apk update
apk add alpine-sdk apk-tools git libcamera libcamera-raspberrypi libcamera-tools
abuild-keygen -a -n

# Clone libcamera-apps-alpine
git clone https://github.com/wjtje/libcamera-apps-alpine.git
cd libcamera-apps-alpine

# Build libcamera-apps-alpine
abuild -r -F

These are also the commands we run in the container, wrapped in a script to ensure the artefact ends up in an accessible location; by default it would simply sit inside the container after the build and need to be copied out manually.

build.sh

This is the script to execute on the host; it runs the Docker container with the compile script. It also mounts a volume called out so that the artefact can be copied back to the host:

#!/bin/sh
set -e

# Work in the directory of the script
cd "$(dirname "$0")"

mkdir -p ./out

# Run the build in an emulated armv7 Alpine container; note that -v
# requires absolute host paths, hence $(pwd)
docker run --rm --name builder -i -t -v "$(pwd)/out:/out" -v "$(pwd)/entrypoint.sh:/entrypoint.sh" --entrypoint /entrypoint.sh --platform linux/arm/v7 alpine:latest

It also mounts a local script into the container called entrypoint.sh. As we’re not building an actual image here, just running a container from an existing base image, we want to be able to define our own entrypoint to run.
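
Since the intro also mentions the armv8-based Pi Zero 2, one small variation worth noting (our suggestion, not part of the original script) is to read the platform from an environment variable so the same script can target either device:

#!/bin/sh
set -e

# Work in the directory of the script
cd "$(dirname "$0")"

# Default to armv7 (Pi Zero); override with e.g.:
#   PLATFORM=linux/arm64 ./build.sh   (for the Pi Zero 2)
PLATFORM="${PLATFORM:-linux/arm/v7}"

mkdir -p ./out

docker run --rm --name builder -i -t -v "$(pwd)/out:/out" -v "$(pwd)/entrypoint.sh:/entrypoint.sh" --entrypoint /entrypoint.sh --platform "$PLATFORM" alpine:latest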

entrypoint.sh

This script runs inside the container. In addition to the build commands listed above, it includes a few safety checks (the if statements) and copies the artefact out to the volume mount at the end:

#!/bin/sh
set -e

# Ensure running in the container
if [ ! -f /entrypoint.sh ]; then
    echo "This script must be run in the container"
    exit 1
fi

# Ensure running on alpine
if [ ! -f /etc/alpine-release ]; then
    echo "This script must be run on alpine"
    exit 1
fi

echo "Installing/configuring dependencies..."
apk update
apk add alpine-sdk apk-tools git libcamera libcamera-raspberrypi libcamera-tools
abuild-keygen -a -n

echo "Cloning libcamera-apps-alpine..."
git clone https://github.com/wjtje/libcamera-apps-alpine.git
cd libcamera-apps-alpine

echo "Building libcamera-apps-alpine..."
abuild -r -F

echo "Copying artefacts to /out"
# The packages land under /root/packages/<repo>/<arch>/; plain sh does
# not expand ** recursively, so use find to locate them
find /root/packages -name 'libcamera-apps*.apk' -exec cp {} /out/ \;

echo "Done, your apk is in ./out!"

Finally

To run the whole build, simply execute build.sh:

❯ ./build.sh
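
If Docker reports a permission error when starting the container, the mounted entrypoint script is probably missing its executable bit; a quick chmod +x build.sh entrypoint.sh on the host fixes that.

Once the build completes, the .apk needs copying to the Pi. A brief sketch, assuming the Pi is reachable over SSH as root@pi-zero (a hypothetical hostname): because abuild -F signed the package with a key generated inside the throwaway container, the Pi will not recognise that key, so either copy the public key from the container into /etc/apk/keys/ on the Pi or (less securely) skip verification:

❯ scp out/libcamera-apps*.apk root@pi-zero:/tmp/
❯ ssh root@pi-zero apk add --allow-untrusted /tmp/libcamera-apps*.apk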

Conclusion

Docker with buildx and QEMU can provide a convenient way to run commands and compile code for other platform architectures from a single computer. While the example above may not always work in future, the principle remains, and we regularly use this approach to automate build processes for non-x86 target hardware for our customers.