Setting up headless Chrome in ARM64 containers

Google doesn't publish ARM64 Chrome. Chromium fills that gap on Debian-based ARM64 Linux systems, and any CDP automation library works identically against it. Pin the version, fix `/dev/shm`, work around the M113 CDP bind change with socat, and you've got a solid headless setup.

Running Chrome on ARM64 Linux Servers: docker-compose Setup and Pitfalls

Google does not publish an official linux/arm64 build of Chrome. There is no `google-chrome-stable` package for ARM64, and there never has been. Chromium fills that gap, and on Debian-based ARM64 hosts, `apt install chromium` works fine. The two browsers share the same DevTools Protocol surface, so any automation library that targets CDP works identically against Chromium.

Use Chromium, Not Chrome, on ARM64 Linux

The Debian `chromium` package is maintained and regularly updated. Alpine also ships a `chromium` package, but its musl libc causes silent failures with some Node.js automation libraries that link against glibc. Stick with `debian:bookworm-slim` as your base image. Alpine saves about 20 MB and costs hours of debugging. The maths is not in its favour.

Pin the version. Leaving the package unpinned means a container rebuild can silently pull a newer Chromium that breaks your automation. Find the current package version and lock it:

```dockerfile
FROM debian:bookworm-slim

RUN apt-get update && apt-get install -y --no-install-recommends \
        chromium=120.0.6099.216-1~deb12u1 \
        fonts-liberation \
        libnspr4 \
        libnss3 \
    && rm -rf /var/lib/apt/lists/*
```

Replace the version string with whatever `apt-cache policy chromium` returns on your target host before you write the Dockerfile. If the pinned version disappears from the repo, the build fails loudly. That is the correct behaviour.

/dev/shm and the Non-Root User

Chromium uses /dev/shm for shared memory between the browser process and its renderer processes. Docker sets /dev/shm to 64 MB by default. Chromium needs closer to 256 MB before it can render anything without crashing. Set it in your Compose file:

```yaml
services:
  chromium:
    image: your-chromium-image:latest
    shm_size: "256mb"
    user: "1001:1001"
```

Running as root inside a container is the easy path and the wrong one. Create a non-root user in the Dockerfile and make sure that user can write to /dev/shm. The `user:` key in Compose handles the runtime UID drop. Because --no-sandbox (covered below) removes Chromium's own sandbox, that non-root UID is doing real security work: together with Docker's default seccomp profile, it is what keeps the attack surface narrow.

dockerfile
RUN groupadd -r chrome && useradd -r -g chrome -G audio,video chrome
USER chrome

Headless Chrome ARM64 and the CDP Bind Change

From Chromium M113 onward, --remote-debugging-address=0.0.0.0 no longer works in headless mode. Chromium forces the CDP listener to bind on 127.0.0.1 regardless of what you pass. This breaks any setup that tries to connect to the CDP port directly from a separate container or from the host. The flag is silently ignored.

The fix is a socat proxy inside the Chromium container. Run socat as a sidecar process that forwards from 0.0.0.0:9222 to 127.0.0.1:9223, and start Chromium on port 9223:

```dockerfile
# The apt lists were removed in the earlier layer, so update again first
RUN apt-get update && apt-get install -y --no-install-recommends socat \
    && rm -rf /var/lib/apt/lists/*
```

Then in your entrypoint:

```bash
#!/bin/bash

chromium \
  --headless=new \
  --no-sandbox \
  --disable-gpu \
  --remote-debugging-port=9223 \
  --remote-debugging-address=127.0.0.1 &

socat TCP-LISTEN:9222,fork TCP:127.0.0.1:9223 &
wait
```

This keeps Chromium bound to loopback while exposing port 9222 for other containers to reach via the Docker Compose internal network.
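To see what the sidecar is doing, here is the same forward-proxy idea sketched in Python using only the standard library. This is an illustration of the mechanism, not a replacement for socat; the function and argument names are made up for the example, and the ports mirror the 9222/9223 split above.

```python
import socket
import threading

def forward(src, dst):
    # Copy bytes one way until the source closes, then half-close the peer.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def proxy(listen_host, listen_port, target_host, target_port):
    # Accept on listen_host:listen_port and relay each connection to the
    # target, the same job as `socat TCP-LISTEN:9222,fork TCP:127.0.0.1:9223`.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((listen_host, listen_port))
    server.listen()
    while True:
        client, _ = server.accept()
        upstream = socket.create_connection((target_host, target_port))
        threading.Thread(target=forward, args=(client, upstream), daemon=True).start()
        threading.Thread(target=forward, args=(upstream, client), daemon=True).start()
```

socat's `fork` option corresponds to the accept loop here: each incoming connection gets its own pair of relay threads, so multiple CDP clients can connect concurrently.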

Wiring It Up in docker-compose

Do not publish port 9222 to the host. If you are running monitoring dashboards or any other automation on an ARM server, keep the CDP port on the internal Compose network only:

```yaml
services:
  chromium:
    build: ./chromium
    shm_size: "256mb"
    user: "1001:1001"
    networks:
      - browser-net

  automation:
    image: your-automation-image:latest
    depends_on:
      chromium:
        condition: service_healthy
    networks:
      - browser-net

networks:
  browser-net:
    driver: bridge
```

The automation container reaches Chromium at http://chromium:9222. The port is never reachable from outside the Compose network.
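One wrinkle when dialing the port from another container: Chromium's DevTools HTTP server rejects requests whose Host header is a plain hostname rather than an IP address or localhost, so a bare request to http://chromium:9222/json/version can come back with a rejection instead of JSON. Overriding the Host header works around it. A minimal sketch using only the Python standard library; the `chromium` hostname, port 9222, and the function name are assumptions matching the Compose setup above.

```python
import json
from urllib.request import Request, urlopen

def fetch_version(host="chromium", port=9222):
    # Chromium only accepts Host headers that are an IP address or
    # localhost, so send "localhost" instead of the Compose service name.
    req = Request(
        f"http://{host}:{port}/json/version",
        headers={"Host": "localhost"},
    )
    with urlopen(req, timeout=5) as resp:
        return json.loads(resp.read())
```

The same header override applies to any tool: with curl, `curl -H "Host: localhost" http://chromium:9222/json/version` does the equivalent.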

Verifying the WebSocket Endpoint Before Connecting

Your automation script should not assume Chromium is ready the moment the container starts. Add a healthcheck that polls the /json/version endpoint (the check shells out to curl, so add curl to the image's apt-get install list):

```yaml
chromium:
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9222/json/version"]
    interval: 5s
    timeout: 3s
    retries: 5
    start_period: 10s
```

A 200 response from /json/version confirms the CDP WebSocket is live. The response body includes the webSocketDebuggerUrl, which your library should use directly rather than constructing the URL itself. On a cold ARM64 container start, Chromium typically takes 3 to 6 seconds before CDP responds. The start_period accounts for that.
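On the client side, the same idea translates into a poll-until-ready loop. A sketch under the same assumptions as the Compose files above (service name `chromium`, port 9222, standard library only; the function name is made up). One detail worth handling: the `webSocketDebuggerUrl` in the response echoes the Host header the server received, so the sketch rewrites its host part back to the address that was actually dialed before returning it.

```python
import json
import time
from urllib.error import URLError
from urllib.parse import urlsplit, urlunsplit
from urllib.request import Request, urlopen

def wait_for_cdp(host="chromium", port=9222, timeout=30.0):
    # Poll /json/version until CDP answers, then return the browser-level
    # WebSocket URL, pointed at the address we can actually reach.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # Chromium rejects non-localhost Host headers, hence the override.
            req = Request(
                f"http://{host}:{port}/json/version",
                headers={"Host": "localhost"},
            )
            with urlopen(req, timeout=3) as resp:
                ws_url = json.loads(resp.read())["webSocketDebuggerUrl"]
            # The URL reflects the Host header we sent; swap in the real address.
            parts = urlsplit(ws_url)
            return urlunsplit(parts._replace(netloc=f"{host}:{port}"))
        except (URLError, OSError, KeyError):
            time.sleep(0.5)
    raise TimeoutError(f"CDP at {host}:{port} never became ready")
```

The half-second retry interval comfortably covers the 3 to 6 second cold-start window mentioned above while failing fast once the deadline passes.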

The --no-sandbox Flag

--no-sandbox is required when running as a non-root user inside a Docker container because the kernel user namespace setup that Chrome’s sandbox relies on is unavailable in most default container configurations. The flag disables the renderer sandbox, not network isolation. The actual host-level isolation comes from Docker’s seccomp profile and the fact that the container runs as a non-root UID.

Do not add --disable-web-security alongside it. That flag disables same-origin enforcement and has no place in an automation or monitoring container. Keep the flag list minimal:

```bash
chromium \
  --headless=new \
  --no-sandbox \
  --disable-gpu \
  --disable-dev-shm-usage \
  --remote-debugging-port=9223
```

--disable-dev-shm-usage tells Chromium to use /tmp for shared memory if /dev/shm is too small. With shm_size: "256mb" set in Compose, this flag is a fallback rather than a primary fix. Still worth including.

The combination of a non-root UID, a locked Chromium version, an internal-only CDP port, and a proper shm allocation is enough to run containerised browser automation reliably on ARM64. None of it is complicated. The tricky part is knowing which of these the defaults get wrong.