I run HandBrake Web on headless kit to automate video transcoding. This guide walks through a practical HandBrake Web setup, from the basics on a headless server to running automated jobs and distributed encoding. I keep commands minimal and give checks you can run right away. Read a step, try it, check the verification notes.
Start with the requirements and server setup. You need a Linux box or NAS with Docker or another container runtime, enough CPU and storage for your source and output files, and optional GPU drivers if you plan to use hardware acceleration. Create three directories on the server: config, input and output. Make sure the user running Docker has permission to write to those folders. A simple docker run example that maps those folders and exposes the web interface looks like this:
1) docker run -d --name=handbrake-web -p 8080:3000 -v /srv/handbrake/config:/config -v /srv/handbrake/input:/input -v /srv/handbrake/output:/output handbrake-web-server:latest
2) Check the container is running with docker ps and view logs with docker logs -f handbrake-web. The web interface will be available at http://your-server:8080. If you plan to use GPU acceleration, install the correct drivers and test the vendor tools first, then enable acceleration inside the web UI and run a short transcode as proof. Hardware support varies by CPU and GPU model, so test before relying on it.
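Before running step 1, create the folders and hand them to the container user, and sanity-check the GPU drivers on the host if you plan to use acceleration. This is a minimal sketch assuming the paths from the example above and a container user of UID/GID 1000; match whatever user your image actually runs as, and check your image's docs for the exact device flag it expects:

  # create the three directories and give the container user write access
  # (UID/GID 1000 is an assumption)
  sudo mkdir -p /srv/handbrake/{config,input,output}
  sudo chown -R 1000:1000 /srv/handbrake

  # sanity-check GPU drivers on the host before touching the container
  vainfo        # Intel/AMD VAAPI: should list supported encode profiles
  nvidia-smi    # NVIDIA: should show the GPU and driver version

  # then add a device flag to the docker run from step 1, for example:
  #   --device /dev/dri:/dev/dri     (Intel/AMD VAAPI)
  #   --gpus all                     (NVIDIA, needs the NVIDIA Container Toolkit on the host)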
Configuring the web interface and creating jobs takes a few minutes. Open the UI and add or confirm the input and output paths match the volumes you mounted. Create a new job by selecting the source file, choosing a preset or custom settings, and pointing the destination to your output path. Typical quick settings for general sharing are 1080p, H.264, a reasonable constant quality (RF 20–22), and subtitle copying if required. Save that as a named preset so you can reuse it. For automation, enable directory monitoring in the settings and add a watcher that points to your input folder. Set the action on new files to create jobs automatically and choose the preset to apply. Small tip: use a subfolder per origin (for example: /input/phones, /input/camera) so different presets can apply per watcher.
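To confirm a watcher actually fires, stage the per-origin subfolders and drop a short clip into one of them, then check that a job appears. A quick sketch using the paths from the earlier example; the test file name is just a placeholder, and what the logs print on job creation depends on your version:

  # per-origin subfolders so each watcher can apply its own preset
  mkdir -p /srv/handbrake/input/{phones,camera}

  # drop a short clip in and watch for a new job
  cp ~/clips/test-short.mp4 /srv/handbrake/input/phones/
  docker logs -f handbrake-web        # a new job should show up here and in the web UI
  ls -lh /srv/handbrake/output/       # the finished file should land here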
If you want distributed encoding, run worker instances on other machines and register them with the main server. There are two common approaches: shared storage or networked job distribution. Shared storage is simpler: mount the same input and output paths into each worker container; the server creates jobs and workers pick them up from the shared filesystem. For true distributed work without shared storage, run the worker image and point it at the server API endpoint so it can fetch job data and source segments. Make sure your firewall allows the management port between server and workers and that any API tokens are kept safe. I recommend starting with one remote worker, running a few test transcodes, and watching how load spreads across CPUs and GPUs before adding more workers.
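For the shared-storage approach, a worker container looks much like the server: the same input and output mounts, plus whatever setting the worker image uses to find the server. This is a sketch only; the image name and the SERVER_URL environment variable are assumptions, so check the project's documentation for the exact names your version expects:

  # on the worker machine, mount the same shares (NFS/SMB) at the same paths;
  # the image name and SERVER_URL below are assumptions, not confirmed names
  docker run -d --name=handbrake-worker \
    -v /srv/handbrake/input:/input \
    -v /srv/handbrake/output:/output \
    -e SERVER_URL=http://your-server:8080 \
    handbrake-web-worker:latest

  # confirm the worker registered with the server
  docker logs -f handbrake-worker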
Verification and basic troubleshooting. After a job finishes, check three places: the web UI job log, the container logs, and the output file properties. Play the output and confirm duration, audio tracks and subtitles match expectations. If a transcode fails quickly, check file permissions; many failures come down to the container user not having write access to the output folder. If a transcode is slow, run a quick single-file test with software-only encoding to get a baseline, then enable hardware acceleration and compare. Watch CPU and GPU utilisation with top, nvidia-smi or intel_gpu_top depending on hardware. For long runs, rotate logs and back up the /config folder so you keep presets and worker registration if the container is rebuilt.
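Standard tools cover the output check and the config backup; ffprobe ships with ffmpeg, and the paths below match the earlier volume mapping. The output file name and extension are placeholders for whatever your preset produces:

  # inspect duration, codecs, audio tracks and subtitles of the finished file
  ffprobe -v error -show_format -show_streams /srv/handbrake/output/test-short.mkv

  # baseline vs. accelerated: time a single-file software encode, repeat with acceleration on,
  # then compare wall-clock time and utilisation (top, nvidia-smi, intel_gpu_top)

  # back up presets and worker registration before rebuilding the container
  tar czf handbrake-config-$(date +%F).tar.gz -C /srv/handbrake config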
Concrete takeaways. Use containerised HandBrake Web for a reliable web interface on a headless server. Map config, input and output folders and test one job before switching to full automation. Use directory watchers to create jobs automatically and add workers incrementally for distributed encoding. Verify each change with short test transcodes. That gives you repeatable automation, predictable results, and a clear path to scale.