I run Frigate on an Intel N100 mini PC and use it with several Amcrest PoE cameras. This guide shows how I set the mini PC up, install Frigate, configure Amcrest cameras and tune detection so the N100 keeps up. I give concrete examples you can copy, and what I watch for when the system is under load. Read it, apply it, test it on your network.
Start with the mini PC and network. Fit an M.2 SSD for recordings and a decent 30–60W PoE switch on the same LAN. Install a minimal Linux image such as Debian or Ubuntu Server. I use Docker, so install Docker Engine and docker-compose. Before you install Frigate, check hardware video acceleration exists on your build by running vainfo or ffmpeg -hwaccels on the device. If /dev/dri is present, map it into the container so Frigate can use VAAPI hardware decode. Example Docker Compose fragment I use for Frigate:
```yaml
services:
  frigate:
    image: blakeblackshear/frigate:stable
    shm_size: "128mb"  # Frigate's docs recommend extra shared memory for frame buffers
    devices:
      - /dev/dri:/dev/dri
    volumes:
      - /srv/frigate/config:/config
      - /srv/frigate/media:/media
    restart: unless-stopped
    network_mode: host
```
Map storage to /media for recordings. Run other services with care; extra containers cost CPU and memory on the N100.
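Before trusting the device mapping, I check on the host what is actually there. This sketch runs safely whether or not the box has an iGPU; vainfo and ffmpeg are optional extras that show the decode profiles when installed:

```shell
# Sanity-check VAAPI on the host before mapping /dev/dri into the container.
if [ -e /dev/dri ]; then
    status="have /dev/dri"
    ls /dev/dri            # expect renderD128 (and usually card0/card1)
else
    status="no /dev/dri"   # Frigate will fall back to CPU decode
fi
echo "$status"
# Optional: list supported decode profiles and ffmpeg's hwaccel backends.
command -v vainfo >/dev/null 2>&1 && vainfo 2>/dev/null | head -n 5
command -v ffmpeg >/dev/null 2>&1 && ffmpeg -hide_banner -hwaccels
true
```

If `vainfo` lists H.264/HEVC decode entrypoints, the `/dev/dri` mapping in the compose file above is worth having.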
Amcrest cameras work well over RTSP once you set the right stream and frame rate. A common RTSP URL for Amcrest is:
```
rtsp://username:password@CAMERA_IP:554/cam/realmonitor?channel=1&subtype=0
```
Use the main stream for recordings when necessary, but use the substream for live viewing and low-load detection where possible. In Frigate, set each camera with a low detection fps to keep CPU down. Example camera block I use in config.yml:
```yaml
cameras:
  frontdrive:
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.50:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - detect
            - record
    detect:
      enabled: true
      width: 1920
      height: 1080
      fps: 6
      max_disappeared: 25
    objects:
      track:
        - person
        - car
```
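If you want both streams on one camera, Frigate takes multiple inputs with different roles. A sketch with placeholder credentials, assuming the usual Amcrest subtype convention (0 = main, 1 = sub):

```yaml
cameras:
  frontdrive:
    ffmpeg:
      inputs:
        # Substream (subtype=1): low resolution, cheap to decode, good enough to detect on.
        - path: rtsp://user:pass@192.168.1.50:554/cam/realmonitor?channel=1&subtype=1
          roles:
            - detect
        # Main stream (subtype=0): recorded only, never decoded for detection.
        - path: rtsp://user:pass@192.168.1.50:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - record
```

Recordings are stored without re-encoding, so with this split the heavy 1080p stream costs disk, not CPU.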
Set fps to 5–8 on 1080p cameras to reduce load. Lower the resolution in the camera, or use a motion mask or zones in Frigate so detection only pays attention to the area that matters. Use object filters to track only what matters, for example person and car rather than every label the model can return.
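As a sketch of masking and zoning, here is one camera with a motion mask over the top of the frame and a single zone. The zone name and all the points are made up for illustration, and the relative 0–1 coordinates are the format recent Frigate releases accept; older versions use pixel coordinates:

```yaml
cameras:
  frontdrive:
    motion:
      mask:
        # Ignore the top 15% of the frame (sky, swaying trees).
        - 0.0,0.0,1.0,0.0,1.0,0.15,0.0,0.15
    zones:
      driveway:
        # Four x,y corner points of the area that actually matters.
        coordinates: 0.3,0.55,0.7,0.55,0.7,1.0,0.3,1.0
```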
Detection tuning and monitoring are where most of the work happens. I start with one camera set to detect, then add others and watch CPU, memory and temperature with htop and sensors or glances. If VAAPI is available, Frigate will offload decode and that frees CPU for detection. If decoding is CPU-only, drop fps or detection regions. Tweak model settings: lower threshold a little if you miss small objects, raise it if you get noise. Set motion and min_area in camera-level config to ignore small triggers like leaves or rain. Test during the busiest times for your site, for example at night if headlights are a problem.
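For reference, the knobs mentioned above live here in the camera config. The numbers are starting points to tune from, not recommendations:

```yaml
cameras:
  frontdrive:
    objects:
      filters:
        person:
          min_area: 2000   # pixels at detect resolution; drops tiny blobs
          threshold: 0.7   # raise to cut noise, lower if small objects are missed
    motion:
      threshold: 30        # pixel-change sensitivity; higher = less sensitive
      contour_area: 15     # minimum size of a changed region to count as motion
```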
Testing under load means running the full camera set and watching how long detections take and how many frames are skipped. I record short test clips while ramping up the camera count. If detection lag or frame drops appear, lower fps, drop record quality, or offload detection to a Coral USB TPU or a separate device. Long-term maintenance is simple: rotate and prune recordings, check disk use weekly, update the Frigate container monthly and watch the size of frigate.db. Keep camera firmware current and use strong RTSP passwords.
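Pruning can be automated in the config rather than done by hand. This is the retention syntax from 0.12/0.13-era Frigate; newer releases rename the events section, so check the reference for your version:

```yaml
record:
  enabled: true
  retain:
    days: 7        # keep continuous recordings one week
    mode: motion   # only keep segments that contained motion
  events:
    retain:
      default: 14  # keep event clips two weeks
```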
Takeaways: map /dev/dri into the container if present, run low detection fps per camera, use the substream when suitable, and test under realistic load. The Intel N100 mini PC is tight on headroom compared with desktop hardware, so choose what to record and what only to detect. Make changes gradually and measure the effect.