Why I Moved from Google Photos to Self-Hosting for Privacy

I had roughly a terabyte of photos and videos spread across devices and Google Photos, and the total climbed every month. I tracked uploads, duplicates, and the largest video files. That set the storage target for any self-hosted option I looked at.

I used the privacy controls Google offered. I switched off automatic face grouping where I could. I restricted shared albums. Even so, the data still sat on Google’s servers. I had limited control over what scans or machine learning ran over my media. The settings helped, but they did not change who held the data.

How I actually used it

I looked at how often I opened albums, how I shared images with family, and which devices uploaded automatically. Sync speed and mobile upload reliability mattered more than fancy album features. That told me what I needed from a replacement: reliable sync, quick search, and simple sharing with tight controls.

Google Photos is polished. It is convenient. It also puts your media under another company’s terms. That cuts down the amount of control you have. It also ties you to retention and inactivity rules. I wanted something where I decided what stayed and who could see it.

The end of unlimited free storage and the changes to free tiers pushed costs back onto users. My storage needs were already past the free allowance, and the monthly fees started to feel like rent. I also wanted predictable costs. Cloud storage can work well for light use. For a large library, the maths starts to favour one-off hardware or a cheap VPS.

I did not like automated scanning and analysis on someone else’s servers. Even with privacy settings, metadata and usage patterns were still under another party’s control. I wanted direct control over the data and fewer opaque processes looking at my pictures.

Self-hosting setup

I went for a minimal setup I could run at home or on a small VPS. The core pieces were:

  • A small server with RAID or a reliable NAS for storage.
  • A containerised photo manager. I used PhotoPrism for indexing and Nextcloud for file sync.
  • Let’s Encrypt TLS and a reverse proxy for secure remote access.
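
For the TLS piece, a reverse proxy that handles Let's Encrypt automatically keeps the setup small. A minimal Caddyfile sketch, assuming PhotoPrism on its default port and a made-up domain:

```
photos.example.com {
    reverse_proxy 127.0.0.1:2342
}
```

Caddy obtains and renews the certificate on its own; nginx with certbot works too, just with more moving parts.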

I pulled everything into Docker Compose so backups and updates stayed straightforward. That gave me portability and made restores consistent.
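
For reference, the compose file looked something like this. Image tags, ports, and host paths here are illustrative, not my exact config:

```yaml
services:
  photoprism:
    image: photoprism/photoprism:latest
    restart: unless-stopped
    ports:
      - "2342:2342"
    volumes:
      - /srv/photos/originals:/photoprism/originals
      - /srv/photos/storage:/photoprism/storage
    environment:
      PHOTOPRISM_ADMIN_PASSWORD: "change-me"  # use a secret in practice

  nextcloud:
    image: nextcloud:latest
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - /srv/nextcloud:/var/www/html
```

Keeping the volumes on host paths means the media and database survive a container rebuild, which is what makes restores consistent.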

I sorted the files before import. I removed duplicates, compressed some older videos, and kept originals for shots I could not replace. Metadata stayed intact. I set up automated ingest from phones through the Nextcloud client. I ran regular scans to rebuild the index. Storage quotas and archive folders kept active working storage small and fast.
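
The duplicate pass was nothing clever: hash every file and flag the ones that collide. A minimal sketch in shell, assuming filenames without spaces; `find_dupes` is a name I made up:

```shell
# find_dupes DIR - print files under DIR that share a SHA-256 checksum.
# A sketch for manual review, not a production dedupe tool.
find_dupes() {
  find "$1" -type f -exec sha256sum {} + \
    | sort \
    | awk '{ if ($1 == prev) { print prevfile; print $2 } prev = $1; prevfile = $2 }' \
    | sort -u
}
```

Something like `find_dupes /srv/photos/originals` prints the colliding paths; deciding what to delete stays a human job.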

Running systems I control removed most of the unknowns. I left automatic facial recognition off by default and used local-only processing when I needed it. Sharing used expiring links and password-protected folders. I turned off telemetry in the apps. That gave me clearer control over who could access what and for how long.

Results and comparison

Indexing and the first import were slower than Google’s cloud processing. Day-to-day access felt quick on the local network and reasonably fast over VPN or reverse proxy. Backup speed depended on upload bandwidth. I now watch sync times, index rebuild time, and backup cadence. Those numbers are predictable, and I can add storage or CPU when needed.

Family feedback was mixed at first. They missed automatic face albums and Google’s search polish. After a short adjustment period, most people were happy enough with simple shared folders and clearer control over who saw images. Mobile uploads needed the Nextcloud app or direct WebDAV. That added friction compared with Google Photos, but the sharing and privacy model was clearer.

My upfront hardware cost replaced a few years of subscription fees. Running costs are power and the occasional replacement drive. If you use a VPS, the bill is monthly, but it is often still lower than many commercial photo storage plans for large libraries. There are hidden costs too: maintenance, backups, and software updates. I count my time in that total.

Potential data loss

Self-hosting means taking on the failure modes yourself. Hard drives fail. People make mistakes. I treat backups as non-negotiable. My setup is local RAID for uptime, off-site backup snapshots, and a cold copy on external drives. I test restores regularly. That keeps the risk at a level I can live with.
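
Testing a restore can be as simple as a script that backs up, restores into a scratch directory, and diffs the result against the original. A toy version with tar, standing in for whatever backup tool you actually run (restic, borg, and so on):

```shell
# backup_and_verify SRC ARCHIVE - archive SRC, restore it to a scratch
# directory, and diff the restore against the original. Prints "restore OK"
# on success. A sketch of the idea, not a full backup strategy.
backup_and_verify() {
  src="$1"; archive="$2"
  tar -C "$src" -czf "$archive" .
  scratch=$(mktemp -d)
  tar -C "$scratch" -xzf "$archive"
  diff -r "$src" "$scratch" >/dev/null && echo "restore OK"
  status=$?
  rm -rf "$scratch"
  return $status
}
```

The point is that the restore path gets exercised on a schedule, not discovered broken after a drive dies.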

You still need to patch software, update containers, and deal with problems when they turn up. That takes time. I scripted updates and set up monitoring alerts to catch failing services. If you want zero maintenance, a cloud service still wins. If you want control and are willing to deal with the details, self-hosting makes sense.
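
My update script boils down to a pull-and-restart wrapper around Docker Compose. A sketch, with `PHOTO_STACK_DIR` as a made-up variable for wherever the compose project lives:

```shell
# update_stack - pull fresh images, restart changed containers, prune old
# images. Intended to run from cron; assumes the compose project lives in
# $PHOTO_STACK_DIR (an illustrative name, defaulting to /opt/photos).
update_stack() {
  cd "${PHOTO_STACK_DIR:-/opt/photos}" || return 1
  docker compose pull \
    && docker compose up -d \
    && docker image prune -f
}
```

Pinning image tags instead of `latest` makes this safer, at the cost of bumping versions by hand.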

Public-facing services bring attack surface with them. I used firewalls, fail2ban, strong SSH keys, and regular security updates. TLS and strict access rules cut down exposure. The job of securing it still sits with you, not the provider.
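
As an illustration of the hardening side, a fail2ban jail for SSH lives in a few lines of `jail.local`; the values here are examples, not a recommendation:

```
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```

The same pattern extends to the reverse proxy's auth logs if you expose the photo apps publicly.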

Planned enhancements

I plan to add an off-site S3-compatible backup and automated snapshots for quicker recovery. I will test a dedicated photo mobile app for faster uploads. I also want better deduplication on ingest and better network QoS during large uploads.
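
For the off-site piece, a restic job against an S3-compatible endpoint is the likely shape. A sketch; the repository URL, paths, and retention numbers are placeholders, and it assumes restic and credentials are already configured:

```shell
# offsite_backup - push the photo originals to an S3-compatible bucket with
# restic, then trim old snapshots. Endpoint and retention are illustrative.
offsite_backup() {
  export RESTIC_REPOSITORY="${RESTIC_REPOSITORY:-s3:https://s3.example.com/photo-backup}"
  restic backup /srv/photos/originals \
    && restic forget --keep-daily 7 --keep-weekly 8 --keep-monthly 12 --prune
}
```

Run from cron, this gives the automated snapshots and quicker recovery the plan calls for.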

I will keep the family in the loop. I’ll add a simple upload page and clear guest instructions. Their feedback will decide whether I turn on optional local face grouping or keep things manual. I prefer small changes with a way back if they are a pain.

Monitoring tracks uptime, storage growth, and failed backups. I use basic alerts for low storage, high CPU during index jobs, and failed index builds. That means I spot problems before they show up during a holiday or an event where the photos matter.
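
The low-storage alert is a few lines of shell around `df`, run from cron. A sketch, with a made-up function name:

```shell
# check_free_pct MOUNT THRESHOLD - print a warning and return non-zero when
# free space on MOUNT drops below THRESHOLD percent. Meant for a cron job
# that mails or pushes the output somewhere.
check_free_pct() {
  mount="$1"; threshold="$2"
  used=$(df -P "$mount" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
  free=$((100 - used))
  if [ "$free" -lt "$threshold" ]; then
    echo "WARN: only ${free}% free on $mount"
    return 1
  fi
}
```

For example, `check_free_pct /srv/photos 15` fires once less than 15% of the volume is free, well before an index job or ingest fills it.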

Summary of findings

Moving to self-hosted alternatives gave me clearer privacy and control over the data. I traded convenience for responsibility. I gained predictable costs and the ability to set my own retention and scanning rules. Performance is acceptable and predictable on my network. Sharing and privacy controls are more explicit.

Self-hosting is not a vanity project. It is a practical choice for people who want control and are willing to maintain the system. If you need zero maintenance and the best polish, a commercial cloud still works well. If you want to keep your data under your own roof and be clear about who can access it, self-hosting is worth a look.

Start small. Move one photo collection to a small server or VPS. Test restores, tune sharing, and see how your family gets on with it. If you measure the right things — storage growth, sync time, and backup success — you will know whether a self-hosted setup suits you. I made the switch. I control my photos now. That was the point.
