I built a prototype that senses activity without a camera. It uses short‑range radar, coarse depth sensing and sound‑level measurement with no audio recording to spot presence, falls, sudden events and long periods of inactivity. The goal was a safer space that respects privacy while tying into home automation and smart home safety. I’ll explain the hardware choices, the logic, the privacy design and a simple build path you can copy.
Start with a clear problem and a small area. Pick one room where privacy is a concern, for example a bedroom or bathroom. Then follow these steps.

1) Pick sensors: an mmWave radar module for presence and motion, a time‑of‑flight depth sensor for posture and distance, a PIR sensor for simple occupancy, and a microphone set to level‑only mode for sound events.
2) Local compute: use an edge MCU such as an ESP32 or an nRF54‑series chip to fuse signals and run simple models.
3) Signal fusion: combine radar movement vectors with depth deltas and sound‑level spikes to reduce false positives (a short sketch follows this list).
4) Event rules: report only labelled events (presence, fall, loud incident, prolonged immobility), not raw data.
5) Connectivity: send event notifications to your home automation hub rather than a cloud recorder.
6) Test and tune: collect anonymous metric counts, tune thresholds, and log only diagnostics during development.

These steps keep the system actionable and privacy friendly while integrating with home automation flows.
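To make steps 3 and 4 concrete, here is a minimal sketch of the fusion and event rules in Python. The field names, thresholds and the four‑hour immobility window are placeholders I would tune per room, not values from a finished product.

```python
# Minimal sketch of steps 3 and 4: fuse readings into one labelled event.
# All thresholds and field names are illustrative placeholders.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    radar_speed: float      # peak motion speed from the mmWave module (m/s)
    depth_delta_cm: float   # change in measured height/distance since last frame
    sound_level_db: float   # single amplitude value, no samples kept
    idle_seconds: float     # time since last detected movement

def classify(r: Reading) -> Optional[str]:
    """Return an event label, or None if nothing should be reported."""
    fall_like = r.radar_speed > 1.5 and r.depth_delta_cm > 40
    if fall_like and r.sound_level_db > 70:
        return "fall"
    if r.sound_level_db > 85:
        return "loud_incident"
    if r.idle_seconds > 4 * 3600:
        return "prolonged_immobility"
    if r.radar_speed > 0.1:
        return "presence"
    return None

# Example: a sudden drop plus a thud classifies as a fall.
print(classify(Reading(radar_speed=2.0, depth_delta_cm=55,
                       sound_level_db=78, idle_seconds=0)))
```

The hub never sees the readings themselves, only the returned label.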
The sensor choices matter. Radar modules like those from Vayyar or other mmWave vendors detect motion and micro‑movement in the dark and through light obstructions. Time‑of‑flight depth sensors give coarse posture data that helps distinguish sitting from lying down. PIR sensors are cheap and low power for basic occupancy. For fall detection, fusion is the practical route: the radar reports a rapid displacement, the depth sensor shows a sudden change in body height, and the sound level spikes. That combination reduces false alarms compared with a single sensor. For smart home safety, tie events into existing automations: turn on lights and call a contact, trigger sirens, or open a voice channel only after a confirmed event. Keep automation rules simple and test them in the real room layout. Small changes to sensor position and threshold tuning will change detection rates more than swapping modules.
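One way to express that fusion is a small confirmation window: a fall is only reported when the radar, depth and sound cues all fire within a few seconds of each other. The cue names and the three‑second window below are assumptions for illustration, not part of any specific module's API.

```python
# Sketch of fused fall confirmation: no single-sensor spike raises an alarm;
# all three cues must co-occur inside a short window.
import time

class FallConfirmer:
    def __init__(self, window_s=3.0):
        self.window_s = window_s
        # start each cue at "never seen"
        self.last_seen = {"radar_displacement": float("-inf"),
                          "depth_drop": float("-inf"),
                          "sound_spike": float("-inf")}

    def note(self, cue, now=None):
        """Record a cue; return True only when all cues co-occurred recently."""
        now = time.monotonic() if now is None else now
        self.last_seen[cue] = now
        return all(now - t <= self.window_s for t in self.last_seen.values())

confirmer = FallConfirmer()
confirmer.note("radar_displacement", now=100.0)
confirmer.note("depth_drop", now=101.0)
print(confirmer.note("sound_spike", now=102.0))  # True: all cues within 3 s
```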
Privacy in monitoring is the core reason to pick non‑visual solutions. Design the system so raw sensor streams never leave the local device. Convert audio to a single amplitude value and discard the samples. Use on‑device models or rule engines so the hub receives only events and small diagnostic counters. Log rotation and short retention times limit exposure. Use TLS for notifications and keep private configuration on local storage only. Avoid cloud video or audio storage entirely. Those constraints protect residents and make monitoring acceptable in bedrooms and bathrooms where cameras feel inappropriate. Practical advantages include operation in darkness, simpler consent management, and a smaller network footprint. There is a trade‑off: non‑visual sensors give less detail, which makes good sensor fusion and conservative alerting essential.
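Here is a sketch of that privacy path, assuming normalised audio samples and a JSON event payload: the raw buffer collapses to one RMS level and is discarded in place, and the hub only ever sees a label plus small counters. The transport (MQTT or HTTP over TLS) is left to your hub setup.

```python
# Reduce an audio buffer to one level figure, drop the samples immediately,
# and build an event-only payload for the home automation hub.
import json
import math

def sound_level_db(samples):
    """Collapse raw samples (assumed -1..1 floats) to one RMS level in dBFS."""
    if not samples:
        return -120.0
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    samples.clear()                      # raw audio never leaves this function
    return 20 * math.log10(max(rms, 1e-6))

def event_payload(label, counters):
    """Only a label and small diagnostic counters, never raw sensor streams."""
    return json.dumps({"event": label, "diagnostics": counters})

print(sound_level_db([0.01, 0.4, -0.3, 0.05]))
print(event_payload("fall", {"radar_frames": 1200, "false_trigger_count": 2}))
```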
I’ve used this approach in prototypes that emphasise alert‑on‑unusual rather than constant monitoring. Examples include fall alerts, remote check‑ins that report presence or long inactivity, and privacy‑first room occupancy for heating and lighting control. Short experiment notes: angle the radar down slightly so it focuses on human height; place depth sensors where furniture won’t block the field of view; and set sound thresholds to ignore regular household noise (see the sketch below). Future trends to watch are cheaper integrated mmWave modules, more capable edge ML on tiny MCUs, and standard non‑camera device profiles for home automation hubs. The practical takeaway is simple: you can build non‑visual home monitoring that works in real rooms, respects privacy and plugs into smart home safety without sending images to the cloud.
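To close, here is one way to implement that sound‑threshold note: track a slow baseline of ambient level and flag only large jumps above it. The margin and smoothing factor are assumptions to tune on site.

```python
# Adaptive sound gate: flag only levels well above a slowly tracked ambient baseline,
# so a steady fridge hum or traffic noise never triggers an event.
class AdaptiveSoundGate:
    def __init__(self, margin_db=20.0, alpha=0.02):
        self.margin_db = margin_db   # how far above ambient counts as an event
        self.alpha = alpha           # slow EMA so brief spikes don't move the baseline
        self.baseline = None

    def update(self, level_db):
        """Return True when the level jumps well above the ambient baseline."""
        if self.baseline is None:
            self.baseline = level_db
            return False
        is_spike = level_db > self.baseline + self.margin_db
        if not is_spike:             # only quiet readings adjust the baseline
            self.baseline += self.alpha * (level_db - self.baseline)
        return is_spike

gate = AdaptiveSoundGate()
for reading in [-45, -44, -46, -45, -20]:   # steady hum, then a crash
    print(gate.update(reading))              # only the last reading prints True
```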






