Privacy-first car telemetry: monitoring EV charging over

Reworking a luxury EV launch into a private car charging dashboard with Home Assistant and MQTT

A luxury EV launch page gave me a neat set of telemetry ideas, then I stripped out the theatre and kept the bits that matter in a Home Assistant MQTT car charging dashboard.

The useful part was never the marketing gloss. It was the charging state, the battery level, the plugged-in status, the session time, and the cost estimate. That is the shape of a proper EV charging monitoring setup. Once those values are in MQTT, Home Assistant can do the rest without sending your charging data off to a cloud panel that exists mainly to sell subscriptions.

Identify the failure context and error message

The first failure mode is usually boring. You get an OUTPUT_PARSING_FAILURE, or something close to it, because the model returned prose when the consumer expected structured data.

In my own work, I treat that as a contract problem, not a model problem. The output format was vague, the prompt was loose, or the schema was not strict enough. If the parser chokes, the raw text is the evidence. Keep it.

For a charging dashboard, the same rule applies. If Home Assistant cannot trust the payload, the automation breaks. MQTT messages need a stable shape, whether they carry charger power, state of charge, or a simple plugged and charging flag.
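As a sketch of what a stable shape means in practice, a small Python helper can pin the keys down before anything is published. The helper name is my own, and the field set here is an assumption about what the dashboard needs, not a Home Assistant requirement:

```python
import json
import time

def charging_payload(state: str, battery_percent: int,
                     power_w: int, plugged_in: bool) -> str:
    """Build a charging payload with a fixed key set, so every MQTT
    message carries the same shape regardless of where it came from."""
    return json.dumps({
        "charging_state": state,
        "battery_percent": battery_percent,
        "power_w": power_w,
        "plugged_in": plugged_in,
        # UTC timestamp in ISO 8601, so templates can parse it reliably
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }, sort_keys=True)

print(charging_payload("charging", 67, 7400, True))
```

Every publisher goes through the one function, so a renamed key is a code change you can review, not a silent drift in the broker.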

Retrieve the JSON Schema being used to validate outputs

I keep the schema close to the code, not buried in a note. Put it in the same repo as the parser, or next to the Home Assistant template that consumes it.

For this kind of setup, the schema usually needs a small set of fields:

  • charging_state
  • battery_percent
  • power_w
  • plugged_in
  • timestamp

If the raw EV telemetry or the model output cannot fill those fields cleanly, the dashboard should not guess. In a privacy-first telemetry setup, guessing is a bad habit. It is how nonsense ends up looking like truth.

Compare the model’s raw output to the schema to find mismatches

When the output fails, compare it against the schema line by line.

Check for:

  • missing required properties
  • wrong types, such as a string where a number is expected
  • extra properties that the parser does not allow
  • null values where the schema expects a real value

This is where Home Assistant dashboard work and model output work feel the same. MQTT automation lives or dies on predictable keys. If one payload says power_w and another says charging_power, you have not built flexibility. You have built a mess.

Reproduce the issue with a minimal prompt and example input to isolate cause

Cut the prompt right back. Use one short input and one expected output.

For example, feed the model only:

  • battery: 67
  • plugged_in: true
  • power_w: 7400

Then ask for exactly one JSON object.
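A minimal repro can be as small as this. The function name and the exact prompt wording are illustrative, not a fixed API:

```python
import json

# The one short input from the repro: three fields, nothing else.
MINIMAL_INPUT = {"battery": 67, "plugged_in": True, "power_w": 7400}

def minimal_repro_prompt(raw: dict) -> str:
    """One short input, one explicit ask: exactly one JSON object."""
    return (
        "Convert this charger reading to exactly one JSON object. "
        "Return only the JSON object, no commentary.\n"
        f"Input: {json.dumps(raw)}"
    )

print(minimal_repro_prompt(MINIMAL_INPUT))
```

If this tiny case still fails, the problem is the contract, not the surrounding pipeline. If it passes, start adding the real complexity back one piece at a time.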

That tells you whether the failure comes from the prompt, the schema, or the surrounding system. I have spent too long blaming the model for something I caused with a badly written prompt. It is always humbling.

Update the prompt to explicitly instruct the model to return JSON only and to follow the exact schema

State the contract plainly.

Tell it to return JSON only.
Tell it to follow the exact schema.
Tell it not to add commentary.
Show one example output that matches the schema.

A strong prompt for this use case might ask for a payload like:

{"charging_state": "charging", "battery_percent": 67, "power_w": 7400, "plugged_in": true, "timestamp": "2026-04-21T18:30:00Z"}

That example matters. It gives the model a shape to copy, which is useful when you are wiring model output into MQTT messages and a Home Assistant template sensor is waiting on the other side like a very impatient tax form.
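Assembling that prompt in code keeps the example and the key list from drifting apart. A sketch, with wording that is mine rather than gospel:

```python
import json

# One canonical example payload, reused everywhere the shape is shown.
EXAMPLE = {
    "charging_state": "charging",
    "battery_percent": 67,
    "power_w": 7400,
    "plugged_in": True,
    "timestamp": "2026-04-21T18:30:00Z",
}

PROMPT = (
    "Return JSON only. Follow the exact schema. Do not add commentary.\n"
    "Required keys: " + ", ".join(sorted(EXAMPLE)) + ".\n"
    "Example of a valid response:\n"
    + json.dumps(EXAMPLE)
)

print(PROMPT)
```

Because the example is generated from the same dict the tests use, updating the schema updates the prompt in the same commit.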

Use validation tools to check outputs before parsing, log both raw and validated outputs

Do not parse first and validate later. Validate first.

Run the output through a JSON Schema validator, then log:

  • the raw output
  • the validated object
  • any validation errors

That gives you a clean audit trail when the dashboard shows stale charging data or a sensor stops updating. It also helps when a privacy-first telemetry flow needs a quick check without shipping the data anywhere else.
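A validate-first wrapper, sketched with the standard library only (the required-key check stands in for a full schema validator; the logger name is my own):

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ev-telemetry")

REQUIRED = {"charging_state", "battery_percent", "power_w",
            "plugged_in", "timestamp"}

def validate_then_parse(raw: str):
    """Log the raw output first, validate, then hand over the object.
    Returns None on any validation failure."""
    log.info("raw output: %s", raw)
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError as exc:
        log.error("validation error: not JSON (%s)", exc)
        return None
    missing = REQUIRED - obj.keys()
    if missing:
        log.error("validation error: missing keys %s", sorted(missing))
        return None
    log.info("validated object: %s", obj)
    return obj
```

The raw text is logged before any parsing happens, so even a completely mangled response leaves evidence behind.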

If the model output is still invalid, implement a post-processing step

If the output is close but messy, sanitise it.

Typical fixes include:

  • casting numbers from strings
  • filling safe defaults
  • removing extra properties
  • normalising booleans and timestamps

I use this only as a last pass. It is a repair step, not a licence to be sloppy. If you keep papering over bad output, the contract gets weaker every time.
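As a last-pass repair step, the sanitiser might look like this. The defaults and the truthy-string set are my assumptions; pick values that are safe for your dashboard:

```python
from datetime import datetime, timezone

ALLOWED = {"charging_state", "battery_percent", "power_w",
           "plugged_in", "timestamp"}
TRUTHY = {"true", "yes", "on", "1"}

def sanitise(payload: dict) -> dict:
    """Cast numbers from strings, fill safe defaults, drop extra
    properties, and normalise booleans and timestamps."""
    out = {k: v for k, v in payload.items() if k in ALLOWED}  # drop extras
    for key in ("battery_percent", "power_w"):
        try:
            out[key] = float(out.get(key, 0))  # cast, default to 0
        except (TypeError, ValueError):
            out[key] = 0.0
    plugged = out.get("plugged_in", False)
    if isinstance(plugged, str):
        plugged = plugged.strip().lower() in TRUTHY
    out["plugged_in"] = bool(plugged)
    out.setdefault("charging_state", "unknown")
    ts = out.get("timestamp")
    if not isinstance(ts, str) or "T" not in ts:
        # Fill a fresh UTC timestamp rather than pass garbage through.
        out["timestamp"] = datetime.now(timezone.utc).strftime(
            "%Y-%m-%dT%H:%M:%SZ")
    return out
```

Note that it runs after validation has already flagged the payload, never instead of it.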

Add unit tests and sample cases that ensure compliance with the schema, and run them in CI

Write tests for valid and invalid payloads.

Cover cases such as:

  • valid charging state with all fields present
  • missing battery_percent
  • power_w as a string
  • extra keys that should be rejected
  • malformed timestamps

Run those tests in CI on every change. If the schema changes, the test should fail before the dashboard does. That is the sort of dull failure you want.
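Those cases translate almost one-for-one into pytest-style tests. The `is_valid` helper here is a deliberately crude stand-in for whatever validator you actually use:

```python
def is_valid(payload: dict) -> bool:
    """Minimal stand-in validator for the payload contract."""
    required = {"charging_state", "battery_percent", "power_w",
                "plugged_in", "timestamp"}
    if set(payload) != required:
        return False  # missing or extra keys both fail
    power = payload["power_w"]
    if not isinstance(power, (int, float)) or isinstance(power, bool):
        return False  # power_w as a string must fail
    return "T" in str(payload["timestamp"])  # crude timestamp shape check

GOOD = {"charging_state": "charging", "battery_percent": 67,
        "power_w": 7400, "plugged_in": True,
        "timestamp": "2026-04-21T18:30:00Z"}

def test_valid_payload():
    assert is_valid(GOOD)

def test_missing_battery_percent():
    assert not is_valid({k: v for k, v in GOOD.items()
                         if k != "battery_percent"})

def test_power_w_as_string():
    assert not is_valid({**GOOD, "power_w": "7400"})

def test_extra_keys_rejected():
    assert not is_valid({**GOOD, "colour": "red"})
```

Each invalid case mutates the one known-good payload, so when a test fails you know exactly which rule broke.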

Monitor production logs for parsing errors and implement retries or safe fallbacks when validation fails

Watch the logs for parser errors, schema failures, and empty payloads.

If validation fails, retry once with the same prompt and a stricter instruction set. If it fails again, fall back to the last known good payload or a safe default state in Home Assistant.

For an EV charging dashboard, that might mean showing the last confirmed charge status rather than inventing a fresh reading. Stale data is annoying. Fake data is worse.
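The retry-then-fallback shape can be sketched like this. The `generate` callable stands in for whatever produces the payload; `LAST_GOOD` would really be the last payload that passed validation, not a constant:

```python
import json

# Stand-in for the last payload that passed validation.
LAST_GOOD = {"charging_state": "idle", "battery_percent": 0,
             "power_w": 0, "plugged_in": False,
             "timestamp": "1970-01-01T00:00:00Z"}

def fetch_with_fallback(generate, validate, last_good=LAST_GOOD):
    """Try once, retry once with a stricter instruction set, then fall
    back to the last known good payload rather than invent a reading."""
    for strict in (False, True):
        raw = generate(strict=strict)
        try:
            obj = json.loads(raw)
        except json.JSONDecodeError:
            continue  # log and retry with strict=True
        if validate(obj):
            return obj
    return last_good

# Demo with a fake generator: prose first, valid JSON on the retry.
responses = iter([
    "Sure! Here is the data:",
    '{"charging_state": "charging", "battery_percent": 67, '
    '"power_w": 7400, "plugged_in": true, '
    '"timestamp": "2026-04-21T18:30:00Z"}',
])
result = fetch_with_fallback(lambda strict: next(responses),
                             lambda obj: "charging_state" in obj)
print(result["charging_state"])  # the retry succeeded
```

Exactly one retry, then the last confirmed state. The dashboard goes stale honestly instead of lying confidently.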

Iterate on prompt engineering and schema design to make the contract between model and consumer robust and unambiguous

The cleanest setup is the one where the prompt, schema, and parser all agree on the same small shape.

Keep the payload narrow. Keep the keys stable. Keep the example output in sync with the schema. If you are feeding MQTT automation from model output, that contract has to be plain enough that future-you can still understand it after six months and one too many updates.

That is the point of the Home Assistant MQTT car charging dashboard rework. Strip out the glossy launch noise, keep the useful telemetry, and make the data boring enough to trust.
