Here’s a calm translation guide.
One quick note: email tracking is imperfect (especially with privacy features), so treat these metrics as signals, not courtroom evidence.
The 4 questions every email metric is trying to answer
Nearly every metric maps to one of these questions. If you keep this lens, the dashboard gets less stressful.
- Did it reach inboxes? (deliverability and bounces)
- Did anyone notice it? (opens and related signals)
- Did it cause action? (clicks, conversions)
- Did it annoy people? (unsubscribes, spam complaints)
If you’re not sure what to look at, start with “reach” and “annoyance” before you obsess over “noticed.”
Delivered vs sent: why “sent” is basically a receipt from your outbox
Most tools show Sent (you attempted to send) and Delivered (the receiving mail server accepted it).
Think of Sent as: “we put it in the mail.” Think of Delivered as: “the building’s mailroom accepted it.” Accepted is not the same as seen: a delivered message can still land in the spam folder.
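If you like seeing the arithmetic, here’s a tiny sketch (the counts are made up; pull Sent and Delivered from your own tool’s report):

```python
# Minimal sketch: delivery rate from raw counts.
# The numbers below are hypothetical.
sent = 10_000      # messages we attempted to send
delivered = 9_720  # messages the receiving servers accepted

print(f"Delivery rate: {delivered / sent:.1%}")  # 97.2%
```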
Hard bounce vs soft bounce: which one is a real list problem
Bounce rate is the share of messages you sent that couldn’t be delivered.
- Hard bounce: a permanent failure (address doesn’t exist, domain invalid). These should be removed quickly.
- Soft bounce: a temporary failure (mailbox full, server busy, message too large). These might succeed later.
If hard bounces rise, it’s often a list quality issue (old addresses, scraped lists, typos). If soft bounces spike suddenly, it can be a sending or infrastructure hiccup (rate limits, content size, temporary blocks).
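A minimal sketch of splitting the two, assuming your tool’s export tags each bounce with a type (the record format here is invented; check your tool’s schema):

```python
# Minimal sketch: separate hard vs soft bounce rates.
# Bounce records and addresses are hypothetical.
bounces = [
    {"address": "old@example.com",  "type": "hard"},  # address doesn't exist
    {"address": "full@example.com", "type": "soft"},  # mailbox full
]
sent = 10_000

hard = sum(1 for b in bounces if b["type"] == "hard")
soft = sum(1 for b in bounces if b["type"] == "soft")
print(f"Hard bounce rate: {hard / sent:.2%}")  # remove these addresses
print(f"Soft bounce rate: {soft / sent:.2%}")  # retry or monitor these
```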
Open rate: useful for trends, unreliable for absolutes
Open rate usually means “an invisible tracking pixel was loaded.” That is not the same thing as “a human read the email.”
Reasons open rate can be misleading:
- Privacy protections (notably Apple’s Mail Privacy Protection) can pre-load the pixel, counting opens that were never real reads.
- Blocked images mean real reads may not register as opens, because the pixel never loads.
- Plain-text readers may not load the pixel.
How to use it anyway: compare the same audience over time (subject line tests, send time changes). Don’t use it as the single measure of success.
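For trend use, the arithmetic is simple. A small sketch with made-up campaigns sent to the same audience:

```python
# Minimal sketch: opens as a trend line, not an absolute score.
# Campaign names and counts are hypothetical.
campaigns = [
    ("March newsletter", 412, 980),   # (name, unique_opens, delivered)
    ("April newsletter", 455, 1002),
    ("May newsletter",   389, 995),
]
for name, opens, delivered in campaigns:
    print(f"{name}: {opens / delivered:.1%} open rate")
```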
Click rate, CTR, and CTOR: three similar terms that mean different things
This is the area where dashboards cause the most confusion.
- Click rate: clicks divided by total recipients (sometimes total sent, sometimes delivered; check your tool’s definition, and whether it counts unique or total clicks).
- CTR (click-through rate): often used interchangeably with click rate, but some tools define it as clicks / delivered.
- CTOR (click-to-open rate): clicks divided by opens.
- Click rate/CTR tells you: “Out of everyone we reached, how many took an action?”
- CTOR tells you: “Among the people who noticed it (or were counted as noticing), how persuasive was it?”
When open tracking is messy, CTOR can get weird (because “opens” is the shaky denominator). If CTOR suddenly jumps but sales don’t, it may be measurement noise.
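One set of made-up numbers makes the difference concrete (assuming unique counts throughout):

```python
# Minimal sketch: one campaign, three "click" metrics.
# All counts are hypothetical unique counts.
sent = 10_000
delivered = 9_700
opens = 3_200   # shaky if privacy features inflate opens
clicks = 480

print(f"Click rate (per sent):  {clicks / sent:.1%}")       # 4.8%
print(f"CTR (per delivered):    {clicks / delivered:.1%}")  # 4.9%
print(f"CTOR (clicks per open): {clicks / opens:.1%}")      # 15.0%
```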
Conversion rate and revenue: the numbers that matter (with a tracking caveat)
Conversion rate means “the share of recipients (or clickers) who completed the goal” (purchase, signup, download, etc.).
If your system supports it, the most decision-useful pair is often:
- Conversions per delivered (or per subscriber)
- Revenue per delivered (or per subscriber)
Tracking caveat: conversions can be undercounted due to cookie restrictions, cross-device behavior, and attribution settings (for example, someone reads an email on their phone and buys later on a laptop).
Still, conversions are usually a more honest “did this work?” signal than opens.
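A quick sketch of that pair, with invented figures (and remember the undercounting caveat above):

```python
# Minimal sketch: the decision-useful pair, per delivered.
# Conversion and revenue figures are hypothetical and likely
# undercounted (cookies, cross-device, attribution windows).
delivered = 9_700
conversions = 97
revenue = 4_850.00

print(f"Conversions per delivered: {conversions / delivered:.2%}")  # 1.00%
print(f"Revenue per delivered:     ${revenue / delivered:.2f}")     # $0.50
```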
Unsubscribe rate, spam complaint rate, and “this is hurting us” thresholds
If you only watch two “health” metrics, watch these.
- Unsubscribe rate: people opting out. Not always bad—sometimes it’s list cleanup working as intended.
- Spam complaint rate: people hitting “Mark as spam.” This can damage deliverability fast.
- A small unsubscribe bump after you increase sending frequency is normal.
- A spike in spam complaints is a flashing warning light: wrong audience, misleading subject line, or emails people didn’t clearly consent to.
- If both unsubscribes and complaints rise together, the issue is usually expectation mismatch (people didn’t realize what they signed up for or how often you’d email).
Also watch reply rate (for more personal newsletters) and “this is not spam” signals where available—those are positive deliverability indicators.
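If you want a rough alarm, here’s a sketch. The thresholds are assumptions based on widely published guidance (for example, Gmail’s bulk-sender spam-rate targets of roughly 0.1% and 0.3%), not universal rules:

```python
# Minimal sketch: health check against commonly cited thresholds.
# Counts and thresholds are assumptions; tune them for your context.
delivered = 9_700
unsubscribes = 48
complaints = 12

unsub_rate = unsubscribes / delivered
complaint_rate = complaints / delivered

if complaint_rate > 0.003:    # 0.3%: flashing warning light
    print(f"STOP: complaint rate {complaint_rate:.2%}. Diagnose before sending again.")
elif complaint_rate > 0.001:  # 0.1%: drifting into risky territory
    print(f"Watch: complaint rate {complaint_rate:.2%}")
print(f"Unsubscribe rate: {unsub_rate:.2%}")  # context-dependent; bumps can be fine
```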
A quick checklist to interpret an email report without spiraling
- Step 1: Check bounces. If hard bounces jumped, fix list hygiene first.
- Step 2: Check complaints. If complaints rose, pause and diagnose before sending again.
- Step 3: Check delivered vs last time. If delivered dropped, you may be throttled/filtered.
- Step 4: Check clicks and conversions. These are the best “did it work?” signals.
- Step 5: Use opens only for trend comparisons, especially if your audience includes Apple Mail users (privacy effects).
- Step 6: Segment before concluding. New subscribers vs long-time readers often behave very differently.
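If it helps to see the order in one place, here’s the checklist as a sketch; the report fields and thresholds are invented for illustration:

```python
# Minimal sketch: the checklist as ordered triage.
# Report fields and thresholds are hypothetical; adapt to your tool's export.
def triage(report: dict) -> list[str]:
    notes = []
    if report["hard_bounce_rate"] > 0.01:
        notes.append("1. Fix list hygiene first (hard bounces jumped).")
    if report["complaint_rate"] > 0.001:
        notes.append("2. Pause and diagnose before sending again (complaints rose).")
    if report["delivered"] < 0.95 * report["prev_delivered"]:
        notes.append("3. Delivered dropped; you may be throttled or filtered.")
    notes.append(f"4. Action check: {report['clicks']} clicks, {report['conversions']} conversions.")
    notes.append("5. Compare opens only against past sends to the same segment.")
    notes.append("6. Segment (new vs long-time subscribers) before concluding.")
    return notes

print(*triage({
    "hard_bounce_rate": 0.02, "complaint_rate": 0.0005,
    "delivered": 9_700, "prev_delivered": 9_800,
    "clicks": 480, "conversions": 97,
}), sep="\n")
```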
Takeaway: tie each metric to one question
Email metrics get clearer when you stop treating them as a single grade and start treating them as answers to separate questions: reach, notice, act, and annoy. Fix reach and annoyance problems first; then optimize for action.