Magnifying glass and compass icon representing UX review workflow
If you’ve ever reviewed a screen and ended up with vague notes like “feels off,” this is a calmer way to work. This article is a plain-English glossary, but it’s also a step-by-step workflow you can reuse on any page you’re testing or critiquing (even if you’re doing it quickly in whatever browser happens to be in front of you).

It’s designed to turn reactions into specific, fixable feedback.

Use it like this: pick a screen, walk through the sections in order, and write one or two sentences per term that applies. You’re not trying to be exhaustive—you’re trying to be clear.

Before you start, set a tiny scope: one screen, one goal, one user type.

Checklist: the 90-second setup

  • Screen: name it (e.g., “Checkout shipping step”).
  • User: who it’s for (new vs returning, rushed vs careful).
  • Goal: what success looks like (place order, compare plans, save draft).
  • Constraints: small laptop, trackpad, spotty internet, accessibility needs.
  • Evidence: what you observed (recording, screenshot, user quote, or your own run-through).

Now you’re ready to review without drifting into taste debates.

Folded map icon illustrating a step-by-step review path
Think of the workflow like a map: start with “what is this screen for,” then move from understanding → action → friction → trust.

1) Intent & audience terms (so you don’t critique the wrong thing)

Primary job: the one thing the screen must accomplish. If you can’t say it in a short sentence, the screen will feel “busy” no matter how pretty it is.

Primary user: the most common or most important user for this screen. One screen can’t perfectly serve every persona at once.

Moment of use: when/why someone is here (rushing, comparing, recovering from an error). Your UX decisions change depending on the moment.

Success criteria: what the user should achieve and what the business should achieve. Write both; it prevents “conversion” fixes that harm understanding.

Workflow note to write: “This screen’s primary job is ___ for ___ who are ___.”

2) Information clarity terms (can people understand what they’re looking at?)

Information scent: the cues that tell users they’re in the right place (headings, labels, familiar wording). Weak scent causes back-and-forth clicking.

Hierarchy: what visually reads as most important, second, third. If hierarchy doesn’t match the primary job, users feel lost.

Scanning: how easily someone can pick out key facts without reading everything. Most pages are scanned first, read second.

Ambiguity: where a label could mean two things (e.g., “Continue” without saying what happens next). Ambiguity adds hesitation.

Progressive disclosure: showing the basics first, then details when needed. Good for reducing overwhelm, risky if it hides required info.

Quick test: cover the body text with your hand and look only at headings/buttons. Do you still understand the path?

3) Action & decision terms (can they confidently take the next step?)

Call to action (CTA): the main action you want. A good CTA is specific (“Save and continue”), not generic (“Submit”).

Affordance: what makes something feel clickable/draggable/typeable. If it doesn’t look interactive, people won’t try.

Decision load: how many choices you ask for at once. Too many options create delays and second-guessing.

Default: the pre-selected option. Defaults are powerful; they should be safe and reversible.

Commitment: the moment the user feels “I’m locked in.” If commitment feels too early, users abandon.

Tab key and button icon for keyboard navigation check
If you’re reviewing on Windows, try a keyboard-only pass (Tab/Shift+Tab/Enter). It’s a fast way to spot weak affordances and confusing focus order.

Workflow note to write: “The next best action is ___, but it’s competing with ___, so the user might ___.”

4) Friction terms (where do they slow down, get stuck, or give up?)

Friction: anything that adds effort without adding value (extra fields, unclear requirements, surprising steps).

Cognitive load: how much thinking is required. High cognitive load can come from wording, layout, or too many conditions.

Interaction cost: extra taps/clicks/scrolling. Cost isn’t always bad, but it should “buy” clarity or safety.

Form burden: how hard it is to fill in inputs (validation rules, formatting, required fields). Reduce where possible; explain where not.

Dead end: when the user has no obvious next move (especially after an error).

Useful way to capture friction: write the exact moment you hesitated, and what question was in your head.

5) Feedback & error terms (does the interface talk back at the right time?)

System status: clear signals about what’s happening (loading, saving, submitted). Silence feels like failure.

Validation: how the system handles input rules. The best validation is timely (not after a long form submit) and specific (what to fix).

Error message quality: a good message says what happened, why, and what to do next—without blame.

Recovery path: a direct way out of trouble (undo, edit, retry, contact support). Recovery reduces fear.

Confirmation: proof the action worked (receipt screen, toast, email notice). Confirmations should match the seriousness of the action.

Workflow note to write: “When ___ happens, the user sees ___, but they need ___ to recover.”

6) Trust, safety, and “is this legit?” terms

Credibility cues: signals that the product is real and safe (clear pricing, recognizable policies, consistent tone, accurate microcopy).

Risk perception: how risky the action feels (payment, deleting, sharing data). If perceived risk is high, users need extra clarity.

Transparency: explaining what will happen next (renewal terms, shipping costs, data usage) before the user commits.

Privacy friction: places where users worry you’re collecting too much. Offer explanations and choices, not just a checkbox.

Dark pattern: nudges that benefit the business by confusing or pressuring the user. Even “small” ones can damage long-term trust.

Shield and checkmark icon representing trust and safety cues
A simple check: if a friend asked “is it safe to click this?”, what on the screen would you point to as evidence?

7) Turn glossary observations into actionable notes (a reusable mini-template)

To keep your review from turning into opinion, write notes in this structure. It forces you to connect a term to a user outcome.

  • Observation (what you saw): “The main heading says ___, but the CTA says ___.”
  • Term (shared language): “This is a hierarchy / ambiguity issue.”
  • User impact (what it causes): “Users may hesitate because ___.”
  • Fix direction (not a full redesign): “Try changing ___ to ___ / move ___ above ___ / add a short line explaining ___.”
  • How to verify: “Success looks like fewer ___ / faster ___ / fewer retries on ___.”

One good note beats ten vague ones.

Takeaway: the reusable workflow in one pass

Define the screen’s job, check clarity, check actions, locate friction, confirm feedback, then scan for trust gaps. Use the glossary terms as labels so your notes are easy to discuss—and easy to fix.