1. Introduction (5 minutes)

  • Thank the participant for their time.
  • Explain the purpose of the test: to evaluate the visualization, not the user’s abilities.
  • Explain the “think-aloud” method: ask them to verbalize their thoughts throughout the session.

2. Task Scenarios (20-30 minutes)

  • Task 1: Compare: “Using this dashboard, can you compare the average bus delay on a weekday versus a weekend?”
    • Expectation: The user should use the filtering or selection options to isolate weekdays and weekends.
  • Task 2: Find Anomaly: “Looking at this data, can you find the single worst bus delay that occurred last week? What was the weather like on that day?”
    • Expectation: The user should be able to locate the outlier and use the correlated weather data to answer the second part of the question.
  • Task 3: Locate Hotspot: “If you were a city planner, where would you recommend adding a new bus lane to solve a delay problem? Use the visualization to support your answer.”
    • Expectation: The user should be able to identify a consistently delayed route on the map and correlate it with the line chart.

3. Post-Test Interview (10 minutes)

  • “What was the most confusing part of using this tool?”
  • “Did this tool help you discover information you didn’t already know?”
  • “On a scale of 1-5, how well did the visualizations meet your specific needs?”

Recruitment Criteria:

  • Criterion 1: [Specific requirement]
  • Criterion 2: [Specific requirement]
  • Criterion 3: [Specific requirement]

Screening Questions:

  1. [Question to validate target user group]
  2. [Question about relevant experience]
  3. [Question about availability/consent]

Pre-Test Setup

Equipment Checklist:

  • Computer/device for testing
  • Screen recording software running
  • Audio recording equipment tested
  • Backup recording method ready
  • Test environment prepared
  • Consent forms printed/ready

Test Environment:

  • Browser: [Recommended browser and version]
  • Screen Resolution: [Optimal resolution]
  • Data Setup: [What data will be loaded]
  • Starting URL: [Where test begins]

Facilitator Preparation:

  • Script reviewed and practiced
  • Timing planned for each section
  • Note-taking materials ready
  • Observer roles assigned (if applicable)

Test Script

Introduction (5 minutes)

Welcome and Introduction: “Hi [Name], thank you for participating in our usability test today. I’m [Your name] and I’ll be guiding you through this session.”

Purpose Explanation: “Today we’re testing a data visualization tool for [domain/purpose]. We want to understand how people interact with it and where we can improve the design. This is not a test of your abilities – we’re testing the software, not you.”

Think-Aloud Protocol: “As you work through the tasks, please think out loud. Tell me what you’re looking for, what you’re thinking, and what’s confusing or helpful. This will help us understand your experience.”

Recording Consent: “We’d like to record your screen and audio to help us analyze the results later. Is that okay with you?” [Confirm consent]

Questions: “Do you have any questions before we begin?”

Background Questions (5 minutes)

  1. Tell me about your role and how you typically work with [domain] data.
  2. What tools do you currently use for data analysis or visualization?
  3. How comfortable are you with web-based applications? (1-5 scale)
  4. [Domain-specific context question]

System Overview (3 minutes)

Brief Orientation: “I’m going to show you the tool we’ll be testing. This is a visualization of [data description]. Take a moment to look around and tell me your first impressions.”

Initial Questions:

  • What do you think this tool is for?
  • What stands out to you?
  • What questions do you have?

Task Testing (25-30 minutes)

Task 1: Data Overview and Exploration

Scenario: [Set up a realistic scenario] “Imagine you’re [role scenario]. You want to get a general understanding of the data patterns.”

Task: “Please explore the visualization and tell me what insights you can gather about [specific aspect].”

Success Criteria:

  • User navigates to appropriate view
  • User interprets data correctly
  • User demonstrates understanding of key patterns

Probing Questions:

  • What do you notice about [specific pattern]?
  • How would you describe this data to a colleague?
  • What additional information would be helpful?

Notes Section: [Space for detailed observations]


Task 2: Specific Data Lookup

Scenario: [Specific scenario requiring precise information]

Task: “Find the [specific data point] for [specific criteria].”

Success Criteria:

  • User locates correct information within [time limit]
  • User uses appropriate tools/filters
  • User expresses confidence in result

Probing Questions:

  • How did you find that information?
  • Was that what you expected to see?
  • How confident are you in this result?

Notes Section: [Space for detailed observations]


Task 3: Data Comparison

Scenario: [Scenario requiring comparison between data points]

Task: “Compare [data point A] and [data point B]. What differences do you notice?”

Success Criteria:

  • User successfully compares relevant data
  • User identifies key differences
  • User uses appropriate visualization features

Probing Questions:

  • What method did you use to compare these?
  • Are there other ways you might make this comparison?
  • What makes this comparison easy or difficult?

Notes Section: [Space for detailed observations]


Task 4: Trend Analysis

Scenario: [Scenario requiring temporal or pattern analysis]

Task: “Identify trends in [specific data dimension] over [time period].”

Success Criteria:

  • User navigates to trend view
  • User correctly interprets temporal patterns
  • User can articulate findings

Probing Questions:

  • What trends do you see?
  • What might explain these patterns?
  • How would you verify these findings?

Notes Section: [Space for detailed observations]


Task 5: [Custom Task Based on Your Domain]

Scenario: [Domain-specific scenario]

Task: [Specific task description]

Success Criteria:

  • [Criterion 1]
  • [Criterion 2]
  • [Criterion 3]

Probing Questions:

  • [Question 1]
  • [Question 2]
  • [Question 3]

Notes Section: [Space for detailed observations]

Post-Test Interview (10 minutes)

Overall Experience:

  1. What was your overall impression of the tool?
  2. What did you like most about it?
  3. What was most frustrating or confusing?
  4. How does this compare to tools you currently use?

Specific Features:

  1. How useful were the filtering options?
  2. What did you think of the visual design and layout?
  3. Were the interactions (hover, click, etc.) intuitive?
  4. How was the performance/responsiveness?

Improvements:

  1. What would make this tool more useful for your work?
  2. What features are missing that you would expect?
  3. What would you change about the interface?
  4. Who else in your organization might find this useful?

Satisfaction Ratings:

Please rate the following on a scale of 1-5 (1=Poor, 5=Excellent):

  • Overall usefulness: ___
  • Ease of use: ___
  • Visual clarity: ___
  • Speed/performance: ___
  • Likelihood to recommend: ___
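Analysis note (not read to the participant): once ratings are collected from several participants, per-dimension means and medians give a quick summary. A minimal sketch in Python, assuming the ratings are transcribed by hand into a dictionary; every score below is illustrative, not real participant data:

```python
from statistics import mean, median

# Hypothetical 1-5 ratings transcribed from completed forms.
# All values are illustrative placeholders.
ratings = {
    "Overall usefulness":      [4, 5, 3, 4],
    "Ease of use":             [3, 4, 2, 4],
    "Visual clarity":          [5, 4, 4, 5],
    "Speed/performance":       [4, 3, 4, 4],
    "Likelihood to recommend": [4, 5, 3, 4],
}

for dimension, scores in ratings.items():
    print(f"{dimension:<24} mean={mean(scores):.1f}  median={median(scores)}")
```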

Wrap-up (2 minutes)

Thank You: “Thank you so much for your time and feedback. Your input is incredibly valuable for improving this tool.”

Next Steps: “We’ll be analyzing all the feedback we receive and using it to improve the design. If you’re interested, we can share the final results with you.”

Contact Information: “If you think of anything else or have questions, please feel free to contact me at [email].”

Observation Guidelines

What to Observe:

  • Navigation patterns: How users move through the interface
  • Error recovery: How users handle mistakes or confusion
  • Efficiency: Time taken and number of steps for tasks
  • Emotional reactions: Frustration, delight, confusion
  • Help-seeking behavior: When and how users look for assistance

Note-Taking Template:

Time    | Observation     | Quote          | Severity       | Follow-up
[mm:ss] | [What happened] | [User’s words] | [High/Med/Low] | [Action needed]

Common Issues to Watch For:

  • Users clicking on non-interactive elements
  • Users looking for features that don’t exist
  • Users misinterpreting data or visualizations
  • Users struggling with navigation or layout
  • Users expressing confusion about terminology

Data Collection

Quantitative Metrics (a tabulation sketch follows this list):

  • Task completion rate (%)
  • Task completion time (seconds)
  • Number of errors per task
  • Number of clicks/interactions per task
  • Time to first interaction
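If each task attempt is logged in a structured form, these metrics can be tabulated with a short script. A minimal sketch, assuming one record per participant per task; the field names (task, completed, seconds, errors, clicks) are assumptions rather than a prescribed log format:

```python
from statistics import mean

# Hypothetical per-attempt records; field names and values are assumptions.
attempts = [
    {"task": 1, "completed": True,  "seconds": 95,  "errors": 0, "clicks": 12},
    {"task": 1, "completed": False, "seconds": 180, "errors": 3, "clicks": 25},
    {"task": 1, "completed": True,  "seconds": 110, "errors": 1, "clicks": 15},
]

def task_metrics(records, task_id):
    """Completion rate, mean time on success, and mean errors/clicks for one task."""
    rows = [r for r in records if r["task"] == task_id]
    done = [r for r in rows if r["completed"]]
    return {
        "completion_rate_pct": 100 * len(done) / len(rows),
        "mean_time_s": mean(r["seconds"] for r in done) if done else None,
        "mean_errors": mean(r["errors"] for r in rows),
        "mean_clicks": mean(r["clicks"] for r in rows),
    }

print(task_metrics(attempts, 1))
# e.g. {'completion_rate_pct': 66.7, 'mean_time_s': 102.5, 'mean_errors': 1.3, 'mean_clicks': 17.3}
```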

Qualitative Data:

  • User quotes and reactions
  • Pain points and friction areas
  • Positive feedback and delights
  • Suggestions for improvement
  • Comparison to existing tools

Recording Requirements:

  • Screen recording for all sessions
  • Audio recording with participant consent
  • Written notes from observer(s)
  • Post-session summary written immediately after each test

Post-Test Analysis Plan

Immediate Actions:

  • Review recordings within 24 hours
  • Compile quantitative metrics
  • Identify critical usability issues
  • Categorize feedback themes

Reporting:

  • Create summary report with findings
  • Prioritize issues by severity and frequency (see the sketch after this list)
  • Recommend specific design changes
  • Plan follow-up testing if needed
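One transparent way to do the severity/frequency prioritization is to weight each severity level and multiply by how many participants hit the issue. A minimal sketch, reusing the High/Med/Low labels from the note-taking template; the weights and example issues are illustrative assumptions:

```python
# Hypothetical issue log; severity labels mirror the note-taking template.
issues = [
    {"desc": "Clicked non-interactive legend",    "severity": "High", "count": 4},
    {"desc": "Missed the weekday/weekend filter", "severity": "Med",  "count": 6},
    {"desc": "Unsure what the delay units mean",  "severity": "Low",  "count": 2},
]

SEVERITY_WEIGHT = {"High": 3, "Med": 2, "Low": 1}  # assumed weighting

def priority(issue):
    """Severity weight times the number of participants who hit the issue."""
    return SEVERITY_WEIGHT[issue["severity"]] * issue["count"]

for issue in sorted(issues, key=priority, reverse=True):
    print(f"{priority(issue):>3}  {issue['severity']:<4} x{issue['count']}  {issue['desc']}")
```

Tuning the weights, or sorting by severity first and frequency second, are equally reasonable choices; the point is a consistent, explainable ordering of the fix list.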