Participant: Mark (Domain Expert)
Date: Week 7, Day 3

Task 1 (Compare):

  • Behavior: Mark immediately used the day-of-week filter. He noted that the line chart clearly showed lower delays on weekends.
  • Feedback: “This is so much faster than pulling up two different spreadsheets and comparing them manually.”
  • Insight: The filter and line chart effectively addressed the “compare” task.

Task 2 (Find Anomaly):

  • Behavior: Mark hovered over a spike in the line chart. The tooltip correctly showed a large delay on a specific day. He then checked the weather data and commented on the high rainfall on that day.
  • Feedback: “I’ve always suspected rain was a major factor, but now I have some data to back it up.”
  • Insight: The details-on-demand and correlated views successfully enabled the “correlate” task and led to a key insight.

Task 3 (Locate Hotspot):

  • Behavior: He used the bar chart to sort routes by average delay. He then clicked the top-ranked route, which highlighted its location on the map.
  • Feedback: “It’s clear that Route 43 is our biggest problem area. This map shows exactly where the delays are most concentrated.”
  • Insight: The brushing and linking functionality proved effective for a key task.

Task Performance

Task 1: Compare Transit Performance Across Days

Start Time: 00:15
End Time: 02:30
Duration: 2m 15s
Completion Status: Completed

Observations:

| Time | Action/Behavior | User Quote | Notes |
| --- | --- | --- | --- |
| 00:15 | Examined line chart immediately | “I can see patterns here right away” | Good initial engagement |
| 01:00 | Hovered over data points for details | “These tooltips are helpful” | Using interactive features |
| 01:45 | Applied route filter to focus on Route 43 | “Let me look at just the problematic route” | Strategic filtering |
| 02:30 | Concluded Wednesday shows highest delays | “Definitely Wednesday is the worst day” | Task completed successfully |

Key Findings:

  • Successful Actions: Line chart enabled quick day-to-day comparison; tooltips provided the necessary detail
  • Difficulties: None observed; task completed smoothly
  • Errors: None
  • Insights Gained: Wednesday consistently shows higher delays across routes

Severity Ratings:

  • Navigation Issues: None
  • Comprehension Issues: None
  • Interaction Issues: None

Task 2: Identify Delay Anomaly

Start Time: 02:45
End Time: 04:20
Duration: 1m 35s
Completion Status: Completed

Observations:

| Time | Action/Behavior | User Quote | Notes |
| --- | --- | --- | --- |
| 02:45 | Scanned bar chart for highest values | “Route 43 definitely stands out” | Quick visual identification |
| 03:10 | Clicked on Route 43 bar | “Let me see the details for this route” | Using coordinated views |
| 03:35 | Examined weather correlation | “This makes sense - rain causes delays” | Making domain connections |
| 04:20 | Identified Route 43 Wednesday as anomaly | “18 minutes is unusually high” | Anomaly successfully identified |

Key Findings:

  • Search Strategy: Visual scanning followed by detailed exploration
  • Tool Usage: Bar chart for overview, line chart for detail, weather data for context
  • Accuracy: Correctly identified the data anomaly
  • Confidence: High confidence expressed in findings

Issues Identified:

  • Minor: Could benefit from highlighting extreme values automatically
  • Enhancement: Weather icons might make correlation more obvious
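The participant’s suggestion to highlight extreme values automatically could be prototyped with a simple statistical threshold. A minimal sketch follows; the route names and delay averages are hypothetical examples, and the one-standard-deviation cutoff is an assumed tuning choice, not part of the tool:

```python
# Sketch of automatic extreme-value highlighting: flag any route whose
# average delay exceeds mean + 1 standard deviation across all routes.
# Route names and delay values here are hypothetical illustrations.
from statistics import mean, stdev

avg_delay_min = {"Route 41": 6.0, "Route 42": 7.5, "Route 43": 18.0, "Route 44": 5.5}

values = list(avg_delay_min.values())
threshold = mean(values) + stdev(values)  # assumed 1-sigma cutoff; tune as needed

extremes = [route for route, delay in avg_delay_min.items() if delay > threshold]
print(extremes)  # ['Route 43']
```

With only a handful of routes a fixed multiple of the standard deviation is crude; a percentile-based or domain-set threshold may be preferable in practice.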

Task 3: Locate High-Delay Geographic Areas

Start Time: 04:30
End Time: 06:15
Duration: 1m 45s
Completion Status: Completed

Observations:

| Time | Action/Behavior | User Quote | Notes |
| --- | --- | --- | --- |
| 04:30 | Immediately looked at the map view | “The map should show me the hotspots” | Direct approach to spatial task |
| 05:00 | Correlated map markers with bar chart | “These locations match the high-delay routes” | Using multiple views effectively |
| 05:45 | Noted geographic clustering | “The problem routes are all in the same area” | Spatial insight gained |
| 06:15 | Concluded about downtown congestion | “Downtown clearly has infrastructure issues” | Task completed with actionable insight |

Key Findings:

  • Comparison Method: Used map markers sized by delay values, cross-referenced with bar chart
  • Visual Effectiveness: Map visualization clearly showed geographic concentration of delays
  • Accuracy: Correctly identified downtown area as problematic region

Post-Test Interview Responses

Overall Experience:

Q: What was your overall impression of the tool?
A: “This is exactly what we need for our transit planning meetings. The visual approach makes it so much easier to spot problems and explain them to management. Much better than our current spreadsheet reports.”

Q: What did you like most about it?
A: “The way everything connects together - when I click on a route in the bar chart, I can immediately see it on the map and track its performance over time. That connection between views is really powerful.”

Q: What was most frustrating or confusing?
A: “Nothing major. Maybe the color scheme could be more intuitive - I initially thought green meant ‘go’ but it actually means low delay, which is good. That took a moment to adjust to.”

Q: How does this compare to tools you currently use?
A: “Night and day difference. We currently export data to Excel and create basic charts manually. This is interactive, faster, and reveals patterns we never would have spotted before.”

Specific Features:

Q: How useful were the filtering options?
A: “Very useful. Being able to focus on specific routes or time periods makes the analysis much more targeted. The ‘all routes’ option is great for getting the big picture first.”

Q: What did you think of the visual design and layout?
A: “Clean and professional. The three-panel layout feels natural - overview, details, and geography. My colleagues would definitely be comfortable using this in presentations.”

Q: Were the interactions (hover, click, etc.) intuitive?
A: “Yes, very intuitive. The hover tooltips give just the right amount of detail without cluttering the interface. Clicking to select routes felt natural.”

Q: How was the performance/responsiveness?
A: “Fast and smooth. No lag when switching between views or applying filters. That’s important when you’re presenting to a room full of people.”

Improvement Suggestions:

Q: What would make this tool more useful for your work?
A: “Adding passenger count data would be huge - then we could see delay impact relative to ridership. Also, being able to export insights or share specific views would help with reporting.”

Q: What features are missing that you would expect?
A: “Date range selection for historical analysis, and maybe some basic forecasting. Also, it would be great to overlay events like construction or weather alerts directly on the timeline.”

Summary of Findings

Successes:

  1. Task Completion: All three core tasks completed successfully with high confidence
  2. Intuitive Interface: Minimal learning curve, interactions felt natural
  3. Coordinated Views: Linking between bar chart, line chart, and map was highly effective
  4. Performance: Responsive interface enabled smooth exploration
  5. Insights Generated: User discovered actionable patterns about route performance

Issues Identified:

  1. Color Scheme Clarity: Initial confusion about green = good performance
  2. Missing Export Feature: Cannot save or share specific views
  3. Limited Historical Data: Only shows one week of data
  4. Weather Integration: Weather correlation requires manual interpretation

Recommendations:

  1. High Priority: Add legend clarifying color meanings, implement export/share functionality
  2. Medium Priority: Expand to multi-week historical data, add weather event overlays
  3. Future Enhancement: Integrate passenger ridership data, add forecasting capabilities

Task Success Metrics:

  • Completion Rate: 100% (3/3 tasks completed)
  • Average Task Time: 1m 52s (well under 3-minute target)
  • Error Rate: 0% (no incorrect conclusions drawn)
  • User Satisfaction: High (positive feedback on all aspects)
  • Learning Curve: Minimal (immediate productive use)
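The timing metrics above can be rechecked directly from the logged start/end times; the task and time values below are taken from this report:

```python
# Recompute task durations and the average task time from the session log.
def to_seconds(mmss: str) -> int:
    """Convert an 'MM:SS' session timestamp to seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

# (task, start, end) as logged in the Task Performance sections above
task_times = [("Task 1", "00:15", "02:30"),
              ("Task 2", "02:45", "04:20"),
              ("Task 3", "04:30", "06:15")]

durations = [to_seconds(end) - to_seconds(start) for _, start, end in task_times]
completion_rate = len(durations) / len(task_times)  # 3/3 completed
avg_seconds = sum(durations) / len(durations)
print(durations, round(avg_seconds))  # [135, 95, 105] 112  -> about 1m 52s
```

Note the three logged durations (2m 15s, 1m 35s, 1m 45s) average to roughly 112 seconds, i.e. about 1m 52s per task.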

Q: What would you change about the interface?
A: “[Participant response]”

Q: Who else in your organization might find this useful?
A: “[Participant response]”

Satisfaction Ratings:

  • Overall usefulness: [1-5 rating]
  • Ease of use: [1-5 rating]
  • Visual clarity: [1-5 rating]
  • Speed/performance: [1-5 rating]
  • Likelihood to recommend: [1-5 rating]

Critical Issues Identified

High Priority Issues:

  1. Issue: [Description of critical problem]
    • Impact: [How it affects user goals]
    • Frequency: [How often it occurred]
    • Recommended Fix: [Suggested solution]
  2. Issue: [Description of critical problem]
    • Impact: [How it affects user goals]
    • Frequency: [How often it occurred]
    • Recommended Fix: [Suggested solution]

Medium Priority Issues:

  1. Issue: [Description of moderate problem]
    • Impact: [How it affects user experience]
    • Recommended Fix: [Suggested solution]

Low Priority Issues:

  1. Issue: [Description of minor problem]
    • Impact: [Minor inconvenience or confusion]
    • Recommended Fix: [Suggested solution]

Positive Feedback

What Worked Well:

  • [Positive observation 1]
  • [Positive observation 2]
  • [Positive observation 3]

User Delights:

  • [Moment of particular user satisfaction]
  • [Feature that exceeded expectations]

Unexpected Behaviors

Surprising User Actions:

  • [Unexpected way user approached a task]
  • [Creative problem-solving observed]
  • [Misunderstanding that led to different approach]

New Insights:

  • [New understanding about user needs]
  • [Unexpected use case discovered]
  • [Different mental model than anticipated]

Technical Observations

Performance Issues:

  • Load Time: [Observed loading delays]
  • Responsiveness: [Lag in interactions]
  • Browser Issues: [Any browser-specific problems]

Accessibility Observations:

  • Text Size: [Any readability concerns]
  • Color Distinction: [Any color-related difficulties]
  • Interaction Size: [Any issues with click targets]

Facilitator Notes

Session Quality:

  • Think-Aloud Effectiveness: [How well participant verbalized thoughts]
  • Task Clarity: [Whether instructions were clear]
  • Time Management: [Whether session stayed on schedule]

Areas for Script Improvement:

  • [Suggestion for better questions]
  • [Timing adjustments needed]
  • [Additional probes that would be helpful]

Follow-up Actions

Immediate Actions Required:

  • [Critical fix needed before next session]
  • [Clarification needed from participant]
  • [Additional data collection needed]

Questions for Design Team:

  • [Question about design decision]
  • [Clarification needed about intended behavior]
  • [Discussion point for next team meeting]

Session Summary

Key Takeaways:

  1. [Main insight from this session]
  2. [Important finding that impacts design]
  3. [Validation or contradiction of design assumptions]

Participant Quote Summary:

“[Most impactful or representative quote from session]”

Overall Assessment:

[Brief summary of how this participant’s experience aligns with or differs from expectations]


Next Session Preparation

Adjustments for Next Test:

  • [Change to make based on this session]
  • [Additional question to explore]
  • [Technical issue to resolve]

Patterns to Watch:

  • [Behavior to observe in future sessions]
  • [Issue to see if it repeats]
  • [Positive reaction to validate]