Benchmark Usability Testing

Project Overview

Working on behalf of Blink UX, I partnered with a market-leading social media client, collaborating with a team of researchers and research assistants to conduct a series of six benchmark usability testing studies on virtual reality, augmented reality, and wearable devices.

Goals

Primary: Measure specific usability metrics, such as task completion time, error rate, and success rate, to evaluate the efficiency and effectiveness of the user interface and to track improvements or declines in the user experience over time.
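
To make these metrics concrete, here is a minimal Python sketch, using hypothetical data and field names rather than the client's actual pipeline, showing how success rate, error rate, and mean time-on-task could be computed from session records:

```python
# Hypothetical example: computing benchmark metrics from session records.
# The field names and values below are illustrative, not real study data.
from statistics import mean

sessions = [
    {"task": "send_message", "completed": True,  "errors": 0, "time_s": 42.0},
    {"task": "send_message", "completed": True,  "errors": 2, "time_s": 71.5},
    {"task": "send_message", "completed": False, "errors": 3, "time_s": 120.0},
]

success_rate = mean(1 if s["completed"] else 0 for s in sessions)
errors_per_attempt = mean(s["errors"] for s in sessions)
# Time-on-task is often reported for successful attempts only.
mean_time_on_task = mean(s["time_s"] for s in sessions if s["completed"])

print(f"Success rate:       {success_rate:.0%}")
print(f"Errors per attempt: {errors_per_attempt:.1f}")
print(f"Mean time-on-task:  {mean_time_on_task:.1f}s")
```

Tracking the same metrics on the same tasks from study to study is what makes the benchmark comparable over time.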

Secondary: Identify areas where users experience difficulties or frustration.

Methodology Used:

Benchmarking & usability testing

Timeline:

3 weeks per study × 6 studies = ~4 months

Stakeholders:

Clients, project managers, researchers, research assistants, visual designers, and recruiters

Challenges Encountered:

Working within tight timelines and accommodating last-minute changes to project requirements.

Planning

During the planning phase, I collaborated with the Lead Researcher to refine and optimize the session guides to maximize efficiency, clarity, and alignment with client needs.

We iterated on the guides for each study to determine the behavioral and attitudinal data we needed to collect (a sketch of how one task record might be structured follows this list), including:

  • Behavioral and perceived time-on-task

  • Task success

  • Attitudinal measures (confidence, satisfaction, ease, delight, frustration)

  • Task-specific questions
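
To make the measures above concrete, here is a hypothetical sketch of how a single task record could be structured; the field names and rating scales are illustrative, not the client's actual schema:

```python
# Hypothetical record for one participant completing one task.
# Field names and rating scales are illustrative, not the actual study schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskRecord:
    participant_id: str
    task_id: str
    success: bool                      # task success (pass/fail)
    time_on_task_s: float              # behavioral time-on-task, in seconds
    perceived_time_s: Optional[float]  # participant's estimate of elapsed time
    confidence: int                    # attitudinal ratings, e.g. on 1-7 scales
    satisfaction: int
    ease: int
    delight: int
    frustration: int
    task_specific_notes: str = ""      # answers to task-specific questions

record = TaskRecord(
    participant_id="P042", task_id="pair_wearable_device",
    success=True, time_on_task_s=95.0, perceived_time_s=60.0,
    confidence=6, satisfaction=5, ease=5, delight=4, frustration=2,
    task_specific_notes="Hesitated at the pairing confirmation screen.",
)
```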

To ensure each session guide was effective, we conducted internal pilot tests with Blink UX employees, which allowed us to make final adjustments. The entire planning and review process took approximately 3 to 5 days.

Recruitment

For participant recruitment, our project manager worked with the client and a recruitment agency. We screened and scheduled around 100 participants per study, starting the recruitment process 2 to 3 weeks before the sessions began.

Data Collection

For each of our 2- to 3-week studies, I moderated 3 to 4 usability testing sessions per day, engaging approximately 80 participants per study. I also observed sessions run by the rest of the team, offering feedback and guidance to handle unforeseen situations and maintain consistency. As a team, we collected between 30,000 and 50,000 data points per study, for a total of roughly 250,000 data points across all six studies.

Data Analysis

Following the data collection phase for each study, I reviewed our data repository to highlight and document usability issues our research participants encountered. These were then added to an issues list and organized by frequency and severity.
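
As a simplified illustration of that prioritization, not the actual analysis tooling, issues can be ranked by severity first and then by how many participants encountered them:

```python
# Hypothetical sketch: ranking usability issues by severity, then frequency.
# The severity scale and example issues below are illustrative only.
issues = [
    {"issue": "Gesture to open the menu is hard to discover", "severity": 3, "frequency": 41},
    {"issue": "Device pairing flow times out silently",       "severity": 4, "frequency": 12},
    {"issue": "Caption text is too small in the AR view",     "severity": 2, "frequency": 27},
]

prioritized = sorted(issues, key=lambda i: (i["severity"], i["frequency"]), reverse=True)

for rank, issue in enumerate(prioritized, start=1):
    print(f'{rank}. [severity {issue["severity"]}, n={issue["frequency"]}] {issue["issue"]}')
```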

Project Outcomes

The project concluded with enthusiastic feedback from each client, who valued the comprehensive data and analysis we provided. The insights gained are being used to demonstrate the impact of design changes over time and to establish a baseline for future benchmarking efforts.

This case study highlights my expertise in contributing to complex usability testing projects, collaborating effectively with clients and stakeholders, and delivering actionable insights that drive design improvements.