One of the largest broadcast television companies wanted to understand the impressions, pain points, and overall experience of a realistic user base navigating its OTT application. A competitor's app was tested for comparison.

*The client is anonymous for confidentiality reasons.

Role: Lead Researcher

Method: In-Person Moderated Usability Test

Product: OTT News Streaming App


Users preferred the overall experience of the company app to the competitor's app.

Users struggled with being unable to control some of the app's automated features.

Participants expected features found in other streaming apps (such as Hulu and Netflix).

The interrupter proved useful for keeping users in the app.

The “Watch” carousel was intuitive to use, but users had mixed feelings about its automated features.

I created an "at-home" experience:

I simulated how a user would actually experience the application on a TV at home. The testing room was equipped with audio and video capture devices, and each session was streamed live to stakeholders.


13 participants were recruited for this study:


I created a series of tasks for users to perform that tested the following features on Roku and Firestick, for both the company app and the competitor app:

I also created a companion facilitator guide to take notes and ask task-specific follow-up questions.


I took notes during each session and reviewed the recordings afterward.

For each session I analyzed:

The final report was organized by the features tested rather than by task.

Design Recommendations (a selection, based on the most obvious pain points)


Set expectations with observers from the start, and hold their questions until the end.

Because all the tests were streamed live via Zoom to the company's stakeholders, observers could ask questions and comment during the sessions. Some of the observers were the app's designers, which made the sessions a great opportunity for cross-team collaboration. Unfortunately, their questions and comments sometimes interrupted the natural flow of the tasks and were not always phrased in ways that avoided biasing the data. Going forward, I will be more deliberate about how I receive live feedback from stakeholders: I will set clear expectations at the outset, limiting questions during sessions or holding them until the end.

Take notes immediately after each session, while memory is freshest.

Note-taking is a crucial part of a usability test, but it requires balancing the urge to capture everything you observe against the need to stay present. In these tests, I took brief pen-and-paper notes on what users did and said. Those notes seemed valuable at first, but when I watched the session recordings afterward, I found they had missed critical moments and insights the camera captured. Memory is freshest right after an event, so rather than focusing on note-taking during the session, I would watch the playback immediately afterward and take notes then. This would let me be more present during the session while still producing high-quality notes.
