NOTE: This study is covered by a non-disclosure agreement which bars me from being specific about devices, clients, or precise findings. The following information is the most I am legally allowed to disclose about this work.
OVERVIEW
In July 2022, a Fortune 100 telecom company needed to compare its latest smartphone camera against three flagship competitors.
My team and I therefore ran tests on four phones: Device #1, Device #2, Device #3, and Device #4.
We ran N=34 in-person user tests, conducted N=12 diary studies, and collected time-on-task data and qualitative feedback.
OBJECTIVES
1. Determine which smartphone camera and camera features were most preferred by users.
2. Collect overall time-on-task data to find out which smartphone camera was the most efficient to use.
METHODOLOGY
NEW HIRE TRAINING

Photo by Jonathan Borba @ Pexels.com
I wasn’t just a moderator on this study: I was the project leader, too.
Since I had so much input into how this study was designed, I could keep our new hire heavily involved with every step of the research preparation process.
This gave them insight into why we made the decisions we made and helped them understand how to set up a study on their own in the future.
It also gave the new hire the opportunity to complete moderation practice runs, with me acting as the interviewee, so they were better prepared for a real one-on-one interview session.
WARMUPS
During testing, we began our sessions by asking some simple warm-up questions.
Then, we had each participant spend time with each of the four devices being tested. This way, participants could form first impressions of each device and get a better idea of how to use each one.
After that, we had participants rank the devices on a scale from 1 to 10; 10 indicated a phone they most preferred, and 1 indicated a phone they least preferred. (They didn’t have to rank devices a 1 or a 10 if they didn’t want to do so.)
We then asked follow-up questions to understand why participants ranked the devices the way they did.
CAMERA TASKS

Photo by feyzayildirimphoto @ Pexels.com
After our first impressions were finished, we had participants complete specific tasks with each phone. This included activities like…
- Taking portrait mode photos
- Snapping photos from a distance with the telephoto lens
- Capturing a regular photo with no setting changes
- Taking a brief video
- Capturing a panoramic snapshot of the test room
Tasks and devices were counterbalanced to control for order effects and other biases.
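Counterbalancing like this is often done with a balanced Latin square, where every device appears in every serial position and each device precedes every other device exactly once. The sketch below shows one common construction in Python; the device labels are placeholders, and the study's actual assignment scheme isn't something I can disclose.

```python
# Build a balanced Latin square: each item appears once per row and once
# per column, and each ordered pair of items is adjacent exactly once.
# Standard construction for an even number of conditions.
def balanced_latin_square(items):
    n = len(items)
    square = []
    for row in range(n):
        order = []
        for col in range(n):
            if col % 2 == 0:
                idx = (row + col // 2) % n          # walk forward
            else:
                idx = (row + n - (col + 1) // 2) % n  # walk backward
            order.append(items[idx])
        square.append(order)
    return square

# Hypothetical labels; participants would be assigned rows in rotation.
orders = balanced_latin_square(["Device #1", "Device #2", "Device #3", "Device #4"])
```

Each of the four rows is a presentation order, so participant k simply receives row k mod 4.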
Participants were also timed using a stopwatch to determine how long each task took to complete with each device.
SHUTTER ACCURACY
Our clients wanted not only time-on-task and Likert scale data but also the relative input lag of each device.
We captured this data by estimating the shutter response accuracy of each phone.
To accomplish this, we ran an animation of two shapes passing over one another, and asked participants to take a photo once the shapes aligned.
The animation sped up gradually the longer each participant continued taking photos.
After completing these accuracy tests with each of the four devices, my team and I analyzed each photo taken at every speed and compared it to a perfect overlap. Each photo received a numerical score reflecting how quickly the shutter performed; highly accurate photos scored higher than inaccurate ones. Averaging these scores helped us determine a baseline estimate for which shutters were most responsive to user input.
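The scoring-and-averaging step described above can be sketched as follows. The rubric here is entirely hypothetical (the real one is covered by the NDA): it assumes each photo's misalignment is measured in pixels and mapped linearly to a 0-100 score, with a perfect overlap scoring 100.

```python
# Hypothetical scoring rubric: map a photo's measured misalignment (in
# pixels) to a 0-100 score, where a perfect overlap scores 100.
def accuracy_score(offset_px, max_offset_px=50):
    offset = min(abs(offset_px), max_offset_px)  # clamp extreme misses
    return 100 * (1 - offset / max_offset_px)    # 0 px offset -> 100

# Average the per-photo scores into one baseline estimate per device.
def shutter_baseline(offsets_px):
    scores = [accuracy_score(o) for o in offsets_px]
    return sum(scores) / len(scores)

# Invented example offsets for one device across increasing animation speeds:
baseline = shutter_baseline([2, 5, 11, 24])
```

Comparing these per-device baselines is what would rank the shutters by responsiveness.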
NIGHT MODE
After our shutter accuracy tests, we moved on to test the “Night Mode” features of our devices*. We asked participants to take a photo of an object in the room in low lighting conditions and timed participants to see how long it took to capture each image.
To ensure each participant was subjected to similar lighting conditions, my team and I used luminance meters and adjusted the room lighting as necessary.
*Device #4 had no automatic “Night Mode,” so it was excluded from this test. All other devices had an automatic low lighting feature.
DEVICE RE-RANK
For our last task, participants re-ranked each of the four devices on the same 1-to-10 scale that was introduced at the start of testing.
This time, they were asked to take their experiences from testing into account and provide qualitative feedback on why they chose to rank each device the way they did.
Generally, participants enjoyed using phones that were not only more time-efficient but also had simpler navigational elements.
Once participants had re-ranked each of the four devices, they were thanked for their time and dismissed.
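One simple way to compare the initial and final rankings is to compute each device's average rating shift after hands-on use. The sketch below uses invented ratings purely for illustration; the study's real data and analysis are covered by the NDA.

```python
# Average post-minus-pre rating change for one device across participants.
# Positive values mean the device gained favor after hands-on testing.
def mean_shift(pre, post):
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

# Hypothetical 1-to-10 ratings from three participants for one device:
pre_ratings  = [6, 7, 5]   # before the camera tasks
post_ratings = [8, 9, 7]   # after the camera tasks
shift = mean_shift(pre_ratings, post_ratings)
```

Running this per device makes it easy to see which phones rose or fell in participants' estimation once they had real task experience, which is exactly the comparison the re-rank was designed to support.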
DIARY STUDY
N=34 participants completed in-person sessions, and N=12 of those 34 were selected by our research team to submit additional qualitative data over the course of a one-week mobile diary study.
Participants selected one of the four devices to take home for a one-week period. During that time, they were given daily assignments from our team – “Capture an object in motion,” “Take a 5-second video of something,” etc.
They then uploaded these images to an online submissions folder.
They were also asked to write in a small journal provided by our research team, detailing their thoughts on the borrowed device.
After one week with their borrowed smartphone, these 12 participants shipped the phones back to our research team, along with their written journals.
FINDINGS
PARTICIPANTS PREFERRED SIMPLER UI
Participants in our tests generally liked phones that had simpler user interfaces, so they could easily point and shoot with their devices. While special features such as Portrait Mode or Night Mode were generally well-liked, the main feature that participants cared about was convenience.
One participant, a parent to a toddler whose actions or behaviors "could change at any minute," said that the speed and ease with which they could use their phone meant a lot to them. They explained that a simpler user interface helped them capture once-in-a-lifetime moments that could disappear in an instant.
DEVICE #4 WAS THE LEAST PREFERRED DEVICE
Four devices were tested during this study: Devices #1, #2, #3, and #4. At the start of testing, our client expected Device #4 to be the favorite.
Our tests found the opposite was true.
Device #4 not only took the longest to complete basic tasks but also ranked lowest among all devices in user preference, for two main reasons:
- Device #4's UI was not seen as intuitive
- Device #4 had no automatic Night Mode
The reason likely lay in its intended use case: Device #4 was designed to look and function like a replacement for a professional-grade DSLR camera.
This confused and frustrated most of our participants, many of whom wanted simpler camera settings that let them point and shoot at a moment's notice. One participant stated clearly that while they liked photography and thought the idea of Device #4 was appealing, they would never buy it because "I know how to point and shoot a smartphone. I don't know what an F-Stop number is."
DEVICE #3 WAS MOST LIKED + MOST EFFICIENT
In the end, Device #3 was the most popular device among participants.
Participants found its simple point-and-shoot capabilities very useful, while it still offered enough customization (such as Portrait Mode, a customizable Night Mode, and filters) to keep the camera interesting. It was also the most efficient smartphone camera of the four, posting the lowest time-on-task in every single category.
Device #2 was a close second. Like Device #3, it offered simple point-and-shoot capabilities, but it was slightly slower. It also had slightly more camera features, which some participants liked but others found overwhelming.
OUTCOMES
1. The client used our study to alter their product roadmap before the device was released the following year.
2. The client later returned to our company with a much larger budget and asked a new team to run the study again.