Eyetracking Research
Eyetracking equipment can track and show where a person is looking. To do so, it shines near-infrared light at the person’s eyes to create small reflections. Cameras in the tracker capture those reflections and use them to estimate the position and movement of the eyes. That data is then projected onto the UI, resulting in a visualization of where the participant looked.
This research can produce three types of visualizations:
- Gazeplots (qualitative)
- Gaze replays (qualitative)
- Heatmaps (quantitative)
This video clip is a gaze replay: it shows how one participant’s eyes moved across a page on Bose.com.
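While a gaze replay is a video, gazeplots and heatmaps are static images rendered from recorded fixation data after the session. As a rough illustration of the difference (not the output of any particular vendor’s software), here is a minimal Python sketch that draws a gazeplot for one participant and aggregates several participants’ fixations into a heatmap. The fixation format (x, y, duration in milliseconds), the screen resolution, and the fake data are all assumptions made for the example.

```python
# Minimal sketch: rendering a gazeplot and a heatmap from exported fixation data.
# Assumes fixations are available as (x_px, y_px, duration_ms) tuples per participant;
# real eyetracking suites export richer data and do their own rendering.
import numpy as np
import matplotlib.pyplot as plt

SCREEN_W, SCREEN_H = 1920, 1080  # assumed screen resolution

def gazeplot(fixations, ax):
    """One participant: circles sized by fixation duration, numbered in viewing order."""
    xs, ys, durs = zip(*fixations)
    ax.scatter(xs, ys, s=np.array(durs), alpha=0.5, edgecolors="black")
    for i, (x, y) in enumerate(zip(xs, ys), start=1):
        ax.annotate(str(i), (x, y), ha="center", va="center", fontsize=8)
    ax.set_xlim(0, SCREEN_W)
    ax.set_ylim(SCREEN_H, 0)  # invert y: screen origin is the top-left corner
    ax.set_title("Gazeplot (one participant)")

def heatmap(all_fixations, ax, bins=(96, 54)):
    """Many participants: 2D histogram of fixations, weighted by duration."""
    xs, ys, durs = zip(*[f for participant in all_fixations for f in participant])
    h, _, _ = np.histogram2d(
        xs, ys, bins=bins, range=[[0, SCREEN_W], [0, SCREEN_H]], weights=durs)
    ax.imshow(h.T, extent=[0, SCREEN_W, SCREEN_H, 0], cmap="hot")
    ax.set_title("Heatmap (aggregated across participants)")

if __name__ == "__main__":
    # Fake fixation data, for illustration only
    rng = np.random.default_rng(0)
    participants = [
        [(rng.uniform(0, SCREEN_W), rng.uniform(0, SCREEN_H), rng.uniform(100, 600))
         for _ in range(40)]
        for _ in range(10)
    ]
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
    gazeplot(participants[0], ax1)
    heatmap(participants, ax2)
    plt.show()
```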
We use this eyetracking data to understand how people read online and how they process webpages. Our eyetracking research has yielded major findings such as:
- Banner blindness: People avoid elements (like banners) that they perceive as ads.
- Uncertainty in the processing of flat UI elements: Extremely flat UIs with weak signifiers require more user effort than UIs with strong signifiers do.
- Gaze patterns: Users tend to process different content in different ways. Two of the most common patterns are the F-pattern and the layer-cake pattern.
In an eyetracking study, the tracker has to be calibrated for each participant. Every individual has a different eye shape, face shape, and height. As a consequence, the tracker has to “learn” each participant before it can follow their gaze. Once the machine is calibrated, the participant has to stay roughly in the same position — moving too far side to side or leaning in or out can cause the tracker to lose calibration.
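Commercial trackers handle calibration with their own proprietary models, but the core idea can be sketched simply: the participant fixates a handful of targets at known screen positions, the tracker records its raw eye measurements for each one, and it fits a mapping from those measurements to screen coordinates. The sketch below illustrates that idea with an ordinary least-squares fit; the "raw eye feature" numbers and the 5-point target layout are invented for the example, and no real tracker works exactly this way.

```python
# Conceptual sketch of eyetracker calibration: fit a mapping from raw eye
# measurements to screen coordinates using points the participant looked at.
# Real trackers use proprietary, far more sophisticated models; the raw eye
# feature values below are made up purely for illustration.
import numpy as np

# Known calibration targets on screen (pixels), e.g. a 5-point pattern on a 1920x1080 display
targets = np.array([
    [960, 540],    # center
    [100, 100],    # top-left
    [1820, 100],   # top-right
    [100, 980],    # bottom-left
    [1820, 980],   # bottom-right
], dtype=float)

# Hypothetical raw eye features recorded while the participant fixated each target
# (e.g. pupil-to-corneal-reflection vectors); shape (n_points, 2)
raw_features = np.array([
    [0.02, -0.01],
    [-0.45, 0.38],
    [0.49, 0.41],
    [-0.47, -0.44],
    [0.51, -0.40],
])

# Fit an affine mapping raw -> screen: [x_raw, y_raw, 1] @ W ~ [x_screen, y_screen]
A = np.hstack([raw_features, np.ones((len(raw_features), 1))])
W, *_ = np.linalg.lstsq(A, targets, rcond=None)

def to_screen(raw_sample):
    """Map a new raw eye measurement to estimated screen coordinates."""
    return np.append(raw_sample, 1.0) @ W

print(to_screen(np.array([0.0, 0.0])))  # roughly the screen center for this fake data
```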
Materials List
In this desktop eyetracking study of how people read online, we used the following materials:
- Desktop eyetracker with built-in monitor (Tobii Spectrum)
- Powerful PC desktop tower
- Large monitor for facilitator and observer
- Two keyboards
- Two computer mice
- External speakers
- External microphone
- Printed task sheets
- Printed facilitator script
- Printed consent forms
- External hard drive for backing up data
- Two tables, side-by-side
- Two chairs
- Envelopes with incentives for participants (cash)
Lab Setup
Room
For this specific study, we rented a 4-person office space in a WeWork coworking facility. This office provided enough space for a participant, a researcher, and 1–2 observers, without getting too crowded.
PC, Monitors, & Eyetracker
We used a powerful PC desktop tower, connected to two monitors:
- Participant’s monitor (with the eyetracking cameras attached)
- Facilitator’s monitor (showing the participant’s gaze in real time)
The participant and facilitator each had their own mouse and keyboard, so they could share control of the PC. The facilitator controlled the PC only for setup, calibration, and starting and stopping the recording.
Using a separate monitor for the facilitator was optional, but had two major benefits:
- Space: Having a separate monitor allowed the facilitator to observe the task without sitting too close to the participant.
- Real-time gaze data: The facilitator’s monitor showed a red dot and line representing the participant’s gaze; these were useful for monitoring the participant’s calibration. (If the participant shifts in her seat, the tracker can lose track of her eyes. Lost calibration means that the gaze visualization won’t show what the participant was actually looking at, making the data unusable. By monitoring the gaze data in real time, the facilitator can catch the problem and recalibrate as needed.)
I’d recommend using a large, high-definition screen for the facilitator’s monitor, so it’s easy to see which words participants are (and aren’t) reading on the screen.
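In our setup, the live gaze overlay came from the eyetracking system itself, but the check the facilitator performs while watching it is easy to describe: if too many recent gaze samples come back invalid (the tracker can’t find the eyes), pause and recalibrate. Below is a small, hypothetical sketch of that kind of monitor; the sample format, window size, and threshold are assumptions for illustration, not part of any real eyetracking SDK.

```python
# Hypothetical sketch: warn the facilitator when the tracker keeps losing the
# participant's eyes, so they can pause the task and recalibrate.
# Gaze samples are assumed to arrive as dicts with a "valid" flag; real SDKs
# deliver richer per-eye validity data through their own callbacks.
from collections import deque

class TrackingLossMonitor:
    def __init__(self, window_size=120, loss_threshold=0.5):
        # window_size ~ 1-2 seconds of samples, depending on the tracker's sampling rate
        self.window = deque(maxlen=window_size)
        self.loss_threshold = loss_threshold  # fraction of invalid samples that triggers a warning

    def on_gaze_sample(self, sample):
        """Call this for every incoming gaze sample."""
        self.window.append(bool(sample.get("valid", False)))
        if len(self.window) == self.window.maxlen and self.loss_fraction() > self.loss_threshold:
            print("WARNING: tracker may have lost the participant's eyes; consider recalibrating.")

    def loss_fraction(self):
        return 1.0 - sum(self.window) / len(self.window)

# Example: feed in fake samples (tracking is fine at first, then lost)
monitor = TrackingLossMonitor(window_size=10, loss_threshold=0.5)
for valid in [True] * 10 + [False] * 8:
    monitor.on_gaze_sample({"valid": valid})
```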
Tables and Chairs
The monitors, keyboards, mice, and task sheets were spread across two tables that we pushed together. The facilitator sat in a rolling chair so she could easily move closer to the participant to adjust the eyetracking equipment or hand over a task sheet. The participant sat in a fixed (not rolling) chair. This little detail won’t necessarily matter in a normal usability test, but it matters a lot in eyetracking: you don’t want to give participants any reason to move out of range and ruin the calibration.
Task Sheets
Task sheets are another detail that can cause problems in eyetracking studies. When participants look down at a task sheet, they turn away from the eyetracker. When possible, it’s better to deliver task instructions verbally or through the eyetracking software itself.
In the past, we’ve found that referencing task sheets can break the calibration, but we did not have a problem with it in this study: when people looked back up at the screen to perform their task, the tracker was able to reacquire and track their eyes. Be aware that this capability may differ depending on the tracker you use.
Eyetracking Now vs. 2006
The setup for a desktop eyetracking study hasn’t changed very much in the past 13 years. Compared to a photo of our setup in a 2006 eyetracking study, our 2019 version looks quite similar — two monitors, an eyetracker, and a PC tower.
However, even though the structure of the setup is similar, the technology has definitely changed since 2006 (check out those little low-resolution monitors!). Compared to 2006, eyetracking tools have streamlined the calibration process and gotten better at hiding the tracking hardware inside the device (thanks largely to smaller cameras).
Tips for Your Eyetracking Study
Think through your goals for the study. What data are you looking to gather?
- Gaze replays and anecdotes: If you’re looking for video clips and qualitative insights, a lightweight tool might work for you. Instead of the complex setup we used for this study, you could consider lightweight USB-connected eyetracker systems or special eyetracking goggles (particularly for testing mobile designs). Those types of studies can be much easier to run than full-fledged quantitative eyetracking studies. Be aware, though, that those products are often not capable of producing gazeplots or heatmaps. Lightweight systems also tend to be less precise: instead of a little dot showing you which word someone is reading, you might get a big bubble that just shows you which paragraph they’re looking at.
- Gazeplots: If you want static visualizations of where individuals looked on a page, you could use a setup similar to ours, but you wouldn’t need as many users. You could collect data from 8–12 participants. (For regular qualitative usability testing, it’s usually best to test with around 5 users, but for a qualitative eyetracking study you’ll want to recruit a few extra test users to account for calibration problems and other technical issues.)
- Heatmaps: If you want static visualizations that summarize where on a page many people looked, on average, you’ll need to run a quantitative study like we did. We usually recommend having 39 participants complete the task you want to use for a heatmap.
If you’re planning an eyetracking study, it’s important to think through all the little logistical details. Running a day or two of pilot testing is a good way to work through the hurdles you’ll encounter. Based on our experience, you should absolutely expect technical difficulties.
I also highly recommend dedicating 1–2 days to setting up your equipment before pilot testing begins. Traditional eyetracking tools are complex, delicate systems, and you’ll want plenty of time to think through and experiment with your study setup.
For more details, check out our free report on how to run eyetracking studies.