Eyetracking Research

Eyetracking equipment can track and show where a person is looking. To do so, it shines near-infrared light that creates reflections in the person’s eyes. Cameras in the tracker capture those reflections and use them to estimate the position and movement of the eyes. That data is then projected onto the UI, resulting in a visualization of where the participant looked.

This research can produce three types of visualizations:

  • Gaze plots (qualitative)
  • Gaze replays (qualitative)
  • Heatmaps (quantitative)

This gaze plot shows how one participant processed a web page over a few minutes. The bubbles represent fixations (spots where the eyes paused to look); the size of each bubble is proportional to the duration of the fixation.
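
If you’re curious how software turns raw gaze samples into those fixation bubbles, below is a minimal Python sketch of one common approach, a dispersion-threshold (I-DT-style) filter: gaze points that stay clustered long enough are merged into a fixation. The sample format, the 25-pixel dispersion threshold, and the 100 ms minimum duration are illustrative assumptions; commercial eyetracking suites ship their own validated filters.

    # Illustrative I-DT-style fixation filter; thresholds are assumptions.
    def detect_fixations(samples, max_dispersion=25.0, min_duration=0.1):
        """samples: time-ordered list of (t_seconds, x_px, y_px) gaze points.
        Returns fixations as (start_time, duration, centroid_x, centroid_y)."""
        fixations, window = [], []

        def flush(window):
            duration = window[-1][0] - window[0][0]
            if duration >= min_duration:  # pauses shorter than this aren't fixations
                n = len(window)
                fixations.append((window[0][0], duration,
                                  sum(s[1] for s in window) / n,
                                  sum(s[2] for s in window) / n))

        for sample in samples:
            candidate = window + [sample]
            xs = [s[1] for s in candidate]
            ys = [s[2] for s in candidate]
            # Dispersion: horizontal plus vertical spread of the candidate window.
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
                window = candidate   # gaze is still "parked"; keep growing the window
            else:
                flush(window)        # the pause (if any) just ended
                window = [sample]
        if window:
            flush(window)            # don't drop a fixation at the end of the stream
        return fixations

The fixation durations returned by a filter like this are what a gaze-plot renderer would map to bubble sizes.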


This video clip is a gaze replay: it shows how one participant’s eyes processed a page on Bose.com.

This heatmap is an aggregate from many participants performing the same task. The colored areas indicate where people looked, with red signifying the longest looking time, followed by yellow and green, respectively. To get this type of visualization, we recommend having at least 39 participants perform the same task on the same page.
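
For a rough sense of how a heatmap is assembled, here is a minimal Python sketch (using NumPy and SciPy) that pools fixations from all participants into a duration-weighted grid and smooths it into hot spots. The screen size, the smoothing radius, and the red/yellow/green mapping a renderer would apply are assumptions for illustration, not any particular vendor’s method.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def build_heatmap(fixations, screen_w=1920, screen_h=1080, sigma=30):
        """fixations: iterable of (x_px, y_px, duration_s) tuples, pooled
        across every participant who did the same task on the same page."""
        grid = np.zeros((screen_h, screen_w))
        for x, y, duration in fixations:
            row, col = int(y), int(x)
            if 0 <= row < screen_h and 0 <= col < screen_w:
                grid[row, col] += duration  # weight each spot by looking time
        # Blur the point data into smooth hot spots; a renderer would then map
        # the highest values to red, middle values to yellow, and low to green.
        return gaussian_filter(grid, sigma=sigma)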

We use this eyetracking data to understand how people read online and how they process webpages. Our eyetracking research has yielded major findings such as the F-shaped reading pattern and banner blindness.

In an eyetracking study, the tracker has to be calibrated for each participant. Every individual has a different eye shape, face shape, and height. As a consequence, the tracker has to “learn” each participant before it can follow their gaze. Once the machine is calibrated, the participant has to stay in roughly the same position: moving too far from side to side, or leaning in or out, can cause the tracker to lose calibration.

Materials List

In this desktop eyetracking study of how people read online, we used the following materials:

  • Desktop eyetracker with built-in monitor (Tobii Spectrum)
  • Powerful PC desktop tower
  • Large monitor for facilitator and observer
  • Two keyboards
  • Two computer mice
  • External speakers
  • External microphone
  • Printed task sheets
  • Printed facilitator script
  • Printed consent forms
  • External hard drive for backing up data
  • Two tables, side-by-side
  • Two chairs
  • Envelopes with incentives for participants (cash)

Lab Setup

Room

For this specific study, we rented a 4-person office in a WeWork coworking facility. This office provided enough space for a participant, a researcher, and 1–2 observers without getting too crowded.

PC, Monitors, & Eyetracker

We used a powerful PC desktop tower, connected to two monitors:

  • Participant’s monitor (with the eyetracking cameras attached)
  • Facilitator’s monitor (showing the participant’s gaze in real time)

The participant and facilitator each had a separate mouse and keyboard, so both could control the PC. The facilitator took control only to set up the session, calibrate the tracker, and start and stop the recording.

The facilitator’s monitor, keyboard, and mouse were set up to the left of the participant’s monitor, keyboard, and mouse. In this room, we chose to place the eyetracker in the corner because it was out of the range of direct overhead lights (which can sometimes interfere with the tracking). The facilitator’s monitor was angled away from the participant, to prevent her from seeing it.

During each session, the participant (right) completed tasks using what looked to her like a normal monitor. Meanwhile, her screen was mirrored on the facilitator’s monitor, with real-time gaze data overlaid. The facilitator (me, left) monitored the gaze calibration, watched user behavior, and administered tasks and instructions as needed. I also took some notes, but because eyetracking facilitation requires multitasking through many activities, those notes were very light. Primarily, I used my notes to record any issues I saw in the gaze data or to remind myself to go back and rewatch particularly interesting incidents. Human eyes move fast, so the bulk of eyetracking analysis has to happen by slowing down the videos and watching them several times.

Using a separate monitor for the facilitator was optional, but had two major benefits:

  • Space: Having a separate monitor allowed the facilitator to observe the task without sitting too close to the participant.
  • Real-time gaze data: The facilitator’s monitor showed a red dot and line representing the participant’s gaze; these were useful for monitoring the participant’s calibration. (If the participant shifts in her seat, the tracker can lose her eyes. Lost calibration means that the gaze visualization won’t show what the participant was actually looking at, making the data unusable. By monitoring the gaze data in real time, the facilitator can catch the problem and recalibrate as needed; one way to automate this check is sketched after this list.)
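
As an illustration of that kind of real-time monitoring, here is a hypothetical Python sketch of a watchdog that flags calibration trouble from per-sample validity flags. It assumes your tracker’s SDK reports whether each gaze sample successfully located the eyes; the 120 Hz rate and 30% threshold are made-up values for illustration, not any vendor’s actual API.

    from collections import deque

    def make_tracking_watchdog(window_size=120, max_invalid_ratio=0.3):
        """Returns a check() function that is fed one validity flag per gaze
        sample and returns True when calibration looks lost. window_size=120
        assumes roughly one second of data from a hypothetical 120 Hz tracker."""
        recent = deque(maxlen=window_size)

        def check(sample_is_valid):
            recent.append(bool(sample_is_valid))
            invalid_ratio = recent.count(False) / len(recent)
            return invalid_ratio > max_invalid_ratio  # too many lost samples

        return check

In practice, commercial eyetracking software surfaces this information itself (as the white eye-position dots described below); the sketch just shows the underlying idea.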

I’d recommend using a large, high-definition screen for the facilitator’s monitor, in order to easily see which words the participants were (and weren’t) reading on the screen.

This screenshot shows the facilitator’s view during a session. The white dots in the upper right corner represent the position of the participant’s eyes as seen by the eyetracker. If the dots disappear or move too far from the center, the facilitator knows she needs to intervene to save the calibration. The real-time gaze data is shown on the screen as red dots and lines (center). This provides another piece of information for monitoring calibration. For example, if the participant seems to be reading a headline, but the red dots are appearing a half-inch below that headline, that could be an indication that the calibration is off.

Tables and Chairs

The monitors, keyboards, mice, and task sheets were spread across two tables that we pushed together. The facilitator sat in a rolling chair, so she could easily move closer to the participant to adjust the eyetracking equipment as needed or to hand them a task sheet. The participant sat in a fixed (not rolling) chair. This little detail won’t necessarily matter in a normal usability test, but it matters a lot in eyetracking: you don’t want to give participants any reason to move out of range and ruin the calibration.

Task Sheets

Task sheets are another detail that can sometimes cause problems in eyetracking studies. When participants look down at a task sheet, they’re turning away from the eyetracker. When possible, it’s best to deliver the task instructions either verbally or through the eyetracking software itself.

In the past, we’ve found that referencing task sheets can break the calibration, but it wasn’t a problem in this study: when people looked back up at the screen to perform their task, the tracker was able to reacquire and follow their eyes. Be aware that this capability may differ depending on the tracker you use.

Eyetracking Now vs. 2006

The setup for a desktop eyetracking study hasn’t changed very much in the past 13 years. Compared to a photo of our setup in a 2006 eyetracking study, our 2019 version looks quite similar: two monitors, an eyetracker, and a PC tower.

However, even though the structure of the setup is similar, the technology has definitely changed since 2006 (check out those little low-resolution monitors!). Eyetracking tools have improved the calibration process, and they’ve gotten better at concealing the tracking mechanisms inside the device (thanks largely to smaller cameras).

In 2006 Kara Pernice (right) facilitated an eyetracking study with a very similar setup to our 2019 study.

Tips for Your Eyetracking Study

Think through your goals for the study. What data are you looking to gather?

  • Gaze replays and anecdotes: If you’re looking for video clips and qualitative insights, a lightweight tool might work for you. Instead of the complex setup we used for this study, you could consider lightweight USB-connected eyetracker systems or special eyetracking goggles (particularly for testing mobile designs). Those types of studies can be much easier to run than full-fledged quantitative eyetracking studies. Be aware, though, that those products are often not capable of producing gaze plots or heatmaps. Lightweight systems also tend to be less precise: instead of a little dot showing you which word someone is reading, you might get a big bubble that just shows you which paragraph they’re looking at.
  • Gaze plots: If you want static visualizations of where individuals looked on a page, you could use a setup similar to ours, but you wouldn’t need as many users. You could collect data from 8–12 participants. (For regular qualitative usability testing, it’s usually best to test with around 5 users, but for a qualitative eyetracking study you’ll want to recruit a few extra test users to account for calibration problems and other technical issues.)
  • Heatmaps: If you want static visualizations that summarize, on average, where many people looked on a page, you’ll need to run a quantitative study like we did. We usually recommend having 39 participants complete the task you want to use for a heatmap.

If you’re planning an eyetracking study, it’s important to think through all the little logistical details. Running a day or two of pilot testing is a good way to work through the potential hurdles you’ll encounter. Based on our experience, you should absolutely expect technical difficulties.

I also highly recommend dedicating 1–2 days to setting up your equipment before your pilot testing begins. Traditional eyetracking tools are complex, delicate systems, and you’ll want plenty of time to think through and experiment with your study setup.


For more details, check out our free report on how to run eyetracking studies.