
Our Research on the Block Design Test, as Featured on CBS 60 Minutes with Anderson Cooper

By Maithilee Kunda, in collaboration with members of the AIVAS Lab. Website text contributions from: James Ainooson, Fernanda Eliott, Carson Fallon, and Soobeen Park.

We were delighted and honored to host Anderson Cooper and the CBS 60 Minutes film crew in February 2020, as part of their story on autism, neurodiversity, and employment. (This took place just before the COVID pandemic, which is why none of us were social distancing or wearing masks!)

The “block design test” segment features research on technology-based approaches to cognitive assessment being done by Vanderbilt’s Laboratory for Artificial Intelligence and Visual Analogical Systems (AIVAS Lab). The AIVAS Lab is directed by Dr. Maithilee Kunda, who is an assistant professor of computer science in Vanderbilt’s School of Engineering.


Our test participants: Anderson Cooper and Dan Burger

Along with CBS correspondent Anderson Cooper, who served as our neurotypical test participant, we were very glad to be doing this segment with Dan Burger, a software developer here at Vanderbilt who is on the autism spectrum. Dan kindly agreed to be the second test participant for our block design demonstration. Thank you, Dan!

Here, you can find more in-depth information about this segment, including a deeper dive into the results of the Anderson-versus-Dan matchup, and about our ongoing research.


Dan getting set up to take the block design test
PhD student Joel Michelson (left) helps Dan (middle) get set up to take the block test, while PhD student Tengyu Ma (right) waits to help run the test. The device Joel is adjusting on Dan’s head is a Pupil Core wearable eye tracker, from Pupil Labs. Photo credit: Claire Barnett.
Anderson Cooper about to start the block design test
Anderson Cooper (right) about to start the block design test. From left: Assistant Professor of computer science Maithilee Kunda and AIVAS Lab PhD students Tengyu Ma (in back), Deepayan Sanyal, Joel Michelson (mostly hidden), Ryan Yang, and James Ainooson. Photo credit: Claire Barnett.

 

What is the block design test about?

The block design test shown in the episode is a type of cognitive assessment—a short test that we can give people to measure how well they can use a particular type of mental skill or ability.

How cognitive assessments can be used

There are many different kinds of cognitive assessments, including tests of language, memory, attention, perception, and more. These tests can be used in many different ways.

In the block design test, you are given a pile of colored blocks, and you have to rearrange the blocks to match a target design. This test might look easy, but trust us—it’s harder than it looks!

The block design test measures a person’s visuospatial abilities: that is, how well they can mentally imagine, rotate, combine, and reason about visual information. (These abilities often go by different names, like spatial skills, visuospatial reasoning, or visual thinking.)

Example of the block design test
An example of a “test problem” from our research version of the block design test. The goal is to rearrange the colored blocks to match the given target design. Photo credit: Maithilee Kunda.

 

Why focus on visuospatial abilities?

We use our visuospatial abilities every day, often without even realizing it. For example, when you pack clothes into a suitcase or figure out how to fit all your leftovers into your refrigerator, you are using your visuospatial abilities to think about the sizes and shapes of things, and the sizes and shapes of the containers you are trying to fit them into.

More everyday examples of visuospatial abilities


  • When driving a car, thinking about how to take a turn, how to park in a tight space.
  • Identifying things that are close versus far away, and thinking about where they are in relation to you.
  • Using a map.
  • Fixing your hair while looking at yourself in a mirror.
  • Imagining a scene while you are reading a book.
  • Thinking about numbers and quantities, for example when you are following a recipe or thinking about the stock market.
  • Navigating menus on a webpage, and moving your mouse to control the pointer on the screen.

Visuospatial abilities are just as important as other cognitive abilities like math and reading, even though they are not often formally measured or taught in schools the way that other subjects are.

Visuospatial abilities in different jobs


  • Engineers use their visuospatial abilities to visualize new products, or imagine how a piece of machinery might operate, or imagine how it might break.
  • Surgeons use their visuospatial abilities to reason about surgical procedures and human anatomy, for instance to avoid damaging important organs that might be hidden underneath the surgical site.
  • Airplane pilots use their visuospatial abilities to continually pay attention to readouts on their instrument panels, and to maintain awareness of the plane’s position relative to the view outside the cockpit.
  • Even computer programmers use visuospatial abilities, for example when visualizing how different pieces of code operate and how to fit them together.

Our research initially focuses on cognitive assessments of visuospatial abilities. In the future, we will be expanding our work to include other kinds of cognitive abilities.

Different ways people use visuospatial abilities
Examples of activities that involve visuospatial abilities, including: (a) reading a book; (b) thinking about numbers and math concepts; (c) writing computer code; (d) performing surgery; (e) designing new machines or products; (f) thinking about experiments and making scientific discoveries.

 

Visuospatial abilities in autism

Visuospatial abilities are just one of several areas in which people on the autism spectrum often show strengths and talents.

A famous visual thinker: Temple Grandin

Temple Grandin is a professor of animal science at Colorado State University who is on the autism spectrum. She has written extensively about her experiences with being a visual thinker, for instance in her autobiography Thinking in Pictures.

Dr. Grandin writes about how her strengths in visual thinking have benefited her in some areas, such as in her work as a designer of complex facilities for the livestock industry. However, her visual thinking also causes difficulties in other areas, like sometimes in understanding abstract concepts.

In line with what Temple Grandin has often suggested, our research aims to better understand individual patterns of strengths and talents in visuospatial abilities, and to help people on the spectrum find jobs that leverage those abilities.

While many people on the autism spectrum and with other neurodiverse conditions show strengths in visuospatial abilities, we do not fully understand what these strengths mean, how they differ from person to person, and how they map onto workforce-relevant capabilities.

In our research, we are developing technology-enabled approaches to help answer these questions and enable neurodiverse individuals to maximize their potential in contributing to our workforce and our society.

 

How can technology help?

In our research, we aim to use technology to improve cognitive assessment, to enable measuring not just how well a person can perform a task, but also how they are going about it.

Solving block design in different ways

For example, two people might solve a block design problem in exactly the same amount of time but use completely different strategies to do it. One person might work slowly and methodically, thinking about each block carefully before they place it. The other person might work more quickly and take more of a trial-and-error approach, putting each block down and then checking afterwards whether it fits.

There is nothing fundamentally better or worse about either of these strategies. They are just different, and they reflect variations in how different people think and solve problems. Our research aims to try to measure these kinds of differences, and then match people to jobs that fit their own individual cognitive makeup.

Most of the time, people’s problem-solving strategies are hidden. We can watch while someone thinks about a math problem, but we cannot see what is going on inside their head. However, for certain tasks like block design, we can watch people’s behaviors to get clues as to how they might be thinking through the problem. This is where technology comes in!

In our research, we use sensors, like cameras and eye trackers, to record people’s behaviors while they solve block design test problems. Cameras can help us record a person’s physical actions during the test, like which blocks they move, in what order, and how fast. A wearable eye tracker can help us record a person’s gaze: where they look while solving each problem.
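To give a flavor of what this sensor data looks like once recorded, here is a minimal sketch in Python. The data structure and the numbers are purely hypothetical (they are not from our actual pipeline or from the demonstration); the sketch just shows how timestamped block-placement events from an overhead camera could be turned into a simple behavioral measure, like the time between consecutive placements.

```python
from dataclasses import dataclass

@dataclass
class BlockPlacement:
    block_index: int   # which cell of the design this block filled
    timestamp: float   # seconds since the start of the problem

# Hypothetical placement events detected from overhead camera video.
placements = [
    BlockPlacement(0, 2.1),
    BlockPlacement(1, 4.8),
    BlockPlacement(2, 6.0),
    BlockPlacement(3, 9.5),
]

def inter_placement_times(events):
    """Seconds elapsed between consecutive block placements."""
    times = [e.timestamp for e in events]
    return [round(b - a, 2) for a, b in zip(times, times[1:])]

print(inter_placement_times(placements))  # [2.7, 1.2, 3.5]
```

Measures like these, computed over a whole testing session, are one small example of the kind of behavioral detail that sensors can capture but a stopwatch cannot.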

Depth camera images of the block design test
These images show a view of the block design test from an overhead camera that records the state of the tabletop. The camera records in both color (left) and depth (right).

Wearable eye tracking images of the block design test
These images show a person wearing a wearable eye tracker (left) that records the view in front of their face and also where their eyes are looking (shown as a yellow circle on the right). (Note that these pictures are from two slightly different testing setups.)

 

Our 60 Minutes testing setup

For the 60 Minutes episode, we asked both Anderson Cooper and Dan Burger to complete a series of block design test problems while being recorded in our testing setup.

We used an Intel RealSense depth camera mounted on an overhead track, to capture a top-down view of the tabletop. We also used a Pupil Labs Pupil Core wearable eye tracker. All of the data recorded by these devices is streamed into computers at our control station.

Our block design testing setup
Our block design testing setup, with PhD student Deepayan Sanyal sitting at the control station. Image credit: Claire Barnett

 

Initial results from the demonstration

Anderson and Dan both completed a series of 14 block design test problems that we created for our research.

Our block design test problems

The first set of problems used red and white blocks where the patterns only contained straight lines. The second set of problems used blue and white blocks where the patterns only contained curved lines. (Many thanks to our high school research intern Tessa Haws, who created the blue and white version!)

We first looked at Anderson’s and Dan’s response times: how long it took each of them to solve each problem. The graph below shows these results.

What this graph does (and does not) show

As you can see from this graph, Dan was faster than Anderson on every single problem. He was only a little bit faster on the small, easy problems, but he got a lot faster on the larger and more difficult ones!

So far, these results tell us that Dan is faster at block design…but they do not tell us why he is faster!

Next, we will show you how our technology-based assessment approach can give us more information about the cognitive strategies that Anderson and Dan were using.

Graph showing response times for Anderson and Dan on our test
This graph shows how long it took Anderson (blue points) and Dan (orange points) to solve each block design test problem.

 

Results from the overhead camera

Why was Dan so much faster than Anderson? Looking at video from the overhead camera gives us our first clue.

What results from the overhead camera reveal

On many of the problems, Dan used a very systematic strategy, often placing blocks from left to right and top to bottom regardless of the design.

Anderson, in contrast, used different strategies on different problems, often finding chunks of the design that he could repeat, and then doing those one after another. You can also see in this video that Anderson takes a long time to figure out how to place certain blocks.


This video clip shows a 4x sped-up view of Anderson and Dan solving one of our most difficult block design test problems.

 

Results from the wearable eye tracker

Next, we can look at data from our Pupil Labs wearable eye tracker to learn more about Dan’s and Anderson’s different block design strategies.

How wearable eye trackers work

The video below shows a clip from Anderson’s eye tracking recordings. Wearable eye trackers work by having one camera, facing outward, that records the scene in front of a person’s face. Then, two more cameras point inwards, at a person’s eyes, and image processing algorithms are used to precisely measure the person’s eyeball movements. The two streams of data can be combined to create the view shown here, where the person’s gaze is marked by a yellow dot that moves over the video.
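The step of combining the two data streams amounts to mapping the measured gaze position onto the scene camera's video frame. Here is a minimal sketch of that mapping, under the assumption that gaze comes as normalized coordinates in [0, 1] with the origin at the bottom-left (a convention used by some eye-tracking software), while image pixels put the origin at the top-left. The function name and frame size are illustrative, not from any particular tracker's API.

```python
def gaze_to_pixels(norm_x, norm_y, frame_width, frame_height):
    """Map a normalized gaze position onto pixel coordinates
    in the scene camera's video frame.

    Assumes gaze coordinates in [0, 1] with origin at bottom-left;
    pixel coordinates have their origin at top-left, so the
    vertical axis must be flipped.
    """
    px = int(norm_x * frame_width)
    py = int((1.0 - norm_y) * frame_height)  # flip the vertical axis
    return px, py

# A gaze sample at the center of a 1280x720 scene frame:
print(gaze_to_pixels(0.5, 0.5, 1280, 720))  # (640, 360)
```

The resulting pixel position is what gets drawn as the moving yellow dot in the video above.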

The visualization at the bottom comes from our manual analyses of these videos to identify where Anderson was looking while he solved this problem (the colored bar), and also when he placed each block into his solution (the white squares on the very bottom).

Previously, we analyzed these videos mostly by hand. Our current research includes developing computer vision algorithms that use neural networks to learn how to automatically detect key behaviors from a person’s testing session.


This video clip shows outputs from our Pupil Labs wearable eye tracker and video analysis.

The 60 Minutes episode talked a little bit about how Dan and Anderson showed different patterns of gaze while solving the block design test. The graph below gives a more in-depth look at these differences.

What results from our wearable eye tracker reveal

This graph might be a little difficult to understand at first, but bear with us!

This graph shows gaze data for Dan and Anderson on just a single block design problem: the same problem shown in the episode, and also shown in the video clips above. There are 16 rows here, because there are 16 blocks in this particular block design puzzle.

Each colored bar represents the sequence of gaze that either Dan or Anderson used while placing a single block during a block design problem. The colors represent the different places they looked: red for the target design, blue for the pile of blocks (the “block bank”) off to the left, and yellow for the “construction area” where they are building their solution.

There are many interesting patterns that you might notice here! For example, Dan only looks once or twice at the design (red segments) while placing each block. Anderson often ends up looking back at the design many times: over 10 times for the last block!

Anderson also does not look at the block bank (blue segments) very often, which seems to make sense because all the blocks are identical. However, Dan does look at the block bank to select his block, almost every single time. Why?

We are not sure of the answer, yet! But we think Dan might actually be using a more efficient strategy here too, where he looks at the block bank in order to select a block that happens to be rotated the way he needs it. (This is our hypothesis, for now. We are still analyzing the data to learn more!)
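The kind of pattern described above, like counting how many separate times a person glanced at the design before placing each block, can be computed from a gaze sequence once it has been labeled by area of interest. Here is a minimal sketch; the sample sequence and the area labels are hypothetical, and simply mirror the three areas in the graph (design, block bank, construction area).

```python
from itertools import groupby

# Hypothetical gaze samples recorded while placing one block,
# each labeled with the area of interest being looked at.
gaze_samples = ["bank", "bank", "design", "design", "construction",
                "design", "construction", "construction"]

def gaze_runs(samples):
    """Collapse consecutive identical samples into a sequence of looks."""
    return [label for label, _ in groupby(samples)]

def count_looks(samples, aoi):
    """How many separate times the person looked at a given area."""
    return sum(1 for label in gaze_runs(samples) if label == aoi)

print(gaze_runs(gaze_samples))
# ['bank', 'design', 'construction', 'design', 'construction']
print(count_looks(gaze_samples, "design"))  # 2
```

Each colored bar in the graph above is essentially a visualization of one such run-length-collapsed sequence.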


Graph showing gaze sequences on the block design test

This graph shows the gaze sequences for Dan and Anderson on one problem from the block design test. Each row shows their gaze sequence prior to placing a single block.

 

Takeaways, and our ongoing research

What do all of these differences between Dan and Anderson mean, and how might they be relevant to visuospatial abilities used in the workplace?

We are currently working on several different research studies in artificial intelligence and cognitive science to investigate these questions, including:

  • Building computational models of human visuospatial reasoning to better understand how different reasoning strategies work.
  • Developing image processing and computer vision algorithms to automate the measurement of behavior during cognitive assessments.
  • Using techniques from data mining to identify important patterns of behavior and how they might connect with and predict on-the-job performance.

For more information about this research, please visit the AIVAS Lab website, including links to our research papers on these topics.

This project has been funded in part by the National Science Foundation, through a Convergence Accelerator grant on advancing AI for neurodiverse employment (awards #1936970 and #2033413, PI: Nilanjan Sarkar) and through an NSF 2026 Idea Machine grant on developing next-generation, AI-enabled assessments of visuospatial cognition (award #2034013, PI: Maithilee Kunda).

You can also find information about other Vanderbilt Frist Center research on the main center website.


A short discussion after our block design test

A short chat with Anderson Cooper after we finished our block design demonstration. From left to right: PhD student Tengyu Ma, high school research intern Tessa Haws, and PhD students Joel Michelson, James Ainooson, and Ryan Yang. Photo credit: Claire Barnett


Contributors to our 60 Minutes research demonstration

Our Vanderbilt block design research team running the 60 Minutes demo. Left to right: Maithilee Kunda (assistant professor of CS), Deepayan Sanyal (PhD), Soobeen Park (undergrad), Joel Michelson (PhD), Tessa Haws (high school), Anderson Cooper, Carson Fallon (undergrad), James Ainooson (PhD), Ryan Yang (PhD), Tengyu Ma (PhD).