How Does the Brain See the Same Image Differently?

Explore how different brains perceive the same image using shared neural patterns—insights that bridge cognitive science & AI innovation.
Digital art of human brains showing shared neural activity viewing the same image, representing visual perception and cognition

  • 🧠 A 2024 study showed different brains produce highly similar neural responses when viewing the same image.
  • 🔬 Brain decoding models trained on one individual can accurately predict another’s visual brain activity.
  • 🤝 Shared neural coding may be a basis for empathy and smooth social communication.
  • 👁️ fMRI and machine learning reveal a common format for visual information across brains.
  • 🧩 Disorders like autism may reflect disruptions in shared neural perception patterns.

It’s curious. Two people can look at the same image and come away with very different experiences. One might feel awe, the other nostalgia. Yet beneath these personal reactions lie basic neural similarities. Studies of visual perception show that, for all our individuality, human brains decode images in a remarkably consistent way. This idea, called shared neural coding, is changing how we understand vision, empathy, mental health, and even artificial intelligence.


retina and optic nerve closeup

Visual Processing: The Brain’s Layered System

Visual perception starts not in the brain but in the eye, or more precisely, in the retina. The retina contains photoreceptors, specialized cells that convert light into electrical signals. These signals travel along the optic nerve into an interconnected brain network that processes visual information in stages.

At the front of this network is the primary visual cortex (V1), located in the occipital lobe. V1 is the brain’s “first stop” for vision: it extracts basic features such as contrast, edges, and orientation. This is the lowest level of visual processing, and it works a bit like raw code in a computer system.
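The edge-and-orientation extraction attributed to V1 can be illustrated with a small convolution, the same operation used in simple models of visual receptive fields. This is a toy NumPy sketch, not a biophysical model; the image and kernel are invented for illustration.

```python
import numpy as np

def convolve2d(image, kernel):
    """Minimal 'valid' 2-D convolution: slide the kernel over the image
    and take a weighted sum at each position (no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter, loosely analogous to an orientation-tuned
# V1 receptive field (a simplification, not a biological model).
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

# Toy image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

response = convolve2d(img, edge_kernel)
# The filter responds only where its window straddles the edge;
# flat regions produce zero output.
print(response)
```

The output is nonzero only in the columns that cross the brightness boundary, which is the sense in which a filter like this “detects” an edge at a particular orientation.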

From V1, information spreads to other visual areas, including V2, V3, V4, and the inferotemporal cortex. These regions interpret increasingly complex features: object shape, color, spatial layout, motion, and ultimately the meaning of what is seen.

Importantly, visual processing is neither strictly linear nor passive. Past experience, attention, goals, emotions, and even language all shape how the brain constructs a visual scene. What you see emerges from a constant back-and-forth between incoming sensory data and the brain’s interpretation of it: a process of progressive refinement that adds layers of meaning.


fmri brain scans with visual stimulus

Shared Neural Coding: A Common Brain Language

How can two people with different life histories, personalities, and brains interpret the same visual scene in a very similar way? The answer is in what scientists call “shared neural coding.”

Shared neural coding refers to the consistent patterns of brain activity that different people show when viewing the same visual input. No two brains are identical, but these recurring activity patterns point to a common processing scheme, a kind of shared “brain language” for images.

An important study by van den Bosch et al. (2024) brought this process into focus. Using functional MRI (fMRI), the researchers examined how different brains responded to the same set of natural images. The result was remarkable: although each person’s response timing was slightly different and individual-specific, the underlying activity patterns were highly consistent across participants.

Even more surprisingly, the team was able to reconstruct what one person viewed using only brain activity data from other people. This shows that neural representations of images are not just predictable but transferable between individuals, supporting the idea that humans share a common visual code despite our different ways of thinking.
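The transferability described above can be illustrated with a toy simulation: if two “subjects” share an underlying image code, a simple linear map learned on some images can predict one subject’s responses from the other’s on held-out images. All data here are synthetic, and this is a sketch of the idea, not the study’s actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: a shared latent "image code" drives both subjects'
# voxel responses, plus subject-specific noise. (Made-up data, not the
# study's fMRI recordings.)
n_images, n_latent, n_voxels = 200, 10, 50
latent = rng.normal(size=(n_images, n_latent))
subj_a = latent @ rng.normal(size=(n_latent, n_voxels)) \
    + 0.1 * rng.normal(size=(n_images, n_voxels))
subj_b = latent @ rng.normal(size=(n_latent, n_voxels)) \
    + 0.1 * rng.normal(size=(n_images, n_voxels))

# Learn a linear map from A's patterns to B's on 150 training images
# (ridge regression via the normal equations).
train, test = slice(0, 150), slice(150, 200)
lam = 1.0
A, B = subj_a[train], subj_b[train]
W = np.linalg.solve(A.T @ A + lam * np.eye(n_voxels), A.T @ B)

# Predict B's responses to 50 held-out images from A's responses alone.
pred = subj_a[test] @ W
corr = np.corrcoef(pred.ravel(), subj_b[test].ravel())[0, 1]
print(f"cross-subject prediction correlation: {corr:.2f}")
```

Because both subjects are driven by the same latent code, the held-out correlation comes out high; with unrelated subjects it would hover near zero.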

In other words, we may view the world through personal filters, but the machinery of seeing itself is built from a common neural blueprint.


two people watching beach sunset

The Role of Individual Interpretation

Shared neural coding does not erase individuality. Quite the opposite: it provides the foundation on which personal thought, culture, emotion, and memory are layered.

For example, when two people see a beach scene, their visual cortices may activate in similar ways to process the basics: sand, water, horizon lines. But once the information reaches associative regions such as the hippocampus and prefrontal cortex, past experience starts to reshape the percept. One person may recall family vacations; another may think of a devastating tsunami.

Emotions work in a similar way. If you’re feeling anxious, your amygdala (the brain’s emotional hub) may color your perception of even neutral scenes. A relaxed state, by contrast, can strengthen positive associations. These shifts layer individual differences on top of the basic “shared code.”

This dual design, a common neural architecture combined with a personal history, is key to how human perception manages to be both flexible and consistent. It is what makes it possible to see the world in richly different ways while staying connected through common experience.


scientist using eeg machine

Tracking Perception: Tools for Peeking Inside the Brain

To measure and understand shared neural coding, researchers use many advanced tools to study the brain. The main ones are:

  • Functional MRI (fMRI): A non-invasive scanning method that detects changes in blood oxygenation, a proxy for neural activity. It reveals which brain regions are active when someone views a given stimulus.
  • Magnetoencephalography (MEG): MEG picks up the magnetic fields generated by neural activity with millisecond resolution, making it possible to track how perception unfolds over time.
  • Electroencephalography (EEG): EEG records electrical activity from the scalp with excellent temporal precision, and is a common tool for mapping brain responses, especially in real-time applications.
  • Machine Learning & Brain Decoding: Researchers train models on fMRI data, using techniques such as neural networks and representational similarity analysis (RSA), to decode how images are represented and even to reconstruct the images that were viewed.

For example, in RSA (Kriegeskorte et al., 2006), researchers build “representational spaces” that capture how a brain encodes different images, then compare these spaces across people. When the patterns overlap strongly enough, it confirms a common way of representing the visual world.
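A minimal version of that RSA comparison might look like this: build each subject’s representational dissimilarity matrix (RDM) from pairwise pattern correlations, then correlate the RDMs across subjects. The data below are simulated; real analyses use measured fMRI patterns and more careful statistics.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns evoked by each pair of images."""
    return 1.0 - np.corrcoef(patterns)

def rsa_similarity(rdm_a, rdm_b):
    """Compare two RDMs by correlating their upper triangles
    (the lower triangle is redundant, the diagonal is always zero)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

# Simulated data: two "subjects" whose voxel responses to 20 images
# share an underlying representational geometry.
rng = np.random.default_rng(1)
shared = rng.normal(size=(20, 8))  # common image code
subj_a = shared @ rng.normal(size=(8, 100)) + 0.2 * rng.normal(size=(20, 100))
subj_b = shared @ rng.normal(size=(8, 100)) + 0.2 * rng.normal(size=(20, 100))

score = rsa_similarity(rdm(subj_a), rdm(subj_b))
print(f"RDM correlation across subjects: {score:.2f}")
```

The key design choice is that RDMs abstract away from individual voxels: two brains never share voxel-to-voxel correspondence, but their image-by-image similarity structure can still be compared directly.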

Together, these tools help us get a better look at how people share visual perception and how the brain processes information.


prehistoric humans hunting together

Why Shared Perception Matters: Evolution Made It So

From an evolutionary perspective, shared visual perception was crucial for early human survival. A group that could reliably find food, spot predators, or read gestures and faces in similar ways had a much better chance of staying alive.

Shared neural coding therefore likely evolved to promote group cohesion and rapid coordination. Humans are social creatures: perceiving things the same way, at the same time, allowed groups to act together, whether building, hunting, defending territory, or forging social bonds.

In today’s complex, digital world, that same shared coding still underpins everything from reading street signs to enjoying synchronized reactions at a concert, a movie, or a sports game. It is the brain’s glue for social experience. It makes shared reality possible.


friends laughing together

Empathy and Connection: Social Brains at Work

It’s one thing to see the same thing as someone else. It’s another to feel that connection.

A key part of how we understand others is the mirror neuron system. These are special brain cells that turn on both when we do something and when we see someone else do it. This system relies a lot on us seeing things in a similar way.

When visual information is processed similarly by two people, empathy gains a foothold. You not only understand what someone else sees, you come close to feeling what they feel. Imagine watching someone win a race: you see their elation and, without trying, feel a rush of joy, because your brain mirrors their experience.

This is very important in how children grow, in romantic ties, in talking with friends, in therapy, and even in solving disagreements. When we consistently see things in the same way, it makes us better at understanding each other and reacting well.


autistic child avoiding eye contact

What Happens When Shared Coding Breaks Down?

Not all brains process the world the same way. When the shared visual code breaks down, significant cognitive and social difficulties can follow.

In conditions such as autism spectrum disorder (ASD), studies have shown reduced synchrony in brain responses to common visual and social stimuli. People with ASD often perceive social cues, such as eye contact or facial expressions, in their own distinctive ways, which can make communication harder.

Similarly, in schizophrenia, hallucinations may arise when the brain misinterprets or over-amplifies its own internally generated visual signals, untethered from anything others can see. This disconnect from consensus reality can cause deep confusion about what a person perceives.

By using shared neural coding as a baseline, scientists are developing tools to detect atypical brain patterns earlier. For instance, if a person’s brain response to common visual stimuli deviates strongly from the typical pattern, that deviation could become a marker for disorders of perception or social processing.

This shows promise for mental health help that is more tailored to each person. It would be based not just on behavior, but directly on how someone’s brain interprets the world.


robot looking at human face

Teaching AI to See Like Humans

One of the most interesting uses of shared neural coding is in building better artificial intelligence.

Early AI and machine vision systems processed images in mathematical ways that bore little resemblance to human vision. More recent work in deep learning, especially convolutional neural networks (CNNs), mirrors the layered organization of the human visual system.

Training these networks with data derived from shared neural activations lets researchers narrow the gap between human and machine vision. The result: machines that “see” more like humans do, noticing what we notice and interpreting context as we do.
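The layered organization described here, local filtering followed by pooling, repeated so that each stage responds to larger and more abstract chunks of the image, can be sketched as a toy two-layer network in NumPy. The kernels are random placeholders; a trained CNN would learn them from data.

```python
import numpy as np

def conv_layer(image, kernel):
    """'Valid' 2-D convolution; one feature map per layer for brevity."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Keep only positive responses, like a firing-rate threshold."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Downsample by local maxima, widening the effective receptive field."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

rng = np.random.default_rng(2)
image = rng.random((16, 16))
k1, k2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

# Layer 1 (V1-like): local filtering of raw pixels.
h1 = max_pool(relu(conv_layer(image, k1)))
# Layer 2 (V2/V4-like): filtering of layer-1 features, so each unit
# now "sees" a larger patch of the original image.
h2 = max_pool(relu(conv_layer(h1, k2)))
print(image.shape, h1.shape, h2.shape)  # (16, 16) (7, 7) (2, 2)
```

Notice how the spatial maps shrink at each stage: fewer units, each summarizing a bigger region, which is the loose analogy to the V1-to-inferotemporal hierarchy described earlier.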

This matters for:

  • Self-driving cars making human-centered decisions.
  • Robot assistants reading emotional cues from faces.
  • Assistive technology tailored to how humans need to see.
  • Art and media tools that generate content aligned with how our brains naturally respond.

By putting shared ways of thinking into machines, we’re creating helpers that not only process data but also understand human ways of seeing things.


people watching movie in theater

A Common Lens in a Digitally Fragmented World

Today, online content, social media, and curated news bubbles mean people are exposed to increasingly divergent visual diets. Shared neural perception offers a kind of anchor: it shows that beneath all our cultural influences, the brain still responds in common ways.

Future technologies may even use real-time brain timing. Imagine:

  • VR experiences that adjust visuals based on how engaged a whole classroom is.
  • Therapy where shared brain feedback helps families understand each other better.
  • Entertainment platforms timed to group emotional responses, changing how content moves along.

These uses could create moments of shared thinking in real time. Technology would bring us closer in how we see things, instead of pulling us apart.


multigenerational family at picnic

Culture and Age: Does Shared Coding Hold Up?

How much shared neural coding stays the same across different groups of people is an interesting area for study.

  • Children: Children’s brains are still developing their processing pathways, so they may show more variable visual coding while they learn what to attend to in a scene.
  • Elderly individuals: With age, some brain regions decline, which could affect how consistent shared coding remains, especially for judgment and interpretation.
  • Cross-cultural differences: Gestures, symbols, and even colors carry different meanings and emotional weight across cultures. Yet strong evidence shows that low-level visual coding, such as edges, shapes, and motion, stays surprisingly consistent everywhere. fMRI studies have found shared activity for basic visual processing in people around the world, even where higher-level interpretation differs.

Understanding these details could make global communication plans better. It could also help design technology that includes everyone and build better understanding across cultures.


classroom using brain computer interface

Looking Ahead: Toward Shared Cognition?

As brain-computer interfaces (BCIs) become more common, neural synchrony may soon move from being merely observed to being actively used. Imagine classrooms where group engagement is visible in real time, or therapy sessions where neural alignment helps track rapport.

But these advances raise ethical questions:

  • Should neural synchrony be deliberately manipulated?
  • Could shared neural coding be exploited by advertisers or authoritarian governments?
  • Who owns the data if many minds help create a timed output?

The potential of “brain harmony” is huge, but we need to proceed carefully. Still, it is striking to imagine a future built on both individual thought and collective cognition.


One Image, Many Brains, One Pattern?

You have your own unique thoughts. Even so, your brain has been shaped by millions of years of evolution to see the world in ways others can understand. The discovery of shared neural coding connects art and science, psychology and computing. It is a quiet common signature that links us all.

So, the next time you and a friend look at a sunset, think about this. Under your different feelings and experiences, your brains may be seeing the same way.


References

van den Bosch, J. J. F., Shahmohammadi, M., van Bree, S., & de Haas, B. (2024). Asynchronous, individual-specific but highly similar representations of natural images across human brains. eLife.

Kriegeskorte, N., Goebel, R., & Bandettini, P. (2006). Information-based functional brain mapping. Proceedings of the National Academy of Sciences, 103(10), 3863–3868. https://doi.org/10.1073/pnas.0600244103

Hasson, U., Malach, R., & Heeger, D. J. (2010). Reliability of cortical activity during natural stimulation. Trends in Cognitive Sciences, 14(1), 40–48. https://doi.org/10.1016/j.tics.2009.10.011



