AI ‘Mind Captioning’ System Translates Brain Activity into Descriptive Text
Researchers reported that the AI system decoded what participants were seeing or recalling with close to 50% accuracy when selecting among 100 candidate descriptions, demonstrating its potential to interpret both visual input and remembered scenes.
Japanese neuroscientists have reported a breakthrough: a "mind captioning" technique that uses artificial intelligence to translate human brain activity into descriptive sentences, opening new possibilities for communication tools in digital health and neuroscience.
A team led by Dr Tomoyasu Horikawa at NTT Communication Science Laboratories developed a two-stage system: it first decodes semantic features from patterns of brain activity captured by functional MRI (fMRI) scans, and then feeds the decoded features into a language model that generates descriptive text.
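To make the two-stage idea concrete, here is a minimal, purely illustrative sketch. It is not the authors' implementation: it assumes the first stage is a linear (ridge) decoder mapping simulated fMRI voxel patterns to semantic feature vectors, and it stands in for the language-model stage with simple retrieval, scoring 100 candidate caption embeddings by cosine similarity. All dimensions and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: voxels per fMRI pattern and semantic-feature size.
n_train, n_voxels, n_feat = 500, 100, 64

# Synthetic training data: brain responses X and the semantic features Y
# of the captions describing what was viewed.
W_true = rng.normal(size=(n_voxels, n_feat))
X = rng.normal(size=(n_train, n_voxels))
Y = X @ W_true + 0.1 * rng.normal(size=(n_train, n_feat))

# Stage 1: fit a ridge-regression decoder from brain activity to
# semantic features (closed-form solution).
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_voxels), X.T @ Y)

# Stage 2 (retrieval stand-in for text generation): rank 100 candidate
# captions by cosine similarity to the decoded feature vector.
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

x_test = rng.normal(size=(n_voxels,))      # a held-out brain response
decoded = x_test @ W                       # decoded semantic features

candidates = rng.normal(size=(100, n_feat))
candidates[0] = x_test @ W_true            # the "correct" caption's features
best = int(np.argmax([cosine(decoded, c) for c in candidates]))
print(best)
```

In this toy setup the decoder recovers the correct candidate because the mapping is linear and the noise is small; the reported system instead generates free-form sentences, with the 100-way selection used only for evaluation.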
Participants watched short video clips or recalled previously viewed scenes while their brain activity was recorded; the AI system then produced sentences describing what each person was seeing or remembering.
Researchers said the model correctly described viewed scenes in nearly half of the test cases when choosing from a set of 100 options, far above the 1% expected by chance. It also generated meaningful descriptions of recalled memories, suggesting that the system can interpret internal mental imagery rather than just direct visual input.
The method relies on decoding semantic features in the brain, such as objects and actions, rather than accessing private, free-flowing thoughts.
The team said the work is not mind-reading in the popular sense but an interpretive framework that links patterns of neural activity to language. They highlighted the potential humanitarian impact of such a system, particularly for people with conditions such as aphasia, advanced neurodegenerative disease, or severe paralysis who can think clearly but cannot speak or type.
For these patients, a calibrated mind captioning tool could one day act as an interface that turns neural responses into text-based communication.
At the same time, the study highlighted ethical concerns around mental privacy. As brain decoding and AI models improve, there is concern that unintended or unauthorized interpretation of neural activity could pose new risks in already data-intensive health and technology environments.
The authors stressed that decoded content should be treated as an approximation of internal meaning, not a literal reconstruction of private thoughts, and called for strong safeguards and consent frameworks around future applications.
Although the current system depends on large MRI scanners and controlled experimental settings, researchers are exploring how similar approaches might eventually integrate with more practical brain-computer interfaces and digital health tools.