Electroencephalograms, functional magnetic resonance imaging, and diffusion tensor imaging are just a few of the techniques scientists use to measure the structure and functions of the brain. All of these different methods produce large sets of data. The problem is that there has been no systematic way of integrating all of the information.
Sharing data from dozens of different types of scans to build a brain
In 2006, Professor Randy McIntosh, Reva James Leeds Chair in Neuroscience and Research Leadership, dreamed up a solution to this problem with his friend and collaborator Victor Jirsa in a pub in Cambridge, UK. Their idea was to build a virtual model of the human brain on a platform that standardizes how scientists integrate data from various types of brain scans.
McIntosh developed the virtual brain with an interdisciplinary team that includes experimental psychologists, anatomists, physicists, mathematicians, and clinician scientists, along with a consortium of institutions. As McIntosh explains, “The virtual brain becomes a new mode of collaboration that allows people to share data in ways that we haven’t before.”
The virtual brain explains why our minds never rest
Today, their virtual brain runs on a supercomputer at U of T, simulating the neural network and capturing the shapes and speeds of myriad connections both within and between areas of a real human brain. They incorporate new empirical data from real brains to continually refine their virtual one, increasing the accuracy of its predictions.
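The kind of simulation described above can be sketched, in highly simplified form, as a network of coupled oscillators in which connection strengths and conduction delays stand in for the “shapes and speeds” of the links between brain regions. The sketch below is purely illustrative; the weights, delays, and oscillator model are hypothetical stand-ins, not the Virtual Brain’s actual models or data.

```python
import numpy as np

# Illustrative toy model: brain regions as phase oscillators coupled through
# a hypothetical connectivity matrix with per-connection conduction delays.
rng = np.random.default_rng(seed=0)
n_regions = 4                   # number of simulated regions (toy scale)
dt = 0.001                      # time step in seconds
steps = 2000                    # total steps (2 seconds of simulated time)

# Hypothetical structural connectivity: coupling weights and delays.
weights = rng.uniform(0.0, 0.5, size=(n_regions, n_regions))
np.fill_diagonal(weights, 0.0)  # no self-connections
delays = rng.integers(5, 20, size=(n_regions, n_regions))  # in time steps

# Intrinsic frequencies, roughly alpha-band (8-12 Hz), in radians/second.
freq = 2 * np.pi * rng.uniform(8, 12, size=n_regions)

theta = np.zeros((steps, n_regions))            # phase of each region over time
theta[0] = rng.uniform(0, 2 * np.pi, n_regions)

for t in range(1, steps):
    coupling = np.zeros(n_regions)
    for i in range(n_regions):
        for j in range(n_regions):
            # Region i sees region j's phase as it was delays[i, j] steps ago,
            # modeling finite signal-propagation speed between areas.
            t_delayed = max(t - 1 - delays[i, j], 0)
            coupling[i] += weights[i, j] * np.sin(theta[t_delayed, j] - theta[t - 1, i])
    theta[t] = theta[t - 1] + dt * (freq + coupling)

activity = np.sin(theta)  # a simple stand-in for regional activity signals
```

In a model like this, refining the weights and delays against empirical scan data is what would tighten the match between simulated and real activity, which mirrors, at toy scale, how new measurements continually improve the virtual brain’s predictions.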
In one of several discoveries, the virtual brain has been instrumental in explaining why the brain produces so much “noise” in a resting state. The noise appears to be the brain’s vigilant humming as it prepares to respond to likely, imminent scenarios. The brain does not rest so much as actively anticipate its next moves.
Testing how your brain might react to aging, medication, sleep or stimulation
McIntosh’s larger goal is to make the virtual brain useful in a clinical setting. Because each human brain is unique, the virtual brain must be more than a model for human brains in general; it must also be adaptable enough to model the brains of individual patients. To this end, McIntosh and his team have built a web interface that allows physicians to upload data from brain scans to the virtual brain and get results relevant to their own patients. “We can actually make your brain the virtual brain,” McIntosh explains. “We can watch it grow up, we can watch it get old, we can give it drugs, and we can put it to sleep, for example. We can also stimulate it.”
For example, McIntosh notes, different brains recover from strokes very differently. Physicians will be able to use the virtual brain to identify the best form of treatment for a given stroke patient.
Revolutionizing how computers talk to humans
The virtual brain also has commercial prospects, in helping to develop new ways for us to interact with computer hardware. Attempts to make computers responsive to brain activity have so far suffered from a lack of accurate data about how the brain functions. The virtual brain can provide that data, and could soon help revolutionize how engineers design brain-computer interfaces.
Looking back over the past decade, bringing together such a disparate group of researchers, each speaking the language of their own discipline, was a huge hurdle. Now, as the consortium begins turning out PhDs trained in this interdisciplinary environment, researchers are already fluent in one another’s disciplines and the project promises to become increasingly sophisticated. With each step forward, their virtual brain becomes more and more like a real one.