Neuron Probes are Exposing the Brain as Never Before (Kavli Roundtable)

(Image: An activated neuron in a tangle of neurons.)

Lindsay Borthwick, writer and editor for The Kavli Foundation, contributed this article to Live Science's Expert Voices: Op-Ed & Insights.

Neural probes are the workhorses of neuroscience, as essential to a neuroscientist as a compass is to a cartographer. They record the electrical activity of the neurons in our brains — the Buzsaki256, for example, can monitor nearly 250 cells at once. Such tools are indispensable in the accelerating effort to map the brain circuits that underlie how humans think, feel and behave. But they are just some of a growing suite of tools that are exposing the brain as never before.

The Buzsaki256, named for New York University professor and neuroscience pioneer Gyorgy Buzsaki, was developed by biomedical engineer Daryl Kipke of NeuroNexus. "It's finally cool to be a toolmaker," Kipke said recently as he launched into a presentation about the company's technologies. He and 13 of the nation's other leading toolmakers for brain research had gathered for a two-day symposium, The Novel Neurotechnologies, hosted by Columbia University.

Neurotech shifted into high gear with the launch of U.S. President Barack Obama's Brain Research for Advancing Innovative Neurotechnologies (BRAIN) Initiative in 2013. Its centerpiece, as the name suggests, is neurotechnology.

All of this is pushing toolmakers to the front lines of neuroscience research, and as Kipke's comment implies, elevating their status.

Just after the symposium, The Kavli Foundation sat down with the organizers to discuss some of the remarkable new tools that are poised to transform the science of the brain.

The participants were:

  • Rafael Yuste — professor of biological sciences and neuroscience at Columbia University, director of the NeuroTechnology Center and co-director of the Kavli Institute for Brain Science. Yuste is a world leader in the development of optical methods for brain research.

  • Liam Paninski — professor of statistics at Columbia University in New York, co-director of the NeuroTechnology Center and of the Grossman Center for the Statistics of the Mind. Using statistics, he is studying how information is encoded in the brain.

  • Darcy Peterka — research scientist at Columbia University and director of technologies at the NeuroTechnology Center. Peterka is working on developing novel methods for imaging and controlling activity in the brain.

  • Ken Shepard — professor of electrical engineering and biomedical engineering at Columbia University and co-director of the NeuroTechnology Center. His research is focused on combining components of biological and electronic systems to create bioelectronic devices.

The following is an edited transcript of a roundtable discussion. The participants have been provided the opportunity to amend or edit their remarks.

THE KAVLI FOUNDATION: "New directions in science are launched by new tools much more often than by new concepts." So said Cornelia Bargmann, who spearheaded the advisory panel for the BRAIN Initiative, during her kick-off presentation at the Symposium. Do you agree?

Rafael Yuste: I do. In fact, we used that exact quote, from the physicist Freeman Dyson, in a white paper we wrote for the Brain Activity Map project, which evolved into the BRAIN Initiative.

Normally, people think that revolutions in science come about simply because someone has a bright new idea. But if you dig deeper, most of the major revolutions have happened because of new tools. Much of the work we heard about over the past two days was about new methods, and once we as a community develop new methods, the next generation of scientists will be able to see things no one has seen before.

Liam Paninski: There is a long history of theoretical and computational ideas in neuroscience that have percolated for years, even decades, but they have been waiting for the tools to come along to test them out. And that's what's really exciting about where the field is today.

TKF: Can you give me an example?

L.P.: Sure. I saw a talk by a neuroscientist the other day who has done some beautiful work on understanding the motion detection system of the fly: essentially, how a fly figures out which way it's going. Theories about this have been around since the 1950s, but it's only in the past year that people have been actually able to test these theories in detail, by mapping the brain circuits involved in detecting motion.

There are also a handful of theories about how information propagates through neural circuits or how memories are encoded in the structure of neural networks that we're now able to test due to new brain research tools.

R.Y.: Today, Sebastian Seung, a computational neuroscientist at Princeton, gave a similar example for direction selectivity in the retina of mammals. He argued that it took 50 years for people to figure this out, and that the critical advances came with the introduction of new techniques. So that's a very clear example of how with new tools we're beginning to solve these long-standing questions in neuroscience.

Darcy Peterka: I think in some ways, however, the distinction between tools and ideas depends on your perspective. The things that become tools for neuroscientists are sometimes fundamental discoveries in other fields such as chemistry or physics. People may not have realized at first the value of these discoveries outside of those fields, but the merger of ideas across disciplines often creates opportunities to apply fundamental discoveries in new ways.

TKF: Rafa, in your wrap-up today, you called the Kavli Futures Symposium "a dazzling feast of exciting ideas and new data." What did you hear that you're feasting on?

R.Y.: I was very excited by things that I'd never seen before, like the deployable electronics that Charles Lieber, a chemist at Harvard, is working on. He's embedding nanoscale electrical recording devices in a flexible material that can be injected into the brain. I thought it was just a spectacular example of a nanotool that could transform our ability to record the activity of networks of neurons.

In terms of new imaging tools, I'd never seen the type of microscopy that the physicist Jerome Mertz, from Boston University, was showing: phase-contrast microscopy in vivo. He has transformed a relatively simple microscope, the kind that most of us used in school, into a tool to look at thick tissue in vivo, including brain tissue. It was like a breath of fresh air.

On the computational side, I thought Konrad Kording's work on neural connectivity was very refreshing. Kording is the neuroscientist at Northwestern University who showed that by using mathematics to analyze the connections between nerve cells in the worm C. elegans, a widely used model organism, you can distinguish the different cell types that make up its nervous system. I've worked on that problem myself, but I never looked at it from the angle he proposed.
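As a rough illustration of the kind of connectivity-based analysis Kording described — a hypothetical sketch, not his actual method — the example below groups neurons into putative classes purely from a synthetic wiring diagram, by spectral clustering of a block-structured adjacency matrix. All sizes and wiring probabilities are invented.

```python
# Hypothetical sketch: recover cell classes from connectivity alone by
# spectral clustering of a synthetic, block-structured adjacency matrix.
# (Not Kording's actual method; all numbers here are made up.)
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(4)
sizes = [40, 30, 30]          # three planted cell classes
p_in, p_out = 0.3, 0.02       # within- vs. between-class connection rates

n = sum(sizes)
labels_true = np.repeat(np.arange(len(sizes)), sizes)
prob = np.where(labels_true[:, None] == labels_true[None, :], p_in, p_out)

# Draw a random connectivity matrix and symmetrize it for clustering.
adjacency = (rng.random((n, n)) < prob).astype(float)
adjacency = np.maximum(adjacency, adjacency.T)

labels_est = SpectralClustering(n_clusters=3, affinity="precomputed",
                                random_state=0).fit_predict(adjacency)
print(np.bincount(labels_est))   # roughly the planted class sizes
```

Because within-class wiring is much denser than between-class wiring, the clustering recovers the planted groups from the connection pattern alone.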

Overall, I felt a little bit like a kid in a candy store where all the candy was new!

L.P.: The talk by George Church, who helped to kick-start the Human Genome Project and the Brain Activity Map Project with Rafa, was just a wonderland of exciting new things. He's obviously done some radical science in his career, but the technique he talked about — FISSEQ, for fluorescent in situ RNA sequencing — was really exciting. It's a way of looking at all the genes that are expressed, or turned on, in living cells. It has all kinds of applications in neuroscience. If he gets the technique working reliably, it will be huge.

D.P.: Jerome Mertz also introduced us to a technology that is really interesting because it brings together two fields — optical communication and biological imaging — that haven't been combined very powerfully before. He has developed an incredibly thin, flexible microscope that can be inserted deep into the brain. To get it working, he had to figure out how to transmit a lot of spatial information, carried by light through an optical fiber, from one end of the fiber to the other without degrading the image. The telecommunications industry has already solved this problem for cell phones, and he has adapted the solution for optical imaging.

Ken Shepard: What stood out for me is the continued scaling of technologies designed to make electrical recordings of brain activity. We're seeing the development of higher and higher electrode counts, which lets us record from more and more cells.

TKF: Ken, as you just pointed out, one of the major themes of the symposium was finding ways to observe the activity of more neurons — a goal that is shared by the BRAIN Initiative. Michael Roukes, from the Kavli Nanoscience Institute at California Institute of Technology, lamented yesterday that existing tools for making electrical recordings can only monitor a couple hundred neurons at once. Where is that technology moving?

K.S.: One of the issues is that solid-state electronics and the brain have different form factors. One of them is hard and flat; the other is round and squishy. The challenge is to reconcile those two things to make tools that are as non-invasive as possible. The less invasive they are, the less tissue damage they cause and the longer you can leave them in the brain.

There are two ways of doing this: One is to try to make the solid-state stuff as small as possible, so tool developers are trying to make the shanks that contain the electrodes and are inserted into the brain very thin. Tim Harris, director of applied physics at Janelia Research Campus, part of the Howard Hughes Medical Institute, said yesterday that you'd better make them 10 microns thin — that's 10 millionths of a meter — if you can. The second way is to make the electronics flexible, as Charles Lieber is doing. The idea is that if the device is more conformal, it will be more acceptable to the tissue.

As we saw yesterday, nanotechnologists are moving both of these approaches forward and trying to scale them up to record simultaneously from more neurons.

TKF: But there is a limit to the number of neurons that can be recorded electrically, isn't there? I think Michael Roukes argued that limit is 100,000 neurons, after which neuroscience will need a new paradigm.

K.S.: Yes. One of the problems with electrical recording, which I think Michael explained really nicely, is proximity. You have to get the electrodes very close to the neurons that you're trying to record from, which means that if you're trying to record from a lot of cells you need an incredible density of electrodes. Beyond 100,000 neurons, it's just not practical.

So what can we use instead? Michael argued that optical tools could take over from there. In fact, I'm working with him on a tool we call "integrated neurophotonics." We received one of the first BRAIN Initiative grants to develop it. Basically, we're aiming to put the elements of an imaging system — emitter pixel and detector pixel arrays — in the brain. We'll still be sticking probes into the brain, but they'll be much smaller and therefore less invasive. And because they'll detect light rather than electrical signals, they won't require the same proximity. We think that 25 probes will be enough to record the simultaneous activity of 100,000 neurons.

L.P.: If you can solve the computational problem of demixing the signals.

K.S.: Absolutely. I saw you light up when Michael was showing all that stuff. It's going to be an incredible computational problem.
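To make that demixing problem concrete — purely as an illustrative sketch under invented assumptions, not the pipeline Shepard and Roukes are actually building — the example below uses non-negative matrix factorization to separate simulated detector signals back into per-neuron activity traces.

```python
# Illustrative sketch of demixing: each detector records a weighted mixture of
# many neurons' optical signals; factorize the detector-by-time matrix back
# into spatial footprints and per-neuron traces. All sizes are made up.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_neurons, n_detectors, n_timepoints = 20, 50, 1000

# Ground-truth nonnegative activity traces (crudely "calcium-like").
activity = np.maximum(rng.normal(0.0, 1.0, (n_neurons, n_timepoints)), 0.0)

# Each detector sees a random weighted mixture of neurons, plus noise.
mixing = rng.random((n_detectors, n_neurons))
observed = np.maximum(
    mixing @ activity + 0.05 * rng.random((n_detectors, n_timepoints)), 0.0)

# Non-negative matrix factorization: observed ~ footprints @ traces.
model = NMF(n_components=n_neurons, init="nndsvda", max_iter=500)
est_footprints = model.fit_transform(observed)   # detectors x components
est_traces = model.components_                   # components x time
print(est_footprints.shape, est_traces.shape)
```

A real system would face overlapping footprints, tissue scattering and noise statistics far messier than this toy, which is part of why the computational problem is so hard.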

TKF: The other big challenge in neurotechnology is the problem of depth. Even the best optical tools we have can't see more than about a millimeter into the brain. Why is that?

D.P.: The problem is that a beam of light doesn't travel very far in brain tissue without being scattered out of focus. People are working to overcome this by developing ways to see through opaque materials, but the devices they've developed are still too slow to be of practical use to neuroscientists.

L.P.: Astronomers have developed techniques to solve this scattering problem that correct the images taken by ground-based telescopes for atmospheric disturbances. They call this adaptive optics and there's lots of interest in using these same techniques in biology. But the research is still in the early stages.

D.P.: I would say there are two types of adaptive optics. There's traditional adaptive optics, from astronomy. For example, imagine looking through a Coke bottle. The image you see is distorted, but you can still make it out. Now imagine that you're looking through an eggshell or a piece of paper. You would see light but no form or structure. That's closer to the problem neuroscientists face when trying to image the brain. Until recently, people considered the problem too difficult to solve. But in the last couple of years, some researchers have found ways to focus light scattered by a slice of chicken breast. They've also imaged through eggshell and a mouse ear. It's pretty remarkable.

R.Y.: Essentially, there are enough pieces in place that we can actually imagine solving a problem that seemed impossible just two or three years ago. And this is due to the interaction of completely disparate fields: physicists working in optics, engineers building very fast modulators of light and computer scientists developing mathematical approaches to reconstructing images and cancelling out aberrations. So the solution is not here, but the path toward it is starting to be clear.
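One way to build intuition for how scattered light can be refocused is the iterative wavefront-shaping approach Peterka alludes to. The toy simulation below is a sketch under the assumption that a fixed random transmission matrix stands in for the tissue; it adjusts the phase of each input segment in turn to brighten a single target spot.

```python
# Toy wavefront-shaping sketch: optimize the phase of each input segment,
# one at a time, to maximize intensity at one spot behind a scattering
# medium modeled as a fixed random transmission vector. Numbers are made up.
import numpy as np

rng = np.random.default_rng(1)
n_modes = 256  # segments on a hypothetical phase modulator

# Complex transmission coefficients from each input segment to the target spot.
t = (rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)) / np.sqrt(2 * n_modes)

def intensity(phi):
    """Intensity at the target spot for a given input phase pattern."""
    return np.abs(np.sum(t * np.exp(1j * phi))) ** 2

phases = np.zeros(n_modes)                                   # flat wavefront
test_phases = np.linspace(0, 2 * np.pi, 16, endpoint=False)  # candidate phases
baseline = intensity(phases)

for m in range(n_modes):  # optimize one segment at a time, keeping the rest fixed
    trial = phases.copy()
    best_p, best_i = 0.0, -1.0
    for p in test_phases:
        trial[m] = p
        i = intensity(trial)
        if i > best_i:
            best_p, best_i = p, i
    phases[m] = best_p

print(f"focus enhancement: {intensity(phases) / baseline:.0f}x")
```

In a real experiment the feedback comes from a camera or a fluorescent guide star inside the tissue, and the tissue itself changes over time, which is why speed remains the practical bottleneck.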

TKF: The third challenge — and the third focus of the symposium — is computation, which Janelia's Tim Harris underlined when he talked about how difficult it is to handle the data coming from an electrode with just a few hundred channels. Are experimental neuroscientists running ahead of those who are thinking about how to handle the data and what it all means?

L.P.: I think that's a huge bottleneck. There are massive datasets becoming available, and the people who build the computational tools are catching up, but there needs to be a lot more investment and focus in that area. We saw the same thing in systems biology and in genomics, right? First the data came, and then people started figuring out how to deal with them. We're at the first stage now in neuroscience, and I think we're just beginning to build up the computational and statistical infrastructure we need.

D.P.: Another hindrance to the dissemination and analysis of the data is a lack of standardization. Geneticists figured out a way to store and share DNA sequence data, but in neuroscience there is still very little standardization.

L.P.: That'll come eventually. I don't think that's the major roadblock. What I see as lacking right now are students and post-docs who are fluent in both languages: computation and neuroscience.

TKF: Liam, do you think the catch-up will just happen in time, or do there need to be incentives in place to move things along?

L.P.: The incentives are already in place, and as neuroscientists generate more and more data, they are becoming more and more desperate to work with computational scientists. And that brings more funding into the computational realm. But on the other hand, I'm starting to lose trainees to Google and Facebook, which need people who can analyze big data.

R.Y.: One of the most popular majors in college is computer science. I think that will be good for neurotechnology because we'll have students who learned how to code when they were in middle school or high school. They'll be completely fluent by the time they get to the lab, and I think they'll lead the synthesis between computer science and neuroscience that has to happen.

TKF: At the symposium, we heard a lot about new efforts to identify the different types of cells that make up the brain. I think most people would be surprised to learn that we don't really have a good handle on that. Why is there a renewed focus on this?

R.Y.: Neuroscientists worked a lot on this issue of cell types in the past, and it reminds me of an old idea from Georg Hegel, the German philosopher, who argued that history progresses in an iterative way. He called that the dialectic method. You end up circling back to a problem but at a higher level, like a spiral.

With the problem of how many cell types there are in the brain, we're sort of going back to the beginning of neuroscience, except we're doing it in a more quantitative way. Neuroanatomists working 100 years ago identified many cell types, but we don't have numbers associated with them. Now, we can revisit this question with the full power of mathematics and computer science. We'll probably confirm what we already know and swing up this Hegelian spiral to another level, where we'll discover new things that people didn't see before because they didn't have these computational tools.

The tool issue is an important one because the only difference between us and the 19th-century neuroanatomists is that we have better tools, which give us more complete data about the brain. We are not smarter than they were.

L.P.: These cell types are serving as footholds to deeper questions about brain function. Sure, if I hand you piles and piles of data about different cells, computation can help you answer certain questions, such as what does it mean to be a different cell type? How many different cell types are there? What are these cell types useful for? But to me, cell type is just a starting point, a tool that allows you to do more interesting research, rather than the end goal.
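As a hypothetical example of the quantitative cell-typing Yuste and Paninski describe, the sketch below clusters simulated cells into putative types from a few invented physiological features; real studies draw on far richer measurements such as gene expression, morphology and firing patterns.

```python
# Hypothetical sketch: cluster cells into putative types from measured
# features. The features, units and cluster structure are all invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Pretend measurements for 300 cells: spike width (ms), firing rate (Hz),
# soma diameter (um), with three loose groups baked in for illustration.
centers = np.array([[0.3, 30.0, 12.0],    # narrow-spiking, fast-firing
                    [1.0,  5.0, 20.0],    # broad-spiking, slow-firing
                    [0.6, 12.0, 15.0]])   # intermediate
features = np.vstack([c + rng.normal(0.0, [0.05, 2.0, 1.5], (100, 3))
                      for c in centers])

# Standardize the features, then group the cells into three clusters.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(features))
print(np.bincount(labels))   # roughly 100 cells per putative type
```

The interesting scientific questions — how many clusters are "real," and what the cell types are for — begin where a toy like this ends.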

TKF: The circuits that traffic information through the brain have been even more of a mystery than cell types. Are we starting to glean some patterns in the way that brains are organized or how circuits operate?

R.Y.: There was a talk in this meeting, by Chris Harvey, a neuroscientist from Harvard, that touched on a model for how neural circuits operate called the attractor model. It's still debated whether it applies to brain circuits or not, but if it does, this is the kind of model that would apply widely to neural circuits in pretty much any animal. Still, it's very difficult to test whether the attractor model is true or not because doing so would require the acquisition of data from every neuron in a circuit and the ability to manipulate the activity of these neurons. That's not something we can do right now.

L.P.: You can count on one hand the neural circuits we understand. So I think it's just too early right now to really make any conclusions about whether circuits in the retina actually look like those in the cortex, for example. Maybe we will be able to in a couple more years as some of these new methods for monitoring and manipulating large numbers of neurons come online.

TKF: John Donoghue from Brown University, who is a world leader in creating brain-computer interfaces, was one of the few scientists who talked about human applications of neurotechnology. How closely connected are the tools for basic neuroscience research and those aimed at treating brain disorders such as Parkinson's or paralysis?

D.P.: In general, most of the neurotechnologies being used in humans are a little bit bigger than those being used in the lab and lag behind them because of the approval process. But some multielectrode arrays, such as those that John Donoghue implants in people with paralysis to restore mobility, are pretty similar to what people are using in cutting-edge neuroscience labs to study rats or primates.

R.Y.: Donoghue's laboratory has both nanoscientists who are building these cutting-edge tools and a team that works with human patients. So there are places where these technologies are being rapidly developed or adopted to treat brain disorders or to restore lost function.

L.P.: At the moment, I think there are about 20 technologies that can interact with the different parts of the brain in specific medical contexts. John talked about cochlear implants for assisting with hearing loss, deep brain stimulation for Parkinson's disease and retinal implants for blindness, and in all of these cases there are related basic science questions that people are working hard to tackle. For example, to understand what deep brain stimulation is doing, you really need to understand subcortical circuits. So in some cases medicine is driving basic research that probably wouldn't be done if it wasn't for the potential health impact.

I started in John's lab when he was just getting into multielectrode recording. That's what set me on the path toward statistics, because it was very clear that you needed good statistical models of neural activity to develop useful neural prosthetics.
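A minimal sketch of the kind of statistical model Paninski means — not the decoder of any actual prosthetic — is a regression from binned firing rates to intended cursor velocity. Real systems typically use Kalman filters or richer models, and everything below (channel counts, tuning, noise) is invented.

```python
# Minimal decoding sketch: ridge regression from binned firing rates to 2-D
# cursor velocity. The simulated tuning model and all numbers are invented.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_neurons, n_bins = 96, 2000          # e.g., a 96-channel array, 2,000 time bins

velocity = rng.normal(size=(n_bins, 2))            # "true" (vx, vy) per bin
tuning = rng.normal(size=(2, n_neurons))           # linear tuning weights
rates = velocity @ tuning + rng.normal(0.0, 0.5, (n_bins, n_neurons))

train, test = slice(0, 1500), slice(1500, None)    # simple train/test split
decoder = Ridge(alpha=1.0).fit(rates[train], velocity[train])
print("held-out R^2:", round(decoder.score(rates[test], velocity[test]), 3))
```

The statistical questions Paninski describes — which model of neural activity to fit, and how to fit it from limited, noisy data — are exactly what separate a toy like this from a decoder a patient can rely on.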

The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Live Science.

Copyright 2015 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.