On Wednesday, scientists from the Event Horizon Telescope project unveiled the first-ever photo of a black hole. While the photo itself is incredible, the feats of human ingenuity the project’s scientists used to capture it are just as impressive, if not more so.
The EHT project was a collaboration between eight telescope observatories on four continents. Their data was collected by a state-of-the-art combination of sensors and atomic clocks, stored on thousands of hard drives, and shipped to supercomputer clusters at MIT and in Bonn, Germany. There, the data was analyzed by a complex algorithm and meshed together to produce the single image shared with the world earlier this week.
That sort of project takes an absurd amount of coordination between hundreds of people to pull off. Here’s how they did it.
Perhaps the most difficult aspect of the EHT project was getting all the telescopes to work together in the first place. Getting observing time at even one telescope is challenging for a typical astronomer; getting simultaneous observations at over half a dozen telescopes required a lot of negotiations.
It helped, says lead EHT scientist Sheperd S. Doeleman, that the team already had a proof-of-concept image from 2008 that showed this kind of experiment was possible. “Once people saw that our technology worked, and once people saw that we could go to telescopes and do good science,” says Doeleman, “then people were willing to let us into their other telescopes.”
But simply receiving some observing time wasn’t all the EHT project needed. The team had a collection of different telescopes with different sets of equipment and highly different capabilities, so many of the facilities needed to be upgraded or modified. “Some of [the observatories] required tailored modifications in order to work with our equipment,” says Doeleman. “So not only did we have to ask people to use their telescope, but then we also have to ask to go in and modify their telescope in a certain way.”
“We had to convince all of these observatories that the science we wanted to do was good enough that they would let us come in and rummage around in some of their sensitive insides,” he says.
In some cases, that was as simple as installing a few extra pieces of equipment. In others, observatories needed new sensors, new cameras, and new image-processing hardware to meet the project’s requirements. The Atacama Large Millimeter Array in Chile took six years and a lot of negotiations to receive all the necessary upgrades.
“We had to spend six years developing a system to electronically combine the signals from all 60 of the dishes at that site to make it appear as though that one site was a single dish,” says Doeleman. “We went to ALMA and we told them what we wanted to do. They said, ‘This design is too invasive. It touches too many of the components in our facility, so go build a simpler one.’ So we went back to the drawing board and figured out a different way to do it.”
Those telescope upgrades were important due to the way the EHT team collected and used their data. The black hole the team observed was 53 million light-years away and an impossibly tiny dot in the sky. No single telescope could possibly make it out, so the team needed to combine all the light from all the telescopes together into one image.
Of course, the problem there is that the observatories are thousands of miles apart. The astronomers would somehow have to collect the light from these telescopes and bring it across oceans to combine it without losing any of the data. And essentially, that’s what they did.
At each of the eight observatories, the EHT team installed cameras and ultra-high-precision atomic clocks. During observations, the scientists recorded the incoming light signals along with the exact moment those signals were received. All of that information was saved on hard disks, which were then shipped to the computing clusters at MIT and Bonn.
“It turns out that the Internet is just too slow to transfer all of our data,” says Doeleman. “All the data that we recorded at the South Pole in April 2017 would have taken about twenty-five years to get back using the Internet.”
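It’s easy to check how a figure like that arises. The sketch below is a back-of-envelope calculation with purely illustrative numbers (the data volume and link speed are assumptions, not official EHT figures); a petabyte-scale haul over a slow satellite link really does work out to decades.

```python
# Back-of-envelope: how long would a petabyte-scale dataset take to send
# over a slow link? The numbers below are illustrative assumptions, not
# actual EHT figures.

def transfer_time_years(data_petabytes: float, link_mbps: float) -> float:
    """Time to move `data_petabytes` over a `link_mbps` connection, in years."""
    bits = data_petabytes * 1e15 * 8       # petabytes -> bits
    seconds = bits / (link_mbps * 1e6)     # bits / (bits per second)
    return seconds / (365.25 * 24 * 3600)  # seconds -> years

# A hypothetical half petabyte from one station over a 5 Mbit/s link:
print(f"{transfer_time_years(0.5, 5):.1f} years")
```

With those assumed numbers, the answer lands right around 25 years, which is why physically shipping hard drives, despite sounding low-tech, is by far the faster option.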
Instead, hard drives were loaded into trucks and planes and hand-delivered to the data centers, where they were processed on a supercomputer using an algorithm developed by Katie Bouman. There, the data recordings were lined up using the atomic clock timestamps and adjusted based on factors like their locations and the curvature of the Earth.
“The issue is that you need to have enough computational fabric to handle all the recordings at the same time,” says Doeleman. “And these recordings are very high bandwidth. So you’re recording many, many different frequencies, and to do all that, to compute the compensating models for the shape of the Earth and so forth, that literally takes a cluster of computers.”
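At its core, that lining-up step works by finding the relative delay that makes two stations’ recordings match best. The toy sketch below illustrates the idea with synthetic data; it is heavily simplified compared to a real correlator, which also models Earth rotation, the atmosphere, and clock drift.

```python
import numpy as np

# Toy sketch of the core of a VLBI correlator: recover the relative delay
# between two stations' recordings of the same signal by locating the peak
# of their cross-correlation. Synthetic data; heavily simplified.

rng = np.random.default_rng(42)
signal = rng.standard_normal(4096)   # the "light" arriving from the source
true_delay = 37                      # in samples; unknown to the correlator

# Each station sees the same signal, plus its own receiver noise.
# Station B sees it 37 samples later (np.roll shifts circularly).
station_a = signal + 0.5 * rng.standard_normal(4096)
station_b = np.roll(signal, true_delay) + 0.5 * rng.standard_normal(4096)

# Cross-correlate and pick the lag where the recordings agree most.
corr = np.correlate(station_b, station_a, mode="full")
lags = np.arange(-len(station_a) + 1, len(station_a))
estimated_delay = lags[np.argmax(corr)]
print(estimated_delay)  # recovers the 37-sample offset
```

The atomic clock timestamps narrow the search down to a tiny window of possible delays; without them, a brute-force search like this across months of data would be hopeless.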
But the end result is an image much sharper than we would ever be able to obtain with one individual telescope, no matter how large. By combining all eight observatory outputs into one image, the EHT team has created the equivalent of a telescope the size of the entire Earth. Each additional observatory makes that equivalent telescope bigger and more sensitive.
That’s why those six years spent at ALMA were so important. ALMA is one of the biggest telescopes in the world, and it was absolutely essential to pulling off this experiment. “ALMA increased our sensitivity by nearly a factor of 10,” says Doeleman.
What Comes Next?
So if each new telescope makes such a big difference in the output quality, why not just add a few more telescopes to the network? Well, that’s the plan, says Doeleman, but it’s not quite so easy.
“Whenever you add [telescope] dishes it becomes an N-squared problem,” he says. Adding one more dish means correlating it against every dish already in the network. “So adding just a couple more stations really does increase the amount of computation you need to do.”
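The growth is easy to see in numbers: every pair of dishes forms a baseline that must be correlated, so the workload grows as n(n-1)/2, quadratically in the number of stations.

```python
# The "N-squared problem" in numbers: each pair of dishes is a baseline
# that must be cross-correlated, so baselines grow as n * (n - 1) / 2.

def baselines(n_stations: int) -> int:
    """Number of station pairs that must be cross-correlated."""
    return n_stations * (n_stations - 1) // 2

for n in (8, 9, 10, 12):
    print(n, "stations ->", baselines(n), "baselines")
```

Going from 8 stations to 12 raises the baseline count from 28 to 66, more than doubling the correlation work for a 50 percent increase in dishes.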
That’s not an insurmountable hurdle, of course, but it does mean that the EHT team is focusing a lot more on their computational infrastructure. Currently, the plan is to outsource a lot of that computation to cloud-based data centers. “When you do that you have almost unlimited access to computational power,” says Doeleman.
And of course, there are also the logistical and organizational hurdles involved in bringing in more telescopes to the project. Each new observatory means weeks, months, or sometimes years of work to make it compatible with the project. Each new observatory means a new collaboration with an organization or government. And each new observatory means additional scientists and staff to run it. Coordinating all of that isn’t easy.
But it’s worth it, says Doeleman, if it means sharper images of the most enigmatic objects in all of astronomy. With more telescopes, we can get even better photos, and eventually we’ll be able to achieve Doeleman’s ultimate goal: real-time video.
“We're confident that we can take the next step and move from still images to making movies of black holes,” he says. “And we feel there are no logistical problems that would prevent us from doing that over the next decade.”
Sometime in the late 2020s, we might all be able to watch a video of clouds of gas swirling around a black hole larger than our entire solar system, moving so fast they’re energized into plasma, as they are inexorably pulled into a place so dense not even light can escape. And it won’t just be an animation or a render: It’ll be a live recording.