Seeing the Unseeable — Imaging a Black Hole with Katie Bouman

19 September 2019

Photo by Dina Douglass © 2019 ACM SIGGRAPH

On 10 April 2019, Katie Bouman and her team shared the first-ever images of the M87 black hole from the Event Horizon Telescope. On 31 July 2019, she shared with SIGGRAPH 2019 audiences how the team was able to accomplish something that was once believed to be an impossible feat, noting the importance of computer graphics in their endeavor and detailing her thoughts on what’s next for black hole imaging. 

The Structure of a Black Hole

“You might ask, how were we able to take a picture of something that, by definition, doesn’t let light escape and is unseeable?”

Bouman began her presentation by breaking down the structure of a black hole and what makes it so difficult to capture. She explained that black holes are cloaked by an “event horizon,” the boundary that prevents light from escaping them. The light you see in the image comes from matter flowing around the black hole, heated to hundreds of billions of degrees, which makes it shine brightly before it passes through the event horizon. Sometimes this gives a black hole enough light to outshine all the stars in its host galaxy. Bouman then posed the following question to the audience: “You might ask, how were we able to take a picture of something that, by definition, doesn’t let light escape and is unseeable?”

Light from a black hole doesn’t actually follow straight lines, she explained. The light is curved because the black hole is curving “spacetime” (the mathematical model that fuses the three dimensions of space and the one dimension of time into a single, four-dimensional continuum), and photons can travel in full circles around it. The net effect of these photons flying around the black hole is the shadow they cast, which is what gives the image its round shape.

Technology Needed to Observe at the Correct Size and Wavelength

As her Frontiers Talk continued, Bouman went into detail on why the observing wavelength matters. At wavelengths that are too short, like those the Hubble Telescope observes, the light from the galaxy’s core is blocked before it can reach Earth. At wavelengths that are too long, the gas surrounding the black hole blocks the light from within. Even at a 3 mm wavelength, the black hole was not visible, leading the team to use the Event Horizon Telescope.

The Need for the Event Horizon Telescope

“It’s the size of a grain of sand… if that grain of sand is in New York, and you’re viewing that grain of sand from SIGGRAPH 2019 in California.”

Another roadblock Bouman explained was the size of the black hole’s shadow, which is only about 40 µas (microarcseconds) across. Comparatively, that’s the size of a grain of sand… if that grain of sand is in New York, and you’re viewing it from SIGGRAPH 2019 in California. The team calculated that the telescope would need to be about 13 million meters across, or about the size of Earth. The team simulated this Earth-sized telescope computationally by linking eight telescopes around the world, together known as the Event Horizon Telescope (EHT).

To explain how joining these telescopes is possible, Bouman introduced Very Long Baseline Interferometry (VLBI). Researchers at each site recorded petabytes of data by “freezing” the light onto hard drives, which computers then processed into an image. The researchers gathered so much data that it could not be sent over the internet; instead, Bouman shared, the hard drives were flown to a common location, where a special supercomputer called a correlator combined the precisely timed data. After that, the data went through a calibration stage to extract a stronger signal.
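The Earth-sized estimate can be sanity-checked with the diffraction limit, θ ≈ λ/D. A minimal sketch, assuming the EHT’s roughly 1.3 mm observing wavelength (a value not stated above) and a target resolution of about half the shadow’s 40 µas width:

```python
import math

WAVELENGTH_M = 1.3e-3  # assumed EHT observing wavelength, ~1.3 mm
MICROARCSEC_TO_RAD = math.pi / (180 * 3600 * 1e6)

shadow_uas = 40.0                                    # apparent size of the M87 shadow
theta_rad = (shadow_uas / 2) * MICROARCSEC_TO_RAD    # resolve features ~half the shadow

# Diffraction limit theta ~ lambda / D, so the required dish diameter is D ~ lambda / theta
required_diameter_m = WAVELENGTH_M / theta_rad
print(f"required dish diameter ~ {required_diameter_m / 1e6:.0f} million meters")
# -> about 13 million meters, roughly the diameter of the Earth (~12.7 million meters)
```

Under these assumptions the answer lands right at the 13 million meters Bouman quoted, which is why a single dish was never an option.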

Accounting for Error

To explain why the researchers needed to account for atmospheric error, Bouman first described how VLBI works in the EHT. Light from the black hole travels for 55 million years to reach Earth, and when it arrives, it hits one telescope slightly before another. This time delay is what allowed the team to extract the 2D spatial frequency measurements used for image reconstruction. However, because the telescopes sit under different patches of atmosphere around the world, the team faced the added challenge of accounting for atmospheric error. By accounting for the phases, gains, and amplitudes at each site across different imaging techniques, they were able to cancel out this error as if there were no atmosphere at all.
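The delay-to-Fourier-component relationship Bouman described can be sketched in a few lines. This is a simplified toy model, with illustrative numbers that are not from the talk: the arrival-time difference along a baseline is the baseline’s projection onto the source direction divided by c, and the baseline projected perpendicular to the line of sight, measured in wavelengths, tells you which 2D spatial frequency of the image that pair of telescopes samples:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def vlbi_sample(baseline_m, source_dir, wavelength_m):
    """Toy sketch of one VLBI measurement: the geometric delay between two
    sites, and the spatial frequency (in wavelengths) the baseline samples."""
    norm = math.sqrt(dot(source_dir, source_dir))
    s_hat = [x / norm for x in source_dir]           # unit vector toward the source
    delay_s = dot(baseline_m, s_hat) / C             # extra path length / c
    proj = dot(baseline_m, s_hat)
    # baseline component perpendicular to the line of sight
    b_perp = [b - proj * s for b, s in zip(baseline_m, s_hat)]
    spatial_freq = math.sqrt(dot(b_perp, b_perp)) / wavelength_m
    return delay_s, spatial_freq

# Toy numbers: a 10,000 km baseline, source 30 degrees off zenith, 1.3 mm waves
delay, sf = vlbi_sample(
    baseline_m=[1.0e7, 0.0, 0.0],
    source_dir=[math.sin(math.radians(30)), 0.0, math.cos(math.radians(30))],
    wavelength_m=1.3e-3,
)
print(f"delay ~ {delay * 1e3:.2f} ms, spatial frequency ~ {sf:.2e} wavelengths")
```

Longer baselines sample higher spatial frequencies, i.e., finer detail, which is why spreading the eight telescopes across the globe matters as much as their number.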

Keeping Bias out of the Data

Regardless of the method, human bias is particularly dangerous when it comes to these images. Bouman went on to explain the four-step process the team used to make sure they weren’t biasing their images. First, the team tested with synthetic data: data sets were given to groups of researchers to reconstruct images without any knowledge of the source, and the resulting images were then passed to experts to evaluate. Occasionally, bizarre images were added to the mix, for example a snowman in space, which Bouman displayed on the screen to show that the process carried no expectations about the source. Step two was blind imaging, in which 40 imaging experts were split into four teams and isolated from each other around the world for seven weeks to produce the best image they could from the data. Step three was objectively choosing parameters through image pipelines, which helped the team confirm that the ring shape was real. Step four was a series of validation tests: for example, independent-day tests, variations across parameters, evaluating the gains, and so on.

Was Einstein Right?

Bouman asked the audience, “What did we actually learn? Did we prove, for instance, that Einstein was right?” She followed up with “No… but we didn’t prove he was wrong, which is also big,” eliciting a laugh from the crowd. The Schwarzschild radius was displayed on the screen to explain the relationship between a fixed mass and the creation of a black hole, and to show how the image the team shared with the world in April 2019 was remarkably consistent with predictions scientists have been making for years.
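That consistency can be illustrated with the Schwarzschild radius itself, r_s = 2GM/c². As a rough sketch, assuming M87’s commonly cited mass of about 6.5 billion solar masses (a value not stated above) and the 55-million-light-year light-travel distance quoted earlier, general relativity predicts a photon-ring shadow of diameter roughly 2√27 GM/c², which works out to about 40 microarcseconds, matching the measured image:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
LIGHT_YEAR = 9.461e15  # meters

mass_kg = 6.5e9 * M_SUN          # assumed M87 mass, ~6.5 billion suns
distance_m = 55e6 * LIGHT_YEAR   # 55 million light years

r_s = 2 * G * mass_kg / C**2     # Schwarzschild radius
# General relativity predicts a shadow ~sqrt(27) Schwarzschild radii across
shadow_diameter_m = 2 * math.sqrt(27) * G * mass_kg / C**2

angle_uas = (shadow_diameter_m / distance_m) * (180 * 3600 * 1e6) / math.pi
print(f"Schwarzschild radius ~ {r_s:.1e} m")
print(f"predicted shadow size ~ {angle_uas:.0f} microarcseconds")
```

Under these assumed inputs the prediction comes out at about 40 µas, in line with the shadow size quoted earlier in the talk.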

The Importance of Computer Graphics in the Black Hole Discovery

“Without these computer graphics renderings, it’s likely we would have never even attempted to see a black hole.”

Taking the audience through the history of computer graphics in black hole imaging, Bouman explained that it wasn’t until the ’70s that people began to create images of what they expected a black hole would look like. In 1979, Jean-Pierre Luminet set out to create the first rendering of a black hole using an IBM 7040 mainframe (an early transistor computer); however, Bouman went on to explain, due to the lack of computer graphics software, Luminet had to create the image by hand, marking down each individual dot with black India ink.

Thanks to technological advancements more familiar to SIGGRAPH attendees, computers in the late ’80s and ’90s could more easily simulate what a black hole might look like, and images created by Heino Falcke in the early 2000s provided inspiration for the Earth-sized telescope. Bouman noted, “Without these computer graphics renderings, it’s likely we would have never even attempted to see a black hole.” She also detailed more recent work on GRMHD (general relativistic magnetohydrodynamics) simulations, which are subsequently rendered using general relativistic ray tracing.

The Next Steps for Black Hole Imaging

Bouman and her team are already looking toward the future and new sources they can image to learn more about black hole structure. She told the audience about the black hole “in our own backyard,” Sagittarius A*, which evolves incredibly quickly due to its small size: the orbital period of its gas is 4 to 30 minutes, compared to the 4- to 30-day orbital periods around M87. This creates new obstacles, some of which the team has already made progress on, that must be overcome to produce images of Sagittarius A*. Bouman ended the presentation by leaving the audience with the hope that one day the researchers will be able to provide “not just a static image, but a dynamically changing, breathing black hole video.”

Bouman presented “Imaging a Black Hole With the Event Horizon Telescope” at SIGGRAPH 2019 Frontiers Talks on Wednesday, 31 July 2019 at 8:00 AM.

Katie Bouman is an assistant professor at Caltech. She was previously a postdoctoral fellow in the Harvard-Smithsonian Center for Astrophysics. She received her Ph.D. in the Computer Science and Artificial Intelligence Laboratory (CSAIL) lab at MIT. The focus of her research is on using emerging computational methods for interdisciplinary imaging.
