Thread: Excellent video on black hole imaging

1. What I can't figure out yet (I'll need to do some reading and math) is: for each of the radio telescopes they used, what fraction of the image was the accretion-disk region?

Say they took a "picture" of the region with telescopes A and B that are the same size and have the same angular field of view. Is the disk region only 1% of that? I'm not really sure how much a radio telescope can magnify, but why over-magnify if the data just becomes fuzzier? The rest of the field of view is not directly useful for the disk image, but I guess it can be used to standardize the intensities and other things.

Anyway, the telescopes observed for a week and collected a lot of data, some of it recorded simultaneously by specific pairs (or larger groups) of telescopes.

Also, the disk is huge and slowly changing, so it can be treated as a constant object on a week's time scale, unlike the more rapidly varying Sagittarius A*.

I can see how the simultaneous A and B data can be stacked by accounting for time with atomic clocks. But can C and D, which were separately combined with VLBI, be added to this image?

Finally, were this image and the four separate team images made with ZERO reference to what they were expecting to see? Was there no finessing of the data beyond what normally happens? If so, then an unfinessed image, as a first run, is truly a scientific achievement of the highest order possible at this time.

2. VLBI is a form of "aperture synthesis", something also done on a smaller scale, as with the Very Large Array in New Mexico: a set of radio telescopes on rail tracks spread over several square kilometers.

One takes the signals from several telescopes and then combines them, finding the signals' correlations. For a source with direction (unit) vector n and for two telescopes separated by distance vector X, the intensity correlation is

K = K0 * exp(i*k*(n.X))

where k is the angular wavenumber, (2*pi)/(wavelength).
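As a minimal numerical sketch of that formula (Python with NumPy; the wavelength, baseline, and source direction below are invented for illustration, not real EHT values):

```python
import numpy as np

# Correlation of a single point source: K = K0 * exp(i*k*(n.X)),
# with k = 2*pi/wavelength. All numbers are illustrative.
wavelength = 1.3e-3                 # ~1.3 mm, roughly the EHT observing band (m)
k = 2 * np.pi / wavelength          # angular wavenumber (rad/m)

n = np.array([0.0, 0.0, 1.0])       # unit vector toward the source
X = np.array([3.0e6, 1.0e6, 0.0])   # baseline vector between the telescopes (m)

K0 = 1.0                            # source intensity (arbitrary units)
K = K0 * np.exp(1j * k * np.dot(n, X))

print(abs(K))  # the magnitude is just K0 for a single point source
```

The magnitude carries no position information for one source; it is the phase, and how it changes with X, that encodes where the sources are.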

The total intensity is the sum over all the sources:

K = sum over n of K0(n) * exp(i*k*(n.X))

One usually refers the directions to some reference direction n0, and when one does that, one gets

K = sum over n of K0(n) * exp(i*k*((n-n0).X))
K = sum over dn of K0(dn) * exp(i*k*(dn.X))

where dn = (n - n0) is the direction difference. It is usually written as d1*e1 + d2*e2, where e1 and e2 are two unit vectors perpendicular to n0, usually taken to be north-south and east-west. If
n = {cos(dec)*cos(ra), cos(dec)*sin(ra), sin(dec)}
then
e1 = {-sin(dec)*cos(ra), -sin(dec)*sin(ra), cos(dec)}
e2 = {-sin(ra), cos(ra), 0}
(dec = declination, ra = right ascension, a sort of celestial latitude and longitude)

So one gets the equation

K = sum over dn of K0(d1,d2) * exp(i*k*(d1*(e1.X) + d2*(e2.X)))

So from the K's, one can get an intensity map, K0, by going through all the possible values of the separation X.
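Here is a 1-D toy of that inversion (all numbers invented; a real array samples X far more sparsely, which is the problem discussed below):

```python
import numpy as np

# Toy 1-D version of K = sum over dn of K0(dn)*exp(i*k*(dn.X)):
# two point sources, visibilities sampled on a dense grid of baseline
# projections. All numbers are invented for illustration.
k = 2 * np.pi / 1.3e-3                  # angular wavenumber at 1.3 mm
offsets = np.array([0.0, 2.0e-10])      # source offsets dn (radians)
fluxes = np.array([1.0, 0.5])           # K0 for each source

X = np.linspace(-1.2e7, 1.2e7, 4001)    # Earth-scale baseline projections (m)
K = (fluxes[None, :] * np.exp(1j * k * offsets[None, :] * X[:, None])).sum(axis=1)

# Invert by correlating K against exp(-i*k*dn*X) for trial offsets dn:
trial = np.linspace(-1.0e-10, 3.0e-10, 401)
recovered = (K[None, :] * np.exp(-1j * k * trial[:, None] * X[None, :])).mean(axis=1).real

# The brightest peak of the recovered map lands near the offset of the
# brightest source (dn = 0 here), with sidelobes from the finite X range.
```

With a dense, symmetric grid of X values this inversion works cleanly; the sparse-coverage problem below is what makes real imaging hard.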

3. thanks, but did they indeed not use any modeling of the expected accretion disk to make the images?

4. Returning to my discussion of VLBI, there is a big problem: the X's do not completely cover the space of possible values. Instead, one gets a curve through that space as a result of the Earth's rotation. With only two telescopes that is not much coverage, which is why aperture synthesis usually uses more than two; the more, the better.

Even then, the observed X values are sparsely distributed over that space, and one has to use some technique for filling in the blanks. X enters only through its projections onto the plane of the sky, (e1.X) and (e2.X), often designated u and v, and that makes the problem only a little easier.
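A rough sketch of that rotation effect, using the e1, e2 vectors from earlier (a single made-up baseline, source placed at ra = 0 for simplicity):

```python
import numpy as np

# How Earth's rotation sweeps one baseline through the (u, v) plane:
# a baseline fixed to the rotating Earth is projected onto the sky-plane
# unit vectors e1, e2. All numbers are illustrative.
dec = np.radians(30.0)                  # source declination (illustrative)
B = np.array([4.0e6, 2.0e6, 1.0e6])     # baseline in the Earth-fixed frame (m)

hours = np.linspace(0.0, 24.0, 97)
ha = 2 * np.pi * hours / 24.0           # Earth's rotation angle

# Rotate the baseline about Earth's axis (the z axis):
Bx = B[0] * np.cos(ha) - B[1] * np.sin(ha)
By = B[0] * np.sin(ha) + B[1] * np.cos(ha)
Bz = np.full_like(ha, B[2])

e1 = np.array([-np.sin(dec), 0.0, np.cos(dec)])   # north-south on the sky
e2 = np.array([0.0, 1.0, 0.0])                    # east-west on the sky

u = e1[0] * Bx + e1[1] * By + e1[2] * Bz          # (e1 . X)
v = e2[0] * Bx + e2[1] * By + e2[2] * Bz          # (e2 . X)

# Over 24 hours, (u, v) traces a single ellipse, not the whole plane:
# that is the sparse coverage one baseline provides.
```

Each additional telescope adds one such ellipse per pair, which is why more telescopes fill the (u, v) plane so much faster.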

Astronomers have developed several algorithms for doing that, typically involving some hypotheses about the resulting image.

The "CLEAN" algorithm assumes a collection of point sources, and subtracts out each one that it finds.

The Maximum Entropy Method tries to find the smoothest image that fits the data. The "entropy" here is -(sum of I*log(I)) for intensity I, though log(I) without the I multiplier is also often used.
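A tiny illustration of why maximizing that entropy favors smooth images (a hypothetical 16-pixel image, not any actual EHT code):

```python
import numpy as np

# For images with the same total flux, the uniform image maximizes
# -sum(I * log I), so maximizing entropy biases reconstruction toward
# smoothness. Pixel values below are invented.
def entropy(image):
    I = np.asarray(image, dtype=float)
    I = I[I > 0]                      # 0*log(0) is taken as 0
    return -np.sum(I * np.log(I))

flat = np.full(16, 1.0 / 16)          # uniform image, total flux 1
peaky = np.zeros(16)
peaky[0] = 1.0                        # all flux in one pixel, total flux 1

print(entropy(flat) > entropy(peaky))  # True: the smooth image wins
```

In practice MEM maximizes this entropy subject to the image reproducing the measured visibilities, which is what pins down the actual structure.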

A technique used on M87* assumes that patches of its image resemble patches drawn from some set of reference images. This technique gave consistent results for different sets of reference images: simulations of black holes, images of astronomical objects, and images of everyday things.

5. Originally Posted by repoman
thanks, but did they indeed not use any modeling of the expected accretion disk to make the images?
I think that they were careful to avoid that, because that would be a rather circular procedure. To be able to test one's models, one has to avoid assuming any of them.

That ring is a gravitational-lens effect, from light being deflected near the BH.

6. Originally Posted by lpetrich
I think that their next target will be Sagittarius A*, the black hole in the center of our Galaxy. If they get confident enough, then the Andromeda Galaxy's BH may be next.
I thought that they had already taken the data for a Sagittarius A image but were still processing that data to come up with an image. But then I only heard that once from one source so it could be wrong.

7. Originally Posted by skepticalbip
Originally Posted by lpetrich
I think that their next target will be Sagittarius A*, the black hole in the center of our Galaxy. If they get confident enough, then the Andromeda Galaxy's BH may be next.
I thought that they had already taken the data for a Sagittarius A image but were still processing that data to come up with an image. But then I only heard that once from one source so it could be wrong.
You could well be right, but I have not probed into that issue very deeply. The problem with Sgr A* is that it's much smaller than M87*, so the material near it can vary much faster. It's like trying to take a long exposure of something that moves very fast.
