Super-resolution Planar Mass-mapping

This example instantiates the planar mass-mapping forward model with the super-resolution operator active. The example is also noise whitened; that is, the pixel-space map is multiplied by the inverse noise covariance matrix \(\Sigma^{-1}\), which is sufficiently approximated from the observation counts per pixel. Full details can be found in Price et al. 2021.

In such an example the forward model (measurement operator) is straightforwardly given by

\[\Phi = \mathsf{C} \; \mathsf{M} \; \mathsf{F}^{-1}_{\text{lr}} \; \mathsf{D} \; \mathsf{Z} \; \mathsf{F}_{\text{hr}}\]

where \(\mathsf{F}_{\text{hr}}\) is a high-resolution (dimension \(N^{\prime}\)) fast Fourier transform, \(\mathsf{Z} \in \mathbb{C}^{N \times N^{\prime}}\) is a Fourier-space down-sampling which maps \(\tilde{\kappa}^{\prime} \in \mathbb{C}^{N^{\prime}}\) onto \(\tilde{\kappa} \in \mathbb{C}^{N}\) (where tildes denote Fourier coefficients), \(\mathsf{C}\) is multiplication by the inverse noise covariance matrix \(\Sigma^{-1}\), \(\mathsf{M}\) is a typical masking operator, and \(\mathsf{D}\) is the planar forward model given by

\[\mathsf{D}_{k_x,k_y} = \frac{k_x^2-k_y^2+2ik_xk_y}{k_x^2+k_y^2},\]

as derived in Kaiser & Squires 1993. Correspondingly, the adjoint measurement operator is given by

\[\Phi^T = \mathsf{F}^{-1}_{\text{hr}} \; \mathsf{Z}^T \; \mathsf{D}^\star \; \mathsf{F}_{\text{lr}} \; \mathsf{M}^T \; \mathsf{C},\]

where the adjoint of the unitary FFT \(\mathsf{F}\) is its inverse, the adjoint of the unitary mapping \(\mathsf{D}\) is its complex conjugate, the inverse covariance weighting \(\mathsf{C}\) is trivially self-adjoint, \(\mathsf{M}^T\) is gridding (the adjoint of masking, or degridding), and the adjoint of the down-sampling operator \(\mathsf{Z}\) is zero-padding to dimension \(N^{\prime}\).

Note: in this example we assume all pixels have observation counts > 0, and thus the masking operator \(\mathsf{M}\) reduces to the identity \(\mathcal{I}\) and is omitted for clarity.
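Before moving on, it may help to see these operators concretely. Below is a minimal NumPy sketch of \(\Phi\) and \(\Phi^T\) under the assumptions above (this is not darkmappy's implementation; the fourier_crop/fourier_pad helpers, the unitary 'ortho' FFT normalisation, and the random inverse-covariance weights are illustrative assumptions), together with a dot-test verifying that the pair are indeed adjoint:

[ ]:
import numpy as np

def planar_kernel(n):
    """Kaiser-Squires forward kernel D on an n x n Fourier grid."""
    k = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                            # avoid division by zero at the DC mode
    D = (kx ** 2 - ky ** 2 + 2j * kx * ky) / k2
    D[0, 0] = 0.0                             # DC mode is unconstrained; set it to zero
    return D

def fourier_crop(x_f, n):
    """Z: down-sample by keeping only the n x n lowest frequencies."""
    c = (x_f.shape[0] - n) // 2
    return np.fft.ifftshift(np.fft.fftshift(x_f)[c:c + n, c:c + n])

def fourier_pad(x_f, n_hr):
    """Z^T: zero-pad the Fourier plane back up to n_hr x n_hr."""
    n = x_f.shape[0]
    c = (n_hr - n) // 2
    out = np.zeros((n_hr, n_hr), dtype=complex)
    out[c:c + n, c:c + n] = np.fft.fftshift(x_f)
    return np.fft.ifftshift(out)

def phi(kappa_hr, n, inv_cov):
    """Phi = C F_lr^{-1} D Z F_hr, with the mask M omitted."""
    kappa_f = np.fft.fft2(kappa_hr, norm="ortho")               # F_hr
    gamma_f = planar_kernel(n) * fourier_crop(kappa_f, n)       # D Z
    return inv_cov * np.fft.ifft2(gamma_f, norm="ortho")        # C F_lr^{-1}

def phi_t(gamma, n_hr, inv_cov):
    """Phi^T = F_hr^{-1} Z^T D^* F_lr C."""
    gamma_f = np.fft.fft2(inv_cov * gamma, norm="ortho")        # F_lr C
    kappa_f = np.conj(planar_kernel(gamma.shape[0])) * gamma_f  # D^*
    return np.fft.ifft2(fourier_pad(kappa_f, n_hr), norm="ortho")  # F_hr^{-1} Z^T

# Dot-test: <Phi x, y> should equal <x, Phi^T y> to numerical precision
rng = np.random.default_rng(0)
x = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))
y = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
w = rng.uniform(0.5, 1.5, size=(128, 128))  # stand-in for the diagonal of Sigma^{-1}
print(np.vdot(phi(x, 128, w), y), np.vdot(x, phi_t(y, 256, w)))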

First, let's import some Python modules

[ ]:
import numpy as np

import darkmappy.estimators as dm
import darkmappy.validation as stats
import darkmappy.noise_simulations as noise_factory

from matplotlib import pyplot as plt
from scipy import ndimage

Load some simulation data – here, Bolshoi N-body cluster simulations

Additionally we define a plotting function that takes two arguments: \(x\) (the map to plot) and iteration (the iteration being plotted). This function can be passed to the estimator through its settings so that it plots after an adjustable number of iterations. This way you can watch the solver reconstruct dark matter in real time!

[ ]:
# Import data
kappa_true = np.load("data/Bolshoi_cluster_256.npy") + 0j

# Define a plotting function for realtime viewing during optimisation
def make_plot(x, iteration, show_stats=True):
    plt.imshow(np.real(x), cmap='magma', vmax=0.6)
    plt.axis('off')
    plt.show()
    if show_stats:
        snr, p_corr = stats.analyse(kappa_true, x)
        print("Iteration: {} SNR: {} and Pearson Correlation: {}".format(iteration, snr, p_corr))
    return

# Take a look at the 'ground truth'
make_plot(kappa_true, iteration=0, show_stats=False)
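For reference, stats.analyse reports the recovered SNR (in dB) and the Pearson correlation coefficient against the ground truth. A back-of-envelope sketch of how such metrics are typically computed (the exact darkmappy definitions may differ slightly):

[ ]:
def snr_db(truth, x):
    """Recovered SNR in dB: energy of the truth relative to the residual."""
    return 20 * np.log10(np.linalg.norm(truth) / np.linalg.norm(truth - x))

def pearson_corr(truth, x):
    """Pearson correlation between the real parts of truth and estimate."""
    a = np.real(truth).ravel() - np.real(truth).mean()
    b = np.real(x).ravel() - np.real(x).mean()
    return np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b))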

Now let's simulate some shear observations

This computes the size of each pixel in your image from the specified resolution and the dimensionality of your pixelisation scheme. It then determines the number of galaxies that might reasonably be located within each pixel and adds Gaussian noise \(n \sim \mathcal{N}(0,\Sigma)\) appropriately.

Note: In cluster situations like this the number of observations per pixel is typically very low, and often shot-noise limited. Hence, in a more realistic simulation the noise distribution would more accurately be treated as Poissonian (which is also supported through Optimus-Primal, but is beyond the scope of this example).

[ ]:
n_gal = 100         # Number of galaxy observations per square arcminute
sidelength = 30     # Side of the observed square on the sky, in arcminutes
supersample = 2     # Factor by which to attempt to super-resolve (must be even!)

# Compute the dimension of the low-resolution observation grid
n = int(kappa_true.shape[0]/supersample)

# Generates a random realisation of i.i.d. Gaussian noise per pixel
data, n_gal_map = noise_factory.simulate_shear_plane(
             x = kappa_true,
    sidelength = sidelength,
          ngal = n_gal,
   supersample = supersample)
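To make the noise level concrete, here is a back-of-envelope check of the per-pixel statistics implied by the settings above (sigma_e = 0.37 is an assumed intrinsic ellipticity dispersion; darkmappy's internal value may differ):

[ ]:
# Expected galaxy count and Gaussian noise level per low-resolution pixel
pixel_area = (sidelength / n) ** 2          # pixel area in square arcminutes
gal_per_pix = n_gal * pixel_area            # expected observations per pixel
sigma_e = 0.37                              # assumed per-galaxy shape noise
sigma_pix = sigma_e / np.sqrt(gal_per_pix)  # std of the Gaussian pixel noise
print("Galaxies per pixel: {:.1f}, noise std per pixel: {:.3f}".format(gal_per_pix, sigma_pix))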

Instantiate the Darkmapper estimator

Most variables are self-explanatory; however, take note of the ‘wav’ and ‘levels’ parameters. These correspond to the wavelet dictionary used for regularisation; a full selection of planar wavelet dictionaries can be found in the PyWavelets API documentation.

Note: If your reconstruction doesn't look visually reasonable, this could be because your signal is not sparsely distributed when projected into your chosen dictionary. Try adjusting the dictionary to see if you can find one that more naturally sparsifies your signal, as in the quick check below.
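As a quick diagnostic, you can project the ground truth into a candidate basis with PyWavelets directly and check how much energy a small fraction of coefficients captures (an illustrative check; the pywt import is an additional assumption here):

[ ]:
import pywt

# Decompose the map and measure the energy held by the largest 1% of coefficients;
# the closer this is to 100%, the more naturally sparsifying the dictionary.
wav = pywt.Wavelet("db4")
lvl = pywt.dwt_max_level(min(kappa_true.shape), wav.dec_len)
flat, _ = pywt.coeffs_to_array(pywt.wavedec2(np.real(kappa_true), wav, level=lvl))
mags = np.sort(np.abs(flat).ravel())[::-1]
top = int(0.01 * mags.size)
print("Energy in the largest 1% of db4 coefficients: {:.1%}".format(
    np.sum(mags[:top] ** 2) / np.sum(mags ** 2)))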

[ ]:
# Instantiate the estimator class
darkmapper_estimator = dm.DarkMappyPlane(
                   n = n,                         # Dimensionality of problem
                data = data,                      # Simulated shear data
                ngal = n_gal_map,                 # Map of observation count per pixel
              viewer = make_plot,                 # Custom plotting function for realtime viewing
                 wav = ['db1', 'db4', 'db6'],     # Selected wavelet dictionaries for sparsity averaging
              levels = 6,                         # Number of wavelet levels
         supersample = supersample)               # Degree of supersampling (typically <~2)

Run the estimator

[ ]:
# Manually adjust the optional parameters
darkmapper_estimator.options['tol'] = 1e-4           # Converge once the optimisation update falls below this value
darkmapper_estimator.options['positivity'] = False   # Toggle a positivity constraint on the reconstruction
darkmapper_estimator.options['real'] = False         # Toggle a reality constraint on the reconstruction
darkmapper_estimator.options['constrained'] = False  # Solve the unconstrained optimisation problem
darkmapper_estimator.options['update_iter'] = 200    # Iterations between printing solver diagnostics (and plotting)
darkmapper_estimator.options['iter'] = 1000          # Maximum number of iterations

# Run the optimisation and recover the MAP solution 'sol'
sol, diag = darkmapper_estimator.run_estimator(mu=7)
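For reference, with ‘constrained’ set to False the solver returns the maximum a posteriori (MAP) estimate of the unconstrained problem, which schematically (up to darkmappy's exact conventions for the whitened data fidelity) reads

\[\hat{\kappa} = \underset{\kappa}{\mathrm{argmin}} \; \Big\{ \mu \| \Psi^{\dagger} \kappa \|_1 + \frac{1}{2} \| \Phi \kappa - \gamma \|_2^2 \Big\},\]

where \(\Psi\) is the wavelet dictionary selected above, \(\gamma\) is the covariance-weighted shear data, and \(\mu\) is the regularisation strength passed to run_estimator.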

Compute the optimally smoothed Kaiser-Squires estimator for comparison

Here we simply perform a gridsearch over a range of smoothing scales and locate the smoothing scale that maximises the recovered SNR. In this way we give the existing Kaiser-Squires method as much of an advantage as possible.

[ ]:
# Use darkmappy's inbuilt Kaiser-Squires estimator for simplicity.
ks = darkmapper_estimator.phi.ks_estimate(data)

# Gridsearch to find the 'optimal' smoothing for KS estimator
snr_max = 0
smooth_max = 0
for i in range(30):
    smoothed_ks = ndimage.gaussian_filter(np.real(ks), float(i+1))
    snr_iter = stats.snr(np.real(kappa_true), np.real(smoothed_ks))
    if snr_iter > snr_max:
        snr_max = snr_iter
        smooth_max = float(i+1)

optimal_ks = ndimage.gaussian_filter(np.real(ks), smooth_max)

# Plot the optimally smoothed KS estimator (here 'iteration' records the smoothing scale)
make_plot(optimal_ks, iteration=smooth_max)

Final plot for clarity

Let's plot all the different reconstructions together and compare.

[ ]:
fig, axs = plt.subplots(1, 4, figsize=[16, 5])

titles = ["Truth", "KS", "KS Optimal", "DarkMappy"]
est = [np.real(kappa_true), np.real(ks), np.real(optimal_ks), np.real(sol)]

for i in range(4):
    axs[i].imshow(est[i], cmap="magma", vmax=0.6, vmin=np.min(est[0]))
    axs[i].set_title(titles[i], fontsize=14)
    if i > 0:
        axs[i].set_xlabel("SNR: {} dB".format(round(stats.snr(est[0], est[i]), 2)), fontsize=12)

    plt.setp(axs[i].get_xticklabels(), visible=False)
    plt.setp(axs[i].get_yticklabels(), visible=False)

plt.show()

Conclusions

darkmappy displays significantly greater reconstruction fidelity in a realistic cluster reconstruction problem. There is, however, a caveat: as the super-resolution factor becomes more extreme, the constraining power of Bayesian uncertainty quantification techniques becomes increasingly diluted due to the curse of dimensionality.