Daun

Introduction

This suite of methods is based on the deconvolution procedure with Tikhonov regularization described by Daun et al. [1] and extends it with additional, smoother approximations and regularization types.

How it works

The original method formulates the numerical Abel transform in a form equivalent to the “onion peeling” method, where the original distribution is approximated with a step function (a piecewise constant function with 1-pixel-wide radial intervals). The forward transform can thus be described by a system of linear equations in matrix form

(1)\[\mathbf A_\text{OP} \mathbf x = \mathbf b,\]

where the vector \(\mathbf x\) consists of the original distribution values sampled at a uniform radial grid \(r_i = i \Delta r\), the vector \(\mathbf b\) consists of the projection values sampled at the corresponding uniform grid \(y_i = i \Delta r\), and the matrix \(\mathbf A_\text{OP}\) corresponds to the “onion peeling” forward Abel transform. Its elements are

\[\begin{split}A_{\text{OP}, ij} = \begin{cases} 0, & j < i, \\ 2 \Delta r \big[(j + 1/2)^2 - i^2\big]^{1/2}, & j = i, \\ 2 \Delta r \Big(\big[(j + 1/2)^2 - i^2\big]^{1/2} - \big[(j - 1/2)^2 - i^2\big]^{1/2}\Big), & j > i \end{cases}\end{split}\]

and represent contributions of each distribution interval to each projection interval.
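For illustration, this matrix is straightforward to assemble directly from the formula above. The following is a minimal NumPy sketch (an illustration, not PyAbel's internal implementation), assuming a unit radial step by default:

import numpy as np

def onion_peeling_matrix(n, dr=1.0):
    """Upper-triangular forward-transform matrix A_OP for n radial points."""
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    # the two square roots, clipped to zero where their arguments are negative
    outer = np.sqrt(np.clip((j + 0.5) ** 2 - i ** 2, 0, None))
    inner = np.sqrt(np.clip((j - 0.5) ** 2 - i ** 2, 0, None))
    A = 2 * dr * (outer - inner)        # j > i case
    A[j == i] = 2 * dr * outer[j == i]  # diagonal: only the outer term
    A[j < i] = 0                        # intervals inside radius i contribute nothing
    return A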

However, instead of performing the inverse transform by using the inverse matrix directly:

\[\mathbf x = \mathbf A_\text{OP}^{-1} \mathbf b,\]

as is done in the “onion peeling” method, equation (1) is solved by applying Tikhonov regularization:

(2)\[\tilde{\mathbf x} = \operatorname{arg\,min}_{\mathbf x} \left( \|\mathbf A_\text{OP} \mathbf x - \mathbf b\|^2 + \alpha \|\mathbf L \mathbf x\|^2 \right),\]

where \(\alpha\) is the regularization parameter, and the finite-difference matrix

\[\begin{split}\mathbf L = \begin{bmatrix} -1 & 1 & 0 & \cdots & 0 \\ 0 & -1 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & -1 & 1 \end{bmatrix}\end{split}\]

(approximation of the derivative operator) is used as the Tikhonov matrix. The idea is that the admixture of the derivative norm to the minimization problem leads to a smoother solution. The regularization parameter \(\alpha\) controls how much attention is paid to the derivative: when \(\alpha = 0\), the exact solution to (1) is obtained, even if very noisy; when \(\alpha \to \infty\), the solution becomes very smooth, even if reproducing the data poorly. A reasonably chosen value of \(\alpha\) can result in a significant suppression of high-frequency noise without noticeably affecting the signal.

The minimization problem (2) leads again to a linear matrix equation, and the regularized inverse transform is obtained by using the regularized matrix inverse:

(3)\[\tilde{\mathbf x} = \mathbf A_\text{Tik}^{-1} \mathbf b_\text{Tik}, \quad \mathbf A_\text{Tik} = \mathbf A^T \mathbf A + \alpha \mathbf L^T \mathbf L, \quad \mathbf b_\text{Tik} = \mathbf A^T \mathbf b.\]

(Note: here \(\mathbf x\) and \(\mathbf b\) are column vectors, but in PyAbel they are row vectors corresponding to image rows, so all the equations are transposed; moreover, instead of processing the row vectors separately, the whole image matrix is transformed at once.)
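For concreteness, here is a hedged sketch of the regularized solve (3) for a single column vector, using the onion_peeling_matrix helper from above and a synthetic noisy projection (PyAbel itself works on whole images in the transposed layout):

import numpy as np

n = 200
A = onion_peeling_matrix(n)                    # forward matrix from eq. (1)
L = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # finite-difference matrix L
alpha = 100.0                                  # regularization parameter

# synthetic test: a Gaussian peak projected forward with added noise
rng = np.random.default_rng(0)
x_true = np.exp(-0.5 * ((np.arange(n) - 80) / 10) ** 2)
b = A @ x_true + rng.normal(0, 0.5, n)

# regularized inverse transform, eq. (3)
A_tik = A.T @ A + alpha * (L.T @ L)
x_reg = np.linalg.solve(A_tik, A.T @ b)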

PyAbel additions

Basis sets

The step-function approximation used in the original method implies a basis set consisting of rectangular functions

\[\begin{split}f_i(r) = \begin{cases} 1, & r \in [i - 1/2, i + 1/2], \\ 0 & \text{otherwise}. \end{cases}\end{split}\]

This approximation can be considered rather coarse, so in addition to these zero-degree piecewise polynomials we also implement basis sets consisting of piecewise polynomials up to 3rd degree. An example of a test function composed of broad and narrow Gaussian peaks and its approximations of various degrees is shown below:

../_images/daun-basis.svg

Here the solid black line is the test function, and the dashed black line is its approximation of degree \(n\), equal to the sum of the colored basis functions.

degree = 0:

Rectangular functions produce a stepwise approximation. This is the only approach mentioned in the original article and corresponds to the usual “onion peeling” transform.

degree = 1:

Triangular functions produce a continuous piecewise linear approximation. Besides being continuous (although not smooth), this also corresponds to how numerical data is usually plotted (with points connected by straight lines), so such plots faithfully convey the underlying method assumptions (see the numerical check after this list).

degree = 2:

Piecewise quadratic functions

\[\begin{split}f_i(r) = \begin{cases} 2[r - (i - 1)]^2, & r \in [i - 1, i - 1/2], \\ 1 - 2[r - i]^2, & r \in [i - 1/2, i + 1/2], \\ 2[r - (i + 1)]^2, & r \in [i + 1/2, i + 1], \\ 0 & \text{otherwise}. \end{cases}\end{split}\]

produce a smooth piecewise quadratic approximation. While resembling BASEX basis functions in shape, these are localized within ±1 pixel, sum to unity (although they produce oscillations on slopes; see the check after this list), and their projections are much faster to compute.

degree = 3:

Combinations of cubic Hermite basis functions produce a cubic-spline approximation (with endpoint derivatives clamped to zero for 2D smoothness). They offer the most accurate representation for sufficiently smooth distributions, but produce ringing artifacts around sharp features, which can result in negative interpolated intensities even for non-negative data points.

(The projections of all these basis functions are calculated as described in Polynomials.)
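The following sketch (a numerical illustration, not PyAbel code) verifies two of the properties mentioned above: the degree-1 expansion reproduces ordinary linear interpolation, and the degree-2 basis functions sum to unity between grid points:

import numpy as np

def f_tri(i, r):
    """Degree-1 (triangular) basis function centered at pixel i."""
    return np.clip(1 - np.abs(r - i), 0, None)

def f_quad(i, r):
    """Degree-2 (piecewise quadratic) basis function centered at pixel i."""
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    left = (r >= i - 1) & (r < i - 0.5)
    mid = np.abs(r - i) <= 0.5
    right = (r > i + 0.5) & (r <= i + 1)
    out[left] = 2 * (r[left] - (i - 1)) ** 2
    out[mid] = 1 - 2 * (r[mid] - i) ** 2
    out[right] = 2 * (r[right] - (i + 1)) ** 2
    return out

x = np.array([0.0, 2.0, 1.0, 3.0, 0.5])   # sample values at r = 0, 1, ..., 4
r = np.linspace(0, 4, 401)

# degree 1: the basis expansion is exactly linear interpolation
assert np.allclose(sum(x[i] * f_tri(i, r) for i in range(5)),
                   np.interp(r, np.arange(5), x))

# degree 2: the basis functions sum to unity between grid points
r_in = np.linspace(1, 3, 201)             # away from the endpoints
assert np.allclose(sum(f_quad(i, r_in) for i in range(5)), 1)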

In practice, however, the choice of the basis set has a negligible effect on the transform results, as can be seen in the example below. Nevertheless, cubic splines might be useful for transforming smooth functions, for which they yield very accurate results.

Regularization methods

\(L_2\) norm

In addition to the original derivative (difference) Tikhonov regularization, PyAbel also implements the usual \(L_2\) regularization, as in BASEX, with the identity matrix \(\mathbf I\) used instead of \(\mathbf L\) in (3). The results are practically identical to those of the BASEX method, especially with degree = 2, except that the basis set is computed much faster.

Non-negativity

A more substantial addition is the implementation of the non-negativity regularization. Namely, instead of solving the unconstrained quadratic problem (2), non-negativity constraints are imposed on the original problem:

\[\tilde{\mathbf x} = \operatorname{arg\,min}_{\mathbf x \geqslant 0} \|\mathbf A \mathbf x - \mathbf b\|^2.\]

This non-negative least-squares solution yields the distribution without negative intensities that reproduces the input data as well as possible. In situations where the distribution must be non-negative, this is the best physically meaningful solution.
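For a single projection row, this constrained problem can be solved, for example, with SciPy's non-negative least-squares routine; the following sketch reuses the onion_peeling_matrix helper from above (PyAbel's actual implementation may differ):

import numpy as np
from scipy.optimize import nnls

n = 200
A = onion_peeling_matrix(n)               # forward matrix, as sketched earlier
rng = np.random.default_rng(1)
x_true = np.exp(-0.5 * ((np.arange(n) - 80) / 10) ** 2)
b = A @ x_true + rng.normal(0, 0.5, n)    # noisy projection row

x_nn, rnorm = nnls(A, b)   # arg min ||A x - b||  subject to  x >= 0
assert (x_nn >= 0).all()   # no negative intensities by construction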

The noise-filtering properties of this method come from the fact that noise in the inverse Abel transform is strongly oscillating, so if negative-going spikes are forbidden in the solution, the positive-going spikes must also be reduced in order to preserve the overall intensity. Thus the method is most beneficial for very noisy images, for which linear methods produce a large amount of noise reaching negative values. For clean images of non-negative distributions, the constrained solution exactly matches the solution of the original problem (1). And unlike Tikhonov regularization, it does not blur legitimate sharp features in any case.

Notice that constrained quadratic minimization remains a non-linear problem. This has several important implications. First, it is much more computationally demanding, so transforming a megapixel image takes many seconds instead of several milliseconds (and the time depends on the image itself). Second, the average of transformed images is generally not equal to the transform of the averaged image. It is thus recommended to perform as much averaging as possible (image symmetrization and summation of multiple images, if applicable) before applying the transform. In particular, using symmetry_axis=(0, 1) in abel.transform.Transform requires transforming only one quadrant, which is 4 times faster than transforming the whole image. Third, the method is only asymptotically unbiased: for sufficiently noisy data, it can systematically shift or blur the intensities towards the symmetry axis. Thus, while this regularization can be very helpful in revealing the structure in raw images, it is not recommended when further processing (like model fitting) is involved. In particular, for extraction of velocity and anisotropy distributions from velocity-map images, the rBasex method, possibly with its “positive” regularization, is more appropriate.

When to use it

With its default parameters (degree 0, zero regularization parameter), this method is identical to the “onion peeling” method, but it can also be used for the forward transform.

The original (derivative/difference) Tikhonov regularization with non-zero regularization parameter helps to remove high-frequency oscillations from the transformed image. However, an excessively large regularization parameter can lead to oversmoothing and broadening of the useful signal and under/overshoots around sharp features. As recommended by Daun et al., by systematically adjusting the heuristic regularization parameter, the analyst can find a solution that represents an acceptable compromise between accuracy and regularity.
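One possible way to carry out such an adjustment (an illustrative sketch, using the interface described under “How to use it” below) is to scan several parameter values and compare the data fidelity and smoothness of each result:

import numpy as np
import abel

for reg in [0, 1, 10, 100, 1000]:
    recon = abel.Transform(myImage, method='daun',
                           transform_options=dict(reg=reg)).transform
    reproj = abel.Transform(recon, method='daun',
                            direction='forward').transform
    fidelity = np.linalg.norm(reproj - myImage)         # ~ ||A x - b||
    roughness = np.linalg.norm(np.diff(recon, axis=1))  # ~ ||L x||
    print(reg, fidelity, roughness)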

The \(L_2\) Tikhonov regularization approach is equivalent to that in the BASEX method and has the same use cases, advantages, and disadvantages.

The non-negativity regularization is recommended for visual inspection of very noisy images and images with sharp features without a broad background. However, due to its slowness, it cannot be used for real-time data processing.

How to use it

The inverse Abel transform of a full image can be done with the abel.Transform class:

abel.Transform(myImage, method='daun').transform

For the forward Abel transform, simply add direction='forward':

abel.Transform(myImage, method='daun', direction='forward').transform

Additional parameters can be passed through the transform_options parameter. For example, to use the original regularization method with the regularization parameter set to 100:

abel.Transform(myImage, method='daun',
               transform_options=dict(reg=100)).transform

The \(L_2\) regularization can be applied using

abel.Transform(myImage, method='daun',
               transform_options=dict(reg=('L2', 100))).transform

And the non-negative solution is obtained by

abel.Transform(myImage, method='daun',
               transform_options=dict(reg='nonneg')).transform

In this case, it is recommended to use symmetrization:

abel.Transform(myImage, method='daun',
               symmetry_axis=0,  # or symmetry_axis=(0, 1) if applicable
               transform_options=dict(reg='nonneg')).transform

unless independent inspection of all image parts is desired.

The algorithm can also be accessed directly (to transform a right-side half-image or properly oriented quadrants) through the abel.daun.daun_transform() function.
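For example, a right-side half-image could be transformed as follows (the keyword names here are assumed to mirror the transform_options shown above; consult the function documentation for the authoritative signature):

import abel

half = myImage[:, myImage.shape[1] // 2:]   # right half, axis in the first column

# keyword names are assumed to mirror the transform_options shown above
recon_half = abel.daun.daun_transform(half, reg=('L2', 100), degree=2,
                                      direction='inverse')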

Examples

Performance of various regularization methods for the Dribinski sample image with added Poissonian noise:

../_images/example_daun_reg.svg


The degree of basis-set polynomials has almost no effect on the results (shown here for reg=0):

../_images/example_daun_degree.svg


Citation

Note

If you use any non-default options (degree, regularization), please cite not only the article by Daun et al. and the PyAbel article, but also this PyAbel release (DOI: 10.5281/zenodo.7438595), because these capabilities are not present in the original work by Daun et al. and were added to PyAbel after the RSI publication.