Hansen–Law

Introduction

The Hansen and Law transform [1, 2] is a fast (linear time) Abel transform.

In their words, Hansen and Law [1] present:

“… new family of algorithms, principally for Abel inversion, that are recursive and hence computationally efficient. The methods are based on a linear, space-variant, state-variable model of the Abel transform. The model is the basis for deterministic algorithms.”

and [2]:

“… Abel transform, which maps an axisymmetric two-dimensional function into a line integral projection.”

The algorithm is efficient and is one of the few methods that provides both the forward and inverse Abel transforms.

How it works

[Figure: projection geometry (Fig. 1 of [1])]

For an axisymmetric source image, the projection \(g(R)\) of the source distribution \(f(r)\) is given by the forward Abel transform:

\[g(R) = 2 \int_R^\infty \frac{f(r) r}{\sqrt{r^2 - R^2}} dr\]

The corresponding inverse Abel transform is:

\[f(r) = -\frac{1}{\pi} \int_r^\infty \frac{g^\prime(R)}{\sqrt{R^2 - r^2}} dR\]
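For example, a Gaussian source maps to a Gaussian projection, a convenient analytic pair for testing either direction:

\[f(r) = e^{-r^2} \quad \Longleftrightarrow \quad g(R) = \sqrt{\pi}\, e^{-R^2}\]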

The Hansen and Law method makes a coordinate transformation to model the Abel transform as a set of linear differential equations, with the driving function being either the source image \(f(r)\), for the forward transform, or the gradient of the projection image \(g^\prime(R)\), for the inverse transform. More detail is given in The Math section below.

[Figure: recursion, pixel value obtained from the adjacent outer pixel]

The forward transform is:

\[ \begin{align}\begin{aligned}x_{n-1} &= \Phi_n x_n + B_{0n} f_n + B_{1n} f_{n-1}\\g_n &= \tilde{C} x_n,\end{aligned}\end{align} \]

where \(B_{1n}=0\) for the zero-order hold approximation.

The inverse transform is:

\[ \begin{align}\begin{aligned}x_{n-1} &= \Phi_n x_n + B_{0n} g^\prime_n + B_{1n} g^\prime_{n-1}\\f_n &= \tilde{C} x_n\end{aligned}\end{align} \]

Note that the only difference between the forward and inverse algorithms is the exchange of the driving function \(f_n\) with \(g^\prime_n\), and of the output \(g_n\) with \(f_n\).

Details on the evaluation of \(\Phi_n\), \(B_{0n}\), and \(B_{1n}\) are given in The Math section below.

The algorithm iterates along each individual row of the image, starting at the outer edge and ending at the center line, as in the sketch below. Since all rows of an image can be processed simultaneously, the operation is easily vectorized and is therefore numerically efficient.
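For concreteness, here is a minimal single-row NumPy sketch of the forward recursion with the zero-order hold, using the nine \((\lambda_k, h_k)\) constants published by Hansen and Law [1]. The function name forward_abel_row is illustrative; this follows the sign conventions of The Math section below and is not PyAbel's optimized, row-vectorized implementation:

import numpy as np

# Nine-term exponential fit to the impulse response
# h(t) = [1 - exp(-2t)]^(-1/2), from Hansen & Law [1].
LAM = np.array([0.0, -2.1, -6.2, -22.4, -92.5,
                -414.5, -1889.4, -8990.9, -47391.1])
H = np.array([0.318, 0.19, 0.35, 0.82, 1.8, 3.9, 8.3, 19.6, 48.3])

def forward_abel_row(f, dr=1.0):
    """Zero-order-hold forward Abel transform of one image row.

    f: 1-D array with f[0] on the symmetry axis, f[-1] at the outer edge.
    """
    N = f.size
    x = np.zeros_like(LAM)      # state vector; zero outside the image
    g = np.zeros(N)
    # iterate inward: pixel n (r0 = n*dr) -> pixel n-1 (r = (n-1)*dr)
    for n in range(N - 1, 1, -1):
        ratio = n / (n - 1)                     # r0/r across one pixel
        phi = ratio ** LAM                      # diagonal of Phi_n
        b0 = (-2.0 * H * (n - 1) * dr / (LAM + 1.0)
              * (1.0 - ratio ** (LAM + 1.0)))   # B_0n, zero-order hold
        x = phi * x + b0 * f[n]                 # x_{n-1} = Phi_n x_n + B_0n f_n
        g[n - 1] = x.sum()                      # g_n = C~ x_n, with C~ = [1, ..., 1]
    g[0] = g[1]   # the r = 0 pixel needs special handling; copy as a crude fix
    return g

As a rough check, applying this to a uniform disk, f(r) = 1 for r < a, should approach the analytic projection \(g(R) = 2\sqrt{a^2 - R^2}\) as the sampling becomes finer.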

When to use it

The Hansen–Law algorithm offers one of the fastest, most robust methods for both the forward and inverse transforms. It requires reasonably fine sampling of the data to provide exact agreement with the analytical result, but otherwise this method is a hidden gem of the field.

How to use it

To complete the forward or inverse transform of a full image with the hansenlaw method, simply use the abel.Transform class:

import abel
forward = abel.Transform(myImage, method='hansenlaw', direction='forward').transform
inverse = abel.Transform(myImage, method='hansenlaw', direction='inverse').transform

If you would like to access the Hansen-Law algorithm directly (to transform a right-side half-image), you can use abel.hansenlaw.hansenlaw_transform().
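For example, a minimal sketch: the array here is random noise standing in for a right-side half-image whose left-most column lies on the symmetry axis, and only the arguments shown are assumed:

import numpy as np
import abel

half_image = np.random.random((501, 251))   # rows x columns; axis at column 0
recon = abel.hansenlaw.hansenlaw_transform(half_image, direction='inverse')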

Tips

hansenlaw tends to perform better with large images (\(n > 1001\) pixels wide). For smaller images, the angular_integration (speed) profile may look better if sub-pixel sampling is used via:

angular_integration_options=dict(dr=0.5)
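For example, passed through abel.Transform together with its angular_integration=True option, which also computes the speed profile (a sketch; the result is then available as the angular_integration attribute):

speeds = abel.Transform(myImage, method='hansenlaw', direction='inverse',
                        angular_integration=True,
                        angular_integration_options=dict(dr=0.5)).angular_integration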

Example

[Figure: example_O2_PES_PAD1.svg]


Historical Note

The Hansen and Law algorithm was almost lost to the scientific community. It was rediscovered by Jason Gascooke (Flinders University, South Australia) for use in his velocity-map image analysis, and written up in his PhD thesis [3].

Eric Hansen provided guidance, algebra, and explanations to aid the implementation of his first-order hold algorithm, described in Ref. [2] (April 2018).

The Math

The resulting state equations are, for the forward transform:

\[x^\prime(r) = -\frac{1}{r} \tilde{A} x(r) - 2 \tilde{B} f(r),\]

with inverse:

\[x^\prime(R) = -\frac{1}{R} \tilde{A} x(R) + \frac{1}{\pi R} \tilde{B} g^\prime(R),\]

where \([\tilde{A}, \tilde{B}, \tilde{C}]\) realize the impulse response, \(\tilde{h}(t) = \tilde{C} \exp{(\tilde{A} t)} \tilde{B} = \sum_k h_k e^{\lambda_k t} \approx \left[1-e^{-2t}\right]^{-\frac{1}{2}}\), with:

\[ \begin{align}\begin{aligned}\tilde{A} = \rm{diag}[\lambda_1, \lambda_2, ..., \lambda_K]\\\tilde{B} = [h_1, h_2, ..., h_K]^T\\\tilde{C} = [1, 1, ..., 1]\end{aligned}\end{align} \]

The differential equations have the transform solutions, forward:

\[x(r) = \Phi(r, r_0) x(r_0) - 2 \int_{r_0}^{r} \Phi(r, \epsilon) \tilde{B} f(\epsilon) d\epsilon\]

and, inverse:

\[x(r) = \Phi(r, r_0) x(r_0) + \frac{1}{\pi} \int_{r_0}^{r} \frac{\Phi(r, \epsilon)}{\epsilon} \tilde{B} g^\prime(\epsilon) d\epsilon,\]

with \(\Phi(r, r_0) = \rm{diag}[(\frac{r_0}{r})^{\lambda_1}, ..., (\frac{r_0}{r})^{\lambda_K}] \equiv \rm{diag}[(\frac{n}{n-1})^{\lambda_1}, ..., (\frac{n}{n-1})^{\lambda_K}]\), where the integration limits \((r, r_0)\) span one grid interval (one pixel), so \(r_0 = n\Delta\) and \(r = (n-1)\Delta\), with \(\Delta\) the pixel width.

To evaluate the superposition integral (the driven part of the solution), the driving function \(f(\epsilon)\) or \(g^\prime(\epsilon)\) is assumed to be either constant across each grid interval, the zero-order hold approximation, \(f(\epsilon) \approx f(r_0)\), or linear across it, the first-order hold approximation, \(f(\epsilon) \approx p + q\epsilon\) with \(p = \left(r_0 f(r) - r f(r_0)\right)/\Delta\) and \(q = \left(f(r_0) - f(r)\right)/\Delta\). The integrand then separates into a sum over terms weighted by \(h_k\),

\[\sum_k h_k f(r_0) \int_{r_0}^{r} \Phi_k(r, \epsilon) d\epsilon\]

with each integral:

\[\int_{r_0}^{r} \left(\frac{\epsilon}{r}\right)^{\lambda_k} d\epsilon = \frac{r}{\lambda_k + 1}\left[ 1 - \left(\frac{r_0}{r}\right)^{\lambda_k + 1}\right] = \frac{(n-1)\Delta}{\lambda_k + a} \left[ 1 - \left(\frac{n}{n-1}\right)^{\lambda_k+a} \right],\]

where the right-most side introduces a parameter \(a\) that generalizes the exponent: \(a = 1\) for the zero-order hold forward transform shown. For the inverse transform, the additional factor \(\frac{1}{\pi\epsilon}\) in the integrand reduces the power of \(\epsilon\) by one, while for the first-order hold approximation the linear \(\epsilon\) term increases it by one (so \(a = 2\) for the forward, \(a = 1\) for the inverse transform).
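Assembling these pieces for the forward transform with the zero-order hold (\(a = 1\)), the recursion coefficients quoted in the How it works section take the explicit form (using the sign conventions above):

\[\left[\Phi_n\right]_{kk} = \left(\frac{n}{n-1}\right)^{\lambda_k}, \qquad \left[B_{0n}\right]_k = -2 h_k \frac{(n-1)\Delta}{\lambda_k + 1}\left[ 1 - \left(\frac{n}{n-1}\right)^{\lambda_k + 1}\right]\]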