Polynomials
Implemented in abel.tools.polynomial.
Abel transform
The Abel transform of a polynomial

\[p(r) = \sum_{k=0}^{K} c_k r^k\]

defined on a domain \([r_\text{min}, r_\text{max}]\) (and zero elsewhere) is calculated as

\[P(x) = 2 \int p(r) \,dy,\]

where \(r = \sqrt{x^2 + y^2}\), and the Abel integral is taken over the domain where \(r_\text{min} \leqslant r \leqslant r_\text{max}\). Namely,

\[P(x) = 2 \sum_{k=0}^{K} c_k \int r^k \,dy \,\Big|_{y_\text{min}}^{y_\text{max}}, \qquad
y_\text{min,max} = \sqrt{\max\big(r_\text{min,max}^2 - x^2,\, 0\big)}.\]
These integrals for any power \(k\) are easily obtained from the recursive relation

\[\int r^k \,dy = \frac{1}{k + 1} \left( y r^k + k x^2 \int r^{k-2} \,dy \right).\]
For even \(k\) this yields a polynomial in \(y\) and powers of \(x\) and \(r\):

\[\int r^k \,dy = y \sum_{m=0}^{k/2} C_m r^{k-2m} x^{2m}, \qquad
C_0 = \frac{1}{k + 1}, \quad C_m = \frac{k - 2m + 2}{k - 2m + 1} C_{m-1}.\]
For odd \(k\), the recursion terminates at

\[\int r^{-1} \,dy = \ln(y + r),\]

so

\[\int r^k \,dy = y \sum_{m=0}^{(k-1)/2} C_m r^{k-2m} x^{2m} + C_{(k-1)/2}\, x^{k+1} \ln(y + r),\]

with the same expressions for \(C_m\). For example, here are explicit formulas for several low degrees:
\(k\)   \(\int r^k \,dy\)
0   \(y\)
1   \(\frac12 r y + \frac12 x^2 \ln(y + r)\)
2   \(\left(\frac13 r^2 + \frac23 x^2\right) y\)
3   \(\left(\frac14 r^3 + \frac38 r x^2\right) y + \frac38 x^4 \ln(y + r)\)
4   \(\left(\frac15 r^4 + \frac4{15} r^2 x^2 + \frac8{15} x^4\right) y\)
5   \(\left(\frac16 r^5 + \frac5{24} r^3 x^2 + \frac5{16} r x^4\right) y + \frac5{16} x^6 \ln(y + r)\)
…   …
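These antiderivatives are straightforward to check numerically. The following sketch (plain NumPy, independent of the PyAbel API) verifies the \(k = 2\) and \(k = 3\) rows of the table against quadrature:

```python
import numpy as np

# Verify the k = 2 and k = 3 table entries by comparing the antiderivative
# difference at the interval endpoints with numerical quadrature of r^k dy,
# where r = sqrt(x^2 + y^2).
x = 0.7
y = np.linspace(1.0, 2.0, 100001)
r = np.sqrt(x**2 + y**2)

# k = 2: antiderivative (r^2/3 + 2 x^2/3) y
F2 = (r**2 / 3 + 2 * x**2 / 3) * y
assert abs(np.trapz(r**2, y) - (F2[-1] - F2[0])) < 1e-8

# k = 3: antiderivative (r^3/4 + 3 r x^2/8) y + (3/8) x^4 ln(y + r)
F3 = (r**3 / 4 + 3 * r * x**2 / 8) * y + 3 / 8 * x**4 * np.log(y + r)
assert abs(np.trapz(r**3, y) - (F3[-1] - F3[0])) < 1e-8
```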
The sums over \(m\) are computed using Horner’s method in \(x\), which requires only \(x^2\), \(y\) (see above), \(\ln(y + r)\) (for polynomials of odd degree), and powers of \(r\) up to \(K\).
The sum of the integrals, however, is computed by direct addition. In particular, this means that an attempt to use this method for high-degree polynomials (for example, approximating some function with a 100-degree Taylor polynomial) will most likely fail due to loss of significance in floating-point operations. Splines are a much better choice in this respect, although at sufficiently large \(r\) and \(x\) (≳10 000) these numerical problems might become significant even for cubic polynomials.
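This failure mode is easy to reproduce outside PyAbel. A minimal illustration with the 100-degree Taylor polynomial of a Gaussian, summed term by term in double precision (the choice \(r = 6\) is arbitrary):

```python
import numpy as np
from math import factorial

# Degree-100 Taylor polynomial of exp(-r^2/2), evaluated at r = 6 by direct
# term-by-term summation. The individual terms grow to ~1e6-1e7 while the
# true value is ~1e-8, so the huge alternating terms must cancel almost
# completely; truncation and floating-point cancellation together leave
# essentially no correct digits.
r = 6.0
terms = [(-r**2 / 2)**n / factorial(n) for n in range(51)]  # powers r^0 .. r^100
taylor = sum(terms)
exact = np.exp(-r**2 / 2)

assert max(abs(t) for t in terms) > 1e6   # huge intermediate terms
assert abs(taylor - exact) > 100 * exact  # the direct sum is useless here
```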
Affine transformation
It is sometimes convenient to define a polynomial in some canonical form and adapt it to the particular case by an affine transformation (translation and scaling) of the independent variable, as in the example below.
The scaling around \(r = 0\) is

\[r \to r / s,\]

which applies an \(s\)-fold stretching to the function. The coefficients of the transformed polynomial are thus

\[\tilde c_k = c_k / s^k.\]
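A quick sanity check of this coefficient rule (plain NumPy, not the PyAbel API):

```python
import numpy as np

# Stretching p(r) -> p(r / s) by rescaling the coefficients: c_k -> c_k / s**k.
c = np.array([1.0, 2.0, -3.0])     # p(r) = 1 + 2 r - 3 r^2
s = 2.5
cs = c / s**np.arange(len(c))      # coefficients of the stretched polynomial

r = 1.7
p_stretched = sum(ck * r**k for k, ck in enumerate(cs))
p_original = sum(ck * (r / s)**k for k, ck in enumerate(c))
assert np.isclose(p_stretched, p_original)
```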
The translation is

\[r \to r - r_0,\]

which shifts the origin to \(r_0\). The coefficients of the transformed polynomial can be obtained by expanding all powers of the binomial \(r - r_0\) and collecting the powers of \(r\). This is implemented in a matrix form

\[\tilde{\mathbf{c}} = \mathrm{M} \mathbf{c},\]

where the coefficients are represented by a column vector \(\mathbf{c} = (c_0, c_1, \dots, c_K)^\mathrm{T}\), and the matrix \(\mathrm{M}\) is the Hadamard product of the upper-triangular Pascal matrix and the Toeplitz matrix of \((-r_0)^k\):

\[\mathrm{M}_{km} = \binom{m}{k} (-r_0)^{m-k} \quad (m \geqslant k; \text{ zero otherwise}).\]
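The matrix form is easy to reproduce directly. The helper below is hypothetical (plain NumPy, not part of PyAbel) and just demonstrates that the shifted coefficients reproduce \(p(r - r_0)\):

```python
import numpy as np
from math import comb

# Hypothetical helper (not the PyAbel API): upper-triangular shift matrix
# with entries M[k, m] = C(m, k) * (-r0)**(m - k).
def shift_matrix(r0, K):
    M = np.zeros((K + 1, K + 1))
    for m in range(K + 1):
        for k in range(m + 1):
            M[k, m] = comb(m, k) * (-r0)**(m - k)
    return M

c = np.array([1.0, -2.0, 0.5, 3.0])  # p(r) = 1 - 2 r + 0.5 r^2 + 3 r^3
r0 = 1.5
ct = shift_matrix(r0, 3) @ c         # coefficients of p(r - r0)

r = 2.7
p_shifted = sum(ck * r**k for k, ck in enumerate(ct))
p_direct = sum(ck * (r - r0)**k for k, ck in enumerate(c))
assert np.isclose(p_shifted, p_direct)
```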
Example
Consider a two-sided step function with soft edges:
The edges can be represented by the cubic smoothstep function

\[S(r) = 3 r^2 - 2 r^3,\]
which smoothly rises from \(0\) at \(r = 0\) to \(1\) at \(r = 1\). The left edge requires stretching it by \(2w\) and shifting the origin to \(r_\text{min} - w\). The right edge is \(S(r)\) stretched by \(-2w\) (the negative sign mirrors it horizontally) and shifted to \(r_\text{max} + w\). The shelf is just a constant (zeroth-degree polynomial). It can be set to \(1\), and then the desired function with the amplitude \(A\) is obtained by multiplying the resulting piecewise polynomial by \(A\):
import matplotlib.pyplot as plt
import numpy as np
from abel.tools.polynomial import PiecewisePolynomial as PP
r = np.arange(51.0)
rmin = 10
rmax = 40
w = 5
A = 3
c = [0, 0, 3, -2]
smoothstep = A * PP(r, [(rmin - w, rmin + w, c, rmin - w, 2 * w),
(rmin + w, rmax - w, [1]),
(rmax - w, rmax + w, c, rmax + w, -2 * w)])
fig, axs = plt.subplots(2, 1)
axs[0].set_title('func')
axs[0].set_xlabel('$r$')
axs[0].plot(r, smoothstep.func)
axs[1].set_title('abel')
axs[1].set_xlabel('$x$')
axs[1].plot(r, smoothstep.abel)
plt.tight_layout()
plt.show()
Polynomial and PiecewisePolynomial are also accessible through the abel.tools.analytical module. Amplitude scaling by multiplying the “function” (actually a Python object) is not supported there, but it can be achieved simply by scaling all the coefficients:
from abel.tools.analytical import PiecewisePolynomial as PP
c = A * np.array([0, 0, 3, -2])
smoothstep = PP(..., [(rmin - w, rmin + w, c, rmin - w, 2 * w),
(rmin + w, rmax - w, [A]),
(rmax - w, rmax + w, c, rmax + w, -2 * w)], ...)
In spherical coordinates
Implemented as SPolynomial.
Axially symmetric bivariate polynomials in spherical coordinates have the general form

\[p(\rho, \theta') = \sum_{m=0}^{M} \sum_{n=0}^{N} c_{mn} \rho^n \cos^m \theta'\]

(see rBasex: mathematical details for definitions of the coordinate systems).
Abel transform
The forward Abel transform of this function defined on a radial domain \([\rho_\text{min}, \rho_\text{max}]\) (and zero elsewhere) is calculated as

\[P(r, \theta) = 2 \int p(\rho, \theta') \,dz,\]

where

\[\rho = \sqrt{r^2 + z^2}, \qquad \rho \cos\theta' = r \cos\theta,\]

and the Abel integral is taken over the domain where \(\rho_\text{min} \leqslant \rho \leqslant \rho_\text{max}\). That is, for each term we have

\[2 c_{mn} \int \rho^n \cos^m\theta' \,dz =
2 c_{mn} (r \cos\theta)^m \int \rho^{n-m} \,dz =
2 c_{mn} (r \cos\theta)^m \big[F_{n-m}(r, z)\big]_{z_\text{min}}^{z_\text{max}},\]

where

\[z_\text{min,max} = \sqrt{\max\big(\rho_\text{min,max}^2 - r^2,\, 0\big)}.\]
The antiderivatives

\[F_k(r, z) = \int \rho^k \,dz\]

are given in rBasex: mathematical details, with the only difference that besides the recurrence relation

\[F_k = \frac{1}{k + 1} \left( z \rho^k + k r^2 F_{k-2} \right)\]

for calculating the terms with positive \(k = n - m\), the reverse recurrence relation

\[F_k = \frac{(k + 3) F_{k+2} - z \rho^{k+2}}{(k + 2)\, r^2}\]

is also used for negative \(k\), required for the terms with \(m > n\).
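The recurrence can be checked numerically as a definite integral over \(z\); this sketch (plain NumPy) uses \(k = 2\), for which \(F_0 = \int dz = z\):

```python
import numpy as np

# Numerical check of (k + 1) F_k = z rho^k + k r^2 F_(k-2) for
# F_k = integral of rho^k dz, rho = sqrt(r^2 + z^2), taken as a
# definite integral over z in [0, Z] with k = 2.
r, Z = 1.3, 2.0
z = np.linspace(0.0, Z, 200001)
rho = np.sqrt(r**2 + z**2)

lhs = 3 * np.trapz(rho**2, z)            # (k + 1) F_2
rhs = Z * (r**2 + Z**2) + 2 * r**2 * Z   # boundary term z rho^2 plus k r^2 F_0
assert abs(lhs - rhs) < 1e-7
```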
The overall Abel transform thus has the form

\[P(r, \theta) = 2 \sum_{m=0}^{M} \sum_{n=0}^{N} c_{mn} (r \cos\theta)^m
\big[F_{n-m}(r, z_\text{max}) - F_{n-m}(r, z_\text{min})\big]\]

and is calculated using Horner’s method in \(r\) and \(\cos\theta\) after precomputing the \(F_{n-m}(r, z_\text{min,max})\) pairs for each needed value of \(n - m\) (there are at most \(M + N + 1\) of them, if all \(M \times N\) coefficients \(c_{mn} \ne 0\)).
Notice that these calculations are relatively expensive, since they are done for all pixels with \(\rho \leqslant \rho_\text{max}\), each of them involving summation and multiplication of up to \(2MN\) terms in the above expression and evaluation of the transcendental functions present in \(F_k(r, z)\). Moreover, the numerical problems for high-degree polynomials can thus be even more severe than for univariate polynomials.
Approximate Gaussian
Implemented as ApproxGaussian.
The Gaussian function

\[A \exp\left(-\frac{(r - r_0)^2}{2 \sigma^2}\right)\]

is useful for representing peaks in simulated data but does not have an analytical Abel transform unless \(r_0 = 0\). However, it can be approximated by piecewise polynomials to any accuracy, and these polynomials can be Abel-transformed analytically, as shown above, thus providing an arbitrarily accurate approximation to the Abel transform of the initial Gaussian function.
In practice, it is sufficient to find the approximating piecewise polynomial for the Gaussian function \(g(r) = \exp(-r^2/2)\) with unit amplitude and unit standard deviation, and the polynomial coefficients can then be scaled as described above to account for any \(A\), \(r_0\) and \(\sigma\).
The goal is therefore to find \(f(r)\) such that \(|f(r) - g(r)| \leqslant \varepsilon\) for a given tolerance \(\varepsilon\). The approximation implemented here uses a piecewise quadratic polynomial:

\[f(r) = f_n(r) = a_n + b_n r + c_n r^2, \quad r \in [R_n, R_{n+1}],\]

where the domain is split into \(N\) intervals \([R_n, R_{n+1}]\), \(n = 0, \dots, N - 1\). The strategy used for minimizing the number of intervals is to find the splitting points such that

\[f_n(r) = g(r) \text{ at } r = R_n, R_{n+\frac12}, R_{n+1}, \qquad
\max_{r \in [R_n, R_{n+1}]} \big|f_n(r) - g(r)\big| = \varepsilon,\]

where \(R_{n+\frac12} = (R_n + R_{n+1})/2\);
in other words, each parabolic segment matches the \(g(r)\) values at the endpoints and midpoint of its interval, and its maximal deviation from \(g(r)\) equals \(\varepsilon\). The process starts from \(R_0 = \sqrt{-2 \ln(\varepsilon/2)}\), such that \(g(R_0) = \varepsilon/2\), but setting \(f_0(R_0) = 0\) for continuity. Then subsequent points \(R_1, R_2, \dots\) are found by solving \(\max_{r \in [R_n, R_{n+1}]} \big|f_n(r) - g(r)\big| \approx \varepsilon\) for \(R_{n+1}\) numerically, using the following approximation obtained from the 3rd-order term of the \(g(r)\) Taylor series (by construction, \(f_n(r)\) reproduces the lower-order terms exactly, and the magnitudes of higher-order terms are much smaller):

\[\max_{r \in [R_n, R_{n+1}]} \big|f_n(r) - g(r)\big| \approx
\frac{\big|g'''(R_n)\big|}{72 \sqrt{3}}\, |R_{n+1} - R_n|^3, \qquad
g'''(r) = r \left(3 - r^2\right) g(r).\]
This process is repeated until \(r = 0\) is reached, after which the found splitting is symmetrically extended to \(-R_0 \leqslant r < 0\), and the polynomial coefficients for each segment are trivially calculated from the equations \(f_n(r) = g(r)\) for \(r = R_n, R_{n+\frac12}, R_{n+1}\).
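A single segment of this construction can be sketched in plain NumPy; the error estimate below is the standard third-derivative bound for quadratic interpolation at equally spaced nodes, and the interval \([1.0, 1.4]\) is an arbitrary illustration, not an actual ApproxGaussian splitting:

```python
import numpy as np

# One parabolic segment through the endpoints and midpoint of [a, b] for
# g(r) = exp(-r^2/2), compared with the error estimate
# |g'''| (b - a)^3 / (72 sqrt(3)) used to choose the splitting points.
def g(r):
    return np.exp(-r**2 / 2)

a, b = 1.0, 1.4
nodes = np.array([a, (a + b) / 2, b])
coeffs = np.polyfit(nodes, g(nodes), 2)  # quadratic through the three points

r = np.linspace(a, b, 10001)
dev = np.max(np.abs(np.polyval(coeffs, r) - g(r)))  # actual max deviation

mid = nodes[1]
g3 = abs(mid * (3 - mid**2) * g(mid))    # g'''(r) = r (3 - r^2) g(r)
est = g3 * (b - a)**3 / (72 * np.sqrt(3))
assert 0.5 * est < dev < 2 * est         # estimate is good to tens of percent
```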
As an example, here is the outcome for the default approximation accuracy ≲0.5 %, resulting in just 7 segments:
import numpy as np
from abel.tools.polynomial import ApproxGaussian, PiecewisePolynomial
r = np.arange(201)
r0 = 100
sigma = 20
# actual Gaussian function
gauss = np.exp(-((r - r0) / sigma)**2 / 2)
# approximation with default tolerance (~0.5%)
approx = PiecewisePolynomial(r, ApproxGaussian().scaled(1, r0, sigma))
The Abel transform of this approximation is even more accurate, having the maximal relative deviation ~0.35/170 ≈ 0.2 %:
A practical example of using ApproxGaussian with PiecewiseSPolynomial can be found in the SampleImage source code.