Non-iterative computation of Gauss-Jacobi quadrature

Asymptotic approximations to the zeros of Jacobi polynomials are given, with methods to obtain the coefficients in the expansions. These approximations can be used as standalone methods for the non-iterative computation of the nodes of Gauss--Jacobi quadratures of high degree ($n\ge 100$). We also provide asymptotic approximations for functions related to the first-order derivative of Jacobi polynomials, which are used for computing the weights of the Gauss--Jacobi quadrature. The performance of the asymptotic approximations is illustrated with numerical examples, and it is shown that nearly double precision relative accuracy is obtained for both the nodes and the weights when $n\ge 100$ and $-1<\alpha, \beta\le 5$. For smaller degrees the approximations are also useful, as they provide $10^{-12}$ relative accuracy for the nodes when $n\ge 20$, and just one Newton step would be sufficient to guarantee double precision accuracy in those cases.

We use the asymptotic representations to derive asymptotic expansions of the zeros of $P_n^{(\alpha,\beta)}(x)$ and the corresponding weights, which are needed in Gauss-Jacobi quadrature. In §4 we give details of the numerical performance of these expansions.
In a recent paper [10] we have considered similar methods for Gauss-Hermite and Gauss-Laguerre quadratures.
The present paper extends the results of [3], where the case of the Legendre polynomials (α = β = 0) was considered, and those of [12], where an iterative Newton method for the Jacobi zeros is used, with initial values obtained from asymptotic approximations with a limited number of terms.
2. Expansions in terms of elementary functions. We give details of an asymptotic expansion of the Jacobi polynomial $P_n^{(\alpha,\beta)}(x)$ that is valid inside the interval $[-1+\delta, 1-\delta]$, with δ a fixed small positive number. We use this expansion for the zeros of $P_n^{(\alpha,\beta)}(x)$ around the origin. Because of the relation $P_n^{(\alpha,\beta)}(x) = (-1)^n P_n^{(\beta,\alpha)}(-x)$ we can concentrate on the positive zeros.

2.1. An expansion derived by Hahn. In [13, §18.15(i)] an expansion is given, derived in [11], which has the nice property that the coefficients are known in explicit form. The expansion is described in terms of several formulas; for large values of n we have $P_n^{(\alpha,\beta)}(\cos\theta) = 2^{2n+\alpha+\beta+1}\,B(n+\alpha+1, n+\beta+1)\cdots$. This expansion has been used in a paper by Gatteschi and Pittaluga [7] to derive an approximation of the mid-range zeros, and this approximation in terms of elementary functions is used in [12] as a first estimate for these zeros.
For deriving more accurate approximations of the mid-range zeros we prefer a different expansion, which will be given in the next section. Unlike in [12], iterative (Newton) methods will not be needed to improve the accuracy.

2.2. A compound Poincaré-type expansion. In this section we give an expansion of canonical Poincaré type, valid for large κ with α and β bounded. A few steps to obtain these expansions will be given in §2.2.4. The first coefficients are $p_0(x) = 1$, $q_0(x) = 0$. The expansions in (2.5) have negative powers of κ. We can modify the expansions by introducing a front factor and by multiplying the expansions in (2.5) by the asymptotic expansion of $1/G_\kappa(\alpha,\beta)$. The result is the representation (2.8); the first coefficients are given in (2.10). We also give the first terms of the expansion of $G_\kappa(\alpha,\beta)$.

2.2.1. Expansions of derivatives. A representation of the derivative can be written in a form with $y_0(x) = 1$ and coefficients determined recursively for m = 0, 1, 2, . . . For computing the Gauss weights it is convenient to have an expansion of the derivative of the function that is the oscillatory part of the representation given in (2.8). This is the expansion (2.20), in which $m_0(x) = u_0(x) = 1$, $n_1(x) = v_1(x)$, and the remaining coefficients follow for ℓ = 1, 2, 3, . . .
2.2.2. Expansions of the zeros. As a first-order approximation in terms of the θ variable we take the value $\theta_0$ for which χ, defined in (2.4), becomes $\chi_0 = (n - k + \tfrac12)\pi$, which gives $\cos\chi_0 = 0$. This initial value $\theta_0$ is given in [7], together with the expansion $\theta = \theta_0 + \theta_1/\kappa^2 + O(\kappa^{-4})$. To find more terms, we use W(θ) defined in (2.19) and expand this function at $\theta_0$ by writing $\theta = \theta_0 + \varepsilon$, which gives an equation for a zero θ; the derivatives are with respect to θ. We assume for ε an expansion of the form (2.26). Using this expansion and those of U(x) and V(x) in (2.9), and comparing equal powers of κ, we find the coefficients (2.27), where $u_m$, $v_m$ have argument x, with $x = \cos\theta_0$, and the derivatives are with respect to θ. In terms of the original variables this gives (2.28). We summarize the steps for finding a zero $x_k$, $1 \le k \le n$, given n, α, and β:
1. Compute $\theta_0$ defined in (2.23).
2. With this $\theta_0$, compute the coefficients given in (2.27) or (2.28), with $x = \cos\theta_0$. The coefficients $u_{2m}(x)$ and $v_{2m+1}(x)$ are those in the expansions in (2.9).
3. Compute ε from (2.26) and then $\theta = \theta_0 + \varepsilon$.
4. Compute $x_k \sim \cos\theta$.
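The step list above can be sketched in code. The sketch below is hedged: $\theta_0$ is taken as the classical first-order approximation of Gatteschi (which should correspond to (2.23) up to the indexing convention), and since the higher coefficients (2.27)-(2.28) are not reproduced here, a few Newton steps on $P_n^{(\alpha,\beta)}$ stand in for the asymptotic correction ε:

```python
import math

def jacobi_p(n, a, b, x):
    # P_n^(a,b)(x) by the standard three-term recurrence (DLMF 18.9.2);
    # the values stay O(1) for x inside (-1, 1), so the recursion is benign
    if n == 0:
        return 1.0
    p0, p1 = 1.0, 0.5*(a + b + 2.0)*x + 0.5*(a - b)
    for m in range(2, n + 1):
        c = 2.0*m + a + b
        a1 = 2.0*m*(m + a + b)*(c - 2.0)
        a2 = (c - 1.0)*(c*(c - 2.0)*x + a*a - b*b)
        a3 = 2.0*(m + a - 1.0)*(m + b - 1.0)*c
        p0, p1 = p1, (a2*p1 - a3*p0)/a1
    return p1

def jacobi_dp(n, a, b, x):
    # d/dx P_n^(a,b)(x) = (n + a + b + 1)/2 * P_{n-1}^(a+1,b+1)(x)
    return 0.5*(n + a + b + 1.0)*jacobi_p(n - 1, a + 1.0, b + 1.0, x)

def node(n, a, b, k, steps=3):
    # k-th node in decreasing order of theta-index (k = 1 is closest to x = 1),
    # from the classical first approximation theta0, refined by Newton steps
    kappa = n + 0.5*(a + b + 1.0)
    theta0 = (k + 0.5*a - 0.25)*math.pi/kappa
    x = math.cos(theta0)
    for _ in range(steps):
        x -= jacobi_p(n, a, b, x)/jacobi_dp(n, a, b, x)
    return x
```

For n = 100, α = 1/3, β = 1/5, the initial guess cos θ_0 alone already locates the mid-range zeros to a few units in the fourth digit, consistent with the $O(\kappa^{-2})$ error of $\theta_0$.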

2.2.3. Details on computing cos χ.
When we evaluate the functions W(θ) or W′(θ) (see (2.19) and (2.20)) with θ in the form $\theta = \theta_0 + \varepsilon$ (see (2.23)), we can write χ defined in (2.4) in the form (2.30). In the original form, especially for the zeros in the middle of the interval, the argument χ may be of order κ; in the new form the arguments are of order 1/κ. As a consequence, the evaluation of these functions can be done with better accuracy when we use (2.30).
For example, when we take n = 100, α = 1/3, β = 1/5 and the expansion in (2.26) with the three terms given in (2.27), we obtain the middle zero $x_{50}$ using Maple with Digits = 16. This feature is more relevant for the middle zeros than for the other ones. The middle zeros are also more interesting than those near the endpoints ±1, because the expansions are not valid there. A similar problem may occur when evaluating $x = \cos\theta$ once we have found the approximation of a zero θ near $\tfrac12\pi$, that is, near x = 0. Assume that n is even, and write n = 2m. Then $m = \tfrac12\kappa - \tfrac14(\alpha+\beta+1)$ and, for a zero $x_k$ near the origin, $x = \cos(\theta_0 + \varepsilon) = -\sin(\varepsilon + \delta)$ for a small quantity δ. Similarly, for odd n = 2m + 1 and the zero $x_k$ with k = m + j, we have $\delta = (\alpha - \beta + 4 - 4j)/(4\kappa)$. When α = β and j = 1 (the zero at the origin in this case), we have δ = 0.
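The reduction behind (2.30) can be illustrated with a toy computation, using the identity $\cos((N+\tfrac12)\pi + t) = (-1)^{N+1}\sin t$; a sketch in plain double precision, not the paper's Maple setting:

```python
import math

def cos_chi_direct(N, t):
    # direct evaluation of cos(chi), chi = (N + 1/2)*pi + t: the large
    # phase (N + 1/2)*pi is formed in floating point, so an absolute
    # error of roughly N*eps enters the argument of the cosine
    return math.cos((N + 0.5)*math.pi + t)

def cos_chi_reduced(N, t):
    # exact reduction cos((N + 1/2)*pi + t) = (-1)**(N + 1)*sin(t):
    # only the small quantity t enters a trigonometric function
    return (-1.0)**(N + 1) * math.sin(t)
```

For N of the order of κ the direct form loses roughly log10 N digits, while the reduced form stays at full precision.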
2.2.4. Details on the derivation of the expansions. In the starting integral representation, κ is given in (2.4). The saddle points $z_\pm$ follow from the zeros of the derivative of the phase function φ(z) defined in (2.34). The saddle point contour runs through $z_-$ from z = −1 to z = 1 (below the real z-axis), and then through $z_+$ from z = 1 to z = −1 (above the real axis). The contribution $P_+$ from $z_+$ follows from a substitution with corresponding points z = ±1 and s = ±∞. Expanding the resulting integrand in powers of s, with terms $c_k^+ s^k$, the expansion (2.39) is obtained. The contribution from the saddle point $z_-$ is the complex conjugate of $P_+$ (assuming that α and β are real), and by splitting the coefficients of the expansion in (2.39) into real and imaginary parts we obtain (2.3) and (2.5).
Remark 2. The starting integrand in (2.4) has a pole at z = x, while that of (2.33) shows an algebraic singularity at z = x, and φ(z) defined in (2.34) has a logarithmic singularity at this point. To handle this from the viewpoint of multivalued functions, we can introduce a branch cut for the functions involved from z = x to the left, assuming that the phase of z − x is zero when z > x, equals −π when z approaches −1 on the lower part of the saddle point contour in (2.33), and +π on the upper side. Because the saddle points $e^{\pm i\theta}$ stay off the interval (−1, 1), we do not need to consider function values on the branch cuts for the asymptotic analysis.
3. An expansion in terms of Bessel functions.
To compute the coefficients $A_m(\theta)$ for small values of θ in a stable way we need expansions. We can write the $A_m(\theta)$ as series that represent entire functions of θ; the first few coefficients $A_{jm}$ are given in (3.8). The coefficients $S_m(\theta)$ and $T_m(\theta)$ can be expanded in the form (3.9), in which the series again represent entire functions of θ.

3.2. Expansions of the zeros.
For obtaining accurate approximations of the zeros of large-degree Jacobi polynomials we can use the Bessel-type expansion given earlier. We give asymptotic expansions that can be used for all positive zeros. However, the most interesting region of application of the Bessel-type expansions is close to the endpoints of the interval [−1, 1], because in the rest of the interval the previous elementary expansions are accurate enough and simpler to handle (and therefore more efficient). We explain how to compute explicitly enough coefficients of the expansions so that the zeros are obtained accurately, without the need of iterative methods to refine the results.
We write the zeros $x_{n+1-m}$ (with $x_1 < x_2 < \cdots < x_n$) of $P_n^{(\alpha,\beta)}(x)$ in terms of the zeros $j_m$ of the Bessel function $J_\alpha(x)$. The zero $x_n$ corresponds to $j_1$, $x_{n-1}$ to $j_2$, and so on. Because the representation in (3.3) cannot be used as x → −1, we consider only the positive zeros. For the other zeros we can use the symmetry relation (1.1).
A zero of $P_n^{(\alpha,\beta)}(x)$ is a zero of W(θ) defined in (3.3), and we approximate a zero $x_{n+1-m}$ with the corresponding θ value following from $x_{n+1-m} = \cos\theta$. We write θ in the form (3.14) and assume for ε an expansion of the form (3.15). By expanding W(θ) at $\theta_0$, using the representation of W(θ) given in (3.3), substituting the expansion of ε and those of S(θ) and T(θ) given in (3.4), and comparing equal powers of κ, we can obtain the coefficients $\theta_j$ of (3.15).
Computations are done in Maple with Digits=16.
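As an illustration of the leading-order term of these Bessel-type approximations, the hedged sketch below compares $\theta_0 = j_1/\kappa$ (assuming, as in the elementary case, $\kappa = n + (\alpha+\beta+1)/2$; the corrections $\theta_j$ of (3.15) are not reproduced here) with a Newton-refined value of the largest zero:

```python
import math

def besselj(nu, x, terms=80):
    # J_nu(x) by its ascending power series; adequate for moderate x > 0
    s, t = 0.0, (0.5*x)**nu/math.gamma(nu + 1.0)
    for m in range(terms):
        s += t
        t *= -0.25*x*x/((m + 1.0)*(m + 1.0 + nu))
    return s

def bessel_zero(nu, guess, steps=8):
    # Newton iteration for a zero of J_nu, with J_nu' = (nu/x) J_nu - J_{nu+1}
    x = guess
    for _ in range(steps):
        x -= besselj(nu, x)/((nu/x)*besselj(nu, x) - besselj(nu + 1.0, x))
    return x

def jacobi_p(n, a, b, x):
    # P_n^(a,b)(x) by the three-term recurrence (DLMF 18.9.2)
    if n == 0:
        return 1.0
    p0, p1 = 1.0, 0.5*(a + b + 2.0)*x + 0.5*(a - b)
    for m in range(2, n + 1):
        c = 2.0*m + a + b
        p0, p1 = p1, ((c - 1.0)*(c*(c - 2.0)*x + a*a - b*b)*p1
                      - 2.0*(m + a - 1.0)*(m + b - 1.0)*c*p0) \
                     /(2.0*m*(m + a + b)*(c - 2.0))
    return p1

n, a, b = 100, 1.0/3.0, 1.0/5.0
kappa = n + 0.5*(a + b + 1.0)     # assumed form of the large parameter
j1 = bessel_zero(a, 2.9)          # first positive zero of J_alpha
theta0 = j1/kappa                 # leading-order Bessel-type approximation

# Newton-refined largest zero x_n = cos(theta), used as reference value
x = math.cos(theta0)
for _ in range(5):
    dp = 0.5*(n + a + b + 1.0)*jacobi_p(n - 1, a + 1.0, b + 1.0, x)
    x -= jacobi_p(n, a, b, x)/dp
theta_ref = math.acos(x)
```

Even for n = 100 the uncorrected $\theta_0$ matches the refined θ to several digits, and near x = 1 this is much better than the elementary expansion can do.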
3.3. Details on computing Bessel functions near a zero. The numerical evaluation of the Bessel function $J_\alpha(\kappa\theta)$ that occurs in W(θ) defined in (3.3) and in the derivative of U(θ) in (3.10) may become unstable when κθ is near a zero of $J_\alpha(z)$. This problem shows up when computing the Gauss weights with standard software for the Bessel functions. A related problem is discussed in §2.2.3 for the elementary case.
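A hedged sketch of the kind of workaround described next, based on the multiplication-theorem series of [14, Eqn. 10.23.1]; the power-series evaluation of $J_\nu$ and the truncation orders are choices made here, not the paper's:

```python
import math

def besselj(nu, x, terms=60):
    # J_nu(x) by its ascending power series; fine for moderate x > 0
    s, t = 0.0, (0.5*x)**nu/math.gamma(nu + 1.0)
    for m in range(terms):
        s += t
        t *= -0.25*x*x/((m + 1.0)*(m + 1.0 + nu))
    return s

def besselj_shifted(nu, u, h, terms=12):
    # J_nu(u + h) via the multiplication theorem (DLMF 10.23.1) with
    # lam = 1 + h/u:  J_nu(lam*u) = lam**nu * sum_k w**k/k! * J_{nu+k}(u),
    # where w = (1 - lam**2)*u/2 = O(h).  When u is a zero of J_nu the
    # first term drops out, avoiding cancellation near the zero.
    lam = 1.0 + h/u
    w = (1.0 - lam*lam)*u/2.0      # the series requires |lam**2 - 1| < 1
    s, c = 0.0, 1.0
    for k in range(terms):
        s += c*besselj(nu + k, u)
        c *= w/(k + 1.0)
    return lam**nu*s
```

With u a 16-digit value of the first zero $j_{0,1}$ and $h = 10^{-8}$, both evaluations agree, but the shifted series obtains the small value $J_0(u+h) \approx -h\,J_1(j_{0,1})$ without cancellation.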
To handle this, we can use the multiplication theorem for Bessel functions (see [14, Eqn. 10.23.1]) with z = u and λ = 1 + h/u. This gives (3.21). When h is small the series works as a Taylor expansion, because w = O(h). When u is a zero of $J_\alpha(u)$, the first term of the series vanishes. This happens when we take $u + h = \kappa\theta$ with $u = \kappa\theta_0 = j_k$ (a zero of $J_\alpha(u)$, see (3.14)) and $h = \kappa\varepsilon$. The mentioned computational problems arise for the largest zeros $x_k$ (with small k). In that case u = O(1) and h = O(1/κ), and the series in (3.21) converges quite fast. The terms of the series can be generated by using the recurrence relation of the Bessel functions, which gives, for m = 1, 2, 3, . . ., the recursion (3.23), with starting values $f_0 = 0$ (when u is a zero of $J_\alpha(u)$) and $f_1 = w J_{\alpha+1}(u)$; see Table 3.

Remark 3. We know that the forward recursion for the Bessel functions in (3.23) may be unstable. However, because we start in the domain where the zeros are (u > α), the recursion for the first terms will be stable, and for the later terms we can accept less accurate values. Because of the fast convergence of the series for the values of u and h needed to compute the Gauss weights, it is not necessary to use a backward recursion scheme for the Bessel functions.

3.4. Details on the coefficients $A_m(\theta)$. We verify the expansion in (3.1) and give details on the coefficients $A_m(\theta)$. We introduce
and, for m = 0, 1, 2, . . ., functions $\psi_{j,m}(\theta)$ defined by generating functions. Then the coefficients $A_m(\theta)$ are given by (3.26). The starting point in [5] is an integral representation (due to George Gasper [6]) in terms of the Gauss hypergeometric function with argument $(\cos\theta - \cos\varphi)/(1 + \cos\theta)$, where κ is defined in (3.2); see (3.28). By expanding the ${}_2F_1$-function, using the expansion in (3.25) and an integral representation with σ = j + m + α, a double sum is obtained in which the $\psi_{j,m}(\theta)$ are the coefficients in (3.25). This verifies the expansion in (3.1) with the $A_m(\theta)$ defined in (3.26).
We give a few details from [5] on evaluating the functions $\psi_{j,m}(\theta)$. First consider the expansion (see [17, p. 140]) in which the $\chi_j(\theta)$ are available in the form of Bessel functions of fractional order, j = 0, 1, 2, . . . These functions follow from a simple recursion, and then the $\psi_{j,m}(\theta)$ follow from their generating functions.

4. Numerical performance of the expansions for computing the nodes and weights of the Gauss-Jacobi quadrature. Next, we discuss in more detail the performance of the expansions given in §2.2 and §3 for computing the nodes and weights of the Gauss-Jacobi quadrature.
In terms of the derivatives of the orthogonal polynomials, the weights of the Gauss-Jacobi quadrature can be written with the constant $M_{n,\alpha,\beta} = \dfrac{2^{\alpha+\beta+1}\,\Gamma(n+\alpha+1)\,\Gamma(n+\beta+1)}{n!\,\Gamma(n+\alpha+\beta+1)}$, where $x_i = \cos\theta_i$. As we did in [10] for the Gauss-Hermite and Gauss-Laguerre quadratures, it is convenient to introduce scaled weights $\omega_i$; here the dot indicates the derivative with respect to θ. The weights $w_i$ are related to the scaled weights by (4.4). The advantage of computing scaled weights is that, similarly as described in [10], scaled weights do not underflow/overflow for large parameters and, additionally, they are well conditioned as functions of the roots $\theta_i$. Indeed, the scaled weights are $\omega_i = W(\theta_i)$ with $W(\theta) = \dot u(\theta)^{-2}$, and $\dot W(\theta_i) = 0$ because $\ddot u(\theta_i) = 0$. In addition, as n → +∞, the scaled weights are essentially constant, and the main dependence of the unscaled weights $w_i$ on the nodes is given by the elementary sine and cosine factor of (4.4). This is related to the circle theorem for Gaussian quadratures on [−1, 1] (see [4, 8]), which states that, as n → +∞, $n\,w_i/w(x_i) \to \pi\sqrt{1-x_i^2}$, where in the Gauss-Jacobi case $w(x) = (1-x)^\alpha(1+x)^\beta$. When considering the asymptotic expansion for Jacobi polynomials in terms of elementary functions, the function u(θ) can be written in terms of the function W(θ) given in (2.15), and for the computation of $\dot u(\theta)$ we use the expansion (2.20). When considering the asymptotic expansion in terms of Bessel functions, the function u(θ) is given in terms of U(θ) of Section 3.1; in this case, the expansion (3.10) is used for computing the derivative of u(θ).
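The circle theorem is easy to check numerically in the Legendre special case α = β = 0, where w(x) = 1, for which numpy supplies Gauss nodes and weights; a small sketch:

```python
import numpy as np

# Gauss-Legendre is the Gauss-Jacobi case alpha = beta = 0, w(x) = 1
n = 200
x, w = np.polynomial.legendre.leggauss(n)

# circle theorem: n * w_i / w(x_i) -> pi * sqrt(1 - x_i**2) as n -> infinity
ratio = n*w/(np.pi*np.sqrt(1.0 - x*x))
interior = np.abs(x) < 0.9        # stay away from the endpoints +-1
```

For n = 200 the ratio is already within about 0.3% of 1 for the interior nodes; near ±1 the convergence is slower, which matches the need for the Bessel-type expansions there.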
Examples of the performance of the asymptotic expansions for the evaluation of the nodes and scaled weights of the Gauss-Jacobi quadrature are given in Figures 4.1, 4.2, 4.3 and 4.4. We concentrate on the positive zeros; for the negative zeros we can use the relation (1.1). In our tests, we use finite precision implementations (double precision accuracy) of the expansions. We compare the approximations to the nodes obtained with the asymptotic expansions against the results of a Maple implementation (with a large number of digits) of an iterative algorithm which uses the global fixed point method of [15]. The Jacobi polynomials used in this algorithm are computed using the intrinsic Maple function. The scaled weights for testing have also been computed using Maple. Points at the top of the plots correspond to values with all digits correct in double precision accuracy. As can be seen, for n = 100 the use of the elementary expansion allows the computation of more than 3/4 of the first zeros with 15-16 correct digits, and it fails for the largest zeros, as expected. When n = 1000, only the last few nodes are not computed with double precision accuracy. It is important to note that, using the idea given in §2.2.3 for the computation of cos θ, there is no loss of accuracy for the first nodes, and then double precision accuracy is also possible in the relative error of these nodes.
The asymptotic expansion for the scaled weights also performs well for the first half of the nodes (n = 100) with the number of terms used in the computation. For n = 1000, only the last 15 scaled weights are computed with a relative error larger than $10^{-15}$.
For the computation of the nodes using the Bessel expansion, an algorithm for computing the zeros of Bessel functions is needed. In the finite precision implementation of the expansion we use the algorithm given in [9] for the first few zeros. For larger zeros, a very efficient method of computation is the use of McMahon's expansion (see [14, §10.21(vi)]), where $\mu = 4\nu^2$ and $a = (m + \nu/2 - 1/4)\pi$.
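With µ and a as above, the first terms of McMahon's expansion read $j_{\nu,m} \sim a - \dfrac{\mu-1}{8a} - \dfrac{4(\mu-1)(7\mu-31)}{3(8a)^3} - \cdots$ (this is [14, Eqn. 10.21.19]); a sketch, where the truncation after two correction terms is a choice made here:

```python
import math

def mcmahon(nu, m):
    # m-th positive zero of J_nu from the first terms of McMahon's
    # expansion (DLMF 10.21.19); accuracy improves quickly with m
    a = (m + 0.5*nu - 0.25)*math.pi
    mu = 4.0*nu*nu
    return (a - (mu - 1.0)/(8.0*a)
              - 4.0*(mu - 1.0)*(7.0*mu - 31.0)/(3.0*(8.0*a)**3))
```

Already for m = 5 and ν = 0 the truncated expansion gives about seven correct digits; for the first few zeros one falls back on the algorithm of [9], as in the text.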
On the other hand, the J-Bessel functions needed to compute the weights are evaluated using our own algorithm for Bessel functions. The first two examples of the performance of the Bessel expansion are shown in Figure 4.2. For comparison, the choice of parameters is the same as in Figure 4.1. For computing the scaled weights for n = 100, four terms have been considered in the expansions of the coefficients Y(x) and Z(x) in (3.11). For the nodes, we have used three terms in the expansion (3.15). As can be seen, the expansion performs well even for the smallest nodes (and their corresponding scaled weights), although higher accuracy is obtained for the larger nodes, as expected. Regarding the computation of the scaled weights, it is important to mention that the accurate evaluation of the two Bessel functions appearing in the expansion seems to be crucial in order to obtain more than 14 digits of accuracy for the few points closest to x = 1. For computing the Bessel function $J_\alpha(\kappa\theta)$, the scheme given in §3.3 is used in our calculations for the last points. In Figure 4.3, the parameter α is chosen larger than before (α = 5) in order to show the performance of the expansions for moderate values of the parameters. On the other hand, Figure 4.4 illustrates the performance when both parameters (α and β) are negative. As can be seen, the performance of the expansions is very similar in all cases.