Algorithm 939

Methods and an algorithm for computing the generalized Marcum Q-function (Q_μ(x, y)) and the complementary function (P_μ(x, y)) are described. These functions appear in problems of different technical and scientific areas such as, for example, radar detection and communications, statistics, and probability theory, where they are called the noncentral chi-square or the noncentral gamma cumulative distribution functions. The algorithm for computing the Marcum functions combines different methods of evaluation in different regions: series expansions, integral representations, asymptotic expansions, and use of three-term homogeneous recurrence relations. A relative accuracy close to 10^{-12} can be obtained in the parameter region (x, y, μ) ∈ [0, A] × [0, A] × [1, A], A = 200, while for larger parameters the accuracy decreases (close to 10^{-11} for A = 1000 and close to 5 × 10^{-11} for A = 10000).


INTRODUCTION
We define the generalized Marcum Q-function by the integral representation

Q_μ(x, y) = x^{(1−μ)/2} ∫_y^∞ t^{(μ−1)/2} e^{−t−x} I_{μ−1}(2√(xt)) dt,   (1)

where x ≥ 0, y ≥ 0, μ > 0, and I_μ(z) is the modified Bessel function. We also use the complementary function

P_μ(x, y) = x^{(1−μ)/2} ∫_0^y t^{(μ−1)/2} e^{−t−x} I_{μ−1}(2√(xt)) dt,   (2)

and the complementary relation reads

P_μ(x, y) + Q_μ(x, y) = 1.   (3)
There are other notations for the generalized Marcum function in the literature. Among them, probably the most popular one is

Q̃_μ(α, β) = α^{1−μ} ∫_β^{+∞} t^μ e^{−(t²+α²)/2} I_{μ−1}(αt) dt,   (4)

where we have added a tilde to distinguish it from the definition (1) we are using. For μ = 1 this coincides with the original definition of the Marcum Q-function [Marcum 1960]. This is the notation used, for instance, in the MATLAB built-in function marcumq. The relation with the notation we use is simple, namely,

Q̃_μ(α, β) = Q_μ(α²/2, β²/2),   (5)

and similarly for the P-function.
The generalized Marcum Q−function is an important function used in radar detection and communications; see Marcum [1960], Rice [1968], and Robertson [1976]. In this field, μ is the number of independent samples of the output of a square-law detector. In our analysis μ is not necessarily a positive integer number and we will consider real values μ ≥ 1.
These functions also occur in statistics and probability theory, where they are called noncentral chi-square (semi-integer μ) or noncentral gamma cumulative distributions [Ashour and Abdel-Samad 1990; Cohen 1988; Dyrting 2004; Knüsel and Bablok 1996; Robertson 1976; Ross 1999]. The central gamma cumulative distribution is in fact the incomplete gamma function, and the relation of the Marcum functions to the incomplete gamma functions is given in Section 2.1.
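The identification with the noncentral chi-square distribution gives a convenient, independent way to evaluate both functions in double precision. The sketch below uses SciPy's ncx2 distribution; the parameter mapping (2μ degrees of freedom, noncentrality 2x, argument 2y) is our reading of the relation stated above, not code from the article.

```python
# Marcum functions via the noncentral chi-square distribution (a sketch):
# P_mu(x, y) is the CDF of a noncentral chi-square variable with 2*mu
# degrees of freedom and noncentrality parameter 2*x, evaluated at 2*y;
# Q_mu(x, y) is the corresponding survival function.
from scipy.stats import ncx2

def marcum_p(mu, x, y):
    """P_mu(x, y) as a noncentral chi-square CDF."""
    return ncx2.cdf(2.0 * y, df=2.0 * mu, nc=2.0 * x)

def marcum_q(mu, x, y):
    """Q_mu(x, y) as a noncentral chi-square survival function."""
    return ncx2.sf(2.0 * y, df=2.0 * mu, nc=2.0 * x)

if __name__ == "__main__":
    # the complementary relation (3): the two values should sum to ~1.0
    print(marcum_p(3.0, 2.0, 5.0) + marcum_q(3.0, 2.0, 5.0))
```

Note that computing P as 1 − Q in this way suffers exactly the cancellation problem discussed below when Q is close to 1, which is why a dedicated CDF/survival-function pair is used.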
In this article we describe methods and an algorithm for computing the functions P_μ(x, y) and Q_μ(x, y) for a large range of the parameters μ, x, y. We consider series expansions in terms of the incomplete gamma functions, recurrence relations, asymptotic expansions, and numerical quadrature. A Fortran 90 module implementing the algorithm is tested, and we conclude that it provides a relative accuracy close to 10^{-12} in the parameter region (x, y, μ) ∈ [0, A] × [0, A] × [1, A], A = 200, while for larger parameters the accuracy decreases (close to 10^{-11} for A = 1000 and better than 10^{-10} for A = 10000).
We include comparisons with the MATLAB built-in function marcumq; this function is restricted to positive integer values of the parameter μ, unlike our Fortran 90 module, which allows the computation of the Marcum functions for real values (greater than or equal to 1) of that parameter. Also, the MATLAB function computes the Marcum Q-function but not the complementary P-function: computing this function simply as 1 − Q when Q is close to 1 can lead to serious cancellation problems. Our tests reveal some bugs in certain parameter regions when computing the Marcum Q-function with MATLAB.
The Marcum functions have series expansions in terms of the incomplete gamma function ratios,

P_μ(x, y) = e^{−x} Σ_{n=0}^∞ (x^n/n!) P_{μ+n}(y),  Q_μ(x, y) = e^{−x} Σ_{n=0}^∞ (x^n/n!) Q_{μ+n}(y),   (7)

where the ratios are defined by

P_μ(y) = γ(μ, y)/Γ(μ),  Q_μ(y) = Γ(μ, y)/Γ(μ),   (8)

and the standard incomplete gamma functions are defined by

γ(μ, y) = ∫_0^y t^{μ−1} e^{−t} dt,  Γ(μ, y) = ∫_y^∞ t^{μ−1} e^{−t} dt.   (9)

Again, we have the complementary relation P_μ(x) + Q_μ(x) = 1. In Gil et al. [2012], algorithms are given for the computation of the incomplete gamma function ratios P_μ(y) and Q_μ(y) appearing in (7). From these relations we obtain the particular values

Q_μ(0, y) = Q_μ(y),  Q_μ(x, 0) = 1,   (10)

and similar complementary relations for P_μ(x, y).
In the numerical algorithm we compute both functions P μ (x, y) and Q μ (x, y). The primary function in the algorithm is the smallest one of P μ (x, y) and Q μ (x, y), and the primary function will be computed first. The other one follows from the complementary relation in (3).
As will follow from the relations given later (and in less detail from the relations in (10)), the transition in the (x, y) quarter plane from small values of Q μ (x, y) to values close to unity occurs for large values of μ, x, y across the line y = x + μ, and above this line in the (x, y) quarter plane Q μ (x, y) is taken as the primary function. Below this line the complementary function P μ (x, y) is taken as the primary function.

Recurrence Relations
Considering integration by parts in the integrals in (1) and (2), together with the relation z^μ I_{μ−1}(z) = (d/dz)[z^μ I_μ(z)], it is easy to see that the Marcum functions satisfy the following first-order difference equations (see, for instance, Temme [1993]):

Q_{μ+1}(x, y) = Q_μ(x, y) + (y/x)^{μ/2} e^{−x−y} I_μ(2√(xy)),
P_{μ+1}(x, y) = P_μ(x, y) − (y/x)^{μ/2} e^{−x−y} I_μ(2√(xy)).   (11)

It is possible to eliminate the Bessel function appearing in (11) and to obtain a homogeneous third-order recurrence relation (12) [Temme 1993].
The function P_μ(x, y) and every constant (with respect to μ) satisfy the same relation. These recurrence relations can be useful for testing.

The first-order inhomogeneous equation in (11) can be used for computing the Marcum Q-function in the direction of increasing μ. Observe that the right-hand side in the relation for Q_μ(x, y) has only positive terms. Also, considering Perron's theorem for this difference equation [Gil et al. 2007, Theorem 4.17], it is easy to check that the recurrence relation admits a minimal (or recessive) solution and that the Marcum Q-function behaves like the dominant solutions; it is therefore dominant and can be computed by forward recursion. When we consider Perron's theorem (also known as the Perron-Kreuser theorem) for the homogeneous recursion in (12), it follows that the Marcum Q-function is neither minimal nor dominant, and therefore it cannot be computed by recursion in either the forward or the backward direction. A possible way out is to combine two first-order recursions of (11) to obtain the three-term homogeneous recurrence relation

Q_{μ+1}(x, y) = (1 + c_μ) Q_μ(x, y) − c_μ Q_{μ−1}(x, y),  c_μ = √(y/x) I_μ(2√(xy)) / I_{μ−1}(2√(xy)).   (14)

Both Q_μ(x, y) and the complementary function P_μ(x, y) satisfy (14). Q_μ(x, y) is dominant and can be computed by forward recursion, while P_μ(x, y) is minimal and has to be computed in the backward direction.
An advantage of the relation in (14) is that the ratios of Bessel functions that appear are slowly varying with respect to μ. Because the dominant factors in the Bessel function representations cancel in the ratios, these ratios are free of overflow/underflow problems. In addition, the ratios of Bessel functions can be evaluated with continued fraction representations.
It is interesting to observe that Pincherle's theorem [Gil et al. 2007, Theorem 2.7] allows us to compute a ratio of P_μ(x, y) values as a continued fraction, the coefficients of which can be represented as continued fractions themselves.
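Forward recursion on the dominant Q-function is easy to illustrate. The sketch below repeatedly applies the first-order inhomogeneous Q-relation of (11), starting from an independently computed value; the use of SciPy and of the exponentially scaled Bessel function ive is our choice for the illustration, not the article's implementation.

```python
# Forward recursion for the dominant Q-function (a sketch), based on the
# first-order relation of (11):
#   Q_{mu+1}(x,y) = Q_mu(x,y) + (y/x)^(mu/2) * e^{-x-y} * I_mu(2*sqrt(x*y)).
import math
from scipy.special import ive   # ive(v, z) = I_v(z) * exp(-z), overflow-safe
from scipy.stats import ncx2

def marcum_q(mu, x, y):
    """Reference value through the noncentral chi-square tail."""
    return ncx2.sf(2.0 * y, 2.0 * mu, 2.0 * x)

def q_forward(mu0, x, y, steps):
    """Recur from Q_{mu0}(x, y) to Q_{mu0+steps}(x, y) in the stable
    (forward) direction; each added term is positive, so no cancellation."""
    q = marcum_q(mu0, x, y)
    z = 2.0 * math.sqrt(x * y)
    for k in range(steps):
        mu = mu0 + k
        # the inhomogeneous term, written with the scaled Bessel function
        q += math.exp(0.5 * mu * math.log(y / x) - x - y + z) * ive(mu, z)
    return q
```

Because every term added is positive, the forward direction accumulates no cancellation, in line with the dominance argument above.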

Derivatives and Monotonicity
Taking the derivative with respect to y in (1) and using (11), we have

∂Q_μ(x, y)/∂y = −(y/x)^{(μ−1)/2} e^{−x−y} I_{μ−1}(2√(xy)).

Taking the derivative with respect to x, using the relation (d/dz)[z^{−μ} I_μ(z)] = z^{−μ} I_{μ+1}(z), and using (11), we obtain

∂Q_μ(x, y)/∂x = Q_{μ+1}(x, y) − Q_μ(x, y) = (y/x)^{μ/2} e^{−x−y} I_μ(2√(xy)).

We see that Q_μ(x, y) (P_μ(x, y)) is an increasing (decreasing) function of x and a decreasing (increasing) function of y. With respect to μ, Q_μ(x, y) is increasing and P_μ(x, y) is decreasing.
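The y-derivative is just minus the integrand of (1) evaluated at t = y, which makes it easy to verify numerically. The finite-difference comparison below is an illustrative check (our construction, using SciPy), not part of the algorithm.

```python
# Check of dQ_mu/dy = -(y/x)^((mu-1)/2) * e^{-x-y} * I_{mu-1}(2*sqrt(x*y))
# against a central finite difference of the noncentral chi-square tail.
import math
from scipy.special import iv
from scipy.stats import ncx2

def dq_dy(mu, x, y):
    """Closed-form derivative of Q_mu(x, y) with respect to y (negative)."""
    return -((y / x) ** ((mu - 1.0) / 2.0)) * math.exp(-x - y) \
        * iv(mu - 1.0, 2.0 * math.sqrt(x * y))

def dq_dy_fd(mu, x, y, h=1e-6):
    """Central finite difference of Q_mu(x, y) = ncx2.sf(2y; 2mu, 2x)."""
    return (ncx2.sf(2.0 * (y + h), 2.0 * mu, 2.0 * x)
            - ncx2.sf(2.0 * (y - h), 2.0 * mu, 2.0 * x)) / (2.0 * h)
```

The negative sign confirms that Q_μ(x, y) decreases in y, as stated above.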

USING THE SERIES EXPANSIONS
The series representations in (7) can be computed by using the algorithms for the incomplete gamma ratios described in Gil et al. [2012]. The recurrence relations are stable for Q μ (y) in the forward direction, and for P μ (y) in the backward direction.
For the series representation of P_μ(x, y) we need to find a truncation number n_0 for the infinite series, compute P_{μ+n_0}(y), and perform the summation and recursion in (7) with n = n_0, n_0 − 1, ..., 0. The recursion for P_{μ+n}(y) is written in the backward form

P_{μ+n}(y) = P_{μ+n+1}(y) + y^{μ+n} e^{−y}/Γ(μ + n + 1),

and we write the series for P_μ(x, y) in the form

P_μ(x, y) ≈ e^{−x} P_μ(y) Σ_{n=0}^{n_0} S_n,  S_n = (x^n/n!) P_{μ+n}(y)/P_μ(y),

where we assume that the first neglected term is smaller than a prescribed precision ε.
Observe that the value of the first term in the sum is 1 (S_0 = 1), and then we expect that by requiring S_{n_0} < ε, with S_{n_0} the first neglected term, the sum will reach a relative accuracy better than ε. The first neglected term can be bounded using some monotonicity properties of the incomplete gamma function. Using (8) we have

P_{μ+n+1}(y)/P_{μ+n}(y) = h_{μ+n+1}(y)/(μ + n),  h_ν(y) = γ(ν, y)/γ(ν − 1, y).

Now, it is easy to check that h_ν(y) < y. Indeed, it is known that h_ν(y) is monotonically increasing as a function of ν [Qi 2002] and, on the other hand, it is easy to check that lim_{ν→+∞} h_ν(y) = y (for instance, by using Paris [2010, 8.11.4]); these two facts imply that h_ν(y) < y. With this information, we conclude that

S_n ≤ (xy)^n Γ(μ)/(n! Γ(μ + n)),

and taking n_0 such that this bound is smaller than ε, it is guaranteed that the first neglected term is smaller than ε. Now we use the Stirling approximation for the factorials in the denominators, which is a lower bound for the factorials. This translates into finding the solution of f(n) = 0 for

f(n) = (n + μ + 1/2) ln(n + μ) + (n − 1/2) ln n − 2n − n ln(xy) − C,

where the constant C depends on ε and μ. If f(n*) = 0 with n* the larger solution, then it is guaranteed that n_0 = n* − 1 is such that S_{n_0} < ε. The analysis becomes simpler after some tiny modifications, taking

f(n) = (n + μ) ln(n + μ) + n ln n − 2n − n ln(xy) − C.

For this function the derivative with respect to n is given by

f′(n) = ln(n(n + μ)/(xy)),

and this first derivative vanishes if n = n_e = (−μ + √(μ² + 4xy))/2, which is a minimum because f″(n) > 0. Starting at the right of this minimum we can use Newton's method to solve f(n) = 0; convergence is certain because f′(n) > 0 at the right of the minimum and f″(n) > 0.

We observe that for small values of P_μ(x, y) (close to the underflow limit), it may happen that the last terms in the series cannot be computed because the incomplete gamma ratios may underflow. Similarly, in the series for Q_μ(x, y) it may happen that the first values of Q_{μ+n}(y) in (7) underflow, but not the total sum. This means that some error degradation can be expected for values close to underflow.
We will discuss this issue later in Section 6.
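A compact sketch of the series evaluation in (7) follows. For simplicity it computes each incomplete gamma ratio directly with gammaincc and truncates when the terms become negligible, instead of using the backward recursion and the Newton-based a priori truncation bound described above; the cross-check against the noncentral chi-square tail relies on SciPy's accuracy, which is an assumption of this sketch.

```python
# Series (7) for Q_mu(x, y): Q = e^{-x} * sum_{n>=0} x^n/n! * Q_{mu+n}(y),
# with Q_a(y) = Gamma(a, y)/Gamma(a) evaluated by gammaincc.
import math
from scipy.special import gammaincc
from scipy.stats import ncx2

def marcum_q_series(mu, x, y, eps=1e-15, nmax=10000):
    """Direct summation of (7); intended for moderate x (say x < 30)."""
    s = 0.0
    w = math.exp(-x)                 # Poisson weight e^{-x} x^n / n!
    for n in range(nmax):
        t = w * gammaincc(mu + n, y)
        s += t
        if n > x and t < eps * s:    # past the Poisson mode, terms decay
            break
        w *= x / (n + 1)
    return s
```

For x = 0 the sum reduces to its first term, reproducing the particular value Q_μ(0, y) = Q_μ(y) of (10).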

ASYMPTOTIC REPRESENTATIONS
In Temme [1993] two types of asymptotic expansions have been derived: one for large values of ξ = 2√(xy) (with μ relatively small) and one for large values of μ. In both cases the expansion is given in terms of the complementary error function

erfc z = (2/√π) ∫_z^∞ e^{−t²} dt;   (28)

in this definition z may assume any finite (complex) value.

Asymptotic Representations for Large xy
We summarize the results given in Temme [1993]. We use the function F_μ(ξ, σ) defined in (29), and the Marcum functions are written in (30) as combinations of two such F-functions, where the parameters ρ and σ are defined in terms of x and y. We assume that ξ is large and substitute the well-known asymptotic expansion (32) into (29). This gives the expansion (33), in terms of functions Φ_n that are incomplete gamma functions (see (9)). The function Φ_0 can be written in terms of the complementary error function (see (28)), and further terms can be obtained from the recursion (36). Using (30) and (33) we obtain the expansion of Q_μ(x, y); for P_μ(x, y) the expansion has the same form with Φ_n replaced by −Φ_n for n ≥ 1. Information on the asymptotic nature and error bounds of expansion (33) can be found in Temme [1986], where numerical aspects of recursion (36) are discussed as well. Recursion of the functions Φ_n should begin at some point n_0 near σξ, and from n_0 backward or forward recursion has to be used. These results are based on Gautschi [1961], where exponential integrals, which are related to the incomplete gamma functions, are considered.
Note that the integral defining F μ (ξ , σ ) in (29) becomes undetermined when σ = 0. However, since we use a combination of two F−functions in (30), and ρ tends to unity as σ → 0, that is, as x → y, the left-hand sides in (30) are well defined when x = y.

Asymptotic Representations for Large μ
For numerical calculations the large μ expansion given in Temme [1993, Eq. (4.3)] is not suitable, and in this section we derive an alternative expansion with coefficients that are easier to evaluate than those derived earlier.
We consider both Q μ (x, y) and P μ (x, y) and use the integral representations given in (1) and (2).
For using the asymptotic properties of the Bessel function it is convenient to consider this function with order μ. Furthermore, we scale x and y, and we write the Marcum functions in the forms (42) and (43), where the quantity ξ is defined in (44).

Asymptotic Expansion of the Modified Bessel Function.
For deriving an asymptotic expansion of the functions Q_μ(x, y) and P_μ(x, y) we use the uniform expansion

I_μ(μz) ∼ e^{μη(z)} / ((2πμ)^{1/2} (1 + z²)^{1/4}) Σ_{k=0}^∞ u_k(t)/μ^k,  μ → ∞,   (45)

where

η(z) = √(1 + z²) + ln ( z/(1 + √(1 + z²)) ),  t = 1/√(1 + z²).   (46)

The first coefficients u_k(t) are

u_0(t) = 1,  u_1(t) = (3t − 5t³)/24,  u_2(t) = (81t² − 462t⁴ + 385t⁶)/1152,

and other coefficients can be obtained by applying the formula

u_{k+1}(t) = ½ t²(1 − t²) u_k′(t) + (1/8) ∫_0^t (1 − 5s²) u_k(s) ds.

Asymptotic Expansion of Q μ+1 (μx, μy)
We write (42) in the form (49). The saddle point follows from the equation φ′(z) = 0. We have η′(z) = √(1 + z²)/z, and solving the resulting equation, the positive saddle point z_0 is given in (52). The saddle point is located outside the interval of integration if ξ > z_0, that is, when y > x + 1, in which case Q_{μ+1}(μx, μy) < ½ (when the parameters are large). As discussed earlier (see also Temme [1993, Section 4]), the relation y = x + 1 (in the scaled variables) indicates a transition from small values of Q_{μ+1}(μx, μy) (when y > x + 1) to values close to 1 (when y < x + 1); see also the first line of (10).
So, for the Q−function the case ξ > z 0 is of special interest, because in that case this function is smaller than the P−function. We will derive an expansion in which the cases y < x + 1, y = x + 1, and y > x + 1 are included; however, for y < x + 1 we use an expansion for the P−function; see Section 4.4.
We use a method introduced in Bleistein [1966] (see also Olver [1997, pages 344-351]). As a standard form in asymptotic analysis with the previously described features (a saddle point coalescing with an endpoint of integration) we can consider a canonical integral (53) with saddle point ζ, which may be any complex number. We transform the integral in (49) into this form by writing

φ(z) = ½ w² − ζ w + φ(ξ),   (54)

where ζ has to be determined. The point z = ξ corresponds to w = 0, and we assume that the z-saddle point z_0 corresponds to the w-saddle point at w = ζ. Also, we assume that sign(z − z_0) = sign(w − ζ). This gives

½ ζ² = φ(ξ) − φ(z_0).   (55)

Because z_0 is the saddle point, the right-hand side is always nonnegative. The sign of ζ is chosen as follows. As mentioned after (52), the saddle point z_0 is inside the domain of integration if y < x + 1; the same condition is used for the sign of ζ (see (56)), which is taken positive when y < x + 1.
For the numerical evaluation of ζ from this relation, we refer to Section 4.6.1. The transformation (54) applied to (49) gives a representation (57) with an integrand factor f(w) defined in (58); in (57) we have used a relation for the prefactor that follows from the mapping. In Bleistein's method an asymptotic expansion is obtained by expanding (interpolating, actually) this function at the points w = 0 and w = ζ. We modify this method, as in Olver [1997, pages 346-348], by expanding at w = ζ only. Also, we replace the Bessel function with its asymptotic expansion in (45), with coefficients as in (45) and t as in (46); the relation between z and w follows from the transformation given in (54).

Expanding at w = ζ, we obtain an expansion in terms of functions Ψ_j(ζ), defined in (66). These functions can be expressed in terms of the complementary error function defined in (28); the first ones are given in (67), and the other ones follow from integrating by parts in (66), giving the recursion (68). By substituting w = ζ + √(2/μ) s in (66), it follows that Ψ_j(ζ) can be written in terms of the incomplete gamma functions; see (9).

Combining these expansions, we can rearrange the result in the form of the expansion (72). The first coefficients f_jk are given in (90). By separating the first term in this expansion and using (67), we obtain a leading-order approximation. If we wish, we can find the asymptotic representation for Q_μ(μx, μy) by using (11), which with the present notation can be written as the inhomogeneous relation (73).

Asymptotic Expansion of P μ+1 (μx, μy)
The computation of the complementary function P_μ(x, y) is required when y < x + μ. To avoid using the relation P_μ(x, y) = 1 − Q_μ(x, y), which may cause a large relative error, we derive an expansion that can be used for the P-function, and that has a form similar and in fact complementary to the expansion in (72). Starting from (43) we have a representation analogous to (49), where ξ is defined in (44) and φ(z) is the same as in (49). This function has a saddle point z_0 defined in (52). When y < x + 1 (in the scaled variables used in (76)), we have z_0 > ξ. We use the same mapping as in (54), now with the conditions that z = 0 corresponds to w = +∞ and (again) z = ξ to w = 0. This gives a representation of the form (57), where f(w) has the same form as in (58), although the relation between z and w is different; for example, dz/dw < 0. The quantity ζ is defined in (56) and it is positive in this case, because y < x + 1. We can repeat the procedure that we used for the Q-function. The main change is the choice of the square root in a_1 that occurs in the expansion in (87), which has its effect on the signs of the coefficients f_{jk} given in (90). In this way we obtain the expansion (79) (compare (72)), in which Ψ_j(−ζ) can be obtained in the same way as Ψ_j(ζ) in (67)-(69). In this representation, the coefficients f_{jk} are the same as those used for the Q-function, and the first few are given in (90). From (73) we obtain

P_μ(μx, μy) = P_{μ+1}(μx, μy) + e^{−½μζ²} e^{−μη(ξ)} I_μ(μξ).

Where to Use the Expansions
From (69) we see that {Ψ_j(ζ)} is an asymptotic sequence for large μ and bounded values of ζ√(μ/2). By assuming −b ≤ ζ√(μ/2) ≤ b, for some b > 0, we try to find a domain in the (x, y)-plane where we can use the expansions in (72) and (79).
For large values of μ these inequalities can only be satisfied when |ζ| is small. In that case we consider the expansion given in (85), and we use for small values of ζ the approximation ζ ∼ −(y − x − 1)/√(2x + 1). This gives for y the inequalities

x + 1 − b√(2(2x + 1)/μ) ≤ y ≤ x + 1 + b√(2(2x + 1)/μ).

This is in the scaled variables for Q_μ(μx, μy). For the unscaled variables we have

x + μ − b√(2(2x + μ)) ≤ y ≤ x + μ + b√(2(2x + μ)).   (82)

From numerical tests we will conclude which values of μ and b can be used for a given set of coefficients in the expansions.
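In unscaled variables the condition −b ≤ ζ√(μ/2) ≤ b, combined with the small-ζ approximation above, describes a band of half-width b√(2(2x + μ)) around the transition line y = x + μ. A sketch of this domain test follows; the inequality is our reading of the derivation in this section, with illustrative values.

```python
# Test whether (x, y) lies in the band around the transition line
# y = x + mu where the large-mu expansion is applied (unscaled variables):
#   |y - x - mu| < b * sqrt(2*(2*x + mu)).
import math

def in_transition_band(mu, x, y, b=1.0):
    """True inside the band of half-width b*sqrt(2*(2x+mu)) around y = x+mu."""
    return abs(y - x - mu) < b * math.sqrt(2.0 * (2.0 * x + mu))

if __name__ == "__main__":
    # exactly on the transition line, so the test is True for any b > 0
    print(in_transition_band(100.0, 10.0, 110.0))
```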

Computational Aspects of the Expansion
We consider some numerical aspects of the expansion of Q_μ(μx, μy). First we observe that, for verifying numerically the parameter domain in which the asymptotic expansion for Q_μ(μx, μy) can be applied, we can use the inhomogeneous recursion given in (73).
4.6.1. Expanding ζ. The value of ζ (see (55) and (56)) vanishes when z_0 = ξ, that is, when y = x + 1. To avoid numerical cancellation we expand as follows. From (55) we obtain the representation (84); expanding the right-hand side of (84) in powers of y − x − 1, we obtain an expansion that can be written in the form (85), with first coefficient c_0 = 1.

4.6.2. Computing the Coefficients f_jk. The computation of the coefficients f_jk is rather straightforward, although we need a computer algebra package for obtaining enough coefficients for performing numerical calculations.
First we need the coefficients a_k in the expansion (87); these follow from the transformation in (54). To simplify the notation we introduce u ∈ (0, 1]. The first few coefficients a_k are rational expressions in u; for example,

a_3 = −a_1 u⁴ (2u⁶ + u⁴ − 10u² + 9) / (36 (1 + u²)²).

The coefficients f_jk are polynomials in u, and the first few are given in (90).

A QUADRATURE METHOD
Numerical quadrature of suitable integral representations can be an important tool for evaluating special functions, as explained in Gil et al. [2007, Chapter 5]. In the case of the Marcum functions, the integrals in (1) and (2) give stable integral representations, but we prefer a representation in terms of elementary functions. Also, one important point for applying the trapezoidal rule efficiently is the vanishing of the integrand, with many (or all) derivatives, at the endpoints of integration. The derivation of the integral representations in terms of elementary functions for the Marcum functions is described in detail in Gil et al. [2013]. The starting point is the contour integral representation (91) (see Temme [1993, Eq. (2.3)]), where L_Q is a vertical line that cuts the real axis at a point s_0, with 0 < s_0 < 1. For the complementary function we have the analogous representation (92), now with a vertical line L_P that cuts the real axis at a point s_0 with s_0 > 1. Introducing scaled variables x, y, we write the representation in (91) in the form (93), with phase function φ(s) given in (94). Using saddle point analysis for this expression, we arrive at a final integral representation of the form (95), where ζ is the same quantity as used in Section 4.3 (see (55)), with ξ = 2√(xy), and with r(θ) and ρ(θ, ξ) elementary functions of the integration variable θ.
Some extra care is needed when computing the expressions for small values of θ . Details are given in Gil et al. [2013] on how to proceed in this situation when computing (96), (97), and (98).
Finally, an important point is to determine the domain of applicability of the quadrature method for computing the Marcum Q-function. In Section 4.5 we have explained where we can use the asymptotic expansions derived in Section 4.3: they are valid around the transition line y = x + μ for large values of y, μ (see in particular the domain given in (82)). The quadrature method is not valid inside this domain because the saddle point tends to unity (where the pole is located in the representation (91)) as y → x + μ. So, in the resulting algorithm, the quadrature method will be used outside the domain specified by (82) with a proper value of b; numerical tests lead us to consider b = 1. It should be observed that the quadrature method does not necessarily need large parameters, although it also performs quite well in that case.
In Helstrom [1992] the trapezoidal rule is used after including the pole at s = 1 of (93) in the function φ(s), that is, after modifying the phase function (compare (94)). In that case the saddle point has to be calculated from a cubic polynomial, and the contour of integration follows from ℑφ(s) = 0 through that saddle point. Helstrom used an approximation of this contour, taking a parabola centered at the saddle point. The results in Helstrom [1992, Table I] show correct values at the critical value y = x + 1, also for large values of the parameters, as we will comment later.
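Although the article's quadrature method integrates an elementary-function representation with the trapezoidal rule, the general idea of evaluating the Marcum functions by quadrature can be illustrated directly on the defining integral (1). The sketch below uses adaptive quadrature and the scaled Bessel function; it is our illustration for moderate parameters, not the article's scheme.

```python
# Direct adaptive quadrature of the defining integral (1):
#   Q_mu(x, y) = Integral_y^inf (t/x)^((mu-1)/2) e^{-t-x} I_{mu-1}(2 sqrt(xt)) dt.
import math
from scipy.integrate import quad
from scipy.special import ive   # ive(v, z) = I_v(z) * exp(-z), avoids overflow
from scipy.stats import ncx2

def marcum_q_quad(mu, x, y):
    """Q_mu(x, y) by quadrature of (1); moderate mu, x and y > 0 only."""
    def integrand(t):
        z = 2.0 * math.sqrt(x * t)
        # combine all exponential factors before exponentiating, for stability
        return math.exp(0.5 * (mu - 1.0) * math.log(t / x) - t - x + z) \
            * ive(mu - 1.0, z)
    val, _err = quad(integrand, y, math.inf, limit=200)
    return val
```

The integrand decays like e^{-(√t-√x)²} for large t, so the infinite-range quadrature converges quickly; for very large parameters this direct approach degrades, which is one motivation for the elementary-function representation used in the article.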

ALGORITHM FOR THE MARCUM FUNCTIONS, NUMERICAL TESTS AND COMPARISONS
We propose an algorithm for computing the Marcum functions which combines different methods of evaluation in different regions: series expansions, integral representations, asymptotic expansions, and use of the three-term homogeneous recurrence relation given in (14). The algorithm uses one or another method based on the expected region of validity of each method supported by numerical tests of each method, particularly near the boundaries of the domains where we switch methods.
From the numerical tests that we later discuss in detail, the following scheme of computation has been adopted.
We use unscaled variables. Let ξ = 2√(xy); then the scheme is as follows.
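The branch structure of the scheme can be sketched from the parameter domains quoted in the tests of Section 6. The thresholds below (x < 30 for the series, ξ > 30 with μ² < 2ξ for the large-ξ expansion, μ ≥ 135 near the transition line for the large-μ expansion, quadrature otherwise) are illustrative reconstructions, not the article's exact branch conditions.

```python
# A hedged sketch of the method-selection logic (illustrative thresholds).
import math

def choose_method(mu, x, y):
    """Pick an evaluation method for Q_mu(x, y) / P_mu(x, y)."""
    xi = 2.0 * math.sqrt(x * y)
    if x < 30.0:
        return "series"                       # series (7), moderate x
    if xi > 30.0 and mu * mu < 2.0 * xi:
        return "asymptotic_large_xi"          # Section 4.1 expansion
    if mu >= 135.0 and abs(y - x - mu) < math.sqrt(2.0 * (2.0 * x + mu)):
        return "asymptotic_large_mu"          # Section 4.3/4.4, near y = x + mu
    return "quadrature"                       # Section 5, everywhere else
```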
Next we give details on the numerical tests for each method and on the global tests for the complete algorithm.

Testing
For testing the accuracy of the different methods described in the previous sections, we have used the recurrence relation given in (12). Because, as discussed previously, Q is neither minimal nor dominant, the recursion cannot simply be run in the backward or forward direction; instead, we write the recurrence in the form (101), and the deviations from 1 of the left-hand side of (101) (in absolute value) measure the accuracy of the tested methods. We use (101) when y ≥ x + μ, and the same expression but for P_μ(x, y) when y < x + μ. For x > μ all terms in the numerator and the denominator are positive. For the case x < μ a negative term appears in the numerator, and a more stable test, both for P (when y < x + μ) and Q (when y > x + μ), is provided by writing the recurrence as (102).

In Table I, the values of μ and y are fixed to μ = 8192 and y = 1.05μ; the last column shows the values obtained for Q_μ(x, y) with the MATLAB function marcumq, and bold font indicates the wrong digits in the MATLAB computations.
We should note that the tests involve four functions and that the errors in the individual evaluations of each function turn out to be slightly smaller than the error of the test of the recurrences.
We have first checked the implementation of the series expansions given in (7), which are used for x < 30. For computing the ratios of gamma functions appearing in the expressions, we have used the algorithms given in Gil et al. [2012]. As discussed before, the series may present some error degradation for values of the functions close to the underflow limit; also, a mild degradation takes place as large parameters are considered. We have conducted a series of tests using 10^8 random points in regions (x, y, μ) ∈ [0, A] × [0, A] × [1, A]. For values of P greater than 10^{-280}, our tests show that a relative accuracy better than 10^{-12} in the tests (101) and (102) can be obtained for A = 200, while the relative accuracies obtained for A = 1000, 5000, 10000 are approximately 3 × 10^{-12}, 3 × 10^{-11}, and 5 × 10^{-11}, respectively. For function values smaller than 10^{-280} and larger than 10^{-290}, the relative accuracies are quite uniform for different values of A and around 5 × 10^{-11}. For function values smaller than 10^{-290} the loss of accuracy becomes more severe. For this reason, we have limited the output of the algorithms to values larger than 10^{-290}; when a smaller value is obtained, we set it to zero.
The relations (101) and (102) have also been tested for the asymptotic representation for large values of ξ = 2√(xy) derived in Section 4.1. Our results show that an accuracy better than 6 × 10^{-14} can be obtained in the parameter region defined by ξ > 30, μ² < 2ξ. For this case 10^{11} random points have been considered in the domain (x, y, μ) ∈ [30, 10000] × [0, 10000] × [1, 10000], and the values corresponding to random points inside the region of interest and with values not underflowing have been tested (close to 8 × 10^8 points). The asymptotic expansion for large μ given in (72) has been tested first for some of the values of μ, x, and y of Temme [1993, Table 5.1] and Robertson [1976, Table I]. In these tables, the values of μ and y are fixed to μ = 8192 and y = 1.05μ. The values obtained for P_μ(x, y) and Q_μ(x, y) are shown in Table I, and they are in agreement with those of Temme [1993]. For comparison, we include the values given by MATLAB, and we observe that most of them present loss of accuracy; we will come back to this point below.
We have also used the asymptotic expansion of Section 4 for large values of μ (up to μ = 10^9), as shown in Helstrom [1992, Table I], and we confirm the values of that table. These results have been checked with Maple. Finally, similarly to the other methods, we have checked the accuracy obtained with the large-μ expansion by testing the recurrence relation in the parameter region defined by x + μ − √(4x + 2μ) < y < x + μ + √(4x + 2μ). We consider 10^{10} random points for values of the parameters such that (μ, x, y) ∈ [136, A] × [0, A] × [0, A], and we analyze the accuracy when the asymptotic expansion is evaluated. As expected, our results show that the highest errors occur for the smaller values of μ, and the accuracy tends to decrease as A becomes large, particularly close to the transition line y = x + μ. We get accuracies better than 3 × 10^{-13}, 2 × 10^{-12}, and 10^{-11} for A = 200, 1000, 10000, respectively; the percentage of points in these regions where asymptotics for large μ is used is between 0.01% (A = 200) and 0.03% (A = 10000). We have performed an additional test for values of μ close to 136; the accuracies confirm the previous results and the fact that the accuracy improves as μ increases.
Finally, we have tested (101) and (102) when the quadrature method is used in the regions (x, y, μ) ∈ [0, A] × [0, A] × [1, A]. We consider 10^8 random points for three different values of A, and obtain an accuracy better than 6 × 10^{-13}, 1.1 × 10^{-12}, and 10^{-11} for A = 200, 1000, 10000, respectively. It is important to observe that more than 95% of the points in these regions correspond to quadrature, which means that our algorithm uses quadrature most of the time. However, the rest of the methods are crucial in order to cover regions where the quadrature method loses accuracy.
6.1.1. Global Test. The algorithm has been implemented in the Fortran 90 module MarcumQ, which includes the Fortran 90 routine marcum for the computation of Q_μ(x, y) and P_μ(x, y). After the assembly of the different methods, we have tested that in the regions (x, y, μ) ∈ [0, A] × [0, A] × [1, A] we obtain relative accuracies close to 10^{-12}, 10^{-11}, and 5 × 10^{-11} for A = 200, 1000, 10000, respectively, when the function values are larger than 10^{-280}. A total of 10^9 random points was considered for each of these tests. For values between 10^{-280} and 10^{-290} the accuracy is close to 5 × 10^{-11} for the three values of A. We set all values smaller than 10^{-290} equal to zero.

Comparison against Other Software
As far as we know, the only available software for computing Marcum functions are the functions marcumq of MATLAB which, according to MathWorks documentation, implements the algorithm given in Shnidman [1989], and the function MarcumQ of Mathematica. The MATLAB function is a fixed precision algorithm while the Mathematica function is a variable precision one. It makes sense to compare our algorithm against MATLAB's, but it is not so clear how to compare against a variable precision algorithm, particularly when Mathematica gives no clue about the methods used.
Mathematica computes the function Q̃_m(α, β) of (4), in our notation Q_μ(x, y) with x = α²/2 and y = β²/2; it also computes the complementary function P. Real parameters m, α, and β are accepted. It appears that the algorithm gives correct results if a sufficiently high accuracy is demanded, but that it may fail if the demanded accuracy is not high enough. This seems to indicate that Mathematica uses some approximations suffering from severe cancellations, which need to be computed with high accuracy. It is always possible to get the correct result, but the accuracy needed is uncertain. Apart from this, the Mathematica algorithm appears to be much slower than our algorithm.

The comparison with MATLAB is more natural because it computes the Marcum function with fixed precision arithmetic. It is important to observe that our method is more general than the function marcumq of MATLAB, which can only be used for integer μ. MATLAB computes Q̃_m(α, β), but it does not compute the P-function; the only way to obtain values of P from MATLAB is by computing them as 1 − Q, but then small values of P lose accuracy. In terms of speed, our algorithm appears to be much faster: on average, MATLAB spends around 8 × 10^{-3} s per evaluation. Apart from being faster, our algorithm is more accurate and reliable than MATLAB's marcumq, and we have found ranges of the parameters where our algorithm is accurate while MATLAB gives wrong results.
Some of these accuracy problems occur for large μ, in the region where we consider the asymptotic expansion of (72), as shown in Table I.
Severe problems occur with the MATLAB algorithm near the transition line y = x + μ for large μ, especially for small x, but also for larger values of y. For example, the minimum (maximum) error found when computing Q_800(x, y) with x ∈ [0.3, 5] and y ∈ [806, 870] (taking step sizes of 0.2 in x and 1 in y) is 0.87 (1.96). The error is computed as 2|Q_1 − Q_2|/(Q_1 + Q_2), Q_1, Q_2 being the values obtained with MATLAB's marcumq and our Fortran 90 module, respectively. In other words, none of the 1650 computed MATLAB values is correct. As a specific numerical example, MATLAB gives Q_800(0.4, 810) ≈ 0.0053, while the correct value is approximately 0.3632. This type of error is so clear that it becomes noticeable by just plotting function values. The plot of the decreasing function Q_800(1, y) as a function of y shows several abrupt changes when computed with MATLAB. For small y the results are correct, but plotting the function for y ∈ [750, 850] we observe a steep jump close to y = 800, and for 800 < y < 1100 all the values are wrong (smaller than the correct values); for y > 1100 a different type of error occurs and the algorithm returns approximately machine-epsilon; for y = 2348 the result jumps to approximately 10^{-300}, and it jumps again to machine-epsilon when 2375 < y < 2384. Finally, for y > 2384 the algorithm gives 0, which is reasonable because the value of Q is already close to underflow. These anomalies are also present for smaller values of m. For m = 550 the first jump is still noticeable, and the other phases (machine-epsilon and close to underflow) remain. For smaller values of m the first jump becomes unnoticeable, but the other (wrong) phases remain for large y.
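The specific wrong value quoted above is easy to check independently through the noncentral chi-square tail (assuming, as our other comparisons suggest, that SciPy's ncx2 is accurate in this regime):

```python
# Independent check of Q_800(0.4, 810), which MATLAB's marcumq computes
# incorrectly (~0.0053), via the noncentral chi-square survival function.
from scipy.stats import ncx2

def marcum_q(mu, x, y):
    """Q_mu(x, y) as a noncentral chi-square survival function."""
    return ncx2.sf(2.0 * y, 2.0 * mu, 2.0 * x)

if __name__ == "__main__":
    print(marcum_q(800.0, 0.4, 810.0))  # approximately 0.3632
```

A quick sanity check: the underlying noncentral chi-square variable has mean 2μ + 2x ≈ 1600.8 and standard deviation ≈ 56.6, so the tail beyond 2y = 1620 should indeed be a bit below one half, nowhere near 0.0053.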
There exist other regions where MATLAB gives no correct digits. For instance, the plot of the increasing function Q_2(x, 200) as a function of x shows that the MATLAB function oscillates very rapidly for 0 < x < 70, with most values smaller than machine-epsilon; all these values are wrong. Similar errors also occur for large m, and even some negative results are obtained. We observe that when, for small enough x, Q becomes smaller than machine-epsilon, the MATLAB values are wrong; as x increases, the error tends to disappear.
None of these anomalies occurs for our algorithm, which is able to compute accurately for parameters smaller than 10000. Our main loss of accuracy comes from the use of the series when the functions are close to the underflow limit, but the results are accurate if they are larger than 10 −290 .