Sensitivity Analysis in Calculus of Variations: Some Applications

This paper deals with the problem of sensitivity analysis in calculus of variations. A perturbation technique is applied to derive the boundary value problem and the system of equations that allow us to obtain the partial derivatives (sensitivities) of the objective function value and the primal and dual optimal solutions with respect to all parameters. Two examples of applications, a simple mathematical problem and a slope stability analysis problem, are used to illustrate the proposed method.

Sensitivity analysis is the best technique because it identifies the main causes of instability and immediately suggests the most effective actions needed to make adequate corrections. In our slope example, one can ask about the sensitivity of the safety factor to the soil properties and to the slope profile. Once this information is available, the engineer knows immediately where the slope profile must be modified and which soil properties are the most influential on the slope safety. Thus, the answers to questions such as what changes in the slope profile produce the largest improvement of the safety factor, or what changes in the soil strength are the most effective to avoid instability, can be obtained via sensitivity analysis; no other technique gives better or more precise answers.
As in this example, there are many practical problems in which the calculus of variations is the natural and best mathematical model to use and where sensitivity analysis becomes appropriate. For the sake of illustration, we use the slope stability problem and another example in section 5, but a similar treatment can be given to many other relevant problems.
Sensitivity analysis is a well-developed technique in nonlinear optimization, for which relevant results have been obtained (see, among others, [15, 25, 16, 11, 10, 8, 9, 12]). They include closed formulas for the sensitivities of the objective function value and of the primal and dual variables with respect to the parameters.
Another field related to the topic of this paper is semi-infinite programming, which deals with the optimization of functions having an infinite number of variables or constraints, and in which important developments have been made (see, for example, [17, 29, 1, 18, 19, 30]). In addition, this technique has been applied to practical problems such as design, optimal control, transportation, economic equilibrium, probability, etc. (see the list of references in [19]).
Sensitivity analysis in calculus of variations can be considered a byproduct of results on second order conditions in optimization problems in general (see [1]) or on some particular cases of optimal control. Note that some parametric calculus of variations problems can be reduced to parametric optimal control by setting ẋ = u, where x is the state and u is the control. However, there are other cases in which calculus of variations problems are not typical control problems. There is an extensive literature on optimal control, such as, for example, [23] and [26], not to mention papers on abstract parametric infinite-dimensional optimization. In this field, questions such as how much the optimal value of the objective function or the optimal solution changes when some parameters or data functions are modified are of interest. A direct and interesting treatment of sensitivity analysis, together with one example in optimal control, appears in [27], which states, "Despite the increasing number of papers dealing with stability and sensitivity, there is still a considerable deficit of examples in the literature." We fully agree with these authors. Though there are some practical applications, such as those in [2], [27], or the references above, unfortunately most of the available results are too abstract, lack examples of relevant applications, or are not sufficiently developed for many users to be able to apply them in practice. The direct consequence of all of this is that many people are unaware of the existing important results on sensitivity analysis, and so these results have not received the recognition they deserve. Some effort is therefore needed to make these important results available to engineers and applied scientists.
Downloaded 05/24/13 to 193.144.185.28. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php
In this paper we try to head in this direction: dealing with the calculus of variations, we develop a general technique for obtaining the sensitivities of the objective function optimal value, the dual variable values or functions, and the optimal solution with respect to real data, parameters, or data functions. In addition, we provide two practical examples to illustrate the methods and to convince readers of the importance of these tools.
The main contributions of this paper are as follows:
1. A specific and systematic treatment of sensitivity analysis for the calculus of variations, including the treatment of natural and transversality conditions, is made, and the corresponding methods are given.
2. New theorems are given that allow us to obtain direct formulas for the sensitivities of the objective function with respect to parameters (finitely or infinitely many).
3. Two examples of applications are presented, one of which is a nonstandard problem of the calculus of variations, since it is a quotient of integrals.

Since in the derivation of the sensitivity formulas we plan to follow a path that parallels that of nonlinear programming problems, we start with a quick review of corresponding results in this field and present a summary of some relevant results for nonlinear optimization problems.

Sensitivity in Nonlinear Programming.
Consider the following nonlinear programming problem (NLPP):

Minimize f(x; θ)   (1.1)
subject to
h(x; θ) = 0,   (1.2)
g(x; θ) ≤ 0,   (1.3)

where x is the vector of optimization variables and θ is the vector of parameters.
We present two important sensitivity results that allow us to obtain the sensitivities in a very simple form. The first is a straightforward result for the sensitivities of the objective function values, and the second gives all sensitivities (optimal objective function value, primal (optimal) solutions, and dual solutions) at once. The objective function sensitivities are the easier to obtain, and we have the following theorem.

Theorem 1.1. The gradient of the optimal objective function value with respect to θ is the partial derivative of its Lagrangian function

L(x, λ, µ; θ) = f(x; θ) + λ^T h(x; θ) + µ^T g(x; θ)   (1.4)

with respect to θ evaluated at the optimal solution x*, λ*, and µ*.
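To make Theorem 1.1 concrete, consider a small illustrative NLPP of our own (not taken from the references): minimize (x1 − θ)² + x2² subject to x1 + x2 − 1 = 0. The sketch below, assuming only NumPy, checks the Lagrangian formula of Theorem 1.1 against a finite-difference derivative and also obtains the primal and dual sensitivities by differentiating the KKT optimality conditions:

```python
import numpy as np

def solve_kkt(theta):
    # KKT system for: minimize (x1 - theta)^2 + x2^2  s.t.  x1 + x2 - 1 = 0:
    #   2*x1 + lam = 2*theta,  2*x2 + lam = 0,  x1 + x2 = 1
    A = np.array([[2.0, 0.0, 1.0],
                  [0.0, 2.0, 1.0],
                  [1.0, 1.0, 0.0]])
    b = np.array([2.0 * theta, 0.0, 1.0])
    x1, x2, lam = np.linalg.solve(A, b)
    return x1, x2, lam

def objective(theta):
    x1, x2, _ = solve_kkt(theta)
    return (x1 - theta) ** 2 + x2 ** 2

theta = 0.3
x1, x2, lam = solve_kkt(theta)

# Theorem 1.1: dJ*/dtheta is the partial derivative of the Lagrangian
# with respect to theta at the optimum, here -2*(x1 - theta).
dJ_lagrangian = -2.0 * (x1 - theta)

# Finite-difference check of the objective function sensitivity.
eps = 1e-6
dJ_fd = (objective(theta + eps) - objective(theta - eps)) / (2.0 * eps)

# Differentiating the KKT conditions with respect to theta gives a
# linear system for the primal and dual sensitivities.
U = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
rhs = np.array([2.0, 0.0, 0.0])  # minus d(KKT residual)/d(theta)
dx1, dx2, dlam = np.linalg.solve(U, rhs)

print(dJ_lagrangian, dJ_fd, dx1, dx2, dlam)
```

For this quadratic example the optimal value is (1 − θ)²/2, so both routes recover dJ*/dθ = θ − 1, and the primal/dual sensitivities are 1/2, −1/2, and 1.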
The second result is as follows. If certain regularity conditions hold and the matrix U below is invertible, differentiating the objective function (1.1) and the corresponding Karush-Kuhn-Tucker optimality conditions with respect to the parameters yields a linear system of the form (see [15, 16, 11, 8])

U δz = S δθ,   (1.5)

where δz collects the differentials of the objective function value and of the primal and dual variables, U contains the second derivatives of the Lagrangian and the Jacobians of the equality and active inequality constraints, and S contains the corresponding cross-derivatives with respect to the parameters; J is the set of binding (active) inequality constraints, |J| is its cardinality, and all matrices are evaluated at the optimal solution x*, λ*, µ*. Theorem 1.1 and (1.5) give the sensitivities of the objective function optimal value and of the primal and dual solutions with respect to all the parameters in a neat and straightforward form. The following question can then be asked: Are there equivalent formulations for calculus of variations problems?
The answer is positive and is given in this paper. In fact, parallel results exist for optimal control (see [27]). The paper is organized as follows. Section 2 presents the notation and some background on calculus of variations required for understanding the following sections. Section 3 develops the expressions that allow us to calculate the sensitivities. One of these expressions is a differential equation to be solved with the trivial null boundary conditions for the variation at the end-points, or the corresponding conditions for natural and transversality conditions. The other two are algebraic equations in the variations of the parameters and dual variables. The distinction between the finite- and infinite-dimensional cases is theoretically irrelevant, because the main argument in sensitivity analysis is the implicit function theorem and the parameter can be taken in an abstract vector space of any dimension, provided sufficient smoothness of the data with respect to the parameters holds; from a practical point of view, however, it makes an important difference. In fact, the infinite-dimensional case could be treated together with the case of a finite number of parameters, but notational reasons and clarity of exposition moved us to treat them separately. So, in section 4 the case of infinitely many parameters is analyzed. Section 5 presents two examples of application, a simple mathematical problem and a slope stability analysis problem, which illustrate all the steps to be followed in order to derive the sensitivities. Finally, in section 6 some conclusions are drawn.

Some Required Background on Calculus of Variations.
Let us consider the following constrained classical problem of the calculus of variations (see [22, 13, 20]):

Minimize J(u) = ∫_a^b F(t, u(t), u'(t)) dt   (2.1)
subject to
H(u(t)) = 0,   (2.2)

where H(u(t)) = (H_1(u(t)), H_2(u(t)), . . ., H_m(u(t))), 0 ∈ R^m, and H_i(u(t)) = ∫_a^b H_i(t, u(t), u'(t)) dt, i = 1, 2, . . ., m. To analyze the equilibrium conditions for this problem we need to define the variation of a functional with respect to its parameters. For a functional J depending on u, a, and b, the variation δJ is defined by

δJ = (d/dε) J(u + ε δu, a + ε δa, b + ε δb) |_{ε=0}.

In [31] the following multiplier theorem for the analysis of the equilibrium conditions of problem (2.1)-(2.2) is given.
Theorem 2.1. Let J and H be as defined above and let u*(t) be a local extremum for problem (2.1)-(2.2). Assume that the set {u(t) : H(u(t)) = 0} is not empty and that the variations of J and of the H_i are weakly continuous in a neighborhood of u*. Then one of the following two possibilities must hold:
1. The determinant formed by the variations of H_1, . . ., H_m vanishes identically; or
2. there exist multipliers λ* = (λ*_1, . . ., λ*_m) such that δJ + Σ_{i=1}^m λ*_i δH_i = 0 for all admissible variations, where integration by parts has been used to expand this condition and δu(t), δa, and δb are the differential increments of u(t), a, and b, respectively.
Note that we have denoted δa = δt|_a, and, for the sake of brevity, we have dropped the dependencies on t of u*, (u*)', and δu.
Note that the above expression requires F and H to have partial derivatives with respect to u and u'. In the following we will assume that all functions involved have the required smoothness.
From the above expression we can deduce the following results.
1. The arbitrariness of δu leads to the vanishing of the integrand in (2.7), and thus we have the Euler-Lagrange equation to be satisfied by all extremals:

E_u(L) ≡ L_u − (d/dt) L_{u'} = 0,   (2.12)

where we have introduced the notation L(t, u, u'; λ) = F(t, u, u') + Σ_{i=1}^m λ_i H_i(t, u, u').
2. If the end value u(a) is free, the natural boundary condition (2.13) must hold at t = a, or the corresponding equation at t = b if u(b) is free.
3. The condition (2.14) must hold if a is free, or the corresponding equation if b is free.
4. Finally, if the end-point t = a is on a given curve ψ(t) (which, in particular, implies that a is free), we have the transversality conditions (2.15), or the corresponding equations in the case when t = b lies on the curve ψ(t).

Equations (2.12)-(2.15) are the well-known necessary conditions for an extremum in the calculus of variations, which have been extensively used and applied to many practical problems.
Equation (2.12), together with the fixed end-point conditions or the corresponding natural or transversality conditions in (2.13)-(2.15), leads to a boundary value problem (BVP) which allows us to solve the initial calculus of variations minimization problem. The necessary conditions for the optimal solution u* of this problem are given by the Euler-Lagrange equation (2.12), the constraints (3.2), and some of the equations (2.13)-(2.15), depending on the problem, where now there is a dependence on p in all equations.
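To illustrate how (2.12) and the end-point conditions combine into a BVP, the following sketch solves a minimal fixed-end example of our own choosing (not from the paper): minimize ∫_0^1 (u'²/2 + u) dt with u(0) = u(1) = 0, whose Euler-Lagrange equation u'' = 1 has the exact solution u(t) = (t² − t)/2:

```python
import numpy as np

# Illustrative fixed-end problem (assumed for this sketch):
#   minimize J(u) = \int_0^1 ( u'(t)^2 / 2 + u(t) ) dt,  u(0) = u(1) = 0.
# The Euler-Lagrange equation F_u - d/dt F_{u'} = 0 reads u''(t) = 1,
# whose solution with the fixed end-points is u(t) = (t^2 - t) / 2.

n = 200                      # number of interior grid points
h = 1.0 / (n + 1)
t = np.linspace(h, 1.0 - h, n)

# Standard three-point discretization of u'' = 1 with u(0) = u(1) = 0.
off = np.ones(n - 1)
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1)) / h**2
u = np.linalg.solve(D2, np.ones(n))

u_exact = (t**2 - t) / 2.0
err = np.max(np.abs(u - u_exact))
print(err)
```

Because the three-point stencil is exact for quadratics, the discrete solution matches the extremal to machine precision; for harder Euler-Lagrange equations the same BVP structure applies with a nonlinear solver.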

Sensitivity Analysis in Calculus of Variations.
To obtain the sensitivity equations we compute all variations with respect to the parameters. The variations of the objective functional (3.1) and of the constraints (3.2) give two algebraic relations, (3.4) and (3.5), linking δJ, δu, δλ, and δp. The variation of the Euler-Lagrange equation (2.12) leads to

V(E_u(L(t, u*, (u*)', λ*; p))) = 0,   (3.6)

where V denotes the variation operator applied to the Euler-Lagrange expression E_u(L). If one faces a fixed-end problem, the variation of the boundary conditions gives the conditions (3.8) on δu at the end-points. With respect to the natural and transversality conditions, the variation at the point t = a if u(a) is free ((2.13), where there is also dependence on p) gives (3.9); similarly, the variation of the natural boundary condition if a is free gives (3.10), and the variation of the transversality condition gives (3.11). For each λ, p, δλ, and δp, (3.6) is in general a nonhomogeneous linear second order differential equation in δu, which, together with (3.8) and/or the boundary conditions in (3.9)-(3.11), leads to a BVP, the parallel of the BVP referred to in [27]. Its solution, together with (3.4) and (3.5), allows us to obtain the sensitivities of J, u, and λ with respect to p by replacing δp by the identity matrix; (3.4) and (3.5) are the counterparts of (1.5) for the calculus of variations. Note also that (3.4) and (3.5), on account of (3.6), lead to a system of linear equations in δJ, δp, and δλ.
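The structure of this sensitivity BVP can be sketched on a deliberately simple parametric problem of our own (not one of the paper's examples): minimize ∫_0^1 (u'²/2 + p u) dt with u(0) = u(1) = 0. Its Euler-Lagrange equation is u'' = p, and taking variations gives (δu)'' = δp with δu(0) = δu(1) = 0, so setting δp = 1 yields ∂u/∂p directly:

```python
import numpy as np

# Illustrative parametric problem (assumed for this sketch):
#   minimize \int_0^1 ( u'(t)^2 / 2 + p u(t) ) dt,  u(0) = u(1) = 0.
# Euler-Lagrange: u'' = p.  Taking variations, delta u satisfies the
# linear sensitivity BVP (delta u)'' = delta p with zero boundary
# variations; setting delta p = 1 gives du/dp.

n = 200
h = 1.0 / (n + 1)
t = np.linspace(h, 1.0 - h, n)
off = np.ones(n - 1)
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1)) / h**2

def solve_state(p):
    # Discretized Euler-Lagrange equation u'' = p with zero end values.
    return np.linalg.solve(D2, p * np.ones(n))

# Sensitivity BVP: w'' = 1 with w(0) = w(1) = 0  (i.e. delta p = 1).
w = np.linalg.solve(D2, np.ones(n))

# Finite-difference check of du/dp around p = 2.
p, eps = 2.0, 1e-6
w_fd = (solve_state(p + eps) - solve_state(p - eps)) / (2.0 * eps)
err = np.max(np.abs(w - w_fd))
print(err)
```

Here the state depends linearly on p, so the sensitivity BVP reproduces the finite-difference derivative essentially exactly; in general (3.6) must be solved anew for each parameter direction.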
Theorem 1.1 also holds for the calculus of variations. As for the particular form of the derivative of the objective function with respect to the parameters, the corresponding sensitivity can be obtained analytically thanks to the Lagrangian, and without using the so-called variational derivatives, as stated in the following theorem.

Theorem 3.1. The gradient of the optimal objective function value with respect to p is the gradient of its Lagrangian function with respect to p evaluated at the optimal solution u*, λ*.

The proof is a direct consequence of (3.4) and (3.5).
The practical consequence of Theorem 3.1 is that direct formulas for the objective function sensitivities are available, while the remaining sensitivities (those of the primal and dual variables) are more difficult to obtain.

Sensitivity Analysis for Infinitely Many Parameters.
A similar analysis to that performed in section 3 can be developed if the k-dimensional parameter p is changed to a vector data function φ = (φ 1 , . . ., φ r ) : R → R r .Indeed, it would be possible to redo all sensitivity computations when both parameters p and φ are present; however, the notation is rather heavy going.
Now, the variation of the Euler-Lagrange equation (2.12) takes the form W(E_u(L)) = 0, where W is the variation operator with respect to the data function φ (the counterpart of V in (3.6)). This is the nonhomogeneous linear second order differential equation in δu which, together with (4.6)-(4.11), leads to a BVP (the parallel of that mentioned in [27] for optimal control), whose solution, together with (4.4) and (4.5), allows us to find the sensitivities.
In this case, the counterpart of Theorem 1.1 is the following theorem.
Theorem 4.1. The gradient of the optimal objective function value with respect to φ is the gradient of its Lagrangian function with respect to φ evaluated at the optimal solution u*, λ*.
As in the previous case, this theorem allows us to obtain closed-form formulas for the objective function sensitivity.
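A discretized sanity check of this closed-form property, on an assumed unconstrained model problem (not from the paper): for J(φ) = min over u, with fixed ends u(0) = u(1) = 0, of ∫_0^1 (u'²/2 − φ(t)u(t)) dt, the partial derivative of the Lagrangian with respect to φ at the optimum is −u*(t), so no variational derivative of u is needed:

```python
import numpy as np

# Illustrative data-function sensitivity check (problem assumed for
# this sketch):
#   J(phi) = min_u \int_0^1 ( u'^2/2 - phi(t) u(t) ) dt,  u(0)=u(1)=0.
# Euler-Lagrange: u'' = -phi.  At the optimum, the derivative of the
# integrand with respect to phi is -u*(t), so the discretized
# functional sensitivity is dJ/dphi_j = -h * u_j.

n = 100
h = 1.0 / (n + 1)
t = np.linspace(h, 1.0 - h, n)
off = np.ones(n - 1)
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1)) / h**2

def discrete_J(phi):
    u = np.linalg.solve(D2, -phi)                    # optimal u for this phi
    du = np.diff(np.concatenate(([0.0], u, [0.0])))  # forward differences
    return np.sum(du**2) / (2.0 * h) - h * np.sum(phi * u)

phi = np.sin(np.pi * t)
u_star = np.linalg.solve(D2, -phi)

# Closed-form (envelope-type) sensitivity at the optimum.
grad_closed = -h * u_star

# Finite-difference check on one component of phi.
j, eps = n // 3, 1e-6
e = np.zeros(n)
e[j] = 1.0
g_fd = (discrete_J(phi + eps * e) - discrete_J(phi - eps * e)) / (2.0 * eps)
err = abs(g_fd - grad_closed[j])
print(err)
```

The point of the check is that the inner minimizer u need not be differentiated with respect to φ: its variation drops out by optimality, exactly as in Theorem 4.1.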

Some Examples of Applications.
In this section we illustrate the proposed method with its application to two examples.

A Mathematical Example.
Consider the following parametric problem with parameter p (see [22, p. 111]).

Direct Calculation of the Sensitivities. In order to calculate the partial derivatives of J, λ, and u(t; p), we first solve the parametric problem in terms of p. The Euler-Lagrange equation for this problem is

u''(t; p) − λp u(t; p) = 0,   (5.3)

whose solution, using the boundary conditions in (5.2), leads to the optimal solution and hence to the partial derivatives of u(t; p), λ, and J in (5.6)-(5.8).

Sensitivity Analysis Using the Proposed Methods. The objective function sensitivity in (5.8) can also be calculated using Theorem 3.1. If one is interested in calculating the partial derivatives of u(t), λ, and J(u(t)) with respect to p simultaneously, (3.4), (3.5), and (3.6) can be applied. To obtain these partial derivatives, we solve the BVP (5.14) with the two boundary conditions δu(0) = δu(π) = 0, and the system of linear equations (5.12)-(5.13) together with δp = 1, recovering exactly the sensitivities in (5.6)-(5.8).
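Although the displayed equations (5.1)-(5.14) did not survive extraction, the reported Euler-Lagrange equation (5.3) already constrains the dual solution: assuming, as the conditions δu(0) = δu(π) = 0 suggest, fixed end values u(0) = u(π) = 0, nontrivial solutions of u'' − λpu = 0 require λp = −n², so the fundamental mode gives λ(p) = −1/p and hence ∂λ/∂p = 1/p². A minimal numerical check of this consequence:

```python
import numpy as np

# Check of the dual sensitivity implied by the Euler-Lagrange equation
# (5.3), u'' - lambda * p * u = 0, assuming fixed ends u(0) = u(pi) = 0.
# Nontrivial solutions need lambda * p = -n^2; the fundamental mode
# gives lambda(p) = -1/p and d(lambda)/dp = 1/p^2.

n = 400
h = np.pi / (n + 1)
off = np.ones(n - 1)
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1)) / h**2

# Eigenvalues mu of u'' = mu u with zero ends approximate -1, -4, -9, ...
mu = np.sort(np.linalg.eigvalsh(D2))[::-1]  # closest to zero first
mu1 = mu[0]

def lam(p):
    # lambda(p) = mu1 / p for the fundamental mode (mu1 ~ -1).
    return mu1 / p

p, eps = 2.0, 1e-5
dlam_fd = (lam(p + eps) - lam(p - eps)) / (2.0 * eps)
print(mu1, dlam_fd)
```

At p = 2 the finite-difference derivative of λ(p) agrees with the predicted 1/p² = 0.25 to the accuracy of the discretization.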

A Slope Stability Problem.
In this section we present a slope stability problem. Slope stability analysis (see [3, 4]) consists of determining the safety factors F (the ratio of resisting to sliding forces or moments) associated with a series of sliding lines previously defined by the engineer, and finding the one that leads to the minimum safety factor F_0. Since each of these forces and moments can be given as a functional, the problem can be stated as the minimization of a quotient of two functionals.
The papers [5, 6, 7, 28, 24], based on the Janbu method (see [21]), proposed for a purely cohesive soil the following functional, where f_1(t, u(t), u'(t)) and f_2(t, u(t), u'(t)) are the subintegral functions and Q_N and Q_D the functionals in the numerator and denominator, respectively, Q = FV, F is the safety factor, V = c/(γH), c is the cohesion of the soil, γ is the unit weight of the soil, H is the slope height, t_1 and t_2 are the t-coordinates of the sliding line end-points, ū(t) is the slope profile (ordinate at point t), and u(t) is the ordinate of the sliding line at point t (see Figure 1). We note that t and u(t) have been adequately normalized by dividing the true coordinates x and y(x), respectively, by the slope height H.
The Euler-Lagrange equation for this nonstandard problem of the calculus of variations (see [14] and [28]) leads to (5.18), where B and C are arbitrary constants.
Then (5.16) provides the set of extremals (5.19). In this case, the critical sliding line is infinitely deep (a well-known result to soil mechanics experts). Thus, to simplify, we consider only the sliding lines passing through the end-points at t_0 = −1 and t_1 = 1. Then the constants B, C, and Q must satisfy the end-point conditions

ũ(t) = u(t), t = t_0, t_1,   (5.20)

and (5.15). Solving this system one gets B = 0.401907, C = −1.00284, Q = 7.13684, with the critical sliding line plotted in Figure 1.
The numerator Q_N and denominator Q_D of Q in (5.15) then become (5.23) and (5.24), respectively. To illustrate the proposed sensitivity method, we also consider the parameterized problem (5.25), whose extremals are of the form u(t) + Bt + C, where the constants B, C, and λ can be calculated using the equations

u(−1) = ũ(−1), u(1) = ũ(1),   (5.26)

and (5.22), giving the optimal value of the objective function (5.27).

Sensitivity Analysis Using the Proposed Methods. The objective function sensitivity (5.27) can also be calculated using Theorem 3.1; its minimum value is attained at p = 0.651005, as expected. This is another way of minimizing the initial quotient functional Q. In addition, applying Theorem 4.1, the sensitivities with respect to the data functions (in particular, the slope profile ū; see Figure 3) can be obtained. However, we can also obtain the sensitivities of the initial slope stability problem using the rule for the derivative of a quotient.
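The parameterized reformulation used here, replacing the quotient Q = Q_N/Q_D by a family of problems in an auxiliary parameter p, can be sketched in finite dimensions with a Dinkelbach-type iteration; the numerator and denominator below are hypothetical stand-ins, since the slope functionals themselves are not reproduced here:

```python
# Dinkelbach-style iteration for minimizing a quotient N(x) / D(x),
# mirroring the parameterized reformulation of the safety factor
# Q = Q_N / Q_D.  N and D below are hypothetical stand-ins.

def N(x):
    return (x - 1.0) ** 2 + 2.0   # numerator, strictly positive

def D(x):
    return x ** 2 + 1.0           # denominator, strictly positive

def argmin_parametric(p, lo=-10.0, hi=10.0, iters=200):
    # Minimize N(x) - p * D(x) by golden-section search on [lo, hi].
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if N(c) - p * D(c) < N(d) - p * D(d):
            b = d
        else:
            a = c
    return (a + b) / 2

p = 0.0
for _ in range(50):
    x = argmin_parametric(p)   # solve the parameterized problem
    p = N(x) / D(x)            # update p to the current ratio

# At convergence, min_x N(x) - p * D(x) = 0 and p = min N / D.
print(p, x)
```

For this stand-in quotient the iteration converges to the minimum ratio 2 − √2 at x = 1 + √2; the slope problem replaces the scalar minimization by the Euler-Lagrange BVP of the parameterized functional.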

Conclusions.
The main conclusions from this paper are as follows.
1. Sensitivity analysis can be done for calculus of variations problems in a way similar to how it is done for (linear and nonlinear) optimization problems and optimal control problems, obtaining results for the calculus of variations that parallel those for the other problems.
2. Since not all calculus of variations problems can be considered particular cases of typical optimal control problems, a specific treatment of sensitivity analysis for the calculus of variations is necessary.
3. Expressions (3.4), (3.5), and (3.6), together with the boundary conditions (3.8)-(3.11), lead to a BVP and a system of linear equations in δJ, δλ, and δp, which are the counterpart of (1.5) and permit the sensitivities of the objective function optimal value, the dual solution, and the optimal solution function with respect to data or parameters to be obtained. This result extends naturally to the case of infinitely many parameters, as shown in section 4.

Fig. 1 Critical sliding curve passing through the given end-points.

Calculus of Variations. In this section we consider a parametric family of calculus of variations problems and analyze how the corresponding optimal solutions change when the parameters are modified. For the sake of simplicity, we start by considering the case of finitely many parameters, and we postpone the case of data functions (infinitely many parameters) to section 4. More precisely,

Minimize J(u; p) = ∫_{a(p)}^{b(p)} F(t, u, u'; p) dt   (3.1)
subject to
H(u; p) = 0,   (3.2)

where H(u; p) = (H_1(u; p), H_2(u; p), . . ., H_m(u; p)) and

H_i(u; p) = ∫_{a(p)}^{b(p)} H_i(t, u, u'; p) dt, i = 1, 2, . . ., m,   (3.3)

0 ∈ R^m, and p = (p_1, p_2, . . ., p_k) ∈ R^k is the vector of parameters.

Fig. 3 Sensitivity of the sliding curve with respect to the slope profile.

4. Theorems 3.1 and 4.1 are the counterparts of Theorem 1.1 for the calculus of variations in the finite and infinite cases, respectively. They allow us to obtain closed formulas for the objective function sensitivities with respect to the data.
5. The two practical applications presented in this paper have illustrated and clarified the theory and demonstrated the quality of the proposed technique, together with the importance of the practical applications that can benefit from the proposed methods.
6. The procedure developed in this paper can easily be generalized to more complicated cases of the calculus of variations, such as those including several unknown functions, multiple integrals, etc.
7. It would be convenient for mathematicians, engineers, and applied researchers in general to become as familiar with the sensitivity BVP and the linear systems of equations in (3.6)-(3.11) and (4.6)-(4.11), and with Theorems 3.1 and 4.1, as they already are with the general results of the calculus of variations in (2.12)-(2.15). This would allow them to incorporate sensitivity analysis into their solutions of calculus of variations problems, increasing the quality of their work.