OPTIMAL CONTROL OF SEMILINEAR ELLIPTIC EQUATIONS IN MEASURE SPACES∗

Optimal control problems in measure spaces governed by semilinear elliptic equations are considered. First order optimality conditions are derived, and structural properties of their solutions, in particular sparsity, are discussed. Necessary and sufficient second order optimality conditions are obtained as well. On the basis of the sufficient conditions, stability of the solutions is analyzed. Highly nonlinear terms can be incorporated by utilizing an L^∞(Ω) regularity result for solutions of the first order necessary optimality conditions.

AMS subject classifications. 90C48, 49J52, 49K20, 35J61


Introduction.
This paper is dedicated to the study of the optimal control problem
(1.1) (P) min_{u ∈ M(ω)} J(u),
where y is the unique solution to the Dirichlet problem
(1.2) −Δy + a(x, y) = u in Ω, y = 0 on Γ.
The control domain ω is a relatively closed subset of Ω. We assume that α > 0, y_d ∈ L^2(Ω), and Ω is a bounded domain in R^n, n = 2 or 3, with Lipschitz boundary Γ. The controls are taken in the space of regular Borel measures M(ω). As usual, M(ω) is identified by the Riesz theorem with the dual space of C_0(ω) (consisting of the continuous functions in ω̄ vanishing on Γ ∩ ω̄), endowed with the norm equivalent to the total variation of u; see Rudin [19]. We recall that the use of measure-valued controls is motivated by their sparsity promoting properties. If u ∈ L^1(ω), then ‖u‖_{M(ω)} and ∫_Ω |u| dx coincide. However, considering (P) in L^1(ω) does not allow us to argue the existence of a minimizer, whereas the larger space M(ω) does. The choice of the M(ω) norm in the cost functional is also useful for optimal actuator placement. Moreover, the cost of the control enters (P) linearly rather than through the frequently investigated quadratic costs.
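The display (1.1) does not survive in this excerpt. The surrounding discussion (a quadratic fit to y_d ∈ L^2(Ω) plus a control cost that is linear in the measure norm, with weight α > 0) suggests the standard tracking-type form below; treat it as a plausible reconstruction, not a verbatim copy of (1.1):

```latex
% Plausible form of the cost in (1.1), inferred from the text:
% quadratic tracking of y_d plus a control cost linear in the measure norm.
\min_{u \in \mathcal{M}(\omega)} \; J(u)
  = \frac{1}{2}\int_\Omega \big(y_u(x) - y_d(x)\big)^2 \,\mathrm{d}x
    + \alpha\,\|u\|_{\mathcal{M}(\omega)},
```

where y_u denotes the solution of (1.2) associated with u.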
Sparsity promoting controls were investigated in several earlier works. Some of them consider the case of measure-valued controls as done here (see [7,8,12,14]); others additionally use pointwise control constraints, in which case the M(ω) norm can be equivalently replaced by the L^1(ω) norm; see [16,18,20,23]. In these previous papers, the state equation is linear. The case of semilinear elliptic equations with L^∞(Ω) controls is studied in [9,10,11].
The paper is organized as follows. In section 2 we provide the necessary analysis of the state equation, including differentiability of the state with respect to the control. Necessary first and second order optimality conditions are derived in section 3. A second order sufficient optimality condition is established in section 4. This condition allows a stability analysis of the solutions with respect to perturbations in y_d and possible perturbations on the right-hand side of the equation. In the case ω = Ω, extra regularity of controls and states satisfying the first order necessary condition can be obtained. This is carried out in section 5. The L^∞(Ω) bound of these states can be used to allow for highly nonlinear terms in a(x, y). This is exploited in section 6.

Analysis of the state equation.
In this section, we will establish the existence and uniqueness of the solution of the state equation (1.2) as well as the continuity and differentiability properties of the control-to-state mapping. For the well-posedness we will use the following assumption.
(A1) The mapping a : Ω × R → R is a Carathéodory function, monotone nondecreasing with respect to the second variable for almost every x ∈ Ω, and satisfying the growth condition (2.1) for almost all x ∈ Ω and all s ∈ R.

We say that y ∈ L^1(Ω) is a solution to (1.2) if a(·, y) ∈ L^1(Ω) and (2.2) holds. Observe that Z ⊂ C_0(Ω); thus, all the integrals in (2.2) are well defined.

Theorem 2.1. Under assumption (A1), there exists a unique solution y of (1.2). Moreover, it satisfies y ∈ W_0^{1,p}(Ω) for every p < n/(n − 1), with the corresponding a priori estimate holding for some constant C_p independent of u ∈ M(ω). Finally, if u_k ⇀* u in M(ω), then y(u_k) → y(u) strongly in W_0^{1,p}(Ω) for every 1 ≤ p < n/(n − 1).

This result was first proved by Brezis and Strauss [6] for functions u ∈ L^1(Ω) without the growth assumption given in (2.1). Later, Benilan and Brezis [2] observed that the situation for measures is different. Specifically, they showed that (1.2) has no solution for a(x, s) = s^3, n = 3, and u = δ_{x_0}, where x_0 is a point in Ω. A way to ensure the existence of a solution of problem (1.2) consists of assuming the growth condition on a expressed in (2.1); see Boccardo and Gallouët [3]. For the sake of completeness, let us give an independent proof of the existence of a solution that illustrates the difficulty of passing from L^1 functions to measures and the role played by the growth condition (2.1).

Downloaded 06/10/14 to 143.50.47.57. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php
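The definition (2.2) is not legible in this excerpt. A standard transposition (very weak) formulation consistent with "y ∈ L^1(Ω) is a solution if a(·, y) ∈ L^1(Ω)" reads as follows; here the test space Z is assumed to consist of functions z ∈ H_0^1(Ω) with Δz ∈ C(Ω̄), an assumption, since Z is not defined in the excerpt:

```latex
% Hedged reconstruction of the very weak formulation (2.2):
\int_\Omega y\,(-\Delta z)\,\mathrm{d}x
  + \int_\Omega a(x, y)\, z \,\mathrm{d}x
  = \int_{\bar\omega} z \,\mathrm{d}u
  \qquad \forall z \in Z .
```

Since Z ⊂ C_0(Ω̄), all three terms are well defined for y, a(·, y) ∈ L^1(Ω) and u ∈ M(ω).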
Let us further consider the auxiliary problem and splitting indicated in (2.4)–(2.6). From the growth condition assumed in (A1), we have that f ∈ L^1(Ω). Moreover, the monotonicity of a with respect to the second variable implies g(x, s)s ≥ 0 for all s ∈ R. With these properties, the existence of a solution w ∈ W_0^{1,1}(Ω) of (2.6) follows; see [3, Theorem 2]. Setting y = w + ζ ∈ W_0^{1,1}(Ω) gives a solution to (1.2). By [5, Corollary B1] this solution is unique.
To verify the a priori estimate, we express (1.2) in the form of a linear equation with right-hand side u − a(·, y). Using again [21], as in (2.5), we deduce that y ∈ W_0^{1,p}(Ω) for every 1 ≤ p < n/(n − 1).

Finally, let us prove the claimed continuity. From (2.3) we get the existence of a subsequence, denoted in the same way, such that y_k = y(u_k) ⇀ y in W_0^{1,p}(Ω) for all 1 ≤ p < n/(n − 1). Hence, y_k → y strongly in L^{p*}(Ω). By the Lebesgue dominated convergence theorem and (A1) we obtain that a(·, y_k) → a(·, y) strongly in L^1(Ω). Now, we can pass to the limit in the equations satisfied by u_k and y_k and deduce that y is the state associated with u. By uniqueness of the solution of the state equation we conclude that the whole sequence {y_k}_{k=1}^∞ converges weakly to y in W_0^{1,p}(Ω). Now, from the compact embeddings of M(ω) and L^1(Ω) in W^{−1,p}(Ω) for all 1 ≤ p < n/(n − 1), we get that u_k − a(·, y_k) → u − a(·, y) strongly in W^{−1,p}(Ω) for the mentioned range of p. Applying the result by Jerison and Kenig [17, Theorem 0.5], we conclude the strong convergence of {y_k}_{k=1}^∞ to y in the space W_0^{1,p}(Ω).

Now, we define the space V(Ω) of states y with Δy ∈ M(Ω), which is a Banach space when endowed with the graph norm. At this point we remark that M(ω) is identified with a subspace of M(Ω). We also observe that by (2.5) the space V(Ω) is continuously included in W_0^{1,p}(Ω) for every 1 ≤ p < n/(n − 1).
In the remainder of this section we study the differentiability of the mapping G : M(ω) → V (Ω) given by G(u) = y(u) with y(u) the solution of (1.2). To this end we make the following assumptions.
(A2) The mapping a : Ω × R → R is a Carathéodory function of class C^1 with respect to the second variable for almost all x ∈ Ω, and it satisfies the stated conditions for almost all x ∈ Ω and all s ∈ R.

(A3) The mapping a is a Carathéodory function of class C^2 with respect to the second variable for almost all x ∈ Ω, and it satisfies for almost all x ∈ Ω and all s ∈ R the conditions (2.8).

We observe that if (A2) holds and a(·, 0) ∈ L^1(Ω), then (A1) is satisfied. Similarly, if (A3) holds, ∂_y a(·, 0) ∈ L^{q_1}(Ω) for some q_1 > n/2, and ∂_y a(x, s) ≥ 0, then (A2) also holds.
As an immediate consequence of the implicit function theorem we deduce that G is a C^1 mapping and G′(u)v is given by (2.9). Finally, if (A3) holds, then the mapping y → a(·, y) is C^2, and consequently F is also C^2. Let us observe that (A3) and the fact that y, z_{v_1}, z_{v_2} ∈ V(Ω) imply that ∂²a/∂y²(x, y) z_{v_1} z_{v_2} ∈ L^1(Ω). Once again the implicit function theorem implies that G is of class C^2 and (2.10) is satisfied.
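The equations (2.9) and (2.10) characterizing the derivatives of G are not reproduced above. In the standard setting, they are linearized Dirichlet problems; the following sketch, with z_v = G′(u)v and z_{v_1 v_2} = G″(u)(v_1, v_2), is a hedged reconstruction consistent with the term ∂²a/∂y²(x, y) z_{v_1} z_{v_2} appearing in the proof:

```latex
% Assumed form of (2.9): z_v = G'(u)v solves the linearized state equation
-\Delta z_v + \partial_y a(x, y)\, z_v = v \ \text{in } \Omega,
\qquad z_v = 0 \ \text{on } \Gamma;
% assumed form of (2.10): z_{v_1 v_2} = G''(u)(v_1, v_2) solves
-\Delta z_{v_1 v_2} + \partial_y a(x, y)\, z_{v_1 v_2}
  = -\,\partial^2_y a(x, y)\, z_{v_1}\, z_{v_2} \ \text{in } \Omega,
\qquad z_{v_1 v_2} = 0 \ \text{on } \Gamma .
```

Note that the second equation has an L^1(Ω) right-hand side, which is why the V(Ω) regularity of y, z_{v_1}, z_{v_2} is invoked.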
Remark 2.3. Due to the W_0^{1,p}(Ω) regularity of the solution y to (1.2), we can integrate by parts in (2.2) and use a density argument to obtain a variational formulation. The same variational formulation is valid for (2.9) and (2.10).

Necessary optimality conditions for (P).
From Theorem 2.1 the existence of a global minimum for problem (P) is immediate. Since this problem is not convex, we are going to deal with local minimizers. Hereafter, ū will denote a local minimum of (P) with associated state ȳ. Before stating the optimality conditions satisfied by (ȳ, ū), we analyze the differentiability of the cost functional. Let us express the cost in the form J(u) = F(u) + αj(u), where F denotes the smooth tracking part of the cost and j(u) = ‖u‖_{M(ω)}.
To prove the second identity of (3.1) it is enough to take into account Remark 2.3 and the regularity of ϕ, along with (2.9) and (3.2). The same argument is used to deduce the second identity of (3.3).
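The identities (3.1)–(3.3) are elided in this excerpt. In the usual derivation, the derivatives of the smooth part F are expressed through an adjoint state; the sketch below is an assumption consistent with Theorem 3.4, where an adjoint function φ̄ ∈ W_0^{1,p}(Ω) appears, not a verbatim copy of (3.1)–(3.2):

```latex
% Assumed adjoint equation associated with (\bar y, \bar u):
-\Delta \varphi + \partial_y a(x, \bar y)\, \varphi = \bar y - y_d \ \text{in } \Omega,
\qquad \varphi = 0 \ \text{on } \Gamma,
% and the corresponding first derivative of the smooth part of the cost:
F'(\bar u)\,v = \int_{\bar\omega} \varphi \,\mathrm{d}v
\qquad \forall v \in \mathcal{M}(\omega).
```

The second identity of (3.1) then follows by the integration-by-parts argument of Remark 2.3 applied to the linearized equation (2.9).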
Concerning the functional j : M(ω) → R, j(u) = ‖u‖_{M(ω)}, we note that it is Lipschitz continuous and convex. Hence, it has a subdifferential and a directional derivative, denoted by ∂j(u) and j′(u; v), respectively. The following propositions give some properties of ∂j(u) and provide an expression for j′(u; v).

Taking the Jordan decomposition
The inequality ‖λ‖_{C_0(Ω)} ≤ 1 follows easily from the definition of the subdifferential. The reader is referred to [7] for the proof of part 1 and to [7, Lemma 3.4] for part 2.
Before considering the directional derivative j′(u; v), let us introduce some notation. Given two measures u, v ∈ M(ω), we consider the Lebesgue decomposition v = v_a + v_s with respect to |u|, where v_a is the absolutely continuous part of v with respect to |u| and v_s is the singular part. Now, we take the Radon–Nikodym derivative of v_a with respect to |u|, dv_a = g_v d|u|.
In particular, it is obvious that u is absolutely continuous with respect to |u|. Moreover, we can express du = h d|u|, where h is measurable with respect to |u| and |h(x)| = 1 for |u|-almost every x ∈ ω̄.

Proof. As above, let us write du = h d|u|. The difference quotients defining j′(u; v) are dominated by |g_v|, so we can apply Lebesgue's dominated convergence theorem in the last identity.
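The formula (3.4) of Proposition 3.3 is not legible in this excerpt. For j(u) = ‖u‖_{M(ω)}, the standard expression of the directional derivative in terms of the decompositions dv = g_v d|u| + dv^s and du = h d|u| is the following; it is a reconstruction consistent with the dominated-convergence step of the proof, not a verbatim copy:

```latex
% Assumed form of (3.4):
j'(u; v)
 = \lim_{\rho \searrow 0}
   \frac{\|u + \rho v\|_{\mathcal{M}(\omega)} - \|u\|_{\mathcal{M}(\omega)}}{\rho}
 = \int_{\bar\omega} h\, g_v \,\mathrm{d}|u| \;+\; \|v^{s}\|_{\mathcal{M}(\omega)} .
```

The singular part contributes with its full mass because |u + ρ v^s| = |u| + ρ|v^s| for mutually singular measures.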
Using the previous propositions we derive the first order optimality conditions for problem (P).
Theorem 3.4. Suppose that (A1) and (A2) hold and let ū be a local solution to (P). Then there exists φ̄ ∈ W_0^{1,p}(Ω) for some p > n such that the optimality system (3.5)–(3.7) holds. Moreover, if ū ≠ 0, then (3.8) holds as well.

Proof. Using Proposition 3.1 and the convexity of j we obtain for every u ∈ M(ω) an inequality which implies that −(1/α)φ̄ ∈ ∂j(ū). Now, it is enough to apply Proposition 3.2 to deduce (3.6)–(3.8).

To prepare for the second order necessary conditions we introduce the critical cone Cū as in (3.9). It seems natural that the second order optimality conditions must be imposed only on those directions in which the directional derivative vanishes. Let us point out some properties of this critical cone.

Proposition 3.5. Cū is a closed convex cone that can equivalently be expressed in the form (3.10).

Proof. The cone property and closedness of Cū are a straightforward consequence of the continuity and positive homogeneity of the involved functionals. To prove the convexity we first observe the structure of the directional derivative, which allows us to conclude that Cū is convex. Though the convexity, continuity, and positive homogeneity of v → j′(ū; v) can be easily checked by using the representation given in (3.4), they are also true for any convex and Lipschitz continuous functional j; see [4, section 2.4] or [13, Chapter 2].
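The relations (3.6)–(3.8) of Theorem 3.4 are elided above. In comparable measure-space problems, the inclusion −(1/α)φ̄ ∈ ∂j(ū) unpacks into the support (sparsity) conditions below; this is a hedged reconstruction, consistent with the later uses φ̄(x_0) = −α on supp ū⁺ (Lemma 5.2) and φ̄ d|ū| = −α dū:

```latex
% Assumed content of (3.6)--(3.8):
\|\bar\varphi\|_{C_0(\bar\omega)} \le \alpha, \qquad
\operatorname{supp}\bar u^{+} \subset \{x \in \bar\omega : \bar\varphi(x) = -\alpha\}, \qquad
\operatorname{supp}\bar u^{-} \subset \{x \in \bar\omega : \bar\varphi(x) = +\alpha\}.
```

These conditions explain the sparsity of ū: the optimal control charges only the set where the adjoint state attains the extreme values ±α.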
To prove (3.10), we compute with the aid of (3.1) and (3.4). The last identity is a consequence of the fact that φ̄ d|ū| = −α dū, which follows from (3.8).
see [8, Lemma 3.4]. Therefore, the cone Cū can be expressed accordingly. Since the support of the absolutely continuous part of v with respect to |ū| is obviously contained in supp ū, we deduce with (3.8) that every measure v ∈ Cū is supported on the set {x ∈ Ω : |φ̄(x)| = α}.

Theorem 3.7. Suppose that (A1)–(A3) hold. If ū is a local minimum of (P), then F″(ū)v² ≥ 0 for every v ∈ Cū.

Proof. Let v be an element of Cū and consider the Lebesgue decomposition dv = g_v d|ū| + dv_s. For every integer k ≥ 1 we set v_k, modifying only the absolutely continuous part; then v_k → v in M(ω) by Lebesgue's dominated convergence theorem. Moreover, since the singular parts of v_k and v coincide and v ∈ Cū, (3.10) implies that v_k ∈ Cū for every k.
For any 0 < ρ < 1/k, following the proof of Proposition 3.3, we find the corresponding expansion of j. Now, using that ū is a local minimum of J and making a Taylor expansion, we get for every k and 0 < ρ < 1/k the existence of θ = θ(k, ρ), with 0 < θ < 1, such that the second order term is nonnegative, since v_k ∈ Cū. Finally, dividing by ρ²/2 and taking the limit as k → ∞, we get that F″(ū)v² ≥ 0.

Remark 3.8. Before finishing this section let us observe that we have not made precise the meaning of local optimality of ū in Theorems 3.4 and 3.7. In fact, any norm on M(ω) leads to the same result; this is not the case for the second order sufficient conditions, which will be considered next.

Second order sufficient conditions and stability.
In this section, ū will denote an element of M(ω), with associated state ȳ and adjoint state φ̄, such that the first order optimality conditions (3.5)–(3.7) hold. Our first goal is to give a second order sufficient condition for the local optimality of ū. To this end we strengthen assumption (A3).
(A3′) There exist a constant C_a and a function φ_2 ∈ L^{q_2}(Ω) with q_2 > 2 such that the stated bounds hold for a.a. x ∈ Ω and all s ∈ R.

Associated with q_2 and r we introduce p̄ as follows. In dimension n = 3, we take 6/5 ≤ p̄ < 3/2 with p̄ sufficiently close to 3/2 so that −Δ : W_0^{1,p̄}(Ω) → W^{−1,p̄}(Ω) is an isomorphism; see [17]. For n = 2 we take 1 ≤ p̄ < 2 with the analogous property. The reason for this choice of p̄ will be clear from the estimates below.
As usual, we have to consider an extended cone of critical directions to formulate a sufficient second order condition for optimality. For every τ > 0, we denote this cone by C^τ_ū. The second order condition involves this cone as follows:

(SOSC) There exist positive constants κ, ρ, and τ such that (4.2) holds.

Remark 4.1. We point out that the neighborhood of admissible controls in (SOSC) is chosen in W^{−1,p̄}(Ω) rather than M(ω). This is the proper choice, as we show by the following example. Let us assume that ū satisfies condition (4.2). As we will prove below, ū is then a strict local minimizer of (P) in the sense of the W^{−1,p̄}(Ω) topology; in that case we can also prove that ū is a strict local minimum of (P) in the sense of the M(ω)-topology. However, if ε < 2, this does not allow us to guarantee that J(ū) < J(u_k) for k sufficiently large, because ‖u_k − ū‖_{M(ω)} = 2 > ε for every k. This example illustrates the fact that the strong topology of M(ω) is not the appropriate one for the analysis of the state equation; rather, weaker topologies should be used.
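The definition of the extended cone C^τ_ū does not survive in this excerpt. A form consistent with the proof of Theorem 4.2 (where membership is deduced from an inequality of size C/k < τ, and where (SOSC) is stated with the norm of the linearized state) is the standard relaxation of the critical cone (3.9); the following is an assumption, not a verbatim copy:

```latex
% Assumed form of the extended critical cone, with z_v = G'(\bar u)v:
C^{\tau}_{\bar u} = \big\{ v \in \mathcal{M}(\omega) :
  F'(\bar u)\,v + \alpha\, j'(\bar u; v) \le \tau\, \|z_v\|_{L^2(\Omega)} \big\}.
```

For τ → 0 this cone shrinks toward the critical cone Cū, on which the first order term vanishes exactly.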
The following theorem implies that (SOSC) is sufficient for the strict local optimality of ū.
Proof. Let us argue by contradiction and assume that (4.3) does not hold for any ε and σ. Then there exists a sequence {u_k} with ‖u_k − ū‖_{W^{−1,p̄}(Ω)} < 1/k violating (4.3). Let us prove that u_k − ū ∈ C^τ_ū for all k sufficiently large. Using the convexity of j we know that (4.5) holds. Combining (4.4) and (4.5), a Taylor expansion of F around ū, together with (2.9) and (2.10), leads to (4.6). Because of our choice of p̄ and (4.4), we have the corresponding bound. Combining this inequality and (4.6) we deduce that, for C/k < τ, u_k − ū ∈ C^τ_ū. Moreover, from (3.7) and (3.11) a further estimate follows. Finally, from this inequality, (4.6) and (4.2), and observing that (4.4) implies u_k ≠ ū, the above inequality gives the contradiction. ∎

Subtracting (4.9) from (4.8) and applying the mean value theorem, we get, for some ŷ between y and ȳ, the representation (4.11). In dimension 3, assumption (A3′) implies the boundedness of ∂²_y a(x, ŷ), and consequently (4.12) holds, where we have estimated the L^2(Ω)-norm of y − ȳ by the W_0^{1,p̄}(Ω)-norm. Hence, in dimension 3, the first inequality in (4.7) follows from the triangle inequality and (4.12).
To obtain the estimate (4.12) in dimension 2, we use again assumption (A3′) to get the corresponding pointwise bound. Then, our choice of p̄ and Hölder's inequality imply the desired estimate. We proceed as in the three-dimensional case to prove the first estimate in (4.7).
Let us also notice that from (4.11) and (4.12) we infer, for n = 2 or 3, the intermediate bound; once again, the triangle inequality leads to the conclusion. Finally, to prove the second inequality of (4.7) we use (4.10) and the first inequality of (4.7). Replacing y − ȳ by z, we can argue as above to estimate ∂²_y a(x, ŷ)z in L^2(Ω). Therefore, with (4.13) we conclude the estimate in terms of ‖z‖_{L^2(Ω)}.

Remark 4.4. The reader may observe that the second order sufficient optimality condition (4.2) is not imposed at the point ū, as is usual. It is imposed for every u in a certain ball around ū. The reason for this stronger assumption is that we have not been able to prove that for any given ε > 0 there exists ρ > 0 such that the corresponding continuity property holds.

Corollary 4.5. Under the assumptions of Theorem 4.2, there exists a constant σ̄ > 0 independent of u such that (4.14) holds.

Proof. For simplicity, we write y = y(u). Subtracting the equations satisfied by y and ȳ we get, for ŷ = ȳ + θ(y − ȳ) with θ a Lebesgue measurable function taking values in (0, 1), the corresponding identity. Subtracting the equations for y − ȳ and z_{u−ū}, and setting ξ = y − ȳ − z_{u−ū}, we obtain the equation for ξ. Proceeding as in the proof of Lemma 4.3 we find the estimate for ξ. Finally, the triangle inequality yields an inequality which, together with (4.3), leads to (4.14) for σ̄ = σ/(Cε + 1)².
The rest of the section is dedicated to the stability analysis of the control problem (P) with respect to perturbations of the desired state y_d. More precisely, for δ > 0 consider the problems (P_δ). We denote by u_δ local solutions to (P_δ) with associated states y_δ. We have the following approximation theorem as δ → 0.

Theorem 4.6. Suppose that (A1) holds. Then every family {u_δ}_{δ>0} of global solutions is bounded in M(ω), and every weak* subsequential limit ū is a global solution to (P). The convergence properties in (4.16) hold for every p < n/(n − 1). Conversely, for every strict local minimum ū of (P) in the W^{−1,p}(Ω) (or M(ω)) sense there exists a sequence of local solutions {u_δ}_{δ>0} of (P_δ) such that (4.16) holds.

Proof. Denote by y_0 the solution to (1.2) associated with the control u = 0. Then, using (4.15) we get an estimate which proves the boundedness of {u_δ}_{δ>0}. Hence, taking a subsequence if necessary, we have u_δ ⇀* ū in M(ω). From the compactness of the embedding M(ω) ⊂ W^{−1,p}(Ω) for every p < n/(n − 1), we get the strong convergence in W^{−1,p}(Ω) and the strong convergence y_δ → ȳ in W_0^{1,p}(Ω), where ȳ = G(ū). Let us prove that ū is a global solution to (P). From the stated convergence properties and (4.15) we get for every u ∈ M(ω) inequalities which prove the optimality of ū. In particular, taking u = ū we deduce from these inequalities that J_δ(u_δ) → J(ū), which implies that ‖u_δ‖_{M(ω)} → ‖ū‖_{M(ω)}.
Conversely, let ū be a strict local solution of (P). Then, for some ε > 0, ū is the unique global solution of the localized problem. We also introduce the corresponding perturbed problems (P_{ε,δ}). Observe that the compactness of the embedding M(ω) ⊂ W^{−1,p}(Ω) implies that U_ε is sequentially weakly* closed in M(ω). This implies the existence of global solutions u_δ of problems (P_{ε,δ}). Now, we can argue as in the first part of the theorem to deduce (4.16). In this case we replace the inequality J_δ(u_δ) ≤ J_δ(0) by J_δ(u_δ) ≤ J_δ(ū). As a consequence we have that ‖u_δ − ū‖_{W^{−1,p}(Ω)} < ε for δ sufficiently small, which shows that u_δ is a local solution of (P_δ).
To get a rate of convergence of the states {y_δ}_{δ>0} to ȳ we use (SOSC). Let us fix a local solution ū of (P) satisfying (SOSC) and let ε > 0 be given by Theorem 4.2. We know from the proof of Theorem 4.6 that there exists a sequence {u_δ}_{δ>0} converging to ū in the sense of (4.16) and such that every u_δ is a minimum of J_δ in the ball ‖u − ū‖_{W^{−1,p}(Ω)} < ε.
Theorem 4.7. With the above notation and assuming (A1), (A2), and (A3′), there exists a constant C independent of δ such that the stated estimates hold.

Proof. Using (4.14), the optimality of u_δ, and (4.15), the first estimate of the theorem follows. For the second estimate we use the optimality of ū and u_δ; combining the two resulting estimates, the second inequality follows.

Remark 4.8. Consider a perturbation in the state equation of the type (4.17). Associated with these perturbed state equations we can define control problems (P_δ), analogous to problem (P), with solutions u_δ. The previous analysis can be repeated to obtain the estimates of Theorem 4.7. For this argument it is enough to establish that for every control u ∈ M(ω) the corresponding states y_δ and y, solutions to (4.17) and (1.2), respectively, satisfy ‖y_δ − y‖_{W_0^{1,p}(Ω)} ≤ C‖f_δ‖_{L^1(Ω)} ≤ Cδ.

A regularity result.
The goal of this section is to prove a regularity result for the optimal controls and the associated states, assuming that y_d ∈ L^∞(Ω) and ω = Ω. A similar result was obtained for linear state equations in [18]. The assumption that the control domain coincides with the observation domain can be restrictive for genuine control problems. However, it is an efficient way to determine the optimal placement of actuators.
We also consider the functions ζ̄⁺ and ζ̄⁻, where G denotes the Green's function for the Dirichlet problem in Ω associated with the Laplace operator. The reader should notice that ζ̄⁺ (ζ̄⁻) denotes a class of measurable functions, while ζ̄⁺_* (ζ̄⁻_*) is a particular selection in this class, well defined at every point of Ω, that can take the value +∞ at some points. Define ȳ_* = w̄ + ζ̄_*. The proof will utilize the following three lemmas.

Lemma 5.2. The following properties hold.

Proof. Let us prove the first inequality, the proof of the second being analogous. Assume that the inequality is false. Then there exists a point x_0 ∈ supp ū⁺ such that ȳ_*(x_0) > ‖y_d‖_{L^∞(Ω)}. From (3.8) we know that the supports of ū⁺ and ū⁻ are disjoint compact sets. Then we have that ζ̄⁻_* is continuous in a neighborhood of supp ū⁺; see [21]. Moreover, w̄ is continuous and ζ̄⁺_* is lower semicontinuous in Ω. Hence, ȳ_* is also lower semicontinuous in a neighborhood of supp ū⁺. Therefore, there exists a ball B_ρ(x_0) such that ȳ_*(x) > ‖y_d‖_{L^∞(Ω)} for every x ∈ B_ρ(x_0); then (3.5) applies in this ball. From (3.8), we know that φ̄(x_0) = −α; consequently, φ̄ cannot be constant in the ball B_ρ(x_0), because the left-hand side would be nonpositive and the right-hand side strictly positive. Therefore, an application of the maximum principle shows that there exists x′ ∈ ∂B_ρ(x_0) such that φ̄(x′) < φ̄(x_0) = −α, which contradicts (3.7).

Remark 5.3. In the case ω ≠ Ω, (3.7) says that ‖φ̄‖_{C_0(ω̄)} ≤ α, but |φ̄| can be larger than α outside ω. As a consequence, the proof of Lemma 5.2 is not valid.
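The definitions of ζ̄± and ζ̄±_* are elided in this excerpt. The reference to the Green's function suggests the representation below, with ζ̄±_* the pointwise-defined (possibly infinite) potentials of the positive and negative parts of ū; this is a hedged reconstruction:

```latex
% Assumed definition of the pointwise selections:
\bar\zeta^{\pm}_{*}(x) = \int_\Omega G(x, \xi)\,\mathrm{d}\bar u^{\pm}(\xi),
\qquad x \in \Omega,
% so that, in the distributional sense,
-\Delta \bar\zeta^{\pm} = \bar u^{\pm} \ \text{in } \Omega,
\qquad \bar\zeta^{\pm} = 0 \ \text{on } \Gamma .
```

Since G ≥ 0, the selections ζ̄⁺_* and ζ̄⁻_* are lower semicontinuous, which is exactly the property used in the proof of Lemma 5.2.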
Since this lemma is crucial in the proof of Theorem 5.1, it is our opinion that the regularity result is not valid for ω ≠ Ω.
Then, the strong maximum principle can be applied as before. Second, we assume that x_0 ∈ Ω_i for some i. Since ‖φ̄‖_{C_0(Ω̄)} = α and φ̄(x_0) = −α, the maximum principle implies that φ̄ ≡ −α in Ω_i, which contradicts the fact that φ̄ = 0 on Γ ∩ ∂Ω_i. In the case that there exists a connected component Ω_i with Γ ∩ ∂Ω_i = ∅, we assume that {x ∈ Ω_i : ∂a/∂y(x, s) > 0 for all s ∈ R} has positive Lebesgue measure. In this case the only constant function satisfying the last equation is the zero function, which contradicts the fact that φ̄(x_0) = −α.

Remark 5.6 suggests the possibility of formulating the control problem (P_∞) for highly nonlinear functions a(x, y) under the more restrictive assumption u ∈ M_∞(Ω) on the control. Hereafter, (P_∞) will denote the control problem

Dealing with highly nonlinear terms.
where y is the unique solution of (1.2). In what follows we will assume that y_d ∈ L^∞(Ω). The following hypotheses are assumed for the function a.
(A4) The mapping a : Ω × R → R is a Carathéodory function of class C^1 with respect to the second variable for almost all x ∈ Ω, and it satisfies for almost all x ∈ Ω and all s ∈ R the conditions
(6.2) a(·, 0) ∈ L^{q_1}(Ω) for some q_1 > n/2, and for every M > 0 there exists C_M > 0 such that the corresponding local bound holds.

Let us prove that (P_∞) is well formulated, which means that y is uniquely defined for every u ∈ M_∞(Ω).
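The definition of M_∞(Ω) does not survive in this excerpt. The norm introduced later in this section, built from the solution ζ of (2.4), suggests the following definition, stated here as an assumption:

```latex
% Assumed definition of the restricted control space:
\mathcal{M}_\infty(\Omega)
 = \{\, u \in \mathcal{M}(\Omega) : \zeta_{|u|} \in L^\infty(\Omega) \,\},
\qquad\text{where }
-\Delta \zeta_{|u|} = |u| \ \text{in } \Omega, \quad \zeta_{|u|} = 0 \ \text{on } \Gamma,
```

so that a measure belongs to M_∞(Ω) exactly when the potential of its total variation is bounded; this is the property exploited in the comparison argument 0 ≤ ζ_k ≤ ζ of Theorem 6.3.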
The difficult issue is to prove that (P ∞ ) has at least one solution. To this end, given M > 0, we consider a function γ M : R → [−M − 1, +M + 1] of class C 2 having the properties for some constant C γ independent of M . For instance, we can select where p(t) = 3t 5 − 7t 4 + 4t 3 + t. With this choice, we can take 1] . Now, we define a M : Ω× R → R by a M (x, t) = a(x, γ M (t)). Hence, by assumption (A4), a M is of class C 1 with respect to the second variable and, using the mean value theorem, we get for every s ∈ R and almost all x ∈ Ω Now, we define the control problems where y is the solution of the state equation Since a M satisfies assumptions (A1) and (A2), the existence of a global minimum u M for (P M ), with associated state y M , is immediate. Moreover, (u M , y M ) along with the adjoint state ϕ M satisfy the optimality system (3.5)-(3.7), which we write Moreover, if u M = 0, then Let us prove that u M ∈ M ∞ (Ω) and it is a solution of (P ∞ ) for M sufficiently large.
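Only the interpolating polynomial p survives in the excerpt. Since p(0) = 0, p(1) = 1, p′(0) = 1, and p′(1) = p″(0) = p″(1) = 0, a C² cutoff γ_M built from it would be the following; this is an assumption about the intended construction, not a verbatim copy:

```latex
% A C^2 truncation at level M built from p(t) = 3t^5 - 7t^4 + 4t^3 + t:
\gamma_M(t) =
\begin{cases}
  t, & |t| \le M,\\[2pt]
  \operatorname{sign}(t)\,\big(M + p(|t| - M)\big), & M < |t| \le M + 1,\\[2pt]
  \operatorname{sign}(t)\,(M + 1), & |t| > M + 1.
\end{cases}
```

The boundary values of p and its first two derivatives make γ_M of class C², and max_{[0,1]} p′ is an absolute constant, so |γ′_M| ≤ C_γ with C_γ independent of M.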
Let us take a subsequence of {(u_M, y_M)}_{M≥M_0}, denoted in the same way, converging in the sense indicated above. We have that ȳ also satisfies (5.1) and (5.2), that it is the state associated with ū, and therefore ū ∈ M_∞(Ω). Moreover, for every u ∈ M_∞(Ω) we can pass to the limit, using that u_M is a solution of (P_M). Therefore, J(u_M) = J(ū) for every M ≥ M_0, and consequently u_M is a solution of (P_∞) for all M ≥ M_0.
Next we study second order optimality conditions. To this end we make the following assumption.
(A5) The mapping a is a Carathéodory function of class C^2 with respect to the second variable for almost all x ∈ Ω, and it satisfies
(6.9) there exists φ_2 ∈ L^{q_2}(Ω) with q_2 > n such that for every M > 0 there exists C_M > 0 for which the corresponding bound holds.

If we perturb u in M(Ω), then the existence of a solution to these equations may fail. Since our goal is to get first and second order necessary conditions for local optimality, differentiability of F is required. This motivates the introduction of a stronger norm on M_∞(Ω) for which this differentiability holds. We define the norm in terms of the solution ζ of (2.4). Endowed with this norm, M_∞(Ω) is a Banach space. Now we introduce the space of associated states, which is also a Banach space. From Theorem 6.1 we know that the mapping G_∞ is well defined. To prove the differentiability we argue as in the proof of Theorem 2.2. To this end we define F_∞ analogously. From (A4), respectively (A5), it follows that the mapping y → a(·, y) is C^1, respectively C^2, from V_∞(Ω) to L^{q_1}(Ω) ⊂ M_∞(Ω). Then, following the same arguments as in the proof of Theorem 2.2, we obtain that G_∞ is C^1, respectively C^2, and that (2.9) and (2.10) hold. Furthermore, by using the chain rule, we deduce the differentiability of the cost functional.

To formulate the second order necessary optimality conditions we define the cone analogous to (3.9), C_{∞,ū} = {v ∈ M_∞(Ω) : F′(ū)v + αj′(ū; v) = 0}, where ū ∈ M_∞(Ω) satisfies the first order optimality conditions.

Theorem 6.3. Under assumption (A4), if ū is a local solution of (P_∞), then there exists φ̄ ∈ W_0^{1,p}(Ω) for some p > n such that (3.5)–(3.7) hold. If in addition (A5) is satisfied, then F″(ū)v² ≥ 0 for every v ∈ C_{∞,ū}.
Proof. The first order necessary optimality conditions can be proved as in Theorem 3.4. In the proof of the second order necessary conditions given in Theorem 3.7, the only additional issue is the following one. For v ∈ M_∞(Ω), we construct v_k in the same way, but we have to prove that v_k ∈ M_∞(Ω); otherwise the existence of the state associated with ū + ρv_k can fail. To this end, we first observe, as pointed out in Remark 5.6, that |v| ∈ M_∞(Ω) as well. Comparing the expressions |v| = |g_v| d|ū| + d|v_s| and |v_k| = |g_{v_k}| d|ū| + d|v_s|, we obtain by the definition of g_{v_k} that |v_k| ≤ |v|. Hence, the solutions ζ and ζ_k of (2.4) corresponding to |v| and |v_k|, respectively, satisfy 0 ≤ ζ_k ≤ ζ. Since |v| ∈ M_∞(Ω), we have ζ ∈ L^∞(Ω), hence ζ_k ∈ L^∞(Ω) too. This implies that |v_k| ∈ M_∞(Ω). Finally, since 0 ≤ v_k⁺ ≤ |v_k| and 0 ≤ v_k⁻ ≤ |v_k|, arguing in the same way we conclude that v_k ∈ M_∞(Ω).

Analogously to section 4, for the sufficient conditions we introduce the extended cone C^τ_{∞,ū}. Based on this cone we define the second order sufficient condition:

(SOSC) There exist positive constants κ, ρ, τ, and M > M_0 such that
(6.10) F″(u)v² ≥ κ‖z_v‖²_{L²(Ω)} for all v ∈ C^τ_{∞,ū} and all u ∈ B_{∞,M} with ‖u − ū‖_{W^{−1,p̄}(Ω)} < ρ,
where M_0 was introduced in Theorem 6.2 and 1 ≤ p̄ < n/(n − 1) is chosen so that −Δ : W_0^{1,p̄}(Ω) → W^{−1,p̄}(Ω) is an isomorphism.

Proof. The proof of this theorem follows the steps of the proof of Theorem 4.2 with the following differences. First we observe that there exists a constant K_M such that ‖y(u)‖_{L^∞(Ω)} ≤ K_M for every u ∈ B_{∞,M}. Indeed, we can decompose y(u) = ζ + w with ζ solving (2.4) and w satisfying (2.6). By definition of B_{∞,M} we know that ‖ζ‖_{L^∞(Ω)} ≤ M. Moreover, (2.6) and assumption (A4) imply that ‖w‖_{L^∞(Ω)} is bounded by a constant depending on M, which leads to the above estimate.
In the statement of Lemma 4.3, u and ū must belong to B_{∞,M} and v to B_{∞,2M}. Recall that when Lemma 4.3 is used in the proof of Theorem 4.2, v = u_k − ū, with u_k and ū both belonging to B_{∞,M} in the present case. Now, the estimate in (4.11) can be obtained by bounding ∂²a/∂y²(x, ŷ)(y − ȳ) appropriately. Finally, it is enough to observe that |ŷ(x)| ≤ K_M and to recall assumption (A5) to deduce |∂²a/∂y²(x, ŷ)| ≤ φ_2(x) + C_{K_M} with φ_2 ∈ L^{q_2}(Ω).
Similar arguments are used to obtain the second estimate of (4.7), taking into account that the solution of (4.8) is bounded due to the fact that v ∈ B_{∞,2M}.