Analysis of control problems of nonmonotone semilinear elliptic equations

In this paper we study optimal control problems governed by a semilinear elliptic equation. The equation is nonmonotone due to the presence of a convection term, despite the monotonicity of the nonlinear term. The resulting operator is neither monotone nor coercive. However, by a convenient use of a comparison principle we prove existence and uniqueness of a solution for the state equation. In addition, we prove some regularity of the solution and differentiability of the control-to-state mapping. This allows us to derive first and second order conditions for local optimality.


Introduction
In this paper, we consider an optimal control problem associated with the following elliptic semilinear equation Ay + b(x) · ∇y + f (x, y) = u in Ω, y = 0 on Γ, (1.1) where A is an elliptic operator, b : Ω −→ R n is a given function, f : Ω × R −→ R is monotone nondecreasing in the second variable, u ∈ L 2 (Ω), Ω is a domain in R n , n = 2 or 3, and Γ is the boundary of Ω. The precise assumptions on these data will be given in the next section. Due to the convection term induced by b, the linear part of the above operator is nonmonotone. We emphasize that we assume neither that div b = 0 nor that b is small. Consequently, the bilinear form associated with the linear part of the operator is not necessarily coercive. This introduces some important difficulties in the analysis of the equation. A thorough study is needed to prove existence and uniqueness of a solution of the equation (1.1) for every u. This study makes strong use of a comparison principle. When the nonlinear term f is not present in the state equation, the reader is referred to [10, Theorem 8.3] for the existence and uniqueness of a solution. The case of nonmonotone quasilinear elliptic equations was considered in [5] and [12]. However, in those two papers the operator was coercive. The equation considered in this paper does not fit into the problems studied in the mentioned references.
Associated with the state equation (1.1) we consider the following control problem: where y u is the solution of (1.1) associated with u, L : Ω × R −→ R is a given function, ν > 0, and U ad = {u ∈ L 2 (Ω) : α ≤ u(x) ≤ β for a.a. x ∈ Ω} with −∞ ≤ α < β ≤ +∞. A precise analysis of the state equation allows us to prove the existence of a solution for (P) as well as to obtain the first and second order optimality conditions. Typical examples of nonlinearities in the state equation are f (x, y) = a 0 (x)|y| r y with r > 1 or f (x, y) = a 0 (x) exp(y), where a 0 is assumed to be nonnegative and bounded. Concerning the cost functional, the usual tracking cost functional falls into this framework by setting L(x, y) = (1/2)(y − y d (x)) 2 for some fixed function y d ∈ L 2 (Ω).
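For concreteness, with the tracking cost just mentioned, the reduced problem takes the following standard form (a sketch under the usual Tikhonov normalization ν/2, which is an assumption here; the general case replaces the first integrand by L(x, y_u(x))):

```latex
(\mathrm{P})\qquad \min_{u \in U_{ad}} J(u) :=
\frac{1}{2}\int_{\Omega} \big(y_u(x) - y_d(x)\big)^2\,\mathrm{d}x
\;+\; \frac{\nu}{2}\int_{\Omega} u(x)^2\,\mathrm{d}x .
```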
The paper is organized as follows. In Section 2, the state equation is analyzed. We address existence, uniqueness and regularity of the solution for both the linear and semilinear cases. Differentiability of the control-to-state mapping is also established. Finally, the existence of a solution for (P) as well as first and second order optimality conditions are proved in Section 3. The numerical analysis of (P) will be carried out in a forthcoming paper.

Analysis of the state equation
In this section we study equation (1.1), proving some results that will be used in the analysis of the control problem (P). Before studying (1.1), we analyze a linear equation involving the convection term. The section is divided into two subsections: the first is devoted to the linear equation and the second to the study of (1.1).

Study of the linear operator
The following assumption is needed for this analysis. Assumption 1.
Ω is an open domain in R n , n = 2 or 3, with a Lipschitz boundary Γ. A is the operator given by and satisfying the ellipticity condition a ij (x)ξ i ξ j ≥ Λ|ξ| 2 ∀ξ ∈ R n and for a.a. x ∈ Ω. (2.1) On the function b : Ω −→ R n we assume only measurability for the moment. Later, some additional integrability property will be required in the theorems. Throughout this paper we will take We will frequently use the Poincaré inequality As a consequence, we have that u H −1 (Ω) ≤ C Ω u L 2 (Ω) for all u ∈ L 2 (Ω). Let us consider the elliptic operator Theorem 2.1. Under Assumption 1, the linear operator A : H 1 0 (Ω) −→ H −1 (Ω) is an isomorphism. Proof. We prove the result for dimension n = 3; the case n = 2 is analogous. First, we observe that A is a linear continuous operator. Indeed, it is obvious that A : H 1 0 (Ω) −→ H −1 (Ω) is a continuous linear mapping due to the fact that a ij ∈ L ∞ (Ω). Since H 1 (Ω) ⊂ L 6 (Ω), it is enough to prove that b · ∇y, a 0 y ∈ L 6/5 (Ω). Using the Hölder inequality we get for n = 3 b · ∇y Hence, we have that A is a well-defined, linear and continuous operator. Now, as a consequence of the Fredholm alternative and the open mapping theorem, we know that either A is an isomorphism or there exists a nonzero element y ∈ H 1 0 (Ω) such that Ay = 0; see, for instance, [9, Chapter 6, Theorem 4]. To prove that the kernel of A reduces to 0 we adapt the proof of Theorem 8.1 in [10]. Let y ∈ H 1 0 (Ω) satisfy Ay = 0. We prove that y ≤ 0 in Ω; the reverse inequality follows by arguing on −y. We argue by contradiction and suppose that this is false. Then, we take 0 < ρ < ess sup x∈Ω y(x) and we define z(x) = (y(x) − ρ) + . Obviously we have that z ∈ H 1 0 (Ω). We denote Ω ρ = {x ∈ Ω : ∇z(x) ≠ 0}, then Using these facts and our assumptions on b and a 0 we get From here and the continuous embedding H 1 0 (Ω) ⊂ L 6 (Ω) we infer which contradicts the fact |Ω ρ | → 0 as ρ → ess sup x∈Ω y(x).
Indeed, if the set of points E = {x ∈ Ω : y(x) = ess sup x∈Ω y(x)} has zero Lebesgue measure, then it is obvious that |Ω ρ | → 0 as ρ → ess sup x∈Ω y(x). In the case |E| > 0, we have that ∇y(x) = 0 a.e. in E [10, Lemma 7.7], and consequently ∇z(x) = 0 a.e. in E as well. Hence, |Ω ρ | → 0 holds in any case.
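For reference, the divergence-form operator of Assumption 1 and the Poincaré inequality used throughout can be written in their standard forms (a sketch; C_Ω denotes the Poincaré constant of Ω):

```latex
Ay = -\sum_{i,j=1}^{n} \partial_{x_j}\!\big(a_{ij}(x)\,\partial_{x_i} y\big),
\qquad
\|y\|_{L^2(\Omega)} \le C_{\Omega}\,\|\nabla y\|_{L^2(\Omega)}
\quad \forall y \in H^1_0(\Omega).
```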
The following corollary is a straightforward consequence of Theorem 2.1.
is an isomorphism.
Under additional assumptions we have the following regularity result.
Theorem 2.4. Assume that a ij ∈ C 0,1 (Ω) for 1 ≤ i, j ≤ n and the ellipticity condition (2.1) holds, b ∈ L p (Ω) n for some p > n, and a 0 ∈ L 2 (Ω). We also suppose that Γ is of class C 1,1 or Ω is convex. Then A : H 2 (Ω) ∩ H 1 0 (Ω) −→ L 2 (Ω) is an isomorphism. Proof. First we observe that A : H 2 (Ω) ∩ H 1 0 (Ω) −→ L 2 (Ω) is an injective, continuous linear operator. Indeed, taking into account that H 2 (Ω) ⊂ W 1,2p/(p−2) (Ω) if p > n and H 2 (Ω) ⊂ C(Ω), we get the following estimates The above estimates prove that A is well defined and continuous. The injectivity of A is an immediate consequence of Theorem 2.1. Let us prove that A is surjective. To this end we divide the proof into three steps.
Step 1.- Here we regularize the coefficients b and a 0 . To this end, we consider two sequences {b k } ∞ k=1 ⊂ L ∞ (Ω) n and {a 0,k } ∞ k=1 ⊂ L ∞ (Ω) such that b k → b strongly in L p (Ω) n and 0 ≤ a 0,k → a 0 strongly in L 2 (Ω). We define the operator A k : Let us prove that A k is an isomorphism. The proof of the continuity and injectivity follows as we did for A. Now, given u ∈ L 2 (Ω) arbitrary, from Theorem 2.1 we deduce the existence of an element y k ∈ H 1 0 (Ω) such that A k y k = u. This equation can be written as follows Thus, we have that Ay k ∈ L 2 (Ω). The Lipschitz regularity of the coefficients a ij implies that A : [11] for a C 1,1 boundary Γ and a convex domain Ω, respectively. Hence, there exists a constant C A such that y k H 2 (Ω) ≤ C A Ay k L 2 (Ω) . From here and the above estimates we infer From the strong convergence of the sequences {b k } ∞ k=1 and {a 0,k } ∞ k=1 , we deduce the existence of some integer k 0 such that Inserting these inequalities in the above expression we infer ∀k ≥ k 0 where the first embeddings are compact. Notice that p > n and, hence, 2p/(p−2) < 6 holds for n = 3. Therefore, selecting we deduce the existence of a constant C λ such that ∀y ∈ H 2 (Ω) ∩ H 1 0 (Ω) ∇y Inserting these inequalities in the above estimates we get which leads to the desired estimate: ∀y ∈ H 2 (Ω) ∩ H 1 0 (Ω) Step 2.- Let us prove that y k → y = A −1 u strongly in H 1 0 (Ω). We have Hence, there exists an integer k 1 such that

This implies that
Hence, the operator (Ω) and its inverse is given by From here we obtain for every k ≥ k 1 Finally, we find with (2.6) and (2.7) that Step 3.-Finally, the estimate (2.5) and the convergence y k → y in H 1 0 (Ω) yields y k ⇀ y weakly in H 2 (Ω). Since u ∈ L 2 (Ω) was arbitrary, this implies the surjectivity of A.
Corollary 2.5. Under the assumptions of Theorem 2.4, and supposing in addition that div b ∈ L 2 (Ω), the operator A * : Then, the result follows from Theorem 2.4.

Analysis of the semilinear equation
Here, we analyze the equation (1.1). To deal with this equation we make the following hypotheses on the nonlinear term f . Assumption 2. We assume that f : Ω × R −→ R is a Carathéodory function, monotone nondecreasing with respect to the second variable, satisfying: for a.a. x ∈ Ω and ∀|y| ≤ M. (2.8) Now, we prove the existence and uniqueness of a solution for problem (1.1).
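The growth condition (2.8) is of the usual local type; a form consistent with how it is used below (a sketch, in which the majorant ψ_M ∈ Lp̄(Ω) is an assumption) reads:

```latex
\forall M > 0 \ \exists\, \psi_M \in L^{\bar p}(\Omega) \ \text{such that}\quad
|f(x,y)| \le \psi_M(x) \quad \text{for a.a. } x \in \Omega \text{ and } \forall\, |y| \le M .
```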
Theorem 2.6. Let b ∈ L p (Ω) n with p > 2 if n = 2 and p > 6 if n = 3. Then, under Assumptions 1 and 2, for every u ∈ Lp(Ω) equation (1.1) has a unique solution y u in H 1 0 (Ω) ∩ C(Ω). Moreover, there exists a constant K f independent of u such that Before proving this theorem we establish the following lemma.
Proof. We argue by contradiction, proceeding similarly to the proof of Theorem 2.1. If the statement of the lemma is false, then there exists 0 Using these facts and our assumptions on b and u i , and the monotonicity of g we get Now, we continue as in the proof of Theorem 2.1 to achieve the contradiction Proof of Theorem 2.6. The uniqueness of a solution is an immediate consequence of Lemma 2.7. The proof of existence is divided into three steps according to different assumptions on f . To simplify the presentation, we redefine f = f − f (·, 0) and u = u − f (·, 0) ∈ Lp(Ω).
Step 1.- f is bounded in Ω × R. In this case, we define C f = ess sup{|f (x, y)| : (x, y) ∈ Ω × R}. We consider the operator T : C(Ω) −→ C(Ω) given by T w = y w , where y w is the solution of the problem From Corollary 2.2 we have the existence and uniqueness of a solution y w ∈ H 1 0 (Ω) ∩ C 0,µ (Ω) for some µ ∈ (0, 1). This solution satisfies From the compactness of the embedding C 0,µ (Ω) ⊂ C(Ω), we deduce that T is a compact operator mapping the closed ball B ρ (0) into itself. Hence, from Schauder's fixed point theorem we infer the existence of a solution y u ∈ H 1 0 (Ω) ∩ C(Ω) of (1.1). Moreover, (2.9) follows from the above inequality and the redefinition of u. Step 2.- f ≥ 0. We define f k (x, y) = f (x, min{y, k}). Then f k is bounded and we can apply Step 1 to deduce the existence of a function y k ∈ H 1 0 (Ω) ∩ C(Ω) satisfying Now, by Corollary 2.2 there exists a function y ∈ H 1 0 (Ω) ∩ C(Ω) solving Ay + b(x) · ∇y = u + C f in Ω, y = 0 on Γ.

Subtracting both equations we get
Then, Lemma 2.7 implies that y k ≤ y in Ω. Hence, if we take k > y C(Ω) , we have that f k (x, y k ) = f (x, y k ), and therefore y k is a solution of (1.1). The estimate (2.10) follows from the bound for f (·, y k ) independent of k.
Step 3.- The general case. Let us define f k (x, y) = f (x, proj [−k,+k] (y)). Then, f k is bounded and from Step 1 we know that there exists y k ∈ H 1 0 (Ω) ∩ C(Ω) satisfying Ay k + b(x) · ∇y k + f k (x, y k ) = u. Now, we take z 1 ∈ H 1 0 (Ω) ∩ C(Ω) satisfying Az 1 + b(x) · ∇z 1 + f (x, z + 1 ) = u. The existence of such a function follows from Step 2 because f (x, z + 1 ) ≥ 0. Subtracting the equations satisfied by z 1 and y k we get Then, Lemma 2.7 implies that z 1 ≤ y k in Ω. Now, let z 2 ∈ H 1 0 (Ω) ∩ C(Ω) satisfy Az 2 + b(x) · ∇z 2 = u − f (x, − z 1 C(Ω) ). The existence of such a function is a consequence of the boundedness of f (·, − z 1 C(Ω) ), Theorem 2.1 and Corollary 2.3. From the equations satisfied by y k and z 2 we infer for every k > z 1 C(Ω) : where the last inequality follows from the fact that y k ≥ z 1 : Then, once again from Lemma 2.7 we obtain that y k ≤ z 2 . Therefore, z 1 ≤ y k ≤ z 2 and, hence, f k (x, y k ) = f (x, y k ) for every k > max{ z 1 C(Ω) , z 2 C(Ω) }, and consequently y k is a solution of (1.1) for k large enough. As in the previous step, the estimate (2.10) follows from the bound for f (·, y k ) independent of k.
In the next theorem, we establish some additional regularity for the solutions of (1.1).
Theorem 2.8. We suppose that Assumption 2 holds with p̄ = 2, a ij ∈ C 0,1 (Ω) for 1 ≤ i, j ≤ n and the ellipticity condition (2.1) is fulfilled, and b ∈ L p (Ω) n with p > 2 if n = 2 and p > 6 if n = 3. We also suppose that Γ is of class C 1,1 or Ω is convex. Then, for every u ∈ L 2 (Ω), (1.1) has a unique solution y u ∈ H 2 (Ω) ∩ H 1 0 (Ω). This is an immediate consequence of Theorems 2.4 and 2.6. The following result on the continuous dependence of the state y u with respect to u will be useful to prove the existence of a solution for the control problem (P). Theorem 2.9. Let {u k } ∞ k=1 ⊂ Lp(Ω) with p̄ > n/2 be a sequence weakly converging to u in Lp(Ω). Then, under the assumptions of Theorem 2.6, we have that y u k → y u strongly in H 1 0 (Ω) ∩ C(Ω). Proof. From Theorem 2.6 we know that {y u k } ∞ k=1 is bounded in H 1 0 (Ω) ∩ C(Ω). Hence, taking a subsequence we have that y u k ⇀ y in H 1 0 (Ω) and y u k * ⇀ y in L ∞ (Ω). Now, setting M = max 1≤k<∞ y u k C(Ω) , we deduce from (2.8) Then, using Corollary 2.2 and the compactness of the embedding C 0,µ (Ω) ⊂ C(Ω), we deduce that y u k → y strongly in C(Ω). As a consequence we have that f (x, y u k ) → f (x, y) strongly in Lp(Ω). Moreover, the compactness of the embedding Lp(Ω) ⊂ H −1 (Ω) implies that u k → u strongly in H −1 (Ω). Hence, from Theorem 2.1 we infer that y u k converges strongly in H 1 0 (Ω) to the solution of Ay + b · ∇y = u − f (x, y). Hence, we have that y = y u and y u k → y u strongly in H 1 0 (Ω) ∩ C(Ω). Since this convergence holds for any converging subsequence, we deduce that the whole sequence converges as indicated in the statement of the theorem.
To finish this section we analyze the differentiability of the relation u → y u . To this end, we make the following assumptions on f . Assumption 3. We assume that f : Ω × R −→ R is a Carathéodory function of class C 2 with respect to the second variable satisfying: For every M > 0 and ε > 0 there exists δ > 0, depending on M and ε, such that It is obvious that Assumption 3 implies Assumption 2. Therefore, all the previous results remain valid if we replace Assumption 2 by Assumption 3. Theorem 2.10. The mapping G : Lp(Ω) −→ H 1 0 (Ω) ∩ C(Ω), G(u) = y u , is well defined and of class C 2 . Proof. The fact that G is well defined is a straightforward consequence of Theorem 2.6. To prove the differentiability we will use the implicit function theorem as follows. We consider the vector space Y = {y ∈ H 1 0 (Ω) ∩ C(Ω) : Ay ∈ Lp(Ω)}. This is a Banach space when we endow it with the norm y Y = y H 1 0 (Ω) + y C(Ω) + Ay Lp(Ω) . Due to our assumption on b ∈ L p (Ω) n with p > 2 if n = 2 and p > 6 if n = 3, we can assume without loss of generality that p̄ is close enough to n/2 so that 2 < 2p̄/(2 − p̄) < p. Then, we have with the Hölder inequality From the continuous embeddings Lp(Ω) ⊂ L 2n/(n+2) (Ω) ⊂ H −1 (Ω) and the above inequality we deduce that Ay + b · ∇y ∈ Lp(Ω) ⊂ H −1 (Ω) for every y ∈ Y .
It is immediate to check that F is well defined and of class C 2 . Moreover, the linear mapping ∂F/∂y is an isomorphism. Indeed, if we consider the operator A defined by (2.2) with a 0 (x) = ∂f/∂y (x, y(x)), we have to prove that A : Y −→ Lp(Ω) is an isomorphism. From the definition of Y and the above estimates, we know that A is well defined and continuous. From Theorem 2.1 we also deduce the existence of a unique solution z ∈ H 1 0 (Ω) of the equation Az = v for every v ∈ Lp(Ω) ⊂ H −1 (Ω). In addition, from Corollary 2.2 we know that z ∈ C(Ω). Hence, we have that z ∈ Y and A is an isomorphism. Then, we can apply the implicit function theorem and easily deduce the theorem; see e.g. [3, Proposition 16].
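The implicit function theorem argument can be summarized as follows (a sketch; the mapping F and the derivative equation below are the standard ones in this setting, not reproduced verbatim from the paper's displays):

```latex
F : Y \times L^{\bar p}(\Omega) \to L^{\bar p}(\Omega), \qquad
F(y,u) = Ay + b\cdot\nabla y + f(\cdot\,,y) - u ,
```

so that F(G(u), u) = 0, and the directional derivative z_v = G′(u)v is the unique solution of

```latex
Az + b(x)\cdot\nabla z + \frac{\partial f}{\partial y}(x, y_u)\,z = v \ \ \text{in } \Omega,
\qquad z = 0 \ \ \text{on } \Gamma .
```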

Analysis of the optimal control problem
In this section, we first prove the existence of a global solution (ȳ, ū) of the control problem. Then, we derive first and second order necessary optimality conditions for local solutions. Finally, we prove sufficient conditions for local optimality. In the whole section we suppose that Assumptions 1 and 3 are fulfilled. In addition, we assume that b ∈ L p (Ω) n for some p > 2 if n = 2 and p > 6 if n = 3. Theorem 3.1. If U ad is bounded in L 2 (Ω) or L is bounded from below, then the control problem (P) has at least one solution ū.
Proof. Let {u k } ∞ k=1 ⊂ U ad be a minimizing sequence of (P). From the boundedness of U ad or the lower boundedness of L we deduce that {u k } ∞ k=1 is bounded in L 2 (Ω). Hence, we can take a subsequence, denoted in the same way, converging weakly in L 2 (Ω) to some element ū. Since U ad is weakly closed in L 2 (Ω), we infer that ū ∈ U ad . Moreover, Theorem 2.9 implies that y u k → y ū strongly in H 1 0 (Ω) ∩ C(Ω). Therefore, using the assumption (3.1) along with Lebesgue's dominated convergence theorem, we get that J(ū) ≤ lim inf k→∞ J(u k ) = inf (P) and, hence, ū is a solution of (P).
Before establishing the optimality conditions for (P), we study the differentiability of J. To this end we make the following assumptions on L.
Assumption 4. We assume that L : Ω × R −→ R is a Carathéodory function of class C 2 with respect to the second variable satisfying L(·, 0) ∈ L 1 (Ω) and, for all M > 0, there exist a function ψ M ∈ Lp(Ω) with p̄ > n/2 and a constant C L,M > 0 such that for a.e. x ∈ Ω and for all |y| ≤ M . In addition, for every M > 0 and ε > 0 there exists δ > 0, depending on M and ε, such that It is obvious that (3.1) holds under Assumption 4. In the rest of the paper, we will suppose that Assumption 4 is fulfilled. Then, we have the following differentiability result.
Theorem 3.2. The functional J is of class C 2 . Moreover, given u, v, v 1 , v 2 ∈ L 2 (Ω) we have Proof. The C 2 differentiability of J is an immediate consequence of Theorem 2.10, Assumption 4 and the chain rule. Moreover, the derivation of the formulas (3.4) and (3.5) is standard. The only delicate point is the analysis of the adjoint state equation (3.6). Let us prove the existence of a unique solution ϕ u ∈ H 1 0 (Ω) ∩ C(Ω). From Corollary 2.3 with a 0 (x) = ∂f/∂y (x, y u (x)) we know the existence and uniqueness of a solution ϕ u ∈ H 1 0 (Ω). Observe that (2.12) along with y u ∈ C(Ω) implies that a 0 ∈ L ∞ (Ω). Let us prove that ϕ u ∈ C(Ω) as well. First, we consider the case n = 3. Since b ∈ L p (Ω) 3 with p > 6, we can select r > 0 small enough so that 6(3 + r)/(3 − r) ≤ p; equality holds for r = (3p − 18)/(p + 6). Now we have with the Hölder inequality The assumption p > 2 for n = 2 implies that (p + 2)/2 > 2. Then, using the Hölder inequality again we get On the other hand, assumption (3.2) implies that ∂L/∂y (x, y u ) ∈ Lp(Ω). Hence, applying the results of [14, §7] or [10, Chapter 8] to the equation we deduce that ϕ u ∈ C(Ω).
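The adjoint state equation (3.6) discussed in this proof has the standard form (a sketch, assuming the usual transposition of the convection term; compare Corollary 2.5):

```latex
\begin{cases}
A^{*}\varphi_u - \operatorname{div}\!\big(b(x)\,\varphi_u\big)
+ \dfrac{\partial f}{\partial y}(x, y_u)\,\varphi_u
= \dfrac{\partial L}{\partial y}(x, y_u) & \text{in } \Omega,\\[4pt]
\varphi_u = 0 & \text{on } \Gamma ,
\end{cases}
```

and with this adjoint state the first derivative in (3.4) takes the usual form J′(u)v = ∫_Ω (φ_u(x) + νu(x)) v(x) dx.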
Since (P) is not a convex problem, we consider local solutions of (P) as well. Let us state precisely the different concepts of local solution. Definition 3.3. We say that ū ∈ U ad is an L r (Ω)-weak local minimum of (P), with r ∈ [1, +∞], if there exists some ε > 0 such that An element ū ∈ U ad is said to be a strong local minimum of (P) if there exists some ε > 0 such that We say that ū ∈ U ad is a strict (weak or strong) local minimum if the above inequalities are strict for u ≠ ū.
As far as we know, the notion of strong local solutions in the framework of control theory was introduced for the first time in [1]; see also [2]. We analyze the relationships among these concepts in the following lemma.
Lemma 3.4. The following properties hold: If U ad is bounded in L 2 (Ω), then 1. ū is an L 1 (Ω)-weak local minimum of (P) if and only if it is an L r (Ω)-weak local minimum of (P) for every r ∈ (1, +∞).
2. If ū is an L r (Ω)-weak local minimum of (P) for some r < +∞, then it is an L ∞ (Ω)-weak local minimum of (P).
3. ū is a strong local minimum of (P) if and only if it is an L r (Ω)-weak local minimum of (P) for all r ∈ [1, ∞).

4. ū is an L 2 (Ω)-weak local solution if and only if it is a strong local solution.
The reader is referred to [4] and [8] for the proof of this lemma. To deduce that any strong local solution is an L 2 (Ω)-weak local solution, the following estimate is used where B r (ū) denotes the ball of radius r in L 2 (Ω). This inequality follows from the next result.
Lemma 3.5. Let U be a bounded subset of Lp(Ω). Then there exists a constant M U > 0 such that y u − y v H 1 0 (Ω)∩C(Ω) ≤ M U u − v Lp(Ω) for all u, v ∈ U. Proof. Without loss of generality, we can suppose that U is convex. Otherwise, we replace it by its convex hull, which is also a bounded set. Given u, v ∈ U, from Theorem 2.10 and the mean value theorem we have Then, it is enough to prove that DG(û) L(Lp(Ω),H 1 0 (Ω)∩C(Ω)) is bounded by a constant M U for every û ∈ U. To this end we argue by contradiction and assume that there exist two sequences From Theorem 2.10 we know that z k satisfies the equation From the boundedness of {u k } ∞ k=1 in Lp(Ω), we deduce the existence of a subsequence, denoted in the same way, such that u k ⇀ u in Lp(Ω). Then, from Theorem 2.9 we get that y u k → y u strongly in H 1 0 (Ω) ∩ C(Ω). Therefore, from (2.12) we infer that where M = max 1≤k<∞ y u k C(Ω) . Now, we define the operators Ay = Ay + b(x) · ∇y + ∂f/∂y (x, y u )y, From Theorem 2.1 we get that A and A k are isomorphisms between H 1 0 (Ω) and H −1 (Ω). Moreover, from the above estimate we obtain for every y ∈ H 1 0 (Ω) with y H 1 0 (Ω) = 1 , H 1 0 (Ω)) and there exists k 0 such that This along with the continuity of the embedding Lp(Ω) ⊂ H −1 (Ω) implies On the other hand, from Corollary 2.2 it follows Hence, {z k } ∞ k=1 is bounded in H 1 0 (Ω) ∩ C(Ω), which contradicts the assumption. Now, we establish the first order optimality conditions. Theorem 3.6. Let ū be a local solution of (P) in any of the previous senses. Then there exist unique elements ȳ, φ̄ ∈ H 1 0 (Ω) ∩ C(Ω) such that This theorem is a consequence of the expression for J ′ given in (3.4) and the convexity of U ad , which implies that J ′ (ū)(u − ū) ≥ 0 holds for any local solution. As a consequence of this theorem we have the following regularity result on the optimal control. Corollary 3.7. Let ū satisfy (3.7)-(3.9) along with (ȳ, φ̄). Then ū ∈ H 1 (Ω) ∩ C(Ω) holds. Moreover, if a ij ∈ C 0,1 (Ω) for 1 ≤ i, j ≤ n, div b ∈ L 2 (Ω), the functions ψ M in (3.2) belong to L 2 (Ω), and Γ is of class C 1,1 or Ω is convex, then ȳ, φ̄ ∈ H 2 (Ω) ∩ H 1 0 (Ω) holds.
Finally, if U ad = L 2 (Ω), then we have that ū ∈ H 2 (Ω) ∩ H 1 0 (Ω). Proof. It is well known that (3.9) implies that Then, the H 1 (Ω) ∩ C(Ω) regularity of ū follows from this formula and the same regularity of φ̄. Under the additional assumptions on the data of the problem, the regularity of ȳ and φ̄ follows from Theorem 2.8 and Corollary 2.5. Finally, if U ad = L 2 (Ω), then (3.9) reduces to φ̄ + νū = 0, hence ū enjoys the same regularity as φ̄.
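The pointwise formula implied by (3.9), from which the stated regularity of ū is derived, is the usual projection relation for box constraints (a sketch of the standard formula):

```latex
\bar u(x) = \mathrm{Proj}_{[\alpha,\beta]}\Big(-\tfrac{1}{\nu}\,\bar\varphi(x)\Big)
= \max\Big\{\alpha,\ \min\big\{\beta,\ -\tfrac{1}{\nu}\,\bar\varphi(x)\big\}\Big\}
\quad \text{for a.a. } x \in \Omega .
```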
In order to write the second order optimality conditions we introduce the cone of critical directions. Let ū ∈ U ad be a function satisfying the system (3.7)-(3.9) along with the associated state ȳ and adjoint state φ̄. We define the cone Let us observe that (3.9) implies that Therefore, if v ∈ L 2 (Ω) satisfies (3.10), then J ′ (ū)v ≥ 0 holds, and J ′ (ū)v = 0 if and only if v(x) = 0 whenever φ̄(x) + νū(x) ≠ 0.
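A standard form of this critical cone (a sketch; the sign conditions below are the usual ones for the box constraints α ≤ u ≤ β, writing d̄ = φ̄ + νū for the derivative density in (3.4)):

```latex
C_{\bar u} = \Big\{ v \in L^2(\Omega) \ :\
v(x) \ge 0 \ \text{if } \bar u(x) = \alpha,\quad
v(x) \le 0 \ \text{if } \bar u(x) = \beta,\quad
v(x) = 0 \ \text{if } \bar d(x) \neq 0 \Big\}.
```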

(3.11)
then there exist ε > 0 and κ > 0 such that Proof. The reader is referred to [6] and [7] for the proof of this theorem. To reproduce that proof we have to use that y u k → y u and ϕ u k → ϕ u strongly in H 1 0 (Ω) ∩ C(Ω) when u k ⇀ u in L 2 (Ω). The convergence for the states is proved in Theorem 2.9. Here we prove the part corresponding to the adjoint states. To this end we set ∂y (x, y u k )ϕ.
Since y u k → y u in C(Ω), there exists M > 0 such that y u k C(Ω) ≤ M ∀k. Then, from (2.12) and the mean value theorem we deduce for ϕ H 1 0 (Ω) ≤ 1: Hence, we can proceed as in the proof of Theorem 2.4 to deduce the existence of k 0 such that Hence, we get with (3.2) (Ω)) y u − y u k L 2 (Ω) → 0. It remains to prove that ϕ u − ϕ u k C(Ω) → 0. To this end, we subtract the equations satisfied by ϕ u and ϕ u k and we get Let us estimate the terms of the right hand side. For the first term we argue as in the proof of Theorem 3.2 and obtain for some r > 0 b(ϕ u − ϕ u k ) L 3+r (Ω) 3 ≤ b Hence, using these convergences to zero and the classical L ∞ (Ω) estimates for linear elliptic operators we conclude that ϕ u − ϕ u k C(Ω) → 0.
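In summary, a second order sufficient condition of this type yields local quadratic growth in the standard form (a sketch of the usual conclusion, following the formulation in [6] and [7]; ε and κ are the constants provided by Theorem 3.8):

```latex
J(\bar u) + \frac{\kappa}{2}\,\| u - \bar u \|_{L^2(\Omega)}^{2} \le J(u)
\qquad \forall\, u \in U_{ad} \cap B_{\varepsilon}(\bar u),
```

where B_ε(ū) denotes the closed ball of radius ε in L 2 (Ω).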
The following corollary is an immediate consequence of Theorem 3.8 and Lemma 3.5.