ERROR ESTIMATES FOR THE NUMERICAL APPROXIMATION OF A DISTRIBUTED CONTROL PROBLEM FOR THE STEADY-STATE NAVIER–STOKES EQUATIONS∗

We obtain error estimates for the numerical approximation of a distributed control problem governed by the stationary Navier–Stokes equations, with pointwise control constraints. We show that the L2-norm of the error for the control is of order h2 if the control set is not discretized, while it is of order h if it is discretized by piecewise constant functions. These error estimates are obtained for local solutions of the control problem, which are nonsingular in the sense that the linearized Navier–Stokes equations around these solutions define some isomorphisms, and which satisfy a second order sufficient optimality condition. We establish a second order necessary optimality condition. The gap between the necessary and sufficient second order optimality conditions is the usual gap known for finite dimensional optimization problems.


Introduction.
The goal of this paper is to derive error estimates for the numerical approximation of a distributed optimal control problem governed by the steady-state Navier-Stokes equations, with pointwise control constraints. More precisely, we consider a problem (P) consisting in the minimization of a quadratic tracking functional (1.1) over controls u ∈ U_ad, subject to the state equation (1.2). We can easily show that problem (P) admits at least one solution. On one hand, uniqueness of the solution to problem (P) is not guaranteed, even if (1.2) has a unique solution (which is not necessarily the case). On the other hand, we can only hope to obtain error estimates for solutions to problem (P) which are locally unique. Local uniqueness can be proved for solutions satisfying first order and sufficient second order optimality conditions. When first order optimality conditions in qualified form are satisfied by a local solution (ū, ȳ) of problem (P), we have

ū(x) = Proj_{[α,β]}(−(1/N) C*Φ(x)) for a.e. x ∈ ω,

where Proj_{[α,β]} is the pointwise projection onto [α, β] and Φ is the adjoint state associated with (ū, ȳ). Thus, even if Φ is regular, because of the projection operator Proj_{[α,β]} (due to the control constraints), ū is in general only a Lipschitz function.
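The effect of the projection can be illustrated numerically. The sketch below is only a one-dimensional illustration, not the problem of the paper: it assumes N = 1, a scalar control, and a smooth stand-in for C*Φ, and it shows that clipping a smooth function produces a control that is Lipschitz but no longer smooth.

```python
import numpy as np

def proj_box(v, alpha, beta):
    """Pointwise projection onto the interval [alpha, beta]."""
    return np.minimum(np.maximum(v, alpha), beta)

# Smooth stand-in for -(1/N) C*Phi; its projection is Lipschitz but has
# kinks at the boundary of the active set {x : |phi(x)| > 0.5}.
x = np.linspace(0.0, 1.0, 2001)
phi = np.sin(2.0 * np.pi * x)
u_bar = proj_box(phi, -0.5, 0.5)

# The projection is 1-Lipschitz on function values, so u_bar inherits
# the Lipschitz constant of phi (estimated here by divided differences).
lip_phi = np.max(np.abs(np.diff(phi))) / (x[1] - x[0])
lip_u = np.max(np.abs(np.diff(u_bar))) / (x[1] - x[0])
```

The projected control stays within the bounds and is no steeper than the unconstrained quantity, but it is not differentiable where the constraint becomes active.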
Assuming that (ū, ȳ) satisfies first order and sufficient second order optimality conditions, we can define a discrete control problem (P_h) by discretizing the state equation (1.2) with a finite element method (here h is the mesh size of the underlying triangulation, and we assume that the family of triangulations is regular; see section 4). We consider two cases: the case where the control set in (P_h) is still U_ad, and the case where the control set U^h_ad is the set of functions in U_ad which are piecewise constant on the elements of the triangulation. We show that there exists ĥ such that, for all 0 < h ≤ ĥ, the discrete control problem (P_h) admits at least one local solution ū_h in a ball B_ρ(ū). We prove that the corresponding sequences {ū_h}_h strongly converge to ū in L² (see Theorem 4.11). When the control set in (P_h) is U_ad, we show that

(1.3) ‖ū_h − ū‖_{L²} ≤ C h²,

while if the control set is U^h_ad, we prove that

(1.4) ‖ū_h − ū‖_{L²} ≤ C h

(see Theorem 4.18). To the best of our knowledge, both results are new. For numerical computations it seems easier to solve (P_h) when the control set is discretized, that is, when the controls belong to U^h_ad. However, it is also possible to solve it without a priori discretizing the control set (see, e.g., [16]).
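The order h in (1.4) matches the best L² approximation of a Lipschitz function by piecewise constants. The following self-contained check is not tied to the paper's discretization; it assumes uniform one-dimensional cells and cell averages, and observes the convergence order numerically.

```python
import numpy as np

def l2_error_cell_averages(u, n_cells, n_quad=64):
    """L2(0,1) error of the piecewise-constant cell-average approximation."""
    h = 1.0 / n_cells
    err2 = 0.0
    for k in range(n_cells):
        xs = np.linspace(k * h, (k + 1) * h, n_quad)
        vals = u(xs)
        err2 += h * np.mean((vals - vals.mean()) ** 2)
    return np.sqrt(err2)

u = lambda x: np.abs(x - 0.37)          # Lipschitz but not C^1
e_coarse = l2_error_cell_averages(u, 16)
e_fine = l2_error_cell_averages(u, 32)
rate = np.log2(e_coarse / e_fine)       # observed convergence order, close to 1
```

Halving the cell size roughly halves the error, consistent with first order convergence for Lipschitz data.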
Before comparing our results with those existing in the literature, let us make some comments. Knowing that ū is a Lipschitz function, the error estimate (1.4), obtained when the discrete control set is defined with piecewise constant functions, is consistent with the estimates obtained by approximating Lipschitz functions by piecewise constant functions. The result obtained in (1.3) is more surprising. Indeed, as we are going to see, this kind of result is already known for problems without control constraints. But in that case the optimal control belongs to H², and the error estimate is then directly derived from error estimates for the adjoint state. Here we obtain the same order of error estimate, but with control constraints. As far as we know, this kind of result was not previously known. Moreover, our method is quite general, and it can be used for other problems, provided that we are able to obtain error estimates for the discrete state and discrete adjoint equations.
Let us come back to the existing results in the literature. For optimal control problems of the steady-state Navier-Stokes equations with a distributed control and a slightly different functional, Gunzburger, Hou, and Svobodny have proved error estimates similar to (1.3) in the case when there are no control constraints and the control acts everywhere in Ω (see [13, end of section 5.2]). But for a distributed control localized in Ω, the error estimate is only of order h^{3/2−ε} (see [13, end of section 5.3]). To prove these estimates they do not assume that the optimal solution (ū, ȳ) which they want to approximate satisfies a sufficient second order optimality condition. But they assume that the optimality system satisfied by (ū, ȳ) is regular, in the sense that the corresponding linearized optimality system defines an isomorphism. This approach extends to optimality systems of control problems the classical one used in the numerical approximation of the steady-state Navier-Stokes equations; see, e.g., [12]. This method has been used in the literature for other similar problems [17] and for the boundary control of the stationary Navier-Stokes equations [14, 15]. Observe that the estimates are not the same depending on whether the boundary of the domain where the control is applied is empty or nonempty [14, Theorem 4.6 and the assumptions in Theorem 3.5]. In any case this method cannot be used for problems with control constraints. Another approach, used more recently for problems without control constraints, is the one by Deckelnick and Hinze [10], which is based on the Kantorovich convergence theorem for Newton's method. In that case a second order sufficient optimality condition is needed, but the Kantorovich convergence theorem is proved only for systems of equations and not for generalized equations. Thus this method cannot be used for problems with control constraints.
For problems with control constraints, obtaining both optimality conditions and error estimates is more complicated. Indeed, even if the nonlinear Navier-Stokes equations are well posed, the linearized ones are not necessarily well posed. Thus in general one can obtain optimality conditions only in nonqualified form, that is, optimality conditions of Fritz-John type. Such optimality conditions for optimal control problems of the stationary Navier-Stokes equations have been obtained by Abergel and Casas [1]; see also Casas [3]. Optimality conditions in qualified form, that is, optimality conditions of Karush-Kuhn-Tucker type, may be obtained either by assuming that the data of the problem are small enough with respect to the viscosity parameter ν (see, e.g., Roubiček and Tröltzsch [19], Tröltzsch and Wachsmuth [21], De Los Reyes [18]) or by assuming some qualification condition on the set of feasible controls, as in Gunzburger, Hou, and Svobodny [15, condition (2.7)] or in [1].
Here, since we are mainly interested in the numerical approximation of control problem (P), we assume that the local optimal solution (ū, ȳ) we want to approximate is a nonsingular solution, that is, that the linearized Navier-Stokes equations about ȳ define an isomorphism. As already mentioned, this is the classical assumption used in the numerical approximation of the Navier-Stokes equations (see, e.g., [12, p. 297]). Thanks to this assumption we derive a necessary optimality condition of the form

J''(ū)v² ≥ 0 for all v ∈ C_ū,

where C_ū is the set of directions belonging to the tangent cone to U_ad at ū satisfying J'(ū)v = 0; see Theorem 3.6 and Corollary 3.7 (here J(u) = F(u, y_u), where y_u is the unique solution to (1.2) corresponding to u, when u belongs to some ball B_ρ(ū)).
The weakest sufficient optimality condition we can state is the following:

(1.6) J''(ū)v² > 0 for all v ∈ C_ū \ {0}.

Under this condition, and assuming that the first order optimality conditions are in qualified form, we prove that (ū, ȳ) is the unique local solution to (P) in some ball B_ρ(ū). (See Theorem 3.8. Notice that we cannot hope to prove such a result without assuming that ū satisfies the first order optimality conditions in qualified form and condition (1.6).) This local uniqueness result is essential to carry out the numerical analysis of the control problem. The discrete state equation is stated in section 4. The well posedness of the discrete state equation is proved in Theorem 4.8, and error estimates are obtained in Lemma 4.10. The discrete adjoint equation is studied in section 4.3. Its well posedness and error estimates are proved in Lemmas 4.12 and 4.13. Error estimates for the control problem are obtained in section 4.4.
Let us finally mention that, in the case of control problems governed by scalar semilinear elliptic equations, this approach to deriving error estimates has been developed by Arada, Casas, and Tröltzsch [2], Casas [4, 5], Casas, Mateos, and Tröltzsch [6], and Casas and Raymond [7].

Assumptions and preliminary results.
where χ_ω is the characteristic function of ω. In the functional (1.1), we assume that N > 0 and y_d ∈ L^r(Ω; R^d), for some r > d, are given and fixed. For u ∈ L²(ω; R^m), we denote by u_j the components of u, that is, u = (u_j)_{1≤j≤m}. For 1 ≤ j ≤ m, let −∞ ≤ α_j < β_j ≤ +∞ be extended real numbers, and set

U_ad = {u ∈ L²(ω; R^m) : α_j ≤ u_j(x) ≤ β_j for a.e. x ∈ ω, 1 ≤ j ≤ m}.

In the case when α_j = −∞, this means that the corresponding constraint is absent. The same convention is adopted if β_j = +∞.
To study (1.2) we have to introduce some function spaces and operators. Throughout the following we set H^s(Ω) = H^s(Ω; R^d), H^1_0(Ω) = H^1_0(Ω; R^d), and W^{s,p}(Ω) = W^{s,p}(Ω; R^d) for 1 ≤ p ≤ ∞ and s > 0. We introduce different spaces of divergence-free vector fields:

V^1_0(Ω) = {y ∈ H^1_0(Ω) : div y = 0 in Ω},
V^0_n(Ω) = {y ∈ L²(Ω) : div y = 0 in Ω, y · n = 0 on Γ},

where n is the outward unit normal to Γ. The dual space of V^1_0(Ω) with respect to the pivot space V^0_n(Ω) is denoted by V^{−1}(Ω). Thus we have

V^1_0(Ω) ⊂ V^0_n(Ω) ⊂ V^{−1}(Ω)

with dense and continuous imbeddings. The orthogonal projector from L²(Ω) onto V^0_n(Ω) will be denoted by P. The operator P can be extended to a bounded operator from H^{−1}(Ω) to V^{−1}(Ω). For notational simplicity this extension will still be denoted by P.
Let us consider the bilinear form on H^1_0(Ω) and the nonlinear operator associated with the weak formulation of (1.2). This weak formulation (2.1) is equivalent to an equation in V^{−1}(Ω), which we shall simply write in the form (2.2). We know that, for all u ∈ L²(ω; R^m), equation (2.1), or equivalently (2.2), admits at least one solution y ∈ V^1_0(Ω). The pressure appearing in (1.2) is the unique function in L²_0(Ω) satisfying (2.3). The following regularity result will be used throughout this paper. It is an immediate consequence of the classical result by Cattabriga [8].
Theorem 2.2. There exists a constant C > 0 such that, if y is a solution of (1.2) and p the associated pressure, then y ∈ W^{2,r}(Ω), p ∈ W^{1,r}(Ω), and the corresponding norms are bounded in terms of the data.

Proof. The estimate of ‖y‖_{V^1_0(Ω)} is classical. Using this estimate, since d ≤ 3, we can bound the convection term. Thus, from estimates for the Stokes equation, we successively deduce improved regularity for y and p, which yields the desired estimate if 3 ≤ r < ∞.
It is well known that the solution of (1.2) is unique when ν is large enough with respect to the right-hand side; see, for instance, Temam [20]. Since this is a strong assumption, we are instead interested in the solutions of (1.2) which are locally unique. These solutions, called nonsingular solutions, are defined below.
Definition 2.3. We shall say that y ∈ V^1_0(Ω) is a nonsingular solution of (1.2) if the Navier-Stokes equations linearized about y define an isomorphism; in that case, we will also say that the pair (u, y) is a nonsingular solution of (1.2).
Remark 2.4. For a nonsingular solution (u, y) of (1.2), the condition in Definition 2.3 corresponds to the one stated in [12, Chapter 4, condition (3.4)], which is used to obtain the error estimates for the approximation of the Navier-Stokes equations.
The following theorem is a straightforward consequence of the implicit function theorem and will be useful in what follows.
Theorem 2.5. Let (ū, ȳ) be a nonsingular solution of (1.2). Then there exist neighborhoods O(ū) of ū in L²(ω; R^m) and O(ȳ) of ȳ in V^1_0(Ω), and a mapping of class C^∞, u ↦ y_u, from O(ū) to O(ȳ), such that y_u is the unique solution of (1.2) in O(ȳ) corresponding to u; moreover, the derivatives z_v and w of this mapping satisfy the corresponding linearized equations for all u ∈ O(ū).

Lemma 2.6. Let (ū, ȳ) be as in Theorem 2.5, and let p be the associated pressure (the solution of (2.3) corresponding to ȳ). Let (u_k)_k be a sequence in O(ū) weakly converging to ū in L²(ω; R^m). Let y_k be the solution to (1.2) in O(ȳ) corresponding to u_k, and let p_k be the associated pressure. Then (y_k)_k converges to ȳ in V^1_0(Ω), and (p_k)_k converges to p in L²_0(Ω).

Proof. The proof is an easy consequence of Theorem 2.2 and of formula (2.3).

Analysis of the control problem.
The existence of a solution of problem (P) can be obtained by the usual approach of taking a minimizing sequence, which is bounded in L²(ω; R^m) × V^1_0(Ω), and passing to the limit; see, for instance, [18] for a detailed proof. In this section we derive the first and second order optimality conditions for a local solution (ū, ȳ) in U_ad × V^1_0(Ω).

First order optimality conditions. Let us precisely define local solutions of (P).
Definition 3.1. We shall say that (ū, ȳ) ∈ U_ad × V^1_0(Ω) is a local solution of (P) if and only if (ū, ȳ) satisfies (1.2) and there exist neighborhoods of ū and ȳ in which (ū, ȳ) minimizes the cost functional among the feasible pairs.

The following theorem was proved by Abergel and Casas [1] for a slightly different functional, but the proof can be repeated for our problem step by step, with only the obvious modifications.

Theorem 3.2. Let (ū, ȳ) be a local solution of (P); then there exist a real number λ and some elements Φ ∈ W^{2,r}(Ω) and π, p ∈ W^{1,r}(Ω) satisfying the optimality system (3.2)-(3.5).

These optimality conditions are of Fritz-John type, and we are interested in the cases where λ can be chosen equal to one. Gunzburger, Hou, and Svobodny [14] introduced an assumption on U_ad at the local solution (ū, ȳ): the control set U_ad is said to have property (C) at (ū, ȳ) if a certain linearized system is solvable with admissible directions. Here we will make a different assumption, which will be crucial in what follows, in particular for the numerical analysis. We consider only local solutions (ū, ȳ) of (P) such that (ū, ȳ) is a nonsingular solution of (2.2). In that case we shall say that (ū, ȳ) is a local nonsingular solution of (P). For such a local nonsingular solution we can apply Theorem 2.5 and define the control problem

(P_{O(ū)}) min{J(u) : u ∈ U_ad ∩ O(ū)}, where J(u) = F(u, y_u).

Then ū is a local solution of (P_{O(ū)}). Let us study the differentiability properties of J.
Theorem 3.3. The functional J: O(ū) → R is of class C^∞, and its first derivative is given by

(3.6) J'(u)v = ∫_Ω (y_u − y_d) · z_v dx + N ∫_ω u · v dx = ∫_ω (C*Φ_u + N u) · v dx,

where z_v is the solution of (2.6) and Φ_u is the adjoint state associated with u.

The proof follows easily from Theorem 2.5. The only delicate point is the definition of Φ_u. Let us remark that (3.8) is equivalent to an adjoint equation involving the operator A*, and due to Theorem 2.5 the corresponding operator defines an isomorphism. By using the previous theorem we get the following result.

Theorem 3.4. Let (ū, ȳ) ∈ U_ad × V^1_0(Ω) be a local nonsingular solution of (P), and let p be the associated pressure; then there exist some elements Φ ∈ W^{2,r}(Ω) and π ∈ W^{1,r}(Ω) satisfying (3.2)-(3.5) with λ = 1. It is enough to take into account that J'(ū)(u − ū) ≥ 0 for all u ∈ U_ad and to use (3.6).
Using the first order necessary conditions we can deduce some extra regularity for the optimal control, the state, and the adjoint state.
Theorem 3.5. Let (ū, ȳ) be a local nonsingular solution of (P) and let Φ be the adjoint state defined in Theorem 3.4. Then ū is Lipschitz on ω, and ȳ and Φ belong to W^{2,r}(Ω).

Proof. Taking into account that Cū ∈ L²(Ω) and the assumption on f, it is enough to apply Theorem 2.2 to deduce that ȳ belongs to H²(Ω) and that Φ belongs to W^{2,r}(Ω). On the other hand, Φ ∈ W^{2,r}(Ω) ⊂ C^{0,1}(Ω̄; R^d) because r > d. Now, using the Lipschitz property of the function M defining C and the representation of the optimal control deduced from (3.5), we obtain

(3.9) ū(x) = Proj_{[α,β]}(−(1/N) C*Φ(x)) for a.e. x ∈ ω,

which gives the desired regularity for ū. Now, still using Theorem 2.2, we obtain the regularity of ȳ.

Second order optimality conditions.
To perform the numerical analysis of the problem as well as the analysis of the algorithms of optimization, second order sufficient conditions are required.These sufficient conditions should be as unrestrictive as possible.One way of measuring this is to compare them with the necessary second order conditions and check if the gap is small.This is the reason why we first introduce the second order necessary conditions.
Second order conditions have to be written for directions v ∈ T_{U_ad}(ū) such that J'(ū)v = 0, where T_{U_ad}(ū) is the tangent cone to U_ad at ū. To characterize these directions, we introduce d(x) = C*Φ(x) + Nū(x) for x ∈ ω, and the following sign conditions:

(3.11) v_j(x) ≥ 0 if ū_j(x) = α_j,
(3.12) v_j(x) ≤ 0 if ū_j(x) = β_j.

Now we define the cone

C_ū = {v ∈ L²(ω; R^m) : v satisfies (3.11)-(3.12) and v_j(x) = 0 if d_j(x) ≠ 0}

for a.e. x ∈ ω and 1 ≤ j ≤ m.
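As a discrete analogue, one can test on a grid whether a direction satisfies the conditions defining the critical cone. The sketch below is illustrative only: it treats a single scalar component at finitely many points, with hypothetical arrays standing in for ū, d, and v.

```python
import numpy as np

def in_critical_cone(v, u_bar, d, alpha, beta, tol=1e-12):
    """Check, at grid points, the conditions defining the critical cone:
    v >= 0 where u_bar = alpha, v <= 0 where u_bar = beta,
    and v = 0 wherever d != 0."""
    at_lower = np.isclose(u_bar, alpha)
    at_upper = np.isclose(u_bar, beta)
    ok_lower = np.all(v[at_lower] >= -tol)
    ok_upper = np.all(v[at_upper] <= tol)
    ok_zero = np.all(np.abs(v[np.abs(d) > tol]) <= tol)
    return bool(ok_lower and ok_upper and ok_zero)

# Hypothetical sampled data: u_bar touches the lower bound at the first two
# points and the upper bound at the last two; d is nonzero only where the
# constraint is strongly active.
u_bar = np.array([0.0, 0.0, 0.5, 1.0, 1.0])
d = np.array([2.0, 0.0, 0.0, -3.0, 0.0])
ok = in_critical_cone(np.array([0.0, 0.2, -0.7, 0.0, 0.0]), u_bar, d, 0.0, 1.0)
bad = in_critical_cone(np.array([1.0, 0.0, 0.0, 0.0, 0.0]), u_bar, d, 0.0, 1.0)
```

The first direction is admissible (it vanishes wherever d is nonzero and has the right signs at the bounds), while the second violates the condition v = 0 on {d ≠ 0}.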
Corollary 3.7. Let (ū, ȳ) be a nonsingular local solution of (P) and let Φ be the corresponding adjoint state. Then F''(ū, ȳ)(v, z)² ≥ 0 for every (v, z) satisfying the linearized state equation (2.6) with v ∈ C_ū.
To state second order sufficient conditions we will not suppose that (ū, ȳ) is a nonsingular solution of the Navier-Stokes equations (1.2).The result we are going to state is the following.
Theorem 3.8. Suppose that ū satisfies the first order optimality conditions (3.2)-(3.5) with λ = 1 and the second order condition (3.15). Then there exist ε > 0 and μ > 0 such that

F(ū, ȳ) + μ ‖u − ū‖²_{L²(ω;R^m)} ≤ F(u, y)

for every (u, y) satisfying (1.2), with u ∈ U_ad and ‖u − ū‖_{L²(ω;R^m)} ≤ ε.

Proof. Let us suppose the theorem is false. In that case, for every k ∈ N there exists a pair (u_k, y_k) satisfying (1.2), with u_k ∈ U_ad, ‖u_k − ū‖_{L²(ω;R^m)} ≤ 1/k, and F(u_k, y_k) < F(ū, ȳ) + (1/k)‖u_k − ū‖²_{L²(ω;R^m)}. Setting ρ_k = ‖u_k − ū‖_{L²(ω;R^m)}, v_k = (u_k − ū)/ρ_k, and z_k = (y_k − ȳ)/ρ_k, we have ‖v_k‖_{L²(ω;R^m)} = 1 and {z_k} bounded; hence there exist weakly convergent subsequences in L²(ω; R^m) and L²(Ω), still indexed by k, such that v_k ⇀ v and z_k ⇀ z. We are going to check that the pair (v, z) satisfies the linearized equation (2.6) and that v ∈ C_ū.
Let us now check that v ∈ C_ū. The sign conditions (3.11)-(3.12) are satisfied by the v_{k,j}, and this is conserved when we pass to the weak limit because the set of functions satisfying these sign conditions is closed and convex in L²(ω; R^m). On the other hand, using condition (3.16), for all k we have a bound for ∫_ω d(x) · v_k(x) dx; since v_k ⇀ v weakly in L²(ω; R^m), we can pass to the limit when k tends to infinity, and we get ∫_ω d(x) · v(x) dx ≤ 0. The sign condition (3.5) implies that d_j(x)v_j(x) ≥ 0 for a.e. x ∈ ω; therefore the above inequality is equivalent to d_j(x)v_j(x) = 0 for a.e. x ∈ ω and 1 ≤ j ≤ m, and hence v ∈ C_ū. Making a second order Taylor expansion of F at (ū, ȳ), with condition (3.16), we obtain (3.17). Notice that the pair (v_k, z_k) satisfies (3.17) but does not satisfy the linearized equation (2.6). We can rewrite (3.17) using the adjoint state Φ and an integration by parts. Since the v_k satisfy the sign condition, we have d(x) · v_k(x) ≥ 0 a.e. in ω. Taking the inferior limit in the resulting inequality, we deduce that F''(ū, ȳ)(v, z)² ≤ 0. Since v ∈ C_ū and the pair (v, z) satisfies the linearized equation (2.6), this is possible only if (v, z) = (0, 0). The sequence {z_k}_{k=1}^∞ converges strongly in L²(Ω) and weakly in V^1_0(Ω). Since Φ ∈ L^∞(Ω), by passing to the limit when k tends to infinity, we obtain that the remaining terms vanish. The last three relations imply that v_k → 0 strongly in L²(ω; R^m). So we have proved that (v_k, z_k) → 0 strongly in L²(ω)^m × L²(Ω), which contradicts the fact that ‖v_k‖_{L²(ω;R^m)} = 1. The proof is complete.
The sufficient condition (3.15) is the best possible. Actually, the gap between (3.15) and the second order necessary condition (3.14) is the same as in finite dimensional optimization. In the case of nonsingular solutions we have the following result, analogous to Theorem 3.6.
To carry out the numerical analysis of control problem (P), we will use the following condition, equivalent to (3.15), which may seem stronger but is not, as we will see below. Given τ > 0, let us define a bigger cone than C_ū in the following way:

C_ū^τ = {v ∈ L²(ω; R^m) : v satisfies (3.11)-(3.12) and v_j(x) = 0 if |d_j(x)| > τ}.

Assume that (ū, ȳ) satisfies (3.2)-(3.5) with λ = 1. Then condition (3.15) is equivalent to the existence of δ > 0 and τ > 0 such that (3.25) holds for every (v, z) satisfying the linearized state equation (2.6) and v ∈ C_ū^τ.
Proof. Notice that C_ū ⊂ C_ū^τ, so one implication is immediate. To prove the converse, we argue by contradiction and obtain sequences {v_k} and {z_k}. Then there exist two weakly convergent subsequences in L²(ω; R^m) and L²(Ω), still indexed by k, such that v_k ⇀ v and z_k ⇀ z. Repeating the argument of the proof of Theorem 3.8, we deduce that the pair (v, z) satisfies the linearized equation (2.6). Given ε > 0, on the set ω_ε where |d_j| ≥ ε it follows that for k > 1/ε all the terms of the sequence {∫_{ω_ε} v_{k,j}(x) d_j(x) dx}_k are 0, and so the limit is also 0. Since v satisfies the sign condition (3.5), this can happen only if v_j(x) = 0 almost everywhere in ω_ε. Since ε is arbitrarily small, we conclude that v_j(x) = 0 for a.e. x such that |d_j(x)| ≠ 0, and so v ∈ C_ū. Finally, taking the lower limit in (3.24), we obtain a contradiction with (3.15). We complete the proof by arguing as at the end of the proof of Theorem 3.8.

Numerical analysis of the state equation. Let X_h ⊂ H^1_0(Ω) and M_h ⊂ L²_0(Ω) be two finite dimensional spaces satisfying the assumptions (H1)-(H3) stated below.
(H1) (Approximation property of X_h). There exists an operator from H^1_0(Ω) to X_h satisfying the approximation and inverse estimates (a)-(d) referred to below.

(H2) (Approximation property of M_h). There exists an operator from L²_0(Ω) to M_h with analogous approximation properties.

(H3) (Uniform inf-sup condition). For each p_h ∈ M_h there exists y_h ∈ X_h such that (p_h, div y_h) ≥ C ‖p_h‖_{L²(Ω)} ‖y_h‖_{H^1_0(Ω)}, where C > 0 is independent of h, p_h, and y_h.
Remark 4.1. Assumptions (H1)(b), (H1)(c), and (H1)(d) are needed to establish uniform convergence for the approximation of the state and the adjoint state (cf. Lemmas 4.10 and 4.13). In particular, if we use the finite element method with a quasi-uniform family of triangulations, the above assumptions are satisfied for the Taylor-Hood finite element method and for the (P1-Bubble, P1) finite element method (see [12, p. 98, Lemma A.7 on p. 103, and Chapter 2]). The quasi-uniformity condition can be relaxed in some cases. For instance, Eriksson [11] gives some conditions on a locally refined family of triangulations in order to have an inverse inequality similar to (H1)(d).
Conversely, let us assume that ∂_{(y,p)}F(ū, ȳ, p) is an automorphism of H^1_0(Ω) × L²_0(Ω). It is easy to check that y ∈ V^1_0(Ω) is the unique solution of Ay + B'(ȳ)y = g. Let T_h be the associated bounded linear solution operator.

Remark 4.3. Notice that if F_h(u, y, p) = 0, then (y, p) belongs to X_h × M_h and is a solution of (4.1). Conversely, if (y, p) ∈ X_h × M_h is a solution of (4.1), then F_h(u, y, p) = 0.

Now we want to prove that if ȳ is nonsingular and if ‖y − ȳ‖_{H^1_0(Ω)} is small enough, then ∂_{(y,p)}F_h(u, y, p) is an automorphism of H^1_0(Ω) × L²_0(Ω). For that we make the following additional and usual assumptions concerning the approximation results for the Stokes problem. Before proving the desired property of ∂_{(y,p)}F_h(u, y, p), we establish several lemmas.
Lemma 4.4. There exists C > 0, independent of h, such that the discrete solutions satisfy the stated stability estimates. The estimate for the pressure q_h follows from the inf-sup condition (H3): indeed, it suffices to take w_h such that (q_h, div w_h) controls the norm of q_h.

We will need the following standard result.

Lemma 4.5. Let X be a Banach space and let A ∈ L(X) be invertible. If B ∈ L(X) satisfies ‖B − A‖_{L(X)} < 1/‖A^{−1}‖_{L(X)}, then B is invertible.

Lemma 4.6. Let ȳ ∈ V^1_0(Ω) be a nonsingular solution of (2.2). Then for every ε > 0 there exist h_ε > 0 and ρ_ε > 0 such that the stated bound holds.

Proof. With classical calculations we can write the difference of the linearized operators. Since ȳ ∈ H²(Ω), B'(ȳ)z belongs to L²(Ω), and due to assumption (S2) we can estimate the corresponding term. On the other hand, using Lemma 4.4 we can bound the remaining supremum. Taking h_ε and ρ_ε small enough, we obtain the desired result.
Proof. Let ρ_0 and h_0 be the positive constants given by Theorem 4.7. For ρ ≤ ρ_0, h ≤ h_0, and u ∈ B_{ρ_2}(ū), we define the mapping Ψ_u. It is clear that any fixed point of Ψ_u is a solution of F_h(u, y, p) = 0. Let us show that Ψ_u is a strict contraction if ρ is small enough.

(i) First, we show that Ψ_u maps B_ρ(ȳ) × B_ρ(p) into itself. With the identity F(ū, ȳ, p) = 0 and a Taylor formula, we obtain a decomposition into several terms. Let us estimate each of them. Using the definition of F_h and Lemma 4.4, we bound the first one; with assumption (S2) we bound the next two; finally, the last one follows from Lemma 4.4. Collecting these estimates together, we have proved that there exists a constant Ĉ > 0, independent of h and ρ, such that the required bound holds. Choosing ρ_1 ≤ ρ_0 and ĥ_1 = min{h_0, ρ_1/(2Ĉ)}, it is clear that for all 0 < h < ĥ_1 and all u ∈ B_{ρ_2}(ū), Ψ_u maps B_{ρ_1}(ȳ) × B_{ρ_1}(p) into itself.
(ii) The Lipschitz constant of Ψ_u can be estimated by a constant C independent of h; see Theorem 4.7. To estimate the expression in brackets we can repeat the argument of inequalities (4.2), since y = y_1 + θ(y_2 − y_1) ∈ B_{ρ_1}(ȳ). There then exists C > 0, independent of ρ_1 and h, such that the Lipschitz bound holds. Choosing ρ_1 and h_1 = min{h_0, ρ_1/(2Ĉ)}, we have established that, for all 0 < h < h_1 and all u ∈ B_{ρ_2}(ū), Ψ_u is a strict contraction in B_{ρ_1}(ȳ) × B_{ρ_1}(p).

Remark 4.9. We have proved that, for all 0 < h < h_1 and all u ∈ B_{ρ_2}(ū), the equation F_h(u, y_h, p_h) = 0 admits a unique solution (y_h(u), p_h(u)); setting G_h(u) = (y_h(u), p_h(u)), the implicit function theorem implies that G_h is of class C^∞ in the interior of the ball B_{ρ_2}(ū). Notice that G_h is not an approximation of G, because G(u) = y_u is a velocity field, while G_h(u) stands for a velocity field and a pressure.
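The existence argument above is a Banach fixed-point argument: once Ψ_u maps a ball into itself and is a strict contraction there, the iteration converges to the unique solution. A generic one-dimensional sketch of this mechanism (the map cos is only a toy contraction, unrelated to Ψ_u):

```python
import math

def fixed_point(psi, x0, tol=1e-12, max_iter=1000):
    """Banach fixed-point iteration: for a strict contraction psi on a
    closed set containing the iterates, x_{k+1} = psi(x_k) converges to
    the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        x_new = psi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# cos is a contraction on [cos(1), 1] (|cos'| = |sin| <= sin(1) < 1 there),
# so the iterates converge to the unique solution of x = cos(x).
x_star = fixed_point(math.cos, 1.0)
```

The number of iterations needed depends only on the contraction factor, which is the same uniformity in h exploited in the proof.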

Discretization of the control problem.
For simplicity, throughout the following we assume that ω is a polygonal domain. We could consider a more general situation by taking into account the additional error introduced when approximating ω by a polygonal domain.
For h > 0, let T_h be a triangulation of ω. Although the discretization of the control can be done independently of the discretization of the state equation, in practice, when the finite element method is used to approximate the state and adjoint state equations, the same family of triangulations is used. Some assumptions must be made on the family of triangulations in order to have the inverse estimate of assumption (H1)(d). We will suppose that the family is quasi-uniform (see, e.g., [9, p. 135]). Here h = max_{T∈T_h} ρ(T), where ρ(T) is the diameter of the set T, and σ(T) denotes the diameter of the largest ball contained in T. We assume that there exist two positive constants ρ and σ such that

ρ(T)/σ(T) ≤ σ and h/ρ(T) ≤ ρ

hold for all T ∈ T_h and all h > 0.
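The two mesh quantities can be computed per triangle. The helper below is a small illustration, not part of the paper's analysis; it uses Heron's formula and the fact that the inscribed-ball diameter of a triangle equals twice the area divided by the semiperimeter.

```python
import numpy as np

def shape_regularity(tri):
    """For a triangle given by its 3 vertices, return (rho, sigma):
    rho = diameter (longest edge), sigma = diameter of the largest
    inscribed ball. A regular family keeps rho/sigma bounded."""
    a, b, c = (np.asarray(p, float) for p in tri)
    la = np.linalg.norm(b - c)
    lb = np.linalg.norm(a - c)
    lc = np.linalg.norm(a - b)
    diam = max(la, lb, lc)
    s = 0.5 * (la + lb + lc)                                   # semiperimeter
    area = np.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0))  # Heron
    inradius = area / s
    return diam, 2.0 * inradius

diam, sigma = shape_regularity([(0, 0), (1, 0), (0, 1)])
ratio = diam / sigma   # = 1 + sqrt(2) for the unit right triangle
```

Flat, needle-like triangles drive this ratio to infinity, which is exactly what the regularity assumption forbids.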
In the following we would like to treat in the same way the cases when the control set is discretized and when it is not. We shall see that we obtain better estimates when the control set is not discretized. For that we set

U^h_ad = {u ∈ U_ad : u|_T is constant for every T ∈ T_h}.

In the discrete control problem stated below, the case when the control set is not discretized corresponds to the choice U_{ad,h} = U_ad, while the case when the control set is discretized corresponds to U_{ad,h} = U^h_ad. We can now define the discrete control problem (P_h) associated with (P). Let us recall that (u, y, p) satisfies (4.1) if and only if F_h(u, y, p) = 0. Our aim is to study the existence of local minima of problems (P_h) which approximate the local minima of (P). This can be proved for nonsingular local solutions of (P). Let us start by proving some error estimates for the state equation. Given a nonsingular solution (ū, ȳ) of (1.2), let h_1 > 0 and ρ_2 > 0 be given by Theorem 4.8.

By using the function G_h introduced at the end of the previous section in Remark 4.9, we set (y^h_u, p^h_u) = G_h(u) = (y_h(u), p_h(u)). Now we have the following result.
(iii) Let (u_h)_h be a sequence in B_{ρ_2}(ū) ∩ U_ad weakly converging to u in L²(ω; R^m). Due to Theorem 2.2, y_u belongs to W^{2,r}(Ω) and {y_{u_h}}_h is bounded in W^{2,r}(Ω). Thus it converges to y_u in L^p(Ω) for all 2 ≤ p < ∞, and the sequence {y_{u_h} ⊗ y_{u_h}}_h converges to y_u ⊗ y_u in (L^p(Ω))^d for all 2 ≤ p < ∞. The function y_{u_h} − y_u satisfies a Stokes-type equation. Let p satisfy d < p < 6. From classical estimates for the Stokes equations it follows that the W^{1,p}-norm of y_{u_h} − y_u is controlled. Since W^{1,p}(Ω) ⊂ L^∞(Ω), and L²(Ω) is compactly embedded in W^{−1,p}(Ω) (because p < 6), it is clear that {y_{u_h}}_h tends to y_u in L^∞(Ω). From (H1)(c) and (H1)(d), together with (H1)(b) and (4.4), we deduce the corresponding discrete estimates. Collecting these estimates and the previous convergence result, we have proved that {y^h_{u_h}}_h converges to y_u in L^∞(Ω).

Theorem 4.11. Let us assume that (P) has a nonsingular local minimum (ū, ȳ). Then there exists h_2 > 0 such that, for all 0 < h < h_2, (P_h) has at least one solution. If, furthermore, (ū, ȳ) is a strict local minimum of (P), then (P_h) has a local minimum (ū_h, ȳ_h) in a neighborhood of (ū, ȳ) for all 0 < h < h_2, and the corresponding convergence identities hold, where J_h(ū_h) = F(ū_h, ȳ_h).
Proof. Let us start by proving that the set of feasible pairs (u, y) for problem (P_h) is nonempty for h small enough. We prove it only in the case U_{ad,h} = U^h_ad; the case U_{ad,h} = U_ad is obvious.
Since (ū, ȳ) is a nonsingular local minimum, with the aid of Theorem 4.8 we derive the existence of ρ ≤ ρ_2 such that (4.6) holds. It is clear that Π_h ū ∈ U_{ad,h}. Let us prove that it belongs to B_ρ(ū) if h is small enough. Since ū is Lipschitz continuous (see Theorem 3.5), we can write ‖Π_h ū − ū‖_{L^∞(ω;R^m)} ≤ C h. Since the set of feasible points of (P_h) is nonempty and closed, and F_h is continuous, convex on U_{ad,h} × X_h, and coercive with respect to u ∈ U_{ad,h}, (P_h) has at least one solution. Now let us assume that (ū, ȳ) is a strict local solution of (P) in (U_ad ∩ B_ρ(ū)) × B_ρ(ȳ). We consider the problems

(Q_h) min{J_h(u) : u ∈ U_{ad,h} ∩ B_ρ(ū)},

where J_h(u) = F(u, y^h_u) with (y^h_u, p^h_u) = G_h(u), G_h being defined in Remark 4.9. Above we have proved that U_{ad,h} ∩ B_ρ(ū) is nonempty for h ≤ h_2. Observe that U_{ad,h} ∩ B_ρ(ū) is convex, bounded, and closed in L²(ω; R^m), the mapping u ↦ ∫_ω |u|² is lower semicontinuous for the weak topology of L²(ω; R^m), and from Remark 4.9 it follows that the mapping u ↦ ∫_Ω |y^h_u − y_d|² is continuous for the weak topology of L²(ω; R^m). Therefore (Q_h) has at least one solution ū_h. From any subsequence of {ū_h}_h, we can extract another subsequence, still indexed by h to simplify the notation, converging weakly in L²(ω; R^m) to some ũ ∈ B_ρ(ū). Let us check that ũ = ū. Let us take again u_h = Π_h ū ∈ U_{ad,h} ∩ B_ρ(ū) for all h < h_2. By passing to the limit when h tends to zero, with the convergence result stated in Lemma 4.10, we can write J(ũ) ≤ J(ū). Since ũ ∈ B_ρ(ū) and the inequality in (4.6) is strict for u ≠ ū, the above inequality implies that ũ = ū. Thus we have lim_{h→0} J_h(ū_h) = J(ū), and, still with Lemma 4.10, we deduce that lim_{h→0} ∫_ω |ū_h|² = ∫_ω |ū|². Therefore the subsequence {ū_h}_h converges to ū in L²(ω; R^m). Since ū is the only cluster point for the weak topology of L²(ω; R^m) of the original sequence {ū_h}_h, it is clear that the convergence properties stated in the theorem hold for the whole sequence {ū_h}_h. The convergence of the corresponding states is a consequence of Lemma 4.10. Finally, the strong convergence ū_h → ū in L²(ω; R^m) implies that ū_h belongs to the interior of the ball B_ρ(ū), which implies that (ū_h, ȳ_h) is a local minimum of (P_h).

Discrete adjoint equation. We define the discrete adjoint state (Φ^h_u, π^h_u) ∈ X_h × M_h as the solution of the discrete adjoint system (4.8), and we consider the auxiliary solution (z^h_g, q^h_g). Choosing w_h = z^h_g in (4.15) and w = Φ_u − Φ^h_u in (4.16), and combining the two identities, we obtain (4.17). To complete estimate (4.12), we use (4.13) and a similar error estimate (4.18) for (z_g, q_g). With (4.17), (4.13), (4.18), and (4.4), the proof of (4.12) is complete. Estimate (4.14) and the last statement in the lemma can now be proved in the same way as for the state.

Lemma 4.14. Let (ū, ȳ) be a nonsingular strict local minimum of (P) and let {(ū_h, ȳ_h)}_{h≤h_3} be a sequence of local minima of problems (P_h) converging to (ū, ȳ) in L²(ω; R^m) × H^1_0(Ω), with ū_h ∈ B_{ρ_3}(ū), where h_3 and ρ_3 are given by Lemma 4.12. Then every element ū_h of the sequence {ū_h}_{h≤h_3} is a local solution of the corresponding discrete problem, where (Φ̃_h, π̃_h) = (Φ^h_{ū_h}, π^h_{ū_h}) ∈ X_h × M_h is the discrete adjoint state associated with ū_h, that is, the solution to the system (4.8) where u is replaced by ū_h.
Proof. The lemma is a consequence of the following identity:

Now we can establish uniform convergence of the controls.

Lemma 4.15. Let ū_h be as in Lemma 4.14; then lim_{h→0} ‖ū_h − ū‖_{L^∞(ω;R^m)} = 0.
Proof. Let us start with the case where U_{ad,h} = U_{ad}^h. Since the components of the elements of U_h are constant on every triangle, for all T ∈ T_h, all 1 ≤ i ≤ m, and all x ∈ T, using (3.9), the integral mean value theorem, and the Lipschitz continuity of Φ̄, we can write

for some x_T ∈ T. The uniform convergence of the adjoint states allows us to complete the proof in the case where U_{ad,h} = U_{ad}^h. In the case where U_{ad,h} = U_ad we have

and the convergence of ū_h follows from Lemma 4.13.
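The triangle-wise argument can be sketched as follows; here N and d̄_i are placeholders for the exact quantities appearing in the projection relation (3.9), so this is an illustrative reconstruction rather than the paper's precise formulas:

```latex
% Sketch; N and \bar d_i stand for the exact quantities in (3.9).
% Continuous optimality (projection form):
%   \bar u_i(x) = \mathrm{Proj}_{[\alpha_i,\beta_i]}\!\bigl(-\tfrac{1}{N}\,\bar d_i(x)\bigr),
% discrete optimality on a triangle T (controls constant on T):
\bar u_{i,h}\big|_T
   \;=\; \mathrm{Proj}_{[\alpha_i,\beta_i]}\!
         \Bigl(-\frac{1}{N\,|T|}\int_T \bar d_{i,h}(x)\,dx\Bigr).
% The integral mean value theorem replaces the average by \bar d_{i,h}(x_T)
% for some x_T \in T; since \mathrm{Proj}_{[\alpha_i,\beta_i]} is 1-Lipschitz,
|\bar u_{i,h}(x)-\bar u_i(x)|
   \;\le\; \frac{1}{N}\Bigl(|\bar d_{i,h}(x_T)-\bar d_i(x_T)|
           \;+\; |\bar d_i(x_T)-\bar d_i(x)|\Bigr),
% where the first term tends to 0 uniformly by the convergence of the
% discrete adjoint states and the second is O(h) by Lipschitz continuity.
```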

Error estimates.
Let (ū, ȳ) be a nonsingular local solution of (P) satisfying the sufficient second order optimality conditions (3.15) or, equivalently, (3.25). As a consequence of these conditions, we know that (ū, ȳ) is a strict local minimum of (P). Let {(ū_h, ȳ_h)}_h be a sequence of local solutions of problems (P_h) converging to (ū, ȳ); see Theorem 4.11 and Lemma 4.15. We assume that h ≤ h_3 and ū_h ∈ B_{ρ_3}(ū), so that ū_h is a local minimum of (P_h). The goal of this section is to estimate the order of convergence of this sequence.
Lemma 4.16. Let δ > 0 be the constant defined in Corollary 3.11. There exists

Therefore I_{i,T} > 0 and ū_{i,h}|_T = α_i. In particular ū_{i,h}(ξ) = α_i and ū_{i,h}(ξ) − ū_i(ξ) = 0. Similarly, if d̄_i(ξ) < −τ, we have ū_{i,h}(ξ) = β_i < ∞ and ū_{i,h}(ξ) − ū_i(ξ) = 0, and condition (3.20) is still satisfied in that case. Thus the second order sufficient conditions stated in Corollary 3.11 can be applied, and we have

On the other hand, with the mean value theorem, we obtain

for some 0 < θ_h < 1. Due to the uniform convergence properties stated for the control and the adjoint state and the explicit form of the second derivative of J, it is clear that we can choose h_4 small enough to have

for all 0 < h ≤ h_4. The proof is complete.

Lemma 4.17. Assume that U_{ad,h} = U_{ad}^h. There exists 0 < h_5 ≤ h_4 such that for every 0 < h ≤ h_5 there exist u*_h ∈ U_h and a constant C > 0 independent of h such that

Due to the Lipschitz continuity of ū, there exists 0 < h_5 ≤ h_4 such that, for 0 < h ≤ h_5, each component ū_i cannot achieve both values α_i and β_i in the same triangle. Hence, for each T ∈ T_h, either d̄_i(x) is nonnegative for all x ∈ T or d̄_i(x) is nonpositive for all x ∈ T. Therefore, I_{i,T} = 0 if and only if d̄_i(x) = 0 for all x ∈ T. Moreover, if I_{i,T} ≠ 0, then d̄_i(x)/I_{i,T} ≥ 0 for all x ∈ T. So, applying the generalized mean value theorem if I_{i,T} ≠ 0 or the integral mean value theorem if I_{i,T} = 0, we have u*_{i,h}|_T = ū_i(x_T) for some x_T ∈ T. As a first consequence, u*_h ∈ U_{ad,h}. Moreover, due to the Lipschitz continuity of ū, we have, for x ∈ ω, if we fix the triangle T such that x ∈ T,

Let us check what happens with the first term. From the first order optimality conditions for problems (P) and (P_h) we have

Adding these two expressions and using Lemma 4.17,

From (4.19), (4.20), and (4.21), we deduce that

therefore there exists a constant C > 0, independent of h, such that

We conclude with Young's inequality.
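For concreteness, the comparison control u*_h of Lemma 4.17 can be written out triangle by triangle. The following is a sketch consistent with the identity I_{i,T} u*_{i,h}|_T = ∫_T d̄_i(x) ū_i(x) dx used in the proof of the statements below, taking I_{i,T} = ∫_T d̄_i(x) dx; the I_{i,T} = 0 branch in particular is our reading, not an explicit formula from the text:

```latex
% With I_{i,T} = \int_T \bar d_i(x)\,dx, define on each triangle T \in \mathcal T_h:
u^*_{i,h}\big|_T \;=\;
\begin{cases}
 \dfrac{1}{I_{i,T}}\displaystyle\int_T \bar d_i(x)\,\bar u_i(x)\,dx,
   & \text{if } I_{i,T}\neq 0,\\[2ex]
 \dfrac{1}{|T|}\displaystyle\int_T \bar u_i(x)\,dx,
   & \text{if } I_{i,T}= 0.
\end{cases}
% In the first case \bar d_i/I_{i,T} \ge 0 integrates to 1 over T, so the
% generalized mean value theorem gives u^*_{i,h}|_T = \bar u_i(x_T) for some
% x_T \in T; hence u^*_h \in U_{ad,h} and, by the Lipschitz continuity of \bar u,
% \|\bar u - u^*_h\|_{L^\infty(\omega;\mathbb R^m)} \le L_{\bar u}\, h .
```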
(ii) Now let us consider the case where U_{ad,h} = U_ad. We rewrite the previous steps with the simplifications corresponding to this case. For 0 < h ≤ h_5, we have

Since U_{ad,h} = U_ad, from the first order optimality conditions satisfied by ū and ū_h we have (J'(ū) − J_h'(ū_h))(ū − ū_h) ≤ 0.
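The sign of this last term comes from testing each variational inequality with the other control; written out:

```latex
% First order conditions: since U_{ad,h} = U_{ad}, both \bar u and \bar u_h
% are admissible for both problems, so
J'(\bar u)(u-\bar u) \;\ge\; 0      \qquad \forall\, u \in U_{ad},
\qquad
J_h'(\bar u_h)(u-\bar u_h) \;\ge\; 0 \qquad \forall\, u \in U_{ad}.
% Take u = \bar u_h in the first inequality, u = \bar u in the second, and add:
J'(\bar u)(\bar u_h-\bar u) \;+\; J_h'(\bar u_h)(\bar u-\bar u_h) \;\ge\; 0
\;\Longleftrightarrow\;
\bigl(J'(\bar u)-J_h'(\bar u_h)\bigr)(\bar u-\bar u_h) \;\le\; 0 .
```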
results. Let us recall that Ω is a bounded, open, and connected subset of R^d, of class C^2, with d = 2 or d = 3, and that ω is a nonempty open subset of Ω. We assume that M : ω → R^{d×m} is a Lipschitz function, with 1 ≤ m ≤ d (R^{d×m} denotes the space of d × m real matrices). Let us consider the linear operator

From the well-known interpolation inequality when d = 3 (see Temam [20, Lemma 3.5, p. 296]),

strongly in L^4(Ω). Let us prove that v ∈ C_ū. The sign condition (3.11)–(3.12) is again trivial since every v_k satisfies it. To check condition (3.10) we are going to prove that if |d̄_j(x)| ≠ 0, then v_j(x) = 0. Let us fix ε > 0 and define ω_ε = {x ∈ ω : |d̄_j(x)| > ε}. Notice that ∫_{ω_ε} v_{j,k}(x) d̄_j(x) dx → ∫_{ω_ε} v_j(x) d̄_j(x) dx as k tends to infinity. From the definition of C_ū^{1/k}
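The interpolation inequality invoked above is the three-dimensional Ladyzhenskaya inequality; in the form it takes in Temam's book (with C a constant depending only on Ω):

```latex
\|v\|_{L^4(\Omega)}
  \;\le\; C\,\|v\|_{L^2(\Omega)}^{1/4}\,\|\nabla v\|_{L^2(\Omega)}^{3/4}
\qquad \text{for all } v \in H^1_0(\Omega),\quad d = 3 .
```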

Since I_{i,T} = 0 if and only if d̄_i(x) = 0 for all x ∈ T, we can claim that I_{i,T} u*_{i,h}|_T = ∫_T d̄_i(x) ū_i(x) dx for all T ∈ T_h and all 1 ≤ i ≤ m. A straightforward calculation yields statement 2:

J'(ū)u*_h = ∫_ω d̄(x) · u*_h(x) dx = Σ_{i=1}^m Σ_{T∈T_h} ∫_T d̄_i(x) u*_{i,h}(x) dx = Σ_{i=1}^m Σ_{T∈T_h} I_{i,T} u*_{i,h}|_T = Σ_{i=1}^m Σ_{T∈T_h} ∫_T d̄_i(x) ū_i(x) dx = J'(ū)ū,

and we have proved statement 3.

Theorem 4.18. There exists a constant C > 0 such that, for all 0 < h ≤ h_5, we have ‖ū − ū_h‖_{L^2(ω;R^m)} ≤ Ch^2 if U_{ad,h} = U_ad, while ‖ū − ū_h‖_{L^2(ω;R^m)} ≤ Ch if U_{ad,h} = U_{ad}^h.

Proof. (i) Let us start with the case where U_{ad,h} = U_{ad}^h. For 0 < h ≤ h_5, we have (4.19)
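The proof then concludes along standard lines; the following is a schematic sketch (generic constants, δ from Corollary 3.11, written for the case U_{ad,h} = U_ad yielding the h^2 rate), not the paper's exact chain of estimates:

```latex
% Second order coercivity (Lemma 4.16) applied to \bar u - \bar u_h:
\frac{\delta}{2}\,\|\bar u-\bar u_h\|_{L^2(\omega;\mathbb R^m)}^2
   \;\le\; \bigl(J'(\bar u_h)-J'(\bar u)\bigr)(\bar u_h-\bar u).
% The right-hand side is split using the discrete optimality conditions and
% the approximation estimates (4.19)-(4.21), leaving terms of the type
\bigl(J'(\bar u_h)-J'(\bar u)\bigr)(\bar u_h-\bar u)
   \;\le\; C\bigl(h^4 + h^2\,\|\bar u-\bar u_h\|_{L^2}\bigr).
% Young's inequality  ab \le \tfrac{\delta}{4}a^2 + \tfrac{1}{\delta}b^2
% (with a = \|\bar u-\bar u_h\|_{L^2}, b = C h^2) absorbs the mixed term
% into the left-hand side, giving
\|\bar u-\bar u_h\|_{L^2(\omega;\mathbb R^m)} \;\le\; C\,h^2 .
% For U_{ad,h} = U^h_{ad}, the piecewise constant approximation of \bar u is
% only O(h), and the same absorption argument yields the rate h.
```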