A DISCONTINUOUS GALERKIN TIME-STEPPING SCHEME FOR THE VELOCITY TRACKING PROBLEM

The velocity tracking problem for the evolutionary Navier-Stokes equations in 2d is studied. The controls are of distributed type and are subject to bound constraints. First and second order necessary and sufficient optimality conditions are proved. A fully discrete scheme, based on a discontinuous (in time) Galerkin approach combined with conforming finite element subspaces in space, is proposed and analyzed. Provided that the time and space discretization parameters, τ and h respectively, satisfy τ ≤ Ch², L² error estimates of order O(h) are proved for the difference between the locally optimal controls and their discrete approximations.

‡ Department of Mathematics, School of Applied Mathematics and Physical Sciences, National Technical University of Athens, Zografou Campus, Athens 15780, Greece (chrysafinos@math.ntua.gr).
without the assumption τ ≤ Ch². However, they impose a strong second order condition that we do not need. Their approach is not easily adapted to the control of Navier-Stokes systems because the nonlinearity involves the gradient of the state and the boundedness of the states fails. Moreover, the discretization in time of the state equation leads to a stationary Navier-Stokes system, for which the uniqueness of a solution cannot be guaranteed.
The discontinuous Galerkin time-stepping schemes are known to perform well in a variety of problems whose solutions satisfy low regularity properties. The discontinuous (in time) Galerkin framework also accommodates many different time-stepping schemes. For example, the lowest order scheme (in time) considered here can be viewed as the implicit Euler scheme, while there is a close similarity between higher order (in time) discontinuous Galerkin schemes and other time-stepping approaches, such as Runge-Kutta techniques, provided that suitable integration techniques are used to discretize the related integrals (see, e.g., [32]). The key difference between the analysis of the classical implicit Euler scheme and its discontinuous (in time) stepping counterpart is the use of local (in time) approximation theory tools instead of global (in time) approximation and interpolation tools. In addition, the discontinuous (in time) formulation inherits stability/regularity properties of the underlying PDE, due to its heavily implicit nature. As a result, it leads to an efficient analysis of the approximation of problems whose solutions satisfy low regularity properties, and in particular of problems where the time derivative is discontinuous, for which a completely discontinuous discretization is preferable. On the other hand, continuous (in time) Galerkin schemes typically require much more regularity than can be anticipated for our optimal control problem. For example, the lowest order continuous (in time) Galerkin scheme corresponds to a Petrov-Galerkin Crank-Nicolson scheme, which requires additional regularity properties even in the case of uncontrolled linear parabolic PDEs (see, e.g., [32]). For earlier work on these schemes within the context of optimal control problems, we refer the reader to [24], [25] for error estimates for an optimal control problem for the heat equation with and without control constraints, respectively, and to [8] for a
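To make the correspondence with the implicit Euler scheme concrete, consider a generic linear parabolic equation y_t + Ay = f with an associated coercive bilinear form a(·,·); the following is a standard sketch, not taken from this paper. Testing the lowest order (piecewise constant in time) discontinuous Galerkin formulation against a test function supported on a single interval gives:

```latex
% dG(0) step on (t_{n-1}, t_n]; y_n denotes the constant value of the
% discrete solution there, and tau_n = t_n - t_{n-1}:
\[
  (y_n - y_{n-1},\, w) \;+\; \tau_n\, a(y_n, w)
  \;=\; \int_{t_{n-1}}^{t_n} (f, w)\,dt
  \qquad \forall\, w,
\]
% dividing by tau_n, this is implicit Euler with an averaged load:
\[
  \frac{y_n - y_{n-1}}{\tau_n} + A y_n = \bar f_n,
  \qquad
  \bar f_n := \frac{1}{\tau_n} \int_{t_{n-1}}^{t_n} f(t)\,dt .
\]
```

The jump term (y_n − y_{n−1}, w) is what keeps the scheme well defined for data with discontinuities in time; for higher polynomial degrees the same jump structure persists while the local-in-time approximation order increases.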
convergence result for a semilinear parabolic optimal control problem. The second order Petrov-Galerkin Crank-Nicolson scheme for an optimal control problem for the heat equation is analyzed in [26], where estimates of second order (in time) are derived. However, the regularity assumptions imposed there on the control, state, and adjoint variables are not available in the nonlinear setting of the Navier-Stokes equations. For general results on discontinuous time-stepping schemes for linear parabolic uncontrolled PDEs, we refer the reader to [11,12,13,14,32] (see also the references within). Finally, in the recent work [9], discontinuous time-stepping schemes of arbitrary order for the Navier-Stokes equations in two and three dimensions were examined. Further results concerning the analysis and numerical analysis of the uncontrolled Navier-Stokes equations can be found in the classical works [15], [21], [22], [31]. For several issues related to the analysis and numerics of optimal control problems, we refer the reader to [33] (see also the references within).

2. Assumptions and preliminary results
Ω is a bounded, open, and convex subset of R², Γ being its boundary. The outward unit normal vector to Γ at a point x ∈ Γ is denoted by n(x). Given 0 < T < +∞, we denote Ω_T = (0, T) × Ω and Σ_T = (0, T) × Γ. We fix the notation for Sobolev spaces and set W^{s,p}(Ω) = W^{s,p}(Ω; R²) for 1 ≤ p ≤ ∞ and s > 0. We also consider the spaces of integrable functions and, for a given Banach space X, L^p(0, T; X) will denote the space of integrable functions defined in (0, T) and taking values in X, endowed with the usual norm. Following Lions and Magenes [23, Vol. 1], we introduce the space H^{2,1}(Ω_T) equipped with the standard norm. In [23, Vol. 1] it is proved that every element of H^{2,1}(Ω_T), after a modification over a zero measure set, is a continuous function from [0, T] into H¹(Ω). We introduce the usual spaces of divergence-free vector fields. Throughout this paper, we will assume that f, u ∈ L²(0, T; L²(Ω)) and y_0 ∈ Y. A solution of (1.1) will be sought in the space W(0, T) = {y ∈ L²(0, T; Y) : y_t ∈ L²(0, T; Y′)}. Let us introduce the weak formulation of (1.1). To this end we define the bilinear and trilinear forms a and c. Now, we seek y ∈ W(0, T) such that, for a.e. t ∈ (0, T), the weak equation (2.1) holds. Above, (·, ·) denotes the scalar product in L²(Ω). This notation will be frequently used throughout the paper, and ‖·‖ will denote the associated norm. Any other norm will be indicated by a subscript. Equation (2.1) has a unique solution in W(0, T). Once the velocity y is obtained, the existence of a pressure p ∈ D′(Ω_T) is proved in such a way that the first equation of (1.1) holds in the distribution sense. Thanks to the regularity assumed on f, y_0, and Ω, some extra regularity holds for (y, p). Indeed, we have that y ∈ H^{2,1}(Ω_T) ∩ C([0, T], Y) and p ∈ L²(0, T; H¹(Ω)), the pressure being unique up to an additive constant; see, for instance, Ladyzhenskaya [21], Lions [22], Temam [31].
Downloaded 05/23/13 to 193.144.185.28. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php
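For the reader's orientation, the displayed definitions lost from this excerpt presumably take the standard form used for the 2d Navier-Stokes equations. The following is a hedged sketch with the usual definitions (the viscosity is normalized to 1 here, which may differ from the paper's normalization):

```latex
% Standard forms on Y (divergence-free H^1_0 vector fields):
\[
  a(y, w) = \int_\Omega \nabla y : \nabla w \,dx, \qquad
  c(y, z, w) = \int_\Omega (y \cdot \nabla) z \cdot w \,dx .
\]
% Weak state equation: find y in W(0,T) with y(0) = y_0 such that,
% for a.e. t in (0,T),
\[
  (y_t(t), w) + a(y(t), w) + c(y(t), y(t), w) = (f(t) + u(t), w)
  \qquad \forall\, w \in Y .
\]
```

Working in the divergence-free space Y eliminates the pressure from the weak formulation; it is recovered afterwards as a distribution, as described in the surrounding text.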
The following properties of the trilinear form c will be used later. The proof can be found in many books; see [21], [22], or [31]. Moreover, by using the interpolation inequality (see [31, Lemma 3.3, page 91]), we obtain an estimate valid for all y, w ∈ H¹_0(Ω) and all z ∈ H¹(Ω). Returning to the control problem (P), we will make the standing assumptions stated below. Since the mapping G, associating to each control u the corresponding state G(u) = y_u, solution of (2.1), is well defined and continuous, the cost functional J : L²(0, T; L²(Ω)) → R is also well defined and continuous. The proof of the existence of at least one solution of (P) is standard.
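The properties of c alluded to above are classical in two dimensions; a sketch of the ones typically used (see [21], [22], [31]) is:

```latex
% Antisymmetry (for divergence-free y) and its consequence:
\[
  c(y, z, w) = -\,c(y, w, z), \qquad c(y, w, w) = 0 .
\]
% Ladyzhenskaya's interpolation inequality in 2d, for v in H^1_0(Omega):
\[
  \|v\|_{L^4(\Omega)}^2 \;\le\; C\, \|v\| \,\|\nabla v\| ,
\]
% which, combined with Hoelder's inequality, yields for instance
\[
  |c(y, z, w)| \;\le\; C\, \|y\|^{1/2} \|\nabla y\|^{1/2}\,
     \|\nabla z\|\, \|w\|^{1/2} \|\nabla w\|^{1/2} .
\]
```

The identity c(y, w, w) = 0 is the mechanism behind the energy estimates used repeatedly later, both at the continuous and at the discrete level.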
3. Optimality conditions. Since the problem (P) is not convex, we will deal hereafter with global and local solutions. A control ū ∈ U_ad is said to be a local solution of (P) if there exists ε > 0 such that J(ū) ≤ J(u) for every u ∈ U_ad ∩ B_ε(ū), where B_ε(ū) denotes the open ball of L²(0, T; L²(Ω)) centered at ū with radius ε. In this section, we establish first and second order optimality conditions for a local solution of problem (P). To this end, we need the differentiability of the mapping G.
Theorem 3.1 (Casas [4]). The mapping G is of class C^∞. Moreover, for any u, v ∈ L²(0, T; L²(Ω)), if we denote y_u = G(u), z_v = G′(u)v, and z_{vv} = G″(u)v², then z_v and z_{vv} are the unique solutions of the corresponding linearized equations. As a consequence of this theorem we get the differentiability of the cost functional.
Theorem 3.2. The cost functional J is of class C², and for every u, v ∈ L²(0, T; L²(Ω)) the expressions (3.3) and (3.4) hold.
Proof. First, let us observe that (3.5) is the adjoint of (3.1). Since (3.1) has a unique solution in H^{2,1}(Ω_T) ∩ C([0, T], Y) for any v ∈ L²(0, T; L²(Ω)), arguing by transposition we can prove the existence and uniqueness of the solution ϕ_u of (3.5), as well as the regularity ϕ_u ∈ H^{2,1}(Ω_T) ∩ C([0, T], Y). Now, the differentiability of J and the relations (3.3) and (3.4) are a consequence of Theorem 3.1 and the chain rule.
Now we derive the optimality conditions, starting with the first order conditions.
Theorem 3.3. Let us assume that ū is a local solution of problem (P); then there exist ȳ and φ̄ such that (3.6)-(3.8) hold.
Proof. Since U_ad is convex, any local solution ū satisfies the condition J′(ū)(u − ū) ≥ 0 for every u ∈ U_ad. Then it is enough to use the expression of the derivative given by (3.3) and take ȳ = y_ū and φ̄ = ϕ_ū to deduce (3.6)-(3.8). The regularity of ū follows from (3.8) as usual.
Theorem 3.4. Let ū be a local solution of problem (P); then J″(ū)v² ≥ 0 for all v ∈ C_ū. Conversely, let us assume that ū ∈ U_ad satisfies (3.19). Then there exist ε > 0 and δ > 0 such that the quadratic growth condition holds in U_ad ∩ B_ε(ū), where B_ε(ū) is the L²(0, T; L²(Ω))-ball of center ū and radius ε.
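Although the displays (3.6)-(3.8) are not reproduced in this excerpt, the first order system presumably follows the standard pattern for bound-constrained distributed control. As a hedged sketch, writing λ > 0 for the control penalty and [α_j, β_j] for the componentwise bounds (symbols chosen here for illustration; the paper's notation may differ), the condition corresponding to (3.8) typically reads:

```latex
% Variational inequality for a local solution \bar u:
\[
  \int_0^T \!\!\int_\Omega
    (\bar\varphi + \lambda \bar u)\cdot(u - \bar u)\,dx\,dt
  \;\ge\; 0 \qquad \forall\, u \in U_{ad},
\]
% whose pointwise consequence is the projection formula
\[
  \bar u_j(t,x) = \operatorname{Proj}_{[\alpha_j,\beta_j]}
      \Bigl( -\tfrac{1}{\lambda}\, \bar\varphi_j(t,x) \Bigr),
  \qquad j = 1, 2 .
\]
```

A projection formula of this kind is the usual source of the extra regularity of ū mentioned in the proof: componentwise, ū inherits the regularity of the adjoint state up to truncation at the bounds.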
The proof of the necessary condition is similar to the one given in [6] for the case of the steady-state Navier-Stokes equations. The proof of the sufficient condition can be obtained by arguing by contradiction, analogously to the approach of some previous papers; see, for instance, [2], [5], [6].
Remark 3.5. The gap between the necessary and sufficient optimality conditions given in Theorems 3.3 and 3.4 is minimal, the same as in finite dimensional optimization problems. This problem does not suffer from the typical two-norm discrepancy that usually arises in infinite dimensional optimization problems. This is due to the C²-differentiability of J with respect to the L²(0, T; L²(Ω))-norm, thanks to a certain compactness with respect to u in the first two integrals defining J and the fact that the last one is the square of the norm of the control. On the other hand, it is well known that the condition J″(ū)v² > 0 for every v ≠ 0 belonging to the cone of critical directions is not, in general, a sufficient optimality condition in infinite dimensional optimization problems. An inequality of coercivity type is required in the infinite dimensional case. In finite dimensions both conditions are equivalent, but this is usually not the case in infinite dimensions. However, in our problem we can prove that both conditions are indeed equivalent. Indeed, let us observe that (3.19) implies that ū is a local solution of an auxiliary problem; therefore, from the second order necessary conditions we obtain the desired coercivity inequality.
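The omitted inequality in Remark 3.5 is presumably the usual coercivity condition on the critical cone; as a sketch:

```latex
% Coercivity-type sufficient second order condition:
\[
  \exists\, \delta > 0 : \qquad
  J''(\bar u)\, v^2 \;\ge\; \delta\, \|v\|_{L^2(0,T;L^2(\Omega))}^2
  \qquad \forall\, v \in C_{\bar u},
\]
% while the formally weaker positivity condition is
\[
  J''(\bar u)\, v^2 > 0 \qquad \forall\, v \in C_{\bar u}\setminus\{0\} .
\]
```

The point of the remark is precisely that, for this problem, the two displayed conditions are equivalent, in contrast with the generic infinite dimensional situation.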

4. Numerical approximation of the control problem.
In this section we consider the complete discretization of the control problem (P). To this end, we consider a family of triangulations {T_h}_{h>0} of Ω, defined in the standard way, e.g., as in [3, Chapter 3.3]. With each element T ∈ T_h, we associate two parameters, h_T and

EDUARDO CASAS AND KONSTANTINOS CHRYSAFINOS
ρ_T, where h_T denotes the diameter of the set T and ρ_T is the diameter of the largest ball contained in T. Define the size of the mesh by h = max_{T∈T_h} h_T. We also assume that the following regularity assumptions on the triangulation are satisfied:
(i) There exist two positive constants ρ and δ, independent of h and T, such that h_T/ρ_T ≤ ρ and h/h_T ≤ δ for all T ∈ T_h and all h > 0.
(ii) Define Ω̄_h = ∪_{T∈T_h} T, and let Ω_h and Γ_h denote its interior and its boundary, respectively. We assume that the vertices of T_h placed on the boundary Γ_h are points of Γ.
Since Ω is convex, from the last assumption we deduce that Ω_h is also convex. Moreover, a standard estimate for the region Ω \ Ω_h holds; see, for instance, [28, estimate (5.2.19)].
On the mesh T_h we consider two finite dimensional spaces, Z_h and Q_h, formed by piecewise polynomials in Ω_h and vanishing in Ω \ Ω_h. We make the following assumptions on these spaces.
(A3) The subspaces Z_h and Q_h satisfy the inf-sup condition (4.4): there exists c > 0, independent of h, such that the inf-sup bound holds, where b(·, ·) denotes the bilinear form associated with the divergence operator. These assumptions are satisfied by the usual finite elements considered in the discretization of the Navier-Stokes equations: the "Taylor-Hood" element, the P1-bubble finite element, and some others; see [15, Chapter 2].
We also consider a subspace Y_h of Z_h, defined by the discrete divergence-free condition. We proceed now with the discretization in time. Let us consider a grid of points 0 = t_0 < t_1 < · · · < t_{N_τ} = T and make the assumption stated below. The functions of Y_σ, Q_σ, and U_σ are piecewise constant in time. We will look for the discrete controls in the space U_σ. An element of this space can be written in the form u_σ = Σ_{n=1}^{N_τ} Σ_{T∈T_h} u_{n,T} χ_n χ_T with u_{n,T} ∈ R², where χ_n and χ_T are the characteristic functions of (t_{n−1}, t_n) and T, respectively. Therefore, the dimension of U_σ is 2N_τ N_h, where N_h is the number of triangles in T_h.
In U_σ we consider the convex subset of admissible discrete controls. On the other hand, the elements of Y_σ can be written in the form (4.7), where χ_n is as above. For every discrete state y_σ we will fix y_σ(t_n) = y_{n,h}, so that y_σ is continuous on the left. In particular, we have y_σ(T) = y_σ(t_{N_τ}) = y_{N_τ,h}.
To define the discrete control problem we have to consider the numerical discretization of the state equation (1.1), or equivalently (2.1). We achieve this goal by using a discontinuous time-stepping Galerkin method with piecewise constants in time and conforming finite element spaces in space. For any u ∈ L²(0, T; L²(Ω)) the discrete state equation is given by (4.8). The above scheme is essentially an implicit Euler in time/conforming in space scheme, and can be easily extended to higher order polynomial (in time) discretizations; see, e.g., [32] and the references within. For stability and error estimates of high order discontinuous time-stepping schemes under suitable regularity assumptions, we refer the reader to [9]. Here, we focus on the lowest order of polynomial approximation in time, due to the low regularity imposed by the nature of our optimal control problem. A key feature of the proposed scheme is that the regularity properties of the discrete solution mimic those of the continuous problem. We will prove later that for any u ∈ L²(0, T; L²(Ω)), (4.8) has a unique solution y_σ(u) ∈ Y_σ. Then we can define the discrete control problem (P_σ). In the study of the control problem, first we analyze the discrete state equation (4.8); then we study the discrete adjoint state equation; next we prove the convergence of (P_σ); and finally we prove the error estimates for the discretization.
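The display (4.8) is not reproduced in this excerpt. Based on the description above (piecewise constants in time, conforming discretely divergence-free spaces Y_h in space), the scheme presumably has the following dG(0) form, sketched here with the forms a and c of section 2:

```latex
% For n = 1, ..., N_tau, with tau_n = t_n - t_{n-1},
% find y_{n,h} in Y_h such that
\[
  (y_{n,h} - y_{n-1,h},\, w_h)
  + \tau_n \Bigl[ a(y_{n,h}, w_h) + c(y_{n,h},\, y_{n,h},\, w_h) \Bigr]
  = \int_{t_{n-1}}^{t_n} (f + u,\, w_h)\,dt
  \qquad \forall\, w_h \in Y_h,
\]
% starting from a suitable approximation y_{0,h} of y_0 in Y_h.
```

Taking w_h = y_{n,h} and using the identity c(y_{n,h}, y_{n,h}, y_{n,h}) = 0 yields the energy stability that underlies the existence argument of the next subsection.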

4.1. Analysis of the discrete state equation.
By a standard argument, using the identity c(z, w, w) = 0 ∀z ∈ L⁴(Ω) and ∀w ∈ H¹(Ω) (Lemma 2.1) and Brouwer's fixed-point theorem, we can easily prove that (4.8) has at least one solution. In this section, we will prove that the solution is unique under some restrictions on σ = (τ, h). For the moment, let us denote y = y_u = G(u) and let y_σ ∈ Y_σ be a solution of (4.8). We are going to prove some error estimates for y − y_σ. To this end, we need to introduce some projection operators.
Definition 4.1. We define the projection operator P_σ, together with the spatial projection P_h. There exists a constant C > 0, independent of σ, such that for every y ∈ H^{2,1}(Ω_T) ∩ C([0, T]; Y) the estimate (4.12) holds.
Proof. From assumptions (A1)-(A3), using (4.2) with s = 0 and l = 1 (see also [15, Chapter II]), the definition of P_σ, and the stability of P_h, we get the desired bound. To any element y ∈ C([0, T]; Y) we associate its discrete interpolant. The next lemma is an immediate consequence of assumptions (A1)-(A3); see again [15, Chapter II].
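Definition 4.1 is mutilated in this excerpt. For piecewise constants in time, the projection in question is presumably built from the local time average combined with a spatial projection P_h onto Y_h; a hedged sketch:

```latex
% Local (in time) averaging onto piecewise constants, combined with P_h:
\[
  (P_\sigma y)\big|_{(t_{n-1},\, t_n]}
  \;=\; \frac{1}{\tau_n} \int_{t_{n-1}}^{t_n} P_h\, y(t)\,dt,
  \qquad n = 1, \dots, N_\tau .
\]
```

The local character of this operator (one interval at a time) is exactly what allows the local-in-time approximation arguments highlighted in the introduction.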
Lemma 4.4. There exists a constant C > 0, independent of h, such that the stated estimates hold. As a consequence of the previous two lemmas we have the following result.
Lemma 4.5. There exists a constant C > 0, independent of σ, such that the corresponding estimate holds for every y.
Proof. From Lemma 4.4 we obtain a first bound. Now, using the definition of P_σ, an inverse inequality, (4.12), and (4.13), we conclude. Before proving the error estimates for y − y_σ, we need to establish the corresponding estimates for the Stokes problem. Let us formulate this result as follows.
Lemma 4.6. Let y ∈ H^{2,1}(Ω_T) ∩ C([0, T], Y) be the solution of (2.1), and let ŷ_σ ∈ Y_σ satisfy the discrete Stokes equation (4.15), where (f̂_n, w_h) is defined by the corresponding local average.
Proof. The existence and uniqueness of the solution ŷ_σ is easy and well known. The boundedness of {ŷ_σ}_σ in L^∞(0, T; H¹(Ω_h)) was proved in [9, Theorem 4.10]. The estimate (4.16) follows from (4.14) and [9, Theorem 4.6]. Finally, we prove (4.17). Let us assume that t_{n−1} < t < t_n for some 1 ≤ n ≤ N_τ. Then the second term on the right-hand side of the resulting inequality has been estimated in (4.16). Let us study the first term: for any w ∈ L²(Ω) a direct estimate applies. This estimate and (4.16) imply (4.17). The discrete solution of the linear Stokes problem will subsequently play the role of a global-in-time projection, which facilitates the derivation of error estimates under the restricted regularity assumptions of the control problem (see also [9]). Finally, we obtain the result concerning the discrete state equation (4.8).
Proof. Let us define e = y − y_σ = (y − ŷ_σ) + (ŷ_σ − y_σ) = ê + e_σ, where ŷ_σ is the solution of (4.15). Then we can proceed as in [9, Theorem 5.2]. Finally, using again the boundedness of {ŷ_σ}_σ in L^∞(0, T; H¹(Ω_h)), we get the same estimate as the last one for the third term. Inserting all these estimates in (4.20), and then using the discrete Gronwall inequality and the fact that e_{0,h} = 0, we obtain an inequality which, along with (4.16) and the identity y − y_σ = ê + e_σ, proves (4.18). Arguing as in the proof of (4.17), we deduce (4.19) from (4.18). The boundedness of {y_σ}_σ in L^∞(0, T; H¹(Ω_h)) is an easy consequence of the previous results. Indeed, first we recall that {ŷ_σ}_σ is bounded in L^∞(0, T; H¹(Ω_h)) (Lemma 4.6); it is then enough to prove the boundedness of the remaining term. From an inverse inequality [3, section 4.5], the estimates (4.17) and (4.19), and the inequality τ ≤ C₀h², we get the required bound. To conclude the proof, we have to show the uniqueness of the solution of (4.8). Let us assume that y¹_σ, y²_σ ∈ Y_σ are two solutions of (4.8). Then we set y_σ = y²_σ − y¹_σ and prove that y_σ = 0. Subtracting the equations (4.8) satisfied by y²_σ and y¹_σ and taking w_h = y_{n,h}, we obtain an identity; using it together with the boundedness established above, and applying once again the discrete Gronwall inequality and the fact that y_{0,h} = 0, we conclude that y_σ = 0.
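The discrete Gronwall inequality invoked twice in the proof above is standard; in the form typically used in this context it reads:

```latex
% Discrete Gronwall: if a_n, b_n, A >= 0 satisfy
\[
  a_n \;\le\; A + \sum_{k=1}^{n-1} b_k\, a_k, \qquad n = 1, \dots, N,
\]
% then
\[
  a_n \;\le\; A \exp\Bigl( \sum_{k=1}^{n-1} b_k \Bigr),
  \qquad n = 1, \dots, N .
\]
```

Roughly speaking, a_n collects the squared error norms at the time nodes and A the consistency terms; the restriction τ ≤ C₀h² is used when absorbing the nonlinear contributions into the left-hand side before the inequality can be applied.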
Remark 4.8. The estimates (4.16) and (4.18) cannot be improved within our optimal control setting. This is due to the regularity restrictions imposed by the nature of the problem. However, if y_t ∈ L²(0, T; H¹(Ω)), then the assumption τ ≤ Ch² can be dropped (see, e.g., [9]) and the estimate reads O(τ + h). Moreover, improved estimates in the L²(0, T; L²(Ω))-norm are expected to hold, using an appropriate duality argument; we will examine this issue in a subsequent work. Finally, we remark that discontinuous time-stepping schemes for linear problems typically exhibit nodal (in time) superconvergence (see, e.g., [32] and the references within) under enhanced regularity assumptions. However, it is not clear whether such properties hold, even for the uncontrolled Navier-Stokes equations with smooth solutions.
Hereafter, we will assume (4.21), i.e., τ ≤ C₀h². We establish a corollary of Theorem 4.7 that will be useful later.
Corollary 4.9. Assume that max{‖u‖_{L²(0,T;L²(Ω))}, ‖v‖_{L²(0,T;L²(Ω))}} ≤ M. Let y_u ∈ H^{2,1}(Ω_T) ∩ C([0, T]; Y) be the solution of (2.1) and y_σ(v) ∈ Y_σ the solution of the discrete equation (4.8) corresponding to the control v. Then there exists a constant C_M > 0 such that (4.22) holds. Moreover, if u_σ ∈ U_σ for every σ and u_σ ⇀ u weakly in L²(0, T; L²(Ω)), then the corresponding convergence holds.
Proof. From (4.18) and (4.21), we get a bound of the form C‖u − v‖_{L²(0,T;L²(Ω))} + Ch, where C depends on ‖y_Ω‖_{H¹(Ω)} and ‖y_v‖_{H^{2,1}(Ω_T)}, the latter bounded in terms of ‖v‖_{L²(0,T;L²(Ω))}. On the other hand, since G is of class C^∞, we can apply the mean value theorem to get (4.22), with C_M depending on M. Using (4.19), we can repeat the same argument to get the estimate in L^∞(0, T; L²(Ω_h)).
We finish this section by studying the differentiability of the mapping u → y_σ(u).
Theorem 4.10. The mapping G_σ : u → y_σ(u) is of class C^∞, and z_σ = G_σ′(u)v is the unique solution of problem (4.24), where we have set y_σ = y_σ(u).
Proof. Let us consider the mapping F_σ associated with the discrete equations. The proof is a consequence of the implicit function theorem; we need to prove that ∂F_σ/∂y_σ(y_σ(u), u) : Y_σ → Y_σ is an isomorphism for every u. In fact, we will prove that ∂F_σ/∂y_σ(y_σ, u) is an isomorphism for every (y_σ, u) ∈ Y_σ × L²(0, T; L²(Ω)). Since ∂F_σ/∂y_σ(y_σ, u) is a linear mapping between two spaces of the same finite dimension, it is enough to prove that it is injective. Suppose that ∂F_σ/∂y_σ(y_σ, u)z_σ = 0 for some z_σ ∈ Y_σ. Applying ∂F_σ/∂y_σ(y_σ, u)z_σ to z_σ and using that c(y_{n,h}, z_{n,h}, z_{n,h}) = 0, we obtain an estimate; an application of the discrete Gronwall inequality and the fact that z_{0,h} = 0 then imply that z_σ = 0.

4.2. Analysis of the discrete adjoint state equation.
In this section, as well as in the rest of the paper, the condition (4.21) is assumed. As a consequence of Theorem 4.10 and the chain rule, we get that J_σ : L²(0, T; L²(Ω)) → R is of class C^∞, and we have a first expression of its derivative, where y_σ = y_σ(u) = G_σ(u) and z_σ = G_σ′(u)v is the solution of (4.24). As usual in control theory, we introduce the adjoint state to simplify the expression of this derivative. To this end we consider the discrete adjoint state equation: we look for ϕ_σ ∈ Y_σ such that (4.25) holds. Observe that in the above system we first compute ϕ_{N_τ,h} from ϕ_{N_τ+1,h} = γ(y_{N_τ,h} − y_{Ω_h}) and then descend in n until n = 1. Unlike the discrete states y_σ, for the discrete adjoint states we will set ϕ_σ(t_{n−1}) = ϕ_{n,h} for every 1 ≤ n ≤ N_τ. System (4.25) corresponds to the discretization of the backward equation (3.5). Using that {y_σ}_σ is bounded in L^∞(0, T; H¹(Ω_h)) (Theorem 4.7), we can proceed in the same way as in the proof of Theorem 4.10 to obtain the existence and uniqueness of a solution of (4.25). Below we check that this is actually the discrete adjoint state equation: using (4.24) and (4.25), and from the obtained identity and the expression of J_σ′(u)v given above, we conclude the desired representation of the derivative. The next theorem states the error estimates in the approximation of the adjoint state equation.
Theorem 4.11. Given u ∈ L²(0, T; L²(Ω)), let y = y_u be the associated state, solution of (2.1), ϕ the associated adjoint state, solution of (3.5), y_σ = y_σ(u) the associated discrete state, solution of (4.8), and ϕ_σ the associated discrete adjoint state, solution of (4.25). Then {ϕ_σ}_σ is bounded in L^∞(0, T; H¹(Ω_h)) and there exists a constant C > 0, independent of σ and u, such that the stated estimates hold. As for the discrete adjoint states,
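The display (4.25) is not reproduced here. Given the description of the backward recursion, the discrete adjoint scheme presumably has the following form, sketched with the same forms a and c (the linearized transport terms being the adjoints of those appearing in (4.24)):

```latex
% For n = N_tau, ..., 1, find phi_{n,h} in Y_h such that,
% for all w_h in Y_h,
\[
  (\varphi_{n,h} - \varphi_{n+1,h},\, w_h)
  + \tau_n \Bigl[ a(\varphi_{n,h}, w_h)
      + c(w_h,\, y_{n,h},\, \varphi_{n,h})
      + c(y_{n,h},\, w_h,\, \varphi_{n,h}) \Bigr]
  = \int_{t_{n-1}}^{t_n} (y_\sigma - y_d,\, w_h)\,dt,
\]
% initialized backwards by
\[
  \varphi_{N_\tau + 1,\, h} = \gamma\,( y_{N_\tau, h} - y_{\Omega_h} ) .
\]
```

The recursion runs backwards in n, consistent with (4.25) being a discretization of the backward-in-time equation (3.5).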
we fix (R_σw)(t_{n−1}) = (R_σw)_{n,h}. Analogously to (4.12) and (4.14), we have corresponding estimates for R_σ. For the last term of (4.30), we first observe that it splits into three parts; the first two can be estimated in a similar way to the previous one, and for the last we obtain the required bound. Collecting all the estimates, we infer from (4.30) the desired inequality. To conclude the proof it is enough to use the discrete Gronwall inequality along with (4.11), (4.18), (4.21), and (4.29), together with the facts that the H^{2,1}(Ω_T)-norm of ϕ can be estimated by the L²(0, T; L²(Ω))-norm of y − y_d and the H¹(Ω)-norm of y_Ω, and that the L²(0, T; L²(Ω))-norm of y is estimated by the L²(0, T; L²(Ω))-norm of u.
As a consequence of the previous theorem we have the following result analogous to Corollary 4.9.
Corollary 4.12. Assume that max{‖u‖_{L²(0,T;L²(Ω))}, ‖v‖_{L²(0,T;L²(Ω))}} ≤ M. Let ϕ_u ∈ H^{2,1}(Ω_T) ∩ C([0, T]; Y) be the solution of (3.5) and ϕ_σ(v) ∈ Y_σ the solution of the discrete equation (4.25) corresponding to the control v. Then there exists a constant C_M > 0 such that the corresponding estimates hold, where C depends on ‖u‖_{L²(0,T;L²(Ω))}. We proceed analogously to get the estimate for ‖ϕ_u − ϕ_σ(v)‖_{L^∞(0,T;L²(Ω_h))}. Now, we estimate ϕ_u − ϕ_v in L²(0, T; H¹(Ω)) and L^∞(0, T; L²(Ω)), respectively. Let us set ϕ = ϕ_u − ϕ_v; then, subtracting the equations satisfied by ϕ_u and ϕ_v, taking w = ϕ, and using the standard identities, we deduce, by integration over the interval (t, T) and the value of ϕ(T), a first inequality. Since y_u, ϕ_v ∈ L^∞(0, T; H¹(Ω)), with norms estimated by a constant depending on M, we infer from the above inequality a bound on the gradient term; on the other hand, the Gronwall inequality then implies the remaining estimate, which, with the aid of the previous estimates, concludes the proof.

4.3. Convergence of the discrete control problem.
In this section we analyze the convergence of the solutions of control problems (P σ ) towards solutions of the continuous problem (P).Since these problems are not convex, we will also address the issue of the approximation of local solutions of problem (P).It is clear that every problem (P σ ) has at least one solution because it consists of the minimization of a continuous and coercive function on a nonempty closed subset of a finite dimensional space.The next theorem proves the convergence of these discrete solutions to solutions of problem (P).