Second Order Sufficient Optimality Conditions for Some State-constrained Control Problems of Semilinear Elliptic Equations

This paper deals with a class of optimal control problems governed by elliptic equations with nonlinear boundary condition. The case of boundary control is studied. Pointwise constraints on the control and certain equality and set-constraints on the state are considered. Second order sufficient conditions for local optimality of controls are established.


1. Introduction.
In contrast to the optimal control of linear systems with a convex objective, where first order necessary optimality conditions are already sufficient for optimality, higher order conditions such as second order sufficient optimality conditions (SSC) should be employed to verify optimality for nonlinear systems. Second order sufficient optimality conditions have also proved to be useful for showing important properties of optimal control problems such as local uniqueness of optimal controls and their stability with respect to certain perturbations. Moreover, they may serve as an assumption to guarantee the convergence of numerical methods in optimal control. In this respect, we refer to the general expositions by Maurer and Zowe [15] and Maurer [14] for different aspects of second order sufficient optimality conditions. The approximation of programming problems in Banach spaces is discussed in Alt [2]. Moreover, Alt [3], [4] has established a general convergence analysis for Lagrange-Newton methods in Banach spaces.
Meanwhile, an extensive number of publications have been devoted to different aspects of second order sufficient optimality conditions for control problems governed by ordinary differential equations. The well-known two-norm discrepancy has in particular received a good deal of attention. We refer, for instance, to Ioffe [13] and Maurer [14].
First investigations of second order sufficient optimality conditions for control problems governed by partial differential equations were published by Goldberg and Tröltzsch [11], [12] for the boundary control of parabolic equations with nonlinear boundary conditions. In [9], Casas, Tröltzsch, and Unger extended these ideas to elliptic boundary control problems with pointwise constraints on the control. Moreover, they narrowed the gap between second order necessary and sufficient optimality conditions. This was done by considering sets of strongly active constraints in the spirit of Dontchev et al. [10]. This technique is also related to the first order sufficient optimality conditions introduced by Maurer and Zowe [15]. It should be mentioned that as many as four norms have to be used in this case (the L^∞-norm for differentiation, the L^2-norm to formulate second order sufficient optimality conditions, the L^1-norm for the first order sufficient optimality condition, and certain L^p-norms to obtain optimal regularity results).
Bonnans [5] has shown that a very weak form of second order sufficient conditions can be used to verify local optimality for a particular class of semilinear elliptic control problems with constraints on the control: If the second order derivative of the Lagrange function is a Legendre form, then it suffices to have its positivity in all critical directions.
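The notion invoked here can be made explicit. The following display records the standard definition of a Legendre form from the literature on second order conditions (cf. the setting of Bonnans [5]); it is not spelled out in the text above:

```latex
% A quadratic form Q on a Hilbert space H is a Legendre form if it is
% sequentially weakly lower semicontinuous and satisfies
\[
u_k \rightharpoonup u \ \text{ in } H
\quad\text{and}\quad
Q(u_k) \to Q(u)
\;\Longrightarrow\;
u_k \to u \ \text{ strongly in } H .
\]
```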
In our paper, the results of [9] will be extended to additional constraints on the state. In this way, we continue the investigations by Casas and Tröltzsch [8] on second order necessary conditions. We also rely on general ideas of Maurer and Zowe [15], combining their approach with a detailed splitting technique.
At the beginning, we aimed to establish second order sufficient optimality conditions for boundary control problems governed by semilinear elliptic equations in domains of arbitrary dimension with general pointwise constraints on the control and the state. However, we soon recognized that pointwise state-constraints lead to essential and somewhat surprising difficulties. To establish second order sufficient optimality conditions for problems with pointwise state-constraints given on the whole domain, we had to restrict ourselves to two-dimensional domains with controls appearing linearly in the boundary condition. These obstacles might indicate some limits for the "traditional" type of second order sufficient optimality conditions for control problems governed by PDEs.
If pointwise state-constraints are imposed on compact subsets of the domain, while the other quantities are sufficiently smooth, then arbitrary dimensions can be treated without restrictions on the nonlinearities. In this case the adjoint state belongs to L^∞(Γ). Moreover, we are able to avoid the assumption of linearity of the boundary condition with respect to the control by introducing some extended form of second order optimality conditions.

2. The optimal control problem. We consider the problem: minimize the functional subject to the state equation (2.2), to the constraints (2.3), (2.4) on the state y, and to the constraints on the control u. In this setting, Ω ⊂ R^n is a bounded domain with a Lipschitz boundary Γ according to the definition by Nečas [17]. Moreover, sufficiently smooth functions f : Ω × R → R and g, b : Γ × R² → R are given. The symbol ∂_ν is used for the derivative in the direction of the unit outward normal ν on Γ. The functionals F_i : C(Ω) → R, i = 1, . . ., m, are supposed to be twice continuously Fréchet differentiable, that is, to be of class C². By E we denote a mapping of class C² from C(Ω) into a real Banach space Z. K ⊂ Z is a nonempty convex closed set, and u_a, u_b ∈ L^∞(Γ) with u_a ≤ u_b define the admissible set U_ad = {u ∈ L^∞(Γ) : u_a(x) ≤ u(x) ≤ u_b(x) a.e. on Γ}. The control u is looked for in the control space U = L^∞(Γ), while the state y is defined as a weak solution of (2.2) in the state space Y = C(Ω) ∩ H¹(Ω). We endow Y with the norm ‖y‖_Y = ‖y‖_{C(Ω)} + ‖y‖_{H¹(Ω)}. The following assumptions are imposed on the given quantities. In the next assumption, fixed parameters p > n − 1 and s, r are used, which depend on n. For the possible (minimal) choice of s and r we refer to the discussion of regularity in (3.13). Roughly speaking, we have y|_Γ ∈ L^s(Γ) and y ∈ L^r(Ω) in the linearized system (2.2) if u ∈ L²(Γ). As usual, s′ and r′ denote the associated conjugate exponents. For instance, s′ is defined by 1/s + 1/s′ = 1.
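The displayed formulas of the problem statement are not reproduced above. As an orientation for the reader, the following sketch assembles one plausible form of the problem from the data f, g, b, F_i, E, and U_ad introduced in the text; the particular elliptic operator shown is only an assumption, and the original displays may differ in detail:

```latex
\[
\min_{u \in U_{\mathrm{ad}}} \; J(y,u) \;=\; \int_\Omega f\bigl(x, y(x)\bigr)\,dx
\;+\; \int_\Gamma g\bigl(x, y(x), u(x)\bigr)\,dS(x)
\]
subject to the semilinear state equation with nonlinear boundary condition
\[
-\Delta y + y = 0 \ \text{ in } \Omega, \qquad
\partial_\nu y = b\bigl(x, y(x), u(x)\bigr) \ \text{ on } \Gamma,
\]
and to the state constraints
\[
F_i(y) = 0, \quad i = 1, \dots, m, \qquad E(y) \in K .
\]
```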
Downloaded 04/23/13 to 193.144.185.29. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php

(A2) For all M > 0 there are constants such that the estimates (i)-(iii) hold.

Remark 2.1. Notice that the estimates in (i)-(iii) imply boundedness and Lipschitz properties of b, f, g, b′, f′, g′ in several L-spaces. We omit them, because they follow from the mean value theorem.
(A3) (i) Let us define the norm ‖y‖₂ for y ∈ C(Ω), where A ⊂ Ω is a certain measurable compact subset. Here A stands for a set where we know y ∈ C(A) for Neumann boundary data given in L²(Γ). In the case n = 2 we may take A = Ω, while A ⊂ Ω is needed for n > 2. For A = ∅ we put ‖y‖_{C(A)} = 0. We assume that at a fixed reference state ȳ ∈ C(Ω) the corresponding estimate holds with some C_F > 0. Moreover, an analogous estimate with some C_M > 0 is supposed.
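The display defining the norm ‖·‖₂ is missing in the excerpt. A form consistent with the surrounding discussion (y|_Γ ∈ L^s(Γ) and y ∈ L^r(Ω) for controls in L²(Γ), together with continuity on the compact set A) would be the following; the exact exponents in the original may differ:

```latex
\[
\|y\|_2 \;=\; \|y\|_{C(A)} + \|y\|_{L^r(\Omega)} + \|y\|_{L^s(\Gamma)} ,
\qquad y \in C(\bar\Omega).
\]
```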
We shall explain the main constructions of our paper by a canonical example (P) that fits in the general setting. In particular, it holds that ‖y‖_{C(Ω)} ≤ M. Casas and Tröltzsch [8] have proved that the mapping u → y(u) from L^∞(Γ) into Y is of class C². Furthermore, the Lipschitz property holds for all u₁, u₂ ∈ U_ad, where C₂ is a positive constant and ‖·‖₂ is defined in (A3). For fixed u ∈ U_ad we have b(·, y, u) ∈ L^p(Γ); hence the weak solution y ∈ Y of (2.2) belongs to a space which is known to be continuously embedded into Y = C(Ω) ∩ H¹(Ω) for each q > n/2 and each p > n − 1.
In all of what follows we assume that a reference pair (ȳ, ū) ∈ Y × U_ad is given, satisfying, together with an associated adjoint state ϕ̄ ∈ W^{1,σ}(Ω) for all σ < n/(n − 1) and with Lagrange multipliers λ̄ and z̄*, the associated standard first order necessary optimality conditions. We will just assume them. They can be proved following Casas [7], Bonnans and Casas [6], or Zowe and Kurcyusz [23]. The first order optimality system to be satisfied by (ȳ, ū) consists of the state equation (2.2), the constraint ū ∈ U_ad, the adjoint equation for the adjoint state ϕ̄, the complementary slackness condition

⟨z̄*, κ − E(ȳ)⟩ ≤ 0 ∀κ ∈ K, (3.4)

and the variational inequality (3.5) (for the regularity of the adjoint state see Casas [7]; the reader is also referred to Stampacchia [20] for the Dirichlet case). In view of this, we may write ϕ̄ as the sum of ϕ₀, ϕ_i, and ϕ_E, which solve the equations (3.6), respectively. We have at least ϕ₀, ϕ_i, and ϕ_E in W^{1,σ}(Ω). Moreover, ϕ̄ satisfies the formula (3.7) of integration by parts. It is easy to verify that the optimality conditions can be expressed by the Lagrange function L defined in (3.8); therefore, this definition makes sense. In (3.8), ⟨·, ·⟩ denotes the duality pairing between Z and its dual space Z*. The Lagrange function L is of class C² with respect to (y, u) for fixed ϕ, λ, and z*.
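The definition (3.8) of the Lagrange function is not reproduced above. A standard form consistent with the multipliers ϕ, λ = (λ₁, …, λ_m), and z* appearing in the optimality system is sketched below; the pairing with the state equation is written schematically, and the exact display in the original may differ:

```latex
\[
L(y,u,\varphi,\lambda,z^*)
\;=\; F_0(y,u)
\;+\; \bigl\langle \varphi,\ \text{(weak residual of the state equation (2.2))} \bigr\rangle
\;+\; \sum_{i=1}^{m} \lambda_i\, F_i(y)
\;+\; \langle z^*, E(y) \rangle_{Z^*\!,\,Z} .
\]
```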
Thanks to (3.7), the optimality system can be rewritten in terms of L. Then it is expressed by (2.6), the constraints on the state (2.3), (2.4), the constraints on the control u ∈ U_ad, and the conditions (3.9)-(3.11). This form is more convenient for our later evaluations.
Example. In (P), the adjoint equation and the variational inequality are given explicitly; there, δ(0) is the Dirac measure concentrated at the origin.
To shorten our notation, derivatives taken at (ȳ, ū, ϕ̄, λ̄, z̄*) will be indicated by a bar. For instance, L̄_y y and L̄_u(u − ū) stand for the derivatives in (3.9) and (3.10), respectively. L̄_yy[y₁, y₂] denotes the second order derivative of L in the directions y₁, y₂ taken at (ȳ, ū, ϕ̄, λ̄, z̄*). Moreover, L̄_ww[w₁, w₂] is the second order derivative of L in the directions w₁, w₂. Next we consider the linear system (3.12), where β ∈ L^∞(Γ) is nonnegative. For each pair (f, g) ∈ L¹(Ω) × L¹(Γ), this system admits a unique solution y ∈ W^{1,σ}(Ω), where σ < n/(n − 1); see Casas [7]. (Notice that a function of L¹ can be considered as a Borel measure.) On the other hand, for more regular data the solution y of (3.12) belongs to better spaces. This regularity result is well known for domains with C¹-boundary. Moreover, it remains true for domains with Lipschitz boundary in the sense of Nečas [17] (see Stampacchia [19] and Murthy and Stampacchia [16]). On account of this, the mapping D assigning the solution to the data maps into certain L-spaces; we obtain these spaces by embedding results for W^{1,σ}(Ω) [1], [17], [20]. In both cases, this mapping is linear and continuous. Interpolation theory applies to show the following results for D considered as a mapping defined on L²(Ω) × L²(Γ).
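The linear auxiliary system (3.12) itself is lost in the excerpt. A form matching its description (data (f, g) ∈ L¹(Ω) × L¹(Γ), nonnegative coefficient β ∈ L^∞(Γ) on the boundary) is, as an assumption:

```latex
\[
-\Delta y + y = f \ \text{ in } \Omega, \qquad
\partial_\nu y + \beta\, y = g \ \text{ on } \Gamma .
\]
```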

4. Regularity condition and linearization theorem.
Let us recall that we consider a fixed reference pair (ȳ, ū) satisfying, together with (ϕ̄, λ̄, z̄*), the first order necessary conditions (3.9)-(3.11).
The linearized cone of U_ad at ū is the set C(ū). For convenience, we introduce the set M of all feasible pairs (y, u) (i.e., u ∈ U_ad and y satisfies the state-constraints); notice that G is the nonlinear control-state mapping. Following Maurer and Zowe [15], the linearized cone L(M, w̄) at w̄ = (ȳ, ū) is defined by linearized constraints, where the condition (4.5) is imposed if y(0) = y₀ (active state constraint). If the state constraint is not active, then (4.5) disappears.
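The displays defining C(ū) and L(M, w̄) are lost above. The standard Maurer-Zowe construction, adapted to the data of this problem, would read as follows (a sketch assuming box constraints u_a ≤ u ≤ u_b and the state constraints F_i(y) = 0, E(y) ∈ K; here K(E(ȳ)) denotes the conical hull of K − E(ȳ)):

```latex
\begin{align*}
C(\bar u) &= \bigl\{ u \in L^\infty(\Gamma) :\;
u(x) \ge 0 \ \text{where } \bar u(x) = u_a(x),\;\;
u(x) \le 0 \ \text{where } \bar u(x) = u_b(x) \bigr\},\\[2pt]
L(M,\bar w) &= \bigl\{ (y,u) :\; u \in C(\bar u),\;
y = G'(\bar u)\,u,\;
F_i'(\bar y)\,y = 0,\ i = 1,\dots,m,\;
E'(\bar y)\,y \in K\bigl(E(\bar y)\bigr) \bigr\}.
\end{align*}
```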
The following regularity assumption (R) is basic for our further analysis. To formulate (R) we combine the two state constraints into one general constraint. We therefore take T and put K(T(ȳ)) = {0} × K(E(ȳ)). The regularity condition was introduced by Zowe and Kurcyusz [23] and requires (R). This condition is sufficient for the existence of a (nondegenerate) Lagrange multiplier associated to the state-constraint E(y) ∈ K; see [23]. We should emphasize that (R) does not rely on the condition int K ≠ ∅. In Appendix 7.1 we shall present some sufficient conditions for (R) which, however, require int K ≠ ∅. (R) is discussed there for the canonical example (P). Theorem 4.2 states the corresponding linearization result, with a remainder measured in the norm ‖r‖ = ‖r_y‖₂ + ‖r_u‖_{L²(Γ)}; in the particular case that b(x, y, u) is linear in u, the estimate can be sharpened. This theorem is proved in Appendix 7.2. Let us conclude this section by considering some useful estimates for L and for certain remainder terms. First, we evaluate L̄″[w]², where L″ denotes the second order derivative of L with respect to (y, u).

Example. In the case of (P), L″ admits an explicit form. The term connected with ϕ̄ causes trouble; more precisely, an estimate of I is needed with respect to the norm ‖y‖₂ + ‖u‖_{L²(Γ)} (cf. (4.19)). We therefore have to require at least ϕ̄ ∈ L²(Γ) in the second item and ϕ̄ ∈ L^∞(Γ) in the third one. On the other hand, only ϕ̄ ∈ L^r(Γ) follows from ϕ̄ ∈ W^{1,σ}(Ω) for r < (n − 1)/(n − 2); see Nečas [17, p. 84]. For n = 2 we obtain ϕ̄ ∈ L^r(Γ) for all r < ∞, while n = 3 yields the regularity ϕ̄ ∈ L^r(Γ) for all r < 2. On account of this, the following additional assumption is crucial for our analysis.
(A4) Let one of the following statements (i)-(iv) be true. We briefly comment on the consequences of these assumptions: (i) is true if f̄_y ∈ L^q(Ω), ḡ_y ∈ L^p(Γ) and if the restrictions of F′_i, i = 1, . . ., m, and E′(ȳ)*z̄* to Ω and Γ, respectively, belong to L^q(Ω) and L^p(Γ) as well. Moreover, (i) holds for functionals F′_i, i = 1, . . ., m, and E′(ȳ)*z̄* of C(Ω)*, where the associated real Borel measures are concentrated on the set A ⊂ Ω.
In addition to some assumptions on the regularity of ϕ̄ for n = 3, 4, (ii) requires linearity of b with respect to u, that is, b(x, y, u) = b₀(x, y) + b₁(x, y)u; (iii) means that b(x, y, u) = b₁(x, y) + b₂(x)u; while (iv) is true only for an affine-linear boundary condition (but it still allows a nonlinear functional F₀).
(A4) is obviously satisfied in the example (P). As a consequence of (A3) and (A4), pointwise state-constraints on the whole set Ω can only be handled by the standard part of our theory if u appears linearly in the boundary condition and n = 2. In the considerations below, we denote by r^T_i the remainder term associated with the ith order Taylor expansion of a mapping T. For instance, first and second order expansions of b(x, y, u) are used at the triplets (x, ȳ, ū) and (x, y, u), with intermediate points involving some ϑ ∈ (0, 1). A bar over L indicates that L and its derivatives are taken at (ȳ, ū, ϕ̄, λ̄, z̄*). On account of the assumptions (A1)-(A4), we are able to verify the estimates (4.17)-(4.19). The constant C_L > 0 depends in particular on ϕ̄. For the definition of η we refer to the assumption (A2). The analysis of (4.17)-(4.19) is performed in Appendix 7.3.

5. Standard second order sufficient optimality condition. Our main aim is to establish sufficient optimality conditions close to the necessary ones derived in Casas and Tröltzsch [8]. Therefore, we also include certain first order sufficient optimality conditions. We shall combine an approach going back to Maurer and Zowe [15] with a splitting technique introduced by Dontchev et al. [10]. The method of [10] was focused on the optimal control of ordinary differential equations. It was later extended by the authors in [9] to the case of elliptic equations without state-constraints.
In [15], Maurer and Zowe introduced first order sufficient optimality conditions for differentiable optimization problems subject to a general constraint g(w) ≤ 0. For our problem, the application of their approach in its full generality is rather technical. Therefore, in an initial step we incorporate the first order sufficient optimality condition only for the constraints on the control. Later, we shall deal in the same way with additional state-constraints. The role of first order sufficient conditions can be explained most easily by the minimization problem min{f(x) | x_a ≤ x ≤ x_b}, where f : R^n → R is of class C². Let x̄ satisfy the first order necessary conditions (variational inequality). If n = 1, then f′(x̄) ≠ 0 implies that x̄ is a local minimizer (even for concave f). Therefore, the second order sufficient optimality condition f″(x̄) > 0 is needed only in the case f′(x̄) = 0, where the first order necessary condition is not sufficient. The situation is similar for n > 1: the positive definiteness of f″(x̄) has to be required only on the subspace of directions h with h_i = 0 for all i such that D_i f(x̄) ≠ 0. Define for fixed τ > 0 (arbitrarily small) the set Γ_τ of "strongly active" control constraints (cf. (3.5)). In other words, Γ_τ = {x ∈ Γ : |L̄_u(ȳ, ū, ϕ̄, λ̄, z̄*)(x)| ≥ τ} is the set where the gradient of the objective (expressed as a function of the control) is sufficiently steep. In the example above, τ can be chosen as the minimal value of all nonvanishing |D_i f(x̄)|. We mention at this point the relation

⟨z̄*, E′(ȳ)y⟩ ≤ 0 ∀ (y, u) ∈ L(M, w̄), (5.1)

which follows from ⟨z̄*, E′(ȳ)y⟩ = ⟨z̄*, κ − E(ȳ)⟩ ≤ 0 in view of (3.4).
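In the finite-dimensional model problem just discussed, the missing display for the set of strongly active indices and the associated subspace can be written out; this is the standard construction the text alludes to:

```latex
\[
I_\tau = \{\, i : |D_i f(\bar x)| \ge \tau \,\}, \qquad
S = \{\, h \in \mathbb{R}^n : h_i = 0 \ \text{for all } i \in I_\tau \,\},
\]
\[
h^\top f''(\bar x)\, h \;\ge\; \delta\, |h|^2 \qquad \forall\, h \in S .
\]
```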
Let P_τ : L^∞(Γ) → L^∞(Γ) denote the projection operator u → χ_{Γ\Γ_τ} u = P_τ u. In other words, (P_τ u)(x) = u(x) holds on Γ \ Γ_τ, while (P_τ u)(x) = 0 holds on Γ_τ. We begin with our first and at the same time simplest second order sufficient optimality condition.

(SSC) There exist positive numbers τ and δ such that the coercivity estimate (5.2) holds for all pairs w₂ = (y₂, u₂) constructed in the following way: For every w = (y, u) ∈ L(M, w̄), we split up the control part u into u₁ = (u − P_τ u) and u₂ = P_τ u. The solutions of the linearized state equation associated with u_i are denoted by y_i, i = 1, 2. By this construction, we get the representation w = w₁ + w₂ = (y₁, u₁) + (y₂, u₂).

Remark 5.1. The coercivity condition (5.2) of (SSC) is required on the whole set L(M, w̄) if Γ_τ is empty. This rather strong second order condition is obtained by the formal setting τ = ∞.
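The coercivity estimate (5.2) of (SSC) is not reproduced in the excerpt. By analogy with the second order conditions of [9] and with the remainder estimates (4.17)-(4.19), it should be an estimate of the following type (a sketch; the exact display may differ):

```latex
\[
\bar L''\bigl[(y_2,u_2)\bigr]^2 \;\ge\; \delta\, \|u_2\|_{L^2(\Gamma)}^2
\qquad \text{for all admissible } w_2 = (y_2,u_2).
\]
```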
Proof. We denote by l̄ = (ϕ̄, λ̄, z̄*) the triplet of Lagrange multipliers appearing in the first order necessary optimality conditions. Let an arbitrary feasible pair ŵ = (ŷ, û) be given. Then the corresponding relation follows from F(ŵ) = F(w̄) = 0. The complementary slackness condition implies that the associated multiplier term is nonpositive. Hence we can neglect this term, and a second order Taylor expansion yields an estimate in which l_u(x) = g_u(x, ȳ(x), ū(x)) + ϕ̄(x) b_u(x, ȳ(x), ū(x)). Using the variational inequality, we find a lower bound. Let us introduce for convenience the bilinear form B = L″(w̄, l̄). Next we approximate ŵ − w̄ by w = (y, u) ∈ L(M, w̄), according to Theorem 4.2. In this way we get a remainder r = (r_y, r_u) = ŵ − w̄ − w satisfying the estimate (5.8). We have w ∈ L(M, w̄); hence (SSC) applies to B[w]². Now we substitute in B[w]² the representation w = w₁ + w₂ described in (SSC) and deduce the coercivity estimate from (SSC) and (4.19). In the following, c will denote a generic constant. Suppose that ε < 1 is given and assume ‖û − ū‖_{L^∞(Γ)} < ε. Then ‖y_i‖₂ ≤ c‖u_i‖_{L²(Γ)} and Young's inequality together yield (5.9). The expression under the third integral is estimated by ‖û − ū‖_{L^∞(Γ)} |û − ū|. In the other integrals (except the first) we insert (5.8) and derive the next bound. The same type of estimate applies to B[r]². Altogether, (5.11) is obtained. By substituting (5.11) in (5.7), we get a lower estimate. Using this in the first integral, setting δ′ = min{τ/2, δ/2}, and substituting the estimate (4.18) for ‖r‖_{L²}, we complete our estimation for sufficiently small ε > 0.
Our condition (SSC) does not have the form expected from a comparison with second order conditions in finite dimensional spaces. In particular, the pair (y₂, u₂) constructed in (SSC) does not in general belong to L(M, w̄). To overcome this difficulty, we introduce another regularity condition (R)_τ that is stronger than (R). This new constraint qualification is similar to the one used in Casas and Tröltzsch [8] to derive second order necessary conditions.
Let C_τ(ū) denote the set of controls u ∈ C(ū) having the property u = 0 on Γ_τ. On using (R)_τ, we are able to show that the following second order sufficient optimality condition implies (5.4) as well.

(SSC)_τ There exist positive numbers τ and δ such that the coercivity estimate holds for all (y, u) ∈ L(M, w̄) with u ∈ C_τ(ū).

Proof. The proof is almost identical to that of Theorem 5.2. The only difference consists in a more detailed splitting. In the first part of the proof we repeat the steps up to the splitting w = w₁ + w₂ after (5.8). Define Φ = T ∘ G. Then we find that w₂ = (y₂, u₂) does not in general belong to the linearized cone. Thanks to the regularity condition (R)_τ, the linear version of the Robinson-Ursescu theorem (see Robinson [18]) implies the existence of u_H in C_τ(ū) with the following properties: The stated inclusion holds, and the estimate (5.13) is satisfied (see the proof of Theorem 4.2 in the appendix). In other words, we find a pair w_H = (y_H, u_H) in L(M, w̄) with u_H = 0 on Γ_τ. Hence, (SSC) applies to B[w_H]². Moreover, the control u_H is sufficiently close to u₂. Now we define ũ₂ = u_H and ũ₁ = u₁ + (u₂ − u_H). Further, let ỹ_i = G′(ū)ũ_i denote the corresponding solutions of the linearized state equation. Then w̃_i = (ỹ_i, ũ_i) is substituted for w_i = (y_i, u_i), i = 1, 2. The only difference between the proofs of Theorems 5.2 and 5.3 appears between the formulas (5.8) and (5.9): we use the splitting w = w̃₁ + w̃₂ instead of w = w₁ + w₂. Moreover, the first line of the estimate (5.9) is changed by means of the estimate (5.13). Then we proceed word for word as in the proof of Theorem 5.2.
The paper [15] shows that "strongly active" state-constraints may also contribute terms to the first order sufficient optimality conditions. However, this leads to a rather technical construction and more restrictive assumptions. We have to suppose that the function b is linear with respect to the control u and that n = 2. The corresponding theorem is stated below. Define for fixed β > 0 and τ > 0 the subset L_{β,τ}(M, w̄) of L(M, w̄) where the term ⟨z̄*, E(y)⟩ does not contribute much to the first order sufficient optimality condition. It is only on this set L_{β,τ}(M, w̄) that we have to require second order conditions, namely, the following condition.

(SSC′) There exist positive numbers β, τ, and δ such that the coercivity estimate holds for all w₂ = (y₂, u₂) obtained in the same way as introduced in (SSC), by elements w taken from the smaller set L_{β,τ}(M, w̄). Using this condition, we formulate the following.
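The display defining L_{β,τ}(M, w̄) is lost in the excerpt. Guided by the description (the multiplier term should not contribute much to the first order condition) and by the sign relation (5.1), a natural candidate is the following; this is an assumption, not the original display:

```latex
\[
L_{\beta,\tau}(M,\bar w) \;=\;
\bigl\{\, (y,u) \in L(M,\bar w) :\;
\langle \bar z^*, E'(\bar y)\, y \rangle \;\ge\; -\,\beta\, \|u\|_{L^2(\Gamma)} \,\bigr\}.
\]
```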
Case I: w = (y, u) ∈ L(M, w̄) \ L_{β,τ}(M, w̄). This is the case where we deduce (5.17). Suppose that G(0, ξ) ≥ γ > 0 on Γ. Then (5.24) is fulfilled with β = ‖z̄*‖γ for all u ≤ 0. Moreover, all u ≥ 0, u ≠ 0 do not contribute to L(M, w̄). Therefore, the coercivity condition (5.14) is needed only for all u having positive and negative parts u⁺ and u⁻, where u⁺ dominates u⁻. However, this information does not essentially improve (SSC). Recall the cone C(ū). Let us redefine L(M, w̄) by substituting cl C(ū) for C(ū) and require (SSC) in this form. Then (SSC) appears to be stronger, and Theorem 5.2 holds as well, since cl C(ū) ⊃ C(ū). However, it can be proved by (R) and the generalized open mapping theorem that (SSC) based on cl C(ū) is in fact equivalent to (SSC) established with C(ū). This follows by continuity arguments.

6. Extended second order conditions.
A study of the preceding sections reveals that (SSC) is sufficient for local optimality in any dimension of Ω without restrictions on the form of the nonlinear function b, whenever (A3) is satisfied and ϕ̄ ∈ L^∞(Γ). ϕ̄ is bounded and measurable if pointwise state-constraints are given only on compact subsets of Ω, with the other quantities being sufficiently smooth. In two-dimensional domains, pointwise state-constraints can be imposed on Ω if b(x, y, u) is linear with respect to u. An extension to ϕ̄ ∈ L^r(Γ) requires stronger assumptions on b. However, we shall briefly sketch in this section that some extended form of (SSC) may partially improve the results for n ≤ 3.
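The next paragraph introduces a ϕ̄-weighted norm on L^∞(Γ) whose defining display is lost. A candidate consistent with the two properties stated there (‖u‖_ϕ ≥ ‖u‖_{L²(Γ)}, and equivalence with ‖·‖_{L²(Γ)} when ϕ̄ ∈ L^∞(Γ)) is, as an assumption:

```latex
\[
\|u\|_{\varphi} \;=\;
\Bigl( \int_\Gamma \bigl( 1 + |\bar\varphi(x)| \bigr)\, |u(x)|^2 \, dS(x) \Bigr)^{1/2} .
\]
```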
Let us assume ϕ̄ ∉ L^∞(Γ). Then it seems natural to introduce in L^∞(Γ) another norm ‖·‖_ϕ. This definition is justified, as u ∈ L^∞(Γ) and y ∈ C(Ω) hold in all parts of our paper. For ϕ̄ ∈ L^∞(Γ), the new norm is equivalent to ‖u‖_{L²(Γ)}. To get rid of the restrictions imposed on b in (A4), we redefine the set of strongly active control constraints Γ_τ accordingly. Moreover, we substitute a corresponding condition; here we have invoked (7.15) for sufficiently large s (n = 2, 3). Now a careful study of the proof of Theorem 5.2 shows that (A4) can be removed on using (6.3) and (6.4). Assuming (6.2), we arrive at the estimate (5.4) with ‖û − ū‖²_ϕ instead of ‖û − ū‖²_{L²(Γ)}. Then (5.4) follows from ‖u‖_ϕ ≥ ‖u‖_{L²(Γ)}. The same arguments apply to the first order sufficient conditions in Theorem 5.4 for n = 2 if we redefine L_{β,τ}(M, w̄) by substituting the new norm in (5.15).

(a) K = Z (no inequality constraints). Then (R) means F′(ȳ)G′(ū)C(ū) = R^m. This condition is satisfied if, in addition to the surjectivity property F′(ȳ)Ŷ = R^m, the following holds: there is a ũ ∈ int_{L^∞(Γ)} U_ad with F′(ȳ)ỹ = 0. Here, ỹ denotes the solution of the linearized state equation. The corresponding estimates are satisfied with certain constants C_p, C₂. If b_u(x, y, u) does not depend on y and u, then we have the stronger estimate (7.9).

Proof. We use the first order expansion of b at (x, ȳ, ū) and obtain from (2.2), (7.6), and (4.11) a linear system for the difference of the states, where M depends on U_ad (notice that the boundedness of U_ad implies a uniform bound on all admissible states). Therefore, the discussion of (3.12) yields the Lipschitz property in the norm ‖y‖₂ for y. For p > n − 1, we continue the estimation and obtain (7.7) and (7.8). If b_u does not depend on (y, u), this yields (7.9).
Consequently, for y = G′(ū)u, we have (y, u) ∈ L(M, w̄), and the estimates stated in (4.6) and (4.8) follow immediately. (4.7) is proved completely analogously. Here, e(v) is defined by (7.8), ‖·‖_Y is to be replaced by ‖·‖₂, and ‖·‖_{L²(Γ)} is to be substituted for ‖·‖_{L^∞(Γ)}. We rely on the continuity of Φ′(ȳ) in the L²-norm.

7.3. Estimates of the Lagrange function.
In this subsection we derive the estimates (4.17)-(4.19). The required boundary regularity of ϕ̄ is true for sufficiently large s. In the case n ≥ 4 we repeat the analysis of the case n = 3. This leads to the additional assumption ϕ̄ ∈ L^r(Γ) for some r > (n − 1)/2. Now it is easy to derive the estimates (4.17)-(4.19). One further term contributes to ‖r‖_{L²}. The other terms of ‖r‖_{L²} are handled by the estimates for second order derivatives in (A1)-(A3) in a direct way. Simple evaluations of this type verify (4.17)-(4.18). We leave the details to the reader.

(A1) For each fixed x ∈ Ω or x ∈ Γ, respectively, the functions f = f(x, y), g = g(x, y, u), and b = b(x, y, u) are of class C² with respect to (y, u). For fixed (y, u), they are Lebesgue measurable with respect to x ∈ Ω or x ∈ Γ, respectively. Throughout the paper, partial derivatives are indicated by associated subscripts. For instance, b_{yu} stands for ∂²b/∂y∂u. By b′(x, y, u) and b″(x, y, u) we denote the gradient and the Hessian matrix of b with respect to (y, u):

b′(x, y, u) = ( b_y(x, y, u), b_u(x, y, u) ),    b″(x, y, u) = \begin{pmatrix} b_{yy}(x, y, u) & b_{yu}(x, y, u) \\ b_{uy}(x, y, u) & b_{uu}(x, y, u) \end{pmatrix};

|b′| and |b″| are defined by adding the absolute values of all entries.