Poincaré and log-Sobolev inequality for stationary Gaussian processes and moving average processes

For stationary Gaussian processes, we obtain necessary and sufficient conditions for the process-level Poincaré and log-Sobolev inequalities and provide the sharp constants. An extension to moving average processes is also presented, together with several concrete examples.


Introduction and Main Results
Let $X := (X_n)_{n\in\mathbb{Z}}$ be a real-valued stationary Gaussian process with
$$\mathbb{E}X_n = 0,\qquad \mathrm{Cov}(X_n, X_m) = \sigma^2\rho(n-m),\qquad \forall n, m \in \mathbb{Z}. \tag{1.1}$$
This is a very important class of stochastic processes, both in theory and in applications. Limit theorems for stationary Gaussian processes are abundant; see Avram [1] for the central limit theorem, Donsker and Varadhan [3] and L. Wu [9] for large deviations, H. Djellout et al. [2] for moderate deviations, and the references therein.
In this note we are mainly interested in the Poincaré inequality and the logarithmic Sobolev inequality on the product space $\mathbb{R}^{\mathbb{Z}}$ for the law of the process $X$. As developed by Ledoux [5] and many other authors, the Poincaré or log-Sobolev inequality yields sharp concentration inequalities, which are much more robust than the limit theorems quoted above.
We begin by describing those two inequalities on $E := \mathbb{R}^{\mathbb{Z}}$. Regarding $l^2(\mathbb{Z})$ as a tangent space of $E = \mathbb{R}^{\mathbb{Z}}$, let $\nabla$ be the corresponding gradient: for a function $F$ on $E$ which is derivable in each coordinate $x_i$, $i \in \mathbb{Z}$ (i.e., $\partial_{x_i} F$ exists), let
$$\nabla F(x) := \big(\partial_{x_i} F(x)\big)_{i\in\mathbb{Z}}.$$
If $F : E \to \mathbb{R}$ and $\nabla F : E \to l^2(\mathbb{Z})$ are continuous, we say that $F \in C^1(E)$.

Definition 1.1:
We say that an $\mathbb{R}$-valued stochastic process $X = (X_n)_{n\in\mathbb{Z}}$ satisfies the Poincaré inequality if there is some best constant $c_P(X) \in \mathbb{R}^+$ such that
$$\mathrm{Var}_P(F(X)) \le c_P(X)\, \mathbb{E}_P\, |\nabla F(X)|^2_{l^2(\mathbb{Z})},\qquad \forall F \in C^1(E),$$
and that $X$ satisfies the log-Sobolev inequality (LSI in short) if there is some best constant $c_{LS}(X) \in \mathbb{R}^+$ such that
$$\mathrm{Ent}_P\big(F^2(X)\big) \le 2\, c_{LS}(X)\, \mathbb{E}_P\, |\nabla F(X)|^2_{l^2(\mathbb{Z})},\qquad \forall F \in C^1(E).$$
Here
$$\mathrm{Ent}_P(F(X)) := \mathbb{E}_P\!\left[F(X) \log \frac{F(X)}{\mathbb{E}_P F(X)}\right]$$
for $P$-integrable nonnegative $F(X)$ is the Kullback entropy.
The same definition and notation apply also to random vectors in $\mathbb{R}^n$. It is well known that $c_P(X) \le c_{LS}(X)$ (cf. [5]).
For the stationary Gaussian process given in (1.1), if $\rho(m) = 0$ for all $m \ne 0$, it becomes an i.i.d. sequence, for which it is now well known (due to Gross [6], see [5]) that
$$c_P(X) = c_{LS}(X) = \sigma^2. \tag{1.4}$$
To state our main result, let us introduce the (nonnegative and bounded) spectral measure $\mu$ on the torus $\mathbb{T}$, identified with $[-\pi, \pi]$, which is determined by (Bochner's theorem)
$$\sigma^2 \rho(m) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{imt}\, \mu(dt),\qquad \forall m \in \mathbb{Z}.$$
The main result of this note is:

Theorem 1.2: For the stationary Gaussian process in (1.1), we have
$$c_P(X) = c_{LS}(X) = \|f\|_\infty,$$
where $f := d\mu/dt$ and $\|f\|_\infty = \mathrm{esssup}_t f(t)$ (with the convention $\|f\|_\infty := +\infty$ if $\mu$ is not absolutely continuous). In particular, $X$ satisfies the Poincaré or log-Sobolev inequality iff $\mu \ll dt$ and the density $f := d\mu/dt$ is bounded.
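Theorem 1.2 can be illustrated numerically. The following sketch (an addition, not part of the note) takes the made-up geometric correlation $\rho(m) = r^{|m|}$ with $\sigma = 1$, whose spectral density is $f(t) = (1-r^2)/(1 - 2r\cos t + r^2)$ with $\sup f = (1+r)/(1-r)$, and checks by power iteration that the largest eigenvalue of the Toeplitz covariance matrix approaches $\|f\|_\infty$ from below:

```python
import math
import random

def lambda_max(mat, iters=200, seed=0):
    """Estimate the largest eigenvalue of a symmetric positive semi-definite
    matrix by power iteration."""
    rng = random.Random(seed)
    n = len(mat)
    v = [rng.random() + 0.1 for _ in range(n)]   # positive start vector
    lam = 1.0
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

# Hypothetical geometric correlation rho(m) = r^{|m|} (an example choice):
# spectral density f(t) = (1 - r^2)/(1 - 2 r cos t + r^2), sup f = (1 + r)/(1 - r).
r = 0.5
n = 81                                     # size of the Toeplitz matrix Gamma_n
gamma_n = [[r ** abs(i - j) for j in range(n)] for i in range(n)]

f_sup = (1 + r) / (1 - r)                  # = 3 for r = 0.5
lam = lambda_max(gamma_n)
print(lam, f_sup)                          # lam approaches f_sup from below as n grows
```

Taking larger $n$ moves the estimate closer to $\|f\|_\infty$, in line with $c_P(X) = \sup_n \lambda_{\max}(\Gamma_n)$ in the proof below.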
Remark 1.4: For the stationary Gaussian process $X = (X_k)_{k\in\mathbb{Z}}$ with law $P$ (on $\mathbb{R}^{\mathbb{Z}}$), one can consider the abstract Wiener space $(\mathbb{R}^{\mathbb{Z}}, H, P)$, where $H \subset \mathbb{R}^{\mathbb{Z}}$ is the Cameron–Martin space associated with $P$. It is not difficult (but already more difficult than the proof of Theorem 1.2) to check that for any smooth $F : \mathbb{R}^{\mathbb{Z}} \to \mathbb{R}$ depending only on a finite number of variables,
$$\mathbb{E}\, |\nabla_H F|_H^2(X) = \mathbb{E}\, \langle \Gamma \nabla F(X), \nabla F(X)\rangle,$$
where $\nabla_H$ is the Malliavin gradient, and $\Gamma = (\sigma^2\rho(k-l))_{k,l\in\mathbb{Z}}$ is the covariance matrix of $X$. By the Gross theorem,
$$\mathrm{Ent}_P\big(F^2(X)\big) \le 2\, \mathbb{E}\, |\nabla_H F|_H^2(X) \tag{1.8}$$
(an easy derivation of it is given in the proof of Lemma 2.1). This log-Sobolev inequality, though involving the covariance structure of $X$, is however less convenient (than Theorem 1.2) for the derivation of concentration inequalities. When $\Gamma : l^2(\mathbb{Z}) \to l^2(\mathbb{Z})$ is bounded, the right hand side (r.h.s. in short) of (1.8) is bounded by $2\,\|\Gamma\|\, \mathbb{E}_P\, |\nabla F(X)|^2_{l^2(\mathbb{Z})}$, where $\|\Gamma\|$ is the operator norm.
This note is organized as follows. The next section is devoted to the proof of Theorem 1.2 and its counterpart for continuous time Gaussian processes. In §3, we present an extension to moving average processes and provide several examples.

Proof of Theorem 1.2 and the continuous time counterpart

Proof of Theorem 1.2
It is based on the following (known) observation.

Lemma 2.1: For a $d$-dimensional random vector $X$ of law $N(0, \Gamma)$, where $\Gamma$ is the covariance matrix, we have
$$c_P(X) = c_{LS}(X) = \lambda_{\max}(\Gamma),$$
where $\lambda_{\max}(\Gamma)$ denotes the maximal eigenvalue of $\Gamma$, i.e., $\lambda_{\max}(\Gamma) = \sup_{|x|=1} \langle \Gamma x, x\rangle$.

We give its proof for the convenience of the reader and especially for its simplicity.

Proof: Let $\xi$ be a random vector of law $N(0, I)$ on $\mathbb{R}^d$. Then $X$ and $\sqrt{\Gamma}\,\xi$ have the same law. Therefore by (1.4), we have for any bounded $C^1$ function $F$ on $\mathbb{R}^d$,
$$\mathrm{Ent}\big(F^2(X)\big) = \mathrm{Ent}\big((F\circ\sqrt{\Gamma})^2(\xi)\big) \le 2\,\mathbb{E}\,\big|\sqrt{\Gamma}\,\nabla F(\sqrt{\Gamma}\xi)\big|^2 \le 2\lambda_{\max}(\Gamma)\,\mathbb{E}\,|\nabla F(X)|^2,$$
from which it follows that $c_P(X) \le c_{LS}(X) \le \lambda_{\max}(\Gamma)$. Furthermore, letting $x_0$ be a unit eigenvector of $\Gamma$ associated with $\lambda_{\max}(\Gamma)$ and $F(x) := \langle x, x_0\rangle$, we see that
$$\mathrm{Var}(F(X)) = \langle \Gamma x_0, x_0\rangle = \lambda_{\max}(\Gamma) = \lambda_{\max}(\Gamma)\,\mathbb{E}\,|\nabla F(X)|^2,$$
from which it follows that $c_P(X) \ge \lambda_{\max}(\Gamma)$. The lemma is proved.
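The extremal functional $F(x) = \langle x, x_0\rangle$ in the lower bound can be illustrated by a Monte Carlo sketch (an added example; the $2\times 2$ matrix $\Gamma = \begin{pmatrix}2&1\\1&2\end{pmatrix}$, with $\lambda_{\max} = 3$ and top unit eigenvector $x_0 = (1,1)/\sqrt{2}$, is a made-up choice):

```python
import math
import random

# Hypothetical covariance Gamma = [[2, 1], [1, 2]]: eigenvalues 3 and 1,
# unit top eigenvector x0 = (1, 1)/sqrt(2).
lam_max = 3.0

# Cholesky factor L of Gamma (L L^T = Gamma), computed by hand.
l11 = math.sqrt(2.0)
l21 = 1.0 / math.sqrt(2.0)
l22 = math.sqrt(2.0 - l21 ** 2)          # = sqrt(3/2)

rng = random.Random(42)
n_samples = 200_000
vals = []
for _ in range(n_samples):
    xi1, xi2 = rng.gauss(0, 1), rng.gauss(0, 1)
    x1 = l11 * xi1                       # X = L xi has law N(0, Gamma)
    x2 = l21 * xi1 + l22 * xi2
    vals.append((x1 + x2) / math.sqrt(2.0))   # F(X) = <X, x0>

mean = sum(vals) / n_samples
var = sum((v - mean) ** 2 for v in vals) / n_samples
print(var)                               # concentrates near lam_max = 3
```

Since $|\nabla F| \equiv 1$, a sample variance of $\langle X, x_0\rangle$ near $\lambda_{\max}(\Gamma)$ shows that no constant smaller than $\lambda_{\max}(\Gamma)$ can work in the Poincaré inequality.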
Proof (of Theorem 1.2): Considering $(X_k/\sigma)$ if necessary, we may assume $\sigma = 1$ without loss of generality. Let $X^{(n)} := (X_k)_{-n \le k \le n}$, which is centered Gaussian with covariance matrix given by the Toeplitz matrix
$$\Gamma_n := \big(\rho(k-l)\big)_{-n \le k, l \le n}.$$
In Definition 1.1, one can take only bounded $C^1$ functions $F$ depending on a finite number of variables (by approximation; the details are left to the reader). In other words, we always have
$$c_P(X) = \sup_{n\ge 1} c_P\big(X^{(n)}\big),\qquad c_{LS}(X) = \sup_{n\ge 1} c_{LS}\big(X^{(n)}\big).$$
Then, in the present situation, we get by Lemma 2.1,
$$c_P(X) = c_{LS}(X) = \sup_{n\ge 1} \lambda_{\max}(\Gamma_n) = \lim_{n\to\infty} \lambda_{\max}(\Gamma_n).$$
We divide the proof into two cases.

Case 1: $\rho(\cdot) \notin l^2(\mathbb{Z})$. In this case we have
$$\lambda_{\max}(\Gamma_n)^2 \ge \frac{\mathrm{tr}(\Gamma_n^2)}{2n+1} = \sum_{|m| \le 2n}\Big(1 - \frac{|m|}{2n+1}\Big)\rho(m)^2 \to +\infty,$$
and thus $c_{LS}(X) = c_P(X) = +\infty$.

Case 2: $\rho(\cdot) \in l^2(\mathbb{Z})$. In this case $\mu \ll dt$ and the spectral density $f = d\mu/dt = \sum_{m\in\mathbb{Z}} \rho(m) e^{-imt} \in L^2(\mathbb{T})$. The following simple proof is due to the referee. By Rayleigh's principle, and noting that $\rho(k-l) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-i(k-l)t} f(t)\,dt$, we have for any $n \ge 1$,
$$\lambda_{\max}(\Gamma_n) = \sup\Big\{\sum_{-n\le k,l\le n} x_k x_l\, \rho(k-l)\,:\, \sum_k x_k^2 = 1\Big\} = \sup\Big\{\frac{1}{2\pi}\int_{-\pi}^{\pi} \Big|\sum_{k=-n}^{n} x_k e^{ikt}\Big|^2 f(t)\,dt\,:\, \sum_k x_k^2 = 1\Big\},$$
which is obviously bounded from above by $\|f\|_\infty$ by Parseval's equality ($\frac{1}{2\pi}\int_{-\pi}^{\pi} |\sum_k x_k e^{ikt}|^2\,dt = \sum_k x_k^2$). Conversely, using the equality above and the denseness of trigonometric polynomials in the Banach space $C_b(\mathbb{T})$ of complex-valued continuous and bounded functions on $\mathbb{T}$, we have furthermore
$$\lim_{n\to\infty} \lambda_{\max}(\Gamma_n) \ge \sup \frac{1}{2\pi}\int_{-\pi}^{\pi} |g(t)|^2 f(t)\,dt,$$
where the supremum runs over all complex-valued $g \in C_b(\mathbb{T})$ such that $\frac{1}{2\pi}\int_{-\pi}^{\pi} |g(t)|^2\,dt = 1$. This supremum equals $\|f\|_\infty$.

Continuous time stationary Gaussian processes
Let now $X = (X_t)_{t\in\mathbb{R}}$ be a real-valued stationary centered Gaussian process, defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, with continuous covariance function $\gamma(t) := \mathbb{E}X_0X_t$, $\forall t \in \mathbb{R}$.
We can and will assume that, for each $T > 0$, the sample paths of $X$ belong to $L^2([-T,T], dt)$ (such a version exists because its covariance operator is of trace class; see the proof of Theorem 2.2). Let $E = L^2_{loc}(\mathbb{R})$ be the space of all real-valued locally ($dt$-) square integrable functions on $\mathbb{R}$, equipped with the projective limit topology of $L^2[-T,T]$ as $T \to +\infty$. Regarding $L^2(\mathbb{R}) := L^2(\mathbb{R}, dt)$ as the tangent space of $E$, for any Gateaux-differentiable function $F : E \to \mathbb{R}$ we can define the gradient $\nabla F(x) \in L^2(\mathbb{R})$ by
$$\langle \nabla F(x), h\rangle_{L^2(\mathbb{R})} = \lim_{\varepsilon\to 0}\frac{F(x + \varepsilon h) - F(x)}{\varepsilon},\qquad \forall h \in L^2(\mathbb{R}).$$
In the variational calculus, we have formally $\nabla_t F(x) = \frac{\delta}{\delta x(t)} F(x)$. When $F : E \to \mathbb{R}$ and $\nabla F : E \to L^2(\mathbb{R})$ are continuous and bounded, we say that $F \in C^1_b(E)$. Similarly as in Definition 1.1, let $c_P(X) \in [0, +\infty]$ be the best constant in the Poincaré inequality
$$\mathrm{Var}_{\mathbb{P}}(F(X)) \le c_P(X)\, \mathbb{E}\, |\nabla F(X)|^2_{L^2(\mathbb{R})},\qquad \forall F \in C^1_b(E),$$
and $c_{LS}(X) \in [0, +\infty]$ be the best constant in the log-Sobolev inequality
$$\mathrm{Ent}_{\mathbb{P}}\big(F^2(X)\big) \le 2\, c_{LS}(X)\, \mathbb{E}\, |\nabla F(X)|^2_{L^2(\mathbb{R})},\qquad \forall F \in C^1_b(E).$$
These functional inequalities with respect to the $L^2$-metric (instead of the Cameron–Martin metric) have been investigated by M. Gourcy and the fourth author [4] for diffusions. We have the following counterpart of Theorem 1.2.

Theorem 2.2:
Let $\mu$ be the (nonnegative) spectral measure on $\mathbb{R}$ determined by (Bochner's theorem)
$$\gamma(t) = \frac{1}{2\pi}\int_{\mathbb{R}} e^{its}\, \mu(ds),\qquad \forall t \in \mathbb{R}.$$
We have
$$c_P(X) = c_{LS}(X) = \|f\|_\infty$$
if $\mu \ll ds$ with bounded density $f := d\mu/ds$, and $c_P(X) = c_{LS}(X) = +\infty$ otherwise.
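As an added worked example (not in the original note), take the stationary Ornstein–Uhlenbeck covariance $\gamma(t) = e^{-|t|}$; under the normalization $\gamma(t) = \frac{1}{2\pi}\int_{\mathbb{R}} e^{its}\,\mu(ds)$ assumed here, the Fourier transform of the Cauchy density gives the spectral density explicitly:

```latex
\gamma(t) = e^{-|t|}
          = \frac{1}{\pi}\int_{\mathbb{R}} \frac{e^{its}}{1+s^{2}}\,ds
          = \frac{1}{2\pi}\int_{\mathbb{R}} e^{its}\,\frac{2}{1+s^{2}}\,ds,
\qquad
f(s) = \frac{2}{1+s^{2}},\qquad \|f\|_{\infty} = f(0) = 2 .
```

Hence Theorem 2.2 would give $c_P(X) = c_{LS}(X) = 2$ for this process.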
Proof: 1) We begin with an extension of Lemma 2.1. Let $X$ be a centered Gaussian random variable valued in a separable Hilbert space $H$, of law $N(0, \Gamma)$, where the self-adjoint nonnegative definite covariance operator $\Gamma : H \to H$ is determined by
$$\langle \Gamma g, h\rangle_H = \mathbb{E}\, \langle X, g\rangle_H \langle X, h\rangle_H,\qquad \forall g, h \in H.$$
It is well known that $\Gamma$ is of trace class (and conversely, if $\Gamma$ is of trace class, then $N(0, \Gamma)$ is a probability measure on $H$). Then, using the usual pre-Dirichlet form $\mathbb{E}\, |\nabla F|^2_H(X)$ and letting $c_P(X)$, $c_{LS}(X)$ be the best constants in the corresponding Poincaré and log-Sobolev inequalities, respectively, we have
$$c_P(X) = c_{LS}(X) = \lambda_{\max}(\Gamma). \tag{2.3}$$
Indeed, let $(e_n)_{n\in\mathbb{N}}$ be an orthonormal basis of $H$ such that $\Gamma e_n = \lambda_n e_n$, where the sequence of eigenvalues $(\lambda_n)_{n\in\mathbb{N}}$ is arranged in non-increasing order. As $\{\langle X, e_n\rangle;\ n\in\mathbb{N}\}$ are independent with laws $\{N(0, \lambda_n);\ n\in\mathbb{N}\}$, we have by the independent tensorization ([5])
$$c_{LS}(X) = \sup_{n\in\mathbb{N}} c_{LS}\big(N(0,\lambda_n)\big) = \sup_{n\in\mathbb{N}} \lambda_n = \lambda_{\max}(\Gamma),$$
and similarly for $c_P(X)$, from which (2.3) follows.
2) By approximation, it is easy to check that
$$c_P(X) = \sup_{T>0} c_P\big(X^{[-T,T]}\big),\qquad c_{LS}(X) = \sup_{T>0} c_{LS}\big(X^{[-T,T]}\big),$$
where $X^{[-T,T]} := (X_t)_{t\in[-T,T]}$ is regarded as a random variable valued in $H_T := L^2([-T,T], dt)$. The covariance operator $\Gamma_T$ of $X^{[-T,T]}$ is given by
$$(\Gamma_T h)(t) = \int_{-T}^{T} \gamma(t-s)\, h(s)\, ds,\qquad h \in H_T.$$

By Step 1), we only have to prove that
$$\lim_{T\to+\infty} \lambda_{\max}(\Gamma_T) = \|f\|_\infty$$
(interpreted as $+\infty$ when $\mu$ is not absolutely continuous). Let $L^2_{\mathbb{C}}[-T,T]$ and $L^2_{\mathbb{C}}(\mathbb{R})$ be the spaces of complex-valued $L^2$-integrable functions on $[-T,T]$ and $\mathbb{R}$, respectively. For any $h \in L^2_{\mathbb{C}}(\mathbb{R})$, let $\hat{h}(t) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} e^{its} h(s)\,ds$ be the Fourier transform of $h$. It is well known that $h \mapsto \hat{h}$ is unitary on $L^2_{\mathbb{C}}(\mathbb{R})$. For $h \in L^2_{\mathbb{C}}[-T,T]$ with $\|h\|_{L^2} = 1$ (extended by $0$ outside $[-T,T]$), we have
$$\langle \Gamma_T h, h\rangle = \int_{\mathbb{R}}\int_{\mathbb{R}} \gamma(t-s)\, h(t)\, \overline{h(s)}\, dt\, ds = \int_{\mathbb{R}} |\hat{h}(u)|^2\, \mu(du),$$
so that $\lambda_{\max}(\Gamma_T) = \sup\{\int_{\mathbb{R}} |\hat{h}|^2\, d\mu\,:\, \|h\|_{L^2[-T,T]} = 1\} \le \|f\|_\infty$ when $\mu \ll dt$. Since the family $\mathcal{A}$ of all $\hat{h}$ with $h \in L^2_{\mathbb{C}}(\mathbb{R}) \cap L^1_{\mathbb{C}}(\mathbb{R})$ constitutes an algebra separating the points of $\mathbb{R}$, by the monotone class theorem, for any complex-valued measurable and bounded function $g$ on $\mathbb{R}$, say $g \in b_{\mathbb{C}}\mathcal{B}(\mathbb{R})$, we can find a sequence $(g_n = \hat{h}_n \in \mathcal{A})$ such that $g_n \to g$ in $L^2(\mathbb{R}, dt + \mu)$. Thus
$$\lim_{T\to+\infty} \lambda_{\max}(\Gamma_T) \ge \sup\Big\{\int_{\mathbb{R}} |g|^2\, d\mu\,:\, g \in b_{\mathbb{C}}\mathcal{B}(\mathbb{R}),\ \int_{\mathbb{R}} |g|^2\, dt = 1\Big\} = \|f\|_\infty.$$
This easily implies the desired result.
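The convergence $\lambda_{\max}(\Gamma_T) \to \|f\|_\infty$ can be sketched numerically (an added illustration; the choice $\gamma(t) = e^{-|t|}$ is ours, and, assuming the normalization $\gamma(t) = \frac{1}{2\pi}\int_{\mathbb{R}} e^{its}\mu(ds)$, its spectral density is $f(s) = 2/(1+s^2)$, so $\|f\|_\infty = 2$). We discretize $\Gamma_T$ by a Riemann sum and apply power iteration:

```python
import math

def lambda_max(mat, iters=150):
    """Largest eigenvalue of a symmetric PSD matrix by power iteration."""
    n = len(mat)
    v = [1.0] * n                  # positive start overlaps the top eigenvector
    lam = 1.0
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

# Discretize (Gamma_T h)(t) = int_{-T}^{T} e^{-|t-s|} h(s) ds on a midpoint grid:
# the matrix (e^{-|t_i - t_j|} * dt) is a Riemann-sum proxy for the operator.
T = 10.0
n_pts = 200
dt = 2 * T / n_pts                 # grid step 0.1
grid = [-T + (i + 0.5) * dt for i in range(n_pts)]
k = [[math.exp(-abs(s - t)) * dt for s in grid] for t in grid]

lam = lambda_max(k)
print(lam)    # near ||f||_inf = 2 (below it up to truncation/discretization error)
```

Increasing $T$ (with the grid refined accordingly) pushes the estimate closer to $\|f\|_\infty = 2$.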

Extension and several examples
In this section we extend Theorem 1.2 to general moving average processes and provide some concrete examples.

Extension to moving average processes
Let $(\xi_k)_{k\in\mathbb{Z}}$ be a sequence of i.i.d. real-valued random variables such that $\mathbb{E}\xi_0 = 0$ and $\mathbb{E}\xi_0^2 = 1$, and let $(a_k)_{k\in\mathbb{Z}} \in l^2(\mathbb{Z})$. Consider the moving average process
$$X_n := \sum_{k\in\mathbb{Z}} a_k\, \xi_{n-k},\qquad n \in \mathbb{Z}.$$
It is a well defined stationary process with covariance function
$$\mathbb{E}X_0X_m = \sum_{k\in\mathbb{Z}} a_k\, a_{k+m},\qquad m \in \mathbb{Z}.$$
Its spectral density function is given by
$$f(t) = \Big|\sum_{k\in\mathbb{Z}} a_k e^{ikt}\Big|^2,\qquad t \in [-\pi, \pi].$$
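These two formulas can be checked numerically. The following sketch (an added example; the finite coefficient sequence $a_0 = 1$, $a_1 = 0.5$ is made up) recovers the covariance from the spectral density and evaluates $\|f\|_\infty$:

```python
import cmath
import math

# Hypothetical finite moving-average coefficients: a_0 = 1, a_1 = 0.5 (others 0).
a = {0: 1.0, 1: 0.5}

def rho(m):
    """Covariance E[X_0 X_m] = sum_k a_k a_{k+m}."""
    return sum(a[k] * a.get(k + m, 0.0) for k in a)

def f(t):
    """Spectral density |sum_k a_k e^{ikt}|^2."""
    return abs(sum(c * cmath.exp(1j * k * t) for k, c in a.items())) ** 2

N = 4000
def rho_from_f(m):
    """Midpoint Riemann sum for (1/2pi) * int_{-pi}^{pi} e^{imt} f(t) dt."""
    total = 0.0
    for i in range(N):
        t = -math.pi + (i + 0.5) * (2 * math.pi / N)
        total += (cmath.exp(1j * m * t) * f(t)).real
    return total / N

print(rho(0), rho(1))        # 1.25, 0.5
print(rho_from_f(1))         # ~0.5: f is indeed the spectral density
f_sup = max(f(2 * math.pi * i / N - math.pi) for i in range(N))
print(f_sup)                 # (1 + 0.5)^2 = 2.25
```

For Gaussian $(\xi_k)$, $X$ is a stationary Gaussian process with this spectral density, so Theorem 1.2 gives $c_P(X) = c_{LS}(X) = \|f\|_\infty = 2.25$ in this example.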