Existence of a unique invariant measure for a class of equicontinuous Markov operators with application to a stochastic model for an autoregulated gene

We extend the so-called lower-bound technique for equicontinuous families of Markov operators by introducing the new concept of uniform equicontinuity on balls. Combined with a semi-concentrating condition, it yields a new abstract result on the existence and uniqueness of invariant measures for Markov operators. It also allows us to show tightness of the set of invariant measures for certain classes of Markov operators, which in turn provides a useful tool for proving continuous dependence on parameters for semi-concentrating Markov semigroups. In the second part we formulate an abstract modelling framework that defines a piecewise deterministic Markov process whose transition operator at the times of intervention is a semi-concentrating Markov operator that is uniformly equicontinuous on balls. We show that this framework applies to a detailed stochastic model for an autoregulated gene in a bacterium that takes random transcription delay into account.


Introduction
The paper is concerned with Markov operators. Its main aim is to show the utility of the so-called lower-bound technique in studying iterates of transformations subject to random disturbance. The lower-bound technique was established by Doeblin in [11]. He proved mixing for Markov chains with transition functions absolutely continuous with respect to some fixed measure. Lasota and Yorke used a similar technique to prove the existence of a unique invariant distribution, absolutely continuous with respect to Lebesgue measure, for Frobenius-Perron operators corresponding to piecewise monotonic transformations [24]. They later extended the technique to Markov operators acting on arbitrary Borel measures, which allowed them to study operators appearing in the theory of fractal and semi-fractal sets [25]. Recently it was shown that the technique may be used to verify the ergodic properties of solutions of infinite-dimensional stochastic differential equations [21]. In this paper we study Markov operators corresponding to iterated function systems with disturbance. We show that, under some natural conditions on the unperturbed system, the disturbed system possesses a unique ergodic measure. Additionally, we prove that this invariant measure converges, as the disturbance decreases to zero, to the unique invariant measure of the original iterated function system. It is worth mentioning here that the undisturbed system is of the type appearing, for instance, in [23] as a model of the cell cycle. Its stability allowed the authors to support their conjecture that cell division may serve to confer stability on intracellular biochemical mechanisms that might be unstable in the absence of cell division. This underlines its importance in biomathematics.
In the second part of our paper we show that the system under consideration may be adapted as a model for the stochastic dynamics of an autoregulated gene in a bacterium. Since the model is described by Markov semigroups acting on a space of functions, we needed techniques valid in general spaces that are neither compact nor locally compact. To this end we had to provide an extension of the lower-bound technique to general Polish spaces.
There is a vast literature on mathematical models for gene regulatory networks, e.g. deterministic models in terms of systems of ordinary differential equations (ODEs), ranging from a single gene network, starting with [18], to a large number of interacting genes in a realistic network, such as the phosphorelay and sporulation network in Bacillus subtilis [8]. However, researchers realised early on the intrinsic stochastic nature of the processes. So a large family of mathematical models has been developed that incorporate stochasticity to study its consequences (see e.g. [1,22,27,26,40,41] and further references found there). Some of these add random effects as perturbations to a deterministic system, in various ways. These models take into account the (random) time needed by the RNAp for completion of the mRNA transcript, bursting in the translation of mRNA into the gene product and/or variation in molecule degradation. However, only few consider the system under general assumptions on functional forms for differential equations or probability distributions (e.g. [26], to some extent).
Mackey et al. [26] also considered models incorporating bursting and established the existence of a unique steady-state distribution, though under the assumption of time-scale separation in the deterministic part of their model and a particular (state-dependent) type of distribution for the arrival time of the next burst.
Stochasticity in the process of transcription and translation affects the dynamics of the network and the associated molecular distribution as compared to a deterministic system (e.g. [41]). Various researchers have dealt, for example, with investigating the occurrence of bimodal distributions for the molecular compounds in these systems [26,29]. Understanding these effects has become important for the explanation of observed biological phenomena that occur at population level. A growing and dividing, genetically homogeneous population of bacteria may develop, for example, into a phenotypically heterogeneous population of cells [1,14]. Random fluctuations in environment, internal chemical composition and the regulatory system have been identified as the underlying cause [32].
Sections 2-4 provide the derivations of the main mathematical theorems (Theorems 3.3, 4.5 and 4.9) on which the application to a stochastic model for an autoregulated gene in Section 5 is based. Theorem 4.9 shows that the invariant distribution in our setting converges to that obtained through the approach in [23] when the randomness in the number of produced proteins vanishes.
Section 2 introduces basic notation and fundamental concepts on Markov operators that are needed in Section 3 to obtain existence of an invariant measure (Theorem 3.3) and separation of supports for different invariant measures (Proposition 3.5), which will lead to uniqueness (Theorem 4.5). Section 4 provides a general abstract model of a piecewise deterministic Markov process that possesses a unique invariant measure.
In Section 5.1 we provide the necessary biochemical background that is incorporated in our stochastic model for an autoregulated gene, which is mathematically formulated in Section 5.2. Section 5.3 derives conditions under which the gene model fits into the abstract model framework of Section 4. Finally, in Section 5.5 we discuss values for some of the model parameters as they can be found in the literature.

Basic notions
Let X be a closed subset of some separable Banach space (H, ‖·‖). We denote by B(x, r) the open ball in X with center at x and radius r, by B(X) the σ-algebra of Borel subsets of X, by M = M(X) the family of all finite Borel measures on X, and by M_s the space of all finite signed Borel measures on X. According to Jordan's decomposition theorem any measure µ ∈ M_s may be uniquely written as µ⁺ − µ⁻, where µ⁺, µ⁻ ∈ M. Using this decomposition we define the total variation norm ‖µ‖ = µ⁺(X) + µ⁻(X). We shall write M_1 = M_1(X) for the family of all µ ∈ M such that µ(X) = 1. The elements of M_1 are called distributions. If A ∈ B(X), then by M_1^A we denote the set of all measures µ ∈ M_1 such that µ(X \ A) = 0. We denote by supp µ, for µ ∈ M, the support of µ, i.e., supp µ = {x ∈ X : µ(B(x, r)) > 0 for any r > 0}.
As usual, B(X) denotes the space of all bounded Borel measurable functions f : X → R and C(X) the subspace of all continuous functions. Both spaces are equipped with the supremum norm ‖·‖_0.
We introduce on M_s the Fortet-Mourier norm ‖·‖_F given by

‖µ‖_F = sup{ |∫_X f dµ| : f ∈ C(X), ‖f‖_0 ≤ 1, |f(x) − f(y)| ≤ ‖x − y‖ for x, y ∈ X }.

We shall write w-lim_{n→∞} µ_n = µ when the sequence (µ_n) converges weakly to µ. It is well known that convergence in the Fortet-Mourier norm ‖·‖_F is equivalent to weak convergence (see [12]). An operator P : M → M is called a Markov operator if P(λ_1 µ_1 + λ_2 µ_2) = λ_1 Pµ_1 + λ_2 Pµ_2 for λ_1, λ_2 ≥ 0, µ_1, µ_2 ∈ M, and Pµ(X) = µ(X) for µ ∈ M. It is easy to prove that every Markov operator can be uniquely extended to a linear operator on the space of all signed measures M_s.
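For intuition, the action of a Markov operator on discrete measures can be sketched for an iterated function system of the kind studied later in the paper. The maps, weights and state space below are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

def markov_step(atoms, weights, maps, probs):
    """One step of the Markov operator P of an iterated function system:
    P mu = sum_i p_i * (push-forward of mu by S_i),
    applied to a discrete measure mu = sum_j w_j * delta_{x_j}."""
    new_atoms, new_weights = [], []
    for x, w in zip(atoms, weights):
        for S, p in zip(maps, probs):
            new_atoms.append(S(x))
            new_weights.append(w * p)
    return np.array(new_atoms), np.array(new_weights)

# Hypothetical contractive maps on [0, 1] with constant probabilities.
maps = [lambda x: 0.5 * x, lambda x: 0.5 * x + 0.5]
probs = [0.5, 0.5]

atoms, weights = np.array([0.3]), np.array([1.0])
for _ in range(3):
    atoms, weights = markov_step(atoms, weights, maps, probs)

# The Markov property P mu (X) = mu(X): total mass is preserved.
assert np.isclose(weights.sum(), 1.0)
```

Iterating this push-forward is exactly the sequence (P^n µ) whose weak limit behaviour the lower-bound technique addresses.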
A linear operator U : B(X) → B(X) is called dual to P if

∫_X Uf dµ = ∫_X f d(Pµ) for f ∈ B(X), µ ∈ M. (2.1)

Setting µ = δ_x, the point (Dirac) measure supported at x, in (2.1) we obtain Uf(x) = ∫_X f d(Pδ_x) for x ∈ X. A Markov operator P is called a Feller operator if it possesses the dual operator U such that U(C(X)) ⊂ C(X).
A measure µ* ∈ M is called invariant (or stationary) with respect to a Markov operator P if Pµ* = µ*. We denote by C_δ(X), δ > 0 (C_δ for abbreviation), the family of all closed sets C for which there exists a finite set {x_1, ..., x_n} ⊂ X such that C ⊂ ⋃_{i=1}^n B(x_i, δ). An operator P is called semi-concentrating if for every δ > 0 there exist C ∈ C_δ and θ > 0 such that

lim inf_{n→∞} P^n µ(C) > θ for µ ∈ M_1. (2.3)

Let {P_λ : λ ∈ Λ} be a family of Markov operators. The family is called uniformly semi-concentrating if for every δ > 0 there exist C ∈ C_δ and θ > 0 such that P_λ satisfies condition (2.3) for any λ ∈ Λ.
A continuous function V : X → [0, +∞) is called a Lyapunov function if it is bounded on bounded sets and V(x) → +∞ as ‖x − z_0‖ → +∞ for some z_0 ∈ X.

A general result
We now extend known results on the existence of an invariant measure to the setting of uniformly equicontinuous Markov operators. Such results were previously proved for nonexpansive Markov operators (see [25,38]) and for the so-called e-chains (see [39]).
We start with the relevant definition.
Definition 3.1. A Markov operator P is called uniformly equicontinuous on balls if for any ε > 0 there is δ > 0 such that for every x ∈ X

sup_{n≥1} ‖P^n δ_x − P^n δ_y‖_F ≤ ε for all y ∈ B(x, δ).
Following the proof of Theorem 3.1 in [38] we may show the following proposition.

Proposition 3.2. Every Markov operator P which is uniformly equicontinuous on balls is a Feller operator.

Theorem 3.3. If a Markov operator P is semi-concentrating and uniformly equicontinuous on balls, then P admits an invariant measure.
Proof. The proof is mostly the same as the proof of Theorem 4.3 in [38]. The only difference consists in assuming uniform equicontinuity on balls instead of nonexpansiveness. However, nonexpansiveness was only used in proving that sup_{n≥1} ‖P^n µ_1 − P^n µ_2‖_F → 0 for µ_1, µ_2 ∈ M_1^A as diam A → 0. This is satisfied for Markov operators that are uniformly equicontinuous on balls as well.
Lemma 3.4. Let P be a semi-concentrating Markov operator which is uniformly equicontinuous on balls and let µ* be an ergodic invariant measure for P. Then for any x ∈ supp µ* we have

w-lim_{n→∞} (1/n) ∑_{k=1}^n P^k δ_x = µ*. (3.1)

Proof. The proof parallels the proof of Proposition 2 in [19], given there for continuous semigroups, but to make the paper self-contained, and since some details differ, we provide it here. We split the proof into three steps.
Step 1. Fix x ∈ supp µ* and let A ⊂ X be an open neighbourhood of x. Define B = {y ∈ X : where U is dual to P. Now we show that µ*(B) = 0. Assume, contrary to our claim, that µ*(B) > 0. We shall prove that , by the ergodicity of the measure µ* (see Theorem 3.2.4 in [7]). On the other hand, since B ⊂ X \ A and µ*(A) > 0, this leads to a contradiction. So, to finish this step we show that contrary to our definition of z. Therefore U^n 1_{X\B}(z) = 0 for any z ∈ B, and consequently This completes the first part of our proof.
Step 3. From the previous step it easily follows that for an arbitrary ε > 0 we may find ν_1, ..., ν_N ∈ M_1^{B(x,ε)} and α_1, ..., α_N ≥ 0, n_1, ..., n_N ≥ 0 such that Hence we obtain Consequently, for any ε > 0 there exists a sequence (µ_m^ε)_{m≥1} of probability measures supported on B(x, ε) such that w-lim On the other hand, for any bounded Lipschitz function f we have as n → ∞.
Since f was an arbitrary bounded Lipschitz function, formula (3.1) is proved and we are done.
From Lemma 3.4 the following proposition follows.

Proposition 3.5. Let P be a semi-concentrating Markov operator which is uniformly equicontinuous on balls. Then for any two different ergodic invariant measures µ* and ν* we have dist(supp µ*, supp ν*) > 0.

Proof. Assume, contrary to our claim, that dist(supp µ*, supp ν*) = 0. Further, let f be a bounded Lipschitz function such that Without loss of generality we may assume that Lip f ≤ 1 and ‖f‖_0 ≤ 1.
From the uniform equicontinuity of P it follows that there exists δ > 0 such that sup_{x,y∈X, ‖x−y‖<δ} Let x ∈ supp µ* and y ∈ supp ν* be such that ‖x − y‖ < δ. Then, by the uniform equicontinuity, we have and, by Lemma 3.4, taking appropriate limits, we obtain which contradicts the definition of ε. The proof is complete.

An abstract model
Let X be a closed subset of some separable Banach space (H, ‖·‖). We will consider the stochastically perturbed dynamical system on X of the form

x_{n+1} = S(x_n, t_n) for n = 0, 1, 2, ... (4.1)

We assume that:

(1) The function S : X × [0, T] → X is continuous.
(2) The t_n are random variables with values in [0, T] and the distribution of t_n conditional on x_n = x is given by where p : where ω : R_+ → R_+ is a concave and non-decreasing function such that ∫_0^σ ω(t)/t dt < +∞ for some σ > 0.
Additionally, for any x ∈ X there is θ(x) ∈ (0, T) such that p(x, t) = 0 for t < θ(x) and p(x, t) > 0 for θ(x) < t ≤ T. Similar assumptions were made by Barnsley et al. for classical iterated function systems with place-dependent probabilities (see [2]), where the Dini condition for the function ω was introduced. It has also been proved that this assumption is necessary for uniqueness of an invariant measure (see [35]). Barnsley and coauthors additionally assumed that the probabilities p are strictly positive, but here we have to slightly strengthen this assumption by introducing the threshold θ(x), before which the biological transcription process we shall consider later cannot start.
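The role of the threshold θ(x) can be illustrated by sampling t_n from a state-dependent density that vanishes on [0, θ(x)). All functional forms and constants below are illustrative assumptions, not the paper's model:

```python
import random

T = 10.0  # illustrative upper bound on the inter-intervention time

def theta(x):
    # Hypothetical state-dependent threshold theta(x) in (0, T).
    return 1.0 + 0.5 * min(abs(x), 1.0)

def p(x, t):
    # Hypothetical density: zero below theta(x), positive on (theta(x), T].
    if t < theta(x):
        return 0.0
    return 2.0 * (t - theta(x)) / (T - theta(x)) ** 2  # integrates to 1 on [theta(x), T]

def sample_t(x, rng=random.random):
    # Rejection sampling against a uniform envelope on [0, T].
    bound = 2.0 / (T - theta(x))  # maximum of p(x, .)
    while True:
        t = rng() * T
        if rng() * bound < p(x, t):
            return t

random.seed(0)
samples = [sample_t(0.4) for _ in range(1000)]
assert all(t >= theta(0.4) for t in samples)  # no sample below the threshold
```

Because p(x, ·) vanishes below θ(x), the rejection step never accepts a time shorter than the threshold, matching the delay assumption above.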
We easily check that for any c < 1 we have ϕ_c(t) := ∑_{n=0}^∞ ω(c^n t) < +∞ for any t ≥ 0 and lim_{t→0} ϕ_c(t) = 0.
Fix an ε ∈ (0, ε*]. Let ν_ε be a Borel probability measure on (H, ‖·‖) whose support is contained in B(0, ε). Set additionally ν_0 = δ_0. For any x ∈ X we set (4.7) We shall consider the Markov chain given by the transition function Then we may write the Markov operator P_ε : M_1 → M_1 corresponding to π_ε: Its dual operator has the form By induction we define and Observe that for every n ∈ N, t_1, ...
Using the above notation we may write the iterates U_ε^n in the following form where From the definition of P^n it follows that For brevity we denote t = (t_1, ..., t_n) and h = (h_1, ..., h_{n−1}), and consequently we shall write The following simple lemma follows from the Dini condition and conditions (4.13), (4.15).
given by (4.9) is uniformly equicontinuous on balls.
given by (4.9), satisfy the following

Proof. Define V(x) = ‖x − x_0‖ for some x_0 ∈ X. Obviously V is a Lyapunov function. Then by (4.4) and (4.5), for every ε ∈ [0, ε*] we have An application of Lemma 2.4.2 in [38] finishes the proof.
We have by Lemma 4.6. We see immediately that the term ...
as ε → 0 and our proof is complete.
We are now in a position to show formula (4.25). Indeed, since where and B(x_i, δ) is such a ball that

Application to a model for an autoregulated gene
In this section we illustrate the applicability of the abstract model discussed in the previous section by means of a mathematical model for the stochastic dynamics of an autoregulated gene in a bacterium. An autoregulated gene is the simplest form of gene regulatory network. In this case, the gene product has an indirect feedback effect on its own transcription, either positive or negative, by stimulating or inhibiting further transcription. The protein produced from the gene typically needs to be activated, i.e. undergo chemical transformations like phosphorylation and dimerisation, before it can affect transcription as a so-called transcription factor [37]. We believe that larger regulatory networks will also fit into our abstract setting, but a discussion of these is beyond the scope of this paper.
The production of proteins from a single gene is inherently a noisy process: the gene products arrive in bursts containing a variable number of molecules, and the time interval between such bursts is random too [4,17,36]. Below we introduce a Markov chain model for the spatial distribution of a gene product and its phosphorylated and dimerised form, just after the arrival of a new burst of gene product. The main objective is to show that this model has a unique invariant distribution, using the fundamental results obtained in the previous sections. The case of deterministic protein production, i.e. when ε = 0, fits the framework presented in [23] and the results obtained there. Theorem 4.9 shows that the invariant measure µ_ε* for our model with ε > 0 converges to the invariant measure for the deterministic case as ε ↓ 0.

Summary of the biological transcription and translation process
We follow the so-called 'central dogma' in molecular biology for the bacterial transcription and translation machinery, described e.g. in [37]. Thus, consider a gene 'a' that codes for the associated protein 'A'. The latter influences its own expression in a manner that is a general motif in regulatory systems encountered in nature, like e.g. the phage-λ decision circuit in Escherichia coli or the phosphorelay in Bacillus subtilis [1,33]. Namely, A itself cannot act as transcription factor. Only after phosphorylation of A (A + P ⇌ AP, catalysed by kinases and phosphatases) and subsequent dimerisation (2AP ⇌ (AP)_2) can the resulting dimer (AP)_2, which we denote by D, bind the promoter of gene a. We call this set of chemical reactions the activation system of A:

A + P ⇌ AP, 2AP ⇌ D. (5.1)

The molecular species A, AP and D diffuse through the cytoplasm in the cell and are subject to degradation. Degradation is crucial in the analysis of the associated model, discussed below. Diffusion of D in the cell is a combination of three-dimensional diffusion in the cytoplasm alternated with one-dimensional diffusion along the DNA in search of its binding site [13].
Binding of D to this binding site at the promoter of a affects the binding of an RNA polymerase (RNAp) to its binding site on the promoter and the subsequent start of transcription and translation. It can be either repressing or activating, depending on whether it prevents RNAp from binding the promoter or, on the contrary, enhances binding by attracting RNAp to its binding site. RNAp bound to the DNA first forms a closed complex. Before the RNAp can clear its binding site and start transcription, the complex must reform into an open complex. The closed-to-open isomerisation reaction is rate limiting. Clearance of the binding site takes a few seconds [27]. Then transcription of the gene starts. The subsequent movement of RNAp over the gene, making a messenger RNA (mRNA) copy, is stochastic. It has been observed that RNAp may halt for substantial time, may back-track over short nucleotide distances or even drop off the DNA strand [34].
Multiple ribosomes, responsible for translating mRNA into protein, can reside on the mRNA strand at the same time in a queue, while transcription is not yet completed. Thus, once an mRNA copy is finished, it will produce a burst of a random number of the gene product A [27]. Ribosomes start translation already when the first part of the mRNA transcript becomes available from the RNAp. Further A-proteins become available in a temporal sequence of releases of single proteins. Experimentally, the number of proteins produced has been shown to follow an exponential distribution [26,4,17].

The mathematical model
We assume in the mathematical representation of the system outlined above that the molecular levels of RNAp, ribosomes and amino acids required for protein production remain constant. This assumption is reasonable if the modelled bacterium lives in a normal temperature range (25-37 °C, cf. [15] and references found there). We assume further that new transcription can start only after completion of the mRNA transcript and after the responsible RNAp has dissociated from the gene. Otherwise, preventing an event in which one RNAp overtakes another on the gene would complicate the model substantially. This assumption allows one to model the time needed for the production of one complete mRNA transcript as a random variable with law independent of the abundance of any of the molecular players. We neglect abortion of the transcription process before its completion. Moreover, the phosphorylation and dimerisation reactions occur on a faster time scale (fractions of seconds) than the transcription process (minutes). Diffusion of protein-sized compounds is limited, since bacterial cells are 'crowded' [9,10]. We model the complex diffusive movement of D, which consists of alternating 3D and 1D diffusion, by means of a single effective three-dimensional Fickian diffusion.
Further central assumptions in our model are: (i) the number of molecules of each of the compounds in the activation system is sufficiently high that stochastic fluctuations can be ignored and a model in terms of partial differential equations is adequate to describe its dynamics; (ii) the temporary removal of a single D from the activation system when it binds to its binding site at the promoter hardly influences the dynamics of this system. We therefore do not include this event in the modeling of the reaction kinetics of the activation system.
Let the open bounded domain Ω ⊂ R³ represent the cytoplasm of the bacterial cell. Assume that the boundary ∂Ω of Ω is sufficiently smooth (say C²). The activation system (5.1) of A is modeled as a system of 3 reaction-diffusion equations in Ω. The concentrations of the compounds A, AP and D in Ω are denoted by u_1, u_2 and u_3, respectively. The system of equations governing the dynamics of u = (u_1, u_2, u_3) is

∂u/∂t = D∆u + F(u) − Λu, (5.2)

where D is a diagonal 3 × 3 matrix with diffusion constants D_i > 0 on the diagonal and Λ = diag(λ_1, λ_2, λ_3) is a diagonal matrix with degradation rates λ_i > 0 for each of the compounds. Equation (5.2) is complemented by Neumann boundary conditions.
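For intuition, a one-dimensional method-of-lines discretisation of a system of the same shape as (5.2), diffusion plus reaction minus linear degradation with zero-flux (Neumann) boundaries, might look as follows. The reaction term and all constants below are placeholders, not the actual F of the activation system:

```python
import numpy as np

def step(u, dt, dx, D, lam):
    """One explicit Euler step of u_t = D u_xx + F(u) - lam*u on a 1D grid
    with zero-flux (Neumann) boundaries. u has shape (3, n): one row per
    compound A, AP, D. The reaction term F below is a placeholder."""
    # Neumann boundaries via ghost cells that copy the boundary values.
    up = np.pad(u, ((0, 0), (1, 1)), mode="edge")
    lap = (up[:, :-2] - 2 * u + up[:, 2:]) / dx**2
    # Placeholder reaction kinetics, NOT the Michaelis-Menten/Mass-Action F.
    F = np.vstack([-u[0] * u[0], u[0] * u[0] - u[1] ** 2, 0.5 * u[1] ** 2])
    return u + dt * (D[:, None] * lap + F - lam[:, None] * u)

n, dx, dt = 50, 0.02, 1e-4          # grid and step sizes (CFL: D*dt/dx^2 <= 0.5)
D = np.array([1.0, 1.0, 0.5])       # illustrative diffusion constants
lam = np.array([0.1, 0.1, 0.1])     # illustrative degradation rates
u = np.abs(np.random.default_rng(0).normal(0.5, 0.1, size=(3, n)))
for _ in range(100):
    u = step(u, dt, dx, D, lam)
assert u.shape == (3, n) and np.isfinite(u).all()
```

The diagonal structure of D and Λ appears here as per-row coefficients; the actual model is posed in three space dimensions on Ω with the reaction term of (5.1).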
The reaction term defined by F models the activation system in absence of degradation. An explicit model considered for this example uses Michaelis-Menten kinetics for the enzyme-catalysed phosphorylation and dephosphorylation reactions and Mass Action kinetics for dimerisation. All parameters (rate constants) are assumed to be strictly positive. Note that F is discontinuous at u_1 = −κ_A^+ and u_2 = −κ_A^−, but real analytic on its domain Consequently, the superposition operator defined by F on H := C(Ω)³, where This observation makes us consider solutions to the initial value problem defined by (5.2) in H. For reasons of biological interpretation we are interested in positive solutions only. We consider mild solutions in H (e.g. [30]), that is, continuous maps u : [0, T] → H that satisfy the Variation of Constants Formula

Proposition 5.1. For each u_0 ∈ H_+ there exists a unique mild solution t → u(t; u_0) to (5.2) with u(0; u_0) = u_0 that is positive and exists for all time t ≥ 0. Moreover, the semiflow is continuous for all T > 0.
The proof of Proposition 5.1 is deferred to Section 5.3. Bursts of gene product A appear at times 0 < t_1 < t_2 < ... . Put t_0 := 0 and ∆t_n := t_{n+1} − t_n. We now provide a model for the probability distribution function (p.d.f.) p(u, t) for the random variables ∆t_n, conditional on the state u_n of the system at t_n being u: (5.8) Since only the dimer D affects transcription, p(u, s) depends on u through u_3 only.
In the abstract model we require that ∆t_n ∈ [0, T] almost surely, whatever the concentration of D. One may choose T so large that this is biologically reasonable. Mathematically, in the formulation of p(u, t), it requires a truncation procedure. First we describe a p.d.f. p̄(u, t) that allows for intervals between two consecutive interventions of arbitrary length. We truncate the distribution p̄ at T large and renormalise to obtain the p.d.f. p(u, ·) on [0, T], which ensures that the intervals between interventions are of length less than T almost surely: (5.9) The following assumption on p(u, t) is crucial for the application of our general results on Markov operators:

(A) There exists 0 < τ < T such that p(u, t) = 0 for all 0 ≤ t ≤ τ and u ∈ H_+.
This assumption holds for our model of the biological transcription process, as we shall now discuss. The random variable ∆t_n is the sum of two independent random variables: the time needed for transcription initiation, T_ini, and the time needed for the RNAp to complete transcription, T_tr. The latter does not depend on the state u of the activation system. We start by describing the p.d.f. p_tr of T_tr.
T_tr is the sum of the random times T_tr^i (i = 1, ..., N) that the RNAp requires to copy the i-th nucleotide of the total of N that constitute gene a and move to the next nucleotide (cf. [34]). We assume T_tr^i ≥ ∆τ_i > 0 almost surely for each i. That is, the RNAp cannot make arbitrarily fast jumps from one nucleotide to the next. This results in an almost sure transcription delay τ_tr := ∑_i ∆τ_i > 0, such that Assumption (A) is satisfied for any 0 < τ ≤ τ_tr. If T_tr^i has p.d.f. p_tr^i and we assume that the T_tr^i are independent, then p_tr is the N-fold convolution product p_tr^1 * ... * p_tr^N. Recall that we ignore transcription abortion.
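The almost-sure delay τ_tr = ∑_i ∆τ_i can be checked by simulation, representing each per-nucleotide time T_tr^i as ∆τ_i plus a nonnegative random part. Gene length, minimum times and the residual distribution below are illustrative assumptions:

```python
import random

random.seed(2)
N = 200                  # illustrative gene length in nucleotides
dtau = [0.01] * N        # per-nucleotide minimum copy times Delta tau_i

def sample_T_tr():
    # T_tr is the sum of independent per-nucleotide times, each >= dtau_i;
    # the exponential residual is an illustrative choice.
    return sum(d + random.expovariate(100.0) for d in dtau)

tau_tr = sum(dtau)       # almost-sure transcription delay
samples = [sample_T_tr() for _ in range(1000)]
assert min(samples) >= tau_tr   # no transcript finishes before tau_tr
```

Every sampled T_tr exceeds τ_tr, which is exactly why the density p_tr, and hence p(u, ·), vanishes on [0, τ_tr].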
Providing an expression for the p.d.f. of T_ini is more involved. To that end we also need to model the changes in state of the promoter region, where D and RNAp can bind at their specific binding sites. We distinguish six states: the free promoter (R), the closed complex of RNAp bound to R (RNAp-R_c) and the open complex (RNAp-R_o), all with no D bound. These states are numbered 1, 2 and 3. States 4, 5 and 6 of the promoter region are the similar configurations where also D is bound to its binding site (i.e. RD, RNAp-R_cD and RNAp-R_oD). The changes in state are modeled as a continuous-time Markov process with constant transition rates as indicated in Figure 5.1, except for the transitions in which D binds; these depend on u_3(t). The transition from RNAp-R_o to R or from RNAp-R_oD to RD corresponds to a transcription initiation event. It results in a transcript of gene a provided that there is no RNAp already active on the gene. If there is, it is a futile attempt and the initiation is ignored.
Describe the state of the promoter region by an element π = (π_1, ..., π_6) ∈ [0, 1]⁶, where π_i is the probability that the promoter region is in state i. If the change in time of the concentration of D, t → u_3(t), is given and the state of the promoter at t = 0 is π_0, then the probabilities π_i(t) for the state of the promoter at time t are determined by a nonautonomous system of ordinary differential equations (ODEs) with initial condition π(0) = π_0. The transition rates k⁺ from R to RD (state 1 to 4) and k_p⁺ from RNAp-R_c to RNAp-R_cD (state 2 to 5) depend on the concentration u_3 of D throughout the cell. We assume that the maps u_3 → k⁺(u_3) and u_3 → k_p⁺(u_3) are Lipschitz continuous. For example, one may take

k⁺(u_3) = k_0⁺ ∫_{Ω_D} u_3(x) dx,

where k_0⁺ > 0 and Ω_D ⊂ Ω is a region around the location of the binding site of D where D can bind to its site.
Equations (5.10)-(5.12) allow for the occurrence of 'spontaneous' transcription initiations, in the absence of D. The effect of the transcription factor D on the transcription initiation process is in changing the binding rate of RNAp to the promoter from

Proof. The structure of the equations is such that the solution cannot leave R⁶_+: the vector field points inwards on the boundary. Therefore Π is invariant. We can write system (5.10)-(5.15) as π'(t) = Kπ(t), where the matrix K is positive off-diagonal. So there exists a > 0 such that K_a := aI + K ≥ 0. K_a is primitive: in fact, inspection of the graph of the possible transitions shows that K_a⁶ > 0, i.e. there is a directed path of length 6 between each pair of states. 'Conservation of mass' arises from 1K = 0, where 1 = (1, ..., 1). Thus, 1 is a strictly positive left eigenvector of K_a for the eigenvalue a. The Perron-Frobenius Theorem yields that the right eigenspace of K_a for the eigenvalue a is one-dimensional too and must contain a strictly positive vector π*, which can be uniquely chosen in Π. Thus, π*(ū_3) := π* is the unique steady state in Π and it is strictly positive. Moreover, all other eigenvalues λ of K_a have Re λ < a, which implies that all eigenvalues of K other than 0 have strictly negative real part. This yields the global attractivity of π* in Π.
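Numerically, the steady state π* can be obtained as the kernel of K normalised to Π. A minimal sketch with a hypothetical irreducible six-state rate matrix (the actual rates of Figure 5.1 depend on u_3):

```python
import numpy as np

# Hypothetical 6x6 transition rates: rates[i, j] is the rate from state j to
# state i (i != j), mimicking the structure pi'(t) = K pi(t).
rates = np.array([
    [0, 1, 0, 2, 0, 0],
    [3, 0, 1, 0, 0, 0],
    [0, 2, 0, 0, 0, 0],
    [1, 0, 0, 0, 1, 0],
    [0, 1, 0, 3, 0, 1],
    [0, 0, 1, 0, 2, 0],
], dtype=float)
# K is positive off-diagonal with columns summing to zero ('conservation of mass').
K = rates - np.diag(rates.sum(axis=0))

# Steady state pi*: the kernel of K, normalised so its entries sum to 1.
w, V = np.linalg.eig(K)
pi_star = np.real(V[:, np.argmin(np.abs(w))])
pi_star = pi_star / pi_star.sum()

assert np.allclose(K @ pi_star, 0, atol=1e-9)
assert np.all(pi_star > 0) and np.isclose(pi_star.sum(), 1.0)
```

Strict positivity of the computed π* reflects the Perron-Frobenius argument in the proof above: the kernel of K is spanned by a strictly positive vector whenever the transition graph is irreducible.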
Each component of π* can be computed using the King-Altman diagrammatic method [20], which is essentially Cramer's rule applied to matrices that arise from chemical reaction networks. It yields that where a_j^(i), a_j ≥ 0 are sums of particular products of rate constants in the transition network that do not depend on u_3. One has a_0 > 0. Because the rate functions k⁺ and k_p⁺ are Lipschitz, they are bounded on bounded subsets of C(Ω)_+. The bounded Lipschitz functions form an algebra. So when π_i* is restricted to a bounded subset, π_i*(u_3) = f(u_3)/g(u_3) with f and g bounded Lipschitz and g(u_3) ≥ 1, using the positivity of k⁺(u_3) and k_p⁺(u_3) and the constants a_j. Then

We compute the p.d.f. p_ini of T_ini by modifying the transition model that we already have. We add a state 7 of the promoter region that can be reached by transition from states 3 and 6, each with rate k_ini. This state represents an (absorbing) transcription initiation attempt. Moreover, we remove the transitions from state 3 to 1 and from state 6 to 4 (see Figure 5.1). Let π̃_i(t), i = 1, ..., 7, denote the probability of being in state i at time t in this new model. The dynamics of π̃(t) is governed by (5.10)-(5.15), where we removed the terms k_ini π_3 and k_ini π_6. Also add the equation

π̃_7'(t) = k_ini (π̃_3(t) + π̃_6(t))

and initial condition π̃_7(0) = 0.
Then π̃_7(t) is the probability that transcription initiation occurred before time t, given that the concentration of D evolved according to t → u_3(t). Thus, if the state at t = 0 of the promoter, π_0, and that of the activation system, u_0, are given, then put Here we stressed in the notation the dependence of the solution on the initial state π_0 and the development u_3(·) of the concentration of D over time. It yields We now assume that u_3(t) changes slowly compared to the transitions of state in the promoter region, such that the state of the promoter just after an intervention at time t_n equals the unique steady state π*(u_3^n) of system (5.10)-(5.15), where u_3^n is the state of the activation system directly after the arrival of the burst in protein production. Thus, if the state of the activation system directly after an intervention equals u_0, then

p_ini(u_0, t) = k_ini ( π̃_3(t; φ(u_0, ·), π*(u_0)) + π̃_6(t; φ(u_0, ·), π*(u_0)) ) (5.18)

for (slowly) changing u, starting from u_0 at t = 0.
Finally,

p̄(u, t) = (p_ini(u, ·) * p_tr)(t) = ∫_0^t p_ini(u, s) p_tr(t − s) ds, (5.19)

and equation (5.9) defines p(u, t). Since p_tr(t) = 0 for 0 ≤ t ≤ τ_tr, a similar statement holds for p̄(u, t), hence for p(u, t). So Assumption (A) is satisfied. Since ribosomes start translation already when the first part of the mRNA transcript becomes available from the RNAp, the time of appearance of the first completed gene product A is taken equal to the time of completion of the mRNA transcript. When transcription of gene a is complete, a burst of A proteins is added to the system. We model this addition as instantaneous. It causes the state x of the system just before the burst to change to x + h, where h is a random variable in H_+ of the particular form h = (h_A, 0, 0), because no compounds other than A are added by the translation process. Notice that h_A models randomness both in the number of proteins in a burst (cf. [27]) and in their spatial location directly after completion of translation.
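The convolution (5.19) and the inheritance of the delay τ_tr can be checked on a time grid with a discrete convolution; the two densities below are illustrative stand-ins for p_ini(u, ·) and p_tr:

```python
import numpy as np

dt = 0.01
t = np.arange(0, 20, dt)

# Illustrative stand-ins: exponential initiation time; transcription time
# shifted by the almost-sure delay tau_tr (so p_tr = 0 on [0, tau_tr]).
tau_tr = 2.0
p_ini = 0.5 * np.exp(-0.5 * t)
p_tr = np.where(t >= tau_tr, np.exp(-(t - tau_tr)), 0.0)

# p(u, t) = int_0^t p_ini(u, s) p_tr(t - s) ds as a discrete convolution.
p = np.convolve(p_ini, p_tr)[: len(t)] * dt

assert np.all(p[t < tau_tr] == 0)       # the delay is inherited by p
assert abs(p.sum() * dt - 1.0) < 0.05   # approximately a density on the grid
```

Since p_tr vanishes on [0, τ_tr], every term of the convolution sum vanishes there too, which is the discrete analogue of the statement that Assumption (A) carries over to p(u, t).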
Because the abstract model in Section 4 assumes that the random jumps h are centered at 0, we fix a probability density function where h_1 is a C(Ω)-valued random variable with law ν_1^ε that is supported in B(0, ε) ⊂ C(Ω). It has been observed experimentally that the number of proteins in a burst follows an exponential distribution approximately [26,4,17], but due to volume and resource restrictions in a cell, there should be a cut-off to this distribution at higher molecular numbers. Note that N need not be the expected number of produced molecules.
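Sampling burst sizes from an exponential distribution with a cut-off at high molecule numbers, as described above, can be done by inverse-transform sampling; the mean and cut-off values below are illustrative:

```python
import math, random

def sample_burst_size(mean=5.0, cutoff=50, u=None):
    """Sample a protein burst size from an exponential distribution with the
    given mean, truncated at `cutoff` molecules (illustrative parameters)."""
    if u is None:
        u = random.random()
    # Inverse CDF of the exponential distribution truncated to [0, cutoff].
    scale = 1.0 - math.exp(-cutoff / mean)
    t = -mean * math.log(1.0 - u * scale)
    return max(1, round(t))  # at least one protein per burst

random.seed(1)
bursts = [sample_burst_size() for _ in range(10000)]
assert 1 <= min(bursts) and max(bursts) <= 50
```

The truncation at `cutoff` implements the volume and resource restrictions mentioned above; without it the plain exponential would occasionally produce unrealistically large bursts.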
Clearly, S(·, t) maps H_+ into itself. The state u_{n+1} of the activation system just after the arrival of the burst of proteins at time t_{n+1} then equals S(u_n, t_n) plus the jump, where h ∈ H represents a random deviation from the deterministic jump, with law ν^ε, independent of the state S(u_n, t_n). We have thus defined a Markov chain (u_n) in H_+. Its law µ_n ∈ M_1(H_+) satisfies µ_{n+1} = Pµ_n, where the Markov operator P is given by equations (4.8) and (4.9). The set-up is as in the abstract model introduced at the start of Section 4, except that we need to define the invariant closed set X ⊂ H_+ for the deterministic 'flow' S(·, t) on which Assumptions (1)-(5) hold. This is the objective of the next section.
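A scalar caricature of the chain (u_n) may clarify the construction. The flow, jump size and noise law below are illustrative choices, not those of the model:

```python
import math
import random

# Scalar caricature of the post-intervention chain: between bursts the
# state decays under a deterministic flow S(u, t) = u * exp(-LAM * t);
# at a burst, a fixed jump C plus a small centred deviation h (supported
# in [-EPS, EPS], playing the role of nu_eps) is added.  All constants
# are illustrative.

LAM, C, EPS = 1.0, 5.0, 0.1

def step(u, rng):
    t = rng.expovariate(1.0)          # random inter-burst time
    s = u * math.exp(-LAM * t)        # deterministic flow S(u, t)
    h = rng.uniform(-EPS, EPS)        # centred random deviation
    return s + C + h                  # state just after the burst

rng = random.Random(1)
u, traj = 0.0, []
for _ in range(5000):
    u = step(u, rng)
    traj.append(u)

# the chain remains in a bounded subset of the positive half-line,
# the kind of behaviour needed for tightness arguments
print(min(traj), max(traj))
```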

Verification of assumptions of the abstract model
We show in this section that Assumptions (1)-(5) of the abstract model, formulated in Section 4, hold for the model for an autoregulated gene described above. Assumptions (1), (3) and (5) are the most involved.
Let G(u) := F(u) − Λu, viewed as a map from D_F ⊂ R^3 into R^3. The dynamical system (Φ_t)_{t≥0} defined by the initial value problem for the ODE x'(t) = G(x(t)) leaves R^3_+ invariant. Moreover, 0 is a globally stable steady state of this system in R^3_+, because the total number of A-particles, N(t), decreases along solutions. A box R_b = [0, b_1] × [0, b_2] × [0, b_3] is invariant when b satisfies condition (5.21). There exists b_3 > 0 satisfying (5.21) if and only if (5.22) holds.

Proof. A sufficient condition for R_b to be invariant is that the vector field G points inwards at every boundary point. For the part of the boundary where u_i = 0 for some i this is easily observed. For the other sides of the box R_b this leads to condition (5.21). An invariant region R_b for the ODE yields an invariant set H_{R_b} for the reaction-diffusion system (5.2), according to [5,6]. This allows us to establish Proposition 5.1 and Assumption (1) of Section 4:

Proof of Proposition 5.1. Standard arguments (e.g. [30]) give local existence and uniqueness of the mild solution t → u(t; u_0) for an initial condition u_0 in H_+, because of the local Lipschitz property of the perturbation G on an open neighbourhood of H_+ that avoids the discontinuities in G relating to the denominators of the Michaelis-Menten rates in (5.3) and (5.4). Let u_0 ∈ H_+. According to Lemma 5.3 there exists b ∈ R^3_+ that satisfies (5.21) and such that u_0 ∈ H_{R_b}. The latter set is invariant for the solution operators (φ_t)_{t≥0} of (5.2), i.e. u(t; u_0) ∈ H_{R_b} for all t for which the solution exists. In particular the solution remains positive and uniformly bounded. Thus the maximal mild solution exists for all time, since no blow-up can occur in finite time. Standard estimates using the Variation of Constants Formula (5.6) yield the joint continuity of the map (u_0, t) → u(t; u_0).

Let b ∈ R^3_+ satisfy (5.21), so that H_{R_b} is an invariant set for the dynamical system (φ_t)_{t≥0} defined by (5.2). We view the latter as ∂_t u = (DΔu − Λu) + F(u), that is, as a perturbation of the linear semigroup generated by DΔu − Λu by the non-linear map F. The corresponding mild solution u ∈ C(R_+, H_+) satisfies the Variation of Constants Formula. The eigenvalue λ_Δ depends on Ω, so it may happen that σ ≤ 0 for some Ω. Thus [6] cannot be expected to be applicable to all sets Ω. We aim at broader generality and therefore pursue a different approach.
Let u and û be two solutions with initial conditions u_0 and û_0 in H_{R_b}. According to (5.29) one obtains a first estimate; here one exploits that each component u_i(t) lies in (C(Ω), ||·||_∞) and that 0 ≤ u_i(t) ≤ b_i. From (5.26) we then derive an estimate of the form (5.34), with a convolution kernel K in which α, β and γ are defined as in (5.33).
Proof. Clearly, t → K(t) is a strictly decreasing function. Hence (5.36) holds. Put k_0 := K(0). Gronwall's Lemma yields the estimate of the Lipschitz constant. The remaining statements are now obvious.
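The role of Gronwall's Lemma here can be checked on a discrete caricature: saturating the integral inequality f(t) ≤ f_0 + k_0 ∫_0^t f(s) ds on a grid reproduces the exponential bound f_0 e^{k_0 t}. The constants below are illustrative:

```python
import math

# Discrete check of the Gronwall-type bound behind Lemma 5.6: if
# f(t) <= f0 + k0 * int_0^t f(s) ds, then f(t) <= f0 * exp(k0 * t).
# We iterate the saturated inequality on a grid (illustrative values).

K0, F0, DT, N = 0.8, 1.0, 0.001, 2000  # k0 = K(0), grid up to t = 2

f = [F0]
for n in range(1, N + 1):
    integral = sum(f) * DT            # left Riemann sum of int_0^t f
    f.append(F0 + K0 * integral)      # saturate the inequality

t_end = N * DT
# the discrete iterate stays below the Gronwall bound and approaches it
print(f[-1], F0 * math.exp(K0 * t_end))
```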
The condition k_0 < λ_0 in Lemma 5.6 puts constraints on all degradation rates: they must be sufficiently large. This may be too strong to cover real-life examples. Contractivity is too strong a property to require when only 'contractivity on average' is needed. Moreover, (5.36) will typically be a poor estimate for large t, so (5.35), and hence Lemma 5.6, is not optimal.
As an example of how the contractivity-on-average condition (3) widens the applicability of the results to our biological model, consider the situation where λ_0 = λ_3 and λ_1 = λ_2 > λ_3. That is, the dimer is the most stable compound with respect to degradation, while phosphorylation of the protein does not affect degradation. Put λ := λ_1 − λ_0. Then (5.37) holds. Note that K does not depend on b_1. If we take b_2 > 1 sufficiently large, then (5.37) reduces to an estimate valid for all t ∈ [0, T], with

λ(t) := a_+ e^{(λ_+ − λ_3)t} + a_− e^{(λ_− − λ_3)t}.   (5.40)

Here λ_+ and λ_− are the positive and the negative solution of the associated equation. Finally, a_− > 0 if and only if λ_+ < c_0.
Proof. Put f(t) := e^{λ_3 t} ||u(t) − û(t)||_H. Since K ≥ 0, repeated substitution of equation (5.34) into itself yields a bound by a series of convolution powers, where '*' denotes the convolution of locally integrable functions on R_+ and K^{(*k)} the k-fold convolution product of K with itself. Let Lφ(z) := ∫_0^∞ e^{−zt} φ(t) dt denote the Laplace transform of a locally integrable function φ on R_+.

Remark. The sign of λ_+ − λ_3 may be positive or negative, depending on the values of the constants c_0 and c_1, on the degradation rates λ_1 = λ_2, and on their deviation λ from λ_3. This can be seen from an estimate that derives directly from (5.44). According to Lemma 5.8, if c_0 + c_1 − λ_3 < 0, then there exists a T* > 0 such that λ(t) < 1 for 0 < t < T*.

Lemma 5.9. Assume c_0 + c_1 − λ_3 < 0 and τ < T*. Take T = T*. If there exist a closed interval I ⊂ (τ, T) and a number 0 < δ < 1 such that p(u, t) ≥ δ for all u ∈ X and t ∈ I, then there exists 0 ≤ γ < 1 such that ∫_0^T λ(t) p(u, t) dt ≤ γ < 1 for all u ∈ X. That is, Assumption (3) is satisfied.
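The resummation of convolution powers via the Laplace transform, as used in the proof, can be checked numerically for a simple exponential kernel. For K(t) = c e^{−at} one has LK(z) = c/(z + a), so the geometric series of transforms sums to c/(z + a − c), i.e. Σ_{k≥1} K^{(*k)}(t) = c e^{−(a−c)t}. The constants below are illustrative:

```python
import math

# Check that the sum of k-fold convolution powers of K(t) = c*exp(-a*t)
# equals c*exp(-(a-c)*t), as predicted by summing the geometric series
# of Laplace transforms c/(z+a).  Illustrative constants, grid up to t=1.

A, C, DT, N = 2.0, 0.5, 0.002, 500

grid = [C * math.exp(-A * n * DT) for n in range(N + 1)]  # K on the grid

def convolve(f, g):
    # Riemann-sum approximation of the convolution (f * g)(n * DT)
    return [sum(f[j] * g[n - j] for j in range(n + 1)) * DT
            for n in range(len(f))]

total = list(grid)          # k = 1 term
power = list(grid)
for _ in range(6):          # a few powers suffice here since C < A
    power = convolve(power, grid)
    total = [x + y for x, y in zip(total, power)]

closed_form = C * math.exp(-(A - C) * N * DT)
print(total[-1], closed_form)
```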

Main results on the model
We now summarise the major conclusions about our model of an autoregulated gene, viewed as a piecewise deterministic Markov process, that can be drawn from the theoretical results obtained in the first part of the paper. For both theorems one verifies, as in Section 5.3, that Assumptions (1)-(5) of the abstract model are satisfied. Hence Theorem 4.5 yields the result in each case.

From the proof of Theorem 4.4 we obtain that there exist integers m, M, a positive number β < δ and a set {x_1, . . . , x_M} ⊂ X such that

P^m_ε µ( ⋃_{i=1}^M B(x_i, δ) ) > β for any µ ∈ M^{B,1−δ}_1 and ε ∈ Υ.   (4.23)

Hence for every µ ∈ M^{B,1−δ}_1 and ε ∈ Υ there is a ball B(x_i, δ) such that P^m_ε µ(B(x_i, δ)) > β/M.

The superposition operator induced by F is real analytic on an open neighbourhood of each uniformly constant function u ≡ ū, with ū ∈ D_F. (Here ||·|| denotes the supremum norm.) We denote this operator also by F.

Figure 5.1. The six states of the promoter region, indicating possible transitions. The directions corresponding to the rate constants with positive upper index in (5.10)-(5.15) are shown. State 7 is added to compute the probability distribution function of the waiting time until the next transcription start event.

Thus, a sufficient condition for invariance is that the right-hand sides of the inequalities (5.23)-(5.25) are strictly negative for all u ∈ R_b. This leads to (5.21). The two parabolas in (b_2, b_3)-space that define the condition on b_3 in (5.21) intersect in the positive cone at the value of b_2 given by the right-hand side of (5.22), and the condition on b_3 can hold only when b_2 lies beyond this point. The region bounded by the two parabolas is unbounded in both the b_2 and the b_3 direction. This yields the last statement. The invariant region R_b for the ODE defined by G results in an invariant set

H_{R_b} := {f ∈ H : f(x) ∈ R_b for all x ∈ Ω}
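The boundary check behind this invariance argument is easy to automate for a given vector field. The field G below is a hypothetical stand-in with the same sign structure (bounded production terms, linear degradation), not the G of the paper:

```python
import itertools

# Sketch of the invariance check in the proof of Lemma 5.3: on each face
# {u_i = b_i} of the box R_b = [0,b_1] x [0,b_2] x [0,b_3], the i-th
# component of the vector field must be strictly negative.  The field G
# here is a hypothetical stand-in, not the one of the paper.

def G(u):
    u1, u2, u3 = u
    return (1.0 - 0.5 * u1,          # bounded production, degradation
            0.4 * u1 - 0.7 * u2,
            0.3 * u2 - 0.6 * u3)

def box_is_invariant(b, samples=10):
    # test G_i < 0 on a grid over each top face {u_i = b_i}
    for i in range(3):
        ranges = [[b[j] * k / (samples - 1) for k in range(samples)]
                  if j != i else [b[i]] for j in range(3)]
        for u in itertools.product(*ranges):
            if G(u)[i] >= 0.0:
                return False
    return True

# a box large enough in each direction is invariant; too small is not
print(box_is_invariant((3.0, 2.0, 1.5)), box_is_invariant((0.5, 2.0, 1.5)))
```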

Lemma 5.4.
Let N > 0 and f_A^1 ∈ C(Ω)_+ be given, and assume that Assumption (A) holds. Suppose b ∈ R^3_+ satisfies (5.21) and condition (5.27), where d = min_i d_i, λ_Δ is the smallest positive eigenvalue of −Δ acting on C(Ω) with Neumann boundary conditions, and M = max{ ||DG(u)|| : u ∈ R_b }. One has estimate (5.29). The entries DG(u)_{23} = 2k^−_D and DG(u)_{33} = −k^−_D − λ_3 are both independent of u, so in our setting M ≥ δ > 0, where δ is independent of the region R_b.

We assume that the continuous function S : X × [0, T] → X satisfies the Lipschitz-type inequality (4.16). From Lemma 4.3 it follows that there exists a bounded closed set B ⊂ X such that the lim inf condition (4.6) holds, where (T(t))_{t≥0} is the strongly continuous linear semigroup in H defined by the product of the diffusion semigroups (T_{d_i}(t))_{t≥0} in C(Ω) associated with D_iΔ in each of the components. Let C(Ω)_+ denote the closed convex cone of positive functions in C(Ω) and put H_+ := (C(Ω)_+)^3.