\documentclass[reqno]{amsart} \AtBeginDocument{{\noindent\small {\em Electronic Journal of Differential Equations}, Vol. 2004(2004), No. 05, pp. 1--30.\newline ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu \newline ftp ejde.math.txstate.edu (login: ftp)} \thanks{\copyright 2004 Texas State University - San Marcos.} \vspace{9mm}} \begin{document} \title[\hfilneg EJDE-2004/05\hfil Periodic and invariant measures] {Periodic and invariant measures for stochastic wave equations} \author[Jong Uhn Kim\hfil EJDE-2004/05\hfilneg] {Jong Uhn Kim} \address{Department of Mathematics, Virginia Tech, Blacksburg, VA 24061-0123, USA} \email{kim@math.vt.edu} \date{} \thanks{Submitted December 15, 2002. Published January 2, 2004.} \subjclass[2000]{35L65, 35R60, 60H15} \keywords{Wave equation, Brownian motion, periodic measure, \hfill\break\indent invariant measure, probability distribution, tightness}
\begin{abstract} We establish the existence of periodic and invariant measures for a semilinear wave equation with random noise. These are counterparts of time-periodic and stationary solutions of a deterministic equation. The key element in our analysis is to prove that the family of probability distributions of a solution is tight. \end{abstract}
\maketitle \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \allowdisplaybreaks
\section{Introduction}
We consider the semilinear wave equation with random noise \begin{equation} u_{tt} + 2\alpha u_t -\Delta u + \beta u = f(t, x, u) + \sum_{k=1}^{\infty} g_k(t, x,u) \frac{dB_k}{dt} \label{0.1} \end{equation} where $(t, x) \in (0, \infty) \times \mathbb{R}^3$, $\alpha >0$, $\beta >0$ are constants, and $f, g_k$'s are given nonlinear functions. $B_k$'s are mutually independent standard Brownian motions. Later on, we will make precise assumptions on $f$ and $g_k$'s. When $g_k \equiv 0$ for all $k$, and $f(t, x, u) = f_1(t, x) -|u|^{p-1}u$ with $1 \le p \le 3$, \eqref{0.1} is a model equation in nonlinear meson theory. The Cauchy problem and the initial-boundary value problem associated with this deterministic equation have been completely investigated. See Lions \cite{l1}, Reed and Simon \cite{r1}, Temam \cite{t1}, and references therein. It has also been investigated as a model equation whose solutions converge to a global attractor. For an extensive discussion of the dynamical system associated with this equation, see \cite{t1}. On the other hand, the Cauchy problem for the stochastic equation with random noise was discussed by Chow \cite{c1}, Garrido-Atienza and Real \cite{g1} and Pardoux \cite{p1}. Crauel, Debussche and Flandoli \cite{c3} proved the existence of random attractors for an initial-boundary value problem associated with \eqref{0.1}. Here our goal is to obtain a periodic measure for \eqref{0.1} when the given functions are time-periodic, and an invariant measure when the given functions are independent of time. A probability measure $\mu$ on the natural function class for \eqref{0.1} is called a periodic measure if taking $\mu$ as the initial probability distribution generates time-periodic probability distributions of the solution, and is called an invariant measure if it results in time-invariant probability distributions of the solution. We can handle both the Cauchy problem in the whole space $\mathbb{R}^3$ and an initial-boundary value problem in a bounded domain.
But we will present full details only for the case of the whole space $\mathbb{R}^3$, and give a sketch of the procedure for a bounded domain. The case of the whole space is more challenging due to the lack of compact imbedding of the usual Sobolev spaces. For the Cauchy problem for \eqref{0.1}, Chow \cite{c1} used a basic result of Da Prato and Zabczyk \cite{d1} for evolution equations. The main difficulty arises from the polynomial nonlinearity in the equation. This type of nonlinearity can be handled by the truncation method. For its use for parabolic equations, see Gy\"ongy and Rovira \cite{g2}. The result in \cite{d1} is based on the analysis of stochastic convolutions in the framework of semigroup theory. Our goal is to obtain periodic and invariant measures. For this, we need some basic estimates to ensure tightness of the probability laws. Such estimates can be obtained most conveniently in the frequency domain via the Fourier transform. These new estimates can be derived through the representation formula for solutions in the frequency domain. Hence, it seems natural to obtain solutions in the same context, and we will present the proof of existence independently of the previous works. But we will borrow a truncation device from \cite{c1}. Da Prato and Zabczyk \cite{d1,d2} present some general results on the existence of invariant measures for stochastic semilinear evolution equations, which cover basically two different cases. The first case is that of equations with suitable dissipation. By means of translation of the time variable and two-sided Brownian motions, dissipation of the energy results in invariant measures. The second case is that of equations associated with compact semigroups. This compactness of semigroups can be used to prove tightness of the probability distributions of a solution, which in turn yields invariant measures. Parabolic equations fall into this category, and so far most of the works on invariant measures for nonlinear equations have been concerned with equations of parabolic type. However, there are other types of equations which are not covered by either of these cases. The equation \eqref{0.1} is one of them. The result of \cite{c3} combined with that of \cite{c4} yields invariant measures for \eqref{0.1} in a bounded space domain. By their method, only additive noise with sufficiently regular coefficients can be handled. Our method can relax such restrictive assumptions; see remarks in Section 6 below. Our method is based upon the works of Khasminskii \cite{k2} and Parthasarathy \cite{p2}. The idea of \cite{k2} was used in \cite{c2} for quasilinear parabolic equations. The main task is to prove tightness of the probability distributions of a solution. We borrow an essential idea from Parthasarathy \cite[Theorem 2.2]{p2}. But substantial technical adaptation is necessary for our problem. When the space domain is unbounded, we cannot use the compactness of the usual Sobolev imbeddings. For reaction-diffusion equations, Wang \cite{w1} overcame this difficulty by approximating the whole space through expanding balls. This idea was also used in Lu and Wang \cite{l2}. We will adapt it to our problem. In Section 2, we introduce notation and present some preliminaries for stochastic processes. In Section 3, we prove the existence of a solution and establish some estimates which will be used later. In Section 4, we prove that the family of probability distributions of a solution is tight, and establish the existence of a periodic measure and an invariant measure in Section 5.
Finally, we explain how our method can be used for an initial-boundary value problem in Section 6.
\section{Notation and Preliminaries}
When $\mathcal{G}$ is a subset of $\mathbb{R}^n$, $C(\mathcal{G})$ is the space of continuous functions on $\mathcal{G}$, and $C_0(\mathcal{G})$ is the space of continuous functions with compact support contained in $\mathcal{G}$. $H^m(\mathbb{R}^3)$ stands for the usual Sobolev space of order $m$. For a function $f$ on $\mathbb{R}^3$, its Fourier transform is given by $$ \hat f(\xi) =\frac{1}{(2\pi)^{3/2}}\int_{\mathbb{R}^3} f(x) e^{-i \xi \cdot x} \,dx $$ and the inversion formula is $$ f(x) =\frac{1}{(2\pi)^{3/2}}\int_{\mathbb{R}^3} \hat f(\xi) e^{i \xi \cdot x} \,d\xi, $$ which will also be expressed by $f = F_{\xi}^{-1}\bigl(\hat f\bigr)$. In this context, it is convenient to define the convolution by $$ \bigl(f \ast g\bigr)(x)= \frac{1}{(2\pi)^{3/2}}\int_{\mathbb{R}^3} f(x-y) g(y)\,dy. $$ For function spaces with respect to the variable $\xi$ in the frequency domain, we use the notation $L^p(\xi)$ and $H^m(\xi)$ to denote $L^p(\mathbb{R}^3)$ and $H^m(\mathbb{R}^3)$, respectively.\par
We recall some properties of Sobolev spaces. For $2 \le q \le 6$, there is some positive constant $C_q$ such that \begin{equation} \|\psi\|_{L^q(\mathbb{R}^3)} \le C_q \|\psi\|_{H^1(\mathbb{R}^3)}, \quad \text{for all $\psi \in H^1(\mathbb{R}^3)$}. \label{1.1} \end{equation} We also have
\begin{lemma} \label{lm1.1} Suppose $0 \le s <1$ and $ \psi \in L^2(\mathbb{R}^3)$. If \begin{equation} \Big| \int_{\mathbb{R}^3} \psi(x) \bigl(I -\Delta \bigr)\phi(x)\,dx\Big| \le C \|\phi\|_{H^{1+s}(\mathbb{R}^3)} \label{1.2} \end{equation} holds for all $\phi \in C_0^{\infty}(\mathbb{R}^3)$, for some positive constant $C$, then $\psi \in H^{1-s}(\mathbb{R}^3)$. \end{lemma}
\begin{proof} By Parseval's identity, \[ \int_{\mathbb{R}^3} \psi(x) \bigl(I- \Delta\bigr) \phi(x)\,dx %\label{1.3}\\
= \int_{\mathbb{R}^3} \hat \psi(\xi) \bigl(1+ |\xi|^2\bigr)^{(1-s)/2} \overline{\hat \phi(\xi)} \bigl(1+ |\xi|^2\bigr)^{(1+s)/2}\,d\xi \] which, with \eqref{1.2}, yields $\hat \psi(\xi) \bigl(1+ |\xi|^2\bigr)^{(1-s)/2} \in L^2(\mathbb{R}^3)$, and $\|\psi\|_{H^{1-s}(\mathbb{R}^3)} \le C$. %\label{1.4}
\end{proof}
\begin{lemma} \label{lm1.2} Let $1 \le p <3$ and $ q=\frac{3-p}{2}$. Then, \begin{equation} \| \psi |\psi|^{p-1}\|_{H^q(\mathbb{R}^3)} \le C_p \|\psi\|_{H^1(\mathbb{R}^3)}^p \label{1.5} \end{equation} holds for all $\psi \in H^1(\mathbb{R}^3)$, for some positive constant $C_p$. \end{lemma}
\begin{proof} For any $\phi \in C_0^{\infty}(\mathbb{R}^3)$, we see, by \eqref{1.1}, \begin{equation} \bigl\|\psi |\psi|^{p-1}\bigr\|_{L^2(\mathbb{R}^3)} \le C \|\psi\|_{H^1(\mathbb{R}^3)}^p \label{1.6} \end{equation} and \begin{align} \Big|\int_{\mathbb{R}^3} \psi |\psi|^{p-1} \Delta \phi\,dx \Big|&= \Big| p\int_{\mathbb{R}^3} |\psi|^{p-1} \nabla \psi \cdot \nabla \phi\,dx\Big| \label{1.7}\\ & \le C_p \|\nabla \psi\|_{L^2(\mathbb{R}^3)} \bigl\| |\psi|^{p-1} \bigr\|_{L^{6/(p-1)}(\mathbb{R}^3)} \|\nabla \phi\|_{L^{6/(4-p)}(\mathbb{R}^3)} \nonumber \\ & \le C_p \|\psi\|_{H^1(\mathbb{R}^3)}^p \|\phi\|_{H^{(1+p)/2}(\mathbb{R}^3)}. \nonumber \end{align} Estimate \eqref{1.5} follows from \eqref{1.6}, \eqref{1.7} and Lemma \ref{lm1.1}.
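As a quick check of the exponents used in \eqref{1.7}: H\"older's inequality applies because
\[
\frac12 + \frac{p-1}{6} + \frac{4-p}{6} = 1, \qquad
\bigl\| |\psi|^{p-1} \bigr\|_{L^{6/(p-1)}(\mathbb{R}^3)} = \|\psi\|_{L^6(\mathbb{R}^3)}^{p-1} \le C \|\psi\|_{H^1(\mathbb{R}^3)}^{p-1}
\]
by \eqref{1.1}, and the bound $\|\nabla \phi\|_{L^{6/(4-p)}(\mathbb{R}^3)} \le C \|\phi\|_{H^{(1+p)/2}(\mathbb{R}^3)}$ is the Sobolev imbedding $H^{(p-1)/2}(\mathbb{R}^3) \hookrightarrow L^{6/(4-p)}(\mathbb{R}^3)$ applied to $\nabla \phi$.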
\end{proof} Throughout this paper, $\{B_k(t)\}_{k=1}^{\infty}$ is a set of mutually independent standard Brownian motions over the stochastic basis $\{\Omega, \mathcal{F}, \mathcal{F}_t, P\}$ where $P$ is a probability measure over the $\sigma$-algebra $\mathcal{F}$, $\{\mathcal{F}_t\}$ is a right-continuous filtration over $\mathcal{F}$, and $\mathcal{F}_0$ contains all $P$-negligible sets. $E(\cdot)$ denotes the expectation with respect to $P$. When $\mathcal{X}$ is a Banach space, $\mathcal{B} (\mathcal{X})$ denotes the set of all Borel subsets of $\mathcal{X}$. For $ 1 \le p <\infty$, $L^p\bigl(\Omega; \mathcal{X}\bigr)$ denotes the set of all $\mathcal{X}$-valued $\mathcal{F}$-measurable functions $h$ such that \[ \int_{\Omega} \|h\|_{\mathcal{X}}^p\,dP <\infty. %%\label{1.8} \] $L^{\infty}\bigl(\Omega; \mathcal{X}\bigr)$ is the set of all $\mathcal{X}$-valued $\mathcal{F}$-measurable functions $h$ such that $\|h\|_{\mathcal{X}}$ is essentially bounded with respect to the measure $P$. For general information on stochastic processes, see Karatzas and Shreve \cite{k1}. We need the following fact due to Berger and Mizel \cite{b1}. \begin{lemma} \label{lm1.3} Let $h(t,s;\omega)$ be $\mathcal{B}([0, T]\times [0, T]) \otimes \mathcal{F}$-measurable and adapted to $\{\mathcal{F}_s\}$ in $s$ for each $t$. Suppose that for almost all $\omega \in \Omega$, $h$ is absolutely continuous in $t$, and \begin{gather*} \int_0^T\int_0^t \Big| \frac{\partial h}{\partial t}(t,s)\Big|^2 ds dt < \infty, \quad \text{for almost all $\omega$}, \\ %%\label{1.9} \int_0^t |h(t,s)|^2 ds < \infty, \quad \text{for almost all $\omega$}, %%\label{1.10} \end{gather*} for each $t$. Let \[ z_k(t) = \int_0^t h(t,s) dB_k(s), \quad k=1, 2, \dots. %%\label{1.11} \] Then, it holds that \[ dz_k(t) = h(t,t) dB_k(t) + \Big( \int_0^t \frac{\partial h}{\partial t}(t,s) dB_k(s)\Big) dt. %%\label{1.12} \] \end{lemma} \section{The Cauchy problem} We start from the linear problem. \begin{gather} u_{tt} + 2\alpha u_t - \Delta u + \beta u = f + \sum_{k=1}^{\infty} g_k \frac{dB_k}{dt}, \label{2.1}\\ u(0) = u_0, \quad u_t(0) = u_1. \label{2.2} \end{gather} We suppose that $(u_0, u_1)$ is $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$-valued $\mathcal{F}_0$-measurable, \begin{equation} (u_0, u_1) \in L^2\bigl(\Omega; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr) \label{2.3} \end{equation} and that $f, g_k$'s are $L^2(\mathbb{R}^3)$-valued predictable processes such that \begin{gather} f, g_k \in L^2\bigl(\Omega; L^2(0, T; L^2(\mathbb{R}^3))\bigr) , \label{2.4}\\ E\Big( \sum_{k=1}^{\infty} \int_0^T \|g_k\|_{L^2(\mathbb{R}^3)}^2\,dt \Big) <\infty \label{2.5} \end{gather} for each $T>0$. By taking the Fourier transform, this problem is transformed in the frequency domain as follows. \begin{gather} \hat u_{tt} + 2\alpha \hat u_t + |\xi|^2 \hat u + \beta \hat u = \hat f + \sum_{k=1}^{\infty} \hat g_k \frac{dB_k}{dt}, \label{2.6}\\ \hat u(0) = \hat u_0, \quad \hat u_t(0) = \hat u_1. \label{2.7} \end{gather} Let us first consider more restrictive data. So we assume that $(\hat u_0, \hat u_1)$ is $\mathcal{F}_0$-measurable, and \begin{equation} \hat u_0, \hat u_1 \in L^2(\Omega; C_0(K)) \label{2.8} \end{equation} and that $\hat f, \hat g_k$'s are predictable, and \begin{gather} \hat f, \hat g_k \in L^{\infty}\bigl(\Omega; C_0([0, \infty) \times K)\bigr) \label{2.9} \\ \sum_{k=1}^{\infty} \|\hat g_k\|_{L^{\infty} (\Omega; C_0([0, \infty)\times K))}^2 < \infty \label{2.10} \end{gather} where $K$ is a compact subset of $\mathbb{R}^3$. 
Then, the solution $\hat u$ of \eqref{2.6} - \eqref{2.7} is given by, for each $\xi \in \mathbb{R}^3$, \begin{align} \hat u(t, \xi) &= e^{-\alpha t}\cos (\sqrt{|\xi|^2 + \gamma} t) \hat u_0(\xi) + e^{-\alpha t} \dfrac{\sin (\sqrt{|\xi|^2 +\gamma} t)}{\sqrt{ |\xi|^2 +\gamma}} \hat u_1(\xi) \nonumber\\ &\quad + \int_0^t e^{-\alpha (t -s)} \dfrac{\sin \bigl(\sqrt{|\xi|^2 +\gamma} (t-s)\bigr)}{\sqrt{|\xi|^2 + \gamma}} \hat f(s, \xi) ds \label{2.11}\\ &\quad + \sum_{k=1}^{\infty} \int_0^t e^{-\alpha (t -s)} \dfrac{\sin \bigl(\sqrt{|\xi|^2 +\gamma} (t-s)\bigr)}{\sqrt{|\xi|^2 + \gamma}} \hat g_k(s, \xi)\,dB_k(s)\nonumber \end{align} for all $t \ge 0$, for almost all $\omega \in \Omega$, where $\gamma = \beta - \alpha^2$. Since $|\xi|^2 + \gamma \le 0$ is possible, we note that \begin{gather*} \cos \bigl( \sqrt{|\xi|^2 +\gamma} t\bigr) = \sum_{k=0}^{\infty} (-1)^k\dfrac{\bigl(|\xi|^2 + \gamma\bigr)^k t^{2k}}{(2k)!} \\ \dfrac{\sin \bigl( \sqrt{|\xi|^2 +\gamma} t\bigr)} {\sqrt{|\xi|^2 +\gamma}} =\sum_{k=0}^{\infty} (-1)^k \dfrac{\bigl(|\xi|^2 + \gamma\bigr)^k t^{2k+1}}{(2k +1)!}. \end{gather*} %\label{2.12} One can apply Ito's formula to \eqref{2.6} for each fixed $\xi$, but we first have to find the manner in which $\hat u$ depends on $\xi$. It is easy to see that $$ e^{-\alpha t} \cos \bigl( \sqrt{|\xi|^2 +\gamma} t\bigr),\quad e^{-\alpha t} \dfrac{\sin \bigl( \sqrt{|\xi|^2 +\gamma} t\bigr)} {\sqrt{|\xi|^2 +\gamma}} $$ and their time derivatives are continuous and uniformly bounded for $t \ge 0 $ and $\xi \in K$. This fact is used to estimate the terms in the right-hand side of \eqref{2.11}. But the last term needs some manipulation for a necessary estimate, because it is not a martingale. Thus, we use Lemma \ref{lm1.3} to write \begin{align*} J_k & := \int_0^t e^{-\alpha (t -s)} \dfrac{\sin \bigl(\sqrt{|\xi|^2 +\gamma} (t-s)\bigr)}{\sqrt{|\xi|^2 + \gamma}} \hat g_k(s, \xi) dB_k(s) \\ %\label{2.13}\\ & =\int_0^t \int_0^s (-\alpha) e^{-\alpha (s-\eta)} \dfrac{\sin \bigl(\sqrt{|\xi|^2 +\gamma} (s-\eta)\bigr)}{\sqrt{|\xi|^2 + \gamma}} \hat g_k(\eta, \xi) dB_k(\eta)\,ds \\ &\quad + \int_0^t \int_0^s e^{-\alpha (s-\eta)} \cos \bigl(\sqrt{|\xi|^2 +\gamma} (s-\eta)\bigr) \hat g_k(\eta, \xi) dB_k(\eta)\,ds, \quad k=1, 2, \dots, \end{align*} and \begin{align*} \partial_t J_k =& \int_0^t \int_0^s \alpha^2 e^{-\alpha (s-\eta)} \dfrac{\sin \bigl(\sqrt{|\xi|^2 +\gamma} (s-\eta)\bigr)}{\sqrt{|\xi|^2 + \gamma}} \hat g_k(\eta, \xi) dB_k(\eta)\,ds \\ %\label{2.14} & + \int_0^t \int_0^s (-\alpha) e^{-\alpha (s-\eta)} \cos \bigl(\sqrt{|\xi|^2 +\gamma} (s-\eta)\bigr) \hat g_k(\eta, \xi) dB_k(\eta)\,ds\\ & + \int_0^t \int_0^s (-\alpha) e^{-\alpha (s-\eta)} \cos \bigl(\sqrt{|\xi|^2 +\gamma} (s-\eta)\bigr) \hat g_k(\eta, \xi) dB_k(\eta)\,ds \\ & - \int_0^t \int_0^s e^{-\alpha (s-\eta)}\sqrt{|\xi|^2 +\gamma} \sin \bigl(\sqrt{|\xi|^2 +\gamma} (s-\eta)\bigr) \hat g_k(\eta, \xi) dB_k(\eta)\,ds\\ & + \int_0^t \hat g_k(s, \xi)\,dB_k(s), \quad k=1, 2, \dots. \end{align*} These integrals are easy to estimate, and we find that for each $T >0$, \begin{equation} \lim_{\xi_1 \to \xi_2}E\Big(\bigl\|J_k(\xi_1) - J_k(\xi_2)\bigr\|_{C^1([0, T])}^2\Big) = 0. \label{2.15} \end{equation} By \eqref{2.10} and \eqref{2.15}, the last term of \eqref{2.11} belongs to $C_0\bigl(K; L^2(\Omega; C^1([0, T]))\bigr)$ for each $T>0$. It is easy to see that other terms also belong to the same function class, and we conclude \[ \hat u \in C_0\bigl(K; L^2(\Omega; C^1([0, T]))\bigr) %\label{2.16} \] for all $T>0$. 
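We remark that the uniform boundedness of these kernels can be checked directly also when $|\xi|^2 + \gamma < 0$. Writing $\lambda = \sqrt{-|\xi|^2 -\gamma}$, so that $\lambda^2 = \alpha^2 - \beta - |\xi|^2$ and hence $0 \le \lambda \le \sqrt{\alpha^2 -\beta} < \alpha$, the power series above give
\[
e^{-\alpha t}\cos \bigl( \sqrt{|\xi|^2 +\gamma}\, t\bigr) = e^{-\alpha t} \cosh (\lambda t) \le e^{-(\alpha -\lambda) t}, \qquad
e^{-\alpha t}\, \dfrac{\sin \bigl( \sqrt{|\xi|^2 +\gamma}\, t\bigr)}{\sqrt{|\xi|^2 +\gamma}} = e^{-\alpha t}\, \dfrac{\sinh (\lambda t)}{\lambda} \le t\, e^{-(\alpha -\lambda) t},
\]
with $\sinh(\lambda t)/\lambda$ read as $t$ when $\lambda =0$. Together with the elementary bounds in the case $|\xi|^2 + \gamma \ge 0$, this shows that both kernels are bounded uniformly for $t \ge 0$ and $\xi \in \mathbb{R}^3$, and that the bounds decay exponentially as $t \to \infty$; this is the property that will be used repeatedly in the estimates below.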
Let $\tilde K$ be another compact subset whose interior contains $K$. By partition of unity, $\hat u$ can be approximated in $C_0\bigl(\tilde K; L^2(\Omega; C^1([0, T]))\bigr)$ by a sequence of functions of the form \[ \hat u_N =\sum_{j=1}^N a_{Nj}(\xi) b_{Nj}(\omega, t) %\label{2.17}
\] where each $a_{Nj} \in C_0(\tilde K)$ and $b_{Nj} \in L^2\bigl(\Omega; C^1([0, T])\bigr)$, and $b_{Nj}(t)$ is $\mathcal{F}_t$-measurable for every $t$. Each $\hat u_N$ belongs to $L^2\bigl(\Omega; C^1([0, T]; L^2(\tilde K))\bigr)$ and satisfies \[ \| \hat u_N \|_{L^2(\Omega; C^1([0, T]; L^2(\tilde K)))} \le M_{\tilde K} \|\hat u_N\|_{C_0(\tilde K; L^2(\Omega; C^1([0, T])))} %\label{2.18}
\] where $M_{\tilde K}$ is a positive constant depending only on $\tilde K$. It follows that $\{\hat u_N\}$ is a Cauchy sequence in $L^2\bigl(\Omega; C^1([0, T]; L^2(\tilde K))\bigr)$. Hence, $\hat u \in L^2\bigl(\Omega; C^1([0, T];L^2(\tilde K))\bigr)$. Meanwhile, each $\hat u_N(t)$ is $L^2(\tilde K)$-valued $\mathcal{F}_t$-measurable and so is $\hat u(t)$. Also, for each $t$, $\hat u_N(t)$ and $\hat u(t)$ are $\mathcal{B} (\tilde K)\otimes \mathcal{F}_t$-measurable. By Ito's formula, we find that for each $\xi \in K$,
\begin{align} & |\hat u_t(t)|^2 + (|\xi|^2 +\beta+2\epsilon \alpha)|\hat u(t)|^2 + 2\epsilon Re\bigl(\hat u_t(t)\overline{\hat u(t)}\bigr) \nonumber \\ & = |\hat u_1|^2 +(|\xi|^2 + \beta +2 \epsilon \alpha)|\hat u_0|^2 + 2\epsilon Re\bigl( \hat u_1 \overline{\hat u_0}\bigr) \label{2.19} \\ &\quad -(4\alpha - 2\epsilon)\int_0^t |\hat u_t(s)|^2\,ds - 2{\epsilon} \int_0^t (|\xi|^2 + \beta )|\hat u(s)|^2\,ds \nonumber \\ &\quad +\int_0^t 2 Re\Big(\hat f(s) \overline{\hat u_t(s)} + \epsilon\hat f(s)\overline{\hat u(s)}\Big)\,ds + \sum_{k=1}^{\infty}\int_0^t |\hat g_k(s)|^2\,ds \nonumber \\ &\quad + \sum_{k=1}^{\infty} \int_0^t 2 Re \Big(\hat g_k(s) \overline{\hat u_t(s)} + \epsilon\hat g_k(s)\overline{\hat u(s)}\Big)\,dB_k(s)\nonumber \end{align}
holds for $t \ge 0$, for almost all $\omega \in \Omega$. In fact, \eqref{2.19} holds in $C_0\bigl(K; L^1(\Omega;C([0, T]))\bigr)$, for all $T> 0$, because each term belongs to $C_0\bigl(K; L^1(\Omega;C([0, T]))\bigr)$, for all $T>0$. By \eqref{2.10}, we can apply the stochastic Fubini theorem to the last term, and find that
\begin{align} & \|\hat u_t(t)\|_{ L^2(\xi)}^2 + \bigl\|\sqrt{|\xi|^2 +\beta+2\epsilon \alpha} \hat u(t)\bigr\|_{ L^2(\xi)}^2 + 2\epsilon \int_{\mathbb{R}^3} Re\bigl(\hat u_t(t)\overline{\hat u(t)}\bigr)d\xi \nonumber \\ & = \|\hat u_1\|_{ L^2(\xi)}^2 +\bigl\|\sqrt{|\xi|^2 + \beta +2 \epsilon \alpha} \hat u_0\bigr\|_{ L^2(\xi)}^2 + 2\epsilon \int_{\mathbb{R}^3}Re\bigl( \hat u_1 \overline{\hat u_0}\bigr) d\xi \nonumber\\ &\quad -(4\alpha - 2\epsilon)\int_0^t\int_{\mathbb{R}^3} |\hat u_t(s)|^2\,d\xi\,ds - 2{\epsilon} \int_0^t \int_{\mathbb{R}^3} (|\xi|^2 + \beta )|\hat u(s)|^2\,d\xi ds\nonumber \\ &\quad +\int_0^t \int_{\mathbb{R}^3} 2 Re\Big(\hat f(s) \overline{\hat u_t(s)} + \epsilon\hat f(s)\overline{\hat u(s)}\Big)\,d\xi ds + \sum_{k=1}^{\infty}\int_0^t \|\hat g_k(s)\|_{ L^2(\xi)}^2\,ds \nonumber \\ &\quad + \sum_{k=1}^{\infty} \int_0^t \int_{\mathbb{R}^3} 2 Re \Big(\hat g_k(s) \overline{\hat u_t(s)} + \epsilon\hat g_k(s)\overline{\hat u(s)}\Big)\,d\xi\,dB_k(s) \label{2.20} \end{align}
holds for all $t\ge 0$, for almost all $\omega$. We now choose $\epsilon$ such that \begin{equation} 0 < \epsilon < \alpha\,.
\label{2.21} \end{equation} By the Burkholder-Davis-Gundy inequality, we have \begin{align} & E\Big(\sup_{0 \le s \le t}\Big|\sum_{k=1}^{\infty} \int_0^s \int_{\mathbb{R}^3}2 Re\Big(\hat g_k(\eta) \overline{\hat u_t(\eta)} + \epsilon\hat g_k(\eta)\overline{\hat u(\eta)}\Big)d\xi\,dB_k(\eta)\Big|\Big) \nonumber \\ & \le M E\Big(\sum_{k=1}^{\infty} \int_0^t \Big|\int_{\mathbb{R}^3}2 Re\Big(\hat g_k(s) \overline{\hat u_t(s)} + \epsilon\hat g_k(s)\overline{\hat u(s)}\Big)d\xi \Big|^2\,ds\Big)^{1/2} \label{2.22} \\ & \le \rho E\Big(\sup_{0 \le s \le t}\bigl(\|\hat u(s)\|_{ L^2(\xi)}^2 + \|\hat u_t(s)\|_{L^2(\xi)}^2\bigr)\Big) \nonumber \\ &\quad + \frac{M}{\rho} \sum_{k=1}^{\infty}E\Big(\int_0^t \|\hat g_k(s)\|_{ L^2(\xi)}^2 ds\Big), \quad \text{for all $\rho >0$}. \nonumber \end{align} Thus, by using \eqref{2.22} with suitably small $\rho$, we can derive from \eqref{2.20} \begin{align} &E\Big(\sup_{0 \le s\le t}\Big(\|\hat u_t(s)\|_{ L^2(\xi)}^2 +\| \sqrt{|\xi|^2 +\beta} \hat u(s)\|_{ L^2(\xi)}^2 \Big)\Big) \nonumber \\ & \le M E \Big( \|\hat u_1\|_{ L^2(\xi)}^2 +\bigl\|\sqrt{|\xi|^2 + \beta} \hat u_0\bigr\|_{ L^2(\xi)}^2\Big) \label{2.23}\\ & \quad+ M t E\Big(\int_0^t \|\hat f(s)\|_{ L^2(\xi)}^2 ds \Big) + M \sum_{k=1}^{\infty} E\Big(\int_0^t \|\hat g_k(s)\|_{ L^2(\xi)}^2 ds\Big) \nonumber \end{align} where $M$ denotes positive constants independent of $K$ and $t\ge 0$. We now consider the general data satisfying \eqref{2.3} - \eqref{2.5}. Let us fix any $T>0$. We can choose sequences $\{\hat u_{0,n}\}, \{\hat u_{1,n}\}, \{\hat f_n\}$ and $\{\hat g_{k, n}\}$ such that \begin{gather*} \sqrt{|\xi|^2 +\beta} \hat u_{0,n} \to \sqrt{|\xi|^2 + \beta} \hat u_0 \quad \text{in $L^2\bigl(\Omega; L^2(\xi)\bigr)$}, \\%\label{2.24} \\ \hat u_{1,n} \to \hat u_1 \quad \text{in $L^2\bigl(\Omega; L^2(\xi)\bigr)$},\\ %\label{2.25}\\ \hat f_n \to \hat f \quad \text{in $L^2\bigl(\Omega; L^2(0, T; L^2(\xi))\bigr)$},\\ %\label{2.26}\\ \sum_{k=1}^{\infty} E\Big(\int_0^T\|\hat g_{k, n}(s) - \hat g_k(s)\|_{ L^2(\xi)}^2\,ds \Big) \to 0, %\label{2.27} \end{gather*} and each $\hat u_{0, n}$, $\hat u_{1, n}$ satisfy \eqref{2.8}, and each $\hat f_n, \hat g_{k, n}$ satisfy \eqref{2.9}, \eqref{2.10} with some compact subset $K_n$. It follows from \eqref{2.23} that $\bigl( u_n, \partial_t u_n\bigr)$ corresponding to \break $\hat u_{0,n}, \hat u_{1, n}, \hat f_n, \hat g_{k, n}$ forms a Cauchy sequence in $ L^2\bigl(\Omega; C([0, T]; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3))\bigr)$, and its limit $\bigl(u, u_t\bigr)$ is the solution which also satisfies \eqref{2.23}. In addition, if we have \begin{equation} \sum_{k=1}^{\infty} \bigl\|g_k\bigr\|_{L^{\infty}(\Omega; L^2(0, T; L^2(\mathbb{R}^3)))}^2 < \infty, \quad \text{for all $T>0$}, \label{2.28} \end{equation} then \eqref{2.20} is also valid. We now suppose that there is another solution $\bigl(u^{\ast}, u_t^{\ast}\bigr) \in L^2\bigl(\Omega; C([0, T]; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3))\bigr)$. Then, $\bigl( u - u^{\ast}, u_t - u_t^{\ast}\bigr)$ is a solution of the deterministic wave equation, and the pathwise uniqueness of solution follows directly from the uniqueness result for the deterministic wave equation. We now summarize what has been established. \begin{lemma} \label{lm2.1} Suppose that $(u_0, u_1)$ satisfies \eqref{2.3} and that $f$ and $g_k$'s satisfy \eqref{2.4} and \eqref{2.5}. 
Then, there is a pathwise unique solution of \eqref{2.1} and \eqref{2.2} such that $(u(t), u_t(t))$ is an $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$-valued predictable process and \[ (u, u_t) \in L^2\Big(\Omega; C\bigl([0, T]; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr)\Big) %\label{2.29}
\] for all $T>0$. Furthermore, it satisfies {\rm \eqref{2.23}}, and if {\rm \eqref{2.28}} holds, {\rm \eqref{2.20}} is also valid. \end{lemma}
Next we will consider the semilinear case. \begin{gather} u_{tt} + 2\alpha u_t - \Delta u + \beta u = f(t, x, u) + \sum_{k=1}^{\infty} g_k(t, x, u) \frac{dB_k}{dt}, \label{2.30}\\ u(0) = u_0, \quad u_t(0) = u_1. \label{2.31} \end{gather} Let us suppose that \begin{gather} f(t, x, u) = f_1(t, x) + f_2(u) ;\label{2.32}\\ g_k(t, x, u) = g_{k, 1}(t, x) + \phi(x)g_{k, 2}(u) ;\label{2.33}\\ \phi \in H^1(\mathbb{R}^3) \cap L^{\infty}(\mathbb{R}^3) ; \label{2.34}\\ f_1 \in C([0, \infty); L^2(\mathbb{R}^3)), \quad f_1(t) = f_1(t + L),\quad \text{for all $t \ge 0$}, \label{2.35} \end{gather} where $L$ is a fixed positive number; \begin{equation} f_2(0) =0,\quad \|f_2(v) - f_2(w)\|_{L^2(\mathbb{R}^3)} \le M \| v - w \|_{H^1(\mathbb{R}^3)} \label{2.36} \end{equation} for all $v, w \in H^1(\mathbb{R}^3)$, for some positive constant $M$; \begin{gather} g_{k, 1} \in C([0, \infty); L^2(\mathbb{R}^3)),\quad g_{k, 1}(t) =g_{k,1}(t+L), \quad \text{for all $t \ge 0$} ;\label{2.37} \\ \|g_{k,1}(t)\|_{L^2(\mathbb{R}^3)} \le M_k, \quad \text{for all $t \ge 0$} ; \label{2.38}\\ |g_{k, 2}(y)|\le \tilde M_k , \quad | g_{k, 2}(y) - g_{k, 2}(z)| \le \alpha_k |y - z|, \quad \text{for all $y, z \in \mathbb{R}$} \label{2.39} \end{gather} with \begin{equation} \sum_{k=1}^{\infty}(M_k^2 + \tilde M_k^2 + \alpha_k^2) < \infty. \label{2.40} \end{equation}
We employ the standard iteration scheme. Let us set $u^{(0)} = u_0$ %\label{2.41}
and let $u^{(n)}$ be the solution of \begin{gather*} u_{tt} + 2\alpha u_t - \Delta u + \beta u = f(t, x, u^{(n-1)}) + \sum_{k=1}^{\infty} g_k(t, x, u^{(n-1)}) \frac{dB_k}{dt}, \\ %\label{2.42}\\
u(0) = u_0, \quad u_t(0) = u_1. %\label{2.43}
\end{gather*} Fix any $T>0$. By subtraction, we can obtain an equation satisfied by $u^{(n+1)} - u^{(n)}$. By treating $f(t, x, u^{(n)}) - f(t, x, u^{(n-1)})$ and $g_k(t, x, u^{(n)}) - g_k(t, x, u^{(n-1)})$ as given functions, we interpret $\bigl(u^{(n+1)} - u^{(n)}, u_t^{(n+1)} - u_t^{(n)}\bigr)$ as a solution of the linear problem. By the pathwise uniqueness of solution for the linear problem, we can apply the estimate \eqref{2.23} with the help of \eqref{2.36}, \eqref{2.39} and \eqref{2.40} to derive
\begin{align} & E\Big( \sup_{0 \le s\le t} \Big( \|u_t^{(n+1)}(s) - u_t^{(n)}(s)\|_{L^2(\mathbb{R}^3)}^2 + \|u^{(n+1)}(s) - u^{(n)}(s)\|_{H^1(\mathbb{R}^3)}^2\Big)\Big) \nonumber \\ & \le M \int_0^t E\Big( \| u^{(n)}(s) - u^{(n-1)}(s)\|_{H^1(\mathbb{R}^3)}^2\Big)\,ds \label{2.44} \\ & \quad + M\Big(\sum_{k=1}^{\infty} \alpha_k^2\Big) \int_0^t E\Big( \|u^{(n)}(s) - u^{(n-1)}(s)\|_{L^2(\mathbb{R}^3)}^2\Big)\,ds,\quad \text{for all $0 \le t \le T$}. \nonumber \end{align}
By induction, we have, for all $0 \le t \le T$ and all $n\ge 1$, \[ E\Big( \sup_{0 \le s\le t} \Big( \|u_t^{(n+1)}(s) - u_t^{(n)}(s)\|_{L^2(\mathbb{R}^3)}^2 + \|u^{(n+1)}(s) - u^{(n)}(s)\|_{H^1(\mathbb{R}^3)}^2\Big)\Big) \le K^n t^n /n ! %\label{2.45}
\] for some constant $K$ independent of $n$ and $t$. Thus, the sequence $\{(u^{(n)}, u_t^{(n)})\}$ is a Cauchy sequence in $L^2\Big(\Omega; C\bigl([0, T]; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr)\Big)$.
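For completeness, the induction can be organized as follows. Write
\[
a_n(t) = E\Big( \sup_{0 \le s\le t} \Big( \|u_t^{(n+1)}(s) - u_t^{(n)}(s)\|_{L^2(\mathbb{R}^3)}^2 + \|u^{(n+1)}(s) - u^{(n)}(s)\|_{H^1(\mathbb{R}^3)}^2\Big)\Big).
\]
Since $\|\cdot\|_{L^2(\mathbb{R}^3)} \le \|\cdot\|_{H^1(\mathbb{R}^3)}$, the estimate \eqref{2.44} yields
\[
a_n(t) \le C \int_0^t a_{n-1}(s)\,ds, \quad \text{with } C = M\Big(1 + \sum_{k=1}^{\infty} \alpha_k^2\Big),
\]
and induction on $n$, with $K \ge C$ chosen so large that $a_1(t) \le K t$ on $[0, T]$, gives $a_n(t) \le K^n t^n/n!$. In particular, $\sum_{n=1}^{\infty} \sqrt{a_n(T)} < \infty$, which is the Cauchy property just stated.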
The limit $\bigl(u, u_t\bigr)$ is the solution, and $\bigl(u(t), u_t(t)\bigr)$ is $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$-valued $\mathcal{F}_t$-measurable. Suppose that $\bigl(u^{\ast}, u_t^{\ast}\bigr) \in L^2\bigl(\Omega; C([0, T]; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3))\bigr)$ is another solution. By subtraction, we can obtain an equation satisfied by $v= u - u^{\ast}$. By the same argument as for \eqref{2.44}, we can derive \begin{align*} & E\Big( \sup_{0 \le s\le t} \bigl( \|v_t(s)\|_{L^2(\mathbb{R}^3)}^2 + \|v(s) \|_{H^1(\mathbb{R}^3)}^2\bigr)\Big) \\ %\label{2.46}\\
& \le M \int_0^t E\bigl( \| v(s)\|_{H^1(\mathbb{R}^3)}^2\bigr)\,ds + M\Big(\sum_{k=1}^{\infty} \alpha_k^2\Big) \int_0^t E\bigl( \|v(s)\|_{L^2(\mathbb{R}^3)}^2\bigr)\,ds,\quad \end{align*} for all $0 \le t \le T$. This yields the pathwise uniqueness. Since $T$ can be arbitrarily large, it follows from the pathwise uniqueness that for almost all $\omega \in \Omega, $ \[ \bigl(u, u_t\bigr) \in C\bigl([0, \infty) ; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr). %\label{2.47}
\] We now drop the assumption \eqref{2.36} and consider the case \begin{equation} f_2(v)= -v |v|^{p-1}, \quad 1 \le p <3. \label{2.48} \end{equation} Borrowing a truncation device from \cite{c1}, we set, for a positive integer $N$, \[ f_{2, N}(v) = - \eta_N\bigl(\|v\|_{H^1(\mathbb{R}^3)}\bigr)v |v|^{p-1} %\label{2.49}
\] where $\eta_N(y) = \eta(y/N)$ and $\eta \in C_0^{\infty}(\mathbb{R})$ is such that $0 \le \eta (y) \le 1$, for all $y$, $\eta(y)=1$, for $|y| \le 2$, and $\eta(y)=0$, for $|y|\ge 3$. Then, it follows from \eqref{1.1} that \begin{equation} \bigl\| f_{2, N}(v_1) - f_{2, N}(v_2)\bigr\|_{L^2(\mathbb{R}^3)} \le C_N \|v_1 - v_2\|_{H^1(\mathbb{R}^3)}. \label{2.50} \end{equation} Hence, there is a solution $u_N$ of \eqref{2.30} - \eqref{2.31} with $f= f_1 + f_{2,N}$ such that for each $T>0$, \[ \bigl(u_N, \partial_t u_{N}\bigr) \in L^2\Big(\Omega; C\bigl( [0, T]; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr)\Big), %\label{2.51}
\] and for almost all $\omega \in \Omega$, \[ \bigl(u_N, \partial_t u_{N}\bigr) \in C\bigl( [0, \infty); H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr). %\label{2.52}
\] Now define \begin{equation} \tau_N = \begin{cases} \inf\bigl\{t : \|u_N(t)\|_{H^1(\mathbb{R}^3)} >N\bigr\}, & \text{if }\{t :\|u_N(t)\|_{H^1(\mathbb{R}^3)} > N\}\ne\emptyset, \\ \infty, & \text{if }\{t :\|u_N(t)\|_{H^1(\mathbb{R}^3)} > N\}=\emptyset \end{cases} \label{2.53} \end{equation} so that $u_N$ satisfies the original equation with $f = f_1 + f_{2}$ for $0 \le t \le\tau_N(\omega), $ for almost all $\omega$. For $N_1 < N_2$, we set \[ v(t) = u_{N_1}(t \wedge \tau_{N_1}\wedge \tau_{N_2}) -u_{N_2}(t \wedge \tau_{N_1}\wedge \tau_{N_2}). %\label{2.54}
\] We then note that $v(t)$ is the solution of \[ v_{tt} + 2\alpha v_t - \Delta v + \beta v = F(t, x) + \sum_{k=1}^{\infty} G_k(t, x) \frac{dB_k}{dt} %\label{2.55}
\] on the interval $[0, \tau_{N_1}\wedge \tau_{N_2})$ satisfying $v(0)=0,\quad v_t(0) =0$ %\label{2.56}
where \begin{gather} F(t, x)= f_{2, N_1}\bigl(u_{N_1}(t \wedge \tau_{N_1} \wedge \tau_{N_2})\bigr)- f_{2, N_2}\bigl(u_{N_2}(t \wedge \tau_{N_1} \wedge \tau_{N_2})\bigr), \\ %\label{2.57} \\
G_k(t, x) = \phi(x)\Big(g_{k, 2}\bigl(u_{N_1}(t \wedge \tau_{N_1} \wedge \tau_{N_2})\bigr) - g_{k, 2}\bigl(u_{N_2}(t \wedge \tau_{N_1} \wedge \tau_{N_2})\bigr)\Big).
%\label{2.58}
\end{gather} We also note that \begin{align*} & \Big\|f_{2, N_1}\bigl(u_{N_1}(t \wedge \tau_{N_1} \wedge \tau_{N_2})\bigr)- f_{2, N_2}\bigl(u_{N_2}(t \wedge \tau_{N_1} \wedge \tau_{N_2})\bigr)\Big\|_{L^2(\mathbb{R}^3)} \\ %\label{2.59}\\
& = \Big\|f_{2}\bigl(u_{N_1}(t \wedge \tau_{N_1} \wedge \tau_{N_2})\bigr)- f_{2}\bigl(u_{N_2}(t \wedge \tau_{N_1} \wedge \tau_{N_2})\bigr)\Big\|_{L^2(\mathbb{R}^3)}\\ & \le C N_2^{p-1} \|v(t)\|_{H^1(\mathbb{R}^3)} \end{align*} for all $1 \le N_1 < N_2$ and all $t \ge 0$, for almost all $\omega$. We can treat $v$ as a solution of the linear equation where $F$ and $G_k$'s are given functions. By the pathwise uniqueness of solution of the linear problem, $v$ must satisfy \begin{align*} & E\Big( \sup_{0 \le s\le t} \bigl( \|v_t(s)\|_{L^2(\mathbb{R}^3)}^2 + \|v(s) \|_{H^1(\mathbb{R}^3)}^2\bigr)\Big) \\ %\label{2.60}\\
& \le M \int_0^t E\bigl( \| v(s)\|_{H^1(\mathbb{R}^3)}^2\bigr)\,ds + M\Big(\sum_{k=1}^{\infty} \alpha_k^2\Big) \int_0^t E\bigl( \|v(s)\|_{L^2(\mathbb{R}^3)}^2\bigr)\,ds, \end{align*} for all $t \ge 0$. It follows from the Gronwall inequality that $v(t) \equiv 0$, for all $t \ge 0$, for almost all $\omega$. Thus, $\tau_{N_1} \le \tau_{N_2}$, for almost all $\omega$. Let \[ \tau_{\infty} = \lim_{N \to \infty} \tau_N %\label{2.61}
\] and define \[ u(t) = \lim_{N \to \infty} u_N(t), \quad \text{for $0 \le t < \tau_{\infty}$}. %\label{2.62}
\] Apparently, $u(t \wedge \tau_N) = u_N(t \wedge \tau_N)$, for all $t \ge 0$ and all $N \ge 1$, and consequently, \[ \bigl( u, u_t\bigr) \in C\bigl([0, \tau_{\infty}); H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr), \quad \text{for almost all $\omega$}. %\label{2.63}
\] On account of \eqref{2.33} - \eqref{2.34} and \eqref{2.37} - \eqref{2.40}, we can use \eqref{2.20} so that for each $N \ge 1$, \begin{align*} & \|u_t(t\wedge \tau_N)\|_{L^2(\mathbb{R}^3)}^2 + \|\nabla u(t\wedge \tau_N)\|_{L^2(\mathbb{R}^3)}^2 + (\beta + 2\epsilon \alpha) \|u(t\wedge \tau_N)\|_{L^2(\mathbb{R}^3)}^2 \\ %\label{2.64}\\
& + 2 \epsilon \langle u_t(t\wedge \tau_N), u(t\wedge \tau_N)\rangle \\ & = \|u_1\|_{L^2(\mathbb{R}^3)}^2 + \|\nabla u_0\|_{L^2(\mathbb{R}^3)}^2 + (\beta + 2\epsilon \alpha)\|u_0\|_{L^2(\mathbb{R}^3)}^2 + 2\epsilon \langle u_1, u_0 \rangle \\ &\quad -(4\alpha - 2\epsilon)\int_0^{t\wedge \tau_N} \!\!\!\|u_t(s)\|_{L^2(\mathbb{R}^3)}^2\,ds - 2\epsilon \int_0^{t\wedge \tau_N} \!\! \bigl(\|\nabla u(s)\|_{L^2(\mathbb{R}^3)}^2 + \beta \|u(s)\|_{L^2(\mathbb{R}^3)}^2\bigr)ds\\ &\quad + 2\int_0^{t\wedge \tau_N} \langle f_1(s) + f_{2,N}(u(s)), u_t(s) +\epsilon u(s)\rangle ds\\ &\quad +\sum_{k=1}^{\infty}\int_0^{t\wedge \tau_N}\bigl\|g_{k,1}(s) + \phi g_{k, 2}(u(s))\bigr\|_{L^2(\mathbb{R}^3)}^2\,ds\\ &\quad + \sum_{k=1}^{\infty}\int_0^{t\wedge \tau_N} 2 \langle g_{k,1}(s) + \phi g_{k, 2}(u(s)) , u_t(s) + \epsilon u(s) \rangle dB_k(s) \end{align*} for all $t \ge 0$, for almost all $\omega$, where $\langle\cdot , \cdot\rangle$ is the inner product in $L^2(\mathbb{R}^3)$. We write \begin{align*} Q(t) & = \|u_t(t)\|_{L^2(\mathbb{R}^3)}^2 + \|\nabla u(t)\|_{L^2(\mathbb{R}^3)}^2 + (\beta+ 2\epsilon \alpha) \|u(t)\|_{L^2(\mathbb{R}^3)}^2 \\%\label{2.65}\\
& + 2\epsilon \langle u_t(t), u(t)\rangle + \frac{2}{p+1} \int_{\mathbb{R}^3} |u(t)|^{p+1} dx.
\end{align*} We then have \begin{align} Q(t\wedge \tau_N ) =& Q(0) -(4\alpha - 2\epsilon)\int_0^{t\wedge \tau_N} \|u_t(s)\|_{L^2(\mathbb{R}^3)}^2\,ds \nonumber \\ & - 2{\epsilon}\int_0^{t\wedge \tau_N} \bigl(\|\nabla u(s)\|_{L^2(\mathbb{R}^3)}^2 + \beta \|u(s)\|_{L^2(\mathbb{R}^3)}^2\bigr)ds \nonumber \\ & - 2 \epsilon\int_0^{t\wedge \tau_N}\int_{\mathbb{R}^3} |u(s)|^{p+1}\,dx\,ds + 2\int_0^{t\wedge \tau_N} \langle f_1(s), u_t(s) +\epsilon u(s)\rangle \,ds \nonumber\\ & + \sum_{k=1}^{\infty}\int_0^{t\wedge \tau_N}\bigl\|g_{k,1}(s) + \phi g_{k, 2}(u(s))\bigr\|_{L^2(\mathbb{R}^3)}^2\,ds \label{2.66} \\ &+ \sum_{k=1}^{\infty}\int_0^{t\wedge \tau_N} 2 \langle g_{k,1}(s) + \phi g_{k,2}(u(s)), u_t(s) + \epsilon u(s) \rangle dB_k(s) \nonumber \end{align} for all $t \ge 0$, for almost all $\omega \in \Omega$. In addition to \eqref{2.3}, we assume \[ u_0 \in L^{p+1}\bigl(\Omega; L^{p+1}(\mathbb{R}^3)\bigr) %\label{2.67} \] so that $E\bigl(Q(0)\bigr) < \infty$. %\label{2.68} By the same argument as for \eqref{2.23}, we can derive from \eqref{2.66} \begin{align*} E\Big( \sup_{0 \le s \le t} Q(s \wedge \tau_N)\Big) &\le E\bigl(Q(0)\bigr) + M t E \Big(\int_0^{t\wedge \tau_N} \|f_1(s)\|_{L^2(\mathbb{R}^3)}^2 \,ds\Big) \\ %\label{2.69}\\ &\quad + M t \sum_{k=1}^{\infty} \Big( M_k^2 + \tilde M_k^2 \|\phi\|_{L^2(\mathbb{R}^3)}^2\Big) \end{align*} for all $t \ge 0 $ and all $N \ge 1$, for some positive constant $M$. Thus, for each $T>0$, \[ E\Big(\sup_{0 \le t\le T}Q(t\wedge \tau_N)\Big) \le M_T, \quad \text{for all $N\ge 1$}, %\label{2.70} \] where $M_T$ is a positive constant independent of $N$. By the same argument as in \cite{c1}, this implies that $\tau_N \uparrow \infty $ as $N \to \infty$, for almost all $\omega$. Using this fact and Fatou's lemma, we pass $N \to \infty$ to arrive at \begin{equation} E\big( \sup_{0 \le t \le T}Q(t)\big) \le M_T. \label{2.71} \end{equation} Hence, for each $T>0$, we have obtained a solution \[ \bigl(u, u_t\bigr) \in L^2\Big(\Omega; C\bigl([0, T]; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr)\Big). %\label{2.72} \] Suppose that there is another solution \[ \bigl(u^{\ast}, u_t^{\ast}\bigr) \in L^2\Big(\Omega; C\bigl([0, T]; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr)\Big). %\label{2.73} \] We define $\tau_N^{\ast}$ by \eqref{2.53} in terms of $u^{\ast}$. Then, by means of \begin{align*} & \bigl\|f_{2}\bigl(u(t \wedge \tau_{N} \wedge \tau_{N}^{\ast})\bigr)- f_{2}\bigl(u^{\ast}(t \wedge \tau_{N} \wedge \tau_{N}^{\ast})\bigr)\bigl\|_{L^2(\mathbb{R}^3)} \\ %\label{2.74}\\ & \le C N^{p-1} \bigl\|u(t \wedge \tau_{N} \wedge \tau_{N}^{\ast})- u^{\ast}(t \wedge \tau_{N} \wedge \tau_{N}^{\ast})\bigl\|_{H^1(\mathbb{R}^3)}, \end{align*} we can derive \begin{align*} & E\Big(\sup_{0 \le t \le T}\Big( \bigl\|u(t \wedge \tau_{N} \wedge \tau_{N}^{\ast})- u^{\ast}(t \wedge \tau_{N} \wedge \tau_{N}^{\ast})\bigl\|_{H^1(\mathbb{R}^3)}^2 \\ %\label{2.75}\\ & \quad + \bigl\|u_t(t \wedge \tau_{N} \wedge \tau_{N}^{\ast})- u_t^{\ast}(t \wedge \tau_{N} \wedge \tau_{N}^{\ast})\bigl\|_{L^2(\mathbb{R}^3)}^2\Big)\Big) =0 \end{align*} for all $N\ge 1$. Since $\tau_N^{\ast}\wedge T \uparrow T$, as $N \to \infty$, for almost all $\omega$, we have \[ \bigl(u, u_t\bigr) = \bigl(u^{\ast}, u_t^{\ast}\bigr) \quad \text{in $ C\bigl([0, T]; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr)$} %\label{2.76} \] for almost all $\omega$. This proves the pathwise uniqueness. 
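Let us also record one way to see the fact, used above, that $\tau_N \uparrow \infty$ for almost all $\omega$. Since $0 < \epsilon < \alpha$, we have $2\epsilon |\langle u_t, u\rangle| \le \frac12 \|u_t\|_{L^2(\mathbb{R}^3)}^2 + 2\epsilon^2 \|u\|_{L^2(\mathbb{R}^3)}^2$, so that $Q(t) \ge c\bigl( \|u_t(t)\|_{L^2(\mathbb{R}^3)}^2 + \|u(t)\|_{H^1(\mathbb{R}^3)}^2\bigr)$ for some constant $c = c(\epsilon, \alpha, \beta) >0$. Hence, on the set $\{\tau_N \le T\}$ we have $\sup_{0 \le t \le T} Q(t \wedge \tau_N) \ge Q(\tau_N) \ge c N^2$, and by Chebyshev's inequality
\[
P\bigl\{ \tau_N \le T \bigr\} \le P\Big\{ \sup_{0 \le t \le T} Q(t \wedge \tau_N) \ge c N^2 \Big\} \le \frac{M_T}{c N^2}, \quad \text{for all $N \ge 1$}.
\]
Since $\tau_N \le \tau_{\infty}$, it follows that $P\{\tau_{\infty} \le T\} = 0$ for every $T >0$, that is, $\tau_{\infty} = \infty$ for almost all $\omega$.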
Next by passing $N \to \infty$ in \eqref{2.66}, we find \begin{align*} Q(t ) =& Q(0) -(4\alpha - 2\epsilon)\int_0^{t} \|u_t(s)\|_{L^2(\mathbb{R}^3)}^2\,ds\\ %\label{2.77}\\ & - 2{\epsilon}\int_0^{t} \bigl(\|\nabla u(s)\|_{L^2(\mathbb{R}^3)}^2 + \beta \|u(s)\|_{L^2(\mathbb{R}^3)}^2\bigr)ds\\ & - 2{\epsilon}\int_0^{t}\int_{\mathbb{R}^3} |u(s)|^{p+1}\,dx\,ds\\ &+ 2\int_0^{t} \langle f_1(s), u_t(s) +\epsilon u(s)\rangle\,ds + \sum_{k=1}^{\infty}\int_0^{t}\|g_{k,1}(s) + \phi g_{k, 2}(u(s))\|_{L^2(\mathbb{R}^3)}^2\,ds\\ &+ \sum_{k=1}^{\infty}\int_0^{t} 2 \langle g_{k,1}(s) +\phi g_{k,2}(u(s)), u_t(s) + \epsilon u(s)\rangle dB_k(s) \end{align*} for all $t \ge 0, $for almost all $\omega \in \Omega$. Thus, $Q(t)$ is a solution of \begin{align} \dfrac{d Q(t)}{dt} =& -(4\alpha - 2\epsilon) \|u_t(t)\|_{L^2(\mathbb{R}^3)}^2 - 2{\epsilon} \bigl(\|\nabla u(t)\|_{L^2(\mathbb{R}^3)}^2 + \beta \|u(t)\|_{L^2(\mathbb{R}^3)}^2\bigr) \nonumber\\ & - 2{\epsilon} \int_{\mathbb{R}^3} |u(t)|^{p+1}\,dx + 2 \langle f_1(t), u_t(t) +\epsilon u(t)\rangle \nonumber\\ & + \sum_{k=1}^{\infty}\big\|g_{k,1}(t) + \phi g_{k, 2}(u(t))\bigr\|_{L^2(\mathbb{R}^3)}^2 \label{2.78}\\ &+ \sum_{k=1}^{\infty} 2 \langle g_{k,1}(t) +\phi g_{k,2}(u(t)), u_t(t) + \epsilon u(t)\rangle \dfrac{dB_k(t)}{dt}. \nonumber \end{align} Recalling that $\epsilon$ was chosen by \eqref{2.21}, we can choose $\delta = \delta(\epsilon, \alpha, \beta) > 0$, $\kappa =\kappa(\epsilon,\alpha, \beta) > 0$ %\label{2.79} so that \begin{align} & -(4\alpha - 2\epsilon) \|u_t(t)\|_{L^2(\mathbb{R}^3)}^2 - 2{\epsilon} \bigl(\|\nabla u(t)\|_{L^2(\mathbb{R}^3)}^2 + \beta \|u(t)\|_{L^2(\mathbb{R}^3)}^2\bigr) \nonumber \\ & - 2{\epsilon} \int_{\mathbb{R}^3} |u(t)|^{p+1}\,dx + 2 \langle f_1(t), u_t(t) +\epsilon u(t) \rangle \label{2.80}\\ & \le -\delta Q(t) + \kappa \|f_1(t)\|_{L^2(\mathbb{R}^3)}^2 \nonumber \end{align} for all $t \ge 0$, for almost all $\omega$. \par Let $Y(t)$ be the solution of the following initial value problem. \begin{align} \frac{d Y(t)}{dt} = & - \delta Y(t) + \kappa \|f_1(t)\|_{L^2(\mathbb{R}^3)}^2 + \sum_{k=1}^{\infty}\bigl\|g_{k,1}(t) + \phi g_{k, 2}(u(t))\bigr\|_{L^2(\mathbb{R}^3)}^2 \nonumber\\ &+ \sum_{k=1}^{\infty} 2 \langle g_{k,1}(t) + \phi g_{k,2}(u(t)), u_t(t) + \epsilon u(t) \rangle \frac{dB_k(t)}{dt}, \label{2.81} \end{align} \begin{equation} Y(0) =Q(0). \label{2.82} \end{equation} It follows from \eqref{2.78} and \eqref{2.81} that $Q(t) - Y(t)$ is continuously differentiable in $t$ for almost all $\omega$, and, by \eqref{2.80} and \eqref{2.81}, \[ \frac{d}{dt} \bigl(Q(t) - Y(t)\bigr) \le -\delta \bigl(Q(t) - Y(t)\bigr) %\label{2.83} \] for all $t\ge 0$, for almost all $\omega$. Hence, by \eqref{2.82}, $Q(t) \le Y(t)$ for all $t$, for almost all $\omega$. 
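Indeed, setting $W(t) = Q(t) - Y(t)$, we have $W(0) =0$ by \eqref{2.82}, and the above differential inequality gives
\[
\frac{d}{dt}\Bigl( e^{\delta t} W(t)\Bigr) = e^{\delta t}\Bigl( \frac{dW}{dt}(t) + \delta W(t)\Bigr) \le 0,
\]
so that $e^{\delta t} W(t) \le W(0) =0$, that is, $Q(t) \le Y(t)$, for all $t \ge 0$, for almost all $\omega$.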
Since $Y(t)$ can be given by \begin{align*} Y(t) =& Q(0)e^{-\delta t} + \kappa \int_0^{t} e^{-\delta(t-s)} \|f_1(s)\|_{L^2(\mathbb{R}^3)}^2\,ds \\ %\label{2.84}\\
&+\sum_{k=1}^{\infty}\int_0^{t} e^{-\delta(t-s)}\bigl\|g_{k,1}(s) + \phi g_{k, 2}(u(s))\bigr\|_{L^2(\mathbb{R}^3)}^2\,ds\\ &+ \sum_{k=1}^{\infty}\int_0^{t} e^{-\delta(t-s)} 2 \langle g_{k,1}(s) + \phi g_{k,2}(u(s)), u_t(s) + \epsilon u(s)\rangle dB_k(s), \end{align*} we have \begin{align} Q(t) \le& Q(0)e^{-\delta t} + \kappa \int_0^{t} e^{-\delta(t-s)} \|f_1(s)\|_{L^2(\mathbb{R}^3)}^2\,ds \nonumber \\ &+ \sum_{k=1}^{\infty}\int_0^{t} e^{-\delta(t-s)}\bigl\|g_{k,1}(s) + \phi g_{k, 2}(u(s)) \bigr\|_{L^2(\mathbb{R}^3)}^2\,ds \label{2.85}\\ &+ \sum_{k=1}^{\infty}\int_0^{t} e^{-\delta(t-s)} 2 \langle g_{k,1}(s) + \phi g_{k,2}(u(s)), u_t(s) + \epsilon u(s)\rangle dB_k(s)\nonumber \end{align} for all $t \ge 0$, for almost all $\omega$. By means of \eqref{2.34}, \eqref{2.35}, \eqref{2.37} - \eqref{2.40} and \[ E\Big( \sum_{k=1}^{\infty}\int_0^{t} e^{-\delta (t - s)} 2 \langle g_{k,1}(s) + \phi g_{k,2}(u(s)), u_t(s) + \epsilon u(s)\rangle dB_k(s)\Big) = 0, %\label{2.86}
\] we can derive from \eqref{2.85} that \begin{equation} E\bigl(Q(t) \bigr) \le M, \quad \text{for all $t \ge 0$}. \label{2.87} \end{equation} By virtue of \eqref{2.21}, this implies that \begin{equation} E\Big( \bigl\|u_t(t)\bigr\|_{L^2(\mathbb{R}^3)}^2 + \bigl\|u(t)\bigr\|_{H^1(\mathbb{R}^3)}^2\Big) \le M, \quad \text{for all $t \ge 0$}. \label{2.88} \end{equation} For later use, we can further derive \begin{equation} E\bigl( |Q(t)|^2\bigr) \le M, \quad \text{for all $t \ge 0, $} \label{2.89} \end{equation} provided $(u_0, u_1) \in L^4\bigl(\Omega; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr) $ and $u_0 \in L^{2p + 2}\bigl(\Omega; L^{p+1}(\mathbb{R}^3)\bigr)$. This follows from \eqref{2.85} and \begin{align*} & E\Big( \Big| \sum_{k=1}^{\infty}\int_0^{t} e^{-\delta (t-s)} 2 \langle g_{k,1}(s) + \phi g_{k,2}(u(s)), u_t(s) + \epsilon u(s)\rangle dB_k(s)\Big|^2 \Big) \\%\label{2.90}\\
& \le M E\Big( \sum_{k=1}^{\infty} \int_0^t e^{-2 \delta (t-s)} \bigl(M_k^2 + \tilde M_k^2\|\phi\|_{L^2(\mathbb{R}^3)}^2\bigr) \bigl( \|u_t\|_{L^2(\mathbb{R}^3)}^2 + \epsilon^2 \|u\|_{L^2(\mathbb{R}^3)}^2\bigr)ds\Big)\\ & \le M, \quad \text{for all $t \ge 0$}, \end{align*} where \eqref{2.88} has been used. We now state the existence result we have obtained.
\begin{lemma} \label{lm2.2} Suppose that $(u_0, u_1)$ is the same as in Lemma \ref{lm2.1} with the additional assumption $u_0 \in L^{p+1}\bigl(\Omega; L^{p+1}(\mathbb{R}^3)\bigr) $ and that $f, g_k$ satisfy the conditions \eqref{2.32} - \eqref{2.35}, \eqref{2.37} - \eqref{2.40} and \eqref{2.48}. Then, there is a pathwise unique solution of {\rm \eqref{2.30}} and {\rm \eqref{2.31}} such that $\bigl(u(t), u_t(t)\bigr)$ is an $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$-valued predictable process and \[ (u, u_t) \in L^2\Big(\Omega; C\bigl([0, T]; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr)\Big) %\label{2.91}
\] for all $T>0. $ Furthermore, {\rm \eqref{2.87}} is valid, and if $(u_0, u_1) \in L^4\bigl(\Omega; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr) $ and $u_0 \in L^{2p +2}\bigl(\Omega; L^{p+1}(\mathbb{R}^3)\bigr)$, then {\rm \eqref{2.89}} is also valid. \end{lemma}
\section{Tightness of probability laws}
Let $\bigl(u, u_t\bigr)$ be the solution of \eqref{2.30} - \eqref{2.31} in Lemma \ref{lm2.2}. In this section, the goal is to establish tightness of the probability laws for $\bigl(u,u_t\bigr)$.
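Recall that a family $\{\mu_t\}_{t \ge 0}$ of Borel probability measures on $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$ is called tight if for every $\epsilon >0$ there is a compact subset $\mathcal{K}$ of $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$ such that
\[
\mu_t\bigl( \mathcal{K} \bigr) \ge 1 - \epsilon, \quad \text{for all $t \ge 0$}.
\]
It is a compact set of this kind that is constructed at the end of this section, with $\mu_t$ the probability distribution of $\bigl(u(t), u_t(t)\bigr)$.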
We will present basic estimates which are necessary for tightness of the probability distributions of a solution. For this, we suppose that the initial value $\bigl(u_0, u_1\bigr)$ belongs to $ L^4\bigl(\Omega; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr) $ and $u_0 \in L^{2p+2}\bigl(\Omega; L^{p+1}(\mathbb{R}^3)\bigr)$. We also retain all other conditions in Lemma \ref{lm2.2} so that a unique solution $\bigl(u, u_t\bigr) \in L^4\Big(\Omega; C\bigl([0, T]; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr)\Big)$ may exist for all $T >0$.
\subsection*{Estimate I} We choose a function $\chi_R$ such that $\chi_R(x)= \chi(x/R)$, $\chi \in C^{\infty}(\mathbb{R}^3)$, and \[ \chi(x)=\begin{cases} 1,& |x|\le 1\\ 0, & |x| \ge 2 \end{cases}, \quad 0 \le \chi(x) \le 1, \text{ for all $x \in \mathbb{R}^3$}. %\label{3.1}
\] We set $ v = \bigl(1 - \chi_R\bigr) u. $ %\label{3.2}
Then, $v$ is the solution of \begin{gather*} v_{tt} + 2\alpha v_t - \Delta v + \beta v = F + \sum_{k=1}^{\infty} G_k \frac{dB_k}{dt} \\ %\label{3.3}\\
v(0) =v_0 = \bigl( 1 - \chi_R\bigr) u_0 , \quad v_t(0) =v_1 = \bigl(1 - \chi_R\bigr)u_1 %\label{3.4}
\end{gather*} where \begin{gather*} F = \bigl(1 - \chi_R\bigr) f + 2 \nabla \chi_R \cdot \nabla u + u \Delta \chi_R \\%\label{3.5} \\
G_k = \bigl(1 - \chi_R\bigr) g_k. %\label{3.6}
\end{gather*} By the uniqueness of the solution, $v$ can be obtained as a solution of the linear problem of the form \eqref{2.1}--\eqref{2.2}. Hence, we can apply \eqref{2.20} to $v$ so that \begin{align} & \|v_t(t)\|_{L^2(\mathbb{R}^3)}^2 + \|\nabla v(t)\|_{L^2(\mathbb{R}^3)}^2 + (\beta + 2\epsilon \alpha)\|v(t)\|_{L^2(\mathbb{R}^3)}^2 + 2\epsilon \langle v_t(t), v(t)\rangle \nonumber \\ & = \|v_1\|_{L^2(\mathbb{R}^3)}^2 + \|\nabla v_0\|_{L^2(\mathbb{R}^3)}^2 + (\beta + 2\epsilon \alpha)\|v_0\|_{L^2(\mathbb{R}^3)}^2 + 2\epsilon \langle v_1, v_0\rangle \nonumber\\ &\quad -(4\alpha - 2\epsilon)\int_0^t \|v_t(s)\|_{L^2(\mathbb{R}^3)}^2\,ds - 2\epsilon \int_0^t \bigl(\|\nabla v(s)\|_{L^2(\mathbb{R}^3)}^2 + \beta \|v(s)\|_{L^2(\mathbb{R}^3)}^2\bigr)ds \nonumber \\ &\quad + \int_0^t 2 \langle F(s), v_t(s) + \epsilon v(s)\rangle ds + \sum_{k=1}^{\infty} \int_0^t \|G_k(s)\|_{L^2(\mathbb{R}^3)}^2\,ds \label{3.7}\\ &\quad + \sum_{k=1}^{\infty} \int_0^t 2 \langle G_k(s), v_t(s) + \epsilon v(s)\rangle dB_k(s) \nonumber \end{align} for all $t \ge 0$, for almost all $\omega$, where $\langle\cdot ,\cdot\rangle$ is the inner product in $ L^2(\mathbb{R}^3)$.
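We note in passing that the extra terms in $F$ arise from the commutator of $\Delta$ with multiplication by $1 - \chi_R$, and that they are small for large $R$:
\begin{gather*}
-\Delta\bigl((1-\chi_R)u\bigr) = -(1-\chi_R)\Delta u + 2 \nabla \chi_R \cdot \nabla u + u \Delta \chi_R, \\
\bigl\| 2 \nabla \chi_R \cdot \nabla u + u \Delta \chi_R \bigr\|_{L^2(\mathbb{R}^3)} \le \frac{C}{R}\bigl( \|\nabla u\|_{L^2(\mathbb{R}^3)} + \|u\|_{L^2(\mathbb{R}^3)}\bigr), \quad \text{for all $R \ge 1$},
\end{gather*}
since $\nabla \chi_R = R^{-1}(\nabla \chi)(\cdot/R)$ and $\Delta \chi_R = R^{-2}(\Delta \chi)(\cdot/R)$. These are the terms responsible for the factor $M/R^2$ in \eqref{3.9} below.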
We write \begin{align*} Q_R(t) =& \bigl\|\bigl(1 - \chi_R\bigr) u_t(t)\bigr\|_{L^2(\mathbb{R}^3)}^2 + \bigl\| \nabla\bigl((1-\chi_R) u(t)\bigr)\bigr\|_{L^2(\mathbb{R}^3)}^2 \\%\label{ 3.8}\\
& + (\beta +2\epsilon \alpha) \bigl\|\bigl(1 -\chi_R\bigr) u(t)\bigr\|_{L^2(\mathbb{R}^3)}^2 + 2\epsilon\langle (1- \chi_R) u_t(t), (1-\chi_R) u(t)\rangle \\ & +\frac{2}{p+1} \int_{\mathbb{R}^3} \bigl(1 -\chi_R\bigr)^2 |u(t)|^{p+1} dx. \end{align*} Then, by the same argument as for \eqref{2.85}, we derive from \eqref{3.7} that \begin{align} Q_R(t) \le& Q_R(0)e^{-\delta t} + M\int_0^{t} e^{-\delta(t-s)} \bigl\|(1- \chi_R) f_1(s)\bigr\|_{L^2(\mathbb{R}^3)}^2\,ds\nonumber \\ & +\frac{M}{R^2} \int_0^t e^{-\delta(t-s)} \bigl( \|\nabla u\|_{L^2(\mathbb{R}^3)}^2 + \|u\|_{L^2(\mathbb{R}^3)}^2 \bigr)\,ds \nonumber\\ &+ \sum_{k=1}^{\infty}\int_0^{t} e^{-\delta(t-s)}\bigl\|\bigl(1- \chi_R\bigr) \bigl( g_{k,1}(s) + \phi g_{k, 2}(u(s))\bigr)\bigr\|_{L^2(\mathbb{R}^3)}^2\,ds \label{3.9}\\ &+ \sum_{k=1}^{\infty}\int_0^{t} e^{-\delta(t-s)} 2 \big\langle (1- \chi_R)\bigl( g_{k,1}(s) +\phi g_{k,2}(u(s))\bigr),\nonumber \\ &\qquad (1-\chi_R)\bigl( u_t(s) + \epsilon u(s)\bigr) \big\rangle dB_k(s) \nonumber \end{align} for all $R \ge 1 $ and all $t \ge 0$, for almost all $\omega$. Here $\delta$ and $M$ are positive constants independent of $R$, $t$ and $\omega$. It follows from \eqref{2.35} and \eqref{2.37} that $\bigl\{f_1(t)\bigr\}_{t \ge 0}$ and $\bigl\{g_{k,1}(t)\bigr\}_{t \ge 0}$ are compact subsets of $L^2(\mathbb{R}^3)$. We also use \eqref{2.34}, \eqref{2.38} - \eqref{2.40} and \eqref{2.88} to derive from \eqref{3.9} that $E\bigl( Q_R(t) \bigr) \to 0$, as $R \to \infty$, uniformly in $t$. %\label{3.10}
\subsection*{Estimate II} It follows from \eqref{2.11} and Lemma \ref{lm1.3} that for $j=1, 2, 3$, \begin{align} i\xi_j \hat u(t, \xi) =& e^{-\alpha t}\cos (\sqrt{|\xi|^2 + \gamma} t) i \xi_j \hat u_0(\xi) \nonumber \\ & + e^{-\alpha t} \dfrac{\sin (\sqrt{|\xi|^2 +\gamma} t)}{\sqrt{ |\xi|^2 +\gamma}} i\xi_j \hat u_1(\xi) \nonumber \\ & + \int_0^t e^{-\alpha (t -s)} \dfrac{\sin \bigl(\sqrt{|\xi|^2 +\gamma} (t-s)\bigr)}{\sqrt{|\xi|^2 + \gamma}} i\xi_j \hat f(s) ds \label{3.11}\\ & + \sum_{k=1}^{\infty} \int_0^t e^{-\alpha (t -s)} \dfrac{\sin \bigl(\sqrt{|\xi|^2 +\gamma} (t-s)\bigr)}{\sqrt{|\xi|^2 + \gamma}} i\xi_j \hat g_k(s)\,dB_k(s) \nonumber \end{align} where $ i = \sqrt{-1}$, $\xi=(\xi_1, \xi_2, \xi_3)$, and \begin{align} \hat u_t(t, \xi) = & -\alpha e^{-\alpha t}\cos (\sqrt{|\xi|^2 + \gamma} t) \hat u_0(\xi) -\alpha e^{-\alpha t} \dfrac{\sin (\sqrt{|\xi|^2 +\gamma} t)}{\sqrt{ |\xi|^2 +\gamma}} \hat u_1(\xi) \nonumber\\ & - e^{-\alpha t}\sqrt{|\xi|^2 + \gamma} \sin (\sqrt{|\xi|^2 + \gamma} t) \hat u_0(\xi) + e^{-\alpha t} {\cos (\sqrt{|\xi|^2 +\gamma} t)} \hat u_1(\xi) \nonumber \\ & -\alpha \int_0^t e^{-\alpha (t -s)} \dfrac{\sin \bigl(\sqrt{|\xi|^2 +\gamma} (t-s)\bigr)}{\sqrt{|\xi|^2 + \gamma}} \hat f(s) ds \nonumber \\ & + \int_0^t e^{-\alpha (t -s)} {\cos \bigl(\sqrt{|\xi|^2 +\gamma} (t-s)\bigr)} \hat f(s)\,ds \label{3.12}\\ & -\alpha \sum_{k=1}^{\infty} \int_0^t e^{-\alpha (t -s)} \dfrac{\sin \bigl(\sqrt{|\xi|^2 +\gamma} (t-s)\bigr)}{\sqrt{|\xi|^2 + \gamma}} \hat g_k(s) dB_k(s) \nonumber\\ & + \sum_{k=1}^{\infty} \int_0^t e^{-\alpha (t -s)} {\cos \bigl(\sqrt{|\xi|^2 +\gamma} (t-s)\bigr)} \hat g_k(s)\,dB_k(s).
\nonumber \end{align} Then, we define \[ \hat I_1 = \int_0^t e^{-\alpha(t-s)} \cos \bigl(\sqrt{|\xi|^2 +\gamma} (t-s)\bigr) \hat f_2(s)\,ds %\label{3.13} \] where $\hat f_2(s)$ is the Fourier transform of $f_2(u(s))$. It follows from \eqref{1.5}, \eqref{2.48} and \eqref{2.89} that for some positive constant $M$, \[ E\bigl( \|f_2(u(t))\|_{H^{q}(\mathbb{R}^3)}\bigr) \le M,\quad \text{for all $t\ge 0$}, %\label{3.14} \] where $q = (3-p)/2$. We also define $\Xi_r(\xi) =\Xi(\xi/r)$, where $\Xi \in C^{\infty}(\mathbb{R}^3), 0 \le \Xi(\xi) \le 1$, for all $\xi\in \mathbb{R}^3$, and \[ \Xi(\xi) = \begin{cases} 1, & |\xi|\le 1 \\ 0, & |\xi|\ge 2.\end{cases} %\label{3.15} \] We use different symbols to emphasize that $\chi_R$ is defined in the space domain and $\Xi_r$ is defined in the frequency domain. Then, by writing $I_1 = F_{\xi}^{-1}\bigl(\hat I_1\bigr)$, we have \begin{align*} E\bigl( \|\chi_R I_1(t)\|_{H^{q}(\mathbb{R}^3)}\bigr) & \le M E\bigl(\|I_1(t)\|_{H^q(\mathbb{R}^3)}\bigr) \\ %\label{3.16}\\ & \le ME\Big(\int_0^t e^{-\alpha(t-s)} \|f_2(u(s))\|_{H^{q}(\mathbb{R}^3)} ds\Big) \le M \end{align*} for all $R\ge 1 $ and all $t \ge 0$, for some positive constants $M. $ Hence, \begin{align} E\Big(\bigl\|(1 - \Xi_{r}) \bigl( \hat \chi_R \ast \hat I_1(t)\bigr)\bigr\|_{ L^2(\xi)}\Big) & \le E \Big( \int_{|\xi|\ge r} \bigl| \hat \chi_R \ast \hat I_1(t)\bigr|^2\,d\xi\Big)^{1/2} \nonumber \\ & \le \frac{1}{r^q} E\Big(\bigl\|\chi_R I_1(t)\bigr\|_{H^q(\mathbb{R}^3)}\Big) \to 0 \label{3.17} \end{align} as $r \to \infty$, uniformly in $t\ge 0$. In the meantime, by using \begin{equation} \frac{\partial}{\partial \xi}\Big( \hat \chi_R \ast \hat I_1(t)\Big) = \Big(\frac{\partial}{\partial \xi} \hat \chi_R\Big)\ast \hat I_1(t), \label{3.18} \end{equation} we have \begin{align} E\Big(\bigl\|\Xi_{r}\bigl(\hat \chi_R \ast \hat I_1(t)\bigr)\bigr\|_{H^1(\xi)}\Big) & \le M(R, r) E\bigl( \|\hat I_1(t)\|_{L^2(\xi)}\bigr) \nonumber \\ & \le M(R, r) \quad \text{for all $t \ge 0$}, \label{3.19} \end{align} where $M(R,r)$ denotes positive constants depending on $R$ and $r$. Next we consider \[ \hat I_2 = \int_0^t e^{-\alpha(t-s)} \cos \bigl(\sqrt{|\xi|^2 +\gamma} (t-s)\bigr) \hat f_1(s)\,ds. %\label{3.20} \] As above, \begin{equation} \Big\|\Xi_{4r} \bigl(\hat \chi_R \ast \hat I_2(t)\bigr)\Big\|_{H^1(\xi)} \le M(R, r) \|\hat I_2(t)\|_{L^2(\xi)} \le M(R, r), \quad \text{for all $t \ge 0$}, \label{3.21} \end{equation} and, by the triangle inequality, \begin{align*} \Big\|(1- \Xi_{4r})\bigl(\hat \chi_R \ast \hat I_2(t)\bigr)\Big\|_{ L^2(\xi)} & \le \Big\|(1- \Xi_{4r})\Big(\hat \chi_R \ast \bigl(\Xi_{r} \hat I_2(t)\bigr)\Big)\Big\|_{ L^2(\xi)}\\ & \quad + \Big\|(1- \Xi_{4r})\Big(\hat \chi_R \ast \bigl((1- \Xi_{r}) \hat I_2(t)\bigr)\Big)\Big\|_{ L^2(\xi)}. %\label{3.22}\\ \end{align*} The last term can be estimated as follows. \begin{align} \Big\|(1- \Xi_{4r})\Big(\hat \chi_R \ast \bigl((1 -\Xi_{r})\hat I_2(t)\bigr)\Big)\Big\|_{ L^2(\xi)} & \le \|\hat \chi_R\|_{L^1(\xi)} \bigl\|(1-\Xi_{r}) \hat I_2(t)\bigr\|_{ L^2(\xi)} \nonumber\\ & \le M(R) \bigl\|(1-\Xi_{r}) \hat I_2(t)\bigr\|_{ L^2(\xi)}. \label{3.23} \end{align} Since the set $\bigl\{\hat f_1(s)\bigr\}_{s \ge 0}$ is compact in $L^2(\xi)$, we find that $\bigl\|(1 -\Xi_r) \hat f_1(s)\bigr\|_{L^2(\xi)} \to 0$, as $ r\to \infty$, uniformly in $s$, %\label{3.24} which yields \begin{equation} \bigl\|(1 -\Xi_{r}) \hat I_2(t)\bigr\|_{ L^2(\xi)} \to 0, \quad \text{as $r \to \infty$, uniformly in $t$}. 
\label{3.25} \end{equation} Next we see that \begin{align*} &\Big|\int_{\mathbb{R}^3} \bigl(1-\Xi_{4r}(\xi)\bigr) \hat \chi_R(\xi -\eta)\Xi_{r}(\eta)\hat I_2(t, \eta)\,d\eta\Big| \\ & \le \int_{\mathbb{R}^3} \bigl( 1- \Xi_{4r}(\xi)\bigr) |\hat \chi_R(\xi - \eta)| \Xi_{r}(\eta) |\hat I_2(t, \eta)|\,d\eta \\%\label{3.26}\\ & \le \int_{\mathbb{R}^3} \bigl(1 - \Xi_{r}(\xi - \eta)\bigr) |\hat \chi_R(\xi -\eta)| |\hat I_2(t, \eta)|\,d\eta\\ & = \Big( \bigl((1-\Xi_r)|\hat \chi_R|\bigr)\ast |\hat I_2(t)|\Big)(\xi). \end{align*} Thus, \begin{align} \Big\|\bigl(1 - \Xi_{4r}(\xi)\bigr)\Big(\hat \chi_R \ast \bigl(\Xi_{r} \hat I_2(t)\bigr)\Big) \Big\|_{ L^2(\xi)} %\nonumber\\ &\le \bigl\|(1 -\Xi_{r}) |\hat \chi_R| \bigr\|_{ L^1(\xi)} \|\hat I_2(t)\|_{ L^2(\xi)} \label{3.27} \\ & \le M \bigl\|(1 -\Xi_{r}) |\hat \chi_R| \bigr\|_{ L^1(\xi)}, \quad \text{for all $t \ge 0$}. \nonumber \end{align} It follows from \eqref{3.23}, \eqref{3.25} and \eqref{3.27} that for each fixed $R>0$, \[ \Big\|(1- \Xi_{4r})\bigl(\hat \chi_R \ast \hat I_2(t)\bigr)\Big\|_{ L^2(\xi)} \to 0 \quad \text{as $ r \to \infty$, uniformly in $t$}. %\label{3.28} \] Let us define \[ \hat I_3(t) = \sum_{k=1}^{\infty}\int_0^t e^{-\alpha(t-s)} \cos \bigl(\sqrt{|\xi|^2 + \gamma}(t-s)\bigr)\hat g_{k,1}(s)\,dB_k(s). %\label{3.29} \] Then, we can proceed in the same manner as for $\hat I_2(t)$. Applying \eqref{3.21} to $\hat I_3(t)$, we find \begin{align*} E\Big(\Big\|\Xi_{4r} \bigl(\hat \chi_R \ast \hat I_3(t)\bigr) \Big\|_{H^1(\xi)}^2\Big) & \le M(R, r) E\Big(\|\hat I_3(t)\|_{ L^2(\xi)}^2\Big) \nonumber \\ & \le M(R, r)\sum_{k=1}^{\infty} E\Big( \int_0^t e^{-2\alpha(t-s)} \| \hat g_{k,1}(s)\|_{ L^2(\xi)}^2\Big)\,ds \nonumber \\ & \le M(R, r), \quad \text{for all $t \ge 0$, by \eqref{2.38} and \eqref{2.40}} %\label{3.30} \end{align*} where $M(R, r)$ denotes positive constants depending only on $R$ and $r$. Also, applying \eqref{3.23} to $\hat I_3(t)$, we have \begin{align} &E\Big(\Big\|(1- \Xi_{4r})\Big(\hat \chi_R \ast \bigl((1 -\Xi_{r})\hat I_3(t)\bigr)\Big)\Big\|_{ L^2(\xi)}^2\Big) \nonumber\\ & \le M(R) E\Big(\bigl\|(1-\Xi_{r}) \hat I_3(t)\bigr\|_{ L^2(\xi)}^2\Big) \label{3.31} \\ & \le M(R) \sum_{k=1}^{\infty} E\Big(\int_0^t e^{-2\alpha(t-s)} \bigl\|(1-\Xi_{r}) \hat g_{k,1}(s)\bigr\|_{ L^2(\xi)}^2\Big)\,ds. \nonumber \end{align} By virtue of \eqref{2.37}, \eqref{2.38} and \eqref{2.40}, the last term of \eqref{3.31} converges to zero as $r \to \infty$ uniformly in $t \ge 0$. By the same argument as for \eqref{3.27}, we have \begin{align*} %\label{3.32}\\ & E\Big(\Big\|\bigl(1 - \Xi_{4r}(\xi)\bigr)\Big(\hat \chi_R \ast \bigl(\Xi_{r} \hat I_3(t)\bigr)\Big) \Big\|_{ L^2(\xi)}^2\Big) \\ &\le E\Big(\bigl\|(1 -\Xi_{r}) |\hat \chi_R| \bigr\|_{ L^1(\xi)}^2 \bigl\|\hat I_3(t)\bigr\|_{L^2(\xi)}^2\Big)\\ &\le M \bigl\|(1 -\Xi_{r}) |\hat \chi_R| \bigr\|_{ L^1(\xi)}^2, \quad \text{for all $t \ge 0$, by \eqref{2.38} and \eqref{2.40}.} \end{align*} Apparently, this converges to zero as $r \to \infty$ uniformly in $t \ge 0$. Thus, we conclude that for each fixed $R>0$, \[ E\Big(\Big\|\bigl(1 - \Xi_{4r}(\xi)\bigr)\Big(\hat \chi_R \ast \hat I_3(t)\Big) \Big\|_{ L^2(\xi)}^2\Big) \to 0 \quad \text{as $r \to \infty$ uniformly in $t \ge 0$}. %\label{3.33} \] Next we define \[ \hat I_4(t) = \sum_{k=1}^{\infty}\int_0^t e^{-\alpha(t-s)} \cos \bigl(\sqrt{|\xi|^2 + \gamma}(t-s)\bigr)\hat G_{k,2}(s)\,dB_k(s) %\label{3.34} \] where $\hat G_{k, 2}(s)$ is the Fourier transform of $\phi g_{k,2}(u(s))$. 
By virtue of \eqref{2.34}, \eqref{2.39}, \eqref{2.40} and \eqref{2.88}, we find \begin{align*} & E\Big(\bigl\| \chi_R I_4(t)\bigr\|_{H^1(\mathbb{R}^3)}^2\Big) \le M E\Big( \bigl\|\sqrt{|\xi|^2 +1} \hat I_4(t)\bigr\|_{L^2(\xi)}^2\Big) \\ %\label{3.35}\\ & \le M \sum_{k=1}^{\infty}E\Big( \int_0^t e^{-2\alpha (t-s)} \bigl\|\phi g_{k,2}(u(s))\bigr\|_{H^1(\mathbb{R}^3)}^2\,ds\Big) \\ & \le M, \quad \text{for all $t \ge 0$, and $R\ge 1$}. \end{align*} By the same argument as for \eqref{3.17}, we find that for each fixed $R\ge 1$, \[ E\Big(\bigl\|(1 - \Xi_{r}) \bigl( \hat \chi_R \ast \hat I_4(t)\bigr)\bigr\|_{ L^2(\xi)}\Big) \to 0, \quad \text{as $r \to \infty$ uniformly in $t \ge 0$}. %\label{3.36} \] Also, as in \eqref{3.19}, \[ E\Big(\bigl\|\Xi_{r}\bigl(\hat \chi_R \ast \hat I_4(t)\bigr)\bigr\|_{H^1(\xi)}^2\Big) \le M(R, r), \quad \text{for all $t \ge 0$}. %\label{3.37} \] Next let \[ \hat I_5(t) = e^{-\alpha t} \cos \bigl(\sqrt{|\xi|^2 + \gamma} \,\, t\bigr) i\xi_j \hat u_0(\xi). %\label{3.38} \] Then, by \eqref{3.18}, \[ E\Big(\Big\|\Xi_{4r} \bigl(\hat \chi_R \ast \hat I_5(t)\bigr)\Big\|_{H^1(\xi)}^2\Big) \le M(R, r) E\Big(\|\hat I_5(t)\|_{L^2(\xi)}^2\Big) %\label{3.39}\\ \le M(R, r), \quad \text{for all $t \ge 0$}. \] As in \eqref{3.23}, we find \begin{align} & E\Big(\Big\|(1- \Xi_{4r})\Big(\hat \chi_R \ast \bigl((1 -\Xi_{r})\hat I_5(t)\bigr)\Big)\Big\|_{ L^2(\xi)}^2\Big) \nonumber \\ & \le M(R) E\Big( \bigl\|(1 - \Xi_r) \hat I_5(t)\bigr\|_{L^2(\xi)}^2\Big) \label{3.40} \\ & \le M(R) E\Big(\bigl\|(1- \Xi_r) i\xi_j \hat u_0(\xi)\bigr\|_{L^2(\xi)}^2\Big), \quad \text{for all $t \ge 0$}. \nonumber \end{align} Since $\bigl\|(1 - \Xi_r) i\xi_j \hat u_0(\xi)\bigl\|_{L^2(\xi)}^2$ converges to zero as $r \to \infty$ for almost all $\omega$, it follows from the dominated convergence theorem that the last term of \eqref{3.40} converges to zero as $r \to \infty$ uniformly in $t \ge 0$. Next as in \eqref{3.27}, we see that for each fixed $R > 0$, \begin{align*} &E\Big(\Big\|\bigl(1 - \Xi_{4r}(\xi)\bigr)\Big(\hat \chi_R \ast \bigl(\Xi_{r} \hat I_5(t)\bigr)\Big) \Big\|_{ L^2(\xi)}^2\Big) \\ %\label{3.41}\\ &\le \|(1 -\Xi_{r}) |\hat \chi_R| \|_{ L^1(\xi)}^2 E\Big( \|\hat I_5(t)\|_{L^2(\xi)}^2\Big) \to 0 \end{align*} as $r \to \infty$ uniformly in $t \ge 0$. \par The only property of the function $e^{-\alpha t}\cos \bigl(\sqrt{|\xi|^2 + \gamma} \,t\bigr)$ that has been used in the above estimates is that the function is uniformly bounded and the uniform bound decays to zero exponentially fast as $t \to \infty$. Thus, we can obtain the same estimate if $e^{-\alpha t} \cos \bigl(\sqrt{|\xi|^2 + \gamma} \, t\bigr)$ is replaced by $$ e^{-\alpha t}\dfrac{ \sin \bigl(\sqrt{|\xi|^2 + \gamma} \, t\bigr)}{\sqrt{|\xi|^2 +\gamma}}\quad \text{or}\quad e^{-\alpha t}\dfrac{ \sin \bigl(\sqrt{|\xi|^2 + \gamma} \, t\bigr)}{\sqrt{|\xi|^2 +\gamma}} i\xi_j $$ So we can estimate all other terms in the right-hand sides of \eqref{2.11}, \eqref{3.11} and \eqref{3.12} as above.\par Let us define a linear mapping $\Lambda$ by $$ \Lambda(\Theta) = \Big(\Theta_1, \frac{\partial \Theta_1}{\partial x_1}, \frac{\partial \Theta_1}{\partial x_2}, \frac{\partial \Theta_1}{\partial x_3}, \Theta_2\Big), \quad\text{for $\Theta =\bigl(\Theta_1, \Theta_2\bigr) \in H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$}. %\label{3.42} $$ Then, it is evident that $\Lambda$ is an isometry from $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$ onto $\mathcal{S} $ which is a closed subspace of $\bigl(L^2(\mathbb{R}^3)\bigr)^5$. 
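Indeed, for $\Theta = (\Theta_1, \Theta_2) \in H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$, with the usual norm on $H^1(\mathbb{R}^3)$,
\[
\bigl\|\Lambda(\Theta)\bigr\|_{(L^2(\mathbb{R}^3))^5}^2 = \|\Theta_1\|_{L^2(\mathbb{R}^3)}^2 + \|\nabla \Theta_1\|_{L^2(\mathbb{R}^3)}^2 + \|\Theta_2\|_{L^2(\mathbb{R}^3)}^2 = \|\Theta_1\|_{H^1(\mathbb{R}^3)}^2 + \|\Theta_2\|_{L^2(\mathbb{R}^3)}^2.
\]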
Let us write \[ \Psi = \Lambda\bigl(u, u_t \bigr). %\label{3.43} \] By combining all the above analysis, we can conclude the following facts. \[ E\Big(\bigl\|(1-\chi_{R})\Psi(t)\bigr\|_{\bigl(L^2(\mathbb{R}^3)\bigr)^5 } \Big) \to 0 \quad \text{as $R \to \infty$ uniformly in $t\ge 0$}; %\label{3.44} \] For each fixed $R \ge 1$, \[ E\Big( \Big\|(1- \Xi_{r}) \bigl( \hat \chi_{R} \ast \hat \Psi(t)\bigr)\Big\|_{\bigl(L^2(\xi)\bigr)^5}\Big) \to 0 \quad \text{ as $r \to \infty$ uniformly in $t \ge 0$}; %\label{3.45} \] For each $R\ge 1$ and $r \ge 1$, there is a positive constant $M(R, r)$ such that \begin{equation} E\Big( \Big\| \Xi_{r} \bigl( \hat \chi_{R} \ast \hat \Psi(t)\bigr)\Big\|_{\bigl(H^1(\xi)\bigr)^5}\Big) \le M(R, r), \quad \text{for all $t \ge 0$}. \label{3.46} \end{equation} Let $\epsilon >0$ be given. We can choose positive numbers $R_k$ and $\epsilon_k$ such that \begin{equation} \label{3.47} E\Big(\bigl\|(1-\chi_{R_k})\Psi\bigr\|_{\bigl(L^2(\mathbb{R}^3)\bigr)^5} \Big) <\epsilon_k \quad\mbox{and}\quad \sum_{k=1}^{\infty} m_k \epsilon_k < \epsilon %\label{3.48} \end{equation} where $\{m_k\}$ is a sequence of increasing positive integers with $m_k \to \infty$ as $k \to \infty$. Next, we choose $r_k$ such that \begin{equation} E\Big( \Big\|(1- \Xi_{r_k}) \bigl( \hat \chi_{R_k} \ast \hat \Psi(t)\bigr)\Big\|_{ \bigl(L^2(\xi)\bigr)^5}\Big) < \epsilon_k. \label{3.49} \end{equation} We define $\mathcal{G}_k$ to be the set of all $\mathbb{R}^5$-valued functions $\Phi \in \bigl(L^2(\mathbb{R}^3)\bigr)^5 $ such that \begin{equation} \mathop{\rm supp} \hat \Phi \subset \Big\{ \xi : |\xi| \le 2r_k \Big\},\qquad %\label{3.50} \\ \|\hat \Phi \|_{ \bigl(H^1(\xi)\bigr)^5} \le \frac{M(R_k, r_k)}{m_k \epsilon_k} \label{3.51} \end{equation} where $M(R_k, r_k)$ is the positive constant in \eqref{3.46} with $R=R_k, r=r_k$. Then, each $\mathcal{G}_k$ is a compact subset of $\bigl(L^2(\mathbb{R}^3)\bigr)^5. $ Next we define \[ \mathcal{H} _k = \Big\{ \Theta \in \bigl(L^2(\mathbb{R}^3)\bigr)^5 : \|\Theta - \Phi\|_{ \bigl(L^2(\mathbb{R}^3)\bigr)^5} \le \frac{1}{m_k},\quad \text{for some $\Phi \in \mathcal{G}_k$}\Big\}. %\label{3.52} \] Suppose $\Psi(t, \omega) = \Lambda\bigl( u(t, \omega), u_t(t, \omega)\bigr) \notin \mathcal{H}_k$, for some $\omega \in \Omega, t \ge 0$. We can write \[ \Psi = (1 - \chi_{R_k}) \Psi + F_{\xi}^{-1}\Big( (1-\Xi_{r_k})\bigl(\hat \chi_{R_k} \ast \hat \Psi\bigr) + \Xi_{r_k}\bigl(\hat \chi_{R_k} \ast \hat \Psi\bigr)\Big). %\label{3.53} \] Obviously, either $ F_{\xi}^{-1}\Big( \Xi_{r_k}\bigl(\hat \chi_{R_k} \ast \hat \Psi\bigr)\Big) \notin \mathcal{G}_k$ or $ F_{\xi}^{-1}\Big( \Xi_{r_k}\bigl(\hat \chi_{R_k} \ast \hat \Psi\bigr)\Big) \in \mathcal{G}_k$. If $\Psi \notin \mathcal{H} _k$ and $ F_{\xi}^{-1}\Big( \Xi_{r_k}\bigl(\hat \chi_{R_k} \ast \hat \Psi\bigr)\Big) \in \mathcal{G}_k$, then it must occur that either \[ \Big\| \bigl(1-\Xi_{r_k}\bigr)\bigl(\hat \chi_{R_k}\ast \hat \Psi\bigr)\Big\|_{ \bigl(L^2(\xi)\bigr)^5} > \frac{1}{2 m_k} %\label{3.54} \] or \[ \bigl\|(1- \chi_{R_k})\Psi\bigr\|_{\bigl( L^2(\mathbb{R}^3)\bigr)^5} > \frac{1}{2m_k}. 
%\label{3.55}
\] Therefore, for fixed $t \ge 0$,
\begin{align*}
& \Big\{ \omega : \Psi \notin \mathcal{H} _k\Big\} \subset \Big\{ \omega : F_{\xi}^{-1}\bigl( \Xi_{r_k}(\hat \chi_{R_k}\ast \hat \Psi)\bigr) \notin \mathcal{G}_k\Big\} \\ %\label{3.56}\\
&\quad \bigcup \Big\{\omega : \bigl\| \bigl(1- \Xi_{r_k}\bigr)(\hat \chi_{R_k}\ast \hat \Psi)\bigr\|_{ \bigl(L^2(\xi)\bigr)^5} > \frac{1}{2m_k}\Big\} \\
& \quad \bigcup \Big\{ \omega : \bigl\|(1- \chi_{R_k})\Psi\bigr\|_{\bigl(L^2(\mathbb{R}^3)\bigr)^5} > \frac{1}{2m_k}\Big\}.
\end{align*}
By \eqref{3.46} and \eqref{3.51},
\[ P \Big\{ \omega : F_{\xi}^{-1}\bigl( \Xi_{r_k}(\hat \chi_{R_k}\ast \hat \Psi)\bigr) \notin \mathcal{G}_k\Big\} \le m_k \epsilon_k. %\label{3.57}
\]
Also, it follows from \eqref{3.47} and \eqref{3.49} that
\[ P\Big\{\omega : \bigl\| \bigl(1- \Xi_{r_k}\bigr)(\hat \chi_{R_k}\ast \hat \Psi)\bigr\|_{ \bigl(L^2(\xi)\bigr)^5} > \frac{1}{2m_k}\Big\} \le 2 m_k \epsilon_k %\label{3.58}
\]
and
\[ P\Big\{ \omega : \bigl\|(1- \chi_{R_k})\Psi\bigr\|_{\bigl(L^2(\mathbb{R}^3)\bigr)^5} > \frac{1}{2m_k}\Big\} \le 2 m_k \epsilon_k. %\label{3.59}
\]
These yield $P\Big\{ \omega : \Psi(t, \omega) \notin \mathcal{H}_k\Big\} \le 5 m_k \epsilon_k$, for every $t \ge 0$. %\label{3.60}
We define
\[ \mathcal{K} _{\epsilon} = \bigcap_{k=1}^{\infty} \mathcal{H} _k. %\label{3.61}
\]
Then, $\mathcal{K} _{\epsilon}$ is closed and totally bounded in $\bigl(L^2(\mathbb{R}^3)\bigr)^5$. Thus, $ \mathcal{K} _{\epsilon} \cap \mathcal{S} $ is a compact subset of $\mathcal{S} $, and $\Lambda^{-1}\bigl(\mathcal{K}_{\epsilon}\cap \mathcal{S} \bigr)$ is a compact subset of $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$. Furthermore,
\begin{align*}
P\Big\{ \omega : \bigl(u(t, \omega), u_t(t, \omega)\bigr) \notin \Lambda^{-1}\bigl(\mathcal{K}_{\epsilon} \cap \mathcal{S} \bigr)\Big\}& = P\Big\{ \omega : \Psi(t, \omega) \notin \mathcal{K}_{\epsilon}\Big\}\\ %\label{3.62}\\
& \le \sum_{k=1}^{\infty} 5 m_k \epsilon_k < 5\epsilon, \quad \text{for every $ t \ge 0$}.
\end{align*}
Let $\mathcal{L}(t) = \mathcal{L} \Big( \bigl(u(t), u_t(t)\bigr)\Big)$ be the probability distribution for $\bigl(u(t), u_t(t)\bigr)$ for each $t\ge 0$. By the above analysis, the family of probability measures $\bigl\{\mathcal{L}(t) \bigr\}_{t\ge 0}$ on $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$ is tight.
\section{Existence of periodic and invariant measures}
It is clear that we could take any $s \ge 0$ as the initial time for the Cauchy problem \eqref{2.30} - \eqref{2.31}. We define $X(t, s; \zeta) =(u, u_t)$ to be the solution of \eqref{2.30} for $t \ge s$ satisfying $ (u(s), u_t(s))= \zeta$, where $\zeta$ is an $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$-valued, $\mathcal{F}_s$-measurable random variable such that $ \zeta \in L^2\bigl(\Omega; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr) $ and $u(s) \in L^{p+1}\bigl(\Omega; L^{p+1}(\mathbb{R}^3)\bigr)$. Then, $X\bigl(\cdot , s; \zeta\bigr) \in L^2\bigl(\Omega; C([s, T]; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3))\bigr)$, for all $T>s$, and \eqref{2.87} holds for all $t \ge s$. For each $ 0 \le s < t$, $z \in H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$ and $\Gamma \in \mathcal{B} \bigl(H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr)$, we set
\begin{equation}
p\bigl(s, z; t, \Gamma\bigr) = P\Big\{ \omega : X(t, s; z) \in \Gamma\Big\}. \label{4.1}
\end{equation}
\begin{lemma} \label{lm4.1}
Let $\phi$ be a bounded continuous function on $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$.
For each $0 \le s < t$, the integral
$$ \int_{H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)} p(s, z; t, dy) \phi(y) $$
is continuous in $z \in H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$.
\end{lemma}
\begin{proof}
Let $\bigl\{z_n\bigr\}_{n=1}^{\infty}$ be a sequence in $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$ such that
\[ \lim_{n \to \infty}z_n = z_{\ast}. %\label{4.2}
\]
Consider the probability distributions for $X(t, s; z_n)$, $n=1, 2, \dots$, and $X(t, s; z_{\ast})$. All the estimates of the preceding section are valid uniformly in $z_n$ and $0 \le s < t$, since the sequence $\bigl\{z_n\bigr\}_{n=1}^{\infty}$ is contained in a compact subset of $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$. Consequently, the family of probability measures $p(s, z_n; t, \cdot )$, $n=1, 2, \dots$, is tight for each $0 \le s < t$. Now fix $s^{\ast}$ and $t^{\ast}$ with $0 \le s^{\ast} < t^{\ast}$, and let $\epsilon > 0$. Then, there is a compact subset $\Upsilon_{\epsilon}$ of $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$ such that
\[ p\bigl(s^{\ast}, z_{\ast}; t^{\ast}, \Upsilon_{\epsilon}\bigr) > 1-\epsilon,\quad p\bigl(s^{\ast}, z_n; t^{\ast}, \Upsilon_{\epsilon}\bigr) > 1-\epsilon,\quad \text{for all $n \ge 1$}. %\label{4.3}
\]
It follows from \eqref{2.71} that
\[ E\Big(\sup_{s^{\ast} \le t \le t^{\ast}} \bigl\|X(t, s^{\ast}; z_n)\bigr\|_{H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)}^2\Big) \le M(t^{\ast}),\quad \text{for all $n \ge 1$},%\label{4.4}
\]
and
\[ E\Big(\sup_{s^{\ast}\le t \le t^{\ast}} \bigl\|X(t, s^{\ast}; z_{\ast})\bigr\|_{H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)}^2\Big) \le M(t^{\ast}). %\label{4.5}
\]
Thus, we can choose a positive number $L$ such that
\[ P \Big\{\omega : \sup_{s^{\ast} \le t \le t^{\ast}}\bigl\|X(t, s^{\ast}; z_{n})\bigr\|_{H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)} > L\Big\} < \epsilon, \quad \text{for all $n \ge 1$}, %\label{4.6}
\]
and
\[ P \Big\{\omega : \sup_{s^{\ast} \le t \le t^{\ast}}\bigl\|X(t, s^{\ast}; z_{\ast})\bigr\|_{H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)} > L\Big\} < \epsilon. %\label{4.7}
\]
We then define stopping times by
\[ \mathcal{T} _n = \begin{cases} \inf\bigl\{t : \bigl\|X(t, s^{\ast}; z_n)\bigr\|_{H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)} > L\bigr\},\\
\infty,\qquad \text{if } \bigl\{t : \bigl\|X(t, s^{\ast}; z_n)\bigr\|_{H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)} > L \bigr\} = \emptyset \end{cases} %\label{4.8}
\]
and
\[ \mathcal{T} _{\ast} = \begin{cases} \inf\bigl\{t : \bigl\|X(t, s^{\ast}; z_{\ast})\bigr\|_{H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)} > L\bigr\},\\
\infty, \qquad \text{if } \bigl\{t : \bigl\|X(t, s^{\ast}; z_{\ast}) \bigr\|_{H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)} > L \bigr\} =\emptyset.\end{cases} %\label{4.9}
\]
We write
\[ Y_n(t) = X\bigl(t\wedge \mathcal{T} _n \wedge \mathcal{T}_{\ast}, s^{\ast}; z_n\bigr) %\label{4.10}
\quad\mbox{and}\quad Y_{\ast}(t) = X\bigl( t \wedge \mathcal{T} _n \wedge \mathcal{T}_{\ast}, s^{\ast};z_{\ast}\bigr). %\label{4.11}
\]
Since $ \bigl\|f_2(v) - f_2(w)\bigr\|_{L^2(\mathbb{R}^3)} \le M(L)\| v - w\|_{H^1(\mathbb{R}^3)}$, %\label{4.12}
for all $ v, w \in H^1(\mathbb{R}^3)$ satisfying $\|v\|_{H^1(\mathbb{R}^3)} \le L, \|w\|_{H^1(\mathbb{R}^3)}\le L$, for some positive constant $M(L)$, we can derive
\begin{equation}
E\Big(\bigl\|Y_n(t^{\ast}) - Y_{\ast}(t^{\ast})\bigr\|_{H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)}^2\Big) \le M\bigl(t^{\ast}, L\bigr) \|z_n - z_{\ast}\|_{H^1(\mathbb{R}^3)\times L^2(\mathbb{R}^3)}^2, \label{4.13}
\end{equation}
where $M\bigl(t^{\ast}, L\bigr)$ is a positive constant independent of $n$.
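The local Lipschitz bound used above is what one expects from a polynomial nonlinearity; for instance, if $f_2(u) = -|u|^{p-1}u$ with $1 < p \le 3$, then the mean value theorem, H\"older's inequality with exponents $6/(p-1)$ and $6/(4-p)$, and the Sobolev embedding $H^1(\mathbb{R}^3) \subset L^q(\mathbb{R}^3)$, $2 \le q \le 6$, give
\[
\bigl\|f_2(v) - f_2(w)\bigr\|_{L^2(\mathbb{R}^3)}
\le p\,\bigl\| |v| + |w| \bigr\|_{L^6(\mathbb{R}^3)}^{\,p-1}\,\|v - w\|_{L^{6/(4-p)}(\mathbb{R}^3)}
\le M(L)\,\|v - w\|_{H^1(\mathbb{R}^3)}
\]
whenever $\|v\|_{H^1(\mathbb{R}^3)} \le L$ and $\|w\|_{H^1(\mathbb{R}^3)}\le L$; the case $p=1$ is trivial since $f_2$ is then globally Lipschitz.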
Let $\phi$ be a bounded continuous function on $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$. Then, $\phi$ is uniformly continuous on $\Upsilon_{\epsilon}$. Thus, there is some $\delta >0$ such that for all $v, w \in \Upsilon_{\epsilon} $ satisfying $\|v - w \|_{H^1(\mathbb{R}^3)\times L^2(\mathbb{R}^3)} <\delta$,
\begin{equation}
\bigl|\phi(v) - \phi(w)\bigr| < \epsilon. \label{4.14}
\end{equation}
Let us write, for $n =1, 2, \dots$,
\begin{align*}
\Omega_n =& \Big\{\omega : X\bigl(t^{\ast}, s^{\ast}; z_n\bigr)\in \Upsilon_{\epsilon}\Big\} \bigcap \Big\{\omega : X\bigl(t^{\ast}, s^{\ast}; z_{\ast}\bigr) \in \Upsilon_{\epsilon}\Big\} \\ %\label{4.15}\\
&\bigcap \Big\{\omega : \sup_{s^{\ast} \le t \le t^{\ast}}\bigl\|X(t, s^{\ast}; z_{n})\bigr\|_{H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)}\le L\Big\}\\
& \bigcap \Big\{\omega : \sup_{s^{\ast} \le t \le t^{\ast}}\bigl\|X(t, s^{\ast}; z_{\ast})\bigr\|_{H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)}\le L\Big\}.
\end{align*}
For every $n \ge 1$, it holds that
\begin{equation}
P\Big( \Omega_n \Big) > 1 - 4\epsilon. \label{4.16}
\end{equation}
For each $n \ge 1$, if $\omega \in \Omega_n$, then
\[ Y_{\ast}(t^{\ast}) = X\bigl(t^{\ast}, s^{\ast}; z_{\ast}\bigr) %\label{4.17}
\quad\mbox{and}\quad Y_n(t^{\ast}) = X\bigl(t^{\ast}, s^{\ast}; z_n\bigr). %\label{4.18}
\]
It follows from \eqref{4.13} that
\begin{align}
& P\Big( \Omega_n \bigcap \Big\{\omega : \bigl\|X(t^{\ast}, s^{\ast}; z_n) - X(t^{\ast}, s^{\ast}; z_{\ast})\bigr\|_{H^1(\mathbb{R}^3)\times L^2(\mathbb{R}^3)} \ge \delta\Big\}\Big) \nonumber \\
& \le \frac{M\bigl(t^{\ast}, L\bigr)}{\delta^2} \bigl\| z_n - z_{\ast}\bigr\|_{H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)}^2, \quad \text{for all $n \ge 1$}. \label{4.19}
\end{align}
Let us define
\[ M_{\phi} = \sup_{w \in H^1(\mathbb{R}^3)\times L^2(\mathbb{R}^3)} \bigl|\phi(w)\bigr|. %\label{4.20}
\]
By \eqref{4.16}, we see that for all $n \ge 1$,
\begin{gather*}
\Big| \int_{\Omega\setminus \Omega_n} \phi\bigl( X(t^{\ast}, s^{\ast}; z_n)\bigr)\,dP\Big| < 4\epsilon M_{\phi}, \\%\label{4.21}\\
\Big| \int_{\Omega\setminus \Omega_n} \phi\bigl( X(t^{\ast}, s^{\ast}; z_{\ast})\bigr)\,dP\Big| < 4\epsilon M_{\phi}, %\label{4.22}\\
\end{gather*}
and, by \eqref{4.14} and \eqref{4.19},
\begin{align*}
& \int_{\Omega_n} \Big| \phi\bigl( X(t^{\ast}, s^{\ast}; z_n)\bigr) -\phi\bigl(X(t^{\ast}, s^{\ast}; z_{\ast})\bigr)\Big|\,dP \\%\label{4.23}\\
&\le \frac{2 M_{\phi} M\bigl(t^{\ast}, L\bigr)}{\delta^2} \bigl\|z_n - z_{\ast}\bigr\|_{H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)}^2 + \epsilon.
\end{align*}
Finally we arrive at
\[ \varlimsup_{n \to \infty} \Big|\int_{\Omega} \Big( \phi\bigl( X(t^{\ast}, s^{\ast}; z_n)\bigr) -\phi\bigl(X(t^{\ast}, s^{\ast}; z_{\ast})\bigr)\Big)\,dP\Big| \le \epsilon + 8\epsilon M_{\phi}, %\label{4.24}
\]
which yields the continuity, since $\epsilon >0$ was arbitrary.
\end{proof}
\begin{lemma} \label{lm4.2}
$\bigl(u(t), u_t(t)\bigr)$ is an $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$-valued Markov process.
\end{lemma}
\begin{proof}
By the uniqueness of the solution, it holds that for any $0 \le r < s< t$, and $z \in H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$,
\[ X(t, r; z) = X\bigl(t, s; X(s, r; z)\bigr) %\label{4.25}
\]
for almost all $\omega$. We define
\[ \mathcal{P}_{s,t}\phi(z) = E\Big( \phi\bigl(X(t, s;z)\bigr)\Big) %\label{4.26}
\]
for each bounded Borel function $\phi$ on $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$.
It is enough to show that \[ E\Big( \phi\bigl(X(t, s; X(s, r;z)) \bigr) \Big| \mathcal{F}_s\Big) = \mathcal{P}_{s,t}\phi\bigl(y\bigr)\Big|_{y= X(s,r;z)} %\label{4.27} \] for almost all $\omega$, for each bounded continuous function $\phi$. Let us recall the proof of Lemma \ref{lm2.2}. The solution was obtained by the truncation method. Let $X_N = X_N(t, s; \zeta)$ denote the solution $\bigl(u_N, \partial_t u_N\bigr)$ of \eqref{2.30} with $f = f_1 + f_{2, N}$ satisfying $\bigl(u_N(s), \partial_t u_N(s)\bigr) = \zeta$, where $\zeta$ is $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$-valued $\mathcal{F}_s$-measurable such that $ \zeta \in L^2\bigl(\Omega; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr) $ and $u_N(s) \in L^{p+1}\bigl(\Omega; L^{p+1}(\mathbb{R}^3)\bigr)$. Then, we know that for each $T>s$, \[ X(t, s; \zeta) =\lim_{N \to \infty} X_N(t, s; \zeta) \quad \text{in $C\bigl([s, T]; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr)$} %\label{4.28} \] for almost all $\omega$. For each $N\ge 1$ and each bounded continuous function $\phi$ on $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$, it holds that \[ E\Big(\phi\bigl(X_N(t, s; \zeta)\bigr) \Big| \mathcal{F}_s\Big) = E \Big(\phi\bigl(X_N(t, s; y)\bigr)\Big)\Big|_{y=\zeta} %\label{4.29} \] for almost all $\omega$, which follows directly from the argument in \cite[p.250]{d1}, for $f_{2, N}(\cdot)$ satisfies \eqref{2.50}. Since $\phi$ is a bounded continuous function, we pass $N \to \infty$ to arrive at $$ E\Big(\phi\bigl(X(t, s; \zeta)\bigr) \Big| \mathcal{F}_s\Big) = E \Big(\phi\bigl(X(t, s; y)\bigr)\Big)\Big|_{y=\zeta} %\label{4.30 $$ for almost all $\omega$. \end{proof} Next we show periodicity of the transition function. \begin{lemma} \label{lm4.3} Let $p(s, z; t, \Gamma)$ be defined by \eqref{4.1}. Then, \begin{equation} p(s+L, z; t+L, \Gamma)=p(s, z; t, \Gamma) \label{4.31} \end{equation} for all $0 \le s < t, z \in H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$, and $\Gamma \in \mathcal{B} \bigl(H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr)$. \end{lemma} \begin{proof} Let $\{\Omega^{(j)}, \mathcal{F}_t^{(j)}, P^{(j)}\}, j=1, 2$, be a stochastic basis, and let $B_k^{(j)}(t)$, $k=1, 2, \dots, $ be mutually independent standard Brownian motions over this stochastic basis for each $j=1, 2$. Let $\bigl(u^{(j)}, u_t^{(j)}\bigr)$ be the solution of \eqref{2.30} with $B_k =B_k^{(j)}$, $k=1,2, \dots$, satisfying $\bigl(u^{(j)}(s),u_t^{(j)}(s)\bigr)=z$, over the stochastic basis $\{\Omega^{(j)}, \mathcal{F}_t^{(j)}, P^{(j)}\}$. We will show that $\bigl(u^{(1)}(t),u_t^{(1)}(t)\bigr)$ and $\bigl(u^{(2)}(t),u_t^{(2)}(t)\bigr)$ have the same probability law. We may replace each $B_k^{(j)}(t)$ by $B_k^{(j)}(t)- B_k^{(j)}(s)$. Recalling that the solution was obtained by the truncation procedure, let $\bigl(u_N^{(j)}, \partial_t u_N^{(j)}\bigr)$ be the solution with $f = f_1 + f_{2, N}$. It is enough to show that $\bigl(u_N^{(1)}(t),\partial_t u_N^{(1)}(t)\bigr)$ and $\bigl(u_N^{(2)}(t),\partial_t u_N^{(2)}(t)\bigr)$ have the same probability law for each $N \ge 1$. Fix any $N$ and drop the subscript $N$. Then, $\bigl(u^{(j)}, u_t^{(j)}\bigr)$ was obtained by iteration scheme. 
Choose any $T >0$, and suppose that $f^{(j)}, g_k^{(j)}$'s are $C\bigl([s, T]; L^2(\mathbb{R}^3)\bigr)$-valued random variables which are predictable processes over $\{\Omega^{(j)}, \mathcal{F}_t^{(j)}, P^{(j)}\}, j=1, 2$, such that the joint distribution of $\Big(f^{(1)}, \{g_k^{(1)}\}_{k=1}^m, \{B_k^{(1)}\}_{k=1}^m\Big)$ is the same as that of $\Big(f^{(2)}, \{g_k^{(2)}\}_{k=1}^m, \{B_k^{(2)}\}_{k=1}^m\Big)$, for each $m\ge 1$. Let us define for $j=1,2$, and $m \ge 1$, \begin{align} \hat v^{(j, m)}(t, \xi) =& e^{-\alpha (t -s)}\cos (\sqrt{|\xi|^2 + \gamma} (t -s)) \hat v_0^{(j,m)}(\xi) \nonumber\\ & + e^{-\alpha (t-s)} \dfrac{\sin (\sqrt{|\xi|^2 +\gamma} (t-s))}{\sqrt{ |\xi|^2 +\gamma}} \hat v_1^{(j,m)}(\xi) \nonumber\\ & + \int_s^t e^{-\alpha (t -\eta)} \dfrac{\sin \bigl(\sqrt{|\xi|^2 +\gamma} (t-\eta)\bigr)}{\sqrt{|\xi|^2 + \gamma}} \hat f^{(j)}(\eta) d\eta \label{4.32}\\ & + \sum_{k=1}^{m} \int_s^t e^{-\alpha (t -\eta)} \dfrac{\sin \bigl(\sqrt{|\xi|^2 +\gamma} (t-\eta)\bigr)}{\sqrt{|\xi|^2 + \gamma}} \hat g_k^{(j)}(\eta) dB_k^{(j)}(\eta) \nonumber \end{align} where $\bigl(v_0^{(j,m)}, v_1^{(j,m)}\bigr)=z$. Then, $\bigl(v^{(j, m)}, v_t^{(j, m)}\bigr)$ is a $C\bigl([s, T]; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr)$-valued random variable which is a predictable process over $\{\Omega^{(j)}, \mathcal{F}_t^{(j)}, P^{(j)}\}, j=1, 2$. Furthermore, $\Big(\bigl(v^{(1,m)}, v_t^{(1,m)}\bigr), \{B_k^{(1)}\}_{k=1}^m\Big)$ and $\Big(\bigl(v^{(2,m)}, v_t^{(2,m)}\bigr), \{B_k^{(2)}\}_{k=1}^m\Big)$ have the same joint distribution. In the meantime, as $m \to \infty$, $$ \bigl(v^{(j, m)}, v_t^{(j, m)}\bigr) \to \bigl(v^{(j)}, v_t^{(j )}\bigr) \quad \text{ in $L^2\bigl(\Omega^{(j)}; C\bigl([s, T]; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr)\bigr)$} %\label{4.33} $$ where $v^{(j)}$ is defined by the right-hand side of \eqref{4.32} with $m$ replaced by $\infty$. Consequently, the joint distribution of $\Big(\bigl(v^{(1)}, v_t^{(1)}\bigr),\{B_k^{(1)}\}_{k=1}^m\Big)$ is equal to that of $\Big(\bigl(v^{(2)},v_t^{(2)}\bigr), \{B_k^{(2)}\}_{k=1}^m\Big)$ for each $m \ge 1$. At the same time, $$ \Big( f_1 + f_{2, N}(v^{(1)}), \bigl\{g_{k,1}+\phi g_{k,2}(v^{(1)})\bigr\}_{k=1}^m, \{B_k^{(1)}\}_{k=1}^m\Big) $$ and $$ \Big( f_1 + f_{2, N}(v^{(2)}), \bigl\{g_{k,1}+\phi g_{k,2}(v^{(2)})\bigr\}_{k=1}^m, \{B_k^{(2)}\}_{k=1}^m\Big) $$ have the same joint distribution for $m \ge 1$. Thus, approximate solutions at each step of the iterations scheme have the same joint distribution, and $\bigl(u^{(1)}(t), u_t^{(1)}(t)\bigr)$ and $\bigl(u^{(2)}(t), u_t^{(2)}(t)\bigr)$ have the same distribution for each $s \le t \le T$. With the aid of \eqref{2.32} - \eqref{2.35}, \eqref{2.37} - \eqref{2.40} and \eqref{2.48}, we apply this observation to $\bigl(u^{(1)}, u_t^{(1)}\bigr) = \bigl(u(\cdot), u_t(\cdot)\bigr)$, $\bigl(u^{(2)}, u_t^{(2)}\bigr) = \bigl(u( \cdot \,\, + L), u_t( \cdot \,\,+ L)\bigr), B_k^{(1)}(t) = B_k(t) - B_k(s)$ and $B_k^{(2)}(t)=B_k(t+L) - B_k(s+L) $ to arrive at \eqref{4.31}. \end{proof} With the aid of the above lemmas under the assumptions \eqref{2.32} - \eqref{2.35}, \eqref{2.37} - \eqref{2.40} and \eqref{2.48}, we will establish the existence of a periodic measure. \begin{theorem} \label{thm4.4} There exists a probability measure $\mu$ on $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$ such that if the initial distribution is equal to $\mu$, then the probability distribution of the solution to {\rm \eqref{2.30} - \eqref{2.31}} is $L$-periodic in time. 
\end{theorem}
\begin{proof}
Here we use the notation $p(s, z; t, \Gamma)$ defined by \eqref{4.1}. By Lemma \ref{lm4.3}, the transition function $p(s, z; t, \Gamma)$ is $L$-periodic. Choose any $z \in H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$. Then, by Lemma \ref{lm2.2}, there is a unique solution of \eqref{2.30} - \eqref{2.31} satisfying \eqref{2.89}. Following Khasminskii \cite{k2}, we set
$$ \mu_N =\frac{1}{N}\sum_{k=1}^N p(0, z; kL, \cdot ). %\label{4.34}
$$
Then, by virtue of the analysis in the preceding section, $\bigl\{\mu_N\bigr\}_{N\ge 1}$ is a tight sequence of probability measures on $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$. Thus, there is a weakly convergent subsequence $\bigl\{\mu_{N_k}\bigr\}_{k=1}^{\infty}$.
\par
Let $\mu = \lim_{k \to \infty} \mu_{N_k}$ %\label{4.35}
and write
$$ \mathcal{X} = H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3). %\label{4.36}
$$
It follows from \eqref{2.89} that
$$ \int_{\mathcal{X}} \|y\|_{\mathcal{X}}^4 p(0, z; kL, dy) \le M, \quad \text{for all $k\ge 1$}, %\label{4.37}
$$
for some constant $M$. Hence, by means of the weak convergence of $\bigl\{\mu_{N_k}\bigr\}$, cut-off functions and Fatou's lemma, we find
$$ \int_{\mathcal{X}} \|y\|_{\mathcal{X}}^4 d\mu(y) \le M. %\label{4.38}
$$
In fact, this is a necessary condition for $\mu$ to be the probability distribution of a random function in $L^4\bigl(\Omega; H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)\bigr)$. We now assume that $\bigl( u_0, u_1\bigr)$ satisfies the conditions in Lemma \ref{lm2.2} and the distribution of $\bigl( u_0, u_1\bigr)$ is equal to $\mu$. Choose any bounded continuous function $\phi$ on $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$. By using Lemma \ref{lm4.1}, Lemma \ref{lm4.3} and the Chapman-Kolmogorov equation, we see that for each $t \ge 0$,
\begin{align}
&\int_{\mathcal{X}}d\mu(y) \int_{\mathcal{X}}p(0, y; t, d\zeta)\phi(\zeta) \nonumber\\
& = \lim_{k \to \infty} \frac{1}{N_k} \sum_{j=1}^{N_k} \int_{\mathcal{X}}p(0, z; jL, dy) \int_{\mathcal{X}}p(0, y; t, d\zeta)\phi(\zeta) \nonumber\\
&= \lim_{k \to \infty} \frac{1}{N_k} \sum_{j=1}^{N_k}\int_{\mathcal{X}} p(0, z; t + jL, d\zeta) \phi(\zeta) \nonumber\\
& = \lim_{k \to \infty} \frac{1}{N_k} \sum_{j=1}^{N_k}\int_{\mathcal{X}} p(0, z; t + L + jL, d\zeta) \phi(\zeta) \label{4.39} \\
& = \lim_{k \to \infty} \frac{1}{N_k} \sum_{j=1}^{N_k} \int_{\mathcal{X}}p(0, z; jL, dy) \int_{\mathcal{X}}p(0, y; t + L, d\zeta)\phi(\zeta)\nonumber\\
& = \int_{\mathcal{X}} d\mu(y) \int_{\mathcal{X}}p(0, y; t+L, d\zeta)\phi(\zeta). \nonumber
\end{align}
Here the third equality holds because the two averages differ only by the terms with $j=1$ and $j=N_k+1$, whose contribution is at most $2\sup_{\zeta}|\phi(\zeta)|/N_k$. This yields that for each Borel subset $\Gamma$ of $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3) $ and each $t \ge 0$,
\begin{align*}
P\Big\{\omega : \bigl(u(t), u_t(t) \bigr) \in \Gamma\Big\} & =\int_{\mathcal{X}} d\mu(y) p(0, y; t, \Gamma) \\ %\label{4.40}\\
& =\int_{\mathcal{X}} d\mu(y) p(0, y; t+L, \Gamma)\\
&= P\Big\{\omega : \bigl( u(t + L), u_t(t + L)\bigr) \in \Gamma\Big\}.
\end{align*}
This completes the proof of Theorem \ref{thm4.4}.
\end{proof}
Next we assume that $f_1$ and $g_k$'s are independent of time, and retain all other conditions in Lemma \ref{lm2.2}. Then, Lemma \ref{lm2.2}, Lemma \ref{lm4.1} and Lemma \ref{lm4.2} are still valid, and \eqref{4.31} is also valid for arbitrary $L >0$. We will prove the existence of an invariant measure.
\begin{theorem} \label{thm4.5}
There exists an invariant measure on $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$ for \eqref{2.30}.
\end{theorem}
\begin{proof}
We first note that the result of Theorem \ref{thm4.4} cannot be used directly, since periodic measures are not necessarily unique: for each $L>0$ it yields some $L$-periodic measure, but such a measure need not be invariant. As above we follow Khasminskii \cite{k2} to choose any $z \in \mathcal{X} = H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3) $ and set
\begin{equation}
\mu_N = \frac{1}{N}\int_0^N p(0, z; t, \cdot ) \,dt, \label{4.41}
\end{equation}
which is a well-defined probability measure over $\mathcal{X}$. For this, we argue as follows. For each $z \in \mathcal{X} $ and each bounded continuous function $\phi$ on $\mathcal{X}$, the integral
$$ \int_{\mathcal{X}} p\bigl(0, z; t, dy\bigr) \phi(y) = \int_{\Omega} \phi\bigl(X(t, 0; z)\bigr)\,dP $$
is continuous in $t$, which implies that for each closed subset $G \subset \mathcal{X}$, the function $p\bigl(0, z; t, G\bigr)$ is upper semi-continuous in $t$. By the Dynkin system theorem, $p(0, z; t, \Gamma)$ is $\mathcal{B} \bigl([0, \infty)\bigr)$-measurable in $t $ for each Borel subset $\Gamma \subset \mathcal{X}$. Thus, the right-hand side of \eqref{4.41} defines a probability measure over $\mathcal{X}$. Again by the analysis in the preceding section, the sequence $\{\mu_N\}_{N=1}^{\infty}$ is tight, and we can find a subsequence $\{\mu_{N_k}\}_{k=1}^{\infty} $ which converges weakly to a probability measure on $\mathcal{X}$. Let $\mu = \lim_{k \to \infty} \mu_{N_k}$. %\label{4.42}
Choose any bounded continuous function $\phi$ on $\mathcal{X}$. It is enough to show that
$$ \int_{\mathcal{X}} \,d\mu(y) \int_{\mathcal{X}} p(0, y; t, d\zeta) \phi(\zeta) = \int_{\mathcal{X}} \,d\mu(y) \int_{\mathcal{X}} p(0, y; t + L, d\zeta) \phi(\zeta), %\label{4.43}
$$
for all $t \ge 0 $ and all $ L >0$. Since \eqref{4.31} is valid for every $L>0$, we can proceed in the same manner as in \eqref{4.39} to find
\begin{align*}
& \int_{\mathcal{X}} d\mu(y) \int_{\mathcal{X}} p(0, y; t, d\zeta)\phi(\zeta) \\%\label{4.44}\\
& = \lim_{k \to \infty} \frac{1}{N_k} \int_0^{N_k} ds \int_{\mathcal{X}} p(0, z; s, dy) \int_{\mathcal{X}}p(0, y; t, d\zeta)\phi(\zeta)\\
&= \lim_{k \to \infty} \frac{1}{N_k} \int_0^{N_k} ds \int_{\mathcal{X}}p(0, z; t + s, d\zeta) \phi(\zeta)\\
& = \lim_{k \to \infty} \frac{1}{N_k} \int_{-L}^{N_k - L} ds \int_{\mathcal{X}}p(0, z; t + L + s, d\zeta) \phi(\zeta)\\
& = \lim_{k \to \infty} \frac{1}{N_k} \int_0^{N_k} ds \int_{\mathcal{X}}p(0, z; t + L + s, d\zeta) \phi(\zeta)\\
& =\lim_{k \to \infty} \frac{1}{N_k} \int_0^{N_k} ds \int_{\mathcal{X}}p(0, z; s, dy) \int_{\mathcal{X}}p(0, y; t + L, d\zeta)\phi(\zeta)\\
& = \int_{\mathcal{X}} d\mu(y) \int_{\mathcal{X}}p(0, y; t+L, d\zeta)\phi(\zeta), \quad \text{for all $ t\ge 0$ and all $L>0$}.
\end{align*}
Here the fourth equality holds because the integrals over $[-L, 0]$ and $[N_k - L, N_k]$ are each bounded by $L\sup_{\zeta}|\phi(\zeta)|$, so that their contribution vanishes as $N_k \to \infty$. This completes the proof.
\end{proof}
\section{Remarks on the case of a bounded domain} %5
Let $\mathcal{G}$ be a bounded domain in $\mathbb{R}^3$ with smooth boundary $\partial \mathcal{G}$. We consider the initial-boundary value problem
\begin{gather*}
u_{tt} + 2\alpha u_t -\Delta u + \beta u = f(t, x, u) + \sum_{k=1}^{\infty} g_k(t, x, u) \frac{dB_k}{dt}, \quad (t, x) \in (0, \infty) \times \mathcal{G},\\ %\label{5.1}
u = 0, \quad (t, x) \in (0, \infty) \times \partial \mathcal{G},\\ %\label{5.2}
u(0) = u_0, \quad u_t(0) = u_1, \quad x \in \mathcal{G}. %\label{5.3}
\end{gather*}
Here we impose the same conditions on $f$ and $g_k$'s as in the previous section.
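In the bounded domain case the role of the cut-off functions $\chi_R$ and $\Xi_r$ is played by the spectral projections associated with the Dirichlet Laplacian introduced below. The mechanism is the elementary tail estimate: if $\{\lambda_k\}$ and $\{\phi_k\}$ denote the Dirichlet eigenvalues (arranged in nondecreasing order, so that $\lambda_k \to \infty$) and the corresponding eigenfunctions, then for every $\psi \in H_0^1(\mathcal{G})$,
\[
\sum_{k > N} |\langle \psi, \phi_k\rangle|^2
\le \frac{1}{\lambda_{N+1}} \sum_{k > N} \lambda_k |\langle \psi, \phi_k\rangle|^2
\le \frac{1}{\lambda_{N+1}}\,\|\nabla \psi\|_{L^2(\mathcal{G})}^2 .
\]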
Following \cite{p2}, our basic function class is
$$ V^{s} = \Big\{ \psi : \sum_{k=1}^{\infty} \lambda_k^s |\langle \psi, \phi_k\rangle |^2 < \infty\Big\},\quad s \in \mathbb{R}, %\label{5.4}
$$
where $\langle \cdot , \cdot \rangle$ is the inner product in $L^2(\mathcal{G})$, and $\{\phi_k\}_{k=1}^{\infty}$ is a complete orthonormal system for $L^2(\mathcal{G})$ which consists of the eigenfunctions:
\begin{gather*}
- \Delta \phi_k = \lambda_k \phi_k \quad \text{in } \mathcal{G} \\
\phi_k = 0 \quad \text{on }\partial \mathcal{G}.
\end{gather*} %\label{5.5}
Then, $V^0 = L^2(\mathcal{G})$, $V^1 = H_0^1(\mathcal{G})$ and $V^2 = H_0^1(\mathcal{G}) \cap H^2(\mathcal{G})$. We also note that for $1 \le p< 3$ and $ q=(3-p)/2$,
\begin{equation}
\|\psi |\psi|^{p-1}\|_{V^q} \le C_p \|\psi\|_{V^1}^p \label{5.6}
\end{equation}
for all $\psi \in V^1$, for some positive constant $C_p$. By Poincar\'e's inequality, we may include the case $\beta=0$. We can also take $\phi \equiv 1$ in \eqref{2.33}. By Galerkin approximation in terms of the $\phi_k$'s, we can prove Lemma \ref{lm2.1} with $H^1(\mathbb{R}^3) \times L^2(\mathbb{R}^3)$ replaced by $V^1 \times V^0$. Then, by the iteration and truncation method, we prove Lemma \ref{lm2.2}. If $(u_0, u_1) \in L^6\bigl(\Omega; V^1 \times V^0\bigr) $ and $u_0 \in L^{3p+3}\bigl(\Omega; L^{p+1}(\mathcal{G})\bigr)$, then we can use the same procedure as for \eqref{2.89} to obtain
\begin{equation}
E\Big(|Q(t)|^3\Big) \le M,\quad \text{for all $t\ge 0$}, \label{5.7}
\end{equation}
for some positive constant $M$. Next we define the operator $P_N$ on $V^s$ by
$$ \psi \mapsto \sum_{k=1}^{N} \langle\psi, \phi_k\rangle \phi_k. %\label{5.8}
$$
We then define $v_N = \bigl(I -P_N\bigr) u$, %\label{5.9}
so that $v_N$ is the solution of
\begin{gather*}
\begin{aligned}
& \partial_{tt}v_N + 2\alpha \partial_t v_N -\Delta v_N + \beta v_N \\%\label{5.10}
& = (I - P_N) f(t, x, u) + \sum_{k=1}^{\infty} (I - P_N) g_k(t, x,u) \frac{dB_k}{dt}, \quad (t, x) \in (0, \infty) \times \mathcal{G},
\end{aligned} \\
v_N = 0, \quad (t, x) \in (0, \infty) \times \partial \mathcal{G},\\ %\label{5.11}
v_N(0) = (I - P_N) u_0, \quad \partial_t v_N(0) = (I - P_N) u_1, \quad x \in \mathcal{G}. %\label{5.12}
\end{gather*}
By virtue of \eqref{5.6}, we have
$$ \bigl\|u |u|^{p-1}\bigr\|_{V^{q}} \le C_p \|u\|_{V^1}^p %\label{5.13}
$$
which, together with \eqref{5.7}, yields
$$ E\Big(\bigl\|\bigl(I - P_N\bigr) \bigl(u |u|^{p-1}\bigr) \bigr\|_{V^0}^2\Big) \le C_p \lambda_{N+1}^{-q} M %\label{5.14}
$$
for all $t\ge 0$. By virtue of \eqref{2.35}, \eqref{2.37} - \eqref{2.40}, it is easy to see that as $N \to \infty$,
$$ \bigl\|( I - P_N)f_1(t)\bigr\|_{V^0} \to 0, \quad \text{uniformly in $t\ge 0$}, %\label{5.15}
$$
and
$$ \sum_{k=1}^{\infty} E\Big( \bigl\|\bigl(I - P_N\bigr)g_{k,1}(t)\bigr\|_{V^0}^2 + \bigl\|\bigl(I-P_N\bigr)\phi g_{k,2}(u(t))\bigr\|_{V^0}^2\Big) \to 0 %\label{5.16}
$$
uniformly in $t \ge 0$. Using these and an equation similar to \eqref{3.7}, we find that
$$ E\Big( \bigl\|\bigl(I - P_N\bigr) u(t)\bigr\|_{V^1}^2 + \bigl\|\bigl(I - P_N\bigr) u_t(t)\bigr\|_{V^0}^2\Big) \to 0 %\label{5.17}
$$
as $ N \to \infty$ uniformly in $t$. Here the term $u|u|^{p-1}$ is handled differently from the previous procedure, because the operator $P_N$ does not preserve the polynomial structure of the term. By virtue of \eqref{5.7}, we have
\begin{equation}
E\Big(\bigl\|\bigl(P_N u(t), P_N u_t(t)\bigr)\bigr\|_{V^1 \times V^0} \Big) \le M, \quad \text{for all $t \ge 0$ and $ N \ge 1$}. \label{5.18}
\end{equation}
Now let $\epsilon>0$ be given.
Then, there are positive integers $N_k$ and positive numbers $\epsilon_k$ such that
\begin{gather*}
E\Big( \bigl\|\bigl(I - P_{N_k}\bigr) u(t)\bigr\|_{V^1} + \bigl\|\bigl(I - P_{N_k}\bigr) u_t(t)\bigr\|_{V^0}\Big) < \epsilon_k ,\\ %\label{5.19}
\sum_{k=1}^{\infty} m_k \epsilon_k < \epsilon %\label{5.20}
\end{gather*}
where $\{m_k\}$ is a sequence of increasing positive integers with $m_k \to \infty$ as $k \to \infty$. We define $\mathcal{S}_k$ to be the set of all $\mathbb{R}^2$-valued functions $\Phi \in V^1 \times V^0$ such that
$$ P_{N_k} \Phi = \Phi %\label{5.21}
\quad \mbox{and}\quad \bigl\| \Phi\bigr\|_{V^1 \times V^0} \le \dfrac{M}{m_k \epsilon_k} %\label{5.22}
$$
where $M$ is the same positive constant as in \eqref{5.18}. Clearly, each $\mathcal{S}_k$ is a compact subset of $V^1 \times V^0$. Next we define
$$ \mathcal{U} _k = \Big\{ \Theta \in V^1 \times V^0 : \| \Theta - \Phi\|_{ V^1 \times V^0} \le \frac{1}{m_k},\quad \text{for some $\Phi \in \mathcal{S}_k$}\Big\}. %\label{5.23}
$$
Then, $\bigcap_{k=1}^{\infty} \mathcal{U} _k$ is a compact subset of $V^1 \times V^0$. For each $k \ge 1$, we can write
$$ \bigl(u(t), u_t(t)\bigr) = \bigl(P_{N_k}u(t), P_{N_k}u_t(t)\bigr) + \Big(\bigl(I- P_{N_k}\bigr)u(t), \bigl(I -P_{N_k}\bigr)u_t(t)\Big). %\label{5.24}
$$
By the same argument as in the proof of tightness for the whole space, we can conclude that
$$ P\big\{ \omega : \bigl(u(t), u_t(t)\bigr) \notin \bigcap_{k=1}^{\infty} \mathcal{U}_k\big\} \le 2 \epsilon, \quad \text{for each $t \ge 0$}. %\label{5.25}
$$
The remaining procedure is the same as the one used in the previous section to prove the existence of periodic and invariant measures.

\begin{thebibliography}{00}

\bibitem{b1} Berger, M. A. and Mizel, V. J., \emph{Volterra equations with Ito integrals I}, J. Integral Eqs., Vol. 2 (1980), 187--245.

\bibitem{c1} Chow, P. L., \emph{Stochastic wave equations with polynomial nonlinearity}, Ann. Appl. Probab., Vol. 12 (2002), 361--381.

\bibitem{c2} Chow, P. L. and Khasminskii, R. Z., \emph{Stationary solutions of nonlinear stochastic evolution equations}, Stochastic Anal. Appl., Vol. 15 (1997), 671--699.

\bibitem{c3} Crauel, H., Debussche, A. and Flandoli, F., \emph{Random attractors}, J. Dynamics and Differential Equations, Vol. 9 (1997), 307--341.

\bibitem{c4} Crauel, H. and Flandoli, F., \emph{Attractors for random dynamical systems}, Prob. Th. Rel. Fields, Vol. 100 (1994), 365--393.

\bibitem{d1} Da Prato, G. and Zabczyk, J., \emph{Stochastic equations in infinite dimensions}, Cambridge University Press, Cambridge, 1992.

\bibitem{d2} Da Prato, G. and Zabczyk, J., \emph{Ergodicity for infinite dimensional systems}, Cambridge University Press, Cambridge, 1996.

\bibitem{g1} Garrido-Atienza, M. J. and Real, J., \emph{Existence and uniqueness of solutions for delay stochastic evolution equations of second order in time}, Stochastics and Dynamics, Vol. 3 (2003), 141--167.

\bibitem{g2} Gy\"ongy, I. and Rovira, C., \emph{On stochastic partial differential equations with polynomial nonlinearities}, Stoch. Stoch. Rep., Vol. 67 (1999), 123--146.

\bibitem{k1} Karatzas, I. and Shreve, S., \emph{Brownian motion and stochastic calculus}, 2nd edition, Springer, New York-Berlin-Heidelberg, 1997.

\bibitem{k2} Khasminskii, R. Z., \emph{Stochastic stability of differential equations}, Sijthoff and Noordhoff, Alphen aan den Rijn, Holland, 1980.

\bibitem{l1} Lions, J. L., \emph{Quelques m\'ethodes de r\'esolution des probl\`emes aux limites non lin\'eaires}, Dunod, Gauthier-Villars, Paris, 1969.

\bibitem{l2} Lu, K.
and Wang, B., \emph{Global attractors for the Klein-Gordon-Schr\"odinger equation in unbounded domains}, J. Differential Equations, Vol. 170 (2001), 281--316.

\bibitem{p1} Pardoux, E., \emph{\'Equations aux d\'eriv\'ees partielles stochastiques non lin\'eaires monotones}, Th\`ese, Univ. Paris XI, 1975.

\bibitem{p2} Parthasarathy, K. R., \emph{Probability measures on metric spaces}, Academic Press, New York and London, 1967.

\bibitem{r1} Reed, M. and Simon, B., \emph{Methods of modern mathematical physics}, Vol. II, Academic Press, New York-San Francisco-London, 1975.

\bibitem{t1} Temam, R., \emph{Infinite-dimensional dynamical systems in mechanics and physics}, second edition, Springer, New York-Heidelberg-Berlin, 1997.

\bibitem{w1} Wang, B., \emph{Attractors for reaction-diffusion equations in unbounded domains}, Physica D, Vol. 128 (1999), 41--52.

\end{thebibliography}
\end{document}