
A unified complex noncentral Wishart type distribution inspired by massive MIMO systems


Abstract

The eigenvalue distributions of a complex noncentral Wishart matrix S=XHX have been of interest in various real-world applications, where X is assumed to be complex matrix variate normally distributed with nonzero mean M and covariance Σ. This paper focuses on a weighted analytical representation of S that alleviates the restriction of normality, thereby allowing the practitioner to choose X to be complex matrix variate elliptically distributed. New results for eigenvalue distributions of more generalised forms are derived under this elliptical assumption, and investigated for certain members of the complex elliptical class. The distribution of the minimum eigenvalue receives particular attention. This theoretical investigation has potential impact in communications systems (where massive datasets can be conveniently formulated in matrix terms), in particular for the case where the noncentral matrix has rank one, which is useful in practice.

Introduction

Communications systems with multiple-input-multiple-output (MIMO) design have become very popular since they allow higher bit rates and because of their applications in the analysis of signal-to-noise ratio (SNR). The research literature insists that MIMO systems be modelled using complex matrix variate distributions (see Ratnarajah and Vaillancourt (2005); Bekker et al. (2018); Ferreira et al. (2020)), in particular because of the flexibility these distributions provide in handling the massive amounts of data that spring forth from these MIMO systems. He et al. (2016) in particular mentions the unification of random matrix theory (RMT) models, and draws a comparison between such unified models and so-called big data analytics. The authors make specific mention of one of the foundations of big data analytics in communications systems, namely matrix analysis. Zhang and Qiu (2015) and He et al. (2018) also discuss the implementation and use of large RMT as building blocks to model the massive (big) data arising from massive MIMO systems, mentioning several benefits of RMT in this regard.

In a practical sense, let X denote the channel propagation matrix in a MIMO channel context, with n “inputs” and p “outputs”, colloquially referred to as “receivers” and “transmitters” respectively. Usually, the coefficients of X are assumed to be complex matrix variate normally distributed with E(X)=0, which reflects the standard i.i.d. Rayleigh fading assumption. However, in practice MIMO channels do not always exhibit this behaviour: a line-of-sight connection between the transmitters and receivers (Kang and Alouini 2006; Zhou et al. 2015; Jayaweera and Poor 2003) motivates modelling the channel matrix X with non-zero mean, to account for environments with strong line-of-sight paths between transmitters and receivers. In order to encompass all channel characteristics, Taricco and Riegler (2011) suggest employing correlated Rician fading models, which directly pertains to modelling X with a non-zero mean. It is with these thoughts in mind that this paper assumes E(X)=M≠0.

When evaluating different performance measures of MIMO systems, the complex channel coefficients have thus far been taken to be complex matrix variate normal distributed. de Souza and Yacoub (2008) stated that the Rayleigh probability density function (pdf) (assumed within a signal fading environment) follows from a central limit theorem argument for a large number of partial waves, whereby the resultant process is decomposed into two orthogonal zero-mean normal random processes with equal standard deviations. This is an approximation, and the complex normal assumption is restrictive since the number of interfering signals is not always large. Thus an assumption more general than complex matrix variate normal may not be that far from reality (see also Ollila et al. (2011)). This paper challenges the assumption of a channel being fed by normal inputs, and sets the platform for introducing previously unconsidered models to the MIMO communications systems domain. Indeed, He et al. (2016) and Qiu (2017) explicitly ask what the consequences of analyses are when the entries of X are not normal. The contribution of this paper aims to assist in answering this question.

The Wishart distribution emanating from the underlying complex normal channel matrix X is of particular interest, and has been studied extensively in the literature (see, for example, James (1964); Gupta and Varga (1995); Ratnarajah and Vaillancourt (2005)). However, Choi et al. (2007) discussed the viable and necessary contribution of the complex matrix variate t distribution as an assumption for the underlying channel matrix. This paper focuses on S=XHX, but from a generalized view, assuming \(\mathbf {X}\in \mathbb {C}_{1}^{n\times p}\) to be complex matrix variate elliptically distributed, to address the criticism against the questionable use of the normal model. The complex matrix variate elliptical distribution contains the well-studied complex matrix variate normal distribution as a special case, but enjoys the flexibility of having different members which may serve as alternatives to the well-studied normal case. The complex matrix variate t- and slash distributions are also members of the complex elliptical class and bear close resemblance to the well-studied normal case; with this in mind, results pertaining to the underlying complex channel matrix distributed according to these distributions are presented. The distribution under consideration, that is, the distribution of S, is referred to as a complex noncentral Wishart type distribution.

He et al. (2016, 2018) mention that a crucial point of consideration for big data analytics is the “big” data matrix (in this case, X, or effectively S) and the study of its eigenvalues. The distribution of the minimum eigenvalue of the noncentral Wishart type distribution is thus investigated and expressions for the corresponding cumulative distribution functions (cdfs) are derived. The distribution of the minimum eigenvalue of a noncentral Wishart form is crucial for the design and analysis of certain specialised MIMO systems (see Heath and Love (2005); Dharmawansa and McKay (2011)). For computational convenience the focus is on matrices with a rank one noncentral matrix parameter. The low rank assumption is reportedly a good model in practice (see Hansen and Bolcskei (2004)), and allows for tractable expressions in implementable computation of the derived results.

The paper is organized as follows. “Complex noncentral Wishart type” section contains some preliminary results required for the derivations in this paper. The main results relating to the distribution of the complex noncentral Wishart type distribution are also derived and some particular cases highlighted. In “Minimum eigenvalue cdf under rank one noncentrality” section the cdf of the minimum eigenvalue of the newly derived distributions is presented with special cases. Numerical experiments are discussed in “Numerical experiments” section, followed by some conclusions.

Complex noncentral Wishart type

In this section, the definition of the complex matrix variate elliptical distribution is presented, along with a lemma useful for the construction of the complex matrix variate elliptical model. Subsequently, the derived complex noncentral Wishart type distribution is presented along with the corresponding joint eigenvalue distribution. Some particular cases, which are of interest to the practitioner, are highlighted.

The complex matrix variate elliptical distribution, which contains the well-studied complex matrix variate normal distribution as a special case, is defined next (see Bekker et al. (2018); Ferreira et al. (2020)).

Definition 1

The complex matrix variate \(\mathbf {X}\in \mathbb {C}_{1}^{n\times p}\), whose distribution is absolutely continuous, has the complex matrix variate elliptical distribution with parameters \(\mathbf {M}\in \mathbb {C} _{1}^{n\times p}\), \(\mathbf {\Phi }\in \mathbb {C}_{2}^{n\times n}\), \(\mathbf { \Sigma }\in \mathbb {C}_{2}^{p\times p}\), denoted by \(\mathbf {X}\sim \mathcal { C}E_{n\times p}(\mathbf {M},\mathbf {\Phi \otimes \Sigma },g)\), if it has the following pdf:

$$ h(\mathbf{X})=\frac{1}{\left(\det \mathbf{\Phi }\right)^{p}\left(\det \mathbf{\Sigma }\right)^{n}}\;g\left[ -tr\left(\mathbf{\Sigma }^{-1}(\mathbf{X}-\mathbf{M})^{H}\Phi^{-1}(\mathbf{X}-\mathbf{M})\right) \right] $$
(1)

with g(·) a generator function.

Chu (1973) and Gupta and Varga (1995) demonstrate that real elliptical distributions can always be expanded as an integral of a set of normal pdfs. We report the result of Provost and Cheong (2002) as a useful lemma, defining the complex matrix variate elliptical distribution as a weighted representation of complex matrix variate normal pdfs. This representation can be used to explore the distribution of S when the distribution of X is that of any member of the complex matrix variate elliptical class.

Lemma 1

If \(\mathbf {X}\sim \mathcal {C}E_{n\times p}(\mathbf {M}, \mathbf {\Phi \otimes \Sigma },g)\) with pdf h(X) (see (1)), then there exists a scalar weight function \(\mathcal {W}(\cdot)\) on \(\mathbb {R}^{+}\) such that

$$ h(\mathbf{X})=\int\limits_{\mathbb{R}^{+}}\mathcal{W}(t)f_{\mathcal{C} N_{n\times p}(\mathbf{M},\mathbf{\Phi \otimes }t^{-1}\mathbf{\Sigma })}(\mathbf{X|}t)dt $$
(2)

where \(\mathbf {X|}t\sim \mathcal {C}N_{n\times p}\left (\mathbf {M},\mathbf {\Phi \otimes }t^{-1}\mathbf {\Sigma }\right)\) has the complex normal distribution with pdf (see James (1964))

$$ f_{\mathcal{C}N_{n\times p}(\mathbf{M},\mathbf{\Phi \otimes}t^{-1}\mathbf{\Sigma })}(\mathbf{X|}t)=\frac{1}{\pi^{pn}\det \left(\mathbf{\Phi}\right)^{p}\det \left(t^{-1}\mathbf{\Sigma }\right)^{n}} etr\left[-\left(t\mathbf{\Sigma }^{-1}(\mathbf{X}-\mathbf{M})^{H}\mathbf{\Phi }^{-1}(\mathbf{X}-\mathbf{M})\right) \right] $$
(3)

and the weight function \(\mathcal {W}(\cdot)\) is given by

$$ \mathcal{W}(t)=\pi^{np}t^{-np}\mathcal{L}^{-1}\left\{ g\left[-{tr}\left(\mathbf{\Sigma }^{-1}(\mathbf{X}-\mathbf{M})^{H}\mathbf{\Phi }^{-1}(\mathbf{X}-\mathbf{M})\right) \right] \right\} $$

where \(\mathcal {L}\) is the Laplace transform operator.

Three special cases of the complex matrix variate elliptical model are of interest in this paper.

  1.

    Firstly, the complex random matrix \(\mathbf {X}\in \mathbb {C}_{1}^{n\times p}\) has the complex matrix variate normal distribution with weight function \(\mathcal {W}(\cdot)\) in Lemma 1 given by

    $$ \mathcal{W}(t)=\delta (t-1) $$
    (4)

    where δ(·) is the Dirac delta function.

  2.

    Secondly, \(\mathbf {X}\in \mathbb {C}_{1}^{n\times p}\) has the complex matrix variate t distribution (see Provost and Cheong (2002)) with parameters \(\mathbf {M}\in \mathbb {C}_{1}^{n\times p} \), \(\mathbf {\Phi }\in \mathbb {C}_{2}^{n\times n}\), \(\mathbf {\Sigma }\in \mathbb {C}_{2}^{p\times p}\) and degrees of freedom v>0, denoted by \( \mathbf {X}\sim \mathcal {C}t_{n\times p}(\mathbf {M},\mathbf {\Phi \otimes \Sigma },v)\), with pdf

    $$ f(\mathbf{X})=\frac{v^{np}\mathcal{C}\Gamma \left(np+v\right) }{\pi^{np} \mathcal{C}\Gamma_{p}(v)}\left\{ 1+\frac{1}{v}{tr}\left(\mathbf{\Sigma }^{-1}(\mathbf{X}-\mathbf{M})^{H}\mathbf{\Phi }^{-1}(\mathbf{X}- \mathbf{M})\right) \right\}^{-(np+v)} $$

    where \(\mathcal {C}\Gamma _{p}(a)\) denotes the complex multivariate gamma function, and Γ(·) denotes the usual gamma function. In this case the weight function \(\mathcal {W}(\cdot)\) in Lemma 1 is given by

    $$ \mathcal{W}(t)=\frac{\left(\frac{v}{2}\right)^{\frac{v}{2}}}{\Gamma (\frac{v}{2})}t^{\frac{v}{2}-1}e^{-t\frac{v}{2}}. $$
    (5)
  3.

    Thirdly, \(\mathbf {X}\in \mathbb {C}_{1}^{n\times p}\) has the complex matrix variate slash distribution (see Lachos and Labra (2014)) with the parameters \(\mathbf {M}\in \mathbb {C}_{1}^{n\times p}\), \(\mathbf { \Phi }\in \mathbb {C}_{2}^{n\times n}\), \(\mathbf {\Sigma }\in \mathbb {C}_{2}^{p\times p}\) and shape parameter b>0, denoted by \(\mathbf {X}\sim \mathcal {C}s_{n\times p}(\mathbf {M},\mathbf {\Phi \otimes \Sigma },b)\), with pdf

    $$ f(\mathbf{X})=\int\limits_{0}^{1}bt^{b-1}f_{\mathcal{C}N_{n\times p}(\mathbf{ M},\mathbf{\Phi \otimes }t^{-1}\mathbf{\Sigma })}(\mathbf{X|}t)dt $$

    In this case the weight function \(\mathcal {W}(\cdot)\) in Lemma 1 is given by

    $$ \mathcal{W}(t)=bt^{b-1}. $$
    (6)
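Lemma 1 turns each of these three weights into a two-step sampler: draw t from \(\mathcal {W}(t)\) (a point mass at 1, a Gamma(v/2, rate v/2) variate, or U^{1/b} with U uniform, respectively), then draw X|t from the complex matrix variate normal distribution. The Python sketch below illustrates this; the function names and parameter defaults are our own illustrative choices, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_complex_normal(M, Phi, Sigma, rng):
    """Draw X ~ CN_{n x p}(M, Phi (x) Sigma): X = M + A Z B^H, where
    Phi = A A^H, Sigma = B B^H and Z has i.i.d. CN(0, 1) entries."""
    n, p = M.shape
    A = np.linalg.cholesky(Phi)
    B = np.linalg.cholesky(Sigma)
    Z = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)
    return M + A @ Z @ B.conj().T

def sample_elliptical(M, Phi, Sigma, rng, weight="normal", v=5.0, b=2.0):
    """Lemma 1 as a sampler: t ~ W(t), then X | t ~ CN(M, Phi (x) t^{-1} Sigma)."""
    if weight == "normal":          # Dirac weight (4): W(t) = delta(t - 1)
        t = 1.0
    elif weight == "t":             # weight (5): t ~ Gamma(v/2, rate v/2)
        t = rng.gamma(v / 2, 2.0 / v)
    elif weight == "slash":         # weight (6): W(t) = b t^{b-1} on (0, 1)
        t = rng.uniform() ** (1.0 / b)
    return sample_complex_normal(M, Phi, Sigma / t, rng)
```

Averaging many such draws recovers M under any of the three weights, since every member of the class retains E(X)=M.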

The case where Φ=In in Lemma 1 is of particular interest. Then Σ represents the covariance structure of the columns of the random matrix X, in other words, the covariance structure of the transmitters. Subsequently, the complex noncentral Wishart type distribution is derived (the proof is contained in the Appendix).

Theorem 1

Suppose that \(\mathbf {X}\in \mathbb {C}_{1}^{n\times p} (n\geq p)\) is a random matrix distributed as \(\mathcal {C}E_{n\times p}(\mathbf {M},\mathbf {I}_{n}\mathbf {\otimes \Sigma },g)\). Then \(\mathbf {S=X}^{H} \mathbf {X}\in \mathbb {C}_{2}^{p\times p}\) has a complex noncentral Wishart type distribution with pdf

$$\begin{array}{@{}rcl@{}} f\left(\mathbf{S}\right) &=&\frac{\det \left(\mathbf{S}\right)^{n-p}}{ \mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}} \notag \\ &&\times \int\limits_{\mathbb{R}^{+}}t^{np}{etr}\left(-t\left(\mathbf{ \Sigma }^{-1}\mathbf{S+\Delta }\right) \right) \text{ }_{0}F_{1}\left(n;t^{2}\mathbf{\Delta \Sigma }^{-1}\mathbf{S}\right) \mathcal{W}\left(t\right) dt \end{array} $$
(7)

where Δ=Σ−1MHM denotes the noncentral matrix parameter and 0F1(·) denotes the complex hypergeometric function of matrix argument (see Constantine (1963)). This distribution is denoted by S∼ISCWp(n,M,In⊗Σ) (Integral Series of Complex Wishart).

Remark 1

Suppose that M=0. Then Δ=0, and the pdf (7) simplifies to

$$ f^{central}\left(\mathbf{S}\right) =\int\limits_{\mathbb{R}^{+}}\frac{\det \left(\mathbf{S}\right)^{n-p}{etr}\left(-\left(t^{-1}\mathbf{\Sigma }\right)^{-1}\mathbf{S}\right) }{\mathcal{C}\Gamma_{p}(n)\det \left(t^{-1} \mathbf{\Sigma }\right)^{n}}\mathcal{W}\left(t\right) dt, $$
(8)

\(\mathbf {S}\in \mathbb {C}_{2}^{p\times p},\) which reflects the distribution as in Ferreira et al. (2020), eq. 2.2.

Remark 2

The complex noncentral Wishart type distribution (see (7)) can be written in terms of the complex central Wishart type distribution:

$$ f\left(\mathbf{S}\right) =\int\limits_{\mathbb{R}^{+}}f^{central}\left(\mathbf{S}\right) {etr}\left(-t\mathbf{\Delta }\right) \text{ } _{0}F_{1}\left(n;t^{2}\mathbf{\Delta \Sigma }^{-1}\mathbf{S}\right) \mathcal{W}\left(t\right) dt, $$

where fcentral(S) denotes the pdf of the central complex Wishart type distribution (see (8)).

Special cases of the distribution in (7) are highlighted next.

  1.

    By choosing \(\mathcal {W}\left (t\right) \) as the Dirac delta function (4), (7) simplifies to

    $$ f\left(\mathbf{S}\right) =\frac{\det \left(\mathbf{S}\right)^{n-p}}{ \mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}{etr} \left(-\left(\mathbf{\Sigma }^{-1}\mathbf{S+\Delta }\right) \right) \text{ }_{0}F_{1}\left(n;\mathbf{\Delta \Sigma }^{-1}\mathbf{S}\right) $$

    where \(\mathbf {S}\in \mathbb {C}_{2}^{p\times p}\), which is the pdf of the complex noncentral Wishart distribution as in James (1964).

  2.

    By choosing \(\mathcal {W}\left (t\right) \) as the t distribution weight (5), expanding the complex hypergeometric function per definition (see Constantine (1963)), and using Gradshteyn and Ryzhik (2007), p. 815, eq. 7.522.9 and eq. 7.525.1, (7) simplifies to

    $$\begin{array}{@{}rcl@{}} &&f\left(\mathbf{S}\right) \\ &=&\frac{\left(\frac{v}{2}\right)^{\frac{v}{2}}}{ \Gamma (\frac{v}{2})}\frac{\det \left(\mathbf{S}\right)^{n-p}}{\mathcal{C} \Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{\kappa }\frac{C_{\kappa }\left(\mathbf{\Delta \Sigma }^{-1}\mathbf{S} \right) }{k!\left[ n\right]_{\kappa }}\int\limits_{\mathbb{R}^{+}}t^{np+ \frac{v}{2}+2k-1}\exp \left[ -t\left({tr}\left(\mathbf{\Sigma }^{-1}\mathbf{S+\Delta }\right) +\frac{v}{2}\right) \right] dt \\ &=&\frac{\left(\frac{v}{2}\right)^{\frac{v}{2}}}{\Gamma (\frac{v}{2})} \frac{\det \left(\mathbf{S}\right)^{n-p}}{\mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{\kappa }\frac{ C_{\kappa }\left(\mathbf{\Delta \Sigma }^{-1}\mathbf{S}\right) }{k!\left[ n \right]_{\kappa }}\frac{\Gamma \left(np+\frac{v}{2}+2k\right) }{\left({tr}\left(\mathbf{\Sigma }^{-1}\mathbf{S+\Delta }\right) +\frac{v}{2}\right)^{np+\frac{v}{2}+2k}} \end{array} $$

    where \(\mathbf {S}\in \mathbb {C}_{2}^{p\times p}.\)

  3.

    Similarly, by choosing \(\mathcal {W}\left (t\right) \) as the slash distribution weight (6), expanding the complex hypergeometric function per definition, and using Gradshteyn and Ryzhik (2007), p. 346, eq. 3.381.1, (7) simplifies to

    $$\begin{array}{@{}rcl@{}} f\left(\mathbf{S}\right) &=&\frac{b\det \left(\mathbf{S}\right)^{n-p}}{ \mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}} \sum_{k=0}^{\infty }\sum_{\kappa }\frac{C_{\kappa }\left(\mathbf{\Delta \Sigma }^{-1}\mathbf{S}\right) }{k!\left[ n\right]_{\kappa }} \int\limits_{0}^{1}t^{np+b+2k-1}\exp \left[ -t{tr}\left(\mathbf{\Sigma }^{-1}\mathbf{S+\Delta }\right) \right] dt \\ &=&\frac{b\det \left(\mathbf{S}\right)^{n-p}}{\mathcal{C}\Gamma _{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{\kappa }\frac{C_{\kappa }\left(\mathbf{\Delta \Sigma }^{-1}\mathbf{S} \right) }{k!\left[ n\right]_{\kappa }}\frac{\gamma \left(np+b+2k,{tr} \left(\mathbf{\Sigma }^{-1}\mathbf{S+\Delta }\right) \right) }{{tr} \left(\mathbf{\Sigma }^{-1}\mathbf{S+\Delta }\right)^{np+b+2k}} \end{array} $$

    where γ(·,·) denotes the lower incomplete gamma function (see Gradshteyn and Ryzhik (2007), p. 899, eq. 8.350.1), and \(\mathbf {S}\in \mathbb {C}_{2}^{p\times p}.\)
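As a numerical check of Theorem 1, note that for p=1 the matrix-argument 0F1 in (7) reduces to the scalar hypergeometric limit function (scipy's hyp0f1), so the pdf becomes a one-dimensional integral over t. The sketch below (helper names are our own; the Dirac weight (4) collapses the integral entirely) evaluates (7) with noncentrality δ=|M|²/σ²; integrating the result over s should return approximately 1 for any valid weight.

```python
import numpy as np
from scipy import integrate, special

def pdf_p1(s, n, sigma2, delta, weight_pdf=None):
    """pdf (7) for p = 1: S = X^H X with X in C^{n x 1}, Sigma = sigma2,
    and noncentrality delta = |M|^2 / sigma2.  weight_pdf=None selects the
    Dirac weight (4), i.e. the ordinary complex noncentral Wishart case."""
    def f_given_t(t):
        expo = -t * (s / sigma2 + delta)
        if expo < -680:           # guard: avoid 0 * inf once exp() underflows
            return 0.0
        return (s ** (n - 1) * t ** n / (special.gamma(n) * sigma2 ** n)
                * np.exp(expo)
                * special.hyp0f1(n, t ** 2 * delta * s / sigma2))
    if weight_pdf is None:
        return f_given_t(1.0)
    val, _ = integrate.quad(lambda t: f_given_t(t) * weight_pdf(t), 0, np.inf)
    return val

def t_weight(v):
    """Mixing weight (5) of the complex matrix variate t distribution
    (a Gamma(v/2, rate v/2) density in t)."""
    c = (v / 2) ** (v / 2) / special.gamma(v / 2)
    return lambda t: c * t ** (v / 2 - 1) * np.exp(-t * v / 2)
```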

Eigenvalue distributions arising from complex Wishart random matrices are of interest in a variety of fields, especially in wireless communications (see Dharmawansa and McKay (2011) and references therein). Expressions for the joint pdf of the eigenvalues of S (see (7)) and some special cases are derived (the proof is contained in the Appendix). Note that the ordered eigenvalues of S are denoted by λ1>λ2>...>λp>0, and the ordered eigenvalues of the noncentral matrix parameter Δ are denoted by μ1>μ2>...>μp>0.

Theorem 2

Suppose that \(\mathbf {S}\in \mathbb {C}_{2}^{p\times p}\) is distributed with pdf (7), and let λ1>λ2>...>λp>0 represent the ordered eigenvalues of S. Then the eigenvalues of S, Λ=diag(λ1,λ2,...,λp), have joint pdf

$$\begin{array}{@{}rcl@{}} f(\mathbf{\Lambda }) &=&\frac{\pi^{p\left(p-1\right) }\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right)^{2}\right) \det \left(\mathbf{\Lambda }\right)^{n-p}}{\mathcal{C}\Gamma_{p}(p) \mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}} \int\limits_{\mathbb{R}^{+}}t^{np}{etr}\left(-t\mathbf{\Delta }\right) \\ &&\times \int\limits_{\mathbf{E}\in U\left(p\right) }{etr}\left(-t \mathbf{\Sigma }^{-1}\mathbf{E\Lambda E}^{H}\right) \text{ }_{0}F_{1}\left(n;t^{2}\mathbf{\Delta \Sigma }^{-1}\mathbf{E\Lambda E}^{H}\right) d\mathbf{E} \mathcal{W}\left(t\right) dt \notag \end{array} $$
(9)

where Δ denotes the noncentral matrix parameter, and U(p) denotes the unitary manifold (see Appendix).

In the following corollary, particular attention is given to the case Σ=σ2Ip. This assumption is meaningful within the MIMO paradigm when the practitioner may assume that the transmitters are sufficiently far apart spatially for an assumption of independence to be made (see also Kang and Alouini (2006)) (the proof is contained in the Appendix).

Corollary 1

Suppose that \(\mathbf {S}\in \mathbb {C} _{2}^{p\times p}\) is distributed with pdf (7), and let λ1>λ2>...>λp>0 represent the ordered eigenvalues of S. Furthermore suppose that Σ=σ2Ip. Then the eigenvalues of S, Λ=diag(λ1,λ2,...,λp), have joint pdf

$$\begin{array}{@{}rcl@{}} &&f(\mathbf{\Lambda }) \notag \\ &=&\frac{\pi^{p\left(p-1\right) }}{\left(\left(n-p\right) !\right)^{p}} \frac{\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right) \right) \det \left(\mathbf{\Lambda }\right)^{n-p}}{\left(\prod\limits_{k< l}^{p}\left(\mu_{k}-\mu_{l}\right) \right) \sigma^{2np-p^{2}+1}}\int\limits_{\mathbb{R}^{+}}t^{np-p^{2}+1}{etr}\left(-t\left(\mathbf{\Delta +}\sigma^{-2}\mathbf{\Lambda }\right) \right) \notag \\ &&\times \det \left(\text{ }_{0}F_{1}\left(n-p+1;t^{2}\sigma^{-2}\mu_{j}\lambda _{i}\right) \right) \mathcal{W}\left(t\right) dt \notag \\ &=&\frac{\pi^{p\left(p-1\right) }}{\left(\left(n-p\right) !\right)^{p}} \mathcal{K}\left(\mathbf{\Lambda }\right) \int\limits_{\mathbb{R} ^{+}}t^{np-p^{2}+1}{etr}\left(-t\left(\mathbf{\Delta +}\sigma^{-2} \mathbf{\Lambda }\right) \right) \notag \\ &&\times \det \left(\text{ }_{0}F_{1}\left(n-p+1;t^{2}\sigma^{-2}\mu_{j}\lambda_{i}\right) \right) \mathcal{W}\left(t\right) dt \end{array} $$
(10)

where \(\mathbf {\Delta }\in \mathbb {C}_{2}^{p\times p}\) denotes the noncentral matrix parameter, 0F1(·;·) denotes the confluent hypergeometric function of scalar argument, and where

$$ \mathcal{K}\left(\mathbf{\Lambda }\right) =\frac{\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right) \right) \det \left(\mathbf{\Lambda }\right)^{n-p}}{\left(\prod\limits_{k< l}^{p}\left(\mu_{k}-\mu_{l}\right) \right) \sigma^{2np-p^{2}+1}}. $$
(11)

For interest, special cases of the distribution in (10) are highlighted next.

  1.

    By choosing \(\mathcal {W}\left (t\right) \) as the Dirac delta function (4), observe from (10) and (11) that

    $$f(\mathbf{\Lambda})=\frac{\pi^{p\left(p-1\right)}}{\left(\left(n-p\right) !\right)^{p}}\mathcal{K}\left(\mathbf{\Lambda }\right) {etr}\left(-\left(\mathbf{\Delta +}\sigma^{-2}\mathbf{\Lambda }\right) \right) \det \left(\text{ }_{0}F_{1}\left(n-p+1;\sigma^{-2}\mu_{j}\lambda_{i}\right) \right). $$

    When σ2=1, this result simplifies to eq. 2.52, p. 41, of McKay (2006).

  2.

    By choosing \(\mathcal {W}\left (t\right) \) as the t distribution weight (5) and using (11), (10) simplifies to

    $$\begin{array}{@{}rcl@{}} f(\mathbf{\Lambda }) &=&\frac{\left(\frac{v}{2}\right)^{\frac{v}{2}}\pi^{p\left(p-1\right) }}{\Gamma (\frac{v}{2})\left(\left(n-p\right)!\right)^{p}}\mathcal{K}\left(\mathbf{\Lambda }\right) \\ &&\times \int\limits_{\mathbb{R}^{+}}t^{np-p^{2}+\frac{v}{2}}{etr} \left(-t\left(\mathbf{\Delta +}\sigma^{-2}\mathbf{\mathbf{\Lambda }}\right) \right) e^{-t\frac{v}{2}} \\ &&\times \det \left(\text{ }_{0}F_{1}\left(n-p+1;t^{2}\sigma^{-2}\mu_{j}\lambda_{i}\right) \right) dt. \end{array} $$
  3.

    By choosing \(\mathcal {W}\left (t\right) \) as the slash distribution weight (6) and using (11), (10) simplifies to

    $$\begin{array}{@{}rcl@{}} f(\mathbf{\Lambda }) &=&\frac{b\pi^{p\left(p-1\right) }}{\left(\left(n-p\right) !\right)^{p}}\mathcal{K}\left(\mathbf{\Lambda }\right) \\ &&\times \int\limits_{0}^{1}t^{np-p^{2}+b}{etr}\left(-t\left(\mathbf{\Delta +}\sigma^{-2}\mathbf{\mathbf{\Lambda }}\right) \right) \det \left(\text{ }_{0}F_{1}\left(n-p+1;t^{2}\sigma^{-2}\mu_{j}\lambda_{i}\right) \right) dt. \end{array} $$

Suppose now that the noncentral matrix Δ has L<p non-zero eigenvalues; thus rank(Δ)=L<p. For the case Σ=σ2Ip, the joint pdf of the eigenvalues of S, Λ=diag(λ1,λ2,...,λp), is presented in the following theorem (the proof is contained in the Appendix).

Theorem 3

Suppose that S is distributed with pdf (7), and let λ1>λ2>...>λp>0 represent the ordered eigenvalues of \(\mathbf {S}\in \mathbb {C}_{2}^{p\times p}\). Furthermore suppose that Σ=σ2Ip, and that Δ has arbitrary rank L<p with eigenvalues μ1>μ2>...>μL>0. Then the eigenvalues of S, Λ=diag(λ1,λ2,...,λp), have joint pdf

$$\begin{array}{@{}rcl@{}} f(\mathbf{\Lambda })=&&\frac{\pi^{p\left(p-1\right) }\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right) \right) \det \left(\mathbf{\Lambda }\right)^{n-p}}{\left(\left(n-p\right) !\right) ^{p}\left(\prod\limits_{k< l}^{L}\left(\mu_{k}-\mu_{l}\right) \right) \left(\prod\limits_{i=1}^{L}\mu_{i}^{p-L}\right) \mathcal{C}\Gamma _{p-L}(p-L)\sigma^{2np-p^{2}+1}} \\ &&\times \int\limits_{\mathbb{R}^{+}}t^{np-p^{2}+1} {etr}\left(-t\left(\mathbf{\Delta +}\sigma^{-2}\mathbf{\Lambda } \right) \right) \det \left(\mathbf{T}\right) \mathcal{W}\left(t\right) dt \end{array} $$

where Δ denotes the noncentral matrix parameter, and where T is a p×p matrix with (i,j)th entry

$$ \left\{\mathbf{T}\right\}_{i,j}= \left\{\begin{array}{lll} _{0}F_{1}\left(n-p+1 ;t^{2}\sigma^{-2}\mu_{i}\lambda_{j}\right) \quad\quad & i=1,\ldots,p \quad\quad & j=1,\ldots,L\\ \frac{\left(t^{2}\lambda_{i}\right)^{k}\left(n-p\right) !}{\left(n-p+k\right)!} \quad\quad & i=1,\ldots,p \quad\quad & j=L+1,\ldots,p \end{array}\right.. $$
(12)

Minimum eigenvalue cdf under rank one noncentrality

The distribution of the minimum eigenvalue from a complex Wishart random matrix is important in certain MIMO designs (see Heath and Love (2005)), and is thus of interest here. For computational convenience, we assume that the noncentral matrix has rank one; thus \(\mathbf {\Delta \Sigma } ^{-1}\in \mathbb {C}_{1}^{p\times p}\) has rank one and is represented via its eigendecomposition as

$$ \mathbf{\Delta \Sigma }^{-1}=\mu \mathbf{\gamma \gamma }^{H} $$
(13)

where \(\mathbf {\gamma }\in \mathbb {C}_{1}^{p\times 1}\) and γHγ=1 (see also Dharmawansa and McKay (2011)). In (13), μ denotes the single non-zero eigenvalue of ΔΣ−1. The following contributions are made in this section:

  • The derivation of the exact cdf of the minimum eigenvalue of S=XHX∼ISCWp(n,M,In⊗Σ) for the cases \(\mathbf {X}\in \mathbb {C} _{1}^{n\times p}\), \(\mathbf {X}\in \mathbb {C}_{1}^{n\times n}\), and \(\mathbf {X }\in \mathbb {C}_{1}^{n\times 2}\), assuming \(\mathbf {\Delta \Sigma } ^{-1}\in \mathbb {C}_{1}^{p\times p}\) has rank one; and

  • Exact results of the minimum eigenvalue of S as described, for the special cases of (4), (5), and (6).
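The rank-one structure (13) is easy to realise numerically: if M = a bH is rank one and Σ = σ2Ip, then ΔΣ−1 = (‖a‖2‖b‖2/σ4) γγH with γ = b/‖b‖. A short numpy sketch of this construction follows; the particular a and b are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma2 = 4, 3, 1.5

# An assumed rank-one mean M = a b^H, purely for illustration.
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = rng.standard_normal(p) + 1j * rng.standard_normal(p)
M = np.outer(a, b.conj())

Sigma = sigma2 * np.eye(p)
Delta = np.linalg.inv(Sigma) @ M.conj().T @ M      # noncentral matrix parameter
DS = Delta @ np.linalg.inv(Sigma)                  # Delta Sigma^{-1} of eq. (13)

# Rank-one eigendecomposition (13): DS = mu * gamma gamma^H, gamma^H gamma = 1.
gamma = b / np.linalg.norm(b)
mu = np.linalg.norm(a) ** 2 * np.linalg.norm(b) ** 2 / sigma2 ** 2
```

Here μ is the single non-zero eigenvalue of ΔΣ−1, as in (13).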

To derive the cdf of the minimum eigenvalue of \(\mathbf {S}\in \mathbb {C}_{2}^{p\times p}\) under this assumption, the following approach is employed:

$$F_{\min }\left(y\right) =1-P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right). $$

Knowing that

$$P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) =P\left(\mathbf{S }>y\mathbf{I}_{p}\right) $$

the cdf of the minimum eigenvalue can be found using (7) directly, thereby avoiding the cumbersome derivation and computation of joint eigenvalue pdfs such as (9) and of the subsequent marginal distributions.
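This complement-probability route can also be sanity-checked by simulation: sample X through the scale mixture of Lemma 1 (here with Φ=In and Σ=σ2Ip), form S=XHX, and count how often λmin(S)≤y. The Monte Carlo helper below is our own illustrative sketch, not part of the paper; for the central square normal case (M=0, n=p, σ2=1) the estimate can be compared with the classical law P(λmin>y)=e−ny for a central complex Wishart matrix with identity covariance.

```python
import numpy as np

def fmin_mc(y, n, p, M, sigma2=1.0, weight="normal", v=6.0, b=2.0,
            reps=20000, seed=0):
    """Monte Carlo estimate of F_min(y) = 1 - P(lambda_min(S) > y) for
    S = X^H X, X ~ CE_{n x p}(M, I_n (x) sigma2 I_p, g), sampled via Lemma 1."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        if weight == "normal":            # Dirac weight (4)
            t = 1.0
        elif weight == "t":               # weight (5): t ~ Gamma(v/2, rate v/2)
            t = rng.gamma(v / 2, 2.0 / v)
        else:                             # slash weight (6): t = U^{1/b}
            t = rng.uniform() ** (1.0 / b)
        Z = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2)
        X = M + np.sqrt(sigma2 / t) * Z   # X | t ~ CN(M, I_n (x) t^{-1} sigma2 I_p)
        hits += np.linalg.eigvalsh(X.conj().T @ X)[0] <= y
    return hits / reps
```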

For the complex noncentral Wishart type distribution with pdf (7), the cdf of the minimum eigenvalue is derived next (the proof is contained in the Appendix 1).

Theorem 4

Suppose that \(\mathbf {X}\in \mathbb {C}_{1}^{n\times p}\) is distributed as \(\mathcal {C}E_{n\times p}(\mathbf {M},\mathbf {I}_{n}\mathbf { \otimes \Sigma },g)\), where \(\mathbf {M}\in \mathbb {C}_{1}^{n\times p}\) has rank one, and S=XHX∼ISCWp(n,M,In⊗Σ) with pdf (7). The cdf of λmin(S) is given by

$$ F_{\min }\left(y\right) =1-\int\limits_{\mathbb{R}^{+}}y^{np}\frac{{etr }\left(-t\mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma } ^{-1}\right) }{\mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right) ^{n}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{t^{np+2k}\left(y\mu \right) ^{k}}{k!\left(n\right)_{k}}\binom{k}{r}\mathcal{Q}_{n,p,t}^{r}\left(y\right) \mathcal{W}\left(t\right) dt $$
(14)

where y>0, Δ denotes the noncentral matrix parameter, and

$$ \mathcal{Q}_{n,p,t}^{r}\left(y\right) =\int\limits_{\mathbf{Y}}\det \left(\mathbf{I}_{p}\mathbf{+Y}\right)^{n-p}{etr}\left(-ty\mathbf{\Sigma } ^{-1}\mathbf{Y}\right) {tr}^{r}\left(\mathbf{\gamma \gamma }^{H} \mathbf{Y}\right) d\mathbf{Y} $$
(15)

where \(\mathbf {Y\in }\mathbb {C}_{2}^{p\times p}\).

As before, special cases of the distribution in (14) are highlighted next.

  1.

    By choosing \(\mathcal {W}\left (t\right) \) as the Dirac delta function (4), (14) simplifies to the result by Dharmawansa and McKay (2011).

  2.

    By choosing \(\mathcal {W}\left (t\right) \) as the t distribution weight (5), observe from (14) that

    $$\begin{array}{@{}rcl@{}} &&P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) \\ &=&\frac{\left(\frac{v}{2}\right)^{\frac{v}{2}}}{\Gamma (\frac{v}{2})}\int\limits_{\mathbb{ R}^{+}}y^{np}\frac{{etr}\left(-t\mathbf{\Delta }\right) e^{-t\frac{v}{2}}{etr}\left(-ty\mathbf{\Sigma }^{-1}\right) }{\mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{t^{np+2k+\frac{v}{2}}\left(y\mu \right)^{k}}{k!\left(n\right)_{k}}\binom{k}{r}\mathcal{Q}_{n,p,t}^{r}\left(y\right) dt \\ &=&I_{1} \end{array} $$

    where \(\mathcal {Q}_{n,p,t}^{r}\left (y\right) \) is given by (15). Thus Fmin(y)=1−I1.

  3.

    By choosing \(\mathcal {W}\left (t\right) \) as the slash distribution weight (6), observe from (14) that

    $$\begin{array}{@{}rcl@{}} &&P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) \\ &&=b\int\limits_{0}^{1}y^{np}\frac{{etr}\left(-t\left(\mathbf{\Delta } \right) \right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) }{\mathcal{C }\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{t^{np+2k+b}\left(y\mu \right)^{k}}{k!\left(n\right) _{k}}\binom{k}{r}\mathcal{Q}_{n,p,t}^{r}\left(y\right) dt \\ &&=I_{2} \end{array} $$

    where \(\mathcal {Q}_{n,p,t}^{r}\left (y\right) \) is given by (15). Thus Fmin(y)=1−I2.

The following result gives the exact minimum eigenvalue distribution for n×n complex noncentral Wishart type matrices with n degrees of freedom (the proof is contained in the Appendix).

Theorem 5

Suppose that \(\mathbf {X}\in \mathbb {C}_{1}^{n\times n}\) is distributed as \(\mathcal {C}E_{n\times n}(\mathbf {M},\mathbf {I}_{n}\mathbf { \otimes \Sigma },g)\), where \(\mathbf {M}\in \mathbb {C}_{1}^{n\times n}\) has rank one, and S=XHX∼ISCWn(n,M,In⊗Σ) with pdf (7). The cdf of λmin(S) is given by

$$ F_{\min }\left(y\right) =1-\int\limits_{\mathbb{R}^{+}}{etr}\left(-t \mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) \sum_{j=0}^{\infty }\frac{\left(yt^{2}\mu \right)^{j}}{j!\left(n\right) _{j}}\text{ }_{1}F_{1}\left(n;n+j,t{tr}\mathbf{\Delta }\right) \mathcal{W}\left(t\right) dt $$
(16)

where 1F1(·) denotes the confluent hypergeometric function (see Gradshteyn and Ryzhik (2007), p. 1010, eq. 9.14.1).

Remark 3

Observe that (27) can also be expressed as

$$ \sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{\left(yt^{2}\mu \right)^{k}}{ k!\left(n\right)_{k}}\binom{k}{r}\left(n\right)_{r}\left(\frac{1}{\mu ty }\right)^{r}\left({tr}^{r}\mathbf{\Delta }\right) =\Phi_{3}\left(n,n,t{tr}\mathbf{\Delta,}yt^{2}\mu \right) $$

where Φ3(·) denotes the Humbert confluent hypergeometric function of two variables (see Bateman and Erdélyi (1953), p. 225, eq. 5.7.1.22). Thus (16) can be written as

$$ F_{\min }\left(y\right) =1-\int\limits_{\mathbb{R}^{+}}{etr}\left(-t \mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) \Phi_{3}\left(n,n,t{tr}\mathbf{\Delta,}yt^{2}\mu \right) \mathcal{W} \left(t\right) dt. $$
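The identity in Remark 3 can be verified numerically by truncating both double series. In the sketch below (helper names are our own illustrative choices), poch is the Pochhammer symbol (a)k, and the arguments follow Remark 3.

```python
import math
from scipy.special import poch

def phi3(b, c, x, y, kmax=40):
    """Humbert's Phi_3(b; c; x, y) = sum_{m,s} (b)_m x^m y^s / ((c)_{m+s} m! s!),
    truncated at kmax terms in each index."""
    return sum(poch(b, m) * x ** m * y ** s
               / (poch(c, m + s) * math.factorial(m) * math.factorial(s))
               for m in range(kmax) for s in range(kmax))

def remark3_lhs(n, tr_delta, mu, t, y, kmax=40):
    """Truncated left-hand side of the identity in Remark 3."""
    return sum((y * t ** 2 * mu) ** k / (math.factorial(k) * poch(n, k))
               * math.comb(k, r) * poch(n, r) * (tr_delta / (mu * t * y)) ** r
               for k in range(kmax) for r in range(k + 1))
```

Substituting m=r and s=k−r shows the two series agree term by term, which the truncated sums confirm numerically.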

Special cases of the distribution in (16) are highlighted next.

  1.

    By choosing \(\mathcal {W}\left (t\right) \) as the Dirac delta function (4), (16) simplifies to the result by Dharmawansa and McKay (2011).

  2.

    By choosing \(\mathcal {W}\left (t\right) \) as the t distribution weight (5) and by applying Gradshteyn and Ryzhik (2007), p. 815, eq. 7.522.9, from (16) it follows:

    $$ F_{\min }\left(y\right) =1-\frac{\left(\frac{v}{2}\right)^{\frac{v}{2}}}{ \Gamma (\frac{v}{2})}\sum_{j=0}^{\infty }\frac{\left(y\mu \right)^{j}}{ j!\left(n\right)_{j}}\frac{\Gamma \left(\frac{v}{2}+2j\right) }{\left({tr} \left(y\mathbf{\Sigma }^{-1}+\mathbf{\Delta }\right) +\frac{v}{2}\right)^{ \frac{v}{2}+2j}} \times \text{}_{2}F_{1}\left(n,\frac{v}{2}+2j;n+j,\frac{{tr}\mathbf{ \Delta }}{{tr}\left(y\mathbf{\Sigma }^{-1}+\mathbf{\Delta }\right) +\frac{v}{2} }\right) $$
    (17)

    where 2F1(·) denotes the Gauss hypergeometric function (see Gradshteyn and Ryzhik (2007), p. 1010, eq. 9.14.2).

  3. By choosing \(\mathcal {W}\left (t\right) \) as the slash distribution weight (6), observe from (16) that:

    $$ F_{\min }\left(y\right) =1-b\sum_{j=0}^{\infty }\frac{\left(y\mu \right) ^{j}}{j!\left(n\right)_{j}}\int\limits_{0}^{1}{etr}\left(-t\left(\mathbf{\Delta +}y\mathbf{\Sigma }^{-1}\right) \right) t^{b+2j-1}\text{ } _{1}F_{1}\left(n;n+j,t{tr}\mathbf{\Delta }\right) dt. $$
    (18)

The following result gives the exact minimum eigenvalue distribution for 2×2 complex noncentral Wishart type matrices with arbitrary degrees of freedom. Scenarios of this 2×2 nature have been investigated in the literature for both illustrative and practical reasons (see Ratnarajah and Vaillancourt (2005), for example); the proof is contained in Appendix 1.

Theorem 6

Suppose that \(\mathbf {X}\in \mathbb {C}_{1}^{n\times 2}\) is distributed as \(\mathcal {C}E_{n\times 2}(\mathbf {M},\mathbf {I}_{n}\mathbf { \otimes \Sigma },g)\), where \(\mathbf {M}\in \mathbb {C}_{1}^{n\times 2}\) has rank one, and \(\mathbf {S}=\mathbf {X}^{H}\mathbf {X}\sim ISCW_{2}(n,\mathbf {M},\mathbf {I}_{n}\otimes \mathbf {\Sigma })\) with pdf (7). Thus, S is a 2×2 complex noncentral Wishart type matrix with arbitrary degrees of freedom n. The cdf of λmin(S) is given by

$$ F_{\min }\left(y\right) =1-\int\limits_{\mathbb{R}^{+}}\frac{{etr} \left(-t\mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma } ^{-1}\right) }{\mathcal{C}\Gamma_{2}(n)\det \left(\mathbf{\Sigma }\right) ^{n-2}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{\left(yt^{2}\mu \right)^{k} }{k!\left(n\right)_{k}}\binom{k}{r}\left(\frac{{tr}\left(\mathbf{ \Delta }\right) }{yt\mu }\right)^{r}\rho \left(r,y,t\right) \mathcal{W} \left(t\right) dt $$
(19)

with

$$\begin{array}{@{}rcl@{}} &&\rho \left(r,y,t\right) \notag\\ &=&\sum_{i_{1}=0}^{n-2}\sum_{i_{2}=0}^{i_{1}}\sum_{h=0}^{\min \left(i_{2},r\right) }\left(-1\right)^{h}\binom{n-2}{i_{1}}\binom{i_{1}}{i_{2}} \binom{r}{h}i_{2}!\left(i_{1}-i_{2}+2\right)_{r}\mathcal{C}\Gamma _{2}\left(i_{1}-i_{2}+2\right) \\ &&\times \left(\frac{\mu }{{tr}\left(\mathbf{\Delta }\right) }\right) ^{h}\left(\det \mathbf{\Sigma }\right)^{i_{1}+\frac{h}{2}-\frac{i_{2}}{2}} \mathfrak{C}_{i_{2}-h}^{i_{1}-i_{2}+2+r}\left(\frac{1}{2}{tr}\left(\mathbf{\Sigma }^{-1}\right) \sqrt{\det \left(\mathbf{\Sigma }\right) } \right) \left(ty\right)^{2n+i_{2}-2i_{1}-4} \notag \end{array} $$
(20)

where \(\mathfrak {C}_{n}^{v}\left (\cdot \right) \) denotes the Gegenbauer polynomial (see Gradshteyn and Ryzhik (2007), p. 991, eq. 8.932.1).
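Equation (20) is straightforward to evaluate numerically. The Python sketch below (SciPy assumed) uses `scipy.special.eval_gegenbauer` for the Gegenbauer polynomial and assumes the rank-one structure so that tr Δ = μ:

```python
import numpy as np
from math import comb, factorial
from scipy.special import eval_gegenbauer, poch, gamma

def cgamma2(a):
    # complex multivariate gamma for p = 2: pi * Gamma(a) * Gamma(a - 1)
    return np.pi * gamma(a) * gamma(a - 1.0)

def rho(r, y, t, Sigma, mu, n):
    """rho(r, y, t) of (20); Sigma is a 2x2 Hermitian positive definite matrix,
    and tr(Delta) = mu is assumed (rank-one noncentral matrix)."""
    dS = np.linalg.det(Sigma).real
    trSi = np.trace(np.linalg.inv(Sigma)).real
    tr_delta = mu                                  # rank-one assumption
    total = 0.0
    for i1 in range(n - 1):                        # i1 = 0, ..., n-2
        for i2 in range(i1 + 1):
            for h in range(min(i2, r) + 1):
                total += ((-1) ** h * comb(n - 2, i1) * comb(i1, i2) * comb(r, h)
                          * factorial(i2) * poch(i1 - i2 + 2, r) * cgamma2(i1 - i2 + 2)
                          * (mu / tr_delta) ** h * dS ** (i1 + h / 2 - i2 / 2)
                          * eval_gegenbauer(i2 - h, i1 - i2 + 2 + r,
                                            0.5 * trSi * np.sqrt(dS))
                          * (t * y) ** (2 * n + i2 - 2 * i1 - 4))
    return total
```

For n=2 the triple sum has a single term, so ρ(0,y,t)=π and ρ(1,y,t)=2π, which gives a quick hand check.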

Special cases of the distribution in (19) are highlighted next.

  1. By choosing \(\mathcal {W}\left (t\right) \) as the Dirac delta function (4), (19) and (20) simplify to (see Dharmawansa and McKay (2011)):

    $$\begin{array}{@{}rcl@{}} &&F_{\min }\left(y\right) \\ &=&1-\frac{{etr}\left(-\mathbf{\Delta } \right) {etr}\left(-y\mathbf{\Sigma }^{-1}\right) }{\mathcal{C}\Gamma _{2}(n)\det \left(\mathbf{\Sigma }\right)^{n-2}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{\left(y\mu \right)^{k}}{k!\left(n\right)_{k}}\binom{ k}{r}\left(\frac{{tr}\left(\mathbf{\Delta }\right) }{y\mu }\right) ^{r} \notag \\ &&\times \sum_{i_{1}=0}^{n-2}\sum_{i_{2}=0}^{i_{1}}\sum_{h=0}^{\min \left(i_{2},r\right) }\left(-1\right)^{h}\binom{n-2}{i_{1}}\binom{i_{1}}{i_{2}} \binom{r}{h}i_{2}!\left(i_{1}-i_{2}+2\right)_{r}\mathcal{C}\Gamma _{2}\left(i_{1}-i_{2}+2\right) \notag \\ &&\times \left(\frac{\mu }{{tr}\left(\mathbf{\Delta }\right) }\right) ^{h}\left(\det \mathbf{\Sigma }\right)^{i_{1}+\frac{h}{2}-\frac{i_{2}}{2}} \mathfrak{C}_{i_{2}-h}^{i_{1}-i_{2}+2+r}\left(\frac{1}{2}{tr}\left(\mathbf{\Sigma }^{-1}\right) \sqrt{\det \left(\mathbf{\Sigma }\right) } \right) y^{2n+i_{2}-2i_{1}-4}. \notag \end{array} $$
    (21)
  2. By choosing \(\mathcal {W}\left (t\right) \) as the t distribution weight (5), (19) and (20) simplify using Gradshteyn and Ryzhik (2007), p. 346, eq. 3.381.4:

    $$\begin{array}{@{}rcl@{}} &&F_{\min }\left(y\right) \notag \\ &=&1-\frac{\left(\frac{v}{2}\right)^{\frac{v}{2}} }{\Gamma (\frac{v}{2})}\frac{1}{\mathcal{C}\Gamma_{2}(n)\det \left(\mathbf{ \Sigma }\right)^{n-2}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{\left(y\mu \right)^{k}}{k!\left(n\right)_{k}}\binom{k}{r}\left(\frac{{tr} \left(\mathbf{\Delta }\right) }{y\mu }\right) ^{r}\sum_{i_{1}=0}^{n-2}\sum_{i_{2}=0}^{i_{1}} \notag \\ &&\times \sum_{h=0}^{\min \left(i_{2},r\right) }\left(-1\right)^{h}\binom{n-2}{i_{1}}\binom{i_{1}}{i_{2}}\binom{r}{h}i_{2}!\left(i_{1}-i_{2}+2\right) _{r}\mathcal{C}\Gamma_{2}\left(i_{1}-i_{2}+2\right) \left(\frac{\mu }{ {tr}\left(\mathbf{\Delta }\right) }\right)^{h} \notag \\ &&\times \left(\det \mathbf{ \Sigma }\right)^{i_{1}+\frac{h}{2}-\frac{i_{2}}{2}}y^{2n+i_{2}-2i_{1}-4} \mathfrak{C}_{i_{2}-h}^{i_{1}-i_{2}+2+r}\left(\frac{1}{2}{tr} \left(\mathbf{\Sigma }^{-1}\right) \sqrt{\det \left(\mathbf{\Sigma } \right) }\right) \notag \\ &&\times \frac{\Gamma \left(2n+2k-r+i_{2}-2i_{1}-4+\frac{v}{2} \right) }{\left({tr}\left(\mathbf{\Delta +}y\mathbf{\Sigma }^{-1}\right) + \frac{v}{2}\right)^{2n+2k-r+i_{2}-2i_{1}-4+\frac{v}{2}}}. \end{array} $$
    (22)
  3. By choosing \(\mathcal {W}\left (t\right) \) as the slash distribution weight (6), (19) and (20) simplify using Gradshteyn and Ryzhik (2007), p. 346, eq. 3.381.1:

    $$\begin{array}{@{}rcl@{}} &&F_{\min }\left(y\right) \\ &=&1-\frac{b}{\mathcal{C}\Gamma_{2}(n)\det \left(\mathbf{\Sigma }\right) ^{n-2}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{\left(y\mu \right)^{k}}{ k!\left(n\right)_{k}}\binom{k}{r}\!\left(\frac{{tr}\left(\mathbf{ \Delta }\right) }{y\mu }\right) ^{r}\sum_{i_{1}=0}^{n-2}\sum_{i_{2}=0}^{i_{1}}\sum_{h=0}^{\min \left(i_{2},r\right) }\left(-1\right)^{h}\binom{n-2}{i_{1}} \notag \\ &&\ \binom{i_{1}}{i_{2}}\binom{r}{h}i_{2}!\left(i_{1}-i_{2}+2\right) _{r}\mathcal{C}\Gamma_{2}\left(i_{1}-i_{2}+2\right) \left(\frac{\mu }{ {tr}\left(\mathbf{\Delta }\right) }\right)^{h}\left(\det \mathbf{ \Sigma }\right)^{i_{1}+\frac{h}{2}-\frac{i_{2}}{2}}y^{2n+i_{2}-2i_{1}-4} \notag \\ && \times \mathfrak{C}_{i_{2}-h}^{i_{1}-i_{2}+2+r}\left(\frac{1}{2}{tr} \left(\mathbf{\Sigma }^{-1}\right) \sqrt{\det \left(\mathbf{\Sigma } \right) }\right) \frac{\gamma \left(2n+i_{2}-2i_{1}-4-r+2k+b,{tr} \left(\mathbf{\Delta +}y\mathbf{\Sigma }^{-1}\right) \right) }{\left({ tr}\left(\mathbf{\Delta +}y\mathbf{\Sigma }^{-1}\right) \right) ^{2n+i_{2}-2i_{1}-4-r+2k+b}}. \notag \end{array} $$
    (23)

Numerical experiments

In this section, simulation and analytical results are presented to illustrate the contribution of the derived results. For the cdfs (16) and (19), the covariance matrix Σ is assumed to be given by:

$$ \left\{ \mathbf{\Sigma }\right\}_{i,j}=\exp \left(-\frac{\pi^{3}}{32} \left(i-j\right)^{2}\right) $$

where 1≤i,j≤p. The mean matrix M is constructed as:

$$ \mathbf{M=a}^{H}\mathbf{b} $$

where \(\mathbf {a}\in \mathbb {C}_{1}^{1\times n}\) and \(\mathbf {b}\in \mathbb {C }_{1}^{1\times p}\) are given by:

$$\begin{array}{@{}rcl@{}} \left\{ a\right\}_{i} &=&\exp \left(2\left(i-1\right) l\pi \cos \left(\theta \right) \right) \\ \left\{ b\right\}_{j} &=&\exp \left(2\left(j-1\right) l\pi \cos \left(\theta \right) \right) \end{array} $$

where \(l=\sqrt {-1}\), \(\theta =\frac {\pi }{4}\), and i=1,...,n and j=1,...,p. These specific constructions of the covariance and mean matrices are meaningful when modeling practical MIMO channels with a nonzero mean (see Dharmawansa and McKay (2011); McKay and Collings (2005)). Table 1 compares the analytical values of the cdf of λmin(S) where \( \mathbf {X}\in \mathbb {C}_{1}^{2\times 2}\) for the underlying t distribution (see (17)), the underlying slash distribution (see (18)) and the underlying normal distribution (see (21)) with corresponding simulated values (computed in Matlab R2013a). The tail behaviour of the simulated values agrees with that of the analytical counterparts.
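The simulated values can be reproduced along the following lines. This Python sketch builds Σ and M from the formulas above and draws λmin(S) under the matrix variate t model via the gamma scale-mixture representation X | t ∼ CN(M, t⁻¹ I ⊗ Σ), t ∼ Gamma(v/2, scale 2/v); the mixing law is an assumption consistent with the prefactor in (17):

```python
import numpy as np

rng = np.random.default_rng(0)

def build_sigma(p):
    # {Sigma}_{ij} = exp(-pi^3/32 (i - j)^2), 1 <= i, j <= p
    i, j = np.meshgrid(np.arange(1, p + 1), np.arange(1, p + 1), indexing="ij")
    return np.exp(-(np.pi ** 3 / 32.0) * (i - j) ** 2)

def build_mean(n, p, theta=np.pi / 4):
    a = np.exp(2j * np.pi * np.arange(n) * np.cos(theta))   # {a}_i, i = 1, ..., n
    b = np.exp(2j * np.pi * np.arange(p) * np.cos(theta))   # {b}_j, j = 1, ..., p
    return np.outer(a.conj(), b)                            # M = a^H b, rank one

def sample_lmin_t(M, Sigma, v, size=5000):
    """Monte Carlo draws of lambda_min(S), S = X^H X, under the matrix variate t model."""
    n, p = M.shape
    L = np.linalg.cholesky(Sigma)
    t = rng.gamma(v / 2.0, 2.0 / v, size)
    draws = np.empty(size)
    for s in range(size):
        G = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) / np.sqrt(2.0)
        X = M + (G @ L.conj().T) / np.sqrt(t[s])
        draws[s] = np.linalg.eigvalsh(X.conj().T @ X).min()
    return draws
```

The empirical cdf of the returned draws is then compared against the analytical values, as in Table 1.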

Table 1 Analytical ((16), (17), and (18)) and simulated values of cdf of λmin(S)

The following figures illustrate the cdfs (16) and (19) for n=2 and n=3 respectively, for the different weight functions under consideration in this paper. In Figs. 1 and 2, it is observed that (17) and (18) tend to the normal case as the values of v and b respectively increase, as do (22) and (23).

Fig. 1
figure1

Cdf ((16), (17), and (18)) for different values of v=3,10 and b=3,10 when n=2 (left), zoomed in subset on right

Fig. 2
figure2

Cdf ((21), (22), and (23)) for different values of v=3,10 and b=3,10 when n=3 (left), zoomed in subset on right

These figures (Figs. 1 and 2) illustrate the value that the underlying complex matrix variate elliptical assumption provides to the practitioner with engineering expertise. The proposed elliptical platform in this paper allows theoretical, and resultant practical, access to previously unconsidered models; providing modeling flexibility that may yield improved fits to experimental data in practice (see Yacoub (2007) for example).

Conclusion

In this paper, exact results were presented for a variety of characteristics pertaining to a complex noncentral Wishart type distribution. In particular, the pdf of a complex noncentral Wishart type matrix S=XHX, where \(\mathbf {X}\in \mathbb {C} _{1}^{n\times p}\sim \mathcal {C}E_{n\times p}(\mathbf {M},\mathbf {I\otimes \Sigma },g)\), and the pdf of its associated ordered eigenvalues have been derived. Some special cases were investigated, including the pdf of the eigenvalues when Σ=σ2I (which is of practical importance in communications systems) and the noncentral matrix has arbitrary rank L<p. Subsequently, the exact cdf of the minimum eigenvalue of S was derived for the case when \(\mathbf {X}\in \mathbb {C}_{1}^{n\times n}\), as well as when \(\mathbf {X}\in \mathbb {C} _{1}^{n\times 2}\). These cdfs were derived under the assumption that the noncentral matrix has rank one, which is a practical assumption. This theoretical investigation has proposed impact in big data and communication systems, allowing the practitioner a flexible choice of underlying model for X, and thus S; thereby alleviating the restrictive assumption of normality.

Appendix

Matrix spaces (see Ratnarajah (2003)): The set of all n×p (n≥p) matrices, E, with orthonormal columns is called the Stiefel manifold, denoted by \(\mathcal {C}V_{p,n}\). Thus \(\mathcal {C}V_{p,n}=\left \{ \mathbf {E}\left (n\times p\right) ;\mathbf {E }^{H}\mathbf {E}=\mathbf {I}_{p}\right \}.\) The volume of this manifold is given by

$$Vol\left(\mathcal{C}V_{p,n}\right) =\int\limits_{\mathcal{C}V_{p,n}}\left(\mathbf{E}^{H}d\mathbf{E}\right) =\frac{2^{p}\pi^{np}}{\mathcal{C}\Gamma _{p}(n)}. $$

If n=p then a special case of the Stiefel manifold is obtained, the so-called unitary manifold, defined as \(\mathcal {C}V_{p,p}=\left \{ \mathbf {E} \left (p\times p\right) ;\mathbf {E}^{H}\mathbf {E}=\mathbf {I}_{p}\right \} \equiv U\left (p\right) \) where U(p) denotes the group of unitary p×p matrices. The volume of U(p) is given by \( Vol\left (U\left (p\right) \right) =\int \limits _{U\left (p\right) }\left (\mathbf {E}^{H}d\mathbf {E}\right) =\frac {2^{p}\pi ^{p^{2}}}{\mathcal {C}\Gamma _{p}(p)}.\)
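These volume formulas are easy to evaluate directly; for instance, p=1 recovers familiar sphere areas: Vol(CV₁,₁)=2π (the unit circle U(1)) and Vol(CV₁,₂)=2π², the surface area of the unit sphere in ℂ²≅ℝ⁴. A Python sketch (SciPy assumed):

```python
import numpy as np
from scipy.special import gamma

def cgamma_p(p, a):
    """Complex multivariate gamma: CGamma_p(a) = pi^{p(p-1)/2} prod_{i=1}^p Gamma(a-(i-1))."""
    return np.pi ** (p * (p - 1) / 2.0) * np.prod([gamma(a - i) for i in range(p)])

def stiefel_volume(p, n):
    """Vol(CV_{p,n}) = 2^p pi^{np} / CGamma_p(n)."""
    return 2.0 ** p * np.pi ** (n * p) / cgamma_p(p, n)
```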

Complex noncentral Wishart type section proofs

Proof of Theorem 1

From (3), the pdf of X|t follows as

$$\begin{array}{*{20}l} f\left(\mathbf{X}|t\right) =\pi^{-np}\det \left(t^{-1}\mathbf{\Sigma } \right)^{-n}{etr}\left(-\left(t\mathbf{\Sigma }^{-1}\right) \mathbf{X }^{H}\mathbf{X}\right) {etr}\left(-\left(t\mathbf{\Sigma } ^{-1}\right) \mathbf{M}^{H}\mathbf{M}\right) {etr}\left(2\left(t \mathbf{\Sigma }^{-1}\right) \mathbf{M}^{H}\mathbf{X}\right) \end{array} $$

Let X=ET, where \(\mathbf {E}:n\times p\in \mathcal {C}V_{p,n}\) such that EHE=Ip and T is an upper triangular matrix with real and positive diagonal elements. Then S=XHX=THT (the Cholesky decomposition of S). From Ratnarajah (2003) it thus follows that

$$\begin{array}{@{}rcl@{}} f(\mathbf{S,E|}t)=2^{-p}\pi^{-np}\det \left(t^{-1}\mathbf{\Sigma }\right) ^{-n}{etr}\left(-\left(t\mathbf{\Sigma }^{-1}\right) \mathbf{S} \right) \det \left(\mathbf{S}\right)^{n-p}\\ \times {etr}\left(-\left(t \mathbf{\Sigma }^{-1}\right) \mathbf{M}^{H}\mathbf{M}\right) {etr} \left(2\left(t\mathbf{\Sigma }^{-1}\right) \mathbf{M}^{H}\mathbf{ET} \right). \end{array} $$

Subsequently,

$$\begin{array}{@{}rcl@{}} f\left(\mathbf{S|}t\right) &=&\int\limits_{\mathcal{C}V_{p,n}}f(\mathbf{S,E| }t)\left(\mathbf{E}^{H}d\mathbf{E}\right) \\ &=&2^{-p}\pi^{-np}\det \left(t^{-1}\mathbf{\Sigma }\right)^{-n}{etr} \left(-\left(t\mathbf{\Sigma }^{-1}\right) \mathbf{S}\right) \det \left(\mathbf{S}\right)^{n-p}{etr}\left(-t\mathbf{\Delta }\right) \\ &&\times \int\limits_{\mathcal{C}V_{p,n}}{etr}\left(2\left(t\mathbf{ \Sigma }^{-1}\right) \mathbf{M}^{H}\mathbf{ET}\right) \left(\mathbf{E}^{H}d \mathbf{E}\right). \end{array} $$

Using eq. 3.37 from Ratnarajah (2003), see that

$$\int\limits_{\mathcal{C}V_{p,n}}{etr}\left(2\left(t\mathbf{\Sigma } ^{-1}\right) \mathbf{M}^{H}\mathbf{ET}\right) \left(\mathbf{E}^{H}d\mathbf{E }\right) =\frac{2^{p}\pi^{np}}{\mathcal{C}\Gamma_{p}(n)}\text{ } _{0}F_{1}\left(n;t^{2}\mathbf{\Delta \Sigma }^{-1}\mathbf{S}\right). $$

Thus

$$f\left(\mathbf{S|}t\right) =\frac{\det \left(\mathbf{S}\right)^{n-p}}{ \mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}t^{np} {etr}\left(-t\left(\mathbf{\Sigma }^{-1}\mathbf{S+\Delta }\right) \right) \text{ }_{0}F_{1}\left(n;t^{2}\mathbf{\Delta \Sigma }^{-1}\mathbf{S} \right) $$

and finally, from (2):

$$f\left(\mathbf{S}\right) =\int\limits_{\mathbb{R}^{+}}f\left(\mathbf{S|} t\right) \mathcal{W}\left(t\right) dt $$

which leaves the final result.
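For p=n=1 and the Dirac delta weight, the result above reduces to a familiar scalar density, which offers a quick check that the normalisation survives the Stiefel integration: with S=|x|², x ∼ CN(m, σ²), the pdf becomes f(S)=σ⁻² e^{−S/σ²−δ} ₀F₁(1; δS/σ²), where δ=|m|²/σ². A Python sketch (SciPy assumed) confirming it integrates to one:

```python
import numpy as np
from scipy.special import hyp0f1
from scipy.integrate import quad

def scalar_pdf(s, m, sigma2):
    """pdf (7) with p = n = 1 and the Dirac delta weight: S = |x|^2, x ~ CN(m, sigma2)."""
    delta = abs(m) ** 2 / sigma2
    return np.exp(-s / sigma2 - delta) / sigma2 * hyp0f1(1.0, delta * s / sigma2)

# total mass over (0, infinity); should be 1, and the mean should be sigma2 + |m|^2
mass, _ = quad(lambda s: scalar_pdf(s, 1.0 + 1.0j, 2.0), 0.0, np.inf)
```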

Proof of Theorem 2

Using eq. 93 of James (1964) and (7), the joint pdf of the eigenvalues λ1>λ2>...>λp>0 of S is given by

$$\begin{array}{@{}rcl@{}} f(\mathbf{\Lambda }) &=&\frac{\pi^{p\left(p-1\right) }\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right)^{2}\right) }{ \mathcal{C}\Gamma_{p}(p)}\int\limits_{\mathbf{E}\in U\left(p\right) }f\left(\mathbf{E\Lambda E}^{H}\right) d\mathbf{E} \\ &=&\frac{\pi^{p\left(p-1\right) }\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right)^{2}\right) \det \left(\mathbf{\Lambda } \right)^{n-p}}{\mathcal{C}\Gamma_{p}(p)\mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\int\limits_{\mathbb{R}^{+}}t^{np}{ etr}\left(-t\mathbf{\Delta }\right) \\ &&\times \int\limits_{\mathbf{E}\in U\left(p\right) }{etr}\left(-t \mathbf{\Sigma }^{-1}\mathbf{E\Lambda E}^{H}\right) \text{ }_{0}F_{1}\left(n;t^{2}\mathbf{\Delta \Sigma }^{-1}\mathbf{E\Lambda E}^{H}\right) d\mathbf{E} \mathcal{W}\left(t\right) dt \end{array} $$

which completes the proof.

Proof of Corollary 1

Substituting Σ=σ2Ip into (9) and using James (1964), p. 480, eq. 30, observe that

$$\begin{array}{@{}rcl@{}} f(\mathbf{\Lambda }) &=&\frac{\pi^{p\left(p-1\right) }\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right)^{2}\right) \det \left(\mathbf{\Lambda }\right)^{n-p}}{\mathcal{C}\Gamma_{p}(p) \mathcal{C}\Gamma_{p}(n)\sigma^{2np}}\int\limits_{\mathbb{R}^{+}}t^{np} {etr}\left(-t\mathbf{\Delta }\right) {etr}\left(-t\sigma^{-2} \mathbf{\Lambda }\right) \notag \\ &&\times \int\limits_{\mathbf{E}\in U\left(p\right) }\text{ }_{0}F_{1}\left(n;t^{2}\sigma^{-2}\mathbf{\Delta E\Lambda E}^{H}\right) d \mathbf{E}\mathcal{W}\left(t\right) dt \notag \\ &=&\frac{\pi^{p\left(p-1\right) }\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right)^{2}\right) \det \left(\mathbf{\Lambda } \right)^{n-p}}{\mathcal{C}\Gamma_{p}(p)\mathcal{C}\Gamma_{p}(n)\sigma ^{2np}}\int\limits_{\mathbb{R}^{+}}t^{np}{etr}\left(-t\mathbf{\Delta } \right) {etr}\left(-t\sigma^{-2}\mathbf{\Lambda }\right) \notag \\ &&\times \text{ } _{0}F_{1}\left(n;t^{2}\sigma^{-2}\mathbf{\Delta,\Lambda }\right) \mathcal{ W}\left(t\right) dt. \end{array} $$
(24)

Using Gross and Richards (1989), eq. 4.8, see that

$$ _{0}F_{1}\left(n;\mathbf{\Delta,}t^{2}\sigma^{-2}\mathbf{\Lambda }\right) =\frac{\det \left(\text{ }_{0}F_{1}\left(n-p+1;t^{2}\sigma^{-2}\mu _{j}\lambda_{i}\right) \right) }{t^{p\left(p-1\right) }\sigma^{-p\left(p-1\right) }\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right) \prod\limits_{k< l}^{p}\left(\mu_{k}-\mu_{l}\right) }\frac{\mathcal{C} \Gamma_{p}(p)\mathcal{C}\Gamma_{p}(n)}{\left(\left(n-p\right) !\right) ^{p}}. $$
(25)

Substituting (25) into (24) simplifies to (10).

Proof of Theorem 3

Consider from (10)

$$\begin{array}{@{}rcl@{}} &&f(\mathbf{\Lambda }) \\ &=&\int\limits_{\mathbb{R}^{+}}\frac{\pi^{p\left(p-1\right) }\det \left(\mathbf{\Lambda }\right)^{n-p}}{\left(\left(n-p\right) !\right) ^{p}\sigma^{2np-p^{2}+1}}\left(\prod\limits_{k< l}^{p}\left(\lambda _{k}-\lambda_{l}\right) \right) t^{np-p^{2}+1}{etr}\left(-t\left(\mathbf{\Delta +}\sigma^{-2}\mathbf{\Lambda }\right) \right) \\ &&\times \frac{\det \left(\text{ }_{0}F_{1}\left(n-p+1;t^{2}\sigma^{-2}\mu_{j}\lambda _{i}\right) \right) }{\left(\prod \limits_{k< l}^{p}\left(\mu_{k}-\mu _{l}\right) \right) }\mathcal{W}\left(t\right) dt. \end{array} $$

In particular, consider

$$\begin{array}{*{20}l} \mathcal{J}={\lim}_{\mu_{L+1},...,\mu_{p}\rightarrow 0}\frac{\det \left(f_{i}\left(\mu_{j}\right)_{i,j=1,...,p}\right) }{\prod\limits_{k< l}^{p} \left(\mu_{k}-\mu_{l}\right) } \end{array} $$

where \(f_{i}\left(\mu _{j}\right) =\text{ }_{0}F_{1}\left(n-p+1;t^{2}\sigma ^{-2}\mu _{j}\lambda _{i}\right)\). Applying Lemma 5, p. 340 of Chiani et al. (2010):

$$\mathcal{J}=\frac{\det \left[ \begin{array}{cccccc} f_{1}\left(\mu_{1}\right) & \cdots & f_{1}\left(\mu_{L}\right) & f_{1}^{\left(p-L-1\right) }\left(0\right) & \cdots & f_{1}^{\left(0\right) }\left(0\right) \\ \vdots & & & & & \vdots \\ f_{p}\left(\mu_{1}\right) & \cdots & f_{p}\left(\mu_{L}\right) & f_{p}^{\left(p-L-1\right) }\left(0\right) & \cdots & f_{p}^{\left(0\right) }\left(0\right) \end{array} \right] }{\mathcal{C}\Gamma_{p-L}(p-L)\left(\prod\limits_{k< l}^{L}\left(\mu_{k}-\mu_{l}\right) \right) \left(\prod\limits_{i=1}^{L}\mu _{i}^{p-L}\right) } $$

where

$$f_{i}^{\left(k\right) }\left(0\right) =\frac{\left(t^{2}\sigma ^{-2}\lambda_{i}\right)^{k}\left(n-p\right) !}{\left(n-p+k\right) !}. $$

This leaves

$$\begin{array}{@{}rcl@{}} &&\int\limits_{\mathbb{R}^{+}}\frac{\pi^{p\left(p-1\right) }\det \left(\mathbf{\Lambda }\right)^{n-p}}{\left(\left(n-p\right) !\right) ^{p}\sigma^{2np-p^{2}+1}}\left(\prod\limits_{k< l}^{p}\left(\lambda _{k}-\lambda_{l}\right) \right) t^{np-p^{2}+1}{etr}\left(-t\left(\mathbf{\Delta +}\sigma^{-2}\mathbf{\Lambda }\right) \right) \mathcal{JW} \left(t\right) dt \\ &=&\frac{\pi^{p\left(p-1\right) }\left(\prod\limits_{k< l}^{p}\left(\lambda_{k}-\lambda_{l}\right) \right) \det \left(\mathbf{\Lambda } \right)^{n-p}}{\left(\left(n-p\right) !\right)^{p}\left(\prod\limits_{k< l}^{L}\left(\mu_{k}-\mu_{l}\right) \right) \left(\prod\limits_{i=1}^{L}\mu_{i}^{p-L}\right) \mathcal{C}\Gamma _{p-L}(p-L)\sigma^{2np-p^{2}+1}} \\ &&\times \int\limits_{\mathbb{R}^{+}}t^{np-p^{2}+1} {etr}\left(-t\left(\mathbf{\Delta +}\sigma^{-2}\mathbf{\Lambda } \right) \right) \det \left(\mathbf{T}\right) \mathcal{W}\left(t\right) dt \end{array} $$

where T is a p×p matrix as given in (22).
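The closed form for \(f_{i}^{\left(k\right)}\left(0\right)\) used in the proof above can be checked numerically by differentiating the ₀F₁ series directly; an mpmath sketch with hypothetical parameter values:

```python
import mpmath as mp

# illustrative parameters (assumed, n > p)
n, p, t, sigma2, lam, k = 5, 3, 1.3, 0.8, 0.6, 2
x = t ** 2 / sigma2 * lam

# f_i as a function of mu, as in the proof
f = lambda mu: mp.hyp0f1(n - p + 1, x * mu)

numeric = mp.diff(f, 0, k)                                       # k-th derivative at mu = 0
closed = x ** k * mp.factorial(n - p) / mp.factorial(n - p + k)  # the stated closed form
```

Both quantities agree, reflecting that the k-th Taylor coefficient of ₀F₁(a; xμ) in μ is xᵏ/(a)ₖ.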

Minimum eigenvalue cdf proofs

Proof of Theorem 4

Consider from (7):

$$\begin{array}{@{}rcl@{}} P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) &=&\int\limits_{ \mathbb{R}^{+}}t^{np}\frac{{etr}\left(-t\mathbf{\Delta }\right) }{ \mathcal{C}\Gamma_{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}} \notag \\ &&\times \int\limits_{\mathbf{S-}y\mathbf{I}_{p}}\det \left(\mathbf{S}\right)^{n-p} {etr}\left(-t\mathbf{\Sigma }^{-1}\mathbf{S}\right) \text{ } _{0}F_{1}\left(n;t^{2}\mathbf{\Delta \Sigma }^{-1}\mathbf{S}\right) d \mathbf{S}\mathcal{W}\left(t\right) dt \end{array} $$

where \(\mathbf {S-}y\mathbf {I}_{p}\mathbf {\in }\mathbb {C}_{2}^{p\times p}\). Consider now the transformation S=y(Ip+Y) with Jacobian \(d\mathbf {S}=y^{p^{2}}d\mathbf {Y}\) (see Dharmawansa and McKay (2011)). It follows that

$$\begin{array}{@{}rcl@{}} P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) &=&\int\limits_{ \mathbb{R}^{+}}t^{np}y^{np}\frac{{etr}\left(-t\mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) }{\mathcal{C}\Gamma _{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}} \\ &&\times\! \int\limits_{\mathbf{Y}}\det \left(\mathbf{I}_{p}\mathbf{+Y} \right)^{n-p}{etr}\left(-ty\mathbf{\Sigma }^{-1}\mathbf{Y}\right) \text{ }_{0}F_{1}\left(n;yt^{2}\mathbf{\Delta \Sigma }^{-1}\left(\mathbf{I} _{p}\mathbf{+Y}\right) \right) d\mathbf{Y}\mathcal{W}\left(t\right) dt. \end{array} $$

By applying the definition of the complex hypergeometric function and the assumption of rank one for the noncentral matrix parameter (see (13)) the following is obtained:

$$\begin{array}{@{}rcl@{}} P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) &=&\int\limits_{ \mathbb{R}^{+}}t^{np}y^{np}\frac{{etr}\left(-t\mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) }{\mathcal{C}\Gamma _{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{\kappa }\frac{1}{k!\left[ n\right]_{\kappa }} \\ &&\times \int\limits_{\mathbf{Y}}\det \left(\mathbf{I}_{p}\mathbf{+Y} \right)^{n-p}{etr}\left(-ty\mathbf{\Sigma }^{-1}\mathbf{Y}\right) C_{\kappa }\left(yt^{2}\mu \mathbf{\gamma }^{H}\left(\mathbf{I}_{p}\mathbf{ +Y}\right) \mathbf{\gamma }\right) d\mathbf{Y}\mathcal{W}\left(t\right) dt \end{array} $$

where \(\mathbf {Y\in }\mathbb {C}_{2}^{p\times p}\). Since there is only one nonzero eigenvalue, the partition κ reduces to a single partition; by the definition of zonal polynomials it then follows that [n]κ=(n)k and Cκ(A)=tr(A)k, and

$$C_{\kappa }\left(yt^{2}\mu \mathbf{\gamma }^{H}\left(\mathbf{I}_{p}\mathbf{ +Y}\right) \mathbf{\gamma }\right) =\left(yt^{2}\mu \right)^{k}\sum_{r=0}^{k}\binom{k}{r}{tr}^{r}\left(\mathbf{\gamma \gamma }^{H} \mathbf{Y}\right). $$

Hence

$$\begin{array}{@{}rcl@{}} P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) &=&\int\limits_{ \mathbb{R}^{+}}t^{np}y^{np}\frac{{etr}\left(-t\mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) }{\mathcal{C}\Gamma _{p}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{1}{k!\left(n\right)_{k}} \\ &&\times \int\limits_{\mathbf{Y}}\det \left(\mathbf{I}_{p}\mathbf{+Y} \right)^{n-p}{etr}\left(-ty\mathbf{\Sigma }^{-1}\mathbf{Y}\right) \left(yt^{2}\mu \right)^{k}\binom{k}{r}{tr}^{r}\left(\mathbf{\gamma \gamma }^{H}\mathbf{Y}\right) d\mathbf{Y}\mathcal{W}\left(t\right) dt \end{array} $$

where \(\mathbf {Y\in }\mathbb {C}_{2}^{p\times p}\); this leaves the final result.
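The binomial trace expansion used above (valid because γHγ=1) is easy to confirm numerically; a Python sketch with a random Hermitian Y:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(7)
p, k, c = 3, 4, 2.0                      # c plays the role of y t^2 mu

g = rng.standard_normal(p) + 1j * rng.standard_normal(p)
gam = g / np.linalg.norm(g)              # gamma with gamma^H gamma = 1
A = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
Y = A @ A.conj().T                       # Hermitian positive semidefinite Y

# (c * gamma^H (I + Y) gamma)^k versus the binomial sum over tr^r(gamma gamma^H Y)
lhs = (c * (gam.conj() @ (np.eye(p) + Y) @ gam).real) ** k
rhs = c ** k * sum(comb(k, r) * (gam.conj() @ Y @ gam).real ** r for r in range(k + 1))
```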

Proof of Theorem 5

Letting n=p, see from (14) and (15) that

$$P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) =\int\limits_{ \mathbb{R}^{+}}y^{n^{2}}\frac{{etr}\left(-t\mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) }{\mathcal{C}\Gamma _{n}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{t^{n^{2}+2k}\left(y\mu \right)^{k}}{k!\left(n\right) _{k}}\binom{k}{r}\mathcal{Q}_{n,n,t}^{r}\left(y\right) \mathcal{W}\left(t\right) dt $$

where \(\mathcal {Q}_{n,n,t}^{r}\left (y\right) \) is as defined in (15). Following Mathai (1997), p. 365, eq. 6.1.20:

$$\begin{array}{@{}rcl@{}} \mathcal{Q}_{n,n,t}^{r}\left(y\right) &=&\int\limits_{\mathbf{Y}}{etr} \left(-ty\mathbf{\Sigma }^{-1}\mathbf{Y}\right) C_{r}\left(\mathbf{\gamma \gamma }^{H}\mathbf{Y}\right) d\mathbf{Y} \\ &=&\frac{\mathcal{C}\Gamma_{n}\left(n,r\right) \left(\det \mathbf{\Sigma } \right)^{n}}{t^{n^{2}}y^{n^{2}}}\left(\frac{1}{\mu ty}\right) ^{r}C_{r}\left(\mathbf{\Delta }\right) \\ &=&\frac{\mathcal{C}\Gamma_{n}\left(n\right) \left(n\right)_{r}\left(\det \mathbf{\Sigma }\right)^{n}}{t^{n^{2}}y^{n^{2}}}\left(\frac{1}{\mu ty} \right)^{r}\left({tr}^{r}\mathbf{\Delta }\right) \end{array} $$

where \(\mathbf {Y\in }\mathbb {C}_{2}^{p\times p}\), and \(\mathcal {C}\Gamma _{n}\left (n,r\right) \) denotes the complex multivariate gamma function relating to r (see Mathai (1997)). Subsequently

$$ P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) =\int\limits_{ \mathbb{R}^{+}}{etr}\left(-t\mathbf{\Delta }\right) {etr}\left(-ty\mathbf{\Sigma }^{-1}\right) \sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{ \left(yt^{2}\mu \right)^{k}}{k!\left(n\right)_{k}}\binom{k}{r}\left(n\right)_{r}\left(\frac{1}{\mu ty}\right)^{r}\left({tr}^{r}\mathbf{ \Delta }\right) \mathcal{W}\left(t\right) dt. $$
(26)

Consider the summation component in (26). This component can be rewritten as follows:

$$ \sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{\left(yt^{2}\mu \right)^{k}}{ k!\left(n\right)_{k}}\binom{k}{r}\left(n\right)_{r}\left(\frac{1}{\mu ty }\right)^{r}\left({tr}^{r}\mathbf{\Delta }\right) =\sum_{j=0}^{\infty }\frac{\left(yt^{2}\mu \right)^{j}}{j!\left(n\right)_{j}}\text{ } _{1}F_{1}\left(n;n+j,t{tr}\mathbf{\Delta }\right) $$
(27)

Substituting (27) into (26) leaves the final result.

Proof of Theorem 6

Substituting p=2, see from (14) and (15) that

$$ P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) =\int\limits_{ \mathbb{R}^{+}}y^{2n}\frac{{etr}\left(-t\mathbf{\Delta }\right) { etr}\left(-ty\mathbf{\Sigma }^{-1}\right) }{\mathcal{C}\Gamma_{2}(n)\det \left(\mathbf{\Sigma }\right)^{n}}\sum_{k=0}^{\infty }\sum_{r=0}^{k}\frac{ t^{2n+2k}\left(y\mu \right)^{k}}{k!\left(n\right)_{k}}\binom{k}{r} \mathcal{Q}_{n,2,t}^{r}\left(y\right) \mathcal{W}\left(t\right) dt $$
(28)

where from (15) and Dharmawansa and McKay (2011), eq. 41:

$$\begin{array}{@{}rcl@{}} \mathcal{Q}_{n,2,t}^{r}\left(y\right) &=&\int\limits_{\mathbf{Y}}\det \left(\mathbf{I}_{2}\mathbf{+Y}\right)^{n-2}{etr}\left(-ty\mathbf{ \Sigma }^{-1}\mathbf{Y}\right) {tr}^{r}\left(\mathbf{\gamma \gamma } ^{H}\mathbf{Y}\right) d\mathbf{Y} \\ &=&\sum_{i_{1}=0}^{n-2}\sum_{i_{2}=0}^{i_{1}}\binom{n-2}{i_{1}}\binom{i_{1}}{ i_{2}}\int\limits_{\mathbf{Y}}{tr}^{i_{2}}\left(\mathbf{Y}\right) \det \left(\mathbf{Y}\right)^{i_{1}-i_{2}}{etr}\left(-ty\mathbf{\Sigma } ^{-1}\mathbf{Y}\right) {tr}^{r}\left(\mathbf{\gamma \gamma }^{H} \mathbf{Y}\right) d\mathbf{Y.} \end{array} $$

where \(\mathbf {Y\in }\mathbb {C}_{2}^{2\times 2}\). By using Dharmawansa and McKay (2011), eq. 17 and setting p=i2, a=i1−i2+2, t=r, A=tyΣ−1 and R=γγH, see that

$$\begin{array}{@{}rcl@{}} &&\int\limits_{\mathbf{Y}}{tr}^{i_{2}}\left(\mathbf{Y}\right) \det \left(\mathbf{Y}\right)^{i_{1}-i_{2}}{etr}\left(-ty\mathbf{\Sigma } ^{-1}\mathbf{Y}\right) {tr}^{r}\left(\mathbf{\gamma \gamma }^{H} \mathbf{Y}\right) d\mathbf{Y} \\ &=&\frac{i_{2}!\left(i_{1}-i_{2}+2\right)_{r}\mathcal{C}\Gamma_{2}\left(i_{1}-i_{2}+2\right) }{\left(\det \left(ty\mathbf{\Sigma }^{-1}\right) \right)^{i_{1}-i_{2}+2+\frac{i_{2}}{2}}} \\ &&\times \sum_{h=0}^{\min \left(i_{2},r\right) }\frac{\left(-1\right)^{h} \binom{r}{h}}{\left(\det \left(ty\mathbf{\Sigma }^{-1}\right) \right)^{ \frac{h}{2}}}{tr}^{r-h}\left(\mathbf{\gamma \gamma }^{H}ty\mathbf{ \Sigma }^{-1}\right) {tr}^{h}\left(\mathbf{\gamma \gamma }^{H}\right) \mathfrak{C}_{i_{2}-h}^{i_{1}-i_{2}+2+r}\left(\frac{{tr}\left(ty \mathbf{\Sigma }^{-1}\right) }{2\sqrt{\det \left(ty\mathbf{\Sigma } ^{-1}\right) }}\right). \end{array} $$

Noting that γHγ=1 (see (13)), it follows that

$$\begin{array}{@{}rcl@{}} &&\mathcal{Q}_{n,2,t}^{r}\left(y\right) \notag \\ &=&\sum_{i_{1}=0}^{n-2}\sum_{i_{2}=0}^{i_{1}}\sum_{h=0}^{\min \left(i_{2},r\right) }\left(-1\right)^{h}\binom{n-2}{i_{1}}\binom{i_{1}}{i_{2}} \binom{r}{h}i_{2}!\left(i_{1}-i_{2}+2\right)_{r}\mathcal{C}\Gamma_{2}\left(i_{1}-i_{2}+2\right) \left(\det \mathbf{\Sigma }\right)^{i_{1}- \frac{i_{2}}{2}+\frac{h}{2}+2} \notag \\ &&\times \left(\frac{{tr}\left(\mathbf{\Delta }\right) }{\mu }\right)^{r-h}\mathfrak{C}_{i_{2}-h}^{i_{1}-i_{2}+2+r}\left(\frac{1}{2}{tr} \left(\mathbf{\Sigma }^{-1}\right) \sqrt{\det \left(\mathbf{\Sigma } \right) }\right) t^{i_{2}-2i_{1}-4-r}y^{i_{2}-2i_{1}-4-r}. \end{array} $$
(29)

Substituting (29) into (28), the following is obtained:

$$\begin{array}{@{}rcl@{}} &&P\left(\lambda_{\min }\left(\mathbf{S}\right) >y\right) \\ &=&\int\limits_{ \mathbb{R}^{+}}\frac{{etr}\left(-t\mathbf{\Delta }\right) {etr} \left(-ty\mathbf{\Sigma }^{-1}\right) }{\mathcal{C}\Gamma_{2}(n)\det \left(\mathbf{\Sigma }\right)^{n-2}}\sum_{k=0}^{\infty }\sum_{r=0}^{k} \frac{\left(yt^{2}\mu \right)^{k}}{k!\left(n\right)_{k}}\binom{k}{r} \left(\frac{{tr}\left(\mathbf{\Delta }\right) }{yt\mu }\right)^{r} \\ &&\times \sum_{i_{1}=0}^{n-2}\sum_{i_{2}=0}^{i_{1}}\sum_{h=0}^{\min \left(i_{2},r\right) }\left(-1\right)^{h}\binom{n-2}{i_{1}}\binom{i_{1}}{i_{2}} \binom{r}{h}i_{2}!\left(i_{1}-i_{2}+2\right)_{r}\mathcal{C}\Gamma _{2}\left(i_{1}-i_{2}+2\right) \\ &&\times \left(\frac{\mu }{{tr}\left(\mathbf{\Delta }\right) }\right) ^{h}\left(\det \mathbf{\Sigma }\right)^{i_{1}+\frac{h}{2}-\frac{i_{2}}{2}} \mathfrak{C}_{i_{2}-h}^{i_{1}-i_{2}+2+r}\left(\frac{1}{2}{tr}\left(\mathbf{\Sigma }^{-1}\right) \sqrt{\det \left(\mathbf{\Sigma }\right) } \right) \left(ty\right)^{2n+i_{2}-2i_{1}-4}\mathcal{W}\left(t\right) dt \end{array} $$

which leaves the final result.

Notes

  1. E(·) denotes the expectation operator.

  2. XH denotes the conjugate transpose of X.

  3. \(\mathbb {C}_{1}^{n\times p}\) denotes the space of n×p complex matrices, and \(\mathbb {C}_{2}^{p\times p}\) denotes the space of Hermitian positive definite matrices of dimension p.

  4. etr(·) denotes \(e^{{tr}\left (\cdot \right) }\), where tr(X) denotes the trace of matrix X; X−1 denotes the inverse of matrix X.

  5. \(\mathbb {R}^{+}\) denotes the positive real line.

  6. \(\mathcal {C}\Gamma _{p}(a)=\pi ^{\frac {1}{2} p(p-1)}\prod \limits _{i=1}^{p}\Gamma \left (a-(i-1)\right) \) (see James (1964)).

  7. Cκ(Z) denotes the complex zonal polynomial of Z corresponding to the partition κ=(k1,…,kp), k1≥⋯≥kp≥0, k1+⋯+kp=k, and \(\sum _{\kappa }\) denotes summation over all partitions κ. [n]κ denotes the generalized Pochhammer coefficient relating to the partition κ.

Abbreviations

cdf:

Cumulative distribution function

ISCW:

Integral series of complex Wishart

MIMO:

Multiple-input-multiple-output

pdf:

Probability density function

RMT:

Random matrix theory

SNR:

Signal-to-noise ratio

References

  1. Bateman, H., Erdélyi, A.: Higher Transcendental Functions, Vol. I. McGraw–Hill, New York (1953).

  2. Bekker, A., Arashi, M., Ferreira, J. T.: New bivariate gamma types with MIMO application. Commun. Stat. Theory Methods (2018). https://doi.org/10.1080/03610926.2017.1417428.

  3. Chiani, M., Win, M. Z., Shin, H.: MIMO Networks: the effect of interference. IEEE Trans. Inf. Theory. 56(1), 336–350 (2010).

  4. Choi, S. H., Smith, P., Allen, B., Malik, W. Q., Shafi, M.: Severely fading MIMO channels: Models and mutual information. In: IEEE International Conference on Communications, pp. 4628–4633. IEEE (2007).

  5. Chu, K. C.: Estimation and decision for linear systems with elliptical random process. IEEE Trans. Autom. Control.18, 499–505 (1973).

  6. Constantine, A. G.: Some noncentral distribution problems in multivariate analysis. Ann. Math. Stat.34, 1270–1285 (1963).

  7. de Souza, R. A. A., Yacoub, M. D.: Bivariate Nakagami-m distribution with arbitrary correlation and fading parameters. IEEE Trans. Wirel. Commun.7(12), 5227–5232 (2008).

  8. Dharmawansa, P., McKay, M. R.: Extreme eigenvalue distributions of some complex noncentral Wishart and gamma-Wishart random matrices. J. Multivar. Anal.102, 847–868 (2011).

  9. Ferreira, J. T., Bekker, A., Arashi, M.: Advances in Wishart-type modelling of channel capacity. REVSTAT (2020).

  10. Gradshteyn, I. S., Ryzhik, I. M.: Table of Integrals, Series, and Products, 7th Ed. Academic Press, New York (2007).

  11. Gross, K. I., Richards, D. S. P.: Total positivity, spherical series, and hypergeometric functions of matrix argument. J. Approximation Theory. 59, 224–246 (1989).

  12. Gupta, A. K., Varga, T.: Normal mixture representations of matrix variate elliptically contoured distributions. Sankhya. 57, 68–78 (1995).

  13. Hansen, J., Bolcskei, H.: A geometrical investigation of the rank-1 Ricean MIMO channel at high SNR. In: IEEE Proceedings International Symposium on Information Theory. IEEE (2004).

  14. Heath, R. W., Love, D. J.: Multimode antenna selection for spatial multiplexing systems with linear receivers. IEEE Trans. Signal Process. 53(8), 3042–3056 (2005).

  15. He, X., Chu, L., Qiu, R. C., Ai, Q., Ling, Z.: A novel data-driven situation awareness approach for future grids using large random matrices for big data modeling. IEEE Access. 6, 13855–13865 (2018).

  16. He, Y., Yu, F. R., Zhao, N., Yin, H., Yao, H., Qiu, R. C.: Big data analytics in mobile cellular networks. IEEE Access. 4, 1985–1996 (2016).

  17. James, A. T.: Distributions of matrix variate and latent roots derived from normal samples. Ann. Math. Stat. 35, 475–501 (1964).

  18. Jayaweera, S. K., Poor, H. V.: MIMO capacity results for Rician fading channels. In: Proceedings of the IEEE Global Telecommunications Conference. IEEE (2003).

  19. Kang, M., Alouini, M.: Capacity of MIMO Rician channels. IEEE Trans. Wirel. Commun. 5(1), 112–123 (2006).

  20. Lachos, V. H., Labra, F. V.: Multivariate skew-normal/independent distributions: properties and inference. Pro Math. 28(56), 11–53 (2014).

  21. Mathai, A. M.: Jacobians of Matrix Transformations and Functions of Matrix Argument. World Science Publishing Co., Singapore (1997).

  22. McKay, M., Collings, I.: General capacity bounds for spatially correlated MIMO Rician channels. IEEE Trans. Inf. Theory. 51(9), 625–672 (2005).

  23. McKay, M.: Random Matrix Theory Analysis of multiple antenna communication systems [Unpublished PhD thesis]. University of Sydney (2006).

  24. Ollila, E., Eriksson, J., Koivunen, V.: Complex elliptically symmetric random variables - generation, characterization, and circularity tests. IEEE Trans. Signal Process. 59(1), 58–69 (2011).

  25. Provost, S. B., Cheong, Y. H.: The distribution of Hermitian quadratic forms in elliptically contoured random vectors. J. Stat. Plan. Infer. 102, 303–316 (2002).

  26. Qiu, R. C.: Large random matrices and big data analytics. In: Big Data of Complex Networks. Chapman & Hall/CRC Big Data Series, Boca Raton, FL (2017).

  27. Ratnarajah, T.: Topics in complex random matrices and information theory [Unpublished PhD thesis]. University of Ottawa (2003).

  28. Ratnarajah, T., Vaillancourt, R.: Quadratic forms on complex random matrices and multiple-antenna systems. IEEE Trans. Inf. Theory. 51(8), 2976–2984 (2005).

  29. Taricco, G., Riegler, E.: On the ergodic capacity of correlated Rician fading MIMO channels with interference. IEEE Trans. Inf. Theory. 57(7), 4123–4138 (2011).

  30. Yacoub, M. D.: The κ-μ distribution and the η-μ distribution. IEEE Antennas Propag. Mag. 49(1), 68–81 (2007).

  31. Zhang, C., Qiu, R. C.: Massive MIMO as a big data system: random matrix models and testbed. IEEE Access. 3, 837–851 (2015).

  32. Zhou, S., Alfano, G., Nordio, A., Chiasserini, C.: Ergodic capacity analysis of MIMO relay network over Rayleigh-Rician channels. IEEE Commun. Lett. 19(4), 601–604 (2015).


Acknowledgements

The authors acknowledge the support of the StatDisT group at the University of Pretoria, South Africa, as well as Dr. D.A. Burger for editorial assistance and insight. Furthermore, the authors thank the anonymous reviewer, associate editor, as well as the Editor-in-Chief for their constructive suggestions which improved the paper.

Funding

This work is based on research supported in part by the National Research Foundation of South Africa (SARChI Research Chair UID: 71199; Grant ref. CPRR160403161466, nr. 105840). Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF.

Availability of data and materials

Not applicable.

Author information

JF wrote the manuscript draft and the code for the numerical examples. JF derived the mathematical expressions; AB provided critical feedback and checked the derivations for correctness. Both authors read and approved the final manuscript.

Correspondence to Johannes T. Ferreira.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Keywords

  • Elliptical
  • Multiple-input-multiple-output (MIMO)
  • Rank one
  • Wishart type