
A note on inconsistent families of discrete multivariate distributions

Abstract

We construct a d-dimensional discrete multivariate distribution for which the distribution of any proper subset of its components belongs to a specific family of distributions. However, the joint d-dimensional distribution fails to belong to that family; in other words, it is ‘inconsistent’ with the distributions of these subsets. We also address preservation of this ‘inconsistency’ property for the symmetric Binomial distribution, and for some discrete distributions arising from the multivariate discrete normal distribution.

Introduction

If a multivariate distribution is parametrically specified, its lower-dimensional marginals often follow the same type of distribution in the appropriate dimension. For example, all the lower-dimensional distributions of a multivariate Gaussian distribution are Gaussian. The converse, however, is not necessarily true. For example, Dutta and Genton (2014) gave a construction of a non-Gaussian multivariate distribution with all lower-dimensional marginals Gaussian and considered generalizations of this construction for a certain class of elliptical and skew-elliptical distributions. In the discrete case, all the lower-dimensional marginals of a bivariate Binomial distribution (Bairamov and Gultekin 2010) follow the Binomial distribution. Conversely, given a set of marginal distributions, different dependence structures can give rise to different joint distributions. In a more general setting, Hoeffding (1940) and Fréchet (1951) independently obtained a characterization of the class of bivariate distributions with given univariate marginals. Characterization problems for discrete distributions using conditional distributions and their expectations have been investigated by several authors; see, e.g., Dahiya and Korwar (1977), Ruiz and Navarro (1995), and Nguyen et al. (1996). Conway (1979) discussed some additional properties and derived appropriate relationships for such systems, while a method of constructing multivariate distributions with specified univariate marginals and a given correlation matrix was studied by Cuadras (1992).

The main aim of this paper is to address the converse question in the discrete multivariate setup. More specifically, we construct a discrete multivariate distribution (see, e.g., Johnson et al. (1997)) all of whose lower-dimensional marginals follow the same symmetric distribution, while the joint distribution does not conform to that pattern and has a different distribution. We say that the joint distribution is ‘inconsistent’ with the distribution of its marginals. The main idea of this construction is a transformation based on the sign function, applied to a class of discrete symmetric random variables. The sign function plays a key role in yielding a distribution with ‘restricted support’.

The structure of the paper is as follows. In Section 2, we first motivate our construction for the bivariate set-up with a simple case starting from a symmetric Binomial distribution. Generalization of this result to symmetric discrete distributions with a specific support in higher dimensions is explored in Section 3 where our main result is stated. We also address preservation of this ‘inconsistency’ property for the symmetric Binomial distribution in Section 4, a class of symmetric discrete distributions constructed from symmetric continuous distributions in Section 5, and the multivariate discrete skew-normal distribution in Section 6. Proofs of all the theorems are provided in the Appendix.

A motivating bivariate example

Consider a random variable U that follows a Binomial distribution with parameters n=2 and \(p=\frac {1}{2}\). The support of this distribution is the set \({\mathcal {N}}_{2}=\{0,1,2\}\), and it has the following probability mass function (pmf):

$$q_{0}(u)=\left(\begin{array}{l}2\\ u\end{array}\right) \frac{1}{4}\, \text{if}\, u \in {\mathcal{N}}_{2}. $$

Let \(U_{1}\) and \(U_{2}\) be two independent copies of U. The support of the joint distribution of \(U_{1}\) and \(U_{2}\) is \({\mathcal {N}}_{2}^{2} = \{0,1,2\} \times \{0,1,2\}\). Thus, the joint pmf of \(U_{1}\) and \(U_{2}\) is

$$q_{0}(u_{1},u_{2})=\left(\begin{array}{c}2\\ {u_{1}}\end{array}\right) \left(\begin{array}{c}2\\ {u_{2}}\end{array}\right) \frac{1}{16}\, \text{if}\, (u_{1},u_{2}) \in {\mathcal{N}}_{2}^{2}. $$

Define X=U−1. The support of the distribution of X is the set \({\mathcal {Z}}_{1}=\{-1,0,1\}\), and it has the pmf:

$$q(x)=\left(\begin{array}{c}2\\ {x+1}\end{array}\right) \frac{1}{4}\, \text{if}\, x \in {\mathcal{Z}}_{1}. $$

Clearly, this is a version of the Binomial distribution which is symmetric about 0. Let \(X_{1}\) and \(X_{2}\) be two independent copies of X. The support of the joint distribution of \((X_{1},X_{2})^{T}\) is \({\mathcal {Z}}_{1}^{2} = \{-1,0,1\} \times \{-1,0,1\}\). Thus, the joint pmf of \(X_{1}\) and \(X_{2}\) is

$$ q(x_{1}, x_{2})=\left(\begin{array}{c}2\\ {x_{1}+1}\end{array}\right) \left(\begin{array}{c}2\\ {x_{2}+1}\end{array}\right) \frac{1}{16}\, \text{if}\, (x_{1},x_{2}) \in {\mathcal{Z}}_{1}^{2}. $$
(1)

Recall the univariate sign function, namely,

$$S(x)= \left\{\begin{array}{ll} -1, &\text{if}~ x < 0,\\ 0, &\text{if}~ x = 0,\\ 1, &\text{if}~ x > 0. \end{array}\right. $$

We use a modified version of this sign function. Consider the following transformation:

$$ X_{1}^{*}=X_{1}S_{2,1}\text{ and }X_{2}^{*}=X_{2}S_{1,2}, $$

where \(S_{1,2}\) and \(S_{2,1}\) are defined as follows:

$$S_{1,2}= \left\{\begin{array}{ll} -1, &\text{if}~ X_{1} < 0,~ \text{or} ~(X_{1}=0~ \text{and}~ X_{2} \neq 0),\\ 0, &\text{if}~ (X_{1}=0 ~\text{and}~ X_{2}=0),\\ 1, &\text{if}~ X_{1} > 0, \end{array}\right. $$

and

$$S_{2,1}= \left\{\begin{array}{ll} -1, &\text{if}~ X_{2} < 0,~ \text{or}~(X_{2}=0 ~\text{and}~ X_{1} \neq 0),\\ 0, &\text{if}~(X_{2}=0~ \text{and}~ X_{1}=0),\\ 1, &\text{if}~ X_{2} > 0. \end{array}\right. $$

We now study the distribution of \(X_{1}^{*}\).

$${\begin{aligned} P(X_{1}^{*}=-1) &= P(X_{1}S_{2,1}=-1)\\ &= P(X_{1}=1, S_{2,1}=-1)+P(X_{1}=-1, S_{2,1}=1)\\ &= P(X_{1}=1, X_{2}<0)+P(X_{1}=1, X_{2}=0, X_{1} \neq 0)+P(X_{1}=-1, X_{2}>0)\\ &= P(X_{1}=1, X_{2}=-1)+P(X_{1}=1, X_{2}=0)+P(X_{1}=-1, X_{2}=1)\\ &= \frac{1}{16}+\frac{1}{8}+\frac{1}{16} = \frac{1}{4}, \end{aligned}} $$

and

$$\begin{array}{*{20}l} P(X_{1}^{*}=0) &= P(X_{1}S_{2,1}=0)\\ &= P(X_{1}=0, S_{2,1} \neq 0)+P(X_{1} \neq 0, S_{2,1}=0)+P(X_{1}=0, S_{2,1}=0)\\ &= \big\{P(X_{1}=0, X_{2}<0)+P(X_{1}=0, X_{2}>0)\big\}+0+P(X_{1}=0, X_{2}=0)\\ &= P(X_{1}=0, X_{2}=-1)+P(X_{1}=0, X_{2}=1)+P(X_{1}=0, X_{2}=0)\\ &= \frac{1}{8}+\frac{1}{8}+\frac{1}{4} = \frac{1}{2}. \end{array} $$

This now implies that \(P(X_{1}^{*}=1)=1/4\). Hence, the distribution of \(X_{1}^{*}\) is the same as that of the distribution of X. Similarly, one can show that \(X_{2}^{*}\) is identically distributed with X.
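Before continuing, the marginal computations above can be double-checked by exact enumeration over the nine outcomes of \((X_{1},X_{2})\). The minimal sketch below (with our own helper names s12 and s21, not part of the paper) confirms that both \(X_{1}^{*}\) and \(X_{2}^{*}\) have the same law as X:

```python
from fractions import Fraction
from collections import Counter
from math import comb

# pmf of X = U - 1 with U ~ Binomial(2, 1/2); support {-1, 0, 1}
q = {x: Fraction(comb(2, x + 1), 4) for x in (-1, 0, 1)}

def s21(x1, x2):
    # modified sign S_{2,1}: driven by X_2, ties broken by X_1
    if x2 < 0 or (x2 == 0 and x1 != 0):
        return -1
    return 0 if (x2 == 0 and x1 == 0) else 1

def s12(x1, x2):
    # modified sign S_{1,2}: driven by X_1, ties broken by X_2
    if x1 < 0 or (x1 == 0 and x2 != 0):
        return -1
    return 0 if (x1 == 0 and x2 == 0) else 1

marg1, marg2 = Counter(), Counter()
for x1 in q:
    for x2 in q:
        p = q[x1] * q[x2]
        marg1[x1 * s21(x1, x2)] += p   # law of X_1^* = X_1 S_{2,1}
        marg2[x2 * s12(x1, x2)] += p   # law of X_2^* = X_2 S_{1,2}

# both marginals equal {-1: 1/4, 0: 1/2, 1: 1/4}, i.e. the law of X
assert marg1 == marg2 == Counter(q)
```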

By our construction, the support of the joint distribution is on a restricted set, i.e.,

$$X_{1}^{*} \cdot X_{2}^{*}=X_{1}S_{2,1} \cdot X_{2}S_{1,2}=X_{1}S_{1,2} \cdot X_{2}S_{2,1} $$

which is non-negative with probability one. The joint distribution of \((X_{1}^{*}, X_{2}^{*})^{T}\) is as follows:

$$\begin{array}{*{20}l} &P(X_{1}^{*}\,=\,-1, X_{2}^{*}\,=\,-1) \,=\, P(X_{1}\,=\,-1, X_{2}=1)+P(X_{1}=1, X_{2}=-1) = \frac{1}{16}+\frac{1}{16} = \frac{1}{8},\\ &P(X_{1}^{*}=-1, X_{2}^{*}=0) = P(X_{1}=1, X_{2}=0) = \frac{1}{8},\\ &P(X_{1}^{*}=0, X_{2}^{*}=-1) = P(X_{1}=0, X_{2}=1) = \frac{1}{8},\\ &P(X_{1}^{*}=0, X_{2}^{*}=0) = P(X_{1}=0, X_{2}=0) = \frac{1}{4},\\ &P(X_{1}^{*}=0, X_{2}^{*}=1) = P(X_{1}=0, X_{2}=-1) = \frac{1}{8},\\ &P(X_{1}^{*}=1, X_{2}^{*}=0) = P(X_{1}=-1, X_{2}=0) = \frac{1}{8}, \text{and} \\ &P(X_{1}^{*}=1, X_{2}^{*}=1) = P(X_{1}=-1, X_{2}=-1)+P(X_{1}=1, X_{2}=1) = \frac{1}{16}+\frac{1}{16} = \frac{1}{8}. \end{array} $$

Let \(\mathbb {I}_{A}(x)\) be the indicator function, which is defined as

$$\mathbb{I}_{A}(x)= \left\{\begin{array}{ll} 1, &\text{if} ~x \in A,\\ 0, &\text{if}~ x \notin A. \end{array}\right. $$

It is easy to check that the joint pmf of \((X_{1}^{*}, X_{2}^{*})^{T}\) can be re-written in a concise form as follows:

$$q^{*}(x_{1},x_{2}) = q(x_{1},x_{2}) \mathbb{I}_{\{0\}}(x_{1}x_{2}) + 2q(x_{1},x_{2})\mathbb{I}_{(0,\infty)}(x_{1}x_{2}) \text{~for~} (x_{1},x_{2}) \in {\mathcal{Z}}_{1}^{2}, $$

where \(q(x_{1},x_{2})\) is defined in (1). The joint distribution of \((X_{1}^{*},X_{2}^{*})^{T}\) is asymmetric and clearly does not belong to the family of distributions containing the joint distribution of \((X_{1},X_{2})^{T}\), as the support of \(\left (X_{1}^{*},X_{2}^{*}\right)^{T}\) is restricted to two quadrants only, although the univariate distributions of \(X_{1}^{*}\) and \(X_{2}^{*}\) are the same as that of X. The plot in Fig. 1 demonstrates the basic idea of shifting probability mass to only two quadrants (see the right panel of Fig. 1), and clearly shows the difference between the pmfs \(q(x_{1},x_{2})\) and \(q^{*}(x_{1},x_{2})\).

Fig. 1 3D plots of the two probability mass functions \(q(x_{1},x_{2})\) and \(q^{*}(x_{1},x_{2})\) on \({\mathcal {Z}}_{1}^{2}\)
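The concise formula above can also be checked directly; the lines below (our own verification, not from the paper) confirm that \(q^{*}\) sums to one and assigns zero mass whenever \(x_{1}x_{2}<0\):

```python
from fractions import Fraction
from math import comb

q1 = {x: Fraction(comb(2, x + 1), 4) for x in (-1, 0, 1)}   # univariate pmf q

def q_star(x1, x2):
    # q*(x1, x2) = q(x1, x2) I{x1 x2 = 0} + 2 q(x1, x2) I{x1 x2 > 0}
    base = q1[x1] * q1[x2]
    if x1 * x2 == 0:
        return base
    return 2 * base if x1 * x2 > 0 else Fraction(0)

assert sum(q_star(a, b) for a in q1 for b in q1) == 1             # q* is a pmf
assert all(q_star(a, b) == 0 for a in q1 for b in q1 if a * b < 0)  # restricted support
```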

This motivates us to construct a d-dimensional random vector such that the distribution of any proper subset of its components belongs to a certain family of distributions, but the joint distribution, with all the d components taken together, fails to conform to that family of distributions.

General symmetric discrete multivariate distributions

We now consider a general discrete distribution symmetric about the point 0 with support on a finite or countably infinite set \(\mathbb {S}\), given by the pmf

$$ q(x)=\sum\limits_{j \in \mathbb{S}} \delta(x-j)m_{j}, $$
(2)

where \(0 \leq m_{j} \leq 1\) and \(m_{j} = m_{-j}\) for all \(j \in \mathbb{S}\), with \(\sum_{j \in \mathbb{S}} m_{j} = 1\).

Let \(X_{1},\ldots,X_{d}\) be independent copies of \(X \sim q(\cdot)\). Here, the function δ(·) is defined as follows:

$$\delta(x)= \left\{\begin{array}{ll} 1, &\text{if} ~x = 0,\\ 0, &\text{if}~ x \neq 0. \end{array}\right. $$

We have \(\mathbb{I}_{\{0\}}(x)=\delta(x)\). The joint pmf of \((X_{1},\ldots,X_{d})^{T}\) is given by

$$q(\mathbf{x})=\prod\limits_{i=1}^{d} q(x_{i})=\prod\limits_{i=1}^{d}\left\{\sum\limits_{j \in \mathbb{S}} \delta(x_{i}-j)m_{j}\right\}, $$

where \(\mathbf {x} = (x_{1}, \ldots, x_{d})^{T} \in \mathbb {S}^{d}\).

Again, consider the transformation \((X_{1}, \ldots, X_{d}) \rightarrow \left (X_{1}^{*}, \ldots, X_{d}^{*}\right)\) such that

$$ X_{1}^{*}=X_{1}S_{2,1}, \ldots, X_{i-1}^{*}=X_{i-1}S_{i,i-1},\ldots, X_{d-1}^{*}=X_{d-1}S_{d,d-1}, X_{d}^{*}=X_{d}S_{1,d}, $$
(3)

where the modified sign function is defined as

$$S_{i,i-1}= \left\{\begin{array}{ll} -1, &\text{if}~ X_{i} < 0~ \text{or}~ (X_{i}=0 ~\text{and}~ X_{i-1} \neq 0),\\ 0, &\text{if} ~(X_{i}=0~ \text{and} ~X_{i-1}=0),\\ 1, &\text{if}~ X_{i} > 0, \end{array}\right. $$

for i=2,…,d, and

$$S_{1,d}= \left\{\begin{array}{ll} -1, &\text{if}~ X_{1} < 0~ \text{or}~ (X_{1}=0 ~\text{and} ~X_{d} \neq 0),\\ 0, &\text{if}~ (X_{1}=0 ~\text{and}~ X_{d}=0),\\ 1, &\text{if}~ X_{1} > 0. \end{array}\right. $$

Now, \(\prod _{i=1}^{d} X_{i}^{*} = X_{1}S_{2,1} \cdot X_{2}S_{3,2} \cdots X_{d}S_{1,d} = X_{1}S_{1,d} \cdot X_{2}S_{2,1} \cdots X_{d}S_{d,d-1}\). Fix \(i \in \{2,\ldots,d\}\). If \(X_{i}<0\), then \(S_{i,i-1}=-1\), hence \(X_{i}S_{i,i-1}>0\); if \(X_{i}>0\), then \(S_{i,i-1}=1\), hence again \(X_{i}S_{i,i-1}>0\). Also, \(X_{i}=0\) implies \(X_{i}S_{i,i-1}=0\). To summarize, \(X_{i}S_{i,i-1} \geq 0\) for any i=2,…,d. Similarly, \(X_{1}S_{1,d} \geq 0\). Hence, we obtain \(\prod _{i=1}^{d} X_{i}^{*} \geq 0\).
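To make transformation (3) concrete, here is a minimal sketch (the helper names modified_sign and transform are ours, and the symmetric variates are an arbitrary choice) that also illustrates numerically that \(\prod_{i=1}^{d} X_{i}^{*} \geq 0\):

```python
import numpy as np

def modified_sign(lead, tie):
    # modified sign function: -1 if lead < 0 or (lead == 0 and tie != 0),
    # 0 if lead == tie == 0, and 1 if lead > 0
    if lead < 0 or (lead == 0 and tie != 0):
        return -1
    return 0 if (lead == 0 and tie == 0) else 1

def transform(x):
    # transformation (3): X_i^* = X_i S_{i+1,i} for i < d, and X_d^* = X_d S_{1,d}
    d = len(x)
    return [x[i] * modified_sign(x[(i + 1) % d], x[i]) for i in range(d)]

rng = np.random.default_rng(0)
d = 4
# i.i.d. copies of a symmetric discrete variate (here Binomial(2, 1/2) shifted to {-1, 0, 1})
sample = rng.binomial(2, 0.5, size=(10_000, d)) - 1
starred = np.array([transform(row) for row in sample])
assert (starred.prod(axis=1) >= 0).all()   # the product of the transformed coordinates is never negative
```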

Theorems 1 and 2 below state the joint distribution of \(\left (X_{1}^{*}, \ldots, X_{d}^{*}\right)^{T}\) as well as the distribution of the lower-dimensional vectors.

Theorem 1

The joint pmf of \(\left (X_{1}^{*}, \ldots, X_{d}^{*}\right)^{T}\) is

$$q^{*}(\mathbf{x})= q(\mathbf{x}) \text{} \mathbb{I}_{\{0\}}\left(\prod\limits_{i=1}^{d} x_{i}\right) + 2 q(\mathbf{x}) \text{} \mathbb{I}_{(0, \infty)}\left(\prod\limits_{i=1}^{d} x_{i}\right) \text{for~} \mathbf{x} \in \mathbb{S}^{d}. $$

Theorem 2

Any sub-vector \(\left (X^{*}_{k_{1}}, \ldots, X^{*}_{k_{d'}}\right)^{T}\) of \(\left (X_{1}^{*}, \ldots, X_{d}^{*}\right)^{T}\) with \(d' < d\) has independent components, and the joint distribution of \(\left (X^{*}_{k_{1}}, \ldots, X^{*}_{k_{d'}}\right)^{T}\) is the same as that of \(\left (X_{k_{1}}, \ldots, X_{k_{d'}}\right)^{T}\).

In particular, this result holds true if \(\mathbb {S}=\mathbb {Z}=\{\ldots,-1,0,1,\ldots \}\) or any proper subset of \(\mathbb {Z}\) which is symmetric about 0 (i.e., \(x \in \mathbb {S} \Leftrightarrow -x \in \mathbb {S}\)).
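Theorems 1 and 2 can also be verified by exact enumeration on a small support; the sketch below assumes \(\mathbb{S}=\{-1,0,1\}\), d=3 and a uniform choice of the \(m_{j}\)'s (any symmetric choice would do), and the function name q_star is ours:

```python
from fractions import Fraction
from itertools import product
from collections import Counter

S = (-1, 0, 1)
m = {j: Fraction(1, 3) for j in S}   # a symmetric pmf on S (m_j = m_{-j})
d = 3

def q_star(x):
    # Theorem 1: q*(x) = q(x) if prod(x) = 0, 2 q(x) if prod(x) > 0, and 0 otherwise
    base = Fraction(1)
    prod = 1
    for xi in x:
        base *= m[xi]
        prod *= xi
    if prod == 0:
        return base
    return 2 * base if prod > 0 else Fraction(0)

assert sum(q_star(x) for x in product(S, repeat=d)) == 1   # q* is a valid pmf

# Theorem 2: every (d-1)-dimensional marginal of q* factorises as a product of m's,
# i.e. it coincides with the corresponding marginal of the original independent vector
for drop in range(d):
    marg = Counter()
    for x in product(S, repeat=d):
        marg[tuple(xi for i, xi in enumerate(x) if i != drop)] += q_star(x)
    assert all(p == m[y[0]] * m[y[1]] for y, p in marg.items())
```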

The symmetric binomial distribution

We started our investigation in Section 2 with a Binomial distribution supported on the set \({\mathcal {N}}_{2}\), shifted so as to be symmetric about 0, and extended it to the case of a general discrete distribution symmetric about 0, supported on a finite or countably infinite subset \(\mathbb {S}\) of \(\mathbb {R}\). However, it is interesting to see whether such ‘inconsistency’ results continue to hold for other symmetric distributions for which the point of symmetry is not necessarily 0.

Suppose that U follows a symmetric distribution with a finite or countably infinite support and point of symmetry \(u_{0}\). Define \(X=U-u_{0}\). Then, X follows a distribution symmetric about 0 with a finite or countably infinite support (as mentioned in (2)).

Assume \(U \sim \text{Binomial}(n,1/2)\), and take \(u_{0}=n/2\). We now have versions of Theorems 1 and 2 for symmetric Binomial distributions.
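For instance, with n = 4 the centred pmf of \(X=U-n/2\) can be tabulated exactly and its symmetry about 0 checked (a small sketch; the variable names are ours):

```python
from fractions import Fraction
from math import comb

n = 4
u0 = Fraction(n, 2)
# pmf of X = U - n/2 for U ~ Binomial(n, 1/2), on the shifted support {-n/2, ..., n/2}
q = {Fraction(u) - u0: Fraction(comb(n, u), 2**n) for u in range(n + 1)}
assert all(q[x] == q[-x] for x in q)   # symmetric about 0, so Theorems 1 and 2 apply
```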

Discrete symmetric distributions

Given any continuous distribution with distribution function (df) F on \(\mathbb {R}\), we can define its ‘discrete analogue’ (see, e.g., Alzaatreh et al. 2012) with pmf q d (·) as follows:

$$ q_{d}(x)=F(x+1)-F(x), ~x \in \mathbb{Z}, $$

with \(q_{d}(x)\geq 0\) and \(\sum _{x \in \mathbb {Z}}q_{d}(x)=1\). If \(U \sim F(\cdot)\), then \([U] \sim q_{d}(\cdot)\). Here, \([u]\) denotes the largest integer less than or equal to u.

Assume F to be symmetric about the point 0, i.e., F(x)+F(−x)=1 for any \(x \in \mathbb {R}\). Note that \(q_{d}(0)=q_{d}(-1)\), and the point of symmetry of [U] is clearly \(-\frac {1}{2}\). Define \(X=[U]+\frac {1}{2}\). Then, X is a discrete variate with support on a countably infinite set \(\mathbb {S}\) which is symmetric about the point 0, say, \(X \sim q_{dS}(\cdot)\). In particular, if we take F(x)=Φ(x), where Φ(·) is the df of the standard normal distribution, then we obtain the discrete normal distribution (dN) (Roy 2003). The pmf of X (say, \(q_{dN}(\cdot)\)) simplifies to \(\Phi(x+1/2)-\Phi(x-1/2)\) for \(x \in \mathbb {S}\). Using this discretization idea, Chakraborty and Chakravarty (2016) proposed a discrete logistic distribution starting from the continuous two-parameter logistic distribution. We can now construct versions of Theorems 1 and 2 for such symmetric discrete probability distributions.
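For concreteness, the pmf \(q_{dN}\) can be tabulated and its symmetry about 0 checked with a few lines of code (a sketch using scipy; the truncation level K is ours and only limits the display):

```python
from scipy.stats import norm

# discrete normal: X = [U] + 1/2 with U ~ N(0, 1), pmf Phi(x + 1/2) - Phi(x - 1/2)
# on the half-integer support {..., -3/2, -1/2, 1/2, 3/2, ...}
K = 8   # truncation level (display only); the support is countably infinite
support = [k + 0.5 for k in range(-K, K)]
q_dN = {x: norm.cdf(x + 0.5) - norm.cdf(x - 0.5) for x in support}

assert all(abs(q_dN[x] - q_dN[-x]) < 1e-12 for x in support)   # symmetry about 0
print(sum(q_dN.values()))   # close to 1 (the truncated tails are negligible)
```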

The multivariate discrete normal and related distributions

In the multivariate setting (say, \(\mathbb {R}^{d}\)), let \(\mathbf{U}_{d} \sim N_{d}(\mathbf{0},I_{d})\) follow the d-dimensional standard normal distribution, where \(I_{d}\) is the d×d identity matrix. We can define an analogue of the discrete normal (dN) distribution on \(\mathbb {S}^{d}\) as follows:

$$ {\mathbf{X}}_{d}=[\mathbf{U}_{d}]+\frac{1}{2}\mathbf{1}_{d}=\left([U_{1}]+\frac{1}{2},\ldots,[U_{d}]+\frac{1}{2}\right)^{T}. $$
(4)

The joint pmf of X d is the product \(\prod \limits _{i=1}^{d} q_{dN}(x_{i})\).

We now apply the transformation defined in Eq. (3) to the vector \(\mathbf{X}_{d}\) stated in Eq. (4) to obtain \({\mathbf {X}}_{d}^{*}\). Define

$$ \mathbf{Y}_{d}^{*} = A\mathbf{1}_{d} + B\mathbf{X}_{d}^{*}, $$

where \(A, B \in \mathbb {S}\) are discrete random variables independent of \(\mathbf {X}_{d}^{*}\). The support of the random vector \(\mathbf {Y}_{d}^{*}\) is the set \(\mathbb {S}_{*}^{d}\), where \(\mathbb {S}_{*}\) is a countable set which depends on the joint distribution of A and B. Define \(\mathbf {Y}_{d-1,d}^{*}\) as the sub-vector of \(\mathbf {Y}_{d}^{*}\) obtained by selecting the d−1 components indexed by a subset of size d−1 of \(\{1,\ldots,d\}\).

Theorem 3

The distributions of all the (d−1)-dimensional random vectors \(\mathbf {Y}_{d-1,d}^{*}\) belong to the same family of distributions, but that of \(\,\mathbf {Y}_{d}^{*}\) does not.

We now state consequences of Theorem 3 in some popular and important classes of symmetric and asymmetric distributions derived from the multivariate discrete normal distribution:

  • (Discrete Scale Mixture) Consider A=0 (a degenerate random variable) and B=W, where W is a non-negative, discrete random variable independent of \(\mathbf {X}_{d}^{*}\).

  • (Discrete Skew-Normal) Consider a discrete normal variate \(X_{d+1} \sim q_{dN}(\cdot)\) independent of \(\mathbf {X}_{d}^{*}\). Define \(A=\delta|X_{d+1}|\) and \(B=\sqrt {1-\delta ^{2}}\) (a degenerate random variable) with −1<δ<1 (see also pp. 128-129 of Azzalini (2014)).

Using Theorem 3, we now obtain this ‘inconsistency’ result for a class of discrete skew-normal distributions. One can also extend these results to a class of skew-elliptical distributions starting from the discrete scale mixture. More general results using the idea of modulation of symmetric discrete distributions, proposed recently by Azzalini and Regoli (2014), remain open.
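A simulation sketch of the discrete skew-normal special case may help fix ideas; the sampler below (our own code, with hypothetical names, assuming the choices of A and B in the second bullet above) draws \(\mathbf{Y}_{d}^{*}=A\mathbf{1}_{d}+B\mathbf{X}_{d}^{*}\):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, delta = 3, 50_000, 0.7

def modified_sign(lead, tie):
    # the modified sign function of Section 3
    if lead < 0 or (lead == 0 and tie != 0):
        return -1
    return 0 if (lead == 0 and tie == 0) else 1

def transform(x):
    # transformation (3): X_i^* = X_i S_{i+1,i}, cyclically for the last component
    d = len(x)
    return np.array([x[i] * modified_sign(x[(i + 1) % d], x[i]) for i in range(d)])

# X_d as in (4): componentwise [U_i] + 1/2 with U_d ~ N_d(0, I_d), then transformed
u = rng.standard_normal((n, d))
x_star = np.array([transform(row) for row in np.floor(u) + 0.5])

# discrete skew-normal case: A = delta |X_{d+1}|, B = sqrt(1 - delta^2)
x_extra = np.floor(rng.standard_normal(n)) + 0.5
y_star = delta * np.abs(x_extra)[:, None] + np.sqrt(1 - delta**2) * x_star

print(y_star[:5])  # the (d-1)-dimensional margins behave alike; the full d-vector does not
```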

Appendix: Proofs and Mathematical Details

Proof 1 (Proof of Theorem 1) We break the proof into two parts, namely, Case I and Case II.

Case I: \(\prod _{i=1}^{d} X_{i}^{*}=0\). This fact now implies that at least one of the \(X_{i}^{*}\)’s is zero. Without loss of generality, let \(X_{1}^{*}=0\) and consider the event \(\left (X_{1}^{*}=0, X_{2}^{*}=x_{2},\ldots, X_{d}^{*}=x_{d}\right)\). Under the assumption that \(X_{1}^{*}=0\), we now establish the following facts:

(F1) \(X_{i}^{*}=0 \Rightarrow X_{i}=0\) for any i=1,…,d. To show this, we note that if \(X_{1}^{*}=0\), then X 1 S 2,1=0. Now, either X 1=0 or S 2,1=0. If X 1=0 then we are done. If S 2,1=0, then X 2=X 1=0, in particular X 1=0. More generally, \(X_{i}^{*}=0 \Rightarrow X_{i}=0\) for any i=1,…,d.

(F2) If \(X_{1}^{*}=0\), then X d =−x d . To show this, suppose that \(X_{d}^{*} \neq 0\), then we obtain X d S 1,d ≠0 and in particular X d ≠0. We have assumed that \(X_{1}^{*}=0\), now using (F1) we know that X 1=0. Combining the facts that X 1=0 and X d ≠0, we have by definition S 1,d =−1. Again, \(X_{d}^{*}=x_{d} \Rightarrow X_{d}S_{1,d}=x_{d}\) and thus we obtain X d =−x d . On the other hand, suppose that \(X_{d}^{*} = 0\). In this case x d =0. By (F1), \(X_{d}^{*}=0 \Rightarrow X_{d}=0\). Thus we trivially have X d =−x d (both sides being equal to 0). Hence, in all the cases, we have X d =−x d .

(F3) We have \(X_{i}=x_{i}S_{i+1,i}\) for i=2,…,d−1. To show this, we start with any \(i \in \{2,\ldots,d-1\}\). Now \(X_{i}^{*}=x_{i}\), which implies \(X_{i}S_{i+1,i}=x_{i}\). First take \(x_{i} \neq 0\). Then \(X_{i} \neq 0\) and \(S_{i+1,i} \neq 0\).

Since X i ≠0, S i+1,i simplifies to the following function:

$$S_{i+1,i}= \left\{\begin{array}{ll} -1, &\text{if}~ X_{i+1} \leq 0,\\ 1, &\text{if}~ X_{i+1} > 0. \end{array}\right.$$

Thus, S i+1,i only takes the values −1 or 1, which implies that \(S_{i+1,i}^{2}=1\). We can multiply S i+1,i on both sides of the equation X i S i+1,i =x i to obtain X i =x i S i+1,i .

On the other hand, suppose x i =0. Using (F1), we obtain X i =0 which trivially satisfies X i =x i S i+1,i (as both sides equal zero). This is true for all i=2,…,d−1.

(F4) We have \(P(X_{i}=x_{i}S_{i+1,i})=P(X_{i}=x_{i})\). To show this, we first consider the case \(x_{i} \neq 0\). Then \(S_{i+1,i}=\pm 1\). Hence \(P(X_{i}=x_{i}S_{i+1,i})=P(X_{i}=\pm x_{i})\), which equals \(P(X_{i}=x_{i})\) by virtue of symmetry. The claim follows trivially when \(x_{i}=0\).

Now, we consider the probability

$$\begin{aligned} &P\left(X_{1}^{*}=0, X_{2}^{*}=x_{2},\ldots, X_{d}^{*}=x_{d}\right)\\ &= P(X_{1}=0, X_{2}=x_{2}S_{3,2},\ldots, X_{d-1}=x_{d-1}S_{d,d-1}, X_{d}=-x_{d})\\ &= P(X_{1}=0)\, P(X_{2}=x_{2}S_{3,2}) \cdots P(X_{d-1}=x_{d-1}S_{d,d-1})\, P(X_{d}=-x_{d}) \quad \left[\text{using independence}\right]\\ &= P(X_{1}=0)\, P(X_{2}=x_{2}) \cdots P(X_{d-1}=x_{d-1})\, P(X_{d}=x_{d}) \quad \left[\text{using (F4) and symmetry about 0}\right]\\ &= P(X_{1}=0, X_{2}=x_{2}, \ldots, X_{d-1}=x_{d-1}, X_{d}=x_{d}). \end{aligned} $$

In this case, the joint pmf of \(\left (X_{1}^{*}, \ldots, X_{d}^{*}\right)^{T}\) denoted by q (x) is given as

$$q^{*}(\mathbf{x})= q (\mathbf{x}). $$

This completes the first part.

Case II: Assume \(\prod _{i=1}^{d} X_{i}^{*} > 0\). This implies that none of the \(X_{i}^{*}\)’s are zero. Since \(X_{i}^{*} \neq 0\) implies that X i ≠0 for i=1,…,d, we can ignore the value of the sign function S(·) at 0 and it simplifies to the following function:

$$S(x)= \left\{\begin{array}{ll} -1, &\text{if}~ x < 0,\\ 1, &\text{if}~ x > 0. \end{array}\right. $$

Thus, in this case we always have S(x)=±1. Also, note that here the modified sign function S i,i−1 simplifies to S(X i ), and hence \(X_{i}^{*}=X_{i}S(X_{i+1})\), i=1,…,d−1 and \(X_{d}^{*}=X_{d}S(X_{1})\).

We now consider enumerating the joint probability \(P\left (X_{1}^{*}=x_{1}, \ldots, X_{d}^{*}=x_{d}\right)\) with the restriction that \(\prod _{i=1}^{d} x_{i} > 0\).

Lemma 1

The event \(\left (X_{1}^{*}=x_{1}, \ldots, X_{d}^{*}=x_{d}\right)\) is equivalent to the occurrence of either of the following two events:

  • (i) \(\left (X_{1}=x_{1}, X_{2}=x_{2}S(x_{2}), X_{3}=x_{3}S(x_{2}x_{3}), \ldots, X_{d}=x_{d}S\left (\prod _{j=2}^{d} x_{j}\right)\right)\text {, or}\)

  • (ii) \(\left (X_{1}=-x_{1}, X_{2}=-x_{2}S(x_{2}), X_{3}=-x_{3}S(x_{2}x_{3}), \ldots, X_{d}=-x_{d}S\left (\prod _{j=2}^{d} x_{j}\right)\right).\)

Proof 2 (Proof of Lemma 1)

Note that \(|X_{k}|=|X_{k}^{*}|\) for all k=1,…,d. Also, in this case, the value of S(X 2) can either be 1, or −1. We now consider two separate cases.

Case (i): Assume S(X 2)=1, and we get X 1=x 1. Recall that \(X_{2}^{*}=X_{2}S(X_{3})=|X_{2}|S(X_{2})S(X_{3})\), hence we get \(X_{2}^{*}=|X_{2}|S(X_{3})\) and \(X_{2}=X_{2}^{*}/S(X_{3})\). Now, the following holds:

$$X_{2}^{*}=x_{2} \Rightarrow |X_{2}|S(X_{3})=x_{2} \Rightarrow |x_{2}|S(X_{3})=x_{2} \Rightarrow S(X_{3})=\frac{x_{2}}{|x_{2}|}=S(x_{2}). $$

Then \(X_{2}^{*}=X_{2}S(X_{3}) \Rightarrow X_{2}= \frac {X_{2}^{*}}{S(X_{3})}=X_{2}^{*}S(X_{3})=x_{2}S(x_{2})\). Here, we use the fact that S(u)=1/S(u).

Again, \(X_{3}^{*}=X_{3}S(X_{4})=|X_{3}|S(X_{3})S(X_{4})\). Using the expression of S(X 3) derived above, we obtain

$$X_{3}^{*}=x_{3} \Rightarrow |X_{3}|S(X_{3})S(X_{4})=x_{3} \Rightarrow |x_{3}|S(x_{2})S(X_{4})=x_{3}. $$

Therefore,

$$S(X_{4})=\frac{x_{3}}{|x_{3}|S(x_{2})}=\frac{S(x_{3})}{S(x_{2})}=S(x_{3})S(x_{2})=S(x_{2}x_{3}). $$

Note that S(xy)=S(x)S(y). Then

$$X_{3}^{*}=X_{3}S(X_{4}) \Rightarrow X_{3}=\frac{X_{3}^{*}}{S(X_{4})}=X_{3}^{*}S(X_{4})=x_{3} S(x_{2}x_{3}). $$

Further, \(X_{4}^{*}=X_{4}S(X_{5})=|X_{4}|S(X_{4})S(X_{5})\) and using the expression of \(S(X_{4})\) obtained above we have \(X_{4}^{*}=x_{4} \Rightarrow |X_{4}|S(X_{4})S(X_{5})=x_{4} \Rightarrow |x_{4}|S(x_{2}x_{3})S(X_{5})=x_{4}\). This implies that

$$S(X_{5})=\frac{x_{4}}{|x_{4}|S(x_{2}x_{3})} = \frac{S(x_{4})}{S(x_{2}x_{3})} = S(x_{2}x_{3}x_{4}). $$

Therefore, \(X_{4}^{*}=X_{4}S(X_{5}) \Rightarrow X_{4}= \frac {X_{4}^{*}}{S(X_{5})}=\frac {x_{4}}{S(x_{2}x_{3}x_{4})}=x_{4} S(x_{2}x_{3}x_{4})\). Proceeding in a similar fashion, we obtain X d =x d S(x 2x d ).

Case (ii): The proof follows by taking S(X 2)=−1, and repeating the line of arguments stated above for Case (i).

Now, we can enumerate the quantity \(P\left (X_{1}^{*}=x_{1}, X_{2}^{*}=x_{2},\ldots, X_{d}^{*}=x_{d}\right)\) as follows

$$\begin{aligned} &P\left(X_{1}^{*}=x_{1}, X_{2}^{*}=x_{2},\ldots, X_{d}^{*}=x_{d}\right)\\ &= P(X_{1}S(X_{2})=x_{1}, X_{2}S(X_{3})=x_{2},\ldots, X_{d}S(X_{1})=x_{d}) \\ &= P\left(X_{1}=-x_{1}, X_{2}=-x_{2}S(x_{2}),\ldots, X_{i}=-x_{i}S\left(\prod\limits_{j=2}^{i} x_{j}\right),\ldots,X_{d}=-x_{d}S\left(\prod\limits_{j=2}^{d} x_{j}\right)\right) \\ &~~~ +P\left(X_{1}=x_{1}, X_{2}=x_{2}S(x_{2}),\ldots, X_{i}=x_{i}S\left(\prod\limits_{j=2}^{i} x_{j}\right),\ldots, X_{d}=x_{d}S\left(\prod\limits_{j=2}^{d} x_{j}\right)\right) \\ &= P(X_{1}=-x_{1})\, P(X_{2}=-x_{2}S(x_{2})) \cdots P\left(X_{i}=-x_{i}S\left(\prod\limits_{j=2}^{i} x_{j}\right)\right) \cdots P\left(X_{d}=-x_{d}S\left(\prod\limits_{j=2}^{d} x_{j}\right)\right)\\ &~~~ + P(X_{1}=x_{1})\, P(X_{2}=x_{2}S(x_{2})) \cdots P\left(X_{i}=x_{i}S\left(\prod\limits_{j=2}^{i} x_{j}\right)\right) \cdots P\left(X_{d}=x_{d}S \left(\prod\limits_{j=2}^{d} x_{j}\right)\right)\\ &\quad\left\lbrack\text{Using independence}\right\rbrack\\ &= P(X_{1}=x_{1}) \cdots P(X_{i}=x_{i}) \cdots P(X_{d}=x_{d}) + P(X_{1}=x_{1}) \cdots P(X_{i}=x_{i}) \cdots P(X_{d}=x_{d})\\ &\quad\left\lbrack\text{Using symmetry about 0, and the fact that~} S(\cdot)=\pm 1\right\rbrack\\ &= 2 P\left(X_{1}=x_{1}, \ldots, X_{d}=x_{d}\right). \end{aligned} $$

Hence in this case, the joint pmf of \(\left (X_{1}^{*},\ldots, X_{d}^{*}\right)^{T}\) is given by

$$q^{*}(\mathbf{x})=2 q(\mathbf{x}). $$

This completes the proof of the second part.

Combining these two cases, we may write the joint pmf of \(\left (X_{1}^{*},\ldots, X_{d}^{*}\right)^{T}\) as follows:

$$ q^{*}(\mathbf{x})= q(\mathbf{x}) \text{} \mathbb{I}_{\{0\}}\left(\prod\limits_{i=1}^{d} x_{i}\right) + 2 q(\mathbf{x}) \text{} \mathbb{I}_{(0,\infty)}\left(\prod\limits_{i=1}^{d} x_{i}\right). $$
(5)

This completes the proof. □

Proof 3 (Proof of Theorem 2)

First we consider the univariate marginal distributions and compute the probability \(P(X_{t}^{*}=x_{t})\), denoted by \(q^{*}(x_{t})\), for a fixed \(t \in \{1,\ldots,d\}\). Now, suppose that \(x_{t}=0\). Note that \(X_{t}^{*}=X_{t}S_{t+1,t}=0 \Leftrightarrow X_{t}=0\), since \(S_{t+1,t}=0\) also requires \(X_{t}\) to be zero (here and below, \(S_{d+1,d}\) is to be read as \(S_{1,d}\)). We trivially have the reverse implication, i.e., \(X_{t}=0 \Rightarrow X_{t}^{*}=0\). So, we have \(X_{t}^{*}=0 \iff X_{t}=0\) and hence \(P\left(X_{t}^{*}=0\right) = P\left(X_{t}=0\right)\). Thus, we obtain \(q^{*}(x_{t})=q(x_{t})\) in this case, for any t=1,…,d.

On the other hand, if x t ≠0, then S t+1,t simplifies to the following function

$$S_{t+1,t}= \left\{\begin{array}{ll} -1, &\text{if} ~X_{t+1} \leq 0,\\ 1, &\text{if}~ X_{t+1} > 0, \end{array}\right. $$

and we get

$$\begin{array}{*{20}l} P(X_{t}^{*}=x_{t}) &= P(X_{t}S_{t+1,t}=x_{t})\\ &= P(X_{t}=x_{t}, S_{t+1,t}=1)+P(X_{t}=-x_{t}, S_{t+1,t}=-1)\\ &= P(X_{t}=x_{t}, X_{t+1} > 0)+P(X_{t}=-x_{t}, X_{t+1} \leq 0)\\ &= P(X_{t}=x_{t}) P(X_{t+1} > 0)+P(X_{t}=-x_{t}) P(X_{t+1} \leq 0)\\ &= P(X_{t}=x_{t})\{P(X_{t+1} > 0)+P(X_{t+1} \leq 0)\}\\ &= P(X_{t} = x_{t}). \end{array} $$

Thus, in all these cases, we obtain

$$ q^{*}(x_{t})=q(x_{t}) \text{~for~} t=1,\ldots,d. $$
(6)

Define the following random vectors:

$${} \mathbf{X}_{(-1)}\,=\,(X_{2}, \ldots, X_{d})^{T}, \mathbf{X}_{(-t)}\,=\,\left(X_{1}, \ldots, X_{t-1}, X_{t+1},\ldots, X_{d}\right)^{T}\!, \mathbf{X}_{(-d)}\,=\,(X_{1}, \ldots, X_{d-1})^{T} ~\text{and} $$
$${} \mathbf{X}_{(-1)}^{*}=\left(X_{2}^{*}, \ldots, X_{d}^{*}\right)^{T}\!, \mathbf{X}_{(-t)}^{*}=\left(X_{1}^{*}, \ldots, X_{t-1}^{*}, X_{t+1}^{*},\ldots, X_{d}^{*}\right)^{T}\!, \mathbf{X}_{(-d)}^{*}=\left(X_{1}^{*}, \ldots, X_{d-1}^{*}\right)^{T}\!, $$
$${} \text{and~the~sets~} I_{(-1)}=\{2, \ldots, d\}, I_{(-t)}=\{1, \ldots, t-1, t+1,\ldots, d\}, I_{(-d)}=\{1, \ldots, d-1\}. $$

Let \(\mathbf{x}_{(-t)} = (x_{1},\ldots,x_{t-1},x_{t+1},\ldots,x_{d})^{T}\), and \(q(\mathbf{x}_{(-t)}) = P(\mathbf{X}_{(-t)}=\mathbf{x}_{(-t)})\) for t = 1,…,d. We want to compute the probability \(P\left(\mathbf{X}_{(-t)}^{*}=\mathbf{x}_{(-t)}\right)\), which we denote by \(q^{*}(\mathbf{x}_{(-t)})\). Further, we denote the joint probability \(P(\mathbf{X}_{(-t)}=\mathbf{x}_{(-t)},X_{t}=x_{t})\) by \(q(\mathbf{x}_{(-t)},x_{t})\), which is nothing but \(P(X_{1}=x_{1},\ldots,X_{d}=x_{d})\), and the joint probability \(P\left(\mathbf{X}_{(-t)}^{*}=\mathbf{x}_{(-t)}, X_{t}^{*}=x_{t}\right)\) by \(q^{*}(\mathbf{x}_{(-t)},x_{t})\), which is nothing but \(P\left(X_{1}^{*}=x_{1},\ldots,X_{d}^{*}=x_{d}\right)\). We now consider three separate cases.

Case I: \(\phantom {\dot {i}\!}\prod _{i \in I_{(-t)}}x_{i}=0\)

Note that \(\prod _{i \in I_{(-t)}}x_{i}=0 \Rightarrow \prod _{i=1}^{d} x_{i}=0\). Hence, we get \(q^{*}(\mathbf{x})=q(\mathbf{x})=q(\mathbf{x}_{(-t)})q(x_{t})\). This now gives the following:

$$ {}q^{*}(\mathbf{x}_{(-t)})\,=\,\sum\limits_{x_{t} \in \mathbb{S}} q^{*}(\mathbf{x})\,=\,q\left(\mathbf{x}_{(-t)}\right)\sum\limits_{x_{t} \in \mathbb{S}} q(x_{t})\,=\,q\left(\mathbf{x}_{(-t)}\right)\,=\,\!\prod\limits_{i \in I_{(-t)}}q(x_{i})\,=\,\!\!\!\!\prod\limits_{i \in I_{(-t)}}q^{*}(x_{i}) ~[\!\text{Using (6)}]. $$

Case II: \(\phantom {\dot {i}\!}\prod _{i \in I_{(-t)}}x_{i} > 0\)

Consider three separate sub-cases

(i) If \(x_{t}=0\), then \(\prod_{i=1}^{d} x_{i}=0\), and \(q^{*}(\mathbf{x})=q(\mathbf{x})=q(\mathbf{x}_{(-t)})q(0)\). So, \(q^{*}(\mathbf{x}_{(-t)},0)=q(\mathbf{x}_{(-t)})q(0)\).

(ii) If \(x_{t}>0\), then \(\prod_{i=1}^{d} x_{i} > 0\). Thus, \(q^{*}(\mathbf{x})=2q(\mathbf{x})=2q(\mathbf{x}_{(-t)})q(x_{t})\). Hence, \(\sum_{x_{t}>0} q^{*}\left(\mathbf{x}_{(-t)}, x_{t}\right)=2 q\left(\mathbf{x}_{(-t)}\right) \sum_{x_{t}>0} q(x_{t})\).

(iii) If \(x_{t}<0\), then \(\prod_{i=1}^{d} x_{i} < 0\), which cannot happen by the construction of the \(X_{i}^{*}\)’s. Hence, \(q^{*}(\mathbf{x}_{(-t)},x_{t})=0\).

Considering these three sub-cases, we have

$$\begin{array}{*{20}l} q^{*}(\mathbf{x}_{(-t)})&=\sum\limits_{x_{t} < 0} q^{*}\left(\mathbf{x}_{(-t)}, x_{t}\right)+q^{*}\left(\mathbf{x}_{(-t)}, 0\right)+\sum\limits_{x_{t} > 0}q^{*}\left(\mathbf{x}_{(-t)}, x_{t}\right)\\ &= 0 + q\left(\mathbf{x}_{(-t)}\right)q(0) + 2 q\left(\mathbf{x}_{(-t)}\right) \sum\limits_{x_{t}>0} q(x_{t})\\ &= q\left(\mathbf{x}_{(-t)}\right) \sum\limits_{x_{t} \in \mathbb{S}} q(x_{t}) ~\lbrack\text{Using symmetry about}~0\rbrack \\ &= q\left(\mathbf{x}_{(-t)}\right) = \prod\limits_{i \in I_{(-t)}}q(x_{i}) = \prod\limits_{i \in I_{(-t)}}q^{*}(x_{i}) ~[\text{Using (6)}]. \end{array} $$

Case III: \(\phantom {\dot {i}\!}\prod _{i \in I_{(-t)}}x_{i} < 0\)

Again, consider three separate sub-cases

(i) If \(x_{t}=0\), then \(\prod_{i=1}^{d} x_{i}=0\). Hence, \(q^{*}(\mathbf{x})=q(\mathbf{x})=q(\mathbf{x}_{(-t)})q(0)\), i.e., \(q^{*}(\mathbf{x}_{(-t)},0)=q(\mathbf{x}_{(-t)})q(0)\).

(ii) If \(x_{t}<0\), then \(\prod_{i=1}^{d} x_{i} > 0\). Hence, \(q^{*}(\mathbf{x})=2q(\mathbf{x})=2q(\mathbf{x}_{(-t)})q(x_{t})\), i.e., \(\sum_{x_{t}<0} q^{*}\left(\mathbf{x}_{(-t)}, x_{t}\right)=2 q\left(\mathbf{x}_{(-t)}\right) \sum_{x_{t}<0} q(x_{t})\).

(iii) If \(x_{t}>0\), then \(\prod_{i=1}^{d} x_{i} < 0\). Hence, \(q^{*}(\mathbf{x}_{(-t)},x_{t})=0\).

Combining these three sub-cases, we obtain

$$\begin{array}{*{20}l} q^{*}(\mathbf{x}_{(-t)})&=\sum\limits_{x_{t} < 0} q^{*}\left(\mathbf{x}_{(-t)}, x_{t}\right)+q^{*}\left(\mathbf{x}_{(-t)}, 0\right)+\sum\limits_{x_{t} > 0}q^{*}\left(\mathbf{x}_{(-t)}, x_{t}\right)\\ &= 2 q\left(\mathbf{x}_{(-t)}\right) \sum\limits_{x_{t}<0} q(x_{t}) + q\left(\mathbf{x}_{(-t)}\right)q(0) + 0\\ &= q\left(\mathbf{x}_{(-t)}\right) \sum\limits_{x_{t} \in \mathbb{S}} q(x_{t}) ~\left\lbrack\text{Using symmetry about} ~0\right\rbrack \\ &= q\left(\mathbf{x}_{(-t)}\right) = \prod\limits_{i \in I_{(-t)}}q(x_{i}) = \prod\limits_{i \in I_{(-t)}}q^{*}(x_{i}) ~\left[\text{Using (6)}\right]. \end{array} $$

In all these cases, the components of a sub-vector \(\left(X_{k_{1}}^{*}, \ldots,X_{k_{d-1}}^{*}\right)^{T}\) of \(\left(X_{1}^{*}, \ldots, X_{d}^{*}\right)^{T}\) are independent of each other. Moreover, any sub-vector of size less than d−1 also has independent components, since it is contained in some sub-vector of size d−1. Thus, any sub-vector \(\left(X_{k_{1}}^{*},\ldots,X_{k_{d'}}^{*}\right)^{T}\) of \(\left(X_{1}^{*}, \ldots, X_{d}^{*}\right)^{T}\) with \(d' < d\) has independent components. Now,

$$\begin{array}{*{20}l} &P\left(X_{k_{1}}^{*}=x_{1}, X_{k_{2}}^{*}=x_{2},\ldots,X_{k_{d'}}^{*}=x_{d'}\right)\\ &=P\left(X_{k_{1}}^{*}=x_{1}\right) P\left(X_{k_{2}}^{*}=x_{2}\right) \cdots P\left(X_{k_{d'}}^{*}=x_{d'}\right)~~\left\lbrack\text{Using independence}\right\rbrack\\ &=P\left(X_{k_{1}}=x_{1}\right) P\left(X_{k_{2}}=x_{2}\right) \cdots P\left(X_{k_{d'}}=x_{d'}\right)\\ &~~~~\left\lbrack\text{Since the marginal distributions of~} X_{i}^{*} \text{~and~} X_{i} \text{~are identical for~}~i=k_{1},\ldots,k_{d'} \right\rbrack \\ &=P\left(X_{k_{1}}=x_{1}, X_{k_{2}}=x_{2},\ldots,X_{k_{d'}}=x_{d'}\right)~~\left\lbrack\text{Again using independence}\right\rbrack. \end{array} $$

Hence, the joint distribution of \(\left(X_{k_{1}}^{*},\ldots,X_{k_{d'}}^{*}\right)^{T}\) is the same as that of \(\left(X_{k_{1}},\ldots,X_{k_{d'}}\right)^{T}\). □

Proof 4 (Proof of Theorem 3)

Let us denote the joint distribution of (A,B) by the pmf G(a,b) with \(a,b \in \mathbb {S}\). Recall that we have assumed A and B to be independent of \(\mathbf {X}_{d}^{*}\), so the conditional distribution of \(\mathbf {X}_{d}^{*}\) given (A=a,B=b) is the same as its unconditional distribution. We will use this fact throughout the proof of this theorem.

The joint distribution of \(\mathbf {Y}_{d}^{*}\) is given by

$$ \begin{aligned} r^{*}(\mathbf{y}_{d})&= \sum\limits_{a,b \; \in \; \mathbb{S}} P\left(\mathbf{Y}_{d}^{*}={\mathbf{y}}_{d}|a,b\right) G(a,b)\\ &= \sum\limits_{a,b \; \in \; \mathbb{S}} \mathrm{P}\left(\mathbf{X}_{d}^{*}=\frac{\mathbf{y}_{d}-a\mathbf{1}_{d}}{b}\right)G(a,b)\\ &= \sum\limits_{a,b \; \in \; \mathbb{S}} q^{*}\left(\frac{\mathbf{y}_{d}-a\mathbf{1}_{d}}{b}\right)G(a,b)\\ &= \sum\limits_{a,b \; \in \; \mathbb{S}} \left \{ q\left(\frac{\mathbf{y}_{d}-a\mathbf{1}_{d}}{b}\right) \text{} \mathbb{I}\left(\prod\limits_{i=1}^{d} \frac{y_{i}-a}{b}=0\right) + 2 q\left(\frac{\mathbf{y}_{d}-a\mathbf{1}_{d}}{b}\right) \text{} \mathbb{I}\!\left(\prod\limits_{i=1}^{d} \frac{y_{i}-a}{b} > 0\right) \right \} G(a,b). \end{aligned} $$

Recall the expression of q (x) from Eq. (5).

Define \(\mathbf {Y}_{(-1)}^{*} = \left(Y_{2}^{*},Y_{3}^{*},\ldots,Y_{d}^{*}\right)^{T}\) with \(I_{(-1)}=\{2,3,\ldots,d\}\); \(\mathbf {Y}_{(-t)}^{*} = \left(Y_{1}^{*},\ldots,Y_{t-1}^{*}, Y_{t+1}^{*}, \ldots,Y_{d}^{*}\right)^{T}\) with \(I_{(-t)}=\{1,\ldots,t-1,t+1,\ldots,d\}\), t=2,…,(d−1); and \(\mathbf {Y}_{(-d)}^{*} = \left(Y_{1}^{*},Y_{2}^{*},\ldots,Y_{d-1}^{*}\right)^{T}\) with \(I_{(-d)} = \{1,2,\ldots,d-1\}\). Also, define \(\mathbf{y}_{(-t)} = (y_{1},\ldots,y_{t-1},y_{t+1},\ldots,y_{d})\). Consider

$$r^{*}(\mathbf{y}_{(-t)})=\sum_{k \in S_{*}}\sum_{a,b \; \in \; \mathbb{S}} P\left(Y_{1}^{*}=y_{1},\ldots,Y_{t}^{*}=k,\ldots,Y_{d}^{*}=y_{d}|a,b\right) G(a,b). $$

Now, for any fixed \(a,b \in \mathbb {S}\), we have

$${} P\left(Y_{1}^{*}\,=\,y_{1},\ldots,Y_{t}^{*}\,=\,k,\ldots,Y_{d}^{*}\,=\,y_{d}\right)\,=\,\!P\left(X_{1}^{*}\,=\,\frac{y_{1}-a}{b},\ldots,X_{t}^{*}\,=\,\frac{k-a}{b},\ldots,X_{d}^{*}\,=\,\frac{y_{d}-a}{b}\right). $$

Then,

$$\begin{aligned} r^{*}(\mathbf{y}_{(-t)}) &= \sum\limits_{a,b \; \in \; \mathbb{S}} \sum\limits_{k \in S_{*}} P\left(X_{1}^{*}=\frac{y_{1}-a}{b},\ldots,X_{t}^{*}=\frac{k-a}{b},\ldots,X_{d}^{*}=\frac{y_{d}-a}{b}\right) G(a,b)\\ &= \sum\limits_{a,b \; \in \; \mathbb{S}} P\left(X_{1}^{*}=\frac{y_{1}-a}{b},\ldots,X_{t-1}^{*}=\frac{y_{t-1}-a}{b},X_{t+1}^{*}=\frac{y_{t+1}-a}{b},\ldots,X_{d}^{*}\!=\frac{y_{d}-a}{b}\right) G(a,b)\\ &= \sum\limits_{a,b \; \in \; \mathbb{S}} q^{*} \left(\frac{\mathbf{y}_{(-t)}-a\mathbf{1}_{d-1}}{b}\right) G(a,b) ~~\left[\text{Recall the expression of}\, q^{*}(\mathbf{x}_{(-t)}) \right] \\ &= \sum\limits_{a,b \; \in \; \mathbb{S}} q \left(\frac{\mathbf{y}_{(-t)}-a\mathbf{1}_{d-1}}{b}\right) G(a,b) = \sum\limits_{a,b \; \in \; \mathbb{S}} \left\{ \prod\limits_{i \in I_{(-t)}} q \left(\frac{y_{i}-a}{b}\right) \right\} ~G(a,b). \end{aligned} $$

Thus, the random vectors \({\mathbf {Y}_{(-t)}^{*}}\) for t=2,…,d−1 possess the same joint distribution. Similarly, one may argue that the random vectors \({\mathbf {Y}_{(-1)}^{*}}\) and \({\mathbf {Y}_{(-d)}^{*}}\) follow the same joint distribution as well. Further, any sub-vector \(\left(Y_{k_{1}}^{*}, Y_{k_{2}}^{*},\ldots,Y_{k_{d'}}^{*}\right)^{T}\) of \(\left(Y_{1}^{*}, Y_{2}^{*},\ldots, Y_{d}^{*}\right)^{T}\), where \(d' < d\), has a joint distribution of the same form. □

References

  • Alzaatreh, A, Lee, C, Famoye, F: On the discrete analogues of continuous distributions. Stat. Methodol. 9, 589–603 (2012).


  • Azzalini, A: With the collaboration of A. Capitanio. The Skew-Normal and Related Families. Cambridge University Press, UK (2014).


  • Azzalini, A, Regoli, G: Modulation of symmetry for discrete variables and some extensions. Stat 3, 56–67 (2014).


  • Bairamov, I, Gultekin, OE: Discrete distributions connected with the bivariate Binomial. Hacettepe J. Math. Stat. 39, 109–120 (2010).


  • Chakraborty, S, Chakravarty, D: A new discrete probability distribution with integer support on (−∞,∞). Commun. Stat. Theory Methods. 45, 492–505 (2016).


  • Conway, DA: Multivariate distributions with specified marginals (1979). Available at https://statistics.stanford.edu/sites/default/files/OLK%20NSF%20145.pdf. Accessed 10 June 2017.

  • Cuadras, CM: Probability distributions with given multivariate marginals and given dependence structure. J. Multivar. Anal. 42, 51–66 (1992).


  • Dahiya, R, Korwar, R: On characterizing some bivariate discrete distributions by linear regression. Sankhyā: Indian J. Stat. Series A. 39, 124–129 (1977).


  • Dutta, S, Genton, MG: A non-Gaussian multivariate distribution with all lower-dimensional Gaussians and related families. J. Multivar. Anal. 132, 82–93 (2014).


  • Fréchet, M: Sur les tableaux de corrélation dont les marges sont données. Ann. Univ. de Lyon Sect. A, Series 3. 14, 53–77 (1951).


  • Hoeffding, W: Masstabinvariante Korrelationstheorie. Schriften Math. Inst. Univ. Berlin. 5, 181–233 (1940).


  • Johnson, NL, Kotz, S, Balakrishnan, N: Discrete Multivariate Distributions. Wiley, New York (1997).


  • Nguyen, TT, Gupta, AK, Wang, Y: A characterization of certain discrete exponential families. Ann. Inst. Stat. Math. 48, 573–576 (1996).


  • Roy, D: The discrete normal distribution. Commun. Stat. Theory Methods. 32, 1871–1883 (2003).


  • Ruiz, JM, Navarro, J: Characterization of discrete distributions using expected values. Stat Papers. 36, 237–252 (1995).



Acknowledgements

We thank the Reviewers and Editors for comments that improved the paper.

Funding

This research was supported by the King Abdullah University of Science and Technology (KAUST).

Availability of data and materials

Not applicable.

Authors’ contributions

SG, SD, MG contributed equally to the research. All authors read and approved the final manuscript.

Competing interests

No competing interests.

Consent for publication

Not applicable.

Ethics approval and consent to participate

Not applicable.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information

Corresponding author

Correspondence to Marc G. Genton.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Ghosh, S., Dutta, S. & Genton, M. A note on inconsistent families of discrete multivariate distributions. J Stat Distrib App 4, 7 (2017). https://doi.org/10.1186/s40488-017-0061-8

