Open Access

A class of continuous bivariate distributions with linear sum of hazard gradient components

Journal of Statistical Distributions and Applications 2016, 3:10

DOI: 10.1186/s40488-016-0048-x

Received: 11 April 2016

Accepted: 10 May 2016

Published: 21 May 2016

Abstract

The main purpose of this article is to characterize a class of bivariate continuous non-negative distributions such that the sum of the components of the underlying hazard gradient vector is a linear function of its arguments. It turns out that this class is a stronger version of the Sibuya-type bivariate lack of memory property. The class admits only certain marginal distributions, and the corresponding restrictions are given in terms of marginal densities and hazard rates. We illustrate the methodology developed by examples, obtaining two extended versions of Gumbel's bivariate law.

Keywords

Bivariate hazard gradient; Bivariate lack of memory property; Characterization; Gumbel's bivariate exponential; Marshall-Olkin model; Sibuya's dependence function

MSC code

Primary: 62H05 60E05

Introduction and motivation

Marshall and Olkin (1967) introduced the classical bivariate lack of memory property (to be denoted by BLMP1) via the relation
$$ S_{X_{1},X_{2}}(x_{1}+t,x_{2}+t)=S_{X_{1},\,X_{2}}(x_{1},x_{2})S_{X_{1},\,X_{2}}(t,t) $$
(1)

for all x 1,x 2≥0 and t>0, where \(S_{X_{1},X_{2}}(x_{1},x_{2}) = P(X_{1}> x_{1}, X_{2} > x_{2})\) is the joint survival function of the non-negative continuous random vector (X 1,X 2). The functional Eq. (1) tells us that, independently of t, the BLMP1 preserves the distribution of both (X 1,X 2) and its residual lifetime vector X t = [(X 1−t,X 2−t) | X 1>t,X 2>t].

The only bivariate distribution with exponential marginals which possesses BLMP1 has a joint survival function given by
$$ S_{X_{1},\,X_{2}}(x_{1},x_{2}) = \exp \{- \lambda_{1} x_{1} - \lambda_{2} x_{2} - \lambda_{3} \max(x_{1},x_{2})\}, \quad x_{1},x_{2} \geq 0, $$
(2)
where λ i ≥0, i=1,2,3. The Marshall-Olkin (MO) bivariate exponential distribution (2) is generated by the stochastic representation
$$ (X_{1},X_{2}) = \left[\min(T_{1},T_{3}),\min(T_{2},T_{3})\right], $$
(3)

where T i are independent exponentially distributed random variables with parameters λ i >0, i=1,2,3. Unless λ 3=0, the distribution (2) exhibits a singularity along the line x 1=x 2, whose contribution to \(S_{X_{1},X_{2}}(x_{1},x_{2})\) is \(\alpha = P(X_{1}=X_{2}) = \frac {\lambda _{3}}{\lambda _{1}+\lambda _{2}+\lambda _{3}}\), see Marshall and Olkin (1967). Hence, the MO bivariate exponential distribution is not absolutely continuous and does not have a probability density with respect to the two-dimensional Lebesgue measure.
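The stochastic representation (3) lends itself to a quick numerical check. The sketch below (the parameter values λ 1=1, λ 2=2, λ 3=0.5 are purely illustrative) simulates (3) and compares the empirical tie frequency with α=λ 3/(λ 1+λ 2+λ 3):

```python
import random

random.seed(1)
lam1, lam2, lam3 = 1.0, 2.0, 0.5   # illustrative MO parameters
n = 200_000

ties = 0
for _ in range(n):
    t1 = random.expovariate(lam1)
    t2 = random.expovariate(lam2)
    t3 = random.expovariate(lam3)
    x1, x2 = min(t1, t3), min(t2, t3)
    if x1 == x2:                    # exact tie: happens iff t3 <= min(t1, t2)
        ties += 1

alpha_mc = ties / n
alpha_exact = lam3 / (lam1 + lam2 + lam3)
```

The empirical proportion of ties settles near 1/7 for these parameters, matching the singular mass on the diagonal.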

Apart from the MO bivariate exponential distribution, other known solutions of (1) are the bivariate distributions obtained by Freund (1961), Block and Basu (1974), Proschan and Sullo (1974) and Friday and Patil (1977), and all distributions considered by Kulkarni (2006). Consult Chapter 10 in Balakrishnan and Lai (2009) for a related discussion as well.

Johnson and Kotz (1975) introduced another version of the bivariate lack of memory property (the local bivariate lack of memory property), which was rediscovered by Roy (2002) and named BLMP2. The authors require the conditional distributions {X 1 | X 2>x 2} and {X 2 | X 1>x 1} to possess (preserve) the univariate lack of memory property. The only absolutely continuous distribution with such a property is Gumbel's type I bivariate exponential distribution given by
$$ S_{X_{1},\,X_{2}}(x_{1},x_{2}) = \exp \{- \lambda_{1} x_{1} - \lambda_{2} x_{2} - \theta \lambda_{1} \lambda_{2} x_{1} x_{2} \}, \quad x_{1},x_{2} \geq 0 $$
(4)

where λ i >0, i=1,2, and θ ∈ [0,1], see Gumbel (1960). It is negative quadrant dependent since \(S_{X_{1},X_{2}}(x_{1},x_{2}) \leq S_{X_{1}}(x_{1})S_{X_{2}}(x_{2})\) for all x 1,x 2≥0.
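The negative quadrant dependence of (4) is easy to confirm numerically; the following sketch (illustrative parameters) checks the defining inequality on a grid:

```python
import math

lam1, lam2, theta = 1.0, 2.0, 0.7   # illustrative parameters, theta in [0, 1]

def S_joint(x1, x2):                # Gumbel type I survival function (4)
    return math.exp(-lam1*x1 - lam2*x2 - theta*lam1*lam2*x1*x2)

def S1(x1):
    return math.exp(-lam1*x1)

def S2(x2):
    return math.exp(-lam2*x2)

grid = [0.25 * i for i in range(20)]
nqd = all(S_joint(a, b) <= S1(a) * S2(b) for a in grid for b in grid)
```

Equality holds on the axes, with strict inequality in the interior of the first quadrant.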

If the first partial derivatives of \(S_{X_{1},X_{2}}(x_{1},x_{2})\) exist, one can define BLMP1 and BLMP2 alternatively. We will explore the conditional failure (hazard) rates defined by
$$r_{i}(x_{1},x_{2}) = \frac{\partial}{\partial x_{i}} [- \ln S_{X_{1},\,X_{2}}(x_{1},x_{2})], \; i=1,2. $$

Marshall (1975) and Johnson and Kotz (1975) interpreted r i (x 1,x 2) as the conditional failure rate of X i evaluated at x i under the condition that X j >x j , for i,j=1,2, i ≠ j. Equivalently, the conditional hazard rates are the univariate hazard rates of the conditional distribution of each variate, given the corresponding inequality on the other.

Marshall (1975) named the vector R(x 1,x 2)=(r 1(x 1,x 2),r 2(x 1,x 2)) the hazard gradient of the distribution of (X 1,X 2). The hazard gradient vector R(x 1,x 2) (if it exists) uniquely determines the bivariate distribution by means of the line integral
$$ S_{X_{1},\,X_{2}}(x_{1},x_{2}) = \exp \left\{ - \int_{\mathcal{C}} \mathbf{R}(\mathbf{z}) \cdot \mathrm{d} \mathbf{z} \right\}, $$
(5)

where \(\mathcal {C}\) is any sufficiently smooth continuous path beginning at (0,0) and terminating at (x 1,x 2). Equation (5) holds provided that along the path of integration \(S_{X_{1},\,X_{2}}(x_{1},x_{2})\) is absolutely continuous and R(x 1,x 2) exists for almost all x 1 and x 2, see Marshall (1975). The term “almost all” means that the set where the partial derivatives do not exist is negligible in the first quadrant, i.e. has two-dimensional Lebesgue measure zero. See Marshall (1975) for details and a multivariate version.
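Relation (5) can be illustrated numerically for the Gumbel law (4), whose hazard gradient components are r 1(x 1,x 2)=λ 1+θ λ 1 λ 2 x 2 and r 2(x 1,x 2)=λ 2+θ λ 1 λ 2 x 1. The sketch below (illustrative parameters) integrates R along the L-shaped path (0,0)→(x 1,0)→(x 1,x 2), recovers the joint survival function, and checks path independence against the other L-shaped path:

```python
import math

lam1, lam2, theta = 1.0, 0.5, 0.8   # illustrative Gumbel type I parameters

def r1(x1, x2):                     # hazard gradient components of (4)
    return lam1 + theta * lam1 * lam2 * x2

def r2(x1, x2):
    return lam2 + theta * lam1 * lam2 * x1

def trapz(f, a, b, n=2000):         # plain trapezoidal rule
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

x1, x2 = 1.3, 0.7
# path (0,0) -> (x1,0) -> (x1,x2)
path_1 = trapz(lambda u: r1(u, 0.0), 0.0, x1) + trapz(lambda v: r2(x1, v), 0.0, x2)
# path (0,0) -> (0,x2) -> (x1,x2)
path_2 = trapz(lambda v: r2(0.0, v), 0.0, x2) + trapz(lambda u: r1(u, x2), 0.0, x1)

S_exact = math.exp(-lam1*x1 - lam2*x2 - theta*lam1*lam2*x1*x2)
```

Both line integrals reproduce the same value of \(-\ln S_{X_{1},X_{2}}(x_{1},x_{2})\), as (5) requires.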

All bivariate distributions which have first partial derivatives and possess BLMP1 are characterized by the equation
$$r_{1}(x_{1},x_{2}) + r_{2}(x_{1},x_{2}) = a_{0} $$
for almost all x 1,x 2≥0, where a 0 is a non-negative constant, see Theorem 2 in Kulkarni (2006). In particular, the last relation is valid for the MO bivariate exponential distribution (2) with a 0=λ 1+λ 2+λ 3 for all x 1 ≠ x 2, since
$$\mathbf{R}(x_{1},x_{2}) = (r_{1}(x_{1},x_{2}),r_{2}(x_{1},x_{2})) =\left\{ \begin{array}{ll} (\lambda_{1}+\lambda_{3}, \lambda_{2}),& \text{if}~x_{1} > x_{2},\\ \text{does not exist},& \text{if}~x_{1} = x_{2},\\ (\lambda_{1}, \lambda_{2}+\lambda_{3}),& \text{if}~ x_{1} < x_{2}. \end{array} \right. $$

Thus, the sum r 1(x 1,x 2)+r 2(x 1,x 2) is not defined along the line x 1=x 2, which has zero two-dimensional Lebesgue measure. The reason is that the conditional probability P(X i >x i | X j >x j ), i,j=1,2, i ≠ j, experiences a cusp at x 1=x 2, i.e. it is continuous but not differentiable there, so the corresponding failure rate r i (x 1,x 2) is not defined, see Singpurwalla (2006), page 99.

In fact, Johnson and Kotz (1975) identify BLMP2 by a local “constancy” of the failure rates r i (x 1,x 2), i=1,2, see their sections 3(iv) and 5.4. For distribution (4) one gets r i (x 1,x 2)=λ i +a 1 x 3−i , i=1,2, where a 1 is a non-negative constant. Thus, substituting a 0=λ 1+λ 2 and a 1=θ λ 1 λ 2, the sum of conditional hazard rates in the BLMP2 case is specified by
$$r_{1}(x_{1},x_{2}) + r_{2}(x_{1},x_{2}) = a_{0} + a_{1}x_{1} + a_{1}x_{2} $$
for all x 1,x 2≥0.
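The last identity can be checked by finite differences for the Gumbel law (4) (illustrative parameters; `hazard_sum` approximates r 1+r 2 via central differences of −ln S):

```python
import math

lam1, lam2, theta = 1.0, 2.0, 0.5   # illustrative parameters
a0, a1 = lam1 + lam2, theta * lam1 * lam2

def S(x1, x2):                      # Gumbel type I survival function (4)
    return math.exp(-lam1*x1 - lam2*x2 - theta*lam1*lam2*x1*x2)

def hazard_sum(x1, x2, h=1e-6):
    # central differences of -ln S in each argument approximate r1 + r2
    d1 = -(math.log(S(x1 + h, x2)) - math.log(S(x1 - h, x2))) / (2 * h)
    d2 = -(math.log(S(x1, x2 + h)) - math.log(S(x1, x2 - h))) / (2 * h)
    return d1 + d2

ok = all(abs(hazard_sum(u, v) - (a0 + a1 * u + a1 * v)) < 1e-4
         for u in (0.2, 1.0, 2.5) for v in (0.1, 0.8, 3.0))
```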
It is natural to link BLMP1 and BLMP2 within a new class of bivariate continuous distributions, to be denoted by \(\mathcal {L}(\mathbf {x};\mathbf {a})\), satisfying the relation
$$ r(x_{1},x_{2}) = r_{1}(x_{1},x_{2}) + r_{2}(x_{1},x_{2}) = a_{0} + a_{1} x_{1} + a_{2} x_{2} $$
(6)

for almost all x 1,x 2≥0, where x =(x 1,x 2) and a =(a 0,a 1,a 2) is a parameter vector with non-negative elements, including the possibility a 1 ≠ a 2.

To show one more member of the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\), let us consider joint survival function
$$ S_{X_{1},\,X_{2}}(x_{1},x_{2}) = \exp \left\{- \lambda_{1} {x_{1}^{2}} - \lambda_{2} {x_{2}^{2}} -\lambda_{3} \max(x_{1}, x_{2})\right\}, \quad x_{1}, x_{2} \geq 0, $$
(7)

which has a singular component along the line x 1=x 2 and meets (3). One can verify that relation (6) is satisfied by setting a 0=λ 3>0, a 1=2λ 1>0 and a 2=2λ 2>0. The distribution given by (7) belongs to the generalized Marshall-Olkin (GMO) distributions introduced by Li and Pellerey (2011). The random variables T 1,T 2 and T 3 in (3) are assumed to be independent in the class of GMO distributions, relaxing the Marshall-Olkin assumption of exponential marginals. It is straightforward to check that \(S_{X_{1},X_{2}}(x_{1},x_{2}) \geq S_{X_{1}}(x_{1})S_{X_{2}}(x_{2})\) for all x 1,x 2≥0, i.e. the bivariate distribution in (7) is positive quadrant dependent.
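Both claims about (7), linearity of r 1+r 2 off the diagonal and positive quadrant dependence, can be checked numerically (illustrative parameters):

```python
import math

lam1, lam2, lam3 = 0.5, 1.0, 0.8    # illustrative GMO parameters
a0, a1, a2 = lam3, 2 * lam1, 2 * lam2

def S(x1, x2):                      # joint survival function (7)
    return math.exp(-lam1 * x1**2 - lam2 * x2**2 - lam3 * max(x1, x2))

def S1(x):                          # marginals obtained as S(x, 0) and S(0, x)
    return S(x, 0.0)

def S2(x):
    return S(0.0, x)

grid = [0.1 * i for i in range(25)]
pqd = all(S(a, b) >= S1(a) * S2(b) for a in grid for b in grid)

def hazard_sum(x1, x2, h=1e-6):     # central differences of -ln S
    d1 = -(math.log(S(x1 + h, x2)) - math.log(S(x1 - h, x2))) / (2 * h)
    d2 = -(math.log(S(x1, x2 + h)) - math.log(S(x1, x2 - h))) / (2 * h)
    return d1 + d2

# off the diagonal, relation (6): r1 + r2 = a0 + a1*x1 + a2*x2
linear = abs(hazard_sum(2.0, 0.5) - (a0 + a1 * 2.0 + a2 * 0.5)) < 1e-4
```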

Thus, one can find many examples of bivariate continuous distributions possessing BLMP1 and BLMP2, represented by (1) and (4) respectively, as well as examples exhibiting positive or negative quadrant dependence, that belong to the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) specified by (6). The class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) is composed of non-negative bivariate distributions, absolutely continuous or continuous with a singularity along the line L={x 1=x 2≥0}, such that the sum of the components of the underlying hazard gradient is a linear function of both arguments x 1 and x 2.

It turns out that the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) (and therefore both BLMP1 and BLMP2) is a particular case of the Sibuya-type bivariate lack of memory property recently introduced by Pinto and Kolev (2015c), as follows.

Definition 1.

The non-negative continuous bivariate distribution (X 1,X 2) with marginal survival functions \(\phantom {\dot {i}\!}S_{X_{1}}(x_{1})\) and \(S_{X_{2}}(x_{2})\) possesses Sibuya-type bivariate lack of memory property (to be abbreviated S-BLMP), if and only if
$$ \frac{S_{\mathbf{X}_{t}}(x_{1}, x_{2})}{S_{X_{1t}} (x_{1})S_{X_{2t}} (x_{2})} = \frac{S_{X_{1},\,X_{2}}(x_{1}, x_{2})}{S_{X_{1}}(x_{1}) S_{X_{2}}(x_{2})} $$
(8)

for all x 1,x 2,t≥0, where \(\phantom {\dot {i}\!}S_{\mathbf {X}_{t}}(x_{1}, x_{2})\) is the joint survival function of residual lifetime vector X t and \(\phantom {\dot {i}\!}S_{X_{it}} (x_{i})\) are its marginal survival functions, i=1,2.

The S-BLMP is a new concept. Definition 1 tells us that the random vector (X 1,X 2) and its residual lifetime vector X t should share, for all t≥0, the same dependence function \(\phantom {\dot {i}\!}D_{X_{1},X_{2}}(x_{1},x_{2})\) introduced by Sibuya (1960) as follows
$$D_{X_{1},\,X_{2}}(x_{1},x_{2}) = \ln \frac{S_{X_{1},\,X_{2}}(x_{1}, x_{2})}{S_{X_{1}}(x_{1}) S_{X_{2}}(x_{2})}. $$

We will refer to \(\phantom {\dot {i}\!}D_{X_{1},X_{2}}(x_{1},x_{2})\) as “Sibuya's dependence function” hereafter. It exhibits interesting connections with important cases of dependence, see Kolev (2016) for related facts.

The article is organized as follows. In Section 2 we first discuss the S-BLMP and justify our attention to its stronger version: the linear Sibuya-type BLMP introduced by Pinto (2014), see Definition 2. We show its equivalence with the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) and present characterizations. Naturally, Theorems 1 and 2 are consequences of corresponding statements in Pinto and Kolev (2015c). In Section 3 it will be seen that only certain marginal distributions are admissible for the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\). The corresponding conditions in terms of marginal densities and hazard rates are presented. Restrictions on the parameters a 0, a 1 and a 2 are given in Theorem 3 and Proposition 1. As a result, one is able to generate members of the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\), some of them being extensions of Gumbel's law (4), see Examples 1A and 3. A discussion and conclusions close the paper.

A stronger version of S-BLMP

The marginal survival functions of residual lifetime vector X t are given by
$$S_{X_{1t}} (x_{1}) = \frac{S_{X_{1},\,X_{2}}(x_{1}+t, t)}{S_{X_{1},\,X_{2}}(t, t)} \quad \text{and} \quad S_{X_{2t}} (x_{2}) = \frac{S_{X_{1},\,X_{2}}(t, x_{2}+t)}{S_{X_{1},\,X_{2}}(t, t)}. $$
Therefore, the relation (8) can be rewritten as
$$ S_{X_{1},\,X_{2}}(x_{1}+t, x_{2}+t) = S_{X_{1},\,X_{2}}(x_{1}, x_{2})S_{X_{1},\,X_{2}}(t, t)B(x_{1},x_{2};t), $$
(9)

for all x 1,x 2,t≥0, where the continuous function \(B(x_{1},x_{2};t) = \frac {S_{X_{1t}} (x_{1}) S_{X_{2t}} (x_{2})}{S_{X_{1}} (x_{1}) S_{X_{2}} (x_{2})}\) is such that B(x 1,x 2;0)=B(0,0;t)=1. The function B(x 1,x 2;t) is named “aging factor”, see page 457 in Balakrishnan and Lai (2009).

Without some simplifying assumptions on B(x 1,x 2;t) the class of bivariate distributions possessing S-BLMP is too cumbersome to be of use. General characterizations of the S-BLMP in terms of functional equations involving Sibuya’s dependence function are presented by Pinto and Kolev (2015c), see their Lemma 1 for example.

In order to get useful models, one should investigate a stronger version of the S-BLMP. In this paper we will perform a detailed analysis for the particular aging factor of the form B(x 1,x 2;t)= exp{−a 1 x 1 t−a 2 x 2 t}, where a 1 and a 2 are given non-negative constants. One can immediately recognize that, in addition to relation (8) in Definition 1, one must assume that
$$ \qquad \qquad S_{X_{it}}(x_{i}) = S_{X_{i}}(x_{i}) \exp \{ -a_{i} x_{i} t\} \quad \text{for} \quad a_{i} \geq 0, \; i=1,2. $$
(10)

Thus, we arrive at the stronger version of the S-BLMP, i.e. the linear Sibuya-type BLMP introduced by Pinto (2014), as follows.

Definition 2.

The non-negative continuous bivariate distribution (X 1,X 2) possesses linear Sibuya BLMP (to be abbreviated LS-BLMP), if and only if (8) and (10) are satisfied for all x 1,x 2,t≥0 and a i ≥0, i=1,2.

Therefore, the bivariate continuous distributions possessing LS-BLMP can be equivalently represented by relation
$$ S_{X_{1},\,X_{2}}(x_{1}+t,x_{2}+t)=S_{X_{1},\,X_{2}}(x_{1},x_{2})S_{X_{1},\,X_{2}}(t,t)\exp\{ -a_{1}x_{1}t-a_{2}x_{2}t\} $$
(11)

for all x 1,x 2≥0 and t>0. Indeed, let (11) be true. Putting x i =0 in (11) yields relations (10), i=1,2. Substituting the exponent from (10) into (11) gives (8). Conversely, let (8) and (10) be fulfilled. Using expression (10) on the left-hand side of (8) restores (11).

To justify the choice of the specific form exp{−a 1 x 1 t−a 2 x 2 t} of the “aging factor” in (11), let us consider a system composed of two elements with lifetimes represented by non-negative continuous random variables X 1 and X 2. Suppose that during the first t units of time the system is protected from breakdowns (by warranty or insurance, say). It is reasonable to assume that t< min(x 1,x 2). After those first t units of time the system can be affected by two independent “fatal shocks” governed by homogeneous Poisson processes. Assume finally that the i-th unit is damaged with intensity a i t, i.e. the corresponding shock arrival times are exponentially distributed, to be denoted T i ∼ Exp(a i t), i=1,2, see Fig. 1.
Fig. 1

An interpretation of relation (11)

Therefore, the probability of system survival \(S_{X_{1},\,X_{2}}(x_{1}+t,x_{2}+t)\) is given by (11). Indeed, the right-hand side of (11) is the probability \(S_{X_{1},\, X_{2}}(t, t)\) that both elements survive the protected initial t units of time, multiplied by \(S_{X_{1},\, X_{2}}(x_{1}, x_{2}) \exp \{-a_{1} x_{1}t - a_{2} x_{2}t\}\), the probability that no shock occurs during the following x i units of time for the i-th element, i=1,2.

The next characterization theorem gives the joint survival function of bivariate continuous distributions possessing LS-BLMP.

Theorem 1.

The continuous bivariate distribution (X 1,X 2) has LS-BLMP defined by (11) for all x 1,x 2≥0 and t>0 if and only if \(S_{X_{1},X_{2}}(x_{1},x_{2})\) is a non-degenerate bivariate survival function given by
$$ S_{X_{1},\,X_{2}}(x_{1},x_{2}) =\left\{ \begin{array}{ll} S_{X_{1}}(x_{1}-x_{2}) \exp \left\{ - a_{0} x_{2} - a_{1} x_{1} x_{2} - \frac{a_{2}- a_{1}}{2} {x_{2}^{2}} \right\},& \text{if}~ x_{1} > x_{2} \geq 0,\\ \exp \left\{ - a_{0} x_{1} - \frac{a_{1} + a_{2}}{2} {x_{1}^{2}} \right\},& \text{if}~ x_{1} = x_{2} \geq 0,\\ S_{X_{2}}(x_{2}-x_{1}) \exp \left\{ - a_{0} x_{1} - a_{2} x_{1} x_{2} - \frac{a_{1}- a_{2}}{2} {x_{1}^{2}} \right\},& \text{if}~ x_{2} > x_{1} \geq 0, \end{array} \right. $$
(12)

where a 0,a 1,a 2≥0 and \(S_{X_{i}}(x_{i})\) are the marginal survival functions, i=1,2.

Proof.

Follows step by step the proof of Theorem 4 in Pinto and Kolev (2015c) with A i (x)=a i x, i=1,2.

Theorem 1 tells us that (12) is the solution of the functional Eq. (11).
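The functional Eq. (11) can be verified numerically for a concrete member of the class, e.g. the GMO distribution (7) with a 0=λ 3, a 1=2λ 1, a 2=2λ 2 (illustrative parameters):

```python
import math

lam1, lam2, lam3 = 0.4, 0.9, 0.6    # illustrative parameters for (7)
a1, a2 = 2 * lam1, 2 * lam2

def S(x1, x2):                      # GMO survival function (7)
    return math.exp(-lam1 * x1**2 - lam2 * x2**2 - lam3 * max(x1, x2))

def residual_gap(x1, x2, t):
    # discrepancy between the two sides of functional equation (11)
    lhs = S(x1 + t, x2 + t)
    rhs = S(x1, x2) * S(t, t) * math.exp(-a1 * x1 * t - a2 * x2 * t)
    return abs(lhs - rhs)

ok = all(residual_gap(u, v, t) < 1e-12
         for u in (0.0, 0.5, 2.0) for v in (0.3, 1.7) for t in (0.1, 1.0))
```

Expanding the quadratic and max terms on both sides shows the identity holds exactly, which the numerical check confirms.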

Pinto and Kolev (2015c) also show that the S-BLMP characterizes the distributions belonging to the class defined by the relation
$$ r_{1}(x_{1},x_{2}) + r_{2}(x_{1},x_{2}) = a_{0} + A_{1}(x_{1}) + A_{2}(x_{2}), $$
(13)

where a 0>0 and the continuous integrable functions A i (x i ) are such that A i (0)=0 and A i (x i )>−a 0 for all x i >0, i=1,2.

One can deduce that when B(x 1,x 2;t)= exp{−a 1 x 1 t−a 2 x 2 t}, relation (13) transforms into (6), i.e. A i (x i )=a i x i , i=1,2, and we obtain the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\).

When \(S_{X_{1},X_{2}}(x_{1},x_{2})\) is absolutely continuous, its vector hazard gradient R(x 1,x 2) exists everywhere in the interior of the set
$$\mathcal{A} = \left\{ (x_{1},x_{2}) \in \mathbb{R}^{2}_{+} \; | \; S_{X_{1},\,X_{2}}(x_{1},x_{2})>0 \right\}, $$
where \(\mathbb {R}^{2}_{+}\) is the first quadrant. In other words, relation (6) is well defined for all x 1,x 2≥0. If it happens that \(S_{X_{1},X_{2}}(x_{1},x_{2})\) is continuous, its vector hazard gradient R(x 1,x 2) is useful even when it does not exist everywhere in the interior of the set \(\mathcal {A}\), see Marshall (1975). Because of possible singularity of the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) along the line L having zero two-dimensional Lebesgue measure, we will assume hereafter that first partial derivatives of \(S_{X_{1},\,X_{2}}(x_{1},x_{2})\) exist and are continuous in \(\mathcal {A} \backslash L\).

The next characterization theorem holds for bivariate continuous distributions belonging to the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) whose survival functions possess continuous first partial derivatives, and hence continuous hazard gradient vector R(x 1,x 2), in \(\mathcal {A} \backslash L\).

Theorem 2.

If the first partial derivatives of \(S_{X_{1},X_{2}}(x_{1},x_{2})\) exist and are continuous in \(\mathcal {A} \backslash L\), then relation (6) is fulfilled if and only if the joint survival function can be represented by (12).

Proof.

Follows step by step the proof of Theorem 2 in Pinto and Kolev (2015c) with A i (x)=a i x, i=1,2.

Observe that if our base relation is (11), then the LS-BLMP can be characterized by the joint survival function specified by (12) without assuming existence of the hazard gradient vector R(x 1,x 2), as in Theorem 2.

Applying Sklar's theorem to (12), one can obtain the survival copula corresponding to the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\), see Pinto and Kolev (2015a).

Remark 1.

(hazard vector elements in the singular and absolutely continuous cases). The bivariate survival functions considered in Theorem 2 are not necessarily absolutely continuous (and therefore not differentiable in \(\mathcal {A}\)). In fact, we do not exclude the possibility of a singular component along the line L={x 1=x 2≥0}, i.e. it may happen that P(X 1=X 2)>0. In such a case, when (x 1,x 2) belongs to the set \(\left \{ (x_{1}, x_{2}) \in \mathbb {R}^{2}_{+} \, | \, x_{1} = x_{2} = x \right \}\), the function \(S_{X_{1},X_{2}}(x_{1},x_{2})\) is not differentiable and the hazard gradient vector R(x 1,x 2) does not exist. But if in Theorem 2 the continuity of the first partial derivatives of \(S_{X_{1},\,X_{2}}(x_{1},x_{2})\) holds in \(\mathcal {A}\), then \(S_{X_{1},\,X_{2}}(x_{1},x_{2})\) is absolutely continuous, see page 357 in Apostol (1974).

If the joint survival function \(S_{X_{1},\,X_{2}}(x_{1},x_{2})\) is degenerate (i.e. has degenerate marginal distributions), from (12) we get
$$S_{X_{1},\,X_{2}}^{de}(x_{1},x_{2}) =\left\{ \begin{array}{ll} \exp \left\{ - a_{0} x_{2} - a_{1} x_{1} x_{2} - \frac{a_{2}- a_{1}}{2} {x_{2}^{2}} \right\}, & \text{if}~ x_{2} < x_{1} = const,\\ \exp \left\{ - a_{0} x - \frac{a_{1} + a_{2}}{2} x^{2} \right\}, & \text{if}~ x_{2} = x_{1} = x = const,\\ \exp \left\{ - a_{0} x_{1} - a_{2} x_{1} x_{2} - \frac{a_{1}- a_{2}}{2} {x_{1}^{2}} \right\},& \text{if}~ x_{1} < x_{2} = const. \end{array} \right. $$

Obviously, \(S_{X_{1},X_{2}}^{de}(x_{1},x_{2})\) does not have differentiable failure rates.

Now we will identify members of the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) with independent marginals. Hence, when \((X_{1},X_{2}) \in \mathcal {L}(\mathbf {x};\mathbf {a})\), we will find solutions of functional equation
$$S_{X_{1},\,X_{2}}(x_{1},x_{2})=S_{X_{1}}(x_{1})S_{X_{2}}(x_{2}) \quad \text{for all} \quad x_{1},x_{2} \geq 0. $$
Let \(r_{X_{i}}(x_{i})\) be hazard rates of random variables X i , i=1,2. The independence between X 1 and X 2 implies that \(\phantom {\dot {i}\!}r_{i}(x_{1},x_{2}) = r_{X_{i}}(x_{i}), \; i=1,2\). Therefore, relation (6) transforms into
$$r(x_{1},x_{2}) = r_{X_{1}}(x_{1}) + r_{X_{2}}(x_{2}) = a_{0} + a_{1} x_{1} + a_{2} x_{2}. $$
The last equation is equivalent to both
$$r_{X_{1}}(x_{1}) = \alpha_{1} + a_{1}x_{1} \quad \text{and} \quad r_{X_{2}}(x_{2}) = \alpha_{2} + a_{2}x_{2}, $$
where α 1 ∈ [0,a 0] and α 2=a 0−α 1. Thus, we obtain the following result.

Corollary 1.

The vector (X 1,X 2) with independent marginals belongs to the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) if and only if the marginal survival functions \(S_{X_{i}}(x_{i})\) have one of the following three possible analytic forms
$$ \exp\{-\alpha_{i} x_{i}\}, \quad \; \exp\{-0.5a_{i} {x_{i}^{2}}\} \quad \text{or} \quad \; \exp\{-\alpha_{i} x_{i} - 0.5a_{i} {x_{i}^{2}}\}, \quad i= 1,2, $$
where α 1+α 2=a 0 with α 1 ∈ [0,a 0] and a i ≥0, i=1,2.

Therefore, the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) has only 9 members with independent marginals. It is interesting to note that if S X (x)= exp{−α x−0.5a x 2} (the third option in Corollary 1), then X has a linear failure rate r X (x)=α+a x. Univariate distributions of this type were introduced by Kodlin (1967), see also Sen (2006). Additionally, since the first two analytic forms in Corollary 1 are included in the third one (taking a i =0 or α i =0), the members of the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) with independent marginals can be seen as an extension of the (univariate) linear hazard rate model to the bivariate setup.
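The independent-marginals case of Corollary 1 is simple to check: the product of two linear-hazard-rate survival functions satisfies (6) with a 0=α 1+α 2. A finite-difference sketch with illustrative parameters:

```python
import math

alpha1, alpha2 = 0.7, 0.3           # alpha1 + alpha2 = a0, per Corollary 1
a0 = alpha1 + alpha2
a1, a2 = 1.2, 0.4                   # illustrative slopes

def S(x1, x2):
    # product of two linear-hazard-rate marginals (third form in Corollary 1)
    return (math.exp(-alpha1 * x1 - 0.5 * a1 * x1**2)
            * math.exp(-alpha2 * x2 - 0.5 * a2 * x2**2))

def hazard_sum(x1, x2, h=1e-6):     # central differences of -ln S
    d1 = -(math.log(S(x1 + h, x2)) - math.log(S(x1 - h, x2))) / (2 * h)
    d2 = -(math.log(S(x1, x2 + h)) - math.log(S(x1, x2 - h))) / (2 * h)
    return d1 + d2

ok = all(abs(hazard_sum(u, v) - (a0 + a1 * u + a2 * v)) < 1e-4
         for u in (0.2, 1.5) for v in (0.4, 2.0))
```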

Let us note that the bivariate linear failure rate distribution introduced by Hangal and Ahmadi (2011) belongs to the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\).

Finally, one can analyze relation (6) from a completely different point of view. Define the function
$$\psi(x_{1},x_{2}) = r(x_{1},x_{2}) - a_{0} \quad \text{and set} \quad G_{1}(x_{1}) = \psi(x_{1},0), \; G_{2}(x_{2}) = \psi(0,x_{2}). $$
The linear functions G i (x i )=a i x i satisfy the functional equation
$$G_{i}(x_{i}+y_{i}) = G_{i}(x_{i})+ G_{i}(y_{i}) \quad \text{for all}~x_{i}, y_{i} \geq 0, \; i = 1,2. $$

Therefore, G i (x), i=1,2, are additive functions and, being continuous, are linear, since the only continuous solutions of the Cauchy functional equation f(x+y)=f(x)+f(y) are the linear ones, see Theorem 1.1 in Sahoo and Kannappan (2011). In fact, we have proved the following statement.

Lemma 1.

The class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) of non-negative bivariate continuous distributions specified by relation (6) can be equivalently defined by linear (additive) functionals G 1(x 1)=r(x 1,0)−a 0 and G 2(x 2)=r(0,x 2)−a 0, being the only continuous solutions of the Cauchy functional equation f(x+y)=f(x)+f(y).

Restrictions on the marginal densities or failure rates

Theorem 2 characterizes bivariate continuous distributions belonging to the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\), i.e. having joint survival function \(S_{X_{1},X_{2}}(x_{1},x_{2})\) specified by relation (12), which implies additional restrictions on the margins of these distributions. In other words, the corresponding joint survival function is valid only for certain marginal distributions of X 1 and X 2.

Here we will obtain the associated constraints in terms of marginal densities and hazard rates. As a result, we will get the admissible values of the parameter vector a =(a 0,a 1,a 2). The methodology will be illustrated by several typical examples.

3.1 Marginal density restrictions

The next statement shows the corresponding parameter constraints in terms of marginal densities when a 1+a 2>0. The case a 1=a 2=0, i.e. BLMP1, is studied in detail by Kulkarni (2006).

Theorem 3.

Let X i be a random variable with absolutely continuous density \(f_{X_{i}}(x_{i}), \; i = 1,2.\) Let a 0,a 1,a 2≥0 and a 1+a 2>0. Then \(S_{X_{1},X_{2}}(x_{1},x_{2})\) in (12) is a proper bivariate survival function if and only if
$$ A(x_{i},x_{j}) - a_{i} x_{j} + \frac{d}{dx_{i}} \log f_{X_{i}}(x_{i} - x_{j}) + a_{i}[x_{j} A(x_{i},x_{j})-1]\frac{S_{X_{i}}(x_{i}- x_{j})}{f_{X_{i}}(x_{i}- x_{j})} \geq 0 $$
(14)
where A(x i ,x j )=a 0+a i x i +(a j −a i )x j for all x i ≥x j ≥0, i ≠ j, i,j=1,2. The singularity contribution to the joint survival function is given by
$$ \begin{aligned} \alpha = P(X_{1} = X_{2}) = &\left[ f_{X_{1}}(0) + f_{X_{2}}(0) - a_{0} \right]\sqrt {\frac{\pi }{2(a_{1} + a_{2})}} \\ & \times \exp \left\{ \frac{{a_{0}^{2}}}{2(a_{1} + a_{2})} \right\} \left[ 1 - Erf \left(\frac{a_{0}}{\sqrt {2(a_{1} + a_{2})}} \right) \right], \end{aligned} $$
(15)

where \(Erf (x) = \frac {2}{\sqrt {\pi }} {\int _{0}^{x}} \exp \{ -t^{2}\} dt\). In addition, the survival function specified by (12) is absolutely continuous if and only if \(f_{X_{1}}(0) + f_{X_{2}}(0) = a_{0}\).

Proof.

Let \(S_{X_{1},X_{2}}(x_{1},x_{2})\) be given by (12). Then it is proper if
$$\frac{\partial^{2}}{\partial x_{1}\partial x_{2}}S_{X_{1},\,X_{2}}(x_{1},x_{2}) \geq 0. $$

After some algebra the last condition transforms into inequality (14).

Since \(S_{X_{1},X_{2}}(x_{1},x_{2})\) may have a singular component along the line x 1=x 2, we have α=P(X 1=X 2) ∈ [0,1]. The bivariate survival function \(S_{X_{1},\,X_{2}}(x_{1},x_{2})\) in (12) will be proper if and only if both the absolutely continuous part \(S_{X_{1},\,X_{2}}^{ac}(x_{1},x_{2})\) and the singular part \(S_{X_{1},\,X_{2}}^{si}(x_{1},x_{2})\) are survival functions and
$$S_{X_{1},X_{2}}(x_{1},x_{2}) = (1-\alpha) S_{X_{1},X_{2}}^{ac}(x_{1},x_{2}) + \alpha S_{X_{1},X_{2}}^{si}\left(\max\{x_{1},x_{2}\}\right) $$
for α ∈ [0,1]. An equivalent expression in terms of joint densities is given by
$$f_{X_{1},\,X_{2}}(x_{1},x_{2}) = (1-\alpha) f_{X_{1},\,X_{2}}^{ac}(x_{1},x_{2}) + \alpha f_{X_{1},\,X_{2}}^{si}(\max\{x_{1},x_{2}\}), $$
where
$$(1-\alpha) f_{X_{1},\,X_{2}}^{ac}(x_{1},x_{2}) = \frac{\partial^{2}}{\partial x_{1}\partial x_{2}}S_{X_{1},\,X_{2}}(x_{1},x_{2}). $$
To ensure existence of \(S_{X_{1},X_{2}}(x_{1},x_{2})\) we should evaluate the probability α, which is equivalent to imposing that 1−α=P(X 1>X 2)+P(X 2>X 1) ∈ [0,1], so one has to calculate the probabilities in the last sum. We have
$$ P(X_{1} > X_{2}) = \int_{0}^{\infty} {\int_{0}^{u}} (1-\alpha) f_{X_{1},\,X_{2}}^{ac} (u,v)\,\mathrm{d}v \,\mathrm{d}u. $$
Computing the inner integral \(I(u) = {\int _{0}^{u}} (1-\alpha) f_{X_{1},\,X_{2}}^{ac} (u,v)\,\mathrm {d}v\) we get
$$I(u) = - f_{X_{1}}(0)\exp \left\{ - a_{0} u - \frac{a_{2} + a_{1}}{2} u^{2} \right\} - a_{1} u \exp \left\{ - a_{0} u - \frac{a_{2} + a_{1}}{2} u^{2} \right\} + f_{X_{1}}(u). $$
Therefore,
$$ \begin{array}{l} P(X_{1} > X_{2}) = \int_{0}^{\infty} I(u) \,\mathrm{d}u\\ \\ = - f_{X_{1}}(0) \int_{0}^{\infty} \exp \left\{ - a_{0} u - \frac{a_{2} + a_{1}}{2} u^{2} \right\} \,\mathrm{d}u - a_{1} \int_{0}^{\infty} u \exp \left\{ - a_{0} u - \frac{a_{2} + a_{1}}{2} u^{2} \right\} \,\mathrm{d}u + 1. \end{array} $$
(16)
In order to solve integrals in (16), we use the following two expressions taken from Gradshteyn and Ryzhik (2007) for positive constants c and d:
$$ \int_{0}^{\infty} \exp \{-c u^{2} - d u \} \,\mathrm{d}u = \frac{1}{2}\sqrt{\frac{\pi}{c}} \exp \left\{ \frac{d^{2}}{4c} \right\} \left[ 1 - Erf \left(\frac{d}{2 \sqrt{c}} \right) \right], $$
(see equation 3.322.2 on page 336), and
$$ \int_{0}^{\infty} u \exp \{- c u^{2} -d u \} \,\mathrm{d}u = \frac{1}{2c} - \frac{d}{4c}\sqrt{\frac{\pi}{c}} \exp \left\{ \frac{d^{2}}{4c} \right\} \left[ 1- Erf \left(\frac{d}{2 \sqrt{c}} \right) \right], $$
(see equation 3.462.5 on page 365).
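Both quoted closed forms can be double-checked by straightforward quadrature (the positive constants c and d are arbitrary illustrative values; `trapz` is a plain trapezoidal rule):

```python
import math

def trapz(f, a, b, n=200_000):      # plain trapezoidal rule
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

c, d = 0.8, 1.3                     # arbitrary positive constants
common = math.exp(d * d / (4 * c)) * (1 - math.erf(d / (2 * math.sqrt(c))))

# equation 3.322.2: integral of exp(-c u^2 - d u) over [0, infinity)
i1_closed = 0.5 * math.sqrt(math.pi / c) * common
i1_num = trapz(lambda u: math.exp(-c * u * u - d * u), 0.0, 30.0)

# equation 3.462.5: integral of u * exp(-c u^2 - d u) over [0, infinity)
i2_closed = 1 / (2 * c) - (d / (4 * c)) * math.sqrt(math.pi / c) * common
i2_num = trapz(lambda u: u * math.exp(-c * u * u - d * u), 0.0, 30.0)
```

Truncating the integration at 30 is harmless here, since the Gaussian tail is far below machine precision.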
Substituting \(c = \frac {a_{1}+a_{2}}{2}\) and d=a 0 we obtain from (16)
$$\begin{aligned} P(X_{1} > X_{2}) & = \frac{a_{2}}{a_{1} + a_{2}} - \left[ f_{X_{1}}(0) - \frac{a_{0} a_{1}}{a_{1} + a_{2}} \right]\sqrt {\frac{\pi }{2(a_{1} + a_{2})}}\exp \left\{ \frac{{a_{0}^{2}}}{2(a_{1} + a_{2})} \right\}\\ & \times \left[ 1 - Erf \left(\frac{a_{0}}{\sqrt {2(a_{1} + a_{2})}} \right) \right]. \end{aligned} $$
In a similar way we get
$$\begin{aligned} P(X_{2} > X_{1}) & = \frac{a_{1}}{a_{1} + a_{2}} - \left[ f_{X_{2}}(0) - \frac{a_{0} a_{2}}{a_{1} + a_{2}} \right]\sqrt {\frac{\pi }{2(a_{1} + a_{2})}} \exp \left\{ \frac{{a_{0}^{2}}}{2(a_{1} + a_{2})}\right\} \\ & \times \left[ 1 - Erf \left(\frac{a_{0}}{\sqrt {2(a_{1} + a_{2})}} \right) \right]. \end{aligned} $$

From the last expressions and (16) we arrive at (15). To conclude the proof, observe that \(S_{X_{1},\,X_{2}}(x_{1},x_{2})\) in (12) is absolutely continuous if and only if α=0, which is equivalent to the required condition \(f_{X_{1}}(0) + f_{X_{2}}(0) = a_{0}\).
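The three closed-form probabilities fit together: P(X 1>X 2)+P(X 2>X 1)+α=1 identically. A numerical sketch with illustrative values of a 0, a 1, a 2 and hypothetical density values \(f_{X_{1}}(0)\), \(f_{X_{2}}(0)\) chosen so that the constraints of (17) hold:

```python
import math

a0, a1, a2 = 1.0, 0.6, 0.9          # illustrative parameters
f1_0, f2_0 = 0.7, 0.5               # hypothetical marginal densities at zero

s = a1 + a2
K = (math.sqrt(math.pi / (2 * s)) * math.exp(a0 * a0 / (2 * s))
     * (1 - math.erf(a0 / math.sqrt(2 * s))))

p12 = a2 / s - (f1_0 - a0 * a1 / s) * K      # P(X1 > X2)
p21 = a1 / s - (f2_0 - a0 * a2 / s) * K      # P(X2 > X1)
alpha = (f1_0 + f2_0 - a0) * K               # equation (15)

total = p12 + p21 + alpha
```

Algebraically, the K-terms in p12 and p21 combine to −(f1_0 + f2_0 − a0)K, which cancels α exactly.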

Notice that if inequality (14) is fulfilled for some a 0=u>0, then it is also satisfied for all a 0≥u. Denote by
$$\tau = \text{the greatest lower bound of the set of possible values of}~a_{0}~\text{satisfying (14)}. $$

Depending on the marginal densities, it may happen in some special cases that \(\tau > f_{X_{1}}(0) + f_{X_{2}}(0)\), which contradicts (15) since always α=P(X 1=X 2)≥0, i.e. \(\phantom {\dot {i}\!}a_{0} \leq f_{X_{1}}(0) + f_{X_{2}}(0)\). Such high τ-values would be outside of the parameter space \(\mathcal {A}\) of the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\).

The range of possible values of a 0 is given in Proposition 1. It is crucial for the construction of proper bivariate survival functions belonging to the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) from pre-specified marginal densities, as we will see later.

Proposition 1.

Suppose \(\tau \leq f_{X_{1}}(0) + f_{X_{2}}(0).\) If
$$ a_{0} \in \left[ \max\{\tau,\max(f_{X_{1}}(0), f_{X_{2}}(0))\}, f_{X_{1}}(0) + f_{X_{2}}(0) \right], $$
(17)

then Theorem 3 is fulfilled.

Proof.

In the absence of singularity (whenever α=P(X 1=X 2)=0), one concludes from (15) that \(\phantom {\dot {i}\!}f_{X_{1}}(0) + f_{X_{2}}(0) - a_{0} = 0\). Therefore, always \(\phantom {\dot {i}\!}a_{0} \leq f_{X_{1}}(0) + f_{X_{2}}(0),\) which is the upper bound for a 0 in (17).

An increase of the singular contribution to \(S_{X_{1},X_{2}}(x_{1},x_{2})\) increases the probability α=P(X 1=X 2), up to 1. Let us denote by
$$E(a_{0},a_{1},a_{2}) = a_{0} \sqrt {\frac{\pi }{2(a_{1} + a_{2})}} \exp \left\{\frac{{a_{0}^{2}}}{2(a_{1} + a_{2})} \right\} \left[ 1 - Erf \left(\frac{a_{0}}{\sqrt {2(a_{1} + a_{2})}} \right) \right]. $$
It is direct to check that 0≤E(a 0,a 1,a 2)≤1. We may represent (16) as
$$P(X_{1} > X_{2}) = 1 - \frac{f_{X_{1}}(0)}{a_{0}} E(a_{0},a_{1},a_{2}) - \frac{a_{1}}{a_{1} + a_{2}} [1 - E(a_{0},a_{1},a_{2})]. $$

The right hand side of the last equation is non-negative if \(f_{X_{1}}(0) \leq a_{0}.\) By analogy, from the expression for P(X 2>X 1) we obtain \(f_{X_{2}}(0) \leq a_{0}\).

Notice that if \(a_{0} \in [\max \{f_{X_{1}}(0), f_{X_{2}}(0)\}, f_{X_{1}}(0) + f_{X_{2}}(0)] \) then α ∈ [0,1]. Finally, the lower bound in (17) can be obtained by taking into account the restriction on a 0 imposed by inequality (14) and the related possible τ-values.

Remark 2.

(absolutely continuous rule). Observe that whenever the upper bound a U for a 0 given by (17) is attained, i.e. if \(\phantom {\dot {i}\!}a_{U} = a_{0} = f_{X_{1}}(0) + f_{X_{2}}(0),\) one obtains an absolutely continuous bivariate distribution for which (6) is valid. Therefore, Eq. (15), besides reflecting the constraint a 1+a 2>0, also offers a way to detect the presence of singularity.

For absolutely continuous distributions belonging to the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\), it may happen that the lower and upper bounds in (17) coincide, being even zero when \( f_{X_{1}}(0) = f_{X_{2}}(0) = 0\), or equivalently, when \(r_{X_{1}}(0) = r_{X_{2}}(0) = 0\), indicating that a 0=0. For example, consider the joint distribution given by \(S_{X_{1},X_{2}}(x_{1},x_{2}) = \exp \{-0.5a_{1} {x_{1}^{2}} -0.5a_{2} {x_{2}^{2}}\}\). Observe that \( f_{X_{1}}(0) = f_{X_{2}}(0) = 0\), implying a 0=0 and α=P(X 1=X 2)=0. This special case, an absolutely continuous bivariate distribution with independent marginals, may be treated as an “exception”; compare with Corollary 1.

Finally, note that all members of the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) with independent marginals are absolutely continuous.
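The hazard-gradient property behind this exception can be checked by numerical differentiation: for \(S_{X_{1},X_{2}}(x_{1},x_{2}) = \exp \{-0.5a_{1} {x_{1}^{2}} -0.5a_{2} {x_{2}^{2}}\}\), the components of the hazard gradient sum to a 1 x 1+a 2 x 2, a linear function with a 0=0. A minimal sketch with illustrative coefficients (the helper names are ours):

```python
def H(x1, x2, a1=0.8, a2=1.7):
    """Cumulative hazard -log S for the Remark 2 example (illustrative a1, a2)."""
    return 0.5*a1*x1**2 + 0.5*a2*x2**2

def hazard_sum(x1, x2, h=1e-6):
    """Central-difference approximation of r1 + r2 = dH/dx1 + dH/dx2."""
    r1 = (H(x1 + h, x2) - H(x1 - h, x2)) / (2*h)
    r2 = (H(x1, x2 + h) - H(x1, x2 - h)) / (2*h)
    return r1 + r2

# The hazard gradient components sum to 0.8*x1 + 1.7*x2, i.e. a0 = 0.
for x1, x2 in [(0.0, 0.0), (0.5, 0.5), (1.0, 2.0), (3.0, 0.2)]:
    assert abs(hazard_sum(x1, x2) - (0.8*x1 + 1.7*x2)) < 1e-6
```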

Remark 3.

(singularity of the BLMP 1 models). In the particular case a 1=a 2=0, condition (14) transforms into inequality (ii) of Theorem 5.1 in Marshall and Olkin (1967). In addition, the interval of possible values of a 0 in (17) is compatible with that given by Kulkarni (2006) in her Remark 1.

Remark 4.

(singularity of the GMO models). Let us consider the joint survival function of (Y 1,Y 2) belonging to the class of GMO distributions and represented by (7), with a i =2λ i ,i=1,2, and λ 3=a 0. It is direct to check that
$$P(Y_{1} = Y_{2}) = a_{0} \sqrt {\frac{\pi }{2(a_{1} + a_{2})}} \exp \left\{\frac{{a_{0}^{2}}}{2(a_{1} + a_{2})} \right\} \left[ 1 - Erf \left(\frac{a_{0}}{\sqrt {2(a_{1} + a_{2})}} \right) \right]. $$

Observe that the right hand side in the last equation is just the function E(a 0,a 1,a 2) used in the proof of Proposition 1.

We noted in the proof of Theorem 3 that if the survival function \(S_{X_{1},X_{2}}(x_{1},x_{2})\) given by (12) is proper then \(\frac {\partial ^{2}}{\partial x_{1}\partial x_{2}}S_{X_{1},\,X_{2}}(x_{1},x_{2})\) should be non-negative. This condition is equivalent to the requirement
$$S_{X_{1},\,X_{2}}(x_{1},y_{1}) + S_{X_{1},\,X_{2}}(x_{2},y_{2}) - S_{X_{1},\,X_{2}}(x_{1},y_{2}) -S_{X_{1},\,X_{2}}(x_{2},y_{1}) \geq 0 $$
for any two points (x 1,y 1) and (x 2,y 2) in \(\mathbb {R}^{2}_{+}\) such that x 1≤x 2 and y 1≤y 2. For example, if x 1≤x 2≤y 1≤y 2 and a 2=0 we conclude from (12) that the last inequality regarding the joint survival function is equivalent to
$$\frac{S_{X_{2}}(y_{1} - x_{1}) - S_{X_{2}}(y_{2} - x_{1})}{S_{X_{2}}(y_{1} - x_{2})- S_{X_{2}}(y_{2} - x_{2})} \leq \exp \left\{ - (x_{2} - x_{1}) \left[ a_{0} + \frac{a_{1}}{2}(x_{2} + x_{1}) \right] \right\}. $$

In general, such constraints between marginal survival functions are not easily verified. Relations (14) and (15) in Theorem 3 give alternative conditions in terms of absolutely continuous marginal densities \(f_{X_{i}} (x), i = 1,2\). However, depending on the complexity of the analytical form of the densities involved, it may be difficult to check these restrictions.
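Such rectangle conditions are, however, easy to check numerically on a grid for a fully specified candidate. The following sketch (our own illustration, with arbitrary rates) applies the check to the Marshall-Olkin survival function (2):

```python
import itertools, math

l1, l2, l3 = 0.7, 1.1, 0.4   # illustrative Marshall-Olkin rates

def S(x, y):
    """Marshall-Olkin survival function (2)."""
    return math.exp(-l1*x - l2*y - l3*max(x, y))

def rect(x1, y1, x2, y2):
    """Rectangle mass S(x1,y1) + S(x2,y2) - S(x1,y2) - S(x2,y1)."""
    return S(x1, y1) + S(x2, y2) - S(x1, y2) - S(x2, y1)

grid = [0.1 * k for k in range(21)]   # 0.0, 0.1, ..., 2.0
ok = all(rect(x1, y1, x2, y2) >= -1e-12
         for x1, x2 in itertools.combinations(grid, 2)   # x1 < x2
         for y1, y2 in itertools.combinations(grid, 2))  # y1 < y2
assert ok
```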

3.2 Marginal failure rates restrictions

The next result offers another set of equivalent constraints for parameters of \(\mathcal {L}(\mathbf {x};\mathbf {a})\), but in terms of marginal failure rates \(r_{X_{i}}(x), \; i = 1,2\).

Theorem 4.

Let the marginal failure rates \(r_{X_{i}} (x), i = 1, 2,\) be differentiable functions, and suppose that for some nonnegative constants a 0, a 1 and a 2, with a 1+a 2>0, the following relations hold:
$$ 0 \leq r_{X_{i}}(x_{i}) \leq a_{0} + a_{i} x_{i}; $$
(18a)
$$ \begin{aligned} r_{X_{i}}(x_{i}-x_{j})&[A(x_{i},x_{j})-a_{i} x_{j} - r_{X_{i}}(x_{i}-x_{j})]\\ &+ \frac{d}{dx_{i}} r_{X_{i}}(x_{i} - x_{j}) + a_{i}[x_{j}A(x_{i},x_{j})-1] \geq 0, \end{aligned} $$
(18b)
with A(x i ,x j )=a 0+a i x i +(a j −a i )x j , for x i ≥x j ≥0, i,j=1,2, i≠j;
$$ a_{0} \in \left[ \max\{\tau,\max(r_{X_{1}}(0), r_{X_{2}}(0))\}, r_{X_{1}}(0) + r_{X_{2}}(0) \right], $$
(18c)

where τ denotes the greatest lower bound of the set of values of a 0 for which the inequality (18b) is satisfied.

Then the joint survival function \(S_{X_{1},X_{2}}(x_{1},x_{2})\) given by (12) is proper with marginals \(S_{X_{i}}(x_{i}) = \exp \left (- \int _{0}^{x_{i}} r_{X_{i}} (u)\, du \right), x_{i} \geq 0, i=1,2.\) The joint distribution is absolutely continuous if and only if \(a_{0} = r_{X_{1}}(0) + r_{X_{2}}(0)\); otherwise it possesses a singular component.

Proof.

Let \(S_{X_{1},\,X_{2}}(x_{1},x_{2})\) be given by (12) and let the univariate failure rates \(r_{X_{i}}(x_{i})\) be differentiable functions, i=1,2. Suppose that x 2≥x 1≥0. Then substituting x 1=0 in
$$r_{1}(x_{1},x_{2}) =\left\{ \begin{array}{ll} r_{X_{1}}(x_{1}-x_{2}) + a_{1} x_{2},& \text{if}~ x_{1} > x_{2} \geq 0,\\ a_{0} + (a_{1} - a_{2}) x_{1} + a_{2} x_{2} - r_{X_{2}}(x_{2}-x_{1}),& \text{if}~ x_{2} > x_{1} \geq 0 \end{array} \right. $$
one gets \(r_{1}(0,x_{2}) = a_{0} + a_{2} x_{2} - r_{X_{2}}(x_{2}) \geq 0\) and therefore \(r_{X_{2}}(x_{2}) \leq a_{0} + a_{2} x_{2}\). By analogy, we conclude that \(r_{X_{1}}(x_{1}) \leq a_{0} + a_{1} x_{1}\) when x 1≥x 2≥0, and (18a) is established.

All other relations are consequences of Theorem 3.

Conditions (18a), (18b) and (18c) in Theorem 4 suggest several simple practical steps that help to determine the permissible parameter space of the coefficients a 0,a 1 and a 2 in \(\mathcal {L}(\mathbf {x};\mathbf {a})\). We discuss them in the next remark.

Remark 5.

(parameter space of the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) ). Inequality (18a) says that the bivariate distributions from the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) satisfying (6) cannot have marginal distributions with failure rates \(r_{X_{i}}(x_{i})\) above the line a 0+a i x i for i=1,2. For example, distributions with a univariate failure rate of the form \(a {x_{i}^{2}}\), for a>0, are unable to meet (6).

In addition, since \(r_{X_{i}}(\cdot)\) is a failure rate, \(\int _{0}^{\infty } r_{X_{i}}(u)\, du = \infty \), and because of (18a) the support of X i , i=1,2, cannot be bounded from above, i.e. it has to be the entire half-line [0,∞).

To summarize, the parameter space for the coefficients a 0,a 1 and a 2 satisfying (6) in terms of marginal failure rates is given by
$$\{a_{0} \in \left[ \max\{\tau,\max(r_{X_{1}}(0), r_{X_{2}}(0))\}, r_{X_{1}}(0) + r_{X_{2}}(0) \right], \; a_{1}, a_{2} \geq 0, \; a_{1}+ a_{2} > 0\}, $$
because \(r_{X_{i}}(0) = f_{X_{i}}(0)\) for i=1,2. Finally, note that admissible values for coefficients a 1 and a 2 may be further limited as a consequence of inequality (18b) from Theorem 4, see Example 2.

The converse of Theorem 4 also holds for non-degenerate distributions and the statement is given below.

Proposition 2.

If \(S_{X_{1},X_{2}}(x_{1},x_{2})\) is a non-degenerate bivariate survival function given by Eq. (12) with differentiable marginal failure rates, then it must satisfy conditions (18a) to (18c) in Theorem 4.

Proof.

The proof follows step by step that of Proposition 1 in Kulkarni (2006).

The rules established in Theorem 4 may serve as a useful guide for constructing bivariate distributions possessing property (6). The building scheme may be relaxed under additional available information regarding the monotone behavior of the marginal failure rates. In fact, the class of bivariate distributions \(\mathcal {L}(\mathbf {x};\mathbf {a})\) may have an arbitrary combination of marginal failure rates: increasing, decreasing, constant, bathtub, etc., implying, of course, corresponding restrictions on the parameter space.

3.3 Examples

The next two examples illustrate how relations in Theorem 4 can be applied to construct bivariate distributions from \(\mathcal {L}(\mathbf {x};\mathbf {a})\) with given marginal failure rates.

Example 1.

(constant failure rate marginals). Assume that
$$S_{X_{i}}(x) = \exp \{ - \lambda_{i} x \}, \quad \text{i.e.} \quad X_{i} \sim Exp(\lambda_{i}), \quad \lambda_{i} > 0, \; i=1,2. $$

Then \(f_{X_{i}}(x) = \lambda _{i} \exp \{- \lambda _{i} x \}\), \(r_{X_{i}}(x) = \lambda _{i}\) and \(f_{X_{i}}(0) = r_{X_{i}}(0) = \lambda _{i}, \; i = 1,2.\) From (18c) we obtain the first restriction: max(λ 1,λ 2)≤a 0≤λ 1+λ 2.

Let x 1≥x 2. Since a 1≥0, inequality (18b) transforms into
$$0 \leq a_{1} \leq (\lambda_{1} + a_{1} x_{2}) [a_{0} - \lambda_{1} + a_{1}(x_{1} - x_{2}) + a_{2} x_{2}], $$
for all x 1≥x 2≥0. The function (λ 1+a 1 x 2)[a 0−λ 1+a 1(x 1−x 2)+a 2 x 2] is non-decreasing, and its minimum, attained at x 1=x 2=0, equals λ 1(a 0−λ 1). Therefore, 0≤a 1≤λ 1(a 0−λ 1).

Finding the greatest lower bound τ for which condition (18c) in Theorem 4 holds is equivalent to verifying when \(\lambda _{2} (a_{0} - \lambda _{2}) + \lambda _{2} a_{0} a_{1} x_{2} + \lambda _{2} a_{0} a_{1} {x_{2}^{2}} \geq 0.\) The last inequality is satisfied when a 0≥λ 2, but we already have this lower bound for a 0.

Analogously, when x 2≥x 1 we obtain 0≤a 2≤λ 2(a 0−λ 2).
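The minimization argument above can be verified on a grid. A sketch with illustrative parameter values chosen inside the derived constraints (the function name g is ours):

```python
l1, l2 = 1.0, 2.0
a0 = 2.5                       # max(l1, l2) <= a0 <= l1 + l2
a1, a2 = 1.0, 0.8              # within 0 <= a_i <= l_i*(a0 - l_i)

def g(x1, x2):
    """The function minimized in Example 1 (case x1 >= x2)."""
    return (l1 + a1*x2) * (a0 - l1 + a1*(x1 - x2) + a2*x2)

grid = [0.05 * k for k in range(81)]    # 0.0 .. 4.0
m = min(g(x1, x2) for x1 in grid for x2 in grid if x1 >= x2)
assert m == g(0.0, 0.0)                 # minimum attained at the origin
assert abs(m - l1 * (a0 - l1)) < 1e-12  # equals l1*(a0 - l1) = 1.5
```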

Summarizing, the parameter space is
$$\max(\lambda_{1}, \lambda_{2}) \leq a_{0} \leq \lambda_{1} + \lambda_{2}, ~a_{1} + a_{2} > 0~ \text{and}~0 \leq a_{i} \leq \lambda_{i} (a_{0} - \lambda_{i}),~ i=1,2. $$

We will consider two possible cases:

1A. The bivariate survival function will be absolutely continuous if a 0=λ 1+λ 2. Hence, a i =θ i λ 1 λ 2 for θ i ∈(0,1], i=1,2. With these specific parameters we get from (12) the representation
$$ S_{X_{1},\,X_{2}}(x_{1},x_{2}) =\left\{ \begin{array}{ll} \exp \left\{ - \left[\lambda_{1} x_{1} + \lambda_{2} x_{2} + \lambda_{1} \lambda_{2} x_{2} (\theta_{1} x_{1} + \frac{\theta_{2} - \theta_{1}}{2} x_{2}) \right] \right\},& \text{if}~ x_{1} \geq x_{2},\\ \exp \left\{ - \left[\lambda_{1} x_{1} + \lambda_{2} x_{2} + \lambda_{1} \lambda_{2} x_{1} (\theta_{2} x_{2} + \frac{\theta_{1} - \theta_{2}}{2} x_{1}) \right] \right\},& \text{if}~ x_{2} \geq x_{1}, \end{array} \right. $$
(19)

which may be named the Generalized Gumbel bivariate exponential distribution.

Observe that, fixing θ 1=θ 2=θ in the last relation, we obtain Gumbel’s type I bivariate exponential distribution (4) as a particular case.
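The structure of (19) can be verified numerically: exponential marginals, agreement of the two branches on the diagonal, and reduction to Gumbel’s type I law (4) when θ 1=θ 2. A sketch with illustrative rates (function and variable names are ours):

```python
import math

l1, l2 = 1.0, 2.0   # illustrative marginal rates lambda_1, lambda_2

def S19(x1, x2, t1, t2):
    """Survival function (19) with theta_1 = t1, theta_2 = t2 in (0, 1]."""
    if x1 >= x2:
        q = l1*x1 + l2*x2 + l1*l2*x2*(t1*x1 + 0.5*(t2 - t1)*x2)
    else:
        q = l1*x1 + l2*x2 + l1*l2*x1*(t2*x2 + 0.5*(t1 - t2)*x1)
    return math.exp(-q)

# Marginals are Exp(l1) and Exp(l2): set the other argument to zero.
assert abs(S19(0.7, 0.0, 0.3, 0.8) - math.exp(-l1*0.7)) < 1e-12
assert abs(S19(0.0, 0.7, 0.3, 0.8) - math.exp(-l2*0.7)) < 1e-12

# The two branches agree on the diagonal x1 = x2.
d = 1.3
q_diag = l1*d + l2*d + 0.5*l1*l2*(0.3 + 0.8)*d**2
assert abs(S19(d, d, 0.3, 0.8) - math.exp(-q_diag)) < 1e-12

# theta_1 = theta_2 = theta recovers Gumbel's type I law (4).
theta = 0.5
gumbel = math.exp(-(l1*0.4 + l2*0.9 + theta*l1*l2*0.4*0.9))
assert abs(S19(0.4, 0.9, theta, theta) - gumbel) < 1e-12
```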

1B. A bivariate survival function with absolutely continuous and singular components can also be constructed when a 0<λ 1+λ 2. Suppose λ 1>λ 2 and let a 0=λ 1. Notice that with this parameter choice the restrictions in (18c) are fulfilled. Hence we obtain a 1=0 and a 2=θ λ 2(λ 1−λ 2), where θ∈(0,1]. Substituting these parameter values in (12) we get
$$S_{X_{1},\,X_{2}}(x_{1},x_{2}) =\left\{ \begin{array}{ll} \exp \left\{ - \left[\lambda_{1} x_{1} + \frac{\theta \lambda_{2} (\lambda_{1} - \lambda_{2})}{2} {x_{2}^{2}} \right] \right\},& \text{if}~ x_{1} \geq x_{2},\\ \begin{aligned} & \exp \left\{ - \left[(\lambda_{1} - \lambda_{2}) x_{1} + \lambda_{2} x_{2} \right] \right\}\\ & \times \exp \left\{ - \left[ \theta \lambda_{2} (\lambda_{1} - \lambda_{2}) x_{1} (x_{2} - \frac{x_{1}}{2}) \right] \right\} \end{aligned},& \text{if}~ x_{2} \geq x_{1}. \end{array} \right. $$

In what follows, we will build a bivariate distribution with increasing marginal failure rates.

Example 2.

(increasing failure rate marginals). Consider
$$S_{X_{i}}(x) = \exp \left\{ - \lambda_{i} x^{2} - \lambda_{3} x \right\} \quad \text{for} \quad x \geq 0, \lambda_{i} > 0, \lambda_{3} > 0, \; i=1,2. $$

Since \(f_{X_{i}}(x) = (2 \lambda _{i} x + \lambda _{3}) \exp \left \{ - \lambda _{i} x^{2} - \lambda _{3} x \right \},\) we have \(r_{X_{i}}(x) = 2 \lambda _{i} x + \lambda _{3}, i=1,2,\) and the marginals have increasing failure rates. The first limitations on the parameter space come from inequalities (18a) and (18c), i.e. a 0≥λ 3 and a i ≥2λ i , i=1,2.

When x 1≥x 2≥0, from (18b) we get a nonnegative function, increasing in x 1 and x 2,
$$2 \lambda_{1} - a_{1} - [ 2 \lambda_{1} (x_{1} - x_{2}) + \lambda_{3} + a_{1} x_{2} ] [ (2 \lambda_{1} - a_{1}) (x_{1} - x_{2}) + \lambda_{3} - a_{0} - a_{2} x_{2} ] \geq 0 $$
with a minimum at the point (0,0). Hence we obtain a 1≤2λ 1+λ 3(a 0λ 3).

Analogously, for x 2≥x 1≥0 we get 2λ 2≤a 2≤2λ 2+λ 3(a 0−λ 3).

Summarizing, we have the constraints
$$\lambda_{3} \leq a_{0} \leq 2 \lambda_{3} ~\text{and} ~2 \lambda_{i} \leq a_{i} \leq 2 \lambda_{i} + \lambda_{3}(a_{0} - \lambda_{3}),~ i=1,2. $$
2A. If a 0=2λ 3 we obtain an absolutely continuous bivariate survival function. In this case \(a_{i} = 2 \lambda _{i} + \theta _{i} {\lambda _{3}^{2}}, \theta _{i} \in \, [0,1], i=1,2\); substituting these values in (12), one gets
$$S_{X_{1},\,X_{2}}(x_{1},x_{2}) =\left\{ \begin{array}{ll} \begin{aligned} & \exp \left\{ - \left[ \lambda_{1} {x_{1}^{2}} + \lambda_{3} x_{1} + \lambda_{2} {x_{2}^{2}} \right] \right\}\\ & \times \exp \left\{ - \left[ \lambda_{3} x_{2} + {\lambda_{3}^{2}} x_{2} (\theta_{1} x_{1} + \frac{\theta_{2} - \theta_{1}}{2} x_{2}) \right] \right\} \end{aligned},& \text{if}~ x_{1} \geq x_{2},\\ \\ \begin{aligned} & \exp \left\{ - \left[ \lambda_{2} {x_{2}^{2}} + \lambda_{3} x_{2} + \lambda_{1} {x_{1}^{2}} \right] \right\} \\ & \times \exp \left\{ - \left[ \lambda_{3} x_{1} + {\lambda_{3}^{2}} x_{1} (\theta_{2} x_{2} + \frac{\theta_{1} - \theta_{2}}{2} x_{1}) \right] \right\} \end{aligned},& \text{if}~ x_{2} \geq x_{1}. \end{array} \right. $$

Observe that the expression of the joint survival function involves a complete second degree polynomial in the exponent. In addition, notice that θ 1=θ 2=0 implies independence between X 1 and X 2.
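The same kind of numerical check applies to the case 2A survival function: the marginals are \(\exp \{-\lambda _{i} x^{2} - \lambda _{3} x\}\), the two branches agree on the diagonal, and θ 1=θ 2=0 yields independence, as noted above. A sketch with illustrative parameters (names are ours):

```python
import math

l1, l2, l3 = 0.5, 1.5, 1.0   # illustrative lambda_1, lambda_2, lambda_3

def S2A(x1, x2, t1, t2):
    """Case 2A survival function with theta_1 = t1, theta_2 = t2 in [0, 1]."""
    if x1 >= x2:
        q = (l1*x1**2 + l3*x1 + l2*x2**2
             + l3*x2 + l3**2 * x2 * (t1*x1 + 0.5*(t2 - t1)*x2))
    else:
        q = (l2*x2**2 + l3*x2 + l1*x1**2
             + l3*x1 + l3**2 * x1 * (t2*x2 + 0.5*(t1 - t2)*x1))
    return math.exp(-q)

# Marginals are S_{X_i}(x) = exp(-l_i x^2 - l3 x).
assert abs(S2A(0.8, 0.0, 0.4, 0.9) - math.exp(-l1*0.8**2 - l3*0.8)) < 1e-12
assert abs(S2A(0.0, 0.8, 0.4, 0.9) - math.exp(-l2*0.8**2 - l3*0.8)) < 1e-12

# The two branches agree on the diagonal x1 = x2.
d = 1.1
q = (l1 + l2)*d**2 + 2*l3*d + 0.5*l3**2 * (0.4 + 0.9) * d**2
assert abs(S2A(d, d, 0.4, 0.9) - math.exp(-q)) < 1e-12

# theta_1 = theta_2 = 0 gives independent marginals.
prod = math.exp(-l1*0.6**2 - l3*0.6) * math.exp(-l2*1.4**2 - l3*1.4)
assert abs(S2A(0.6, 1.4, 0.0, 0.0) - prod) < 1e-12
```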

2B. A bivariate survival function having absolutely continuous and singular components can also be obtained by substituting
$$a_{0} = (1+ \theta_{0}) \lambda_{3}, \quad \theta_{0} \in\, [0,1), \quad a_{i} = 2 \lambda_{i} + \theta_{0} \theta_{i} {\lambda_{3}^{2}} \quad \text{and} \quad \theta_{i} \in\, [0,1], \;i=1,2$$
in (12). Letting θ 0=0 in the corresponding expression yields relation (7), i.e. the distribution from the class of GMO distributions; see Li and Pellerey (2011).

Example 3.

(“min” operation based construction). Let the survival function of the bivariate random vector (Y 1,Y 2) follow Gumbel’s type I bivariate exponential distribution given by (4), which is a member of the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\). Assume that Y 3∼Exp(λ 3) is independent of (Y 1,Y 2). Then (X 1,X 2)=[ min(Y 1,Y 3), min(Y 2,Y 3)] belongs to \(\mathcal {L}(\mathbf {x};\mathbf {a})\) as well, and its survival function is given by
$$ S_{X_{1},\,X_{2}}(x_{1},x_{2}) =\left\{ \begin{array}{ll} \exp \{- (\lambda_{1} + \lambda_{3}) x_{1} - \lambda_{2} x_{2} - \theta \lambda_{1} \lambda_{2} x_{1} x_{2} \},& \text{if}~x_{1} \geq x_{2},\\ \exp \{- \lambda_{1} x_{1} - (\lambda_{2} + \lambda_{3}) x_{2} - \theta \lambda_{1} \lambda_{2} x_{1} x_{2} \},& \text{if}~x_{2} \geq x_{1}. \end{array} \right. $$
(20)

Notice that \(S_{Y_{1}, Y_{2}}(x_{1}, x_{2})\) is absolutely continuous, but \(S_{X_{1},X_{2}}(x_{1}, x_{2})\) has a singular component along the line x 1=x 2. Expression (20) is given by Pinto and Kolev (2015c) in their Example 1.

It is worth noting that \(S_{X_{1},\,X_{2}}(x_{1}, x_{2})\) given by (20), despite being continuous (but not absolutely continuous), preserves the local constancy of the failure rates r 1(x 1,x 2) and r 2(x 1,x 2) in a very similar fashion as Gumbel’s bivariate distribution (4) does, i.e. r i (x 1,x 2)=λ i +θ λ 1 λ 2 x 3−i , i=1,2. The hazard components of (20) are given by
$$r_{i}(x_{1},x_{2}) =\left\{ \begin{array}{ll} \lambda_{i} + \lambda_{3} + \theta \lambda_{1} \lambda_{2} x_{3-i},& \text{if}~x_{i} > x_{3-i},\\ \text{does not exist} & \text{if}~x_{1} = x_{2},\\ \lambda_{i} + \theta \lambda_{1} \lambda_{2} x_{3-i},& \text{if}~x_{i} < x_{3-i} \end{array} \right. $$
for i=1,2. Therefore, we may consider (20) as a Gumbel extended bivariate exponential distribution with a singularity along the line x 1=x 2. Substituting λ 3=0 in (20), one gets the absolutely continuous Gumbel version (4).
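The construction can also be confirmed directly: since Y 3 is independent of (Y 1,Y 2), we have \(S_{X_{1},X_{2}}(x_{1},x_{2}) = S_{Y_{1},Y_{2}}(x_{1},x_{2})\, e^{-\lambda _{3}\max (x_{1},x_{2})}\), which matches (20) branch by branch. A numerical sketch with illustrative parameters (names are ours):

```python
import math

l1, l2, l3, theta = 0.6, 1.2, 0.9, 0.7   # illustrative parameters

def S_gumbel(x1, x2):
    """Gumbel's type I bivariate exponential law (4)."""
    return math.exp(-l1*x1 - l2*x2 - theta*l1*l2*x1*x2)

def S20(x1, x2):
    """Survival function (20)."""
    if x1 >= x2:
        q = (l1 + l3)*x1 + l2*x2 + theta*l1*l2*x1*x2
    else:
        q = l1*x1 + (l2 + l3)*x2 + theta*l1*l2*x1*x2
    return math.exp(-q)

# Independence of Y3 gives
# S_{X1,X2}(x1,x2) = S_{Y1,Y2}(x1,x2) * P(Y3 > max(x1,x2)).
grid = [0.25 * k for k in range(13)]   # 0.0 .. 3.0
assert all(abs(S20(x1, x2)
               - S_gumbel(x1, x2) * math.exp(-l3 * max(x1, x2))) < 1e-12
           for x1 in grid for x2 in grid)
```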

Remark 6.

(expanding Gumbel’s bivariate law). The construction in Example 3 incorporates a singular component into the resulting distribution, which belongs to the wider class of Extended Marshall-Olkin (EMO) bivariate distributions introduced by Pinto and Kolev (2015b). In fact, we assume that T 1 and T 2 are dependent random variables, but independent of T 3 in the stochastic representation (3). Particular members of the EMO class are the MO bivariate exponential distribution satisfying (3) and the GMO distributions (recall that the joint distribution given by (7) is a member of the GMO class).

Discussion and conclusions

In this paper we study a stronger version of the S-BLMP introduced in Pinto and Kolev (2015c), see Definition 1. We define the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) and characterize it by the following equivalent relations: \(\mathcal {L}(\mathbf {x};\mathbf {a}) \Leftrightarrow (6) \Leftrightarrow (12)\). In addition,
$$\text{Lemma}~1 \Leftrightarrow \mathcal{L}(\mathbf{x};\mathbf{a}) \Leftrightarrow \text{LS}-\text{BLMP} \Leftrightarrow (\mathbf{11}). $$

Thus, the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) may be treated as a key tool for deepening the BLMP notion, making it possible to model aging phenomena as a complement to the “non-aging” world fixed by Eqs. (1) and (4), via B L M P 1 and B L M P 2 respectively. Our new base equations are (6) and (11). We are convinced that the class introduced here is promising for modeling dynamic aging dependence, being much more realistic than the virtual “non-aging world”. The class \(\mathcal {L}(\mathbf {x};\mathbf {a})\) includes symmetric and asymmetric continuous distributions with a possible singularity; distributions which are positive or negative quadrant dependent; distributions from the GMO and EMO classes, etc. This great variety of bivariate distributions should help in choosing the “right” model consistent with the physical nature of the observations. The selection of the bivariate distribution to be used should rest on considerations involving both the physical scenario at hand and the properties of the chosen distribution.

The seminal Gumbel type I bivariate exponential distribution given by (4) has a central place in probability theory and has been the object of many characterization results. To list a few: it is the only absolutely continuous bivariate distribution possessing B L M P 2; it belongs to the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\); it is a key bivariate extreme value distribution, etc. We obtained two extensions of the Gumbel type I distribution: one absolutely continuous and the other having a singular component, represented by (19) and (20) respectively; consult Remark 6 as well. We believe that further characterizations based on those relations will establish the generalized Gumbel laws as a starting point and a base for obtaining new bivariate models, with higher flexibility and better chances of capturing the genuine dependence structure.

We did not consider in this article inference procedures related to the model (6), nor its application to real data sets. However, in Pinto and Kolev (2015b) we performed a Bayesian analysis using the EMO distribution (20) (a member of the class \(\mathcal {L}(\mathbf {x};\mathbf {a})\)) for the soccer data with ties studied by Meintanis (2007) and many other authors. According to the Deviance Information Criterion, our model (20) provided a better fit than the Marshall-Olkin bivariate Weibull distribution recently introduced by Kundu and Gupta (2013), who analyzed the same data set.

In general, one may use techniques for estimating a bivariate density function with partially differentiable kernels, e.g. Scott (1992). Another option is to apply the Kaplan-Meier estimate of the bivariate survival function, even under censoring, following Dabrowska (1988), for example. Once the model is selected, goodness of fit can be tested with conventional methods.

Declarations

Acknowledgments

The authors are grateful to the Editors and the referees for their suggestions which helped to improve this article. The first author is grateful for the support of the Central Bank of Brazil. The second author is partially supported by FAPESP (2013/07375-0) and TUBITAK grants.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Statistics, University of São Paulo

References

  1. Apostol, TM: Mathematical Analysis. 2nd Edition. Addison-Wesley, Reading (1974).
  2. Balakrishnan, N, Lai, C-D: Continuous Bivariate Distributions. 2nd Edition. Springer, New York (2009).
  3. Block, HW, Basu, AP: A continuous bivariate exponential extension. J. Am. Stat. Assoc. 69, 1031–1037 (1974).
  4. Dabrowska, D: Kaplan-Meier estimate on the plane. Ann. Stat. 16, 1475–1489 (1988).
  5. Freund, JE: A bivariate extension of the exponential distribution. J. Am. Stat. Assoc. 56, 971–977 (1961).
  6. Friday, DS, Patil, GP: A bivariate exponential model with applications to reliability and computer generation of random variables. In: Tsokos, CP, Shimi, IN (eds.) The Theory and Applications of Reliability, vol. 1, pp. 527–549. Academic Press, New York (1977).
  7. Gradshteyn, IS, Ryzhik, IM: Tables of Integrals, Series and Products. 7th Edition. Academic Press, Boston (2007).
  8. Gumbel, E: Bivariate exponential distributions. J. Am. Stat. Assoc. 55, 698–707 (1960).
  9. Hanagal, D, Ahmadi, K: Bivariate linear failure distribution. Int. J. Stat. Manag. Sci. 6, 73–84 (2011).
  10. Johnson, NL, Kotz, S: A vector multivariate hazard rate. J. Multivariate Anal. 5, 53–66 (1975).
  11. Kodlin, D: A new response time distribution. Biometrics 23, 227–239 (1967).
  12. Kolev, N: Characterizations of the class of bivariate Gompertz distributions. J. Multivariate Anal. 148, 173–179 (2016).
  13. Kulkarni, HV: Characterizations and modelling of multivariate lack of memory property. Metrika 64, 167–180 (2006).
  14. Kundu, D, Gupta, A: Bayes estimation for the Marshall-Olkin bivariate Weibull distribution. Comput. Stat. Data Anal. 57, 271–281 (2013).
  15. Li, X, Pellerey, F: Generalized Marshall-Olkin distributions and related bivariate aging properties. J. Multivariate Anal. 102, 1399–1409 (2011).
  16. Marshall, AW: Some comments on the hazard gradient. Stoch. Process. Appl. 3, 293–300 (1975).
  17. Marshall, AW, Olkin, I: A multivariate exponential distribution. J. Am. Stat. Assoc. 62, 30–41 (1967).
  18. Meintanis, S: Test of fit for Marshall-Olkin distributions with applications. J. Stat. Plann. Infer. 137, 3954–3963 (2007).
  19. Pinto, J: Deepening the Notions of Dependence and Aging in Bivariate Probability Distributions. PhD Thesis. Sao Paulo University Press, Sao Paulo (2014).
  20. Pinto, J, Kolev, N: Copula representations for invariant dependence functions. In: Glau, K, Scherer, M, Zagst, R (eds.) Innovations in Quantitative Risk Management, vol. 99, pp. 411–421. Springer Series in Mathematics & Statistics, Springer, Heidelberg (2015a).
  21. Pinto, J, Kolev, N: Extended Marshall-Olkin model and its dual version. In: Cherubini, U, Durante, F, Mulinacci, S (eds.) Marshall-Olkin Distributions - Advances in Theory and Applications, vol. 141, pp. 87–113. Springer Series in Mathematics & Statistics, Springer, Heidelberg (2015b).
  22. Pinto, J, Kolev, N: Sibuya-type bivariate lack of memory property. J. Multivariate Anal. 134, 119–128 (2015c).
  23. Proschan, F, Sullo, P: Estimating the parameters of a bivariate exponential distribution in several sampling situations. In: Proschan, F, Serfling, RJ (eds.) Reliability and Biometry: Statistical Analysis of Life Lengths, pp. 423–440. Society for Industrial and Applied Mathematics, Philadelphia (1974).
  24. Roy, D: On bivariate lack of memory property and a new definition. Ann. Inst. Stat. Math. 54, 404–410 (2002).
  25. Sahoo, PK, Kannappan, P: Introduction to Functional Equations. CRC Press, Boca Raton (2011).
  26. Sen, A: Linear failure rate distributions. In: Kotz, S, Read, C, Balakrishnan, N, Vidakovic, B (eds.) Encyclopedia of Statistical Sciences. 2nd Edition, pp. 4212–4217. Wiley, New Jersey (2006).
  27. Scott, D: Multivariate Density Estimation: Theory, Practice and Visualization. Wiley, New York (1992).
  28. Sibuya, M: Bivariate extreme statistics I. Ann. Inst. Stat. Math. 11, 195–210 (1960).
  29. Singpurwalla, N: Reliability and Risk: A Bayesian Perspective. Wiley, Chichester (2006).

Copyright

© Pinto and Kolev. 2016