Open Access

The generalized Cauchy family of distributions with applications

Journal of Statistical Distributions and Applications, 2016, 3:12

https://doi.org/10.1186/s40488-016-0050-3

Received: 23 February 2016

Accepted: 13 July 2016

Published: 3 August 2016

Abstract

A family of generalized Cauchy distributions, T-Cauchy{Y} family, is proposed using the T-R{Y} framework. The family of distributions is generated using the quantile functions of uniform, exponential, log-logistic, logistic, extreme value, and Fréchet distributions. Several general properties of the T-Cauchy{Y} family are studied in detail including moments, mean deviations and Shannon’s entropy. Some members of the T-Cauchy{Y} family are developed and one member, gamma-Cauchy{exponential} distribution, is studied in detail. The distributions in the T-Cauchy{Y} family are very flexible due to their various shapes. The distributions can be symmetric, skewed to the right or skewed to the left.

Keywords

T-R{Y} framework, Quantile function, Moments, Shannon’s entropy

1. Introduction

The Cauchy distribution, named after Augustin Cauchy, is a simple family of distributions for which the expected value does not exist. The family is closed under the formation of sums of independent random variables, and hence is an infinitely divisible family of distributions. The Cauchy distribution was used by Stigler (1989) to obtain an explicit expression for P(Z 1 ≤ 0, Z 2 ≤ 0), where (Z 1, Z 2) T follows the standard bivariate normal distribution. The Cauchy distribution has been used in many applications such as mechanical and electrical theory, physical anthropology, measurement problems, and risk and financial analysis. It was also used to model the points of impact on a fixed straight line of particles emitted from a point source (Johnson et al. 1994). In physics it is known as the Lorentzian distribution, where it describes the energy of an unstable state in quantum mechanics.

Eugene et al. (2002) introduced the beta-generated family of distributions using the beta as the baseline distribution. Based on the beta-generated family, Alshawarbeh et al. (2013) proposed the beta-Cauchy distribution. The beta-generated family was extended by Alzaatreh et al. (2013) to the T-R(W) family. The cumulative distribution function (CDF) of the T-R(W) distribution is \( G(x)={\displaystyle {\int}_a^{W\left(F(x)\right)}r(t)dt,} \) where r(t) is the probability density function (PDF) of a random variable T with support (a, b) for − ∞ ≤ a < b ≤ ∞. The link function W : [0, 1] → ℝ is monotonic and absolutely continuous with W(0) → a and W(1) → b.

Aljarrah et al. (2014) considered the function W(.) to be the quantile function of a random variable Y and defined the T-R{Y} family. In the T-R{Y} framework, the random variable T is a ‘transformer’ that is used to ‘transform’ the random variable R into a new family of generalized distributions of R. Many families of generalized distributions have appeared in the literature. Alzaatreh et al. (2014, 2015) studied the T-gamma and the T-normal families. Almheidat et al. (2015) studied the T-Weibull family. In this paper, a family of generalized Cauchy distributions is proposed and studied.

This article focuses on the generalization of the Cauchy distribution and studies some new distributions and their applications. The article gives a brief review of the T-R{Y} framework, defines several new generalized Cauchy sub-families, and presents some general properties of the T-Cauchy{Y} distributions. One member of the T-Cauchy{Y} family, the gamma-Cauchy{exponential} distribution, is studied in detail, including its moments, estimation and applications. Some concluding remarks are provided.

2. The T-Cauchy{Y} family of distributions

The T-R{Y} framework defined in Aljarrah et al. (2014) (see also Alzaatreh et al. 2014) is given as follows. Let T, R and Y be random variables with CDFs F Z (x) = P(Z ≤ x) and corresponding quantile functions Q Z (p), where Z = T, R, Y and the quantile function is defined as Q Z (p) = inf{z : F Z (z) ≥ p}, 0 < p < 1. If the densities exist, we denote them by f Z (x), for Z = T, R and Y. Now assume the random variables T and Y have common support (a, b) for − ∞ ≤ a < b ≤ ∞. The random variable X in the T-R{Y} family of distributions is defined as
$$ {F}_X(x)={\displaystyle {\int}_a^{{\mathrm{Q}}_Y\left({F}_R(x)\right)}{f}_T(t)dt}={F}_T\left({Q}_Y\left({F}_R(x)\right)\right). $$
(1)
The corresponding PDF associated with (1) is
$$ {f}_X(x)={f}_T\left({Q}_Y\left({F}_R(x)\right)\right)\times {Q}_Y^{\prime}\left({F}_R(x)\right)\times {f}_R(x). $$
(2)
Alternatively, (2) can be written as
$$ {f}_X(x)={f}_R(x)\times \frac{f_T\left({Q}_Y\left({F}_R(x)\right)\right)}{f_Y\left({Q}_Y\left({F}_R(x)\right)\right)}. $$
(3)
The hazard function of the random variable X can be written as
$$ {h}_X(x)={h}_R(x)\times \frac{h_T\left({Q}_Y\left({F}_R(x)\right)\right)}{h_Y\left({Q}_Y\left({F}_R(x)\right)\right)}. $$
(4)
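To see the composition in (1) at work, the following Python sketch (an illustration added here, not part of the original derivation) takes T and Y to be standard exponential and R to be standard Cauchy. Since T and Y are then equal in distribution, F T (Q Y (p)) = p and the composed CDF collapses back to F R :

```python
import math

def F_cauchy(x, theta=1.0):
    # Cauchy CDF, F_C(x) = 0.5 + (1/pi) * arctan(x/theta)
    return 0.5 + math.atan(x / theta) / math.pi

def F_exp(t):
    # CDF of the standard exponential distribution (role of T)
    return 1.0 - math.exp(-t)

def Q_exp(p):
    # Quantile function of the standard exponential distribution (role of Y)
    return -math.log1p(-p)

def F_X(x):
    # Eq. (1): F_X(x) = F_T(Q_Y(F_R(x))) with T, Y exponential and R Cauchy
    return F_exp(Q_exp(F_cauchy(x)))

# When T and Y share the same distribution, X reduces to R (here: Cauchy)
for x in (-3.0, -0.5, 0.0, 1.0, 4.0):
    assert abs(F_X(x) - F_cauchy(x)) < 1e-12
```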

Alzaatreh et al. (2013, 2014, 2015) studied, respectively, the T-R{exponential}, T-normal{Y} and T-gamma{Y} families of distributions. Aljarrah et al. (2014) studied some general properties of the T-R{Y} family. Next, we define the T-Cauchy{Y} family.

Let R be a random variable that follows the Cauchy distribution with PDF f R (x) = f C (x) = π − 1 θ − 1(1 + (x/θ)2)− 1 and CDF F R (x) = F C (x) = 0.5 + π − 1 tan− 1(x/θ), x ∈ ℝ, θ > 0; then (3) reduces to
$$ {f}_X(x)={f}_C(x)\times \frac{f_T\left({Q}_Y\left({F}_C(x)\right)\right)}{f_Y\left({Q}_Y\left({F}_C(x)\right)\right)}. $$
(5)
Hereafter, the family of distributions in (5) will be called the T-Cauchy{Y} family. It is clear that the PDF in (5) is a generalization of the Cauchy distribution. From (1), if \( T\overset{d}{=}Y, \) then \( X\overset{d}{=}\mathrm{Cauchy}\left(\theta \right). \) Also, if \( Y\overset{d}{=}\mathrm{Cauchy}\left(\theta \right), \) then \( X\overset{d}{=}T. \) Furthermore, when T ~ beta(a, b) and Y ~ uniform(0, 1), the T-Cauchy{Y} reduces to the beta-Cauchy distribution (Alshawarbeh et al. 2013). When T ~ Power(a) and Y ~ uniform(0, 1), the T-Cauchy{Y} reduces to the exponentiated-Cauchy distribution (Sarabia and Castillo 2005). Table 1 gives six quantile functions of known distributions (in standard form) which will be applied to generate T-Cauchy{Y} sub-families in the following subsections. It is straightforward to use non-standard quantile functions instead, but many of the resulting distributions in the T-R{Y} family would then have more than five parameters, which is not practically useful (Johnson et al. 1994, p. 12). Thus, we focus on the standard quantile functions in this paper.
Table 1

Quantile functions for different Y distributions

| Y | Q Y (p) |
|---|---|
| (a) Uniform | p |
| (b) Exponential | −log(1 − p) |
| (c) Log-logistic | p/(1 − p) |
| (d) Logistic | log[p/(1 − p)] |
| (e) Extreme value | log[−log(1 − p)] |
| (f) Fréchet | −1/log(p) |

2.1 T-Cauchy{uniform} family of distributions

By using the quantile function of the uniform distribution in Table 1, the corresponding CDF to (1) is
$$ {F}_X(x)={F}_T\left\{{F}_C(x)\right\}, $$
(6)
and the corresponding PDF to (6) is
$$ {f}_X(x)={f}_C(x)\times {f}_T\left({F}_C(x)\right),\kern0.24em x\in \mathrm{\mathbb{R}}. $$
(7)

2.2 T-Cauchy{exponential} family of distributions

By using the quantile function of the exponential distribution in Table 1, the corresponding CDF to (1) is
$$ {F}_X(x)={F}_T\left\{- \log \left(1-{F}_C(x)\right)\right\} $$
(8)
and the corresponding PDF to (8) is
$$ {f}_X(x)=\frac{f_C(x)}{1-{F}_C(x)}\times {f}_T\left(- \log \left(1-{F}_C(x)\right)\right),\kern0.24em x\in \mathrm{\mathbb{R}}. $$
(9)

Note that the CDF and the PDF in (8) and (9) can be written as F X (x) = F T (H C (x)) and f X (x) = h C (x)f T (H C (x)) where h C (x) and H C (x) are the hazard and cumulative hazard functions for the Cauchy distribution, respectively. Therefore, the T-Cauchy{exponential} family of distributions arises from the ‘hazard function of the Cauchy distribution’.

2.3 T-Cauchy{log-logistic} family of distributions

By using the quantile function of the log-logistic distribution in Table 1, the corresponding CDF to (1) is
$$ {F}_X(x)={F}_T\left\{{F}_C(x)/\left[1-{F}_C(x)\right]\right\}, $$
(10)
and the corresponding PDF is
$$ {f}_X(x)=\frac{f_C(x)}{{\left(1-{F}_C(x)\right)}^2}\times {f}_T\left(\frac{F_C(x)}{1-{F}_C(x)}\right),\kern0.24em x\in \mathrm{\mathbb{R}}, $$
(11)
which is a family of generalized Cauchy distributions arising from the ‘odds’ of the Cauchy distribution.

2.4 T-Cauchy{logistic} family of distributions

By using the quantile function of the logistic distribution in Table 1, the corresponding CDF to (1) is
$$ {F}_X(x)={F}_T\left\{ \log \left({F}_C(x)/\left[1-{F}_C(x)\right]\right)\right\}, $$
(12)
and the corresponding PDF is
$$ {f}_X(x)=\frac{f_C(x)}{F_C(x)\left[1-{F}_C(x)\right]}\times {f}_T\left( \log \left({F}_C(x)/\left[1-{F}_C(x)\right]\right)\right), \kern0.24em x\in \mathrm{\mathbb{R}}. $$
(13)

Note that (13) can be written as \( {f}_X(x)=\frac{h_C(x)}{F_C(x)}\times {f}_T\left( \log \left(\frac{F_C(x)}{1-{F}_C(x)}\right)\right) \), which is a family of generalized Cauchy distributions arising from the ‘logit function’ of the Cauchy distribution.

2.5 T-Cauchy{extreme value} family of distributions

By using the quantile function of the extreme value distribution in Table 1, the corresponding CDF to (1) is
$$ {F}_X(x)={F}_T\left\{ \log \left(- \log \left[1-{F}_C(x)\right]\right)\right\}, $$
(14)
and the corresponding PDF is
$$ {f}_X(x)=\frac{f_C(x)}{\left[{F}_C(x)-1\right] \log \left(1-{F}_C(x)\right)}\times {f}_T\left\{ \log \left[- \log \left(1-{F}_C(x)\right)\right]\right\},\kern0.24em x\in \mathrm{\mathbb{R}}. $$
(15)

The CDF in (14) and the PDF in (15) can be written as F X (x) = F T (log H C (x)) and f X (x) = {h C (x)/H C (x)}f T (log H C (x)), respectively.

2.6 T-Cauchy{Fréchet} family of distributions

By using the quantile function of the Fréchet distribution in Table 1, the corresponding CDF to (1) is
$$ {F}_X(x)={F}_T\left\{-1/ \log \left({F}_C(x)\right)\right\}, $$
(16)
and the corresponding PDF is
$$ {f}_X(x)=\frac{f_C(x)}{F_C(x){\left( \log {F}_C(x)\right)}^2}\times {f}_T\left\{-1/ \log \left({F}_C(x)\right)\right\},\kern0.24em x\in \mathrm{\mathbb{R}}. $$
(17)
Figures 1 and 2 show some examples of two members of the T-Cauchy{Y} family. The first example is the Weibull-Cauchy{exponential} distribution, which can be obtained by replacing the random variable T in (9) with a Weibull(c, γ) random variable. The second example is the Lomax-Cauchy{log-logistic} distribution, which can be obtained by replacing the random variable T in (11) with a Lomax(α, λ) random variable. From the figures, it appears that the shapes of the distributions can be left-skewed, right-skewed or symmetric.
Fig. 1

Graphs for the PDF of the Weibull-Cauchy{exponential} distribution when θ = 1

Fig. 2

Graphs for the PDF of the Lomax-Cauchy{log-logistic} distribution

3. Some properties of the T-Cauchy{Y} family of distributions

In this section, we discuss some general properties of the T-Cauchy family of distributions. The proofs are omitted for straightforward results.

Lemma 1: Let T be a random variable with PDF f T (x), then

(i) the random variable X = − θ cot(πF Y (T)) follows the T-Cauchy{Y} distribution;

(ii) the quantile function of the T-Cauchy{Y} family is Q X (p) = − θ cot(πF Y (Q T (p))).
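As a numerical illustration of Lemma 1(i) (a sketch added here, with an arbitrary Weibull shape c = 1.5), the following Python code generates Weibull-Cauchy{exponential} variates via X = − θ cot(πF Y (T)) with Y standard exponential and checks the empirical CDF at x = 0 against the closed form F T (H C (0)) = 1 − exp(−(log 2) c ) implied by (8):

```python
import math
import random

random.seed(1)

theta, c = 1.0, 1.5  # Cauchy scale and an illustrative Weibull shape (assumption)

# Lemma 1(i): X = -theta * cot(pi * F_Y(T)) with Y standard exponential,
# i.e. F_Y(t) = 1 - exp(-t), gives a Weibull-Cauchy{exponential} variate.
def sample():
    t = random.weibullvariate(1.0, c)        # T ~ Weibull(scale 1, shape c)
    return -theta / math.tan(math.pi * (1.0 - math.exp(-t)))

n = 200_000
frac_below_zero = sum(sample() <= 0.0 for _ in range(n)) / n

# Eq. (8): F_X(0) = F_T(-log(1 - F_C(0))) = F_T(log 2) = 1 - exp(-(log 2)^c)
target = 1.0 - math.exp(-math.log(2.0) ** c)
assert abs(frac_below_zero - target) < 0.01
```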
The Shannon’s entropy (Shannon 1948) of a random variable X is a measure of its uncertainty and is defined as η X  = − E{log(f(X))}. The following proposition provides an expression for the Shannon’s entropy of the T-Cauchy{Y} family.

Proposition 1: The Shannon’s entropy for the T-Cauchy{Y} family is given by
$$ {\eta}_X= \log \left(\theta \right)- \log \left(\pi \right)+{\eta}_T+E\left( \log {f}_Y(T)\right)-2E\left\{ \log \left({F}_Y(T)\right)\right\}-2{\displaystyle \sum_{j=1}^{\infty }{V}_j}E{\left[{F}_Y(T)\right]}^{2j}. $$
(18)
Proof: By using the result in Aljarrah et al. (2014), the Shannon’s entropy for the T-Cauchy{Y} is
$$ {\eta}_X={\eta}_T+E\left( \log {f}_Y(T)\right)-E\left\{ \log {f}_C\left({Q}_C\left({F}_Y(T)\right)\right)\right\}. $$
(19)
Now, one can show that
$$ \log {f}_C\left({Q}_C\left({F}_Y(t)\right)\right)=- \log \left(\pi \theta \right)+2 \log \left( \sin \left(\pi {F}_Y(t)\right)\right). $$
(20)
On using the following series expansion from Gradshteyn and Ryzhik (2007, p. 55)
$$ \log \left( \sin \left(\pi x\right)\right)= \log \left(\pi x\right)+{\displaystyle \sum_{j=1}^{\infty }{V}_j}{x}^{2j}, $$
(21)
where \( {V}_j=\frac{{\left(-1\right)}^j{\left(2\pi \right)}^{2j}{B}_{2j}}{2j(2j)!} \) and B 2j are the Bernoulli numbers, we get the result in (18).□

The next proposition gives a general expression for the r-th moment of the T-Cauchy{Y} family.

Proposition 2: The r-th moment for the T-Cauchy{Y} family of distributions is given by
$$ E\left({X}^r\right)={\left(-1\right)}^r{\theta}^r{\displaystyle \sum_{k=0}^{\infty }{c}_k}E{\left[{F}_Y(T)\right]}^{2k-r}, $$
(22)
where c 0 = π − r , \( {c}_m=\pi {m}^{-1}{\displaystyle {\sum}_{k=1}^m\left(kr-m+k\right){w}_k{c}_{m-k},\kern0.24em m\ge 1} \) and \( {w}_k=\frac{{\left(-1\right)}^k{2}^{2k}{B}_{2k\;}{\pi}^{2k-1}}{(2k)!}. \)
Proof: From Lemma 1(i), the r-th moment for the T-Cauchy{Y} family can be written as E(X r ) = (−1) r θ r E(cot π(F Y (T))) r . Using the series expansion (see Abramowitz and Stegun 1964, p. 75) \( \cot \left(\pi x\right)={\displaystyle \sum_{k=0}^{\infty }{w}_k{x}^{2k-1}},\kern0.5em \left|x\right|<1, \) where \( {w}_k=\frac{{\left(-1\right)}^k{2}^{2k}{B}_{2k\;}{\pi}^{2k-1}}{(2k)!}, \) we obtain
$$ {\left( \cot \pi \left({F}_Y(t)\right)\right)}^r={\displaystyle \sum_{k=0}^{\infty }{c}_k}{\left({F}_Y(t)\right)}^{2k-r}, $$
(23)
where c 0 = π − r , \( {c}_m=\pi {m}^{-1}{\displaystyle \sum_{k=1}^m\left(kr-m+k\right){w}_k{c}_{m-k},}\kern0.5em m\ge 1 \) [see Gradshteyn and Ryzhik 2007, p. 17]. □

As an example of the applicability of the results in Lemma 1 and Propositions 1 and 2, we apply them to the T-Cauchy{exponential} family. Similar results can be obtained for any of the other T-Cauchy{Y} families.

Corollary 1: Based on Lemma 1, if T is a random variable with PDF f T (x), then

(i) the random variable X = θ cot(πe − T ) follows a distribution in the T-Cauchy{exponential} family;

(ii) the quantile function of the T-Cauchy{exponential} family is \( {Q}_X(p)=\theta \cot \left(\pi {e}^{-{Q}_T(p)}\right). \)
Corollary 2: The Shannon’s entropy for the T-Cauchy{exponential} family is given by
$$ {\eta}_X= \log \left(\theta \right)- \log \left(\pi \right)+{\eta}_T+{\mu}_T-2{\displaystyle \sum_{j=1}^{\infty }{V}_j}{M}_T\left(-2j\right), $$
where M T (.) is the moment generating function of the random variable T.

Proof: The result follows from Proposition 1, the identity sin(π(1 − z)) = sin(πz), which allows F Y (T) = 1 − e − T to be replaced by e − T in (20), and the fact that E(log f Y (T)) = − μ T .□

Corollary 3: The r-th moment for the T-Cauchy{exponential} family of distributions is given by
$$ E\left({X}^r\right)={\theta}^r{\displaystyle \sum_{k=0}^{\infty }{c}_k}{M}_T\left(r-2k\right), $$
where c k is defined in Proposition 2.

Proof: The result follows from Proposition 2 and the fact that cot(πF Y (u)) = − cot(πe − u ).□

Proposition 3: The mode(s) of the T-Cauchy{exponential} family are the solutions of the equation
$$ x=\frac{\theta }{2\pi }{h}_C(x)\left\{1+\frac{f_T^{\prime}\left({H}_C(x)\right)}{f_T\left({H}_C(x)\right)}\right\}. $$
(24)

Proof: For the Cauchy distribution, one can show that \( {f}_C^{\prime }(x)=-2\pi {\theta}^{-1}x{f}_C^2(x) \) and \( {h}_C^{\prime }(x)=-2\pi {\theta}^{-1}x{h}_C(x)+{h}_C^2(x). \) Differentiating \( {f}_X(x) \) in Eq. (9) and setting the derivative to zero yields the result in (24). □

4. Gamma-Cauchy{exponential} distribution

In the remaining sections, we investigate in detail the properties, parameter estimation and applications of a new member of the T-Cauchy{Y} family, the gamma-Cauchy{exponential} distribution. This distribution is interesting because its special cases include the exponentiated Cauchy distribution and the distributions of record values from the Cauchy distribution. Let T be a random variable that follows the gamma distribution with parameters \( \alpha \) and β. From Eqs. (9) and (8), the PDF and CDF of the gamma-Cauchy{exponential} distribution are, respectively, given by
$$ f(x)=\frac{{\left[- \log \left(0.5-{\pi}^{-1}{ \tan}^{-1}\left(x/\theta \right)\right)\right]}^{\alpha -1}{\left[0.5-{\pi}^{-1}{ \tan}^{-1}\left(x/\theta \right)\right]}^{\frac{1}{\beta }-1}}{\pi \theta {\beta}^{\alpha}\varGamma \left(\alpha \right)\left(1+{\left(x/\theta \right)}^2\right)},\kern0.5em x\in \mathrm{\mathbb{R}}, $$
(25)
$$ F(x)=\frac{\gamma \left[\alpha, -{\beta}^{-1} \log \left(0.5-{\pi}^{-1}{ \tan}^{-1}\left(x/\theta \right)\right)\right]}{\varGamma \left(\alpha \right)},\kern0.36em x\in \mathrm{\mathbb{R}}, $$
(26)
where \( \gamma \left(\alpha, x\right)={\displaystyle {\int}_0^x{t}^{\alpha -1}}{e}^{-t}dt \) is the incomplete gamma function. For simplicity, a random variable X with PDF f(x) in (25) is said to follow the gamma-Cauchy{exponential} distribution and is denoted by GC(α, β, θ).
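The PDF (25) and CDF (26) are straightforward to evaluate numerically. The following Python sketch (illustrative; the standard power series for the lower incomplete gamma function and the parameter values are our choices, not from the paper) implements both and checks that the CDF is the antiderivative of the PDF:

```python
import math

def gc_pdf(x, a, b, theta):
    # PDF (25) of the gamma-Cauchy{exponential} distribution GC(a, b, theta)
    z = 0.5 - math.atan(x / theta) / math.pi          # z = 1 - F_C(x)
    return ((-math.log(z)) ** (a - 1) * z ** (1.0 / b - 1)
            / (math.pi * theta * b ** a * math.gamma(a) * (1 + (x / theta) ** 2)))

def lower_inc_gamma(a, x, terms=200):
    # Standard series: gamma(a, x) = x^a e^{-x} sum_{n>=0} x^n / (a(a+1)...(a+n))
    total, term = 0.0, 1.0 / a
    for n in range(terms):
        total += term
        term *= x / (a + n + 1)
    return x ** a * math.exp(-x) * total

def gc_cdf(x, a, b, theta):
    # CDF (26): gamma(a, -(1/b) log z) / Gamma(a)
    z = 0.5 - math.atan(x / theta) / math.pi
    return lower_inc_gamma(a, -math.log(z) / b) / math.gamma(a)

a, b, theta = 2.0, 1.0, 1.0  # illustrative parameter values (assumption)
# The CDF should be the antiderivative of the PDF ...
h = 1e-5
for x in (-2.0, 0.0, 0.5, 3.0):
    numeric = (gc_cdf(x + h, a, b, theta) - gc_cdf(x - h, a, b, theta)) / (2 * h)
    assert abs(numeric - gc_pdf(x, a, b, theta)) < 1e-6
# ... and tend to 0 and 1 in the tails
assert gc_cdf(-1e9, a, b, theta) < 1e-6 and gc_cdf(1e9, a, b, theta) > 1 - 1e-6
```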
Some special cases are worth mentioning:

(i) GC(1, β, θ) is the exponentiated Cauchy distribution proposed by Sarabia and Castillo (2005). In particular, GC(1, 1, 1) is the standard Cauchy distribution.

(ii) \( \mathrm{GC}\left(1,{n}^{-1},\theta \right) \), n ∈ ℕ, is the distribution of the minimum of n independent Cauchy random variables.

(iii) GC(n + 1, 1, θ), n ∈ ℕ, is the distribution of the nth upper record in a sequence of independent Cauchy random variables.
Remarks: The following results follow from Corollary 1, Corollary 2 and Proposition 3.

(i) If a random variable Y follows a gamma distribution with parameters \( \alpha \) and β, then X = θ cot(πe − Y ) follows the GC(α, β, θ) distribution.

(ii) The quantile function of GC(α, β, θ) is \( Q(p)=\theta \cot \left(\pi {e}^{-\beta {\gamma}^{-1}\left[\alpha, p\varGamma \left(\alpha \right)\right]}\right),\kern0.5em 0<p<1. \)

(iii) The Shannon’s entropy for the GC(α, β, θ) distribution is given by \( {\eta}_X=\alpha \left(1+\beta \right)+ \log \left({\pi}^{-1}\theta \beta \varGamma \left(\alpha \right)\right)+\left(1-\alpha \right)\psi \left(\alpha \right)-2{\displaystyle \sum_{j=1}^{\infty }{V}_j{\left(1+2j\beta \right)}^{-\alpha },} \) where ψ(.) is the digamma function and V j is defined in Eq. (21).
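Remark (i) gives a simple simulation recipe. As a sketch (added here, not from the paper), taking α = β = θ = 1 makes Y standard exponential, so e − Y is uniform and X = cot(πe − Y ) should be standard Cauchy; the following Python check compares empirical quantiles with the Cauchy quantile tan(π(p − 1/2)):

```python
import math
import random

random.seed(7)

# Remark (i) with alpha = beta = theta = 1: Y ~ gamma(1, 1) is standard
# exponential, so X = cot(pi * e^{-Y}) should be standard Cauchy (GC(1, 1, 1)).
n = 200_000
xs = [1.0 / math.tan(math.pi * math.exp(-random.gammavariate(1.0, 1.0)))
      for _ in range(n)]

# Compare empirical quantiles with the Cauchy quantile Q(p) = tan(pi*(p - 1/2))
xs.sort()
for p in (0.25, 0.5, 0.75):
    empirical = xs[int(p * n)]
    assert abs(empirical - math.tan(math.pi * (p - 0.5))) < 0.05
```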

Proposition 4: The GC(α, β, θ) distribution is unimodal and the mode is at m = θx where x is the solution of the equation
$$ k(x)= \log \left(\frac{{ \cot}^{-1}(x)}{\pi}\right)\left[2x{ \cot}^{-1}(x)-1+1/\beta \right]+\alpha -1=0. $$

Proof: It is not difficult to show that the mode of f(x) in (25) is the solution of k(x/θ) = 0, where k(x) is defined above. Therefore, the mode of f(x) is at m = θx where k(x) = 0. To show the unimodality of f(x), consider A(x) = log(π − 1 cot− 1(x)) and B(x) = 2x cot− 1(x). Clearly A(x) is a strictly decreasing function (since it is equal to log(1 − F C (x))). Furthermore, A(x) < 0 for all x. Now, B′(x) = 2[−x/(1 + x 2) + cot− 1(x)]. Therefore, B′(x) > 0 for all x ≤ 0. If x > 0, we have B′(x) < B′(0) = π since B″(x) < 0. Since \( \underset{x\to \infty }{ \lim }{B}^{\prime }(x)=0, \) we get B′(x) > 0 for all x > 0. Therefore, B(x) is strictly increasing for all x. Now, let us prove the claim that η(x) = A(x)B(x) is a decreasing function on ℝ.

Proof of the claim: Let 0 ≤ x ≤ y, then 0 ≤ − A(x) ≤ − A(y) and 0 ≤ B(x) ≤ B(y). This implies that η(x) ≥ η(y). Now let x < 0, then η′(x) = − 2x/(x 2 + 1) − 2(x 2 + 1)− 1 x log(π − 1 cot− 1(x)) + 2 cot− 1(x)log(π − 1 cot− 1(x)). Since the middle term in η′(x) is negative, consider \( \psi (x)=\frac{x}{x^2+1}-{ \cot}^{-1}(x) \log \left({ \cot}^{-1}(x)/\pi \right). \) On differentiation, \( {\psi}^{\prime }(x)=\frac{1}{x^2+1}\left\{\frac{2}{x^2+1}+ \log \left({ \cot}^{-1}(x)/\pi \right)\right\}. \) It is easy to show that the term \( \zeta (x)=\frac{2}{x^2+1}+ \log \left({ \cot}^{-1}(x)/\pi \right) \) is strictly increasing on x ≤ 0 with ζ(0) > 0 and ζ(−∞) → 0. Thus, ζ(x) > 0 for all x < 0. This implies that ψ(x) is strictly increasing on x < 0 with ψ(0) > 0 and ψ(−∞) → 0. That is, ψ(x) > 0 for all x < 0. Therefore, η′(x) ≤ 0 for all x < 0. Hence, η(x) = A(x)B(x) is a decreasing function on ℝ. This completes the proof of the claim. The fact that η(−∞) → 2 and η(∞) → − ∞ implies that η(x) = 0 has a unique solution. Now, B(x) − 1 + 1/β is only a shift of B(x) by − 1 + 1/β and therefore remains strictly increasing. One can show that A(x)[B(x) − 1 + 1/β] remains a decreasing function for all x, and hence k(x) is a decreasing function on ℝ with k(−∞) → α + 1 > 0 and k(∞) → − ∞. This ends the proof.□
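Proposition 4 can also be checked numerically. The following Python sketch (illustrative; the parameter choice α = 2, β = 1, θ = 1 is ours) locates the maximum of the PDF (25) on a grid and verifies that it approximately solves the mode equation k(x) = 0:

```python
import math

def gc_pdf(x, a, b, theta=1.0):
    # PDF (25) of GC(a, b, theta)
    z = 0.5 - math.atan(x / theta) / math.pi
    return ((-math.log(z)) ** (a - 1) * z ** (1.0 / b - 1)
            / (math.pi * theta * b ** a * math.gamma(a) * (1 + (x / theta) ** 2)))

def k(x, a, b):
    # The mode equation of Proposition 4 (theta = 1)
    acot = math.pi / 2 - math.atan(x)            # arccot with range (0, pi)
    return math.log(acot / math.pi) * (2 * x * acot - 1 + 1 / b) + a - 1

a, b = 2.0, 1.0                                  # illustrative parameters (assumption)
grid = [i / 1000 for i in range(-10_000, 10_001)]   # x in [-10, 10]
mode = max(grid, key=lambda x: gc_pdf(x, a, b))

# The grid maximiser of the PDF should (approximately) solve k(x) = 0
assert abs(k(mode, a, b)) < 0.05
```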

In Fig. 3, various graphs of f(x) are provided for different parameter values of α and β where θ = 1. The plots indicate that the gamma-Cauchy{exponential} distribution can be symmetric, right-skewed or left-skewed. Also, it appears that gamma-Cauchy{exponential} is symmetric only for the trivial case when α = β = 1.
Fig. 3

Graphs for the PDF of the gamma-Cauchy{exponential} distribution when θ = 1

In the following subsection, we provide some results related to the moments of GC(α, β, θ) distribution.

4.1 Moments of gamma-Cauchy{exponential} distribution

From Corollary 3, the r-th moment of the GC(α, β, θ) can be written as
$$ {\mu}_r^{\prime}\left(\alpha, \beta, \theta \right)=E\left({X}^r\right)={\theta}^r{\displaystyle \sum_{k=0}^{\infty }{c}_k}{\left[1-\beta \left(r-2k\right)\right]}^{-\alpha }, $$
(27)
where c k is defined in Eq. (22). Therefore, the mean of GC(α, β, θ) is
$$ {\mu}_1^{\prime}\left(\alpha, \beta, \theta \right)=\theta {\displaystyle \sum_{k=0}^{\infty }{c}_k}{\left[1-\beta \left(1-2k\right)\right]}^{-\alpha }, $$
where c k is defined in (22) with r = 1. Note that \( {\mu}_1^{\prime}\left(\alpha, \beta, \theta \right) \) exists only when α > 1 and β < 1 (see Proposition 5).
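The series for the mean can be evaluated directly. In the Python sketch below (our illustration; the truncation point K = 60 and the parameter choice GC(5, 0.2, 1) are assumptions), the Bernoulli numbers are computed exactly with rational arithmetic, w k follows the formula in Proposition 2 (for r = 1, c k  = w k ), and the result is cross-checked against a Monte Carlo sample generated via Remark (i):

```python
import math
import random
from fractions import Fraction

def bernoulli(m):
    # Bernoulli numbers B_0..B_m via sum_{j=0}^{n-1} C(n+1, j) B_j = -B_n (n+1),
    # i.e. the usual recursion, in exact rational arithmetic
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-sum(math.comb(n + 1, j) * B[j] for j in range(n))
                 / Fraction(n + 1))
    return B

K = 60                        # series truncation point (assumption)
B = bernoulli(2 * K)

def w(k):
    # w_k = (-1)^k 2^{2k} B_{2k} pi^{2k-1} / (2k)!  from the cot expansion
    return ((-1) ** k * float(Fraction(4 ** k, math.factorial(2 * k)) * B[2 * k])
            * math.pi ** (2 * k - 1))

def gc_mean(a, b, theta=1.0):
    # Eq. (27) with r = 1 (c_k = w_k): mean = theta * sum_k w_k [1 - b(1-2k)]^{-a}
    return theta * sum(w(k) * (1 - b * (1 - 2 * k)) ** (-a) for k in range(K))

m = gc_mean(5.0, 0.2)
assert abs(m - 0.4480) < 0.01     # agrees with the entry for GC(5, 0.2, 1) in Table 2

# Monte Carlo cross-check via Remark (i): X = cot(pi * e^{-T}), T ~ gamma(a, b)
random.seed(3)
mc = sum(1.0 / math.tan(math.pi * math.exp(-random.gammavariate(5.0, 0.2)))
         for _ in range(100_000)) / 100_000
assert abs(mc - m) < 0.05
```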

The next proposition establishes the condition for the existence of r-th moment of the GC(α, β, θ) distribution.

Proposition 5: The r-th moment of the GC(α, β, θ) distribution exists if and only if α > r and β − 1 > r.

Proof: Without loss of generality, we assume θ = 1 and apply a similar idea as in Alshawarbeh et al. (2012). We write
$$ E\left({X}^r\right)={\displaystyle {\int}_{-\infty}^{-1}{x}^r}g(x)dx+{\displaystyle {\int}_{-1}^1{x}^r}g(x)dx+{\displaystyle {\int}_1^{\infty }{x}^r}g(x)dx. $$
(28)
Since the middle integral is finite, it suffices to investigate the existence of the first and third integrals on the right hand side of Eq. (28). Consider \( {I}_1={\displaystyle {\int}_1^{\infty }{x}^r}g(x)dx \) and \( {I}_2={\displaystyle {\int}_{-\infty}^{-1}{x}^r}g(x)dx, \) and consider the following inequality from Abramowitz and Stegun (1964, p. 68):
$$ x<- \log \left(1-x\right)<\frac{x}{1-x},\kern1em x<1,\kern0.24em x\ne 0. $$
(29)
On using the inequality in (29) and for α > 1, we have
$$ {I}_1\le \frac{1}{\pi {\beta}^{\alpha}\varGamma \left(\alpha \right)}{\displaystyle {\int}_1^{\infty}\frac{x^r}{1+{x}^2}}{\left(1/2+{\pi}^{-1}{ \tan}^{-1}(x)\right)}^{\alpha -1}{\left(1/2-{\pi}^{-1}{ \tan}^{-1}(x)\right)}^{1/\beta -\alpha }dx. $$
Let us write \( \delta (x)=\frac{x^r}{1+{x}^2}{\left(1/2+{\pi}^{-1}{ \tan}^{-1}(x)\right)}^{\alpha -1}{\left(1/2-{\pi}^{-1}{ \tan}^{-1}(x)\right)}^{1/\beta -\alpha } \). Then one can show that as x → ∞, δ(x) ~ x − (1/β − α − r + 2). Therefore, I 1 exists if and only if 1/β − α > r − 1. Since α > 1, this implies 1/β > r. Now, if α < 1, the inequality in (29) implies that
$$ {I}_1\le \frac{1}{\pi {\beta}^{\alpha}\varGamma \left(\alpha \right)}{\displaystyle {\int}_1^{\infty}\frac{x^r}{1+{x}^2}}{\left(1/2+{\pi}^{-1}{ \tan}^{-1}(x)\right)}^{\alpha -1}{\left(1/2-{\pi}^{-1}{ \tan}^{-1}(x)\right)}^{1/\beta -1}dx. $$

Let \( \zeta (x)=\frac{x^r}{1+{x}^2}{\left(1/2+{\pi}^{-1}{ \tan}^{-1}(x)\right)}^{\alpha -1}{\left(1/2-{\pi}^{-1}{ \tan}^{-1}(x)\right)}^{1/\beta -1}. \) As x → ∞, ζ(x) ~ x − (1/β − r + 1). So I 1 exists if and only if 1/β > r. Similarly, one can show that I 2 exists if and only if α > r.□

Next, we consider recursive relation for the r-th moment of the GC(α, β, θ) distribution.

Proposition 6: Let X ~ GC(α, β, 1) and n ∈ ℕ. Then

(i) \( {\mu}_{2n}^{\prime}\left(\alpha, \beta \right)=\frac{1}{\pi \beta {\left(1-\beta \right)}^{\alpha -1}}{\displaystyle \sum_{j=1}^n\frac{{\left(-1\right)}^{j-1}}{2n-2j+1}}\left\{{\mu}_{2n-2j+1}^{\prime}\left(\alpha, \frac{\beta }{1-\beta}\right)-{\mu}_{2n-2j+1}^{\prime}\left(\alpha -1,\frac{\beta }{1-\beta}\right)\right\}+{\left(-1\right)}^n. \)

(ii) \( {\mu}_{2n+1}^{\prime}\left(\alpha, \beta \right)=\frac{1}{\pi \beta {\left(1-\beta \right)}^{\alpha -1}}{\displaystyle \sum_{j=1}^n{\displaystyle \sum_{i=0}^j\frac{{\left(-1\right)}^{n-j}}{2j}}}\left(\begin{array}{c}\hfill n\hfill \\ {}\hfill j\hfill \end{array}\right)\left(\begin{array}{c}\hfill j\hfill \\ {}\hfill i\hfill \end{array}\right)\left\{{\mu}_{2i}^{\prime}\left(\alpha, \frac{\beta }{1-\beta}\right)-{\mu}_{2i}^{\prime}\left(\alpha -1,\kern0.5em \frac{\beta }{1-\beta}\right)\right\}+{\left(-1\right)}^n{\mu}_1^{\prime}\left(\alpha, \beta \right). \)
Proof: From (25) and using the substitution u = tan − 1(x), we have
$$ \pi {\beta}^{\alpha}\varGamma \left(\alpha \right){\mu}_{2n}^{\prime }={{\displaystyle {\int}_{-\pi /2}^{\pi /2}\left( \tan u\right)}}^{2n}{\left(- \log \left(0.5-{\pi}^{-1}u\right)\right)}^{\alpha -1}{\left(0.5-{\pi}^{-1}u\right)}^{1/\beta -1}du={\displaystyle \sum_{j=0}^n{I}_j}, $$
(30)
where
$$ {I}_0={\left(-1\right)}^n{{\displaystyle {\int}_{-\pi /2}^{\pi /2}\left(- \log \left(0.5-{\pi}^{-1}u\right)\right)}}^{\alpha -1}{\left(0.5-{\pi}^{-1}u\right)}^{1/\beta -1}du $$
and
$$ {I}_j={\left(-1\right)}^{j-1}{\displaystyle {\int}_{-\pi /2}^{\pi /2}{ \tan}^{2n-2j}}(u){ \sec}^2(u){\left(- \log \left(0.5-{\pi}^{-1}u\right)\right)}^{\alpha -1}{\left(0.5-{\pi}^{-1}u\right)}^{1/\beta -1}du,\kern0.5em j\ge 1. $$
It is easy to see that I 0 = (−1) n π β α Γ(α). Also, it is not difficult to show that
$$ {I}_j=\frac{{\left(-1\right)}^{j-1}\varGamma \left(\alpha \right)}{2n-2j+1}{\left(\frac{\beta }{1-\beta}\right)}^{\alpha -1}\left\{{\mu}_{2n-2j+1}^{\prime}\left(\alpha, \frac{\beta }{1-\beta}\right)-{\mu}_{2n-2j+1}^{\prime}\left(\alpha -1,\frac{\beta }{1-\beta}\right)\right\}. $$
(31)
Substituting (31) in (30), we get the result in (i). For the proof of (ii), one can easily see that
$$ \pi \kern0.1em {\beta}^{\alpha}\varGamma \left(\alpha \right){\mu}_{2n+1}^{\prime }={\left(-1\right)}^n\pi \kern0.1em {\beta}^{\alpha}\varGamma \left(\alpha \right){\mu}_1^{\prime}\left(\alpha, \beta \right)+{\displaystyle \sum_{j=1}^n{I}_j}, $$
where
$$ {I}_j={\left(-1\right)}^{n-j}\left(\begin{array}{c}\hfill n\hfill \\ {}\hfill j\hfill \end{array}\right){\displaystyle {\int}_{-\pi /2}^{\pi /2}{ \sec}^{2j-1}}(u) \sec u \tan u{\left(- \log \left(0.5-{\pi}^{-1}u\right)\right)}^{\alpha -1}{\left(0.5-{\pi}^{-1}u\right)}^{1/\beta -1}du. $$
The rest of the proof is straightforward.□
Table 2 provides the mean, variance, skewness and kurtosis of the GC(α, β, 1) distribution for various values of \( \alpha \) and β. For fixed \( \alpha \), the mean and skewness are increasing functions of β. Also, for fixed β, the mean is an increasing function of \( \alpha \). Furthermore, the skewness values in Table 2 show that the distribution is very flexible in shape and can be left- or right-skewed.
Table 2

Mean, variance, skewness and kurtosis calculations for GC(α, β, 1)

| α | β | Mean | Variance | Skewness | Kurtosis |
|---|---|---|---|---|---|
| 4 | 0.01 | -10.7296 | 56.4986 | -5.2367 | * |
|  | 0.1 | -0.8567 | 0.7824 | -3.8549 | * |
|  | 0.2 | 0.0774 | 0.6096 | 0.2118 | * |
|  | 0.3 | 0.8738 | 2.7446 | 18.0382 | * |
| 5 | 0.01 | -8.0673 | 21.2643 | -3.4321 | 44.2181 |
|  | 0.1 | -0.5021 | 0.3953 | -1.7868 | 17.6345 |
|  | 0.2 | 0.4480 | 0.6855 | 2.4384 | 56.5459 |
|  | 0.3 | 1.5732 | 6.8542 | 22.1028 | * |
| 8 | 0.01 | -4.6280 | 3.5337 | -1.9178 | 5.9542 |
|  | 0.1 | 0.1248 | 0.2153 | 0.7560 | 3.2951 |
|  | 0.2 | 1.6343 | 2.7528 | 6.3891 | 395.2955 |
|  | 0.3 | 5.3887 | 124.9174 | 13.2288 | * |
| 10 | 0.01 | -3.5986 | 1.6310 | -1.5609 | 15.6352 |
|  | 0.1 | 0.4431 | 0.2448 | 0.9704 | 8.1938 |
|  | 0.2 | 2.7883 | 8.3416 | 8.3283 | 802.9949 |
|  | 0.3 | 11.1914 | 840.3657 | 5.1306 | * |

*Does not exist

Skewness and kurtosis of a distribution can be measured by β 1 = μ 3/σ 3 and β 2 = μ 4/σ 4, respectively. However, the third and fourth moments of GC(α, β, θ) do not always exist (see Proposition 5). Alternatively, we can define measures of asymmetry and tail weight based on the quantile function. The Galton’s skewness S defined by Galton (1883) and the Moors’ kurtosis K defined by Moors (1988) are given by
$$ S=\frac{Q\left(6/8\right)-2Q\left(4/8\right)+Q\left(2/8\right)}{Q\left(6/8\right)-Q\left(2/8\right)}, $$
(32)
$$ K=\frac{Q\left(7/8\right)-Q\left(5/8\right)+Q\left(3/8\right)-Q\left(1/8\right)}{Q\left(6/8\right)-Q\left(2/8\right)}. $$
(33)
When the distribution is symmetric, S = 0, and when the distribution is right (or left) skewed, S > 0 (or S < 0). As K increases, the tail of the distribution becomes heavier. To investigate the effect of the two shape parameters \( \alpha \) and β on the GC(α, β, θ) distribution, Eqs. (32) and (33) are used to obtain the Galton’s skewness and Moors’ kurtosis, where the quantile function is given in Remark (ii). Figure 4 displays the Galton’s skewness and Moors’ kurtosis for the GC(α, β, 1). From this figure, it appears that the GC(α, β, θ) distribution has a wide range of skewness and kurtosis values; it can be left-skewed, right-skewed or symmetric.
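As a quick sanity check of (32) and (33) (an illustration added here, not from the paper), consider the special case GC(1, 1, 1), the standard Cauchy, whose quantile function is tan(π(p − 1/2)); symmetry forces S = 0, and the heavy tails give the exact Moors’ kurtosis K = 2:

```python
import math

def Q_cauchy(p):
    # Quantile of the standard Cauchy distribution, i.e. GC(1, 1, 1)
    return math.tan(math.pi * (p - 0.5))

def galton_S(Q):
    # Eq. (32)
    return (Q(6/8) - 2*Q(4/8) + Q(2/8)) / (Q(6/8) - Q(2/8))

def moors_K(Q):
    # Eq. (33)
    return (Q(7/8) - Q(5/8) + Q(3/8) - Q(1/8)) / (Q(6/8) - Q(2/8))

# Symmetry of the Cauchy gives S = 0; its heavy tails give K = 2,
# well above the Moors kurtosis of the normal distribution (about 1.23)
assert abs(galton_S(Q_cauchy)) < 1e-12
assert abs(moors_K(Q_cauchy) - 2.0) < 1e-12
```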
Fig. 4

Graphs of Galton’s skewness and Moors’ kurtosis for the distribution GC(α, β, 1)

4.2 Estimation of parameters by maximum likelihood method

Let X 1, X 2, …, X n be a random sample of size n drawn from the GC(α, β, θ). The log-likelihood function is given by
$$ \ell =-n \log \left(\pi \theta \Gamma \left(\alpha \right){\beta}^{\alpha}\right)-{\displaystyle \sum_{i=1}^n \log}\left(1+{\left({x}_i/\theta \right)}^2\right)+\left({\beta}^{-1}-1\right){\displaystyle \sum_{i=1}^n \log}\left({z}_i\right)+\left(\alpha -1\right){\displaystyle \sum_{i=1}^n \log}\left(- \log \left({z}_i\right)\right), $$
(34)
where z i  = 0.5 − π − 1 tan − 1(x i /θ).
The derivatives of (34) with respect to \( \alpha \), β and θ are given by
$$ \frac{\partial \ell }{\partial \alpha }=-n\psi \left(\alpha \right)-n \log \beta +{\displaystyle \sum_{i=1}^n \log}\left(- \log \left({z}_i\right)\right), $$
(35)
$$ \frac{\partial \ell }{\partial \beta }=-\frac{n\alpha }{\beta }-\frac{1}{\beta^2}{\displaystyle \sum_{i=1}^n \log}\left({z}_i\right), $$
(36)
$$ \frac{\partial \ell }{\partial \theta }=-\frac{n}{\theta }+{\displaystyle \sum_{i=1}^n\frac{2{x}_i^2}{\theta \left({\theta}^2+{x}_i^2\right)}}+{\displaystyle \sum_{i=1}^n\frac{x_i}{\pi \left({\theta}^2+{x}_i^2\right){z}_i}\left\{\left(\alpha -1\right){\left( \log \left({z}_i\right)\right)}^{-1}+1/\beta -1\right\}}, $$
(37)
where ψ(α) = ∂ log Γ(α)/∂α is the digamma function.

Therefore, the MLEs \( \widehat{\alpha}, \) \( \widehat{\beta} \) and \( \widehat{\theta} \) are obtained by setting Eqs. (35), (36) and (37) to zero and solving them simultaneously. The number of equations can be reduced to two by using Eq. (36) to get \( \widehat{\beta}=-{\displaystyle \sum_{i=1}^n \log \left({z}_i\right)}/\left(n\widehat{\alpha}\right). \) The initial value for the parameter θ can be taken as θ 0 = 1. From Remark (i), the random variable Y i  = − log[0.5 − π − 1 tan− 1(X i /θ 0)], i = 1, 2, …, n, follows a gamma distribution with parameters α and β. Therefore, by equating the sample mean and sample variance of Y i with the corresponding population mean and variance, the initial estimates for α and β are, respectively, \( {\alpha}_0={\overline{y}}^2/{s}_y^2 \) and \( {\beta}_0={s}_y^2/\overline{y}, \) where \( \overline{y} \) and \( {s}_y^2 \) are the sample mean and sample variance of y i , i = 1, …, n.
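The initial-value recipe above is easy to exercise. The following Python sketch (illustrative; the true values α = 2, β = 0.5, θ = 1 and the sample size are our choices) simulates GC data via Remark (i) and recovers the moment-based starting values α 0 and β 0 :

```python
import math
import random

random.seed(11)

# Simulate GC(alpha, beta, theta=1) data via Remark (i): X = cot(pi * e^{-Y}),
# Y ~ gamma(alpha, beta); the true values below are an illustrative choice
alpha, beta, n = 2.0, 0.5, 50_000
xs = [1.0 / math.tan(math.pi * math.exp(-random.gammavariate(alpha, beta)))
      for _ in range(n)]

# Moment-based starting values with theta_0 = 1:
# y_i = -log(0.5 - (1/pi) atan(x_i)) is gamma(alpha, beta) distributed
ys = [-math.log(0.5 - math.atan(x) / math.pi) for x in xs]
ybar = sum(ys) / n
s2 = sum((y - ybar) ** 2 for y in ys) / (n - 1)
alpha0, beta0 = ybar ** 2 / s2, s2 / ybar

assert abs(alpha0 - alpha) < 0.1 and abs(beta0 - beta) < 0.05
```

These starting values would then seed a numerical solver for Eqs. (35) to (37).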

4.3 Simulation study

A simulation study is conducted to evaluate the performance of the MLEs in terms of estimates and standard deviations for various parameter combinations and different sample sizes. We consider the values 0.5, 0.9, 1, 2 and 5 for the parameter α; 0.5, 1 and 3 for the parameter β; and 1 and 2 for the parameter θ. The simulation study is conducted for a total of six parameter combinations, and the process is repeated 200 times for each of the three sample sizes n = 50, 100 and 300. The ML estimates and the standard deviations are presented in Table 3. From this table, it appears that the ML estimates of α and θ tend to be overestimated. As expected, as the sample size increases, the bias and standard deviation of all the estimates decrease.
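The replications in such a study can be generated by inverse transform: since \( Y=- \log \left[0.5-{\pi}^{-1}{ \tan}^{-1}\left(X/\theta \right)\right] \) is gamma-distributed when X follows GC(α, β, θ), one can draw Y from the gamma distribution and invert. A minimal Python sketch (the function name `rgc` is ours):

```python
import numpy as np

def rgc(n, alpha, beta, theta, rng=None):
    """Draw n GC(alpha, beta, theta) variates via the gamma representation:
    Y ~ Gamma(alpha, scale=beta)  =>  X = theta * tan(pi * (0.5 - exp(-Y)))."""
    rng = rng if rng is not None else np.random.default_rng()
    y = rng.gamma(shape=alpha, scale=beta, size=n)
    return theta * np.tan(np.pi * (0.5 - np.exp(-y)))

# Sanity check: mapping the sample back through y = -log(0.5 - arctan(x/theta)/pi)
# must recover the Gamma(alpha, beta) moments, mean = alpha*beta, var = alpha*beta^2.
rng = np.random.default_rng(1)
x = rgc(100_000, alpha=2.0, beta=0.5, theta=1.0, rng=rng)
y = -np.log(0.5 - np.arctan(x / 1.0) / np.pi)
```

For alpha = 2, beta = 0.5, the back-transformed sample mean and variance should be close to 1.0 and 0.5, respectively.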
Table 3

Estimates and standard deviations for the parameters using MLE method

| n   | Actual α | Actual β | Actual θ | α̂     | β̂     | θ̂     | SD(α̂) | SD(β̂) | SD(θ̂) |
|-----|----------|----------|----------|--------|--------|--------|--------|--------|--------|
| 50  | 1        | 1        | 1        | 1.1605 | 0.9260 | 1.1455 | 0.0518 | 0.0460 | 0.0538 |
| 50  | 0.5      | 1        | 2        | 0.5377 | 0.9552 | 2.1399 | 0.0180 | 0.0426 | 0.0978 |
| 50  | 0.5      | 3        | 2        | 0.5351 | 2.8849 | 2.1952 | 0.0154 | 0.1207 | 0.1065 |
| 50  | 0.9      | 0.5      | 1        | 1.0229 | 0.4960 | 1.0579 | 0.0481 | 0.0222 | 0.0442 |
| 50  | 2        | 0.5      | 1        | 2.9571 | 0.4448 | 1.2255 | 0.4271 | 0.0282 | 0.0917 |
| 50  | 5        | 0.5      | 1        | 7.9697 | 0.4487 | 0.8862 | 0.9537 | 0.0183 | 0.0737 |
| 100 | 1        | 1        | 1        | 1.0717 | 0.9617 | 1.0499 | 0.0216 | 0.0230 | 0.0248 |
| 100 | 0.5      | 1        | 2        | 0.5182 | 1.0090 | 2.0843 | 0.0084 | 0.0229 | 0.0457 |
| 100 | 0.5      | 3        | 2        | 0.5165 | 2.9594 | 2.0505 | 0.0065 | 0.0591 | 0.0441 |
| 100 | 0.9      | 0.5      | 1        | 0.9375 | 0.4932 | 1.0309 | 0.0181 | 0.0105 | 0.0203 |
| 100 | 2        | 0.5      | 1        | 2.2258 | 0.4753 | 1.0866 | 0.0646 | 0.0131 | 0.0281 |
| 100 | 5        | 0.5      | 1        | 6.3436 | 0.4737 | 0.9243 | 0.4015 | 0.0089 | 0.0367 |
| 300 | 1        | 1        | 1        | 1.0244 | 0.9983 | 1.0270 | 0.0094 | 0.0114 | 0.0111 |
| 300 | 0.5      | 1        | 2        | 0.5076 | 0.9908 | 2.0320 | 0.0041 | 0.0102 | 0.0229 |
| 300 | 0.5      | 3        | 2        | 0.5113 | 2.9636 | 2.0718 | 0.0039 | 0.0315 | 0.0241 |
| 300 | 0.9      | 0.5      | 1        | 0.9242 | 0.4953 | 1.0159 | 0.0086 | 0.0053 | 0.0102 |
| 300 | 2        | 0.5      | 1        | 2.1286 | 0.4844 | 1.0369 | 0.0270 | 0.0071 | 0.0123 |
| 300 | 5        | 0.5      | 1        | 5.2815 | 0.4987 | 0.9562 | 0.1368 | 0.0040 | 0.0149 |

4.4 Applications

In this section, the GC(α, β, θ) distribution is fitted to two data sets. The first data set, from Bjerkedal (1960), represents the survival times in days of 72 guinea pigs infected with virulent tubercle bacilli. The first data set is

10, 33, 44, 56, 59, 72, 74, 77, 92, 93, 96, 100, 100, 102, 105, 107, 107, 108, 108, 108, 109, 112, 113, 115, 116, 120, 121, 122, 122, 124, 130, 134, 136, 139, 144, 146, 153, 159, 160, 163, 163, 168, 171, 172, 176, 183, 195, 196, 197, 202, 213, 215, 216, 222, 230, 231, 240, 245, 251, 253, 254, 255, 278, 293, 327, 342, 347, 361, 402, 432, 458, 555

The data are skewed to the right with skewness = 1.3134 and kurtosis = 3.8509.
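These summary values can be checked directly. The snippet below (our own check, not from the paper) computes the moment-based skewness \( {m}_3/{m}_2^{3/2} \) and kurtosis \( {m}_4/{m}_2^2 \) for the 72 survival times; note that the kurtosis obtained depends on the estimator used, so it need not match the reported value exactly:

```python
import numpy as np

# Survival times (days) of 72 guinea pigs, Bjerkedal (1960)
pigs = np.array([
    10, 33, 44, 56, 59, 72, 74, 77, 92, 93, 96, 100, 100, 102, 105, 107,
    107, 108, 108, 108, 109, 112, 113, 115, 116, 120, 121, 122, 122, 124,
    130, 134, 136, 139, 144, 146, 153, 159, 160, 163, 163, 168, 171, 172,
    176, 183, 195, 196, 197, 202, 213, 215, 216, 222, 230, 231, 240, 245,
    251, 253, 254, 255, 278, 293, 327, 342, 347, 361, 402, 432, 458, 555])

d = pigs - pigs.mean()
m2, m3, m4 = (d ** 2).mean(), (d ** 3).mean(), (d ** 4).mean()
skewness = m3 / m2 ** 1.5    # about 1.3: skewed to the right
kurtosis = m4 / m2 ** 2      # greater than 3: heavier-tailed than the normal
```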

The second data set, from Durbin and Koopman (2001), represents the measurements of the annual flow of the Nile River at Aswan from 1871 to 1970. The second data set is

1120, 1160, 963, 1210, 1160, 1160, 813, 1230, 1370, 1140, 995, 935, 1110, 994, 1020, 960, 1180, 799, 958, 1140, 1100, 1210, 1150, 1250, 1260, 1220, 1030, 1100, 774, 840, 874, 694, 940, 833, 701, 916, 692, 1020, 1050, 969, 831, 726, 456, 824, 702, 1120, 1100, 832, 764, 821, 768, 845, 864, 862, 698, 845, 744, 796, 1040, 759, 781, 865, 845, 944, 984, 897, 822, 1010, 771, 676, 649, 846, 812, 742, 801, 1040, 860, 874, 848, 890, 744, 749, 838, 1050, 918, 986, 797, 923, 975, 815, 1020, 906, 901, 1170, 912, 746, 919, 718, 714, 740

 

The data are approximately symmetric with skewness = 0.3175 and kurtosis = 2.6415.

We fitted the two data sets to the GC(α, β, θ) distribution and compared the results with the Cauchy distribution, the gamma-Pareto distribution proposed by Alzaatreh et al. (2012), and the beta-Cauchy distribution proposed by Alshawarbeh et al. (2013). The maximum likelihood estimates, the log-likelihood value, the AIC (Akaike Information Criterion), the Kolmogorov–Smirnov (K-S) test statistic, and the p-value for the K-S statistic for the distributions fitted to data sets 1 and 2 are reported in Tables 4 and 5, respectively.
Table 4

Parameter estimates for the survival time data

| Distribution | Parameter estimates (standard errors) | Log-likelihood | AIC | K-S | K-S p-value |
|---|---|---|---|---|---|
| Cauchy | ĉ = 139.3079 (9.4281), θ̂ = 48.1262 (7.6793) | −437.5967 | 879.1934 | 0.1416 | 0.1114 |
| Gamma-Pareto | α̂ = 6.030 (0.9770), ĉ = 0.4497 (0.0760), θ̂ = 10 | −465.4670 | 934.9340 | 0.2606 | 0.0001 |
| Beta-Cauchy | α̂ = 13.9274 (18.5335), β̂ = 4.5828 (3.6504), ĉ = 117.9055 (37.6269), θ̂ = 27.0884 (99.7681) | −424.4339 | 856.8679 | 0.0760 | 0.8005 |
| Gamma-Cauchy{exponential} | α̂ = 16.1591 (2.6666), β̂ = 0.1027 (0.0238), θ̂ = 110.1742 (35.4345) | −424.4423 | 854.8847 | 0.0752 | 0.8105 |

Standard errors are given in parentheses.

Table 5

Parameter estimates for the annual flow of the Nile River data

| Distribution | Parameter estimates (standard errors) | Log-likelihood | AIC | K-S | K-S p-value |
|---|---|---|---|---|---|
| Cauchy | ĉ = 879.3679 (17.3969), θ̂ = 103.8804 (13.44841) | −674.4637 | 1352.9270 | 0.1311 | 0.0642 |
| Gamma-Pareto | α̂ = 5.0437 (0.6902), ĉ = 0.1357 (0.0195), θ̂ = 456 | −696.7975 | 1397.5950 | 0.1705 | 0.0060 |
| Beta-Cauchy | α̂ = 50.9201 (66.0939), β̂ = 25.1275 (29.7478), ĉ = 712.2062 (445.6252), θ̂ = 482.3092 (361.7110) | −653.4892 | 1314.9780 | 0.0736 | 0.6515 |
| Gamma-Cauchy{exponential} | α̂ = 322.5715 (6.4901), β̂ = 0.0103 (0.0003), θ̂ = 103.5797 (76.1343) | −654.3825 | 1314.7650 | 0.0637 | 0.8120 |

Standard errors are given in parentheses.

The results in Tables 4 and 5 show that the GC(α, β, θ) and beta-Cauchy distributions provide an adequate fit to the survival time data, while the GC(α, β, θ) distribution provides the best fit (based on the K-S p-value) to the annual flow of the Nile River data. The fact that the GC(α, β, θ) distribution has only three parameters, compared with four for the beta-Cauchy distribution, makes GC(α, β, θ) a natural choice for fitting these two data sets. A closer look at the parameter estimates for the beta-Cauchy distribution shows that the estimates of α, β and θ are not statistically significant for either example, indicating that the beta-Cauchy distribution is over-parameterized for these data sets. This supports using the three-parameter GC(α, β, θ) distribution to fit the two data sets. Figure 5 displays the histograms and the fitted density functions for the two data sets, which support the results in Tables 4 and 5.
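As a sketch of how such a K-S comparison can be carried out (our own code, not the authors'; the name `gc_cdf` is an assumption), note that the GC CDF reduces to a gamma CDF after the transform \( y=- \log z \), so the K-S statistic is available from `scipy.stats.kstest` and the AIC from 2k − 2ℓ with k = 3 fitted parameters:

```python
import numpy as np
from scipy import stats

def gc_cdf(x, alpha, beta, theta):
    """CDF of GC(alpha, beta, theta): X <= x iff Y <= -log z(x), where
    z(x) = 0.5 - arctan(x/theta)/pi and Y ~ Gamma(alpha, scale=beta)."""
    y = -np.log(0.5 - np.arctan(np.asarray(x, dtype=float) / theta) / np.pi)
    return stats.gamma.cdf(y, a=alpha, scale=beta)

# K-S check on simulated GC(2, 0.5, 1) data with the true parameters plugged in
rng = np.random.default_rng(7)
ysim = rng.gamma(shape=2.0, scale=0.5, size=500)
xsim = np.tan(np.pi * (0.5 - np.exp(-ysim)))
ks = stats.kstest(xsim, lambda t: gc_cdf(t, 2.0, 0.5, 1.0))
# for a fitted model with log-likelihood loglik:  aic = 2 * 3 - 2 * loglik
```

In practice the fitted parameter values (e.g., those in Tables 4 and 5) would replace the true values in the `kstest` call.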
Fig. 5

Histograms and the fitted distributions for the two data sets

5. Concluding remarks

A family of generalized Cauchy distributions, the T-Cauchy{Y} family, is proposed using the T-R{Y} framework. Several properties of the T-Cauchy{Y} family are studied, including moments and Shannon’s entropy. Some members of the T-Cauchy{Y} family are presented. One member, the gamma-Cauchy{exponential} distribution, is studied in detail. This distribution is interesting as it contains the exponentiated Cauchy distribution and the distributions of record values of the Cauchy distribution as special cases. Various properties of the gamma-Cauchy{exponential} distribution are studied, including the mode, moments and Shannon’s entropy. Unlike the Cauchy distribution, the gamma-Cauchy{exponential} distribution can be right-skewed or left-skewed. Also, the moments of the gamma-Cauchy{exponential} distribution exist under certain restrictions on the parameters; in particular, the r-th moment of the gamma-Cauchy{exponential} distribution exists if and only if \( \alpha, {\beta}^{-1}>r \), and this is not the case for the Cauchy distribution. The flexibility of the gamma-Cauchy{exponential} distribution and the existence of its moments in some cases make this distribution an alternative to the Cauchy distribution in situations where the Cauchy distribution may not provide an adequate fit.

Declarations

Acknowledgement

The first author gratefully acknowledges the support received from the Social Fund Policy Grant at Nazarbayev University.

Authors’ contributions

The authors, AA, CL, FF and IG with the consultation of each other carried out this work and drafted the manuscript together. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Mathematics, Nazarbayev University
(2)
Department of Mathematics, Central Michigan University
(3)
Department of Mathematics and Statistics, University of North Carolina Wilmington

References

  1. Abramowitz, M, Stegun, IA: Handbook of mathematical functions with formulas, graphs and mathematical tables, vol. 55. Dover Publications, Inc., New York (1964)
  2. Aljarrah, MA, Lee, C, Famoye, F: On generating T-X family of distributions using quantile functions. J. Stat. Distrib. Appl. 1, 1–17 (2014)
  3. Almheidat, M, Famoye, F, Lee, C: Some generalized families of Weibull distribution: properties and applications. Int. J. Stat. Probab. 4, 18–35 (2015)
  4. Alshawarbeh, E, Lee, C, Famoye, F: The beta-Cauchy distribution. J. Probab. Stat. Sci. 10(1), 41–57 (2012)
  5. Alshawarbeh, E, Famoye, F, Lee, C: Beta-Cauchy distribution: some properties and applications. J. Stat. Theory Appl. 12(4), 378–391 (2013)
  6. Alzaatreh, A, Famoye, F, Lee, C: Gamma-Pareto distribution and its applications. J. Mod. Appl. Stat. Methods 11(1), 78–94 (2012)
  7. Alzaatreh, A, Lee, C, Famoye, F: A new method for generating families of continuous distributions. Metron 71(1), 63–79 (2013)
  8. Alzaatreh, A, Lee, C, Famoye, F: T-normal family of distributions: a new approach to generalize the normal distribution. J. Stat. Distrib. Appl. 1, 1–16 (2014)
  9. Alzaatreh, A, Lee, C, Famoye, F: Family of generalized gamma distributions: properties and applications. Hacettepe J. Math. Stat., to appear (2015)
  10. Bjerkedal, T: Acquisition of resistance in guinea pigs infected with different doses of virulent tubercle bacilli. Am. J. Epidemiol. 72, 130–148 (1960)
  11. Durbin, J, Koopman, SJ: Time series analysis by state space methods. Oxford University Press, Oxford (2001)
  12. Eugene, N, Lee, C, Famoye, F: Beta-normal distribution and its applications. Commun. Stat. Theory Methods 31(4), 497–512 (2002)
  13. Galton, F: Enquiries into human faculty and its development. Macmillan, London (1883)
  14. Gradshteyn, IS, Ryzhik, IM: Tables of integrals, series and products, 7th edn. Elsevier, Inc., London (2007)
  15. Johnson, NL, Kotz, S, Balakrishnan, N: Continuous Univariate Distributions, vol. 1, 2nd edn. Wiley, New York (1994)
  16. Moors, JJA: A quantile alternative for kurtosis. Statistician 37, 25–32 (1988)
  17. Sarabia, JM, Castillo, E: About a class of max-stable families with applications to income distributions. Metron 63, 505–527 (2005)
  18. Shannon, CE: A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–432 (1948)
  19. Stigler, SM: Letter to the editor: normal orthant probabilities. Am. Stat. 43(4), 291 (1989)

Copyright

© The Author(s). 2016