
A new discrete pareto type (IV) model: theory, properties and applications

Abstract

Discrete analogues of continuous distributions (especially in the univariate domain) are not new in the literature. To the best of the author's knowledge, the work of discretizing continuous distributions began with the paper by Nakagawa and Osaki (1975). Since then, several authors have proposed discrete analogues of known continuous models. In this paper, we propose and study a discrete analogue of the continuous Pareto (type IV) distribution, namely the discrete Pareto (type IV) distribution (DPIV, henceforth, in short), which has three parameters. Its probability mass function can take approximately symmetric, right-skewed and left-skewed shapes, and its hazard rate function possesses decreasing and upside-down bathtub shapes. Also, the proposed discrete distribution can exhibit under-, over- or equi-dispersion. The flexibility of the new discrete model is illustrated by means of three applications to real life data sets arising from various domains affecting our life.

Introduction

Discrete distributions are useful when a count phenomenon occurs, and discrete models are as important as continuous ones. Nowadays, both types of models can be used in fascinating ways to explore real life data sets available in different fields of study. One convincing way is the compounding technique, where discrete and continuous models are mixed together for better exploration of the phenomena under study. The other interesting technique began with the work of Nakagawa and Osaki (1975), who first introduced the concept of discretizing a continuous model into a discrete one. There are many situations where it is inappropriate to describe the lifetime of devices on a continuous scale. For example, a piece of equipment may operate in cycles, and the experimenter observes the number of cycles successfully completed prior to failure. In such a case, the time to failure is more appropriately represented by the number of times the equipment is used before it fails, which is a discrete random variable. Salvia and Bollinger (1982), and Padgett and Spurrier (1985) discussed discrete hazard rate functions (h.r.f.) and failure rate models, giving illustrations of such situations. Xie et al. (2002) also defined another discrete h.r.f. \(h(k) = \log [S(k - 1)/S(k)]\) for a random variable K, where S(·) is the reliability function, which gives results similar to those for continuous h.r.f.’s. Bracquemond and Gaudoin (2003) presented a survey on discrete lifetime distributions and suggested two ways by which discrete distributions can be derived from continuous ones: (i) consider a characteristic property of a continuous distribution and build the analogous property in discrete time, and (ii) consider the discrete lifetime as the integer part of the continuous lifetime. Lai (2013) also appreciated the discretization of a continuous model but stressed that a continuous lifetime random variable may be characterized by its cumulative distribution function (c.d.f.), probability density function (p.d.f.) or h.r.f., which are equivalent in the sense that any one uniquely determines the others.

In reliability theory, classes of lifetime models are defined in terms of their survival function (s.f.) and other reliability characteristics: for example, the increasing (decreasing) failure rate IFR (DFR) class, the increasing (decreasing) failure rate average IFRA (DFRA) class, the new better (worse) than used NBU (NWU) class, the new better (worse) than used in expectation NBUE (NWUE) class and the increasing (decreasing) mean residual lifetime IMRL (DMRL) class (see Kemp 2004 and the references cited therein). The discretization of a continuous lifetime distribution retains the same functional form of the survival function; therefore, many reliability characteristics and properties remain unchanged. Thus, discretization of a continuous lifetime model is an interesting and simple approach to derive a discrete lifetime model corresponding to the continuous one.

In the last two or three decades, there has been a growing interest in introducing discretized continuous distributions. For more details the reader is referred to Chakraborty (2015) and the references cited therein. In the literature, there are several different methods of discretizing a continuous probability model. We adopt the approach due to Nakagawa and Osaki (1975) and Roy (2003), a rather general approach for discretizing continuous models, which is formalized in the following proposition.

Proposition 1

Given a continuous random variable X with survival function SX(x), a discrete random variable Y can be defined as \(Y=\lfloor X\rfloor\), where \(\lfloor {x}\rfloor =\max \{m\in \mathbb {Z}\mid m\le x\}\) is the floor function. The probability mass function (p.m.f.) \(P \left (Y = y\right)\) of Y is then given by

$$\begin{array}{@{}rcl@{}} P \left(Y = y\right) &=&P\left(y\leq X< y+1\right)\notag\\ &=&P\left(X\geq y\right)-P\left(X\geq y+1\right)\notag\\ &=& S_{X}(y)-S_{X}(y+1), \end{array} $$
(1)

where \(y\in \mathbb {Z},\) and \(\mathbb {Z}\) is the set of integers. Consequently, a continuous failure-time model can be used to generate a discrete time model by introducing a grouping on the time axis. To put it simply, if X is a continuous random variable, then the p.m.f. of its integer part, that is \(T=\lfloor X\rfloor\), can be viewed as a discrete concentration of the p.d.f. of X. Such discretized distributions retain the same functional form of the survival function as that of the continuous ones, and the reliability characteristics also do not change.
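As a quick illustration of Eq. (1), the following minimal sketch (ours, in Python; the function names are our own) builds a discrete p.m.f. from an arbitrary survival function. Flooring an exponential random variable recovers the geometric distribution, a well-known special case.

```python
import numpy as np

def discretize_pmf(survival, y):
    """P(Y = y) = S(y) - S(y + 1) for Y = floor(X), per Eq. (1)."""
    return survival(y) - survival(y + 1)

# Example: the floor of an Exponential(rate) variable is Geometric(1 - exp(-rate)).
rate = 0.7
S_exp = lambda x: np.exp(-rate * x)             # survival function of Exp(rate)
y = np.arange(5)
pmf = discretize_pmf(S_exp, y)
geom = (1 - np.exp(-rate)) * np.exp(-rate * y)  # geometric p.m.f. for comparison
print(np.allclose(pmf, geom))                   # True
```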

The p.d.f. of a continuous Pareto (Type IV) distribution (with location parameter 0, scale parameter σ, inequality parameter γ, and shape parameter α, also known as the tail index; for details, see Arnold (1983)) is given by

$$ f(x)=\frac{\alpha}{\gamma\sigma}\left(\frac{x}{\sigma}\right)^{\frac{1}{\gamma}-1}\left[1+\left(\frac{x}{\sigma}\right)^{\frac{1}{\gamma}}\right]^{-\alpha-1}, \quad x\geq 0. $$

The associated c.d.f. and the survival function are, respectively,

$$ F(x)=1-\left(1+\left(\frac{x}{\sigma}\right)^{\frac{1}{\gamma}}\right)^{-\alpha}, $$
$$ S(x)=\left(1+\left(\frac{x}{\sigma}\right)^{\frac{1}{\gamma}}\right)^{-\alpha}. $$
(2)

Using Eq. (1), the discrete Pareto (IV) (DPIV, henceforth, in short) distribution can be defined as

$$ g(x)=\theta^{\log\left(1+\left(\frac{x}{\sigma}\right)^{\frac{1}{\gamma}}\right)}-\theta^{\log\left(1+\left(\frac{x+1}{\sigma}\right)^{\frac{1}{\gamma}}\right)}; $$
(3)

\(x\in \mathbb {N^{*}};\) where \(\mathbb {N^{*}}=\mathbb {N}\cup \{0\}, \theta =\exp (-\alpha),\) and 0<θ<1, σ>0, γ>0. A random variable X with the p.m.f. given in Eq. (3) will be said to have the DPIV distribution.

From (3), the c.d.f. and the survival function of a random variable that follows the DPIV distribution are given as follows

$$ G(x) = 1-\theta^{\log\left(1+\left(\frac{x+1}{\sigma}\right)^{\frac{1}{\gamma}}\right)},\quad x=0,1,2,\cdots, $$
(4)
$$ S(x) =\theta^{\log\left(1+\left(\frac{x+1}{\sigma}\right)^{\frac{1}{\gamma}}\right)},\quad x=0,1,2,\cdots. $$
(5)
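The p.m.f., c.d.f. and survival function in Eqs. (3)-(5) are straightforward to compute. Below is a minimal sketch (ours; parameter values are arbitrary); as a sanity check, the p.m.f. should sum to one over the support, slowly because of the heavy right tail.

```python
import numpy as np

def dpiv_sf(x, theta, gamma, sigma):
    """S(x) = P(X > x) = theta**log(1 + ((x+1)/sigma)**(1/gamma)), Eq. (5)."""
    return theta ** np.log1p(((x + 1) / sigma) ** (1.0 / gamma))

def dpiv_pmf(x, theta, gamma, sigma):
    """g(x) = S(x-1) - S(x), written as in Eq. (3)."""
    return (theta ** np.log1p((x / sigma) ** (1.0 / gamma))
            - theta ** np.log1p(((x + 1) / sigma) ** (1.0 / gamma)))

def dpiv_cdf(x, theta, gamma, sigma):
    """G(x) = 1 - S(x), Eq. (4)."""
    return 1.0 - dpiv_sf(x, theta, gamma, sigma)

x = np.arange(200_000)
print(dpiv_pmf(x, 0.4, 0.7, 2.5).sum())  # close to 1; the remainder is tail mass
```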

The next result discusses the limiting behavior of the DPIV distribution corresponding to various parameter choices at the boundary.

Result 1

  • \({\lim }_{x\rightarrow \infty } g(x)=0.\)

  • \({\lim }_{\sigma \rightarrow 0,\infty }g(x)=0.\)

  • \({\lim }_{\gamma \rightarrow 0,\infty }g(x)=\theta ^{\log \left (\frac {x}{\sigma }+1\right)+1}-\theta ^{\log \left (\frac {x+1}{\sigma }+1\right)+1}.\)

  • \({\lim }_{\theta \rightarrow 0,1}g(x)=0.\)

Some representative plots of the DPIV p.m.f. are provided in Fig. 1, for varying parameter values.

Fig. 1: Plots of the DPIV p.m.f. for various parameter values

From the plots in Fig. 1, it appears that the distribution is right-skewed. To apply it to a real life problem, we would want the data to possess this right-skewed characteristic for a better model fit. It is also important to note that the Pareto (Type IV) distribution is highly sensitive to changes in the parameter α, as this is the shape parameter (also known as the tail index). Furthermore, Fig. 1 shows that for larger values of the parameters α and γ the mode moves to the right, indicating that the proposed distribution is quite versatile in nature, while smaller values of α appear to have a significant effect on the respective probabilities and, of course, on the values of the moments. We found that the mass points were more evenly distributed when α≤2 and γ=0.4. As we will see in the “Estimation” section, this sensitivity becomes an issue in estimating the parameters under the maximum likelihood method.
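The claim in the Abstract that the DPIV can be under-, over- or equi-dispersed can be inspected numerically. The sketch below (ours; the parameter choices are arbitrary illustrations) approximates the variance-to-mean ratio by truncated sums, which is justified when the second moment exists (see Theorem 2 below).

```python
import numpy as np

def dpiv_pmf(x, theta, gamma, sigma):
    return (theta ** np.log1p((x / sigma) ** (1.0 / gamma))
            - theta ** np.log1p(((x + 1) / sigma) ** (1.0 / gamma)))

def dispersion_index(theta, gamma, sigma, upper=10**6):
    # Truncated-sum moments; valid when exp(-2 * gamma) > theta (Theorem 2, r = 2).
    x = np.arange(upper, dtype=float)
    p = dpiv_pmf(x, theta, gamma, sigma)
    m1 = (x * p).sum()
    m2 = (x ** 2 * p).sum()
    return (m2 - m1 ** 2) / m1   # variance-to-mean ratio; can fall on either side of 1

for theta, gamma, sigma in [(0.1, 0.5, 2.0), (0.2, 0.3, 0.5), (0.05, 0.2, 3.0)]:
    print(theta, gamma, sigma, round(dispersion_index(theta, gamma, sigma), 3))
```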

The rest of the paper is organized in the following way. “Structural properties” presents some important structural properties of the DPIV distribution. Maximum likelihood estimation for the DPIV distribution is discussed in detail, with simulation studies, in “Estimation”. For illustrative purposes, three different data sets from various real life scenarios are re-analyzed in “Application” to show the applicability of the proposed DPIV distribution. Finally, we conclude the paper with some final remarks in the “Concluding remarks” section.

Structural properties

In this section, we discuss some important structural properties of the DPIV distribution. First, we have the following lemma.

Lemma 1

If a random variable Y follows the continuous Pareto (Type IV) distribution with parameters α,γ,σ, then \(X=\lfloor Y\rfloor\) follows the DPIV \(\left (\theta, \gamma, \sigma \right)\) distribution, where \(\theta=\exp(-\alpha)\).

Proof

Follows immediately from (3). □
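Lemma 1 also provides a convenient sampler: invert the continuous Pareto (IV) c.d.f. in Eq. (2) and take the floor of the draws. The following minimal sketch (ours, not code from the paper; the seed and parameter values are arbitrary) checks the empirical mass at a few points against Eq. (3).

```python
import numpy as np

rng = np.random.default_rng(2020)

def rdpiv(size, theta, gamma, sigma):
    """Sample DPIV by flooring continuous Pareto(IV) draws (Lemma 1).

    Inverse c.d.f. of the continuous Pareto(IV) in Eq. (2):
    x = sigma * ((1 - u)**(-1/alpha) - 1)**gamma, with alpha = -log(theta).
    """
    alpha = -np.log(theta)
    u = rng.uniform(size=size)
    y = sigma * ((1.0 - u) ** (-1.0 / alpha) - 1.0) ** gamma
    return np.floor(y).astype(int)

x = rdpiv(100_000, theta=0.4, gamma=0.7, sigma=2.5)
# Empirical vs. theoretical mass at 0, 1, 2 as a sanity check:
for k in range(3):
    emp = (x == k).mean()
    th = (0.4 ** np.log1p((k / 2.5) ** (1 / 0.7))
          - 0.4 ** np.log1p(((k + 1) / 2.5) ** (1 / 0.7)))
    print(k, round(emp, 4), round(th, 4))
```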

Stochastic ordering is an integral tool for judging the comparative behavior of random variables. Many stochastic orders exist and have various applications. Theorem 1 and Corollary 1 below give some results on the stochastic orderings of the DPIV. The orders considered here are the stochastic order ≤st and the expectation order ≤E.

Theorem 1

The DPIV \(\left (\theta, \gamma, \sigma \right)\) has the following properties.

  • Suppose \(X_{1}\sim \text {DPIV} \left (\theta, \gamma, \sigma _{1}\right)\) and \(X_{2}\sim \text {DPIV} \left (\theta, \gamma, \sigma _{2}\right)\). If σ1<σ2, then X1stX2.

  • Suppose \(X_{1}\sim \text {DPIV} \left (\theta, \gamma _{1}, \sigma \right)\) and \(X_{2}\sim \text {DPIV} \left (\theta, \gamma _{2}, \sigma \right)\). If γ1>γ2, then X2stX1.

  • Suppose \(X_{1}\sim \text {DPIV} \left (\theta _{1}, \gamma, \sigma \right)\) and \(X_{2}\sim \text {DPIV} \left (\theta _{2}, \gamma, \sigma \right)\). If θ2>θ1, then X1stX2.

Proof

Follows immediately from the c.d.f. of the DPIV \(\left (\theta, \gamma, \sigma \right)\) distribution. □

We describe the expectation ordering in the following corollary, which follows from Theorem 1.

Corollary 1

  • Suppose \(X_{1}\sim \text {DPIV} \left (\theta, \gamma, \sigma _{1}\right)\) and \(X_{2}\sim \text {DPIV} \left (\theta, \gamma, \sigma _{2}\right)\). If σ1<σ2, then X1EX2.

  • Suppose \(X_{1}\sim \text {DPIV} \left (\theta, \gamma _{1}, \sigma \right)\) and \(X_{2}\sim \text {DPIV} \left (\theta, \gamma _{2}, \sigma \right)\). If γ1>γ2, then X2EX1.

  • Suppose \(X_{1}\sim \text {DPIV} \left (\theta _{1}, \gamma, \sigma \right)\) and \(X_{2}\sim \text {DPIV} \left (\theta _{2}, \gamma, \sigma \right)\). If θ2>θ1, then X1EX2.

Notice that for the DPIV \(\left (\theta, \gamma, \sigma \right)\) distribution

$$\begin{array}{*{20}l} &g^{2}(x)-g(x-1)\times g(x+1)\notag\\ &=\theta^{2\log\left(1+\left(\frac{x}{\sigma}\right)^{\frac{1}{\gamma}}\right)}+\theta^{2\log\left(1+\left(\frac{x+1}{\sigma}\right)^{\frac{1}{\gamma}}\right)} -\theta^{\log\left(1+\left(\frac{x}{\sigma}\right)^{\frac{1}{\gamma}}\right)+\log\left(1+\left(\frac{x+1}{\sigma}\right)^{\frac{1}{\gamma}}\right)}\notag\\ &\quad-\theta^{\log\left(1+\left(\frac{x-1}{\sigma}\right)^{\frac{1}{\gamma}}\right)+\log\left(1+\left(\frac{x+1}{\sigma}\right)^{\frac{1}{\gamma}}\right)} +\theta^{\log\left(1+\left(\frac{x-1}{\sigma}\right)^{\frac{1}{\gamma}}\right)+\log\left(1+\left(\frac{x+2}{\sigma}\right)^{\frac{1}{\gamma}}\right)}\notag\\ &\quad-\theta^{\log\left(1+\left(\frac{x}{\sigma}\right)^{\frac{1}{\gamma}}\right)+\log\left(1+\left(\frac{x+2}{\sigma}\right)^{\frac{1}{\gamma}}\right)}. \end{array} $$
(6)

For θ>0, σ>0, γ>0, the distribution is infinitely divisible when the expression in Eq. (6) is negative and g(0)≠0, g(1)≠0; see Warde and Katti (1971) for details. In this case the distribution has its mode at zero. Infinitely divisible distributions play an important role in many areas of statistics, for example in stochastic processes and in actuarial statistics. When a distribution G is infinitely divisible, then for any integer y≥2 there exists a distribution Gy such that G is the y-fold convolution of Gy, namely, \(G = G_{y}^{*y}.\) Also, when the distribution is infinitely divisible, an upper bound for the variance can be obtained for θ>0, σ>0, γ>0 (see Johnson and Kotz (1982), page 75), which is given by

$$Var(X)\leq \frac{g(1)}{g(0)}.$$
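The sign of Eq. (6) is easy to inspect numerically. The following sketch (ours; the parameter values are arbitrary) evaluates g²(x) − g(x−1)g(x+1) over a grid; negative values indicate log-convexity at x, which is the condition underlying the Warde and Katti (1971) criterion, and the sign pattern varies with the parameters.

```python
import numpy as np

def dpiv_pmf(x, theta, gamma, sigma):
    return (theta ** np.log1p((x / sigma) ** (1.0 / gamma))
            - theta ** np.log1p(((x + 1) / sigma) ** (1.0 / gamma)))

def eq6(x, theta, gamma, sigma):
    """g(x)^2 - g(x-1) * g(x+1), as in Eq. (6)."""
    g = lambda t: dpiv_pmf(np.asarray(t, dtype=float), theta, gamma, sigma)
    return g(x) ** 2 - g(x - 1) * g(x + 1)

x = np.arange(1, 20)
print(eq6(x, theta=0.1, gamma=0.5, sigma=2.0) <= 0)  # True where Eq. (6) is non-positive
```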

The following results hold for the p.m.f. of the DPIV distribution in Eq. (3):

  • For all k=0,1,⋯, and for any m≥1,

    $$\binom{k+m}{m}g(k+m)g(0)\geq g(k)g(m), $$

    for further details, see Steutel and Van Harn (2004), page 51, Proposition 8.4.

  • For all \(x = 0,1,\cdots, g(x) \leq \exp (-1).\) See Steutel and Van Harn (2004), page 56, Proposition 9.2.

  • The distribution is strictly log-concave and strongly unimodal; see Theorem 3 in Keilson and Gerber (1971).

  • The cumulants of an infinitely divisible distribution on the set of non-negative integers (as far as they exist) are non-negative; see Steutel and Van Harn (2004), page 47, Corollary 7.2. This implies that the skewness of the new distribution is positive, since the third cumulant equals the third central moment.

Increasing and decreasing failure rate

The purpose of this section is to find a relationship between the parameters of this model in order to study the failure rates. From Eqs. (3) and (5), the failure rate r(x) is given by \(r(x)=\frac {P(X=x)}{P(X\geq x)}=1-\theta ^{\phi (x)},\) where \(\phi (x)=\log \left (\frac {1+\left (\frac {1+x}{\sigma }\right)^{\frac {1}{\gamma }}}{1+\left (\frac {x}{\sigma }\right)^{\frac {1}{\gamma }}}\right).\) Next, on setting r(1)=r(2), we get \(\sigma =\left [\frac {\left (4-2^{1+\frac {1}{\gamma }}\right)}{2^{\frac {2}{\gamma }}-3^{\frac {1}{\gamma }}}\right ]^{-\gamma }.\) If γ is an integer, the failure rate is indeterminate; therefore, this parametric relationship yields information about the failure rate if γ<2. Note that r(x) is decreasing if θϕ(x) is increasing, and since 0<θ<1, θϕ(x) is increasing if ϕ(x) is decreasing. Furthermore, we observe the following (a numerical sketch follows these two cases):

  • If \(\frac {1}{\gamma }<1,\) then ϕ(x) is decreasing, so θϕ(x) is increasing and r(x) is decreasing. This shows that the distribution is DFR in such a scenario.

  • If \(\frac {1}{\gamma }>1,\) then ϕ(x) is increasing for small x, so θϕ(x) is decreasing and r(x) is increasing there; where ϕ(x) eventually turns downward, r(x) decreases again, producing the upside-down bathtub shape noted in the Abstract.
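A minimal numerical sketch of these two regimes (ours; the parameter values are arbitrary choices, not from the paper):

```python
import numpy as np

def dpiv_hazard(x, theta, gamma, sigma):
    """r(x) = 1 - theta**phi(x) with phi as defined above."""
    num = np.log1p(((x + 1) / sigma) ** (1.0 / gamma))
    den = np.log1p((x / sigma) ** (1.0 / gamma))
    return 1.0 - theta ** (num - den)

x = np.arange(25)
print(np.round(dpiv_hazard(x, 0.4, 2.0, 2.5), 3))  # 1/gamma < 1: decreasing (DFR)
print(np.round(dpiv_hazard(x, 0.4, 0.4, 2.5), 3))  # 1/gamma > 1: rises, then may fall
```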

According to Kemp (2004, page 3074), the following implication chain holds for discrete distributions and applies to the DPIV distribution in Eq. (3):

IFR/DFR ⇒ IFRA/DFRA ⇒ NBU/NWU ⇒ NBUE/NWUE ⇒ DMRL/IMRL.

Moments and generating functions

The rth moment of a random variable X with the p.m.f. in Eq. (3) can be bounded, for any \(r\in \mathbb {Z}^{+}\), as follows:

$$\begin{array}{*{20}l} E\left(X^{r}\right)&=\sum\limits_{x=0}^{\infty}x^{r}p(x)\\ &=\sum\limits_{x=1}^{\infty}\left(x^{r}-(x-1)^{r}\right)S(x)\\ &\leq r\sum\limits_{x=1}^{\infty}x^{r-1}\theta^{\log\left(1+\left(\frac{x+1}{\sigma}\right)^{\frac{1}{\gamma}}\right)}\\ &\leq r \sum\limits_{x=1}^{\infty}x^{r-1}\left(\frac{x}{\sigma}\right)^{\frac{-\alpha}{\gamma}},\quad \text{using} \quad\theta=\exp(-\alpha)\\ & \leq r \sum\limits_{x=1}^{\infty}\sigma^{\frac{\alpha}{\gamma}}\left(\frac{1}{x^{1-r+\frac{\alpha}{\gamma}}}\right)\\ &=r\sigma^{\frac{\alpha}{\gamma}}\sum\limits_{x=1}^{\infty}\left(\frac{1}{x^{1-r+\frac{\alpha}{\gamma}}}\right). \end{array} $$

Then, \(E\left (X^{r}\right)\) will be convergent if \(\frac {\alpha }{\gamma }>r,\) i.e., \(\exp (-\gamma \times r)>\theta,\) where \(\theta =\exp (-\alpha)\). Consequently, we can write the following theorem:

Theorem 2

\(E\left (X^{r}\right)\) exists if and only if \(\exp (-\gamma \times r)>\theta.\)

Proof

Immediately follows from the previous discussion. □
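A quick numerical check of Theorem 2 (ours; the parameter values are arbitrary): truncated partial sums of E(X) stabilize when exp(−γ) > θ and keep growing otherwise.

```python
import numpy as np

def dpiv_pmf(x, theta, gamma, sigma):
    return (theta ** np.log1p((x / sigma) ** (1.0 / gamma))
            - theta ** np.log1p(((x + 1) / sigma) ** (1.0 / gamma)))

def partial_moment(r, theta, gamma, sigma, upper):
    x = np.arange(upper, dtype=float)
    return (x ** r * dpiv_pmf(x, theta, gamma, sigma)).sum()

# Theorem 2 with r = 1: E(X) exists iff exp(-gamma) > theta; here exp(-0.5) ~ 0.607.
for theta, gamma in [(0.2, 0.5), (0.8, 0.5)]:
    sums = [partial_moment(1, theta, gamma, 2.0, n) for n in (10**3, 10**4, 10**5)]
    print(theta, gamma, [round(s, 2) for s in sums])  # stabilizes only in the first case
```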

Probability generating function and factorial moments

The relationship between the probabilities and the associated factorial moments is given by (see Johnson et al. (2005), page 59)

$$P\left(X=x\right) =\sum\limits_{r\geq 0}(-1)^{r} \frac{\mu'_{[x+r]}}{x!r!}, $$

which is due to (Frechet 1940; 1943). Again, one may write

$$\sum\limits_{i\geq x}P\left(X=i\right)=\sum\limits_{j\geq x}(-1)^{x+j}\binom{j-1}{x-1}\frac{\mu'_{[j]}}{j!}, $$

which is due to (Laurent 1965).

Theorem 3

Characterization via minimization

Let \(X_{i}, i=1,2,\ldots,n,\) be non-negative independent and identically distributed (i.i.d.) integer-valued random variables with \(X_{(1)}=\min _{1\leq i\leq n}X_{i}\). Then, \(X_{(1)}\sim \text {DPIV}\left (\theta ^{n}, \gamma, \sigma \right)\) if and only if \(X_{i}\sim\text{DPIV}(\theta,\gamma,\sigma)\).

Proof

Sufficiency part: Let \(X_{i}\sim \text {DPIV}\left (\theta,\gamma,\sigma \right)\). Then \(S(x)=\theta ^{\log \left (1+\left (\frac {x+1}{\sigma }\right)^{\frac {1}{\gamma }}\right)},\ x=0,1,2,\ldots\). For \(x_{(1)}=0,1,2,\ldots\):

$$\begin{array}{*{20}l} S\left(x_{(1)}\right)&=P\left(X_{(1)}> x_{(1)}\right)\\ &=\left\{P\left(X_{1}> x_{(1)}\right)\right\}^{n}\\ &=\theta^{n\log\left(1+\left(\frac{1+x_{(1)}}{\sigma}\right)^{\frac{1}{\gamma}}\right)}. \end{array} $$

Hence \(X_{(1)}\sim \text {DPIV}\left (\theta ^{n},\gamma, \sigma \right).\)

Necessary part: Let \(S\left (x_{(1)}\right)=\theta ^{n\log \left (1+\left (\frac {1+x_{(1)}}{\sigma }\right)^{\frac {1}{\gamma }}\right)},\ x_{(1)}=0,1,2,\ldots\).

We know that

$$\begin{array}{*{20}l} S(x)&=P\left(X_{1}> x\right)\notag\\ &=\left(P\left(X_{(1)}> x\right)\right)^{1/n}\\ &=\theta^{\log\left(1+\left(\frac{1+x}{\sigma}\right)^{\frac{1}{\gamma}}\right)}. \end{array} $$

Hence the proof. □
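A simulation sanity check of Theorem 3 (our own sketch; the sample sizes and parameter values are arbitrary): the empirical survival function of the minimum of n i.i.d. DPIV(θ, γ, σ) draws should match that of direct DPIV(θⁿ, γ, σ) draws.

```python
import numpy as np

rng = np.random.default_rng(7)

def rdpiv(size, theta, gamma, sigma):
    # Floor of continuous Pareto(IV) draws (Lemma 1), via the inverse c.d.f.
    alpha = -np.log(theta)
    u = rng.uniform(size=size)
    return np.floor(sigma * ((1 - u) ** (-1 / alpha) - 1) ** gamma).astype(int)

n, theta, gamma, sigma = 5, 0.9, 0.7, 2.0
mins = rdpiv((100_000, n), theta, gamma, sigma).min(axis=1)
direct = rdpiv(100_000, theta ** n, gamma, sigma)   # DPIV(theta^n, gamma, sigma)
# Compare empirical survival probabilities at a few points:
for k in (0, 1, 2, 5):
    print(k, round((mins > k).mean(), 4), round((direct > k).mean(), 4))
```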

Theorem 4

Let \(X_{i}, i=1,2,\ldots,n,\) be non-negative independent integer-valued random variables with \(X_{(1)}=\min \limits _{1\leq i\leq n}X_{i}.\) Then, \(X_{(1)}\sim \text{DPIV}\left (\delta, \gamma, \sigma \right)\) if and only if \(X_{i}\sim\text{DPIV}(\theta_{i},\gamma,\sigma)\), where \(\delta =\prod \nolimits _{i=1}^{n}\theta _{i}.\)

Proof

It is similar to that of Theorem 3 and hence omitted. □

Theorem 5

If \(Y\sim \text {DPIV}\left (\theta, \gamma, \sigma \right),\) then, with \(\alpha=-\log\theta,\)

$$\frac{P\left(Y>x\right)}{\left(1+\left(\frac{1+x}{\sigma}\right)^{\frac{1}{\gamma}}\right)^{-\alpha}} \rightarrow 1, $$

as \(x\rightarrow \infty.\)

Proof

Let \(x\rightarrow \infty,\) and let t=t(x) be the unique integer such that \(t(x)\leq x< t(x)+1.\) As a consequence, we have \(S\left (t(x)\right)\geq P\left (Y>x\right)\geq S\left (t(x)+1\right).\) Therefore,

$$ \left[\frac{\left(1+\left(\frac{1+x}{\sigma}\right)^{\frac{1}{\gamma}}\right)}{\left(1+\left(\frac{1+t(x)}{\sigma}\right)^{\frac{1}{\gamma}}\right)}\right]^{\alpha} \geq \frac{P\left(Y>x\right)}{\left[\left(1+\left(\frac{1+x}{\sigma}\right)^{\frac{1}{\gamma}}\right)\right]^{-\alpha}} \geq \left[\frac{\left(1+\left(\frac{1+x}{\sigma}\right)^{\frac{1}{\gamma}}\right)}{\left(1+\left(\frac{t(x)+2}{\sigma}\right)^{\frac{1}{\gamma}}\right)}\right]^{\alpha}. $$

Next, note that since \(\frac {x}{t(x)}\rightarrow 1,\) the sequence in the middle is bounded by two sequences which converge to 1 as \(x\rightarrow \infty.\) Hence the proof. □

Estimation

For a random sample of size n drawn from the p.m.f. in Eq. (3), the log-likelihood function is given by

$$ \begin{aligned} \ell=\sum\limits_{i=1}^{n}\log\left[\theta^{\log\left(1+\left(\frac{x_{i}}{\sigma}\right)^{\frac{1}{\gamma}}\right)}- \theta^{\log\left(1+\left(\frac{x_{i}+1}{\sigma}\right)^{\frac{1}{\gamma}}\right)}\right]. \end{aligned} $$
(7)

The corresponding maximum likelihood equations, obtained by taking partial derivatives of ℓ with respect to θ, σ and γ respectively, are

$$ \begin{aligned} \frac{\partial \ell}{\partial \theta} =&\sum\limits_{i=1}^{n}\left[\theta^{\log \left(\left(\frac{x_{i}}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}-\theta^{\log \left(\left(\frac{x_{i}+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right]^{-1}\\ &\times\left[\log \left(\left(\frac{x_{i}}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \theta^{\log \left(\left(\frac{x_{i}}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)-1}-\log \left(\left(\frac{x_{i}+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \theta ^{\log \left(\left(\frac{x_{i}+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)-1}\right]. \end{aligned} $$
(8)
$$ {}\begin{aligned} \frac{\partial \ell}{\partial \sigma} =& \sum\limits_{i=1}^{n}\left[\gamma \sigma \left(\theta^{\log \left(\left(\frac{x_{i}}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}-\theta^{\log \left(\left(\frac{x_{i}+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)\right]^{-1}\\ &\times \left[\log (\theta) \left(\frac{\left(\frac{x_{i}+1}{\sigma }\right)^{\frac{1}{\gamma}} \theta^{\log \left(\left(\frac{x_{i}+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}}{\left(\frac{x_{i}+1}{\sigma }\right)^{\frac{1}{\gamma }}+1}-\frac{\left(\frac{x_{i}}{\sigma }\right)^{\frac{1}{\gamma }} \theta^{\log \left(\left(\frac{x_{i}}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}}{\left(\frac{x_{i}}{\sigma }\right)^{\frac{1}{\gamma }}+1}\right)\right]. \end{aligned} $$
(9)
$$ \begin{aligned} \frac{\partial \ell}{\partial \gamma} =&\sum\limits_{i=1}^{n}\left[\gamma^{2} \left(\theta^{\log \left(\left(\frac{x_{i}}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}-\theta^{\log \left(\left(\frac{x_{i}+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)\right]^{-1} \\ &\times\left[\log (\theta) \left(\frac{\left(\frac{x_{i}+1}{\sigma }\right)^{\frac{1}{\gamma }} \log \left(\frac{x_{i}+1}{\sigma }\right) \theta^{\log \left(\left(\frac{x_{i}+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}}{\left(\frac{x_{i}+1}{\sigma} \right)^{\frac{1}{\gamma }}+1}-\frac{\left(\frac{x_{i}}{\sigma }\right)^{\frac{1}{\gamma }} \log \left(\frac{x_{i}}{\sigma} \right) \theta^{\log \left(\left(\frac{x_{i}}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}}{\left(\frac{x_{i}}{\sigma} \right)^{\frac{1}{\gamma }}+1}\right)\right]. \end{aligned} $$
(10)

The maximum likelihood estimates of θ, σ and γ can be obtained by setting Eqs. (8)-(10) equal to zero and solving them simultaneously using the Newton-Raphson method. The asymptotic variance-covariance matrix of the MLEs of θ, σ and γ is obtained by inverting the Fisher information matrix, whose elements are the negative expected values of the second-order derivatives of the log-likelihood function ℓ. Using the general theory of MLEs (see Appendix A), the asymptotic distribution of \(\left (\hat {\theta }, \hat {\sigma }, \hat {\gamma }\right)\) is trivariate normal with mean \(\left (\theta, \sigma, \gamma \right)\) and variance-covariance matrix given by

$$\left[\begin{array}{ccc} E\left(-\frac{\partial^{2} \ell}{\partial \theta^{2}}\right) &E\left(-\frac{\partial^{2} \ell}{\partial\theta\partial \sigma} \right)&E\left(-\frac{\partial^{2} \ell}{\partial\theta\partial \gamma}\right) \\ E\left(-\frac{\partial^{2} \ell}{\partial \sigma\partial \theta} \right)& E\left(-\frac{\partial^{2} \ell}{\partial \sigma^{2}}\right) &E\left(-\frac{\partial^{2} \ell}{\partial\sigma \partial \gamma}\right)\\ E\left(-\frac{\partial^{2} \ell}{\partial\gamma\partial \theta}\right)&E\left(-\frac{\partial^{2} \ell}{\partial\gamma \partial \sigma}\right) & E\left(-\frac{\partial^{2} \ell}{\partial \gamma^{2}}\right)\\ \end{array}\right] ^{-1}.$$

The exact expressions for the various expectations above are cumbersome. However, in practice we estimate the above matrix by the inverse of the observed Fisher information matrix, using the following approximations

$$ \begin{aligned} &E\left(-\frac{\partial^{2}\ell}{\partial \theta^{2}}\right) \approx-\frac{\partial^{2} \ell}{\partial \theta^{2}}|_{\theta=\hat{\theta}, \sigma=\hat{\sigma}, \gamma=\hat{\gamma}}\\ &E\left(-\frac{\partial^{2} \ell}{\partial \sigma^{2}}\right) \approx-\frac{\partial^{2} \ell}{\partial \sigma^{2}}|_{\theta=\hat{\theta}, \sigma=\hat{\sigma}, \gamma=\hat{\gamma}}\\ &E\left(-\frac{\partial^{2} \ell}{\partial \gamma^{2}}\right) \approx-\frac{\partial^{2} \ell}{\partial \gamma^{2}}|_{\theta=\hat{\theta}, \sigma=\hat{\sigma}, \gamma=\hat{\gamma}}\\ &E\left(-\frac{\partial^{2} \ell}{\partial \sigma\partial \theta} \right) \approx-\frac{\partial^{2} \ell}{\partial \sigma\partial \theta}|_{\theta=\hat{\theta}, \sigma=\hat{\sigma}, \gamma=\hat{\gamma}}\\ &E\left(-\frac{\partial^{2} \ell}{\partial \gamma\partial \theta} \right) \approx-\frac{\partial^{2} \ell}{\partial \gamma\partial \theta}|_{\theta=\hat{\theta}, \sigma=\hat{\sigma}, \gamma=\hat{\gamma}}\\ &E\left(-\frac{\partial^{2} \ell}{\partial \sigma\partial \gamma} \right) \approx-\frac{\partial^{2} \ell}{\partial \sigma\partial \gamma}|_{\theta=\hat{\theta}, \sigma=\hat{\sigma}, \gamma=\hat{\gamma}}. \end{aligned} $$
(11)
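In practice, both the maximization of Eq. (7) and the observed-information computation can be carried out numerically. Below is a minimal sketch (ours, not the author's implementation; the toy data and starting values are placeholders) using a box-constrained quasi-Newton optimizer in place of Newton-Raphson.

```python
import numpy as np
from scipy.optimize import minimize

def dpiv_logpmf(x, theta, gamma, sigma):
    p = (theta ** np.log1p((x / sigma) ** (1.0 / gamma))
         - theta ** np.log1p(((x + 1) / sigma) ** (1.0 / gamma)))
    return np.log(np.clip(p, 1e-300, None))   # guard against log(0)

def negloglik(params, x):
    theta, gamma, sigma = params
    return -dpiv_logpmf(x, theta, gamma, sigma).sum()

x = np.array([0, 0, 1, 1, 1, 2, 3, 3, 5, 8])  # toy data for illustration only
res = minimize(negloglik, x0=[0.5, 1.0, 1.0], args=(x,),
               method="L-BFGS-B",
               bounds=[(1e-6, 1 - 1e-6), (1e-6, None), (1e-6, None)])
print(res.x)  # (theta_hat, gamma_hat, sigma_hat)
# Standard errors would come from inverting a numerical Hessian of negloglik at
# res.x (e.g., via a finite-difference Hessian such as numdifftools.Hessian).
```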

The expressions for these second derivatives are provided in Appendix B. A thorough simulation study was performed by generating 2000 samples of various sizes (n=50,100,200) from \(\text{DPIV}\left (\theta, \gamma, \sigma \right)\) under the following choices of the parameters.

  • Choice 1:θ=0.2,γ=0.2,σ=2.

  • Choice 2:θ=0.4,γ=0.7,σ=2.5.

  • Choice 3:θ=0.7,γ=1.5,σ=3.

  • Choice 4:θ=0.9,γ=1,σ=1.7.

  • Choice 5:θ=0.5,γ=0.5,σ=1.

  • Choice 6:θ=0.8,γ=0.7,σ=1.2.

  • Choice 7:θ=0.1,γ=2,σ=2.5.

The parameter estimates with their associated 95% confidence intervals are provided in Tables 1, 2, 3, 4, 5, 6 and 7.

Table 1 Maximum likelihood estimates of the parameters (Choice 1)
Table 2 Maximum likelihood estimates of the parameters (Choice 2)
Table 3 Maximum likelihood estimates of the parameters (Choice 3)
Table 4 Maximum likelihood estimates of the parameters (Choice 4)
Table 5 Maximum likelihood estimates of the parameters (Choice 5)
Table 6 Maximum likelihood estimates of the parameters (Choice 6)
Table 7 Maximum likelihood estimates of the parameters (Choice 7)

Comment on the simulation study: From Tables 1, 2, 3, 4, 5, 6 and 7, we observe that the convergence of the N-R method is somewhat stronger for smaller values of θ, γ and σ, in the sense that the estimates are close to the corresponding actual values in expectation. The associated confidence intervals (in parentheses) cover the actual values of θ, γ and σ quite satisfactorily. From the estimated value of θ, i.e., \(\widehat {\theta }\), one can obtain an estimate of α by using the invariance property of the MLE. Also, notice that there is an appreciable amount of error when the sample size is very small (see the estimates for n=50 in Tables 1, 2, 3, 4, 5, 6 and 7). Since α appears in powers of the Xi’s (in the original model, before reparametrization), more fluctuation is expected in the N-R method, resulting in weak convergence and, in some sense, unstable estimates. Furthermore, as mentioned in the “Introduction” section, for a choice of α≥2 (equivalently \(\theta =\exp (-\alpha)\leq 0.1353\)), the estimated value of θ is not good. A full-scale simulation study with all possible combinations of values of the parameters of the DPIV distribution under the Bayesian paradigm is required, which will be the subject matter of a separate article. One may consider the method of moments estimation to examine whether such an anomaly can be removed or, at least in principle, reduced to a desired accuracy level.

Application

Data set 1

In this section we re-analyze the data used by Krishna and Pundir (2009). The data set comprises the recordings of Phyo (1973) of the total number of carious teeth among the four deciduous molars in a sample of 100 children aged 10 and 11 years. Symmetry between right and left molars is presumed and only the right molars are considered, with a time unit of two years. The data are given in Table 8. The p-values of the χ2-statistic are 0.000024, 0.2614, 0.5438, 0.4169, 0.1339 and 0.6238 for the Poisson, geometric, DBD, DPareto, GBPareto (discrete) and DPIV distributions, respectively. This reveals that the Poisson and geometric distributions are not good fits at all, whereas the GBPareto (discrete), DBD and DPareto provide good fits, with the DPIV being the best.

Table 8 Data set on total number of carious teeth among the four deciduous molars

We compute the expected frequencies (Ei) for fitting the Poisson, geometric, DBD, DPareto, GBPareto and DPIV distributions, and pool the frequencies for 3 or more in order to apply the χ2-test for goodness of fit (a computational sketch follows Table 9). For the calculation of the expected frequencies we use the ML estimates in each case. The estimated values of the parameters are given in parentheses in column one of Table 9.

Table 9 Table for goodness of fit
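A minimal sketch of such a pooled χ2 computation (ours; the counts below are hypothetical placeholders rather than the Table 8/9 values, and the parameter values stand in for the ML estimates):

```python
import numpy as np
from scipy.stats import chi2

def dpiv_pmf(x, theta, gamma, sigma):
    return (theta ** np.log1p((x / sigma) ** (1.0 / gamma))
            - theta ** np.log1p(((x + 1) / sigma) ** (1.0 / gamma)))

observed = np.array([120, 50, 26, 14, 8, 5, 7])      # counts for x = 0..5 and "6+", hypothetical
theta_hat, gamma_hat, sigma_hat = 0.5, 0.8, 1.0      # stand-ins for the ML estimates
n = observed.sum()
p = dpiv_pmf(np.arange(6, dtype=float), theta_hat, gamma_hat, sigma_hat)
p = np.append(p, 1.0 - p.sum())                      # pool the upper tail into the last cell
expected = n * p
stat = ((observed - expected) ** 2 / expected).sum()
df = len(observed) - 1 - 3                           # cells minus 1 minus estimated parameters
print(round(stat, 3), round(chi2.sf(stat, df), 4))   # chi-square statistic and p-value
```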

Data set 2

A second application of the distributions is to modeling discrete data in which the frequencies at successive values increase. In general, most real discrete data are unimodal, multimodal or have decreasing frequencies; for this data set, however, a reverse pattern is observed. As an illustration, we consider data on duckweed fronds for plants growing in pure water, observed weekly (for details, see Hand et al. (1993)), presented in Table 10. We fit these data to the DPIV\(\left (\theta, \gamma,\sigma \right)\), Poisson, geometric, and

Table 10 The number of duckweed fronds for plants growing in pure water observed weekly

to the generalized Poisson distribution (GPD) with p.m.f. (see, for details, Consul (1989)) given by

$$ P(\theta,\lambda)=\left\{\begin{array}{ll} \theta(\theta+\lambda x)^{x-1}e^{-(\theta+\lambda x)}/x! & x=0,1,2,\ldots \\ 0 & \text{for}\ x>m\,\, \text{if }\, \lambda<0, \end{array}\right. $$

where θ>0, \(\max (-1, -\theta /m)\leq \lambda \leq 1\) and m(≥4) is the largest positive integer for which θ+mλ≥0 when λ<0.
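A sketch of this p.m.f. for the λ ≥ 0 branch (ours; for λ < 0 the truncation at m described above must also be applied):

```python
import numpy as np
from scipy.special import factorial

def gpd_pmf(x, theta, lam):
    """Consul's generalized Poisson p.m.f., lambda >= 0 case."""
    x = np.asarray(x, dtype=float)
    t = theta + lam * x
    return theta * t ** (x - 1) * np.exp(-t) / factorial(x)

print(gpd_pmf(np.arange(6), theta=1.2, lam=0.3).round(4))  # sums to ~1 over a long grid
```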

From Table 11 it appears that the DPIV distribution clearly provides the best fit.

Table 11 The estimated parameters and goodness of fit for the duckweed fronds for plants growing data

Data set 3

In this section, the DPIV distribution is applied to a data set taken from Consul (1989). The data set represents the observed frequencies of the number of outbreaks of strikes in the coal-mining industry in the U.K. during 1948−1959. The data are depicted in Table 12, along with the expected frequencies corresponding to the DPIV, Poisson, geometric and generalized Poisson distributions, the last as given in the earlier “Data set 2” section. Consul (1989) applied the generalized Poisson distribution (GPD) to this data set to examine the efficacy of the GPD model.

Table 12 The number of outbreaks of strike in the coal-mining industry in UK

From Table 13 it appears that the DPIV distribution clearly provides the best fit.

Table 13 The estimated parameters and goodness of fit for the outbreaks of strike in the coal-mining industry in UK data

Concluding remarks

In this paper, we have proposed a new discrete analogue of the continuous Pareto (IV) distribution (the DPIV distribution, in short, with three parameters) and derived some of its interesting distributional properties. The DPIV distribution offers good flexibility in terms of the shapes of its probability mass function and hazard rate function. The “Application” section shows that the DPIV can be useful in fitting data which are positively skewed, as well as other data sets with slightly different shapes. The estimation of the model parameters is discussed in the classical setup under the method of maximum likelihood. From the “Application” section, it appears that the DPIV distribution provides a better alternative to existing discrete Pareto probability models.

Appendix

Appendix A

In this section we provide some justification of the consistency of the maximum likelihood estimators. Consistency of MLEs: Under certain regularity conditions, Lehmann and Casella (1998, Theorem 3.10, page 449) proved that the MLE \(\hat {\delta }\) is a consistent estimator of the parameter δ. These conditions can be checked in our case as follows. The first condition states that the parameter space \(\Omega =\left \{\theta \in (0,1),\ \sigma \in (0,\infty),\ \gamma \in (0,\infty)\right \}\) is an open set, and the support of the random variable X is independent of the parameters. Next, the second condition, \(E\left (\frac {\partial \log f}{\partial \delta }\right)=0,\) can be verified easily. The third condition, that the Fisher information \(I(\delta)=E\left (-\frac {\partial ^{2} \log f}{\partial \delta ^{2}}\right)\) is positive definite, can also be verified easily. The final condition to verify is that \(\left |\frac {\partial ^{3} \log f}{\partial \delta ^{3}}\right |\leq M(x)\) with \(E\left (M(X)\right)<\infty.\) In this case, we consider \(M(x)=\left [ \frac {1}{f\left (x;\delta \right)\left (1+\frac {\delta }{\sigma }\right)^{k}}\times \frac {\partial ^{3} \log f}{\partial \delta ^{3}} \right ]^{2}.\) For a carefully selected large k, we verified numerically (using the Mathematica software) that \(\left |\frac {\partial ^{3} \log f}{\partial \delta ^{3}}\right |\leq M(x)\) for all parameter values in Ω and all \(x=1,2,\cdots.\) Furthermore, observe that \(E\left (M(X)\right)=\left [\left (1+\frac {\delta }{\sigma }\right)^{k}\right ]^{-1} Var\left (\frac {\partial ^{3} \log f}{\partial \delta ^{3}}\right),\) because \({\sum \nolimits }_{x\in \mathbb {N^{*}}}\frac {\partial ^{3} \log f}{\partial \delta ^{3}}=0.\) Then, \(Var\left (\frac {\partial ^{3} \log f}{\partial \delta ^{3}}\right)\) is finite whenever E(X) and Var(X) are finite. Since all the regularity conditions are satisfied, the MLE \(\hat {\delta }\) of δ is a consistent estimator, and in our case \(\hat {\delta }\sim N_{3}\left (\delta, \left [I(\delta)\right ]^{-1}\right)\) asymptotically.

Appendix B

In this section, we provide the elements of the observed Fisher Information matrix which are as follows:

$$ \begin{aligned} \frac{\partial^{2} \ell}{\partial \theta^{2}}\notag\\ =&\left[\theta^{2} \left(\theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}-\theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)^{2}\right]\notag\\ &\times\left[-\log^{2}\left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)+\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}+\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \left(\theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right.\right.\notag\\ &\left.-2 \log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}-\theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right) \theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma}}+1\right)} +\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)\notag\\ &\left.\times\left(\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)} -\theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}+\theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right) \theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right]^{-1}. \end{aligned} $$
$$ \begin{aligned} \frac{\partial^{2} \ell}{\partial \sigma^{2}}\notag\\ =&\left[\log \theta\left(\!-\log \theta \left(\left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }} \theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\,-\,\left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)\! \left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }} \theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)^{2}\right.\right.\notag\\ &\left.\times \gamma \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \left(\theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)} -\theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)\right. \end{aligned} $$
$$ \begin{aligned} &\left.\times\left(\left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }} \theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}-\left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }} \theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)\right.\notag\\ &\left.+\left(\theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}-\theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)\right.\notag\\ &\left.\times\left(\left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)^{2} \left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma}} \theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)} -\left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)^{2} \left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }} \theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)\right.\notag\\ &\left.\left.+\log\theta \left(\left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)^{2} \left(\frac{x}{\sigma }\right)^{2/\gamma} \theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}-\left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma}}+1\right)^{2} \left(\frac{x+1}{\sigma }\right)^{2/\gamma} \theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)\right)\!\right]\notag\\ &\times\left[\gamma^{2} \sigma^{2} \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)^{2} \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)^{2} \left(\theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}-\theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)^{2}\right]^{-1}. \end{aligned} $$
$$ \begin{aligned} \frac{\partial^{2} \ell}{\partial \gamma^{2}}\notag\\ =&\left[\log\theta\left(\theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}-\theta ^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right) \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)^{2} \left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }} \log^{2}\left(\frac{x}{\sigma }\right) \theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right.\notag\\ &\times\left(\log \theta \left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}\!+1\right)\,-\,\left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}\,+\,1\right)^{2} \left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }} \log^{2}\left(\frac{x+1}{\sigma }\right) \theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\! \left(\log (\theta) \left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}\!+1\right)\notag\\ &-\log \theta \left(\left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }} \log \left(\frac{x}{\sigma }\right) \theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right.\\&-\left.\left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }} \log \left(\frac{x+1}{\sigma }\right) \theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)^{2}\\&+2 \gamma \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \left(\theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}-\theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)\\ &\quad\left(\left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }} \log \left(\frac{x}{\sigma }\right) \theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)\notag\\ &\left. \left.-\left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }} \log \left(\frac{x+1}{\sigma }\right) \theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)\right]\notag\\ &\times \left[\gamma^{4}\left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)^{2} \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)^{2} \left(\theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}-\theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)^{2}\right]^{-1}. \end{aligned} $$
$$ \begin{aligned} \frac{\partial^{2} \ell}{\partial\theta \partial\sigma}\notag\\ =&\left[\!\log \theta \left(\left(\!\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}\,-\,\left(\!\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}\right) \left(\!\log \left(\left(\!\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)\,-\,\log \left(\!\left(\!\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)\right) \theta^{\log \left(\left(\frac{x}{\sigma}\right)^{\frac{1}{\gamma }}+1\right)+\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right.\notag\\ &+\left(\theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}-\theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right) \left(\left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }} \theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right.\notag\\ &\left.\left.-\left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }} \theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)\right]\notag\\ &\times \left[\gamma \theta \sigma \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right) \left(\theta ^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}-\theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)^{2}\right]^{-1}. \end{aligned} $$
$$ \begin{aligned} \frac{\partial^{2} \ell}{\partial\sigma \partial\gamma}\notag\\ =&\left[\log \theta \left(\theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}-\theta ^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right) \left(\frac{\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }} \theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}}{\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1}-\frac{\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }} \theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}}{\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1}\right)\right.\notag\\ &\left.- \gamma^{-1}\log \theta \left(\frac{\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }} \log \left(\frac{x+1}{\sigma }\right) \theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}}{\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1}-\frac{\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }} \log \left(\frac{x}{\sigma }\right) \theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}}{\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1}\right) \right.\notag\\ &\times \left(\frac{\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }} \theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}}{\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1}-\frac{\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }} \theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}}{\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1}\right)\notag\\ &+\left[\!\left(\!\theta^{\log \left(\!\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}\,+\,1\right)}\,-\,\theta^{\log \left(\!\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}\,+\,1\right)}\!\right)\! \left(\!\left(\!\left(\!\frac{x\,+\,1}{\sigma }\right)^{\frac{1}{\gamma }}\!+1\!\right)^{2} \left(\!\frac{x}{\sigma }\!\right)^{\frac{1}{\gamma }} \log \left(\!\frac{x}{\sigma }\!\right) \theta^{\log \left(\left(\!\frac{x}{\sigma }\!\right)^{\frac{1}{\gamma }}\,+\,1\right)} \left(\!\log (\theta) \left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}\,+\,1\!\right)\right.\right.\notag\\ &\left.\left.-\left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)^{2} \left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }} \log \left(\frac{x+1}{\sigma }\right) \theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)} \left(\log (\theta) \left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)\right)\right]\notag\\ &\left.\times \gamma\left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma}}+1\right)^{2} \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma}}+1\right)^{2}\right]\notag\\ &\times \left[\gamma^{2} \sigma \left(\theta^{\log \left(\left(\frac{x}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}-\theta^{\log \left(\left(\frac{x+1}{\sigma }\right)^{\frac{1}{\gamma }}+1\right)}\right)^{2}\right]^{-1}. \end{aligned} $$

Availability of data and materials

All the data sets utilized in this manuscript are freely available in the cited references.

Abbreviations

DPIV: discrete Pareto (type IV) distribution

IFR (DFR): increasing (decreasing) failure rate

IFRA (DFRA): increasing (decreasing) failure rate average

NBU (NWU): new better (worse) than used

NBUE (NWUE): new better (worse) than used in expectation

IMRL (DMRL): increasing (decreasing) mean residual lifetime

s.f.: survival function

p.m.f.: probability mass function

h.r.f.: hazard rate function

References

  • Arnold, B. C.: Pareto Distributions. International Co–operative Publishing House, Maryland (1983).


  • Bracquemond, C., Gaudoin, O.: A survey on discrete lifetime distributions. Int. J. Reliab. Qual. Saf. Eng. 10, 69–98 (2003).


  • Chakraborty, S.: Generating discrete analogues of continuous probability distributions–A survey of methods and constructions. J. Stat. Distrib. Appl. 2(Article 6), 1–30 (2015).


  • Consul, P. C.: Generalized Poisson Distributions: Properties and Applications. Marcel Dekker, Inc., New York (1989).


  • Frechet, M.: Les Probabilités Associées à un Système d’Événements Compatibles et Dépendants, 1, Événements en Nombre Fini Fixe, Actualités scientifiques et industrielles, No. 859. Hermann, Paris (1940).


  • Frechet, M.: Les Probabilités Associées à un Système d’Événements Compatibles et Dépendants, 2, Événements en Nombre Fini Fixe, Actualités scientifiques et industrielles, No. 942. Hermann, Paris (1943).


  • Hand, D. J., Daly, F., McConway, K., Lunn, D., Ostrowski, E.: A handbook of small data sets. CRC Press, Boca Raton, Florida (1993).


  • Johnson, N. L., Kotz, S: Developments in discrete distributions, 1969–1980, correspondent paper. Int. Stat. Rev./Rev. Int. Stat. 50(1), 71–101 (1982).


  • Johnson, N. L., Kotz, S., Kemp, A. W.: Univariate Discrete Distributions. Second Edition. John Wiley and Sons, Hoboken, New Jersey (2005).


  • Kemp, A. W.: Classes of discrete lifetime distributions. Commun. Stat. Theory Methods. 33, 3069–3093 (2004).


  • Keilson, J., Gerber, H.: Some results for discrete unimodality. J. Am. Stat. Assoc. 66, 386–389 (1971).


  • Krishna, H., Pundir, P. S.: Discrete Burr and discrete Pareto distributions. Stat. Methodol. 6, 177–188 (2009).


  • Lai, C. D.: Issues concerning constructions of discrete lifetime models. Qual. Tech. Quant. Manag. 10, 251–262 (2013).


  • Laurent, A. G.: Probability distributions, factorial moments, empty cell test. In: Patil, G. P. (ed.) Classical and Contagious Discrete Distributions. Calcutta Statistical Publishing Society; Pergamon, Oxford (1965).


  • Lehmann, E. L., Casella, G.: Theory of Point Estimation. Springer, New York (1998).


  • Nakagawa, T., Osaki, S.: The discrete Weibull distribution. IEEE Trans. Reliab. R–24, 300–301 (1975).


  • Phyo, I.: Use of a Chain Binomial in the Epidemiology of Caries. J. Dent. Res. 52, 750–752 (1973).


  • Padgett, W. J., Spurrier, J. D.: On discrete failure models. IEEE Trans. Reliab. R–34, 253–256 (1985).


  • Roy, D.: The discrete normal distribution. Commun. Stat. Theory Methods. 32, 1871–1883 (2003).


  • Salvia, A. A., Bollinger, R. C.: On discrete hazard functions. IEEE Trans. Reliab. R–31, 458–459 (1982).


  • Steutel, F. W., Van Harn, K.: Infinite Divisibility of Probability Distributions on the Real Line. Marcel Dekker, New York (2004).


  • Warde, W. D., Katti, S. K.: Infinite divisibility of discrete distributions II. Ann. Math. Stat. 42, 1088–1090 (1971).


  • Xie, M., Gaudoin, O., Bracquemond, C.: Redefining failure rate function for discrete distributions. Int. J. Reliab. Qual. Saf. Eng. 9, 275–285 (2002).



Acknowledgements

The author acknowledges the several references from which useful ideas were generated in preparing this manuscript.

Funding

The author did not receive any funding in preparing this manuscript.

Author information


Contributions

Dr. Indranil Ghosh, being the sole author, contributed in entirety to the preparation of the manuscript.

Corresponding author

Correspondence to Indranil Ghosh.

Ethics declarations

Competing interests

The author has no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ghosh, I. A new discrete pareto type (IV) model: theory, properties and applications. J Stat Distrib App 7, 3 (2020). https://doi.org/10.1186/s40488-020-00104-x
