Open Access

A folded Laplace distribution

Journal of Statistical Distributions and Applications 2015, 2:10

DOI: 10.1186/s40488-015-0033-9

Received: 28 September 2015

Accepted: 7 October 2015

Published: 22 October 2015

Abstract

We study a class of probability distributions on the positive real line, which arise by folding the classical Laplace distribution around the origin. This is a two-parameter, flexible family with a sharp peak at the mode, very much in the spirit of the classical Laplace distribution. We derive basic properties of the distribution, which include the probability density function, distribution function, quantile function, hazard rate, moments, and several related parameters. Further properties related to mixture representation, Lorenz curve, mean residual life, and entropy are included as well. We also discuss parameter estimation for this new stochastic model and illustrate its potential applications with real data.

Keywords

Exponential distribution; Folded distribution; Laplace distribution; Moment estimation; Oil price data

Classification codes

60E05; 62E15; 62F10; 62P05

Introduction

We present a theory of a class of distributions on \(\mathbb R_{+}=\,[0,\infty)\), obtained by folding the classical Laplace distribution given by the probability density function (PDF)
$$ f(x) = \frac{1}{{2\sigma }}{e^{- \left|\frac{{x - \mu }}{\sigma }\right|}},\,\,\, x \in \mathbb R, $$
(1)
over to the interval \([0,\infty)\). The folding is accomplished via the transformation
$$ Y=|X|, $$
(2)
where X is a Laplace random variable with PDF (1), so that the PDF of Y becomes
$$ g(y)=f(y)+f(-y), \,\,\, y\in \mathbb R_{+}. $$
(3)
A substitution of (1) into (3) results in the following PDF of the folded version of the Laplace-distributed X (Cooray 2008):
$$ g(y)=\frac{1}{\sigma} \left\{ \begin{array}{ll} e^{- \frac{\mu}{\sigma }} \cosh \left(\frac{y}{\sigma} \right) & \text{for \(0\leq y < \mu\)}, \\ e^{- \frac{y}{\sigma}} \cosh \left(\frac{\mu}{\sigma} \right) & \text{for \(\mu \leq y\)}. \\ \end{array} \right. $$
(4)
Note that when μ=0, this reduces to
$$ g(y)=\frac{1}{\sigma}e^{- \frac{y}{\sigma }}, \,\,\, y\in \mathbb R_{+}, $$
(5)
which is the PDF of an exponential distribution with mean σ. This is to be expected, as in this case the Laplace distribution is centered at the origin. Thus, the folded Laplace distribution can be thought of as a generalization of the exponential distribution, and in fact it shares the form of its density with the latter for values larger than μ. Figure 1 provides an illustration of the folded Laplace PDF. As we shall see in the sequel, this is a rather flexible family with a sharp peak at the mode, resembling the Laplace distribution. The latter has gained popularity in recent years in numerous areas of application [see, e.g., Kotz et al. (2001)]. We hope that this positive version of the Laplace distribution will prove to be another useful stochastic model.
Fig. 1

PDFs of the folded Laplace distributions with σ=1 and varying values of μ (panel a) and with μ=1 and varying values of σ (panel b)

Folded distributions are important and popular models that have found many interesting applications. As noted by several authors [see, e.g., Leone et al. (1961), Psarakis and Panaretos (1990), Cooray et al. (2006), Cooray (2008)], they frequently arise when the data are recorded with disregard to their sign. Perhaps the best known such model is the folded normal distribution, developed in the 1960s and 1970s [see, e.g., Elandt (1961), Leone et al. (1961), Johnson (1962, 1963), Rizvi (1971), and Sundberg (1974)]. Since then, this distribution, along with its various extensions, has been revisited and applied in diverse fields [see, e.g., Nelson (1980), Sinha (1983), Yadavalli and Singh (1995), Naulin (2003), Bohannon et al. (2004), Lin (2004), Lin (2005), Asai and McAleer (2006), Kim (2006), Liao (2010), Jung et al. (2011), Chakraborty and Chatterjee (2013), Tsagris et al. (2014)]. Other families of folded distributions recently studied include folded t and generalized t distributions [see Psarakis and Panaretos (1990, 2001), Brazauskas and Kleefeld (2011, 2014), Scollnik (2014), and Nadarajah and Bakar (2015)], folded Cauchy distribution [see Johnson et al. (1994, 1995), Cooray (2008), Nadarajah and Bakar (2015)], folded Gumbel distribution [see Nadarajah and Bakar (2015)], folded normal slash distribution [see Gui et al. (2013)], folded beta distribution [see Berenhaut and Bergen (2011)], folded binomial distribution [see Porzio and Ragozini (2009)], folded logistic distribution [see Cooray et al. (2006), Nadarajah and Kotz (2007), Cooray (2008)], folded exponential transformation [see Piepho (2003)], and doubly-folded bivariate normal distribution [see Stracener (1973)]. Let us note that the folded Laplace (FL) distribution, and a more general folded exponential power family, were recently briefly treated in Cooray (2008) and Nadarajah and Bakar (2015), respectively. Our work offers a more comprehensive theory focused on FL distributions, including numerous new results, estimation, and data examples.

We begin our journey in Section 2, where we define the folded Laplace (FL) model and derive its properties. Section 3 is devoted to statistical inference for the FL model. In particular, we establish the existence and uniqueness of moment estimators of the FL parameters and derive their asymptotic behavior. This is followed by Section 4, which contains a data example illustrating the modeling potential of the FL distribution. Proofs and technical results are collected in the Appendix.

Definition and properties

We start with a formal definition of the folded Laplace (FL) model.

Definition 2.1.

A random variable on \(\mathbb R_{+}\) given by the density function (4), where μ≥0 and σ>0, is said to have a folded Laplace distribution, denoted by FL(μ,σ).

Remark 2.1.

Note that the PDF of an FL(μ,σ) distribution can be written more compactly as
$$g(y)=\frac{1}{\sigma}\cdot \exp\left(\frac{-\max\{\mu,y\}}{\sigma}\right) \cdot \cosh \left(\frac{\min\{\mu, y\}}{\sigma} \right), \,\,\, y\in \mathbb R_{+}. $$

Moreover, it is easy to see that folding the Laplace distribution (1) with the mode μ>0 or one with the mode −μ<0 results in the same distribution, so that we can restrict attention to non-negative values of μ. In fact, this is always the case when the folding mechanism (2) is applied to a location family \(f(x)=h(x-\mu)\), \(x,\mu \in \mathbb R\).
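To make the compact form concrete, here is a short Python sketch (ours, not from the paper; the helper name fl_pdf is an assumption) of the FL density, with a numerical check that it integrates to one and reduces to the exponential PDF (5) when μ=0.

```python
# Sketch of the FL(mu, sigma) density via the compact form of Remark 2.1.
# The helper name fl_pdf is ours; requires numpy and scipy.
import numpy as np
from scipy.integrate import quad

def fl_pdf(y, mu, sigma):
    """Folded Laplace density g(y), y >= 0, with mu >= 0 and sigma > 0."""
    y = np.asarray(y, dtype=float)
    return (np.exp(-np.maximum(mu, y) / sigma)
            * np.cosh(np.minimum(mu, y) / sigma) / sigma)

# Sanity checks: g integrates to 1, and mu = 0 recovers the exponential
# density (5) with mean sigma.
print(quad(fl_pdf, 0, np.inf, args=(1.0, 0.5))[0])       # ~ 1.0
print(fl_pdf(0.3, 0.0, 0.5), np.exp(-0.3 / 0.5) / 0.5)   # equal
```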

Remark 2.2.

It is straightforward to see that the FL PDF is unimodal with the mode at μ, and becomes more symmetric as the mode μ increases, as can be seen in Fig. 1. On the other hand, as μ approaches the origin, the distribution becomes exponential, with the PDF given by (5). The figure also shows that the FL PDF becomes flatter as the parameter σ increases. In the boundary case σ=0, the distribution is understood as a point mass at μ, which can be seen by taking the limit of the FL cumulative distribution function (CDF), given below, as σ→0.

Remark 2.3.

It should be noted that the FL distribution is not a member of the exponential family (unless μ=0).

Remark 2.4.

A more general model, which was briefly treated in Nadarajah and Bakar (2015) and deserves further study, arises by folding the exponential power distribution (Subbotin 1923) and is given by the PDF
$$ f(x) = \frac{1}{2\sigma p^{1/p}\Gamma(1+1/p)}\left\{e^{- \frac{|x - \mu|^{p}}{p \sigma^{p}}} + e^{- \frac{|x + \mu|^{p}}{p \sigma^{p}}}\right\},\,\,\, x \in \mathbb R. $$
(6)

Its special cases include the folded Laplace distribution (p=1) as well as the folded normal distribution (p=2).

Remark 2.5.

Another probability distribution that has a sharp peak at the mode and is restricted to the positive half-line is the log-Laplace distribution (see, e.g., Kozubowski and Podgórski 2003a,b). In analogy with the log-normal distribution, this model describes the random variable Y= exp(X), where X is Laplace-distributed with the PDF (1), and is given by the PDF
$$ g(y)=\frac{\alpha}{2\delta} \left\{ \begin{array}{ll} (y/\delta)^{\alpha-1} & \text{for \(0\leq y < \delta\)}, \\ (y/\delta)^{-\alpha-1} & \text{for \(y\geq \delta\)}, \\ \end{array} \right. $$
(7)

where α=1/σ>0 is a tail parameter and δ= exp(μ) is a scale parameter. However, in contrast with the FL distribution, this distribution has power-law tails.

The CDF and the quantile function (QF) of the FL model are straightforward to derive [see Cooray (2008), Liu (2014)], and both admit closed forms. The CDF is given by
$$ G(y)= \left\{ \begin{array}{ll} e^{\frac{-\mu}{\sigma}} \sinh \left(\frac{y}{\sigma} \right) & \text{for \(0\leq y < \mu\)}, \\ 1-e^{\frac{-y}{\sigma}} \cosh \left(\frac{\mu}{\sigma} \right) & \, \text{for}\; y\geq \mu, \end{array} \right. $$
(8)
while the QF is
$$ Q(q)= \left\{ \begin{array}{ll} \sigma \cdot \log \left[q e^{\frac{\mu}{\sigma}}+ \sqrt{q^{2} e^{\frac{2\mu}{\sigma}}+1}\right] & \;\text{for}\; 0 \le q \le \frac{1}{2}\left(1-e^{\frac{-2\mu}{\sigma}}\right), \\ \sigma \cdot \log \left[\cosh \left(\frac{\mu}{\sigma} \right)/(1-q)\right] & \;\text{for }\; 1> q\ge \frac{1}{2}\left(1-e^{\frac{-2\mu}{\sigma}}\right). \end{array} \right. $$
(9)
In particular, the median is [see Cooray (2008), Liu (2014)]
$$ m = Q(1/2) = \sigma \log\left[ 2 \cosh \left(\frac{\mu}{\sigma} \right)\right]. $$
(10)

Remark 2.6.

The QF above can be used to simulate random variates Y from an FL distribution via Y=Q(U), where U is a standard uniform variate, available in standard statistical packages. Alternatively, simulation can be accomplished by first generating a Laplace variate \(T=\sigma X+\mu\), where X is standard Laplace, followed by Y=|T|. The standard Laplace variate can be generated via \(X=E_{1}-E_{2}\), where the \(E_{i}\) are independent standard exponential variates [see, e.g., Kotz et al. (2001)].
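The following Python sketch (our code; the helper name fl_quantile is an assumption) implements both simulation routes and checks that they produce matching sample means.

```python
# Two ways to simulate FL(mu, sigma) variates, following Remark 2.6.
import numpy as np

def fl_quantile(q, mu, sigma):
    """Quantile function Q(q) transcribed from Eq. (9)."""
    q = np.asarray(q, dtype=float)
    split = 0.5 * (1.0 - np.exp(-2.0 * mu / sigma))
    lower = sigma * np.log(q * np.exp(mu / sigma)
                           + np.sqrt(q**2 * np.exp(2.0 * mu / sigma) + 1.0))
    upper = sigma * np.log(np.cosh(mu / sigma) / (1.0 - q))
    return np.where(q < split, lower, upper)

rng = np.random.default_rng(1)
mu, sigma, n = 1.0, 0.5, 100_000

# Route 1: inverse-CDF method, Y = Q(U) with U standard uniform.
y1 = fl_quantile(rng.uniform(size=n), mu, sigma)

# Route 2: fold a Laplace variate, Y = |sigma*(E1 - E2) + mu|, with E1, E2
# independent standard exponentials.
y2 = np.abs(sigma * (rng.exponential(size=n) - rng.exponential(size=n)) + mu)

# Both means should approximate E(Y) = mu + sigma*exp(-mu/sigma) ~ 1.0677.
print(y1.mean(), y2.mean())
```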

2.1 The hazard rate

The hazard rate (failure rate, mortality rate) of the FL model, defined as the ratio of the PDF to the survival function, is provided in the result below. Its routine derivation, which can be found in Liu (2014), will be omitted.

Proposition 2.1.

If Y∼FL(μ,σ), then the hazard rate of Y is given by
$$ h(y)= \left\{ \begin{array}{ll} \frac{1}{\sigma}\cdot \frac{e^{\frac{-\mu}{\sigma}}\cosh \left(\frac{y}{\sigma}\right)}{1-e^{\frac{-\mu}{\sigma}}\sinh\left(\frac{y}{\sigma}\right)} & \text{for }\;0 \le y \le \mu \\ \frac{1}{\sigma} & \text{for}\; y\ge \mu. \end{array} \right. $$
(11)

Moreover, this function is concave up and monotonically increasing from h(0)= exp(−μ/σ)/σ to h(μ)=1/σ on the interval [0,μ].
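The following short sketch (ours; the name fl_hazard is an assumption) transcribes (11) and checks the stated boundary values h(0)= exp(−μ/σ)/σ and h(μ)=1/σ.

```python
# Hazard rate of FL(mu, sigma), transcribing Eq. (11).
import numpy as np

def fl_hazard(y, mu, sigma):
    """Hazard rate h(y); constant 1/sigma for y >= mu."""
    if y >= mu:
        return 1.0 / sigma
    num = np.exp(-mu / sigma) * np.cosh(y / sigma) / sigma
    return num / (1.0 - np.exp(-mu / sigma) * np.sinh(y / sigma))

# Boundary checks for mu = 1, sigma = 0.5:
print(fl_hazard(0.0, 1.0, 0.5), np.exp(-2.0) / 0.5)  # h(0) = exp(-mu/sigma)/sigma
print(fl_hazard(1.0, 1.0, 0.5))                      # h(mu) = 1/sigma = 2.0
```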

2.2 The moment generating function and moments

The following result, which was stated in Cooray (2008) and derived in Liu (2014), provides an explicit formula for the moment generating function (MGF) of the FL model.

Proposition 2.2.

If Y∼FL(μ,σ), then the moment generating function of Y is given by
$$ M_{Y}(t)= \mathbb E e^{tY} = \frac{1}{2}\left(\frac{e^{\mu t}-e^{\frac{-\mu}{\sigma}}}{\sigma t +1}-\frac{e^{\mu t}+e^{\frac{-\mu}{\sigma}}}{\sigma t -1}\right), \,\,\, t < \frac{1}{\sigma}. $$
(12)

By taking the derivatives of the MGF at t=0, we can recover the moments of the FL distribution. The latter are given in the following result, whose lengthy albeit routine derivation shall be omitted [details can be found in Liu (2014)].

Proposition 2.3.

If Y∼FL(μ,σ), then the nth moment of Y is given by
$$ \mathbb E \left[Y^{n}\right] = \frac{\sigma^{n}}{2}n!e^{\frac{-\mu}{\sigma}}\left[1-(-1)^{n}\right]+\frac{1}{2\sigma}\sum_{k=0}^{n}\frac{n!}{(n-k)!}\sigma^{k+1}\mu^{n-k}\left[1+(-1)^{k}\right]. $$
(13)
In particular, we have
$$ \mathbb EY=\mu +\sigma e^{\frac{-\mu}{\sigma}} \,\,\, \text{and} \,\,\, \mathbb E[Y^{2}]=\mu^{2}+2\sigma^{2}, $$
(14)
so that the variance is
$$ \mathbb Var(Y)=2\sigma^{2} -\sigma^{2} e^{\frac{-2\mu}{\sigma}}-2\mu \sigma e^{\frac{-\mu}{\sigma}}. $$
(15)
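As a quick consistency check, formula (13) can be transcribed directly into code and compared against (14) and (15); the sketch below (ours, with the assumed helper name fl_moment) does this.

```python
# Raw moments of FL(mu, sigma), transcribing Eq. (13).
import math

def fl_moment(n, mu, sigma):
    """n-th raw moment E[Y^n] of FL(mu, sigma) from Eq. (13)."""
    head = (sigma**n / 2.0) * math.factorial(n) \
           * math.exp(-mu / sigma) * (1 - (-1)**n)
    tail = sum(math.factorial(n) // math.factorial(n - k)
               * sigma**(k + 1) * mu**(n - k) * (1 + (-1)**k)
               for k in range(n + 1)) / (2.0 * sigma)
    return head + tail

mu, sigma = 1.0, 0.5
mean = fl_moment(1, mu, sigma)              # should equal Eq. (14), first part
var = fl_moment(2, mu, sigma) - mean**2     # should reproduce Eq. (15)
print(mean, mu + sigma * math.exp(-mu / sigma))                     # equal
print(var, 2 * sigma**2 - sigma**2 * math.exp(-2 * mu / sigma)
           - 2 * mu * sigma * math.exp(-mu / sigma))                # equal
```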

The following result concerning the coefficient of variation connected with the FL model plays an important role in investigating the existence and uniqueness of moment estimators of μ and σ (see Section 3) and can be a useful aid in FL model validation. Its proof is included in the Appendix.

Proposition 2.4.

If Y∼FL(μ,σ) with both μ,σ>0, then the coefficient of variation of Y is
$$ C_{Y} = \frac{\sqrt{\mathbb Var(Y)}}{\mathbb E(Y)} = \frac{\sqrt{2\ -e^{-2\delta} - 2\delta e^{-\delta}}}{\delta +e^{-\delta}}, $$
(16)

where \(\delta=\mu/\sigma\in[0,\infty)\). Moreover, we have \(0<C_{Y}<1\).

2.3 Skewness and kurtosis

Straightforward albeit lengthy calculations [see Liu (2014)] show that the coefficients of skewness and kurtosis of Y∼FL(μ,σ) are given by
$$ \gamma_{Y} = \frac{\mathbb E(Y-\mathbb EY)^{3}}{[\mathbb Var(Y)]^{3/2}} = \frac{3\delta^{2}e^{-\delta}+6\delta e^{-2\delta} + 2 e^{-3\delta}}{\left(2 - e^{-2\delta} - 2 \delta e^{-\delta}\right)^{3/2}} $$
(17)
and
$$ {}\kappa_{Y} = \frac{\mathbb E(Y-\mathbb EY)^{4}}{[\mathbb Var(Y)]^{2}} = \frac{24 - 24 \delta e^{-\delta} -4\delta^{3} e^{-\delta} -12 e^{-2\delta} - 12 \delta^{2} e^{-2\delta} - 12 \delta e^{-3\delta} - 3 e^{-4\delta}}{\left(2 - e^{-2\delta} - 2 \delta e^{-\delta}\right)^{2}}, $$
(18)

respectively, where \(\delta=\mu/\sigma\in[0,\infty)\). Since \(\gamma_{Y}>0\), every FL distribution is skewed to the right. When δ=0 (which occurs when μ=0), then \(\gamma_{Y}=2\) and \(\kappa_{Y}=9\), which are the skewness and the kurtosis of an exponential random variable (to which the FL model reduces in this case).
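A small sketch (ours) transcribing (17) and (18) as functions of δ: at δ=0 it recovers the exponential values (2, 9), while for large δ it approaches the values 0 and 6 of the (symmetric) Laplace distribution.

```python
# Skewness (17) and kurtosis (18) of FL as functions of delta = mu/sigma.
import numpy as np

def fl_skew_kurt(delta):
    """Return (skewness, kurtosis) from Eqs. (17)-(18)."""
    e1, e2, e3, e4 = (np.exp(-k * delta) for k in (1, 2, 3, 4))
    var = 2 - e2 - 2 * delta * e1
    skew = (3 * delta**2 * e1 + 6 * delta * e2 + 2 * e3) / var**1.5
    kurt = (24 - 24*delta*e1 - 4*delta**3*e1 - 12*e2
            - 12*delta**2*e2 - 12*delta*e3 - 3*e4) / var**2
    return skew, kurt

print(fl_skew_kurt(0.0))    # (2.0, 9.0): the exponential case
print(fl_skew_kurt(10.0))   # ~ (0, 6): approaching the Laplace values
```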

2.4 The mean/median/mode inequality

One common rule of thumb states that for unimodal distributions, the mean, the median, and the mode often occur in either alphabetical or reverse-alphabetical order [see, e.g., Dharmadhikari and Joag-Dev (1988)]. As shown in the following result, which is proved in the Appendix, the FL distribution is not an exception in this regard.

Proposition 2.5.

If Y∼FL(μ,σ) then
$$ M(Y)<m(Y)<\mathbb E(Y), $$
(19)

where M(Y), m(Y), and \(\mathbb E(Y)\) are the mode, the median, and the mean of Y, respectively.

In the special case when μ=0, the distribution reduces to the exponential one, and the inequality in (19) turns into 0<σ log2<σ, which is trivially true.

2.5 Truncated distributions and stochastic representation

Here we discuss the folded Laplace distribution truncated above or below μ, and use the resulting distributions to derive a stochastic representation of the FL model. Straightforward calculations show that the PDF of a folded Laplace variable truncated above at μ, \(X=Y|Y\leq \mu\), is given by
$$ h_{1}(y)= \left\{ \begin{array}{ll} \frac{1}{\sigma} \frac{\cosh(y/\sigma)}{ \sinh (\mu/\sigma)}\;& \text{for \(0 \leq y \leq \mu\)} \\ 0\; &\text{otherwise}. \end{array} \right. $$
(20)
On the other hand, a folded Laplace variable truncated below at μ, \(W=Y|Y\geq \mu\), has an exponential distribution with mean σ, shifted by μ units to the right, so its PDF is
$$ h_{2}(y)= \left\{ \begin{array}{ll} \frac{1}{\sigma}e^{-\frac{y-\mu}{\sigma}}\; &\text{for \(y \geq \mu\)}\\ 0\;& \text{for \(y < \mu\)}. \end{array} \right. $$
(21)

We shall skip a routine derivation of the following result, which provides a stochastic representation of an FL random variable in terms of the above two truncated variables.

Proposition 2.6.

If Y∼FL(μ,σ), then
$$ Y \overset{d}= IX+(1-I)W, $$
(22)
where X has the PDF (20), W has the PDF (21), I is an indicator variable given by
$$ I = \left\{ \begin{array}{ll} 1 \; & \,\text{with probability}\,\, p = \frac{1}{2}\left(1-e^{\frac{-2\mu}{\sigma}}\right), \\ 0\; & \, \text{with probability}\,\, q = \frac{1}{2}\left(1+e^{\frac{-2\mu}{\sigma}}\right), \end{array} \right. $$
(23)

and all three variables on the right-hand-side of (22) are mutually independent.
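The representation lends itself to simulation; the sketch below (our code) generates FL variates through the mixture (22), sampling the cosh density (20) by inverting its CDF, sinh(y/σ)/sinh(μ/σ).

```python
# Simulating FL(mu, sigma) through the mixture representation (22).
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n = 1.0, 0.5, 100_000

p = 0.5 * (1.0 - np.exp(-2.0 * mu / sigma))   # P(I = 1) from Eq. (23)
i = rng.uniform(size=n) < p                   # indicator I

# X has PDF (20); its CDF is sinh(y/sigma)/sinh(mu/sigma), hence inversion:
x = sigma * np.arcsinh(rng.uniform(size=n) * np.sinh(mu / sigma))

# W has PDF (21): exponential with mean sigma, shifted by mu.
w = mu + rng.exponential(scale=sigma, size=n)

y = np.where(i, x, w)                         # Y = I*X + (1 - I)*W
print(y.mean())                               # ~ mu + sigma*exp(-mu/sigma)
```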

2.6 The mean residual life

The mean residual life function,
$$ m(t)=\mathbb E(Y-t|Y>t), $$
(24)
is an important concept in a variety of fields, including reliability and insurance, to name just a few [see, e.g., Jeong (2014) and references therein]. To compute (24) for an FL distributed Y, we start with the conditional distribution of Y−t given Y>t, called the excess random variable. In view of Proposition 2.6, it is easy to see that, for t>μ, the excess random variable is simply exponential with mean σ. On the other hand, routine algebra shows that when 0≤t≤μ, the CDF and the PDF of the excess random variable, Y−t|Y>t, are given by
$$ F_{Y-t|Y>t}(y)= \left\{ \begin{array}{ll} \alpha+\beta \sinh \left(\frac{y+t}{\sigma} \right) & \text{for \(0\leq y < \mu-t\)} \\ 1-\gamma e^{\frac{-y}{\sigma}} &\text{for \(y \geq \mu-t\)} \\ \end{array} \right. $$
(25)
and
$$f_{Y-t|Y>t}(y)= \left\{ \begin{array}{ll} \frac{\beta}{\sigma} \cosh \left(\frac{y+t}{\sigma} \right) & \text{for \(0 \leq y < \mu-t\)} \\ \frac{\gamma}{\sigma} e^{\frac{-y}{\sigma}}\; & \text{for \(y \geq \mu-t\)}, \\ \end{array} \right. $$
respectively, where
$$\alpha = 1-\frac{1}{1- e^{-\frac{\mu}{\sigma}} \sinh (t/\sigma)},\,\,\, \beta=\frac{e^{\frac{-\mu}{\sigma}}}{1- e^{-\frac{\mu}{\sigma}} \sinh (t/\sigma)}, \,\,\, \gamma = \frac{e^{-\frac{t}{\sigma}} \cosh (\mu/\sigma)}{1- e^{-\frac{\mu}{\sigma}} \sinh (t/\sigma)}. $$

Straightforward calculations lead to the result below, whose proof shall be omitted.

Proposition 2.7.

If Y∼FL(μ,σ) then the mean residual life function (24) is given by m(t)=σ for t≥μ and
$$ m(t) = \frac{(\mu-t)+\sigma e^{-\frac{\mu}{\sigma}}\cosh \left(\frac{t}{\sigma}\right)}{1-e^{-\frac{\mu}{\sigma}}\sinh \left(\frac{t}{\sigma}\right)} $$
(26)

for 0≤t≤μ, as can be verified by integrating the survival function over [t,∞) and dividing by its value at t.

Remark 2.7.

Note that the function m given above is continuous on [0,∞), with the value of μ+σ exp(−μ/σ) at t=0, which is to be expected, as m(0) is just the mean of Y itself.
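The closed form can be validated against the identity m(t)=∫_t^∞ S(y)dy/S(t), where S=1−G is the survival function; the sketch below (our helper names) performs this check numerically.

```python
# Checking the mean residual life (26) against numerical integration of
# the survival function: m(t) = (integral of S over [t, inf)) / S(t).
import numpy as np
from scipy.integrate import quad

def fl_sf(y, mu, sigma):
    """Survival function S(y) = 1 - G(y) from Eq. (8)."""
    if y < mu:
        return 1.0 - np.exp(-mu / sigma) * np.sinh(y / sigma)
    return np.exp(-y / sigma) * np.cosh(mu / sigma)

def fl_mrl(t, mu, sigma):
    """Closed-form m(t): Eq. (26) for 0 <= t <= mu, and sigma beyond mu."""
    if t >= mu:
        return sigma
    num = (mu - t) + sigma * np.exp(-mu / sigma) * np.cosh(t / sigma)
    return num / (1.0 - np.exp(-mu / sigma) * np.sinh(t / sigma))

mu, sigma, t = 1.0, 0.5, 0.3
numeric = quad(fl_sf, t, np.inf, args=(mu, sigma))[0] / fl_sf(t, mu, sigma)
print(fl_mrl(t, mu, sigma), numeric)   # should agree
```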

2.7 The Lorenz curve

The Lorenz curve, defined as
$$L(y) = \frac{1}{\mathbb E Y}{\int_{0}^{y}}t\cdot g(t)dt, $$
where g is the PDF of the random variable Y, is a standard tool in economics, used to measure social or wealth inequality [see, e.g., Gastwirth (1971)]. When we substitute the PDF of the FL distribution given in (4), we obtain
$$ L(y) = \frac{1}{b} \cdot \left\{ \begin{array}{ll} e^{-\frac{\mu}{\sigma}} \left\{ \sigma+y \sinh (y/\sigma) - \sigma \cosh(y/\sigma) \right\} & \text{for \(0\leq y \le \mu\)} \\ \mu + \sigma e^{-\frac{\mu}{\sigma}} - (\sigma+y) e^{-\frac{y}{\sigma}} \cosh(\mu/\sigma) & \text{for \(y \ge \mu\)}, \\ \end{array} \right. $$
(27)

where \(b=\mu +\sigma e^{\frac {-\mu }{\sigma }}\) is the mean of the FL distribution.

2.8 Entropy

Here we derive Shannon entropy,
$$ H(X) = \mathbb E[-\log g(X)], $$
(28)

of an FL random variable X with PDF g. This is a standard measure of uncertainty, introduced in Shannon (1948). The following result, which is proven in the Appendix, contains relevant details.

Proposition 2.8.

If X∼FL(μ,σ) then
$$ H(X) = \log(2\sigma)+1-\log\left(1+\theta^{2}\right)-\theta\left[\frac{\pi}{2}-2\tan^{-1}\theta\right], $$
(29)

where θ= exp(−μ/σ).

Remark 2.8.

When μ=0, so that X reduces to an exponential variable with mean σ, we obtain H(X)= logσ+1, which is the entropy in this special case.
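As a numerical sanity check of (29), one can integrate −g(x) log g(x) directly over the positive half-line; a sketch (ours, with assumed helper names) follows.

```python
# Numerical check of the entropy formula (29) by direct quadrature.
import numpy as np
from scipy.integrate import quad

def fl_pdf(y, mu, sigma):
    """FL density via the compact form of Remark 2.1 (scalar version)."""
    return np.exp(-max(mu, y) / sigma) * np.cosh(min(mu, y) / sigma) / sigma

def fl_entropy(mu, sigma):
    """Closed form of Proposition 2.8, with theta = exp(-mu/sigma)."""
    th = np.exp(-mu / sigma)
    return (np.log(2 * sigma) + 1 - np.log1p(th**2)
            - th * (np.pi / 2 - 2 * np.arctan(th)))

mu, sigma = 1.0, 0.5
numeric = quad(lambda x: -fl_pdf(x, mu, sigma) * np.log(fl_pdf(x, mu, sigma)),
               0, np.inf)[0]
# The two values should agree; mu = 0 gives 1 + log(sigma), as in Remark 2.8.
print(fl_entropy(mu, sigma), numeric)
```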

Parameter estimation

Here we consider the problem of estimating the parameters μ and σ of the folded Laplace distribution. We shall focus on the method of moments, which is computationally straightforward. Maximum likelihood estimation for this case, which is theoretically and computationally much more involved, is currently under investigation and will be reported elsewhere.

Let \(Y_{1},Y_{2},\dots,Y_{n}\) be independent and identically distributed (IID) random variables that follow the FL(μ,σ) model, and let \(M_{1}=\bar {Y}_{n}=\frac {1}{n}\sum _{i=1}^{n}Y_{i}\) and \(M_{2}=\frac {1}{n}\sum _{i=1}^{n}{Y_{i}^{2}}\) be the first two sample moments. To derive the method of moments estimators (MMEs) of the two parameters, we shall use an equivalent alternative parameterization, where μ is replaced by
$$ \delta=\frac{\mu}{\sigma} \in\,[0,\infty). $$
(30)
When we set the first two moments of an FL distribution (given in Proposition 2.3) equal to the sample moments {M i }, we obtain the following system of two equations in two unknowns:
$$ \begin{aligned} & M_{1}=\sigma\left(\delta+e^{-\delta}\right), \\ & M_{2}=\sigma^{2}\left(\delta^{2}+2\right). \end{aligned} $$
(31)
By squaring the first equation in (31) and taking the ratios of the corresponding sides, we can eliminate σ and obtain a single equation for the parameter δ:
$$ \frac{\delta^{2}+2}{\left(\delta+e^{-\delta}\right)^{2}} = \frac{M_{2}}{{M_{1}^{2}}}. $$
(32)
As shown in the proof of Proposition 3.1 (see below) given in the Appendix, the function
$$ h(\delta) = \frac{\delta^{2}+2}{\left(\delta+e^{-\delta}\right)^{2}}, \,\,\, \delta\in [0,\infty), $$
(33)
that appears on the left-hand-side of (32), is monotonically decreasing in δ with
$$ h(0) = 2 \,\,\, \text{and} \,\,\, {\lim}_{\delta\rightarrow \infty} h(\delta) = 1. $$
(34)
Thus, Eq. (32) admits a unique solution whenever
$$ 1< \frac{M_{2}}{{M_{1}^{2}}} <2. $$
(35)
In turn, under the condition (35), the system of equations (31) has a unique solution given by
$$ \hat{\delta}_{n} = r\left(\frac{M_{2}}{{M_{1}^{2}}} \right), \,\,\, \hat{\sigma}_{n}=\sqrt{\frac{M_{2}}{\left[r\left(\frac{M_{2}}{{M_{1}^{2}}}\right)\right]^{2}+2}}, $$
(36)

where r is the inverse of the function h. The following result summarizes this discussion.

Proposition 3.1.

Let \(Y_{1},\dots,Y_{n}\) be a random sample from an FL(μ,σ) distribution, and let \(M_{1}\) and \(M_{2}\) be the first and the second sample moments based on the \(Y_{i}\), respectively. Then, there exist unique moment estimators of δ=μ/σ and σ, given by (36), whenever the condition (35) is satisfied.

Note that since the sample variance
$${S_{n}^{2}}=\frac{1}{n}\sum_{i=1}^{n}(Y_{i}-\bar{Y}_{n})^{2}=\frac{1}{n}\sum_{i=1}^{n}{Y_{i}^{2}}-\bar{Y}_{n}^{2} = M_{2}-{M_{1}^{2}} $$
satisfies the relation \({S_{n}^{2}}\geq 0\), the left-hand-side inequality in (35) is generally true, unless we have an exceptional case where all the sample values are equal (and \(M_{2} = {M_{1}^{2}}\)). To be consistent with Eq. (32), in this case we set \(\hat {\delta }_{n}=\infty \), leading to \(\hat {\sigma }_{n}=0\). When we re-write the moment Eq. (31) equivalently as
$$ \begin{aligned} & M_{1}=\mu+\sigma e^{\frac{-\mu}{\sigma}} \\ & M_{2}=\mu^{2}+2\sigma^{2} \end{aligned} $$
(37)
and substitute σ=0, we obtain \(\hat {\mu }_{n}=M_{1} = \bar {Y}_{n}\) (this special FL distribution assigns the entire mass to a single point \(\hat {\mu }_{n}\)). Further, the right-hand-side inequality in (35) can be stated as
$$\bar{Y}_{n}^{2}>\frac{1}{n}\sum_{i=1}^{n}{Y_{i}^{2}}-\bar{Y}_{n}^{2}={S_{n}^{2}}, $$
or, equivalently, as \(S_{n}/\bar {Y}_{n}<1\). But this is expected to be true for large n, since the sample coefficient of variation \(S_{n}/\bar {Y}_{n}\) converges to the theoretical one, which is known to be less than 1 (see Proposition 2.4). In the boundary case \(M_{2} = 2{M_{1}^{2}}\) (\(S_{n}/\bar {Y}_{n}=1\)), to be consistent with Eq. (32), we set \(\hat {\delta }_{n}=0\). This should be interpreted as \(\hat {\mu }_{n}=0\), in which case, according to (37), we would set \(\hat {\sigma }=M_{1}\) (so that the resulting FL distribution is exponential). We propose the same interpretation when \(M_{2} > 2{M_{1}^{2}}\). With these practical conventions, the MMEs of μ and σ always exist and are unique.

Remark 3.1.

In view of (30), the MME of μ is \(\hat {\mu }_{n}= \hat {\delta }_{n}\hat {\sigma }_{n}\).

Remark 3.2.

When one of the two parameters is known, the MME of the other one can be easily obtained from the first moment equation in (31). Clearly, when δ is given, then the parameter σ is uniquely estimated as
$$\hat{\sigma}_{n} = \frac{M_{1}}{\delta+e^{-\delta}}. $$

Alternatively, with a known σ, the MME of δ is the unique value for which \(v(\delta)=\delta+e^{-\delta}=M_{1}/\sigma\), provided that \(M_{1}/\sigma\geq 1\). This is easily deduced from the properties of the function v, which is continuous and monotonically increasing on the interval [0,∞), with v(0)=1 and v(δ)→∞ as δ→∞. The condition \(M_{1}/\sigma = \bar {Y}_{n}/\sigma \geq 1\) is expected to hold when n is large, since by the law of large numbers \(\bar {Y}_{n}\) converges to the mean \(\mathbb E Y\) given by the right-hand-side of the first equation in (31), which is greater than or equal to σ. In the boundary case \(M_{1}/\sigma=1\) we have \(\hat {\delta }_{n}=0\) (so that also \(\hat {\mu }_{n}=0\)), indicating an exponential distribution. One can follow the same interpretation when \(M_{1}/\sigma < 1\).

Remark 3.3.

To find the MMEs of δ and σ in practice, one can use the standard Newton-Raphson algorithm or utilize a statistical package (such as R) to compute the unique zero of the (well-behaved) function \(h(\delta) - M_{2}/{M_{1}^{2}}\), \(\delta\in[0,\infty)\); a sketch of such a computation is given below.
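The sketch below is ours (the helper name fl_fit_mme is an assumption); it uses scipy's bracketing root-finder brentq in place of Newton-Raphson and applies the boundary conventions discussed above.

```python
# Method-of-moments fit for FL(mu, sigma), following Remark 3.3.
import numpy as np
from scipy.optimize import brentq

def fl_fit_mme(y):
    """Return (mu_hat, sigma_hat) from the sample moments of y."""
    y = np.asarray(y, dtype=float)
    m1, m2 = y.mean(), np.mean(y**2)
    ratio = m2 / m1**2
    if ratio >= 2.0:                 # boundary convention: exponential fit
        return 0.0, m1
    if ratio <= 1.0:                 # degenerate sample: point mass at m1
        return m1, 0.0
    h = lambda d: (d**2 + 2.0) / (d + np.exp(-d))**2 - ratio
    hi = 1.0
    while h(hi) > 0.0:               # h decreases from 2 toward 1, so the
        hi *= 2.0                    # zero is bracketed by expanding hi
    delta = brentq(h, 0.0, hi)       # unique zero, cf. Eq. (32)
    sigma = np.sqrt(m2 / (delta**2 + 2.0))
    return delta * sigma, sigma      # mu_hat = delta_hat * sigma_hat

# Check on simulated data with mu = 1, sigma = 0.5:
rng = np.random.default_rng(0)
data = np.abs(0.5 * (rng.exponential(size=50_000)
                     - rng.exponential(size=50_000)) + 1.0)
print(fl_fit_mme(data))              # ~ (1.0, 0.5)
```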

Standard large sample theory results (see, e.g., Rao 1973) show that the estimators (36) are consistent and asymptotically normal.

Proposition 3.2.

The vector \((\hat {\delta }_{n}, \hat {\sigma }_{n})\) of MMEs given in Proposition 3.1 is

(i) consistent;

(ii) asymptotically normal, that is \(\sqrt {n}[ (\hat {\delta }_{n}, \hat {\sigma }_{n}) - (\delta,\sigma)]\) converges in distribution to a bivariate normal distribution with the (vector) mean zero and the covariance matrix
$$ \Sigma_{MME}= \frac{1}{\left(e^{-\delta}[2+\delta+\delta^{2}]-2\right)^{2}} \left[ \begin{array}{cc} w_{11} & w_{12}\\ w_{12} & w_{22} \end{array} \right], $$
(38)
where
$$ \begin{array}{rcl} w_{11} & = & 8-e^{-\delta}\delta^{5} - 10e^{-\delta}\delta^{3} - 14e^{-\delta}\delta - 4e^{-2\delta}\delta^{2} - 7e^{-2\delta} +5\delta^{2},\\ w_{12} & = & \frac{\sigma}{2}\left\{ e^{-2\delta}[ 2+ 8\delta +2 \delta^{2} +\delta^{3} + \delta^{4} ] + e^{-\delta} [ 2\delta^{4} +14\delta^{2} + 2\delta-2 ] -10 \delta\right\},\\ w_{22} & = & \sigma^{2} \left\{ 5 - e^{-2\delta}[ \delta^{3} - \delta^{2} - 4\delta -5 ] - e^{-\delta} [ \delta^{3} +4\delta + 10 ]\right\}. \end{array} $$
(39)

Remark 3.4.

The MMEs \((\hat {\mu }_{n}, \hat {\sigma }_{n})\) of the parameters μ and σ, where \(\hat {\mu }_{n}= \hat {\delta }_{n}\hat {\sigma }_{n}\), are also consistent and asymptotically normal, with \(\sqrt {n}[ (\hat {\mu }_{n}, \hat {\sigma }_{n}) - (\mu,\sigma)]\) converging in distribution to a bivariate normal distribution with the (vector) mean zero and the covariance matrix
$$ \tilde{\Sigma}_{MME}= \frac{\sigma^{2}}{\left(e^{-\delta}[2+\delta+\delta^{2}]-2\right)^{2}} \left[ \begin{array}{cc} \tilde{w}_{11} & \tilde{w}_{12}\\ \tilde{w}_{12} & \tilde{w}_{22} \end{array} \right], $$
(40)
where
$$ \begin{array}{rcl} \tilde{w}_{11} & = & e^{-2\delta}\left(2\delta^{4} + 6\delta^{3} + 9\delta^{2} + 2\delta -7\right) - e^{-\delta}\left(8\delta^{2} -16\delta \right) +8,\\ \tilde{w}_{12} & = & -\frac{1}{2} e^{-\delta} \left\{ e^{-\delta}\left(\delta^{4} -3\delta^{3} -10\delta^{2} -18\delta-2 \right) -6\delta^{2} +18\delta + 2\right\},\\ \tilde{w}_{22} & = & -e^{-2\delta}\left(\delta^{3} -\delta^{2} -4\delta -5\right) - e^{-\delta}\left(\delta^{3} +4\delta +10\right) +5. \end{array} $$
(41)

The proof of the above is similar to that of Proposition 3.2, and shall be omitted.
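For later use with the data examples, the covariance (40)-(41) can be transcribed directly to obtain approximate standard errors; the sketch below (our helper fl_mme_se; valid for δ>0) reproduces, at the estimates reported in Table 1 below, standard errors of about 0.001.

```python
# Approximate standard errors of the MMEs, transcribing Eqs. (40)-(41).
import numpy as np

def fl_mme_se(mu, sigma, n):
    """Standard errors of (mu_hat, sigma_hat) at the fit, sample size n."""
    d = mu / sigma
    e1, e2 = np.exp(-d), np.exp(-2 * d)
    denom = (e1 * (2 + d + d**2) - 2)**2
    w11 = e2 * (2*d**4 + 6*d**3 + 9*d**2 + 2*d - 7) \
          - e1 * (8*d**2 - 16*d) + 8
    w12 = -0.5 * e1 * (e1 * (d**4 - 3*d**3 - 10*d**2 - 18*d - 2)
                       - 6*d**2 + 18*d + 2)
    w22 = -e2 * (d**3 - d**2 - 4*d - 5) - e1 * (d**3 + 4*d + 10) + 5
    cov = (sigma**2 / denom) * np.array([[w11, w12], [w12, w22]])
    return np.sqrt(np.diag(cov) / n)

print(fl_mme_se(1.002, 0.038, 1415))   # ~ (0.001, 0.001), cf. Table 1
```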

Illustrative data examples

In this section we present an application of the folded Laplace distribution to modeling historical West Texas Intermediate (WTI) and Brent oil prices. The WTI data consist of 1416 daily observations of the WTI spot price FOB (dollars per barrel), collected from January 3, 1986 to February 15, 2003. The data source is the US Department of Energy via wikiposit.org (http://wikiposit.org/uid?DOE.RWTC). The Brent oil prices, taken from Invest Excel (http://investexcel.net/), cover the period from January 1, 2009 to January 1, 2012, with a total of 778 data points (dollars per barrel).

We work with the daily returns \(Y_{k}=S_{k}/S_{k-1}\), where \(S_{k}\) represents the oil price on day k. Clearly, the values are positive, with \(n_{1}=1415\) and \(n_{2}=777\) daily returns derived from the WTI and Brent daily oil prices, respectively. Our goal is to model the oil price returns using the FL(μ,σ) distribution. We apply the method of moments discussed in Section 3 to estimate the parameters μ and σ of the FL model. The results of the estimation are summarized in Table 1, which contains the MMEs along with their standard errors; the latter are computed from the asymptotic distribution given by (40) and (41).
Table 1

Estimation of the WTI and the Brent oil data. The standard errors (SE) that appear next to the estimates are approximated from the asymptotic distributions of the estimators

Data set            Sample size n   1st moment   2nd moment   \(\hat {\mu }\) (SE)   \(\hat {\sigma }\) (SE)
WTI oil prices      1415            1.002        1.001        1.002 (0.001)          0.038 (0.001)
Brent oil prices    777             1.001        1.003        1.001 (0.001)          0.015 (0.001)

Figure 2 a and b, respectively, show histograms of the WTI and the Brent oil data, along with the theoretical FL PDFs (with estimated parameters).
Fig. 2

Histogram of the WTI oil data along with the theoretical folded Laplace PDF, where μ=1.002 and σ=0.038 (panel a) and histogram of the Brent oil data along with the theoretical folded Laplace PDF, where μ=1.001 and σ=0.015 (panel b)

Next, we produced Q-Q plots of the WTI and the Brent oil data, obtaining nearly straight lines [see Fig. 3 a and b, respectively]. Overall, the FL model appears to fit both the WTI and the Brent oil price data reasonably well. These examples illustrate the modeling potential of the FL distribution in situations where the underlying phenomena are restricted to positive values and the empirical distributions resemble the Laplace distribution, with its sharp peak at the mode.
Fig. 3

Quantile (Q-Q) plot of the WTI oil price data against fitted FL distribution, based on n=1415 daily returns (panel a) and quantile (Q-Q) plot of the Brent oil price data against fitted FL distribution, based on n=777 daily returns (panel b)
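To summarize the workflow, the sketch below (ours) goes from a hypothetical price series to fitted parameters and Q-Q coordinates; the `prices` array is a stand-in for a series such as the WTI or Brent quotes, and fl_fit_mme and fl_quantile are the helpers sketched in the earlier sections.

```python
# End-to-end sketch: prices -> daily returns -> MME fit -> Q-Q coordinates.
# Assumes fl_fit_mme and fl_quantile from the earlier sketches are in scope.
import numpy as np

prices = np.array([28.0, 28.4, 28.1, 27.9, 28.3, 28.2])  # hypothetical data
returns = prices[1:] / prices[:-1]                        # Y_k = S_k / S_{k-1}

mu_hat, sigma_hat = fl_fit_mme(returns)

# Q-Q coordinates: sorted returns against fitted FL quantiles.
q = (np.arange(1, returns.size + 1) - 0.5) / returns.size
theoretical = fl_quantile(q, mu_hat, sigma_hat)
empirical = np.sort(returns)
# A near-straight line in the (theoretical, empirical) scatter indicates
# a good fit, as in Fig. 3.
```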

Appendix

Here we collect selected proofs of the results presented above, which are preceded by a technical lemma.

Lemma 5.1.

For c≥0 let
$$I_{c} = {\int_{0}^{c}} \left(e^{x}+e^{-x}\right)\log \left(e^{x}+e^{-x}\right) dx. $$
Then
$$I_{c} = \left(e^{c}-e^{-c}\right)\log \left(e^{c}+e^{-c}\right) - \left(e^{c}-e^{-c}\right) +4\tan^{-1}\left(e^{c}\right)-\pi. $$

Proof.

Standard integration by parts leads to
$$I_{c} = \left(e^{c}-e^{-c}\right)\log \left(e^{c}+e^{-c}\right) - {\int_{0}^{c}} \left[\left(e^{x}-e^{-x}\right)^{2} / \left(e^{x}+e^{-x}\right)\right] dx. $$
Upon the substitution u= exp(x), the integral on the right-hand-side above becomes
$${\int_{0}^{c}} \left[\left(e^{x}-e^{-x}\right)^{2} / \left(e^{x}+e^{-x}\right)\right] dx = \int_{1}^{e^{c}} \left(1-\frac{4}{1+u^{2}}+\frac{1}{u^{2}}\right)du. $$
Subsequent straightforward calculations produce the desired result.

Proof of Proposition 2.4.

The condition \(C_{Y}<1\) is equivalent to \(\sqrt {\mathbb Var(Y)}< \mathbb E(Y)\). When we substitute the expressions for the mean and the variance, then, after some algebra, we end up with the inequality
$$x^{2}-2+2 e^{-2x}+4x e^{-x} > 0\,\,\, \text{for all \(x>0\)}, $$
where \(x=\mu/\sigma\in(0,\infty)\). Further algebra shows that the above inequality can be formulated as
$$ x+2 e^{-x} > \sqrt{2\left(1+e^{-2x}\right)} \,\,\, \text{for all \(x>0\)}. $$
(42)
By the well-known relation between arithmetic and geometric means we have
$$\sqrt{2\left(1+e^{-2x}\right)} < \frac{2+\left(1+e^{-2x}\right)}{2} = \frac{3+e^{-2x}}{2} \,\,\, \text{for all \(x>0\)}. $$
Thus, relation (42) will be established if we can show that
$$\frac{3+e^{-2x}}{2} < x+2 e^{-x} \,\,\, \text{for all \(x>0\)}, $$
or, equivalently,
$$ v(x) = 2x+4e^{-x} -3 - e^{-2x} > 0 \,\,\, \text{for all \(x>0\)}. $$
(43)

Note that the function v defined in (43) is continuous and \(v'(x)=2\left(1-e^{-x}\right)^{2}>0\) for x>0, showing that v is increasing on the interval (0,∞). Since v(0)=0, the result follows.

Proof of Proposition 2.5.

First, note that, since σ>0 and exp(−μ/σ)>0, we have
$$m(Y)=\sigma \log\left(e^{\frac{\mu}{\sigma}}+e^{\frac{-\mu}{\sigma}}\right)>\sigma \log\left(e^{\frac{\mu}{\sigma}}\right) =\sigma \cdot \frac{\mu}{\sigma}=\mu=M(Y), $$
so it remains to prove that the mean exceeds the median. This is equivalent to
$$\frac{\mu}{\sigma}+e^{\frac{-\mu}{\sigma}}>\log\left(e^{\frac{\mu}{\sigma}}+e^{\frac{-\mu}{\sigma}}\right). $$
To see this, we let \(t=\frac {\mu }{\sigma }>0\), so that the above inequality becomes
$$h(t)>0 \,\,\, \text{for all} \,\,\, t>0, $$
where \(h(t)=t+e^{-t}-\log\left(e^{t}+e^{-t}\right)\). Observe that
$$h'(t)=-\frac{\left(e^{-t}-1\right)^{2}}{e^{t}+e^{-t}}<0, $$
so that the function h is decreasing on (0,∞). Thus, our inequality will follow if we can show that
$$ {\lim}_{t\rightarrow \infty}h(t)\ge 0. $$
(44)

However, the function h can be written as \(h(t)=e^{-t}-\log\left[1+e^{-2t}\right]\), which clearly converges to zero at infinity. Thus, the relation (44) holds and the result follows.

Proof of Proposition 2.8.

When we apply the definition of Shannon entropy, given in (28), to an FL(μ,σ) random variable X with PDF g given in (4), we obtain
$$H(X) = -\int_{0}^{\mu} g(x)\log g(x)dx - \int_{\mu}^{\infty} g(x)\log g(x)dx = I + II. $$
Standard algebra leads to
$$I = \left[ \log(2\sigma)+\frac{\mu}{\sigma}\right] \int_{0}^{\mu} g(x)dx-\frac{1}{2\sigma}e^{-\frac{\mu}{\sigma}} \int_{0}^{\mu} \left(e^{\frac{x}{\sigma}}+e^{-\frac{x}{\sigma}}\right)\log \left(e^{\frac{x}{\sigma}}+e^{-\frac{x}{\sigma}}\right) dx, $$
while
$${}II= \log(2\sigma)\int_{\mu}^{\infty} g(x)dx - \log\left(e^{\frac{\mu}{\sigma}} + e^{-\frac{\mu}{\sigma}} \right) \int_{\mu}^{\infty} g(x)dx + \frac{1}{2\sigma} \left(e^{\frac{\mu}{\sigma}} + e^{-\frac{\mu}{\sigma}} \right)\int_{\mu}^{\infty} \frac{x}{\sigma}e^{-\frac{x}{\sigma}} dx, $$
so that, after further algebra, we obtain
$$ {}I +II = \log(2\sigma) + \frac{\mu}{\sigma} G(\mu) - \log\left(e^{\frac{\mu}{\sigma}} + e^{-\frac{\mu}{\sigma}} \right)[1-G(\mu)] +\frac{1}{2\sigma}\left(e^{\frac{\mu}{\sigma}} + e^{-\frac{\mu}{\sigma}} \right) A - \frac{1}{2\sigma} e^{-\frac{\mu}{\sigma}} B. $$
(45)
Here, G is the F L(μ,σ) CDF, so that G(μ)=[1− exp(−2μ/σ)]/2, while
$$A = \int_{\mu}^{\infty} \frac{x}{\sigma}e^{-\frac{x}{\sigma}} dx = (\mu+\sigma)e^{-\frac{\mu}{\sigma}} $$
and
$$B = \int_{0}^{\mu} \left(e^{\frac{x}{\sigma}}+e^{-\frac{x}{\sigma}}\right)\log \left(e^{\frac{x}{\sigma}}+e^{-\frac{x}{\sigma}}\right) dx = \sigma \int_{0}^{\frac{\mu}{\sigma}} \left(e^{y}+e^{-y}\right)\log \left(e^{y}+e^{-y}\right) dy. $$

When we compute B using Lemma 5.1, and substitute the resulting expression, along with the A and G(μ) given above, into (45), we obtain the result after straightforward algebra. This completes the proof.

Proof of Proposition 3.1.

It is enough to show that the function h defined in (33) is monotonically decreasing on the interval (0,∞) and satisfies the conditions given in (34). Indeed, it is obvious that in this case the MME of δ is as specified in (36), which in turn leads to the MME of σ, obtained by solving either one of the equations in (31) with δ replaced by \(\hat {\delta }_{n}\). Since the values of h at the boundary of its domain are obtained easily, we consider the issue of monotonicity. Straightforward calculations show that the derivative of h is
$$\frac{d}{d\delta}h(\delta) = \frac{2\left[ e^{-\delta}\left(2+\delta+\delta^{2}\right)-2\right] }{\left(\delta+e^{-\delta}\right)^{3}}. $$
Simple algebra shows that this quantity is negative for each \(\delta\in(0,\infty)\) if and only if we have
$$ v(\delta) = 2 e^{\delta} - 2 - \delta - \delta^{2} > 0, \,\,\, \delta \in (0,\infty). $$
(46)
Since the function v above is continuous on [0,∞) and differentiable on (0,∞) with v(0)=0, it is enough to prove that the derivative \(v'(\delta)\) is strictly positive for each \(\delta\in(0,\infty)\). It is easy to see that the latter condition is equivalent to
$$e^{\delta} > \delta +\frac{1}{2}, \,\,\, \delta \in (0,\infty), $$
which is known to be true. This completes the proof.

Proof of Proposition 3.2.

Write the estimators as
$$ (\hat{\delta}_{n}, \hat{\sigma}_{n}) = H(\overline{Y}_{n}, \overline{X}_{n}) = (H_{1}(\overline{Y}_{n}, \overline{X}_{n}), H_{2}(\overline{Y}_{n}, \overline{X}_{n})), $$
(47)
where
$$H_{1}(y_{1},y_{2}) = r\left(\frac{y_{2}}{{y_{1}^{2}}} \right), \,\,\, H_{2}(y_{1},y_{2}) =\sqrt{\frac{y_{2}}{\left[r\left(\frac{y_{2}}{{y_{1}^{2}}}\right)\right]^{2}+2}} $$
and r(·) is the inverse of h. The quantity \( \overline {X}_{n}\) in (47) is the sample mean of \(X_{i}={Y_{i}^{2}}\), i=1,…,n. To prove consistency, apply the law of large numbers to the sequence \(Z_{i}=(Y_{i}, {Y_{i}^{2}})'\) and conclude that the sample mean \(\overline {Z}_{n}\) converges in probability to the population mean
$$m_{Z}=\mathbb E(Z_{i}) = \left(\sigma\left(\delta+e^{-\delta}\right), \sigma^{2}\left(\delta^{2}+2\right)\right)'. $$
Consequently, by the continuous mapping theorem, the sequence (47) converges in probability to \(H(m_{Z})=(\delta,\sigma)\). Next, by the classical multivariate central limit theorem, we have the convergence in distribution \(\sqrt {n} (\overline {Z}_{n} - m_{Z}) \stackrel {d}{\rightarrow } \mathrm {N}(0,\Sigma)\), where the right-hand-side denotes the bivariate normal distribution with mean vector zero and covariance matrix
$$\Sigma = \left[ \begin{array}{cc} \mathbb V ar(Y_{i}) & \mathbb C ov(Y_{i}, {Y_{i}^{2}})\\ \mathbb C ov(Y_{i}, {Y_{i}^{2}}) & \mathbb V ar({Y_{i}^{2}}) \end{array} \right]. $$
A straightforward calculation, facilitated by Proposition 2.3, shows that
$$\Sigma = \left[ \begin{array}{cc} \sigma^{2}\left(2-e^{-2\delta}-2\delta e^{-\delta}\right) & \sigma^{3}\left(e^{-\delta}(4-\delta^{2}) +4\delta \right) \\ \sigma^{3}\left(e^{-\delta}(4-\delta^{2}) +4\delta \right) & \sigma^{4}\left(8\delta^{2} +20 \right) \end{array} \right]. $$
Standard large sample theory (see, e.g., Rao 1973) leads to the conclusion that, as n→∞, the variables
$$\sqrt{n} (H(\overline{Z}_{n}) - H(m_{Z})) = \sqrt{n}[ (\hat{\delta}_{n}, \hat{\sigma}_{n})' - (\delta,\sigma)'] $$
converge in distribution to a bivariate normal vector with mean vector zero and covariance matrix \(\Omega=D\Sigma D'\), where
$$D = \left[ \left. \frac{\partial H_{i}}{\partial y_{j}}\right|_{(y_{1},y_{2})=m_{Z}}\right]_{i,j=1}^{2} $$
is the matrix of partial derivatives of the vector-valued function H. A rather lengthy calculation yields
$$D = \frac{1}{e^{-\delta}[2+\delta+\delta^{2}]-2} \left[ \begin{array}{cc} -\frac{\delta^{2}+2}{\sigma} & \frac{\delta+e^{-\delta}}{2\sigma^{2}}\\ \delta & -\frac{1-e^{-\delta}}{2\sigma} \end{array} \right], $$
which, after more algebra, produces the asymptotic covariance matrix (38). This concludes the proof.

Declarations

Acknowledgments

The authors thank the associate editor and two anonymous referees for their helpful remarks and information about reference (Cooray 2008), of which we were unaware. This paper was written while the second author was on a sabbatical leave, and the assistance provided by the University of Nevada, Reno is gratefully acknowledged. Kozubowski’s research was also partially funded by the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no 318984 - RARE.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Health and Human Services, State of Nevada
(2)
Department of Mathematics and Statistics, University of Nevada

References

  1. Asai, M, McAleer, M: Asymmetric multivariate stochastic volatility. Econ Rev. 25, 453–473 (2006).
  2. Berenhaut, KS, Bergen, LD: Stochastic orderings, folded beta distributions and fairness in coin flips. Statist. Probab. Lett. 81(6), 632–638 (2011).
  3. Bohannon, RG, Gardner, JV, Sliter, RW: Holocene to Pliocene tectonic evolution of the region offshore of the Los Angeles urban corridor, Southern California. Tectonics. 23(1), TC1016 (2004).
  4. Brazauskas, V, Kleefeld, A: Folded and log-folded-t distributions as models for insurance loss data. Scand. Actuar. J. 1, 59–74 (2011).
  5. Brazauskas, V, Kleefeld, A: Authors’ reply to ’Letter to the Editor regarding folded models and the paper by Brazauskas and Kleefeld (2011)’. Scand. Actuar. J. 8, 753–757 (2014).
  6. Chakraborty, AK, Chatterjee, M: On multivariate folded normal distribution. Sankhya. 75, 1–15 (2013).
  7. Cooray, K: Statistical modeling of skewed data using newly formed parametric distributions (2008). PhD Dissertation, University of Nevada, Las Vegas.
  8. Cooray, K, Gunasekera, S, Ananda, MMA: The folded logistic distribution. Comm. Statist. Theory Methods. 35(1-3), 385–393 (2006).
  9. Dharmadhikari, SW, Joag-Dev, K: Unimodality, Convexity, and Applications. Academic Press, New York (1988).
  10. Elandt, RC: The folded normal distribution: Two methods of estimating parameters from moments. Technometrics. 3, 551–562 (1961).
  11. Gastwirth, JL: A general definition of the Lorenz curve. Econometrica. 39, 1037–1039 (1971).
  12. Gui, W, Chen, P, Wu, H: A folded normal slash distribution and its applications to non-negative measurements. J. Data Sci. 11(2), 231–247 (2013).
  13. Jeong, JH: Statistical Inference on Residual Life. Springer, New York (2014).
  14. Johnson, NL: The folded normal distribution: Accuracy of estimation by maximum likelihood. Technometrics. 4, 249–256 (1962).
  15. Johnson, NL: Cumulative sum control charts for folded normal distribution. Technometrics. 5(4), 451–458 (1963).
  16. Johnson, NL, Kotz, S, Balakrishnan, N: Continuous Univariate Distributions. 2nd Ed., Vol. 1. John Wiley & Sons, New York (1994).
  17. Johnson, NL, Kotz, S, Balakrishnan, N: Continuous Univariate Distributions. 2nd Ed., Vol. 2. John Wiley & Sons, New York (1995).
  18. Jung, S, Foskey, M, Marron, JS: Principal arc analysis on direct product manifolds. Ann. Appl. Statist. 5(1), 578–603 (2011).
  19. Kim, HJ: On the ratio of two folded normal distributions. Comm. Statist. Theory Methods. 35, 965–977 (2006).
  20. Kotz, S, Kozubowski, TJ, Podgórski, K: The Laplace Distribution and Generalizations: A Revisit with Applications to Communications, Economics, Engineering, and Finance. Birkhäuser, Boston (2001).
  21. Kozubowski, TJ, Podgórski, K: Log-Laplace distributions. Int. Math. J. 3, 467–495 (2003a).
  22. Kozubowski, TJ, Podgórski, K: A log-Laplace growth rate model. Math. Sci. 28, 49–60 (2003b).
  23. Leone, FC, Nelson, LS, Nottingham, RB: The folded normal distribution. Technometrics. 3, 543–550 (1961).
  24. Liao, MY: Economic tolerance design for folded normal data. Intern. J. Production Res. 48(14), 4123–4237 (2010).
  25. Lin, HC: The measurement of a process capability for folded normal process data. Int. J. Adv. Manuf. Technol. 24, 223–228 (2004).
  26. Lin, PC: Application of the generalized folded-normal distribution to the process capability measures. Intern. J. Adv. Manufacturing Tech. 26(7–8), 825–830 (2005).
  27. Liu, Y: A folded Laplace distribution. Thesis, University of Nevada (2014).
  28. Nadarajah, S, Bakar, SAA: New folded models for the log-transformed Norwegian fire claim data. Comm. Statist. Theory Methods. 44(20), 4408–40 (2015).
  29. Nadarajah, S, Kotz, S: Moments of the folded logistic distribution. Progr. Natur. Sci. (English Ed.) 17(6), 696–697 (2007).
  30. Naulin, V: Electromagnetic transport components and sheared flows in drift-Alfvén turbulence. Phys. Plasma. 10, 4016–4028 (2003).
  31. Nelson, LS: The folded normal distribution. J. Qual. Technol. 12(4), 236 (1980).
  32. Piepho, HP: The folded exponential transformation for proportions. The Statistician. 52(4), 575–589 (2003).
  33. Porzio, GC, Ragozini, G: On the stochastic ordering of folded binomials. Statist. Probab. Lett. 79(9), 1299–1304 (2009).
  34. Psarakis, S, Panaretos, J: The folded t distribution. Comm. Statist. Theory Methods. 19(7), 2717–2734 (1990).
  35. Psarakis, S, Panaretos, J: On some bivariate extensions of the folded normal and folded t distributions. J. Appl. Statist. Sci. 10(2), 119–135 (2001).
  36. Rao, CR: Linear Statistical Inference and its Applications. Wiley, New York (1973).
  37. Rizvi, MH: Some selection problems involving folded normal distribution. Technometrics. 13, 355–369 (1971).
  38. Scollnik, D: Regarding folded models and the paper by Brazauskas and Kleefeld (2011). Scand. Actuar. J. 3, 278–181 (2014).
  39. Sinha, SK: Folded normal distribution - a Bayesian approach. J. Indian Statist. Assoc. 21, 31–34 (1983).
  40. Shannon, CE: A mathematical theory of communication. Bell Syst Tech. J. 27(3), 379–423 (1948).
  41. Stracener, JT: An investigation of the doubly-folded bivariate normal distribution (1973). PhD Dissertation, Southern Methodist University.
  42. Subbotin, MT: On the law of frequency of errors. Mat. Sb. 31, 296–301 (1923).
  43. Sundberg, RM: On estimation and testing for the folded normal distribution. Comm. Stat. 3, 55–72 (1974).
  44. Tsagris, M, Beneki, C, Hassani, H: On the folded normal distribution. Mathematics. 2, 12–28 (2014).
  45. Yadavalli, VSS, Singh, N: Determination of reliability density function when the failure rate is a random variable. Microelectron. Reliab. 35(4), 699–701 (1995).

Copyright

© Liu and Kozubowski. 2015