
The linearly decreasing stress Weibull (LDSWeibull): a new Weibull-like distribution

Abstract

Motivated by an engineering pullout test applied to a steel strip embedded in earth, we show how the resulting linearly decreasing force leads naturally to a new distribution, if the force under constant stress is modeled via a three-parameter Weibull. We term this the LDSWeibull distribution, and show that inference on the parameters of the underlying Weibull can be made upon collection of data from such pullout tests. Various classical finite-sample and asymptotic properties of the LDSWeibull are studied, including existence of moments, distribution of extremes, and maximum likelihood based inference under different regimes. The LDSWeibull is shown to have many similarities with the Weibull, but does not suffer from the problem of having an unbounded likelihood function under certain parameter configurations. We demonstrate that the quality of its fit can also be very competitive with that of the Weibull in certain applications.

Introduction

Mechanically stabilized earth is a method of constructing vertical retaining walls which is often seen in overpasses in populated metropolitan areas where space is at a premium. It consists of reinforcements which are buried in soil in layers. These reinforcements are attached to a vertical facing wall. The types of reinforcements vary, but are generally classified as either inextensible (steel) or extensible (polymeric). Of interest here are the steel reinforcements which are generally flat steel strips, flat steel strips with ribs on them, or welded wire mats which look like a ladder or a grid. The strip reinforcements are generally 50 mm (2 inches) wide and 4 mm (0.16 inches) thick.

Consider the case of the smooth steel strip. These types of reinforcements are not generally used in construction, but are often studied in the laboratory. If a smooth steel strip were used, while in service the stress that it would be subject to would be equal along its entire length (nominally, assuming a constant soil pressure). To establish the serviceability of these reinforcements, they are subjected to what is known as a pullout test. That is, they are embedded in backfill and an axial force is applied to the head of the reinforcement. A frictional stress is generated along the reinforcement and soil interface, and this frictional stress is cumulative so that the stress at the head of the reinforcement is equal to the total frictional stress that the entire strip is experiencing, while the stress at the middle is half of that, and so on. This results in a continuous, linearly decreasing force within the reinforcement from the head to the tail. This is much like the linkage between two cars in a train, which must withstand only the stress placed upon it by the cars behind. Thus if a train is being pulled by only locomotives in the front, the linkages between cars at the front of the train are subjected to much more stress than those near the rear of the train. (This train analogy is apt because ribbed strip or the wire mat reinforcements will largely behave in this manner.)

The data gathered from pullout tests can be used to estimate the survival distribution of reinforcements under the conditions of the pullout test, but how can it be used to estimate the survival distribution under actual service conditions?

The Weibull distribution, named for Waloddi Weibull, was popularized by Weibull in his papers from 1939 through 1961, the key paper being (Weibull 1951). It has found wide applicability in engineering practice. In his work, Weibull was studying the strength of materials, but the distribution actually appeared somewhat earlier than that in the late 1920’s in the study of extreme values; see (Rinne 2009) for a thorough review. (It should be noted that Weibull was unaware of this earlier work and derived his distribution independently.) In particular, it arises as the minimum (or maximum) of a random sample with support that is bounded below (for the minimum) or above (for the maximum). The old proverb is that the strength of a chain is equal to the strength of its weakest link (the minimum). The proverb may also be applied to the strength of materials in that the strength of the material is equal to the strength of its weakest point. So it is no surprise that the Weibull distribution arises in the study of the strength of materials and has found wide applicability.

Suppose that it is reasonable to assume that a smooth steel strip reinforcement has a Weibull survival distribution were it exposed to a constant stress along its length. It is well known that the minimum of independent and identically distributed (iid) Weibull random variables has a Weibull distribution. That is, suppose that Y1,Y2,…,Yn are iid Weibull with shape β, location/threshold μ, and scale σ under the following parametrization for the cumulative distribution function (cdf):

$$ F(y;\beta, \mu, \sigma) = 1-\exp\left\{-[(y-\mu)/\sigma]^{\beta}\right\}, \qquad y>\mu,\ \beta>0,\ \sigma>0,\ \mu>0. $$
(1)

This 3-parameter Weibull will be referred to as Weibull (μ,β,σ). Then Y(1)=min(Y1,Y2,…,Yn) has cdf given by:

$$F_{1}(y;\beta, \mu, \sigma) = 1-\left\{1-F(y;\beta, \mu, \sigma)\right\}^{n} = 1-\exp\left\{-\left[n^{1/\beta}(y-\mu)/\sigma\right]^{\beta}\right\}. $$

That is, Y(1) is Weibull with shape β, location μ, and scale \(\sigma/n^{1/\beta}\).

Consider now a continuous system of length L that is viewed as being composed of n independent “links” of equal length. Assume that the strength of the entire system is Weibull (μ,β,σ), and that the strengths of the individual links are also Weibull. Then each link must have a Weibull \((\mu,\beta,\sigma n^{1/\beta})\) distribution. Note that this requires that as the number of links, n, increases, the scale increases in a corresponding fashion. That is, shorter links are stronger links (stochastically).

One end is denoted the “head” (location 0) and the other the “tail” (location L). The head is exposed to a stress S0, which decreases linearly along the system to 0 at the tail. The stress at location l is thus \(S_{l}=S_{0}\left (1-\frac {l}{L}\right)\). If we view the system as before, that is, as having Weibull strength and as being composed of n independent “links” whose strengths are also Weibull, what is the reliability of the system under these conditions as a function of S0?

Suppose that we have Y1,Y2,…,Yn that are iid Weibull \((\mu,\beta,\sigma n^{1/\beta})\). The system reliability is given by:

$$ \begin{aligned} R(S_{0}) &= P(\text{All segments survive})\\ &= P\left(Y_{1}>S_{0}\left[1-\frac{0}{n}\right], Y_{2}>S_{0}\left[1-\frac{1}{n}\right], \ldots, Y_{n}>S_{0}\left[1-\frac{n-1}{n}\right]\right)\\ &= \prod_{i=0}^{n-1}P\left(Y_{i+1}>S_{0}\left[1-\frac{i}{n}\right]\right).\\ \end{aligned} $$
(2)

Note that if \(S_{0}\left [1-\frac {i}{n}\right ]<\mu \), then the probability associated with that i is 1, since the strength Yi must be greater than μ. So the product need only run up to the largest k such that

$$ S_{0}\left(1-\frac{k}{n}\right) > \mu \iff k < n\left(1-\frac{\mu}{S_{0}}\right), \qquad\text{whence}\quad k=\min\left\{n-1, \left\lfloor n\left(1-\frac{\mu}{S_{0}}\right)\right\rfloor\right\}, $$
(3)

where \(\lfloor\cdot\rfloor\) is the floor function.

Thus,

$$ \begin{aligned} R(S_{0}) &= \prod_{i=0}^{k}\exp\left\{-\left[\left(S_{0}\left(1-\frac{i}{n}\right)-\mu\right)/\left(\sigma n^{1/\beta}\right)\right]^{\beta}\right\}, S_{0} > \mu\\ &= \exp\left\{-\sum\limits_{i=0}^{k}\left[\left(S_{0}\left(1-\frac{i}{n}\right)-\mu\right)/\left(\sigma n^{1/\beta}\right)\right]^{\beta}\right\}.\\ \end{aligned} $$
(4)

Taking the natural log and the limit as n tends to infinity:

$$ {\lim}_{n\to\infty}\log R(S_{0}) = -{\lim}_{n\to\infty}\sum\limits_{i=0}^{k}\left[\left(S_{0}\left(1-\frac{i}{n}\right)-\mu\right)/\left(\sigma n^{1/\beta}\right)\right]^{\beta}, $$
(5)

where \(k=\text {min}\left \{n-1, \left \lfloor n\left (1-\frac {\mu }{S_{0}}\right)\right \rfloor \right \}\). For sufficiently large n, we can approximate this sum using an integral as:

$$ \begin{aligned} {\lim}_{n\to\infty}\log R(S_{0}) &\approx -\frac{1}{\sigma^{\beta}}\int\nolimits_{0}^{1-\mu/S_{0}}\left[S_{0}(1-x)-\mu\right]^{\beta}dx\\ &= -\frac{\left(S_{0}-\mu\right)^{\beta+1}}{\sigma^{\beta}(\beta+1)S_{0}}, \qquad S_{0}>\mu.\\ \end{aligned} $$
(6)

Thus, \(R(S_{0}) = \exp \left \{-\frac {\left (S_{0}-\mu \right)^{\beta +1}}{\sigma ^{\beta }(\beta +1)S_{0}}\right \}, S_{0}>\mu \). Note that in the case of the standard two-parameter Weibull (μ=0), the resulting reliability is Weibull with shape β and scale \(\sigma(\beta+1)^{1/\beta}\).
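
This limit is easy to verify numerically: the finite-n product (4) should approach the closed form above as n grows. The following R sketch does so (function names are ours, chosen for illustration):

```r
## Compare the finite-n reliability (4) with its limiting closed form (6).
R_exact <- function(S0, beta, mu, sigma)
  exp(-(S0 - mu)^(beta + 1) / (sigma^beta * (beta + 1) * S0))
R_n <- function(S0, beta, mu, sigma, n) {
  s <- S0 * (1 - (0:(n - 1)) / n) - mu           # per-link stress minus threshold
  exp(-sum(pmax(s, 0)^beta) / (sigma^beta * n))  # links with s <= 0 survive w.p. 1
}
beta <- 2; mu <- 1; sigma <- 3
R_n(5, beta, mu, sigma, n = 1e5)   # these two values should nearly coincide
R_exact(5, beta, mu, sigma)
```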

Consider a reparametrization with θ=μ, γ=β+1, and \(\delta=\left[\sigma^{\beta}(\beta+1)\right]^{1/(\beta+1)}\). This yields a cdf (which is one minus the reliability) of the form:

$$ F(x;\theta,\gamma,\delta) = 1-\exp\left\{\frac{-(x-\theta)^{\gamma}}{x\delta^{\gamma}}\right\}, \qquad\text{for \(x\geq\theta\)}. $$
(7)

Note that this bridges the gap and allows us to estimate the reliability of in-service steel strip reinforcements (which are exposed to a constant stress along their length and have a Weibull survival distribution) via the results of pullout tests (which expose the strip to a linearly decreasing stress along its length and lead to the distribution derived above). That is, if we obtain a sample under pullout test conditions and estimate the parameters θ, γ, and δ via maximum likelihood estimation (MLE) to obtain \(\hat {\theta }\), \(\hat {\gamma }\), and \(\hat {\delta }\), then by invariance the MLEs for the parameters of the Weibull are:

$$ \hat{\mu}=\hat{\theta}, \qquad \hat{\beta}=\hat{\gamma}-1, \qquad \hat{\sigma}=\left[\hat{\delta}^{\hat{\gamma}}/\hat{\gamma}\right]^{1/(\hat{\gamma}-1)}. $$
(8)

We term the distribution arrived at in the above discussion the linearly decreasing stress Weibull (LDSWeibull), a new Weibull-like distribution. A formal definition along with the derivation of basic and classical properties is presented in “Formal definition, basic properties, and results” section. Maximum likelihood and other types of estimation procedures, along with accompanying asymptotic results, are developed in “Estimation procedures and asymptotic results” section. These procedures are subsequently investigated with simulation studies in “Simulation results” section. We conclude the paper with an application on real data in “Real data application” section.

Formal definition, basic properties, and results

This section formally introduces the LDSWeibull and derives basic properties. It is obvious from (7) that θ is a pseudo-location parameter, γ is a shape parameter, and δ a pseudo-scale parameter. For ease of handling, it will be more convenient to work with the following one-to-one reparametrization, \((\theta,\gamma,\delta)\mapsto\boldsymbol{\beta}=(\theta,\gamma,\tau)\), where \(\tau=\delta^{\gamma}\Longleftrightarrow\delta=\tau^{1/\gamma}\), whence τ>0 is more obviously seen to be a pseudo-scale parameter.

Definition 1

The LDSWeibull (θ,γ,τ) has parameter space: Ω={(θ,γ,τ):θ≥0, γ>1, τ>0}. Its cdf is given by:

$$ F(x;\theta,\gamma,\tau) = \int\limits_{\theta}^{x}f(t;\theta,\gamma,\tau)\,dt =1-\exp\left\{-\frac{(x-\theta)^{\gamma}}{x\tau}\right\}, \qquad\text{for }x\geq\theta. $$
(9)

The density function is therefore:

$$ f(x;\theta,\gamma,\tau) = \frac{(x-\theta)^{\gamma}}{x\tau}\left[\frac{\gamma}{x-\theta}-\frac{1}{x}\right]\exp\left\{-\frac{(x-\theta)^{\gamma}}{x\tau}\right\} I_{[\theta,\infty)}(x). $$
(10)
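
For later use, here is a direct R transcription of the density (10) and cdf (9) (a sketch; the names dldsw/pldsw are ours, not from an existing package):

```r
## LDSWeibull density (10) and cdf (9); parameter space theta >= 0,
## gamma > 1, tau > 0.
dldsw <- function(x, theta, gamma, tau)
  ifelse(x <= theta, 0,
         (x - theta)^gamma / (x * tau) *
           (gamma / (x - theta) - 1 / x) *
           exp(-(x - theta)^gamma / (x * tau)))
pldsw <- function(x, theta, gamma, tau)
  ifelse(x <= theta, 0, 1 - exp(-(x - theta)^gamma / (x * tau)))
## sanity check: the density should integrate to 1
integrate(dldsw, lower = 1, upper = Inf, theta = 1, gamma = 3, tau = 2)$value
```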

Note that the LDSWeibull inherits parameter identifiability from the Weibull, since the transformation (8) is one-to-one.

Remark 1

For θ=0 the LDSWeibull (0,γ,τ) is a two-parameter Weibull with shape γ−1 and scale \(\tau^{1/(\gamma-1)}\).
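
This remark is easy to confirm numerically; a quick R check (a sketch) compares the two cdfs on a grid:

```r
## Remark 1 check: LDSWeibull(0, gamma, tau) vs. Weibull with
## shape = gamma - 1 and scale = tau^(1/(gamma - 1)).
g <- 3.5; tau <- 2
x <- seq(0.1, 10, by = 0.1)
F_ldsw <- 1 - exp(-x^g / (x * tau))
max(abs(F_ldsw - pweibull(x, shape = g - 1, scale = tau^(1 / (g - 1)))))  # ~ 0
```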

Apart from its intimate connection with the Weibull, there are possibly many related distributions that overlap with the proposed LDSWeibull. Note that written in the form

$$G(x;\alpha,\boldsymbol{\beta}) = 1-\exp\left\{-\alpha H(x;\boldsymbol{\beta})\right\},\qquad\alpha>0, $$

the cdf can more generally be seen to be a member of the very broad class of distributions introduced by (Gurvich et al. 1997), that can be generated from the Weibull by taking H(x;β) to be a non-negative and monotone increasing function, possibly depending on the vector of parameters β. In our case, the LDSWeibull (θ,γ,τ) is obtained by setting α=1/τ and \(H(x;\theta,\gamma)=(x-\theta)^{\gamma}/x\). (Bourguignon et al. 2014) introduce an interesting variant of G(x;α,β) by taking H(x;β) to be a positive power of the ratio of any continuous cdf and its survival function, but the LDSWeibull (θ,γ,τ) does not appear to obey that particular construct.

Existence of moment generating function

We have managed to determine necessary and sufficient conditions for existence of the moment generating function (mgf). These conditions impose a restriction on the shape parameter γ.

Theorem 1

The mgf of the LDSWeibull (θ,γ,τ) satisfies \(M(t) = \mathbb {E} e^{tX}<\infty \), for all \(t\in \mathbb {R}\) in the restricted parameter range:

$$\Omega_{C}=\{(\theta,\gamma,\tau):\theta\geq 0,\ \gamma>2,\ \tau>0\}\subset\Omega. $$

Otherwise, if γ≤2, the mgf fails to be finite for all \(t\in \mathbb {R}\); in particular, M(t)=∞ for every t>0 when γ<2.

Proof

See Appendix: Proof of Theorem 1. □

Attempts at finding a closed form for M(t) (in terms of known special functions) have not, however, yielded positive results. This leads to challenges in devising preliminary parameter estimators such as method of moments. Due to the lack of an analytic expression for the quantile function, and the usual intractability of moments of order statistics, alternative treatments such as probability weighted moments (Greenwood et al. 1979) and L-moments (Hosking 1990), do not appear to be feasible either.

Completeness and minimal sufficiency

There is little hope of being able to determine a complete statistic, but it is not hard to show that the order statistics are minimal sufficient.

Theorem 2

For a random sample X=(X1,…,Xn) from the LDSWeibull (θ,γ,τ), the order statistics T(X)=(X(1),…,X(n)) are minimal sufficient.

Proof

Let x=(x1,…,xn) and y=(y1,…,yn) denote two independent random samples from the LDSWeibull (θ,γ,τ), where the log of the joint density f(x) is given by:

$$ {\begin{aligned} \log f(\boldsymbol{x}) = \sum\log f(x_{i}) = (\gamma-1)\sum\log(x_{i}-\theta) + \sum\log [(\gamma-1)x_{i}+\theta ]-\sum\frac{(x_{i}-\theta)^{\gamma}}{\tau x_{i}}-n\log\tau-\sum\log x_{i}^{2}. \end{aligned}} $$

We note that the family is (trivially) dominated by Lebesgue measure, and hence invoking (Schervish (1995), Theorem 2.29), we need only show that, for any fixed choice of (θ,γ,τ):

$$\log f(\boldsymbol{x})-\log f(\boldsymbol{y}) = h(\boldsymbol{x},\boldsymbol{y})\quad\Longleftrightarrow\quad T(\boldsymbol{x})=T(\boldsymbol{y}). $$

To this end, and ignoring summands that depend on x and/or y only, note that

$$ {\begin{aligned} \log f(\boldsymbol{x})-\log f(\boldsymbol{y}) = (\gamma-1)\sum\log\left(\frac{x_{i}-\theta}{y_{i}-\theta}\right) + \sum\log\left(\frac{(\gamma-1)x_{i}+\theta}{(\gamma-1)y_{i}+\theta}\right) + \frac{1}{\tau}\sum\left[\frac{(y_{i}-\theta)^{\gamma}}{y_{i}}-\frac{(x_{i}-\theta)^{\gamma}}{x_{i}}\right]. \end{aligned}} $$
(11)

Now, it is obvious that T(x)=T(y) immediately implies logf(x)− logf(y)=0, which is therefore independent of the parameters. To see that the converse is also true, note that the only way (11) can be independent of (θ,γ,τ), is if each of the three summands is itself free of (θ,γ,τ), whence we must have

$$\begin{array}{@{}rcl@{}} \sum\log(x_{i}-\theta) &=& \sum\log(y_{i}-\theta)\\ \sum\log [(\gamma-1)x_{i}+\theta] &=& \sum\log [(\gamma-1)y_{i}+\theta ]\\ \sum \frac{(x_{i}-\theta)^{\gamma}}{x_{i}} &=& \sum \frac{(y_{i}-\theta)^{\gamma}}{y_{i}} \end{array} $$

Because of the intimate way in which θ and γ enter them, these three requirements can only be met if T(x)=T(y). □

Distribution of the extremes

For a random sample X1,…,Xn from the LDSWeibull (θ,γ,τ), we now consider the distributions of the minimum and maximum order statistics, X(1) and X(n), respectively. Some exact results can be obtained using the usual techniques. Specifically, the survival function of X(1) is given by

$$P(X_{(1)}>x) = [1-F(x)]^{n} = \exp\left\{-\frac{(x-\theta)^{\gamma}}{x(\tau/n)}\right\}, $$

so that X(1) is therefore LDSWeibull (θ,γ,τ/n). The cdf of X(n) is of course \(F(x)^{n}\), but this does not appear to have an immediately recognizable form.
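
The exactness of the minimum's distribution can be checked by simulation; the sketch below samples via numerical inversion of the cdf (all function names are ours):

```r
## Monte Carlo check that the minimum of n iid LDSWeibull(theta, gamma, tau)
## draws is LDSWeibull(theta, gamma, tau/n).
pldsw <- function(x, theta, gamma, tau)
  ifelse(x <= theta, 0, 1 - exp(-(x - theta)^gamma / (x * tau)))
## inversion sampler: solve F(x) = u for x > theta by root-finding
rldsw <- function(m, theta, gamma, tau)
  sapply(runif(m), function(u)
    uniroot(function(x) pldsw(x, theta, gamma, tau) - u,
            lower = theta + 1e-12, upper = theta + 1e6)$root)
set.seed(1)
theta <- 1; gamma <- 3; tau <- 2; n <- 20
mins <- replicate(2000, min(rldsw(n, theta, gamma, tau)))
## compare with the claimed LDSWeibull(theta, gamma, tau/n) law;
## a large p-value indicates agreement
ks.test(mins, function(x) pldsw(x, theta, gamma, tau / n))
```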

It is also possible to obtain the asymptotic distribution of the (appropriately normalized) extremes by invoking the Fisher-Tippett Theorem; see e.g., David and Nagaraja (2003, §10.5). The following theorem reveals that the extremes of the LDSWeibull are in the domain of attraction of the Gumbel.

Theorem 3

Let X(1) and X(n) denote the minimum and maximum order statistics, respectively, in a random sample from the LDSWeibull (θ,γ,τ) with parameter space Ω, as in Definition 1. Then we have the following convergence in distribution results, for any \(x\in \mathbb {R}\).

  • For the maximum,

    $${\lim}_{n\rightarrow\infty}P\left(\frac{X_{(n)}-a_{n}}{b_{n}}\leq x\right) = \exp\left\{-e^{-x}\right\}. $$
  • For the minimum,

    $${\lim}_{n\rightarrow\infty}P\left(\frac{X_{(1)}-a_{n}}{b_{n}}\leq x\right) = 1-\exp\left\{-e^{x}\right\}. $$

In each case, the normalizing constants an and bn can be chosen to satisfy the pair of equations

$$(a_{n}-\theta)^{\gamma}-\tau a_{n}\log(n)=0, \qquad\text{and}\qquad b_{n}=\frac{a_{n}(a_{n}-\theta)}{[a_{n}(\gamma-1)+\theta]\log(n)}.$$

Proof

Note that the derivative of the reciprocal of the hazard function is

$$ {\begin{aligned} \frac{\partial}{\partial x}{\Bigg[\frac{1-F(x)}{f(x)}\Bigg]} &= \frac{\partial}{\partial x}{\Bigg[\frac{x^{2}\tau}{(x-\theta)^{\gamma-1}[x(\gamma-1)+\theta]}\Bigg]}\\ &= \frac{2x\tau}{(x-\theta)^{\gamma-1}(x(\gamma-1)+\theta)}-\frac{x^{2}\tau(\gamma-1)}{(x-\theta)^{\gamma}[x(\gamma-1) +\theta]}-\frac{x^{2}\tau(\gamma-1)}{(x-\theta)^{\gamma-1}[x(\gamma-1)+\theta]^{2}}. \end{aligned}} $$
(12)

Since γ>1, each of these terms is of order \(O(1/x^{\varepsilon})\), for some ε>0, and thus they all converge to zero as \(x\rightarrow\infty\). The required result now follows by invoking (David and Nagaraja (2003), Theorem 10.5.2). □
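
For the maximum, Theorem 3 can be illustrated by Monte Carlo. The R sketch below (names ours) computes the normalizing constants from the stated pair of equations and compares normalized maxima with standard Gumbel quantiles; since convergence is at a log(n) rate, only rough agreement should be expected:

```r
## Normalized maxima of LDSWeibull samples vs. the standard Gumbel.
pldsw <- function(x, th, g, tau)
  ifelse(x <= th, 0, 1 - exp(-(x - th)^g / (x * tau)))
rldsw <- function(m, th, g, tau)
  sapply(runif(m), function(u)
    uniroot(function(x) pldsw(x, th, g, tau) - u, c(th + 1e-12, th + 1e6))$root)
th <- 1; g <- 3; tau <- 2; n <- 200
## normalizing constants a_n, b_n from the pair of equations in Theorem 3
an <- uniroot(function(a) (a - th)^g - tau * a * log(n), c(th + 1e-9, 1e6))$root
bn <- an * (an - th) / ((an * (g - 1) + th) * log(n))
set.seed(1)
mx <- replicate(500, (max(rldsw(n, th, g, tau)) - an) / bn)
## empirical vs. standard Gumbel quantiles at a few probability levels
p <- c(0.1, 0.25, 0.5, 0.75, 0.9)
rbind(empirical = quantile(mx, p), gumbel = -log(-log(p)))
```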

Estimation procedures and asymptotic results

For a random sample x1,…,xn from the LDSWeibull (θ,γ,τ), we have the log-likelihood function:

$$ {\begin{aligned} \ell(\boldsymbol{\beta}) = \left\{\begin{array}{lr} (\gamma-1)\sum\log(x_i-\theta) + \sum\log [(\gamma-1)x_i+\theta ]-\sum\frac{(x_i-\theta)^{\gamma}}{\tau x_i}-n\log\tau-\sum\log{x_i}^{2}, & \theta\leq x_{(1)}, \\ -\infty, & \theta>x_{(1)}. \end{array}\right. \end{aligned}} $$
(13)

Denote by \(\boldsymbol{\beta}_{0}=(\theta_{0},\gamma_{0},\tau_{0})^{T}\) the true parameter vector.

Remark 2

Note that, unlike the Weibull, this log-likelihood function is bounded and will therefore have a non-degenerate extremum for all \((\theta,\gamma,\tau)\in\Omega\). As discussed in (Rinne (2009), §11.3.2), a known issue with the 3-parameter Weibull in (1) is that as \(\mu\rightarrow x_{(1)}\), we have \((\beta-1)\log(x_{(1)}-\mu)\rightarrow\infty\) for β<1, so that the likelihood is unbounded and the MLE of β does not exist when β<1.

To demonstrate the spectrum of possibilities for the various regimes of the MLEs, we will now consider the following subset of just three special cases taken from the exhaustive list of all 7 possible combinations of known and unknown parameter values.

Case 1: (γ,τ) known

It would appear that the maximizer in θ should occur at the boundary value x(1); however, the first two derivatives are:

$$\begin{array}{@{}rcl@{}} \psi_{1}(\boldsymbol{\beta})\equiv\frac{\partial\ell(\boldsymbol{\beta})}{\partial\theta} &=& -\sum\frac{(\gamma-1)}{x_i-\theta}+\sum\frac{1}{(\gamma-1)x_i+\theta}+\sum\frac{\gamma(x_i-\theta)^{\gamma-1}}{\tau x_i} \\ \frac{\partial^{2}\ell(\boldsymbol{\beta})}{\partial\theta^{2}} &=& -\sum\frac{(\gamma-1)}{(x_i-\theta)^{2}}-\sum\frac{1}{[(\gamma-1)x_i+\theta]^{2}}-\sum\frac{\gamma(\gamma-1)(x_i-\theta)^{\gamma-2}}{\tau x_i}. \end{array} $$

Since each of the sums in the second derivative has positive summands (recall γ>1) and enters with a negative sign, it follows that \(\partial^{2}\ell/\partial\theta^{2}<0\), whence ℓ(β) is concave in θ. Thus the MLE is the unique maximum of ℓ(β), and occurs at an interior point, albeit close to x(1) (which can therefore be used as an initial estimate).
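
Numerically, this one-dimensional maximization is straightforward; a minimal R sketch (function names ours) maximizes the log-likelihood (13) in θ over (0, x(1)):

```r
## Case 1: MLE of theta with (gamma, tau) known, by maximizing (13) in theta.
loglik_theta <- function(theta, x, gamma, tau) {
  if (theta >= min(x)) return(-Inf)   # log-likelihood is -Inf for theta > x_(1)
  sum((gamma - 1) * log(x - theta) + log((gamma - 1) * x + theta) -
      (x - theta)^gamma / (tau * x)) - length(x) * log(tau) - 2 * sum(log(x))
}
## with data x and known values gamma0, tau0:
## theta_hat <- optimize(loglik_theta, interval = c(0, min(x)), x = x,
##                       gamma = gamma0, tau = tau0, maximum = TRUE)$maximum
```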

Theorem 4

Let β0 be in the restricted parameter range ΩC, with γ0 and τ0 known. Then the MLE \(\hat {\theta }_{{\gamma }_{0},\tau _{0}}\) of θ0 is consistent.

Proof

Take the (continuous) estimating equation (of which \(\hat {\theta }_{{\gamma }_{0},\tau _{0}}\) is the unique root by the above argument) to be

$$\begin{array}{@{}rcl@{}} g_{\gamma,\tau}(\theta)\!\equiv\frac{1}{n}\frac{\partial\ell(\boldsymbol{\beta})}{\partial\theta}&\,=\,&\frac{1}{n} \sum\limits_{i=1}^{n}\frac{1}{(\gamma-1)x_i+\theta} + \frac{\gamma}{\tau}\frac{1}{n}\sum\limits_{i=1}^{n}\frac{(x_i-\theta)^{\gamma-1}}{x_i}\!-(\gamma-1)\frac{1}{n}\sum\limits_{i=1}^{n}\frac{1}{x_i-\theta}\\ &{\xrightarrow{p}} & \mathbb{E}\left[\frac{1}{(\gamma-1)X+\theta}\right]+\frac{\gamma}{\tau}\mathbb{E}\left[\frac{(X-\theta)^{\gamma-1}}{X}\right]- (\gamma-1)\mathbb{E}\left[\frac{1}{X-\theta}\right], \end{array} $$

by the weak law of large numbers applied to each of the sample averages. Since this limiting value is well-defined for each θ, consistency of the unique root \(\hat {\theta }_{{\gamma }_{0},\tau _{0}}\) follows by (Van der Vaart (1998), Lemma 5.10). □

Case 2: θ known

With the argument r as placeholder for γ, first define the terms:

$$ Q_{\theta}=\sum\limits_{i=1}^{n}\log(x_i-\theta), \quad R_{\theta}(r)=\sum\limits_{i=1}^{n}\frac{x_i}{(r-1)x_i+\theta}, \quad S_{\theta}(r)=\sum\limits_{i=1}^{n}\frac{(x_i-\theta)^{r}}{x_i}, $$
(14)

and note that

$$S^{\prime}_{\theta}(r)=\frac{\partial S_{\theta}(r)}{\partial r}=\sum\frac{(x_i-\theta)^{r}\log(x_i-\theta)}{x_i}. $$

The corresponding score functions are then:

$$\begin{array}{@{}rcl@{}} \psi_{2}(\boldsymbol{\beta})\equiv\frac{\partial\ell(\boldsymbol{\beta})}{\partial\gamma} &=& Q_{\theta}+R_{\theta}(\gamma)-\frac{1}{\tau}S^{\prime}_{\theta}(\gamma) \\ \psi_{3}(\boldsymbol{\beta})\equiv\frac{\partial\ell(\boldsymbol{\beta})}{\partial\tau} &=& \sum\frac{(x_i-\theta)^{\gamma}}{\tau^2x_i} - \frac{n}{\tau} \end{array} $$

Solving ψ3(β)=0 leads to the profile MLE for τ, \(\hat {\tau }_{\theta }=S_{\theta }(\gamma)/n\), whence substitution into ψ2 leads to the profile score equation for γ

$$ \psi_{2}(\theta,\gamma,\hat{\tau}_{\theta})=Q_{\theta} + R_{\theta}(\gamma) - \frac{nS'_{\theta}(\gamma)}{S_{\theta}(\gamma)}=0, $$
(15)

with solution \(\hat {\gamma }_{\theta }\).

It will be more convenient to write the score function (15) in normalized form:

$$ h_{\theta}(r)\equiv\frac{1}{n}\psi_{2}(\theta,\gamma,\hat{\tau}_{\theta})=\frac{1}{n}Q_{\theta} + \frac{1}{n}R_{\theta}(r) - \frac{S'_{\theta}(r)}{S_{\theta}(r)}. $$
(16)

Thus the MLEs for γ0 and τ0 satisfy the equations

$$h_{\theta_{0}}(\hat{\gamma}_{\theta_{0}})=0, \qquad\text{and} \qquad \hat{\tau}_{\theta_{0}} = \frac{1}{n}S_{\theta_{0}}(\hat{\gamma}_{\theta_{0}}), $$

where, due to the monotonicity property in Proposition 1, \(\hat {\gamma }_{\theta _{0}}\) is easily determined as either the boundary value \(\hat {\gamma }_{\theta _{0}}=1\), or as the unique root of \(h_{\theta _{0}}(r)\) in (16).
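The following R sketch implements this profile recipe directly (names are ours; we assume θ>0 so that Rθ(r) is finite at r=1, and the bracketing upper bound for the root search is an arbitrary choice):

```r
## Case 2: profile MLEs of (gamma, tau) with theta known, via (15)-(16).
profile_mle <- function(x, theta, upper = 50) {
  n  <- length(x)
  Q  <- sum(log(x - theta))
  S  <- function(r) sum((x - theta)^r / x)
  Sp <- function(r) sum((x - theta)^r * log(x - theta) / x)   # S'(r)
  R  <- function(r) sum(x / ((r - 1) * x + theta))
  h  <- function(r) (Q + R(r)) / n - Sp(r) / S(r)             # (16)
  if (h(1) <= 0) return(c(gamma = 1, tau = S(1) / n))         # boundary case
  g <- uniroot(h, c(1, upper))$root    # h is decreasing (Proposition 1)
  c(gamma = g, tau = S(g) / n)
}
```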

Proposition 1

The function hθ(r) in (16) is monotone decreasing over the interval r>1.

Proof

First note that Rθ(r) is monotone decreasing, since

$$R^{\prime}_{\theta}(r) = -\sum\left[\frac{x_i}{(r-1)x_i+\theta}\right]^2<0. $$

We now show that the term Tθ(r)=Sθ′(r)/Sθ(r) is monotone increasing, whence the desired result will follow since hθ(r) will then be the sum of a constant term, Qθ/n, and two monotone decreasing functions. To this end, write \(S_{\theta }(r)=\sum y_{i}^r/x_i=tM(r)\), where \(y_i=x_i-\theta >0\), \(t=\sum x_{i}^{-1}\), and \(M(r)=\sum p_ie^{rz_i}\) corresponds to the moment generating function of a discrete random variable (say Z), with values zi= logyi, and masses \(0\leq p_i=(tx_i)^{-1}\leq 1\), i=1,…,n. This is sufficient to establish the result since, noting that the cumulant generating function K(r)= logM(r) is convex (see footnote 1), we have

$$T^{\prime}_{\theta}(r) = \frac{S_{\theta}(r)S_{\theta}^{\prime\prime}(r)-S_{\theta}^{\prime}(r)^{2}}{S_{\theta}(r)^{2}} = \frac{M(r)M^{\prime\prime}(r)-M^{\prime}(r)^{2}}{M(r)^{2}} = K^{\prime\prime}(r)>0. $$

□

The question of whether or not \(\hat {\gamma }_{\theta _{0}}\) ever attains the boundary value of 1 is interesting. It is certainly possible to construct a set of real values x1,…,xn such that hθ(1)<0, but whether or not such values correspond to bona fide realizations from an LDSWeibull over some set with positive measure remains an open issue. It is however possible to establish a limiting result as follows.

Proposition 2

Let β0 be in the restricted parameter range ΩC, and suppose that \({\text{plim}_{n\rightarrow\infty}} h_{\theta }(1)>0\). Then, with probability 1 in the limit as n→∞, \(\hat {\gamma }_{\theta _{0}}\) is the unique root of \(h_{\theta _{0}}(r)\).

Proof

Due to Proposition 1, it suffices to show that \({\text{plim}_{n\rightarrow\infty}} h_{\theta }(\infty)<0\). Defining Y=X−θ, note that Y>0 a.s., and \({\text {plim}_{n\rightarrow \infty }} Q_{\theta }/n=\mathbb {E}\log Y\) by the weak law of large numbers. (Note that the finiteness of all moments for logY follows from the finiteness of all moments for X with parameters in ΩC.) Since \(R_{\theta }(\infty)\equiv{\lim }_{r\rightarrow \infty }R_{\theta }(r)=0\), it follows that \(R_{\theta }(\infty)/n{\xrightarrow {p}} 0\). Now assume (without loss of generality) that the \(y_{i}=x_{i}-\theta\) are ordered, \(0<y_{1}=y_{(1)}\leq\cdots\leq y_{n}=y_{(n)}<\infty\), and note that in view of the representation

$$T_{\theta}(r)= \sum\limits_{i}\left(\frac{y_{i}^{r} /x_i}{\sum\nolimits_{j} y_{j}^{r} /x_j}\right)\log y_{i} =\sum\limits_{i}\frac{\left(\frac{y_{i}}{y_{1}}\right)^{r} /x_i}{\sum\nolimits_{j}\left(\frac{y_{j}}{y_{1}}\right)^{r} /x_j}\log y_{i}, $$

Lemma 1 in the Appendix is applicable with \(c_{i}(r)=(\frac {y_i}{y_1})/x_{i}^{1/r}\), since for sufficiently large r, we have for i<j:

$$c_{i}(r)^r=\left(\frac{y_i}{y_1}\right)^{r}\frac{1}{y_i+\theta} < \left(\frac{y_j}{y_1}\right)^{r}\frac{1}{y_j+\theta}=c_{j}(r)^{r}, $$

whence it follows that \(T_{\theta }(\infty)=\log y_{(n)}\) and therefore \({\text{plim}_{n\rightarrow\infty}} T_{\theta }(\infty)=\infty\). Putting everything together gives:

$$h_{\theta}(\infty){\xrightarrow{p}} \mathbb{E}\log Y + 0 - \infty < 0. $$

□

Identifiability of the LDSWeibull model combined with third order differentiability of logf(x;β), plus domination of appropriate derivatives of the latter as well as f(x;β) by integrable functions, establishes consistency and asymptotic efficiency of the MLEs directly via classical conditions.

Theorem 5

Let β0 be in the restricted parameter range ΩC, with θ0 known. Then, the MLEs \(\hat {\gamma }_{\theta _{0}}\) and \(\hat {\tau }_{\theta _{0}}\) of γ0 and τ0, respectively, satisfy:

$$\sqrt{n}\left(\left(\begin{array}{c} \hat{\gamma}_{\theta_{0}} \\ \hat{\tau}_{\theta_{0}}\end{array}\right) -\left(\begin{array}{c} {\gamma}_{0} \\ \tau_{0} \end{array}\right)\right) \xrightarrow{d}\mathcal{N}\left({0}, J^{-1}(\boldsymbol{\beta}_{0})I(\boldsymbol{\beta}_{0})J^{-1}(\boldsymbol{\beta}_{0})\right), $$

where J(β) is the Hessian matrix of logf(x;β), and I(β) is the Fisher Information matrix (per observation), each defined accordingly as:

$$ {\begin{aligned} J(\boldsymbol{\beta})\equiv\mathbb{E}\left[\begin{array}{cc} \frac{\partial^{2}\log f(x;\boldsymbol{\beta})}{\partial\gamma^{2}} & \frac{\partial^{2}\log f(x;\boldsymbol{\beta})}{\partial\gamma\partial\tau} \\ \frac{\partial^{2}\log f(x;\boldsymbol{\beta})}{\partial\gamma\partial\tau} & \frac{\partial^{2}\log f(x;\boldsymbol{\beta})}{\partial\tau^{2}} \\ \end{array}\right], \qquad I(\boldsymbol{\beta})\equiv\mathbb{E}\left[\begin{array}{cc} \left(\frac{\partial\log f(x;\boldsymbol{\beta})}{\partial\gamma}\right)^2 & \frac{\partial\log f(x;\boldsymbol{\beta})}{\partial\gamma}\frac{\partial\log f(x;\boldsymbol{\beta})}{\partial\tau} \\ \frac{\partial\log f(x;\boldsymbol{\beta})}{\partial\gamma}\frac{\partial\log f(x;\boldsymbol{\beta})}{\partial\tau} & \left(\frac{\partial\log f(x;\boldsymbol{\beta})}{\partial\tau}\right)^2 \end{array}\right]. \end{aligned}} $$
(17)

Proof

See Appendix: Proof of Theorem 5. □

Case 3: All parameters unknown

To start, a nonparametric estimator of the survival (or reliability) function should be provided for each x(i), where x(i) is the i-th order statistic. The usual empirical survival function is \(\hat {S}(x_{(i)}) = (n-i)/n\), but we employ instead a common adjustment, \(\hat {S}(x_{(i)})=(n-i+1)/(n+1)\) to avoid the problematic situation of log(0) when i=n.

Now replace θ with the consistent estimate x(1), and equate empirical and population survival functions at x(i):

$$ 1-\hat{F}(x_{(i)})=\frac{n-i+1}{n+1} \approx \exp{\left(-\frac{(x_{(i)}-x_{(1)})^{\gamma}}{\tau x_{(i)}}\right)} = 1-F(x_{(i)}). $$
(18)

(A perhaps more common justification of (18) is to note the well-known property of uniform order statistics: \(\mathbb {E}[F(X_{(i)})]=i/(n+1)\).) Performing a log-log transformation of both sides then leads to:

$$y_{i}\equiv\log{\left(-\log{\left(\frac{n-i+1}{n+1}\right)}\right)}+\log(x_{(i)}) \approx \gamma \log{(x_{(i)}-x_{(1)})}-\log\tau\equiv\gamma z_{i}-\log\tau. $$

Denoting by yi the left-hand side of the above expression, and setting \(z_{i}=\log(x_{(i)}-x_{(1)})\), we have, with the addition of the error term εi=yi−(a+bzi), the linear regression model

$$ y_i = a+{bz}_i+\varepsilon_i, \qquad i=2,\ldots,n, \quad\text{with }b=\gamma \text{ and } a=-\log\tau. $$
(19)

Obtaining the least squares estimates \(\hat {a}\) and \(\hat {b}\) for the regression parameters yields the following starting values for the LDSWeibull model parameters:

$$ \theta^{(0)}=x_{(1)}, \qquad \gamma^{(0)}=\hat{b}, \qquad\tau^{(0)}=\exp{(-\hat{a})}. $$
(20)

Armed with these initial values, which are consistent by the next theorem, one can employ an efficient optimization algorithm to maximize (13) and obtain the MLEs \((\hat {\theta },\hat {\gamma },\hat {\tau })\). We note in passing that the procedure outlined above is nearly identical to the so-called “regression method” for estimating the parameters of the generalized extreme value distribution; see e.g., (Rinne (2009), Chapter 10).
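
Putting the pieces together, the following R sketch (function names ours) computes the starting values (20) via the regression (19) and then maximizes the log-likelihood (13) numerically; it assumes no ties with the minimum observation, so that all the zi are finite:

```r
## Case 3: regression-based starting values (20), then full MLE of (13).
ldsw_fit <- function(x) {
  n  <- length(x); xs <- sort(x); i <- 2:n
  y  <- log(-log((n - i + 1) / (n + 1))) + log(xs[i])
  z  <- log(xs[i] - xs[1])                # finite if no ties with x_(1)
  ab <- coef(lm(y ~ z))                   # intercept a = -log(tau), slope b = gamma
  start <- c(theta = xs[1] * (1 - 1e-6),  # nudge theta off the boundary x_(1)
             gamma = unname(ab[2]), tau = exp(-unname(ab[1])))
  negll <- function(p)                    # negative log-likelihood from (13)
    if (p[1] < 0 || p[1] >= xs[1] || p[2] <= 1 || p[3] <= 0) Inf else
      -sum((p[2] - 1) * log(x - p[1]) + log((p[2] - 1) * x + p[1]) -
           (x - p[1])^p[2] / (p[3] * x) - log(p[3]) - 2 * log(x))
  optim(start, negll)$par
}
```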

Remark 3

(Central quantile limiting behavior) Note that consistent estimation of the right-hand-side of (18) subsumes the following limiting behavior for the integer \(2\leq i\leq n\) appearing in the left-hand-side of (18):

$$ {\lim}_{n\rightarrow\infty}\frac{i}{n+1} = q_i,\qquad 0<q_i=F(\xi_i)<1, $$
(21)

where ξi is the population quantile corresponding to qi, and for notational expedience we omit the implicit dependence \(i\equiv i(n)\) in the limiting behavior of order statistics for the central quantile case (David and Nagaraja (2003), Chapter 10). (Note however that qi and ξi on the right-hand-side of (21) do not depend on n.)

Theorem 6

Let β0 be in the restricted parameter range ΩC, and assume the central quantile limiting behavior in Remark 3. Then, the initial estimates β(0)=(θ(0),γ(0),τ(0))T given by (20) resulting from a random sample X1,…,Xn from the LDSWeibull (θ,γ,τ), are consistent for β0.

Proof

See Appendix: Proof of Theorem 6. □

Simulation results

In this section we carry out a small simulation study to investigate the sampling properties of the MLEs for the three cases defined in “Estimation procedures and asymptotic results” section. To this end, Tables 1, 2, and 3 report the bias, variance, mean squared error (MSE), and coefficient of variation (CV) of the MLEs, empirically determined from 1000 simulated realizations.

Table 1 Summary statistics for the MLE of θ under Case 1: γ and τ are known
Table 2 Summary statistics for the MLEs of γ and τ under Case 2: θ is known
Table 3 Summary statistics for the MLEs of θ,γ and τ under Case 3: all parameters are unknown

We see a consistent decrease in all the metrics (bias, variance, MSE, CV) with increasing sample size, as expected. Interestingly, it appears that in general the parameter τ suffers from the most uncertainty, as is particularly noticeable in some large CV values at small sample sizes.

Real data application

The motivating derivation of the LDSWeibull in the Introduction would have us apply it to the results of a pullout test in order to infer the parameters of the underlying Weibull according to (8). Lacking such data, in this section we illustrate an application where the LDSWeibull (θ,γ,τ) provides a competitive fit to the Weibull (μ,β,σ).

To assess the prospective wind power at a given site, a distribution is often fit to the observed wind speeds. Although different locations tend to have different wind speed profiles, the Weibull has been found to closely mirror the actual distribution of hourly/ten-minute wind speeds at many locations (Masters 2013). In these cases the Weibull shape parameter β is often close to 2, and a Rayleigh distribution can therefore be used, offering a less accurate but simpler model.

The R package bReeze contains the data set “winddata”, consisting of measured wind speed and direction at 10-min intervals collected by a meteorological mast, for a total of 36,548 consecutive observations on 17 variables. Of these variables, we selected winddata$v1_40m_max, which contains the maximum wind speed (m/s) over each 10-min interval recorded by the mast at a height of 40m above ground level. We divided up this long time series into 252 shorter time series of length n=144, each comprising the maximum wind speeds over a 24 h period (144 10-min intervals). The 6 (anomalous) wind speed values of zero were simply discarded before creating the resulting 252 time series data sets.
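
A sketch of this data preparation in R, assuming the bReeze package and the column name stated above:

```r
## Build daily series of 10-min maximum wind speeds from bReeze's winddata.
library(bReeze)
data("winddata", package = "bReeze")
v <- winddata$v1_40m_max           # max wind speed (m/s) per 10-min interval
v <- v[v > 0]                      # discard the anomalous zero wind speeds
days <- split(v, ceiling(seq_along(v) / 144))  # 144 intervals per 24 h
days <- days[lengths(days) == 144] # keep complete 24-hour series only
length(days)                       # compare with the 252 series used here
```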

Parameters for the two distributions were estimated for each of these 252 data sets, and the differences in the attained maximized log-likelihood (LDSWeibull minus Weibull) recorded. No parameter restrictions were placed on the LDSWeibull (θ,γ,τ), but for compatibility with the LDSWeibull and the reason mentioned in Remark 2, the parameter space for the Weibull (μ,β,σ) was restricted to μ>0, β≥1, and σ>0. Summary statistics for these log-likelihood differences are listed in Table 4. We can see that, although the Weibull fits better approximately 75% of the time, the difference is typically very small.

Table 4 Summary statistics for the attained differences in the maximized log-likelihoods between LDSWeibull (θ,γ,τ) and Weibull (μ,β,σ) fits to each of the 252 daily time series data sets created from “winddata”

The top panels of Figs. 1 and 2 show some typical series where the differences in the log-likelihood are in excess of 5, and between 1 and 5, respectively. The corresponding LDS vs. Weibull marginal fits are displayed in the bottom panels as solid and dot-dash lines, respectively. The dashed KDE line tracking the shaded histogram corresponds to kernel density estimation. These plots typify two regimes: (i) generally calm days with a moderate burst of wind in Fig. 1, and (ii) a windy day with higher bursts (possibly as a consequence of a storm) in Fig. 2. The first regime is characterized by Weibull fits that coincide with an exponential (β=1), whereas the second is more of the Rayleigh type (β≈2).

Fig. 1

Illustration of the LDS vs. Weibull marginal fits for selected daily time series of “winddata” where the difference in the attained maximized log-likelihood is in excess of 5. The dashed line tracking the shaded histogram corresponds to kernel density estimation (KDE, with the Sheather-Jones plug-in bandwidth)

Fig. 2

Illustration of the LDS vs. Weibull marginal fits for selected daily time series of “winddata” where the difference in the attained maximized log-likelihood is between 1 and 5. The dashed line tracking the shaded histogram corresponds to kernel density estimation (KDE) with the Sheather-Jones plug-in bandwidth

Although we do not seek an exhaustive analysis here but merely an illustrative one, it is interesting to consider the question of goodness-of-fit. Anderson-Darling (AD) and Kolmogorov-Smirnov (KS) tests yield p-values lower than 10−4 in all the cases of Fig. 1, confirming the suspicion that neither distribution is sufficiently rich to capture this regime. The second regime of Fig. 2 is different however, as shown in Table 5. At the usual 5% significance level, the Weibull model convincingly fits only on Day 116, whereas the LDSWeibull fits on all but Day 26. In all of these examples, the distinctive feature is that the LDSWeibull model appears to be better able to resolve the peaks.

Table 5 Anderson-Darling (AD) and Kolmogorov-Smirnov (KS) goodness-of-fit test p-values for the fitted Weibull and LDSWeibull models of Fig. 2

Appendix

Lemmas

Lemma 1

Let \(0<y_{1}\leq\cdots\leq y_{n}<\infty\) be an ordered sample of positive real numbers. Then, for any continuous function g(·),

$$U_{n}(r) \equiv \sum\limits_{i=1}^n \frac{c_{i}(r)^{r}}{\sum\nolimits_{j=1}^n c_{j}(r)^{r}}g(y_i) \longrightarrow g(y_n), \quad\text{as }r\rightarrow\infty, $$

provided that, for some sufficiently large \(r^{\ast}\), we have the ordering \(0<c_{1}(r)<\cdots<c_{n}(r)<\infty\) for all \(r\geq r^{\ast}\).

Proof

$$\begin{array}{@{}rcl@{}} U_{n}(r) &=& \sum\limits_i \frac{c_{i}^rg(y_i)}{\sum\nolimits_j c_{j}^{r}} = \sum \frac{g(y_i)}{\sum (c_j/c_i)^{r}} \\ &=& \sum\limits_{i=1}^n \frac{g(y_i)}{(c_1/c_i)^r+\cdots+(c_{i-1}/c_i)^r+1+(c_{i+1}/c_i)^r+\cdots+(c_{n}/c_i)^{r}}. \end{array} $$

Considering the terms in the denominator of the above summand, note that, for \(r\geq r^{\ast}\), since ci/cj<1 if i<j, and ci/cj>1 if i>j, we have that:

$${\lim}_{r\rightarrow\infty}\left(\frac{c_i}{c_j}\right)^r= \left\{\begin{array}{ll} 0, & \text{if }i<j, \\ \infty, & \text{if }i>j. \\ \end{array}\right. $$

Thus the first n−1 denominators of Un(r), corresponding to i=1,…,n−1, converge to ∞ as \(r\rightarrow\infty\), while the last denominator converges to 1, which gives:

$${\lim}_{r\rightarrow\infty} U_{n}(r) = \frac{g(y_1)}{\infty} +\cdots+ \frac{g(y_{n-1})}{\infty} + \frac{g(y_n)}{1} = g(y_n). $$

□

Lemma 2

Let \(X_{n,k},\ 1\leq k\leq n\), be a triangular array of random variables such that (i) \({\text{plim}_{n\rightarrow\infty}} X_{n,k}=X_{k}\), and (ii) \(|X_{n,k}|\leq|Y|\) a.s. for all n, with \(\mathbb {E}|Y|<\infty \). Then, it follows that:

$${\text{plim}_{n\rightarrow\infty}} \sum\limits_{k=1}^{n}X_{n,k} = \sum\limits_{k=1}^{\infty}{\text{plim}_{n\rightarrow\infty}} X_{n,k}=\sum\limits_{k=1}^{\infty}X_{k}. $$

Proof

By (Serfling (1980), Theorem §1.3.6), the hypothesized conditions on the sequence Xn,k imply that \(X_{n,k}{\xrightarrow {L_{1}}} X_{k}\), that is, \({\lim }_{n\rightarrow \infty }\mathbb {E}|X_{n,k}-X_{k}|=0\). Then, invoking the triangle inequality, we have, with the understanding that Xn,k=0 a.s. for k>n, that

$$\begin{array}{@{}rcl@{}} \left|\sum\limits_{k=1}^{n}X_{n,k}-\sum\limits_{k=1}^{\infty}X_{k}\right| &=& \left|\sum\limits_{k=1}^{n}\left(X_{n,k}-X_{k}\right)-\sum\limits_{k=n+1}^{\infty}X_{k}\right| \leq \sum\limits_{k=1}^{\infty}\left|X_{n,k}-X_{k}\right|+\sum\limits_{k=n+1}^{\infty}\left|X_{k}\right|, \end{array} $$

whence

$${\lim}_{n\rightarrow\infty}\mathbb{E}\left|\sum\limits_{k=1}^{n}X_{n,k}\,-\,\sum\limits_{k=1}^{\infty}X_{k}\right| \leq {\lim}_{n\rightarrow\infty}\sum\limits_{k=1}^{\infty}\mathbb{E}\left|X_{n,k}\,-\,X_{k}\right|+{\lim}_{n\rightarrow\infty}\sum\limits_{k=n+1}^{\infty}\mathbb{E}\left|Y\right| \,=\, \sum\limits_{k=1}^{\infty}0+0=0, $$

and therefore \(\sum \nolimits _{k=1}^{n}X_{n,k}{\xrightarrow {L_{1}}} \sum \nolimits _{k=1}^{\infty }X_{k} \left (\text {and}\ \sum \nolimits _{k=1}^{\infty }X_{n,k}{\xrightarrow {L_{1}}} \sum \nolimits _{k=1}^{\infty }X_{k}\right)\). The result now follows because convergence in the L1 norm implies convergence in probability (Serfling (1980), Theorem §1.3.2). □

Proof of Theorem 1

We will show that

$$ M(t)=\mathbb{E}\left(e^{tX}\right)={\int\limits_{\theta}^{\infty}\frac{1}{\tau x}e^{tx}(x-\theta)^{\gamma} \left[\frac{\gamma}{(x-\theta)}-\frac{1}{x}\right]e^{\frac{-(x-\theta)^{\gamma}}{x\tau}}\,dx}<\infty, $$
(22)

for γ>2,τ>0 and θ≥0, in a neighborhood 0<t<ε. (Note that it suffices to consider t>0 throughout, since \(e^{tx}\leq e^{|t|x}\).) Letting x=y+θ, the mgf becomes

$$ M(t)=\frac{e^{t\theta}}{\tau}\displaystyle{\int\limits_{0}^{\infty} {\left[\frac{y^{\gamma}}{y+\theta}\left(\frac{\gamma}{y}-\frac{1}{y+\theta}\right)\right]\, \exp\left\{ty-\frac{y^{\gamma}}{{(y+\theta)}\tau}\right\}\,dy}}, $$
(23)

and note that we only need to check convergence of the integral near y=0 and y=∞. Hence we split the proof into the following 3 cases.

Case θ=0. Although this corresponds to a Weibull distribution, for which existence of the mgf is well known (Rinne 2009), we outline here a new argument that will bound the mgf, and will subsequently be repeated with minor changes in the θ>0 case.

$$\begin{array}{*{20}l} M(t)&=\frac{1}{\tau}{\int\limits_{0}^{\infty}{e^{tx}x^{\gamma-1}\left[\frac{\gamma-1}{x}\right] e^{{\frac{-x^{\gamma-1}}{\tau}}}}\,dx}=\frac{\gamma-1}{\tau}{\int\limits_{0}^{\infty}\exp \left\{tx-\frac{1}{\tau}x^{\gamma-1}\right\}x^{\gamma-2}}\,dx\\ &=\frac{\gamma-1}{\tau}{\int\limits_{0}^{\infty}\exp\left\{tx-\frac{1}{2\tau}x^{\gamma-1} -\frac{1}{2\tau}x^{\gamma-1}\right\}x^{\gamma-2}\,dx}. \end{array} $$

Now, splitting the integral, we have

$$M(t)=\frac{\gamma-1}{\tau}\left[\underbrace{{\int\limits_{0}^{b}{e^{{x} (t-{\frac{1}{2\tau}x^{\gamma-2}})}{e^{-\frac{1}{2\tau}x^{\gamma-1}}}x^{\gamma-2}}\,dx}}_{A}+ \underbrace{{\int\limits_{b}^{\infty}{e^{{x}(t-{\frac{1}{2\tau}x^{\gamma-2}})} {e^{-\frac{1}{2\tau}x^{\gamma-1}}}x^{\gamma-2}}\,dx}}_{B}\right], $$

and we seek to bound each of the integrals A and B, for some b>0 sufficiently large. Since A constitutes the integral of a smooth function over a finite range, it follows immediately that A<∞. For B, since for x>b sufficiently large, γ>2, and any fixed t>0 and τ>0, we have \(xt<x^{\gamma-1}/(2\tau)\), which implies \(\exp\{x(t-\tau^{-1}x^{\gamma-2}/2)\}<1\), this factor can be dropped from the integrand of B. Performing the substitution \(y=x^{\gamma-1}/(2\tau)\) then leads to

$$0<B<{\int\limits_{b}^{\infty}{e^{\frac{-x^{\gamma-1}}{2\tau}}{x^{\gamma-2}}\,dx}}< {\int\limits_{0}^{\infty}{e^{\frac{-x^{\gamma-1}}{2\tau}}{x^{\gamma-2}}\,dx}} =\frac{2\tau}{\gamma-1}{\int\limits_{0}^{\infty}{e^{-y}\,dy}}<\infty. $$

Case θ>0. Starting from (23), we also separate the integral into two,

$$\begin{aligned} M(t)&=\frac{e^{t\theta}}{\tau}\underbrace{\int\limits_{0}^{b}{e^{ty-\frac{y^{\gamma}}{{2(y+\theta)}\tau}-\frac{y^{\gamma}}{{2(y+\theta)}\tau}} \left[\frac{y^{\gamma-1}}{(y+\theta)^{2}}({\gamma}{(y+\theta)}-y)\right]\,dy}}_{A}\\ &\qquad\qquad\qquad+\frac{e^{t\theta}}{\tau}\underbrace{{\int\limits_{b}^{\infty}{e^{ty-\frac{y^{\gamma}}{{2(y+\theta)}\tau}-\frac{y^{\gamma}}{{2(y+\theta)}\tau}} \left[\frac{y^{\gamma-1}}{(y+\theta)^{2}}({\gamma}{(y+\theta)}-y)\right]\,dy}}}_{B}, \end{aligned} $$

whence by a similar argument to the previous case, we have 0<A<∞. In B, note that when b is sufficiently large, we have, for y>b,

$$ty-\frac{y^{\gamma}}{{2(y+\theta)}\tau}=y\Bigg(t-\frac{y^{\gamma-1}}{{2y}{(1+\frac{\theta}{y})}{\tau}}\Bigg)<0, $$

whence we can omit the exponential of this term, as before, since the part of the integrand involving it would be bounded by \(e^{0}=1\). Thus,

$$\begin{aligned} B&<{\int\limits_{b}^{\infty}{\exp\left\{-\frac{y^{\gamma}}{{2y}{\left(1+\frac{\theta}{y}\right)\tau}}\right\} \left[\frac{y^{\gamma-1}}{{y^{2}}{\left(1+\frac{\theta}{y}\right)}^{2}}\left({\gamma}{y}{(1+\theta/y)}-{y}\right)\right]\,dy}}\\ &\qquad\qquad\qquad={\int\limits_{b}^{\infty}{\exp\left\{-\frac{y^{\gamma-1}}{{2}{\left(1+\frac{\theta}{y}\right)\tau}}\right\} \left[\frac{y^{\gamma-2}}{{\left(1+\frac{\theta}{y}\right)}^{2}}\left({\gamma}{(1+\theta/y)}-1\right)\right]\,dy}}, \end{aligned} $$

Now, since \(2^{-2}<(1+\theta/y)^{-2}<1\) for y>b sufficiently large, we have

$$B<{\int\limits_{0}^{\infty}{y^{\gamma-2}}({2\gamma-1)}e^{-y^{\gamma-1}/(4\tau)}\,dy}\equiv C, $$

and performing the substitution \(x=y^{\gamma-1}/(4\tau)\) yields

$$C=\frac{4\tau(2\gamma-1)}{\gamma-1}{\int\limits_{0}^{\infty}e^{-x}\,dx}<\infty, $$

whence \(M(t)=A+B<A+C<\infty\).

Case γ=2. To show that the cutoff at γ=2 is sharp, let γ=2−ε, where ε>0 is small, t>0, and θ≥0. With these substitutions, and reverting back to the \(\tau=\delta^{\gamma}\) parametrization, we have

$$M(t)={\int\limits_{\theta}^{\infty}\exp\left\{tx-\frac{{(x-\theta)}^{2-\varepsilon}}{x\delta^{2-\varepsilon}}\right\} \frac{{(x-\theta)}^{2-\varepsilon}}{x\delta^{2-\varepsilon}} {\left[\frac{2-\varepsilon}{x-\theta}-\frac{1}{x}\right]}\,dx,} $$

whence, letting x=y+θ, and noting that the factor \(y^{2-\varepsilon}/[(y+\theta)\delta^{2-\varepsilon}]\) exceeds 1 for all y>b once b is sufficiently large, we can successively refine a lower bound on M(t) as follows:

$$\begin{aligned} M(t)&=e^{t\theta}\int\limits_{0}^{\infty}\exp\left\{ty-\frac{y^{2-\varepsilon}}{(y+\theta)\delta^{2-\varepsilon}}\right\} \frac{y^{2-\varepsilon}}{(y+\theta)\delta^{2-\varepsilon}} \left[\frac{2-\varepsilon}{y}-\frac{1}{y+\theta}\right]\,dy\\ &>{\int\limits_{b}^{\infty}\exp\left\{y\left(t-\frac{{y}^{1-\varepsilon}}{{(y+\theta)}\delta^{2-\varepsilon}}\right)\right\}\left[\frac{{(2-\varepsilon)}{(y+\theta)}-y}{y{(y+\theta)}}\right]\,dy}\\ &>{\int\limits_{b}^{\infty}\exp\left\{y\left(t-\frac{{y}^{-\varepsilon}}{\delta^{2-\varepsilon}}\right)\right\}{\left[\frac{{(2-\varepsilon)}{(y+\theta)}-(y+\theta)}{y{(y+\theta)}}\right]}\,dy}\\ &>(1-\varepsilon){\int\limits_{b}^{\infty}e^{y\left(t-b^{-\varepsilon}/\delta^{2-\varepsilon}\right)}\,\frac{1}{y}\,dy}, \end{aligned} $$

where the last step uses \(y^{-\varepsilon}<b^{-\varepsilon}\) for y>b. Choosing b large enough that \(t^{\prime}\equiv t-b^{-\varepsilon}/\delta^{2-\varepsilon}>0\), we obtain

$$M(t)>(1-\varepsilon){\int\limits_{b}^{\infty}\frac{e^{t^{\prime}y}}{y}\,dy}>(1-\varepsilon){\int\limits_{b}^{\infty}\frac{dy}{y}}=\infty. $$

Thus, for any γ<2, M(t) is infinite for every t>0, so the mgf is not finite in any neighborhood of t=0; and for γ=2 the tail of f decays like \(e^{-x/\tau}\), so that M(t)=∞ for t≥1/τ. Hence the requirement γ>2 in Theorem 1 is sharp.

Proof of Theorem 5

We invoke (Serfling (1980), Theorem §4.2.2), and (Van der Vaart (1998), Theorem 5.41), where we must establish (i)–(v) as follows.

  • (i) The absolute values of the two first order partials of f(x;β) are dominated by measurable functions in the vicinity of (γ0,τ0), whose integrals are finite. These derivatives are

    $$\begin{array}{@{}rcl@{}} \frac{1}{f(x;\boldsymbol{\beta})}\frac{\partial f(x;\boldsymbol{\beta})}{\partial \gamma} &=& \frac{x}{x(\gamma-1)+\theta} + \left[1-\frac{(x-\theta)^{\gamma}}{x\tau}\right]\log(x-\theta), \\ \frac{1}{f(x;\boldsymbol{\beta})}\frac{\partial f(x;\boldsymbol{\beta})}{\partial \tau} &=& \frac{(x-\theta)^{\gamma}-x\tau}{x\tau^{2}}, \end{array} $$

    whose absolute values are dominated by the function \(g_{1}(x)=A(1+x^{B})(1+|\log x|)\), for sufficiently large constants A and B, that is,

    $$\left|\frac{\partial f(x;\boldsymbol{\beta})}{\partial\gamma}\right|\leq g_{1}(x)f(x;\boldsymbol{\beta}), \qquad\text{and}\qquad \left|\frac{\partial f(x;\boldsymbol{\beta})}{\partial\tau}\right|\leq g_{1}(x)f(x;\boldsymbol{\beta}), $$

    whence \(\int g_{1}(x)f(x;\boldsymbol {\beta })dx=\mathbb {E} g_{1}(X)<\infty \).

  • (ii) The absolute values of the three second order partials of f(x;β) are dominated by measurable functions in the vicinity of (γ0,τ0), whose integrals are finite. For example, the derivative with highest order terms is

    $$\begin{array}{*{20}l} \frac{1}{f(x;\boldsymbol{\beta})}\frac{\partial^2 f(x;\boldsymbol{\beta})}{\partial \gamma^{2}} &= \left[\frac{2x}{x(\gamma-1)+\theta}-\frac{(x-\theta)^{\gamma}}{\tau(x(\gamma-1)+\theta)}\right]\log(x-\theta) \\ &\qquad\qquad\qquad\qquad+\left[1+\frac{(x-\theta)^{2\gamma}}{\tau^2x^{2}}-\frac{3(x-\theta)^{\gamma}}{x}\right]\log^{2}(x-\theta), \end{array} $$

    whose absolute value is dominated by the function \(g_{2}(x)=A(1+x^{B})(1+|\log x|+|\log x|^{2})\), for sufficiently large constants A and B, that is,

    $$\left|\frac{\partial^2 f(x;\boldsymbol{\beta})}{\partial\gamma^{2}}\right|\leq g_{2}(x)f(x;\boldsymbol{\beta}), $$

    whence \(\int g_{2}(x)f(x;\boldsymbol {\beta })dx=\mathbb {E} g_{2}(X)<\infty \). Tedious computations show that the remaining second order partials are likewise dominated by g2(x)f(x;β).

  • (iii) The absolute values of the four third order partials of logf(x;β) are dominated by integrable measurable functions in the vicinity of (γ0,τ0). The appropriate derivatives are:

    $$\begin{array}{@{}rcl@{}} \frac{\partial^{3}\log f(x;\boldsymbol{\beta})}{\partial\gamma^{3}} &=& {\frac {2{x}^{3}}{ \left[ \left(\gamma-1 \right) x+\theta \right]^{3}}}-{\frac { \left(x-\theta \right)^{\gamma}}{x\tau }}\log^{3}\left(x-\theta \right), \\ \frac{\partial^{3}\log f(x;\boldsymbol{\beta})}{\partial\gamma^{2}\partial\tau} &=& {\frac { \left(x-\theta \right)^{\gamma} }{x{\tau}^{2}}}\log^{2}(x-\theta), \\ \frac{\partial^{3}\log f(x;\boldsymbol{\beta})}{\partial\gamma\partial\tau^{2}} &=& -{\frac { 2\left(x-\theta\right)^{\gamma}}{x{\tau}^{3}}}\log(x-\theta), \\ \frac{\partial^{3}\log f(x;\boldsymbol{\beta})}{\partial\tau^{3}} &=& {\frac {6\,\left(x-\theta \right)^{\gamma}-2\,x\tau}{x{\tau}^{4}}}, \end{array} $$

    and we see that for (γ,τ) ranging over a sufficiently small neighborhood of (γ0,τ0), the absolute values of all of these are dominated by the general (integrable) function

    $$ g_{3}(x)=A\left(1+x^{B}\right)\left(1+|\log x|+\cdots+|\log x|^{3}\right), $$
    (24)

    since for sufficiently large constants A and B, we have \(\mathbb {E} g_{3}(X)<\infty \).

  • (iv) The Hessian matrix,

    $$J(\boldsymbol{\beta})=\mathbb{E}\left[\begin{array}{cc} -{\frac {{x}^{2}}{\left[\left(\gamma-1\right) x+\theta \right]^{2}}}-{\frac{\left(x-\theta \right)^{\gamma}}{x\tau}} \log^{2}\left(x-\theta \right) & {\frac{\left(x-\theta \right)^{\gamma}}{x{\tau}^{2}}}\log\left(x-\theta\right) \\ {\frac { \left(x-\theta \right)^{\gamma}}{x{\tau}^{2}}}\log\left(x-\theta\right) & {\frac{-2\left(x-\theta \right)^{\gamma}+x\tau}{x{\tau}^{3}}} \\ \end{array}\right], $$

    exists and is nonsingular at (θ0,γ0,τ0). The existence part is verified by noting that, as in case (v) below, each term is finite due to the fact that it is dominated by the general integrable function (24). The matrix will be nonsingular if the first and second columns are linearly independent (a.s.). A glance at these terms reveals that it is impossible for one to be a multiple of the other (with positive probability).

  • (v) The diagonal entries of I(β) are finite, \(I_{11}(\boldsymbol{\beta})<\infty\) and \(I_{22}(\boldsymbol{\beta})<\infty\), when evaluated at (θ0,γ0,τ0). Once again this follows similarly to case (iv) above by noting that the squares of each of the terms

    $$\frac{\partial\log f(x;\boldsymbol{\beta})}{\partial\gamma} = \log\left(x-\theta \right) +{\frac {x}{ \left(\gamma-1 \right)x+\theta}}-{\frac{\left(x-\theta \right)^{\gamma}}{x\tau}}\log\left(x-\theta \right), $$

    and

    $$\frac{\partial\log f(x;\boldsymbol{\beta})}{\partial\tau} = {\frac{\left(x-\theta \right)^{\gamma}-x\tau}{x{\tau}^{2}}}, $$

    are both dominated by the general integrable function (24). Note that the finiteness of the diagonals immediately implies that the off-diagonal term of I(β) is also finite.

Proof of Theorem 6

With \(\boldsymbol{\alpha}=(a,b)^{T}\), write model (19) in vector/matrix form,

$$ \boldsymbol{y} = Z\boldsymbol{\alpha}+\boldsymbol{\varepsilon}, $$
(25)

and note that the (limit in probability of the) least squares estimates can be written as

$$\begin{array}{@{}rcl@{}} {\text{plim}_{n\rightarrow\infty}}\hat{\boldsymbol{\alpha}} &=& \boldsymbol{\alpha} + \left({\text{plim}_{n\rightarrow\infty}}\frac{1}{n-1}Z^{T} Z\right)^{-1}\left({\text{plim}_{n\rightarrow\infty}}\frac{1}{n-1}Z^{T}\boldsymbol{\varepsilon}\right) \\ &\equiv &\boldsymbol{\alpha} + \left({\text{plim}_{n\rightarrow\infty}} W\right)^{-1}\left({\text{plim}_{n\rightarrow\infty}}\boldsymbol{w}\right). \end{array} $$

To establish the required result, we will show that w=op(1), and W=Op(1). We will first derive the following basic results.

  • (i) \({\text{plim}_{n\rightarrow\infty}} X_{(1)}=\theta\). This follows easily by noting that

    $$P(X_{(1)}>x) = \exp\left\{-\frac{(x-\theta)^{\gamma}}{x\tau/n} \right\}, $$

    whence X(1)∼LDSWeibull (θ,γ,τ/n), so that for x>θ, \(P(X_{(1)}>x)\rightarrow e^{-\infty}=0\), which implies \(P(X_{(1)}\leq x)\rightarrow 1\) as n→∞; whereas \(P(X_{(1)}>\theta)=e^{-0}=1\), so that \(P(X_{(1)}\leq\theta)=0\).

  • (ii) \({\text{plim}_{n\rightarrow\infty}} X_{(i)}=\xi_{i}=F^{-1}(q_{i})\), for \(2\leq i\leq n\). This is a consequence of the asymptotic normality of X(i), which is a consistent estimate of the central quantile ξi. The asymptotic normality follows from the fact that F(·) is differentiable and f(ξi)>0 (David and Nagaraja (2003), Chapter 10).

  • (iii) From (i) and (ii), it follows immediately that \({\text{plim}_{n\rightarrow\infty}} y_{i}=\log\left[-\log(1-q_{i})\right]+\log\xi_{i}\), and \({\text{plim}_{n\rightarrow\infty}} z_{i}=\log(\xi_{i}-\theta)\).

  • (iv) In fact, since yi and zi are dominated by an integrable function (simply replace x(i) by x(n) in their corresponding definitions), the convergence holds in the stronger L1 sense:

    $$y_i {\xrightarrow{L_{1}}} \log\left[-\log(1-q_i)\right]+\log\xi_{i}\equiv y_{i}^\ast, \qquad\text{and}\qquad z_i {\xrightarrow{L_{1}}} \log(\xi_i-\theta)\equiv z_{i}^\ast. $$

    This follows because dominated convergence in probability implies L1 norm convergence (Lemma 2).

  • (v) Now note that since

    $$\xi_i = F^{-1}(q_i) \quad\Longleftrightarrow\quad \log\left[-\log(1-q_i)\right]+\log\xi_i = \gamma\log(\xi_i-\theta)-\log\tau, $$

    \(y_{i}^{\ast}\) and \(z_{i}^{\ast}\) defined in (iv) satisfy the population regression equation: \(y_{i}^{\ast}=a+bz_{i}^{\ast}\).

  • (vi) We can state the results in (iv)-(v) equivalently as:

    $$\varepsilon_i=y_i-(a+{bz}_i) {\xrightarrow{L_{1}}} y_{i}^\ast-(a+{bz}_{i}^\ast)=0, $$

    and since L1 norm convergence implies the weaker convergence in probability (see proof of Lemma 2), we have that \({\text{plim}_{n\rightarrow\infty}}\varepsilon_{i}=0\).

  • (vii) Thus, from (iv)-(vi), we have that for any real numbers λ1 and λ2, \((\lambda _1+\lambda _2z_i)\varepsilon _{i}{\xrightarrow {L_{1}}} 0\), which also implies \({\text{plim}_{n\rightarrow\infty}}(\lambda_1+\lambda_2z_i)\varepsilon_{i}=0\).

To prove the first assertion (that plimnw=0), invoke the Cramer-Wold device and Lemma 2 to see that, for any vector of reals λ=(λ1,λ2)T, and using the result in (vii),

$${\text{plim}_{n\rightarrow\infty}} \boldsymbol{\lambda}^{T} Z^{T}\boldsymbol{\varepsilon} = {\text{plim}_{n\rightarrow\infty}}\sum\limits_{i=2}^{n}(\lambda_1+\lambda_2z_i)\varepsilon_i = \sum\limits_{i=2}^{n}{\text{plim}_{n\rightarrow\infty}}(\lambda_1+\lambda_2z_i)\varepsilon_i = 0, $$

whence plimnZTε=0, and therefore

$$\boldsymbol{w} = \frac{1}{n-1}Z^{T}\boldsymbol{\varepsilon} = \left(\frac{1}{n-1}\right)\left(Z^{T}\boldsymbol{\varepsilon}\right) = o_{p}(1)o_{p}(1) = o_{p}(1). $$

To prove the second assertion (that W is bounded in probability), note that

$$W=\frac{1}{n-1}Z^{T} Z = \left[\begin{array}{cc} 1 & \frac{1}{n-1}\sum\nolimits_{i=2}^{n} z_{i} \\ \frac{1}{n-1}\sum\nolimits_{i=2}^{n} z_{i} & \frac{1}{n-1}\sum\nolimits_{i=2}^{n} z_{i}^{2} \\ \end{array}\right]. $$

An informal argument will now suffice. In the limit as n→∞, since the quantiles ξi are dense in the support of X, we have from (ii), and using the transformation u=F(x), that

$${\text{plim}_{n\rightarrow\infty}}\frac{1}{n}\sum\limits_{i=1}^{n} X_{(i)}= \lim\limits_{n\rightarrow\infty}\frac{1}{n}\sum\limits_{i=1}^{n}\xi_i=\int\nolimits_{0}^{1} F^{-1}(u)du = \mathbb{E} (X). $$

which generalizes immediately to

$${\text{plim}_{n\rightarrow\infty}}\frac{1}{n}\sum\limits_{i=1}^{n} g(X_{(i)})=\lim\limits_{n\rightarrow\infty}\frac{1}{n}\sum\limits_{i=1}^{n} g(\xi_i)=\int\nolimits_{0}^{1} g(F^{-1}(u))du = \mathbb{E} g(X), $$

for any integrable function g(·). Heuristically then, the fact that \({\text{plim}_{n\rightarrow\infty}} z_{i}=\log(\xi_{i}-\theta)\) implies that

$$ {\begin{aligned} {\text{plim}_{n\rightarrow\infty}}\frac{1}{n-1}\sum\limits_{i=2}^{n} z_{i} = \mathbb{E}^{(T)}\log(X-\theta),\qquad\text{and}\qquad {\text{plim}_{n\rightarrow\infty}}\frac{1}{n-1}\sum\limits_{i=2}^{n} z_{i}^{2} = \mathbb{E}^{(T)}\log^{2}(X-\theta), \end{aligned}} $$
(26)

where \(\mathbb {E}^{(T)}\) denotes possible resulting truncation in the expectation operator in view of the fact that the summations begin at i=2 and may not span the entire support of the quantile function (see Remark 3). Now, since each of the sample averages in (26) is Op(1), we deduce that

$$ {\begin{aligned} {\text{plim}_{n\rightarrow\infty}}\left(\frac{1}{n-1}\sum\limits_{i=2}^{n} z_{i}\right)^{2} = [\mathbb{E}^{(T)}\log(X-\theta)]^{2}\neq\mathbb{E}^{(T)}\log^{2}(X-\theta)={\text{plim}_{n\rightarrow\infty}}\frac{1}{n-1}\sum\limits_{i=2}^{n} z_{i}^{2}. \end{aligned}} $$
(27)

whence we conclude that W is a.s. nonsingular and therefore Op(1).

Availability of data and materials

The dataset analyzed in the current study is available in the CRAN R package bReeze repository [https://CRAN.R-project.org/package=bReeze].

Notes

  1. Standard result for the cumulant generating function of any random variable, easily established by invoking Hölder’s inequality.

Abbreviations

cdf:

Cumulative distribution function

CV:

Coefficient of variation

iid:

Independent and identically distributed

mgf:

Moment generating function

MLE:

Maximum likelihood estimator

References

  • Bourguignon, M., Silva, R. B., Cordeiro, G. M.: The Weibull-G family of probability distributions. J. Data Sci. 12, 53–68 (2014).

  • David, H. A., Nagaraja, H. N.: Order Statistics. 3rd edition. Wiley, New York (2003).

  • Greenwood, J. A., Landwehr, J. M., Matalas, N. C., Wallis, J. R.: Probability weighted moments: definition and relation to parameters of several distributions expressable in inverse form. Water Resour. Res. 15(5), 1049–1054 (1979).

  • Gurvich, M. R., Dibenedetto, A. T., Ranade, S. V.: A new statistical distribution for characterizing the random strength of brittle materials. J. Mater. Sci. 32(10), 2559–2564 (1997).

  • Hosking, J. R. M.: L-moments: Analysis and estimation of distributions using linear combinations of order statistics. J. R. Stat. Soc. Ser. B (Methodol.) 52(1), 105–124 (1990).

  • Masters, G. M.: Renewable and efficient electric power systems. Wiley, Hoboken, New Jersey (2013).

  • Rinne, H.: The Weibull distribution: a handbook. CRC Press, Boca Raton (2009).

  • Schervish, M. J.: Theory of statistics. Springer, New York (1995).

  • Serfling, R. J.: Approximation Theorems of Mathematical Statistics. Wiley, New York (1980).

  • Van der Vaart, A. W.: Asymptotic Statistics. Cambridge university press, Cambridge (1998).

  • Weibull, W.: A statistical distribution function of wide applicability. J. Appl. Mech. 18(3), 293–297 (1951).

Acknowledgements

Not applicable.

Funding

Not applicable.

Author information

Contributions

RWB developed the proof of Theorem 1, and spent considerable effort in trying to obtain analytical forms for several integrals that did not make it into the paper. CP worked closely with AAT to develop the results of “Formal definition, basic properties, and results” and “Estimation procedures and asymptotic results” sections, and was largely responsible for conducting the simulations in “Simulation results” section. JS was the main instigator of the paper, responsible for deriving the new distribution from first principles as described in the Introduction. He also spent considerable time looking for possible applications and datasets. AAT worked closely with CP to develop the results of “Formal definition, basic properties, and results” and “Estimation procedures and asymptotic results” sections, and was largely responsible for the real data analysis of “Real data application” section. All authors read and approved the final manuscript.

Corresponding author

Correspondence to A. Alexandre Trindade.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Barnard, R.W., Perera, C., Surles, J.G. et al. The linearly decreasing stress Weibull (LDSWeibull): a new Weibull-like distribution. J Stat Distrib App 6, 11 (2019). https://doi.org/10.1186/s40488-019-0100-8
