The linearly decreasing stress Weibull (LDS-Weibull): a new Weibull-like distribution
Journal of Statistical Distributions and Applications volume 6, Article number: 11 (2019)
Abstract
Motivated by an engineering pullout test applied to a steel strip embedded in earth, we show how the resulting linearly decreasing force leads naturally to a new distribution, if the force under constant stress is modeled via a three-parameter Weibull. We term this the LDS-Weibull distribution, and show that inference on the parameters of the underlying Weibull can be made upon collection of data from such pullout tests. Various classical finite-sample and asymptotic properties of the LDS-Weibull are studied, including existence of moments, distribution of extremes, and maximum likelihood based inference under different regimes. The LDS-Weibull is shown to have many similarities with the Weibull, but does not suffer from the problem of having an unbounded likelihood function under certain parameter configurations. We demonstrate that the quality of its fit can also be very competitive with that of the Weibull in certain applications.
Introduction
Mechanically stabilized earth is a method of constructing vertical retaining walls which is often seen in overpasses in populated metropolitan areas where space is at a premium. It consists of reinforcements which are buried in soil in layers. These reinforcements are attached to a vertical facing wall. The types of reinforcements vary, but are generally classified as either inextensible (steel) or extensible (polymeric). Of interest here are the steel reinforcements which are generally flat steel strips, flat steel strips with ribs on them, or welded wire mats which look like a ladder or a grid. The strip reinforcements are generally 50 mm (2 inches) wide and 4 mm (0.16 inches) thick.
Consider the case of the smooth steel strip. These types of reinforcements are not generally used in construction, but are often studied in the laboratory. If a smooth steel strip were used, while in service the stress that it would be subject to would be equal along its entire length (nominally, assuming a constant soil pressure). To establish the serviceability of these reinforcements, they are subjected to what is known as a pullout test. That is, they are embedded in backfill and an axial force is applied to the head of the reinforcement. A frictional stress is generated along the reinforcement and soil interface, and this frictional stress is cumulative so that the stress at the head of the reinforcement is equal to the total frictional stress that the entire strip is experiencing, while the stress at the middle is half of that, and so on. This results in a continuous, linearly decreasing force within the reinforcement from the head to the tail. This is much like the linkage between two cars in a train, which must withstand only the stress placed upon it by the cars behind. Thus if a train is being pulled by only locomotives in the front, the linkages between cars at the front of the train are subjected to much more stress than those near the rear of the train. (This train analogy is apt because ribbed strip or the wire mat reinforcements will largely behave in this manner.)
The data gathered from pullout tests can be used to estimate the survival distribution of reinforcements under the conditions of the pullout test, but how can it be used to estimate the survival distribution under actual service conditions?
The Weibull distribution, named for Waloddi Weibull, was popularized by Weibull in his papers from 1939 through 1961, the key paper being (Weibull 1951). It has found wide applicability in engineering practice. In his work, Weibull was studying the strength of materials, but the distribution actually appeared somewhat earlier than that in the late 1920’s in the study of extreme values; see (Rinne 2009) for a thorough review. (It should be noted that Weibull was unaware of this earlier work and derived his distribution independently.) In particular, it arises as the minimum (or maximum) of a random sample with support that is bounded below (for the minimum) or above (for the maximum). The old proverb is that the strength of a chain is equal to the strength of its weakest link (the minimum). The proverb may also be applied to the strength of materials in that the strength of the material is equal to the strength of its weakest point. So it is no surprise that the Weibull distribution arises in the study of the strength of materials and has found wide applicability.
Suppose that it is reasonable to assume that a smooth steel strip reinforcement has a Weibull survival distribution were it exposed to a constant stress along its length. It is well known that the minimum of independent and identically distributed (iid) Weibull random variables has a Weibull distribution. That is, suppose that Y_{1},Y_{2},…,Y_{n} are iid Weibull with shape β, location/threshold μ, and scale σ under the following parametrization for the cumulative distribution function (cdf):
$$F(y)=1-\exp\left\{-\left(\frac{y-\mu}{\sigma}\right)^{\beta}\right\},\qquad y>\mu.$$
This 3-parameter Weibull will be referred to as Weibull (μ,β,σ). Then Y_{(1)}=min(Y_{1},Y_{2},…,Y_{n}) has cdf given by:
$$F_{Y_{(1)}}(y)=1-\left[1-F(y)\right]^{n}=1-\exp\left\{-n\left(\frac{y-\mu}{\sigma}\right)^{\beta}\right\},\qquad y>\mu.$$
That is, Y_{(1)} is Weibull with shape β, location μ, and scale σ/n^{1/β}.
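This closure-under-minima property is easy to check numerically. A small sketch (with illustrative parameter values) verifies the cdf identity 1−[1−F(y)]^n against the claimed Weibull with scale σ/n^{1/β}:

```python
import math

def weibull_cdf(y, mu, beta, sigma):
    # three-parameter Weibull cdf: F(y) = 1 - exp{-((y - mu)/sigma)^beta}, y > mu
    if y <= mu:
        return 0.0
    return 1.0 - math.exp(-(((y - mu) / sigma) ** beta))

mu, beta, sigma, n = 1.0, 2.5, 3.0, 7
for y in [1.5, 2.0, 4.0, 8.0]:
    # cdf of the minimum of n iid draws
    cdf_min = 1.0 - (1.0 - weibull_cdf(y, mu, beta, sigma)) ** n
    # claimed closed form: same shape and location, scale sigma / n^(1/beta)
    cdf_claim = weibull_cdf(y, mu, beta, sigma / n ** (1.0 / beta))
    assert abs(cdf_min - cdf_claim) < 1e-12
```

The identity holds exactly because 1−[1−F(y)]^n = 1−exp{−n((y−μ)/σ)^β}, and the factor n is absorbed into the scale.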
Consider now a continuous system of length L that is viewed as being composed of n independent “links” of equal length. Assume that the strength of the entire system is Weibull(β,μ,σ), and that the strengths of the individual links are also Weibull. Then each link must have a Weibull (β,μ,σn^{(1/β)}) distribution. Note that this requires that as the number of links, n, increases, the scale increases in a corresponding fashion. That is, shorter links are stronger links (stochastically).
One end is denoted the “head” (location 0) and the other the “tail” (location L). The head is exposed to a stress S_{0}, which decreases linearly along the system to 0 at the tail. The stress at location l is thus \(S_{l}=S_{0}\left (1-\frac {l}{L}\right)\). If we view the system as before, that is, as having a Weibull strength and being composed of n independent “links” that are also Weibull, what is the distribution of the system under these conditions as a function of S_{0}?
Suppose that we have Y_{1},Y_{2},…,Y_{n} that are iid Weibull (β,μ,σn^{(1/β)}). The system reliability is given by:
$$R(S_{0})=\prod_{i=1}^{n}P\left(Y_{i}>S_{0}\left[1-\frac{i}{n}\right]\right).$$
Note that if \(S_{0}\left [1-\frac {i}{n}\right ]<\mu \), then the probability associated with that i is 1, since the strength Y_{i} must be greater than μ; so the product need only run to the largest k such that
where ⌊·⌋ is the floor function.
Thus,
Taking the natural log and the limit as n tends to infinity:
where \(k=\text {min}\left \{n-1, \left \lfloor n\left (1-\frac {\mu }{S_{0}}\right)\right \rfloor \right \}\). For sufficiently large n, we can approximate this sum using an integral as:
$$\log R(S_{0})\approx-\int_{0}^{1-\frac{\mu}{S_{0}}}\left(\frac{S_{0}(1-l)-\mu}{\sigma}\right)^{\beta}dl=-\frac{\left(S_{0}-\mu\right)^{\beta+1}}{\sigma^{\beta}(\beta+1)S_{0}}.$$
Thus, \(R(S_{0}) = \exp \left \{-\frac {\left (S_{0}-\mu \right)^{\beta +1}}{\sigma ^{\beta }(\beta +1)S_{0}}\right \}, S_{0}>\mu \). Note that in the case of the standard two-parameter Weibull (μ=0), the resulting reliability is Weibull with shape β and scale σ(β+1)^{1/β}.
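The limiting argument can be checked numerically: for large n, the finite product of link survival probabilities should approach the closed form for R(S_0). A sketch with illustrative parameter values:

```python
import math

def reliability_product(S0, mu, beta, sigma, n):
    # finite-n system reliability: product over links of P(Y_i > S0*(1 - i/n)),
    # where each link is Weibull with scale sigma * n^(1/beta)
    scale = sigma * n ** (1.0 / beta)
    log_r = 0.0
    for i in range(1, n + 1):
        s = S0 * (1.0 - i / n)
        if s > mu:  # links whose stress falls below mu survive with probability 1
            log_r -= ((s - mu) / scale) ** beta
    return math.exp(log_r)

def reliability_limit(S0, mu, beta, sigma):
    # closed-form limit: R(S0) = exp{-(S0 - mu)^(beta+1) / (sigma^beta (beta+1) S0)}
    return math.exp(-((S0 - mu) ** (beta + 1)) / (sigma ** beta * (beta + 1) * S0))

S0, mu, beta, sigma = 5.0, 1.0, 2.0, 3.0
approx = reliability_product(S0, mu, beta, sigma, n=200000)
exact = reliability_limit(S0, mu, beta, sigma)
assert abs(approx - exact) < 1e-3
```

With n = 200,000 links the Riemann-sum error in the log is of order 1/n, so the agreement is close.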
Consider a reparametrization with θ=μ, γ=β+1, and δ=[σ^{β}(β+1)]^{1/(β+1)}. This yields a cdf (which is one minus the reliability) of the form:
$$F(x)=1-\exp\left\{-\frac{(x-\theta)^{\gamma}}{\delta^{\gamma}x}\right\},\qquad x>\theta.$$
Note that this bridges the gap and allows us to estimate the reliability of in-service steel strip reinforcements (which are exposed to a constant stress along their length and have a Weibull survival distribution) via the results of pullout tests (which expose the strip to a linearly decreasing stress along its length and have the distribution derived above). That is, if we obtain a sample under pullout test conditions and estimate the parameters θ, γ, and δ via maximum likelihood estimation (MLE) and obtain \(\hat {\theta }\), \(\hat {\gamma }\), and \(\hat {\delta }\), then by invariance the MLEs for the parameters of the Weibull are:
$$\hat{\mu}=\hat{\theta},\qquad \hat{\beta}=\hat{\gamma}-1,\qquad \hat{\sigma}=\left(\hat{\delta}^{\hat{\gamma}}/\hat{\gamma}\right)^{1/(\hat{\gamma}-1)}.$$
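The inverse map can be sketched in code. Assuming (as the reparametrization θ=μ, γ=β+1, δ=[σ^β(β+1)]^{1/(β+1)} implies) that μ=θ, β=γ−1, and σ=(δ^γ/γ)^{1/(γ−1)}, a round trip should recover the Weibull parameters exactly:

```python
def weibull_to_lds(mu, beta, sigma):
    # forward reparametrization: theta = mu, gamma = beta + 1,
    # delta = [sigma^beta * (beta + 1)]^(1/(beta + 1))
    theta = mu
    gamma = beta + 1.0
    delta = (sigma ** beta * (beta + 1.0)) ** (1.0 / (beta + 1.0))
    return theta, gamma, delta

def lds_to_weibull(theta, gamma, delta):
    # inverse map (derived by inverting the forward map; note delta^gamma = sigma^beta * gamma)
    mu = theta
    beta = gamma - 1.0
    sigma = (delta ** gamma / gamma) ** (1.0 / (gamma - 1.0))
    return mu, beta, sigma

mu, beta, sigma = 0.5, 1.8, 2.2
round_trip = lds_to_weibull(*weibull_to_lds(mu, beta, sigma))
assert all(abs(a - b) < 1e-12 for a, b in zip(round_trip, (mu, beta, sigma)))
```

By invariance of MLEs under one-to-one reparametrization, applying `lds_to_weibull` to the fitted LDS-Weibull parameters yields the Weibull estimates.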
We term the distribution arrived at in the above discussion the linearly decreasing stress Weibull (LDS-Weibull), a new Weibull-like distribution. A formal definition along with the derivation of basic and classical properties is presented in “Formal definition, basic properties, and results” section. Maximum likelihood and other types of estimation procedures, along with accompanying asymptotic results, are developed in “Estimation procedures and asymptotic results” section. These procedures are subsequently investigated with simulation studies in “Simulation results” section. We conclude the paper with an application on real data in “Real data application” section.
Formal definition, basic properties, and results
This section formally introduces the LDS-Weibull and derives basic properties. It is obvious from (7) that θ is a pseudo-location parameter, γ is a shape parameter, and δ a pseudo-scale parameter. For ease of handling, it will be more convenient to work with the following one-to-one reparametrization, (θ,γ,δ)↦β=(θ,γ,τ), where τ=δ^{γ}⇔δ=τ^{1/γ}, whence τ>0 is more obviously seen to be a pseudo-scale parameter.
Definition 1
The LDS-Weibull (θ,γ,τ) has parameter space: Ω={(θ,γ,τ):θ≥0, γ>1, τ>0}. Its cdf is given by:
$$F(x;\theta,\gamma,\tau)=1-\exp\left\{-\frac{(x-\theta)^{\gamma}}{\tau x}\right\},\qquad x>\theta.$$
The density function is therefore:
$$f(x;\theta,\gamma,\tau)=\frac{(x-\theta)^{\gamma-1}\left[(\gamma-1)x+\theta\right]}{\tau x^{2}}\exp\left\{-\frac{(x-\theta)^{\gamma}}{\tau x}\right\},\qquad x>\theta.$$
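As a numerical sanity check, the sketch below implements the cdf F(x)=1−exp{−(x−θ)^γ/(τx)} and compares a central finite difference of F against the density obtained by differentiating it (illustrative parameter values):

```python
import math

def lds_cdf(x, theta, gamma, tau):
    # LDS-Weibull cdf: F(x) = 1 - exp{-(x - theta)^gamma / (tau * x)}, x > theta
    if x <= theta:
        return 0.0
    return 1.0 - math.exp(-((x - theta) ** gamma) / (tau * x))

def lds_pdf(x, theta, gamma, tau):
    # density from differentiating F:
    # f(x) = (x-theta)^(gamma-1) * ((gamma-1)x + theta) / (tau x^2) * exp{-(x-theta)^gamma/(tau x)}
    if x <= theta:
        return 0.0
    h = (x - theta) ** gamma / (tau * x)
    return (x - theta) ** (gamma - 1) * ((gamma - 1) * x + theta) / (tau * x * x) * math.exp(-h)

theta, gamma, tau = 1.0, 2.5, 4.0
for x in [1.5, 2.0, 3.0, 6.0]:
    eps = 1e-6
    fd = (lds_cdf(x + eps, theta, gamma, tau) - lds_cdf(x - eps, theta, gamma, tau)) / (2 * eps)
    assert abs(fd - lds_pdf(x, theta, gamma, tau)) < 1e-6
```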
Note that the LDS-Weibull inherits parameter identifiability from the Weibull, since the transformation (8) is one-to-one.
Remark 1
For θ=0 the LDS-Weibull (0,γ,τ) is a two-parameter Weibull with shape γ−1, and scale τ^{1/(γ−1)}.
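Remark 1 is easy to verify numerically, since for θ=0 the cdf exponent reduces to x^{γ−1}/τ. A quick sketch at a few illustrative points:

```python
import math

def lds_cdf(x, theta, gamma, tau):
    # LDS-Weibull cdf: 1 - exp{-(x - theta)^gamma / (tau * x)}
    return 0.0 if x <= theta else 1.0 - math.exp(-((x - theta) ** gamma) / (tau * x))

def weibull2_cdf(x, shape, scale):
    # two-parameter Weibull cdf
    return 0.0 if x <= 0 else 1.0 - math.exp(-((x / scale) ** shape))

gamma, tau = 2.7, 1.9
shape, scale = gamma - 1.0, tau ** (1.0 / (gamma - 1.0))
for x in [0.2, 0.8, 1.5, 4.0]:
    assert abs(lds_cdf(x, 0.0, gamma, tau) - weibull2_cdf(x, shape, scale)) < 1e-12
```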
Apart from its intimate connection with the Weibull, there are possibly many related distributions that overlap with the proposed LDS-Weibull. Note that written in the form
$$G(x;\alpha,\boldsymbol{\beta})=1-\exp\left\{-\alpha H(x;\boldsymbol{\beta})\right\},$$
the cdf can more generally be seen to be a member of the very broad class of distributions introduced by (Gurvich et al. 1997), that can be generated from the Weibull by taking H(x;β) to be a nonnegative and monotone increasing function, possibly depending on the vector of parameters β. In our case, the LDS-Weibull (θ,γ,τ) is obtained by setting α=1/τ and H(x;θ,γ)=(x−θ)^{γ}/x. (Bourguignon et al. 2014) introduce an interesting variant of G(x;α,β) by taking H(x;β) to be a positive power of the ratio of any continuous cdf and its survival function, but the LDS-Weibull (θ,γ,τ) does not appear to obey that particular construct.
Existence of moment generating function
We have managed to determine necessary and sufficient conditions for existence of the moment generating function (mgf). These conditions impose a restriction on the shape parameter γ.
Theorem 1
The mgf of the LDS-Weibull (θ,γ,τ) satisfies \(M(t) = \mathbb {E} e^{tX}<\infty \), for all \(t\in \mathbb {R}\) in the restricted parameter range:
$$\Omega_{C}=\left\{(\theta,\gamma,\tau):\ \theta\geq 0,\ \gamma>2,\ \tau>0\right\}.$$
Otherwise, if γ≤2, then M(t)=∞ for all t>0.
Proof
See Appendix: Proof of Theorem 1. □
Attempts at finding a closed form for M(t) (in terms of known special functions) have not, however, yielded positive results. This leads to challenges in devising preliminary parameter estimators such as method of moments. Due to the lack of an analytic expression for the quantile function, and the usual intractability of moments of order statistics, alternative treatments such as probability weighted moments (Greenwood et al. 1979) and L-moments (Hosking 1990), do not appear to be feasible either.
Completeness and minimal sufficiency
There is little hope of determining a complete statistic, but it is not hard to show that the order statistics are minimal sufficient.
Theorem 2
For a random sample X=(X_{1},…,X_{n}) from the LDS-Weibull (θ,γ,τ), the order statistics T(X)=(X_{(1)},…,X_{(n)}) are minimal sufficient.
Proof
Let x=(x_{1},…,x_{n}) and y=(y_{1},…,y_{n}) denote two independent random samples from the LDS-Weibull (θ,γ,τ), where the log-density of f(x) is given by:
We note that the family is (trivially) dominated by Lebesgue measure, and hence invoking (Schervish (1995), Theorem 2.29), we need only show that, for any fixed choice of (θ,γ,τ):
To this end, and ignoring summands that depend on x and/or y only, note that
Now, it is obvious that T(x)=T(y) immediately implies logf(x)− logf(y)=0, which is therefore independent of the parameters. To see that the converse is also true, note that the only way (11) can be independent of (θ,γ,τ), is if each of the three summands is itself free of (θ,γ,τ), whence we must have
Because of the intimate connections with θ and γ, these three requirements can only be met if T(x)=T(y). □
Distribution of the extremes
For a random sample X_{1},…,X_{n} from the LDS-Weibull (θ,γ,τ), we now consider the distributions of the minimum and maximum order statistics, X_{(1)} and X_{(n)}, respectively. Some exact results can be obtained using the usual techniques. Specifically, the survival function of X_{(1)} is given by
$$P\left(X_{(1)}>x\right)=\left[1-F(x)\right]^{n}=\exp\left\{-\frac{n(x-\theta)^{\gamma}}{\tau x}\right\},\qquad x>\theta,$$
which implies that X_{(1)} is therefore LDS-Weibull (θ,γ,τ/n). The cdf of X_{(n)} is of course F(x)^{n}, but this does not appear to have an immediately recognizable form.
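The claim about the minimum can be confirmed directly from the cdf; the sketch below (illustrative values) compares 1−[1−F(x)]^n against the LDS-Weibull cdf with scale parameter τ/n:

```python
import math

def lds_cdf(x, theta, gamma, tau):
    # LDS-Weibull cdf: 1 - exp{-(x - theta)^gamma / (tau * x)}
    return 0.0 if x <= theta else 1.0 - math.exp(-((x - theta) ** gamma) / (tau * x))

theta, gamma, tau, n = 0.5, 2.2, 3.0, 12
for x in [0.8, 1.1, 2.0, 5.0]:
    # cdf of the sample minimum: 1 - [1 - F(x)]^n
    cdf_min = 1.0 - (1.0 - lds_cdf(x, theta, gamma, tau)) ** n
    # claim: the minimum is LDS-Weibull(theta, gamma, tau/n)
    assert abs(cdf_min - lds_cdf(x, theta, gamma, tau / n)) < 1e-12
```

The identity is exact because raising the survival function to the n-th power multiplies the exponent by n, which is equivalent to dividing τ by n.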
It is also possible to obtain the asymptotic distribution of the (appropriately normalized) extremes by invoking the Fisher–Tippett Theorem; see e.g., David and Nagaraja (2003, §10.5). The following theorem reveals that the extremes of the LDS-Weibull are in the domain of attraction of the Gumbel.
Theorem 3
Let X_{(1)} and X_{(n)} denote the minimum and maximum order statistics, respectively, in a random sample from the LDS-Weibull (θ,γ,τ) with parameter space Ω, as in Definition 1. Then we have the following convergence in distribution results, for any \(x\in \mathbb {R}\).

For the maximum,
$${\lim}_{n\rightarrow\infty}P\left(\frac{X_{(n)}-a_{n}}{b_{n}}\leq x\right) = \exp\left\{-e^{-x}\right\}. $$ 
For the minimum,
$${\lim}_{n\rightarrow\infty}P\left(\frac{X_{(1)}-a_{n}}{b_{n}}\leq x\right) = 1-\exp\left\{-e^{x}\right\}. $$
In each case, the normalizing constants a_{n} and b_{n} can be chosen to satisfy the pair of equations
Proof
Note that the derivative of the inverse of the hazard function is
Since γ>1, each of these terms is of order O(1/x^{ε}), for some ε>0, and thus they all converge to zero as x→∞. The required result now follows by invoking (David and Nagaraja (2003), Theorem 10.5.2). □
Estimation procedures and asymptotic results
For a random sample x_{1},…,x_{n} from the LDS-Weibull (θ,γ,τ), we have the log-likelihood function:
$$\ell(\boldsymbol{\beta})=\sum_{i=1}^{n}\left[(\gamma-1)\log(x_{i}-\theta)+\log\left\{(\gamma-1)x_{i}+\theta\right\}-\log\tau-2\log x_{i}-\frac{(x_{i}-\theta)^{\gamma}}{\tau x_{i}}\right].$$
Denote by β_{0}=(θ_{0},γ_{0},τ_{0})^{T} the true parameter vector.
Remark 2
Note that, unlike the Weibull, this log-likelihood function is bounded and will therefore have a nondegenerate extremum for all (θ,γ,τ)∈Ω. As discussed in (Rinne (2009), §11.3.2), a known issue with the 3-parameter Weibull in (1) is that as μ→x_{(1)}, (β−1) log(x_{(1)}−μ)→∞ for β<1, and therefore the MLE of β does not exist when β<1.
To demonstrate the spectrum of possibilities for the various regimes of the MLEs, we will now consider the following subset of just three special cases taken from the exhaustive list of all 7 possible combinations of known and unknown parameter values.
Case 1: (γ,τ) known
It would appear that the maximizing value of θ should occur at the boundary value x_{(1)}; however, the first two derivatives yield:
$$\begin{array}{@{}rcl@{}} \frac{\partial\ell}{\partial\theta} &=& \sum_{i=1}^{n}\left[-\frac{\gamma-1}{x_{i}-\theta}+\frac{1}{(\gamma-1)x_{i}+\theta}+\frac{\gamma(x_{i}-\theta)^{\gamma-1}}{\tau x_{i}}\right], \\ \frac{\partial^{2}\ell}{\partial\theta^{2}} &=& -\sum_{i=1}^{n}\left[\frac{\gamma-1}{(x_{i}-\theta)^{2}}+\frac{1}{\left[(\gamma-1)x_{i}+\theta\right]^{2}}+\frac{\gamma(\gamma-1)(x_{i}-\theta)^{\gamma-2}}{\tau x_{i}}\right]. \end{array}$$
Since each of the summands in the second derivative is positive, it follows that ∂^{2}ℓ/∂θ^{2}<0, whence ℓ(β) is concave in θ, and thus the MLE is the unique maximum of ℓ(β), and occurs at an interior point, albeit close to x_{(1)} (which can therefore be used as an initial estimate).
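A rough numerical illustration of Case 1 is sketched below. Everything here is ours: the sampler inverts the cdf by bisection (no closed-form quantile function exists), the log-likelihood uses the density obtained by differentiating the cdf, and a simple grid search over (0, x_(1)) stands in for a proper root-finder, which suffices because the likelihood is concave in θ:

```python
import math, random

def lds_cdf(x, theta, gamma, tau):
    # LDS-Weibull cdf: 1 - exp{-(x - theta)^gamma / (tau * x)}
    return 0.0 if x <= theta else 1.0 - math.exp(-((x - theta) ** gamma) / (tau * x))

def lds_sample(n, theta, gamma, tau, rng):
    # inverse-transform sampling via bisection on the cdf
    out = []
    for _ in range(n):
        u = rng.random()
        lo, hi = theta, theta + 1.0
        while lds_cdf(hi, theta, gamma, tau) < u:   # bracket the quantile from above
            hi = theta + 2.0 * (hi - theta)
        for _ in range(60):                          # bisect to high precision
            mid = 0.5 * (lo + hi)
            if lds_cdf(mid, theta, gamma, tau) < u:
                lo = mid
            else:
                hi = mid
        out.append(0.5 * (lo + hi))
    return out

def loglik_theta(t, xs, gamma, tau):
    # log-likelihood in theta with (gamma, tau) held fixed at their known values
    ll = 0.0
    for x in xs:
        if x <= t:
            return -math.inf
        ll += ((gamma - 1.0) * math.log(x - t) + math.log((gamma - 1.0) * x + t)
               - math.log(tau) - 2.0 * math.log(x) - (x - t) ** gamma / (tau * x))
    return ll

rng = random.Random(42)
theta0, gamma0, tau0 = 1.0, 2.5, 3.0
xs = lds_sample(300, theta0, gamma0, tau0, rng)
x1 = min(xs)
grid = [x1 * i / 500.0 for i in range(1, 500)]       # interior points of (0, x_(1))
theta_hat = max(grid, key=lambda t: loglik_theta(t, xs, gamma0, tau0))
assert 0.0 < theta_hat < x1
assert abs(theta_hat - theta0) < 0.5
```

Consistent with Theorem 4 below, the maximizer lands at an interior point below x_(1) and near the true θ.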
Theorem 4
Let β_{0} be in the restricted parameter range Ω_{C}, with γ_{0} and τ_{0} known. Then the MLE \(\hat {\theta }_{{\gamma }_{0},\tau _{0}}\) of θ_{0} is consistent.
Proof
Take the (continuous) estimating equation (of which \(\hat {\theta }_{{\gamma }_{0},\tau _{0}}\) is the unique root by the above argument) to be
by the weak law of large numbers applied to each of the sample averages. Since this limiting value is welldefined for each θ, consistency of the unique root \(\hat {\theta }_{{\gamma }_{0},\tau _{0}}\) follows by (Van der Vaart (1998), Lemma 5.10). □
Case 2: θ known
With the argument r as placeholder for γ, first define the terms:
and note that
The corresponding score functions are then:
Solving ψ_{3}(β)=0 leads to the profile MLE for τ, \(\hat {\tau }_{\theta }=S_{\theta }(\gamma)/n\), whence substitution into ψ_{2} leads to the profile score equation for γ
with solution \(\hat {\gamma }_{\theta }\).
It will be more convenient to write the score function (15) in normalized form:
Thus the MLEs for γ_{0} and τ_{0} satisfy the equations
where, due to the monotonicity property in Proposition 1, \(\hat {\gamma }_{\theta _{0}}\) is easily determined as either the boundary value \(\hat {\gamma }_{\theta _{0}}=1\), or as the unique root of \(h_{\theta _{0}}(r)\) in (16).
Proposition 1
The function h_{θ}(r) in (16) is monotone decreasing over the interval r>1.
Proof
First note that R_{θ}(r) is monotone decreasing, since
We now show that the term T_{θ}(r)=S_{θ}′(r)/S_{θ}(r) is monotone increasing, whence the desired result will follow since h_{θ}(r) will then be the sum of a constant term, Q_{θ}, and two monotone decreasing functions. To this end, write \(S_{\theta }(r)=\sum y_{i}^r/x_i=tM(r)\), where \(y_i=x_i-\theta >0\), \(t=\sum x_{i}^{-1}\), and \(M(r)=\sum p_ie^{rz_i}\) corresponds to the moment generating function of a discrete random variable (say Z), with values z_{i}= log y_{i}, and masses 0≤p_{i}=(tx_{i})^{−1}≤1, i=1,…,n. This is sufficient to establish the result, since noting that the cumulant generating function K(r)= logM(r) is convex, we have
□
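The profile identity \(\hat{\tau}_{\theta}=S_{\theta}(\gamma)/n\), with \(S_{\theta}(r)=\sum_{i}(x_{i}-\theta)^{r}/x_{i}\) as above, can be checked on a toy data set. The sketch below (illustrative values; the log-likelihood uses the density obtained by differentiating the cdf) confirms that no other value of τ attains a higher log-likelihood:

```python
import math

def loglik(xs, theta, gamma, tau):
    # LDS-Weibull log-likelihood at fixed (theta, gamma, tau)
    ll = 0.0
    for x in xs:
        ll += ((gamma - 1.0) * math.log(x - theta) + math.log((gamma - 1.0) * x + theta)
               - math.log(tau) - 2.0 * math.log(x) - (x - theta) ** gamma / (tau * x))
    return ll

xs = [1.3, 1.9, 2.4, 3.1, 3.8, 4.6, 5.2]        # illustrative data, all > theta
theta, gamma = 1.0, 2.5
S = sum((x - theta) ** gamma / x for x in xs)   # S_theta(gamma)
tau_hat = S / len(xs)                            # profile MLE for tau

best = loglik(xs, theta, gamma, tau_hat)
for tau in [0.5 * tau_hat, 0.9 * tau_hat, 1.1 * tau_hat, 2.0 * tau_hat]:
    assert loglik(xs, theta, gamma, tau) < best
```

The identity follows because ℓ is of the form −n log τ − S_θ(γ)/τ + const in τ, which has a unique maximum at τ = S_θ(γ)/n.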
The question of whether or not \(\hat {\gamma }_{\theta _{0}}\) ever attains the boundary value of 1 is interesting. It is certainly possible to construct a set of real values x_{1},…,x_{n} such that h_{θ}(1)<0, but whether or not such values correspond to bona fide realizations from an LDSWeibull over some set with positive measure remains an open issue. It is however possible to establish a limiting result as follows.
Proposition 2
Let β_{0} be in the restricted parameter range Ω_{C}, and suppose that plim_{n→∞}h_{θ}(1)>0. Then, with probability 1 in the limit as n→∞, \(\hat {\gamma }_{\theta _{0}}\) is the unique root of \(h_{\theta _{0}}(r)\).
Proof
Due to Proposition 1, it suffices to show that plim_{n→∞}h_{θ}(∞)<0. Defining Y=X−θ, note that Y>0 a.s., and \({\text {plim}_{n\rightarrow \infty }} Q_{\theta }/n=\mathbb {E}\log Y\) by the weak law of large numbers. (Note that the finiteness of all moments for logY follows from the finiteness of all moments for X with parameters in Ω_{C}.) Since \(R_{\theta }(\infty)={\lim }_{r\rightarrow \infty }R_{\theta }(r)/n=0\), it follows that \(R_{\theta }(\infty)/n{\xrightarrow {p}} 0\). Now assume (without loss of generality) that 0<y_{1}=y_{(1)}≤⋯≤y_{(n)}<∞ are ordered, and note that in view of the representation
Lemma 1 in the Appendix is applicable with \(c_{i}(r)=(\frac {y_i}{y_1})/x_{i}^{1/r}\), since for sufficiently large r, we have for i<j:
whence it follows that T_{θ}(∞)= logy_{(n)} and therefore plim_{n→∞}T_{θ}(∞)=∞. Putting everything together gives:
□
Identifiability of the LDSWeibull model combined with third order differentiability of logf(x;β), plus domination of appropriate derivatives of the latter as well as f(x;β) by integrable functions, establishes consistency and asymptotic efficiency of the MLEs directly via classical conditions.
Theorem 5
Let β_{0} be in the restricted parameter range Ω_{C}, with θ_{0} known. Then, the MLEs \(\hat {\gamma }_{\theta _{0}}\) and \(\hat {\tau }_{\theta _{0}}\) of γ_{0} and τ_{0}, respectively, satisfy:
where J(β) is the Hessian matrix of logf(x;β), and I(β) is the Fisher Information matrix (per observation), each defined accordingly as:
Proof
See Appendix: Proof of Theorem 5. □
Case 3: All parameters unknown
To start, a nonparametric estimator of the survival (or reliability) function should be provided for each x_{(i)}, where x_{(i)} is the ith order statistic. The usual empirical survival function is \(\hat {S}(x_{(i)}) = (n-i)/n\), but we employ instead a common adjustment, \(\hat {S}(x_{(i)})=(n-i+1)/(n+1)\), to avoid the problematic situation of log(0) when i=n.
Now replace θ with the consistent estimate x_{(1)}, and equate empirical and population survival functions at x_{(i)}:
$$\frac{n-i+1}{n+1}=\exp\left\{-\frac{\left(x_{(i)}-x_{(1)}\right)^{\gamma}}{\tau x_{(i)}}\right\}.$$
(A perhaps more common justification of (18) is to note the well-known property of uniform order statistics: \(\mathbb {E}[F(X_{(i)})]=i/(n+1)\).) Performing a log-log transformation of both sides then leads to:
$$\log\left[x_{(i)}\log\left(\frac{n+1}{n-i+1}\right)\right]=\gamma\log\left(x_{(i)}-x_{(1)}\right)-\log\tau.$$
Denoting by y_{i} the left-hand side of the above expression, and z_{i}= log(x_{(i)}−x_{(1)}), we have, with the addition of the error term ε_{i}=y_{i}−(a+bz_{i}), the linear regression model
$$y_{i}=a+bz_{i}+\varepsilon_{i}.$$
Obtaining the least squares estimates \(\hat {a}\) and \(\hat {b}\) for the regression parameters yields the following starting values for the LDS-Weibull model parameters:
$$\theta^{(0)}=x_{(1)},\qquad \gamma^{(0)}=\hat{b},\qquad \tau^{(0)}=e^{-\hat{a}}.$$
Armed with these initial values, which are consistent by the next theorem, one can employ an efficient optimization algorithm to maximize (13) and obtain the MLEs \((\hat {\theta },\hat {\gamma },\hat {\tau })\). We note in passing that the procedure outlined above is nearly identical to the so-called “regression method” for estimating the parameters of the generalized extreme value distribution; see e.g., (Rinne (2009), Chapter 10).
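One plausible reading of the regression method can be sketched as follows. Everything here is derived directly from the cdf (−log S(x) = (x−θ)^γ/(τx) gives log[x(−log Ŝ(x))] = γ log(x−θ) − log τ, so the slope estimates γ and e^{−intercept} estimates τ); for a deterministic check, each order statistic is placed at its population quantile i/(n+1), and the i=1 term is dropped since log(x_(1)−x_(1)) is undefined:

```python
import math

def lds_cdf(x, theta, gamma, tau):
    # LDS-Weibull cdf: 1 - exp{-(x - theta)^gamma / (tau * x)}
    return 0.0 if x <= theta else 1.0 - math.exp(-((x - theta) ** gamma) / (tau * x))

def lds_quantile(u, theta, gamma, tau):
    # numerical quantile via bisection (no closed form exists)
    lo, hi = theta, theta + 1.0
    while lds_cdf(hi, theta, gamma, tau) < u:
        hi = theta + 2.0 * (hi - theta)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if lds_cdf(mid, theta, gamma, tau) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta0, gamma0, tau0, n = 1.0, 2.5, 3.0, 200
# idealized sample: the i-th order statistic at its population quantile i/(n+1)
xs = [lds_quantile(i / (n + 1), theta0, gamma0, tau0) for i in range(1, n + 1)]

theta_init = xs[0]                        # theta^(0) = x_(1)
pts = []
for i in range(2, n + 1):                 # skip i = 1 (log of zero)
    s_hat = (n - i + 1) / (n + 1)         # adjusted empirical survival at x_(i)
    y = math.log(xs[i - 1] * (-math.log(s_hat)))
    z = math.log(xs[i - 1] - theta_init)
    pts.append((z, y))

zbar = sum(z for z, _ in pts) / len(pts)
ybar = sum(y for _, y in pts) / len(pts)
b = sum((z - zbar) * (y - ybar) for z, y in pts) / sum((z - zbar) ** 2 for z, _ in pts)
a = ybar - b * zbar
gamma_init, tau_init = b, math.exp(-a)    # slope estimates gamma; e^{-intercept} estimates tau

assert abs(theta_init - theta0) < 0.3
assert 1.5 < gamma_init < 3.0
assert tau_init > 0
```

The slope is attenuated somewhat because x_(1) overshoots θ in the z_i terms, which is consistent with these values being starting points rather than final estimates.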
Remark 3
(Central quantile limiting behavior) Note that consistent estimation of the right-hand side of (18) subsumes the following limiting behavior for the integer 2≤i≤n appearing in the left-hand side of (18):
where ξ_{i} is the population quantile corresponding to q_{i}, and for notational expedience we omit the implicit dependence i≡i(n) in the limiting behavior of order statistics for the central quantile case (David and Nagaraja (2003), Chapter 10). (Note however that q_{i} and ξ_{i} on the right-hand side of (21) do not depend on n.)
Theorem 6
Let β_{0} be in the restricted parameter range Ω_{C}, and assume the central quantile limiting behavior in Remark 3. Then, the initial estimates β^{(0)}=(θ^{(0)},γ^{(0)},τ^{(0)})^{T} given by (20) resulting from a random sample X_{1},…,X_{n} from the LDS-Weibull (θ,γ,τ), are consistent for β_{0}.
Proof
See Appendix: Proof of Theorem 6. □
Simulation results
In this section we carry out a small simulation study to investigate the sampling properties of the MLEs for the three cases defined in “Estimation procedures and asymptotic results” section. To this end, Tables 1, 2, and 3 report the bias, variance, mean squared error (MSE), and coefficient of variation (CV) of the MLEs, empirically determined from 1000 simulated realizations.
We see a consistent decrease in all the metrics (bias, variance, MSE, CV) with increasing sample size, as expected. Interestingly, it appears that in general the parameter τ suffers from the most uncertainty, particularly noticeable in some large CV values at the smaller sample sizes.
Real data application
The motivating derivation of the LDS-Weibull in Section 1 behooves us to apply it to the results of a pullout test in order to infer the parameters of the underlying Weibull according to (8). Lacking such data, in this section we illustrate an application where the LDS-Weibull (θ,γ,τ) provides a competitive fit to the Weibull (μ,β,σ).
To assess the prospective wind power at a given site, a distribution is often fit to the observed wind speeds. Although different locations tend to have different wind speed profiles, the Weibull has been found to closely mirror the actual distribution of hourly/ten-minute wind speeds at many locations (Masters 2013). In these cases the Weibull shape parameter β is often close to 2, and a Rayleigh distribution can therefore be used, offering a less accurate but simpler model.
The R package bReeze contains the data set “winddata”, consisting of measured wind speed and direction at 10-min intervals collected by a meteorological mast, for a total of 36,548 consecutive observations on 17 variables. Of these variables, we selected winddata$v1_40m_max, which contains the maximum wind speed (m/s) over each 10-min interval recorded by the mast at a height of 40 m above ground level. We divided this long time series into 252 shorter time series of length n=144, each comprising the maximum wind speeds over a 24-h period (144 10-min intervals). The 6 (anomalous) wind speed values of zero were simply discarded before creating the resulting 252 time series data sets.
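The preprocessing step can be sketched as follows, with simulated stand-in values in place of the real winddata series (the actual data set lives in the R package bReeze; all values and the single injected zero below are illustrative, not the real data):

```python
import random

def make_daily_series(speeds, day_len=144):
    # drop (anomalous) zero speeds, then split into consecutive day-long blocks
    # of 144 ten-minute maxima, discarding a final partial block
    cleaned = [v for v in speeds if v > 0]
    n_days = len(cleaned) // day_len
    return [cleaned[i * day_len:(i + 1) * day_len] for i in range(n_days)]

# hypothetical stand-in for winddata$v1_40m_max: Weibull-ish speeds, one injected zero
rng = random.Random(1)
speeds = [rng.weibullvariate(6.0, 2.0) for _ in range(36548)]
speeds[100] = 0.0

days = make_daily_series(speeds)
assert all(len(d) == 144 for d in days)
assert len(days) == (36548 - 1) // 144
```

With the real data (36,548 observations minus 6 zeros) the same chunking yields the 252 complete day-long series used in the analysis.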
Parameters for the two distributions were estimated for each of these 252 data sets, and the differences in the attained maximized log-likelihood (LDS-Weibull minus Weibull) recorded. No parameter restrictions were placed on the LDS-Weibull (θ,γ,τ), but for compatibility with the LDS-Weibull and the reason mentioned in Remark 2, the parameter space for the Weibull (μ,β,σ) was restricted to μ>0, β≥1, and σ>0. Summary statistics for these log-likelihood differences are listed in Table 4. We can see that, although the Weibull fits better approximately 75% of the time, the difference is typically very small.
The top panels of Figs. 1 and 2 show some typical series where the differences in the log-likelihood are in excess of 5, and between 1 and 5, respectively. The corresponding LDS vs. Weibull marginal fits are displayed in the bottom panels as solid and dot-dash lines, respectively. The dashed KDE line tracking the shaded histogram corresponds to kernel density estimation. These plots typify two regimes: (i) generally calm days with a moderate burst of wind in Fig. 1, and (ii) a windy day with higher bursts (possibly as a consequence of a storm) in Fig. 2. The first regime is characterized by Weibull fits that coincide with an exponential (β=1), whereas the second is more of the Rayleigh type (β≈2).
Although we do not seek an exhaustive analysis here but merely an illustrative one, it is interesting to consider the question of goodness-of-fit. Anderson–Darling (AD) and Kolmogorov–Smirnov (KS) tests yield p-values lower than 10^{−4} in all the cases of Fig. 1, confirming the suspicion that neither distribution is sufficiently rich to capture this regime. The second regime of Fig. 2 is different however, as shown in Table 5. At the usual 5% significance level, the Weibull model fits soundly only on Day 116, whereas the LDS-Weibull fits in all but Day 26. In all of these examples, the distinctive feature is that the LDS-Weibull model appears to be better able to resolve the peaks.
Appendix
Lemmas
Lemma 1
Let 0<y_{1}≤⋯≤y_{n}<∞ be an ordered sample of positive real numbers. Then, for any continuous function g(·),
provided that for some sufficiently large r^{∗}, we have the ordering 0<c_{1}(r)<⋯<c_{n}(r)<∞ for all r≥r^{∗}.
Proof
Considering the terms in the denominator of the above summand, note that, for r≥r^{∗}, since c_{i}/c_{j}<1 if i<j, and c_{i}/c_{j}>1 if i>j, we have that:
Thus the first n−1 denominators of U_{n}(r), corresponding to i=1,…,n−1, converge to ∞ as r→∞, while the last denominator converges to 1, which gives:
□
Lemma 2
Let X_{n,k}, 1≤k≤n, be a triangular array of random variables such that (i) plim_{n→∞}X_{n,k}=X_{k}, and (ii) |X_{n,k}|≤Y a.s. for all n, with \(\mathbb {E}Y<\infty \). Then, it follows that:
Proof
By (Serfling (1980), Theorem §1.3.6), the hypothesized conditions on the sequence X_{n,k} imply that \(X_{n,k}{\xrightarrow {L_{1}}} X_{k}\), that is, \({\lim }_{n\rightarrow \infty }\mathbb {E}\left|X_{n,k}-X_{k}\right|=0\). Then, invoking the triangle inequality, we have, with the understanding that X_{n,k}=0 a.s. for k>n, that
whence
and therefore \(\sum \nolimits _{k=1}^{n}X_{n,k}{\xrightarrow {L_{1}}} \sum \nolimits _{k=1}^{\infty }X_{k} \left (\text {and}\ \sum \nolimits _{k=1}^{\infty }X_{n,k}{\xrightarrow {L_{1}}} \sum \nolimits _{k=1}^{\infty }X_{k}\right)\). The result now follows because convergence in the L_{1} norm implies convergence in probability (Serfling (1980), Theorem §1.3.2). □
Proof of Theorem 1
We will show that
for γ>2,τ>0 and θ≥0, in a neighborhood 0<t<ε. (Note that it suffices to consider t>0 throughout since e^{−tx}<e^{tx}, for t>0). Letting x=y+θ, the mgf becomes
and note that we only need to check convergence for θ near 0 and ∞. Hence we split the proof into the following 3 cases. Case θ=0. Although this corresponds to a Weibull distribution, for which existence of the mgf is wellknown Rinne (2009), we outline here a new argument that will bound the mgf, and will subsequently be repeated with minor changes in the θ>0 case.
Now, splitting the integral, we have
and we seek to bound each of the integrals A and B, for some b>0 sufficiently large. Since A constitutes the integral of a smooth function over a finite range, it follows immediately that A<∞. For B, since for x>b sufficiently large, γ>2, and any fixed t>0 and τ>0, we have xt<x^{γ−1}/(2τ) which implies exp{x(t−τ^{−1}x^{γ−2}/2)}<1, this term can be dropped from the integrand of B. Performing the substitution y=x^{γ−1}/(2τ), then leads to
Case θ>0. Starting from (23), we also separate the integral into two,
whence by a similar argument to the previous case, we have 0<A<∞. In B, note that when b is sufficiently large, we have, for y>b,
whence we can omit the exponential of this term, as before, since the part of the integrand involving it would be bounded by e^{0}=1. Thus,
Now, since 2^{−2}<(1+θ/y)^{−2}<1^{−2} for y>b sufficiently large, we have
and performing the substitution x=y^{γ−1}/(4τ), yields
whence M(t)=A+B<A+C<∞. Case γ=2. To show γ=2 is sharp, let γ=2−ε, where 0<ε is small, t>0, and θ≥0. With these substitutions, reverting back to the τ=δ^{γ} parametrization, we have
whence, letting x=y+θ, we can successively refine the lower bound on M(t) as follows:
for any 0<b<∞. Substituting x=ty, the mgf becomes
Thus M(t) will not be finite in any neighborhood of t=0, whence γ=2 is a sharp upper bound on the divergence of the mgf.
Proof of Theorem 5
We invoke (Serfling (1980), Theorem §4.2.2), and (Van der Vaart (1998), Theorem 5.41), where we must establish (i)–(v) as follows.

The absolute values of the two first order partials of f(x;β) are dominated by measurable functions in the vicinity of (γ_{0},τ_{0}), whose integrals are finite. These derivatives are
$$\begin{array}{@{}rcl@{}} \frac{1}{f(x;\boldsymbol{\beta})}\frac{\partial f(x;\boldsymbol{\beta})}{\partial \gamma} &=& \frac{x}{x(\gamma-1)+\theta} + \left[1-\frac{(x-\theta)^{\gamma}}{x\tau}\right]\log(x-\theta), \\ \frac{1}{f(x;\boldsymbol{\beta})}\frac{\partial f(x;\boldsymbol{\beta})}{\partial \tau} &=& \frac{(x-\theta)^{\gamma}-x\tau}{x\tau^{2}}, \end{array} $$whose absolute values are dominated by the function g_{1}(x)=A(1+x^{B})(1+|\log x|), for sufficiently large constants A and B, that is,
$$\left|\frac{\partial f(x;\boldsymbol{\beta})}{\partial\gamma}\right|\leq g_{1}(x)f(x;\boldsymbol{\beta}), \qquad\text{and}\qquad \left|\frac{\partial f(x;\boldsymbol{\beta})}{\partial\tau}\right|\leq g_{1}(x)f(x;\boldsymbol{\beta}), $$whence \(\int g_{1}(x)f(x;\boldsymbol {\beta })dx=\mathbb {E} g_{1}(X)<\infty \).

The absolute values of the three 2nd order partials of f(x;β) are dominated by measurable functions in the vicinity of (γ_{0},τ_{0}), whose integrals are finite. For example, the derivative with highest order terms is
$$\begin{array}{*{20}l} \frac{1}{f(x;\boldsymbol{\beta})}\frac{\partial^2 f(x;\boldsymbol{\beta})}{\partial \gamma^{2}} &= \left[\frac{2x}{x(\gamma-1)+\theta}-\frac{2(x-\theta)^{\gamma}}{\tau(x(\gamma-1)+\theta)}\right]\log(x-\theta) \\ &\qquad\qquad\qquad\qquad+\left[1+\frac{(x-\theta)^{2\gamma}}{\tau^2x^{2}}-\frac{3(x-\theta)^{\gamma}}{\tau x}\right]\log^{2}(x-\theta), \end{array} $$whose absolute value is dominated by the function g_{2}(x)=A(1+x^{B})(1+|\log x|+|\log x|^{2}), for sufficiently large constants A and B, that is,
$$\left|\frac{\partial^2 f(x;\boldsymbol{\beta})}{\partial\gamma^{2}}\right|\leq g_{2}(x)f(x;\boldsymbol{\beta}), $$whence \(\int g_{2}(x)f(x;\boldsymbol {\beta })dx=\mathbb {E} g_{2}(X)<\infty \). Tedious computations show that the remaining second order partials are likewise dominated by g_{2}(x)f(x;β).

The absolute values of the four third order partials of logf(x;β) are dominated by integrable measurable functions in the vicinity of (γ_{0},τ_{0}). The appropriate derivatives are:
$$\begin{array}{@{}rcl@{}} \frac{\partial^{3}\log f(x;\boldsymbol{\beta})}{\partial\gamma^{3}} &=& {\frac {2{x}^{3}}{ \left[ \left(\gamma-1 \right) x+\theta \right]^{3}}}-{\frac { \left(x-\theta \right)^{\gamma}}{x\tau }}\log^{3}\left(x-\theta \right), \\ \frac{\partial^{3}\log f(x;\boldsymbol{\beta})}{\partial\gamma^{2}\partial\tau} &=& {\frac { \left(x-\theta \right)^{\gamma} }{x{\tau}^{2}}}\log^{2}(x-\theta), \\ \frac{\partial^{3}\log f(x;\boldsymbol{\beta})}{\partial\gamma\partial\tau^{2}} &=& -{\frac { 2\left(x-\theta\right)^{\gamma}}{x{\tau}^{3}}}\log(x-\theta), \\ \frac{\partial^{3}\log f(x;\boldsymbol{\beta})}{\partial\tau^{3}} &=& {\frac {6\left(x-\theta \right)^{\gamma}-2x\tau}{x{\tau}^{4}}}, \end{array} $$and we see that for (γ,τ) ranging over a sufficiently small neighborhood of (γ_{0},τ_{0}), the absolute values of all of these are dominated by the general (integrable) function
$$ g_{3}(x)=A\left(1+x^{B}\right)\left(1+|\log x|+\cdots+|\log x|^{3}\right), \qquad (24) $$since for sufficiently large constants A and B, we have \(\mathbb {E} g_{3}(X)<\infty \).
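Two of the four third-order partials above can be spot-checked against third-order finite differences of the log-density. As before, the sketch assumes the LDSWeibull log-density implied by the survival function used in the proof of Theorem 6:

```python
import math

def log_pdf(x, theta, gamma, tau):
    # log f(x; beta) implied by F(x) = 1 - exp{-(x - theta)^gamma / (x tau)}
    return ((gamma - 1) * math.log(x - theta)
            + math.log((gamma - 1) * x + theta)
            - 2 * math.log(x) - math.log(tau)
            - (x - theta) ** gamma / (x * tau))

def third_diff(g, t, h):
    # central stencil for a third derivative: [g(t+2h) - 2g(t+h) + 2g(t-h) - g(t-2h)] / (2h^3)
    return (g(t + 2*h) - 2*g(t + h) + 2*g(t - h) - g(t - 2*h)) / (2 * h ** 3)

x, theta, gamma, tau, h = 3.0, 1.0, 2.5, 0.7, 1e-3

# d^3 log f / d tau^3, closed form as displayed
d3_tau = (6 * (x - theta) ** gamma - 2 * x * tau) / (x * tau ** 4)
assert abs(third_diff(lambda t: log_pdf(x, theta, gamma, t), tau, h) - d3_tau) < 1e-2

# d^3 log f / d gamma^3, closed form as displayed
d3_gam = (2 * x ** 3 / ((gamma - 1) * x + theta) ** 3
          - (x - theta) ** gamma / (x * tau) * math.log(x - theta) ** 3)
assert abs(third_diff(lambda g: log_pdf(x, theta, g, tau), gamma, h) - d3_gam) < 1e-2
```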

The Hessian matrix,
$$J(\boldsymbol{\beta})=\mathbb{E}\left[\begin{array}{cc} -{\frac {{x}^{2}}{\left[\left(\gamma-1\right) x+\theta \right]^{2}}}-{\frac{\left(x-\theta \right)^{\gamma}}{x\tau}} \log^{2}\left(x-\theta \right) & {\frac{\left(x-\theta \right)^{\gamma}}{x{\tau}^{2}}}\log\left(x-\theta\right) \\ {\frac { \left(x-\theta \right)^{\gamma}}{x{\tau}^{2}}}\log\left(x-\theta\right) & {\frac{x\tau-2\left(x-\theta \right)^{\gamma}}{x{\tau}^{3}}} \\ \end{array}\right], $$exists and is nonsingular at (θ_{0},γ_{0},τ_{0}). The existence part is verified by noting that, as in case (v) below, each term is finite due to the fact that it is dominated by the general integrable function (24). The matrix will be nonsingular if the first and second columns are linearly independent (a.s.). A glance at these terms reveals that it is impossible for one to be a multiple of the other (with positive probability).
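Nonsingularity can also be probed by Monte Carlo: average the three second-order partials appearing in J(β) over simulated draws and inspect the resulting 2×2 matrix. The sketch below uses a bisection quantile sampler of our own construction and illustrative parameter values; since −J is the Fisher information, the estimated diagonal should be negative and the determinant positive.

```python
import math, random

def ldsw_quantile(u, th, g, t, hi=1e6):
    # invert F(x) = 1 - exp{-(x - th)^g / (x t)} by bisection (monotone for g > 1)
    target = -math.log(1.0 - u)
    lo, up = th, hi
    for _ in range(80):
        mid = 0.5 * (lo + up)
        if (mid - th) ** g / (mid * t) < target:
            lo = mid
        else:
            up = mid
    return 0.5 * (lo + up)

def hessian_terms(x, theta, gamma, tau):
    # the three second-order partials of log f inside the expectation
    lg = math.log(x - theta)
    c = (x - theta) ** gamma / (x * tau)
    h11 = -x ** 2 / ((gamma - 1) * x + theta) ** 2 - c * lg ** 2
    h12 = (x - theta) ** gamma / (x * tau ** 2) * lg
    h22 = (x * tau - 2 * (x - theta) ** gamma) / (x * tau ** 3)
    return h11, h12, h22

random.seed(3)
theta, gamma, tau, n = 1.0, 2.5, 0.7, 20000
s11 = s12 = s22 = 0.0
for _ in range(n):
    x = ldsw_quantile(random.random(), theta, gamma, tau)
    h11, h12, h22 = hessian_terms(x, theta, gamma, tau)
    s11 += h11; s12 += h12; s22 += h22
J = [[s11 / n, s12 / n], [s12 / n, s22 / n]]
# -J estimates the Fisher information: negative diagonal, positive determinant
assert J[0][0] < 0 and J[1][1] < 0
assert J[0][0] * J[1][1] - J[0][1] ** 2 > 0
```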

The diagonal entries of I(β) are finite, I_{11}(β)<∞ and I_{22}(β)<∞, when evaluated at (θ_{0},γ_{0},τ_{0}). Once again this follows similarly to case (iv) above by noting that the squares of each of the terms
$$\frac{\partial\log f(x;\boldsymbol{\beta})}{\partial\gamma} = \log\left(x-\theta \right) +{\frac {x}{ \left(\gamma-1 \right)x+\theta}}-{\frac{\left(x-\theta \right)^{\gamma}}{x\tau}}\log\left(x-\theta \right), $$and
$$\frac{\partial\log f(x;\boldsymbol{\beta})}{\partial\tau} = {\frac{\left(x-\theta \right)^{\gamma}-x\tau}{x{\tau}^{2}}}, $$are both dominated by the general integrable function (24). Note that the finiteness of the diagonals immediately implies that the off-diagonal term of I(β) is also finite.
Proof of Theorem 6
With α=(a,b)^{T}, write model (19) in vector/matrix form, y=Zα+ε, where the rows of Z are (1,z_{i}),
and note that the (limit in probability of the) least squares estimates can be written as
$$\hat{\boldsymbol{\alpha}} = \left(Z^{T}Z\right)^{-1}Z^{T}\mathbf{y} = \boldsymbol{\alpha} + \underbrace{\left(\frac{Z^{T}Z}{n}\right)^{-1}}_{W}\,\underbrace{\frac{Z^{T}\boldsymbol{\varepsilon}}{n}}_{w}. $$
To establish the required result, we will show that w=o_{p}(1), and W=O_{p}(1). We will first derive the following basic results.

plim_{n→∞}X_{(1)}=θ. This follows easily by noting that
$$P(X_{(1)}>x) = \exp\left\{-\frac{(x-\theta)^{\gamma}}{x\tau/n} \right\}, $$whence X_{(1)}∼LDSWeibull(θ,γ,τ/n), so that for x>θ, P(X_{(1)}>x)→e^{−∞}=0, which implies P(X_{(1)}≤x)→1 as n→∞; whereas P(X_{(1)}>θ)=e^{−0}=1, so that P(X_{(1)}≤θ)=0.
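Both facts in (i) are easy to confirm numerically: the identity P(X_{(1)}>x)=S(x)^{n}=exp{−(x−θ)^γ/(xτ/n)}, and the concentration of the sample minimum just above θ. A sketch, assuming the LDSWeibull survival function above and a bisection inverter of our own construction:

```python
import math, random

theta, gamma, tau, n = 1.0, 2.5, 0.7, 2000

def survival(x, th, g, t):
    # S(x) = P(X > x) = exp{-(x - th)^g / (x t)} for x > th
    return math.exp(-(x - th) ** g / (x * t))

# the iid minimum satisfies P(X_(1) > x) = S(x)^n, the LDSWeibull(theta, gamma, tau/n) survival
x0 = 1.01
assert abs(survival(x0, theta, gamma, tau) ** n
           - survival(x0, theta, gamma, tau / n)) < 1e-9

def ldsw_quantile(u, th, g, t, hi=1e6):
    # invert F = 1 - S by bisection; monotone on (th, infinity) for g > 1
    target = -math.log(1.0 - u)
    lo, up = th, hi
    for _ in range(80):
        mid = 0.5 * (lo + up)
        if (mid - th) ** g / (mid * t) < target:
            lo = mid
        else:
            up = mid
    return 0.5 * (lo + up)

# the sample minimum piles up just above theta, consistent with plim X_(1) = theta
random.seed(1)
m = min(ldsw_quantile(random.random(), theta, gamma, tau) for _ in range(n))
assert theta < m < theta + 0.2
```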

plim_{n→∞}X_{(i)}=ξ_{i}=F^{−1}(q_{i}), for 2≤i≤n. This is a consequence of the asymptotic normality of X_{(i)}, which is a consistent estimate of the central quantile ξ_{i}. The asymptotic normality follows from the fact that F(·) is differentiable and f(ξ_{i})>0 (David and Nagaraja (2003), Chapter 10).

From (i) and (ii), it follows immediately that plim_{n→∞}y_{i}= log[− log(1−q_{i})]+ logξ_{i}, and plim_{n→∞}z_{i}= log(ξ_{i}−θ).

In fact, since y_{i} and z_{i} are dominated by an integrable function (simply let x_{(i)}↦x_{(n)} in their corresponding definitions), we have the stronger L_{1} convergence:
$$y_i {\xrightarrow{L_{1}}} \log\left[-\log(1-q_i)\right]+\log\xi_{i}\equiv y_{i}^{\ast}, \qquad\text{and}\qquad z_i {\xrightarrow{L_{1}}} \log(\xi_i-\theta)\equiv z_{i}^{\ast}. $$This follows because dominated convergence in probability implies L_{1}-norm convergence (Lemma 2).

Now note that since
$$\xi_i = F^{-1}(q_i) \quad\Longleftrightarrow\quad \log\left[-\log(1-q_i)\right]+\log\xi_i = \gamma\log(\xi_i-\theta)-\log\tau, $$y_{i}^{\ast} and z_{i}^{\ast} defined in (iv) satisfy the population regression equation: y_{i}^{\ast}=a+bz_{i}^{\ast}.
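The equivalence above can be verified numerically for any point ξ>θ in the support: set q=F(ξ), so that ξ=F^{−1}(q), and compare the two sides. A minimal sketch with arbitrary parameter values, consistent with intercept a=−logτ and slope b=γ in the population regression:

```python
import math

theta, gamma, tau = 1.0, 2.5, 0.7
xi = 3.0                                                    # any xi > theta
q = 1.0 - math.exp(-(xi - theta) ** gamma / (xi * tau))     # q = F(xi), i.e. xi = F^{-1}(q)
lhs = math.log(-math.log(1.0 - q)) + math.log(xi)
rhs = gamma * math.log(xi - theta) - math.log(tau)
assert abs(lhs - rhs) < 1e-12
```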

We can state the results in (iv)–(v) equivalently as:
$$\varepsilon_i=y_i-(a+bz_i) {\xrightarrow{L_{1}}} y_{i}^{\ast}-(a+bz_{i}^{\ast})=0, $$and since L_{1}-norm convergence implies the weaker convergence in probability (see proof of Lemma 2), we have that plim_{n→∞}ε_{i}=0.

Thus, from (iv)–(vi), we have that for any real numbers λ_{1} and λ_{2}, \((\lambda _1+\lambda _2z_i)\varepsilon _{i}{\xrightarrow {L_{1}}} 0\), which also implies plim_{n→∞}(λ_{1}+λ_{2}z_{i})ε_{i}=0.
To prove the first assertion (that plim_{n→∞}w=0), invoke the Cramér–Wold device and Lemma 2 to see that, for any vector of reals λ=(λ_{1},λ_{2})^{T}, and using the result in (vii),
$$\boldsymbol{\lambda}^{T}\frac{Z^{T}\boldsymbol{\varepsilon}}{n} = \frac{1}{n}\sum_{i=2}^{n}(\lambda_{1}+\lambda_{2}z_{i})\varepsilon_{i} {\xrightarrow{L_{1}}} 0, $$whence plim_{n→∞}n^{−1}Z^{T}ε=0, and therefore w=o_{p}(1).
To prove the second assertion (that W is bounded in probability), note that
$$W=\left(\frac{Z^{T}Z}{n}\right)^{-1}, \qquad \frac{Z^{T}Z}{n}=\left[\begin{array}{cc} \frac{n-1}{n} & \frac{1}{n}\sum_{i=2}^{n}z_{i} \\ \frac{1}{n}\sum_{i=2}^{n}z_{i} & \frac{1}{n}\sum_{i=2}^{n}z_{i}^{2} \end{array}\right]. $$
An informal argument will now suffice. In the limit as n→∞, since the quantiles ξ_{i} are dense in the support of X, we have from (ii), and using the transformation u=F(x), that
$$\frac{1}{n}\sum_{i=2}^{n}\xi_{i} \longrightarrow \int_{0}^{1}F^{-1}(u)\,du = \mathbb{E}X, $$
which generalizes immediately to
$$\frac{1}{n}\sum_{i=2}^{n}g(\xi_{i}) \longrightarrow \int_{0}^{1}g\left(F^{-1}(u)\right)du = \mathbb{E}g(X), $$
for any integrable function g(·). Heuristically then, the fact that plim_{n→∞}z_{i}= log(ξ_{i}−θ) implies that
$$\frac{1}{n}\sum_{i=2}^{n}z_{i}\;{\xrightarrow{p}}\;\mathbb{E}^{(T)}\log(X-\theta), \qquad \frac{1}{n}\sum_{i=2}^{n}z_{i}^{2}\;{\xrightarrow{p}}\;\mathbb{E}^{(T)}\log^{2}(X-\theta), \qquad (26) $$
where \(\mathbb {E}^{(T)}\) denotes possible resulting truncation in the expectation operator in view of the fact that the summations begin at i=2 and may not span the entire support of the quantile function (see Remark 3). Now, since each of the sample averages in (26) is O_{p}(1), we deduce that
$$\frac{Z^{T}Z}{n}\;{\xrightarrow{p}}\;\left[\begin{array}{cc} 1 & \mathbb{E}^{(T)}\log(X-\theta) \\ \mathbb{E}^{(T)}\log(X-\theta) & \mathbb{E}^{(T)}\log^{2}(X-\theta) \end{array}\right], $$
whence we conclude that W is a.s. nonsingular and therefore O_{p}(1).
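The theorem's conclusion can be illustrated end to end by simulating the least squares fit of y_{i} on z_{i}. The sketch below assumes the plotting position q_{i}=i/(n+1) and a bisection quantile sampler, both our own choices rather than the paper's; the slope should approach b=γ and the intercept a=−logτ.

```python
import math, random

def ldsw_quantile(u, th, g, t, hi=1e6):
    # invert F(x) = 1 - exp{-(x - th)^g / (x t)} by bisection (monotone for g > 1)
    target = -math.log(1.0 - u)
    lo, up = th, hi
    for _ in range(80):
        mid = 0.5 * (lo + up)
        if (mid - th) ** g / (mid * t) < target:
            lo = mid
        else:
            up = mid
    return 0.5 * (lo + up)

random.seed(7)
theta, gamma, tau, n = 1.0, 2.5, 0.7, 5000
xs = sorted(ldsw_quantile(random.random(), theta, gamma, tau) for _ in range(n))

theta_hat = xs[0]                  # X_(1), consistent for theta by part (i)
zy = []
for i in range(2, n + 1):          # start at i = 2: z_1 = log(X_(1) - theta_hat) = -infinity
    q = i / (n + 1.0)              # plotting position (an assumed choice of q_i)
    z = math.log(xs[i - 1] - theta_hat)
    y = math.log(-math.log(1.0 - q)) + math.log(xs[i - 1])
    zy.append((z, y))

m = len(zy)
zbar = sum(z for z, _ in zy) / m
ybar = sum(y for _, y in zy) / m
b = sum((z - zbar) * (y - ybar) for z, y in zy) / sum((z - zbar) ** 2 for z, _ in zy)
a = ybar - b * zbar
# slope recovers gamma, intercept recovers -log(tau)
assert abs(b - gamma) < 0.3 and abs(a + math.log(tau)) < 0.4
```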
Availability of data and materials
The dataset analyzed in the current study is available in the CRAN R package bReeze repository [https://CRAN.R-project.org/package=bReeze].
Notes
 1.
Standard result for the cumulant generating function of any random variable, easily established by invoking Hölder's Inequality.
Abbreviations
 cdf:

Cumulative distribution function
 CV:

Coefficient of variation
 iid:

Independent and identically distributed
 mgf:

Moment generating function
 MLE:

Maximum likelihood estimator
References
Bourguignon, M., Silva, R. B., Cordeiro, G. M.: The Weibull-G family of probability distributions. J. Data Sci. 12, 53–68 (2014).
David, H. A., Nagaraja, H. N.: Order Statistics. 3rd edition. Wiley, New York (2003).
Greenwood, J. A., Landwehr, J. M., Matalas, N. C., Wallis, J. R.: Probability weighted moments: definition and relation to parameters of several distributions expressable in inverse form. Water Resour. Res. 15(5), 1049–1054 (1979).
Gurvich, M. R., Dibenedetto, A. T., Ranade, S. V.: A new statistical distribution for characterizing the random strength of brittle materials. J. Mater. Sci. 32(10), 2559–2564 (1997).
Hosking, J. R. M.: Lmoments: Analysis and estimation of distributions using linear combinations of order statistics. J. R. Stat. Soc. Ser. B (Methodol.) 52(1), 105–124 (1990).
Masters, G. M.: Renewable and efficient electric power systems. Wiley, Hoboken, New Jersey (2013).
Rinne, H.: The Weibull distribution: a handbook. CRC Press, Boca Raton (2009).
Schervish, M. J.: Theory of statistics. Springer, New York (1995).
Serfling, R. J.: Approximation Theorems of Mathematical Statistics. Wiley, New York (1980).
Van der Vaart, A. W.: Asymptotic Statistics. Cambridge University Press, Cambridge (1998).
Weibull, W.: A statistical distribution function of wide applicability. J. Appl. Mech. 18(3), 293–297 (1951).
Acknowledgements
Not applicable.
Funding
Not applicable.
Author information
Affiliations
Contributions
RWB developed the proof of Theorem 1, and spent considerable effort in trying to obtain analytical forms for several integrals that did not make it into the paper. CP worked closely with AAT to develop the results of “Formal definition, basic properties, and results” and “Estimation procedures and asymptotic results” sections, and was largely responsible for conducting the simulations in “Simulation results” section. JS was the main instigator of the paper, responsible for deriving the new distribution from first principles as described in the Introduction. He also spent considerable time looking for possible applications and datasets. AAT worked closely with CP to develop the results of “Formal definition, basic properties, and results” and “Estimation procedures and asymptotic results” sections, and was largely responsible for the real data analysis of “Real data application” section. All authors read and approved the final manuscript.
Corresponding author
Correspondence to A. Alexandre Trindade.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Barnard, R.W., Perera, C., Surles, J.G. et al. The linearly decreasing stress Weibull (LDSWeibull): a new Weibull-like distribution. J Stat Distrib App 6, 11 (2019). doi:10.1186/s40488-019-0100-8
Keywords
 Pullout test
 Reliability
 Extreme values
 Maximum likelihood estimate
 Wind speed data