# Skewness-kurtosis adjusted confidence estimators and significance tests

- Wolf-Dieter Richter

Journal of Statistical Distributions and Applications **3**:4

https://doi.org/10.1186/s40488-016-0042-3

© Richter. 2016

**Received: **29 October 2015

**Accepted: **3 February 2016

**Published: **16 February 2016

## Abstract

First and second kind modifications of the usual confidence intervals for estimating the expectation, and of the usual local alternative parameter choices, are introduced in such a way that the asymptotic behavior of the true non-covering probabilities and of the covering probabilities under the modified local non-true parameter assumption can be controlled asymptotically exactly. The orders of convergence to zero of both types of probabilities are assumed to be suitably bounded below according to an Osipov-type condition, and the sample distribution is assumed to satisfy a corresponding tail condition due to Linnik. Analogous considerations are presented for the power function when testing a hypothesis concerning the expectation, both under the assumption of a true hypothesis and under a modified local alternative. A limit theorem for large deviations by S.V. Nagaev and V.V. Petrov applies to prove the results. Applications are given for exponential families.


## Introduction

Asymptotic normality of the distribution of the suitably centered and normalized arithmetic mean of i.i.d. random variables is one of the best studied and most often exploited facts in asymptotic statistics. It is supplemented in local asymptotic normality theory by limit theorems for the corresponding distributions under the assumption that the mean is shifted by an amount of order \(n^{-1/2}\). There are many successful simulations and real applications of both types of central limit theorems, and one may ask for a more detailed explanation of this success. The present note aims to provide such additional theoretical explanation under certain circumstances. Moreover, it aims to stimulate both analogous considerations in more general situations and the checking of the new results by simulation. Furthermore, based upon the results presented here, it might become attractive to search for additional explanations of various known simulation results in the area of asymptotic normality; this is, however, beyond the scope of the present note.

Based upon Nagaev’s and Petrov’s large deviation results in (Nagaev 1965; Petrov 1968), skewness-kurtosis modifications of usual confidence intervals for estimating the expectation and of usual local alternative parameter choices are introduced here in a way such that the asymptotic behavior of the true non-covering probabilities and the covering probabilities under the modified local non-true parameter assumption can be exactly controlled. The orders of convergence to zero of both types of probabilities are suitably bounded below by assuming an Osipov-type condition, see (Osipov 1975), and the sample distribution is assumed to satisfy a corresponding Linnik condition, see (Ibragimov and Linnik 1971; Linnik 1961).

Analogous considerations are presented for the power function when testing a hypothesis concerning the expectation both under the assumption of a true hypothesis and under a local alternative. Finally, applications are given for exponential families.

A concrete situation where the results of this paper apply is the case-specific preparation of the machine settings of a machine tool. In this case, second and higher order moments of the manipulated variable do not change from one adjustment to another and may be considered to be known over time.

A further stimulus for future research would be the derivation of limit theorems close to those in (Nagaev 1965; Petrov 1968) in which higher order moments are estimated.

Let \(X_{1},\ldots,X_{n}\) be i.i.d. random variables with the common distribution law from a shift family of distributions, \(P_{\mu}(A)=P(A-\mu)\), \(A\in\mathfrak{B}\), where \(\mathfrak{B}\) denotes the Borel *σ*-field on the real line, the expectation equals \(\mu\), \(\mu\in R\), and the variance is \(\sigma^{2}\). It is well known that \(T_{n}=\sqrt {n}(\bar {X}_{n} -\mu)/\sigma \) is asymptotically standard normally distributed, \(T_{n}\sim AN(0,1)\). Hence, \(P_{\mu}(T_{n}>z_{1-\alpha})\rightarrow\alpha\), and under the local non-true parameter assumption \(\mu _{1,n}=\mu +\frac {\sigma } {\sqrt {n}}(z_{1-\alpha }-z_{\beta })\), i.e. if one assumes that a sample is drawn with a shift of location (or with an error in the variable), then \( P_{\mu _{1,n}}(T_{n} \leq z_{1-\alpha })= P_{\mu _{1,n}}\left (\sqrt {n}\frac {\bar X_{n}-\mu _{1,n}}{\sigma } \leq z_{\beta }\right)\rightarrow \beta \) as \(n\rightarrow\infty\), where \(z_{q}\) denotes the quantile of order \(q\) of the standard Gaussian distribution.
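The two limit relations above are easy to check empirically. The following sketch is a minimal Monte Carlo illustration; the Gaussian shift family and all parameter values are chosen purely for illustration and are not prescribed by the text:

```python
import random
from statistics import NormalDist

def coverage_rates(n=100, alpha=0.05, beta=0.10, reps=5000, seed=1):
    """Estimate P_mu(T_n > z_{1-alpha}) under the true mean mu, and
    P_{mu_{1,n}}(T_n <= z_{1-alpha}) under the locally shifted mean
    mu_{1,n} = mu + (sigma / sqrt(n)) * (z_{1-alpha} - z_beta)."""
    nd = NormalDist()
    z_a, z_b = nd.inv_cdf(1 - alpha), nd.inv_cdf(beta)
    mu, sigma = 0.0, 1.0
    mu1 = mu + sigma / n ** 0.5 * (z_a - z_b)
    rng = random.Random(seed)
    noncover = cover_shifted = 0
    for _ in range(reps):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]    # true parameter mu
        t = n ** 0.5 * (sum(xs) / n - mu) / sigma
        noncover += t > z_a
        ys = [rng.gauss(mu1, sigma) for _ in range(n)]   # shifted parameter mu_{1,n}
        t_shift = n ** 0.5 * (sum(ys) / n - mu) / sigma  # still centered at mu
        cover_shifted += t_shift <= z_a
    return noncover / reps, cover_shifted / reps
```

For Gaussian data both estimated rates match *α* and *β* exactly for every *n*; for skewed sample distributions they do so only asymptotically, which motivates the adjustments introduced in Section 2.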

These relations allow the construction of an asymptotic upper confidence interval \(\left(\bar X_{n}-\frac{\sigma}{\sqrt n}z_{1-\alpha},\,\infty\right)\) for \(\mu\) where the true non-covering probabilities satisfy the asymptotic relation

\(P_{\mu}\left(\left(\bar X_{n}-\tfrac{\sigma}{\sqrt n}z_{1-\alpha},\,\infty\right)\; does\; not\; cover\; \mu\right)\rightarrow\alpha,\quad n\rightarrow\infty,\)

and the covering probabilities under the \(n^{-1/2}\)-locally chosen non-true parameters satisfy

\(P_{\mu_{1,n}}\left(\left(\bar X_{n}-\tfrac{\sigma}{\sqrt n}z_{1-\alpha},\,\infty\right)\; covers\; \mu\right)\rightarrow\beta,\quad n\rightarrow\infty.\)

The aim of this note is to prove refinements of the latter two asymptotic relations where *α*=*α*(*n*)→0 and *β*=*β*(*n*)→0 as *n*→*∞*, and to prove similar results for two-sided confidence intervals and for the power function when testing corresponding hypotheses.

## Expectation estimation

### 2.1 First and second kind adjusted one-sided confidence intervals

Assume that *X* satisfies the Linnik condition of order \(\gamma\), \(0<\gamma<1/2\), i.e.

\(E\, e^{|X|^{4\gamma/(2\gamma+1)}}<\infty. \qquad (1)\)

Let \(z_{1-\alpha}(1)=z_{1-\alpha}+\frac{g_{1}}{6\sqrt n}z^{2}_{1-\alpha}\), where \(g_{1}=E(X-E(X))^{3}/\sigma^{3}\) is the skewness of *X*. Moreover, let the first kind (order) adjusted upper asymptotic confidence interval for \(\mu\) be defined by

\(ACI^{u}(1)=\left(\bar X_{n}-\frac{\sigma}{\sqrt n}z_{1-\alpha(n)}(1),\,\infty\right).\)

Assume that \(\alpha(n)\) and \(\beta(n)\) satisfy an Osipov-type condition of order \(\gamma\), i.e.

\(n^{\gamma}e^{n^{2\gamma}/2}\min\{\alpha(n),\beta(n)\}\rightarrow\infty,\quad n\rightarrow\infty. \qquad (2)\)

This condition means that neither \(\alpha(n)\) nor \(\beta(n)\) tends to zero as fast as or faster than \(n^{-\gamma}\exp\{-n^{2\gamma}/2\}\), i.e. \(\min\{\alpha(n),\beta(n)\}\gg n^{-\gamma}\exp\{-n^{2\gamma}/2\}\), and that \(\max\{z_{1-\alpha(n)},z_{1-\beta(n)}\}=o(n^{\gamma})\), \(n\rightarrow\infty\). Here, *o*(.) stands for the small Landau symbol.
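Numerically, the threshold rate \(n^{-\gamma}\exp\{-n^{2\gamma}/2\}\) decays faster than any polynomial rate, so polynomial choices of \(\alpha(n)\), \(\beta(n)\) satisfy the Osipov-type condition. A small arithmetic check (the rate \(\alpha(n)=1/n\) is an arbitrary illustrative example):

```python
import math

def osipov_threshold(n, gamma):
    """The rate n^{-gamma} * exp(-n^{2*gamma} / 2) appearing in condition (2)."""
    return n ** (-gamma) * math.exp(-(n ** (2 * gamma)) / 2)

# alpha(n) = 1/n (an illustrative polynomial rate) versus the threshold for
# gamma = 1/4: the ratio grows without bound, so condition (2) is satisfied.
ratios = [(1.0 / n) / osipov_threshold(n, 0.25) for n in (100, 1000, 10000)]
```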

If two functions *f,g* satisfy the relation \(\lim \limits _{n\rightarrow \infty }f(n)/g(n)=1\) then this asymptotic equivalence will be expressed as *f*(*n*)∼*g*(*n*),*n*→*∞*.

### **Theorem 1**.

If \(\alpha(n)\downarrow 0\), \(\beta(n)\downarrow 0\) as \(n\rightarrow\infty\) and conditions (1) and (2) are satisfied for \(\gamma \in \left (\frac {1}{6},\frac {1}{4}\right ]\) then

\(P_{\mu}\left(ACI^{u}(1)\; does\; not\; cover\; \mu\right)\sim\alpha(n)\quad and\quad P_{\mu_{1,n}(1)}\left(ACI^{u}(1)\; covers\; \mu\right)\sim\beta(n),\quad n\rightarrow\infty.\)
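The effect of the first kind adjustment can be computed exactly for exponential data, since for i.i.d. \(X_i\sim Exp(1)\) the sum follows a Gamma law whose survival function is a Poisson tail sum. The sketch below is illustrative only; it assumes the first kind adjusted quantile has the form \(z_{1-\alpha}(1)=z_{1-\alpha}+\frac{g_1}{6\sqrt n}z^2_{1-\alpha}\), which is consistent with the relation \(P_{\mu}(T_n>x+\frac{g_1x^2}{6\sqrt n})\sim 1-\Phi(x)\) used in the proof of Theorems 1 and 2, and the choices \(n=100\), \(\alpha=0.001\) are arbitrary:

```python
import math
from statistics import NormalDist

def exp_tail(n, x):
    """Exact P(T_n > x) for i.i.d. X_i ~ Exp(1) (mu = sigma = 1), using
    P(sum X_i > t) = P(Gamma(n,1) > t) = P(Poisson(t) <= n-1)."""
    t = n + math.sqrt(n) * x
    # sum the Poisson(t) pmf in log space for numerical stability
    return sum(math.exp(-t + k * math.log(t) - math.lgamma(k + 1)) for k in range(n))

n, alpha = 100, 0.001
z = NormalDist().inv_cdf(1 - alpha)
g1 = 2.0                                      # skewness of Exp(1)
z_adj = z + g1 * z * z / (6 * math.sqrt(n))   # first kind adjusted quantile
p_plain, p_adj = exp_tail(n, z), exp_tail(n, z_adj)
```

With these settings the unadjusted tail probability `p_plain` overshoots the nominal level *α* noticeably (the exponential distribution is right-skewed), while the adjusted tail probability `p_adj` is markedly closer to *α*.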

Let \(g_{2}=E(X-E(X))^{4}/\sigma^{4}-3\) denote the kurtosis of *X*, and let the second kind adjusted upper asymptotic confidence interval for \(\mu\) be defined by

\(ACI^{u}(2)=\left(\bar X_{n}-\frac{\sigma}{\sqrt n}z_{1-\alpha(n)}(2),\,\infty\right)\quad with\quad z_{1-\alpha}(2)=z_{1-\alpha}(1)+\frac{3g_{2}-4g_{1}^{2}}{72n}z^{3}_{1-\alpha}.\)

### **Theorem 2**.

If \(\alpha(n)\downarrow 0\), \(\beta(n)\downarrow 0\) as \(n\rightarrow\infty\) and conditions (1) and (2) are satisfied for \(\gamma \in \left (\frac {1}{4},\frac {3}{10}\right ]\) then

\(P_{\mu}\left(ACI^{u}(2)\; does\; not\; cover\; \mu\right)\sim\alpha(n)\quad and\quad P_{\mu_{1,n}(2)}\left(ACI^{u}(2)\; covers\; \mu\right)\sim\beta(n),\quad n\rightarrow\infty.\)

### **Remark 1**.

Let \(z^{-}_{1-\alpha}(s)\) denote the quantile \(z_{1-\alpha}(s)\) where \(g_{1}\) is replaced by \(-g_{1}\), \(s=1,2\), and let \(\mu^{-}_{1,n}(s)\) be defined correspondingly.

### **Remark 2**.

In many situations where limit theorems are considered as they were in Section 1, the additional assumptions (1) and (2) may, possibly unnoticed, be fulfilled. In such situations, Theorems 1 and 2, together with the following theorem, give more insight into the asymptotic relations stated in Section 1.

### **Theorem 3**.

Note that *O*(.) means the big Landau symbol.

### 2.2 Two-sided confidence intervals

For \(s\in\{1,2\}\) and \(\alpha>0\), put \( L(s;\alpha)=\bar {X}_{n}-\frac {\sigma }{\sqrt {n}}z_{1-\alpha }(s)\) and \( R(s;\alpha)=\bar {X}_{n}+\frac {\sigma }{\sqrt {n}}z^{-}_{1-\alpha }(s).\) Further, let \(\alpha_{i}(n)>0\), \(i=1,2\), satisfy \(\alpha_{1}(n)+\alpha_{2}(n)<1\), and consider the two-sided confidence interval \(\left(L(s;\alpha_{1}(n)),\,R(s;\alpha_{2}(n))\right)\).

If conditions (1) and (2) are fulfilled then \(P_{\mu}\left((-\infty,L(s;\alpha_{1}(n)))\; covers\; \mu\right)\sim\alpha_{1}(n)\) and \(P_{\mu}\left((R(s;\alpha_{2}(n)),\infty)\; covers\; \mu\right)\sim\alpha_{2}(n)\) as \(n\rightarrow\infty\).

With the more detailed notation \(\mu_{1,n}(s)=\mu_{1,n}(s;\alpha,\beta)\) and \(\mu ^{-}_{1,n}(s)=\mu ^{-}_{1,n}(s;\alpha,\beta)\),

\(P_{\mu _{1,n}(s;\alpha _{1}(n),\beta _{1}(n))} ((L(s;\alpha _{1}(n)),\infty)\; covers\; \mu)\sim \beta _{1}(n)\),

\(P_{\mu ^{-}_{1,n}(s;\alpha _{2}(n),\beta _{2}(n))} ((-\infty, R(s;\alpha _{2}(n)))\; covers\; \mu)\sim \beta _{2}(n), n\rightarrow \infty.\)

The following corollary has thus been proved.

### **Corollary 1**.

Let \(\alpha_{1}(n)\downarrow 0\), \(\alpha_{2}(n)\downarrow 0\) as \(n\rightarrow\infty\), let conditions (1) and (2) be satisfied for \(\gamma \in \left (\frac {1}{6},\frac {1}{4}\right ]\) if \(s=1\) and for \(\gamma \in \left (\frac {1}{4},\frac {3}{10}\right ]\) if \(s=2\), and put \((\alpha(n),\beta(n))=(\alpha_{1}(n),\alpha_{2}(n))\). Then

\(P_{\mu}\left(\left(L(s;\alpha_{1}(n)),\,R(s;\alpha_{2}(n))\right)\; does\; not\; cover\; \mu\right)\sim\alpha_{1}(n)+\alpha_{2}(n),\quad n\rightarrow\infty.\)

## Testing

### 3.1 Adjusted quantiles

Consider testing the hypothesis \(H_{0}:\mu\leq\mu_{0}\) versus the alternative \(H_{A}:\mu>\mu_{0}\). The first and second kind adjusted decision rules of the one-sided asymptotic Gauss test suggest to reject \(H_{0}\) if \(T_{n,0}>z_{1-\alpha(n)}(s)\) for \(s=1\) or \(s=2\), respectively, where \(T_{n,0}=\sqrt {n}(\bar {X}_{n}-\mu _{0})/\sigma \). Because of Theorems 1 and 2, the type I error probabilities satisfy \(P_{\mu_{0}}\left(T_{n,0}>z_{1-\alpha(n)}(s)\right)\sim\alpha(n)\), and the type II error probabilities under the local alternatives \((\mu_{1,n}(s))_{n=1,2,\ldots}\) satisfy \(P_{\mu_{1,n}(s)}\left(T_{n,0}\leq z_{1-\alpha(n)}(s)\right)\sim\beta(n)\), \(n\rightarrow\infty\).

Similar consequences for testing \(H_{1}:\mu>\mu_{0}\) or \(H_{2}:\mu\neq\mu_{0}\) are omitted here.

### 3.2 Adjusted statistics

Let \(T_{n}{(1)}=T_{n}-\frac {g_{1}}{6\sqrt {n}}{T_{n}^{2}}\) and \(T_{n}{(2)}=T_{n}{(1)}-\frac {3g_{2}-8{g_{1}^{2}}}{72n}{T_{n}^{3}}\) be the first and second kind adjusted asymptotically Gaussian statistics, respectively, where \(T_{n}=\frac {\sqrt {n}}{\sigma }\left (\bar {X}_{n} - \mu \right)\).
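The two adjusted statistics are straightforward to compute. A minimal sketch (the function name and the explicit `g1`, `g2` arguments are illustrative; as in Section 2, skewness and kurtosis are assumed known):

```python
def adjusted_statistics(xs, mu, sigma, g1, g2):
    """First and second kind adjusted statistics from Section 3.2:
    T_n(1) = T_n - g1 * T_n^2 / (6 sqrt(n)),
    T_n(2) = T_n(1) - (3 g2 - 8 g1^2) * T_n^3 / (72 n)."""
    n = len(xs)
    t = n ** 0.5 * (sum(xs) / n - mu) / sigma
    t1 = t - g1 * t * t / (6 * n ** 0.5)
    t2 = t1 - (3 * g2 - 8 * g1 * g1) * t ** 3 / (72 * n)
    return t, t1, t2
```

For \(g_1=g_2=0\) (a symmetric, mesokurtic sample distribution) both adjustments vanish and \(T_n(1)=T_n(2)=T_n\).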

### **Theorem 4**.

If the assumptions of Theorem 1 (for \(s=1\)) or of Theorem 2 (for \(s=2\)) are satisfied then

\(P_{\mu}\left(T_{n}(s)>z_{1-\alpha(n)}\right)\sim\alpha(n)\quad and\quad P_{\mu_{1,n}(s)}\left(T_{n}(s)\leq z_{1-\alpha(n)}\right)\sim\beta(n),\quad n\rightarrow\infty.\)

Clearly, the results of this theorem apply to both hypothesis testing and confidence estimation in a similar way as described in the preceding sections.

The material of the present paper is part of a talk presented by the author at the Conference of European Statistics Stakeholders, Rome 2014, see Abstracts of Communication, p.90, and arXiv:1504.02553. A more advanced ‘testing-part’ of this talk is presented in (Richter 2016) and deals with higher order comparisons of statistical tests.

## Application to exponential families

Let *ν* denote a *σ*-finite measure and assume that the distribution \(P_{\vartheta}\) has the Radon-Nikodym density \(\frac {dP_{\vartheta }}{d\nu }(x)= \frac {e^{\vartheta x}}{\int e^{\vartheta x}\nu (dx)}=e^{\vartheta x-B(\vartheta)}\), say. For basics on exponential families we refer to Brown (1986). We assume that \(X(\vartheta)\sim P_{\vartheta}\) and \(X_{1}=X(\vartheta)-{ E}X(\vartheta)+\mu \sim \widetilde {P}_{\mu }\) where \(\vartheta\) is known and \(\mu\) is unknown. In the product-shift-experiment \(\left[R^{n},\,\mathfrak{B}^{n},\,\left \{\widetilde {P}^{\times n}_{\mu },\,\mu \in R\right \}\right ]\), expectation estimation and testing may be done as in Sections 2 and 3, respectively, where \(g_{1}=B'''(\vartheta)/(B''(\vartheta))^{3/2}\) and \(g_{2}\) allows a similar representation.
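The representation \(g_{1}=B'''(\vartheta)/(B''(\vartheta))^{3/2}\) can be checked numerically. The sketch below (the function name, the finite-difference step `h`, and the Poisson example are illustrative choices) recovers the Poisson skewness \(1/\sqrt\lambda\) from the log-partition function \(B(\vartheta)=e^{\vartheta}\):

```python
import math

def g1_exponential_family(B, theta, h=1e-3):
    """Skewness g1 = B'''(theta) / B''(theta)^{3/2} of a natural exponential
    family with log-partition function B, via central finite differences
    (illustrative numerics only; h is an assumed step size)."""
    d2 = (B(theta + h) - 2 * B(theta) + B(theta - h)) / h ** 2
    d3 = (B(theta + 2 * h) - 2 * B(theta + h)
          + 2 * B(theta - h) - B(theta - 2 * h)) / (2 * h ** 3)
    return d3 / d2 ** 1.5

# Poisson(lambda) corresponds to B(theta) = exp(theta) with lambda = exp(theta);
# its skewness is 1 / sqrt(lambda).
theta = math.log(4.0)            # lambda = 4
g1 = g1_exponential_family(math.exp, theta)
```

For \(B(\vartheta)=e^{\vartheta}\) all derivatives equal \(e^{\vartheta}\), so \(g_{1}=e^{-\vartheta/2}=\lambda^{-1/2}\), which the numerics confirm.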

Another problem which can be dealt with is to test the hypothesis \(H_{0}:\vartheta\leq\vartheta_{0}\) versus the alternative \(H_{1n}:\vartheta\geq\vartheta_{1n}\) if one assumes that the expectation function \(\vartheta\rightarrow B'(\vartheta)=E_{\vartheta}X\) is strictly monotone. For this case, we finally present just the following particular result, which applies to both estimating and testing.

### **Proposition 1**.

## Sketch of proofs

### *Proof of Theorems 1 and 2*.

Since \(x=z_{1-\alpha(n)}=o(n^{\gamma})\), \(n\rightarrow\infty\), for \(\gamma \in \left (\frac {1}{6},\frac {3}{10}\right ]\), and if (1) holds, then, according to (Linnik 1961; Nagaev 1965), \(P_{\mu }(T_{n}>x)\sim f_{n,s}^{(X)}(x)\), \(x\rightarrow \infty\), where \( f_{n,s}^{(X)}(x)=\frac {1}{\sqrt {2\pi }x} \exp \left \{-\frac {x^{2}}{2}+\frac {x^{3}}{\sqrt {n}}\sum \limits _{k=0}^{s-1}a_{k}\left (\frac {x}{\sqrt {n}}\right)^{k}\right \} \) and \(s\) is an integer satisfying \(\frac {s}{2(s+2)}<\gamma \leq \frac {s+1}{2(s+3)}\), i.e. \(s=1\) if \(\gamma \in \left (\frac {1}{6}, \frac {1}{4}\right ]\) and \(s=2\) if \(\gamma \in \left (\frac {1}{4}, \frac {3}{10}\right ]\). Here, the constants \(a_{0}=\frac {g_{1}}{6},\, a_{1}=\frac {g_{2}-3{g_{1}^{2}}}{24} \) are due to the skewness \(g_{1}\) and kurtosis \(g_{2}\) of *X*. Note that \(\frac {g_{1}x^{2}}{6\sqrt {n}}=o(x)\) because \(x=o(n^{1/2})\), thus \(x+\frac {g_{1}x^{2}}{6\sqrt {n}}=o(n^{\gamma })\), and \(P_{\mu }\left (T_{n}>x+\frac {g_{1}x^{2}}{6\sqrt {n}}\right) \sim f_{n,1}\left (x+\frac {g_{1}x^{2}}{6\sqrt {n}}\right)\). Hence, \(P_{\mu }\left (T_{n}>x+\frac {g_{1}x^{2}}{6\sqrt {n}}\right)\sim 1-\Phi (x).\) Similarly, \(P_{\mu}(T_{n}>z_{1-\alpha(n)}(s))\sim\alpha(n)\), \(s=1,2\). Further, \(P_{\mu _{1,n}(s)}(T_{n}\leq z_{1-\alpha (n)}(s))\) can be treated analogously because \(\{P_{\mu},\ \mu\in(-\infty,\infty)\}\) is assumed to be a shift family. It follows that \(P_{\mu _{1,n}(s)}(T_{n}\leq z_{1-\alpha (n)}(s))\sim\beta(n)\), where now the roles of \(g_{1}\), \(g_{2}\) are played by the skewness and kurtosis of \(-X_{1}\). Thus, since \(P_{\mu}\left(ACI^{u}\; does\; not\; cover\; \mu\right)=P_{\mu}\left(T_{n}>z_{1-\alpha(n)}(s)\right)\) and \(P_{\mu _{1,n}(s)}(ACI^{u} \; covers\; \mu) =P_{\mu _{1,n}(s)}(T_{n}\leq z_{1-\alpha (n)}(s))\), the theorems are proved.

### *Proof of Remark 1*.

### *Proof of Theorem 3*.

Put \(x=x_{1-\alpha}\) and let \(y\) be defined by

\(x^{2}e^{x^{2}}=y^{2}. \qquad (3)\)

If \(x\geq 1\) then it follows from (3) that \( y\geq e^{\frac {x^{2}}{2}}\), hence \(x^{2}\leq \ln(y^{2})\). It follows again from (3) that \( y^{2}\leq \ln (y^{2})e^{x^{2}}\), thus \(x^{2}\geq \ln \left (\frac {y^{2}}{\ln y^{2}}\right).\) After one more such step, \(x^{2}\leq \ln \left (\frac {y^{2}}{\ln \left (\frac {y^{2}}{\ln y^{2}}\right)}\right).\)

Let us remark that the inverse of the function \(w\rightarrow we^{w}\) is called the Lambert *W* function. An asymptotic representation of the solution of (3) as \(y\rightarrow\infty\) can therefore be derived from the more general representation (4.19) of *W* in (Corless et al. 1996) if one reads (3) as \(we^{w}=y^{2}\) with \(w=x^{2}\). Our derivation of the particular result needed here, however, is much more elementary than the general one given in the paper just mentioned.
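The elementary iteration is easy to reproduce numerically. The following sketch (the bisection approach and the value \(y^{2}=10^{8}\) are arbitrary illustrative choices) solves \(we^{w}=y^{2}\) and compares the exact solution with an iterated-logarithm approximation of the kind used in the proof of Theorem 3:

```python
import math

def lambert_w(y2, iters=200):
    """Solve w * exp(w) = y2 for w >= 0 by bisection (the Lambert W
    function on the positive axis); y2 > 0 is assumed."""
    lo, hi = 0.0, max(1.0, math.log(y2))  # hi * exp(hi) >= y2 on this range
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mid * math.exp(mid) < y2:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

y2 = 1e8
w_exact = lambert_w(y2)
# twice-iterated-logarithm approximation of the solution of w * e^w = y^2
w_approx = math.log(y2 / math.log(y2 / math.log(y2)))
```

Already two iterations of the logarithm place `w_approx` within about one percent of the exact value, illustrating why the elementary bounds above suffice.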

### *Proof of Theorem 4*.

Let \(s=1\). According to (Linnik 1961), \(P_{\mu_{0}}\left(T_{n}(1)>z_{1-\alpha(n)}\right)\sim\alpha(n)\), \(n\rightarrow\infty\).

Moreover, \(P_{\mu _{1n}(1;\alpha (n),\beta (n))}(T_{n}{(1)}\!\leq \! z_{1-\alpha (n)})\,=\,P_{\mu _{1n}(1;\alpha (n),\beta (n))}\!\left (T_{n}\!\leq \! \left (f_{n}^{(1)}\right)^{-1} \!\left (z_{1-\alpha (n)}\right)\!\right)\!=\) \(P_{\mu _{1n}(1;\alpha (n),\beta (n))}\left (\sqrt {n}\frac {\overline {X}_{n} -\mu _{1n}(1)}{\sigma } \leq z_{1-\alpha (n)}(1)+\frac {z^{2}_{1-\alpha (n)}g_{1}}{6\sqrt {n}}+\!O\left (\frac {z^{3}_{1-\alpha (n)}}{n}\right)- \sqrt {n}\frac {\mu _{1n}(1)-\mu _{0}}{\sigma }\right)\! \sim f_{n,1}^{(-X)}\left (-z_{\beta (n)}(1)+ O\left (\frac {z^{3}_{1-\alpha (n)}}{n}\right)\right) \sim 1-\Phi (z_{1-\beta (n)})=\beta (n).\)

### *Proof of Proposition 1*.

Since \(B'''(\vartheta_{0})/(B''(\vartheta_{0}))^{3/2}=g_{1}\), the proof of Proposition 1 is finished.

## Declarations

### Acknowledgements

The author is grateful to the Reviewers for their valuable hints and declares no conflicts of interest.

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


## References

- Brown, LD: Fundamentals of Statistical Exponential Families. IMS Lecture Notes and Monograph Series. Hayward, CA (1986).
- Corless, RM, Gonnet, GH, Hare, DEG, Jeffrey, DJ, Knuth, DE: On the Lambert *W* function. Adv. Comput. Math. 5, 329–359 (1996).
- Ibragimov, IA, Linnik, YV: Independent and Stationary Sequences of Random Variables. Wolters-Noordhoff, Groningen. Translation from the Russian edition of 1965 (1971).
- Linnik, YV: Limit theorems for sums of independent variables taking into account large deviations. I–III. Theor. Probab. Appl. 6, 131–148, 345–360 (1961); 7, 115–129 (1962).
- Nagaev, SV: Some limit theorems for large deviations. Theory Probab. Appl. 10, 214–235 (1965).
- Osipov, LV: Multidimensional limit theorems for large deviations. Theory Probab. Appl. 20, 38–56 (1975).
- Petrov, VV: Asymptotic behaviour of probabilities of large deviations. Theor. Probab. Appl. 13, 408–420 (1968).
- Richter, W-D: Skewness-kurtosis controlled higher order equivalent decisions. Open Stat. Probability J. 7, 1–9 (2016). doi:10.2174/1876527001607010001.