Open Access

Skewness-kurtosis adjusted confidence estimators and significance tests

Journal of Statistical Distributions and Applications 2016, 3:4

https://doi.org/10.1186/s40488-016-0042-3

Received: 29 October 2015

Accepted: 3 February 2016

Published: 16 February 2016

Abstract

First and second kind modifications of the usual confidence intervals for estimating the expectation, and of the usual local alternative parameter choices, are introduced in such a way that the asymptotic behavior of the true non-covering probabilities, and of the covering probabilities under the modified local non-true parameter assumption, can be controlled asymptotically exactly. The orders of convergence to zero of both types of probabilities are assumed to be suitably bounded below according to an Osipov-type condition, and the sample distribution is assumed to satisfy a corresponding tail condition due to Linnik. Analogous considerations are presented for the power function when testing a hypothesis concerning the expectation, both under the assumption of a true hypothesis and under a modified local alternative. A limit theorem for large deviations due to S.V. Nagaev and V.V. Petrov is used to prove the results. Applications are given for exponential families.

Keywords

Orders of confidence; Orders of modified local alternatives; True non-covering probabilities; Local non-true parameter choice; Covering probabilities; Linnik condition; Osipov-type condition; Skewness-kurtosis adjusted decisions; Order of significance; Error probabilities of first and second kind; Exponential family; Large deviations

Mathematics Subject Classification

62E20; 62F05; 62F12; 60F10

Introduction

Asymptotic normality of the distribution of the suitably centered and normalized arithmetic mean of i.i.d. random variables is one of the best studied and most often exploited facts in asymptotic statistics. It is supplemented in local asymptotic normality theory by limit theorems for the corresponding distributions under the assumption that the mean is shifted by an amount of order \(n^{-1/2}\). There are many successful simulations and real applications of both types of central limit theorems, and one may ask for a more detailed explanation of this success. The present note aims to provide such additional theoretical explanation under certain circumstances. Moreover, it aims to stimulate both analogous considerations in more general situations and checking of the new results by simulation. Furthermore, based upon the results presented here, it might become attractive to search for additional explanations of various known simulation results in the area of asymptotic normality, which is, however, beyond the scope of the present note.

Based upon Nagaev’s and Petrov’s large deviation results in (Nagaev 1965; Petrov 1968), skewness-kurtosis modifications of usual confidence intervals for estimating the expectation and of usual local alternative parameter choices are introduced here in a way such that the asymptotic behavior of the true non-covering probabilities and the covering probabilities under the modified local non-true parameter assumption can be exactly controlled. The orders of convergence to zero of both types of probabilities are suitably bounded below by assuming an Osipov-type condition, see (Osipov 1975), and the sample distribution is assumed to satisfy a corresponding Linnik condition, see (Ibragimov and Linnik 1971; Linnik 1961).

Analogous considerations are presented for the power function when testing a hypothesis concerning the expectation both under the assumption of a true hypothesis and under a local alternative. Finally, applications are given for exponential families.

A concrete situation where the results of this paper apply is the case-sensitive preparation of the settings of a machine tool. In this case, second and higher order moments of the manipulated variable do not change from one adjustment to another and may be considered to be known over time.

Another stimulus for further research might be the derivation of future limit theorems close to those in (Nagaev 1965; Petrov 1968), but where higher order moments are estimated.

Let \(X_{1},\dots,X_{n}\) be i.i.d. random variables with the common distribution law from a shift family of distributions, \(P_{\mu}(A)=P(A-\mu), A\in\mathfrak{B}\), where \(\mathfrak{B}\) denotes the Borel σ-field on the real line, the expectation equals \(\mu, \mu\in R\), and the variance is \(\sigma^{2}\). It is well known that \(T_{n}=\sqrt {n}(\bar {X}_{n} -\mu)/\sigma \) is asymptotically standard normally distributed, \(T_{n}\sim_{A} N(0,1)\). Hence, \(P_{\mu}(T_{n}>z_{1-\alpha})\rightarrow\alpha\), and under the local non-true parameter assumption, \(\mu _{1,n}=\mu +\frac {\sigma } {\sqrt {n}}(z_{1-\alpha }-z_{\beta })\), i.e. if one assumes that a sample is drawn with a shift of location (or with an error in the variable), then \( P_{\mu _{1,n}}(T_{n} \leq z_{1-\alpha })= P_{\mu _{1,n}}\left (\sqrt {n}\frac {\bar X_{n}-\mu _{1,n}}{\sigma } \leq z_{\beta }\right)\rightarrow \beta \) as \(n\rightarrow\infty\), where \(z_{q}\) denotes the quantile of order \(q\) of the standard Gaussian distribution.

Let \(ACI^{u}= \left [\left.\bar {X}_{n} - \frac {\sigma }{\sqrt {n}}z_{1-\alpha }, \infty \right)\right.\) denote the upper asymptotic confidence interval for μ where the true non-covering probabilities satisfy the asymptotic relation
$$ P_{\mu}(ACI^{u} {\; does\; not\; cover\;} \mu)\rightarrow \alpha,\; n\rightarrow \infty. $$
Because \( P_{\mu _{1,n}}\left (\bar {X}_{n}-\frac {\sigma }{\sqrt {n}}z_{1-\alpha }<\mu \right) = P_{\mu _{1,n}}\left (\sqrt {n}\frac {\bar {X}_{n}-\mu _{1,n}}{\sigma }\leq z_{\beta }\right)\), the covering probabilities under n −1/2-locally chosen non-true parameters satisfy
$$P_{\mu_{1,n}}(ACI^{u} {\; covers\;} \mu) \rightarrow \beta, \; n\rightarrow \infty.$$
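For readers wishing to check these two limit relations numerically, the following minimal Monte Carlo sketch (not part of the paper's argument; Python, with an illustrative function name, sample size, replication count and seed chosen ad hoc) estimates the non-covering frequency of \(ACI^{u}\) for standard normal data, which should be close to α:

```python
import random
from statistics import NormalDist, mean

def noncover_freq(n, alpha, reps=2000, seed=1):
    """Monte Carlo estimate of P(ACI^u does not cover mu) for mu = 0,
    sigma = 1 and i.i.d. standard normal data."""
    rng = random.Random(seed)
    z = NormalDist().inv_cdf(1 - alpha)  # z_{1-alpha}
    hits = 0
    for _ in range(reps):
        xbar = mean(rng.gauss(0.0, 1.0) for _ in range(n))
        # ACI^u = [xbar - z / sqrt(n), inf) misses mu = 0 iff T_n > z
        if n ** 0.5 * xbar > z:
            hits += 1
    return hits / reps

freq = noncover_freq(n=50, alpha=0.1)  # should be near 0.1
```

The analogous frequency under the shifted parameter \(\mu_{1,n}\) can be estimated in the same way by drawing the data with mean \(\mu_{1,n}\).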

The aim of this note is to prove refinements of the latter two asymptotic relations where α=α(n)→0 and β=β(n)→0 as n, and to prove similar results for two-sided confidence intervals and for the power function when testing corresponding hypotheses.

Expectation estimation

2.1 First and second kind adjusted one-sided confidence intervals

According to (Ibragimov and Linnik 1971; Linnik 1961), it is said that a random variable X satisfies the Linnik condition of order γ,0<γ<1/2, if
$$ {E}_{\mu} \exp\left\{|X-\mu|^{\frac{4\gamma}{2\gamma+1}}\right\} <\infty. $$
(1)
Let us define the first kind (or first order) adjusted asymptotic Gaussian quantile by
$$z_{1-\alpha(n)}(1)=z_{1-\alpha(n)} +\frac{g_{1}}{6\sqrt{n}} z^{2}_{1-\alpha(n)} $$
where \(g_{1}=E(X-E(X))^{3}/(\sigma^{2})^{3/2}\) is the skewness of X. Moreover, let the first kind (order) adjusted upper asymptotic confidence interval for μ be defined by
$$ACI^{u}(1)=\left[\left.\bar{X}_{n} -\frac{\sigma}{\sqrt{n}} z_{1-\alpha(n)}(1), \infty\right)\right. $$
and denote a first kind modified non-true local parameter choice by
$$\mu_{1,n}(1)=\mu_{1,n}+\frac{\sigma g_{1}} {6n} \left(z^{2}_{1-\alpha(n)}-z^{2}_{\beta(n)}\right). $$
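The three displays above translate directly into code; the following sketch (ours, for illustration only, with hypothetical function names and statistics.NormalDist supplying the Gaussian quantiles) computes \(z_{1-\alpha(n)}(1)\), the left endpoint of \(ACI^{u}(1)\), and \(\mu_{1,n}(1)\):

```python
from statistics import NormalDist

def z_adj1(alpha, g1, n):
    """First kind adjusted Gaussian quantile z_{1-alpha}(1)."""
    z = NormalDist().inv_cdf(1 - alpha)
    return z + g1 * z ** 2 / (6 * n ** 0.5)

def aci_u1_left(xbar, sigma, g1, n, alpha):
    """Left endpoint of ACI^u(1) = [xbar - sigma/sqrt(n) * z_{1-alpha}(1), inf)."""
    return xbar - sigma / n ** 0.5 * z_adj1(alpha, g1, n)

def mu_1n_1(mu, sigma, g1, n, alpha, beta):
    """First kind modified local parameter mu_{1,n}(1)."""
    nd = NormalDist()
    za, zb = nd.inv_cdf(1 - alpha), nd.inv_cdf(beta)
    return (mu + sigma / n ** 0.5 * (za - zb)
            + sigma * g1 / (6 * n) * (za ** 2 - zb ** 2))
```

For \(g_{1}=0\) the adjustment vanishes and the usual interval is recovered.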
Let us say that the probabilities α(n) and β(n) satisfy an Osipov-type condition of order γ if
$$ n^{\gamma}\exp\left\{\frac{n^{2\gamma}}{2}\right\} \cdot \min\left\{\alpha(n),\beta(n)\right\}\rightarrow \infty,\; n\rightarrow \infty. $$
(2)

This condition means that neither α(n) nor β(n) tends to zero as fast as or even faster than \(n^{-\gamma}\exp\{-n^{2\gamma}/2\}\), i.e. \(\min\{\alpha(n),\beta(n)\}\gg n^{-\gamma}\exp\{-n^{2\gamma}/2\}\), and that \(\max\{z_{1-\alpha(n)},z_{1-\beta(n)}\}=o(n^{\gamma}), n\rightarrow\infty\). Here, o(.) stands for the small Landau symbol.
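Condition (2) can be probed numerically; the sketch below (illustrative only, with a hypothetical helper name) evaluates the left-hand side of (2) for \(\alpha(n)=\beta(n)=1/n\) and \(\gamma=1/4\), a choice for which the product indeed diverges:

```python
import math

def osipov_product(n, alpha_n, beta_n, gamma):
    """Left-hand side of condition (2): n^gamma * exp(n^{2 gamma}/2) * min(alpha, beta)."""
    return n ** gamma * math.exp(n ** (2 * gamma) / 2) * min(alpha_n, beta_n)

# alpha(n) = beta(n) = 1/n with gamma = 1/4: the product diverges,
# so this choice is admissible under condition (2)
vals = [osipov_product(n, 1.0 / n, 1.0 / n, 0.25) for n in (100, 1000, 10000)]
```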

If two functions f,g satisfy the relation \(\lim \limits _{n\rightarrow \infty }f(n)/g(n)=1\) then this asymptotic equivalence will be expressed as \(f(n)\sim g(n), n\rightarrow\infty\).

Theorem 1.

If \(\alpha(n)\rightarrow 0\), \(\beta(n)\rightarrow 0\) as \(n\rightarrow\infty\) and conditions (1) and (2) are satisfied for \(\gamma \in \left (\frac {1}{6},\frac {1}{4}\right ]\) then
$$ P_{\mu} (ACI^{u}(1) {\; does\; not\; cover\;} \mu)\sim \alpha(n), \, n\rightarrow \infty $$
and
$$ P_{\mu_{1,n}(1)} (ACI^{u}(1) {\; covers\;} \mu)\sim \beta(n), \, n\rightarrow \infty. $$
Let us define the second kind adjusted asymptotic Gaussian quantile
$$z_{1-\alpha(n)}(2)=z_{1-\alpha(n)}(1) +\frac{3g_{2}-4{g_{1}^{2}}}{72n} z^{3}_{1-\alpha(n)} $$
where \(g_{2}=E(X-E(X))^{4}/\sigma^{4}-3\) is the kurtosis of X, the second kind adjusted upper asymptotic confidence interval for μ
$$ACI^{u}(2)=\left[\vphantom{\frac{0}{0}}\bar{X}_{n} \right.\left.-\frac{\sigma}{\sqrt{n}} z_{1-\alpha(n)}(2), \infty\right), $$
and a second kind modified non-true local parameter choice
$$\mu_{1,n}(2) =\mu_{1,n}(1)+ \frac{\sigma\left(3g_{2}-4{g_{1}^{2}}\right)} {72n^{3/2}}\left(z^{3}_{1-\alpha(n)}-z_{\beta(n)}^{3}\right). $$
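The second kind adjustment adds one more correction term to the quantile; a self-contained sketch (illustrative, with a hypothetical function name) following the two displays above:

```python
from statistics import NormalDist

def z_adj2(alpha, g1, g2, n):
    """Second kind adjusted Gaussian quantile z_{1-alpha}(2)."""
    z = NormalDist().inv_cdf(1 - alpha)
    z1 = z + g1 * z ** 2 / (6 * n ** 0.5)          # z_{1-alpha}(1)
    return z1 + (3 * g2 - 4 * g1 ** 2) * z ** 3 / (72 * n)
```

For a distribution with zero skewness and zero (excess) kurtosis the adjusted quantile coincides with \(z_{1-\alpha}\).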

Theorem 2.

If \(\alpha(n)\rightarrow 0\), \(\beta(n)\rightarrow 0\) as \(n\rightarrow\infty\) and conditions (1) and (2) are satisfied for \(\gamma \in \left (\frac {1}{4},\frac {3}{10}\right ]\) then
$$ P_{\mu} (ACI^{u}(2) {\; does\; not\; cover\;} \mu)\sim \alpha(n), \, n\rightarrow \infty $$
and
$$ P_{\mu_{1,n}(2)} (ACI^{u}(2) {\; covers\;} \mu)\sim \beta(n), \, n\rightarrow \infty. $$

Remark 1.

Under the same assumptions, analogous results are true for lower asymptotic confidence intervals, i.e. for \(ACI^{l}(s)=\left (-\infty, \bar {X}_{n}+\frac {\sigma }{\sqrt {n}}z^{-}_{1-\alpha }(s)\right), s=1,2:\)
$$P_{\mu}(ACI^{l}(s)\; does\; not\; cover\; \mu) \sim \alpha(n) $$
and
$$P_{\mu^{-}_{1,n}(s)}(ACI^{l}(s)\; covers\; \mu) \sim \beta(n),\, n\rightarrow \infty. $$
Here, \(z^{-}_{1-\alpha}(s)\) means the quantity \(z_{1-\alpha}(s)\) with \(g_{1}\) replaced by \(-g_{1}\), \(s=1,2\), and
$$\mu^{-}_{1,n}(s)=\mu-\frac{\sigma}{\sqrt{n}} (z_{1-\alpha}-z_{\beta})+\frac{\sigma g_{1}}{6n} \left(z^{2}_{1-\alpha}-z^{2}_{\beta}\right)- \frac{\sigma\left(3g_{2}-4{g_{1}^{2}}\right)} {72n^{3/2}}\left(z^{3}_{1-\alpha}- z^{3}_{\beta}\right)I_{\{2\}}(s). $$

Remark 2.

In many situations where limit theorems are considered as they were in Section 1, the additional assumptions (1) and (2) may, possibly unnoticed, be fulfilled. In such situations, Theorems 1 and 2, together with the following theorem, give more insight into the asymptotic relations stated in Section 1.

Theorem 3.

Large Gaussian quantiles satisfy the asymptotic representation
$$z_{1-\alpha}=\sqrt{-2\ln\alpha-\ln|\ln\alpha|- \ln(4\pi)}\cdot \left(1+O\left(\frac{\ln|\ln\alpha|}{(\ln\alpha)^{2}}\right)\right),\; \alpha\rightarrow +0. $$

Note that O(.) means the big Landau symbol.
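The quality of this representation can be checked against a numerically exact quantile; the sketch below (illustrative; statistics.NormalDist.inv_cdf serves as the reference) compares the leading factor of Theorem 3 with \(z_{1-\alpha}\) for \(\alpha=10^{-6}\):

```python
import math
from statistics import NormalDist

def z_approx(alpha):
    """Leading factor of Theorem 3 for the large Gaussian quantile z_{1-alpha}."""
    la = math.log(alpha)
    return math.sqrt(-2 * la - math.log(abs(la)) - math.log(4 * math.pi))

alpha = 1e-6
exact = NormalDist().inv_cdf(1 - alpha)   # numerically exact quantile
approx = z_approx(alpha)                  # asymptotic approximation
```

Already at \(\alpha=10^{-6}\) the relative error is below one percent, and it decreases further as α decreases.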

2.2 Two-sided confidence intervals

For \(s\in\{1,2\}\), \(\alpha>0\), put \( L(s;\alpha)=\bar {X}_{n}-\frac {\sigma }{\sqrt {n}}z_{1-\alpha }(s)\) and \( R(s;\alpha)=\bar {X}_{n}+\frac {\sigma }{\sqrt {n}}z^{-}_{1-\alpha }(s).\) Further, let \(\alpha_{i}(n)>0\), \(i=1,2\), \(\alpha_{1}(n)+\alpha_{2}(n)<1\), and
$$ACI(s;\alpha_{1}(n),\alpha_{2}(n))=\left[L(s;\alpha_{1}(n)), R(s;\alpha_{2}(n))\right]. $$

If conditions (1) and (2) are fulfilled then \(P_{\mu}((-\infty, L(s;\alpha_{1}(n)))\; covers\; \mu)\sim\alpha_{1}(n)\) and \(P_{\mu}((R(s;\alpha_{2}(n)),\infty)\; covers\; \mu)\sim\alpha_{2}(n)\) as \(n\rightarrow\infty\).

With more detailed notation μ 1,n (s)=μ 1,n (s;α,β) and \(\mu ^{-}_{1,n}(s)=\mu ^{-}_{1,n}(s;\alpha,\beta)\),

\(P_{\mu _{1,n}(s;\alpha _{1}(n),\beta _{1}(n))} ((L(s;\alpha _{1}(n)),\infty)\; covers\; \mu)\sim \beta _{1}(n)\),

\(P_{\mu ^{-}_{1,n}(s;\alpha _{2}(n),\beta _{2}(n))} ((-\infty, R(s;\alpha _{2}(n)))\; covers\; \mu)\sim \beta _{2}(n), n\rightarrow \infty.\)

The following corollary has thus been proved.

Corollary 1.

If \(\alpha_{1}(n)\rightarrow 0\), \(\alpha_{2}(n)\rightarrow 0\) as \(n\rightarrow\infty\) and conditions (1) and (2) are satisfied for \(\gamma \in \left (\frac {1}{6},\frac {1}{4}\right ]\) if s=1 and for \(\gamma \in \left (\frac {1}{4},\frac {3}{10}\right ]\) if s=2, and with \((\alpha(n),\beta(n))=(\alpha_{1}(n),\alpha_{2}(n))\), then
$$ P_{\mu} (ACI(s;\alpha_{1}(n),\alpha_{2}(n)) {\; does\; not\; cover\;} \mu)\sim (\alpha_{1}(n)+\alpha_{2}(n)), \, n\rightarrow \infty. $$
Moreover,
$$ \max\limits_{\nu\in\{\mu_{1,n} (s;\alpha_{1}(n),\beta_{1}(n)), \mu^{-}_{1,n}(s;\alpha_{2}(n),\beta_{2}(n)) \}} P_{\nu} (ACI(s) {\; covers\;} \mu)\leq\max\left\{\beta_{1}(n),\beta_{2}(n)\right\}. $$

Testing

3.1 Adjusted quantiles

Let us consider the problem of testing the hypothesis \(H_{0}:\mu\leq\mu_{0}\) versus the alternative \(H_{A}:\mu>\mu_{0}\). The first and second kind adjusted decision rules of the one-sided asymptotic Gauss test suggest rejecting \(H_{0}\) if \(T_{n,0}>z_{1-\alpha(n)}(s)\) for s=1 or s=2, respectively, where \(T_{n,0}=\sqrt {n}(\bar {X}_{n}-\mu _{0})/\sigma \). Because
$$ P_{\mu_{0}}(reject\; H_{0})=P_{\mu_{0}}(ACI^{u}(s) \; does\; not\; cover\; \mu_{0}), $$
it follows from Theorems 1 and 2 that under the conditions given there the (sequence of) probabilities of an error of first kind satisfy the asymptotic relation
$$ P_{\mu_{0}}(reject\; H_{0})\sim \alpha(n), n\rightarrow \infty. $$
Concerning the power function of this test, because
$$ P_{\mu_{1,n}(s)}(do\; not\; reject\; H_{0})= P_{\mu_{1,n}(s)}(ACI^{u}(s) \; covers\; \mu_{0}), $$
it follows under the same assumptions that the probabilities of a second kind error in the case that the sequence of the modified local parameters is (μ 1,n (s)) n=1,2,..., satisfy
$$ P_{\mu_{1,n}(s)}(do\; not\; reject\; H_{0})\sim \beta(n), n\rightarrow \infty. $$

Similar consequences for testing \(H_{1}:\mu>\mu_{0}\) or \(H_{2}:\mu=\mu_{0}\) are omitted here.

3.2 Adjusted statistics

Let \(T_{n}{(1)}=T_{n}-\frac {g_{1}}{6\sqrt {n}}{T_{n}^{2}}\) and \(T_{n}{(2)}=T_{n}{(1)}-\frac {3g_{2}-8{g_{1}^{2}}}{72n}{T_{n}^{3}}\) be the first and second kind adjusted asymptotically Gaussian statistics, respectively, where \(T_{n}=\frac {\sqrt {n}}{\sigma }\left (\bar {X}_{n} - \mu \right)\).
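A direct transcription of the two adjusted statistics (ours, for illustration, with hypothetical function names). Note that, up to the error orders used in the proof of Theorem 4, rejecting when \(T_{n}(1)>z_{1-\alpha}\) mimics rejecting when \(T_{n}>z_{1-\alpha}(1)\):

```python
def t_stat(xbar, mu, sigma, n):
    """T_n = sqrt(n) * (xbar - mu) / sigma."""
    return n ** 0.5 * (xbar - mu) / sigma

def t_adj(t, g1, g2, n, kind):
    """First (kind=1) and second (kind=2) kind adjusted statistics T_n(s)."""
    t1 = t - g1 * t ** 2 / (6 * n ** 0.5)
    if kind == 1:
        return t1
    return t1 - (3 * g2 - 8 * g1 ** 2) * t ** 3 / (72 * n)
```

Applying t_adj to the adjusted quantile \(z_{1-\alpha}(1)\) returns approximately \(z_{1-\alpha}\), with an error of order \(z^{3}_{1-\alpha}/n\), which illustrates the near-inverse relation exploited in Section 5.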

Theorem 4.

If the conditions (1) and (2) are satisfied for a certain \(\gamma \in \left (\frac {s}{2s+4},\!\right.\left.\frac {s+1}{2s+6}\right ]\) where s{1,2} then
$$P_{\mu_{0}}\left(T_{n}{(s)}>z_{1-\alpha(n)}\right)\sim \alpha(n), \; n\rightarrow \infty $$
and
$$ P_{\mu_{1,n}(s)}\left(T_{n}{(s)}\leq z_{1-\alpha(n)}\right)\sim \beta (n), \; n\rightarrow \infty. $$

Clearly, the results of this theorem apply to both hypothesis testing and confidence estimation in a similar way as described in the preceding sections.

The material of the present paper is part of a talk presented by the author at the Conference of European Statistics Stakeholders, Rome 2014, see Abstracts of Communication, p.90, and arXiv:1504.02553. A more advanced ‘testing-part’ of this talk is presented in (Richter 2016) and deals with higher order comparisons of statistical tests.

Application to exponential families

Let ν denote a σ-finite measure and assume that the distribution \(P_{\vartheta}\) has the Radon-Nikodym density \(\frac {dP_{\vartheta }}{d\nu }(x)= \frac {e^{\vartheta x}}{\int e^{\vartheta x}\nu (dx)}=e^{\vartheta x-B(\vartheta)}\), say. For basics on exponential families we refer to Brown (1986). We assume that \(X(\vartheta)\sim P_{\vartheta}\) and \(X_{1}=X(\vartheta)-{E}X(\vartheta)+\mu \sim \widetilde {P}_{\mu }\) where ϑ is known and μ is unknown. In the product-shift-experiment \(\left[R^{n},\mathfrak{B}^{n},\left\{\widetilde{P}^{\times n}_{\mu},\,\mu\in R\right\}\right]\), expectation estimation and testing may be done as in Sections 2 and 3, respectively, where \(g_{1}=B'''(\vartheta)/(B''(\vartheta))^{3/2}\) and \(g_{2}\) allows a similar representation.
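As a sanity check of the formula for \(g_{1}\) (a sketch under the stated model, with a hypothetical helper name), consider the Poisson family, where \(B(\vartheta)=e^{\vartheta}\), all derivatives of B coincide, and the skewness should reduce to the classical value \(\lambda^{-1/2}\):

```python
import math

def g1_from_cumulant(Bpp, Bppp, theta):
    """Skewness g1 = B'''(theta) / (B''(theta))^{3/2} of a natural exponential family."""
    return Bppp(theta) / Bpp(theta) ** 1.5

theta = math.log(4.0)  # Poisson natural parameter with mean lam = 4
g1 = g1_from_cumulant(math.exp, math.exp, theta)
# classical Poisson skewness: lam ** -0.5 = 0.5
```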

Another problem which can be dealt with is to test the hypothesis \(H_{0}:\vartheta=\vartheta_{0}\) versus the alternative \(H_{1n}:\vartheta=\vartheta_{1n}\) if one assumes that the expectation function \(\vartheta\mapsto B'(\vartheta)=E_{\vartheta}X\) is strictly monotone. For this case, we finally present just the following particular result which applies to both estimating and testing.

Proposition 1.

If conditions (1) and (2) are satisfied for \(\gamma \in \left (\frac {1}{6},\frac {1}{4}\right ]\) then
$$P_{\vartheta_{0}}^{\times n}\left(\sqrt{n}\frac{\overline{X}_{n}-B'(\vartheta_{0})}{\sqrt{B^{\prime\prime}(\vartheta_{0})}}> z_{1-\alpha(n)}+\frac{B^{\prime\prime\prime}(\vartheta_{0})}{6\sqrt{n}(B^{\prime\prime}(\vartheta_{0}))^{3/2}} z^{2}_{1-\alpha(n)}\right)\sim\alpha(n),n\;\rightarrow \infty. $$

Sketch of proofs

Proof of Theorems 1 and 2.

If condition (2) is satisfied then \(x=z_{1-\alpha(n)}=o(n^{\gamma})\), \(n\rightarrow\infty\), for \(\gamma \in \left (\frac {1}{6},\frac {3}{10}\right ]\), and if (1) holds then, according to (Linnik 1961; Nagaev 1965), \(P_{\mu }(T_{n}>x)\sim f_{n,s}^{(X)}(x), x\rightarrow \infty \) where \( f_{n,s}^{(X)}(x)=\frac {1}{\sqrt {2\pi }x} \exp \left \{-\frac {x^{2}}{2}+\frac {x^{3}}{\sqrt {n}}\sum \limits _{k=0}^{s-1}a_{k}\left (\frac {x}{\sqrt {n}}\right)^{k}\right \} \) and s is an integer satisfying \(\frac {s}{2(s+2)}<\gamma \leq \frac {s+1}{2(s+3)}\), i.e. s=1 if \(\gamma \in \left (\frac {1}{6},\frac {1}{4}\right ]\) and s=2 if \(\gamma \in \left (\frac {1}{4},\frac {3}{10}\right ]\). Here, the constants \(a_{0}=\frac {g_{1}}{6},\, a_{1}=\frac {g_{2}-3{g_{1}^{2}}}{24} \) are due to the skewness \(g_{1}\) and kurtosis \(g_{2}\) of X. Note that \(\frac {g_{1}x^{2}}{6\sqrt {n}}=o(x)\) because \(x=o(n^{1/2})\), thus \(x+\frac {g_{1}x^{2}}{6\sqrt {n}}=o(n^{\gamma })\), and \(P_{\mu }\left (T_{n}>x+\frac {g_{1}x^{2}}{6\sqrt {n}}\right) \sim f_{n,1}\left (x+\frac {g_{1}x^{2}}{6\sqrt {n}}\right)\). Hence, \(P_{\mu }\left (T_{n}>x+\frac {g_{1}x^{2}}{6\sqrt {n}}\right)\sim 1-\Phi (x).\) Similarly, \(P_{\mu}(T_{n}>z_{1-\alpha(n)}(s))\sim\alpha(n)\), s=1,2. Further, \(P_{\mu _{1,n}(s)}(T_{n}\leq z_{1-\alpha (n)}(s))\)
$$ {}=P_{\mu_{1,n}(s)}\left(\frac{\sqrt{n}}{\sigma} (\bar{X}_{n} -\mu_{1,n}(s))< z_{1-\alpha(n)}(s) -\frac{\sqrt{n}}{\sigma}(\mu_{1,n}(s)-\mu)\right) =P_{0}\left(\frac{\sqrt{n}}{\sigma} \bar{X}_{n}< z_{\beta(n)}(s)\right). $$
The latter equality holds because {P μ ,μ(−,)} is assumed to be a shift family. It follows that \(P_{\mu _{1,n}(s)}(T_{n}\leq z_{1-\alpha (n)}(s))\)
$$ =P_{0}\left(\frac{\sqrt{n}}{\sigma} (-\bar{X}_{n})\geq z_{1-\beta(n)}+ \frac{-g_{1}}{6\sqrt{n}}z^{2}_{1-\beta(n)} +I_{\{2\}}(s) \frac{3g_{2}-4{g_{1}^{2}}}{72n}z^{3}_{1-\beta(n)}\right). $$
Note that −g 1,g 2 are skewness and kurtosis of −X 1. Thus,
$$P_{\mu_{1,n}(s)}\left(T_{n}\leq z_{1-\alpha(n)}(s)\right)\sim f_{n,s}^{(-X)}(z_{1-\beta(n)}(s)) \sim\beta(n), n \rightarrow \infty. $$
Because \(P_{\mu}(ACI^{u}(s)\; does\; not\; cover\; \mu)=P_{\mu}(T_{n}>z_{1-\alpha(n)}(s))\) and \(P_{\mu _{1,n}(s)}(ACI^{u}(s) \; covers\; \mu) =P_{\mu _{1,n}(s)}(T_{n}\leq z_{1-\alpha (n)}(s))\), the theorems are proved.

Proof of Remark 1.

The first statement of the remark follows from
$$P_{\mu}\left(\mu>\bar{X}_{n}+\sigma z^{-}_{1-\alpha(n)}/\sqrt{n}\right) =P_{\mu}\left(\sqrt{n}(-\bar{X}_{n}+\mu)/\sigma >z^{-}_{1-\alpha(n)}\right) $$
and the second one from
$$P_{\mu^{-}_{1,n}(s)}\left(\mu< \bar{X}_{n}+ \sigma z^{-}_{1-\alpha(n)}/\sqrt{n}\right) =P_{0}\left(\bar{X}_{n}>\mu-\mu^{-}_{1,n}(s)-\sigma z^{-}_{1-\alpha(n)}/\sqrt{n}\right)$$
$$=P_{0}\left(\sqrt{n}\bar{X}_{n}/\sigma > z_{1-\beta(n)}(s)\right). $$

Proof of Theorem 3.

We start from the well known relations
$$\alpha=1-\Phi(z_{1-\alpha})= \left(1+O\left(\frac{1}{z^{2}_{1-\alpha}}\right)\right)\frac{1}{\sqrt{2\pi}z_{1-\alpha}} e^{-\frac{z^{2}_{1-\alpha}}{2}},\;\alpha\rightarrow 0. $$
The solution to the approximate quantile equation \(\alpha =\frac {1}{\sqrt {2\pi }x}e^{-\frac {x^{2}}{2}}\) will be denoted by \(x=x_{1-\alpha}\). Let us put
$$ xe^{\frac{x^{2}}{2}}=\frac{1}{\sqrt{2\pi}\alpha}=:y. $$
(3)
If \(x\geq 1\) then it follows from (3) that \( y\geq e^{\frac {x^{2}}{2}}\), hence \(x^{2}\leq\ln(y^{2})\). It follows again from (3) that \( y^{2}\leq \ln (y^{2})e^{x^{2}}\), thus \(x^{2}\geq \ln \left (\frac {y^{2}}{\ln y^{2}}\right).\) After one more such step,
$$\ln\left(\frac{y^{2}}{\ln y^{2}}\right)\leq x^{2}\leq\ln\left[\frac{y^{2}}{\ln\left(\frac{y^{2}}{\ln y^{2}}\right)}\right]. $$
The theorem now follows from
$$x^{2}=\left\{\ln y^{2}-\ln 2-\ln\ln y\right\}\left\{1+O\left(\frac{\ln\ln y}{(\ln y^{2})^{2}}\right)\right\}, y\rightarrow \infty. $$
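The two-sided bound on \(x^{2}\) can be verified numerically; the following sketch (illustrative only; the bisection bracket and the choice \(\alpha=10^{-8}\) are ad hoc) solves (3) and checks the sandwich inequality above:

```python
import math

def solve_eq3(y, lo=0.0, hi=10.0):
    """Solve x * exp(x**2 / 2) = y for x in (lo, hi) by bisection."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid * math.exp(mid ** 2 / 2) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha = 1e-8
y = 1.0 / (math.sqrt(2.0 * math.pi) * alpha)   # right-hand side of (3)
x = solve_eq3(y)
ly2 = math.log(y ** 2)
lower = math.log(y ** 2 / ly2)                 # lower bound for x**2
upper = math.log(y ** 2 / lower)               # upper bound for x**2
```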

Let us remark that the inverse of the function \(w\mapsto we^{w}\) is called the Lambert W function. An asymptotic representation of the solution of (3) as \(y\rightarrow\infty\) can therefore be derived from the more general representation (4.19) of W in (Corless et al. 1996) if one reads (3) as \(we^{w}=y^{2}\). Our derivation of the particular result needed here, however, is much more elementary than the general one given in the paper just mentioned.

Proof of Theorem 4.

Recognize that if \(g_{n,s}(x)=o\left (\frac {1}{x}\right), x\rightarrow \infty \) then \(f^{(+/-)(X)}_{n,s}(x+g_{n,s}(x))\sim f^{(+/-)(X)}_{n,s}(x),x\rightarrow \infty.\) Let us restrict to the case s=1. According to (Linnik 1961),
$$P_{\mu_{0}}(T_{n}{(1)}>z_{1-\alpha(n)})\sim P_{\mu_{0}}\left(\frac{3\sqrt{n}}{g_{1}}>T_{n}{(1)}>z_{1-\alpha(n)}\right). $$
The function \(f^{(1)}_{n}(t)=t-\frac {g_{1}t^{2}}{6\sqrt {n}}\) has a positive derivative, \(f^{(1)'}_{n}(t)=1- \frac {g_{1}t}{3\sqrt {n}}>0\), if \(g_{1}t<3\sqrt {n}\). Denoting the inverse function of \(f_{n}^{(1)}\) on this range by \(f_{n}^{(1)^{-1}}\), it follows that \(f_{n}^{(1)^{-1}}(x)= x+\frac {g_{1}x^{2}}{6\sqrt {n}}+O\left (\frac {x^{3}}{n}\right)\) and \( f^{(1)}_{n}\left (f_{n}^{(1)^{-1}}(x)\right) = x+o\left (\frac {1}{x}\right).\) Thus,
$$P_{\mu_{o}}(T_{n}{(1)}>z_{1-\alpha(n)})\sim P_{\mu_{o}}(T_{n}>z_{1-\alpha(n)}(1)) \sim\alpha(n). $$

Moreover, \(P_{\mu _{1n}(1;\alpha (n),\beta (n))}(T_{n}{(1)}\!\leq \! z_{1-\alpha (n)})\,=\,P_{\mu _{1n}(1;\alpha (n),\beta (n))}\!\left (T_{n}\!\leq \! \left (f_{n}^{(1)}\right)^{-1} \!\left (z_{1-\alpha (n)}\right)\!\right)\!=\) \(P_{\mu _{1n}(1;\alpha (n),\beta (n))}\left (\sqrt {n}\frac {\overline {X}_{n} -\mu _{1n}(1)}{\sigma } \leq z_{1-\alpha (n)}(1)+\frac {z^{2}_{1-\alpha (n)}g_{1}}{6\sqrt {n}}+\!O\left (\frac {z^{3}_{1-\alpha (n)}}{n}\right)- \sqrt {n}\frac {\mu _{1n}(1)-\mu _{0}}{\sigma }\right)\! \sim f_{n,1}^{(-X)}\left (-z_{\beta (n)}(1)+ O\left (\frac {z^{3}_{1-\alpha (n)}}{n}\right)\right) \sim 1-\Phi (z_{1-\beta (n)})=\beta (n).\)

Proof of Proposition 1.

Because
$$P_{\mu_{0}}(reject\; H_{0})=P_{\mu_{0}}(ACI^{u}(1)\; does\; not\; cover\; \mu_{0}) $$
it follows by Theorem 1 that
$$P_{\mu_{0}}(reject\; H_{0})=P_{\mu_{0}}\left(\bar{X}_{n}-\frac{\sigma}{\sqrt{n}}z_{1-\alpha}(1)>\mu_{0}\right) =P_{\mu_{0}}\left(\sqrt{n}\frac{\bar{X}_{n}-\mu_{0}}{\sigma}>z_{1-\alpha}(1)\right). $$
With \(P_{\mu _{0}}=P_{\vartheta _{0}}^{\times n}, \mu _{0}=B'(\vartheta _{0}), \sigma ^{2}=B^{\prime \prime }(\vartheta _{0})\) and B ′′′(𝜗 0)/(B ′′(𝜗 0))3/2=g 1, the proof of Proposition 1 is finished.

Declarations

Acknowledgements

The author is grateful to the Reviewers for their valuable hints and declares no conflicts of interest.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Institute of Mathematics, University of Rostock

References

  1. Brown, LD: Fundamentals of Statistical Exponential Families. IMS Lecture Notes and Monograph Series, Hayward, CA (1986).
  2. Corless, RM, Gonnet, GH, Hare, DEG, Jeffrey, DJ, Knuth, DE: On the Lambert W function. Adv. Comput. Math. 5, 329–359 (1996).
  3. Ibragimov, IA, Linnik, YV: Independent and Stationary Sequences of Random Variables. Wolters-Noordhoff, Groningen. Translation from the Russian edition, 1965 (1971).
  4. Linnik, YV: Limit theorems for sums of independent variables taking into account large deviations. I–III. Theor. Probab. Appl. 6, 131–148, 345–360 (1961); 7, 115–129 (1962).
  5. Nagaev, SV: Some limit theorems for large deviations. Theory Probab. Appl. 10, 214–235 (1965).
  6. Osipov, LV: Multidimensional limit theorems for large deviations. Theory Probab. Appl. 20, 38–56 (1975).
  7. Petrov, VV: Asymptotic behaviour of probabilities of large deviations. Theor. Probab. Appl. 13, 408–420 (1968).
  8. Richter, W-D: Skewness-kurtosis controlled higher order equivalent decisions. Open Stat. Probability J. 7, 1–9 (2016). doi:10.2174/1876527001607010001.

Copyright

© Richter. 2016