Open Access

Missing data approaches for probability regression models with missing outcomes with applications

Journal of Statistical Distributions and Applications 2014, 1:23

DOI: 10.1186/s40488-014-0023-3

Received: 24 July 2014

Accepted: 25 November 2014

Published: 16 December 2014

Abstract

In this paper, we investigate several well-known approaches for missing data and their relationships for the parametric probability regression model P_β(Y|X) when the outcome of interest Y is subject to missingness. We explore the relationships between the mean score method, the inverse probability weighting (IPW) method and the augmented inverse probability weighted (AIPW) method, with some interesting findings. The asymptotic distributions of the IPW and AIPW estimators are derived and their efficiencies are compared. Our analysis details how efficiency may be gained by the AIPW estimator over the IPW estimator through estimation of the validation probability and through augmentation. We show that the AIPW estimator based on augmentation using the full set of observed variables is more efficient than the AIPW estimator based on augmentation using a subset of observed variables. The developed approaches are applied to the Poisson regression model with missing outcomes, based on auxiliary outcomes and a validated sample for true outcomes. We show that, by stratifying on a set of discrete variables, the proposed statistical procedure can be formulated to analyze automated records that contain only summarized information at categorical levels. The proposed methods are applied to analyze influenza vaccine efficacy for an influenza vaccine study conducted in Temple-Belton, Texas during the 2000-2001 influenza season.

Mathematics Subject Classification: Primary 62J02; Secondary 62F12

Keywords

Augmented inverse probability weighted estimator; Asymptotic results; Automated records; Auxiliary outcome; Efficiency; Inverse probability weighted estimator; Mean score estimation; Recurrent events; Vaccine efficacy; Validation sample

Introduction

Suppose that Y is the outcome of interest and X is a covariate vector. One is often interested in the probability regression model P_β(Y|X) that relates Y to X. In many medical and epidemiological studies, complete observations on Y may not be available for all study subjects because of time, cost, or ethical concerns. In some situations, an easily measured but less accurate outcome, called the auxiliary outcome variable A, is available as a supplement. The relationship between the true outcome Y and the auxiliary outcome A in the available observations can inform about the missing values of Y. Let V be a subsample of the study subjects, termed the validation sample, for which both true and auxiliary outcomes are available. Thus observations on (X,Y,A) are available for the subjects in V, and only (X,A) is observed for those not in V.

It is well known that the complete-case analysis, which uses only subjects with all variables observed, can be biased and inefficient, cf. Little and Rubin ([2002]). Similar issues arise when the auxiliary outcome is substituted for the true outcome; see Ellenberg and Hamilton ([1989]), Prentice ([1989]) and Fleming ([1992]). Inverse probability weighting (IPW) is a statistical technique developed for surveys by Horvitz and Thompson ([1952]) to calculate statistics standardized to a population different from the one in which the data were collected. This approach has been generalized to many areas of statistics under various frameworks. In particular, the IPW approach accounts for missing data by inflating the weights of subjects who are underrepresented due to missingness. The method can reduce the bias of the complete-case estimator when the weighting is correctly specified. However, this approach has been shown to be inefficient in several situations; see Clayton et al. ([1998]) and Scharfstein et al. ([1999]). Robins et al. ([1994]) developed an improved augmented inverse probability weighted (AIPW) complete-case estimation procedure. The method is more efficient and possesses the double robustness property. The multiple imputation described in Rubin ([1987]) has been routinely used to handle missing data. Carpenter et al. ([2006]) compared multiple imputation with IPW and AIPW, and found AIPW to be an attractive alternative in terms of double robustness and efficiency. Using maximum likelihood estimation (MLE) coupled with the EM-algorithm ([Dempster et al. 1977]), Pepe et al. ([1994]) proposed the mean score method for the regression model P_β(Y|X) when both X and A are discrete.

In this paper, we investigate several well-known approaches for missing data and their relationships for the parametric probability regression model P_β(Y|X) when the outcome of interest Y is subject to missingness. We explore the relationships between the mean score method, IPW and AIPW, with some interesting findings. Our analysis details how efficiency is gained by the AIPW estimator over the IPW estimator through estimation of the validation probability and through augmentation of the IPW score function. Applying the developed missing data methods, we derive estimation procedures for the Poisson regression model with missing outcomes, based on auxiliary outcomes and a validated sample for true outcomes. Further, we show that the proposed statistical procedures can be formulated to analyze automated records that contain only aggregated information at categorical levels, without using observations at individual levels.

The rest of the paper is organized as follows. Section 2 introduces several missing data approaches for the probability regression model P_β(Y|X), where the outcome Y may be missing. Section 3 explores the relationships among these estimators, derives the asymptotic distributions of the IPW and AIPW estimators, and compares their efficiencies; it also investigates the efficiency of two AIPW estimators, one based on augmentation using a subset of the observed variables and the other based on augmentation using the full set of observed variables. The procedures for Poisson regression using automated data with missing outcomes are derived in Section 4. The finite-sample performances of the estimators are studied by simulation in Section 5. The proposed method is applied to analyze influenza vaccine efficacy for an influenza vaccine study conducted in Temple-Belton, Texas during the 2000-2001 influenza season. The proofs of the main results are given in Appendix A, while the proof of a simplified variance formula in Section 4 is given in Appendix B.

Missing data approaches

Consider the probability regression model P_β(Y|X), where Y is the outcome of interest and X is a covariate vector. Let A be the auxiliary outcome for Y and V the validation set, such that observations on (X,Y,A) are available for the subjects in V and only (X,A) is observed for those in V̄, the complement of V. In practice, the validation sample may be selected based on the characteristics of a subset, Z, of the covariates in X. We write X = (Z, Z^c). For example, Z may include the exposure indicator and other discrete covariates, and Z^c may be the exposure time. Let (Z_i, X_i, Y_i, A_i), i=1,…,n, be independent identically distributed (iid) copies of (Z,X,Y,A). Let ξ_i = I(i ∈ V) be the selection indicator.

Most statistical methods for missing data require some assumptions on the missingness mechanism. The commonly used ones are missing completely at random (MCAR) and missing at random (MAR). MCAR assumes that the probability of missingness in a variable is independent of any characteristics of the subjects. MAR assumes that the probability that a variable is missing depends only on observed variables. In practice, if missingness is a result of design, it is often convenient to let the missing-data probability depend only on categorical variables; modeling the missing-data probability through categorical variables also simplifies statistical inference. We introduce the following missing at random assumptions.

MAR I: ξ_i is independent of Y_i conditional on (X_i, A_i), and ξ_i is independent of Z_i^c conditional on (Z_i, A_i).

MAR II: ξ_i is independent of (Y_i, Z_i^c) conditional on (Z_i, A_i).

Since, under MAR I, the conditional density satisfies f(y, z^c | ξ, z, a) = f(z^c | ξ, z, a) f(y | z^c, ξ, z, a) = f(z^c | z, a) f(y | z^c, z, a) = f(y, z^c | z, a), MAR I implies MAR II. It is also easy to show that MAR II implies MAR.

Let π̂_i be the estimator of the conditional probability π_i = P(ξ_i = 1 | X_i, A_i), and π̂_i^z the estimator of π_i^z = P(ξ_i = 1 | Z_i, A_i). Let S_β(Y|X) denote the vector of partial derivatives of log P_β(Y|X) with respect to β. Let Ê{S_β(Y|X_i) | X_i, A_i} be the estimator of the conditional expectation E{S_β(Y|X_i) | X_i, A_i}, and Ê{S_β(Y|X_i) | Z_i, A_i} the estimator of E{S_β(Y|X_i) | Z_i, A_i}. We investigate several estimators of β based on the following estimating equations with different choices of W_i:
$$\sum_{i=1}^{n} W_i = 0, \qquad (1)$$

where W_i takes one of the following forms:

$$W_i^{I1} = \frac{\xi_i}{\hat\pi_i^z}\, S_\beta(Y_i|X_i), \qquad (2)$$

$$W_i^{E1} = \xi_i\, S_\beta(Y_i|X_i) + (1-\xi_i)\,\hat E\{S_\beta(Y|X_i)\,|\,Z_i,A_i\}, \qquad (3)$$

$$W_i^{A1} = \frac{\xi_i}{\hat\pi_i^z}\, S_\beta(Y_i|X_i) + \Big(1-\frac{\xi_i}{\hat\pi_i^z}\Big)\hat E\{S_\beta(Y|X_i)\,|\,Z_i,A_i\}, \qquad (4)$$

$$W_i^{I2} = \frac{\xi_i}{\hat\pi_i}\, S_\beta(Y_i|X_i), \qquad (5)$$

$$W_i^{E2} = \xi_i\, S_\beta(Y_i|X_i) + (1-\xi_i)\,\hat E\{S_\beta(Y|X_i)\,|\,X_i,A_i\}, \qquad (6)$$

$$W_i^{A2} = \frac{\xi_i}{\hat\pi_i^z}\, S_\beta(Y_i|X_i) + \Big(1-\frac{\xi_i}{\hat\pi_i^z}\Big)\hat E\{S_\beta(Y|X_i)\,|\,X_i,A_i\}, \qquad (7)$$

$$W_i^{A3} = \frac{\xi_i}{\hat\pi_i}\, S_\beta(Y_i|X_i) + \Big(1-\frac{\xi_i}{\hat\pi_i}\Big)\hat E\{S_\beta(Y|X_i)\,|\,X_i,A_i\}. \qquad (8)$$

The estimator β̂_{I1} obtained by using W_i^{I1} is an IPW estimator in which a subject's validation probability π_i^z depends only on the category defined by (Z_i, A_i). Because E{(π_i^z)^{-1} ξ_i S_β(Y_i|X_i)} = E{S_β(Y_i|X_i)} = 0, the estimator β̂_{I1} is approximately unbiased. The estimator β̂_{I2} obtained by using W_i^{I2} is also an IPW estimator, but with the validation probability π_i depending on the category defined by (Z_i, A_i) and the additional covariates Z_i^c.

The estimator β̂_{E1} obtained by using W_i^{E1} is the mean score estimator in which the scores S_β(Y_i|X_i) for subjects with missing outcomes are replaced by the estimated conditional expectations given (Z_i, A_i). The estimator β̂_{E2} obtained by using W_i^{E2} is the mean score estimator in which those scores are replaced by the estimated conditional expectations given (X_i, A_i); this is the mean score estimator of Pepe et al. ([1994]). The mean score estimator is the MLE obtained via the EM-algorithm ([Dempster et al. 1977]) under the assumption that the auxiliary outcome is noninformative, in the sense that the probability model P_θ(A|Y,X) is unrelated to β.

The estimator β̂_{A1} obtained using W_i^{A1} is the AIPW estimator augmented with the estimated conditional expectation Ê{S_β(Y|X_i) | Z_i, A_i}. The estimator β̂_{A2} obtained using W_i^{A2} is the AIPW estimator augmented with the estimated conditional expectation Ê{S_β(Y|X_i) | X_i, A_i}. The estimator β̂_{A3} is obtained using W_i^{A3}, which differs from W_i^{A2} in that the estimated validation probability is π̂_i instead of π̂_i^z.

Suppose that π̂_i^z is an asymptotically unbiased estimator of π̄_i^z and that Ê{S_β(Y|X_i) | Z_i, A_i} is an asymptotically unbiased estimator of Ē{S_β(Y|X_i) | Z_i, A_i}, where both π̄_i^z and Ē{S_β(Y|X_i) | Z_i, A_i} are functions of (Z_i, A_i). Under MAR II, if at least one of the equalities π̄_i^z = π_i^z and Ē{S_β(Y|X_i) | Z_i, A_i} = E{S_β(Y|X_i) | Z_i, A_i} holds, then

$$E\big\{(\bar\pi_i^z)^{-1}\xi_i\, S_\beta(Y_i|X_i)\big\} + E\big\{\big(1-(\bar\pi_i^z)^{-1}\xi_i\big)\,\bar E\{S_\beta(Y|X_i)\,|\,Z_i,A_i\}\big\} = E\{S_\beta(Y_i|X_i)\} = 0,$$

which entails that the estimator β̂_{A1} has the double robustness property, in the sense that it is a consistent estimator of β if either π̂_i^z is a consistent estimator of π_i^z or Ê{S_β(Y|X_i) | Z_i, A_i} is a consistent estimator of E{S_β(Y|X_i) | Z_i, A_i}. Similarly, under MAR I, the estimator β̂_{A2} possesses the double robustness property: β̂_{A2} is a consistent estimator of β if either π̂_i^z is a consistent estimator of π_i^z or Ê{S_β(Y|X_i) | X_i, A_i} is a consistent estimator of E{S_β(Y|X_i) | X_i, A_i}. The estimator β̂_{A3} has a double robustness property similar to that of β̂_{A2}.
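As a concrete illustration, the per-subject contributions W_i^{I1} in (2) and W_i^{A1} in (4) are simple to compute once the score, the estimated validation probabilities, and the estimated conditional expectations are available as arrays. The sketch below is our own (the function names and toy inputs are not from the paper):

```python
import numpy as np

def w_ipw(xi, pi_hat, score):
    """IPW contribution (2): (xi_i / pi_hat_i) * S_beta(Y_i | X_i).
    For subjects with xi_i = 0 the score entry is never used (it is
    multiplied by xi_i), so any placeholder value is acceptable."""
    return (xi / pi_hat) * score

def w_aipw(xi, pi_hat, score, cond_score):
    """AIPW contribution (4): the IPW term plus the augmentation
    (1 - xi_i / pi_hat_i) * E_hat{S_beta(Y | X_i) | Z_i, A_i}."""
    return (xi / pi_hat) * score + (1.0 - xi / pi_hat) * cond_score

# For a validated subject (xi = 1) with pi_hat = 1, both reduce to the score;
# for a non-validated subject (xi = 0), the AIPW term is the imputed mean score.
xi = np.array([1.0, 0.0, 1.0])
pi_hat = np.array([0.5, 0.5, 1.0])
score = np.array([2.0, 0.0, 1.0])        # placeholder 0 where xi = 0
cond_score = np.array([1.5, 0.7, 0.9])
print(w_ipw(xi, pi_hat, score))           # [4. 0. 1.]
print(w_aipw(xi, pi_hat, score, cond_score))
```

Solving (1) then amounts to finding the β at which the chosen contributions sum to zero.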

Method comparisons and asymptotic results

Let V(X_i, A_i) denote the set of subjects in V with values of (X,A) equal to (X_i, A_i), n_V(X_i, A_i) the number of subjects in V(X_i, A_i), and n(X_i, A_i) the number of subjects with values of (X,A) equal to (X_i, A_i). When X and A are discrete and their dimensionality is reasonably small, the probability π_i = P(ξ_i = 1 | X_i, A_i) can be estimated by π̂_i = n_V(X_i, A_i)/n(X_i, A_i). The conditional expectation E{S_β(Y|X_i) | X_i, A_i} can be estimated nonparametrically based on the validation sample,

$$\hat E\{S_\beta(Y|X_i)\,|\,X_i,A_i\} = \sum_{j\in V(X_i,A_i)} S_\beta(Y_j|X_j)\,\big/\, n_V(X_i,A_i). \qquad (9)$$

Under MAR I, Ê{S_β(Y|X_i) | X_i, A_i} is an unbiased estimator of E{S_β(Y|X_i) | X_i, A_i}. Now let V(Z_i, A_i) denote the set of subjects in V with values of (Z,A) equal to (Z_i, A_i), n_V(Z_i, A_i) the number of subjects in V(Z_i, A_i), and n(Z_i, A_i) the number of subjects in the sample with values of (Z,A) equal to (Z_i, A_i). A nonparametric estimator of π_i^z = P(ξ_i = 1 | Z_i, A_i) is given by π̂_i^z = n_V(Z_i, A_i)/n(Z_i, A_i). A nonparametric estimator of E{S_β(Y|X_i) | Z_i, A_i} is given by

$$\hat E\{S_\beta(Y|X_i)\,|\,Z_i,A_i\} = \sum_{j\in V(Z_i,A_i)} S_\beta(Y_j|X_j)\,\big/\, n_V(Z_i,A_i). \qquad (10)$$

Under MAR II, (Y_i, X_i) is independent of ξ_i conditional on (Z_i, A_i), so Ê{S_β(Y|X_i) | Z_i, A_i} is an unbiased estimator of E{S_β(Y|X_i) | Z_i, A_i}.
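When (Z,A) are discrete, the cell-frequency estimators above take only a few lines of code. The sketch below (our own helper, with hypothetical inputs) computes π̂_i^z = n_V(Z_i, A_i)/n(Z_i, A_i) and the within-cell validation average (10) for every subject:

```python
import numpy as np
from collections import defaultdict

def cell_estimators(z, a, xi, score):
    """Nonparametric pi_hat^z and E_hat{S_beta(Y|X_i) | Z_i, A_i}.
    z, a : category labels; xi : 0/1 validation indicators;
    score: score values, used only where xi == 1."""
    n = defaultdict(int)        # cell sizes n(z, a)
    nv = defaultdict(int)       # validated cell sizes n_V(z, a)
    s_sum = defaultdict(float)  # sum of validated scores per cell
    for zi, ai, xii, si in zip(z, a, xi, score):
        key = (zi, ai)
        n[key] += 1
        if xii:
            nv[key] += 1
            s_sum[key] += si
    pi_hat = np.array([nv[zi, ai] / n[zi, ai] for zi, ai in zip(z, a)])
    e_hat = np.array([s_sum[zi, ai] / nv[zi, ai] for zi, ai in zip(z, a)])
    return pi_hat, e_hat
```

Each cell must contain at least one validated subject for e_hat to be defined, which matches the requirement that the dimensionality of (Z,A) be reasonably small.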

Proposition 1.

Suppose that X = (Z, Z^c) and A are discrete and their dimensionality is reasonably small. With the nonparametric estimators π̂_i^z = n_V(Z_i, A_i)/n(Z_i, A_i) and π̂_i = n_V(X_i, A_i)/n(X_i, A_i), and the estimators of the conditional expectations defined in (9) and (10), the estimators β̂_{I1}, β̂_{E1} and β̂_{A1} are equivalent, and the estimators β̂_{I2}, β̂_{E2}, β̂_{A2} and β̂_{A3} are equivalent. However, the estimator β̂_{A2} differs from β̂_{A1} unless Z_i^c is linearly related to Z_i, in which case β is not identifiable.

The results of Proposition 1 are intriguing, since research has shown that the AIPW and mean score methods are more efficient than the IPW method. It is also intriguing that the AIPW estimators β̂_{A2} and β̂_{A3} are actually the same estimator, unaffected by the choice of estimated validation probability. To further understand these approaches, we investigate their asymptotic properties when (X,A) are not necessarily discrete. Through the asymptotic analysis, we gain insight into what drives efficiency in terms of the selection of the validation sample and the choice of augmentation function.

Suppose that Ẽ{S_β(Y|X_i) | X_i, A_i} is a consistent parametric or nonparametric estimator of E_a{S_β(Y|X_i) | X_i, A_i}, where E_a{S_β(Y|X_i) | X_i, A_i} is either E{S_β(Y|X_i) | X_i, A_i} or E{S_β(Y|X_i) | Z_i, A_i}. Let π(X_i, A_i, ψ) be the parametric model for the validation probability π_i, where ψ is a q-dimensional parameter. We show in Corollary 2 that the nonparametric estimator of π(X_i, A_i, ψ) can also be expressed in this parametric form when (X_i, A_i) are discrete. Let ψ_0 be the true value of ψ. Under MAR I, the MLE ψ̂ = (ψ̂_1,…,ψ̂_q) of ψ = (ψ_1,…,ψ_q) is obtained by maximizing the observed-data likelihood,

$$\prod_{i=1}^{n} \pi(X_i,A_i,\psi)^{\xi_i}\,\{1-\pi(X_i,A_i,\psi)\}^{1-\xi_i}.$$

The validation probability π_i is estimated by π̃_i = π(X_i, A_i, ψ̂). Then, by standard likelihood analysis, we have the approximation

$$\hat\psi - \psi_0 = n^{-1}\sum_{i=1}^{n} I_\psi^{-1} S_i^\psi + o_p(n^{-1/2}), \qquad (11)$$

where S_i^ψ and I_ψ are the score vector and information matrix for ψ, defined by

$$S_i^\psi = \frac{\xi_i - \pi(X_i,A_i,\psi_0)}{\pi(X_i,A_i,\psi_0)\{1-\pi(X_i,A_i,\psi_0)\}}\,\frac{\partial \pi(X_i,A_i,\psi_0)}{\partial \psi}, \qquad
I_\psi = E\left[\frac{1}{\pi(X_i,A_i,\psi_0)\{1-\pi(X_i,A_i,\psi_0)\}}\left\{\frac{\partial \pi(X_i,A_i,\psi_0)}{\partial \psi}\right\}^{\otimes 2}\right], \qquad (12)$$

where a^{⊗2} = a a′.

Consider the IPW estimator β̂_I obtained by solving the estimating equation

$$U_I = \sum_{i=1}^{n} \frac{\xi_i}{\tilde\pi_i}\, S_\beta(Y_i|X_i), \qquad (13)$$

and the AIPW estimator β̂_A obtained by solving the estimating equation

$$U_A = \sum_{i=1}^{n} \left[\frac{\xi_i}{\tilde\pi_i}\, S_\beta(Y_i|X_i) + \Big(1-\frac{\xi_i}{\tilde\pi_i}\Big)\tilde E\{S_\beta(Y|X_i)\,|\,X_i,A_i\}\right]. \qquad (14)$$

Theorem 1.

Assume that P_β(Y|X) and π(X,A,ψ) have bounded third-order derivatives in a neighborhood of the true parameters and are bounded away from 0 almost surely, and that both −E{(∂²/∂β²) log P_β(Y|X)} and I_ψ are positive definite at the true parameters. Then, under MAR I,

$$n^{1/2}(\hat\beta_I - \beta) = I^{-1}(\beta)\, n^{-1/2}\sum_{i=1}^{n} Q_i^I + o_p(1), \qquad
n^{1/2}(\hat\beta_A - \beta) = I^{-1}(\beta)\, n^{-1/2}\sum_{i=1}^{n} Q_i^A + o_p(1),$$

where I(β) = E{−(∂²/∂β²) log P_β(Y|X)} = Var{S_β(Y_i|X_i)},

$$Q_i^I = \frac{\xi_i}{\pi_i}\, S_\beta(Y_i|X_i) - E\left\{\pi_i^{-2}\,\xi_i\, S_\beta(Y_i|X_i)\Big(\frac{\partial \pi(X_i,A_i,\psi_0)}{\partial \psi}\Big)^{\!\top}\right\} I_\psi^{-1} S_i^\psi,$$

and

$$Q_i^A = \frac{\xi_i}{\pi_i}\, S_\beta(Y_i|X_i) + \Big(1-\frac{\xi_i}{\pi_i}\Big) E_a\{S_\beta(Y|X_i)\,|\,X_i,A_i\}.$$

Both n^{1/2}(β̂_I − β) and n^{1/2}(β̂_A − β) are asymptotically normal with mean zero and covariance matrices I^{-1}(β) Var(Q_i^I) I^{-1}(β) and I^{-1}(β) Var(Q_i^A) I^{-1}(β), respectively. Further,

$$\mathrm{Var}(Q_i^I) = \mathrm{Var}(Q_i^A) + \mathrm{Var}(B_i + O_i) \qquad (15)$$

and

$$\mathrm{Var}(Q_i^A) = I(\beta) + \mathrm{Var}\Big[\Big(1-\frac{\xi_i}{\pi_i}\Big)\big\{S_\beta(Y_i|X_i) - E_a(S_\beta(Y|X_i)\,|\,X_i,A_i)\big\}\Big], \qquad (16)$$

where O_i = E{π_i^{-2} ξ_i S_β(Y_i|X_i) (∂π(X_i,A_i,ψ_0)/∂ψ)^⊤} I_ψ^{-1} S_i^ψ and B_i = (1 − ξ_i/π_i) E_a{S_β(Y|X_i) | X_i, A_i}.

Suppose that the validation probability π_i = P(ξ_i = 1 | X_i, A_i) depends only on (Z_i, A_i); that is, π_i = π_i^z = P(ξ_i = 1 | Z_i, A_i). Suppose that π̃_i is the MLE of π_i^z under the parametric family π(Z_i, A_i, ψ). Let β̂_{A1} be the estimator obtained by solving (14) where the augmentation term Ẽ{S_β(Y|X_i) | X_i, A_i} is a consistent parametric or nonparametric estimator of E{S_β(Y|X_i) | Z_i, A_i}, and let β̂_{A2} be the estimator obtained by solving (14) where Ẽ{S_β(Y|X_i) | X_i, A_i} is a consistent parametric or nonparametric estimator of E{S_β(Y|X_i) | X_i, A_i}. The following corollary presents the asymptotic results for these two AIPW estimators of β, one corresponding to augmentation based on a subset, (Z,A), of the observed variables and the other corresponding to augmentation based on the full set, (X,A), of the observed variables.

Corollary 1.

Suppose that the validation probability π_i = P(ξ_i = 1 | X_i, A_i) depends only on (Z_i, A_i). Under the conditions of Theorem 1,

$$n^{1/2}(\hat\beta_{A1} - \beta) \xrightarrow{\;D\;} N\big(0,\; I^{-1}(\beta) + I^{-1}(\beta)\,\Sigma_{A1}(\beta)\, I^{-1}(\beta)\big) \qquad (17)$$

and

$$n^{1/2}(\hat\beta_{A2} - \beta) \xrightarrow{\;D\;} N\big(0,\; I^{-1}(\beta) + I^{-1}(\beta)\,\Sigma_{A2}(\beta)\, I^{-1}(\beta)\big), \qquad (18)$$

where Σ_{A1}(β) = E[{(1−π_i^z)/π_i^z} Var{S_β(Y_i|X_i) | Z_i, A_i}] and Σ_{A2}(β) = E[{(1−π_i^z)/π_i^z} Var{S_β(Y_i|X_i) | X_i, A_i}]. The asymptotic variance of β̂_{A2} is smaller than that of β̂_{A1} if the covariates Z_i form a proper subset of X_i.

Suppose that (Z,A) are discrete, taking values (z,a) in a finite set. If the number of parameters in ψ equals the number of values ψ_{z,a} = P(ξ_i = 1 | Z_i = z, A_i = a) over all distinct pairs (z,a), then ψ = {ψ_{z,a}} and π(z,a,ψ) = ψ_{z,a}. Further, ∂π(z,a,ψ_0)/∂ψ can be viewed as a column vector with 1 in the position for ψ_{z,a} and 0 elsewhere. The information matrix I_ψ defined in (12) has the expression

$$I_\psi = \sum_{z,a} \rho(z,a)\,\frac{1}{\pi(z,a,\psi_0)\{1-\pi(z,a,\psi_0)\}}\,\frac{\partial \pi(z,a,\psi_0)}{\partial \psi}\left\{\frac{\partial \pi(z,a,\psi_0)}{\partial \psi}\right\}^{\!\top},$$

where ρ(z,a) = P(Z_i = z, A_i = a). It follows that I_ψ is a diagonal matrix, and its inverse is also diagonal. The MLE ψ̂_{z,a} = n_V(z,a)/n(z,a) is in fact the nonparametric estimator of ψ_{z,a} based on the proportion of validated subjects in the category specified by (z,a). Equation (11) can then be expressed as

$$\hat\psi_{z,a} - \pi(z,a,\psi_0) = n^{-1}\,\frac{n_V(z,a) - n(z,a)\,\pi(z,a,\psi_0)}{\rho(z,a)} + o_p(n^{-1/2}),$$

for each (z,a).

By Theorem 1, the possible efficiency gain of the AIPW estimator over the IPW estimator is shown through equation (15): the AIPW estimator is more efficient unless Var(B_i + O_i) = 0. In particular, from the proof of Theorem 1, we have

$$n^{-1/2}\, U_A = n^{-1/2}\sum_{i=1}^{n} Q_i^A + o_p(1), \qquad (19)$$

$$n^{-1/2}\, U_I = n^{-1/2}\sum_{i=1}^{n} Q_i^A - n^{-1/2}\sum_{i=1}^{n} (B_i + O_i) + o_p(1), \qquad (20)$$

where B_i and O_i are defined following (16). The following corollary analyzes the term n^{-1/2} Σ_{i=1}^{n} (B_i + O_i) when (Z_i, A_i) are discrete, to show how efficiency may be gained by the AIPW estimator over the IPW estimator.

Corollary 2.

Under the conditions of Theorem 1, suppose that X = (Z, Z^c) and that (Z,A) are discrete, taking values (z,a) in a finite set. Suppose that the validation probability π_i = P(ξ_i = 1 | X_i, A_i) depends only on (Z_i, A_i) and that ψ_{z,a} = P(ξ_i = 1 | Z_i = z, A_i = a) is estimated nonparametrically by ψ̂_{z,a} = n_V(z,a)/n(z,a). Then

$$n^{-1/2}\sum_{i=1}^{n}(O_i + B_i) = \sum_{z,a}\, n^{-1/2}\sum_{j=1}^{n} \frac{\xi_j - \pi(z,a,\psi_0)}{\pi(z,a,\psi_0)}\, I(Z_j=z, A_j=a)\Big[E\{S_\beta(Y_j|X_j)\,|\,Z_j=z, A_j=a\} - E\{S_\beta(Y_j|X_j)\,|\,Z_j=z, Z_j^c, A_j=a\}\Big]. \qquad (21)$$

By Corollary 2, (19) and (20), β̂_A is more efficient than β̂_I unless Var{S_β(Y_j|X_j) | Z_j = z, A_j = a} = 0 for all (z,a) with P(Z_i = z, A_i = a) ≠ 0. If X = Z and the validation probability π_i = P(ξ_i = 1 | X_i, A_i) is estimated nonparametrically by the cell frequencies ψ̂_{z,a} = n_V(z,a)/n(z,a), then β̂_A and β̂_I are asymptotically equivalent.
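The variance decomposition (15) can be checked numerically. The small simulation below is our own construction, not from the paper: it estimates a mean E(Y) with score S_β(Y) = Y − β, treats the selection probabilities as known (so the O_i term vanishes and the gain comes from Var(B_i) alone), and augments with the nonparametric cell means Ê(Y | Z, A). With an informative auxiliary A, the AIPW estimator is visibly less variable than the IPW estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

def one_rep(n=2000):
    z = rng.integers(0, 2, n)                  # binary covariate
    y = z + rng.normal(size=n)                 # outcome, E(Y) = 0.5
    a = (y > 0.5).astype(int)                  # informative auxiliary outcome
    pi = 0.3 + 0.4 * a                         # known validation probability
    xi = (rng.random(n) < pi).astype(int)
    # nonparametric E_hat(Y | Z, A) from the validation sample
    m = np.empty(n)
    for zv in (0, 1):
        for av in (0, 1):
            cell = (z == zv) & (a == av)
            m[cell] = y[cell & (xi == 1)].mean()
    y0 = np.where(xi == 1, y, 0.0)             # Y is unused where xi = 0
    b_ipw = np.sum(xi * y0 / pi) / np.sum(xi / pi)
    b_aipw = np.mean(xi * y0 / pi + (1 - xi / pi) * m)
    return b_ipw, b_aipw

est = np.array([one_rep() for _ in range(400)])
var_ipw, var_aipw = est.var(axis=0)
print(f"IPW var {var_ipw:.2e}, AIPW var {var_aipw:.2e}")
```

Both estimators are unbiased here; only their variances differ, in line with (15).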

Remark. Consider the estimators of β obtained from the estimating equation (1) with the different choices of W_i given in (2) to (8). If (Z,A) are discrete and the validation probability π_i^z = P(ξ_i = 1 | Z_i, A_i) is estimated nonparametrically by the cell frequencies, then by Theorem 1 and Corollary 2, β̂_{A1} and β̂_{I1} have the same asymptotic normal distribution as long as Ê{S_β(Y|X_i) | Z_i, A_i} is a consistent estimator of E{S_β(Y|X_i) | Z_i, A_i}. But β̂_{A2} is more efficient than β̂_{I1} as long as Ê{S_β(Y|X_i) | X_i, A_i} is a consistent estimator of E{S_β(Y|X_i) | X_i, A_i}, since Var(B_i + O_i) is not zero by (21). These results are not affected by whether E{S_β(Y|X_i) | Z_i, A_i} and E{S_β(Y|X_i) | X_i, A_i} are estimated nonparametrically or through parametric models. In addition, by Theorem 1 and Corollaries 1 and 2, β̂_{A3} and β̂_{I2} have the same asymptotic normal distribution as long as Ê{S_β(Y|X_i) | X_i, A_i} is a consistent estimator of E{S_β(Y|X_i) | X_i, A_i}.

Poisson regression using the automated data with missing outcomes

Many medical and public health data sources are available only in aggregated format, where the variables of interest are aggregated counts that are not available at the individual level, while most existing statistical methods for missing data require individual-level observations. Applying the missing data methods presented in the previous sections, we derive estimation procedures for the Poisson regression model with missing outcomes, based on auxiliary outcomes and a validated sample for true outcomes. Further, we show that, by stratifying on a set of discrete variables, the proposed statistical procedure can be formulated so that it can be used to analyze automated records that contain no individual-level observations, only summarized information at categorical levels.

Let Y represent the number of events occurring in the time-exposure interval [0,T] and Z the covariates. We consider the Poisson regression model

$$P(Y=y\,|\,Z,T) = \exp\{-T\exp(\beta' Z)\}\,\{T\exp(\beta' Z)\}^{y}\big/\,y!, \qquad (22)$$

where Z is a vector of k+1 covariates and β a vector of k+1 regression coefficients. In practice, the exact number of true events may not be available for all subjects. We may instead have the number of possible events, namely the auxiliary events. For example, in the study of vaccine adverse events associated with childhood immunizations, the number of auxiliary events A for MAARI (medically attended acute respiratory illness) is collected based on ICD-9 codes through hospital records. Further diagnosis may indicate that some of these events are false events. The number of true vaccine adverse events, Y, can only be confirmed for the subjects in the validation set V. Suppose that Z is the vaccination status, 1 for the vaccinated and 0 for the unvaccinated. Then, under Poisson regression, exp(β) is the relative rate of event occurrence per unit of exposure time for the exposed versus the unexposed. We assume that the number of automated events A can be expressed as A = Y + W, where W is the number of false events, independent of Y conditional on (Z,T). Suppose that W follows the Poisson regression model

$$P(W=w\,|\,Z,T) = \exp\{-T\exp(\gamma' Z)\}\,\{T\exp(\gamma' Z)\}^{w}\big/\,w!, \qquad (23)$$

where γ = (a_0, a_1, γ_1, …, γ_{k−1}).
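To make the setup concrete, models (22) and (23) are easy to simulate. The snippet below (with illustrative parameter values of our own choosing) generates true events Y, false events W, and the automated count A = Y + W:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.integers(0, 2, n)                  # vaccination status Z
t = rng.uniform(0.5, 2.0, n)               # exposure time T
X = np.column_stack([np.ones(n), z])       # design (intercept, Z)
beta = np.array([-1.0, -0.7])              # true-event model (22)
gamma = np.array([-1.5, 0.0])              # false-event model (23)

y = rng.poisson(t * np.exp(X @ beta))      # true events Y
w = rng.poisson(t * np.exp(X @ gamma))     # false events W, independent of Y given (Z, T)
a = y + w                                  # automated (auxiliary) count A
```

In a real study only A is recorded for everyone, and Y is recovered only for the validated subsample.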

We apply the missing data methods introduced above to model (22). The variables (Z_i, T_i, Y_i, A_i) are observed for the validation sample V and (Z_i, T_i, A_i) are observed for the nonvalidation sample V̄. While the covariate Z can be treated as categorical, it is natural to treat the exposure time T as a continuous variable. We assume that the validation probability depends only on the stratification of (Z,A); that is, the validation sample is a stratified random sample over the categories defined by (Z,A). Of the estimators discussed in Section 2, there are then only two distinct estimators, β̂_{I1} and β̂_{A2}. We show in Section 4.3 that the proposed method can be formulated so that it can be used to analyze automated records with missing outcomes. First we derive the explicit estimation procedures for β̂_{I1} and β̂_{A2} and their variance estimators under model (22).

4.1 Inverse probability weighting estimation

We adopt the notation introduced earlier. In particular, let π_i^z = P(ξ_i = 1 | Z_i, A_i) and π̂_i^z = n_V(Z_i, A_i)/n(Z_i, A_i). Let X = (Z,T) and X_i = (Z_i, T_i), to be consistent with the earlier notation. The score function for subject i under model (22) is S_β(Y_i|X_i) = Z_i{Y_i − T_i exp(β′Z_i)}. The estimator β̂_{I1} is obtained by solving Σ_{i=1}^{n} (ξ_i/π̂_i^z) S_β(Y_i|X_i) = 0. By Corollary 1, √n(β̂_{I1} − β) converges in distribution to a normal distribution with mean zero and variance matrix I^{-1}(β) + I^{-1}(β) Σ_{A1}(β) I^{-1}(β), where Σ_{A1}(β) = E[{(1−π_i^z)/π_i^z} Var{S_β(Y_i|X_i) | Z_i, A_i}].

The information matrix I(β) = E{Z_i Z_i′ T_i exp(β′Z_i)} = Σ_z P(Z_i = z) z z′ exp(β′z) E(T_i | Z_i = z) can be estimated by Î(β̂), obtained by replacing β with β̂_{I1}, P(Z_i = z) with the sample proportion of the event {Z_i = z}, and E(T_i | Z_i = z) with the sample average exposure time for those with covariates Z_i = z. The matrix Σ_{A1}(β) can be estimated by

$$\hat\Sigma_{A1}(\hat\beta) = \sum_{a,z} \hat\rho(a,z)\,\frac{1-\hat\rho_v(a,z)}{\hat\rho_v(a,z)}\,\widehat{\mathrm{Var}}\{S_\beta(Y|X)\,|\,A=a, Z=z\}, \qquad (24)$$

where ρ̂(a,z) is the estimator of P{A_i = a, Z_i = z}, ρ̂_v(a,z) is the estimator of P{i ∈ V | A_i = a, Z_i = z}, and V̂ar{S_β(Y|X) | A = a, Z = z} is an estimator of Var{S_β(Y_i|X_i) | Z_i, A_i}, derived in the following.

Since A is observed for all subjects, W is determined whenever Y is known, and undetermined otherwise. The IPW estimator γ̂_{I1} of γ is obtained by solving the equation Σ_{i=1}^{n} (ξ_i/π̂_i^z) S_γ(W_i|X_i) = 0, where S_γ(W_i|X_i) = Z_i{W_i − T_i exp(γ′Z_i)}. The conditional distribution of Y given A = a, T, and Z = z is Binomial(a, p_z), where p_z = exp(β′z)/{exp(β′z) + exp(γ′z)}. Since this conditional distribution does not depend on T, the outcome Y and T are conditionally independent given (A,Z). Therefore, Var{S_β(Y|X) | A, Z} = Z Z′[Var(Y | A, Z) + exp(2β′Z) Var(T | A, Z)]. The variance Var(Y | A = a, Z = z) can be estimated by a p̂_z(1−p̂_z), where p̂_z = exp(β̂′z)/{exp(β̂′z) + exp(γ̂′z)}, and Var(T | A = a, Z = z) = E(T² | A = a, Z = z) − {E(T | A = a, Z = z)}² can be estimated nonparametrically using the first and second sample moments within each category with A = a and Z = z.
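For reference, a minimal Newton-Raphson solver for the IPW estimating equation Σ_i (ξ_i/π̂_i^z) Z_i{Y_i − T_i exp(β′Z_i)} = 0 might look as follows. This is a sketch with our own function names; the validation probabilities are simply passed in, however they were estimated:

```python
import numpy as np

def ipw_poisson(X, y, t, xi, pi_hat, n_iter=25):
    """Solve sum_i (xi_i/pi_i) X_i (y_i - t_i exp(X_i' beta)) = 0 by Newton's
    method.  Entries of y with xi == 0 are ignored (zero weight)."""
    beta = np.zeros(X.shape[1])
    w = xi / pi_hat
    yv = np.where(xi == 1, y, 0.0)
    for _ in range(n_iter):
        mu = t * np.exp(X @ beta)
        U = X.T @ (w * (yv - mu))              # weighted score
        J = X.T @ ((w * mu)[:, None] * X)      # minus the Jacobian of U
        beta = beta + np.linalg.solve(J, U)
    return beta
```

The same routine applied to W_i in place of Y_i gives the IPW estimator γ̂_{I1} of the false-event model.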

4.2 Augmented inverse probability weighted estimation

Under the assumption that W follows the Poisson regression model (23) and is independent of Y conditional on (Z,T),

$$E\{S_\beta(Y|X)\,|\,Z,T,A\} = A Z\,\frac{\exp(\beta' Z)}{\exp(\beta' Z) + \exp(\gamma' Z)} - T Z \exp(\beta' Z).$$

Let Ê{S_β(Y|X_i) | X_i, A_i} be the estimator of E{S_β(Y|X_i) | X_i, A_i} for a given β, obtained by substituting the estimator γ̂_{I1} of Section 4.1 for γ. The estimator β̂_{A2} is then obtained by solving

$$\sum_{i=1}^{n}\left[\frac{\xi_i}{\hat\pi_i^z}\, S_\beta(Y_i|X_i) + \Big(1-\frac{\xi_i}{\hat\pi_i^z}\Big)\hat E\{S_\beta(Y|X_i)\,|\,X_i,A_i\}\right] = 0. \qquad (25)$$

By Corollary 1, √n(β̂_{A2} − β) converges in distribution to a normal distribution with mean zero and variance matrix I^{-1}(β) + I^{-1}(β) Σ_{A2}(β) I^{-1}(β), where Σ_{A2}(β) = E[{(1−π_i^z)/π_i^z} Var{S_β(Y_i|X_i) | X_i, A_i}]. The information matrix I(β) can be estimated by Î(β̂) as in Section 4.1. The conditional variance Var{S_β(Y|X) | Z = z, T, A = a} = a p_z(1−p_z) z^{⊗2} can be estimated by a p̂_z(1−p̂_z) z^{⊗2}, where p̂_z = exp(β̂′z)/{exp(β̂′z) + exp(γ̂′z)}. It follows that Σ_{A2}(β) can be consistently estimated by

$$\hat\Sigma_{A2}(\beta) = \sum_{a,z} \hat\rho(a,z)\,\frac{1-\hat\rho_v(a,z)}{\hat\rho_v(a,z)}\, a\, \hat p_z (1-\hat p_z)\, z^{\otimes 2},$$

where ρ̂(a,z) is the estimator of P{A_i = a, Z_i = z} and ρ̂_v(a,z) is the estimator of P{i ∈ V | A_i = a, Z_i = z}.
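A solver for (25) can be sketched along the same lines (our own code, not the paper's). It uses the model-based augmentation E{S_β(Y|X) | Z, T, A} = Z{A p − T exp(β′Z)} with p = exp(β′Z)/{exp(β′Z) + exp(γ′Z)}; the false-event coefficients γ are passed in, e.g. the estimate γ̂_{I1} from Section 4.1:

```python
import numpy as np

def aipw_poisson(X, y, t, a, xi, pi_hat, gamma, n_iter=50):
    """Solve the AIPW equation (25) by Fisher-type scoring.  The slope
    matrix uses only the dominant term sum_i mu_i X_i X_i', which is the
    expectation of the exact Jacobian and is always positive definite."""
    beta = np.zeros(X.shape[1])
    w = xi / pi_hat
    yv = np.where(xi == 1, y, 0.0)             # y is unused where xi = 0
    for _ in range(n_iter):
        eb = np.exp(X @ beta)
        mu = t * eb
        p = eb / (eb + np.exp(X @ gamma))      # P(event is true | automated)
        s = yv - mu                            # score factor, validated subjects
        e = a * p - mu                         # augmentation factor E(Y|Z,T,A) - mu
        U = X.T @ (w * s + (1 - w) * e)
        J = X.T @ (mu[:, None] * X)
        beta = beta + np.linalg.solve(J, U)
    return beta
```

The scoring matrix J is chosen for numerical stability; any consistent slope matrix yields the same root of (25).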

4.3 Estimation using the automated data

This section formulates the missing data estimation procedure for model (22) based on the automated (summarized) information at the categorical levels defined by the relevant covariates of the model. In particular, we show that β̂_{I1} and β̂_{A2} and their variance estimators can be computed from the automated data at categorical levels.

In many applications it is convenient to write Z = (1, Z^{(1)}, Z^{(2)}) and β = (b_0, b_1, θ), where Z^{(1)} is the treatment indicator (Z^{(1)} = 1 for the exposed group and Z^{(1)} = 0 for the unexposed group), Z^{(2)} = (η_1,…,η_{k−1}) contains the other covariates, and θ = (θ_1,…,θ_{k−1}). For applications involving automated data records, we let η_1,…,η_{k−1} be k−1 dummy variables representing k groups. Without loss of generality, we choose the k-th group as the reference group: η_1 = 1, η_2 = 0, …, η_{k−1} = 0 for group 1; η_1 = 0, η_2 = 1, …, η_{k−1} = 0 for group 2; and so on, with η_1 = 0, η_2 = 0, …, η_{k−1} = 0 for group k. Thus each value of Z denotes a category, which can be represented by (l,m) for l = 0,1 and m = 1,…,k. This correspondence is denoted by Z(l,m) for convenience. For l = 0,1 and m = 1,…,k−1, category (l,m) is defined by Z with Z^{(1)} = l, η_m = 1 and η_j = 0 for j ≠ m, and category (l,k) is defined by Z^{(1)} = l and η_j = 0 for j = 1,…,k−1. Under model (22), the expected number of events for a subject in category (l,m) with time-exposure interval [0,T] is T exp(b_{lm}), for l = 0,1 and m = 1,…,k, where b_{1k} = b_0 + b_1, b_{0k} = b_0, b_{1m} = b_0 + b_1 + θ_m and b_{0m} = b_0 + θ_m for 1 ≤ m ≤ k−1. The parameter b_1 represents the log-relative rate of the exposed versus the unexposed, adjusted for the other factors.

The following notation is used to show that the estimators of β and their variance estimators can be calculated using the automated information at the categorical levels. Let V(a,l,m) denote the set of subjects in V with A = a and Z(l,m); V(l,m) the set of subjects in V with Z(l,m); n_{alm} the number of subjects with A = a and Z(l,m); n^v_{alm} the number of subjects in V(a,l,m); n^v_{lm} the number of subjects in V(l,m); λ_{alm} = n_{alm}/n^v_{alm}; y_{alm} the number of events for subjects in V(a,l,m); y_{lm} the number of events for subjects in V(l,m); t_{alm} the total exposure time for subjects with A = a and Z(l,m); t_{2,alm} the total squared exposure time for subjects with A = a and Z(l,m); t_{lm} the total exposure time for subjects with Z(l,m); and α_{lm} the number of automated events for subjects with Z(l,m).

Estimation with β̂_I1 using the automated data. The validation probability π_i^z can be estimated by 1/λ_alm when A_i=a and Z_i is in category (l,m). It can be shown that the estimating equation for β̂_I1 is equivalent to the following nonlinear equations in {b_lm : l=0,1, m=1,…,k},
$$\sum_{m=1}^{k}\left\{\sum_{a} y_{alm}\lambda_{alm} - e^{b_{lm}}\sum_{a} t_{alm}\lambda_{alm}\right\} = 0,$$
$$\sum_{l=0}^{1}\left\{\sum_{a} y_{alm}\lambda_{alm} - e^{b_{lm}}\sum_{a} t_{alm}\lambda_{alm}\right\} = 0,$$

for l=0,1 and m=1,…,k−1, respectively. When k>1, the equations have no explicit solution.
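When k>1 these weighted score equations must be solved iteratively. The sketch below is a hypothetical illustration, not code from the paper: the function name, cell layout and toy numbers are ours. It applies Newton-Raphson to the weighted Poisson score under the parametrization b_lm = b_0 + b_1·l + θ_1·1{m=1} for k=2:

```python
import numpy as np

def fit_ipw_poisson(cells, n_iter=50, tol=1e-10):
    """Newton-Raphson for the IPW-weighted Poisson score equations.

    cells: list of (l, m1, Yw, Tw), where m1 = 1{m=1}, and
    Yw = sum_a y_alm*lambda_alm, Tw = sum_a t_alm*lambda_alm are the
    validation-weighted event and exposure totals for cell (l, m).
    """
    X = np.array([[1.0, l, m1] for (l, m1, _, _) in cells])  # design: (1, l, 1{m=1})
    Yw = np.array([c[2] for c in cells], dtype=float)
    Tw = np.array([c[3] for c in cells], dtype=float)
    beta = np.zeros(3)                        # (b0, b1, theta1)
    for _ in range(n_iter):
        mu = np.exp(X @ beta) * Tw            # fitted weighted event totals
        score = X.T @ (Yw - mu)               # weighted score equations
        info = X.T @ (X * mu[:, None])        # observed information
        step = np.linalg.solve(info, score)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

With weighted cell totals generated exactly from the model, the solver recovers the parameters, which is a convenient sanity check for an implementation.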

In the following, we show that the asymptotic variance of β ̂ I 1 can be consistently estimated by only using the automated information at categorical levels. The information matrix is a (k+1)×(k+1) symmetric matrix given by
$$I(\beta) = E\left(Z_i Z_i'\, T_i\, e^{\beta' Z_i}\right) = \begin{pmatrix} \sum_{l,m} q_{lm} & \sum_{m=1}^k q_{1m} & q_{11}+q_{01} & \cdots & q_{1r}+q_{0r} \\ \sum_{m=1}^k q_{1m} & \sum_{m=1}^k q_{1m} & q_{11} & \cdots & q_{1r} \\ q_{11}+q_{01} & q_{11} & q_{11}+q_{01} & & 0 \\ \vdots & \vdots & & \ddots & \\ q_{1r}+q_{0r} & q_{1r} & 0 & & q_{1r}+q_{0r} \end{pmatrix},$$

where r=k−1 and q_lm = E(T_i e^{b_lm} I{subject i is in category (l,m)}). A consistent estimator Î(β̂) of I(β) is obtained by replacing q_lm with exp(b̂_lm) t_lm / n.
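Since I(β) is simply the sum of the outer products q_lm Z_(l,m) Z'_(l,m) over the 2k categories, it can be assembled directly from the cell quantities. A minimal sketch, assuming the dummy coding defined above (the helper name is ours):

```python
import numpy as np

def info_matrix(q):
    """Build I(beta) = sum_{l,m} q_lm * z_lm z_lm' for the dummy coding
    z_lm = (1, l, e_m), where e_m is the (k-1)-dim indicator of group m
    (the zero vector for the reference group m = k).
    q is a dict {(l, m): q_lm} with l in {0,1} and m in 1..k."""
    k = max(m for (_, m) in q)
    p = k + 1
    I = np.zeros((p, p))
    for (l, m), qlm in q.items():
        z = np.zeros(p)
        z[0] = 1.0          # intercept
        z[1] = l            # treatment indicator
        if m < k:
            z[1 + m] = 1.0  # group dummy eta_m
        I += qlm * np.outer(z, z)
    return I
```

The resulting matrix reproduces the block pattern displayed above (for example, the (1,1) entry is the sum of all q_lm and the diagonal dummy entries are q_{1m}+q_{0m}).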

Under model (23), the expected number of false events for a subject in category (l,m) with time-exposure interval [0,T] is T exp(d_lm), for l=0,1 and m=1,…,k, where d_1k=a_0+a_1, d_0k=a_0, d_1m=a_0+a_1+γ_m and d_0m=a_0+γ_m for 1≤m≤k−1. The conditional distribution of Y given A=a, T, and Z(l,m) is Binomial(a, p_lm), where p_lm = exp(b_lm)/{exp(b_lm)+exp(d_lm)} for a≥1. Then Var(Y|A=a,Z(l,m)) can be estimated by a p̂_lm(1−p̂_lm), where p̂_lm = e^{b̂_lm}/(e^{b̂_lm}+e^{d̂_lm}), and Var(T|A=a,Z(l,m)) can be estimated by ν_{a,l,m} = t_{2,alm}/n_alm − (t_alm/n_alm)². By (24) and the discussion that follows, Σ_{A1}(β) can be estimated by
$$\hat\Sigma_{A1}(\hat\beta) = \sum_{a,l,m} \hat\rho(a,l,m)\,\frac{1-\hat\rho_v(a,l,m)}{\hat\rho_v(a,l,m)}\, G_{lm}\left\{a\,\hat p_{lm}(1-\hat p_{lm}) + \nu_{a,l,m}\right\},$$
(26)

where ρ̂(a,l,m) = n_alm/n, ρ̂_v(a,l,m) = n_alm^v/n_alm, and G_lm is the value of G_i = Z_i Z_i' when subject i belongs to category (l,m). Hence the covariance matrix of β̂_I1 can be estimated by Î^{-1}(β̂) + Î^{-1}(β̂) Σ̂_{A1}(β̂) Î^{-1}(β̂) using the automated data.

Estimation with β̂_A2 using the automated data. The estimating equations (25) are equivalent to the following nonlinear equations in {b_lm : l=0,1, m=1,…,k},
$$\sum_{m=1}^{k}\left\{\sum_{a} y_{alm}\lambda_{alm} - e^{b_{lm}}\, t_{lm} + \frac{e^{b_{lm}}}{e^{b_{lm}}+e^{\hat d_{lm}}}\left(\alpha_{lm} - \sum_{a} a\, n_{alm}\lambda_{alm}\right)\right\} = 0,$$
$$\sum_{l=0}^{1}\left\{\sum_{a} y_{alm}\lambda_{alm} - e^{b_{lm}}\, t_{lm} + \frac{e^{b_{lm}}}{e^{b_{lm}}+e^{\hat d_{lm}}}\left(\alpha_{lm} - \sum_{a} a\, n_{alm}\lambda_{alm}\right)\right\} = 0,$$

for l=0,1 and m=1,…,k−1, respectively.

Since Var{S_β(Y|X) | Z(l,m), T, A=a} = a p_lm(1−p_lm) G_lm, Σ_{A2}(β) can be consistently estimated by
$$\hat\Sigma_{A2}(\hat\beta) = \sum_{a,l,m} \hat\rho(a,l,m)\,\frac{1-\hat\rho_v(a,l,m)}{\hat\rho_v(a,l,m)}\, a\,\hat p_{lm}(1-\hat p_{lm})\, G_{lm}.$$

Hence the covariance matrix of β̂_A2 can be estimated by Î^{-1}(β̂) + Î^{-1}(β̂) Σ̂_{A2}(β̂) Î^{-1}(β̂) using the automated data.

Remark. In the special case where ρ(a,l,m)≈0 for a≥2, a much simpler formula for the variance estimator of the log relative rate can be derived; for example, in vaccine safety studies the adverse-event rate is very small. Let
$$w_m = \frac{\dfrac{\alpha_{0m}\,\alpha_{1m}\, y_{0m}\, y_{1m}}{\alpha_{0m}\, y_{0m}\, n_{1m}^v + \alpha_{1m}\, y_{1m}\, n_{0m}^v}}{\displaystyle\sum_{m=1}^{k} \frac{\alpha_{0m}\,\alpha_{1m}\, y_{0m}\, y_{1m}}{\alpha_{0m}\, y_{0m}\, n_{1m}^v + \alpha_{1m}\, y_{1m}\, n_{0m}^v}}.$$
Then an estimate of the variance of b̂_1 is given by
$$\widehat{\mathrm{Var}}(\hat b_1) = \sum_{m=1}^{k} w_m^2\left(\frac{1}{y_{1m}} - \frac{1}{n_{1m}^v} + \frac{1}{\alpha_{1m}} + \frac{1}{y_{0m}} - \frac{1}{n_{0m}^v} + \frac{1}{\alpha_{0m}}\right),$$
(27)

which is a weighted sum of the estimated variances of the estimated log relative rates of the exposed versus the unexposed over the k groups. The details of the derivation are given in Appendix B.
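Formula (27) can be evaluated directly from category-level summaries. The sketch below is our own illustration (the function and field names are hypothetical, and the weights are taken in the normalized form shown above); it assumes every count is positive:

```python
def var_log_relative_rate(strata):
    """Variance estimate (27) for the log relative rate b1_hat from
    category-level summaries. Each stratum m supplies:
      y1, y0   validated true events (exposed / unexposed),
      n1v, n0v validation-sample sizes,
      a1, a0   automated (auxiliary) event counts."""
    # unnormalized weights, then normalized so they sum to one
    raw = [s["a0"] * s["a1"] * s["y0"] * s["y1"]
           / (s["a0"] * s["y0"] * s["n1v"] + s["a1"] * s["y1"] * s["n0v"])
           for s in strata]
    total = sum(raw)
    w = [r / total for r in raw]
    # weighted sum of the per-stratum variance contributions in (27)
    return sum(
        wm ** 2 * (1.0 / s["y1"] - 1.0 / s["n1v"] + 1.0 / s["a1"]
                   + 1.0 / s["y0"] - 1.0 / s["n0v"] + 1.0 / s["a0"])
        for wm, s in zip(w, strata))
```

With a single stratum the weight is 1 and the formula reduces to the familiar sum of reciprocal counts; each bracket is nonnegative because y ≤ n^v within a stratum.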

A simulation study

We conduct a simulation study to examine the finite-sample performance of the IPW estimator β̂_I1 and the AIPW estimator β̂_A2 under the Poisson regression model (22). The covariates Z1 and Z2 are generated from Bernoulli distributions with success probabilities 0.4 and 0.5, respectively. The exposure time T is generated from a uniform distribution on [0, 10]. Given Z=(Z1,Z2) and T, the outcome variable Y follows a Poisson distribution with mean T exp(b0+b1Z1+θZ2), where b0=−0.5, b1=−0.8 and θ=−0.6, and W follows a Poisson distribution with mean T exp(a0+a1Z1+γZ2), where a0=−1.3, a1=−1.1 and γ=−1. We set A=Y+W.

Four models for the validation sample are considered. Under Model 1, the validation sample is a simple random sample with probability π_i=0.4; Model 2 takes π_i=0.6. In Model 3, the validation probability depends only on A through the logistic regression model logit{π_i(X,A)} = A − 0.5, where X=(Z,T). In Model 4, the validation probability depends on A and Z1 through logit{π_i(X,A)} = A − Z1 − 0.5.
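This design can be reproduced in a few lines. The sketch below is our own code (the seed and function name are arbitrary) and generates one replicate under any of the four validation models:

```python
import numpy as np

rng = np.random.default_rng(23)

def simulate(n, model):
    """One replicate of the simulation design: Poisson outcome Y, false-event
    count W, auxiliary outcome A = Y + W, and validation indicator xi."""
    z1 = rng.binomial(1, 0.4, n)
    z2 = rng.binomial(1, 0.5, n)
    t = rng.uniform(0, 10, n)
    y = rng.poisson(t * np.exp(-0.5 - 0.8 * z1 - 0.6 * z2))
    w = rng.poisson(t * np.exp(-1.3 - 1.1 * z1 - 1.0 * z2))
    a = y + w
    if model == 1:
        p = np.full(n, 0.4)                      # simple random validation
    elif model == 2:
        p = np.full(n, 0.6)
    elif model == 3:
        p = 1 / (1 + np.exp(-(a - 0.5)))         # logit pi = A - 0.5
    else:
        p = 1 / (1 + np.exp(-(a - z1 - 0.5)))    # logit pi = A - Z1 - 0.5
    xi = rng.binomial(1, p)                      # xi = 1: Y observed (validated)
    return z1, z2, t, y, a, xi
```

Each estimator is then computed on the replicate, and the whole procedure is repeated (1000 times in the tables below) to obtain bias, SSE, ESE and CP.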

Tables 1 and 2 present the simulation results for n=50, 100, 300, 500 and 800; each entry is based on 1000 simulation runs. The tables summarize the bias (Bias), the empirical standard error (SSE), the average of the estimated standard errors (ESE), and the empirical coverage probability (CP) of 95% confidence intervals of β̂_I1 and β̂_A2 for β=(b0,b1,θ). We also compare the performance of β̂_I1 and β̂_A2 with the complete-case (CC) estimator β̂_C, obtained by simply deleting subjects with missing values of Y_i. As a gold standard, we present the estimation results for the full data, where all values of Y_i are observed. Table 1 presents the results under Models 1 and 2, and Table 2 shows the results under Models 3 and 4.
Table 1

Simulation comparison of the IPW estimator β̂_I1, the AIPW estimator β̂_A2 and the complete-case (CC) estimator β̂_C under various sample sizes and selection probabilities

                         b0                              b1                               θ
  n           Bias    SSE    ESE    CP      Bias     SSE    ESE    CP      Bias     SSE    ESE    CP

Model 1: π_i = .4
  50   IPW   -.0415  .3561  .1839  .851    -.2175  1.6737  .3354  .864    -.1610  1.2201  .2962  .847
       AIPW  -.0110  .2213  .1664  .890    -.0062   .3099  .2873  .943    -.0186   .3076  .2551  .929
       CC    -.0246  .3398  .2738  .938    -.1515  1.6082  .4730  .968    -.1038  1.1709  .4187  .959
 100   IPW   -.0650  .1815  .1404  .870    -.0548   .3120  .2458  .891    -.0249   .2653  .2161  .898
       AIPW  -.0094  .1376  .1187  .914    -.0024   .2284  .1988  .926     .0027   .1994  .1780  .925
       CC    -.0240  .1728  .1685  .948    -.0086   .3086  .2981  .960     .0031   .2556  .2581  .946
 300   IPW   -.0368  .0936  .0874  .931    -.0209   .1535  .1460  .946    -.0022   .1419  .1286  .929
       AIPW  -.0027  .0732  .0712  .946    -.0028   .1233  .1165  .940     .0005   .1130  .1046  .938
       CC    -.0092  .0919  .0935  .960    -.0012   .1627  .1634  .952     .0040   .1438  .1432  .952
 500   IPW   -.0183  .0698  .0671  .938    -.0172   .1159  .1128  .943    -.0083   .1069  .0993  .933
       AIPW   .0022  .0566  .0550  .936    -.0022   .0956  .0902  .943    -.0068   .0867  .0811  .930
       CC     .0006  .0704  .0716  .949    -.0059   .1268  .1255  .949    -.0046   .1135  .1103  .942
 800   IPW   -.0126  .0538  .0527  .942    -.0134   .0862  .0889  .950    -.0029   .0759  .0779  .947
       AIPW   .0011  .0433  .0435  .952    -.0047   .0720  .0713  .956    -.0020   .0638  .0640  .951
       CC     .0002  .0562  .0565  .948    -.0051   .0974  .0990  .958    -.0013   .0844  .0869  .958

Model 2: π_i = .6
  50   IPW   -.0316  .2079  .1714  .926    -.0934   .8426  .3112  .944    -.0563   .3320  .2690  .937
       AIPW  -.0072  .1723  .1591  .941    -.0105   .2893  .2772  .950    -.0172   .2653  .2440  .948
       CC    -.0126  .1973  .1949  .959    -.0594   .8369  .3512  .967    -.0278   .3213  .3044  .959
 100   IPW   -.0366  .1399  .1259  .926    -.0420   .2363  .2192  .944    -.0100   .2103  .1911  .925
       AIPW  -.0121  .1206  .1133  .941    -.0107   .2069  .1921  .944     .0078   .1764  .1700  .940
       CC    -.0142  .1370  .1345  .947    -.0216   .2379  .2370  .961     .0030   .2103  .2072  .949
 300   IPW   -.0138  .0742  .0728  .944    -.0194   .1267  .1250  .957    -.0049   .1064  .1096  .964
       AIPW  -.0030  .0650  .0651  .948    -.0044   .1136  .1093  .949     .0005   .0960  .0974  .956
       CC    -.0017  .0763  .0759  .946    -.0118   .1345  .1328  .951    -.0035   .1147  .1169  .957
 500   IPW   -.0069  .0571  .0555  .945    -.0096   .0946  .0965  .947    -.0094   .0866  .0844  .953
       AIPW   .0029  .0495  .0496  .942    -.0032   .0856  .0841  .947    -.0076   .0757  .0749  .955
       CC     .0013  .0577  .0581  .947    -.0034   .1024  .1019  .949    -.0086   .0906  .0899  .954
 800   IPW   -.0072  .0437  .0438  .954    -.0069   .0754  .0763  .956    -.0025   .0692  .0664  .947
       AIPW  -.0011  .0401  .0393  .951    -.0019   .0693  .0665  .943    -.0015   .0626  .0590  .931
       CC    -.0012  .0452  .0460  .958    -.0026   .0805  .0806  .952    -.0024   .0723  .0709  .952

Full data: π_i = 1
  50         -.0079  .1510  .1466  .952    -.0182   .2691  .2618  .948    -.0104   .2264  .2263  .957
 100         -.0079  .1068  .1024  .943    -.0075   .1841  .1798  .948    -.0039   .1560  .1577  .949
 300         -.0019  .0596  .0583  .950    -.0081   .1032  .1023  .936     .0001   .0934  .0898  .932
 500          .0006  .0452  .0450  .951    -.0041   .0783  .0788  .950     .0014   .0656  .0693  .960
 800         -.0004  .0343  .0356  .951     .0025   .0612  .0622  .938    -.0006   .0532  .0547  .955

Table 2

Simulation comparison of the IPW estimator β̂_I1, the AIPW estimator β̂_A2 and the complete-case (CC) estimator β̂_C under various sample sizes and selection probabilities

                         b0                              b1                               θ
  n           Bias    SSE    ESE    CP      Bias     SSE    ESE    CP      Bias     SSE    ESE    CP

Model 3
  50   IPW    .0081  .1609  .1502  .949    -.0034   .5535  .2790  .954    -.0116   .2592  .2400  .963
       AIPW  -.0070  .1543  .1486  .950    -.0134   .2715  .2690  .958    -.0185   .2364  .2330  .955
       CC     .0230  .1529  .1504  .938     .0798   .5441  .2835  .940     .0648   .2367  .2414  .952
 100   IPW   -.0052  .1145  .1077  .948    -.0001   .2073  .2014  .959     .0030   .1789  .1724  .948
       AIPW  -.0124  .1077  .1041  .947    -.0085   .1869  .1840  .957     .0050   .1636  .1606  .948
       CC     .0221  .1074  .1054  .939     .1023   .1830  .1937  .924     .0828   .1625  .1664  .915
 300   IPW   -.0011  .0617  .0614  .951    -.0044   .1176  .1157  .952     .0019   .1009  .0993  .953
       AIPW  -.0023  .0582  .0588  .956    -.0051   .1056  .1036  .954     .0022   .0936  .0910  .944
       CC     .0295  .0577  .0596  .924     .1051   .1070  .1095  .824     .0823   .0925  .0946  .861
 500   IPW    .0018  .0451  .0473  .958    -.0037   .0853  .0895  .958    -.0069   .0765  .0767  .945
       AIPW   .0009  .0430  .0452  .957    -.0032   .0793  .0798  .947    -.0066   .0689  .0702  .951
       CC     .0317  .0429  .0459  .903     .1077   .0788  .0844  .763     .0737   .0704  .0730  .839
 800   IPW   -.0006  .0374  .0375  .951    -.0030   .0671  .0708  .962     .0004   .0617  .0605  .946
       AIPW  -.0003  .0362  .0358  .949    -.0031   .0623  .0631  .954    -.0012   .0577  .0554  .935
       CC     .0315  .0353  .0364  .863     .1065   .0630  .0667  .633     .0786   .0568  .0576  .721

Model 4
  50   IPW    .0053  .1627  .1504  .948     .0825   .3531  .2832  .913    -.0057   .2736  .2405  .948
       AIPW  -.0085  .1549  .1489  .950    -.0122   .2746  .2752  .966    -.0138   .2395  .2340  .961
       CC     .2295  .2640  .0855  .531     .4513   .3805  .1760  .517     .2954   .3285  .1409  .536
 100   IPW   -.0050  .1168  .1085  .939     .0481   .2350  .2130  .922     .0016   .1884  .1761  .940
       AIPW  -.0130  .1083  .1043  .943    -.0067   .1920  .1885  .950     .0066   .1648  .1613  .949
       CC     .0196  .1077  .1063  .943     .2010   .1946  .2087  .820     .0900   .1645  .1702  .910
 300   IPW   -.0001  .0630  .0624  .945    -.0001   .1323  .1311  .955    -.0011   .1043  .1038  .946
       AIPW  -.0020  .0588  .0588  .951    -.0052   .1095  .1059  .952     .0012   .0950  .0913  .931
       CC     .0271  .0582  .0601  .930     .2020   .1060  .1173  .576     .0894   .0939  .0965  .863
 500   IPW    .0012  .0457  .0480  .951    -.0007   .0966  .1010  .966    -.0054   .0801  .0799  .948
       AIPW   .0006  .0433  .0453  .956    -.0010   .0821  .0813  .941    -.0059   .0697  .0704  .950
       CC     .0291  .0434  .0463  .912     .2047   .0817  .0903  .364     .0815   .0711  .0745  .820
 800   IPW   -.0006  .0381  .0380  .949     .0004   .0761  .0794  .967     .0000   .0636  .0630  .947
       AIPW  -.0002  .0362  .0359  .949    -.0016   .0640  .0641  .955    -.0014   .0574  .0555  .942
       CC     .0288  .0356  .0367  .885     .2039   .0644  .0714  .166     .0864   .0570  .0588  .673

Table 1 shows that under Models 1 and 2 the bias of all estimators is very small, at a level comparable with that of the full-data estimator. The bias decreases as the sample size and the validation probability increase. The empirical standard errors agree well with the corresponding estimated standard errors, except for the IPW estimator when n≤100 and π≤0.6. Among the three estimators, the AIPW estimator has the smallest standard errors for all parameters and sample sizes considered. The coverage probabilities of the confidence intervals for b0, b1 and θ are close to the nominal level of 95%. When the sample size and the validation probability are both small, for example n=50 and π=0.4, the IPW estimator has large bias and is unstable, but the AIPW estimator still performs well.

Table 2 gives the results under Models 3 and 4. The bias remains small for β̂_I1 and β̂_A2, the empirical standard errors are close to the corresponding estimated standard errors, and the coverage probabilities remain close to the nominal 95% level for the IPW and AIPW estimators. However, the complete-case estimator yields larger bias and incorrect coverage probabilities because the validation probability is associated with the auxiliary variable A and/or the covariate Z1, so the data are not missing completely at random. The AIPW estimator performs better than the IPW estimator, with smaller standard errors.

An Application

A community-based, nonrandomized, open-label influenza vaccine (CAIV-T) study was conducted in Temple-Belton, Texas during the 2000-2001 influenza season. A total of 11,606 healthy children aged 18 months to 18 years were involved in this study, and about 20% of them received a single dose of CAIV-T in 2000. The primary clinical outcome was based on a nonspecific case definition called medically attended acute respiratory infection (MAARI), which included all International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9) diagnosis codes 381-383 and 460-487 for upper and lower respiratory tract infections, otitis media and sinusitis. MAARI outcomes and demographic data were extracted from the Scott & White Health Plan administrative database. For each visit, one or two ICD-9 diagnosis codes were listed. Visits for which asthma diagnosis codes alone were noted, without another MAARI code, were excluded. More details about this study can be found in Halloran et al. ([2003]).

Any child presenting with a history of fever and any respiratory illness was eligible to have a throat swab taken for influenza virus culture. The decision to obtain specimens was made irrespective of whether a patient had received CAIV-T. The specific case definition was culture-confirmed influenza. Table 3, taken from Halloran et al. ([2003]), contains information on the number of children in three age groups, the number of children vaccinated versus unvaccinated, the number of nonspecific MAARI cases, the number of cultures performed, and the number of positive cultures for each group.
Table 3

Study data for influenza epidemic season 2000-01, by age and vaccine group (from Halloran et al. [2003])

Age group   Vaccine   No. of     No. of MAARI   No. of MAARI     No. of positive
(years)               children   cases          cases cultured   cultures
1.5-4       CAIV-T       537        389             16                 0
            None        1844       1665             86                24
5-9         CAIV-T       807        316             17                 2
            None        2232       1156            118                53
10-18       CAIV-T       937        219             19                 3
            None        5249       1421            123                56
Total       CAIV-T      2281        924             52                 5
            None        9325       4242            327               133

With the method developed in Section 4 for Poisson regression, we compare the risk of developing MAARI for children who received CAIV-T to the risk for children who had never received CAIV-T, using the automated information provided in Table 3. The number of nonspecific MAARI cases extracted using the ICD-9 codes is the auxiliary outcome A, whereas the actual number of influenza cases Y is the outcome of interest. Let Z1 be the treatment indicator (1 = vaccinated and 0 = unvaccinated). Let Z2=(η1,η2) be the dummy variables indicating the three age groups, where η1=1 if the age is in the range 1.5-4 and η1=0 otherwise, and η2=1 if the age is in the range 5-9 and η2=0 otherwise. The reference group is the 10-18 age group. The exposure time for all children is taken as T=1 year.
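As a quick plausibility check before fitting the models (our own back-of-the-envelope calculation, not one of the paper's estimators), influenza cases can be imputed in each cell of Table 3 as MAARI × (positives/cultured), giving an unadjusted relative rate:

```python
# (age group, vaccinated?, children, MAARI, cultured, positive) from Table 3
rows = [
    ("1.5-4",  1,  537,  389,  16,  0),
    ("1.5-4",  0, 1844, 1665,  86, 24),
    ("5-9",    1,  807,  316,  17,  2),
    ("5-9",    0, 2232, 1156, 118, 53),
    ("10-18",  1,  937,  219,  19,  3),
    ("10-18",  0, 5249, 1421, 123, 56),
]

def crude_ve(rows):
    """Impute influenza cases as MAARI * (positive/cultured) per cell, then
    form the unadjusted relative rate of vaccinated vs unvaccinated."""
    cases = {1: 0.0, 0: 0.0}
    kids = {1: 0, 0: 0}
    for _, v, n, maari, cult, pos in rows:
        cases[v] += maari * pos / cult
        kids[v] += n
    rr = (cases[1] / kids[1]) / (cases[0] / kids[0])
    return 1 - rr
```

This crude calculation already gives VE ≈ 0.82, in the same range as the model-based IPW and AIPW estimates.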

Consider a Poisson regression model with mean T exp(b0+b1Z1+θ1η1+θ2η2). Using the IPW estimator β̂_I1, the estimates (standard errors) are b̂_0 = −0.7659 (σ̂_{b0} = 0.1046), b̂_1 = −1.5830 (σ̂_{b1} = 0.5017), θ̂_1 = 0.5572 (σ̂_{θ1} = 0.2111) and θ̂_2 = 0.0199 (σ̂_{θ2} = 0.1472). The age-adjusted relative rate (RR) in the vaccinated group compared with the unvaccinated group equals exp(b̂_1) = exp(−1.5830) = 0.2054; that is, the rate of developing MAARI for the vaccinated group is about 20% of that for the unvaccinated group. In terms of the vaccine efficacy VE = 1−RR = 0.7946, this represents about an 80% reduction in the risk of developing MAARI for the vaccinated group compared with the unvaccinated group. The 95% confidence interval for RR obtained by the delta method is (0.0768, 0.5490), showing clear evidence that the vaccinated children have lower risk of influenza than the unvaccinated children. The corresponding 95% confidence interval for VE is (0.4510, 0.9232).
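The interval for RR follows from a normal (Wald) interval for b̂_1 on the log scale, exponentiated. A two-line check in plain Python, using the IPW estimate and standard error quoted above:

```python
import math

b1_hat, se = -1.5830, 0.5017                   # IPW estimate and standard error
rr = math.exp(b1_hat)                          # age-adjusted relative rate, ~0.2054
lo, hi = (math.exp(b1_hat - 1.96 * se),        # 95% CI for RR: ~(0.0768, 0.5490)
          math.exp(b1_hat + 1.96 * se))
ve_lo, ve_hi = 1 - hi, 1 - lo                  # 95% CI for VE: ~(0.4510, 0.9232)
```

Because VE = 1 − RR is monotone in RR, the VE interval is obtained simply by transforming and swapping the RR endpoints.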

Using the AIPW estimator β̂_A2, the estimates (standard errors) are b̂_0 = −2.0703 (σ̂_{b0} = 0.0851), b̂_1 = −1.8072 (σ̂_{b1} = 0.3786), θ̂_1 = 0.6452 (σ̂_{θ1} = 0.1966) and θ̂_2 = 0.6235 (σ̂_{θ2} = 0.1265). The age-adjusted relative rate is exp(b̂_1) = exp(−1.8072) = 0.1641. The estimated VE is 0.8359 with 95% confidence interval (0.6553, 0.9219). The estimator β̂_A2 yields smaller standard errors, and hence narrower confidence intervals, than β̂_I1.

These data were analyzed by Halloran et al. ([2003]) and Chu and Halloran ([2004]). Assuming a binary probability model for P_β(Y|X), where X includes the vaccination status and age-group indicators, and using the mean score method, Halloran et al. ([2003]) found that the estimated VE based on the nonspecific MAARI cases alone was 0.18 with 95% confidence interval (0.11, 0.24). The estimated VE incorporating the surveillance cultures was 0.79 with 95% confidence interval (0.51, 0.91); they also reported a sample-size-weighted VE of 0.77 with 95% confidence interval (0.48, 0.90). Chu and Halloran ([2004]) developed a Bayesian method to estimate vaccine efficacy; their estimated VE was 0.74 with 95% confidence interval (0.50, 0.88), and the estimate from multiple imputation was 0.71 with 95% confidence interval (0.42, 0.86).

Our estimates of the vaccine efficacy are in line with the existing methods. The estimator β̂_A2 yields smaller standard errors, and therefore more precise confidence intervals, than the methods of Halloran et al. ([2003]) and Chu and Halloran ([2004]). Compared with binary regression, the Poisson regression model allows multiple recurrent MAARI cases for each child. Although in this particular application the exposure time is fixed at a one-year interval, the proposed method applies when the length of exposure time differs across children.

Conclusions

In this paper, we investigated the mean score method, the IPW method and the AIPW method for the parametric probability regression model P_β(Y|X) when the outcome of interest Y is subject to missingness. The asymptotic distributions of the IPW and AIPW estimators are derived. The selection probability often needs to be estimated for the IPW estimator, and both the selection probability and the conditional expectation of the score function need to be estimated for the AIPW estimator. We investigated the properties of the IPW and AIPW estimators when the selection probability and the conditional expectation are implemented differently.

An AIPW estimator is said to be fully augmented if the selection probability and the conditional expectation are estimated using the full set of observed variables; it is partially augmented if they are estimated using a subset of observed variables. Corollary 1 shows that the fully augmented AIPW estimator is more efficient than the partially augmented AIPW estimator. Corollary 2 shows that the AIPW estimator is more efficient than the IPW estimator. However, when the selection probability depends only on a set of discrete random variables, the IPW estimator obtained by estimating the selection probability nonparametrically with the cell frequencies is asymptotically equivalent to the AIPW estimator augmented using the same set of discrete random variables. Proposition 1 shows that the IPW estimator, the AIPW estimator and the mean score estimator are equivalent if the selection probability and the conditional expectation are estimated using the same set of discrete random variables.

Applying the developed missing data methods, we derived estimation procedures for the Poisson regression model with missing outcomes based on auxiliary outcomes and a validation sample for true outcomes. By assuming that the selection probability depends only on the observed discrete exposure variables, not on the continuous exposure time, we show that the IPW and AIPW estimators can be formulated to analyze data when only aggregated/summarized information is available. The simulation study shows that, for a moderate sample size and selection probability, the IPW and AIPW estimators perform better than the complete-case estimator, and the AIPW estimator is more efficient and more stable than the IPW estimator. The proposed methods are applied to analyze a data set from an influenza vaccine study conducted in Temple-Belton, Texas during the 2000-2001 influenza season. The data set presented in Table 3 contains only summarized information at categorical levels defined by the three age groups and vaccination status. The actual number of influenza cases (the number of positive cultures) out of the number of MAARI cases cultured, along with the number of MAARI cases, is available for each category. Our analysis using the AIPW approach shows that the age-adjusted relative rate in the vaccinated group compared with the unvaccinated group equals 0.1641, which represents about an 84% reduction in the risk of developing MAARI for the vaccinated group compared with the unvaccinated group.

Appendix A

Proof of Proposition 1.

Since
$$\sum_{i=1}^n (1-\xi_i)\,\hat E\{S_\beta(Y\mid X_i)\mid Z_i,A_i\} = \sum_{i\in \bar V}\ \sum_{j\in V(Z_i,A_i)} \frac{S_\beta(Y_j\mid X_j)}{n_V(Z_i,A_i)} = \sum_{i\in V} \frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\, S_\beta(Y_i\mid X_i),$$
we have
$$\sum_{i=1}^n W_i^{E1} = \sum_{i\in V}\left\{1+\frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\right\} S_\beta(Y_i\mid X_i) = \sum_{i=1}^n \frac{\xi_i}{\hat\pi_i^z}\, S_\beta(Y_i\mid X_i) = \sum_{i=1}^n W_i^{I1}.$$
(A.1)
This shows that the mean score estimator β ̂ E 1 is the same as the IPW estimator β ̂ I 1 . Further, since
$$\sum_{i=1}^n \left(1-\frac{\xi_i}{\hat\pi_i^z}\right)\hat E\{S_\beta(Y\mid X_i)\mid Z_i,A_i\} = \sum_{i\in \bar V}\ \sum_{j\in V(Z_i,A_i)}\frac{S_\beta(Y_j\mid X_j)}{n_V(Z_i,A_i)} - \sum_{i\in V}\frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\sum_{j\in V(Z_i,A_i)}\frac{S_\beta(Y_j\mid X_j)}{n_V(Z_i,A_i)} = \sum_{i\in V}\frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\, S_\beta(Y_i\mid X_i) - \sum_{i\in V}\frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\, S_\beta(Y_i\mid X_i) = 0,$$

we have ∑_{i=1}^n W_i^{A1} = ∑_{i=1}^n W_i^{I1}. Thus the AIPW estimator β̂_A1, the IPW estimator β̂_I1 and the mean score estimator β̂_E1 are equivalent to each other.

Note that
$$\sum_{i=1}^n \left(1-\frac{\xi_i}{\hat\pi_i^z}\right)\hat E\{S_\beta(Y\mid X_i)\mid X_i,A_i\} = \sum_{i\in \bar V}\ \sum_{j\in V(X_i,A_i)}\frac{S_\beta(Y_j\mid X_j)}{n_V(X_i,A_i)} - \sum_{i\in V}\frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\sum_{j\in V(X_i,A_i)}\frac{S_\beta(Y_j\mid X_j)}{n_V(X_i,A_i)} = \sum_{i\in V}\frac{n_{\bar V}(X_i,A_i)}{n_V(X_i,A_i)}\, S_\beta(Y_i\mid X_i) - \sum_{i\in V}\frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\, S_\beta(Y_i\mid X_i) = \sum_{i\in V}\left\{\frac{n_{\bar V}(X_i,A_i)}{n_V(X_i,A_i)} - \frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\right\} S_\beta(Y_i\mid X_i),$$
(A.2)

which is not zero unless Z_i^c is linearly related to Z_i, in which case β is not identifiable. Hence the AIPW estimator β̂_A2 is different from the AIPW estimator β̂_A1.

By (A.1) and (A.2), we have
$$\sum_{i=1}^n W_i^{A2} = \sum_{i\in V}\left\{\frac{n(Z_i,A_i)}{n_V(Z_i,A_i)} + \frac{n_{\bar V}(X_i,A_i)}{n_V(X_i,A_i)} - \frac{n_{\bar V}(Z_i,A_i)}{n_V(Z_i,A_i)}\right\} S_\beta(Y_i\mid X_i) = \sum_{i\in V}\left\{1+\frac{n_{\bar V}(X_i,A_i)}{n_V(X_i,A_i)}\right\} S_\beta(Y_i\mid X_i) = \sum_{i=1}^n W_i^{I2}.$$

Following the same arguments leading to (A.1), we also have ∑_{i=1}^n W_i^{E2} = ∑_{i=1}^n W_i^{I2}. Hence the estimators β̂_I2, β̂_E2 and β̂_A2 are equivalent. By following the steps in (A.2), we also have ∑_{i=1}^n (1−ξ_i/π̂_i) Ê{S_β(Y|X_i) | X_i, A_i} = 0, so β̂_A3 is the same as β̂_I2. Therefore, these are essentially two distinct estimators. □

Proof of Theorem 1.

Applying a first-order Taylor expansion, $\tilde\pi_i-\pi_i = \{\partial\pi(X_i,A_i,\psi_0)/\partial\psi\}'(\hat\psi-\psi_0)+o_p(n^{-1/2})$. From (13), we have
$$n^{-1/2}U_I = n^{-1/2}\sum_{i=1}^n \frac{\xi_i}{\pi_i}S_\beta(Y_i\mid X_i) - n^{-1/2}\sum_{i=1}^n \frac{\tilde\pi_i-\pi_i}{\tilde\pi_i\,\pi_i}\,\xi_i\, S_\beta(Y_i\mid X_i).$$
(A.3)
The second term of (A.3) is
$$n^{-1/2}\sum_{i=1}^n \frac{\tilde\pi_i-\pi_i}{\tilde\pi_i\,\pi_i}\,\xi_i\, S_\beta(Y_i\mid X_i) = n^{-1/2}\sum_{i=1}^n \pi_i^{-2}\,\xi_i\, S_\beta(Y_i\mid X_i)\left\{\frac{\partial\pi(X_i,A_i,\psi_0)}{\partial\psi}\right\}'(\hat\psi-\psi_0) + o_p(1) = E\left[\pi_i^{-2}\,\xi_i\, S_\beta(Y_i\mid X_i)\left\{\frac{\partial\pi(X_i,A_i,\psi_0)}{\partial\psi}\right\}'\right] n^{1/2}(\hat\psi-\psi_0) + o_p(1).$$
(A.4)
By (11), (A.3) and (A.4), we have
$$n^{-1/2}U_I = n^{-1/2}\sum_{i=1}^n \left\{\frac{\xi_i}{\pi_i}S_\beta(Y_i\mid X_i) - O_i\right\} + o_p(1) = n^{-1/2}\sum_{i=1}^n Q_i^I + o_p(1).$$
Now consider the AIPW estimator β̂_A based on solving the estimating equation (14). For simplicity, we denote E_a{S_β(Y|X_i) | X_i, A_i} by E_i and Ẽ{S_β(Y|X_i) | X_i, A_i} by Ẽ_i. We note that
$$n^{-1/2}U_A = n^{-1/2}\sum_{i=1}^n \left\{\frac{\xi_i}{\pi_i}S_\beta(Y_i\mid X_i) + \left(1-\frac{\xi_i}{\pi_i}\right)E_i\right\} + n^{-1/2}\sum_{i=1}^n \left\{\left(\frac{\xi_i}{\tilde\pi_i}-\frac{\xi_i}{\pi_i}\right)\bigl(S_\beta(Y_i\mid X_i)-E_i\bigr) + \left(1-\frac{\xi_i}{\tilde\pi_i}\right)\bigl(\tilde E_i-E_i\bigr)\right\}.$$
Suppose that π̃_i and Ẽ_i are estimates of π_i and E_i based on some parametric or nonparametric models. Then it can be shown, using a Taylor expansion and standard probability arguments, that the second term is of order o_p(1) under MAR I. Hence
$$n^{-1/2}U_A = n^{-1/2}\sum_{i=1}^n \left\{\frac{\xi_i}{\pi_i}S_\beta(Y_i\mid X_i) + \left(1-\frac{\xi_i}{\pi_i}\right)E_i\right\} + o_p(1).$$
It can be shown that under MAR I, $-n^{-1}\,\partial U_I/\partial\beta \xrightarrow{P} I(\beta)$ and $-n^{-1}\,\partial U_A/\partial\beta \xrightarrow{P} I(\beta)$. By routine derivations, we have
$$n^{1/2}(\hat\beta_I-\beta) = I^{-1}(\beta)\, n^{-1/2}U_I + o_p(1) = I^{-1}(\beta)\, n^{-1/2}\sum_{i=1}^n Q_i^I + o_p(1),$$
$$n^{1/2}(\hat\beta_A-\beta) = I^{-1}(\beta)\, n^{-1/2}U_A + o_p(1) = I^{-1}(\beta)\, n^{-1/2}\sum_{i=1}^n Q_i^A + o_p(1).$$

By the central limit theorem, both n^{1/2}(β̂_I−β) and n^{1/2}(β̂_A−β) are asymptotically normal with mean zero and covariance matrices I^{-1}(β) Var(Q_i^I) I^{-1}(β) and I^{-1}(β) Var(Q_i^A) I^{-1}(β), respectively.

Next, we examine the covariance matrices Var(Q_i^I) and Var(Q_i^A) to understand the efficiency gain of β̂_A over β̂_I. Note that Q_i^I = (ξ_i/π_i)S_β(Y_i|X_i) − O_i and Q_i^A = (ξ_i/π_i)S_β(Y_i|X_i) + (1−ξ_i/π_i)E_i. Denote A_i = (ξ_i/π_i)S_β(Y_i|X_i) and B_i = (1−ξ_i/π_i)E_i. Then Q_i^I = Q_i^A − B_i − O_i. Under MAR I, Cov(Q_i^A, O_i) = E(Q_i^A O_i) = E{E(Q_i^A | ξ_i, X_i, A_i) O_i} = E(E_i O_i) = E{E_i E(O_i | X_i, A_i)} = 0, and
$$\mathrm{Cov}(Q_i^A,B_i) = E\left[\left\{\frac{\xi_i}{\pi_i}S_\beta(Y_i\mid X_i)+\left(1-\frac{\xi_i}{\pi_i}\right)E_i\right\}\left(1-\frac{\xi_i}{\pi_i}\right)E_i\right] = E\left\{\left(1-\frac{\xi_i}{\pi_i}\right)^2 E_i^2\right\} - E\left\{\left(1-\frac{\xi_i}{\pi_i}\right)^2 S_\beta(Y_i\mid X_i)\,E_i\right\} + E\left\{\left(1-\frac{\xi_i}{\pi_i}\right) S_\beta(Y_i\mid X_i)\,E_i\right\} = 0.$$

Hence, Cov(Q_i^A, B_i + O_i) = 0. It follows that Var(Q_i^I) = Var(Q_i^A) + Var(B_i + O_i). Since Q_i^A = S_β(Y_i|X_i) − (1−ξ_i/π_i){S_β(Y_i|X_i) − E_i} and the two terms are uncorrelated under MAR I, we have Var(Q_i^A) = Var{S_β(Y_i|X_i)} + Var[(1−ξ_i/π_i){S_β(Y_i|X_i) − E_i}], where the first term equals I(β). This completes the proof of Theorem 1. □

Proof of Corollary 1.

Let Q_i^{A1} = (ξ_i/π_i^z)S_β(Y_i|X_i) + (1−ξ_i/π_i^z)E{S_β(Y|X_i) | Z_i, A_i} and Q_i^{A2} = (ξ_i/π_i^z)S_β(Y_i|X_i) + (1−ξ_i/π_i^z)E{S_β(Y|X_i) | X_i, A_i}. By (16),
$$\mathrm{Var}(Q_i^{A1}) = \mathrm{Var}\{S_\beta(Y_i\mid X_i)\} + \mathrm{Var}\left[\left(1-\frac{\xi_i}{\pi_i^z}\right)\bigl\{S_\beta(Y_i\mid X_i)-E(S_\beta(Y\mid X_i)\mid Z_i,A_i)\bigr\}\right],$$
$$\mathrm{Var}(Q_i^{A2}) = \mathrm{Var}\{S_\beta(Y_i\mid X_i)\} + \mathrm{Var}\left[\left(1-\frac{\xi_i}{\pi_i^z}\right)\bigl\{S_\beta(Y_i\mid X_i)-E(S_\beta(Y\mid X_i)\mid X_i,A_i)\bigr\}\right].$$

The second term of Var(Q_i^{A1}) equals Σ_{A1}(β) and the second term of Var(Q_i^{A2}) equals Σ_{A2}(β). It then follows from the main results in Theorem 1 that (17) and (18) hold.

Also by Theorem 1, the difference between the variances of Q_i^{A1} and Q_i^{A2} accounts for the difference between the asymptotic variances of β̂_A1 and β̂_A2. Since E{(1−ξ_i/π_i^z)² | Y_i, X_i, A_i} = E{(1−ξ_i/π_i^z)² | X_i, A_i} = (1−π_i^z)/π_i^z under MAR I,
$$\Sigma_{A2}(\beta) = E\left[\frac{1-\pi_i^z}{\pi_i^z}\Bigl\{E\bigl(S_\beta^2(Y\mid X_i)\mid X_i,A_i\bigr) - \bigl[E\{S_\beta(Y\mid X_i)\mid X_i,A_i\}\bigr]^2\Bigr\}\right] = E\left[\frac{1-\pi_i^z}{\pi_i^z}\Bigl\{E\bigl(S_\beta^2(Y\mid X_i)\mid Z_i,A_i\bigr) - \bigl[E\{S_\beta(Y\mid X_i)\mid Z_i,A_i\}\bigr]^2\Bigr\}\right] - E\left[\frac{1-\pi_i^z}{\pi_i^z}\Bigl\{\bigl[E\{S_\beta(Y\mid X_i)\mid X_i,A_i\}\bigr]^2 - \bigl[E\{S_\beta(Y\mid X_i)\mid Z_i,A_i\}\bigr]^2\Bigr\}\right] = \Sigma_{A1}(\beta) - E\left[\frac{1-\pi_i^z}{\pi_i^z}\,\mathrm{Var}\bigl\{E(S_\beta(Y\mid X_i)\mid X_i,A_i)\mid Z_i,A_i\bigr\}\right],$$

which is less than Σ_{A1}(β) when Z_i is a proper subset of X_i. □

Proof of Corollary 2.

Consider the definitions of B i and O i given following (16). We note that
E π i 1 S β ( Y i | X i )