## Abstract

Consider a model in which data are generated by the following two-stage process. First, a parameter θ is sampled from a prior distribution G, and then an observation is sampled from the conditional distribution f(y | θ). If the prior distribution is known, the Bayes estimate under squared error loss is the posterior expectation of θ given the data y. For example, if G is Gaussian with mean μ and variance τ^{2} and f(y | θ) is Gaussian with mean θ and variance σ^{2}, then the posterior distribution is Gaussian with mean Bμ + (1 − B)y and variance σ^{2}(1 − B), where B = σ^{2}/(σ^{2} + τ^{2}). Inferences about θ are based on this distribution. We study the application of the bootstrap to situations where the prior must be estimated from the data (empirical Bayes methods). For this model, we observe data Y^{T} = [Y_{1}, …, Y_{K}], with each Y_{k} independently following the compound model described previously. As first shown by James and Stein (1961), setting each θ_{k} equal to its estimated posterior mean, θ̂_{k} = B̂μ̂ + (1 − B̂)Y_{k}, where μ̂ = Ȳ = K^{−1}Σ_{k}Y_{k} and B̂ = (K − 3)σ^{2}/Σ_{k}(Y_{k} − Ȳ)^{2}, produces estimates with smaller summed squared error loss than the maximum likelihood estimates θ̂_{k} = Y_{k}. In many applications, confidence intervals or other summaries are required, but computing them from the posterior based on an estimated prior (the naive approach) is generally inappropriate. These posterior distributions fail to account for the uncertainty in estimating the prior and, therefore, may be too compact or have an inappropriate shape. Several approaches have been proposed for incorporating this uncertainty, ranging from Bayes–empirical Bayes based on the introduction of a hyperprior (Deely and Lindley 1981) to use of the delta method (Morris 1983a,b). We develop and study bootstrap methods for introducing prior uncertainty. The generic bootstrap generates data Y*_{k} (and possibly also θ*_{k}) and bases inferences on their bootstrap joint distribution.
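A minimal numerical sketch (not from the paper) of the two-stage Gaussian model and the plug-in shrinkage estimate just described; all variable names are illustrative, and the (K − 3) factor in B̂ is the standard James–Stein choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Compound Gaussian model: theta_k ~ N(mu, tau2), Y_k | theta_k ~ N(theta_k, sigma2).
mu, tau2, sigma2, K = 0.0, 1.0, 1.0, 20
theta = rng.normal(mu, np.sqrt(tau2), K)
y = rng.normal(theta, np.sqrt(sigma2))

# Known-prior posterior: mean B*mu + (1 - B)*y, variance sigma2*(1 - B),
# with B = sigma2 / (sigma2 + tau2).
B = sigma2 / (sigma2 + tau2)
post_mean = B * mu + (1 - B) * y
post_var = sigma2 * (1 - B)

# Empirical Bayes plug-in (James-Stein): estimate mu by the sample mean and
# B by (K - 3) * sigma2 / sum((Y_k - Ybar)^2), then shrink each Y_k.
ybar = y.mean()
B_hat = (K - 3) * sigma2 / ((y - ybar) ** 2).sum()
theta_js = B_hat * ybar + (1 - B_hat) * y

# The MLE is simply theta_hat_k = Y_k; the shrinkage estimates typically have
# smaller summed squared error loss.
sse_mle = ((y - theta) ** 2).sum()
sse_js = ((theta_js - theta) ** 2).sum()
```

Here `post_mean`/`post_var` use the known prior, while `theta_js` plugs in the estimated μ̂ and B̂.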
We use the Y* to produce estimated prior and posterior distributions from each bootstrap sample, and then use as the posterior distribution for any θ_{k} the mixture over the estimated posteriors of θ_{k}. This component is analogous to hyperprior Bayes. Different bootstraps result from different methods of generating the bootstrap data; we develop three methods that incorporate increasing adherence to the assumed compound model. One method samples data from the empirical cdf of the Y's, the second estimates the prior by the nonparametric maximum likelihood estimate (Laird 1978) and then generates samples from the compound model, and the last estimates the μ and τ^{2} of an assumed Gaussian prior and then generates samples from the compound model (the fully parametric approach). Morris (1983b) defined empirical Bayes (EB) confidence intervals C(Y) with level α to be those satisfying Pr(θ ∈ C(Y)) ≥ 1 − α, where Pr is taken over the joint distribution of (θ, Y). We evaluate EB confidence intervals based on the parametric bootstrap posterior and show that, in general, they successfully introduce appropriate variation into the posterior distribution in the sense defined by Morris. If τ^{2} is known, we show that the parametric bootstrap produces intervals with coverage exactly (1 − α) and length strictly less than that of the classical intervals for K > 1, B > 0. Numerical evaluation indicates that when μ is known and τ^{2} is unknown, the intervals have coverage close to (1 − α) and expected lengths comparable to intervals based on Morris's delta method. The bootstrap and delta methods produce the EB advantage of shorter average length than intervals based on the maximum likelihood estimator (MLE), while retaining reasonable validity of coverage probability. For example, when B = .5 the two-tailed intervals are about 75% to 80% of the length of those based on the MLE. The bootstrap combines a frequentist approach to estimating the prior with a Bayesian compound model.
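The fully parametric bootstrap described above can be sketched as follows. This is an illustrative outline only: the helper `estimate_prior` and its method-of-moments estimates of μ and τ^{2} are assumptions for the sketch, not necessarily the estimators used in the paper.

```python
import numpy as np

def estimate_prior(z, sigma2):
    # Illustrative method-of-moments estimates of (mu, tau2) for an assumed
    # Gaussian prior; marginally Var(Y) = sigma2 + tau2, so subtract sigma2.
    return z.mean(), max(z.var(ddof=1) - sigma2, 0.0)

def parametric_bootstrap_posterior(y, sigma2, n_boot=2000, seed=1):
    """For each bootstrap replicate: simulate Y* from the estimated compound
    model, re-estimate the prior from Y*, and draw theta_k from the resulting
    estimated posterior given the ORIGINAL y. Pooling the draws gives the
    mixture over estimated posteriors used as the bootstrap posterior."""
    rng = np.random.default_rng(seed)
    K = len(y)
    mu_hat, tau2_hat = estimate_prior(y, sigma2)
    draws = np.empty((n_boot, K))
    for b in range(n_boot):
        theta_star = rng.normal(mu_hat, np.sqrt(tau2_hat), K)
        y_star = rng.normal(theta_star, np.sqrt(sigma2))
        mu_b, tau2_b = estimate_prior(y_star, sigma2)
        B_b = sigma2 / (sigma2 + tau2_b)
        post_mean = B_b * mu_b + (1 - B_b) * y
        post_var = sigma2 * (1 - B_b)
        draws[b] = rng.normal(post_mean, np.sqrt(post_var))
    return draws

# Example: 95% bootstrap EB interval for each theta_k from mixture quantiles.
y = np.array([-1.2, 0.3, 0.8, 2.1, -0.5])
d = parametric_bootstrap_posterior(y, sigma2=1.0)
lo, hi = np.quantile(d, [0.025, 0.975], axis=0)
```

The quantiles of the pooled draws yield intervals that reflect both posterior spread and the variability from re-estimating the prior.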
Relating the bootstrap to hyperprior Bayes, we show that in the Gaussian case, if τ^{2} is known, our fully parametric bootstrap is equivalent to Bayes based on a flat (dμ) hyperprior. In no other case do the bootstrap and hyperprior Bayes agree, though they have the same formal representation. We apply the naive, Morris, and bootstrap methods to the batting-average data first introduced by Efron and Morris, showing how the bootstrap intervals are skewed, how the posterior variance increases with the distance of the observation from the estimated prior mean (analogous to confidence bands for linear regression), and how the confidence intervals relate to the "true" parameter (the batting average for the season) and to intervals based on maximum likelihood. Use of the bootstrap in complicated problems will broaden the range of application of EB methods. The bootstrap approach can be easily modified for application to vector parameters, distributions other than the Gaussian, and models where, for example, the Y_{k}'s have different sampling variances, the θ_{k}'s are correlated, or the prior means follow a regression model. The ability to produce point estimates and confidence intervals enhances the utility of EB techniques in many applications, including the analysis of small-area variation and ranking and selection problems.

| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 739-750 |
| Number of pages | 12 |
| Journal | Journal of the American Statistical Association |
| Volume | 82 |
| Issue number | 399 |
| DOIs | |
| State | Published - Sep 1987 |
| Externally published | Yes |

## ASJC Scopus subject areas

- Statistics and Probability
- Statistics, Probability and Uncertainty