Hyperspectral image (HSI) anomaly detectors typically employ local background modeling to separate targets from surrounding clutter. Global background modeling has proven challenging because the image's multi-modal content must be modeled automatically to enable target/background separation. We have previously developed a support-vector-based anomaly detector that imposes no a priori parametric model on the data and supports multi-modal modeling of large background regions with inhomogeneous content. Effective application of this support vector approach requires setting a kernel parameter that controls how tightly the model fits the background data. Kernel-parameter estimation has typically been driven by Type I (false-positive) error optimization, since background samples are readily available, but this strategy has not proven effective in general: it controls only the false-alarm rate and does nothing to maximize detection. Optimization with respect to Type II (false-negative) error has remained elusive because sufficient target training exemplars are rarely available. We present an approach that optimizes parameter selection against both Type I and Type II error criteria by injecting outliers derived from existing hypercube content to guide parameter estimation. Applied to hyperspectral imagery, the approach automatically estimates parameters consistent with those found to be optimal, providing an automated method for general anomaly detection applications.
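The selection scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a plain kernel-density background score as a stand-in for the support vector model, a synthetic two-mode "background" in place of real hypercube pixels, and injected outliers drawn between the background modes. All data, parameter values, and function names here are illustrative assumptions; the key idea shown is scoring each candidate kernel parameter by the sum of Type I error (background flagged) and Type II error (injected outliers missed).

```python
import numpy as np

rng = np.random.default_rng(0)

# Multi-modal "background": two spectral clusters over 5 bands, standing
# in for inhomogeneous hypercube content (illustrative data, not real HSI).
background = np.vstack([
    rng.normal(0.2, 0.05, size=(200, 5)),
    rng.normal(0.8, 0.05, size=(200, 5)),
])

# Synthetic outliers derived from the cube's own dynamic range: spectra
# placed between the two background modes, mimicking hypercube-based
# outlier injection when no target exemplars exist.
outliers = rng.normal(0.5, 0.03, size=(40, 5))

def kernel_scores(X, bg, gamma):
    """Mean RBF-kernel similarity of each row of X to the background set."""
    d2 = ((X[:, None, :] - bg[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2).mean(axis=1)

best = None
for gamma in [0.01, 0.1, 1.0, 10.0, 100.0]:
    bg_scores = kernel_scores(background, background, gamma)
    thresh = np.quantile(bg_scores, 0.05)           # ~5% false-alarm budget
    fp = np.mean(bg_scores < thresh)                # Type I: background flagged
    fn = np.mean(kernel_scores(outliers, background, gamma) >= thresh)  # Type II
    if best is None or fp + fn < best[1]:
        best = (gamma, fp + fn)

best_gamma, best_err = best
print(best_gamma, best_err)
```

Note the role of the injected outliers: a purely Type I criterion would accept any gamma meeting the false-alarm budget (here, every candidate does), whereas the joint criterion rejects loose fits that swallow the outliers, favoring a tighter kernel.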