The neighbors principle implicit in any machine learning algorithm says that samples with similar labels should also be close to one another in feature space. For example, although tumors are heterogeneous, tumors with similar genomic profiles can be expected to respond similarly to a specific therapy. Simple correlation coefficients provide an effective way to determine whether this principle holds when both features and labels are scalar, but not when either is multivariate. A newer class of generalized correlation coefficients based on inter-point distances, known as distance correlation, addresses this need. Only one rank-based distance correlation test is available to date, and it is asymmetric in the samples, requiring that one sample be distinguished as a fixed point of reference. We therefore introduce a novel nonparametric statistic, REVA, inspired by the Kendall rank correlation coefficient. We use U-statistic theory to derive the asymptotic distribution of the new correlation coefficient, developing additional large- and finite-sample properties along the way. To establish the admissibility of the REVA statistic, and to explore the utility and limitations of our model, we compared it to the most widely used distance-based correlation coefficient in a range of simulated conditions, demonstrating that REVA does not depend on an assumption of linearity and is robust to high levels of noise, high dimensionality, and the presence of outliers. We also present an application to real data, applying REVA to determine whether cancer cells with similar genetic profiles respond similarly to a targeted therapeutic.
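The abstract does not give REVA's exact formula, but the underlying idea, a Kendall-style rank correlation computed on inter-point distances so that multivariate features and multivariate labels can be compared, can be sketched as follows. This is an illustrative approximation only, not the authors' statistic: it flattens the pairwise Euclidean distance matrices of the two spaces and applies a plain Kendall tau-a to the resulting vectors.

```python
# Illustrative sketch only: REVA's exact definition is not given in the
# abstract. This approximates the idea of a rank (Kendall-style) correlation
# between inter-point distances in feature space X and label space Y.
import numpy as np

def pairwise_distances(A):
    """Euclidean distances between all rows of A, upper triangle flattened."""
    diff = A[:, None, :] - A[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(len(A), k=1)
    return D[iu]

def kendall_tau_a(u, v):
    """Plain Kendall tau-a: concordant minus discordant pairs, normalized."""
    n = len(u)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(u[i] - u[j]) * np.sign(v[i] - v[j])
    return 2.0 * s / (n * (n - 1))

# Toy data (hypothetical): multivariate features X and labels Y that are a
# linear function of X, so distance structure should be strongly concordant.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
Y = X @ rng.normal(size=(5, 3))
tau = kendall_tau_a(pairwise_distances(X), pairwise_distances(Y))
```

Because Y here is a deterministic linear map of X, samples close in feature space are also close in label space, and the distance-rank concordance `tau` comes out strongly positive; replacing Y with independent noise would drive it toward zero.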