Magnetic resonance image analysis is often hampered by inconsistent data due to upgrades or changes to the scanner platform or modification of scanning protocols. These changes can manifest as three main sources of image inconsistency: contrast, resolution, and noise. Modern analysis techniques that use supervised machine learning can be especially susceptible to these inconsistencies, as existing training data may not be valid after an upgrade or protocol change. In previous work, differences in contrast and resolution have been addressed only in isolation. We propose a novel method of image intensity harmonization that addresses all three sources of inconsistency. We formulate our method around a multi-planar, multi-contrast U-Net, where all of the available contrasts are used as input channels in a single modified U-Net to produce all of the output contrasts simultaneously. The multi-contrast nature of the deep network allows for harmonization of contrast, as information can be shared between contrasts. In addition, coherent, biological features are highlighted and matched to the target, while noise, which differs between matched inputs and outputs, is not reinforced. This process also normalizes small differences in resolution due to the influence of the high-resolution channels. To combat larger differences in resolution, which would not be recovered by the neural network alone, we apply self super-resolution (SSR) to all images with thick (>2 mm) slices before harmonization. To generate consistent images, the target images are also processed in a similar manner so that all resulting images have consistent qualities. Our harmonization process eliminates statistically significant bias in the volume estimates of multiple brain compartments and of lesions. In addition, absolute volume difference and Dice similarity of segmentation volumes were significantly improved (p < 0.005).
Although SSR alone reduced the absolute volume difference, it did not remove the statistical significance of the volume bias.
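The channel layout described above can be illustrated with a minimal sketch. This is not the paper's trained U-Net: the contrast names are assumed examples, and a per-pixel linear mixing matrix stands in for the network, serving only to show how stacking all contrasts as input channels lets one model produce all output contrasts simultaneously while sharing information between them.

```python
import numpy as np

# Hypothetical set of input contrasts for one subject (illustrative only).
contrasts = ["T1", "T2", "PD", "FLAIR"]
H, W = 64, 64

rng = np.random.default_rng(0)
# One 2-D slice per contrast, stacked along the channel axis: shape (C, H, W).
x = np.stack([rng.random((H, W)) for _ in contrasts])

# Stand-in "harmonizer": a per-pixel linear mixing across channels.
# A real implementation would be a trained multi-channel U-Net; here each
# output channel simply blends all input channels, mimicking how the
# multi-contrast network can share information between contrasts.
mix = np.full((len(contrasts), len(contrasts)), 0.1)
np.fill_diagonal(mix, 0.7)

# einsum: output channel o is a weighted combination of all input channels c.
y = np.einsum("oc,chw->ohw", mix, x)

# All output contrasts are produced simultaneously, matching the input layout.
assert y.shape == x.shape
```

The key design point carried over from the abstract is the single shared model: because every contrast is present as an input channel, the mapping for any one output contrast can draw on information from all of the others.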