The detection of occluding contours in images of 3-D scenes is a fundamental problem of vision. We present a computational model of contour processing that was suggested by neurophysiological recordings from the monkey visual cortex. The model employs convolutions and nonlinear operations, but no feedback loops. Contours are defined by the local maxima of the responses of a contour operator that sums a representation of contrast borders and a 'grouping signal'. The grouping consists of convolving a representation of 'key-points', such as T-junctions, corners, and line ends, with a set of orientation-selective kernels, followed by a nonlinear pairing operation. The grouping scheme is selective according to whether the configuration of key-points is consistent with an interpretation of occlusion. The resulting contour representation includes an indicator of figure-ground direction. We show (1) that the model reproduces illusory contours in accurate agreement with perception, and (2) that it generates representations of occluding contours on images of natural scenes that are more complete and less cluttered by spurious connections of foreground and background than those obtained by conventional edge-detection operators.
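The grouping step described above can be sketched in code. This is a minimal illustration, not the paper's implementation: the one-sided orientation-selective kernels are approximated here as elongated Gaussian half-lobes, the nonlinear pairing is taken to be a geometric mean of responses from opposite-pointing kernels (so the signal is strong only *between* a pair of key-points), and all sizes and parameters are illustrative. Key-point detection itself (T-junctions, corners, line ends) is assumed to have already produced a binary map.

```python
import numpy as np
from scipy.ndimage import convolve

def oriented_half_kernel(size, theta, sigma_long=4.0, sigma_perp=1.0):
    """One-sided, orientation-selective kernel: an elongated Gaussian lobe
    extending from the origin in direction theta. (A stand-in for the
    grouping kernels; the shape and parameters are illustrative.)"""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    u = x * np.cos(theta) + y * np.sin(theta)    # coordinate along theta
    v = -x * np.sin(theta) + y * np.cos(theta)   # perpendicular coordinate
    k = np.exp(-u**2 / (2 * sigma_long**2) - v**2 / (2 * sigma_perp**2))
    k[u < 0] = 0.0                               # one-sided: only ahead of origin
    return k / k.sum()

def grouping_signal(keypoints, n_orient=8, size=15):
    """Convolve a key-point map with pairs of opposite half-kernels and
    combine each pair with a nonlinear pairing (geometric mean), so that
    only configurations with key-points on BOTH sides contribute."""
    g = np.zeros_like(keypoints, dtype=float)
    for i in range(n_orient):
        theta = np.pi * i / n_orient
        a = convolve(keypoints, oriented_half_kernel(size, theta))
        b = convolve(keypoints, oriented_half_kernel(size, theta + np.pi))
        g += np.sqrt(a * b)                      # nonlinear pairing operation
    return g

# Usage: two key-points on one row; the grouping signal should appear
# between them, not at isolated locations far from both.
kp = np.zeros((21, 21))
kp[10, 5] = 1.0
kp[10, 15] = 1.0
g = grouping_signal(kp)
```

In a full model this grouping signal would be summed with a contrast-border (edge) representation, and contours taken as local maxima of the combined response; the pairing step is what makes the scheme selective for occlusion-consistent key-point configurations rather than grouping every nearby pair of features.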