The ability of animals to select a limited region of sensory space for scrutiny is an important factor in dealing with cluttered or complex sensory environments. Such an "attentional" system in the visual domain is believed to be involved in both the perception of objects and the control of eye movements in primates. While we can intentionally guide our attention to perform a specific task, it is also reflexively drawn to "salient" features in our sensory input space. Understanding how high-level task information and low-level stimulus information combine to control our sensory processing is of great interest to both neuroscience and engineering. Towards this end, we have designed and fabricated a one-dimensional, analog VLSI vision chip that models covert attentional search and tracking. We extend previous analog VLSI work (Morris and DeWeerth, 1997) on the delayed onset of inhibition in a winner-take-all network, using extracted image edges as input to the attentional saliency map and performing serial search on a particular feature conjunction (spatial derivative and direction of motion). We further demonstrate the ability to modify the circuit's parameters "on the fly" to switch between a search mode and a tracking mode.
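The serial-search behavior described above — a winner-take-all selection whose winner gradually inhibits itself, forcing attention to move on to the next-most-salient location — can be illustrated with a minimal discrete-time sketch. This is not the chip's analog circuitry; it is a software analogy, and the function name, step count, and the `inhib_rate`/`decay` parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def wta_serial_search(saliency, steps=6, inhib_rate=0.6, decay=0.95):
    """Serial attentional search: a winner-take-all over a saliency map,
    where inhibition builds up at the attended location (delayed onset
    of inhibition), pushing the 'winner' to successively less salient sites.

    saliency   : 1-D array, the attentional saliency map (e.g. edge strengths)
    inhib_rate : how much inhibition accumulates at the winning site per step
    decay      : slow decay of inhibition, eventually allowing revisits
    Returns the sequence of attended positions (indices into the map).
    """
    saliency = np.asarray(saliency, dtype=float)
    inhibition = np.zeros_like(saliency)
    visited = []
    for _ in range(steps):
        effective = saliency - inhibition   # task input minus built-up inhibition
        winner = int(np.argmax(effective))  # winner-take-all selection
        visited.append(winner)
        inhibition[winner] += inhib_rate    # inhibition grows at the attended site
        inhibition *= decay                 # inhibition slowly fades everywhere
    return visited

# Example: four locations with differing saliency; attention scans them
# in descending order of saliency before inhibition decays.
print(wta_serial_search([0.2, 1.0, 0.5, 0.8], steps=3))
```

Raising `decay` toward 1 makes the scan more exhaustive before revisiting; setting `inhib_rate` to 0 turns the same network into a pure tracker that locks onto the single most salient location, loosely mirroring the search/tracking mode switch the chip achieves by adjusting circuit parameters.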