Traditionally, visual servoing is separated into tracking and control subsystems. This separation, though convenient, is not necessarily well justified: when tracking and control strategies are designed independently, it is not clear how to jointly optimize them to achieve a given task. In this work, we propose a framework in which spatial sampling kernels, borrowed from the tracking and registration literature, are used to design feedback controllers for visual servoing. Spatial sampling kernels provide natural hooks for Lyapunov theory, thus unifying tracking and control and providing a framework for optimizing a particular servoing task. As a first step, we develop kernel-based visual servos for a subset of relative motions between camera and target scene: 2D translation, scale, and roll of the target relative to the camera. Our approach provides formal guarantees on the convergence and stability of visual servoing algorithms under putatively generic conditions.