We consider the problem of ``visual attention'' in
the context of space-variant machine vision: Is
there a general theoretical and practical
formulation for the ``next-look'' problem to guide a
space-variant sensor to a rapid choice for its next
fixation point? This topic is developed in the
context of Hough transform methods, by the addition
of a third space to the usual feature and object
spaces considered in traditional Hough methods. This
third space is a ``behavioural,'' or ``motor''
space, which is typically low-dimaensional with
respect to the feature and object spaces. For
example, the motor space of particular interest for
us is the two-dimensional manifold of monocular eye
positions. By ``collapsing'' the generalized Hough
transform into a low-dimensional motor space, we
show that it is possible to avoid a practical
difficulty of Hough transform methods: the
exponential growth of the accumulator array size
with the dimensionality of the
object space. Beginning with a simple and very
general Bayesian scheme, we derive in stages the
generalized Hough transform as a special case. Since
``attentional'' applications, by their nature,
require only partial knowledge about objects,
computation of all the parameters characterizing a
scene object is superfluous and wasteful of
computational resources. This suggests that for an
attentional application, collapsing the (large)
object space onto the (small) motor space provides a
computationally grounded definition of the term
``visual attention''. We illustrate these ideas with
an example of choosing ``fixation'' points for a
space-variant sensor in a machine vision application
for real-time reading of license plates of moving
vehicles.
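As a minimal sketch of the central idea, the following accumulates generalized-Hough votes directly in a 2-D ``motor'' (fixation) grid rather than in a full object-space array. The R-table layout, the feature tuples, and the grid coordinates here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def collapsed_hough_vote(features, r_table, motor_shape):
    """Vote in a 2-D motor (fixation) accumulator instead of the
    full object space (position, orientation, scale, ...).

    features    : list of (x, y, orientation_bin) edge features
    r_table     : hypothetical R-table mapping orientation_bin ->
                  list of (dx, dy) displacements to a reference point
    motor_shape : (rows, cols) of the fixation-point accumulator
    """
    acc = np.zeros(motor_shape)
    rows, cols = motor_shape
    for (x, y, ob) in features:
        for (dx, dy) in r_table.get(ob, []):
            u, v = x + dx, y + dy          # candidate fixation point
            if 0 <= u < rows and 0 <= v < cols:
                acc[u, v] += 1             # vote in motor space only
    return acc

# Toy usage: two features whose displacement vectors agree on one point.
feats = [(3, 4, 0), (2, 2, 1)]            # (x, y, orientation bin)
r_tab = {0: [(1, 2)], 1: [(2, 4)]}        # hypothetical R-table
acc = collapsed_hough_vote(feats, r_tab, (10, 10))
u, v = np.unravel_index(np.argmax(acc), acc.shape)
print(int(u), int(v))  # → 4 6
```

The accumulator's size is fixed by the motor space alone, so its cost does not grow with the number of object parameters; the peak of the accumulator is a candidate next fixation point.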