The goal of many early visual filtering processes is to remove noise while at the same time sharpening contrast. A historical succession of approaches to this problem,
starting with the use of simple derivative and smoothing operators, and the subsequent realization of the relationship between scale-space and the isotropic diffusion
equation, has recently resulted in the development of "geometry-driven" diffusion. Nonlinear and anisotropic diffusion methods, as well as image-driven nonlinear
filtering, have provided improved performance relative to the older isotropic and linear diffusion techniques. These techniques, which either explicitly or implicitly
make use of kernels whose shape and center are functions of local image structure, are too computationally expensive for use in real-time vision applications. In this
paper, we review several recently developed methods which achieve results largely equivalent to those obtained from nonlinear diffusion. In the first technique, we use an
adaptive (function approximation) approach to learn a kernel function which produces results typical of nonlinear anisotropic diffusion via spatial integration of the
kernel across the image. Because of the analogy of this method with the linear Green's function approach to PDE solution, we call this (nonlinear, space-variant) method a
"Green's Function Approximator" (GFA). The second method involves the construction of a vector field of "offsets" at which to apply a (single-scale) filter, a process
which is conceptually separated into two very different operations: the former is a kind of generalized image skeletonization; the latter is conventional (but
nonlocal) image filtering. The GFA method is about an order of magnitude faster than nonlinear diffusion on a serial machine. It can be fully parallelized, unlike the PDE
approach, which has intrinsically serial components. The nonlocal filter is roughly an order of magnitude faster still, resulting in two orders of magnitude speed-up
relative to direct PDE solution. An additional advantage of nonlocal filtering is that it allows hardware and software implementations to be applied with no modification,
since the offset step reduces to an image pixel permutation, or look-up table operation, after the application of the filter. When combined with space-variant (e.g. log
polar) architectures, which themselves provide between one and three orders of magnitude of speed-up relative to conventional image architectures, we are able to achieve
image enhancement effects similar to those of nonlinear diffusion at frame rate (30 Hz) on a single processor. A demonstration of these results on a portable active vision
system will be provided.
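For concreteness, the nonlinear anisotropic diffusion baseline referred to above can be sketched with the classical Perona-Malik scheme; this is a minimal illustration, not the specific diffusion formulation or parameter settings used in the paper. The conductance function `g`, the edge threshold `kappa`, and the time step `dt` below are illustrative choices.

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=0.3, dt=0.2):
    """Classical Perona-Malik nonlinear diffusion (sketch).

    The conductance g = exp(-(|grad|/kappa)^2) slows diffusion across
    strong edges, smoothing noise while preserving contrast. Borders
    are handled periodically (np.roll) for brevity.
    """
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # Nearest-neighbor differences in the four cardinal directions.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Explicit Euler update; dt <= 0.25 keeps the scheme stable.
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Each iteration touches every pixel, which is why direct PDE solution is expensive on serial hardware; the GFA and nonlocal-filter methods in the paper aim to reproduce comparable output at a fraction of this cost.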
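The two-stage nonlocal filter can be sketched as follows. The offset construction itself (the generalized skeletonization described above) is not reproduced here; the sketch assumes an integer offset field is already available, and shows only how applying it reduces to a pixel permutation (a look-up table of source coordinates) followed by an unmodified conventional convolution. All function and parameter names are illustrative.

```python
import numpy as np

def offset_then_filter(img, offsets_y, offsets_x, kernel):
    """Apply an offset field as a pixel permutation, then filter.

    Stage 1: each output pixel reads from (y + dy, x + dx), i.e. a
    look-up table operation on coordinates (clipped at the borders).
    Stage 2: a conventional single-scale convolution runs on the
    permuted image with no modification.
    """
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = np.clip(yy + offsets_y, 0, h - 1)
    src_x = np.clip(xx + offsets_x, 0, w - 1)
    permuted = img[src_y, src_x]          # the pixel permutation
    # Direct convolution with edge-replicated padding (small kernels).
    kh, kw = kernel.shape
    pad = np.pad(permuted, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * pad[i:i + h, j:j + w]
    return out
```

Because stage 2 is an ordinary convolution, any existing filtering hardware or software can be dropped in unchanged, which is the source of the implementation advantage claimed above.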
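The space-variant (log-polar) speed-up mentioned above comes from resampling the image onto a much smaller ring-and-wedge grid before any filtering is done. The toy resampler below illustrates the mechanism only; the ring/wedge counts and nearest-neighbor sampling are illustrative choices, not the architecture used in the paper.

```python
import numpy as np

def logpolar_map(img, n_rings=32, n_wedges=64, r_min=1.0):
    """Toy log-polar resampling (nearest-neighbor).

    Pixel count drops from h*w to n_rings*n_wedges, so downstream
    filtering runs on a far smaller image -- the source of the one to
    three orders of magnitude speed-up cited for space-variant
    architectures.
    """
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cy, cx)
    # Ring radii grow exponentially: logarithmic in radius.
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    thetas = 2 * np.pi * np.arange(n_wedges) / n_wedges
    ys = np.clip((cy + radii[:, None] * np.sin(thetas)).astype(int), 0, h - 1)
    xs = np.clip((cx + radii[:, None] * np.cos(thetas)).astype(int), 0, w - 1)
    return img[ys, xs]  # shape (n_rings, n_wedges)
```

A 128x128 input, for example, maps to a 32x64 grid here: a 8x reduction in pixels before the diffusion-like filtering even begins.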