Harris affine region detector explained

In the fields of computer vision and image analysis, the Harris affine region detector belongs to the category of feature detection. Feature detection is a preprocessing step of several algorithms that rely on identifying characteristic points or interest points so as to make correspondences between images, recognize textures, categorize objects or build panoramas.

Overview

The Harris affine detector can identify similar regions between images that are related through affine transformations and have different illuminations. These affine-invariant detectors should be capable of identifying similar regions in images taken from different viewpoints that are related by a simple geometric transformation: scaling, rotation and shearing. These detected regions have been called both invariant and covariant: the regions are detected invariantly of the image transformation, but the regions change covariantly with the image transformation. Do not dwell too much on these two naming conventions; the important thing to understand is that the design of these interest points makes them compatible across images taken from several viewpoints. Other detectors that are affine-invariant include the Hessian affine region detector, maximally stable extremal regions, the Kadir–Brady saliency detector, edge-based regions (EBR) and intensity-extrema-based regions (IBR).

Mikolajczyk and Schmid (2002) first described the Harris affine detector as it is used today in An Affine Invariant Interest Point Detector.[1] Earlier works in this direction include the use of affine shape adaptation by Lindeberg and Garding for computing affine-invariant image descriptors and, in this way, reducing the influence of perspective image deformations,[2] the use of affine-adapted feature points for wide baseline matching by Baumberg,[3] and the first use of scale-invariant feature points by Lindeberg;[4] [5] see [6] for an overview of the theoretical background. The Harris affine detector relies on the combination of corner points detected through Harris corner detection, multi-scale analysis through Gaussian scale space and affine normalization using an iterative affine shape adaptation algorithm. The algorithm follows an iterative approach to detecting these regions:

  1. Identify initial region points using scale-invariant Harris–Laplace detector.
  2. For each initial point, normalize the region to be affine invariant using affine shape adaptation.
  3. Iteratively estimate the affine region: select the proper integration scale and differentiation scale, and spatially localize the interest point.
  4. Update the affine region using these scales and spatial localizations.
  5. Repeat step 3 if the stopping criterion is not met.

Algorithm description

Harris–Laplace detector (initial region points)

The Harris affine detector relies heavily on both the Harris measure and a Gaussian scale space representation. Therefore, a brief examination of both follows. For more exhaustive derivations, see corner detection and Gaussian scale space or their associated papers.[5] [7]

Harris corner measure

The Harris corner detector algorithm relies on a central principle: at a corner, the image intensity will change largely in multiple directions. This can alternatively be formulated by examining the changes of intensity due to shifts in a local window. Around a corner point, the image intensity will change greatly when the window is shifted in an arbitrary direction. Following this intuition and through a clever decomposition, the Harris detector uses the second moment matrix as the basis of its corner decisions. (See corner detection for a more complete derivation). The matrix

A, also called the autocorrelation matrix, has values closely related to the derivatives of image intensity:

A(\mathbf{x}) = \sum_{p,q} w(p,q) \begin{bmatrix} I_x^2(p,q) & I_x I_y(p,q) \\ I_x I_y(p,q) & I_y^2(p,q) \end{bmatrix}

where I_x and I_y are the respective derivatives (of pixel intensity) in the x and y direction at point (p, q); p and q are the position parameters of the weighting function w. The off-diagonal entries are the product of I_x and I_y, while the diagonal entries are squares of the respective derivatives. The weighting function w(x,y) can be uniform, but is more typically an isotropic, circular Gaussian,

w(x,y) = g(x,y,\sigma) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}
that acts to average in a local region while weighting those values near the center more heavily.

As it turns out, this A matrix describes the shape of the autocorrelation measure as due to shifts in window location. Thus, if we let \lambda_1 and \lambda_2 be the eigenvalues of A, then these values will provide a quantitative description of how the autocorrelation measure changes in space: its principal curvatures. As Harris and Stephens (1988) point out, the A matrix centered on corner points will have two large, positive eigenvalues.[7] Rather than extracting these eigenvalues using methods like singular value decomposition, the Harris measure based on the trace and determinant is used:

R = \det(A) - \alpha \operatorname{trace}^2(A) = \lambda_1 \lambda_2 - \alpha (\lambda_1 + \lambda_2)^2

where \alpha is a constant. Corner points have large, positive eigenvalues and would thus have a large Harris measure. Thus, corner points are identified as local maxima of the Harris measure that are above a specified threshold:

\begin{align} \{x_c\} &= \{x_c \mid R(x_c) > R(x_i),\ \forall x_i \in W(x_c)\}, \\ R(x_c) &> t_{\mathrm{threshold}} \end{align}

where \{x_c\} is the set of all corner points, R(x) is the Harris measure calculated at x, W(x_c) is an 8-neighbor set centered on x_c and t_{\mathrm{threshold}} is a specified threshold.
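The Harris measure above can be sketched in a few lines of Python. The following is a minimal illustration using NumPy and SciPy; the Sobel derivative choice, the parameter names and the threshold fraction are illustrative assumptions, not the reference implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, sobel

def harris_response(image, sigma=1.0, alpha=0.04):
    """Harris measure R = det(A) - alpha * trace(A)^2, where A is the
    second moment matrix built from Gaussian-weighted derivative products."""
    Ix = sobel(image.astype(float), axis=1)   # derivative in x
    Iy = sobel(image.astype(float), axis=0)   # derivative in y
    # Gaussian weighting of the derivative products gives the entries of A
    Axx = gaussian_filter(Ix * Ix, sigma)
    Ayy = gaussian_filter(Iy * Iy, sigma)
    Axy = gaussian_filter(Ix * Iy, sigma)
    det_A = Axx * Ayy - Axy ** 2
    trace_A = Axx + Ayy
    return det_A - alpha * trace_A ** 2

def corner_points(R, threshold):
    """Corner points: local maxima of R (3x3, i.e. 8-neighbor) above a threshold."""
    local_max = (R == maximum_filter(R, size=3))
    return np.argwhere(local_max & (R > threshold))

# Toy example: a bright square produces strong responses at its four corners
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
pts = corner_points(R, threshold=0.1 * R.max())
```

On this synthetic image the four corners of the square appear as local maxima of R, while the straight edges, which have one small eigenvalue, are suppressed by the determinant term.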

Gaussian scale-space

A Gaussian scale space representation of an image is the set of images that result from convolving a Gaussian kernel of various sizes with the original image. In general, the representation can be formulated as:

L(\mathbf{x}, s) = G(s) * I(\mathbf{x})

where G(s) is an isotropic, circular Gaussian kernel as defined above. The convolution with a Gaussian kernel smooths the image using a window the size of the kernel. A larger scale, s, corresponds to a smoother resultant image. Mikolajczyk and Schmid (2001) point out that derivatives and other measurements must be normalized across scales.[8] A derivative of order m, D_{i_1,\ldots,i_m}, must be normalized by a factor s^m in the following manner:

D_{i_1,\ldots,i_m}(\mathbf{x}, s) = s^m L_{i_1,\ldots,i_m}(\mathbf{x}, s)

These derivatives, or any arbitrary measure, can be adapted to a scale space representation by calculating the measure recursively over a set of scales, where the nth scale is s_n = k^n s_0. See scale space for a more complete description.
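The scale-normalized derivative and the geometric scale set can be sketched as follows. This is a minimal Python illustration with NumPy and SciPy; the order m = 2 (a scale-normalized Laplacian) and the toy image are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_laplacian(image, s):
    """Order m = 2 Gaussian derivatives normalized by s**m.
    gaussian_filter with order=(a, b) differentiates the smoothed
    image a times along rows and b times along columns."""
    Lxx = gaussian_filter(image, s, order=(0, 2))
    Lyy = gaussian_filter(image, s, order=(2, 0))
    return s ** 2 * (Lxx + Lyy)  # scale-normalized measure

# Geometric progression of scales: s_n = k^n * s_0
s0, k = 1.0, 1.4
scales = [s0 * k ** n for n in range(5)]

# Evaluate the normalized measure across the whole scale set
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0  # toy image: a bright square
responses = [normalized_laplacian(img, s) for s in scales]
```

Without the s ** 2 factor the derivative magnitudes would decay with increasing smoothing, so responses at different scales could not be compared directly.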

Combining Harris detector across Gaussian scale-space

The Harris–Laplace detector combines the traditional 2D Harris corner detector with the idea of a Gaussian scale space representation in order to create a scale-invariant detector. Harris-corner points are good starting points because they have been shown to have good rotational and illumination invariance in addition to identifying the interesting points of the image.[9] However, the points are not scale invariant and thus the second-moment matrix must be modified to reflect a scale-invariant property. Let us denote,

M = \mu(\mathbf{x}, \sigma_I, \sigma_D)

as the scale-adapted second-moment matrix used in the Harris–Laplace detector:

M = \mu(\mathbf{x}, \sigma_I, \sigma_D) = \sigma_D^2\, g(\sigma_I) * \begin{bmatrix} L_x^2(\mathbf{x}, \sigma_D) & L_x L_y(\mathbf{x}, \sigma_D) \\ L_x L_y(\mathbf{x}, \sigma_D) & L_y^2(\mathbf{x}, \sigma_D) \end{bmatrix}

[10]

where g(\sigma_I) is the Gaussian kernel of scale \sigma_I and \mathbf{x} = (x, y). Similar to the Gaussian scale space, L(\mathbf{x}) is the Gaussian-smoothed image. The * operator denotes convolution. L_x(\mathbf{x}, \sigma_D) and L_y(\mathbf{x}, \sigma_D) are the derivatives in their respective direction applied to the smoothed image and calculated using a Gaussian kernel with scale \sigma_D. In terms of our Gaussian scale-space framework, the \sigma_I parameter determines the current scale at which the Harris corner points are detected.

Building upon this scale-adapted second-moment matrix, the Harris–Laplace detector is a twofold process: applying the Harris corner detector at multiple scales and automatically choosing the characteristic scale.

Multi-scale Harris corner points

The algorithm searches over a fixed number of predefined scales. This set of scales is defined as:

\{\sigma_1, \ldots, \sigma_n\} = \{k^1 \sigma_0, \ldots, k^n \sigma_0\}

Mikolajczyk and Schmid (2004) use k = 1.4. For each integration scale, \sigma_I, chosen from this set, the appropriate differentiation scale is chosen to be a constant factor of the integration scale: \sigma_D = s \sigma_I. Mikolajczyk and Schmid (2004) used s = 0.7.[10] Using these scales, the interest points are detected using a Harris measure on the \mu(\mathbf{x}, \sigma_I, \sigma_D) matrix. The cornerness, like the typical Harris measure, is defined as:

\mathrm{cornerness} = \det(\mu(\mathbf{x}, \sigma_I, \sigma_D)) - \alpha \operatorname{trace}^2(\mu(\mathbf{x}, \sigma_I, \sigma_D))

Like the traditional Harris detector, corner points are those local (8 point neighborhood) maxima of the cornerness that are above a specified threshold.
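The scale-adapted cornerness can be sketched as below, a Python illustration assuming NumPy and SciPy; the function name, the s = 0.7 coupling and the toy image are from the text, while the implementation details are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_adapted_cornerness(image, sigma_i, s=0.7, alpha=0.04):
    """Cornerness from mu(x, sigma_I, sigma_D): derivatives at
    sigma_D = s * sigma_I, Gaussian integration window g(sigma_I),
    and the sigma_D**2 normalization factor."""
    sigma_d = s * sigma_i
    Lx = gaussian_filter(image.astype(float), sigma_d, order=(0, 1))
    Ly = gaussian_filter(image.astype(float), sigma_d, order=(1, 0))
    # sigma_D^2 g(sigma_I) * [gradient outer products]
    mu_xx = sigma_d ** 2 * gaussian_filter(Lx * Lx, sigma_i)
    mu_yy = sigma_d ** 2 * gaussian_filter(Ly * Ly, sigma_i)
    mu_xy = sigma_d ** 2 * gaussian_filter(Lx * Ly, sigma_i)
    det_mu = mu_xx * mu_yy - mu_xy ** 2
    tr_mu = mu_xx + mu_yy
    return det_mu - alpha * tr_mu ** 2

# Evaluate over the predefined scale set sigma_n = k^n * sigma_0
sigma_0, k = 1.0, 1.4
img = np.zeros((24, 24))
img[6:18, 6:18] = 1.0  # toy image with four corners
stack = [scale_adapted_cornerness(img, sigma_0 * k ** n) for n in range(4)]
```

Each element of `stack` is the cornerness map at one integration scale; the multi-scale Harris points are the thresholded 8-neighbor maxima of each map.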

Characteristic scale identification

An iterative algorithm based on Lindeberg (1998) both spatially localizes the corner points and selects the characteristic scale.[5] The iterative search has three key steps, which are carried out for each point \mathbf{x} that was initially detected at scale \sigma_I by the multi-scale Harris detector (k indicates the kth iteration):

  1. Choose the integration scale \sigma_I^{(k+1)} that maximizes the Laplacian of Gaussian (LoG) over a predefined range of neighboring scales. The neighboring scales are typically chosen from within a two scale-space neighborhood. That is, if the original points were detected using a scaling factor of 1.4 between successive scales, a two scale-space neighborhood is the range t \in [0.7, \ldots, 1.4], and the Gaussian scales examined are \sigma_I^{(k+1)} = t \sigma_I^{(k)}. The LoG measure is defined as:

     |\operatorname{LoG}(\mathbf{x}, \sigma_I)| = \sigma_I^2 \left| L_{xx}(\mathbf{x}, \sigma_I) + L_{yy}(\mathbf{x}, \sigma_I) \right|

     where L_{xx} and L_{yy} are the second derivatives in their respective directions.[11] The \sigma_I^2 factor (as discussed above in Gaussian scale-space) normalizes the LoG across scales and makes these measures comparable, thus making a maximum relevant. Mikolajczyk and Schmid (2001) demonstrate that the LoG measure attains the highest percentage of correctly detected corner points in comparison to other scale-selection measures.[8] The scale which maximizes this LoG measure in the two scale-space neighborhood is deemed the characteristic scale, \sigma_I^{(k+1)}, and is used in subsequent iterations. If no extremum of the LoG is found, the point is discarded from future searches.
  2. Choose the spatial position \mathbf{x}^{(k+1)} that maximizes the Harris corner measure (cornerness as defined above) within an 8×8 local neighborhood.
  3. Stop if \sigma_I^{(k+1)} = \sigma_I^{(k)} and \mathbf{x}^{(k+1)} = \mathbf{x}^{(k)}.

If the stopping criterion is not met, then the algorithm repeats from step 1 using the new

k+1

points and scale. When the stopping criterion is met, the found points represent those that maximize the LoG across scales (scale selection) and maximize the Harris corner measure in a local neighborhood (spatial selection).
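The scale-selection step can be sketched as follows, a Python illustration with NumPy and SciPy; the discretization of t and the decision to discard boundary maxima as "no extremum" are assumptions made for the sketch:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def norm_log(image, sigma):
    """Scale-normalized |LoG| = sigma^2 * |Lxx + Lyy|."""
    Lxx = gaussian_filter(image, sigma, order=(0, 2))
    Lyy = gaussian_filter(image, sigma, order=(2, 0))
    return sigma ** 2 * np.abs(Lxx + Lyy)

def characteristic_scale(image, x, y, sigma_i,
                         t_range=(0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4)):
    """Search sigma^(k+1) = t * sigma^(k) over a two scale-space
    neighborhood; return the scale maximizing |LoG| at (x, y),
    or None when the maximum sits on the search border (no extremum,
    so the point would be discarded)."""
    responses = [norm_log(image, t * sigma_i)[y, x] for t in t_range]
    best = int(np.argmax(responses))
    if best in (0, len(t_range) - 1):
        return None
    return t_range[best] * sigma_i

# A Gaussian blob of standard deviation 4: its normalized LoG response
# at the center peaks near sigma = 4
X, Y = np.meshgrid(np.arange(33, dtype=float), np.arange(33, dtype=float))
blob = np.exp(-((X - 16) ** 2 + (Y - 16) ** 2) / (2 * 4.0 ** 2))
sigma_char = characteristic_scale(blob, 16, 16, sigma_i=4.0)
```

For this blob the selected characteristic scale is close to the blob's own standard deviation, which is exactly the behavior the iterative search exploits.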

Affine-invariant points

Mathematical theory

The Harris–Laplace detected points are scale invariant and work well for isotropic regions that are viewed from the same viewing angle. In order to be invariant to arbitrary affine transformations (and viewpoints), the mathematical framework must be revisited. The second-moment matrix

\mu

is defined more generally for anisotropic regions:

\mu(\mathbf{x}, \Sigma_I, \Sigma_D) = \det(\Sigma_D)\, g(\Sigma_I) * \left( \nabla L(\mathbf{x}, \Sigma_D)\, \nabla L(\mathbf{x}, \Sigma_D)^T \right)

where \Sigma_I and \Sigma_D are covariance matrices defining the integration and differentiation Gaussian kernel scales. Although this may look significantly different from the second-moment matrix in the Harris–Laplace detector, it is in fact identical. The earlier \mu matrix was the 2D-isotropic version in which the covariance matrices \Sigma_I and \Sigma_D were 2x2 identity matrices multiplied by factors \sigma_I and \sigma_D, respectively. In the new formulation, one can think of Gaussian kernels as multivariate Gaussian distributions as opposed to a uniform Gaussian kernel. A uniform Gaussian kernel can be thought of as an isotropic, circular region. Similarly, a more general Gaussian kernel defines an ellipsoid; in fact, the eigenvectors and eigenvalues of the covariance matrix define the rotation and size of the ellipsoid. Thus this representation allows us to completely define an arbitrary elliptical affine region over which we want to integrate or differentiate.

The goal of the affine-invariant detector is to identify regions in images that are related through affine transformations. We thus consider a point x_L and the transformed point x_R = A x_L, where A is an affine transformation. In the case of images, both x_R and x_L live in \mathbb{R}^2 space. The second-moment matrices are related in the following manner:

\begin{align} \mu(x_L, \Sigma_{I,L}, \Sigma_{D,L}) &= A^T \mu(x_R, \Sigma_{I,R}, \Sigma_{D,R}) A \\ M_L &= \mu(x_L, \Sigma_{I,L}, \Sigma_{D,L}) \\ M_R &= \mu(x_R, \Sigma_{I,R}, \Sigma_{D,R}) \\ M_L &= A^T M_R A \\ \Sigma_{I,R} &= A \Sigma_{I,L} A^T \quad \text{and} \quad \Sigma_{D,R} = A \Sigma_{D,L} A^T \end{align}

where \Sigma_{I,b} and \Sigma_{D,b} are the covariance matrices for the b reference frame. If we continue with this formulation and enforce that

\begin{align} \Sigma_{I,L} &= \sigma_I M_L^{-1} \\ \Sigma_{D,L} &= \sigma_D M_L^{-1} \end{align}

where \sigma_I and \sigma_D are scalar factors, one can show that the covariance matrices for the related point are similarly related:

\begin{align} \Sigma_{I,R} &= \sigma_I M_R^{-1} \\ \Sigma_{D,R} &= \sigma_D M_R^{-1} \end{align}

By requiring the covariance matrices to satisfy these conditions, several nice properties arise. One of these properties is that the square root of the second-moment matrix, M^{\tfrac{1}{2}}, will transform the original anisotropic region into isotropic regions that are related simply through a pure rotation matrix R. These new isotropic regions can be thought of as a normalized reference frame. The following equations formulate the relation between the normalized points x'_R and x'_L:

\begin{align} A &= M_R^{-\tfrac{1}{2}} R\, M_L^{\tfrac{1}{2}} \\ x'_R &= M_R^{\tfrac{1}{2}} x_R \\ x'_L &= M_L^{\tfrac{1}{2}} x_L \\ x'_L &= R\, x'_R \end{align}
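The matrix square root used in this normalization can be computed from the eigendecomposition of the (symmetric, positive-definite) second-moment matrix. A minimal NumPy sketch, with a hypothetical example matrix M:

```python
import numpy as np

def matrix_sqrt(M):
    """Square root of a symmetric positive-definite matrix via its
    eigendecomposition M = V diag(w) V^T, so M^(1/2) = V diag(sqrt(w)) V^T."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(w)) @ V.T

# A hypothetical anisotropic second-moment matrix
M = np.array([[4.0, 1.0],
              [1.0, 2.0]])
M_half = matrix_sqrt(M)

# x' = M^(1/2) x maps a point of the anisotropic region into the
# normalized (isotropic) reference frame
x = np.array([1.0, -1.0])
x_norm = M_half @ x
```

By construction M_half @ M_half recovers M, which is the defining property the normalization relies on.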

The rotation matrix can be recovered using gradient methods like those in the SIFT descriptor. As discussed with the Harris detector, the eigenvalues and eigenvectors of the second-moment matrix, M = \mu(\mathbf{x}, \Sigma_I, \Sigma_D), characterize the curvature and shape of the pixel intensities. That is, the eigenvector associated with the largest eigenvalue indicates the direction of largest change and the eigenvector associated with the smallest eigenvalue defines the direction of least change. In the 2D case, the eigenvectors and eigenvalues define an ellipse. For an isotropic region, the region should be circular in shape and not elliptical. This is the case when the eigenvalues have the same magnitude. Thus a measure of the isotropy around a local region is defined as the following:

\mathcal{Q} = \frac{\lambda_{\min}(M)}{\lambda_{\max}(M)}

where \lambda_{\min} and \lambda_{\max} denote the eigenvalues. This measure has the range [0, 1]. A value of 1 corresponds to perfect isotropy.
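The isotropy measure is a one-liner in practice. A small NumPy sketch with two hypothetical second-moment matrices, one circular and one elongated:

```python
import numpy as np

def isotropy_measure(M):
    """Q = lambda_min(M) / lambda_max(M); 1 means perfect isotropy."""
    w = np.linalg.eigvalsh(M)  # eigenvalues in ascending order
    return w[0] / w[-1]

# An isotropic (circular) region gives Q = 1 ...
Q_iso = isotropy_measure(np.eye(2) * 3.0)
# ... while an elongated (elliptical) region gives Q well below 1
Q_aniso = isotropy_measure(np.array([[9.0, 0.0],
                                     [0.0, 1.0]]))
```

The iterative algorithm below keeps adapting the region until this ratio, computed in the normalized frame, is sufficiently close to one.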

Iterative algorithm

Using this mathematical framework, the Harris affine detector algorithm iteratively discovers the second-moment matrix that transforms the anisotropic region into a normalized region in which the isotropic measure is sufficiently close to one. The algorithm uses this shape adaptation matrix, U, to transform the image into a normalized reference frame. In this normalized space, the interest points' parameters (spatial location, integration scale and differentiation scale) are refined using methods similar to the Harris–Laplace detector. The second-moment matrix is computed in this normalized reference frame and should have an isotropic measure close to one at the final iteration. At every kth iteration, each interest region is defined by several parameters that the algorithm must discover: the U^{(k)} matrix, position x^{(k)}, integration scale \sigma_I^{(k)} and differentiation scale \sigma_D^{(k)}. Because the detector computes the second-moment matrix in the transformed domain, it is convenient to denote this transformed position as x_w^{(k)}, where U^{(k)} x_w^{(k)} = x^{(k)}.
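The adaptation loop can be sketched schematically. The following Python sketch (NumPy/SciPy) updates U with \mu^{-1/2} of the second-moment matrix measured in the warped frame; the patch size, scales, stopping threshold and warping details are assumptions for illustration, and the full detector additionally re-selects scales and re-localizes the point at each iteration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def second_moment_matrix(patch, sigma_i=1.5, s=0.7):
    """2x2 second-moment matrix evaluated at the center of a patch."""
    sigma_d = s * sigma_i
    Lx = gaussian_filter(patch, sigma_d, order=(0, 1))
    Ly = gaussian_filter(patch, sigma_d, order=(1, 0))
    c = patch.shape[0] // 2
    mu_xx = gaussian_filter(Lx * Lx, sigma_i)[c, c]
    mu_xy = gaussian_filter(Lx * Ly, sigma_i)[c, c]
    mu_yy = gaussian_filter(Ly * Ly, sigma_i)[c, c]
    return np.array([[mu_xx, mu_xy], [mu_xy, mu_yy]])

def warp_patch(image, center, U, radius=10):
    """Sample the image through the shape-adaptation matrix U."""
    grid = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
    ys, xs = grid
    pts = U @ np.vstack([xs.ravel(), ys.ravel()])
    coords = [center[0] + pts[1], center[1] + pts[0]]  # (row, col) order
    return map_coordinates(image, coords, order=1).reshape(xs.shape)

def adapt_shape(image, center, n_iter=10, q_stop=0.95):
    """Iteratively update U with mu^(-1/2) until the normalized region
    is nearly isotropic (a schematic, not the full detector)."""
    U = np.eye(2)
    for _ in range(n_iter):
        mu = second_moment_matrix(warp_patch(image, center, U))
        w, V = np.linalg.eigh(mu)
        if w[0] <= 0:                # degenerate region: give up
            break
        if w[0] / w[-1] > q_stop:    # isotropy measure close to 1
            break
        mu_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
        U = mu_inv_sqrt @ U
        U /= np.linalg.svd(U, compute_uv=False)[0]  # keep max singular value 1
    return U

# Example: an elongated (anisotropic) Gaussian blob
X, Y = np.meshgrid(np.arange(41, dtype=float), np.arange(41, dtype=float))
img = np.exp(-((X - 20) ** 2 / (2 * 6.0 ** 2) + (Y - 20) ** 2 / (2 * 2.0 ** 2)))
U = adapt_shape(img, (20, 20), n_iter=5)
```

The returned U encodes the elliptical shape of the region; applying it to the sampling grid stretches the short axis of the blob until the second-moment matrix in the warped frame is close to isotropic.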

Computation and implementation

The computational complexity of the Harris affine detector is broken into two parts: initial point detection and affine region normalization. The initial point detection algorithm, Harris–Laplace, has complexity \mathcal{O}(n), where n is the number of pixels in the image. The affine region normalization algorithm automatically detects the scale and estimates the shape adaptation matrix, U. This process has complexity \mathcal{O}((m + k) p), where p is the number of initial points, m is the size of the search space for the automatic scale selection and k is the number of iterations required to compute the U matrix.[10]

Some methods exist to reduce the complexity of the algorithm at the expense of accuracy. One method is to eliminate the search in the differentiation scale step. Rather than choosing a factor s from a set of factors, the sped-up algorithm chooses the scale to be constant across iterations and points: \sigma_D = s \sigma_I, with s constant. Although this reduction in search space might decrease the complexity, the change can severely affect the convergence of the U matrix.

Analysis

Convergence

One can imagine that this algorithm might identify duplicate interest points at multiple scales. Because the Harris affine algorithm looks at each initial point given by the Harris–Laplace detector independently, there is no discrimination between identical points. In practice, it has been shown that these points will ultimately all converge to the same interest point. After identifying all interest points, the algorithm accounts for duplicates by comparing the spatial coordinates (x), the integration scale \sigma_I, the isotropic measure \tfrac{\lambda_{\min}(U)}{\lambda_{\max}(U)} and the skew.[10] If these interest point parameters are similar within a specified threshold, then the points are labeled duplicates. The algorithm discards all these duplicate points except for the interest point closest to the average of the duplicates. Typically 30% of the Harris affine points are distinct and dissimilar enough not to be discarded.[10]

Mikolajczyk and Schmid (2004) showed that often the initial points (40%) do not converge. The algorithm detects this divergence by stopping the iterative algorithm if the inverse of the isotropic measure is larger than a specified threshold: \tfrac{\lambda_{\max}(U)}{\lambda_{\min}(U)} > t_{\mathrm{diverge}}. Mikolajczyk and Schmid (2004) use t_{\mathrm{diverge}} = 6. Of the points that did converge, the typical number of required iterations was 10.[1]

Quantitative measure

Quantitative analysis of affine region detectors takes into account both the accuracy of point locations and the overlap of regions across two images. Mikolajczyk and Schmid (2004) extend the repeatability measure of Schmid et al. (1998) as the ratio of point correspondences to the minimum number of detected points of the two images.[10] [12]

R_{\mathrm{score}} = \frac{C(A, B)}{\min(n_A, n_B)}

where C(A, B) is the number of corresponding points in images A and B, and n_A and n_B are the numbers of detected points in the respective images. Because each image represents 3D space, it might be the case that one image contains objects that are not in the second image and whose interest points therefore have no chance of corresponding. In order to make the repeatability measure valid, one must remove these points and consider only points that lie in both images; n_A and n_B count only those points such that x_A = H x_B, where H is the homography matrix relating the pair of images. Two points, x_A and x_B, are said to correspond if the detected point in one image maps onto the detected point in the other under H.
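The repeatability score can be sketched as follows, a Python illustration using NumPy; the pixel tolerance used to accept a correspondence is an assumption of the sketch (the published evaluation also checks region overlap, which is omitted here):

```python
import numpy as np

def repeatability(points_a, points_b, H, tol=1.5):
    """R_score = C(A, B) / min(n_A, n_B). A point x_B counts as a
    correspondence when H x_B lands within `tol` pixels of some
    detected point x_A (tolerance is an illustrative assumption)."""
    pa = np.asarray(points_a, float)
    pb = np.asarray(points_b, float)
    # Map B's points into A's frame via the homography: x_A = H x_B
    homog = np.hstack([pb, np.ones((len(pb), 1))]) @ H.T
    mapped = homog[:, :2] / homog[:, 2:3]
    matched = sum(
        1 for p in mapped
        if len(pa) and np.min(np.linalg.norm(pa - p, axis=1)) < tol
    )
    return matched / min(len(pa), len(pb))

# Identical point sets related by the identity homography repeat perfectly
H_identity = np.eye(3)
pts1 = [[1.0, 2.0], [5.0, 5.0], [9.0, 1.0]]
score = repeatability(pts1, pts1, H_identity)
```

With identical detections and an identity homography the score is 1.0; as detections drift apart or disappear, the score falls toward 0.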

Robustness to affine and other transformations

Mikolajczyk et al. (2005) have done a thorough analysis of several state-of-the-art affine region detectors: Harris affine, Hessian affine, MSER,[13] IBR & EBR[14] and salient[15] detectors.[16] Mikolajczyk et al. analyzed both structured images and textured images in their evaluation. Linux binaries of the detectors and their test images are freely available at their webpage. A brief summary of the results of Mikolajczyk et al. (2005) follows; see A comparison of affine region detectors for a more quantitative analysis.

General trends

Applications

Software packages

K. Mikolajczyk maintains a web page that contains Linux binaries of the Harris-affine detector in addition to other detectors and descriptors. Matlab code is also available that can be used to illustrate and compute the repeatability of various detectors. Code and images are also available to duplicate the results found in the Mikolajczyk et al. (2005) paper.

External links

See also

References

  1. Mikolajczyk, K. and Schmid, C. 2002. An affine invariant interest point detector. In Proceedings of the 8th International Conference on Computer Vision, Vancouver, Canada. Archived 2004-07-23 at https://web.archive.org/web/20040723195525/http://vasc.ri.cmu.edu/~hebert/04AP/mikolajc_ECCV2002.pdf
  2. http://kth.diva-portal.org/smash/record.jsf?pid=diva2%3A472972&dswid=6231 T. Lindeberg and J. Garding (1997). "Shape-adapted smoothing in estimation of 3-D depth cues from affine distortions of local 2-D structure". Image and Vision Computing 15: pp. 415–434.
  3. http://citeseer.ist.psu.edu/baumberg00reliable.html A. Baumberg (2000). "Reliable feature matching across widely separated views". Proceedings of IEEE Conference on Computer Vision and Pattern Recognition: pages I:1774 - 1781.
  4. http://www.csc.kth.se/~tony/book.html Lindeberg, Tony, Scale-Space Theory in Computer Vision, Kluwer Academic Publishers, 1994
  5. http://kth.diva-portal.org/smash/record.jsf?pid=diva2%3A453064&dswid=7766 T. Lindeberg (1998). "Feature detection with automatic scale selection". International Journal of Computer Vision 30 (2): pp. 77 - 116.
  6. Lindeberg, T. (2008). "Scale-space". In Wah, Benjamin (ed.), Encyclopedia of Computer Science and Engineering, Vol. IV, pp. 2495–2504. John Wiley and Sons. doi:10.1002/9780470050118.ecse609. ISBN 978-0470050118. http://kth.diva-portal.org/smash/record.jsf?pid=diva2%3A441147&dswid=-488
  7. http://www.csse.uwa.edu.au/~pk/research/matlabfns/Spatial/Docs/Harris/A_Combined_Corner_and_Edge_Detector.pdf C. Harris and M. Stephens (1988). "A combined corner and edge detector". Proceedings of the 4th Alvey Vision Conference: pages 147 - 151.
  8. https://robotics.caltech.edu/readinggroup/vision/mikolajcICCV2001.pdf K. Mikolajczyk and C. Schmid. Indexing based on scale invariant interest points. In Proceedings of the 8th International Conference on Computer Vision, Vancouver, Canada, pages 525-531, 2001.
  9. Schmid, C., Mohr, R., and Bauckhage, C. 2000. Evaluation of interest point detectors. International Journal of Computer Vision, 37(2):151–172.
  10. https://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/mikolajczyk_ijcv2004.pdf Mikolajczyk, K. and Schmid, C. 2004. Scale & affine invariant interest point detectors. International Journal on Computer Vision 60(1):63-86.
  11. "Spatial Filters: Laplacian/Laplacian of Gaussian". Archived 2007-11-20 at https://web.archive.org/web/20071120014339/http://www.cee.hw.ac.uk/hipr/html/log.html
  12. C. Schmid, R. Mohr, and C. Bauckhage. Comparing and evaluating interest points. In International Conference on Computer Vision, pp. 230–235, 1998.
  13. https://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/matas_bmvc2002.pdf J.Matas, O. Chum, M. Urban, and T. Pajdla, Robust wide baseline stereo from maximally stable extremal regions. In BMVC p. 384-393, 2002.
  14. http://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/tuytelaars_ijcv2004.pdf T. Tuytelaars and L. Van Gool, Matching widely separated views based on affine invariant regions. In IJCV 59(1):61-85, 2004.
  15. https://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/kadir04.pdf T. Kadir, A. Zisserman, and M. Brady, An affine invariant salient region detector. In ECCV p. 404-416, 2004.
  16. https://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/vibes_ijcv2004.pdf K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir and L. Van Gool, A comparison of affine region detectors. In IJCV 65(1/2):43-72, 2005
  17. http://staff.science.uva.nl/~gevers/pub/overview.pdf
  18. https://www.liacs.nl/home/mlew/mir.survey16b.pdf R. Datta, J. Li, and J. Z. Wang, "Content-based image retrieval – Approaches and trends of the new age," In Proc. Int. Workshop on Multimedia Information Retrieval, pp. 253–262, 2005.
  19. https://www.robots.ox.ac.uk/~vgg/publications/papers/sivic03.pdf J. Sivic and A. Zisserman. Video google: A text retrieval approach to object matching in videos. In Proceedings of the International Conference on Computer Vision, Nice, France, 2003.
  20. http://www.robots.ox.ac.uk/~vgg/publications/papers/sivic04b.pdf J. Sivic and A. Zisserman. Video data mining using configurations of viewpoint invariant regions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington DC, USA, pp. 488–495, 2004.
  21. http://lear.inrialpes.fr/people/triggs/events/iccv03/cdrom/iccv03/0634_dorko.pdf G. Dorko and C. Schmid. Selection of scale invariant neighborhoods for object class recognition. In Proceedings of International Conference on Computer Vision, Nice, France, pp. 634–640, 2003.
  22. Beril Sirmacek and Cem Unsalan (January 2011). "A probabilistic framework to detect buildings in aerial and satellite images". IEEE Transactions on Geoscience and Remote Sensing 49(1): 211–221. doi:10.1109/TGRS.2010.2053713.