Faculty: Robert Sablatnig

The term image registration describes the procedure of aligning multiple images acquired by different modalities/sensors, at different points in time, or from different viewpoints (1). In this dissertation, the images stem from different sensors. If two images are considered, one is termed the source image, while the other is referred to as the target image. The overall aim of image registration approaches is to determine a transformation between the source and target images; numerous approaches for finding such a transformation have been proposed over the last decades (1,2). According to (1), the process of image registration involves the following steps: feature detection and matching, transform model estimation, and image transformation. The first stage – feature detection and matching – can be categorized into area-based (or intensity-based) methods and feature-based methods. Area-based approaches establish a dense correspondence between the target and source images, whereby the entire image domain is considered. According to (1) and (2), these techniques are usually employed for medical images, since such images contain less salient image content than, for instance, remote sensing images. Similarity measures used by area-based techniques include the normalized cross correlation and the sum of squared differences. Feature-based approaches, in contrast, consider correspondences between a subset of control points in the source and target images. It is stated in (1) that feature-based approaches are preferable if an image contains a sufficient amount of salient image points, since area-based techniques tend to produce wrong registration results if the target and source images contain smooth regions. Feature-based registration methods consider only salient control points and find corresponding control points by applying feature matching techniques.
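The two area-based similarity measures named above can be sketched in a few lines of NumPy (the patch values are made up for illustration; real implementations evaluate these measures densely over a search window):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences: lower means more similar."""
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

def ncc(a, b):
    """Normalized cross correlation: 1.0 means perfectly correlated."""
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A toy patch compared against a brightness-shifted copy of itself:
patch = np.array([[10, 20], [30, 40]])
shifted = patch + 5
# NCC is invariant to the additive brightness shift, SSD is not.
```

This also illustrates why the choice of measure matters for multi-sensor data: NCC tolerates linear intensity changes between source and target, whereas SSD implicitly assumes identical intensities.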
Salient image content can be found by applying feature detection approaches such as the Harris corner detector or the Difference of Gaussians. The detected features have to be represented in a way that is invariant to the assumed degradations (1) – for instance illumination changes between the source and target image, or different camera lens distortions. For example, the well-known SIFT (3) feature descriptor can be used for a scale-invariant description of the image content. The features found in the reference and sensed images can then be matched with appropriate methods. Hybrid methods also exist, which combine area-based and feature-based approaches: for instance, in (4) such a hybrid algorithm is used to enable a robust registration of computed tomography images. After the matching between corresponding image regions, the mapping function is determined. The type of the mapping (or transformation) model should be chosen according to the assumed geometric distortion. The deformation models used for image registration are either inspired by physical models – for example diffusion-based models – or based on interpolation theory. One of the most important groups of approaches falling into the latter category are Radial Basis Functions (RBFs) (2). One example of an RBF approach is the Thin-Plate Spline (TPS) technique, which is capable of handling local deformations. TPS-based registration provides accurate registration results, but can require a high computational load (1). Global transformations, on the other hand, are easier to compute, but can only handle global deformations such as translation or rotation.
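As a concrete illustration of the RBF idea, the following NumPy sketch fits a TPS mapping to a handful of control-point correspondences by solving the standard TPS linear system (function names and the toy points are my own; this is a minimal sketch of the technique, not the implementation used in the cited works):

```python
import numpy as np

def tps_kernel(r):
    """TPS radial basis U(r) = r^2 log(r), with U(0) = 0."""
    out = np.zeros_like(r)
    mask = r > 0
    out[mask] = r[mask] ** 2 * np.log(r[mask])
    return out

def fit_tps(src, dst):
    """Fit 2D TPS coefficients mapping src control points onto dst."""
    n = src.shape[0]
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
    K = tps_kernel(d)                        # kernel matrix, n x n
    P = np.hstack([np.ones((n, 1)), src])    # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b), src        # coefficients and control points

def tps_transform(pts, coeffs, ctrl):
    """Apply a fitted TPS mapping to arbitrary 2D points."""
    n = ctrl.shape[0]
    d = np.linalg.norm(pts[:, None, :] - ctrl[None, :, :], axis=2)
    U = tps_kernel(d)
    P = np.hstack([np.ones((pts.shape[0], 1)), pts])
    return U @ coeffs[:n] + P @ coeffs[n:]

# Toy correspondences (hypothetical values): four control points
# displaced by a pure translation of (1, 2).
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
dst = src + np.array([1., 2.])
coeffs, ctrl = fit_tps(src, dst)
warped = tps_transform(np.array([[0.5, 0.5]]), coeffs, ctrl)
# warped is approximately (1.5, 2.5): for a purely global deformation,
# the TPS solution reduces to its affine part.
```

The interpolation exactly reproduces the control-point correspondences, while the kernel terms absorb any local deformation; the O(n^3) solve over all control points is one source of the high computational load mentioned above.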

Selected publications:

1. Zitova, B., Flusser, J. 2003. Image registration methods: a survey. Image and Vision Computing 21(11): 977-1000.
2. Sotiras, A., Paragios, N. 2012. Deformable image registration: A survey. INRIA Tech. Rep. RR-7919. http://hal.inria.fr/hal-00684715/PDF/RR-7919.pdf
3. Lowe, D.G. 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2): 91-110.
4. Camara, O., Delso, G., Colliot, O., Moreno-Ingelmo, A., Bloch, I. 2007. Explicit incorporation of prior anatomical information into a nonrigid registration of thoracic and abdominal CT and 18-FDG whole-body emission PET images. IEEE Transactions on Medical Imaging 26(2): 164-178.


  • The proposed thesis will benefit from the documented collaborations of this faculty member (Marc Pollefeys, ETH Zürich; Henri Maître, Ecole Nationale Superieure des Telecommunications, France; Horace H.S. Ip, City University of Hong Kong, China; Horst Bischof, TU Graz; Václav Hlaváč, Czech Technical University in Prague).
  • Developed algorithms and data handling processes will be applied to data resulting from elemental, molecular and spectroscopic imaging, especially from G. Schütz, H.U. Dodt, H. Hutter and M. Marchetti-Deschmann (all TU Vienna).