

PFG 2013/6, 0589–0602. Article. Stuttgart, December 2013

© 2013 E. Schweizerbart'sche Verlagsbuchhandlung, Stuttgart, Germany. www.schweizerbart.de. DOI: 10.1127/1432-8364/2013/0202. 1432-8364/13/0202 $ 3.50

Integrated 3D Range Camera Self-Calibration

Patrick Westfeld & Hans-Gerd Maas, Dresden

Keywords: range imaging camera, integrated calibration, bundle adjustment, variance component estimation

Summary: An integrated bundle adjustment has been developed to facilitate the precise definition of the geometry of time-of-flight range imaging cameras, including the estimation of range-measurement-specific correction parameters modelling linear, periodic and sensor-position-dependent effects of a distance measurement. The integrated calibration routine jointly adjusts data from both information channels (amplitude and range) and automatically estimates optimum observation weights. The method is based on the flexible principle of self-calibration. It does not require spatial object data, thus avoiding the time-consuming determination of reference distances with superior accuracy. The accuracy analyses carried out using a PMD[vision]® CamCube 2.0 confirm the correctness of the proposed functional contexts, but they also exhibit challenges caused by non-parameterised range-measurement-specific errors. The level of accuracy of the observations is computed by variance component estimation and amounts on average to 1/35 pixel for an amplitude image coordinate measurement and 9.5 mm for a distance measurement. The precision of a 3D point coordinate can be stated as 5 mm after calibration, compared to several centimetres before applying any correction terms. For a depth imaging technology that is influenced by a variety of noise sources, this accuracy level is very promising.

Zusammenfassung: Integrierte Selbstkalibrierung distanzmessender 3D-Kameras. Die entwickelte integrierte Bündelblockausgleichung ermöglicht die Bestimmung der exakten Aufnahmegeometrie distanzmessender 3D-Kameras sowie die Schätzung distanzmessspezifischer Korrekturparameter zur Modellierung linearer, periodischer und sensorpositionsabhängiger Fehleranteile einer Streckenmessung. Die integrierte Kalibrierroutine gleicht in beiden Informationskanälen (Amplitude und Distanz in jedem Pixel) gemessene Größen gemeinsam aus und bestimmt dabei simultan optimale Beobachtungsgewichte. Die Methode basiert auf dem flexiblen Prinzip der Selbstkalibrierung und benötigt keine Objektrauminformation, wodurch insbesondere die aufwändige Ermittlung von Referenzmaßen übergeordneter Genauigkeit entfällt. Die am Beispiel des PMD[vision]® CamCube 2.0 durchgeführten Genauigkeitsuntersuchungen bestätigen die Richtigkeit der aufgestellten funktionalen Zusammenhänge, zeigen aber auch Schwächen aufgrund noch nicht parametrisierter distanzmessspezifischer Fehler. Die durch eine Varianzkomponentenschätzung festgelegten Genauigkeitsniveaus der ursprünglichen Beobachtungen betragen im Mittel 1/35 Pixel für die Amplitudenbildkoordinatenmessung und 9,5 mm für die Streckenmessung. Die Qualität der 3D-Neupunktkoordinaten kann nach einer Kalibrierung mit 5 mm angegeben werden. Im Vergleich bewegt sich diese ohne Anbringen von Korrekturtermen im Bereich einiger Zentimeter. Für die durch eine Vielzahl von meist simultan auftretenden Rauschquellen beeinflusste Tiefenbildtechnologie ist dieser Genauigkeitswert sehr vielversprechend.

1 Introduction

Modulation techniques enable range cameras based on photonic mixer devices or similar principles to simultaneously produce both an amplitude image and a range image. Though monoscopic sensors, they deliver spatially resolved surface data at video rate without the need for stereo image matching, thus coming with the advantage of a considerable reduction in complexity and computing time.

590 Photogrammetrie • Fernerkundung • Geoinformation 6/2013

The use of range cameras as measuring devices requires the modelling of deviations from the ideal projection model. As a result of their inherent design and measurement principle, range cameras simultaneously provide amplitude and range data reconstructed from one measurement signal. The simultaneous integration of all data obtained using a range camera into an integrated calibration approach is a logical consequence and represents the focus of this contribution. On the one hand, the complementary characteristics of the observations allow them to support each other due to the creation of a functional context for the measurement channels. On the other hand, the expansion of the stochastic model to include variance component estimation ensures that the heterogeneous information pool is fully exploited. An increase in accuracy and reliability can be expected from both.

The time-of-flight range imaging principle is introduced in section 2, including the sensor device, the measurement principle and the mathematical fundamentals of the range imaging process. Range cameras are subject to a variety of error sources, which affect both the optical imaging process itself and the distance measurement. Their origin and effects as well as different calibration strategies for correction are treated in section 3. The integrated self-calibrating bundle adjustment approach presented here and the results achieved are described in detail in sections 4 and 5. Finally, the work is summarised and an outlook is given in section 6.

2 Sensor

Time-of-flight range imaging cameras (ToF, RIM, 3D camera; Schwarte et al. 1999) are monoscopic digital cameras. They acquire amplitude and range images simultaneously. An active illumination unit emits modulated near-infrared light, which is backscattered from the object surface to the camera. The solid-state sensor array mounted is a photonic mixer device built in CCD/CMOS technology (PMD, Spirig et al. 1995, Schwarte 1996). At each detector pixel, a distance-based charge carrier separation is performed. This demodulation delivers the phase difference between the emitted and the received signals, and consequently the range from the camera's projection centre to the target. Furthermore, amplitude information as a measure for, i.a., the reflectivity of the corresponding surface points can be obtained.

In the field of RIM sensor technology, ToF-based cameras are currently available with a sensor size of up to 40,000 pixels. Each pixel thus becomes an electro-optical distance-measuring device. Low-cost, compact range cameras combine the practicality of a digital camera with the 3D data acquisition potential of conventional surface measurement systems. Thus, they represent an interesting alternative for many applications such as mobile robotics (Schulze 2010, Mönnich et al. 2011), human-machine interaction (Kolb et al. 2010), automotive applications (Ringbeck et al. 2007, Reich 2011), or human motion analysis (Westfeld & Hempel 2008, Westfeld et al. 2013).

The imaging geometry of a range camera is similar to that of a conventional 2D camera (Fig. 1). The collinearity equations map an object point X(X,Y,Z) to an image point x′(x′,y′) using a central projection:

$$x' = x'_0 - c \cdot \frac{r_{11}(X - X_0) + r_{21}(Y - Y_0) + r_{31}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)} + \Delta x'$$

$$y' = y'_0 - c \cdot \frac{r_{12}(X - X_0) + r_{22}(Y - Y_0) + r_{32}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)} + \Delta y' \quad (1)$$

where
c: focal length
x′0: principal point
Δx′: correction functions
X0: projection centre
r_{r,c}: elements of a rotation matrix R

In addition to amplitude information, range values D are stored at each pixel location. Due to this fact, the inverse mapping, i.e. the projection of an image point into object space, is uniquely defined and can be described by:


$$X = X_0 + (D - \Delta D) \cdot \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix} \cdot \frac{\xi}{\|\xi\|} \quad (2)$$

where

$$\xi = \begin{pmatrix} x' - x'_0 - \Delta x' \\ y' - y'_0 - \Delta y' \\ -c \end{pmatrix}.$$

Δx′(Δx′,Δy′) = f(A1,A2,A3,B1,B2,C1,C2) describes functions for the correction of imaging errors. The additional parameters implemented compensate radial (A1 – A3) and tangential (B1, B2) distortions using the well-established model from Brown (1971) as well as affinity and shear effects (C1, C2; El-Hakim 1986). The distance correction term ΔD should consider systematic errors in the range measurements (section 3).

Fig. 1: Imaging geometry.

3 Error Sources and Calibration Strategies

Range cameras use standard optics for imaging. Thus, the measurement geometry corresponds to a central projection (section 2), and any deviations from the ideal model have to be compensated. Photogrammetric camera calibration techniques are employed to determine actual values for the focal length c and the principal point x′0 as well as for the parameters describing lens distortion and other errors, summarised in Δx′.

The results of the distance measurements are also affected by a number of error sources: Effects caused by temperature and running-in behaviour or multipath propagation can be decreased or even avoided by an adequate measurement setup. Fixed pattern noise (FPN) specifies a constant distance measurement error at each pixel location and is caused by inhomogeneous CMOS material properties (Jähne 2002). Look-up tables or multiple signal sampling implemented by range camera manufacturers are used to compensate FPN effects in real time (Luan 2001). ToF cameras acquire four raw measurements successively in order to calculate one phase image. Any sensor or object movement occurring between those sampling steps leads to motion artefacts due to phase changes (e.g. Lindner & Kolb 2009). Current range camera devices can suppress motion blur on-chip (Ringbeck & Hagebeuker 2007). Scattering (lens flare) describes the internal propagation of the incident light over the PMD array caused by signal reflections between optical filter, lens and sensor. Approaches to model the superimposition of focused light with scattered portions by point spread functions are for example given in Mure-Dubois & Hügli (2007) or Karel et al. (2012). Lichti et al. (2012) propose an empirically parameterised model to compensate for scattering-induced range errors, which is only valid under specific scene conditions. A linear distance correction term considers a shift d0 of the measurement origin, the latter being defined to coincide with the projection centre (Fig. 2), and a scale variation d1 caused by modulation frequency deviations:

$$\Delta D_{lin} = d_0 + d_1 D \quad (3)$$

The modulation of the measurement signal can be influenced by harmonic interferences. A periodic distance correction term approximates the resulting cyclic phase errors (e.g. Lichti et al. 2010). The wavelengths of, e.g., two sinusoidal functions with d3 and d5 as amplitudes correspond to ¼ and ⅛ of a constant modulation wavelength λ; phase shifts are considered by the cosine parts with d2 and d4 as amplitudes:

$$\Delta D_{cyc} = d_2 \cos\left(4 \frac{2\pi}{\lambda} D\right) + d_3 \sin\left(4 \frac{2\pi}{\lambda} D\right) + d_4 \cos\left(8 \frac{2\pi}{\lambda} D\right) + d_5 \sin\left(8 \frac{2\pi}{\lambda} D\right) \quad (4)$$
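As a concrete reading of the linear term (3) and the periodic term (4), the sketch below evaluates both corrections for a raw distance measurement. The parameter values and the default modulation wavelength are illustrative assumptions, not the calibrated values reported later in this paper.

```python
import math

def delta_d_lin(D, d0, d1):
    # Linear term (3): offset d0 of the measurement origin plus scale error d1*D.
    return d0 + d1 * D

def delta_d_cyc(D, d2, d3, d4, d5, lam=15.0):
    # Periodic term (4): cyclic errors whose wavelengths are 1/4 and 1/8 of the
    # modulation wavelength lambda. The default of 15 m is an assumption
    # consistent with the 7.5 m non-ambiguous range of a 20 MHz modulation.
    u = 2.0 * math.pi / lam * D
    return (d2 * math.cos(4.0 * u) + d3 * math.sin(4.0 * u)
            + d4 * math.cos(8.0 * u) + d5 * math.sin(8.0 * u))

# Illustrative (uncalibrated) parameters: a 3 m raw distance is corrected
# following the sign convention of (2), D_corr = D - delta_D.
D = 3.0
delta_D = delta_d_lin(D, d0=0.115, d1=0.028) + delta_d_cyc(D, 0.01, 0.02, 0.005, 0.01)
D_corr = D - delta_D
```

In the adjustment these terms are not applied in isolation; they enter the full correction model ΔD together with the sensor-position-dependent term introduced below.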


Phase shifts can occur due to delays in signal propagation (Fuchs & Hirzinger 2008) or high viewing angles (Böhm & Pattinson 2010). A linear function of the radial distance r′ with a factor d6 can be introduced to correct a distance measurement with respect to the position on the sensor:

$$\Delta D_{x'y'} = d_6 \cdot r' = d_6 \sqrt{(x' - x'_0 - \Delta x')^2 + (y' - y'_0 - \Delta y')^2} \quad (5)$$

The amplitude demodulated at each sensor site is a measure for the total amount of incident light. Karel & Pfeifer (2009) show that a measured distance increases with decreasing reflectivity and model the influence of the amplitude A on the ranging system by a hyperbola with parameters d7 – d9:

$$\Delta D_{amp} = f(A;\ d_7, d_8, d_9) \quad (6)$$

Several calibration strategies are reported in the literature to correct the errors described above. Depending on the observations used, they can be categorised as photogrammetric, sequential and simultaneous methods (Westfeld 2012). A classical photogrammetric test field calibration only determines the interior orientation parameters from amplitude images, e.g. Reulke (2006), Westfeld (2007), Beder & Koch (2008). Sequential approaches go one step further and calibrate the distance measurement after photogrammetric calibration. The required reference is derived from interferometric comparator displacement measurements (Kahlmann et al. 2006), precise external positioning systems, e.g. a robot (Fuchs & Hirzinger 2008), or optical tracking systems like a laser scanner (Chiabrando et al. 2009). Reference values can also be provided indirectly by the network geometry determined in the course of the self-calibration (Karel 2008, Robbins et al. 2009, Böhm & Pattinson 2010), a procedure which reduces time and instrumental effort significantly. A logical extension of the 2-step calibration is the simultaneous determination of all calibration parameters in one integrated self-calibrating bundle adjustment approach. Initial thoughts about the combination of heterogeneous ToF camera observations in one adjustment procedure were published in Westfeld (2007). Westfeld et al. (2009) move a small reference body with white spherical targets through the measurement volume of a range camera. Interior camera orientation parameters and distance calibration terms are estimated within a test-field-calibrating bundle adjustment with fixed object point coordinates. Variance component estimation ensures that the heterogeneous information pool is fully exploited. However, the experimental effort of this study was still high, because the central points of the spheres have to be determined at each position of the calibration body. Lichti et al. (2010) and Lichti & Qi (2012) perform a geometric calibration of range cameras by means of an integrated self-calibrating bundle adjustment from measurements of an 8 m² planar target field, signalised with circles. The centres of the targets in the amplitude images are measured by ellipse fitting; the corresponding range measurements must be derived from the neighbourhood by bilinear interpolation. Due to the special characteristics of the experimental setup, the distance correction model does not comprise a scale error term. Lichti et al. (2012) extend the approach to compensate the scattering bias using a specific two-plane target.

4 Integrated Self-calibrating Bundle Adjustment

4.1 Principle

With respect to the calibration strategies reviewed in section 3, the integrated bundle adjustment approach presented here should have the following properties: 1) The primary aim is to simultaneously integrate all range camera information in one joint functional and stochastic context. 2) To reduce efforts in time and instrumental resources, the method is based on self-calibration. 3) If possible, the observations should not be interpolated. 4) The target field is stable, compact, portable as well as easy to set up and dismantle.


The geometric principle of the calibration approach is shown in Fig. 2. Spheres serve as 3D reference objects. Their centres can be measured easily in the amplitude images, but the corresponding slant range measurements cannot be derived directly from range images. In order to set the geometric relation between both observation channels and to avoid range value interpolation, spheres are modelled by their surface points. Thus, the geometric model is based on the collinearity of the centre coordinates x′ij of a sphere j observed in image i, the projection centre X0i and the corresponding object point Xj. In addition, the integer image positions x′ijk and the stored slant ranges Dijk of all sphere surface points k are used. So the geometric model for distances is based on the intersection points Xjk of the projection rays with the surface of a sphere.

Fig. 2: Geometric model: For each range camera image i, a sphere j is projected onto the image plane. For each surface point k, amplitude information gvijk and range value Dijk are stored at (x′,y′)ijk.

4.2 Functional Model

Amplitude Channel

The centre coordinates of the reference spheres projected onto the sensor can be measured with sub-pixel accuracy in each amplitude image by 2D least squares template matching (LSM), e.g. Ackermann (1984), Grün (1985). The template is generated automatically prior to the image matching process with respect to the amplitude gradient and the dimension of the relevant sphere (Figs. 3a and 3b). 2D LSM is parameterised by two shifts in row and column and one scale only. The approximate values originate from an algorithm for automatic image point detection based on edge detection, thresholding and connectivity analysis (Burkhart 2007). Furthermore, a Student test has been implemented for testing the significance of the LSM parameters. The mean standard deviation of the image point coordinates is 1/25 pixel (Fig. 3c).

Fig. 3: Amplitude image point measurement by 2D LSM. (a) Synthetically generated reference image. (b) Search image with centre cross. (c) Exemplary plot of the a-posteriori standard deviations of the translation parameters based on a single inverse amplitude image. Averaged over all images, the deviations are 1/25 pixel or 1.70 μm with a sensor size of 9.18 mm × 9.18 mm (exaggeration: 200).

The image points x′ij, measured in amplitude images i, form the first type of observation introduced into the bundle adjustment. On the basis of the collinearity equations (1), two observation equations ΦAx′ and ΦAy′ can be formulated per target j:

$$x'_{ij} = \Phi^{Ax'}_{ij}\left(X_{0i}, Y_{0i}, Z_{0i}, \omega_i, \varphi_i, \kappa_i, x'_0, c, \Delta x', X_j, Y_j, Z_j\right)$$

$$y'_{ij} = \Phi^{Ay'}_{ij}\left(X_{0i}, Y_{0i}, Z_{0i}, \omega_i, \varphi_i, \kappa_i, y'_0, c, \Delta y', X_j, Y_j, Z_j\right) \quad (7)$$

Range Channel

A range measurement is performed at each detector site. Thus, and in contrast to the amplitude image point measurements, ToF cameras deliver those observations directly for each sensor element. The distances measured between camera optics and sphere surfaces serve as the second observation type. The task is to automate the segmentation of the observed range values for each sphere.

The centre coordinates obtained from 2D LSM are used to localise the spheres in amplitude and range images. One patch applied to both amplitude and range image is cut out centred at the location of the target determined in the amplitude image, with the patch size depending on the sphere distance. All pixels within this patch are considered to be possible candidates for points on the sphere (Fig. 4a). In the next step, an amplitude threshold dynamically derived by fitting a Gaussian to the histogram of the amplitude image is applied (Fig. 4b). Starting in the centre of the range patch, a radial profile analysis based on neighbouring range value differences is performed for further patch containment (Fig. 4c). A range threshold, which originates from the profile analysis, is applied afterwards (Fig. 4d). RANSAC (random sample consensus, Fischler & Bolles 1981) is used to estimate the parameters of a sphere in order to eliminate gross errors, primarily caused by oversaturation or multi-path effects at object boundaries, and finally calculates the initial set of observations (Fig. 4e). In this process, the probability of a randomly selected data item being part of a sphere model is 99%. The error tolerance for establishing the sphere model compatibility is equal to the known sphere radius. After segmentation and with respect to the experimental setup described in section 5.1, one sphere surface is on average represented by 100 points (min.: 11, max.: 261). A detailed description of the segmentation workflow is given in Westfeld (2012).

Fig. 4: Sphere segmentation stages in amplitude images (top) and colour-coded range images (bottom). Non-valid pixels are coloured in cyan. (a) Rough localisation, (b) histogram analysis in the amplitude image, (c) profile analysis in the range image, (d) thresholding in the range image, (e) inliers of the RANSAC procedure.

The surface points Xjk of a sphere j, with centre Xj and radius r, have been segmented from ToF images i. They satisfy the following general equation of a sphere:

$$0 = (X_{jk} - X_j)^2 + (Y_{jk} - Y_j)^2 + (Z_{jk} - Z_j)^2 - r^2 \quad (8)$$

Substituting Xjk in (8) by the right-hand side of (2) geometrically corresponds to a spatial intersection of a projection ray with the surface of the sphere:


$$\begin{aligned} 0 = &\left(X_{0i} - X_j + \frac{D_{ijk} - \Delta D}{\|\xi_{ijk}\|}\,(r_{11}\xi_1 + r_{12}\xi_2 + r_{13}\xi_3)_{ijk}\right)^2 \\ + &\left(Y_{0i} - Y_j + \frac{D_{ijk} - \Delta D}{\|\xi_{ijk}\|}\,(r_{21}\xi_1 + r_{22}\xi_2 + r_{23}\xi_3)_{ijk}\right)^2 \\ + &\left(Z_{0i} - Z_j + \frac{D_{ijk} - \Delta D}{\|\xi_{ijk}\|}\,(r_{31}\xi_1 + r_{32}\xi_2 + r_{33}\xi_3)_{ijk}\right)^2 - r^2 \end{aligned} \quad (9)$$

Finally, (9) is solved for D and introduced as an observation equation into the bundle adjustment.

Additional Constraints

A 3D rotation can be described by three Euler angles (ω,φ,κ) or, in order to avoid ambiguous trigonometric functions, by a quaternion (q1,q2,q3,q4). The use of quaternions makes sense from a numerical point of view but requires one additional equation per camera position i to enforce an orthogonal rotation matrix Ri:

$$1 = q_{1i}^2 + q_{2i}^2 + q_{3i}^2 + q_{4i}^2 \quad (10)$$

The reference frame of the integrated self-calibrating bundle approach is not uniquely defined by known reference points and should be adjusted as an unconstrained network. The rank defect of the resulting singular system of equations can be removed by including seven additional constraints: 3 translations, 3 rotations, 1 scaling factor, e.g. Luhmann et al. (2006). The scale was determined by two diagonal reference distances across the target field.

Unknown Parameters

The unknown range camera calibration parameters are the focal length c, the principal point x′0 and the parameters of the image correction functions Δx′ (interior orientation parameters) as well as linear, periodic and radial distance correction terms, summarised in ΔD:

$$\Delta D = \Delta D_{lin} + \Delta D_{cyc} + \Delta D_{x'y'} \quad (11)$$

They can be estimated from the observation equations (7) and (9). Reflectance dependencies cannot be modelled reliably due to the use of white spheres as reference objects.

Further, the exterior orientation parameters X0i and (q1,q2,q3,q4)i for camera translations and rotations as well as all 3D sphere centres Xj need to be determined in the integrated bundle adjustment. The sphere radius r is introduced as a constant.

4.3 Stochastic Model

The stochastic model contains information about the accuracy of the functional model, especially the weighting of the observations. It is represented by the variance-covariance matrix Σll of the observations before the adjustment process. Datasets with different types of observations should be separated into groups, and weights should be adjusted in order to tap the full information potential. Usually, information from the instrument manufacturer or from previous accuracy analyses provides the basis for specifying the a-priori variance of a measurement.

The integrated calibration method combines heterogeneous observations with unknown accuracy. The a-priori variance components are adjusted automatically by iterative variance component estimation (VCE). The weights of the observations are given by the quotient of s0² to si². The variance of the unit weight s0 is a constant (in general s0 = 1), and si² are the variances of the observations, namely the variance component sA² for the amplitude image point measurements and sD² for the range measurements. Westfeld & Hempel (2008) already show that an aggregation of all observations of one group with one (adaptive) weight is acceptable for a ToF camera. Σll can be subdivided into two components, i.e. one component per group of observations:

$$\Sigma_{ll} = \begin{pmatrix} \Sigma_A & 0 \\ 0 & \Sigma_D \end{pmatrix} = \begin{pmatrix} s_A^2 I & 0 \\ 0 & s_D^2 I \end{pmatrix} \quad (12)$$

where I: identity matrix
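The reweighting idea behind the variance component estimation of section 4.3 can be sketched as follows. The simplified estimator used here, rescaling each group variance by the quotient v_iᵀP_i v_i / r_i with group redundancy r_i until that quotient reaches s0², is an assumption chosen for illustration; the paper refers to Koch (2004) for the full procedure.

```python
import numpy as np

def vce_update(v_groups, r_groups, s2_prev, s0sq=1.0):
    # One VCE iteration: with group weight P_i = s0^2 / s_i^2, the quotient
    # q_i = v_i^T P_i v_i / r_i should equal s0^2 at convergence; each
    # variance component is rescaled by q_i / s0^2 accordingly.
    s2_new = {}
    for g, v in v_groups.items():
        P = s0sq / s2_prev[g]
        q = P * float(v @ v) / r_groups[g]
        s2_new[g] = s2_prev[g] * q / s0sq
    return s2_new

# Toy residuals for the two observation groups of (12): amplitude image
# coordinates ("A") and distances ("D"); redundancies are assumed values.
v = {"A": np.array([0.02, -0.01]), "D": np.array([8.0, -11.0])}
r = {"A": 2.0, "D": 2.0}
s2 = {"A": 1.0, "D": 1.0}          # a-priori variance components
for _ in range(3):                  # components stabilise within a few iterations
    s2 = vce_update(v, r, s2)
```

In the real adjustment the residuals change after each reweighting, so the procedure alternates between solving the bundle and updating the components.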


set threshold is rejected, and the adjustmentis repeated until no gross error remains (datasnooping).

A student test is calculated after the solu-tion converges to decide whether an intro-duced parameter is significant or not. Non-significant parameters may be excluded fromthe estimation process in order to improve thestrength of the solution. The least squares ad-justment is repeated until all used parametersare significant.

5 Results

5.1 Experimental Setup

A 1 m2 rigid 3D calibration target field was de-signed in order to proof the concept presentedhere (Fig. 5). It consists of 25 white polysty-rene spheres with a radius r of 35 mm, alignedin two depth planes. The spatial point fieldwith roughly known geometry was capturedby a PMD[vision]® CamCube 2.0 (Fig. 6, fo-cal length 12.8 mm, field of view 40° × 40°)following the imaging configuration shown inFig. 7. The integration timewas set to 2,000 μs.The modulation frequency was set to 20 MHz,which corresponds to a non-ambiguous meas-urement range of 7.5 m. Overall 12 convergentimages, four of them rolled against the cameraaxis, ensure improved ray intersections andlower parameter correlations. A further fourcamera positions in distances of up to 5 m al-low for the determination of distance-relatedcorrection terms. Images in larger distancescannot be oriented reliably.

5.2 Internal Accuracy

Calibration Parameters

Tab. 1 lists the interior orientation parametersand the corresponding a-posteriori standarddeviations. All parameters could be deter-mined significantly, except for the radial dis-tortion parameters A2 and A3. Tab. 1 furtherlists the distance correction terms, and Fig. 8shows their influence depending on sensor po-sition and distance value. The additive term d0of ΔDlin is 115 mm, the multiplicative param-

In the course of the VCE, the approxi-mate values for the variance components areimproved within a few iterations. See Koch

(2004) for further information. The remain-ing additional constraints are considered tobe mathematically rigorous by PC = 0 in theextended system of normal equations (Snow

2002, section 4.4).

4.4 Solving the Adjustment Task

The integrated bundle adjustment bases on anextended Gauß-Markov model:

( )T

T T T

1

ˆ ˆ;ˆ2 min

ˆ

C−

− = + =

+ + →

= = −

Ax l v Bx w 0v Pv k Bx w

x A PA B A Plk B P w

(13)

where v: Residualsk: Lagrangian multipliers

The functional model (section 4.2) is re-quired to set up the coefficient matrices A andB, which contain the linearised observationand constraint equations, the reduced obser-vation vector l and the vector of inconsisten-cies w. The weight matrix P as inverse var-iance-covariance matrix Σll defines the sto-chastic model (section 4.3). The extended sys-tem of normal equations is solved iteratively.At each step, the solution vector x is added tothe approximation values of the previous itera-tion until the variances reach a minimum andthe optimisation criterion is fulfilled.

As a least squares adjustment method, theintegrated bundle adjustment delivers infor-mation on the precision, determinability andreliability of the calibration parameters. Thisincludes the a-posteriori variance si of each ofthe parameters as well as the correlation be-tween parameters. In combination with an au-tomatic VCE, a-posteriori variances s1 and s1of the original and adjusted observations canbe stated.

The ratio between the a-posteriori standarddeviation svi of the residuals and the actual re-sidual vi of an observation serves as reliabil-ity measure. Per group, the observation withthe largest normalised residual above a pre-

Patrick Westfeld & Hans-Gerd Maas, Integrated 3D Range Camera Self-Calibration 597

Fig. 5: Target field.

Fig. 6: Inverse amplitude images (top) und colour-coded range images (bottom) as input data. (a)Frontal view. (b – d) Oblique views at a distance of 1.5 m to 4.5 m.

Fig. 7: Network geometry.

598 Photogrammetrie • Fernerkundung • Geoinformation 6/2013

The parameter d1 amounts to 2.8 % of the measured distance. The amplitudes d2 to d5 of the sinusoidal correction function ΔDcyc show a maximum wave deflection of about ±45 mm. The radial factor d6 of ΔDx′y′ causes a correction of up to 20 mm for distance measurements at the image corners.

The individual a-posteriori standard deviations of all distance correction parameters seem to be quite high. However, an interpretation is difficult due to non-parameterised remaining errors like scattering or reflectance. Reducing the error model to only a linear correction term would increase the internal accuracy by one order of magnitude (d0 = -94.13 mm ± 1.235 mm; d1 = -1.29e-2 ± 6.29e-4), but concurrently increase the distance residuals by more than 5 mm.

The not fully parameterised distance error model also results in high correlations of about 0.90 between all distance correction parameters summarised in ΔDlin and ΔDcyc. The correlations between the other parameters were relatively low. Further considerable correlations could not be observed. In particular, no significant correlations between the interior and exterior orientation parameters and the distance correction parameters could be observed; these correlation coefficients range from -0.07 to +0.09 and from -0.13 to +0.10, respectively.

3D Object Coordinates

The a-posteriori standard deviation of the sphere surface points Xjk observed from one camera position is 0.32 mm; with the redundancy, the RMS is improved to 0.13 mm for the sphere centre coordinates Xj.

Residuals

The normally distributed residuals vA of the amplitude image coordinate measurements x′ij do not show any systematic effect. They vary in both coordinate directions with an a-posteriori RMS deviation svi of about 1/40 pixel around the expected value μ = -0.06 μm (Fig. 9a).

Tab. 1: ToF camera calibration parameters xi with their standard deviations sxi.

       c (mm)   x0′ (mm)  y0′ (mm)  A1        A2   A3   B1       B2       C1       C2
xi     12.149   -0.151    -0.258    -2.85e-3  0    0    3.55e-4  7.02e-4  7.26e-4  -2.32e-4
sxi    6.48e-3  8.97e-3   8.59e-3   1.29e-5   fix  fix  2.38e-5  2.38e-5  8.78e-5  6.90e-5

       d0 (mm)  d1        d2 (mm)   d3 (mm)   d4 (mm)  d5 (mm)  d6
xi     -115.82  2.88e-2   -33.18    23.98     -8.56    -2.89    3.17
sxi    15.38    6.33e-3   5.31      4.44      2.09     1.04     0.32

Fig. 8: Effects of the distance correction term (a) on the sensor positions, and (b – e) at image distances of 1.0 m, 2.0 m, 3.0 m and 4.0 m.
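The combined behaviour of the linear and cyclic correction terms over distance can be illustrated with the values of Tab. 1. The functional forms used below are assumptions for this sketch only: a linear part d0 + d1·D and two sinusoidal harmonics with an assumed modulation wavelength of 15 m (20 MHz modulation); they are not the paper's exact formulas, which are defined in the functional model section.

```python
import numpy as np

# Illustrative evaluation of the distance correction (parameter values
# from Tab. 1; functional forms and wavelength are assumptions).
d0, d1 = -115.82, 2.88e-2                        # offset (mm), linear factor
d2, d3, d4, d5 = -33.18, 23.98, -8.56, -2.89     # cyclic amplitudes (mm)
lam = 15000.0                                    # assumed modulation wavelength (mm)

def delta_d(D_mm):
    """Assumed correction (mm) for a measured distance D_mm (mm)."""
    lin = d0 + d1 * D_mm
    cyc = (d2 * np.sin(4 * np.pi * D_mm / lam) + d3 * np.cos(4 * np.pi * D_mm / lam)
           + d4 * np.sin(8 * np.pi * D_mm / lam) + d5 * np.cos(8 * np.pi * D_mm / lam))
    return lin + cyc

for D in (1000.0, 2000.0, 3000.0, 4000.0):       # the image distances of Fig. 8
    print(f"D = {D / 1000:.1f} m: correction {delta_d(D):7.1f} mm")
```

Even under these assumed forms, the correction stays within roughly ±150 mm over the working range, dominated by the constant offset d0, with the cyclic terms contributing the ±45 mm wave deflection noted above.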


Observational Errors

The a-posteriori standard deviations sA,D of the original observations estimated by the VCE, as well as the standard deviations of the adjusted observations, were calculated in the course of the error analysis after the bundle adjustment (Tab. 2).

On average, the a-posteriori standard deviation sA of an image coordinate measurement becomes 1/35 pixel, and is thus slightly better than the LSM precision stated in section 4.2. The average deviation of an original distance measurement is about 9.5 mm. This value corresponds to the level of precision determined from repeated measurements in preliminary investigations (Hempel 2007). The a-posteriori standard deviation of the unit weight is close to the a-priori value s0 = 1.0, which indicates an optimally determined accuracy ratio for both groups of observations. This implies that the a-posteriori variances of the original observations are equal to their a-priori variances.

The standard deviations of the adjusted observations, free of systematic errors, are 1/70 pixel for the image coordinate measurement and about 0.8 mm for the distance measurement.

The remaining residuals vD of the distance observations are shown in Fig. 9b for an adjustment with and without consideration of the distance correction model. The red graph results from a function fitted to the residuals of the uncorrected distance measurements. It indicates a constant offset, a slight distance-related trend and periodic variations. As expected, the RMS of the residuals is as high as 40.86 mm. The deviations can be reduced significantly if distance correction terms are estimated and applied. The black graph of a function fitted to the remaining residuals after a fully parameterised adjustment is nearly a straight line y = 0 = const. This is also confirmed by the normally distributed residuals around the expected value μ = 0.65 mm. The RMS error is reduced by a factor of 6 to 6.49 mm.

Fig. 9: (a) Residuals of the amplitude image coordinate measurements show no systematic effects and are normally distributed around -0.06 μm with an RMS error of 1/40 pixel (exaggeration: 200). (b) Residuals of the distance measurements without correction show systematic effects (red); the RMS error is 40.86 mm. The residuals do not show interpretable effects if distance correction terms are included (black); the RMS error is reduced significantly to 6.49 mm.

Tab. 2: A-posteriori standard deviations s of the original and adjusted observations as well as of the unit weight.

sA (orig.)  sA (adj.)  sD (orig.)  sD (adj.)  s0
1.23 μm     0.65 μm    9.468 mm    0.794 mm   1.00002
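The role of the a-posteriori standard deviation of the unit weight as a global check of the stochastic model can be illustrated with a small sketch. This is synthetic and illustrative, not the paper's data: with realistic a-priori weights the estimate s0² = vᵀPv/(n−u) lies near 1, whereas mis-scaled weights push it far from 1.

```python
import numpy as np

rng = np.random.default_rng(7)

# s0^2 = v^T P v / (n - u) as a global test of the stochastic model.
n, u, sigma = 200, 4, 3.0                        # true observation sigma = 3.0
A = rng.normal(size=(n, u))
l = A @ rng.normal(size=u) + sigma * rng.normal(size=n)

def s0_squared(prior_sigma):
    """A-posteriori variance of unit weight for a given a-priori sigma."""
    P = np.eye(n) / prior_sigma ** 2             # weights from a-priori accuracy
    x_hat = np.linalg.solve(A.T @ P @ A, A.T @ P @ l)
    v = A @ x_hat - l
    return (v @ P @ v) / (n - u)

print(s0_squared(3.0))   # near 1.0: a-priori accuracy realistic
print(s0_squared(1.0))   # near 9.0: observations weighted too optimistically
```

A value of 1.00002, as in Tab. 2, therefore indicates that the variance components determined by the VCE describe the actual accuracy ratio of the amplitude and distance observations very well.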


5.3 External Accuracy

The remaining random measurement deviations and the observational errors as internal precision measures confirm the correctness of the proposed functional and stochastic contexts, but do not reflect the actual physical measurement accuracy. Images of the target field, captured by a Nikon D300 DSLR and processed by photogrammetric bundle adjustment, are used to obtain 3D sphere centre coordinates with superior accuracy. Such reference data can be used to specify the absolute or external accuracy of the proposed integrated calibration method. The root mean square error of the coordinate differences is RMSΔX = 5.3 mm; the minimum deviation is 0.4 mm, the maximum deviation 10.7 mm.

5.4 Comparison of 1- and 2-step Procedures

In order to evaluate the advantages of the integrated approach, the dataset is processed again on the basis of a 2-step calibration. The amplitude image coordinate measurements are introduced into a bundle adjustment, which is parameterised only by the interior and exterior orientation parameters as well as the 3D sphere centres. Based on this network geometry, the distance correction terms are estimated afterwards by adjusting the range data. The differences in the orientation parameter values as well as in their accuracies and correlations are marginal. The distance parameter values vary, but their sum ΔD shows a high degree of correspondence. Only the residual analysis shows significant improvements: the RMS deviations of the remaining residuals vA,D decrease by 15 % for the integrated approach, and the expected values tend more clearly towards zero. Finally, the RMS of the coordinate differences ΔX is improved slightly, by 5 %, and thus confirms that the integrated bundle adjustment approach is better suited to model the geometric-physical reality of the ToF range imaging process. A detailed comparison of three different self-calibration methods for range cameras, and the advantages of an integrated self-calibration method in particular, are discussed in Lichti & Kim (2011).

6 Conclusion and Outlook

The self-calibrating bundle adjustment approach presented in this contribution determines a geometric range camera model and estimates all image- and distance-related correction parameters. The parameterisation combines both amplitude and distance measurements in an integrated functional model. The heterogeneous information pool is fully exploited by estimating variance components automatically within the integrated stochastic model. The experimental configuration of the self-calibration is based on a small, portable target field, whose geometry is determined simultaneously in the adjustment. Complex experimental set-ups can thus be avoided. The process validation showed that the integration of complementary data leads to a more accurate solution than a 2-step calibration. The RMS error of a 3D coordinate can be stated as 5 mm after the calibration.

Future work can concentrate on an improved experimental setup with grey-scale reference spheres in order to model the dependence of the distance measurements on reflectance. The integration time has an influence on the distance measurements, too; thus, the calibration should be performed for different integration times. In order to define the scale, two bars of known length, each with a sphere at both ends, can be used as flexible reference lengths in object space. As soon as the effects of remaining error sources such as scattering are investigated in a strict geometric-physical manner, the parameterisation of the distance correction model can be adapted. To differentiate between constant and distance-related measurement errors, the integration of adaptive variance components is also possible. Finally, the temporal stability of the calibration parameters should be examined, and a comparison of different range camera devices should be carried out.


References

Ackermann, F., 1984: High precision digital image correlation. – 39th Photogrammetric Week 9: 231–243, Schriftenreihe der Universität Stuttgart.

Beder, C. & Koch, R., 2008: Calibration of focal length and 3D pose based on the reflectance and depth image of a planar object. – International Journal of Intelligent Systems Technologies and Applications 5 (3/4): 285–294.

Böhm, J. & Pattinson, T., 2010: Accuracy of exterior orientation for a range camera. – Mills, J.P., Barber, D.M., Miller, P. & Newton, I. (eds.): ISPRS Commission V Mid-Term Symposium 'Close Range Image Measurement Techniques' XXXVIII: 103–108.

Brown, D.C., 1971: Close-range camera calibration. – Photogrammetric Engineering 37 (8): 855–866.

Burkhart, S., 2007: Automatische Messung kreisförmiger Zielmarken. – Unveröffentlichte Studienarbeit, Technische Universität Dresden, Institut für Photogrammetrie und Fernerkundung.

Chiabrando, F., Chiabrando, R., Piatti, D. & Rinaudo, F., 2009: Sensors for 3D imaging: Metric evaluation and calibration of a CCD/CMOS time-of-flight camera. – Sensors 9 (12): 80–96.

El-Hakim, S., 1986: A real-time system for object measurement with CCD cameras. – IAPRS 26.

Fischler, M.A. & Bolles, R.C., 1981: Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. – Communications of the ACM 24 (6): 381–395.

Fuchs, S. & Hirzinger, G., 2008: Extrinsic and depth calibration of ToF-cameras. – IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2008: 1–6.

Grün, A., 1985: Adaptive least squares correlation – a powerful image matching technique. – South African Journal of Photogrammetry, Remote Sensing and Cartography 14: 175–187.

Hempel, M., 2007: Validierung der Genauigkeit und des Einsatzpotentials einer distanzmessenden Kamera. – Unveröffentlichte Diplomarbeit, Technische Universität Dresden, Institut für Photogrammetrie und Fernerkundung.

Jähne, B., 2002: Digitale Bildverarbeitung. – 5. Auflage, Springer-Verlag, Berlin und Heidelberg.

Kahlmann, T., Remondino, F. & Ingensand, H., 2006: Calibration for increased accuracy of the range imaging camera SwissRanger™. – Maas, H.-G. & Schneider, D. (eds.): ISPRS Image Engineering and Vision Metrology XXXVI (5): 136–141, ISSN 1682-1750.

Karel, W., 2008: Integrated range camera calibration using image sequences from hand-held operation. – International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences XXXVII: 945–951.

Karel, W. & Pfeifer, N., 2009: Range camera calibration based on image sequences and dense, comprehensive error statistics. – Beraldin, A., Cheok, G.S., McCarthy, M. & Neuschaefer-Rube, U. (eds.): SPIE Proceedings on Electronic Imaging / 3D Imaging Metrology 7239, San José, CA, USA.

Karel, W., Ghuffar, S. & Pfeifer, N., 2012: Modelling and compensating internal light scattering in time of flight range cameras. – Photogrammetric Record 27: 155–174.

Koch, K.-R., 2004: Parameterschätzung und Hypothesentests in linearen Modellen. – 4. Auflage, Dümmlers Verlag, Bonn.

Kolb, A., Barth, E., Koch, R. & Larsen, R., 2010: Time-of-flight cameras in computer graphics. – Computer Graphics Forum 29 (1): 141–159.

Lichti, D.D., Kim, C. & Jamtsho, S., 2010: An integrated bundle adjustment approach to range camera geometric self-calibration. – ISPRS Journal of Photogrammetry and Remote Sensing 65 (4): 360–368.

Lichti, D.D. & Kim, C., 2011: A comparison of three geometric self-calibration methods for range cameras. – Remote Sensing 3: 1014–1028.

Lichti, D.D., Qi, X. & Ahmed, T., 2012: Range camera self-calibration with scattering compensation. – ISPRS Journal of Photogrammetry and Remote Sensing 74: 101–109.

Lichti, D.D. & Qi, X., 2012: Range camera self-calibration with independent object space scale observations. – Journal of Spatial Science 57 (2): 247–257.

Lindner, M. & Kolb, A., 2009: Compensation of motion artifacts for time-of-flight cameras. – Dynamic 3D Imaging, LNCS: 16–27, Springer.

Luan, X., 2001: Experimental Investigation of Photonic Mixer Device and Development of TOF 3D Ranging Systems Based on PMD Technology. – Dissertation, Department of Electrical Engineering and Computer Science, Universität Siegen.

Luhmann, T., Robson, S., Kyle, S. & Harley, I., 2006: Close Range Photogrammetry: Principles, Methods and Applications. – Whittles Publishing, revised edition.

Mönnich, H., Nicolai, P., Beyl, T., Raczkowsky, J. & Wörn, H., 2011: A supervision system for the intuitive usage of a telemanipulated surgical robotic. – IEEE International Conference on Robotics and Biomimetics (ROBIO 2011): 449–454.

Mure-Dubois, J. & Hügli, H., 2007: Real-time scattering compensation for time-of-flight camera. – 5th International Conference on Computer Vision Systems.

Reich, M., 2011: Untersuchung des Einsatzpotentials einer Laufzeitkamera zur quantitativen Erfassung von Verkehrsströmen. – Studienarbeit, Technische Universität Dresden, Institut für Festkörperelektronik.

Reulke, R., 2006: Combination of distance data with high resolution images. – ISPRS Commission V Symposium.

Ringbeck, T. & Hagebeuker, B., 2007: Dreidimensionale Objekterfassung in Echtzeit – PMD-Kameras erfassen pro Pixel Distanz und Helligkeit mit Videoframerate. – Allgemeine Vermessungs-Nachrichten 114 (7): 263–270.

Ringbeck, T., Möller, T. & Hagebeuker, B., 2007: Multidimensional measurement by using 3-D PMD sensors. – Advances in Radio Science 5: 135–146.

Robbins, S., Murawski, B. & Schröder, B., 2009: Photogrammetric calibration and colorization of the SwissRanger SR-3100 3-D range imaging sensor. – Optical Engineering 48: 053603.

Schulze, M., 2010: 3D-camera based navigation of a mobile robot in an agricultural environment. – Mills, J.P., Barber, D.M., Miller, P. & Newton, I. (eds.): The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXVIII: 538–542.

Schwarte, R., 1996: Eine neuartige 3D-Kamera auf der Basis eines 2D-Gegentaktkorrelator-Arrays. – Aktuelle Entwicklungen und industrieller Einsatz der Bildverarbeitung: 111–117, Aachen, MIT GmbH.

Schwarte, R., Heinol, H., Buxbaum, B., Ringbeck, T., Xu, Z. & Hartmann, K., 1999: Principles of three-dimensional imaging techniques. – Jähne, B., Haussecker, H. & Geissler, P. (eds.): Handbook of Computer Vision and Applications – Sensors and Imaging 1 (18): 463–484, Academic Press.

Snow, K.B., 2002: Applications of Parameter Estimation and Hypothesis Testing of GPS Network Adjustments. – Technical Report 465, Geodetic and GeoInformation Science, Department of Civil and Environmental Engineering and Geodetic Science, The Ohio State University, Columbus, Ohio, USA.

Spirig, T., Seitz, P., Vietze, O. & Heitger, F., 1995: The lock-in CCD – two-dimensional synchronous detection of light. – IEEE Journal of Quantum Electronics 31 (9): 1705–1708.

Westfeld, P., 2007: Ansätze zur Kalibrierung des Range-Imaging-Sensors SR-3000 unter simultaner Verwendung von Intensitäts- und Entfernungsbildern. – Luhmann, T. (ed.): Photogrammetrie – Laserscanning – Optische 3D-Messtechnik (Beiträge Oldenburger 3D-Tage 2007): 137–146, Herbert Wichmann Verlag, Heidelberg.

Westfeld, P. & Hempel, R., 2008: Range image sequence analysis by 2.5-D least squares tracking with variance component estimation and robust variance covariance matrix estimation. – Chen, J., Jiang, J. & Maas, H.-G. (eds.): International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences XXXVII: 457–462, ISPRS.

Westfeld, P., Mulsow, C. & Schulze, M., 2009: Photogrammetric calibration of range imaging sensors using intensity and range information simultaneously. – Optical 3D Measurement Techniques IX (II): 129.

Westfeld, P., 2012: Geometrische und stochastische Modelle zur Verarbeitung von 3D-Kameradaten am Beispiel menschlicher Bewegungsanalysen. – Dissertation, Technische Universität Dresden, Fakultät Forst-, Geo- und Hydrowissenschaften, Professur für Photogrammetrie.

Westfeld, P., Maas, H.-G., Bringmann, O., Gröllich, D. & Schmauder, M., 2013: Automatic techniques for 3D reconstruction of critical workplace body postures from range imaging data. – ISPRS Journal of Photogrammetry and Remote Sensing, http://dx.doi.org/10.1016/j.isprsjprs.2013.08.004.

Address of the Authors:

Dr.-Ing. Patrick Westfeld, Prof. Dr. sc. techn. habil. Hans-Gerd Maas, Technische Universität Dresden, Institute of Photogrammetry and Remote Sensing, D-01062 Dresden, Tel.: +49-351-463-39701, Fax: +49-351-463-37266, e-mail: {patrick.westfeld}{hans-gerd.maas}@tu-dresden.de

Manuscript submitted: March 2013; accepted: August 2013.