I N F O R M A T I K

Geometric Modeling Based on Polygonal Meshes

Leif P. Kobbelt, Stephan Bischoff, Mario Botsch, Kolja Kähler,
Christian Rössl, Robert Schneider, Jens Vorsatz

MPI–I–2000–4-002, July 2000
FORSCHUNGSBERICHT / RESEARCH REPORT

MAX-PLANCK-INSTITUT FÜR INFORMATIK
Stuhlsatzenhausweg 85, 66123 Saarbrücken, Germany



Author's Address:
Max-Planck-Institut für Informatik
Stuhlsatzenhausweg 85
66123 Saarbrücken
Germany
{kobbelt,bischoff,botsch,kaehler,roessl,rtschnei,vorsatz}@mpi-sb.mpg.de


Contents

1 Introduction and overview ....................................... 2
2 Data acquisition and mesh generation ............................ 4
  2.1 Data acquisition ............................................ 4
  2.2 Triangulation of point clouds ............................... 7
3 Discrete differential geometry ................................. 15
  3.1 Discrete curvature ......................................... 15
  3.2 Quality control for meshed surfaces ........................ 16
4 Coarse-to-fine hierarchies ..................................... 22
  4.1 Stationary subdivision ..................................... 22
  4.2 Remeshing .................................................. 26
5 Fine-to-coarse hierarchies ..................................... 32
  5.1 Mesh decimation ............................................ 32
  5.2 Discrete fairing ........................................... 37
6 Geometric modeling based on polygonal meshes ................... 43
  6.1 Freeform modeling .......................................... 43
  6.2 Multiresolution modeling ................................... 44
  6.3 Modeling tools based on triangle meshes .................... 46


2 GMU MPI Saarbrücken / Geometric Modeling (MPI-I-2000-4-002)

1. Introduction and overview

The use of polygonal meshes for the representation of highly complex geometric objects has become the de facto standard in most computer graphics applications. Especially triangle meshes are preferred due to their algorithmic simplicity, numerical robustness, and efficient display. Flexible and effective algorithms have been developed which combine results from approximation theory, numerical analysis and differential geometry and apply them to the discrete setting of polygonal meshes.

To a certain extent most of these techniques were already available for NURBS–based surface representations and have recently been generalized to unstructured polygonal meshes, such that today splines can be substituted by polygonal meshes in many applications. The advantage of switching to this representation is mainly due to the fact that algorithms for polygonal meshes usually work for shapes with arbitrary topology and do not suffer from the severe restrictions which stem from the rigid algebraic structure of polynomial patches. Another advantage of triangle meshes is that they can be used for many stages of the typical processing pipeline in geometric design applications without the need for inter–stage data conversion. This accelerates the overall processing time and reduces the potential for round–off errors.

The motivation for using polygons to describe freeform geometry is quite obvious: while simple shapes can be characterized by manageable functional expressions, the complexity of those expressions explodes if the shapes become more complicated. Hence piecewise representations are preferred, since higher complexity can be obtained by simply using more segments (with constant complexity). The extreme case of piecewise representations are polygons: all we need are sample points on the given curve or surface, and the corresponding surface description results from connecting these samples by lines or triangles.
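As a small numeric illustration of this piecewise principle (a made-up example, not taken from the report), sampling a circle into an inscribed polygon shows how the approximation error drops as more segments are used:

```python
import math

def polygon_from_circle(n, radius=1.0):
    """Sample n points on a circle and connect them to a closed polygon."""
    return [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n)) for i in range(n)]

def max_deviation(n, radius=1.0):
    """Largest distance between the circle and its inscribed n-gon.

    The worst case occurs at an edge midpoint: radius * (1 - cos(pi/n)).
    """
    return radius * (1.0 - math.cos(math.pi / n))

# Doubling the number of segments shrinks the error roughly by a
# factor of four, i.e. the deviation behaves like O(1/n^2).
errors = {n: max_deviation(n) for n in (8, 16, 32)}
```

The same quadratic error behavior carries over to triangle meshes approximating smooth surfaces, which is why moderate sampling densities already give visually convincing results.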

Representing a given (real or virtual) surface geometry by a polygonal mesh is usually an approximation process. Hence there is no unique polygonal 3D–model; the density and distribution of sample points and the specific way in which these samples are connected by triangles provide many degrees of freedom. For efficient storage and modeling with polygonal meshes, we have to choose a specific instance among those many possible models.

The most important criteria according to which the different polygonal approximations to a freeform surface can be rated are smoothness and complexity. Here the smoothness of a triangle mesh is measured by some discrete reformulation of concepts known from differential geometry. One simple example would be to use the angle between the normal vectors of adjacent triangles to measure the discrete curvature. More sophisticated measures will be presented in Section 3.
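The normal–angle idea can be sketched as follows (an illustrative snippet with made-up triangles, not code from the report; both triangles must be consistently oriented):

```python
import math

def normal(tri):
    """Unit normal of a triangle given as three 3D points."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return nx / length, ny / length, nz / length

def dihedral_angle(tri_a, tri_b):
    """Angle between the normals of two adjacent triangles (radians).

    A flat pair gives 0; larger values indicate higher discrete
    curvature across the shared edge.
    """
    na, nb = normal(tri_a), normal(tri_b)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(na, nb))))
    return math.acos(dot)

# Two coplanar triangles sharing the edge (0,0,0)-(1,0,0): angle 0.
flat = dihedral_angle([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                      [(1, 0, 0), (0, 0, 0), (1, -1, 0)])
# Folding the second triangle upward by 90 degrees gives pi/2.
bent = dihedral_angle([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                      [(1, 0, 0), (0, 0, 0), (1, 0, 1)])
```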

The complexity of a triangle mesh is measured by the number of vertices or the number of faces. It characterizes in some sense the computational costs that are required for displaying or modifying the mesh. Hence for computationally more expensive operations (or on low–performance computers) a mesh with fewer triangles is preferred, while cheap operations and high–performance computers allow for higher complexities.

In a typical application the choice of a polygonal model is constrained by a minimum smoothness requirement and a maximum complexity requirement. In between these lower and upper bounds an optimal trade–off has to be found, since higher smoothness increases the complexity and lower complexity decreases smoothness. The major techniques that are used to adjust both properties are refinement to increase smoothness and decimation to decrease complexity.

Both techniques together enable the generation of hierarchical models where different levels of the hierarchy are characterized by a varying level of detail. Here it is important to understand the conceptual difference between topological hierarchies (coarse/fine) and geometrical hierarchies (smooth/non–smooth), which refer to the above quality criteria respectively. While decimation reduces complexity and hence always removes detail information, refinement can be used to either re–insert detail information or to increase smoothness without adding detail (cf. Fig. 1).

The mesh processing pipeline

This tutorial is organized according to a (virtual) mesh processing pipeline. The first step in such a pipeline is usually the generation of a geometric model based on measured point data. The raw data is obtained by mechanical or optical scanning of an object's surface (Section 2.1), and the "point cloud" is subsequently converted into a triangle mesh by connecting nearby samples (Section 2.2). Other sources for surface samples are volume data sets or the results of numerical simulations.

Once the mesh models exist, we can analyze their surface quality. This is done by generalizing quality criteria such as curvatures from continuous smooth surfaces to discrete meshes (Section 3).

As mentioned above, the raw triangle mesh data might not be appropriate for a given application in terms of smoothness or complexity, and hence we have to apply refinement or decimation techniques respectively. In any case we enrich the plain polygonal mesh data by some hierarchical semantics in terms of increasing smoothness or decreasing complexity.

Refinement can be considered as building up the hierarchy from coarse to fine, where so–called subdivision schemes insert new vertices into the mesh without introducing geometric detail (so that smooth meshes emerge, Section 4.1). Another approach to generate a coarse–to–fine hierarchy are




Figure 1: Mesh decimation takes the original data (left) and computes a coarse approximation with less detail (center). Refinement can either re–insert the removed data to reconstruct the original model (topological hierarchy) or it can insert new points to generate a smooth mesh with high complexity but low geometric detail (right, geometrical hierarchy).

remeshing techniques, where the vertices newly inserted during refinement are sampled from some original, highly detailed surface (Section 4.2). In a remeshing algorithm the refinement also adds geometric detail, such that the resulting surface is not necessarily smooth but a resampled version of the original surface.

Mesh decimation builds up the hierarchy from fine to coarse, since vertices are removed from a detailed mesh such that coarser and coarser approximations are generated (Section 5.1). Starting from the coarsest level of detail, the original mesh can be reconstructed incrementally by re–inserting the removed vertices in reverse order (progressive meshes). Alternatively the removed vertices can be re–inserted but with their position altered in order to increase smoothness while suppressing any geometric detail. Here, the vertex positions which guarantee optimal smoothness can be computed by discrete fairing techniques (Section 5.2). The following table depicts the relation between the different techniques presented in the respective sections.

                   smooth              non–smooth
  coarse–to–fine   Subdivision         Remeshing
  fine–to–coarse   Discrete fairing    Mesh decimation

Finally, the preprocessed mesh models can be modified by sophisticated editing operations. Freeform modeling can be implemented based on subdivision schemes or based on discrete fairing techniques (Section 6). The hierarchical structure further enables multiresolution modeling, where global deformations with preservation of the detail information are possible.




2. Data acquisition and mesh generation

In creating a computer model of a real–world object, the first task is to measure the relevant properties of the object (its geometry, surface texture, volumetric density information, etc.). This raw data then serves as a basis for further processing, for example reconstruction of the object surface.

We define data acquisition as the process starting with actually capturing the data up to the construction of a consistent model from these samples. This is described in detail in Section 2.1. Techniques to create a surface mesh from the output of the data acquisition stage will then be discussed in Section 2.2.

2.1. Data acquisition

2.1.1. Introduction

There is a multitude of applications that need to acquire real–world data, with varying requirements and demands on the measurement process. The following is a collection of some popular application domains:

Reverse engineering: The geometry of a real part is scanned to create a CAD/CAM representation; this model can then be used for further editing and integration with other modeled parts, as well as for simulation purposes. E.g. in the car manufacturing industry, virtual crash tests can be performed in a non–destructive fashion.

Interrogation: A real part is scanned for comparison with its specifications; measured deviations can be used for re–calibration of the manufacturing process.

Pre–surgical planning: To simulate the implantation of an artificial limb, both the patient's body and the part to be implanted are scanned, and the operation can then be simulated in a virtual environment. For brain surgery, visualization of a volume scan of the patient's head enables the surgeon to plan the path of a probe in advance, in order to avoid damage to highly sensitive tissue.

Diagnostics: Visualization of volume scans is an important tool in medical diagnostics today. Shape and size of internal organs can be segmented from the volume data; this and additional information derived from the scans (mass density distribution, speed of blood flow, etc.) can provide much more insight into body–internal structures and processes than conventional examination methods.

Surface reconstruction: Using photogrammetric methods, three–dimensional terrain can be reconstructed from satellite images; this also applies to the creation of architectural models from large buildings.

Custom fitting: Computer models of generic products (prosthetics, clothes, car seats) are built and manipulated by software to fit the customer's needs.

E–commerce: With the rise of Internet shopping, models of real objects need to be transmitted to potential buyers.

Animation: Models of characters and props can be used in film production for the creation of special effects.

The generation of a CAD model of an industrial part or a prosthetic requires very precise samples of the object surface; in medical diagnostics one is interested in distinguishing specific types of body tissues; e–commerce applications don't need high precision as much as catching visual appearance, i.e. color and texture. To satisfy these different needs, a number of scanning techniques exist today.

Apart from mechanical probes, which sample a surface through physical contact, non–intrusive methods are more popular today. They do not disturb the physical environment, it is possible to get reliable data even from soft materials (e.g. human skin), and the scan process is generally faster.

A popular way to acquire surface data is range scanning; here essentially the distance from the scanning device to sample points on the object surface is estimated. There are a number of ways to measure distances, depending on range and application: optical triangulation, interferometric techniques using coherent laser light, or "time of flight" (radar principle).
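For the "time of flight" principle the distance follows directly from the round-trip time of the light pulse, d = c·t/2. A toy calculation (numbers invented for illustration, not from the report):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds):
    """One-way distance from a time-of-flight measurement.

    The pulse travels to the surface and back, so the distance to
    the surface is c * t / 2.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A round trip of 20 nanoseconds corresponds to roughly 3 metres,
# which also shows why time-of-flight scanners need sub-nanosecond
# timing to resolve millimetre detail.
d = tof_distance(20e-9)
```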

Especially in the medicine sector, volumetric data is measured by a number of methods, depending on the specific type of matter (tissue, bone, etc.) that is of interest. Conventional x–ray imaging can be used to reconstruct the arrangement of blood vessels (angiography). Computed Tomography (CT) also relies on x–rays, where slices of the human body are scanned using a rotating tube. From the variations in the measured x–ray absorption the spatial distribution of mass density can be reconstructed. In Magnetic Resonance Tomography (MRT) high frequency electromagnetic pulses are generated that alter the spin of protons in the body. When the protons re–align, a signal is emitted that is measured as an electric current in a detector coil.

The raw output from these and other methods typically consists of a three–dimensional voxel grid. For further processing, surfaces need to be extracted (segmentation). This type of volume data is very limited in spatial resolution, and so more accurate 3D surface data is often acquired using different techniques (e.g. by range scanning). To get more complete information for diagnostic purposes, datasets from different scans need to be merged and registered with each other.

Though in this section we will concentrate on range scanning devices, the reconstruction of surfaces from volumetric datasets will also be a topic in Section 2.2.

3D positions can also be reconstructed from photographic images, which has been studied as early as 1913 [24]; this has led to photogrammetric modeling methods, facilitating reconstruction of geometry from a photographed scene [15].

In the context of creating polygonal representations of complex models, range scanning devices based on the triangulation principle are the most popular ones for their flexibility (scanning devices come in all sizes from pens, portable




constructions, up to permanently installed whole–body scanners) and their wide range of applications.

We are now going to discuss the process of data acquisition for range scanners based on the triangulation principle.

2.1.2. Range scanning process overview

The scan process is divided into three basic steps:

1. Calibration: The system's parameters are estimated according to hardware and environmental characteristics; this is a prerequisite for obtaining accurate measurements.

2. Scanning: The object surface is sampled from one view, resulting in a dense range map. Multiple passes have to be performed to get a set of samples covering the complete surface.

3. Registration: The acquired range scans reside within their own local coordinate systems. Thus, they have to be aligned with each other to express them in a global frame.

The result is a dense set of sample points representing the object surface. These can then be processed to e.g. produce a triangulated model (see 2.2).

We will now explain these stages in some detail.

2.1.3. Calibration

The image of an object on the scanner's sensor array depends on physical parameters of the system and the environment. To be able to compute exact distance measurements for surface samples, proper system calibration is crucial. This can be an arduous task if there are a lot of parameters [18].

Object characteristics like shape, color, texture and spectral characteristics of the surface determine the way in which light is reflected. Laser light of a certain wavelength may be absorbed, subsurface scattering can distort the laser reflection, or strong reflections may lead to wrong readings [18].

The external parameters of the sensor system are its pose (rotation and translation of the optical center); the internal parameters are e.g. the size of the CCD array, pixel aspect ratio, focal length and spectral characteristics of the lenses. These are of course entirely dependent on the specific scanner setup.

In addition, the lighting conditions of the environment have to be considered.

It is practically impossible to exactly determine all of the above-mentioned parameters (or even more), so the calibration process works with a mathematical model of the system approximating its behavior. The behavior of the real system is measured to adjust the parameters of the model, which might be a simple standard camera model as used e.g. in OpenGL [39], but can get arbitrarily complex, reflecting rigid

transformations, geometric distortions in the samples as well as image space color distortions [20, 18].

A common procedure is to use a calibration target, typically a white board with a regular pattern. Since the object geometry and surface properties are known, intrinsic and external camera parameters can be corrected for by analyzing the captured images (color, geometry distortion) and scans of the object (camera pose) [35].

The calibration process also yields information about the usable scan volume, i.e. the range in which samples can actually be taken within reasonable bounds of sampling resolution and error. This volume is often quite small, e.g. 14 cm³ [18].

Multiple iterations of the procedure are often necessary, manually adjusting system parameters within the degrees of freedom, to achieve a reasonable balance between resolution, accuracy and the size of the scan volume, e.g. by varying the sensor setup or the distance from the object to the scanner.

2.1.4. Scanning: range from stereo

Systems based on optical triangulation work by generating distance values from stereo information. They exist in two flavors: active and passive.

In a passive system two or more images are taken by (usually two or more) cameras which are mounted in a fixed spatial relation to each other. The images are either already captured in digital form (CCD camera) or have to be digitized. Pixels in one camera image are now matched to pixels in the image of another camera [5]. Assuming matched pixels really correspond to the same surface point, one can calculate an estimate of its 3D position. The following picture shows the basic principle in 2D:

[Figure: 2D stereo triangulation. The left and right sensors are separated by the baseline b; the rays at angles α and β intersect at the object point (x, z).]

Light is reflected off a surface point and projected onto each camera's CCD array. Given the length b of the baseline (the line between the two cameras) and the angles α and β related to the rays from the point through the optical center of each camera to the sensor, the intersection point (x, z) of these rays can be computed. By assigning depth values to each pixel in the sensor array, a range map is created as the result of a single scan.
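With the left sensor placed at the origin, the right sensor at (b, 0), and both angles measured against the baseline, the intersection has a simple closed form. The sketch below is an illustration of this 2D model, not code from the report:

```python
import math

def triangulate_2d(b, alpha, beta):
    """Intersect the two viewing rays of a 2D stereo pair.

    The left sensor sits at the origin, the right sensor at (b, 0);
    alpha and beta are the ray angles against the baseline (radians).
    Solving z = x*tan(alpha) = (b - x)*tan(beta) yields (x, z).
    """
    ta, tb = math.tan(alpha), math.tan(beta)
    x = b * tb / (ta + tb)
    return x, x * ta

# Symmetric 45-degree rays over a baseline of length 2 meet at (1, 1).
x, z = triangulate_2d(2.0, math.radians(45), math.radians(45))
```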




Successful matching of pixels in a passive stereo system relies on distinguishable intensity variations of the surface. This led to the development of active systems, where the principle is altered by replacing one detector with a controlled light source such as a laser or a projector casting a grid of structured light [22]:

[Figure: active triangulation. A laser source with a deflector sweeps the beam across the object at angle α; the sensor observes the reflected spot.]

The sensor now detects the reflection of the point(s) on the object illuminated by the light source. A laser scanner traces the surface along a grid, while other systems use a succession of different stripe patterns or just a single vertical stripe moving across the surface [31].

Because of their increased robustness, active systems are by far dominant in industrial applications. Problems may still arise, though, e.g. if textures interfere with dense stripe patterns. In general, scanning works best with "well–behaved" surfaces, which are smooth and have low reflectance.

Triangulation scanners have the common problem of shadowing, due to the separation of light source and detector; parts of a non–convex object may not be reached by the light from the projector or may not be seen by the detector, as shown here:

[Figure: shadowing. One point is lit but not seen by the detector; another lies in the shadow of the projector.]

Obviously, the longer the baseline, the more shadowing occurs. On the other hand, a longer baseline increases the numerical precision of the distance measurements. So there is a tradeoff between getting enough data and achieving high accuracy. Some approaches use more than one camera to get more data and to simultaneously increase the reliability of the distance estimate [29, 31].

Related to shadowing is the beam location error. To locate the precise position of a detected light spot or stripe projected onto the object surface, it is generally assumed that the light beam and its reflection have a Gaussian intensity distribution, with the highest intensity at the center of the "bump". This leads to problems in locating the stripe if the basic assumption of a planar surface and a fully visible stripe is violated; finding the center of the stripe (or spot) fails in this case.

[Figure: beam location error. The ideal reflection compared to profiles distorted by partial occlusion and partial illumination.]

A possible solution to this problem is so–called spacetime analysis [13]: instead of guessing where the center of the beam on the surface is at a given time, the change in intensity per pixel over time as the beam sweeps across the surface is considered. It is assumed that the beam is wide compared to the step width of the sweep; over time, the intensity of a specific pixel increases, reaches its maximum, then decreases again. This time–varying profile is always Gaussian, so it can be reliably estimated when the beam was centered on that pixel.
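The per-pixel time profile can be reduced to a single time stamp, for example with an intensity-weighted mean, which coincides with the Gaussian center for a symmetric profile. This is a simplified stand-in for the estimator in the cited work; the data below is synthetic:

```python
import math

def peak_time(samples):
    """Estimate when a pixel's intensity profile peaked.

    samples is a list of (time, intensity) pairs. For a roughly
    Gaussian profile, the intensity-weighted mean of the time stamps
    is a simple estimate of the peak position.
    """
    total = sum(i for _, i in samples)
    return sum(t * i for t, i in samples) / total

# Synthetic Gaussian profile centered at t = 4.2 with sigma = 1.5.
profile = [(t, math.exp(-((t - 4.2) ** 2) / (2 * 1.5 ** 2)))
           for t in range(0, 10)]
est = peak_time(profile)
```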

2.1.5. Registration

When scanning complex objects, multiple scans are taken, which usually means that either the object or the scanner has to be repositioned. After scanning, the range images are given in their own local coordinate systems; these datasets need to be put into one common frame for reconstructing the surface.

The problem can be seen as equivalent to finding the rigid transformations of the scan sensor between the individual scans, which for some systems is already known, e.g. cylindrical scanners, where the scan head is moved under software control; the relative motion is thus likely to be directly available. Often, especially for surfaces with little inherent structure, special markers are applied to the physical object; the desired transformation is then the one that matches a marker in one image onto the other.

If no such external information can be used, or if refinement of the solution is necessary due to lack of precision, registration is done by an iterative optimization process that tries to match the point clouds of individual scans by minimizing distances of point pairs. It is generally assumed that the scans have enough overlap and are already roughly aligned (e.g. by interactively pairing prominent surface features).

For matching multiple range images the standard approach is to register them pair–wise using a variant of the iterated closest point method (ICP) [9, 12, 40]. For two point sets A and B this roughly works as follows:

1. Determine a starting transformation T from A to B.

2. Estimate correspondences between sample points in A and points in B (using the current T), where the corresponding point in B either is one of the original samples or lies on the surface reconstructed from B.

Max-Planck-Institut für Informatik

Page 9: MPI–I–2000–4-002 July 2000 FORSCHUNGSBERICHT …domino.mpi-inf.mpg.de/.../$file/mpi-i-2000-4-002.pdf4 GMU MPI Saarbrücken / Geometric Modeling (MPI-I-2000-4-002) 2. Data acquisition

GMU MPI Saarbrücken / Geometric Modeling (MPI-I-2000-4-002) 7

3. Update T with the rigid transformation that minimizes the mean quadratic distance between the points of these pairs.

4. Repeat steps 2 and 3 (potentially re–pairing points) until convergence.
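In 2D the loop above can be written compactly, using nearest-neighbor pairing and the closed-form optimal rotation for centered point pairs (a toy sketch assuming exact correspondences exist and the initial alignment is good; not the implementation the report refers to):

```python
import math

def icp_2d(A, B, iterations=20):
    """Toy 2D ICP: rigidly align point set A to point set B.

    Each iteration pairs every point of A with its nearest neighbor
    in B, then applies the closed-form rigid motion (rotation plus
    translation) minimizing the mean squared distance of the pairs.
    """
    A = [tuple(p) for p in A]
    for _ in range(iterations):
        pairs = [(a, min(B, key=lambda b: (a[0]-b[0])**2 + (a[1]-b[1])**2))
                 for a in A]
        # Centroids of both sides of the current pairing.
        cax = sum(a[0] for a, _ in pairs) / len(pairs)
        cay = sum(a[1] for a, _ in pairs) / len(pairs)
        cbx = sum(b[0] for _, b in pairs) / len(pairs)
        cby = sum(b[1] for _, b in pairs) / len(pairs)
        # Optimal rotation angle for the centered points (2D closed form).
        s = sum((a[0]-cax)*(b[1]-cby) - (a[1]-cay)*(b[0]-cbx) for a, b in pairs)
        c = sum((a[0]-cax)*(b[0]-cbx) + (a[1]-cay)*(b[1]-cby) for a, b in pairs)
        theta = math.atan2(s, c)
        ct, st = math.cos(theta), math.sin(theta)
        A = [(ct*(x-cax) - st*(y-cay) + cbx,
              st*(x-cax) + ct*(y-cay) + cby) for x, y in A]
    return A
```

With exact correspondences the closed-form step recovers the rigid motion in a single iteration; real scan data additionally needs the pairing heuristics discussed below.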

A and B are then locked in place and another scan is added. Ideally, the transformations are computed precisely and, after "working your way around", all scans fit together perfectly. In practice though, the data is already contaminated: surface points measured from different angles or distances result in slightly different sampled positions. So even with a good optimization procedure, it is likely that the n–th scan won't fit to the first. Multiple iterations are necessary to find a global optimum and to distribute the error evenly.

We have to solve two problems at once here: a) find corresponding point pairs; b) find the transformation that minimizes the distance. Since the second step depends on the first, it is essential to find "good" point pairs. As we are dealing with a non–convex optimization problem, the algorithm can get stuck in a local minimum if the initial starting point is not good (which lowers the chances of correctly pairing points).

Finding good pairs is a non–trivial task. Common heuristics are to pair each point Ai:

• with its nearest neighbor Bj in space;
• with the nearest Bj in normal direction (point normals are usually available with the range data);
• with the intersection point of the normal and the surface reconstructed from B.

When pairs have been fixed, the transformation that aligns them can be computed as a closed–form solution [37]. Finding better point pairs becomes easier and more reliable from one iteration to the next, so the process converges to some minimum.

[Figure: two pairing strategies. Left: a point of A is projected onto the plane of B; right: a point of A is matched to its closest neighbor in B.]

These methods for pairing points often fail if the surfaces are not smooth enough or the current registration is too bad. A number of heuristics have been proposed to choose "good" pairs and discard the rest; among these are [36, 19]:

• Only consider overlapping regions.
• Don't match points if their distance is too large.
• Only match points with similar point normals.
• Don't pair boundary vertices.
• Only consider the 90% best matches.

Searching closest points in sets of hundreds of thousands of points in space can take considerable time. The procedure can be sped up by projecting points directly from one range map into another [38] and performing the pairing there,

reducing the problem to 2D. It is also possible to use image intensity information for the pairing decisions [31].

The distance minimization procedure can lead to problems if most of the paired points are very close to each other and only a few are far apart. Then the surface is prevented from moving closer to the correct solution, sticking to an unacceptable local optimum:

[Figure: registration of A and B stuck in a local optimum.]

For this reason, other approaches have been proposed that do not directly minimize the distance between paired points. A detailed discussion of these methods is beyond the scope of this tutorial, so we only give some pointers to the literature here: Chen and Medioni [12] e.g. minimize the distance of a point to the tangent plane through its peer; Masuda and Yokoya [26] minimize the sum of squared distances between all given pairs.

2.2. Triangulation of point clouds

2.2.1. Introduction

After the data acquisition (including the registration) phase, the next step in the surface reconstruction process is the generation of a single surface representation, e.g. one overall triangle mesh, from the acquired data.

Depending on the application and the structure of the input data, different reconstruction problems arise:

Unorganized data: We have no additional information other than the sampled points. This is the most general approach and therefore also the computationally most expensive one, because we do not exploit any additional information we might have. There was a state-of-the-art report at Eurographics '98 by Mencl and Müller [28].

Contour data: In medical applications the input model is often sliced into thin layers, each of which is then digitized to a contour line. Here one can exploit the fact that these contours are closed polygons arranged in parallel stacks.

Volumetric data: Also in the medicine sector we have volumetric data measured by e.g. MRT or CT imaging. Here we get a 3D-grid as input data and basically have to do iso-surface extraction, e.g. using the well-known Marching Cubes algorithm [25]. But the Marching Cubes algorithm does not produce optimal results. If the edge length of the voxel grid is too big, aliasing artifacts are clearly visible. Additionally, arbitrarily badly shaped triangles can occur in the resulting mesh (cf. Fig. 2). The grid size should be chosen carefully, since the mesh size grows quadratically.

Range data: The input data is a collection of range images that are assumed to be registered into a common coordinate system (cf. Section 2.1.5). A range scanner typically produces a rectangular grid of depth values or 3D points, so we have adjacency information for the points of each single scan. The difficulty is the integration of the different scans into one single mesh. Another problem is the huge amount of data that is typically generated by the uniform (over-)sampling of the scanner.

Figure 2: A model generated by applying the Marching Cubes algorithm to a volumetric distance function (left). Notice the aliasing artifacts at sharp feature lines and the badly shaped triangles (right).

Regardless of the underlying structure of the data, we can divide the possible approaches into two groups, depending on whether they produce an:

interpolation of the input data: the vertices in the resulting mesh are the original sampled points.

approximation to the sample points: especially for range data we want an approximating rather than an interpolating mesh, to get a result of moderate complexity.

The approaches presented in this section can be classified as follows:

Sculpting based: This class of algorithms is used to reconstruct a surface from an unorganized point cloud and produces an interpolating mesh. These algorithms have in common that they first build a tetrahedrization (usually the 3D Delaunay triangulation30;6) of the point set to capture some kind of global shape of the object. Afterwards heuristics are used to select a subset of the 2-simplices (i.e. triangles) as the resulting mesh. These approaches are capable of reconstructing surfaces from very sparsely sampled data. The drawback is the computational complexity and memory consumption for building the initial tetrahedrization.

Volume based: This technique can be used to reconstruct a surface from structured or unstructured sample data. Here an estimated distance for each sampled point is inserted into a voxel or octree structure and the result is extracted from this structure, e.g. using the Marching Cubes algorithm. These approaches therefore produce approximations to the sampled points, and the edge length of the volumetric grid controls the complexity of the output.

Incremental/region-growing: This class of algorithms starts with a seed and incrementally grows this seed until the whole input data is covered. The seed may be a triangle, an edge, a first range image or a wireframe approximation.

2.2.2. Sculpting based approaches

2.2.2.1. Alpha-shapes Edelsbrunner and Mücke17 generalized the notion of the convex hull to the parameterized α-shapes. The α-shape of a set of points in 3-space is a polytope that does not need to be convex, connected or a 2-manifold. A triangle, edge or vertex belongs to the α-shape iff it has a circumsphere of radius at most α that is empty of other sample points. For α = ∞ the α-shape is the convex hull of the point set; for α = 0 it is the point set itself. As α decreases, the α-shape shrinks by developing cavities (cf. Fig. 3).

The α-shape is related to the Delaunay triangulation (DT, cf. 30;6) in the following way:

∀ simplex s ∈ DT ∃ α_s > 0 : s ∈ α_s-shape.

Conversely, for 0 ≤ k ≤ 2 every k-simplex of the α-shape is a simplex of the DT, and therefore

{k-simplices of DT} = ∪_{0 ≤ α ≤ ∞} {k-simplices of α-shape}.

The α-shape of a point set can be calculated by first building the DT and eliminating all k-simplices, 0 ≤ k ≤ 3, whose minimum enclosing sphere has a radius greater than α. Think of the DT filled with styrofoam, with the vertices being more solid. The geometric intuition behind α-shapes is then to use a spherical eraser tool with radius α that carves out the styrofoam wherever it can pass between the vertices.

In a last step the triangles that belong to the resulting surface are extracted from the α-shape based on the following heuristic: a triangle belongs to the surface if at least one of the two α-spheres that interpolate the triangle’s vertices is empty of other points.

The difficulty with this approach is to find a suitable value for the global parameter α. If it is too small, holes and gaps will occur; if it is too big, cavities may not be preserved. In the presence of varying sampling density this gets even more complicated.
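In 2D the elimination step described above reduces to a simple circumradius test on the Delaunay triangles, which can be sketched as follows. This is a minimal sketch assuming scipy is available; the circumradius formula R = abc/(4A) and the random test points are our own choices, not from the report:

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_complex_2d(points, alpha):
    """Keep the Delaunay triangles whose circumradius is at most alpha
    (2D analogue of the alpha-shape filtering of the DT)."""
    tri = Delaunay(points)
    kept = []
    for ia, ib, ic in tri.simplices:
        pa, pb, pc = points[ia], points[ib], points[ic]
        a = np.linalg.norm(pb - pc)                  # side lengths
        b = np.linalg.norm(pa - pc)
        c = np.linalg.norm(pa - pb)
        area = 0.5 * abs((pb[0] - pa[0]) * (pc[1] - pa[1])
                         - (pb[1] - pa[1]) * (pc[0] - pa[0]))
        if area < 1e-12:
            continue                                 # degenerate triangle
        if a * b * c / (4.0 * area) <= alpha:        # circumradius test
            kept.append((ia, ib, ic))
    return kept

rng = np.random.default_rng(0)
pts = rng.random((200, 2))
big = alpha_complex_2d(pts, 10.0)    # huge alpha: keeps the whole DT
small = alpha_complex_2d(pts, 0.05)  # small alpha: carves out cavities
```

Decreasing α strictly shrinks the kept set, mirroring the cavity-developing behavior shown in Fig. 3.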


Figure 3: α-shapes for decreasing values of α. The models have been created using the “Alpha Shapes” software by Edelsbrunner et al. (http://www.alphashapes.org/alpha/index.html).

Pros/Cons:

+ Elegant theoretical formulation.
– Global parameter α has to be found experimentally.
– Problems with varying sampling density.
– Computationally expensive.

2.2.2.2. Voronoi filtering Amenta et al.3;2 present an algorithm for surface reconstruction from a given point cloud. The remarkable feature is the provability of their algorithm: given a “good” sample of a smooth surface (i.e. a twice-differentiable manifold) without boundary, the algorithm will reconstruct a surface that is topologically equivalent to the original surface and will converge to it both pointwise and in surface normal as the sampling density increases.

The intuition behind the definition of a “good” sample is that featureless areas can be reconstructed from fewer sample points, while in detailed regions the sampling has to be denser. Formally, a set of sample points S of a surface F is good if the sampling density is (up to a factor r) inversely proportional to the distance to the medial axis (ME) of F:

S is a “good” sampling :⇔ ∀ p ∈ F : dist(p, S) ≤ r · dist(p, ME(F)).

The medial axis of F is the closure of the set of all points having more than one closest point on F and can be thought of as the continuous extension of the Voronoi diagram30. Therefore this definition of a good sample respects the curvature of F as well as the proximity of two sheets of the surface. Amenta et al. prove their results for r ≤ 0.06, but also state that in practice r = 0.5 is generally sufficient.

For every sample point s ∈ S two poles p_s^+ and p_s^- are defined as the two vertices of the Voronoi cell of s that are farthest from s and lie on opposite sides of the surface F. The observation is that the Voronoi cell of a sample is long and thin and roughly perpendicular to the surface F. Therefore n_s^+ := p_s^+ − s and n_s^- := p_s^- − s are good approximations to the surface normal at s (it can be proven that the angular error is linear in r).

The actual algorithm is roughly sketched as follows:

1. Compute the Voronoi diagram of S.
2. For every s ∈ S compute the poles p_s^+ and p_s^-.
3. Compute the Delaunay triangulation of the union of S and all poles.
4. Keep the triangles whose vertices are all sample points (Voronoi filtering).
5. Normal filtering and manifold extraction.

After performing steps 1–4 one gets what Amenta et al. call the crust of the sample points. This is not necessarily a manifold, and therefore a normal filtering step discards triangles whose normals differ too much from n+ or n−. Afterwards a manifold extraction keeps only the outside surface of the remaining triangles.
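A rough 2D analogue of steps 1–4 is the 2D "crust": triangulate the samples together with all Voronoi vertices and keep the edges that connect two samples. The sketch below assumes scipy is available; the ellipse test data and function name are our own choices:

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

def crust_2d(samples):
    """2D crust sketch: Delaunay of the samples plus all (finite)
    Voronoi vertices; keep edges whose endpoints are both samples."""
    vor = Voronoi(samples)
    allpts = np.vstack([samples, vor.vertices])
    n = len(samples)
    tri = Delaunay(allpts)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
            if b < n:                 # both endpoints are original samples
                edges.add((a, b))
    return edges

# A densely sampled ellipse: the crust should recover the closed
# polygon through adjacent samples, i.e. one edge per sample.
t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
ellipse = np.c_[2.0 * np.cos(t), np.sin(t)]
edges = crust_2d(ellipse)
print(len(edges))
```

The Voronoi vertices act as the "poles" here: they block Delaunay edges that would cut across the interior, so only edges hugging the curve survive the filter.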

Pros/Cons:

+ Reconstruction from point clouds.
+ Provable reconstruction.
+ Adaptive resolution (by adaptive sampling).
– Complexity of O(n²) and high memory consumption because of the Delaunay triangulation.
– Problems with noise, sharp edges, boundaries.

2.2.2.3. Others An approach similar to the one by Amenta et al. is the “Delaunay sculpting” of Boissonnat10. He also builds the Delaunay triangulation in a first step and removes tetrahedra using a heuristic based on their circumspheres. To overcome the problem of the global parameter α in α-shape based approaches, Edelsbrunner16 introduced weighted α-shapes. Teichmann and Capps34 use varying values of α to adapt to the local sampling density. Bernardini et al.7 automatically determine the optimal value for α and use the α-shape of the point set for the construction of a signed distance function.

2.2.3. Volumetric approaches

2.2.3.1. Reconstruction from unorganized points Hoppe et al.21 address the issue of reconstructing a surface from unorganized points. The algorithm consists of two stages. In the first stage a signed distance function is constructed. This function maps a point in 3-space to an estimated signed distance to the unknown surface. The unknown surface is thus approximated by the zero-set of this function: points outside the object have positive, points inside have negative distance. In the second stage the zero-set is extracted using the Marching Cubes algorithm.

The critical step is the first stage, the estimation of the signed distance. To construct it, an oriented tangent plane is associated with each sample point. These tangent planes are a good local approximation to the unknown surface, and the distance of a point p ∈ ℝ³ to the surface is defined to be the distance to the plane associated with the sample point nearest to p. While the estimation of an unoriented tangent plane at a sample point s ∈ ℝ³ only requires a least squares fit to s and its k nearest neighbors, the difficulty is the consistent orientation of these planes. Consider two sample points s_i, s_j that are geometrically close. If the sampling is dense and the surface is smooth, then the corresponding normal vectors at these points are assumed to be almost parallel, i.e. |n_i · n_j| ≈ 1. This alone would only give us an unsigned distance function. To get a signed one, we have to ensure that the two normals are not only almost parallel, but also point in approximately the same direction, i.e. n_i · n_j ≈ +1. This condition should hold for all sample points that are sufficiently close to each other.

This can be formulated as a graph optimization problem. The nodes represent the tangent planes and are connected by an edge if the corresponding sample points are sufficiently close. To decide which nodes to connect, an enriched Euclidean Minimum Spanning Tree is constructed. The orientation is propagated by traversing the minimum spanning tree with the cost of an edge (s_i, s_j) being 1 − |n_i · n_j|, therefore favoring the propagation between samples with almost parallel normals. After this orientation step we are able to define the signed distance function as described above.
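The orientation propagation can be sketched as follows. This is a minimal version of the idea, not Hoppe et al.'s implementation: a plain k-NN graph stands in for the enriched EMST, and the circle test data is our own (assuming scipy is available):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, depth_first_order
from scipy.spatial import cKDTree

def orient_normals(points, normals, k=6):
    """Flip unoriented normals into a consistent global orientation by
    propagating along an MST of the k-NN graph with edge cost
    1 - |n_i . n_j| (sketch of the graph optimization described above)."""
    n = len(points)
    _, idx = cKDTree(points).query(points, k=k + 1)
    rows, cols, costs = [], [], []
    for i in range(n):
        for j in idx[i, 1:]:                      # skip the point itself
            rows.append(i); cols.append(int(j))
            costs.append(1.0 - abs(np.dot(normals[i], normals[j])) + 1e-9)
    mst = minimum_spanning_tree(csr_matrix((costs, (rows, cols)), shape=(n, n)))
    order, pred = depth_first_order(mst + mst.T, i_start=0, directed=False)
    out = normals.copy()
    for v in order[1:]:
        if np.dot(out[pred[v]], out[v]) < 0.0:
            out[v] = -out[v]          # align with the already-oriented parent
    return out

# Points on a circle with radial normals whose signs were scrambled.
t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
pts = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]
rng = np.random.default_rng(1)
flips = rng.choice([-1.0, 1.0], size=(100, 1))
oriented = orient_normals(pts, pts * flips)
```

After propagation all normals agree up to one global sign (pointing all inward or all outward), which is exactly what the signed distance function needs.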

Using a variation of the Marching Cubes algorithm, the zero-set of the distance function is extracted. To alleviate the problem of badly shaped triangles, Hoppe et al. collapse small edges in a postprocessing step using an aspect ratio criterion.

Pros/Cons:

+ It can handle large, unstructured point clouds.
+ It is robust in the presence of noisy input data.
+ One can control the output size by adjusting the grid size of the iso-surface extraction step.
– There is no adaptive resolution. The vertices in the output are uniformly distributed, regardless of highly curved or featureless areas of the surface. If one does not want to lose small features, this leads to complex output meshes.
– As already mentioned, the Marching Cubes algorithm can produce badly shaped triangles. This makes a postprocessing step necessary.

2.2.3.2. Reconstruction from range images The approach of Curless and Levoy14 is similar to the method of Hoppe et al., but tuned for handling complex range data. They also build a signed distance function and obtain their result by an iso-surface extraction step. Additionally they take into account the special problems and structures that come with the integration of separate range images:

- Range uncertainty: For every sampled point of a range image one can derive a reliability value.
- Utilization of all range data, including redundant samplings.
- Incremental updating: After each scan a reconstruction can be done and then improved by adding another scan. This should be independent of the order in which the separate scans are processed.
- Robustness: The algorithm should produce stable results even in the presence of noise and outliers in the data.
- Ability to fill holes: When scanning non-trivial objects, there are always regions that cannot be captured because of self-shadowing (cf. Section 2.1.4).

The global distance function is generated by a weighted average of the distance functions of the individual range surfaces. The weighting should be chosen specific to the range scanning technology in order to take the range uncertainty into account. In their case the weights depend on the dot product between the vertex normal and the scanning direction, leading to greater uncertainty in regions measured under a flat angle. The averaging of redundant measurements can reduce sensor noise.

The distance function of a single range image is constructed by triangulating the sampled points using the pixel-grid neighborhood information and assigning a weight to each vertex. To evaluate this function at a point v ∈ ℝ³, this point gets projected onto the range mesh along the sensor’s line of sight. If an intersection x occurs at a triangle t, the weight w is computed by barycentric interpolation of the weights of t’s vertices, and the result is the distance from v to x weighted by w.

The global distance function is evaluated on the voxel grid that is used for the iso-surface extraction afterwards. Whenever an additional range image is acquired, the global distance and weighting functions and their values at the grid vertices are updated.
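The per-voxel accumulation can be sketched as a running weighted average, D ← (W·D + w·d)/(W + w), W ← W + w, which is order-independent as required. This is a minimal sketch assuming the per-scan distances d and weights w have already been sampled onto the grid; the function name and toy values are our own:

```python
import numpy as np

def integrate_scan(D, W, d, w):
    """Fold the signed distances d (with weights w) of one range image
    into the accumulated voxel grids D (distances) and W (weights)."""
    valid = w > 0.0                  # voxels this scan actually observes
    Wn = W[valid] + w[valid]
    D[valid] = (W[valid] * D[valid] + w[valid] * d[valid]) / Wn
    W[valid] = Wn
    return D, W

# Two noisy scans of the same surface; averaging reduces the error.
D, W = np.zeros(10), np.zeros(10)
D, W = integrate_scan(D, W, np.full(10, 0.11), np.full(10, 1.0))
D, W = integrate_scan(D, W, np.full(10, 0.09), np.full(10, 1.0))
```

Because only sums of weights and weighted distances are kept, adding the scans in any order yields the same grid values.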

The hole filling operates not on the reconstructed mesh, but directly on the volume. All points in the volume are classified into one of three states:

- Unseen: The initial value for all voxels.
- Near the surface: These voxels take on the signed distance and weighting values as described above.
- Empty: By performing a space carving step, i.e. by tracing all lines of sight from a sensor's position to the observed points in the corresponding range image, the traversed voxels are marked as empty. This technique is very likely to remove outliers.

Holes in the extracted surface are frontiers between unseen and empty regions. To fill these holes, the iso-surface extraction is not only performed for the zero-set of the distance function, but also between unseen and empty regions. By assigning specific distance and weighting values to empty and unseen grid vertices, this can be done in one single extraction step.

To achieve space efficiency, the volume gets run-length encoded. To achieve time efficiency by faster volume traversal, each range image is resampled so that its scanlines are aligned with the scanlines of the volume grid. Therefore both the range image and the volume can be processed simultaneously in scanline order. This approach is well suited for handling huge amounts of data, as was demonstrated in the Digital Michelangelo project18.

Pros/Cons:

+ Can handle huge amounts of data.
+ Robust (noise, outliers).
+ Output size controllable.
– No adaptive resolution.
– Marching Cubes problems.

2.2.3.3. Others Pulli et al.32 present a method for the reconstruction from a sequence of range maps. They first build a hierarchical octree representation of the object. Every cube that is neither completely inside nor completely outside the object is recursively subdivided up to a maximum level. Afterwards a triangle mesh is extracted from the volume data. Their approach is able to handle noise and outliers and fill holes at missing data. Bajaj, Bernardini and Xu4 build a signed distance function by first computing the α-shape of the object, to which they fit implicit Bernstein-Bézier patches.

2.2.4. Incremental approaches

2.2.4.1. Ball pivoting The Ball-Pivoting Algorithm (BPA) of Bernardini et al.8 generates an interpolating triangle mesh from a given unstructured point cloud. They assume that an oriented normal vector is available for each point and that the sampling density has a global minimum. The normal vectors are used to determine the surface orientation and for consistency checks when generating triangles (all three normals should point in roughly the same direction). Range data automatically fulfills both requirements. Otherwise techniques like the ones described in21;3 can be used to estimate the normal vectors, but this can be quite expensive (orientation problems, Voronoi diagram).

The basic principle of the BPA is very simple and intuitive: we start with a seed triangle and a ball of user-defined radius ρ sitting on this triangle (i.e. interpolating its three vertices). This ρ-ball is pivoted around an arbitrary edge of the current boundary (initially the edges of the seed triangle) until it touches another sample point. The pivoting is basically a circular movement of the ball's center in the plane perpendicularly bisecting the edge, whilst always staying in contact with the two edge vertices. If the ρ-ball hits another sample point on its movement around the edge, a new triangle is created from the edge and this point. The boundary is adjusted accordingly and the pivoting is continued. The update of the boundary front may change its topology, e.g. a loop of edges may be split, or two loops may be merged into one. As the ball walks on the sample points, the mesh grows until the whole connected component is reconstructed. If there are multiple connected components, a new seed triangle is chosen and the process is repeated.
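The geometric condition behind a valid ball position, i.e. that some ρ-sphere through a triangle's three vertices contains no other sample point, can be sketched as follows. This is our own illustrative sketch, not the BPA implementation: the circumcenter formula, function names and the octahedron test data are assumptions (scipy is used for the neighbor query):

```python
import numpy as np
from scipy.spatial import cKDTree

def rho_ball_centers(a, b, c, rho):
    """Centers of the two spheres of radius rho through triangle (a,b,c),
    or None if the circumradius exceeds rho (the ball cannot 'sit')."""
    u, v = b - a, c - a
    n = np.cross(u, v)
    nn = np.dot(n, n)
    if nn < 1e-14:
        return None                              # degenerate triangle
    o = a + (np.dot(u, u) * np.cross(np.cross(u, v), u)
             + np.dot(v, v) * np.cross(v, np.cross(u, v))) / (2.0 * nn)
    h2 = rho ** 2 - np.dot(o - a, o - a)         # squared height above plane
    if h2 < 0.0:
        return None
    off = np.sqrt(h2) * n / np.sqrt(nn)
    return o + off, o - off

def ball_fits(points, tri, rho, tree):
    """True iff some rho-sphere through tri's vertices is empty of others."""
    centers = rho_ball_centers(*points[list(tri)], rho)
    if centers is None:
        return False
    for ctr in centers:
        inside = set(tree.query_ball_point(ctr, rho * (1.0 - 1e-9)))
        if not (inside - set(tri)):              # only the triangle touches
            return True
    return False

# Octahedron vertices: a unit ball seats on face (0,2,4), a smaller
# ball (radius below the face's circumradius ~0.816) cannot.
pts = np.array([[1.0, 0, 0], [-1, 0, 0], [0, 1, 0],
                [0, -1, 0], [0, 0, 1], [0, 0, -1]])
tree = cKDTree(pts)
print(ball_fits(pts, (0, 2, 4), 1.0, tree), ball_fits(pts, (0, 2, 4), 0.5, tree))
```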

A nice feature of the BPA is that it is strongly related to α-shapes: by construction every BPA-generated triangle has an empty circumsphere of radius ≤ ρ (cf. Section 2.2.2.1). Therefore all triangles resulting from pivoting a ρ-ball are a subset of the ρ-shape of the point set. Because of this relationship it can be proven that for a sufficient sampling of a smooth manifold the BPA produces a homeomorphic approximation with an error bound of ρ. Additionally the BPA is guaranteed to generate 2-manifolds, so no cleaning-up step is necessary. By using out-of-core processing techniques this algorithm is able to handle very large datasets (cf.1).

Pros/Cons:

+ Can handle large real-world scans.
+ Out-of-core triangulation possible.
+ Moderate memory consumption (no Delaunay triangulation).
– No adaptive resolution (requires uniform sampling).

2.2.4.2. Interactive approach While the existing techniques are off-line algorithms, the approach of Kobbelt and Botsch23 incorporates user interaction during the surface reconstruction. Their approach generates a mesh approximating hybrid input data (e.g. points, triangles, or even NURBS patches). Resolution and alignment of the triangles can be adapted manually to varying detail levels and quality requirements in different regions of the object (cf. Fig. 4).

The concept behind the user interface is to simulate a virtual 3D scanning device. The input data is displayed in an OpenGL window; the user can view it from arbitrary positions and angles. In an interactive scanning session the user adds one scan after the other to the previously generated model. One iteration consists of placing, scaling (→ resolution) and orienting (→ alignment) the object on-screen, determining the valid region of interest, extracting the patch and automatically stitching it to the already existing mesh.

When rendering geometry with enabled depth buffering, the depth value for every pixel is stored in the z-buffer. While the graphics system uses this information to determine mutual occlusion, the virtual scanner reads this data and unprojects it back into 3-space. The result is a range image containing a 3D sample point for every pixel. This range image is filtered, a user-defined mask selects the region of interest, and the remaining vertices can be trivially triangulated because of the underlying pixel-coordinate parameterization. Zooming in on the object results in a finer triangulation. By rotating the object so that geometric features are horizontally or vertically aligned, the edges of the triangulation will be aligned to these features by construction.

Figure 4: The interactive approach provides an intuitive interface to locally adapt the mesh resolution to specific requirements for the resulting model by simply zooming the point cloud. Here the ear and the eye have been z-buffer scanned with a higher density.

Since the scanning process is mainly the outcome of one rendering step, all objects that can be rendered can also be z-scanned. Hence this approach can also be used for the remeshing of objects with poor triangle quality (e.g. meshes from CAD systems). When scanning point clouds, the sampling is assumed to be sufficiently dense, so that the rendered points form a connected area without (large) holes on the screen. Smaller holes can be interpolated by the graphics hardware by choosing a bigger point size (→ nearest-neighbor interpolation).
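The depth read-back amounts to a gluUnProject-style mapping from pixel coordinates plus z-buffer value back to 3-space. A minimal numpy sketch under an assumed identity camera (the function name and synthetic depth values are our own, not from the paper):

```python
import numpy as np

def unproject(depth, inv_pvm, width, height):
    """Turn a z-buffer (values in [0,1]) into one 3D point per pixel:
    map pixel + depth to normalized device coordinates, then apply the
    inverse of the combined projection-modelview matrix."""
    ys, xs = np.mgrid[0:height, 0:width]
    ndc = np.stack([2.0 * (xs + 0.5) / width - 1.0,
                    2.0 * (ys + 0.5) / height - 1.0,
                    2.0 * depth - 1.0,
                    np.ones_like(depth)], axis=-1)
    pts = ndc @ inv_pvm.T                  # homogeneous unprojection
    return pts[..., :3] / pts[..., 3:4]    # perspective divide

# Identity "camera": NDC equals world space, so only the [0,1] -> [-1,1]
# remapping of depth and pixel centers is visible in the result.
depth = np.full((4, 4), 0.75)
pts = unproject(depth, np.eye(4), 4, 4)
```

The resulting (height, width, 3) grid is exactly the pixel-parameterized range image that makes the subsequent triangulation trivial.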

If the acquired scan overlaps with the previously generated model, an automatic quality mask ensures that the better part is kept. This requires a projection of the new scan onto the old mesh. This expensive quadratic search can be reduced to one ID-rendering step that is done by the graphics hardware. The remaining patch gets stitched into the existing mesh using a modification of the mesh zippering algorithm of Turk and Levoy36.

By out-sourcing the computationally expensive tasks to the graphics hardware (subsampling, range image, mesh projection), the program stays interactive even for large input data. The amount of input data only affects the rendering speed; the scanning and stitching only depend on the size of the object in screen space. The memory consumption is much lower than for most other approaches, since no additional space-partitioning structures (e.g. Delaunay triangulation, Voronoi diagram or search structures) have to be generated.

An extension of this approach that uses feature-sensitive sampling instead of the z-buffer in order to avoid aliasing artifacts at curved feature lines is presented in11 (cf. Fig. 5).

Figure 5: Using feature-sensitive sampling for curved feature lines enables optimal alignment of the mesh to the underlying geometry.

Pros/Cons:
+ Interactivity: the user can directly influence the result.
+ Adaptive resolution and adjustable alignment.
+ Fast by using graphics hardware.
+ Low memory consumption.
– Interactivity: no automatic algorithm.

2.2.4.3. Others Boissonnat10 starts with the edge connecting the two closest sample points and generates an interpolating mesh by iteratively adding triangles to the boundary front. Mencl and Müller27 build a surface description graph, i.e. an enhanced Euclidean Minimum Spanning Tree, perform a feature detection step and finally fill this wireframe model with triangles, also leading to an interpolating mesh. The following methods all do reconstruction from range images. Soucy and Laurendeau19 use Venn diagrams to partition the input data into non-overlapping regions which are re-parameterized and merged together. In a similar approach, Turk and Levoy36 incrementally take scans, erode overlapping parts and zipper them together, followed by a consensus geometry step in order to minimize registration errors. Rutishauser et al.33 merge depth images by taking into account the error along the sensor's line of sight, followed by a retriangulation step for the redundant data.

References

1. J. Abouaf. The Florentine Pietà: Can visualization solve the 450-year-old mystery? IEEE Computer Graphics and Applications, 19:6–10, 1999.

2. N. Amenta and M. Bern. Surface reconstruction by Voronoi filtering. In Annual ACM Symposium on Computational Geometry, 1998.

3. N. Amenta, M. Bern, and M. Kamvysselis. A new Voronoi-based surface reconstruction algorithm. In SIGGRAPH 98 Conference Proceedings, pages 415–422, 1998.

4. C. L. Bajaj, F. Bernardini, and G. Xu. Automatic reconstruction of surfaces and scalar fields from 3D scans. In SIGGRAPH 95 Conference Proceedings, pages 109–118, 1995.

5. S. T. Barnard and M. A. Fischler. Computational stereo. ACM Computing Surveys, 14(4):553–572, 1982.

6. M. Bern and D. Eppstein. Mesh generation and optimal triangulation. World Scientific, 1992.

7. F. Bernardini, C. L. Bajaj, J. Chen, and D. R. Schikore. Automatic reconstruction of 3D CAD models from digital scans. International Journal of Computational Geometry and Applications, 9(4&5):327–370, 1999.

8. F. Bernardini, J. Mittleman, H. Rushmeier, C. Silva, and G. Taubin. The ball-pivoting algorithm for surface reconstruction. IEEE Transactions on Visualization and Computer Graphics, 5(4):349–359, 1999.

9. P. J. Besl and N. D. McKay. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239–258, 1992.

10. J.-D. Boissonnat. Geometric structures for three-dimensional shape representation. ACM Transactions on Graphics, 3(4):266–286, 1984.

11. M. Botsch, Ch. Rössl, and L. Kobbelt. Feature sensitive sampling for interactive remeshing. Preprint, 2000.

12. Y. Chen and G. Medioni. Object modelling by registration of multiple range images. International Journal of Image and Vision Computing, 10(3):145–155, 1992.

13. B. Curless and M. Levoy. Better optical triangulation through spacetime analysis. Technical Report CSL-TR-95-667, Stanford University, Computer Systems Laboratory, 1995.

14. B. Curless and M. Levoy. A volumetric method for building complex models from range images. In SIGGRAPH 96 Conference Proceedings, pages 303–312, 1996.

15. P. E. Debevec, C. J. Taylor, and J. Malik. Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach. In SIGGRAPH 96 Conference Proceedings, pages 11–20, 1996.

16. H. Edelsbrunner. Weighted alpha shapes. Technical Report UIUCDCS-R-92-1760, Department of Computer Science, University of Illinois, Urbana-Champaign, IL, 1992.

17. H. Edelsbrunner and E. P. Mücke. Three-dimensional alpha shapes. ACM Transactions on Graphics, 13(1):43–72, 1994.

18. M. Levoy et al. The Digital Michelangelo Project: 3D scanning of large statues. In SIGGRAPH 00 Conference Proceedings, to appear.

19. H. Gagnon, M. Soucy, R. Bergevin, and D. Laurendeau. Registration of multiple range views for automatic 3-D model building. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 581–586, 1994.

20. J. Heikkilä and O. Silvén. A four-step camera calibration procedure with implicit image correction. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 1106–1112, 1997.

21. H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle. Surface reconstruction from unorganized points. In Computer Graphics (SIGGRAPH 92 Conference Proceedings), volume 26, pages 71–78, 1992.

22. S. B. Kang, J. A. Webb, C. L. Zitnick, and T. Kanade. An active multibaseline stereo system with active illumination and real-time image acquisition. In Proc. International Conference on Computer Vision, 1995.

23. L. Kobbelt and M. Botsch. An interactive approach to point cloud triangulation. In Computer Graphics Forum (Proc. Eurographics 2000), to appear.

24. E. Kruppa. Zur Ermittlung eines Objektes aus zwei Perspektiven mit innerer Orientierung. Sitz.-Ber. Akad. Wiss., Wien, Math. Naturw. Kl., Abt. IIa, 122:1939–1948, 1913.

25. W. E. Lorensen and H. E. Cline. Marching cubes: a high resolution 3D surface construction algorithm. In Computer Graphics (SIGGRAPH 87 Conference Proceedings), volume 21, pages 163–170, 1987.

26. T. Masuda and N. Yokoya. A robust method for registration and segmentation of multiple range images. Computer Vision and Image Understanding, 61(3):295–307, 1995.

27. R. Mencl and H. Müller. Graph-based surface reconstruction using structures in scattered point sets. In Proceedings of the Conference on Computer Graphics International, pages 298–311. IEEE Computer Society, 1998.

28. R. Mencl and H. Müller. Interpolation and approximation of surfaces from three-dimensional scattered data points. In Proceedings of Eurographics ’98, State of the Art Reports, 1998.

29. M. Okutomi and T. Kanade. A multiple-baseline stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(4):353–363, 1993.

30. F. P. Preparata and M. I. Shamos. Computational geometry: an introduction. Springer, 1985.

31. K. Pulli. Surface Reconstruction and Display from Range and Color Data. PhD thesis, University of Washington, 1997.

32. K. Pulli, T. Duchamp, H. Hoppe, J. McDonald, L. Shapiro, and W. Stuetzle. Robust meshes from multiple range maps. In Proc. IEEE Int. Conf. on Recent Advances in 3-D Digital Imaging and Modeling, 1997.

33. M. Rutishauser, M. Stricker, and M. Trobina. Merging range images of arbitrarily shaped objects. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 573–580, 1994.

34. M. Teichmann and M. Capps. Surface reconstruction with anisotropic density-scaled alpha shapes. In IEEE Visualization ’98 Conference Proceedings, pages 67–72, 1998.

35. R. Tsai. A versatile camera calibration technique for high-accuracy 3-D machine vision metrology using off-the-shelf TV cameras and lenses. In L. Wolff, S. Shafer, and G. Healey, editors, Radiometry (Physics-Based Vision). Jones and Bartlett, 1992.

36. G. Turk and M. Levoy. Zippered polygon meshes from range images. In SIGGRAPH 94 Conference Proceedings, pages 311–318, 1994.

37. Z. Wang and A. Jepson. A new closed-form solution for absolute orientation. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 129–134, 1994.

38. S. Weik. Registration of 3-D partial surface models using luminance and depth information. In Proc. IEEE Int. Conf. on Recent Advances in 3-D Digital Imaging and Modeling, pages 93–100, 1997.

39. M. Woo, J. Neider, and T. Davis. OpenGL Programming Guide: The Official Guide to Learning OpenGL, Version 1.1. Second edition. Addison-Wesley, 1996.

40. Z. Zhang. Iterative point matching for registration of free-form curves and surfaces. International Journal of Computer Vision, 13(2):119–152, 1994.


3. Discrete differential geometry

3.1. Discrete curvature

While differential geometry analyzes surfaces that are sufficiently often differentiable – which does not apply to meshes directly – discrete differential geometry is based on the fact that meshes can be interpreted as approximations of such smooth surfaces. In the following we start with a short survey of differential geometry, where we briefly explain the most important geometric intrinsics, i.e. surface properties that depend purely on the geometry and not on its specific parameterization. After that survey we present some popular discretization methods for various geometric invariants.

3.1.1. Geometric intrinsics

Let S be a sufficiently smooth surface and q ∈ S be a point on that surface.

Surface normal vector
The normal vector ~n at q is a unit vector that is perpendicular to the surface.

Normal curvature
The normal curvature κ(T) is the curvature of the planar curve at q that results if one intersects the surface S with a plane that contains the surface normal direction ~n. Here T is a unit vector inside the tangent plane at q, i.e. <~n, T> = 0, that specifies the direction of the normal cut.

The tensor of curvature
The tensor of curvature assigns to each q the function that measures the normal curvature κ(T) in the direction determined by T. As a function of the tangent vector T, the normal curvature can be written as

\[ \kappa(T) = \begin{pmatrix} t_x \\ t_y \end{pmatrix}^{T} \begin{pmatrix} \kappa_{11} & \kappa_{12} \\ \kappa_{21} & \kappa_{22} \end{pmatrix} \begin{pmatrix} t_x \\ t_y \end{pmatrix}, \qquad (1) \]

where t_x and t_y are the coordinates of T in an arbitrary orthonormal basis of the tangent space and κ12 = κ21. The maximal normal curvature κ1 and the minimal normal curvature κ2 are called principal curvatures; the associated tangent vectors T1 and T2 are called principal directions.

It is always possible to choose an orthonormal basis of the tangent space built from two principal directions T1 and T2. Using such a basis, equation (1) simplifies to Euler's theorem:

\[ \kappa(\theta) = \kappa(T) = \kappa_1 \cos^2(\theta) + \kappa_2 \sin^2(\theta), \qquad (2) \]

where θ is the angle between T and T1.

Gaussian curvature
The Gaussian curvature K is defined to be the determinant of the matrix that defines the quadratic form (1). If we change the orientation of the surface normal vector, the Gaussian curvature does not change. If K > 0, both principal curvatures have the same sign; for K < 0 they have different signs. K can be expressed in terms of the principal curvatures:

\[ K = \kappa_1 \kappa_2. \qquad (3) \]

Mean curvature
The mean curvature H is defined to be half the trace of the matrix in (1). It can be expressed in terms of the principal curvatures:

\[ H = \frac{\kappa_1 + \kappa_2}{2}. \qquad (4) \]

If we change the orientation of the surface normal vector, the mean curvature changes its sign.

The mean curvature can be interpreted as the average of the normal curvatures, since it satisfies

\[ H = \frac{1}{\pi} \int_0^{\pi} \kappa(\theta)\, d\theta. \]

3.1.2. Discretization techniques

Surface normal vector
A popular way to discretize the normal vector ~n is to use an average of the unit normals of the triangular faces that surround the vertex. Various averages occur in the literature, e.g. the arithmetic, the area-weighted, or the angle-weighted average [8].
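The averaging variants above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a closed triangle fan given by the ordered 1-ring of the vertex, and the function and parameter names are ours.

```python
import numpy as np

def vertex_normal(vertex, ring, weighting="angle"):
    """Estimate the unit normal at `vertex` from its ordered, closed 1-ring.

    `ring` lists the neighbor positions in counterclockwise order; each
    consecutive pair (ring[j], ring[j+1]) spans a triangle with `vertex`.
    `weighting` selects one of the averages mentioned in the text:
    "arithmetic", "area" or "angle" (names are ours).
    """
    v = np.asarray(vertex, dtype=float)
    n = np.zeros(3)
    m = len(ring)
    for j in range(m):
        e1 = np.asarray(ring[j], dtype=float) - v
        e2 = np.asarray(ring[(j + 1) % m], dtype=float) - v
        face_n = np.cross(e1, e2)        # un-normalized face normal
        area2 = np.linalg.norm(face_n)   # twice the triangle area
        if area2 == 0.0:
            continue                     # skip degenerate triangles
        if weighting == "arithmetic":
            w = 1.0
        elif weighting == "area":
            w = area2
        else:                            # inner angle between the two edges
            w = np.arccos(np.clip(np.dot(e1, e2) /
                  (np.linalg.norm(e1) * np.linalg.norm(e2)), -1.0, 1.0))
        n += w * face_n / area2          # weighted sum of unit face normals
    return n / np.linalg.norm(n)
```

For a vertex surrounded by a planar fan, all three weightings reproduce the plane normal exactly; they differ only on irregular configurations.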

Normal curvature
Given a normal vector ~n at a vertex q, we can discretize the normal curvature in the direction of the unit tangent vector T_j that results if we project a vertex q_j adjacent to q into the tangent plane defined by ~n. A frequently used discretization for such a normal curvature is given by the formula

\[ \kappa(T_j) = \frac{2\,\langle q_j - q,\ \vec{n} \rangle}{\langle q_j - q,\ q_j - q \rangle}. \qquad (5) \]

This equation results if one discretizes the formula for the continuous case [11], but there is also a geometric explanation: it can be interpreted as interpolating the vertices q and q_j with a circle whose center lies on the line defined by q and ~n, and using the inverse of the resulting radius as normal curvature [9].
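Equation (5) is a one-liner in code. A sketch (function name is ours); note that on a unit sphere with outward normal it returns exactly -1 for every neighbor, which matches the circle interpretation:

```python
import numpy as np

def normal_curvature(q, n, qj):
    """Discrete normal curvature at q toward neighbor qj, equation (5):
    the signed inverse radius of the circle through q and qj whose
    center lies on the line through q in direction n."""
    d = np.asarray(qj, float) - np.asarray(q, float)
    return 2.0 * np.dot(d, n) / np.dot(d, d)
```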

Local polynomials
A popular method to discretize geometric intrinsics is to interpolate or approximate a local submesh by polynomials. In practice the most common type are quadratic polynomials

\[ f(x, y) = a_1 x^2 + a_2 y^2 + a_3 xy + a_4 x + a_5 y + a_6. \]

To be able to use this approach, one requires a planar parameter domain and has to assign planar coordinates to every


vertex in a local neighborhood of q. Perhaps the best-known approach is to first estimate a surface normal vector at the vertex and then project the vertices into the corresponding tangent plane. However, this requires that the estimated normal is chosen accurately: the surface triangulation induces an ordering on the adjacent neighbors, and this order should be preserved in the parameterization. If the projection plane is badly chosen, this is not guaranteed. A solution is to use the exponential map [12] of the 1-neighborhood around q. This operator maps the neighborhood onto the plane while preserving the distances of the vertex to its neighbors and the ratios of the adjacent angles. The drawback is that this approach is more expensive to compute because it requires trigonometric functions.

Quadratic interpolation requires that the vertex has valence 5 (six interpolation conditions for six coefficients). Therefore, to be able to use all vertex information for arbitrary vertex valences, one uses least squares approximation. Here the most efficient way to solve the problem is the normal equations approach [5], since this mainly involves inverting a symmetric matrix of low dimension.
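A hedged sketch of this fit, assuming parameter coordinates (u, v) and height values z are already given for the neighborhood (function names and the curvature formulas for a graph surface are standard, but the names are ours):

```python
import numpy as np

def fit_quadratic(uv, z):
    """Least-squares fit of f(u,v) = a1 u^2 + a2 v^2 + a3 uv + a4 u
    + a5 v + a6, solved via the normal equations A^T A a = A^T z.
    np.linalg.solve raises if the points lie on a conic (the special
    case mentioned in the text)."""
    u, v = np.asarray(uv, float).T
    A = np.column_stack([u * u, v * v, u * v, u, v, np.ones_like(u)])
    return np.linalg.solve(A.T @ A, A.T @ np.asarray(z, float))

def curvatures_at_origin(a):
    """Gaussian and mean curvature of the graph of f at (0,0), from the
    first and second fundamental forms of the polynomial."""
    fu, fv = a[3], a[4]
    fuu, fvv, fuv = 2 * a[0], 2 * a[1], a[2]
    g = 1 + fu * fu + fv * fv
    K = (fuu * fvv - fuv * fuv) / g**2
    H = ((1 + fv * fv) * fuu - 2 * fu * fv * fuv
         + (1 + fu * fu) * fvv) / (2 * g**1.5)
    return K, H
```

Sampling the paraboloid z = u^2 + v^2 on a 3x3 grid recovers K = 4 and H = 2 at the origin, the exact values for that surface.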

Once the polynomial is determined, its geometric invariants can be used as discretization values. For the discrete curvatures this only requires determining the first and second fundamental forms of the polynomial.

However, problems occur if the parameter values assigned to the vertices lie on a curve of degree 2. In that case the least squares approximation fails, and a special case treatment – e.g. reducing the number of basis functions – is necessary.

Taubin's approach
Taubin [11] proposed an algorithm to estimate the tensor of curvature. The principal curvatures and principal directions are obtained by computing in closed form the eigenvalues and eigenvectors of certain 3×3 symmetric matrices defined by integral formulas, which are closely related to the matrix representation of the tensor of curvature. As input, the algorithm requires discretized normal vectors and discretized normal curvatures (equation (5)).

Moreton and Séquin's approach
Another algorithm to estimate the tensor of curvature at q was derived by Moreton and Séquin [9]. The idea behind this approach is to use the fact that the normal curvature distribution cannot be arbitrary, but is determined by Euler's theorem (2). The 2×2 matrix occurring in equation (1) can be expressed as

\[ K = \begin{pmatrix} e_x & e_y \\ -e_y & e_x \end{pmatrix} \begin{pmatrix} \kappa_1 & 0 \\ 0 & \kappa_2 \end{pmatrix} \begin{pmatrix} e_x & e_y \\ -e_y & e_x \end{pmatrix}^{-1}, \]

where e_x and e_y are the coordinates of the principal direction T1 in the chosen orthonormal basis. Moreton and Séquin's idea now is to estimate a discrete normal ~n and use the

normal curvatures given by equation (5) to create a linear system. Solving this system in the least squares sense, one can find estimates for the unknown principal curvatures and principal directions. A special case treatment is only necessary if the projections of the adjacent vertices onto the tangent plane are intersection points of two lines passing through the vertex q, so only vertices of valence 3 or 4 may need a special case treatment [10].
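The paper parameterizes the tensor through the rotation shown above; the following simplified sketch (names ours, not the authors' implementation) instead solves the least-squares system for the three entries of the symmetric 2×2 matrix directly and then eigen-decomposes it, which yields the same principal curvatures and directions:

```python
import numpy as np

def curvature_tensor_2d(thetas, kappas):
    """Least-squares estimate of the symmetric 2x2 curvature tensor from
    sampled tangent directions (angles in the tangent plane) and the
    corresponding discrete normal curvatures.  Using Euler's theorem,
    kappa(theta) = k11 cos^2 + 2 k12 sin cos + k22 sin^2; eigenvalues
    and eigenvectors give principal curvatures and directions."""
    t = np.asarray(thetas, float)
    A = np.column_stack([np.cos(t)**2,
                         2.0 * np.sin(t) * np.cos(t),
                         np.sin(t)**2])
    k11, k12, k22 = np.linalg.lstsq(A, np.asarray(kappas, float),
                                    rcond=None)[0]
    evals, evecs = np.linalg.eigh(np.array([[k11, k12], [k12, k22]]))
    return evals, evecs  # ascending: (kappa_min, kappa_max)
```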

Estimating the Gaussian curvature
An advantage of the last three methods is that once the principal curvatures are discretized, one can also derive other important invariants from that information, such as the Gaussian curvature or the mean curvature via eq. (3) or (4). If only the Gaussian curvature is needed, one can use the spherical image method or the angle deficit method. The idea of these approaches is to discretize a characterization of the Gaussian curvature of a smooth surface that follows from a theorem by Rodrigues (see e.g. [3]).

Using the angle deficit method, one can discretize the Gaussian curvature at q with the well-known formula

\[ K = \frac{2\pi - \sum_j \theta_j}{\tfrac{1}{3} \sum_j A_j}, \]

where the θ_j are the inner angles adjacent to q and the A_j are the corresponding triangle areas.

3.2. Quality control for meshed surfaces

This section gives a brief overview of techniques for rating the quality of triangular meshes. Quality control may be applied to the raw data immediately after mesh generation or to preprocessed data (cf. Section 2).

The techniques presented here are used to visualize surface quality in order to detect surface defects. They can be implemented in a straightforward way while still being effective, and they can be used interactively. The methods are adapted from smooth free-form surfaces (e.g. NURBS), and we may take advantage of the previously introduced concepts of discrete differential geometry (cf. Section 3).

3.2.1. What is the quality of a triangle mesh?

The answer to this question depends on what the mesh is used for. Different applications may require different quality criteria. We concentrate on the following:

Smoothness: Take a brief look at the situation for NURBS patches: they have inherent Cn smoothness by construction (e.g. C2 for cubic splines), and one is usually interested in smooth connections across patch boundaries (e.g. C1). The situation for triangle meshes is similar. Naturally, one would not expect something like "patch boundaries" inside a mesh, but such boundaries may emerge, e.g. when



Figure 6: Isophotes. The middle part of the surface was blended between the three tubes; the boundary conditions are C1. The discontinuities of the curvature at the joints are hard to detect in the flat shaded image (left), but are clearly visualized by isophotes (middle), since C1 blends cause C0 isophotes. The right image sketches the rendering of isophotes with 1D textures: the illumination values are calculated for the vertices of a triangle from the vertex normals and the light direction. These values are used as texture coordinates. The texel denoting the iso-value is colored black, so iso-lines are interpolated within the triangle.

filling holes by trivially triangulating the hole and subsequent fairing.

Fairness: A smoothness criterion is not sufficient for generating high quality, highly aesthetic surfaces. A so-called fairness criterion must be added. Usually this is defined as low variation of curvature, in contrast to continuity of curvature (smoothness). So we will use the results of the previous section for rating fairness (see also Section 5.2).

Shape of triangles: Some applications require "well shaped" triangles (e.g. simulations using Finite Element Methods (FEM) or Computational Fluid Dynamics (CFD)), while other applications (e.g. rendering) may neglect this property. So we need to check constraints on shape parameters such as angles and area (cf. Section 5.1.7).

3.2.2. Visualizing smoothness

Our aim is interactive surface visualization; hence we try to get maximum advantage from graphics hardware. Therefore a given surface is tessellated into a set of triangles for rendering (in contrast to non-interactive rendering techniques like ray-tracing). As we can think of the mesh as an accurate tessellation of, e.g., a set of NURBS patches, we will use the same techniques for quality control that are used for smooth surfaces (cf. e.g. [6]).

3.2.2.1. Specular shading The simplest visualization technique is to use standard lighting and shading (Phong illumination model, flat or Gouraud shading) as provided by the graphics subsystem [4, 13].

The local illumination of a vertex depends on the position of the light sources, on the surface normals, and on the view point/direction if a specular lighting term is used.

This approach to surface interrogation is the most straightforward one, but it is difficult to find minor perturbations of a surface this way (cf. Fig. 6, left).

3.2.2.2. Isophotes Isophotes are lines of constant illumination on a surface. Here one assumes that there is only diffuse or Lambertian reflection. As a consequence, isophotes are independent of the view point. For this application a single, infinitely distant point light source is assumed, so the illumination I_P of a surface point P is given by

\[ I_P = \max\{\, \langle N_P, L \rangle,\ 0 \,\}, \]

where N_P is the surface normal at P and L is the direction of light (cf. [4]). Both vectors are normalized, so the value of I_P lies in the interval [0, 1]. Now some constant values I_{c,j} ∈ [0, 1] (e.g. I_{c,j} = j/n, j = 0, ..., n) are chosen and the isophotes/iso-curves I = I_{c,j} are rendered.

The resulting image makes it easier to detect irregularities on the surface compared to standard shading. The user can visually trace the lines, rate their smoothness, and transfer these observations to the surface: if the surface is Ck continuous, then the isophotes are Ck-1 continuous (cf. Fig. 6).

There are two approaches to rendering iso-curves such as isophotes. The first is to explicitly extract the curves or curve segments and then display them as lines. Here, in principle, the same algorithms as for extracting iso-surfaces can be applied (Marching Cubes [7], cf. Section 2.2). Fortunately, the situation for extracting a curve on a surface is easier ("marching triangles").
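The "marching triangles" idea can be sketched as follows, assuming vertex normals are given; this is our illustration, not the report's code. Per triangle, the iso-value can cross at most two edges, and the crossing points are found by linear interpolation of the illumination values:

```python
import numpy as np

def isophote_segments(vertices, triangles, light, iso):
    """Extract isophote line segments on a triangle mesh.

    `vertices` is a list of (position, normal) pairs, `triangles` a list
    of index triples.  I = max(<N, L>, 0) is computed per vertex, then
    the iso-value is linearly interpolated along the triangle edges.
    """
    L = np.asarray(light, float)
    L = L / np.linalg.norm(L)
    I = [max(np.dot(n / np.linalg.norm(n), L), 0.0) for _, n in vertices]
    P = [np.asarray(p, float) for p, _ in vertices]
    segments = []
    for tri in triangles:
        pts = []
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            ia, ib = I[a], I[b]
            if (ia - iso) * (ib - iso) < 0:      # iso-value crossed on edge
                t = (iso - ia) / (ib - ia)
                pts.append((1 - t) * P[a] + t * P[b])
        if len(pts) == 2:                        # one segment per triangle
            segments.append(pts)
    return segments
```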

The second approach takes advantage of the graphics hardware and allows direct rendering of isophotes from illumination values at the vertices of a triangle mesh:


A 1-dimensional texture is initialized with a default color C. Illumination values I_P are now treated as texture coordinates, and for the isophote values I_{c,j} the corresponding texels are set to a color C_j ≠ C. The illumination values are evaluated at every vertex and used as texture coordinates. With this setup the graphics subsystem will linearly interpolate the 1D texture within the triangles, resulting in a rendered image of the isophotes (colors C_j) drawn onto the surface (color C) [13] (cf. Fig. 6).

The 1D texture approach benefits more from the graphics hardware than explicitly calculating line segments. A drawback is that the width of the curves varies due to texture interpolation.

3.2.2.3. Reflection lines In contrast to isophotes, a specular surface is assumed for reflection lines. As a consequence, reflection lines change when the point of view is modified, i.e. when the object is rotated or translated. The light source consists of a set of "light-lines" placed in 3D space. Normally, the light-lines are parallel lines (cf. Fig. 7).

Traditionally, reflection lines have been used in the process of designing cars: an arrangement of parallel fluorescent tubes is put in front of the car model to survey the surface and its reflection properties.


Figure 7: Reflection lines. The light source consists of two parallel lines that are reflected by the surface. The reflection property requires that the angles of incidence (light, normal) are equal to the angles of emission (viewing direction, normal).

Under the assumption that the light source is infinitely far away from the object, environment mapping [13] can be used to display reflection lines in real time. A texture for environment mapping is generated once by ray-tracing the light sources over a sphere. The graphics subsystem will then automatically generate appropriate texture coordinates for every vertex depending on its relative position and normal.

Reflection lines are an effective and intuitive tool for surface interrogation. If the surface is Ck continuous, then the reflection lines are Ck-1 continuous. Just like isophotes, they can be efficiently rendered by taking advantage of graphics hardware, and they are also sensitive to small surface perturbations. In addition, the fact that a real-world process is simulated makes their application very intuitive even for inexperienced users. Fig. 8 shows reflection lines for a C0, a C1 and a C2 surface. Reflection lines are also used to show surface quality in Section 5.2.

3.2.3. Smoothness vs. fairness

The first two quality criteria listed above are smoothness and fairness. Smoothness denotes the mathematical notion of continuous differentiability (Cn). While smoothness is necessary in order to guarantee high quality surfaces, it is not sufficient.

A surface may be smooth in a mathematical sense but still look awkward from an aesthetic point of view. This is where fairness comes in: fairness is an aesthetic measure of "well-shapedness"; it is therefore more difficult to define in technical terms than smoothness (distribution vs. variation of curvature) [1].

An important rule is the so-called principle of simplest shape, which is derived from the fine arts. A surface is said to be well shaped if it is simple in design and free of unessential features. So a fair surface meets the mathematically defined design goals (e.g. interpolation, continuity) while being nice looking in this sense. There are several approaches to formulating the latter notion in a precise, mathematical way.

The most common measures for fairness are motivated by physical models, like the strain energy of a thin plate

\[ \int \kappa_1^2 + \kappa_2^2 \; dA, \]

or by differential geometry, like the variation of curvature

\[ \int \left( \frac{\partial \kappa_1}{\partial \vec{e}_1} \right)^{2} + \left( \frac{\partial \kappa_2}{\partial \vec{e}_2} \right)^{2} dA, \]

with principal curvatures κi and principal directions ~ei, i ∈ {1, 2}. In general, some surface energy is defined that punishes

"bad-shapedness", and curvature is used to express these terms since it is independent of the particular parameterization of the surface. A fair surface is then designed by minimizing these energies (cf. Section 5.2). Our current goal is not to improve but to check surface quality, so we need to visualize these energies.

Note that there are also different characterizations of fairness, like an "aesthetic run of isophotes/reflection lines".

3.2.4. Visualizing curvature and fairness

If fairness is expressed in terms of curvature, we have to visualize curvature in order to evaluate fairness. We obtain curvature values at every vertex of a triangle mesh by applying the techniques presented in the previous section (cf. Section 3.1).

3.2.4.1. Curvature values The techniques suggested in the following sections visualize arbitrary "curvature values". Any useful scalar value that is a measure for discrete curvature can be used (cf. textbooks on differential geometry like [3] for detailed explanations). Here are some examples:


Figure 8: Reflection lines on C0, C1 and C2 surfaces. One clearly sees that the differentiability of the reflection lines is one order lower, i.e. C-1, C0 and C1 respectively.

- the Gaussian curvature K = κ1 κ2, which indicates the local shape of the surface (elliptic for K > 0, hyperbolic for K < 0, parabolic for K = 0 ∧ H ≠ 0, and flat for K = 0 ∧ H = 0). A local change of the sign of K may indicate an (even very small) perturbation of the surface.
- the mean curvature H = (κ1 + κ2)/2
- the maximum curvature κmax = max{|κ1|, |κ2|}
- the total curvature κ1² + κ2², which is used in the previously mentioned thin plate energy.

3.2.4.2. Color coding scalar values We visualize scalar values directly on the surface, i.e. every vertex is assigned a color value. The graphics subsystem will then linearly interpolate colors within the triangles. All lighting calculations must be turned off for this purpose. There are many ways to color code; one of them is the following:

Assume a scalar value d_i is given for every vertex V_i; e.g., d_i may denote any type of curvature. Now let d_max := max{d_i} and d_min := min{d_i}. Data values are scaled by the function scale: [d_min, d_max] → [-1, 1] with

\[ \mathrm{scale}: d \mapsto \begin{cases} -d/d_{\min} & : d < 0 \\ \phantom{-}d/d_{\max} & : d \ge 0 \end{cases} \]

Positive and negative values are scaled separately such that the zero level is preserved. Notice that the value 0 is usually of special interest, so d_min ≤ 0 ≤ d_max is assumed. If this is not the case, the origin ("green line", see below) should be shifted appropriately.

The red and blue color components are used to indicate positive resp. negative data values. All vertices and all displayed pixels are to have equal intensity (r + g + b = 1), so the green component is used to "fill up" the intensity. Assume the color components range from 0 to c_max, e.g. c_max = 255. The function rgb: [-1, 1] → [0, c_max]³ assigns to each value an RGB triple with intensity c_max:

\[ \mathrm{rgb}: d \mapsto \begin{cases} (0,\ (1+d)\,c_{\max},\ -d\,c_{\max}) & : d < 0 \\ (d\,c_{\max},\ (1-d)\,c_{\max},\ 0) & : d \ge 0 \end{cases} \]

Fig. 9 shows the RGB mapping on the right side. Zero values are displayed bright green; d_min and d_max result in blue and red, respectively.

Data values d ∈ [d_min, d_max] can now be mapped to RGB values by rgb(scale(d)). Many applications need enhanced contrast in the vicinity of zero and less near d_min and d_max. Therefore a new parameter γ ∈ (0, 1] is introduced that adjusts the "contrast" of the visualized data. The value d is then mapped to an RGB triple by

\[ \mathrm{rgb}(\mathrm{scale}(d)^{\gamma}). \]

For γ = 1 we obtain the original mapping. With decreasing γ, the resolution increases for values near 0, i.e. a greater range of the color table is used for those values. Fig. 9 (left) illustrates the color coding for γ = 1 and γ < 1.

3.2.4.3. Isocurvature lines Isocurvature lines are lines of constant curvature on a surface. They can be displayed similarly to isophotes: instead of illumination values, curvature values are used. If the surface is Ck continuous, then the isocurvature lines are Ck-2 continuous, so isocurvature lines are also very sensitive to discontinuities.

A problem when rendering isocurvature lines with 1D textures may be the wide range of curvature values, which may not map appropriately to the [0, 1] interval of texture coordinates and the actual texels. One solution is to clamp the curvature values to a suitable interval; the other is to explicitly extract the curves and draw them as lines.

3.2.4.4. Lines of curvature Besides the scalar principal curvatures, the principal directions also carry information on the local surface properties. They define a discrete direction field on the surface with one tangential direction per vertex. By linearly interpolating directions over triangles using barycentric coordinates, a continuous field can be defined.

Lines of curvature can then be traced on this direction field. The tracing algorithm does Euler integration steps inside a triangle until an edge is crossed; then a neighboring triangle is entered. Fig. 10 shows lines of curvature.

Figure 9: Color coding. Left: d is mapped to [-1, 1] by scale; the resolution near 0 may be enhanced by using γ < 1, and the transformed value is then coded as (r, g, b). Middle: maximum curvature κmax = max{|κ1|, |κ2|} with γ = 1. Right: with enhanced contrast γ = 1/2.

Figure 10: Lines of curvature. The (signed) maximum curvature is color coded and lines of curvature are superimposed.

The visualized curvature flow may give a very good and intuitive impression of the surface. Alternatively, texture based techniques like line integral convolution (LIC) [2] can also be used on triangle meshes. Tracing and constructing a huge number of lines of curvature is rather expensive compared to the other techniques.

3.2.5. The shape of triangles

Some applications need "well shaped", round triangles in order to avoid numerical problems. This includes numerical simulations based on FEM or CFD. For this purpose, "round" triangles are needed, i.e. the ratio of the radius of the circumcircle to the shortest edge should be as small as possible (cf. Fig. 11).
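The circumradius-to-shortest-edge ratio can be sketched as follows (function name is ours). The ratio is minimal, 1/sqrt(3), for the equilateral triangle and grows without bound as a triangle degenerates:

```python
import numpy as np

def shape_ratio(p0, p1, p2):
    """Ratio of circumradius to shortest edge for a triangle in 3D.
    Uses R = abc / (4 * area); small values mean 'round' triangles."""
    p0, p1, p2 = (np.asarray(p, float) for p in (p0, p1, p2))
    a = np.linalg.norm(p1 - p0)
    b = np.linalg.norm(p2 - p1)
    c = np.linalg.norm(p0 - p2)
    area = 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))
    R = a * b * c / (4.0 * area)   # circumradius
    return R / min(a, b, c)
```

Color coding this value per triangle (cf. the previous section) makes single badly shaped triangles stand out even in complex meshes.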

The most common way to inspect the quality of triangles is to view a wireframe or hidden-line rendered image. This may not be an option for very complex meshes. A straightforward alternative is color coding based on a triangle shape criterion. This helps to identify even single "badly shaped" triangles.

3.2.6. Summary

The most important point about quality control is probably the fact that techniques well known from the interrogation of smooth surfaces can be adapted and used for triangle meshes in a straightforward way. Discrete curvature analysis is the key to achieving this. In addition to smoothness and fairness, there are criteria on triangle shape that may be important for specific applications.

References

1. H. G. Burchard, J. A. Ayers, W. H. Frey, and N. S. Sapidis. Approximation with aesthetic constraints. In Designing Fair Curves and Surfaces, pages 3–28, 1994.

2. B. Cabral and L. C. Leedom. Imaging vector fields using line integral convolution. In SIGGRAPH 93 Conference Proceedings, pages 263–274, 1993.

3. M. P. do Carmo. Differential Geometry of Curves and Surfaces. Prentice–Hall, Englewood Cliffs, New Jersey, 1993.

4. J. D. Foley, A. van Dam, S. K. Feiner, and J. F. Hughes. Computer Graphics: Principles and Practice, second edition in C. Addison–Wesley, 1996.

5. G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 1989.

6. H. Hagen, S. Hahmann, Th. Schreiber, Y. Nakayima, B. Wördenweber, and P. Hollemann-Grundstedt. Surface interrogation algorithms. IEEE Computer Graphics and Applications, 12(5):53–60, 1992.

7. W. E. Lorensen and H. E. Cline. Marching cubes: a high resolution 3D surface construction algorithm. In Computer Graphics (SIGGRAPH 87 Conference Proceedings), volume 21, pages 163–170, 1987.

Figure 11: A triangle mesh optimized for smooth appearance, leading to flat triangles (left), and for triangle shape, leading to surface artifacts (right).

8. D. S. Meek and D. J. Walton. On surface normal and Gaussian curvature approximations given data sampled from a smooth surface. Computer Aided Geometric Design, 17:521–543, 2000.

9. H. P. Moreton and C. H. Séquin. Functional optimization for fair surface design. In Computer Graphics (SIGGRAPH 92 Conference Proceedings), volume 26, pages 167–176, 1992.

10. R. Schneider and L. Kobbelt. Geometric fairing of irregular meshes for free-form surface design. Submitted.

11. G. Taubin. Estimating the tensor of curvature of a surface from a polyhedral approximation. In Proc. International Conference on Computer Vision, pages 902–907, 1995.

12. W. Welch and A. Witkin. Free-form shape design using triangulated surfaces. In SIGGRAPH 94 Conference Proceedings, pages 247–256, 1994.

13. M. Woo, J. Neider, and T. Davis. OpenGL Programming Guide: The Official Guide to Learning OpenGL, Version 1.1. Second edition. Addison–Wesley, 1996.


4. Coarse-to-fine hierarchies

Coarse-to-fine hierarchies are built up by successively refining a coarse base mesh (i.e. by inserting new vertices and triangles). The resulting degrees of freedom can be used in two ways: subdivision schemes position the new vertices such that the resulting meshes become smooth; the geometric information inherent in the base mesh is therefore not changed. Remeshing methods, on the other hand, position the vertices such that more and more geometric detail becomes visible by sampling points from the original surface; the resulting meshes therefore need not be smooth. The following two sections describe these approaches in more detail.

4.1. Stationary subdivision

4.1.1. Introduction

Subdivision schemes have become increasingly popular in recent years because they provide a uniform and efficient way to describe smooth curves and surfaces. Their beauty lies in their elegant mathematical formulation and their simple implementation: given an arbitrary control polygon, perform the following subdivision step ad infinitum (see Figure 12):

1. Splitting step: Insert a new vertex at the midpoint of each edge.

2. Averaging step: Relocate each vertex according to given refinement rules.

If the refinement rules are chosen carefully, the resulting control polygons converge to a smooth limit curve. In practice the algorithm is stopped after a sufficient number of subdivision steps and the resulting control polygon is rendered as an approximation of the curve.
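The two steps above can be sketched for one concrete instance, cubic B-spline subdivision on a closed control polygon (our choice of scheme for illustration; the text describes the general pattern, not this specific rule set):

```python
import numpy as np

def subdivide_closed(points, steps=1):
    """Cubic B-spline subdivision of a closed control polygon.

    Splitting step: insert the midpoint of each edge.
    Averaging step: relocate every old vertex to
    (p[i-1] + 6 p[i] + p[i+1]) / 8.
    """
    P = np.asarray(points, float)
    for _ in range(steps):
        n = len(P)
        Q = np.empty((2 * n, P.shape[1]))
        Q[1::2] = 0.5 * (P + np.roll(P, -1, axis=0))              # split
        Q[0::2] = (np.roll(P, 1, axis=0) + 6 * P
                   + np.roll(P, -1, axis=0)) / 8.0                # average
        P = Q
    return P
```

Repeating the step shrinks the polygon toward a smooth (here C2) limit curve inside the convex hull of the original control points.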

Figure 12: Subdivision step: original polygon (upper left), after the splitting step (upper right), after the averaging step (lower left), and limit curve (lower right).

Even though subdivision schemes have many applications in the univariate case, their real strengths are only revealed in the bivariate case: they can easily handle surfaces of arbitrary topology while automatically maintaining continuity properties.

In the following we give a brief historical overview and cite the necessary references. Subdivision methods (for curves) were introduced and mathematically analyzed for

the first time by G. de Rham in 1947. However, only their re-invention in 1974 by G. M. Chaikin [5] made them available to the computer graphics community. Chaikin used them to derive a simple algorithm for the high-speed generation of curves. In 1978 the concept of subdivision was carried over from curves to surfaces: Catmull and Clark described a generalization of bicubic tensor product B-splines [4], and Doo and Sabin introduced a generalization of biquadratic tensor product B-splines [9]. In the following decades many new schemes were proposed: the Butterfly scheme [12], the Loop scheme [28], and variational schemes [20] (see also Section 5.2), to name only a few. However, the subdivision rules of the early schemes were only "ad hoc" generalizations of known rules for regular cases (knot insertion) and lacked a precise mathematical analysis with respect to convergence behavior and differentiability properties. A first and important approach was already given by Doo and Sabin: they performed a spectral analysis of the so-called subdivision matrix to prove the convergence of their scheme at extraordinary vertices. This approach was further enhanced by Storry and Ball [2]. A second way to analyze subdivision schemes are the so-called generating functions [10]: this formalism allows the subdivision step to be expressed as a simple multiplication of two formal power series. However, it was not until 1995, when U. Reif introduced the concept of the characteristic map of a subdivision scheme [30], that one was finally able to rigorously prove the continuity properties of subdivision schemes [33].

Nowadays research focuses on applications of subdivision schemes: methods to interpolate points and curves were developed [16, 27], and physical simulations based on subdivision methods were examined [6]. Subdivision techniques are used for animations in computer generated movies [7] as well as in ray-tracing applications [22]. New techniques to efficiently handle and render subdivision surfaces were developed and even implemented in hardware [29]. The modeling power of subdivision schemes was greatly enhanced by introducing schemes that are able to model corners and creases [17, 3]. At present, almost everything one can do with traditional NURBS-based systems can also be achieved by subdivision techniques [32].

Before starting we need some preliminaries. In the following we deal mostly with triangular and quadrangular meshes, i.e. the faces are triangles and quadrangles, respectively. The number of edges emanating from a vertex is called the valence of the vertex. A vertex of a quadrangular mesh is said to be regular if it is an inner vertex of valence 4 or a boundary vertex of valence 3; otherwise it is called extraordinary. Likewise, a vertex of a triangular mesh is called regular if it is an inner vertex of valence 6 or a boundary vertex of valence 4, and extraordinary otherwise.

4.1.2. Catmull–Clark subdivision

In order to get a feeling for subdivision schemes we will describe one of the first of them in more detail: the Catmull–Clark scheme, which was introduced in 1978. It is a generalization of ordinary tensor product B–spline subdivision and is widely used because its quadrangular structure fits well into existing CAD systems.

Let $s(u,v) = \sum_i \sum_j c_{ij}\, N_i^3(u) N_j^3(v)$ be a bicubic tensor product B–spline: the $c_{ij}$ are the control points, arranged in a regular quadrilateral grid (see Figure 13), and the $N_i^3(\cdot)$ are the uniform cubic B–splines over the knot vector $\mathbb{Z}$. The surface $s$ is built of quadrilateral polynomial patches, each of them determined by a regular $4\times 4$ submesh of the original mesh (compact support of the B–splines!).

Figure 13: B–spline subdivision: Each patch of a tensor product B–spline surface is fully determined by a $4\times 4$ submesh of the original mesh (left). Knot insertion yields a refined mesh (right).

By inserting knots we see that $s$ can be rewritten as a tensor product B–spline over the refined knot vector $\frac{1}{2}\mathbb{Z}$, i.e. $s(u,v) = \sum_i \sum_j d_{ij}\, N_i^3(2u) N_j^3(2v)$. The control points $d_{ij}$ constitute a finer quadrilateral mesh (see Figure 13) and are easily computed by the following masks:

Figure 14: Masks for knot–insertion of bicubic tensor product B–splines. The face mask averages the four face corners with weights $(1,1;1,1)/4$, the edge mask weights the six surrounding vertices by $(1,6,1;1,6,1)/16$, and the vertex mask weights the $3\times 3$ neighborhood by $(1,6,1;6,36,6;1,6,1)/64$.

In this and the following figures, already existing edges are shown as solid lines while new edges are dashed. To compute the new position of the vertex marked by a square, one takes the weighted average of the depicted vertices. By repeatedly inserting knots like this, the resulting control meshes converge to the limit surface. Now suppose we have a quadrilateral mesh with extraordinary vertices; still we can interpret every regular $4\times 4$ submesh as the control net of a polynomial patch, but this will leave a hole in the surface (see Figure 15).
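Because the bicubic masks are tensor products of the univariate cubic rules $(1,6,1)/8$ and $(1,1)/2$, one regular refinement step can be sketched by refining along the rows and then along the columns. The function names below are ours, and boundary handling is deliberately ignored (only interior control points are produced):

```python
import numpy as np

def refine_curve(p):
    # One knot-insertion step for a uniform cubic B-spline curve:
    # vertex rule (1, 6, 1)/8 repositions the old interior vertices,
    # edge rule (1, 1)/2 inserts a new vertex on every edge.
    v = (p[:-2] + 6.0 * p[1:-1] + p[2:]) / 8.0
    e = (p[:-1] + p[1:]) / 2.0
    out = np.empty((len(v) + len(e),) + p.shape[1:])
    out[0::2] = e          # new edge vertices
    out[1::2] = v          # repositioned old vertices
    return out

def refine_grid(P):
    # Tensor product: apply the curve rule along both grid directions.
    P = np.stack([refine_curve(row) for row in P])
    P = np.stack([refine_curve(col) for col in P.swapaxes(0, 1)])
    return P.swapaxes(0, 1)
```

A quick sanity check for affine invariance: the weights of each rule sum to 1, so a constant control grid stays constant, and on a linear control polygon the rules reproduce the half-spaced samples.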

In order to carry over the concept of subdivision to extraordinary vertices, Catmull and Clark extended the masks for the regular case by a set of new masks, one mask for each valence (see Figure 16).

Figure 15: Catmull–Clark subdivision: Interpreting each regular $4\times 4$ submesh as the control net of a polynomial patch still leaves holes (left); subdividing the control mesh results in a larger regular region and thus gradually fills these holes with rings of patches (right).

Figure 16: Catmull–Clark subdivision: mask for extraordinary vertices [14]. The center vertex gets the weight $\alpha$, each of its $n$ edge neighbors the weight $\beta$, and each of the $n$ diagonal neighbors the weight $\gamma$. The coefficients depend on the valence $n$ of the center vertex: $\beta = \frac{3}{2n^2}$, $\gamma = \frac{1}{4n^2}$, $\alpha = 1 - n\beta - n\gamma$.

Now they could perform subdivision as above and add a further ring of patches to the surface, thus making the hole smaller (see Figure 15). The new masks were designed such that the resulting control meshes converge to a limit surface which is $C^2$ everywhere except at the extraordinary vertices, where it is $C^1$. Note that the choice of the masks is not unique: there exist other variants which also generalize bicubic tensor product B–splines.
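The extraordinary-vertex coefficients are easy to tabulate; the following sketch (function name ours) computes $\alpha$, $\beta$, $\gamma$ for a given valence and checks that for $n = 4$ the mask collapses to the regular vertex mask $(1,6,1;6,36,6;1,6,1)/64$:

```python
def catmull_clark_weights(n):
    """Catmull-Clark mask weights for an extraordinary vertex of valence n:
    beta for each of the n edge neighbors, gamma for each of the n diagonal
    neighbors, alpha for the center vertex (affine invariance: sum is 1)."""
    beta = 3.0 / (2 * n * n)
    gamma = 1.0 / (4 * n * n)
    alpha = 1.0 - n * beta - n * gamma
    return alpha, beta, gamma
```

For $n = 4$ this yields $(36/64,\ 6/64,\ 1/64)$, i.e. the regular B–spline vertex mask, so the extended rules are consistent with the regular case.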

4.1.3. Analysis of subdivision schemes

There are two major approaches to mathematically analyze the properties of subdivision schemes (convergence behavior, continuity/differentiability of the limit function, reproduction properties): generating functions and spectral analysis of the subdivision matrix.

Generating functions

Generating functions are used to analyze univariate subdivision schemes and the regular parts of bivariate subdivision schemes. To avoid messy multi–indices we will restrict ourselves to the univariate case. We start with a definition:

Let $P = (p_j)_{j=-\infty,\dots,\infty}$ be an arbitrary sequence; then we call the formal series $P(z) = \sum_j p_j z^j$ the generating function (or the symbol) of $P$.


As an example consider the 4–point scheme (see Figure 17):

Figure 17: The 4–point scheme [11]: A polygon $P^i = (p^i_j)$ (solid lines) is mapped to a refined polygon $P^{i+1} = (p^{i+1}_j)$ (dashed lines). Note that this is an interpolatory scheme: $p^{i+1}_{2j} = p^i_j$.

A polygon $P^i = (p^i_j)$ is mapped to a refined polygon $P^{i+1} = (p^{i+1}_j)$ by applying the following two subdivision rules:

$p^{i+1}_{2j} = p^i_j$,
$p^{i+1}_{2j+1} = \frac{1}{16}\left(-p^i_{j-1} + 9p^i_j + 9p^i_{j+1} - p^i_{j+2}\right)$.

In general such a subdivision step can be compactly written in a single equation

$p^{i+1}_j = \sum_{k=-\infty}^{\infty} \alpha_{2k-j}\, p^i_k$,

where the $\alpha_j$ are coefficients depending on the subdivision rules. Note that the index $2k-j$ alternatingly selects the even indexed $\alpha$s or the odd indexed $\alpha$s. In our case we have

$\alpha = (\alpha_j) = \frac{1}{16}\,[\dots,0,0,-1,0,9,16,9,0,-1,0,0,\dots]$.

After some computation we see that the subdivision step can be expressed in the generating function formalism as a simple multiplication of the corresponding symbols:

$P^{i+1}(z) = \alpha(z)\, P^i(z^2)$.

Note that $\alpha(z)$ is the symbol of the sequence $\alpha$! The basic idea of proving convergence to a continuous limit function is to show that the polygons $P^i$ form a Cauchy sequence, which follows if the distance $\|P^{i+1} - P^i\|_\infty$ of two successive polygons decreases exponentially in $i$. Some computation shows that this is the case if the so–called difference scheme, which is given by the symbol

$\alpha'(z) = \frac{z}{1+z}\,\alpha(z)$,

exists (i.e. $\alpha(-1) = 0$) and is contractive. This is the case if

$\max\left(\sum_j |\alpha'_{2j}|,\ \sum_j |\alpha'_{2j+1}|\right) = q < 1$.

So we have an easy criterion to check the convergence of a subdivision scheme to a continuous limit function. Likewise one can prove convergence to higher order continuous functions by examining higher order difference schemes.
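Both the refinement rules and the convergence criterion can be tried out directly. The sketch below (our function names) implements one 4–point step on a closed polygon and computes the contraction factor $q$ of the difference scheme by dividing $z\,\alpha(z)$ by $(1+z)$:

```python
import numpy as np

def four_point_step(p):
    """One step of the interpolatory 4-point scheme on a closed polygon p (n x d)."""
    q = np.empty((2 * len(p),) + p.shape[1:])
    q[0::2] = p                                    # interpolatory: old vertices kept
    q[1::2] = (-np.roll(p, 1, axis=0) + 9.0 * p
               + 9.0 * np.roll(p, -1, axis=0) - np.roll(p, -2, axis=0)) / 16.0
    return q

# Contractivity of the difference scheme: divide the symbol alpha(z) by (1 + z).
# np.polydiv expects highest-degree-first coefficients; the index offset of the
# mask does not matter for the even/odd coefficient sums.
alpha = np.array([-1.0, 0.0, 9.0, 16.0, 9.0, 0.0, -1.0]) / 16.0
diff, rem = np.polydiv(alpha, np.array([1.0, 1.0]))
assert np.allclose(rem, 0.0)        # alpha(-1) = 0, so the division is exact
q = max(np.abs(diff[0::2]).sum(), np.abs(diff[1::2]).sum())
print(q)  # 0.625 < 1, so the scheme converges to a continuous limit function
```

The difference mask comes out as $(-1,1,8,8,1,-1)/16$, hence $q = 10/16 = 0.625 < 1$.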

Subdivision matrix formalism

The subdivision matrix formalism is used to describe the behavior of bivariate subdivision schemes near extraordinary vertices. The basic idea is to keep track of a vertex $p_0$ through different subdivision levels. To do this, it turns out to be necessary to keep track not only of $p_0$ but also of the vertices $p_1,\dots,p_n$ in a finite neighborhood of $p_0$; in the case of Loop subdivision this is the 1–ring of neighbors, i.e. all vertices adjacent to $p_0$ (see Figure 18). In general, the size of the neighborhood is determined by the support of the subdivision masks.

Figure 18: Subdivision matrix formalism: The subdivision matrix $S$ maps a neighborhood of a vertex on level $i$ to the neighborhood of the same vertex on level $i+1$.

Now let $p^i = [p^i_0, p^i_1, \dots, p^i_n]$ be a column vector comprising the positions of $p_0, p_1, \dots, p_n$ at subdivision level $i$. Then there is an $(n+1)\times(n+1)$ matrix $S$ satisfying

$p^{i+1} = S\, p^i$.

This matrix is called the subdivision matrix of the scheme. Let $\lambda_0 \geq \lambda_1 \geq \dots \geq \lambda_n$ be the eigenvalues of $S$ and $x_0, x_1, \dots, x_n$ the corresponding eigenvectors (i.e. $S x_j = \lambda_j x_j$). Note that due to the affine invariance of subdivision schemes we have $\lambda_0 = 1$ and $x_0 = [1,\dots,1]^T$. Since the $x_j$ form a basis we can expand $p^0$ as

$p^0 = \sum_j x_j\, \omega_j$

for some vector–valued coefficients $\omega_j$. Subdividing the mesh $m$ times means applying $S^m$ to $p^0$:

$p^m = S^m p^0 = \sum_j (\lambda_j)^m x_j\, \omega_j = \lambda_0^m x_0 \omega_0 + \lambda_1^m x_1 \omega_1 + \cdots$

Now suppose that $1 = \lambda_0 > \lambda_1$. Then it is easy to see that the limit position $\lim_{i\to\infty} p^i_0$ of $p_0$ is given by $\omega_0$. Similar formulas can be derived for the limit tangents at $\omega_0$. Thus the analysis of the subdivision matrix provides formulas (masks) for limit positions and tangents (if they exist).
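For the curve analogue (cubic B–spline subdivision, tracking a vertex and its two neighbors) the whole analysis fits in a few lines of numpy. The ordering $[p_0, p_{\mathrm{left}}, p_{\mathrm{right}}]$ and the variable names are ours:

```python
import numpy as np

# Subdivision matrix for one cubic B-spline curve subdivision step, acting on
# [p0, p_left, p_right]: vertex rule (1, 6, 1)/8, edge rule (1, 1)/2.
S = np.array([[6.0, 1.0, 1.0],
              [4.0, 4.0, 0.0],
              [4.0, 0.0, 4.0]]) / 8.0

lam = np.sort(np.real(np.linalg.eigvals(S)))[::-1]   # eigenvalues 1, 1/2, 1/4

# The limit-position mask is the left eigenvector of S for eigenvalue 1,
# normalized so that its entries sum to 1 (affine invariance):
mu, Y = np.linalg.eig(S.T)
l0 = np.real(Y[:, np.argmax(np.real(mu))])
l0 = l0 / l0.sum()
# l0 = [2/3, 1/6, 1/6]: the classical limit mask (4*p0 + p_left + p_right)/6
```

Since $\lambda_0 = 1 > \lambda_1 = \frac{1}{2}$, the scheme is convergent, and the left eigenvector reproduces the well-known limit mask of the cubic B–spline.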

However, it turns out that the above analysis is not enough to show that the limit surface in the neighborhood of an extraordinary vertex is smooth. This means that the derived formulas are only valid if the convergence to a differentiable function has already been shown by some other method. For this one has to consider a larger neighborhood of $p_0$ and analyze the spectral properties of the corresponding subdivision matrix. Furthermore one needs to analyze the so–called characteristic map of the subdivision scheme. It can be shown that if this map is regular and injective, the subdivision scheme itself produces $C^1$–continuous surfaces in the limit [30, 33].

4.1.4. Technical terms

In this section we explain some of the common technical terms used in the subdivision literature.

According to the way they relate the topology of the original mesh to the topology of the subdivided mesh, subdivision schemes are classified (see Figure 19) as

- primal or face–split schemes,
- dual or vertex–split schemes,
- other schemes, e.g. the $\sqrt{3}$–scheme, honeycomb refinement and bisection [31].

Figure 19: Classification of subdivision schemes: face–split schemes, quadrangular (left) and triangular (middle), and vertex–split schemes (right).

Furthermore one distinguishes between interpolatory schemes (old vertices are not relocated) and approximating schemes. A subdivision scheme is said to be stationary if the subdivision rules depend neither on the overall structure of the mesh nor on the subdivision level (all subdivision schemes presented here are stationary); otherwise it is called non–stationary. Variational schemes [20] are an example of non–stationary schemes; they are based on optimization techniques to generate smooth meshes (see also Section 5.2). A subdivision scheme has compact support if only a finite number of coefficients in the subdivision masks is non–zero.

4.1.5. Common subdivision schemes

In this section we present some common subdivision schemes. The following table gives a brief overview of their basic properties (note that $C^k$ really means $C^k$ almost everywhere, i.e. except at the extraordinary vertices, where all schemes are $C^1$).

Doo–Sabin [9]       approx.   C1   quadrilateral   dual
Catmull–Clark [4]   approx.   C2   quadrilateral   primal
Kobbelt [19]        interp.   C1   quadrilateral   primal
Butterfly [12]      interp.   C1   triangular      primal
Loop [28]           approx.   C2   triangular      primal
√3 [21]             approx.   C2   triangular      other

We will only give the basic refinement rules and discuss some of the properties. For further information (refinement rules for boundary edges, masks for limit positions, etc.) the reader is referred to the literature. Generally, interpolatory schemes allow for a more intuitive control of the limit surface (vertices are interpolated, no shrinking effect) and for a simpler (in–place) implementation of many algorithms. However, the surface quality is usually not as good as that of approximating schemes.

Doo–Sabin scheme The Doo–Sabin scheme [9] (see Figure 20) generalizes biquadratic tensor product B–splines. Its refinement rules are given in Figure 21. It is interpolatory in the sense that the barycenters of the faces of the original mesh are interpolated.

Figure 20: The Doo–Sabin scheme is the most prominent example of a dual or vertex–split subdivision scheme. Note that the number of extraordinary faces (i.e. faces which are not quadrilateral) remains constant.

Figure 21: Doo–Sabin scheme: There is only one mask, parameterized by the number $n$ of vertices adjacent to the face [14]: $\alpha_0 = \frac{1}{4} + \frac{5}{4n}$ and $\alpha_i = \frac{3 + 2\cos(2i\pi/n)}{4n}$ for $i = 1,\dots,n-1$.
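The mask of Figure 21 can be tabulated directly; the sketch below (function name ours) also checks affine invariance, i.e. that the weights sum to 1, and that for $n = 4$ the regular biquadratic mask $(9,3,1,3)/16$ is reproduced:

```python
import math

def doo_sabin_weights(n):
    """Doo-Sabin mask for a face with n vertices: alpha_0 is the weight of the
    vertex itself, alpha_i the weight of the i-th other vertex of the face."""
    return [0.25 + 5.0 / (4 * n)] + \
           [(3.0 + 2.0 * math.cos(2.0 * i * math.pi / n)) / (4 * n)
            for i in range(1, n)]
```

Since $\sum_{i=1}^{n-1}\cos(2i\pi/n) = -1$, the weights sum to exactly 1 for every valence.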

Kobbelt scheme The Kobbelt scheme [19] is an interpolatory scheme for quadrilateral meshes which emerges from taking the tensor product of the 4–point scheme.


(Modified) Butterfly scheme This scheme was originally proposed by Dyn, Gregory and Levin [12] and modified by Zorin, Schröder and Sweldens [34] to yield an everywhere $C^1$–continuous surface. The refinement rule for the unmodified scheme is depicted in Figure 22.

Figure 22: Butterfly scheme: As this is an interpolatory scheme we only need a mask for the newly inserted vertices [14]. The two endpoints of the split edge get the weight $\frac{1}{2}$, the two vertices opposite the edge get $\frac{1}{8}$, and the four remaining “wing” vertices get $-\frac{1}{16}$.

Loop scheme The Loop scheme [28] generalizes quartic box–splines. Its refinement rules are given in Figure 23.

Figure 23: Loop scheme: In the case of a regular mesh this scheme produces quartic box–splines. The vertex mask assigns the weight $1-\alpha_n$ to the old vertex and $\frac{\alpha_n}{n}$ to each of its $n$ neighbors; the edge mask assigns $\frac{3}{8}$ to the two endpoints of the split edge and $\frac{1}{8}$ to the two opposite vertices. We have $\alpha_n = \frac{1}{64}\left(40 - (3 + 2\cos(2\pi/n))^2\right)$, where $n$ is the valence of the center vertex [28].

$\sqrt{3}$–scheme This scheme was only recently proposed by Kobbelt [21]. It produces a $C^2$ surface almost everywhere but is not based on polynomials. It is especially well suited for adaptive subdivision since one does not need to insert auxiliary triangles. In a first step every original triangle is split into three by inserting a new vertex at its barycenter. In the second step the original edges are flipped, yielding a triangle mesh rotated by 30 degrees (see Figure 24). The subdivision masks are shown in Figure 25.
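Both vertex–mask weights are simple to evaluate; as a sanity check, the regular valence $n = 6$ gives $\alpha_6 = \frac{3}{8}$ for Loop (i.e. $\beta = \frac{1}{16}$ per neighbor) and $\alpha_6 = \frac{1}{3}$ for the $\sqrt{3}$–scheme (function names ours):

```python
import math

def loop_alpha(n):
    # Total weight distributed to the n neighbors in Loop's vertex mask [28].
    return (40.0 - (3.0 + 2.0 * math.cos(2.0 * math.pi / n)) ** 2) / 64.0

def sqrt3_alpha(n):
    # Relaxation weight of the sqrt(3)-scheme's vertex mask [21].
    return (4.0 - 2.0 * math.cos(2.0 * math.pi / n)) / 9.0
```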

Figure 24: $\sqrt{3}$–scheme: original mesh (upper left), after inserting barycenters (upper right) and after edge flipping (lower left). Note that two $\sqrt{3}$–subdivision steps result in a tri–section of the original triangle edges (lower right), hence the name of the scheme.

Figure 25: $\sqrt{3}$–scheme: This scheme has the masks with the smallest possible support [21]. The new barycenter vertices receive the weight $\frac{1}{3}$ from each corner of their triangle; the vertex mask assigns $1-\alpha_n$ to the old vertex and $\frac{\alpha_n}{n}$ to each of its $n$ neighbors. We have $\alpha_n = \frac{4 - 2\cos(2\pi/n)}{9}$, where $n$ is the valence of the center vertex.

4.2. Remeshing

Using the powerful means of subdivision, the preceding section illustrates how one can define a surface as the limit of a sequence of successively refined polyhedral meshes. In this section we do not deal with the geometric part of subdivision that leads to mathematically smooth and visually appealing surfaces; instead we focus on the special connectivity, the so–called subdivision–connectivity, that emerges when iteratively applying a regular refinement operator to a coarse triangle mesh. A well–known refinement operator is the 1–to–4 split that recursively splits each triangular face into 4 sub–triangles by introducing 3 new vertices on the edges. Since every submesh that corresponds to one base triangle has the structure of a regular grid and the whole hierarchy is based on an arbitrary coarse base mesh (cf. Fig. 26), the resulting meshes are said to be semi–regular.
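The combinatorial growth of such a hierarchy is easy to track: for a closed triangle mesh, each 1–to–4 split adds one vertex per edge, doubles every edge and adds three interior edges per face. A small sketch (function name ours):

```python
def subdivide_counts(v, e, f, levels=1):
    """Vertex/edge/face counts of a closed triangle mesh after
    repeated 1-to-4 splits."""
    for _ in range(levels):
        v, e, f = v + e, 2 * e + 3 * f, 4 * f
    return v, e, f
```

Starting from an icosahedron $(12, 30, 20)$, one split yields $(42, 120, 80)$; the Euler characteristic $V - E + F = 2$ is preserved on every level, and the face count grows as $4^m F_0$.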

The implicitly defined connectivity established on a coarse base mesh and the direct availability of multiresolution semantics give rise to many techniques exploiting this convenient representation, as the following enumeration shows.

Compression/Progressive transmission Lounsbery et al. [8] perform a multiresolution analysis, i.e. they introduce a wavelet decomposition for meshes with subdivision–connectivity. By suppressing small wavelet coefficients, a compressed approximation within a given error tolerance can be achieved. Moreover, such a wavelet representation can easily be transmitted in a progressive fashion (send the base mesh first and refine it with successively arriving wavelet coefficients).

Multiresolution editing For instance, Zorin and co–workers [35] combine subdivision and smoothing techniques and present an interactive multiresolution mesh editing system which is based on semi–regular meshes and enables efficient modifications of the gross shape while preserving fine detail features.

Figure 26: Meshes with subdivision–connectivity are generated by uniformly subdividing a coarse base mesh $S_0$ (left). On the refined meshes we can easily identify regular submeshes (right) which topologically correspond to a single triangle of the base mesh $S_0$ (left).

Parameterization Each submesh (subdivision–patch) can be parameterized naturally by assigning barycentric coordinates to the vertices. Combining the local parameterizations of the subdivision–patches yields a global parameterization. Texturing is just one application of such a parameterization.

Level–of–detail control Standard rendering libraries are able to display objects at various levels of detail, that is, they display a coarse approximation if the object is far away and switch to a finer one if the viewer approaches. The different subdivision levels naturally support this feature. In combination with multiresolution analysis, switching to finer resolutions can be done smoothly by fading in the wavelet coefficients.

Recent investigations show that compact and convenient representations for several of the applications above can be derived using semi–regular meshes [15, 25, 18].

However, even though semi–regular meshes are particularly convenient for various types of applications, in practice it is rather unlikely that a given triangle mesh has this special type of connectivity (except for those meshes originating from subdivision). During the last years, a couple of methods have been presented to convert a manifold triangle mesh into one having subdivision–connectivity, thus providing access to the powerful methods developed for semi–regular meshes even when an arbitrary input mesh is given.

Before we give an overview of three popular conversion schemes, we start by establishing the notation and describing some quality criteria. Let an arbitrary (manifold) triangle mesh $M$ be given. The task of remeshing the input data $M$ means to find a sequence of meshes $S_0,\dots,S_m$ such that each $S_{i+1}$ emerges from $S_i$ by the application of a uniform subdivision operator which performs a 1–to–4 split on every triangular face of $S_i$ (cf. Fig. 26). Since the $S_i$ should be differently detailed approximations of $M$, the vertices $p \in S_i$ have to lie on the continuous geometry of $M$, but they do not necessarily have to coincide with $M$'s vertices. Furthermore it is allowed, but not required, that the vertices of $S_i$ are a subset of $S_{i+1}$'s vertices.

In general it would be enough to generate the mesh $S_m$, since the coarser levels of detail $S_i$ can be extracted by subsampling. Nevertheless, building the whole sequence $S_0,\dots,S_m$ from coarse to fine often leads to more efficient multi–level algorithms.

The quality of a subdivision–connectivity mesh is measured in two different respects. First, we want the resulting parameterization, which maps points from the base mesh $S_0$ to the corresponding points on $S_m$, to be close to isometric, i.e. the local distortion of the triangles should be small and evenly distributed over the patch. To achieve this, it is necessary to carefully adapt the triangles in the base mesh $S_0$ to the shape of the corresponding surface patches in the given mesh $M$.

The second quality requirement rates the base mesh $S_0$ itself according to the usual quality criteria for triangle meshes, i.e. uniform size and aspect ratio of the base triangles.

As previously stated, a global parameterization of $M$ can easily be derived if $S_m$ is given. This is also possible the other way around, i.e. a semi–regular mesh can easily be constructed if a global parameterization is available. Therefore, schemes that build a global parameterization of $M$ can also be adapted to our task [24, 1, 17].

However, this tutorial focuses on the “classical” schemes that convert an input mesh into one with subdivision–connectivity.

4.2.1. Eck’s scheme

In 1995 Eck and co–workers [13] were the first to come up with a three–step remeshing scheme, which can roughly be described as follows.


Partitioning partitions $M$ into regions $T_0,\dots,T_r$ using discrete Voronoi tiles. The union of these tiles is dual to a triangulation which is used as the base triangulation.

Parameterization constructs a local parameterization for each submesh of $M$ that corresponds to a base triangle and joins them, which results in a global parameterization.

Resampling recursively performs 1–to–4 splits on the base domain triangles in $S_0$ and maps the new vertices into $\mathbb{R}^3$.

In order to be able to compare this scheme with the two other, more recently published schemes, we focus on some aspects of the algorithm. For complete coverage we refer to the original paper.

The partitioning starts with a single seed triangle. By successively adding adjacent triangles, the seed grows to a tile. New seed triangles are added if one of the restrictions that ensure that the dual of the tiles builds a proper triangulation is violated. The growing process terminates when the tiles cover the complete surface $M$; the tiles' dual then serves as the base triangulation. The base vertices are located at the centroids of the seed triangles and are thus relatively evenly spread over $M$. However, the positions of the vertices, and therefore the whole remesh, heavily depend on the growing process, i.e. the algorithm cannot optimize for the second quality criterion.

Using harmonic maps, a parameterization is constructed as follows. At first, one computes a harmonic map that carries each Voronoi tile $T_i$ into a planar polygon $P_i$. The inverse mapping is a parameterization of $T_i$ over $P_i$. This parameterization is evaluated to get the triangular submeshes which are dual to the Voronoi tiles $T_i$. Finally, harmonic maps are used to map these submeshes to the corresponding triangles in $S_0$. The combination of these mappings leads to an optimal parameterization for the base triangles, but it provides only $C^0$–continuity across the edges of adjacent triangles.

Finally, here is a summary of the algorithm’s main fea-tures.

+ arbitrary manifold input–meshes+ parameterization on base–triangles is close to isometric– C0–continuity over the edges of base–triangles only– base–domainS0 not optimal according to quality criteria– costly computations (harmonic mappings)

4.2.2. MAPS

An alternative approach addressing the remeshing problem is the MAPS algorithm presented by Lee et al. [26] Within the scope of this tutorial we only roughly sketch the key features of the scheme; for a more elaborate discussion, please refer to the original paper. The base domain $S_0$ is found by applying an incremental mesh decimation algorithm to the original mesh (cf. Sec. 5.1 for a closer look at mesh decimation). Compared to Eck's scheme, this provides more control over the generation of the base mesh, since feature lines can be taken into consideration and local curvature estimates can be used to determine regions where more or fewer vertices are removed. The atomic simplification step is the vertex removal. Figures 27 and 28 illustrate how a parameterization over $S_0$ can be derived. This parameterization is, again, not globally smooth but only smooth locally within each patch corresponding to a base triangle. The actual remeshing is not done by sampling the initial parameterization at the dyadic barycentric parameter values as usual; instead, an additional smoothing step based on a variant of Loop's subdivision scheme [28] is used to shift the sample sites within the parameter domain.

Figure 27: To remove a vertex, its one–ring is mapped into the plane (exponential map), the vertex is removed, the resulting hole is retriangulated and the result is finally mapped back into 3–space (figure inspired by the original paper [26]).

Figure 28: After retriangulation (cf. Fig. 27), the removed vertex (black dot) gets assigned barycentric coordinates with respect to the containing triangle on the coarser level. Barycentric coordinates of previously removed vertices (hollow dots) are updated accordingly (figure inspired by the original paper [26]).
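The barycentric bookkeeping of Figure 28 reduces, per vertex, to solving a small linear system in the flattened parameter plane. A minimal 2D sketch (names ours):

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates (w_a, w_b, w_c) of a 2D point p with respect
    to the triangle (a, b, c); they sum to 1 and reproduce p as a combination
    of the corners."""
    m = np.column_stack([b - a, c - a])
    u, v = np.linalg.solve(m, p - a)
    return np.array([1.0 - u - v, u, v])
```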

The pros and cons of the scheme are listed below.

+ arbitrary manifold input meshes
+ user–defined feature lines
+ parameterization on base triangles is close to isometric
+ $S_m$ smooth over the edges of $S_0$ (“sampling at Loop positions”)
– without user input, the base domain $S_0$ is not optimal according to the quality criteria, i.e. it depends on the mesh decimation only

4.2.3. Shrink wrapping

A completely different approach to the remeshing problem for genus–zero objects is presented by Kobbelt and co–workers [23]. The physical model behind the algorithm is the process of shrink wrapping, where a plastic membrane is wrapped around an object and shrunk, either by heating the material or by evacuating the air from the space in between the membrane and the object's surface. At the end of the process, the plastic skin provides an exact imprint of the given geometry. To simulate the shrink wrapping process, the authors approximate the plastic membrane by a semi–regular mesh $S_m$. Two forces are applied to its vertices: an attracting force pulls them towards the surface, and a relaxing force is applied in order to optimize the local distortion energy and to avoid folding; this ensures an even distribution of the vertices. The attracting part is realized by a projection operator $P$ that simply projects $S_m$'s vertices onto $M$. The relaxation is done by applying an operator $U$ to all vertices in $S_m$ that iteratively balances the vertex distribution. Thus, shrink wrapping is an iterative process in which one alternates the $P$ and the $U$ operator.
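The alternation of $P$ and $U$ can be illustrated with a 2D toy version, where the target “surface” $M$ is the unit circle, $P$ is radial projection and $U$ is a simple umbrella relaxation on a closed polygon. This is our illustrative reduction, not the algorithm from the paper:

```python
import numpy as np

def shrink_wrap_circle(p, iterations=100):
    """Alternate P (projection onto the unit circle, standing in for M)
    and U (umbrella relaxation) on a closed polygon p (n x 2 array)."""
    for _ in range(iterations):
        p = p / np.linalg.norm(p, axis=1, keepdims=True)              # P
        p = 0.5 * p + 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0))   # U
    return p / np.linalg.norm(p, axis=1, keepdims=True)
```

The relaxation evens out the vertex spacing while the projection keeps the polygon on the target shape; after a number of iterations the edge lengths become nearly uniform.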

Nevertheless, the proposed scheme works slightly differently in order to accelerate the underlying optimization process. Instead of immediately shriveling up $S_m$, the remeshing process starts with an initial convex mesh $S_0$ (e.g. an icosahedron). Once applying $P$/$U$ converges on level $S_i$, the scheme switches to the next refinement level $S_{i+1}$. Hence, this multi–level approach generates intermediate levels, which are close to the final solution, at relatively low computational cost.

Unfortunately, the algorithm described so far works for simple input meshes $M$ only. One of the problems that arise is that, especially for the coarser approximations, the projection operator $P$ might produce counter–intuitive results.

For this reason, the authors extend the basic shrink wrapping algorithm with the aid of a parameterization $F$ of $M$ over the unit sphere. Both $M$ (using $F$'s inverse) and $S_0$ (by projection) are mapped onto a sphere, so that $P$ becomes trivial. The relaxation operator $U$ is adapted accordingly, in such a way that it still measures the geometry on the original surface. This is done by associating triangles of $M$ with corresponding surface areas of $S_0$ (which is trivial if both meshes are mapped to a sphere). This guarantees an even distribution of $S_0$'s vertices on $M$ when evaluating $F(S_0)$. In areas where the surface metrics of $S_0$ and $M$ differ considerably, which would lead to severe stretching in the resulting remesh, new vertices are inserted into $S_0$ by performing simple edge splits.

Once $S_0$ is found, successive levels can be computed either by using the stabilizing parameterization over the sphere or directly, if $S_i$ and $M$ do not differ too much.

Summing up, here are the main features and limitations of the shrink wrapping approach to the remeshing problem.

+ custom–tailored base domain $S_0$, optimized with respect to both quality criteria
– in its basic form the algorithm works for genus–zero objects only

References

1. C. L. Bajaj, F. Bernardini, and G. Xu. Automatic reconstruction of surfaces and scalar fields from 3D scans. In SIGGRAPH 95 Conference Proceedings, pages 109–118, 1995.

2. A. A. Ball and D. J. T. Storry. Conditions for tangent plane continuity over recursively generated B–spline surfaces. ACM Transactions on Graphics, 7(2):83–102, 1988.

3. H. Biermann, A. Levin, and D. Zorin. Piecewise smooth subdivision surfaces with normal control. In SIGGRAPH 00 Conference Proceedings, to appear.

4. E. Catmull and J. Clark. Recursively generated B–spline surfaces on arbitrary topological meshes. Computer Aided Design, 10:350–355, 1978.

5. G. Chaikin. An algorithm for high speed curve generation. Computer Graphics and Image Processing, 3:346–349, 1974.

6. F. Cirak, M. Ortiz, and P. Schröder. Subdivision surfaces: A new paradigm for thin–shell finite–element analysis. Internat. J. Numer. Methods Engrg., 47, 2000.

7. T. DeRose, M. Kass, and T. Truong. Subdivision surfaces in character animation. In SIGGRAPH 98 Conference Proceedings, pages 85–94, 1998.

8. T. DeRose, M. Lounsbery, and J. Warren. Multiresolution analysis for surfaces of arbitrary topological type. Technical Report 93–10–05, Department of Computer Science and Engineering, University of Washington, 1993.

9. D. Doo and M. Sabin. Behaviour of recursive subdivision surfaces near extraordinary points. Computer Aided Design, 10:356–360, 1978.

10. N. Dyn. Subdivision schemes in computer aided geometric design. Advances in Numerical Analysis II, Wavelets, Subdivisions and Radial Functions, pages 36–104, 1991.

11. N. Dyn, D. Levin, and J. A. Gregory. A 4–point interpolatory subdivision scheme for curve design. Computer Aided Geometric Design, 4:257–268, 1987.


Figure 29: A remeshed version (right) of a bust model (left) obtained with Kobbelt’s shrink–wrapping approach.

12. N. Dyn, D. Levin, and J. A. Gregory. A butterfly sub-division scheme for surface interpolation with tensioncontrol.ACM Transactions on Graphics, 9(2):160–169,1990.

13. M. Eck, T. DeRose, T. Duchamp, H. Hoppe, M. Louns-bery, and W. Stuetzle. Multiresolution analysis of arbi-trary meshes. InSIGGRAPH 95 Conference Proceed-ings, pages 173–182, 1995.

14. D. Zorin et. al. Subdivision for modeling and anima-tion. In SIGGRAPH 00 Course Notes, to appear.

15. I. Guskov, K. Vidimce, W. Sweldens, and P. Schröder.Normal meshes. InSIGGRAPH 00 Conference Pro-ceedings, to appear.

16. M. Halstead, M. Kass, and T. DeRose. Efficient,fair interpolation using catmull–clark surfaces. InSIGGRAPH 93 Conference Proceedings, pages 35–44,1993.

17. H. Hoppe, T. DeRose, T. Duchamp, M. Halstead,H. Jin, J. McDonald, J. Schweitzer, and W. Stuet-zle. Piecewise smooth surface reconstruction. InSIG-GRAPH 94 Conference Proceedings, pages 295–302,1994.

18. A. Khodakovsky, P. Schröder, and W. Sweldens. Pro-gressive geometry compression. InSIGGRAPH 00Conference Proceedings, to appear.

19. L. Kobbelt. Interpolatory subdivision on open quadri-lateral nets with arbitrary topology.Computer GraphicsForum, 15(3):409–420, 1996.

20. L. Kobbelt. A variational approach to subdivision.Computer Aided Geometric Design, 13(8):743–761,1996.

21. L. Kobbelt.p

3–subdivision. InSIGGRAPH 00 Con-ference Proceedings, to appear.

22. L. Kobbelt, K. Daubert, and H.-P. Seidel. Ray tracingof subdivision surfaces.Eurographics Rendering Work-shop, pages 69–80, 1998.

23. L. Kobbelt, J. Vorsatz, U. Labsik, and H.-P. Seidel.A shrink wrapping approach to remeshing polygonalsurfaces. Computer Graphics Forum, 18(3):119–130,1999.

24. V. Krishnamurthy and M. Levoy. Fitting smooth sur-faces to dense polygon meshes. InSIGGRAPH 96 Con-ference Proceedings, pages 313–324, 1996.

25. A. Lee, H. Moreton, and H. Hoppe. Displaced subdivision surfaces. In SIGGRAPH 00 Conference Proceedings, to appear.

26. A. W. F. Lee, W. Sweldens, P. Schröder, L. Cowsar, and D. Dobkin. MAPS: Multiresolution adaptive parameterization of surfaces. In SIGGRAPH 98 Conference Proceedings, pages 95–104, 1998.

27. A. Levin. Interpolating nets of curves by smooth subdivision surfaces. In SIGGRAPH 99 Conference Proceedings, pages 57–64, August 1999.

28. C. T. Loop. Smooth subdivision surfaces based on triangles. Master's thesis, University of Utah, Department of Mathematics, 1987.

29. K. Pulli and M. Segal. Fast rendering of subdivision surfaces. Eurographics Rendering Workshop, pages 61–70, 1996.

30. U. Reif. A unified approach to subdivision algorithms near extraordinary vertices. Computer Aided Geometric Design, 12(2):153–174, 1995.


31. M. C. Rivara. Mesh refinement processes based on the generalized bisection of simplices. SIAM J. Numer. Anal., 21(3):604–613, 1984.

32. J. Stam. Exact evaluation of Catmull–Clark subdivision surfaces at arbitrary parameter values. In SIGGRAPH 98 Conference Proceedings, pages 395–404, 1998.

33. D. Zorin. C^k Continuity of Subdivision Surfaces. PhD thesis, California Institute of Technology, Department of Computer Sciences, 1996.

34. D. Zorin, P. Schröder, and W. Sweldens. Interpolating subdivision for meshes with arbitrary topology. In SIGGRAPH 96 Conference Proceedings, pages 189–192, 1996.

35. D. Zorin, P. Schröder, and W. Sweldens. Interactive multiresolution mesh editing. In SIGGRAPH 97 Conference Proceedings, pages 259–268, 1997.


5. Fine-to-coarse hierarchies

In the previous section, methods were presented that generate mesh hierarchies from a coarse base mesh by refining “bottom–up” to a specified detail level. In this section we discuss ways to build a hierarchy “top–down”: given a detailed mesh, a coarse base mesh is created by successively removing detail information.

Again, two types of hierarchies can be discerned: levels of complexity are created through mesh decimation (Section 5.1), while levels of smoothness are achieved by fairing techniques (Section 5.2).

5.1. Mesh decimation

5.1.1. Introduction

The aims of this section are
- to provide an overview of the current body of literature that can be used as a starting point for further investigations;
- to aid in identifying application–specific needs and to point you to common problems with simplification algorithms;
- to sum up what is involved in “rolling your own” mesh simplification tool.

First of all, to give you an idea of what geometry simplification can be used for, here are some typical applications:

Oversampled scan data: Meshes generated from scan data are usually huge and mostly uniformly sampled; the density of the mesh doesn’t correspond to the detail needed locally.

Overtesselated surfaces: Dense meshes are also often generated from other surface representations; a typical example is the extraction of iso–surfaces using e.g. the marching cubes algorithm (see Section 2.2), which samples the surface at a fixed detail level. Also parametric surfaces, like NURBS patches (as used in CAD programs), are often just uniformly sampled in parameter space and are thus also not adapted to the detail actually needed to represent the local geometry.

Level–of–detail rendering: To speed up rendering, many systems support switching to a lower resolution model when the object is far away from the viewer. It is desirable to generate these simplified models automatically.

Progressive transmission: On low–bandwidth channels, transferring a complete object description might not be feasible. With progressive meshes, the basic shape of an object is transmitted first and can already be viewed; detail is subsequently transmitted until the target resolution is reached.

Multiresolution editing: If an object can be transformed into a representation of varying coarseness, large–scale modifications can be accomplished by working on a coarse level and automatically re–applying the detail information (see Section 6).

So, how do you go about geometry simplification? As a first approach, we could define the task at hand in loose terms as the “creation of a low–polygon approximation of a complex model that is good enough for the intended application”.

A close examination of this preliminary definition already raises a lot of questions:
- What are the properties of the model we have to consider? Geometry, topology and attributes may or may not be important.
- What exactly is an “approximation”? We have to define the actual error in some way.
- Finally, what is “good enough”? This is also entirely dependent on the application. We probably have to specify quality criteria on the model.

As you can see, the above problem statement is very vague and application–dependent, which might help to explain why there are so many mesh simplification algorithms, each with its own particular strengths and weaknesses. In the following section we briefly review the major developments and approaches that have been taken to date.

5.1.2. A brief history

For domain–specific geometric models, automated methods to generate lower levels of detail have existed since the 1980s [37, 38]. The first more general simplification algorithms that were not tied to a particular application and data structure appeared in 1992:

Re–tiling [34] places new vertices on the existing geometry, inserting more points in areas of high surface curvature, and distributes them by point repulsion forces. Most of the existing vertices are then removed and a triangulation is created from the remaining set of vertices. While the optimization process leads to well–shaped triangles, there is no real way to guide the simplification process and to bound the surface error with this method, as Turk himself points out.

In the same year, an incremental method was proposed in the pioneering work of Schroeder et al. [32]: simplification of the mesh is accomplished by removing a vertex and retriangulating the resulting hole. The approximation error is measured as the distance of the removed vertex to the (average) resulting plane of the retriangulation. There is no bound on the deviation from the original mesh.

Rossignac and Borrel introduce vertex clustering [27]: a spatial grid is superimposed onto the geometry, and all vertices in a cell are merged into one, collapsing the associated parts of the geometry. The maximum vertex error in this case is the size of a cell. The algorithm is simple and efficient, but since it is purely vertex–based, it ignores properties of the surface such as curvature and does not preserve the original topology.

In mesh optimization [17], minimization of an energy function is used to find an approximation of a triangular mesh


that has a low number of vertices; this is achieved by removing and repositioning vertices and flipping edges. The desired approximation constraints are directly modeled into this function. The work is motivated by the surface reconstruction problem, but applies to mesh simplification as well.

Hoppe [15] later proposes a simpler variant of mesh optimization (now using only one operation, the edge collapse) to generate levels of detail and presents the progressive mesh representation for transmitting and storing a coarse approximation plus the detail information from which the original surface can then be reconstructed. The method extends to scalar surface attributes, such as vertex colors and normals. Feature edges (here called “discontinuity curves”) are supported as well. To generate a progressive mesh representation, the history of the applied simplification operations is stored together with the information needed to invert the process (basically the positions of the removed vertices). Beginning with the coarse base mesh, the original geometry can now be successively reconstructed. With efficient data structures [16], this representation usually consumes even less memory than the original, non–hierarchical mesh. Nearly all current mesh simplification algorithms support progressive mesh representations.

A number of incremental methods have been proposed in the years following, mainly differing in the way the approximation error is estimated [12, 21, 11, 7], with different consequences for memory consumption and running time. The incremental approach dominates the field of mesh decimation today in various refinements; because of its modular structure, flexible choices of mesh data structure, evaluated criteria and local operations performed on the mesh are possible, thus enabling a wide range of applications. For this reason, the following text will concentrate on incremental methods.

Most authors concentrate on simplification within some geometric error bound, though many algorithms extend to surface attributes as well. A few authors have concentrated on visual appearance, most notably Cohen et al. [8], who introduce a texture deviation metric to guarantee a maximum screen–space error. Unlike in previous algorithms, surface attributes are decoupled from the geometry by means of texture and normal maps, which leads to fewer restrictions on the simplification of the geometry while retaining a level of visual faithfulness.

Probably the biggest current challenge for mesh simplification methods lies in handling the ever–increasing amount of data in an efficient way: polygonal models with millions or even billions of vertices are becoming more and more commonplace, and efficient as well as effective reduction of their complexity is often a prerequisite for any further processing. Due to limited main memory capacities, the model somehow has to be processed without ever loading it completely into memory [23].

Figure 30: Manifold and non–manifold triangle configurations: a simple inner vertex and a boundary vertex (legal), a complex edge and a non–manifold vertex (illegal)

Nearly all decimation algorithms are restricted to triangle meshes, for a number of reasons:
- Triangles are always planar, wherever you move a vertex.
- Following from this, the surface is piecewise linear throughout, which simplifies evaluations.
- Every polygonal surface can be triangulated.
- Constant–size data structures can be implemented more efficiently.

In the following, we first discuss suitable mesh data structures and then explain the general incremental decimation approach; existing methods will be discussed and classified by noting how particular subproblems are handled.

5.1.3. Prerequisites: mesh data structures

A mesh decimation algorithm usually needs more information than just vertex positions: the connectivity of the mesh needs to be accessible (and mutable) in order to easily examine and update mesh regions. Common queries to such a mesh data structure are:
- Which are the vertices forming a given triangle?
- Which triangles are adjacent to a given triangle?
- Which triangles are adjacent to a given vertex?
- Is a vertex/edge on a boundary?

There are a number of data structures that contain neighborhood information and can be used in mesh decimation, for example winged–edge [1] or half–edge data structures [4, 18].
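As an illustration, the adjacency queries above can be answered by a minimal half-edge table. The sketch below uses a hypothetical layout (far simpler than the cited data structures, and without mutation support): triangle t owns half-edges 3t, 3t+1, 3t+2, and each half-edge stores the vertex it points to and the next half-edge within its triangle.

```python
# Minimal half-edge sketch (illustrative layout, not any cited library):
# half-edge h stores its target vertex and the next half-edge in its
# triangle; opposite[h] is the twin half-edge, or -1 on a boundary.

def build_halfedges(triangles):
    target, nxt, index = [], [], {}
    for t, (a, b, c) in enumerate(triangles):
        for i, (u, v) in enumerate(((a, b), (b, c), (c, a))):
            index[(u, v)] = 3 * t + i
            target.append(v)
            nxt.append(3 * t + (i + 1) % 3)
    origin = lambda h: target[nxt[nxt[h]]]   # start vertex of half-edge h
    opposite = [index.get((target[h], origin(h)), -1)
                for h in range(len(target))]
    return target, nxt, opposite

def tri_vertices(target, nxt, t):
    """Which vertices form triangle t?"""
    h = 3 * t
    return (target[nxt[nxt[h]]], target[h], target[nxt[h]])

def adjacent_triangles(opposite, t):
    """Which triangles share an edge with triangle t?"""
    return [opposite[h] // 3 for h in (3 * t, 3 * t + 1, 3 * t + 2)
            if opposite[h] != -1]
```

For two triangles sharing an edge, e.g. `[(0, 1, 2), (2, 1, 3)]`, `adjacent_triangles` reports the neighbor across the shared edge, and `opposite[h] == -1` answers the boundary query for an edge.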

These data structures cannot represent arbitrary triangulations, but are basically restricted to bounded two–manifolds, i.e. the surface has to be topologically disk–like at every point, or half-disk–like on boundaries. This rules out edges with more than two adjacent triangles and points where two triangles meet only at their tips. Figure 30 shows triangle configurations; the left two are legal for a manifold surface, the others are not.

This is a common restriction for topology–preserving mesh decimation algorithms. A topology–changing algorithm, though, is allowed to merge disconnected components of a triangle mesh. To deal with “real world” meshes, which often have some non–manifold parts, the offending vertices are usually marked as such and ignored during simplification.


5.1.4. Incremental methods outline

This is the outline of a simple incremental decimation algorithm [32]:

    Repeat:
        find region with estimated error < epsilon
        remove triangles in region
        retriangulate hole
    Until no further reduction possible

The problem with this naive approach is that we are trying to solve a global optimization task by applying operations based only on local decisions. A greedy algorithm like this is prone to get stuck in a local optimum that may be very far away from the globally optimal solution.

A way to alleviate this problem is to introduce a global ranking for all candidate operations: each currently possible operation is assigned a computed cost value and then – if the cost is acceptable – inserted into a priority queue. In each step of the algorithm, the current least–cost operation is applied and the priorities for the affected neighborhood are reevaluated and updated. It should be noted that this still doesn’t guarantee that a global optimum will be found: no operation is ever performed that violates the prescribed error bound, even if at a later stage the surface would lie well within these bounds.

The improved queue–driven algorithm now has an initial preprocessing phase to initialize the queue:

    For all possible regions:
        op := operation on region
        c  := cost of local decimation
        if c < epsilon: (op, c) -> queue

    Repeat:
        apply least-cost operation
        update queue
    Until queue empty
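The queue-driven scheme can be sketched concretely as follows. The routines `cost`, `apply_op` and `neighbors` are hypothetical placeholders for the application-specific parts; instead of updating queue entries in place, this sketch invalidates stale entries lazily via version counters, a common priority-queue idiom.

```python
import heapq
import itertools

# Sketch of the queue-driven decimation loop. `candidate_ops` must
# contain every operation that may ever be queued; `neighbors(op)`
# yields the operations whose cost changes when op is applied.

def decimate(candidate_ops, cost, apply_op, neighbors, epsilon):
    version = {op: 0 for op in candidate_ops}  # for lazy invalidation
    tie = itertools.count()                    # stable tie-breaking
    heap = []
    for op in candidate_ops:                   # preprocessing phase
        c = cost(op)
        if c < epsilon:
            heapq.heappush(heap, (c, next(tie), op, version[op]))
    while heap:
        c, _, op, ver = heapq.heappop(heap)
        if ver != version[op]:
            continue                           # stale entry: skip it
        apply_op(op)
        version[op] += 1                       # the operation is consumed
        for nb in neighbors(op):               # re-evaluate neighborhood
            version[nb] += 1                   # invalidate old entries
            c = cost(nb)
            if c < epsilon:
                heapq.heappush(heap, (c, next(tie), nb, version[nb]))
```

Note that, as discussed above, an operation rejected in the preprocessing phase can still become feasible later, because `neighbors` re-evaluates and re-queues it once the nearby mesh regions change.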

Having a closer look at the local simplification step, there are still open questions:
- What exactly is a “region”?
- How should it be retriangulated?

Before we can answer these, we need to decide on the required behavior of the mesh decimation algorithm:
- Are we allowed to change the mesh topology or not?
- Does the simplified mesh have to consist of a subset of the original vertices, or do we want to optimize the shape by shifting vertices around?
- Should the process be invertible for reconstruction (progressive mesh)?
- What region of the mesh is affected by the change? (We need to know this to compute an error estimate.)

The answers to these questions are all connected to the choice of the atomic changes we apply to the mesh; since they affect the connectivity of the mesh, they are called topological operators. The idea is to have a small set of operators, or even only one operator, that changes the mesh in a clearly defined region in a computationally cheap, useful and well–defined way; this concept was first introduced by Hoppe [17].

The following section gives an overview of common elementary operators. All of the following operations have a corresponding inverse operation; they vary in the information that has to be stored to perform the inverse.

5.1.5. Topological operators

The first operator that may come to mind is close to our definition above: remove a vertex and retriangulate the hole.

(Diagram: Vertex Removal and its inverse, Vertex Insertion)

Simple as it is, we still have some degrees of freedom here, namely how we retriangulate the hole after removing the vertex. Optimizing for the best retriangulation can be algorithmically cumbersome, since special cases have to be considered [32], and computationally expensive, depending on the valence of the vertex, i.e. the number of vertices connected to the removed vertex by triangle edges. As an additional drawback, a lot of information needs to be kept to invert the operation.

To reduce the number of degrees of freedom, a very common operation is the edge collapse [17], where the two end vertices of an edge are merged, effectively removing one vertex and two triangles. Here the only degree of freedom left is the position of the merged vertex.

(Diagram: Edge Collapse and its inverse, Edge Split)

This operator can be restricted even further if we choose the target position to be one of the original two vertex positions. No new vertex positions are “invented”; the only choice left is into which vertex to collapse the edge. This operation is called restricted edge collapse or half–edge collapse, because giving the half–edge completely specifies the operation [21].

(Diagram: Half Edge Collapse and its inverse, Restricted Vertex Split)
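Since no new positions are invented, a half-edge collapse is easy to state even on a plain indexed triangle list. The sketch below (which ignores the legality checks discussed later, and works on a simplified representation rather than a half-edge structure) retargets all triangles of the removed vertex and drops the ones that degenerate:

```python
def half_edge_collapse(triangles, keep, remove):
    """Collapse the half-edge remove -> keep: every occurrence of
    `remove` is replaced by `keep`; triangles that referenced both
    vertices become degenerate and are dropped (for an interior edge,
    these are the two triangles adjacent to the collapsed edge)."""
    new_tris = []
    for tri in triangles:
        tri = tuple(keep if v == remove else v for v in tri)
        if len(set(tri)) == 3:       # drop degenerate triangles
            new_tris.append(tri)
    return new_tris
```

To make the operation invertible (restricted vertex split, as needed for a progressive mesh), one would additionally record the removed vertex, its position and the dropped triangles before collapsing.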

The previous operators do not modify the topology of the mesh, i.e. no unconnected regions of the mesh will be connected and no tears will disappear or be introduced. In topology–modifying algorithms, two vertices are allowed to merge even if they are not connected by an edge:


(Diagram: Vertex Contraction and its inverse, Vertex Separation)

In this operation we may again choose to position the target vertex freely or restrict it to one of the two previous vertex positions (this variant is shown in the picture).

Another common operator, the edge flip, does not change the vertices at all, but only the connectivity. This is often used as an optimization step, because flipping an edge may result in a better local approximation of the surface [17].

(Diagram: Edge Flip; the inverse of an edge flip is again an edge flip)

In the above pictures only the case of inner vertices is shown. To apply the operators to boundary vertices/edges/triangles, special care has to be taken to ensure a topologically consistent mesh.

Even in the non–boundary case, situations arise where an operator cannot be applied, because it would, e.g., generate coplanar triangles, as shown in Figure 31. So, when choosing an operation, the local topology has to be checked to determine whether the operation can actually be applied [21].

Figure 31: A collapse along the arrow would cause the dark triangles to share all three vertices and thus create a non–manifold configuration.
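One standard formulation of this check for an interior edge of a manifold triangle mesh (not spelled out in the text above, but commonly called the "link condition") is: the vertices adjacent to both endpoints of the edge must be exactly the opposite vertices of the two triangles containing the edge. A brute-force sketch:

```python
def collapse_is_legal(triangles, u, v):
    """Link condition for collapsing the interior edge (u, v) of a
    manifold triangle mesh: vertices adjacent to both u and v must be
    exactly the opposite vertices of the triangles containing the edge.
    In the Figure-31 situation this test fails, because an extra vertex
    is adjacent to both endpoints."""
    def ring(x):                       # one-ring neighbors of vertex x
        return {w for tri in triangles if x in tri for w in tri} - {x}
    opposite = {w for tri in triangles
                if u in tri and v in tri for w in tri} - {u, v}
    return ring(u) & ring(v) == opposite
```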

5.1.6. Error metrics

Now we have clearly specified local operations to perform on a given mesh. Given that the operation of choice is topologically legal, we still have to check whether the approximation error it introduces lies within the given bounds. The definition of this error metric is where incremental decimation algorithms differ most. There are three main points one has to consider when classifying these metrics:

Measured properties: Topology, geometry, attributes.

Global vs. local: Is the error evaluated only from one step to the next, or is there a comparison to the original geometry?

Interpretation: To be useful for specifying the error bound, the metric must be intuitive; also, there shouldn’t be too many parameters that the user has to adjust manually.

Figure 32: Hausdorff distance d(A, B) compared to d(B, A)

An example of a local measure is the distance to the previous mesh [32]. The problem here is, of course, that no useful statement can be made about how faithful the simplified geometry is compared to the original. Even if one accumulates the error values obtained at each step, this results only in an overly conservative estimate of the global error, prohibiting high simplification rates. Having a tight, reliable upper bound for the surface error is crucial in e.g. CAD design or medical applications.

Many algorithms try to guarantee an upper bound on the distance error between the original and the simplified surface, as in the conceptually straightforward approach by Cohen et al. [9]: inner and outer offset surfaces are computed and used by an incremental procedure to simplify the mesh within these global bounds.

A common way to define distances between meshes is the Hausdorff distance between two shapes A and B, which is defined as max(d(A, B), d(B, A)), where d(X, Y) (the so–called one–sided Hausdorff distance) is the maximum of the distances from any point on X to the closest point on Y. It should be noted that the distance from shape A to shape B is in general not the same as the distance from B to A (see Figure 32).
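On finite point sets the definition can be evaluated directly, which also makes the asymmetry of the one-sided distances easy to see. (For surfaces one would measure point-to-triangle distances instead; this sketch only illustrates the definition.)

```python
import math

def d_one_sided(A, B):
    """One-sided Hausdorff distance d(A, B) between two point sets:
    the maximum over a in A of the distance to the closest b in B."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance max(d(A, B), d(B, A))."""
    return max(d_one_sided(A, B), d_one_sided(B, A))
```

For A = {(0, 0)} and B = {(0, 0), (10, 0)}, d(A, B) = 0 but d(B, A) = 10, exactly the asymmetry illustrated in Figure 32.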

The Hausdorff distance is an intuitive global error measure, but difficult to compute [19]. In the case of scanned data, one can argue that only the vertices of the original mesh hold actual information, while the triangulation merely interpolates these points. From this point of view, it is sufficient to estimate the one–sided Hausdorff distance by measuring the distance from the original surface vertices to the triangles of the simplified surface [21].

Measuring the global error usually requires carrying along some sort of global history during simplification, e.g. the removed vertices [21] or information about the current error bound in the form of scalar values, error volumes or other data [11, 31].

By preserving the global volume of the shape with each local simplification step [24, 25], no global history needs to be maintained; but although the method in general yields very good approximations, no upper bound on the surface error can be given. Recently, error quadrics were introduced [11], which can be computed very fast and in general give pleasing results, but unfortunately the interpretation of the surface error is unintuitive (“squared distance of a vertex to a set of planes”). It is not obvious how this measure relates to the Hausdorff distance.
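The quadric idea itself is compact: each vertex accumulates a 4x4 matrix Q as the sum of p p^T over the planes p = (a, b, c, d) of its incident triangles (unit normals assumed), and v^T Q v with homogeneous v = (x, y, z, 1) then gives the sum of squared distances of a candidate position to all accumulated planes. A minimal sketch with plain 4x4 lists and no mesh bookkeeping:

```python
def plane_quadric(a, b, c, d):
    """Quadric K_p = p p^T for the plane ax + by + cz + d = 0,
    with (a, b, c) a unit normal."""
    p = (a, b, c, d)
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_quadrics(Q1, Q2):
    """Quadrics are additive: the error w.r.t. a set of planes is the
    error w.r.t. the sum of their quadrics."""
    return [[Q1[i][j] + Q2[i][j] for j in range(4)] for i in range(4)]

def quadric_error(Q, x, y, z):
    """v^T Q v with homogeneous v = (x, y, z, 1): the sum of squared
    distances of the point to all planes accumulated in Q."""
    v = (x, y, z, 1.0)
    return sum(v[i] * Q[i][j] * v[j] for i in range(4) for j in range(4))
```

For example, accumulating the planes z = 0 and x = 1 and evaluating at (0, 0, 2) yields 2^2 + 1^2 = 5, the sum of the squared plane distances.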


Figure 33: Situation after an edge collapse; the dark region of the mesh has changed, thus for each vertex adjacent to that region (shown here for one vertex only), the potential collapses (marked by arrows) need to be re-evaluated.

Only a few authors elaborate explicitly on how not only the geometry of a surface can be approximated, but also surface attributes like colors, vertex normals and texture, though most of the methods extend to these properties [8, 15].

Meshes with boundaries (as opposed to solids) involve the treatment of special cases. E.g., if a vertex on a boundary is removed, the distance error now needs to be measured from that vertex to the boundary polygon, not to a triangle. Some methods apply completely different criteria than for the simplification of inner vertices (such as preservation of the length of the boundary polygon [13]).

When the removal of a vertex is rejected due to violation of the error bound, it is important to re–evaluate that vertex at a later stage, since it may again become a candidate due to changes in the nearby regions of the mesh. This is usually achieved by updating the priority queue entries for the neighborhood of the removed vertex (commonly involving all triangles in the 2–ring of that vertex, i.e. the triangles adjacent to the changed regions), as shown in Figure 33.

For validation of the error bounds, the Metro tool [6] is a useful, publicly available software package for comparing meshes in terms of surface distance and volume difference.

5.1.7. Quality metrics

The error metric explained in the previous section serves to select the valid candidate operations for the next simplification step. But we still need to know in what order these operations should be performed, to guide the process towards “better” surfaces. This is achieved by evaluating the operations using a quality metric, which is conceptually separate from, but in practice often intertwined with, the error metric.

As a simple example, an algorithm that relies on bounding the distance error between two surfaces can’t detect fold–overs: a point moving in the plane can cause the orientation of adjacent triangles to flip. To generate “reasonable” approximations, more control needs to be exerted (e.g. by disallowing a change of the face normal of more than 90 degrees). Also, self intersections of a shape can appear if opposite sides of a model are closer together than the bound on the distance error.
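A minimal fold-over test along these lines simply compares a triangle's normal before and after the candidate operation; the 90-degree criterion mentioned above becomes a sign test on the dot product (a sketch, assuming consistent counterclockwise vertex order):

```python
def face_normal(a, b, c):
    """Unnormalized normal of triangle (a, b, c) via the cross product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def causes_fold_over(tri_before, tri_after):
    """Reject an operation if a triangle's normal turns by 90 degrees
    or more, i.e. if the dot product of old and new normal is not
    positive."""
    n0 = face_normal(*tri_before)
    n1 = face_normal(*tri_after)
    return sum(n0[i] * n1[i] for i in range(3)) <= 0.0
```

In a decimation loop, this test would be evaluated for every triangle adjacent to the moved vertex before the operation is accepted.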

We can group quality criteria by their domain:

Geometry: Smooth surfaces are usually desirable; apart from the “triangle flipping” problem, there may be user–imposed limits on surface curvature. Also, good preservation of features can be understood as controlling quality. Decimation algorithms often optimize for well–shaped triangles, employing some measure of “roundness” (generally maximizing triangle area with respect to edge lengths).

Topology: A regular mesh structure is often preferable. In a topology–changing decimation algorithm, one might want to minimize the number of separate components.

Attributes: Common surface attributes are colors, texture coordinates and vertex normals. Faithful reproduction of the original appearance requires minimizing texture distortion, preserving discontinuities in surface colors and minimizing normal deviation (cf. Fig. 34).
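As an example of a “roundness” measure of the kind mentioned under Geometry, one common choice (not necessarily the one used by any algorithm cited here) relates the triangle area to the sum of the squared edge lengths; it equals 1 for an equilateral triangle and tends to 0 for slivers:

```python
import math

def roundness(a, b, c):
    """4*sqrt(3) * area / (sum of squared edge lengths):
    1.0 for an equilateral triangle, -> 0 as the triangle degenerates.
    Works for 2D or 3D points; the area is computed via Heron's formula."""
    la, lb, lc = math.dist(b, c), math.dist(a, c), math.dist(a, b)
    s = 0.5 * (la + lb + lc)
    area = math.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0))
    return 4.0 * math.sqrt(3.0) * area / (la * la + lb * lb + lc * lc)
```

A decimation algorithm would maximize the minimum roundness over the retriangulated region, or reject an operation whose result falls below a threshold.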

Often, the evaluated criteria are used both for error and quality estimation: e.g., we may optimize for low distortion of a triangle, given some scalar measure, but have a threshold at which we simply reject the operation if the achievable quality is too low.

5.1.8. Practicalities

Besides the core decimation routines, authoring a general simplification tool often involves more work to make it valuable in practice. The following list states in a nutshell a few key points to consider; it does of course not claim to be complete, as domain–specific demands always have to be added.

Feature preservation: The user should be able to specify features on the surface that are to be retained; this can be done completely manually by selecting individual vertices or edges, or semi–automatically (the program detects and marks features based on user–specified criteria).

Configuration: The default parameters for error bounds and quality criteria should already be useful; named presets are a way to hide complex parameter sets (“optimize for triangle shape”; “optimize for smoothness”).

Progressive mesh generation: The decimation algorithm needs to store the record of applied operations along with the information needed to invert them.

Robustness: If the decimation algorithm cannot handle non–manifold or degenerate mesh regions, these should be detected and dealt with in a graceful way – e.g. by simply ignoring them. This also influences the choice of mesh data structure: if a mesh cannot even be represented internally, there is obviously no way to deal with degeneracies.

Speed: The choice of decimation criteria has a great effect on the speed and scalability of the algorithm. It should be carefully investigated whether exact error bounds are a


Figure 34: Example of mesh decimation using error metrics for the preservation of surface attributes; left: original model (400K triangles); center: decimated without respecting color attributes (5K triangles); right: decimated preserving color discontinuities (5K triangles). The algorithm trades geometry preservation for color preservation in this case.

necessity – this can make the difference between a running time of a few seconds and one of several minutes, or even hours.

Memory requirements: When meshes in the range of a few million triangles have to be simplified in–core, hundreds of megabytes of RAM are easily consumed. Efficient data structures are very important here, and again the choice of error bounds plays a role, because memory usage is higher if e.g. a global history has to be maintained [25].

5.2. Discrete fairing

Meshes usually have to satisfy aesthetic requirements. While the aesthetic appearance of a surface is always subjective, it is nevertheless possible to formulate a principle that makes it possible to quantify surface fairness: the principle of the simplest shape [3]. Generally speaking, a shape should be free of unnecessary details such as noise or oscillations. Discrete fairing creates a mesh that is free of such unnecessary detail. To speed up the fairing process, multigrid algorithms based on mesh decimation are often integrated into the construction algorithms.

There are two major cases where mesh fairing algorithms are applied. In one case, one is interested in smoothing out the high frequency details of an existing mesh under the constraint of keeping the low frequency components. Typical fields of application are the smoothing of polyhedral surfaces such as those extracted from volumetric medical data by iso-surface construction algorithms or those resulting from laser-range scanners. Because of the huge mesh sizes in such applications, it is especially important to have fast algorithms here. The other application is free form surface modeling. Here the problem is to create a well defined smooth surface that is fair. Tightly connected to this application is multiresolution editing of arbitrary meshes, which needs fairing algorithms that are able to locally preserve the shape of the triangulation to enable a simple detail encoding. In this context, mesh hierarchies resulting from mesh decimation in combination with discrete fairing are used to generate a geometric hierarchy.

When quantifying the quality of a mesh, in contrast to smooth surfaces, we have to be aware that there are two different fairness types, outer and inner fairness. The outer fairness measures the fairness of the mesh geometry; this controls the quality of the shape that is approximated by the mesh. The inner fairness addresses the parameterization of the mesh vertices; it determines how the mesh vertices are distributed over the resulting surface (Fig. 35).

Most mesh fairing schemes are based on linear methods. These algorithms do not strictly separate the outer and inner fairness concepts, so the inner fairness strategy greatly influences the outer fairness and therefore the shape of the resulting mesh. Other important influences are mesh size and connectivity (Fig. 35). Therefore, linear approaches produce results that are in most cases clearly not as fair as those of nonlinear approaches that depend on intrinsic surface properties only, and there is no guarantee that the surfaces are free of parameterization artifacts. Moreover, it is also harder for a designer to develop an intuitive understanding of the manipulation behavior.

To avoid such influences, one has to use fairing algorithms that depend on intrinsic surface properties only, i.e. properties that depend purely on the geometry. Altering the mesh size, the connectivity or the inner fairness condition then only produces another discretization of the same smooth surface and hence leads to geometries that will be close to each other (Fig. 35). In practice this puts the designer in the position to choose the inner fairness condition freely.

However, linear approaches have some important advantages over their nonlinear counterparts. The resulting algorithms are usually simple and enable a fast implementation. Another important property is that such linear approaches are in general mathematically well understood: it is usually possible to give practicable statements about the conditions under which a solution exists or when the construction process converges. Intrinsic fairing is a nonlinear problem and often considerably more costly; here, for many popular fairing functionals, it is also still unknown whether a solution always exists within specific smoothness classes.

5.2.1. Linear methods

Energy minimization
The standard approach for fair surface construction is based on the idea of minimizing a fairness metric, punishing intrinsic features that are inconsistent with the fairness principle of the simplest shape [3]. While this leads to nonlinear functionals, a popular technique to simplify this approach is to give up the parameter independence by approximating the geometric invariants with higher order derivatives. For some important fairness functionals this results in algorithms that enable the construction of a solution by solving a linear system. This is the discrete fairing approach that was presented by Kobbelt in [20].

Diffusion flow
A very effective method for smoothing polyhedral surfaces is the discrete diffusion flow [33]. Here the idea is to iteratively update each vertex q_i

q_i^new = q_i + t · Δq_i    (6)

by adding a displacement vector that is a scaled discrete Laplacian. Assigning a scalar weight λ_ij to every vertex q_j that is adjacent to q_i, with the constraint ∑_j λ_ij = 1, the discrete Laplacian Δq_i is defined as

Δ(q_i) = ( ∑_{q_j ∈ N(q_i)} λ_ij q_j ) − q_i .

Here N(q_i) denotes the set of all vertices adjacent to q_i. This algorithm is linear in time and space complexity and therefore well suited to fair even very large polyhedral surfaces. In this formulation, for stability reasons the scale factor t has to be a positive number satisfying 0 < t < 1. This approach can be interpreted as a forward Euler discretization of the diffusion equation. Using a backward Euler discretization, Desbrun et al. [10] showed that it is possible to develop a diffusion flow that is unconditionally stable and hence enables larger scaling factors t.
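One explicit diffusion step of this kind can be sketched as follows, using the common uniform ("umbrella") weights λ_ij = 1/|N(q_i)|. This is a minimal illustration, not the report's implementation; the function name and the data layout (vertex positions as tuples, adjacency as lists of indices) are assumptions for the sketch.

```python
# One forward-Euler diffusion step q_i <- q_i + t * Delta(q_i), with
# uniform weights lambda_ij = 1/|N(q_i)|. Illustrative sketch only.

def diffusion_step(positions, neighbors, t=0.5):
    """positions: list of coordinate tuples; neighbors: list of index lists."""
    new_positions = []
    for i, q in enumerate(positions):
        ring = neighbors[i]
        if not ring:                       # isolated vertex: keep position
            new_positions.append(q)
            continue
        w = 1.0 / len(ring)                # uniform weights, summing to 1
        # Delta(q_i) = (sum_j lambda_ij * q_j) - q_i, componentwise
        lap = tuple(w * sum(positions[j][c] for j in ring) - q[c]
                    for c in range(len(q)))
        new_positions.append(tuple(q[c] + t * lap[c] for c in range(len(q))))
    return new_positions
```

Note that all averages are taken over the old positions, so the update is independent of the vertex ordering.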

The main purpose of the diffusion flow is to smooth the high frequencies in noisy meshes. Since the equilibrium surface of the flow only enables C0 boundary conditions, in this form it is of only limited use in free-form fair surface design. To enable smooth boundary conditions one has to consider diffusion equations of higher order. In [33] Taubin proposed to combine two such smoothing steps with positive and negative scale factors and developed an algorithm that enables various interpolation constraints. Another idea that enables smooth boundary conditions is to use higher powers of the Laplacian in the diffusion flow. A good trade-off between efficiency and quality is the bilaplacian flow, enabling C1 boundary conditions. This flow also results if we choose scale factors of equal absolute value in Taubin's algorithm.

Laplacian smoothing
A special case of the diffusion flow is known as Laplacian smoothing. Here the idea is quite simple: each vertex q_i is replaced such that Δ(q_i) = 0 is satisfied locally, which means we have

q_i = ∑_{q_j ∈ N(q_i)} λ_ij q_j .

As we can see, this equation results from (6) by setting t = 1. Although this value is outside the range allowed for the diffusion flow, convergence is guaranteed if boundary conditions are specified. There are two principal methods to update the vertices q_i: simultaneous and sequential Laplacian smoothing [35]. The first updates the vertices in a Jacobi-like manner, the second depends on partially previously calculated new vertices in a Gauss-Seidel-like manner. Although the simultaneous version needs more storage space for holding the old positions of q_i, it produces better results, since the outcome does not depend on the update order of the local smoothing steps.
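The difference between the two update strategies can be made concrete in a few lines. The sketch below is illustrative (uniform weights, fixed boundary vertices passed as a set); the simultaneous variant averages over a frozen copy of the old positions, while the sequential variant reads positions that may already have been updated in the same pass.

```python
# Simultaneous (Jacobi-like) vs. sequential (Gauss-Seidel-like) Laplacian
# smoothing with uniform weights. Vertices in `fixed` act as boundary
# conditions and are never moved. Illustrative sketch only.

def laplacian_smooth(positions, neighbors, fixed, sequential=False):
    """One smoothing pass: each free q_i becomes the average of its ring."""
    old = list(positions)   # frozen snapshot, used by the Jacobi variant
    cur = list(positions)   # working copy, updated in place
    for i, ring in enumerate(neighbors):
        if i in fixed or not ring:
            continue
        src = cur if sequential else old
        n = len(ring)
        cur[i] = tuple(sum(src[j][c] for j in ring) / n
                       for c in range(len(cur[i])))
    return cur
```

On a path of four vertices with both ends fixed, the two variants already diverge after one pass: the sequential result for the third vertex depends on the freshly updated second vertex.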

PDE discretization
Another mesh fairing approach is based on the idea of discretizing the PDE approach of Bloor and Wilson [2]. In [22] Kobbelt et al. proposed to discretize

Δ² f = 0,    (7)

to create surfaces satisfying prescribed C1 boundary conditions. This equation results if one uses variational calculus to describe the surface that minimizes the thin plate energy

∫∫ f_xx² + 2 f_xy² + f_yy² dx dy.

In practice equation (7) is discretized and the resulting linear system is solved using an iterative multigrid solver. The multigrid approach considerably speeds up the construction algorithm. In this formulation the algorithm is fast enough to be used in interactive free-form mesh modeling. The hierarchies for the multigrid approach are determined using a mesh decimation algorithm based on the progressive mesh approach presented by Hoppe [15] (see 5.1.2).
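The structure of such a discretization can be illustrated in one dimension, where the umbrella Laplacian on a path of samples yields the classical [1, −4, 6, −4, 1] bilaplacian stencil, and fixing two samples at each end mimics C1 boundary conditions. This is a sketch under those simplifying assumptions, not the paper's multigrid solver; it assembles the small dense system directly and solves it by Gaussian elimination.

```python
# 1D stand-in for discretizing Delta^2 f = 0: the 4th-difference equation
# f_{i-2} - 4 f_{i-1} + 6 f_i - 4 f_{i+1} + f_{i+2} = 0 holds at every free
# sample; `fixed` maps the two outermost indices on each side to prescribed
# values. Illustrative sketch only.

def solve_bilaplacian(n, fixed):
    free = [i for i in range(n) if i not in fixed]
    stencil = {-2: 1.0, -1: -4.0, 0: 6.0, 1: -4.0, 2: 1.0}
    col = {i: k for k, i in enumerate(free)}
    A = [[0.0] * len(free) for _ in free]      # system matrix
    b = [0.0] * len(free)                      # right-hand side
    for r, i in enumerate(free):
        for off, w in stencil.items():
            j = i + off
            if j in fixed:
                b[r] -= w * fixed[j]           # move known values to the rhs
            else:
                A[r][col[j]] += w
    # Gaussian elimination with partial pivoting.
    m = len(free)
    for k in range(m):
        p = max(range(k, m), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, m):
            f = A[r][k] / A[k][k]
            for c in range(k, m):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    x = [0.0] * m
    for k in reversed(range(m)):
        x[k] = (b[k] - sum(A[k][c] * x[c] for c in range(k + 1, m))) / A[k][k]
    out = dict(fixed)
    out.update({i: x[col[i]] for i in free})
    return [out[i] for i in range(n)]
```

Since the discrete solutions of the 4th-difference equation are cubic polynomials, symmetric boundary data produces the expected symmetric, spline-like profile.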

Figure 35: (a)–(c) show examples of mesh fairing based on discretizing the equation Δ²f = 0. The boundary condition is determined by 3 cylinders that are arranged symmetrically. In (a) and (b) we see the results for two different mesh sizes if a local uniform parameterization is assumed. In (c) we used a discretization of a Laplacian that locally preserves the shape of the triangles. We can see that the mesh size and the local parameterization strategy strongly affect the results. The rectangular patch introduced in the original mesh leads to local as well as global shape distortions and prevents a symmetric solution. (d)–(f) show examples of an intrinsic fairing operator whose outer fairness is based on the nonlinear PDE Δ_B H = 0. We can see that the chosen inner fairness operator and the mesh structure have only marginal influence on the shape. The influence of the mesh size is also much weaker than in the linear setting.

The fairing schemes mentioned above can be partitioned into three categories, depending on whether fairing is based on some kind of diffusion flow, on Laplacian smoothing, or on discretizing a PDE that characterizes the smooth solution. All three categories are tightly connected. For example, the two Laplacian smoothing algorithms can be seen as performing some steps of a Jacobi or Gauss-Seidel iterative solver on the linear system that is derived from the global equation Δf = 0. It is also possible to create the surface that is given by the PDE (7) using the fact that the bilaplacian diffusion flow has this equation as its equilibrium. Moreover, in all three categories the discretization of the Laplacian plays a central role. During the last years various linear discretizations of the Laplacian have been developed [33, 22, 10, 14], differing in how the geometry of the mesh influences the discretization. The chosen discretization strategy determines the inner mesh fairness, but it also greatly influences the geometry of the resulting mesh (Fig. 35).

5.2.2. Nonlinear methods

Energy minimization
A common method to create fair surfaces is based on the idea of minimizing a fairness metric that punishes unaesthetic surface properties. In [36] Welch and Witkin proposed a mesh fairing algorithm that enables G1 boundary conditions, based on the idea of minimizing the total curvature

∫_A κ₁² + κ₂² dA,

thus punishing large curvature values. The necessary intrinsic curvature values were estimated using local quadratic approximations over local planar parameterizations. The local parameter domains are constructed using an exponential map of the 1-neighborhood around each vertex. This operator maps the neighborhood onto the plane while preserving the distances of the vertex to its neighbors and the ratio of the adjacent angles. The quadratic approximations are then constructed using local quadratic least squares approximation. If the least squares approximation is underdetermined, the number of basis functions is reduced, making the discretization process discontinuous and thus leading to a potential instability risk in the mesh fairing algorithm. To overcome that risk, Welch and Witkin propose to optimize some auxiliary norm in the underdetermined cases, which requires an orthogonal matrix decomposition and is computationally expensive.

Figure 36: The mean curvature vector at the vertex q_i can be discretized using the 1-disk.

Diffusion flow
An intrinsic diffusion operator that is the nonlinear analogon to (6) was presented by Desbrun et al. in [10], using the discrete mean curvature flow

q_i′ = q_i + λ H n,

where H is the mean curvature and n the outer unit surface normal, leading to an excellent noise reduction algorithm. They proved that the mean curvature vector can be discretized as

H n = 1/(4A) ∑_{q_j ∈ N(q_i)} (cot α_j + cot β_j) (q_i − q_j),

where A is the sum of the triangle areas of the 1-disk at q_i, and α_j and β_j are the triangle angles shown in Figure 36. Assuming that the changes induced in a time step are small, this flow can be handled quite analogously to the linear diffusion equation (6), and it is also possible to formulate an unconditionally stable flow.

While this fairing algorithm is mainly designed to optimize the outer fairness in its original formulation, Ohtake et al. [26] combined it with an inner fairness criterion that leads to more regular meshes. To achieve this result, the authors propose to move each vertex in the direction defined by the linear diffusion flow using a local uniform parameterization, but with a speed equal to a properly chosen function of the mean curvature.

These intrinsic diffusion operators are excellent for smoothing noisy meshes, but since these algorithms converge to a discrete minimal surface satisfying H = 0, they only enable C0 boundary conditions and are therefore, like their linear counterparts, of only limited use in free-form fair surface design. As for the linear diffusion flow algorithms, the solution is to use curvature flows of higher order, for example the Laplacian of curvature flow [5].

PDE discretization
A fairing algorithm for arbitrarily connected triangular meshes that enables G1 boundary conditions (prescribed vertices and unit normals) and allows the designer to completely separate outer and inner fairness conditions was presented by Schneider et al. in [30]. To achieve this, the outer fairness concept is based on the idea of discretizing the intrinsic PDE

Δ_B H = 0,    (8)

which can be interpreted as one possible nonlinear analogon to thin plate splines (7). Here Δ_B is the Laplace-Beltrami operator and H the mean curvature. This equation characterizes the equilibrium state of the Laplacian of curvature flow [5]. The PDE depends only on geometric intrinsics and is comparatively simple for a fourth order equation. The resulting surfaces extend an important fairing concept that makes spiral curves so interesting in the planar case [28]. Because of the mean value property of the Laplacian, it is guaranteed that the extremal mean curvature values of a solution of (8) are reached at the border and that there are no local extrema in the interior [29]. Thus it satisfies the important fairing principle of the simplest shape [3]. Since constant mean curvature surfaces satisfy this equation, important basic shapes such as spheres and cylinders can be reconstructed. A coarse mesh already approximates the shape of the smooth surface that is implicitly defined by the PDE, so increasing the mesh size mainly improves the smoothness of the approximation. This property is exploited to improve the efficiency of the construction algorithm by using multigrid methods for arbitrary meshes [22]. The necessary mesh hierarchies are again created using the progressive mesh representation as introduced by Hoppe [15]. To further speed up the construction scheme, the fourth order PDE is factorized into two second order problems. Instead of trying to simulate the continuous case using local quadratic approximations, the authors use the discrete data of the construction algorithm directly.

References

1. B. G. Baumgart. Winged-edge polyhedron representation. Technical Report STAN-CS-320, Stanford University, Computer Science Department, 1972.

2. M. I. G. Bloor and M. J. Wilson. Using partial differential equations to generate free-form surfaces. Computer-Aided Design, 22:202–212, 1990.

3. H. G. Burchard, J. A. Ayers, W. H. Frey, and N. S. Sapidis. Approximation with aesthetic constraints. In Designing Fair Curves and Surfaces, pages 3–28. SIAM, Philadelphia, 1994.

4. S. Campagna, L. Kobbelt, and H.-P. Seidel. Directed edges: A scalable representation for triangle meshes. Journal of Graphics Tools, 3(4):1–11, 1998.

Figure 37: 6 circles with C1 boundary conditions are used to define a "tetra thing". Due to the symmetry the final solution here is actually G2 continuous, which is indicated in (c) by the smooth reflection lines. The pictures were constructed using an intrinsic fairing operator whose outer fairness is based on Δ_B H = 0.

5. D. L. Chopp and J. A. Sethian. Motion by intrinsic Laplacian of curvature. Interfaces and Free Boundaries, 1:1–18, 1999.

6. P. Cignoni, C. Rocchini, and R. Scopigno. Metro: Measuring error on simplified surfaces. Computer Graphics Forum, 17(2):167–174, 1998.

7. J. Cohen, D. Manocha, and M. Olano. Simplifying polygonal models using successive mappings. In IEEE Visualization '97 Conference Proceedings, pages 395–402, 1997.

8. J. Cohen, M. Olano, and D. Manocha. Appearance-preserving simplification. In SIGGRAPH 98 Conference Proceedings, pages 115–122, 1998.

9. J. Cohen, A. Varshney, D. Manocha, G. Turk, H. Weber, P. Agarwal, F. P. Brooks, Jr., and W. Wright. Simplification envelopes. In SIGGRAPH 96 Conference Proceedings, pages 119–128, 1996.

10. M. Desbrun, M. Meyer, P. Schröder, and A. H. Barr. Implicit fairing of irregular meshes using diffusion and curvature flow. In SIGGRAPH 99 Conference Proceedings, pages 317–324, 1999.

11. M. Garland and P. S. Heckbert. Surface simplification using quadric error metrics. In SIGGRAPH 97 Conference Proceedings, pages 209–216, 1997.

12. A. Guéziec. Surface simplification with variable tolerance. In Second Annual Intl. Symp. on Medical Robotics and Computer Assisted Surgery, pages 132–139, 1995.

13. A. Guéziec. Locally toleranced surface simplification. IEEE Transactions on Visualization and Computer Graphics, 5(2), 1999.

14. I. Guskov, W. Sweldens, and P. Schröder. Multiresolution signal processing for meshes. In SIGGRAPH 99 Conference Proceedings, pages 325–334, 1999.

15. H. Hoppe. Progressive meshes. In SIGGRAPH 96 Conference Proceedings, pages 99–108, 1996.

16. H. Hoppe. Efficient implementation of progressive meshes. Computers and Graphics, 22(1):27–36, 1998.

17. H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle. Mesh optimization. In SIGGRAPH 93 Conference Proceedings, pages 19–26, 1993.

18. L. Kettner. Designing a data structure for polyhedral surfaces. Technical Report TR 278, ETH Zürich, Institute of Theoretical Computer Science, 1997.

19. R. Klein, G. Liebich, and W. Straßer. Mesh reduction with error control. In IEEE Visualization '96 Conference Proceedings, pages 311–318, 1996.

20. L. Kobbelt. Discrete fairing. In Proceedings of the Seventh IMA Conference on the Mathematics of Surfaces, pages 101–131, 1996.

21. L. Kobbelt, S. Campagna, and H.-P. Seidel. A general framework for mesh decimation. In Graphics Interface, pages 43–50, 1998.

22. L. Kobbelt, S. Campagna, J. Vorsatz, and H.-P. Seidel. Interactive multi-resolution modeling on arbitrary meshes. In SIGGRAPH 98 Conference Proceedings, pages 105–114, 1998.


23. P. Lindstrom. Out-of-core simplification of large polygonal models. In SIGGRAPH 00 Conference Proceedings, to appear.

24. P. Lindstrom and G. Turk. Fast and memory efficient polygonal simplification. In IEEE Visualization '98 Conference Proceedings, pages 279–286, 1998.

25. P. Lindstrom and G. Turk. Evaluation of memoryless simplification. IEEE Transactions on Visualization and Computer Graphics, 5(2), 1999.

26. Y. Ohtake, A. G. Belyaev, and I. A. Bogaevski. Polyhedral surface smoothing with simultaneous mesh regularization. In Proceedings Geometric Modeling and Processing, pages 229–237, 2000.

27. J. Rossignac and P. Borrel. Multi-resolution 3D approximation for rendering complex scenes. In Second Conference on Geometric Modelling in Computer Graphics, pages 453–465, Genova, Italy, 1993.

28. R. Schneider and L. Kobbelt. Discrete fairing of curves and surfaces based on linear curvature distribution. In Curve and Surface Design, pages 371–380, Saint-Malo, France, 1999.

29. R. Schneider and L. Kobbelt. Generating fair meshes with G1 boundary conditions. In Proceedings Geometric Modeling and Processing, pages 251–261, 2000.

30. R. Schneider and L. Kobbelt. Geometric fairing of irregular meshes for free-form surface design. Submitted.

31. W. J. Schroeder. A topology modifying progressive decimation algorithm. In IEEE Visualization '97 Conference Proceedings, pages 205–212, 1997.

32. W. J. Schroeder, J. A. Zarge, and W. E. Lorensen. Decimation of triangle meshes. In Computer Graphics (SIGGRAPH 92 Conference Proceedings), volume 26, pages 65–70, 1992.

33. G. Taubin. A signal processing approach to fair surface design. In SIGGRAPH 95 Conference Proceedings, pages 351–358, 1995.

34. G. Turk. Re-tiling polygonal surfaces. In Computer Graphics (SIGGRAPH 92 Conference Proceedings), volume 26, pages 55–64, 1992.

35. J. Vollmer, R. Mencl, and H. Müller. Improved Laplacian smoothing of noisy surface meshes. In Computer Graphics Forum (Proc. Eurographics), pages 131–138, 1999.

36. W. Welch and A. Witkin. Free-form shape design using triangulated surfaces. In SIGGRAPH 94 Conference Proceedings, pages 247–256, 1994.

37. L. Williams. Pyramidal parametrics. In Computer Graphics (SIGGRAPH 83 Conference Proceedings), volume 17, pages 1–11, 1983.

38. S. A. Zimmerman. Applying frequency domain constructs to a broad spectrum of visual simulation problems. Technical report, Evans & Sutherland, 1987. Presented at the IMAGE IV Conference, Phoenix, Arizona.


6. Geometric modeling based on polygonal meshes

Finally we come to the last stage in our virtual mesh processing pipeline. Until now we presented many powerful methods to efficiently process polygonal meshes. Having all these means at hand, we are able to focus on (interactive) multiresolution modeling, where a designer modifies a given surface by sophisticated editing operations. We distinguish between two different approaches: freeform modeling is the process of modifying subregions of the surface in a smooth manner, whereas the notion of multiresolution modeling describes edits where we can additionally preserve small geometric features.

Traditionally, geometric modeling is based on polynomial surface representations [1, 8, 9]. However, while special polynomial basis functions are well suited for describing and modifying smooth triangular or quadrilateral patches, it turns out to be rather difficult to smoothly join several pieces of a composite surface along common (possibly trimmed) boundary curves. As a flexible patch layout is crucial for the construction of non-trivial geometric shapes, spline-based modeling tools spend much effort on maintaining the global smoothness of a surface. For this reason, considerable effort has been spent to extend these traditional methods to polygonal meshes.

Just like in the two previous sections (where we carried many of the advantageous features of parametric surfaces over to polygonal surfaces while getting rid of the severe restrictions inherent to them), this section discusses some approaches that make freeform and multiresolution modeling available for triangle meshes. As opposed to splines, where the control vertices provide a convenient way to smoothly edit the surface, this is a challenging task, since plain triangle meshes do not have any reasonable control mechanism. Before we describe in detail how intuitive modeling metaphors for triangle meshes can be accomplished, we describe the general requirements a modeling tool should satisfy.

Intuitive: It should be possible to edit the overall shape with an easy-to-use control mechanism (cf. the control vertices of splines) in a broad, smooth manner, while preserving small features residing on the surface.

Independent: The editing interface should abstract from the underlying mesh representation, since in general a designer is not interested in how the surface is actually represented.

Interactive: Interactivity is crucial, since a designer heavily depends on immediate visual feedback when performing further edits.

This part of the tutorial is organized as follows. First we will deal with freeform modeling. This can be done with the help of discrete fairing or subdivision, respectively. Since we explained these methods in previous sections (cf. 5.2, 4.1), we restrict ourselves to describing how an efficient control mechanism (similar to the control points of spline-based methods) can be derived. In the second part we will show how to build a hierarchical structure for semi-regular and for unstructured meshes. Combined with freeform modifications, this enables us to perform multiresolution modeling. Finally we will briefly discuss some specific modeling schemes that were proposed during the last years.

6.1. Freeform modeling

Subdivision schemes can be considered the algorithmic generalization of classical spline techniques, enabling control meshes with arbitrary topology. They provide easy access to globally smooth surfaces of arbitrary shape by iteratively applying simple refinement rules to the given control mesh. A coarse-to-fine hierarchy of meshes generated by this process quickly converges to a smooth limit surface. For most practical applications, the refined meshes are already sufficiently close to the smooth limit after only a few refinement steps. Let us assume we are given a semi-regular mesh M_n, which was generated by applying some subdivision operator S to a base mesh M_0, and we want to modify M_n with specific support. The usual way to implement this operation is to run a decomposition scheme several steps until the desired resolution level/mesh M_i is reached. In our setting, this can simply be done by subsampling, i.e. we just switch to M_i. On this level the mesh M_i is edited. Applying S to the modified mesh M_i′ (n − i) times yields the final result. This operation can be performed quite efficiently due to the simplicity and numerical robustness of S (Fig. 38 illustrates the varying support of modifications at different levels). The major drawback of this procedure is the fact that edits are restricted to vertices residing on a specific level. However, one can fake low-frequency modifications by moving a group of vertices from a finer level simultaneously. But besides being cumbersome, this annihilates the mathematical elegance of the multiresolution representation.

In order to apply global and smooth modifications to arbitrary (manifold) triangle meshes we make use of a completely different approach. In Sec. 5.2 we introduced the notion of discrete fairing, which provides elegant means for our purposes. The key idea is relatively simple and can roughly be stated as follows.

Define the edit by imposing boundary conditions on the mesh, choose an appropriate fairing scheme, and solve the corresponding optimization problem.

Fig. 39 shows a convenient way in which boundary conditions can be defined by the user. However, more sophisticated methods can easily be derived. The following sections show how to apply discrete fairing in the context of interactive modeling.

Figure 38: A simple subdivision surface (left) is modified by moving the vertices of corresponding control meshes. Editing the coarse control mesh leads to a wide "bump" (middle), whereas altering a vertex on a finer level affects a smaller area (right).

Figure 39: Freeform edits for unstructured meshes: The dark line indicates the area which is subject to the modification. The bright line defines the handle geometry which can be moved by the designer (middle, right). Both boundaries can have an arbitrary shape and hence they can, e.g., be aligned to geometric features in the mesh. The dark and the bright line impose C1 and C0 boundary conditions on the mesh respectively, and the modified smooth version is found by discrete fairing while preserving these conditions. Notice that the designer can apply arbitrary modifications to the handle polygon and does not have to take the mesh connectivity into account.

Since interactivity is crucial, an efficient solver for the chosen fairing scheme has to be available. Linear methods (cf. Sec. 5.2) lead to large sparse linear problems, and numerical analysis shows that straightforward iterative solvers (Jacobi/Gauß-Seidel) are not appropriate in this case. Nevertheless, more sophisticated solvers exploit knowledge about the structure of the problem; e.g., the large class of multigrid schemes solves the problem hierarchically and thus achieves linear running times in the number of degrees of freedom. These multi-level schemes solve the problem on a coarse level first and use this solution to predict initial values for a solution on the next refinement level. In our case, we can use incremental mesh decimation (cf. Sec. 5.1) to build a fine-to-coarse hierarchy of the mesh in such a way that intermediate meshes can be used to solve the optimization problem (OP). This is done in the following way.

go to coarsest level
solve OP directly
repeat:
    reinsert some vertices
    solve OP in the vicinity of the new vertices
until mesh is reconstructed
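The predict-then-relax idea behind this loop can be sketched on the 1D membrane problem Δf = 0 on a path with fixed endpoints: solve on a coarse sampling, reinsert vertices at edge midpoints as the prediction, then relax locally with a few Gauss-Seidel sweeps. This is purely illustrative; in the actual scheme the hierarchy comes from incremental mesh decimation, and all names below are made up for the sketch.

```python
# Multi-level solve of the 1D membrane equation f_i = (f_{i-1} + f_{i+1})/2
# with fixed endpoints: coarse solve -> midpoint prolongation -> relaxation.

def relax(f, sweeps):
    """A few Gauss-Seidel sweeps; the two endpoints stay fixed."""
    for _ in range(sweeps):
        for i in range(1, len(f) - 1):
            f[i] = 0.5 * (f[i - 1] + f[i + 1])
    return f

def multilevel_solve(left, right, levels, sweeps=8):
    f = [left, right]                    # coarsest level: just the endpoints
    for _ in range(levels):
        fine = []
        for a, b in zip(f, f[1:]):       # reinsert vertices at midpoints
            fine.extend([a, 0.5 * (a + b)])
        fine.append(f[-1])
        f = relax(fine, sweeps)          # solve OP near the new vertices
    return f
```

For this toy problem the midpoint prediction is already exact (the solution is linear), which is precisely why a good coarse-level solution makes the fine-level relaxation cheap.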

Discrete fairing adjusts vertex positions only, which might be insufficient for certain boundary conditions, i.e. extremely large/distorted triangles can occur. Fortunately, another degree of freedom inherent to triangle meshes can be exploited: the connectivity can be changed to get rid of badly shaped triangles. This can, e.g., be done by prescribing a maximal/minimal edge length (cf. [4, 5]). Long edges are removed by inserting new vertices at their centers (the two triangles adjacent to such an edge are split into four subtriangles). Subsequently, a simple local optimization procedure balances the vertices' valences, so strongly distorted triangles can be avoided.

6.2. Multiresolution modeling

Figure 40: A flexible metaphor for multiresolution edits. On the left, the original mesh is shown. The smooth version of the original mesh is found by applying discrete fairing while observing the boundary constraints (dark and bright line, cf. Fig. 39). The center left shows the result of the curvature minimization. The geometric difference between the two left meshes is stored as detail information with respect to local frames. Now the designer can move the handle polygon, and this changes the boundary constraints for the curvature minimization. Hence the discrete fairing generates a modified smooth mesh (center right). Adding the previously stored detail information yields the final result on the right. Since we can apply fast multi-level smoothing when solving the optimization problem, the modified mesh can be updated with several frames per second during the modeling operation. Notice that all four meshes have the same connectivity.

The previous section showed how to perform freeform modeling on triangle meshes. Let us now assume we want to modify the face of the bust model (see Fig. 41) and would, e.g., like to shift its nose. This could be accomplished with the above methods, but the face would lose features like eyes and mouth, since this detail information would be removed by the optimization process. In order to enable such types of edits, we have to extend freeform modeling to multiresolution modeling, which means that we have to be able to distinguish between high-frequency detail information that has to be preserved and the low-frequency shape we want to edit. This is where multiresolution representations for triangle meshes come in. In the course of this tutorial we already got to know two different ways to build hierarchies (coarse-to-fine and fine-to-coarse). In the context of multiresolution modeling, however, we do not want hierarchies of different coarseness, i.e. with varying triangle count, but of different smoothness. Nevertheless, it will turn out that both types of hierarchies are closely related.

Given an arbitrary surface S_m, a multiresolution decomposition consists of a sequence of topologically equivalent surfaces S_{m−1}, …, S_0 with decreasing level of geometric detail. The difference D_i = S_{i+1} − S_i between two successive surfaces is the detail on level i, which is added or removed when switching between the two approximations. The reconstruction S_m = S_i + D_i + … + D_{m−1} of the original surface S_m can start on any level of detail S_i. Multiresolution modeling means that on some level of detail the surface S_i is replaced by S_i′. This operation does not have any effect on S_0, …, S_{i−1}, but D_{i−1} and hence S_{i+1}, …, S_m change, since the (unchanged) detail information D_i, …, D_{m−1} is now added to the modified base surface S_i′ for the reconstruction of S_m′. To guarantee the intuitive preservation of the shape characteristics after a modification on some lower level of detail, this basic setting has to be extended in the sense that the detail information D_i is encoded with respect to local frames. These frames are aligned to the surface geometry of S_i [2, 1, 6, 10]. Sec. 6.2.1 will elaborate on this.

For semi-regular meshes based on subdivision, the reconstruction operator is given by the underlying subdivision scheme. We transform the mesh S_i to the next refinement level S_{i+1}′ = S S_i by applying the stationary subdivision operator S and move the obtained control vertices by adding the associated detail vectors: S_{i+1} = S_{i+1}′ + D_i. To generate a smooth low-frequency approximation of S_m, we simply suppress the detail reconstruction starting from some intermediate level j (D_i = 0, i > j). The decomposition operator has to be the inverse of the subdivision operator, i.e. given a fine mesh S_{i+1} we have to find a mesh S_i such that S_{i+1} ≈ S S_i. In this case the detail vectors become as small as possible [6].

If we build the hierarchy using an incremental mesh decimation scheme, the decomposition operator D applies to arbitrary meshes. Given a fine mesh S_{i+1} we find S_i = D S_{i+1}, e.g. by applying a number of edge collapse operations (cf. Sec. 5.1). However, it is not clear how to define the detail coefficients, since inverse mesh decimation (progressive meshes) always reconstructs the original mesh, and there is no canonical way to generate smooth low-frequency geometry by suppressing the detail information during reconstruction. To solve this problem we split each step of the progressive mesh refinement into a topological operation (vertex insertion) and a geometric operation which places the re-inserted vertex at its original position. In analogy to the plain subdivision operator without detail reconstruction, we have to find a heuristic which places the new vertices such that they lie on a smooth surface (instead of at their original positions). This can be done by discrete fairing (see Sec. 5.2). The difference between this "predicted" position and the original location can then be used as the associated detail vector.

6.2.1. Detail encoding

In order to guarantee intuitive detail preservation under modification of the global shape, we cannot simply store the detail vectors with respect to a global coordinate system; we have to define them with respect to local frames which are aligned to the low-frequency geometry. Usually, the associated local frame for each vertex has its origin at the location predicted by the reconstruction operator with suppressed detail. However, in many cases this can lead to rather long detail vectors with a significant component within the local tangent plane. Since we prefer short detail vectors to guarantee intuitive reconstructions, we use a different origin for the local frame. In fact, the optimal choice is to find that point on the low-frequency surface whose normal points directly to the original vertex. In this case the detail is not given in (x,y,z)-coordinates but rather by the base point p = p(u,v) on the low-frequency geometry plus a scalar value h for the offset in normal direction.

Figure 41: Multiresolution editing of a bust (62k triangles, left). The handle lies around the nose; the whole face is marked as the area of influence. The edits were achieved by scaling and translating the nose (from left to right).

The general setting for detail computation is that we are given two meshes S_{i+1} and S'_{i+1}, where S_{i+1} is the original data while S'_{i+1} is reconstructed from the low-frequency approximation S_i with suppressed detail, i.e. for coarse-to-fine hierarchies the mesh S'_{i+1} is generated by applying a stationary subdivision scheme, and for fine-to-coarse hierarchies S'_{i+1} is optimal with respect to some global bending energy functional. Encoding the difference between both meshes requires us to associate each vertex p of S_{i+1} with a corresponding base point q on S'_{i+1} such that the difference vector p - q is parallel to the normal vector at q. Any point q on S'_{i+1} can be specified by a triangle index i and barycentric coordinates within the referred triangle.

To actually compute the detail coefficients, we have to define a normal field on the mesh S'_{i+1}. The simplest way to do this is to use the normal vectors of the triangular faces for the definition of a piecewise constant normal field. However, since the orthogonal prisms spanned by a triangle mesh do not completely cover the vicinity of the mesh, we have to accept negative barycentric coordinates if S'_{i+1} is not sufficiently smooth. This may lead to "unnatural" detail reconstruction if the low-frequency geometry is modified.

We therefore propose a different approach [7] where the normal vectors are estimated at the vertices and a continuous normal field for the interior of the triangles is computed by linearly blending the normal vectors at the corners.
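The two variants can be sketched as follows; this is a minimal numpy illustration, not the paper's implementation. `encode_constant` realizes the piecewise-constant (face normal) variant, which is exactly invertible by plane projection; for the blended normal field only the decoding is shown, since encoding against blended normals requires a small nonlinear solve for (u,v) that is omitted here.

```python
import numpy as np

def face_normal(a, b, c):
    """Unit normal of triangle (a, b, c)."""
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

def encode_constant(p, a, b, c):
    """Detail of point p w.r.t. triangle (a, b, c) under a piecewise
    constant normal field: barycentric base point plus normal offset h.
    The barycentric coordinates may become negative when p lies outside
    the orthogonal prism spanned by the triangle."""
    n = face_normal(a, b, c)
    h = np.dot(p - a, n)               # signed offset along the face normal
    q = p - h * n                      # projection into the triangle plane
    # solve q = u*a + v*b + w*c with u + v + w = 1
    T = np.column_stack((b - a, c - a))
    v, w = np.linalg.lstsq(T, q - a, rcond=None)[0]
    return (1.0 - v - w, v, w), h

def decode(bary, h, a, b, c, na=None, nb=None, nc=None):
    """Reconstruct p from the detail. If vertex normals are given, the
    linearly blended (renormalized) normal field is used instead of the
    face normal."""
    u, v, w = bary
    q = u * a + v * b + w * c
    if na is None:
        n = face_normal(a, b, c)
    else:
        n = u * na + v * nb + w * nc
        n = n / np.linalg.norm(n)
    return q + h * n
```

With identical vertex normals the blended field degenerates to the constant one, so both decoders then agree; they differ exactly where the low-frequency surface bends.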

6.3. Modeling tools based on triangle meshes

6.3.1. Semi–regular meshes

In 1997, Zorin et al. presented a modeling tool for semi-regular meshes [10]. It implements the control mechanism we described in the context of freeform modeling for semi-regular meshes, and it uses a decomposition operator similar to the one we sketched in Section 6.2.

+ intuitive modifications (detail preserving large-scale edits)
+ fast and stable due to simple analysis/synthesis operators
– restricted to semi-regular input meshes
– topological hierarchy pinpoints degrees of freedom for edits on different scales

6.3.2. Unstructured meshes

Kobbelt and co-workers [6] generalized multiresolution techniques to arbitrary triangle meshes. They introduced the fine-to-coarse geometric hierarchy by using the simple Umbrella algorithm (cf. Equation 7 in Sec. 5.2) to generate the low-frequency version of the input mesh. In [7] they extend the two-stage hierarchy to multiple geometric hierarchies. A similar approach has been investigated by Guskov [3].

+ intuitive modifications (detail preserving large-scale edits)
+ unstructured input meshes
+ flexible modeling metaphor (arbitrary area/handle)
– fixed mesh connectivity

6.3.3. Dynamic vertex connectivity

In [5], Kobbelt et al. introduced the notion of multiresolution modeling for meshes with dynamic connectivity. They no longer require a global hierarchical structure that links the different detail levels, but represent the detail information implicitly by the difference between independent surfaces. Since describing the scheme in detail would go beyond the scope of this tutorial, we refer to the original paper in the proceedings.

+ unstructured input meshes
+ adaptive connectivity
– no flexible/intuitive modeling metaphor (modeling with ellipsoids)

References

1. D. Forsey and R. H. Bartels. Surface fitting with hierarchical splines. ACM Transactions on Graphics, 14(2):134–161, 1995.

2. D. R. Forsey and R. H. Bartels. Hierarchical B-spline refinement. In Computer Graphics (SIGGRAPH 88 Proceedings), volume 22, pages 205–212, 1988.

3. I. Guskov, W. Sweldens, and P. Schröder. Multiresolution signal processing for meshes. In SIGGRAPH 99 Conference Proceedings, pages 325–334, 1999.

4. H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle. Mesh optimization. In SIGGRAPH 93 Conference Proceedings, pages 19–26, 1993.

5. L. Kobbelt, T. Bareuther, and H.-P. Seidel. Multiresolution shape deformations for meshes with dynamic connectivity. In Computer Graphics Forum (Proc. Eurographics 2000), to appear.

6. L. Kobbelt, S. Campagna, J. Vorsatz, and H.-P. Seidel. Interactive multi-resolution modeling on arbitrary meshes. In SIGGRAPH 98 Conference Proceedings, pages 105–114, 1998.

7. L. Kobbelt, J. Vorsatz, and H.-P. Seidel. Multiresolution hierarchies on unstructured triangle meshes. Computational Geometry: Theory and Applications, 14, 1999.

8. V. Krishnamurthy and M. Levoy. Fitting smooth surfaces to dense polygon meshes. In SIGGRAPH 96 Conference Proceedings, pages 313–324, 1996.

9. S. Lee. Interactive multiresolution editing of arbitrary meshes. Computer Graphics Forum, 18(3):73–82, 1999.

10. D. Zorin, P. Schröder, and W. Sweldens. Interactive multiresolution mesh editing. In SIGGRAPH 97 Conference Proceedings, pages 259–268, 1997.
