
Image Inpainting

Marcelo Bertalmio and Guillermo Sapiro
Electrical and Computer Engineering, University of Minnesota

Vicent Caselles and Coloma Ballester

    Escola Superior Politecnica, Universitat Pompeu Fabra

    Abstract

Inpainting, the technique of modifying an image in an undetectable form, is as ancient as art itself. The goals and applications of inpainting are numerous, from the restoration of damaged paintings and photographs to the removal/replacement of selected objects. In this paper, we introduce a novel algorithm for digital inpainting of still images that attempts to replicate the basic techniques used by professional restorators. After the user selects the regions to be restored, the algorithm automatically fills in these regions with information surrounding them. The fill-in is done in such a way that isophote lines arriving at the regions' boundaries are completed inside. In contrast with previous approaches, the technique here introduced does not require the user to specify where the novel information comes from. This is automatically done (and in a fast way), thereby allowing to simultaneously fill in numerous regions containing completely different structures and surrounding backgrounds. In addition, no limitations are imposed on the topology of the region to be inpainted. Applications of this technique include the restoration of old photographs and damaged film; removal of superimposed text like dates, subtitles, or publicity; and the removal of entire objects from the image like microphones or wires in special effects.

CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.4 [Computer Graphics]: Graphics Utilities; I.4.4 [Image Processing and Computer Vision]: Restoration; I.4.9 [Image Processing and Computer Vision]: Applications.

Keywords: Image restoration, inpainting, isophotes, anisotropic diffusion.

    1 Introduction

The modification of images in a way that is non-detectable for an observer who does not know the original image is a practice as old as artistic creation itself. Medieval artwork started to be restored as early as the Renaissance, the motives being often as much to bring medieval pictures up to date as to fill in any gaps [1, 2]. This practice is called retouching or inpainting. The object of inpainting is to reconstitute the missing or damaged portions of the work, in order to make it more legible and to restore its unity [2].

The need to retouch the image in an unobtrusive way extended naturally from paintings to photography and film. The purposes remain the same: to revert deterioration (e.g., cracks in photographs or scratches and dust spots in film), or to add or remove elements (e.g., removal of stamped date and red-eye from photographs, the infamous airbrushing of political enemies [3]).

Digital techniques are starting to be a widespread way of inpainting, ranging from attempts at fully automatic detection and removal of scratches in film [4, 5, 6], all the way to software tools that allow a sophisticated but mostly manual process [7].

In this article we introduce a novel algorithm for automatic digital inpainting, its main motivation being to replicate the basic techniques used by professional restorators. At this point, the only user interaction required by the algorithm here introduced is to mark the regions to be inpainted. Although a number of techniques exist for the semi-automatic detection of image defects (mainly in films), addressing this is out of the scope of this paper. Moreover, since the inpainting algorithm here presented can be used not just to restore damaged photographs but also to remove undesired objects and writings on the image, the regions to be inpainted must be marked by the user, since they depend on his/her subjective selection. Here we are concerned with how to fill in the regions to be inpainted, once they have been selected.¹ Marked regions are automatically filled with the structure of their surrounding, in a form that will be explained later in this paper.

Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA, {marcelo,guille}@ece.umn.edu

2 Related work and our contribution

We should first note that classical image denoising algorithms do not apply to image inpainting, since the regions to be inpainted are usually large. That is, regions occupied by top-to-bottom scratches along several film frames, long cracks in photographs, superimposed large fonts, and so on, are of significantly larger size than the type of noise assumed in common image enhancement algorithms. In addition, in common image enhancement applications, the pixels contain both information about the real data and the noise (e.g., image plus noise for additive noise), while in image inpainting, there is no significant information in the region to be inpainted. The information is mainly in the regions surrounding the areas to be inpainted. There is then a need to develop specific techniques to address these problems.

Mainly three groups of works can be found in the literature related to digital inpainting. The first one deals with the restoration of films, the second one is related to texture synthesis, and the third one, a significantly less studied class though very influential to the work here presented, is related to disocclusion.

Joyeux et al. [4] devised a two-step frequency-based reconstruction system for detecting and removing line scratches in films. They propose to first recover low and then high frequencies. Although good results are obtained for their specific application, the algorithm cannot handle large loss areas. Frequency-domain algorithms trade a good recovery of the overall structure of the regions for poor spatial results regarding, for instance, the continuity of lines.

Kokaram et al. [6] use motion estimation and autoregressive models to interpolate losses in films from adjacent frames. The basic idea is to copy into the gap the right pixels from neighboring frames. The technique cannot be applied to still images or to films where the regions to be inpainted span many frames.

Hirani and Totsuka [8] combine frequency and spatial domain information in order to fill a given region with a selected texture. This is a very simple technique that produces incredibly good results. On the other hand, the algorithm mainly deals with texture synthesis

¹ In order to study the robustness of the algorithm here proposed, and not to be too dependent on the marking of the regions to be inpainted, we mark them in a very rough form with any available paintbrush software. Marking these regions in the examples reported in this paper takes just a few seconds for a non-expert user.


(and not with structured background), and requires the user to select the texture to be copied into the region to be inpainted. For images where the region to be replaced covers several different structures, the user would need to go through the tremendous work of segmenting them and searching corresponding replacements throughout the picture. Although part of this search can be done automatically, this is extremely time consuming and requires the non-trivial selection of many critical parameters, e.g., [9]. Other texture synthesis algorithms, e.g., [9, 10, 11], can be used as well to re-create a pre-selected texture to fill in a (square) region to be inpainted.

In the group of disocclusion algorithms, a pioneering work is described in [12]. The authors presented a technique for removing occlusions with the goal of image segmentation.² The basic idea is to connect T-junctions at the same gray level with elastica minimizing curves. The technique was mainly developed for simple images, with only a few objects with constant gray levels, and will not be applicable for the examples with natural images presented later in this paper. Masnou and Morel [13] recently extended these ideas, presenting a very inspiring general variational formulation for disocclusion and a particular practical algorithm implementing some of the ideas in this formulation. The algorithm performs inpainting by joining with geodesic curves the points of the isophotes arriving at the boundary of the region to be inpainted. As reported by the authors, the regions to be inpainted are limited to having simple topology, e.g., holes are not allowed.³ In addition, the angle with which the level lines arrive at the boundary of the inpainted region is not (well) preserved: the algorithm uses straight lines to join equal gray value pixels. These drawbacks will be exemplified later in this paper. On the other hand, we should note that this is the closest technique to ours and has motivated in part and inspired our work.

    2.1 Our contribution

Algorithms devised for film restoration are not appropriate for our application, since they normally work on relatively small regions and rely on the existence of information from several frames.

On the other hand, algorithms based on texture synthesis can fill large regions, but require the user to specify what texture to put where. This is a significant limitation of these approaches, as may be seen in examples presented later in this paper, where the region to be inpainted is surrounded by hundreds of different backgrounds, some of them being structure and not texture.

The technique we propose does not require any user intervention, once the region to be inpainted has been selected. The algorithm is able to simultaneously fill regions surrounded by different backgrounds, without the user specifying what to put where. No assumptions on the topology of the region to be inpainted, or on the simplicity of the image, are made. The algorithm is devised for inpainting in structured regions (e.g., regions crossing through boundaries), though it is not devised to reproduce large textured areas. As we will discuss later, the combination of our proposed approach with texture synthesis techniques is the subject of current research.

    3 The digital inpainting algorithm

    3.1 Fundamentals

Let Ω stand for the region to be inpainted, and ∂Ω for its boundary (note once again that no assumption on the topology of Ω is made).

² Since the region to be inpainted can be considered as occluding objects, removing occlusions is analogous to image inpainting.

³ This is not intrinsic to the general variational formulation they propose, only to the specific discrete implementation they perform.

Figure 1: The 1D signal I and the mask signal.

Intuitively, the technique we propose will prolong the isophote lines arriving at ∂Ω, while maintaining the angle of arrival. We proceed drawing from ∂Ω inward in this way, while curving the prolongation lines progressively to prevent them from crossing each other.⁴

Before presenting the detailed description of this technique, let us analyze how experts inpaint. Conservators at the Minneapolis Institute of Arts were consulted for this work and made it clear to us that inpainting is a very subjective procedure, different for each work of art and for each professional. There is no such thing as "the" way to solve the problem, but the underlying methodology is as follows: (1.) The global picture determines how to fill in the gap, the purpose of inpainting being to restore the unity of the work; (2.) The structure of the area surrounding Ω is continued into the gap: contour lines are drawn via the prolongation of those arriving at ∂Ω; (3.) The different regions inside Ω, as defined by the contour lines, are filled with color, matching those of ∂Ω; and (4.) The small details are painted (e.g., little white spots on an otherwise uniformly blue sky). In other words, texture is added.

Our algorithm simultaneously and iteratively performs steps (2.) and (3.) above. (In the discussion section we will argue how both steps can be performed separately, and we will also discuss step (4.).) We progressively shrink the gap Ω by prolonging inward, in a smooth way, the lines arriving at the gap boundary ∂Ω.

3.2 A one-dimensional illustration

A simple one-dimensional (1D) example is first used to illustrate our technique. This is done only for didactic purposes, since our algorithm was devised for 2D, and there are other techniques (such as splines) that might yield better interpolation results in 1D. Our 1D example will help to explain how we translate the basic inpainting ideas presented above into a working algorithm. Consider a 1D (discrete) signal I(i) and a box-function mask signal, as illustrated in Figure 1. The region Ω to inpaint is given by the set of points marked by the mask, while its boundary ∂Ω consists of the points Q and P. Let N be the direction of increasing i.

The progressive nature of the inpainting procedure suggests an evolution approach. We define a family of 1D signals I(i, n), where n is the time parameter, such that I(i, 0) = I⁰(i) (the initial condition) and lim n→∞ I(i, n) = I_R(i), where I_R is the output of the algorithm (inpainted image).

At each iteration step we want to construct I^{n+1} given I^n. For the 1D case, the prolongation of lines into Ω is interpreted as the continuation of the function from the left of Q to its right, and from the right of P to its left, in a smooth way. Therefore,

⁴ This type of information propagation is related and might be applicable to velocity fields extension in level-set techniques [14].


Figure 2: Discrete example, showing the points Q and R and three candidate initial conditions a, b, c; see text.

in order to inpaint in the direction of N, the following pseudo-code realizes the ideas above:

    for n = 1 to T:
        for each point (pixel) i in Ω:
            compute the smoothness of I^n at i−1: L^n(i−1)
            compute the smoothness of I^n at i+1: L^n(i+1)
            compute the smoothness variation in N: δL^n(i) := L^n(i+1) − L^n(i−1)
            update: I^{n+1}(i) := I^n(i) + Δt · δL^n(i)
        end
    end,

where the index i denotes spatial position, the superindex n denotes the artificial evolution time (the iteration), the subindex x denotes (discretized) spatial differentiation in the horizontal axis direction, and Δt controls the velocity of the evolution. We are changing the function I in time so that, at each point i inside Ω, it tends to follow the smooth continuation dictated by its neighborhood. The purpose of the smoothness estimation is to ensure continuity not only of I but also of its derivatives. Our discrete smoothness estimator is the second spatial derivative of I, I_xx, which in discrete form can be written as L^n(i) = I^n(i+1) − 2I^n(i) + I^n(i−1).
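To make the 1D procedure concrete, here is a minimal NumPy sketch of one iteration of the scheme above (our own illustrative translation, not the authors' code; the function name, the step size `dt`, and the boolean mask `omega` are our choices):

```python
import numpy as np

def inpaint_1d_step(I, omega, dt=0.1):
    """One iteration of the 1D scheme: I <- I + dt * dL, where
    L(i) = I(i+1) - 2 I(i) + I(i-1) is the discrete smoothness
    estimator and dL(i) = L(i+1) - L(i-1) its centered variation
    along N (the direction of increasing i). Only points inside
    the region omega are updated."""
    I = I.astype(float).copy()
    L = np.zeros_like(I)
    L[1:-1] = I[2:] - 2.0 * I[1:-1] + I[:-2]
    dL = np.zeros_like(I)
    dL[1:-1] = L[2:] - L[:-2]
    I[omega] += dt * dL[omega]
    return I
```

Iterating this step drives the values inside Ω toward a smooth continuation of the signal outside it; in practice the step size must be kept small for the explicit scheme to remain stable.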

To explain why this works, consider Figure 2. The continuous line denotes the part of I that is not changed by the algorithm since it is outside Ω. To the right of Q, the three dashed lines denote three possible initial conditions for I inside Ω. Being straight lines, their smoothness L is zero inside Ω in the three cases. At the point Q, on the other hand, the value of L(Q) is different in each case. Let R be the first point inside Ω, so that δL(R) = L(R+1) − L(Q) = −L(Q). If the dashed line bends upward at Q, then L(Q) > 0, δL(R) < 0, and I(R) decreases; if it exactly prolongs the known signal, then L(Q) = 0, δL(R) = 0, and I(R) does not change; if it bends downward, then L(Q) < 0, δL(R) > 0, and I(R) increases. In all three cases the evolution is deforming I inside Ω into a continuation of I to the left of Q.

Figure 3 shows three stages of the evolution of the signal in Figure 1. The dotted line corresponds to the initial condition I⁰. The dashed line corresponds to an intermediate state. The continuous line corresponds to I_R, the steady state of the evolution. Notice how continuity up to the first derivative is obtained at the point Q. That is not the case at P, since the direction of propagation is N and not −N. We will later see that this is not a problem in the 2D case. We have tested 1D examples with different initial conditions, and found that the algorithm always converges to the same solution within a 0.1% error margin. The formal study of this experimental result is of theoretical interest (see Section 5).

    3.3 The 2D inpainting algorithm

Let us re-write the 1D discrete equation describing the rate of change of I:

    I^{n+1}(i) = I^n(i) + Δt · I_t^n(i)    (1)

Figure 3: Three stages of the evolution of the signal in Figure 1.

Figure 4: Propagation direction as the normal to the signed distance to the boundary of the region to be inpainted.

where

    I_t^n(i) = δL^n(i) := L^n(i+1) − L^n(i−1)    (2)

    L^n(i) := I^n_xx(i) = I^n(i+1) − 2I^n(i) + I^n(i−1)    (3)

In other words, we estimate the smoothness of our function and compute its change along the direction N, adapting the image in the region to be inpainted with this variation-of-smoothness quantity. Let us now see how to extend these ideas to 2D.

Let I⁰(i, j) be a discrete 2D gray-level image. The inpainting procedure will construct a family of images I(i, j, n) such that I(i, j, 0) = I⁰(i, j) and lim n→∞ I(i, j, n) = I_R(i, j), where I_R is the output of the algorithm (inpainted image).

Following the 1D procedure, first we estimate the smoothness of our function. For this purpose we may use a simple discrete implementation of the Laplacian: L^n(i, j) := I^n_xx(i, j) + I^n_yy(i, j). Other smoothness estimators might be used, though satisfactory results were already obtained with this very simple selection.

Then, we must compute the change of this value along N. In order to do this we must first define what the direction N for the 2D information propagation will be. One possibility is to define N as the normal to the signed distance to ∂Ω, i.e., at each point (i, j) in Ω the vector N(i, j) will be normal to the shrunken version of ∂Ω to which (i, j) belongs; see Figure 4. This choice is motivated by the belief that a propagation normal to the boundary would lead to the continuity of the isophotes at the boundary. Instead, what happens is that the lines arriving at ∂Ω curve in order to align with N; see Figure 5. This is of course not what we expect. Note that the orientation of ∂Ω is not intrinsic to the image geometry, since the region to be inpainted is arbitrary.

If isophotes tend to align with N, the best choice for N is then the isophotes' directions. This is a bootstrapping problem: having the isophotes' directions inside Ω is equivalent to having the inpainted image itself, since we can easily recover the gray-level image from its isophote direction field (see the discussion section and [15]).


Figure 5: Unsuccessful choice of the information propagation direction. Left: detail of the original image; the region to be inpainted is in white. Right: restoration.

We use then a time-varying estimation of the isophotes' direction field: for any given point (i, j), the (discretized) gradient vector ∇I^n(i, j) gives the direction of largest (spatial) change, while its 90-degree rotation ∇⊥I^n(i, j) is the direction of smallest (spatial) change, so the vector ∇⊥I^n(i, j) gives the isophotes' direction. Our field N is then given by the time-varying N(i, j, n) = ∇⊥I^n(i, j). We are using a time-varying estimation that is coarse at the beginning but progressively achieves the desired continuity at ∂Ω, instead of a fixed field N(i, j) that would imply knowing the directions of the isophotes from the start.
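The rotated gradient can be written down directly. A small sketch (our own illustration; we follow NumPy's convention that `np.gradient` returns the derivative along axis 0, i.e. rows/y, first):

```python
import numpy as np

def isophote_direction(I):
    """Return the (unnormalized) isophote direction field: the
    gradient (I_x, I_y) rotated by 90 degrees, i.e. (-I_y, I_x).
    By construction it is orthogonal to the gradient, so it points
    along the level lines (isophotes) of I."""
    Iy, Ix = np.gradient(I.astype(float))  # axis 0 = rows = y
    return -Iy, Ix  # (x-component, y-component) of the rotated field
```

For an image whose intensity grows linearly along the columns (vertical isophotes), the returned field has no horizontal component and a constant vertical one, as expected.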

Note that the field N is not normalized as it was in the 1D case, where N was just the direction of increasing i. The reason for this choice lies in the numerical stability of the algorithm, and will be discussed in the following subsection.

Since we are performing inpainting along the isophotes, it is irrelevant whether ∇⊥I^n is obtained as a clockwise or counterclockwise rotation of ∇I^n. In both cases, the change of I^n along those directions should be minimum.

Recapping, we estimate a variation of the smoothness, given by a discretization of the 2D Laplacian in our case, and project this variation onto the isophotes' direction. This projection is used to update the value of the image inside the region to be inpainted.

To ensure a correct evolution of the direction field, a diffusion process is interleaved with the image inpainting process described above. That is, every few steps (see below), we apply a few iterations of image diffusion. This diffusion corresponds to the periodical curving of lines to avoid them crossing each other, as was mentioned in Section 3.1. We use anisotropic diffusion, [16, 17], in order to achieve this goal without losing sharpness in the reconstruction. In particular, we apply a straightforward discretization of the following continuous-time/continuous-space anisotropic diffusion equation:

    ∂I/∂t (x, y, t) = g_ε(x, y) κ(x, y, t) |∇I(x, y, t)|,  ∀(x, y) ∈ Ω^ε    (4)

where Ω^ε is a dilation of Ω with a ball of radius ε, κ is the Euclidean curvature of the isophotes of I, and g_ε(x, y) is a smooth function in Ω^ε such that g_ε(x, y) = 0 on ∂Ω^ε and g_ε(x, y) = 1 at the set of points of Ω^ε whose distance to ∂Ω^ε is larger than ε (this is a way to impose Dirichlet boundary conditions for equation (4)).
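The term κ|∇I| can be expanded into second derivatives of I, giving the classical mean-curvature motion term. The following toy step (our own sketch, not the paper's discretization, and with a simple 0/1 mask `g` standing in for the smooth g_ε) illustrates one explicit iteration of such a diffusion:

```python
import numpy as np

def curvature_diffusion_step(I, g, dt=0.1, eps=1e-8):
    """One explicit step of I_t = g * kappa * |grad I|, using
    kappa * |grad I| = (I_xx I_y^2 - 2 I_x I_y I_xy + I_yy I_x^2)
                       / (I_x^2 + I_y^2).
    g masks the update (nonzero inside the dilated region, zero
    outside), so values far from the inpainting region are left
    untouched."""
    I = I.astype(float)
    Iy, Ix = np.gradient(I)        # first derivatives (axis 0 = y)
    Iyy, Iyx = np.gradient(Iy)     # second derivatives of I_y
    Ixy, Ixx = np.gradient(Ix)     # second derivatives of I_x
    num = Ixx * Iy**2 - 2.0 * Ix * Iy * Ixy + Iyy * Ix**2
    den = Ix**2 + Iy**2 + eps      # eps avoids division by zero
    return I + dt * g * (num / den)
```

This curvature-driven flow smooths the level lines without the blurring across edges that plain isotropic diffusion would cause.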

    3.4 Discrete scheme and implementation details

The only inputs to our algorithm are the image to be restored and the mask that delimits the portion to be inpainted. As a preprocessing step, the whole original image undergoes anisotropic diffusion smoothing. The purpose is to minimize the influence of noise on the estimation of the direction of the isophotes arriving at ∂Ω. After this, the image enters the inpainting loop, where only the values inside Ω are modified. These values change according to the discrete implementation of the inpainting procedure, which we proceed to describe. Every few iterations, a step of anisotropic diffusion is applied (a straightforward central-differences implementation of (4) is used; for details see [16, 17]). This process is repeated until a steady state is achieved.

Let I^n(i, j) stand for each of the image pixels inside the region Ω at inpainting time n. Then, the discrete inpainting equation borrows from the numerical analysis literature and is given by

    I^{n+1}(i, j) = I^n(i, j) + Δt · I_t^n(i, j),  ∀(i, j) ∈ Ω    (5)

where

    I_t^n(i, j) = ( δL^n(i, j) · N(i, j, n)/|N(i, j, n)| ) |∇I^n(i, j)|    (6)

    δL^n(i, j) := ( L^n(i+1, j) − L^n(i−1, j),  L^n(i, j+1) − L^n(i, j−1) )    (7)

    L^n(i, j) = I^n_xx(i, j) + I^n_yy(i, j)    (8)

    N(i, j, n)/|N(i, j, n)| = ( −I^n_y(i, j), I^n_x(i, j) ) / √( (I^n_x(i, j))² + (I^n_y(i, j))² )    (9)

    β^n(i, j) = δL^n(i, j) · N(i, j, n)/|N(i, j, n)|    (10)

and

    |∇I^n(i, j)| = √( (I^n_xbm)² + (I^n_xfM)² + (I^n_ybm)² + (I^n_yfM)² )  when β^n > 0,
    |∇I^n(i, j)| = √( (I^n_xbM)² + (I^n_xfm)² + (I^n_ybM)² + (I^n_yfm)² )  when β^n < 0.    (11)

We first compute the 2D smoothness estimation L in (8) and the isophote direction N/|N| in (9). Then in (10) we compute β^n, the projection of δL onto the (normalized) vector N; that is, we compute the change of L along the direction of N. Finally, we multiply β^n by a slope-limited version of the norm of the gradient of the image, |∇I|, in (11). A central-differences realization would turn the scheme unstable, and that is the reason for using slope limiters. The subindexes b and f denote backward and forward differences respectively, while the subindexes m and M denote the minimum or maximum, respectively, between the derivative and zero (we have omitted the space coordinates (i, j) for simplicity); see [18] for details. Finally, let us note that the choice of the unnormalized ∇⊥I^n for N, instead of a normalized version of it (as was the case in 1D), allows for a simpler and more stable numerical scheme; see [19, 20].

When applying equations (5)-(11) to the pixels on the border ∂Ω of the region to be inpainted, known pixels from outside this region are used. That is, conceptually, we compute equations (5)-(11) in the region Ω^ε (an ε-dilation of Ω), although we update the values only inside Ω (that is, (5) is applied only inside Ω). The information in the narrow band Ω^ε − Ω (isophote directions and gray values) is propagated inside Ω. Propagation of this information, both gray values and isophote directions, is fundamental for the success of the algorithm.
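A simplified NumPy sketch of one update of this 2D scheme may help fix ideas (our own translation: it uses centered differences throughout and omits the slope-limited gradient norm of (11), so it is illustrative rather than a stable production scheme):

```python
import numpy as np

def inpaint_2d_step(I, omega, dt=0.1, eps=1e-8):
    """One simplified update of the discrete 2D scheme (5)-(10).
    omega is a boolean mask of the region to inpaint; pixels
    outside omega are read but never written."""
    I = I.astype(float).copy()
    # (8) smoothness estimator: discrete Laplacian L = I_xx + I_yy
    L = np.zeros_like(I)
    L[1:-1, 1:-1] = (I[2:, 1:-1] + I[:-2, 1:-1] +
                     I[1:-1, 2:] + I[1:-1, :-2] - 4.0 * I[1:-1, 1:-1])
    # (7) centered variation of L along both axes
    dLy, dLx = np.gradient(L)
    # (9) normalized isophote direction (-I_y, I_x) / |grad I|
    Iy, Ix = np.gradient(I)
    norm = np.sqrt(Ix**2 + Iy**2) + eps
    Nx, Ny = -Iy / norm, Ix / norm
    # (10) beta: projection of delta-L onto the isophote direction
    beta = dLx * Nx + dLy * Ny
    # (5)-(6): update only inside omega (plain gradient norm here;
    # the paper's slope limiters of (11) are needed for stability)
    update = beta * norm
    I[omega] += dt * update[omega]
    return I
```

Note how pixels in the narrow band just outside Ω enter the Laplacian and the gradients, which is exactly the boundary-condition mechanism described above.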

In the restoration loop we perform A steps of inpainting with (5), then B steps of diffusion with (4), again A steps of (5), and so on. The total number of steps is T. This number may be pre-established, or the algorithm may stop when changes in the image


Figure 6: Relation between the RGB color model and the one used in this article.

are below a given threshold. The values we use are A = 15 and B = 2. The value of T depends on the size of Ω. If Ω is of considerable size, a multiresolution approach is used to speed up the process.
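The alternation schedule can be sketched as follows (our own illustration; the two step functions are passed in as arguments, since only the scheduling pattern described in the text is being shown):

```python
def restoration_loop(I, inpaint_step, diffusion_step, T, A, B):
    """Alternate A inpainting steps (eq. (5)) with B anisotropic
    diffusion steps (eq. (4)) until roughly T total steps have run."""
    n = 0
    while n < T:
        for _ in range(A):  # A inpainting iterations
            I = inpaint_step(I)
            n += 1
        for _ in range(B):  # B diffusion iterations
            I = diffusion_step(I)
            n += 1
    return I
```

Alternatively, the loop could exit early once the per-iteration change of the image falls below a threshold, as the text notes.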

Color images are considered as a set of three images, and the above-described technique is applied independently to each one. To avoid the appearance of spurious colors, we use a color model which is very similar to the LUV model, with one luminance and two chroma components. See Figure 6.
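As an illustration of processing decorrelated channels, here is a YCbCr-like transform (a hypothetical stand-in using the standard BT.601 coefficients, not the paper's exact LUV-like model): the scalar algorithm would be run independently on the luminance and the two chroma channels, and the result converted back.

```python
import numpy as np

# BT.601 RGB -> luminance/chroma matrix: row 0 is luminance,
# rows 1-2 are the two chroma components.
M = np.array([[ 0.299,  0.587,  0.114],
              [-0.169, -0.331,  0.500],
              [ 0.500, -0.419, -0.081]])

def rgb_to_luma_chroma(rgb):
    """rgb: (..., 3) array -> one luminance + two chroma channels."""
    return rgb @ M.T

def luma_chroma_to_rgb(yc):
    """Invert the transform to recover RGB after per-channel inpainting."""
    return yc @ np.linalg.inv(M).T
```

Running the inpainting on such decorrelated channels, rather than directly on R, G, and B, reduces the risk of spurious colors appearing in the filled-in region.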

    4 Results

The CPU time required for inpainting depends on the size of Ω. In all the color examples here presented, the inpainting process was completed in less than 5 minutes (for the three color planes), using non-optimized C++ code running on a Pentium II PC (128 MB RAM, 300 MHz) under Linux. All the examples use images available from public databases over the Internet.

Figure 7 shows, on the left, a synthetic image with the region to inpaint in white. Here Ω is large (30 pixels in diameter) and contains a hole. The inpainted reconstruction is shown on the right. Notice that contours are recovered, joining points from the inner and outer boundaries. Also, these reconstructed contours smoothly follow the direction of the isophotes arriving at ∂Ω.

Figure 8 shows a deteriorated B&W image and its reconstruction. As in all the examples in this article, the user only supplied the mask image, shown in Figure 9. This image was drawn manually, using a paintbrush-like program. The variables were set to the values specified in the previous section, and the number of iterations was set to 3000. When multiresolution is not used, the CPU time required by the inpainting procedure was approximately 7 minutes. With a 2-level multiresolution scheme, only 2 minutes were needed. Observe that details in the nose and right eye of the middle girl could not be completely restored. This is in part due to the fact that the mask covers most of the relevant information, and there is not much to be done without the use of high-level prior information (e.g., the fact that it is an eye). These minor errors can be corrected by the manual procedures mentioned in the introduction, and still the overall inpainting time would be reduced by orders of magnitude. In Figure 10, top, we show a different initial condition inside Ω. The inpainted image (bottom) is practically identical to the one in Figure 9, thereby showing the robustness of the algorithm.

Figure 11 shows a vandalized image and its restoration, followed by an example where superimposed text is removed from the image. These are typical examples where texture synthesis algorithms such as those described in the introduction cannot be used, since the number of different regions to be filled is very large.

Figure 7: Synthetic example: Ω is shown in white. Topology is not an issue, and the recovered contours smoothly continue the isophotes.

Figure 12 shows the progressive nature of the algorithm: several intermediate steps of the inpainting procedure are shown for a detail of Figure 11.

Finally, Figure 13 shows an entertainment application. The bungee cord and the knot tying the man's legs have been removed. Given the size of Ω, a 2-level multiresolution scheme was used. Here it becomes apparent that it is the user who has to supply the algorithm with the masking image, since the choice of the region to inpaint is completely subjective.

    5 Conclusions and future work

In this paper we have introduced a novel algorithm for image inpainting that attempts to replicate the basic techniques used by professional restorators. The basic idea is to smoothly propagate information from the surrounding areas in the isophotes' direction. The user only needs to provide the region to be inpainted; the rest is automatically performed by the algorithm in a few minutes. The inpainted images are sharp and without color artifacts. The examples shown suggest a wide range of applications, like restoration of old photographs and damaged film, removal of superimposed text, and removal of objects. The results can either be adopted as a final restoration or be used to provide an initial point for manual restoration, thereby reducing the total restoration time by orders of magnitude.

One of the main problems with our technique is the reproduction of large textured regions, as can be seen in Figure 14. The algorithm proposed here is currently being tested in conjunction with texture synthesis ideas to address this issue.

The inpainting algorithm here presented has clearly been motivated by, and has borrowed from, the intensive work on the use of Partial Differential Equations (PDEs) in image processing and computer vision. When blindly letting the grid step go to zero, the inpainting technique in equations (5)-(11) naively resembles a third-order equation (not resulting from a variational gradient descent flow; see below). Although adding regularization terms might make this high-order equation stable and allow a formal analysis,⁵ to the best of our knowledge the complete understanding of such types of equations is not just beyond the scope of this paper but beyond the current state of mathematical knowledge (although results for other high-order equations, which might be relevant for image processing as well, are available, e.g., [22]). In addition, the discrete algorithm here introduced uses as boundary conditions both gray values and isophote directions present at the narrow band surrounding the region to be inpainted. This is done, as explained above, by performing the computations in the dilated region Ω^ε. All this important information needs to be incorporated into such an equation as extra (and

⁵ Note that numerical implementations of PDEs intrinsically regularize the equations.


    fundamental) boundary conditions. Nevertheless, this suggests investigating the use of lower, second order, PDEs to address the inpainting problem. Moreover, having these equations be gradient descent flows of variational formulations permits the inclusion of the narrow band information simply by changing the limits of integration. One possible approach is to first reconstruct the isophote directions (a planar vector field), and from them the corresponding gray values. Once the isophote directions are reconstructed, the gradient direction is given, and we can reconstruct

    the gray levels by finding an image consistent with these gradient directions. Although this can be done using a one dimensional transport equation, the existing theory limits the possible gradient fields, thereby disqualifying edges. We can then take a variational approach and find the image I minimizing ∫Ω (|∇I| − θ · ∇I) dx dy, where θ is the reconstructed gradient direction inside Ω (an additional term that penalizes the deviation from the real image in the narrow band can be added as well). The minimizing image is obtained via a gradient descent flow (a second order PDE). Therefore, what is left is to reconstruct the vector field θ. This can again be achieved using a variational formulation on Ω, with the proper boundary conditions guaranteeing the continuity of the vector field across ∂Ω (see also [13, 21]). Both variational formulations can actually be coupled to simultaneously recover the isophotes (vector field) and gray values via the corresponding gradient descent flows,

    obtaining a set of coupled second order PDEs.6 Preliminary results in this direction are promising and will be further investigated and reported elsewhere [23]. One of the advantages of this approach is that formal results regarding existence and uniqueness of the solutions can be shown, thereby reducing the need to rely mainly on experimental validation, as has been done until now with the basic image inpainting algorithms reported in the literature.
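    The gray-level reconstruction step described above can be sketched numerically. The following is a minimal illustrative sketch, not the authors' implementation: assuming a target direction field θ = (θx, θy) is already available everywhere, it performs explicit gradient descent on the functional ∫Ω (|∇I| − θ · ∇I), whose descent flow is I_t = div(∇I/|∇I| − θ), while pixels outside the hole are held fixed as Dirichlet data. The function names and the numerical parameters (step size dt, iteration count, and the small eps regularizing |∇I|) are choices of this sketch.

```python
import numpy as np

def grad(I):
    """Forward differences; zero at the right/bottom image border."""
    Ix = np.zeros_like(I)
    Iy = np.zeros_like(I)
    Ix[:, :-1] = I[:, 1:] - I[:, :-1]
    Iy[:-1, :] = I[1:, :] - I[:-1, :]
    return Ix, Iy

def div(px, py):
    """Backward-difference divergence (negative adjoint of grad)."""
    dx = np.empty_like(px)
    dx[:, 0] = px[:, 0]
    dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    dx[:, -1] = -px[:, -2]
    dy = np.empty_like(py)
    dy[0, :] = py[0, :]
    dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    dy[-1, :] = -py[-2, :]
    return dx + dy

def reconstruct_gray(I0, theta_x, theta_y, mask, iters=500, dt=0.2, eps=1e-4):
    """Explicit gradient descent on the energy sum(|grad I| - theta . grad I)
    over the hole. Pixels where mask is False are held fixed (Dirichlet data)."""
    I = I0.astype(float)
    for _ in range(iters):
        Ix, Iy = grad(I)
        norm = np.sqrt(Ix ** 2 + Iy ** 2 + eps)
        # Descent flow: I_t = div(grad I / |grad I| - theta), inside the hole only.
        update = div(Ix / norm - theta_x, Iy / norm - theta_y)
        I[mask] += dt * update[mask]
    return I
```

On a synthetic ramp with θ = (1, 0) the hole is filled consistently with its surroundings; on a real image one would first have to interpolate θ inside the hole, which is precisely the vector field interpolation problem discussed in the text.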

    Acknowledgments

    This work started when the authors were visiting the Institut Henri Poincaré in Paris, France. We thank the organizers of the quarter on Mathematical Questions in Signal and Image Processing and the Institute for their hospitality and partial financial support. We would like to thank Tom Robbins and Elizabeth Buschor from the Upper Midwest Conservation Association for their help in linking our work with actual art restoration and conservation. We also thank Dr. Santiago Betelu, University of Minnesota, Prof. Stan Osher, UCLA, and Prof. Eero Simoncelli, NYU, for very interesting discussions and feedback. This work was partially supported by a grant from the Office of Naval Research ONR-N00014-97-1-0509, the Office of Naval Research Young Investigator Award, the Presidential Early Career Awards for Scientists and Engineers (PECASE), a National Science Foundation CAREER Award, and by the National Science Foundation Learning and Intelligent Systems Program (LIS).

    References

    [1] S. Walden. The Ravished Image. St. Martin's Press, New York, 1985.

    [2] G. Emile-Male. The Restorer's Handbook of Easel Painting. Van Nostrand Reinhold, New York, 1976.

    [3] D. King. The Commissar Vanishes . Henry Holt and Company,1997.

    6 Intuitively, it is clear that we cannot address the inpainting problem with a single second order PDE, since regularization constraints on the vector fields are needed (in [13], the general variational formulation suggested will normally lead to a fourth order gradient descent flow).

    [4] L. Joyeux, O. Buisson, B. Besserer, S. Boukir. Detection and Removal of Line Scratches in Motion Picture Films. Proceedings of CVPR'99, IEEE Int. Conf. on Computer Vision and Pattern Recognition, Fort Collins, Colorado, USA, June 1999.

    [5] A.C. Kokaram, R.D. Morris, W.J. Fitzgerald, P.J.W. Rayner. Detection of missing data in image sequences. IEEE Transactions on Image Processing 4(11), 1496-1508, 1995.

    [6] A.C. Kokaram, R.D. Morris, W.J. Fitzgerald, P.J.W. Rayner. Interpolation of missing data in image sequences. IEEE Transactions on Image Processing 4(11), 1509-1519, 1995.

    [7] C. Braverman. Photoshop retouching handbook. IDG Books Worldwide, 1998.

    [8] A. Hirani and T. Totsuka. Combining frequency and spatial domain information for fast interactive image noise removal. Computer Graphics, pp. 269-276, SIGGRAPH 96, 1996.

    [9] A. Efros and T. Leung. Texture synthesis by non-parametric sampling. Proc. IEEE International Conference on Computer Vision, pp. 1033-1038, Corfu, Greece, September 1999.

    [10] D. Heeger and J. Bergen. Pyramid based texture analysis/synthesis. Computer Graphics, pp. 229-238, SIGGRAPH 95, 1995.

    [11] E. Simoncelli and J. Portilla. Texture characterization via joint statistics of wavelet coefficient magnitudes. 5th IEEE Int'l Conf. on Image Processing, Chicago, IL, Oct 4-7, 1998.

    [12] M. Nitzberg, D. Mumford, and T. Shiota, Filtering, Segmen-tation, and Depth , Springer-Verlag, Berlin, 1993.

    [13] S. Masnou and J.M. Morel. Level-lines based disocclusion. 5th IEEE Int'l Conf. on Image Processing, Chicago, IL, Oct 4-7, 1998.

    [14] S. Osher, personal communication, October 1999.

    [15] C. Kenney and J. Langan. A new image processing primitive: reconstructing images from modified flow fields. University of California Santa Barbara Preprint, 1999.

    [16] P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. IEEE-PAMI 12, pp. 629-639, 1990.

    [17] L. Alvarez, P.L. Lions, J.M. Morel. Image selective smoothingand edge detection by nonlinear diffusion . SIAM J. Numer.Anal. 29, pp. 845-866, 1992.

    [18] S. Osher and J. Sethian. Fronts propagating with curvaturedependent speed: algorithms based on Hamilton-Jacobi for-mulations . Journal of Computational Physics, 79:12-49, 1988.

    [19] A. Marquina and S. Osher. Explicit algorithms for a new time dependent model based on level set motion for nonlinear deblurring and noise removal. UCLA CAM Report 99-5, January 1999.

    [20] L. Rudin, S. Osher and E. Fatemi. Nonlinear total variationbased noise removal algorithms . Physica D, 60, pp. 259-268,1992.

    [21] L. I. Rudin and S. Osher. Total variation based image restoration with free local constraints. Proc. IEEE-ICIP I, pp. 31-35, Austin, Texas, 1994.


    [22] A. Bertozzi. The mathematics of moving contact lines in thin liquid films. Notices Amer. Math. Soc., Volume 45, Number 6, pp. 689-697, June/July 1998.

    [23] C. Ballester, M. Bertalmio, V. Caselles, and G. Sapiro. Interpolation of planar vector fields and applications. In preparation.


    Figure 8: Restoration of an old photograph.

    Figure 9: The user defines the region to inpaint (here shown in red).

    Figure 10: From a different initial condition inside the region, we arrive at a very similar result.


    Figure 11: Restoration of a color image and removal of superimposed text.


    Figure 12: Progressive nature of the algorithm: several intermediate steps of the reconstruction of Figure 11 (detail).

    Figure 13: The bungee cord and the knot tying the man's feet have been removed.


    Figure 14: Limitations of the algorithm: texture is not reproduced.