
Eurographics Symposium on Point-Based Graphics (2004)

M. Alexa and S. Rusinkiewicz (Editors)

Interactive 3D Painting on Point-Sampled Objects

Bart Adams1 Martin Wicke2 Philip Dutré1 Markus Gross2 Mark Pauly3 Matthias Teschner2

1 Katholieke Universiteit Leuven† 2 ETH Zürich‡ 3 Stanford University§

Abstract
We present a novel painting system for 3D objects. In order to overcome parameterization problems of existing
applications, we propose a unified sample-based approach to represent geometry and appearance of the 3D object
as well as the brush surface. The generalization of 2D pixel-based paint models to point samples allows us to
elegantly simulate paint transfer for 3D objects. In contrast to mesh-based painting systems, an efficient dynamic
resampling scheme permits arbitrary levels of painted detail.
Our system provides intuitive user interaction with a six degree-of-freedom (DOF) input device. As opposed to
other 3D painting systems, real brushes are simulated including their dynamics and collision handling.
Categories and Subject Descriptors (according to ACM CCS): I.3.4 [Computer Graphics]: Graphics Utilities, I.3.5
[Computer Graphics]: Computational Geometry and Object Modeling

1. Introduction

For many years, digital painting has been of major interest in computer graphics. Numerous approaches were proposed to realistically represent brushes and to model their behavior and interaction with canvas. Recently, such models for painting on 2D canvases have been extended to 3D objects. Very often, 3D painting systems employ polygonal meshes or spline patches to represent the underlying 3D geometry. By establishing some mapping between a 2D parameter domain and the 3D surface, appearance attributes resulting from paint operations can be stored separately in texture maps. Once created, these texture maps can be reprojected onto the object surface.

This separation of geometry and appearance entails various inherent drawbacks: the surface parameterization required to connect the two domains unavoidably leads to distortions degrading the visual quality of the 3D painting. In addition, the uniform resolution of the texture map makes it difficult to handle spatially varying levels of painted detail. Most often, a brute force global upsampling is applied to accommodate high resolution strokes. While local parameterizations and surface patching offer potential solutions, optimal patch layout and texture packing can be cumbersome. Furthermore, discontinuities arising at the patch boundaries are difficult to cope with.

In this paper, we provide a solution to the aforementioned limitations which is entirely based on point-sampled geometry. By representing both the object and the brush surface as collections of point samples, we remove the separation between appearance and geometry. All relevant attributes and parameters, such as paint pigments, color, spatial position, and normals, are stored along with the sample.

This conceptual generalization of 2D pixel-based paint models to 3D geometry allows us to elegantly simulate paint transfer by immediate access of pigment properties stored in the samples. In addition, our point-based model can be resampled dynamically and adaptively to store appearance detail across a wide range of scales. Since the paint transfer is handled locally between brush and surface samples, texture parameterization and patching become obsolete. Our approach permits painting onto irregularly sampled object surfaces without distortions or visual artifacts.

Based on this sampled representation we built a prototype framework for interactive 3D painting. Our system supports a variety of paint effects, including paint diffusion, gold, chrome, and mosaic paint, and renders the objects in high quality. For intuitive 3D user interaction we added a haptic feedback model and a six DOF input device.

The remainder of this paper is devoted to the description of the technical details of the approach. After discussing related work, we give an overview in Section 3. Next, we present our brush model, including dynamics, collision handling, and haptic feedback in Section 4. Paint transfer is described in Section 5 and the dynamic resampling is discussed in Section 6. Section 7 addresses implementation details. Finally, we demonstrate the performance of our method and present results in Section 8.

† email: {barta,phil}@cs.kuleuven.ac.be
‡ email: {wicke,grossm,teschner}@inf.ethz.ch
§ email: [email protected]

© The Eurographics Association 2004.

2. Related Work

Point-Sampled Geometry. Points have been shown to be very effective as a display primitive for high quality rendering of complex objects. Rusinkiewicz and Levoy [RL00] use a bounding sphere hierarchy for progressive rendering of large objects. Alexa et al. [ABCO∗01] use a dynamic resampling strategy to obtain high display quality. Zwicker et al. [ZPvG01] further increase the rendering quality by introducing the Elliptical Weighted Average (EWA) filter in the point-based rendering pipeline.

Figure 1: The user interface. The brush is controlled via a PHANToM Desktop haptic device. The object can be rotated using a SpaceMouse.

Point-sampled surfaces are also used for modeling and editing. Pauly et al. [PKKG03] are able to perform large free-form deformations on point-sampled geometry. Both Pauly et al. [PKKG03] and Adams and Dutré [AD03] present algorithms to perform boolean operations on point-sampled objects. Central in both approaches is a dynamic resampling strategy.

In the context of appearance modeling, Pointshop 3D [ZPKG02] extends 2D photo editing functions to 3D point clouds. Zwicker et al. propose a set of tools to paint, filter and sculpt point-sampled objects. Painting is performed by projecting a brush footprint bitmap. Recently, Reuter et al. [RSPS04] developed a Pointshop 3D plugin to interactively texture an object using a point-sampled multiresolution surface representation. Photometric attributes are assigned to the point samples which result from sampling the 3D object in a preprocess. There is no resampling during interactive texturing and therefore texture detail is limited by the sampling resolution at the finest level. There is no brush or paint metaphor.

Virtual Painting. There are several 2D painting systems of which the work of Baxter et al. [BSLM01] is most related to this paper. They present a haptic painting system using deformable 3D brushes to paint on a 2D canvas. Thanks to the virtual brushes and a bidirectional paint transfer model, the artist can achieve an expressive power similar to painting on real canvases. They also introduce various more advanced paint models [BLL03, BWL04]. An alternative brush and paint model is presented by Xu et al. [XLTP03]. They simulate clusters of hairs and use a diffusion process to transfer paint from brush to canvas. Several researchers [WI00, XTLP02, YLO02, CT02] propose other virtual brush models in the context of Chinese calligraphy. However, none of these brush models is employed in a 3D painting system.

Hanrahan and Haeberli [HH90] first suggested a 3D painting system, using a mouse to position the brush. Agrawala et al. [ABL95] color the vertices of a scanned object using a spherical brush volume. There is no remeshing and therefore the painted detail is limited by the original resolution. More recent painting systems [JTK∗99, GEL00, KSD03] provide a haptic interface, but still use a sphere-shaped brush to apply paint to the object. In all these systems, color and material information is stored in fixed-sized textures, limiting the level of detail that the artist can apply.

To overcome these limitations, Berman et al. [BBS94] propose the use of images with different resolutions in different places, called multiresolution images, to represent 2D paintings with regions of varying levels of detail. In 3D, the Chameleon painting system [IC01] overcomes the limitations of fixed-sized textures and predefined uv-mappings by automatically building a texture map and corresponding parameterization during interactive painting. By using different patches for different regions, they allow for adaptively varying the painted level of detail. However, even these elaborate techniques cannot fully solve the parameterization problems inherent in texture mapping. DeBry et al. [DGPR02] as well as Benson and Davis [BD02] solve the parameterization problems by storing paint properties in adaptive octrees, thus only creating texture detail when necessary. Painting is limited to a 2D plane which is then projected onto the surface. The resulting color attributes are stored in the octree.

3. System Overview

User Interface. We developed a user interface which enables the artist to manipulate the brush, mix paint, move the object and apply paint in an intuitive manner (see Figure 1). In our painting system, the virtual brush is positioned using a six DOF input device, such as the PHANToM


Desktop (Sensable Technologies), which also provides haptic feedback to the user. The object can be translated and rotated using a mouse or a six DOF input device such as the SpaceMouse (3Dconnexion). The artist can choose different paint types, such as aquarelle and oil paint. A virtual palette is used to mix paint. We vary the reflectivity of the paint and use environment mapping to enhance realism. The brush casts shadows on the palette and the object, giving visual depth cues. An advanced point renderer [ZPvG01] is used to obtain high quality images of the painted object.

Object Representation and Dynamic Resampling. We avoid problems, such as texture distortion and patch discontinuities, which are often apparent in polygon-based painting systems, by using a point-sampled surface representation and a dynamic sampling strategy. The fundamental idea is to upsample the object locally if necessary or downsample the object locally if possible. For example, if the artist paints a thin line, our system locally upsamples the surface. If later the artist overpaints this small stroke with a large brush, the system locally downsamples the affected areas without losing any geometric information.

Our system handles regularly or irregularly sampled objects, given that the samples adequately capture the object geometry. Each surface sample carries geometric attributes such as position x, normal n and radius r, as well as a set of appearance attributes which represent the paint pigments: dry paint attributes Ad, wet paint attributes Aw and wet paint volume per unit area Vw. The point samples are stored in a kd-tree which is used for efficient collision detection and neighbor collection during painting.

Virtual Brushes. We model virtual brushes using a point-sampled surface, wrapped around a mass-spring skeleton. The skeleton is used to model the dynamics of the brush; the surface samples store paint information. This flexible brush model enables us to define different brush types of various sizes and resolutions. Collision detection between the brush and complex 3D objects is possible at high rates. Although more accurate, simulating individual brush hairs or clusters is too expensive to compute for haptic feedback. Our brush model gives us all the flexibility we need for a plausible painting simulation.

When zooming in on the object surface, the brush is scaled down. As a result, the brush sampling density increases relative to the object sampling density. This enables the artist to apply fine detail to the object. Since both the object and the brush are represented with point samples, an elegant implementation of bidirectional paint transfer can be realized.

Paint Model. Based on [BSLM01], our system handles different paint types such as aquarelle, oil paint, metallic or otherwise reflective paint, as well as other surface types such as mosaic or beaten gold. In order to model this broad variety of paint types, the paint model stores color as well as other attributes, such as diffusion coefficients, reflectivity, shininess, and procedural small-scale geometry manipulations. The latter are used to simulate the geometric detail often found in oil or acrylic paint, or planar patches typically found in mosaics. We chose the paint model from [BSLM01] as a basis for ours as it allows for bidirectional paint transfer while being computationally cheap. Any other paint transfer model that is defined on pixels can be used as well.

Figure 2: Top row: brush dynamics loop. When a collision occurs, the brush tip is projected onto the object surface and the brush skeleton is deformed accordingly. The resulting force acting on the handle is sent to the haptic device. Bottom row: paint transfer loop. The brush point samples are deformed according to the skeleton deformation. After collecting the relevant surface samples, paint is transferred between the brush and the surface samples.

Decoupled Haptics. To guarantee the required 1 kHz update frequency of the haptic device, we decouple the force computation from the rest of the application. Only operations that are necessary to simulate the dynamic behavior of the brush, such as collision detection and skeleton deformation, are performed in the brush dynamics loop (Figure 2, A). All other (more costly) operations, such as brush surface deformation, paint transfer and dynamic resampling, are performed in the paint transfer loop (Figure 2, B), which runs at the 30 Hz display frequency.

4. Brush Model and Haptic Display

To model a virtual brush, we have to devise a geometric representation for the brush surface as well as a physics-based model to simulate the dynamic behavior. We use point samples storing paint information to represent the surface of the brush. These samples are defined relative to a mass-spring skeleton which is used to simulate the dynamic behavior. The force resulting from the dynamics is directly used for haptic feedback.
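The sample-based object representation described above can be made concrete with a short sketch. The class layout and the brute-force range query below are illustrative stand-ins (the paper stores samples in a kd-tree and lists further attributes per paint type); only the attribute set — position x, normal n, radius r, dry/wet paint attributes Ad/Aw, and wet paint volume Vw — follows the text:

```python
from dataclasses import dataclass

# Sketch of the per-sample record from Section 3. Field names are
# illustrative; paint attributes are reduced to RGB tuples here.
@dataclass
class SurfaceSample:
    x: tuple                       # position
    n: tuple                       # normal
    r: float                       # splat radius
    Ad: tuple = (0.0, 0.0, 0.0)    # dry paint attributes
    Aw: tuple = (0.0, 0.0, 0.0)    # wet paint attributes
    Vw: float = 0.0                # wet paint volume per unit area

def range_query(samples, center, radius):
    """Collect all samples inside a sphere. This brute-force scan stands
    in for the kd-tree range query used for collision detection and
    neighbor collection during painting."""
    r2 = radius * radius
    return [s for s in samples
            if sum((a - b) ** 2 for a, b in zip(s.x, center)) <= r2]

samples = [SurfaceSample(x=(float(i), 0.0, 0.0), n=(0.0, 0.0, 1.0), r=0.5)
           for i in range(5)]
hits = range_query(samples, (0.0, 0.0, 0.0), 1.5)
print(len(hits))  # samples at x=0 and x=1 lie within radius 1.5
```

In the actual system the kd-tree makes this query logarithmic rather than linear in the number of samples, which is what permits interactive rates.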


Figure 3: The brush is represented as a point-sampled surface wrapped around a mass-spring skeleton. Left: round brush consisting of one basic skeleton. Right: flat brush modeled using two tips.

Figure 4: Left: undeformed brush. Middle and right: when the brush is deformed, the new positions x′ of the brush samples are computed from the original positions and the rotation of the springs.

4.1. Brush Dynamics

Mass-Spring Skeleton. To simulate the dynamic behavior of a brush, we use a mass-spring skeleton similar to [BSLM01]. The basic skeleton block is a tip skeleton consisting of a single mass (the tip) and eight springs attached to handle points. We model round brushes using one tip. Flat brushes have skeletons consisting of several tips (see Figure 3). More exotic brushes can be modeled using an arbitrary mass-spring skeleton. To simulate viscosity, the brush simulation is heavily damped. Leap-Frog integration [Hoc70] is used to solve the differential equations governing the brush behavior. Even with larger skeletons, this method is fast enough to run in the brush dynamics loop. With an update frequency of 1 kHz, the simulation proved robust for all user manipulations.

Collision Handling. The brush skeleton should never penetrate the object. Therefore, we perform a collision detection query for each skeleton mass point. In order to implement a variant of force shading as proposed by [RKK97], we compute smoothed surface normals and penetration depths during collision detection. Given the position x of the mass point, we search for the N (typically N = 10) closest object samples with positions xi and normals ni and compute a weighted average penetration depth d and local surface normal n as follows:

d = ∑_{i=1}^{N} wi · ni · (xi − x),   (1)

n = ∑_{i=1}^{N} wi · ni,   (2)

wi = (dmax − di) / ∑_{j=1}^{N} (dmax − dj),   (3)

where di = ‖xi − x‖ and dmax = max_i di. This weighting scheme provides a smooth interpolation of normals over the surface. When a collision is detected, i.e. d > 0, the mass point is constrained to the surface of the object:

x ← x + d · n / ‖n‖.   (4)

Depending on its tangential speed, we also apply static or dynamic friction.

Haptic Display. The resulting force F acting on the handle can be computed by adding up the forces exerted by all springs in the brush skeleton that are attached to the handle. The torque resulting from the simulation can be used for haptic feedback, if supported. When the user zooms in on some part of the object, the transformations returned by the haptic device are scaled down to allow for controlled movements even in a very small field of view. The forces sent to the haptic device are scaled up proportionately in order to maintain the illusion of a hard surface.

Thin sheets are a problem for the haptic rendering algorithm, since the springs cannot generate sufficient force to keep the user from pushing the brush through a thin part of the object. This problem disappears when zooming in, mainly due to the scaling of the haptic device movement.

4.2. Brush Surface Representation

The samples representing the surface of the brush carry attributes similar to the object samples, namely position x, orientation n, radius r, paint volume per unit area Vb and paint attributes Ab. The undeformed brush surface is manually modeled to resemble a real brush.

To deform the brush surface, we apply a combination of linear blend skinning [LCF00] and the free-form deformation method for point-sampled geometry presented in [PKKG03] (see Figure 4). Given the position x of the point sample on the undeformed brush, we compute the distances di from the point x to the N = 4 closest handle points. When the brush is deformed, the spring attached to each of those handle points defines a rotation Ri. Applying the rotations Ri to the point x yields new point positions Ri(x). The final position x′ of the deformed sample is obtained as a convex combination of the original position x and the rotated positions Ri(x):

x′ = (1 − α) · x + α · ∑_{i=1}^{N} wi · Ri(x),   (5)
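The collision handling of equations (1)–(4) can be sketched as follows. This is a hypothetical Python transcription: the neighbor search itself is omitted (a list of precollected samples is passed in), and the guard against all-equal distances in the weight denominator is our addition:

```python
import math

def project_mass_point(x, neighbors):
    """Eqs. (1)-(3): weighted penetration depth d and smoothed normal n
    from the closest object samples; eq. (4): constrain the mass point to
    the surface when d > 0. 'neighbors' is a list of (xi, ni) pairs."""
    dists = [math.dist(x, xi) for xi, _ in neighbors]
    dmax = max(dists)
    denom = sum(dmax - dj for dj in dists) or 1.0     # guard (our addition)
    w = [(dmax - di) / denom for di in dists]         # eq. (3)
    d = sum(wi * sum(nc * (xc - pc) for nc, xc, pc in zip(ni, xi, x))
            for wi, (xi, ni) in zip(w, neighbors))    # eq. (1)
    n = [sum(wi * ni[k] for wi, (_, ni) in zip(w, neighbors))
         for k in range(3)]                           # eq. (2)
    if d > 0:                                         # collision detected
        norm = math.hypot(*n)
        x = [pc + d * nc / norm for pc, nc in zip(x, n)]  # eq. (4)
    return list(x), d

# A mass point 0.2 below a flat surface at z = 0 (all normals point up):
neighbors = [((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)),
             ((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)),
             ((0.0, 1.0, 0.0), (0.0, 0.0, 1.0))]
x_new, depth = project_mass_point((0.1, 0.1, -0.2), neighbors)
print(depth > 0, round(x_new[2], 6))  # collision resolved; z lifted to 0
```

Because the weights fall off linearly toward the farthest neighbor, the interpolated normal varies smoothly as the mass point slides across sample boundaries, which is the property force shading relies on.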


α = h0 / (h0 + h1),   (6)

wi = (1 / (N − 1)) · (1 − di / ∑_{j=1}^{N} dj),   (7)

where h0 and h1 denote the distance of the brush sample to the handle and the tip, respectively. Thus, a brush sample close to the skeleton tip will deform more than a sample close to the handle. We apply the same transformation to compute the deformed normal n′ of the brush sample.

Figure 5: Left: if two or more skeletons are constrained to differently oriented surface parts, the brush splits. Right: brush splitting on the back of the Stanford Dragon.

4.3. Brush Splitting

When a brush with several tips moves over a highly curved surface, two tips may be constrained to differently oriented surfaces (see Figure 5). We detect this by comparing the local surface normals computed for each of the tips. If the local surface orientation differs significantly, i.e. when the angle between the two normals is more than 60 degrees, the brush is split and interior brush samples are activated to represent the two brush head volumes. This way we can paint on highly curved surfaces such as the back of the Stanford Dragon (see Figure 5). As will be explained in the next section, we compute paint transfer separately for each of the brush parts.

5. Paint Transfer

When a collision between the brush and the object surface is detected, paint is transferred from the brush to the surface and vice versa. Inspired by the orthogonal projection mapping presented in [ZPKG02], we construct a local planar approximation of the object surface, the paint buffer. We splat both the object samples and the brush samples into the paint buffer, which serves as a common projection plane. Reprojecting the paint buffer results in new object samples storing the painted detail. The different steps performed during a paint event are explained below (see Figure 6).

A. Collecting Surface Samples. In a first step, we collect all object samples that might be affected by the brush. As the points are stored in a kd-tree, this can be efficiently implemented by performing a range query corresponding to the bounding sphere of the brush samples (see Figure 6, A). The bounding sphere is computed from the current positions of the brush samples using the smallest enclosing ball algorithm presented in [Gar99].

B. Paint Buffer Construction. After collecting the relevant samples, we construct the paint buffer in a plane defined by the average normal of the collected surface samples (see Figure 6, B). The position of the plane is arbitrary. Its dimensions are chosen so that the bounding sphere of the brush head projects completely inside the buffer. If the sampling density of the brush is higher than the sampling density of the surface, one brush sample should project to approximately one pixel in the paint buffer. The paint buffer resolution is chosen accordingly. Typical paint buffer resolutions range from 30 by 30 to 50 by 50 pixels. If however the local sampling density of the object is higher than the brush sampling density, the paint buffer pixel size is adjusted to the object sample size. This guarantees that texture detail and geometric features are preserved during painting.

Note that the paint buffer plane is usually a good approximation to the area of the surface that is touched by the brush. If the curvature of this region is very high, the brush is very likely to split. In this case we use multiple paint buffers, one for each skeleton tip. The same holds when the brush enters a crease, as the surface normals computed for the tips will differ significantly. This way, we minimize distortion when painting on highly curved surfaces.

C. Surface Sample Projection. We use a software implementation of the EWA splatting algorithm to rasterize the front-facing samples into the paint buffer (see Figure 6, C). By using the EWA splatting algorithm we avoid aliasing artifacts in all attributes. Note that the orthogonal projection simplifies the splatting algorithm. The following point attributes are written to the paint buffer:

• depth d (distance to the projection plane),
• normal n,
• paint attributes Aw and Ad,
• wet paint volume per unit area Vw.

D. Brush Sample Projection. Similar to the object samples, the back-facing brush samples are projected into the paint buffer (see Figure 6, D). Only fragments with a depth greater than the depth already stored in the paint buffer are written. These fragments represent parts of the brush that penetrate the surface of the object. The following brush sample attributes are written to the paint buffer:

• penetration depth dp,
• paint attributes Ab,
• paint volume per unit area Vb.

Here the penetration depth dp denotes the signed distance between the surface of the brush and the surface of the object at the relevant pixel.
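The geometry of steps B–D can be illustrated with a small sketch. The basis construction below is our assumption of one valid way to realize the plane described in step B (any orthonormal frame whose third axis is the average normal would do); the projection then yields the buffer-plane coordinates and the depth values that the splatting steps write out:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.hypot(*v)
    return tuple(c / n for c in v)

def paint_buffer_basis(samples):
    """Step B: the buffer plane is orthogonal to the average normal of
    the collected surface samples. Returns tangent axes (u, v) and the
    plane normal w. 'samples' is a list of (position, normal) pairs."""
    avg = [sum(s[1][k] for s in samples) for k in range(3)]
    w = normalize(avg)
    # pick any helper axis not parallel to w to build the tangent frame
    a = (1.0, 0.0, 0.0) if abs(w[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = normalize(cross(a, w))
    v = cross(w, u)
    return u, v, w

def project(p, origin, u, v, w):
    """Steps C/D: orthogonal projection into the paint buffer; (s, t)
    are buffer-plane coordinates and d is the depth to the plane."""
    rel = tuple(pc - oc for pc, oc in zip(p, origin))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return dot(rel, u), dot(rel, v), dot(rel, w)

# Two surface samples whose normals average to +z:
surf = [((0.0, 0.0, 0.0), (0.0, 0.1, 1.0)),
        ((1.0, 0.0, 0.0), (0.0, -0.1, 1.0))]
u, v, w = paint_buffer_basis(surf)
s, t, d = project((0.0, 0.0, 2.0), (0.0, 0.0, 0.0), u, v, w)
print(round(d, 6))  # the point lies 2 units above the buffer plane
```

Because the projection is orthogonal, the depth channel directly gives the penetration depth dp of step D once brush and surface depths are compared per pixel.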


Figure 6: Different steps performed during a paint event. A. Collecting the surface samples. B. Constructing the paint buffer orthogonal to the average surface normal. C. EWA splatting of the collected surface samples. D. EWA splatting of the back-facing brush samples. E. Reprojection of pixels in the paint buffer to surface samples.
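When reprojected pixels are finer than the existing sampling, the new samples are managed in the two-level parent/child structure of Section 6. A minimal sketch of that bookkeeping follows; the scalar paint value and the spread threshold are illustrative stand-ins for the paper's paint attributes and deviation measure:

```python
class Sample:
    """Parent samples live in a static structure (the paper's kd-tree)
    and are never deleted; finer child samples live in per-parent lists."""
    def __init__(self, paint):
        self.paint = paint
        self.dead = False      # dead samples are not rendered
        self.children = []

def upsample(parent, child_paint):
    """Attach finer child samples; the parent is only marked dead so the
    original geometric information is retained."""
    parent.children = list(child_paint)
    parent.dead = True

def downsample(parent, max_spread=0.05):
    """If the children's paint attributes barely deviate, delete them,
    resurrect the parent and store the averaged attribute. (The spread
    measure and threshold stand in for the paper's deviation test.)"""
    if parent.children:
        spread = max(parent.children) - min(parent.children)
        if spread <= max_spread:
            parent.paint = sum(parent.children) / len(parent.children)
            parent.children = []
            parent.dead = False

p = Sample(paint=0.2)
upsample(p, [0.50, 0.52, 0.51])   # fine detail painted over the parent
downsample(p)                     # detail is uniform -> collapse it back
print(p.dead, round(p.paint, 2), len(p.children))
```

The point of the two-level layout is visible here: up- and downsampling only touch flags and per-parent lists, never the static spatial index, so repeated overpainting stays cheap.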

E. Reprojection. We compute bidirectional paint transfer using the transfer model proposed in [BSLM01] to determine the resulting color in the paint buffer. After computing paint transfer, we reproject the newly painted pixels onto the object surface (see Figure 6, E). If the brush sampling rate is higher than the local object sampling rate, the object surface is locally upsampled using our dynamic upsampling operator, which is described in detail in Section 6. In order to add geometric effects to the paint type, the normal and position values of the new samples can be modified by the paint model.

Figure 7: Two-level data structure. Top: the original surface samples are stored in a static kd-tree. Bottom: when new samples are added, they are stored in a list belonging to the closest object sample in the kd-tree.

6. Dynamic Sampling

To preserve the detail that is potentially created with a high resolution brush on a less densely sampled object, the object surface has to be upsampled in order to accommodate the texture detail. Conversely, if there is no texture detail to justify the high sampling density, the object surface needs to be downsampled to remove redundant information. These dynamic resampling operators are based on the assumption that the surface of the original object is adequately sampled.

Up- and downsampling is facilitated by a two-level data structure (see Figure 7). The original object samples are stored in a static kd-tree. They may never be deleted, in order to retain the original geometric information. However, during resampling, they can be marked as dead, meaning they will not be rendered. These samples can have children, i.e. new samples replacing or complementing the parent. Children are uniquely assigned to one parent, which is in general the closest sample in the kd-tree. Children are stored in a list belonging to the parent, and are instantly deleted when marked as dead. This way we avoid updating the kd-tree, which is too costly during interactive painting.

Upsampling. The upsampling operator needs to be invoked whenever a brush paints a less densely sampled object. It locally upsamples the area in which more texture detail needs to be stored (see Figure 8, B). In order to find the affected area, the paint buffer storing the paint information is analyzed.

Since the brush can have a very uneven color distribution, the complete brush footprint, i.e. the area where the brush touches the surface, needs to be resampled. In the paint buffer, this area consists of all pixels that have a penetration depth dp > 0. Each of these pixels is reprojected onto the object and becomes a child of the closest sample in the kd-tree. The position of the new sample can be computed using the paint buffer plane position and orientation as well as the pixel's depth value. Its normal and the paint attributes can be directly read from the paint buffer. The new sample's radius equals the diagonal of one paint buffer pixel. All object samples fully or partly covered by these new samples are marked as dead. If some of the killed samples were only partly covered by the reprojected pixels, we additionally reproject pixels that are only touched by the projection of killed samples (see Figure 9).

Note that child samples never become the parent of new samples: they are instantly deleted and the new sample is added to the list of the closest sample in the kd-tree. This way our system can handle multiple overpaints without the need to reorganize a hierarchical data structure.

Downsampling. Reprojection of the paint buffer may result in neighboring samples having the same appearance attributes. This usually happens when overpainting fine detail with a large brush. To remove this unnecessary detail, we apply the following simple downsampling operator. For each parent sample, we compute the deviation of paint attributes of its child samples. When this deviation is below a


A. Before Painting B. Upsampling C. Downsampling We use the software renderer during painting because the
hardware implementation suffers from quantization artifacts
ocurring when locally updating the rendered image.
The brush casts a shadow on the object and the palette.
Shadow mapping can be integrated into the hardware ren-
derer without requiring an additional rendering pass. To ren-
der shadows using the software renderer, we save the ren-
dered image to a texture and add the shadow using an addi-
tional hardware rendering pass performing the shadow test
in a fragment program. Environment mapping enhances re-
alism for reflective paint types.

Figure 8: The sampling density is locally adapted to accurately represent the texture detail. Left: sampling density of the Stanford Bunny. Middle: upsampling to represent the painted detail. Right: downsampling of child samples.

Figure 9: Analyzing the paint buffer. The footprint of the brush is shaded blue. Left: samples projected onto a pixel that is touched by the brush are marked as dead. Right: pixels affected by the brush (blue pixels) will result in new samples. Additionally, in order to avoid holes, we reproject pixels touched by a dead sample and not touched by any non-dead sample (red pixels).

threshold, we remove the child samples, resurrect the parent if necessary, and set the parent's attributes to the average of all its children's attributes. Reasonable values of the deviation threshold are between 0.95 and 0.99, depending on the amount of smoothing the downsampling operator is allowed to perform. To ensure that no geometric detail is lost, we remove only child samples. Thus, we maintain an adequately sampled surface at all times. An example of downsampling is illustrated in Figure 8 (right).

7. Implementation

Paint Effects. In order to give the artists a variety of paint types, we modeled various paint effects. We do not limit the paint attributes to color information. Reflectivity makes chrome or gold paint possible, and shininess can be used to model matte paint or glossy polished surfaces.

The paint transfer model is allowed to modify the small-scale geometry of painted surfaces. A mosaic-like effect is achieved by setting the normal of newly created child samples to their parent's normal instead of blending it. When using gold paint combined with the mosaic effect, we obtain the appearance of beaten gold.

When painting with highly viscous paint, such as oil or acrylic, the brush hairs leave an imprint in the paint. Although we are not able to model the complete geometric effect of adding layers of paint, we can model the surface structure. If the brush skeleton is aligned with the brush velocity, we slightly manipulate the surface normals of the new samples so as to create the illusion of a hair imprint.

Diffusion is the most important feature of aquarelle. Our paint model stores diffusion coefficients and supports isotropic surface diffusion. Each surface sample x_i exchanges wet paint volume ∆V_w with other surface samples x_j:

    ∆V_w = (V_w^i − V_w^j) · e^(−d² / (v̄² · T)),    (8)

where d = ‖x_i − x_j‖ is an approximation of the geodesic distance between the two sample points, v̄² denotes the average squared particle speed, and T is the elapsed time period. The wet paint attributes A_w are adjusted using the paint transfer rules. Because of the exponential decay of ∆V_w, we can restrict the number of samples x_j by only considering neighbor samples within a threshold distance to x_i.
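The exchange rule of Equation (8) translates directly into code. The sketch below is an illustrative reimplementation, not the paper's: the data layout and names are hypothetical, neighbor lists are assumed symmetric, and Euclidean distance approximates the geodesic distance, as in the text.

```python
import math

def diffuse_wet_paint(samples, neighbors, v_bar_sq, T, cutoff):
    """One isotropic surface-diffusion step following Eq. (8): each sample i
    exchanges wet paint volume with its neighbors j, weighted by
    exp(-d^2 / (v_bar_sq * T)).  `samples` maps ids to dicts with 'pos' and
    'V_w'; `neighbors` maps ids to lists of neighbor ids (symmetric)."""
    delta = {i: 0.0 for i in samples}
    for i, s in samples.items():
        xi, Vi = s['pos'], s['V_w']
        for j in neighbors[i]:
            xj, Vj = samples[j]['pos'], samples[j]['V_w']
            d = math.dist(xi, xj)      # approximates geodesic distance
            if d > cutoff:             # exponential decay justifies the cutoff
                continue
            dV = (Vi - Vj) * math.exp(-d * d / (v_bar_sq * T))
            delta[i] -= dV             # i sends dV to j; the (j, i) pass
                                       # credits j with the same amount
    for i in samples:
        samples[i]['V_w'] += delta[i]
    return samples

# Wet paint flows from the full sample towards the empty one.
samples = {0: {'pos': (0.0, 0.0, 0.0), 'V_w': 1.0},
           1: {'pos': (1.0, 0.0, 0.0), 'V_w': 0.0}}
diffuse_wet_paint(samples, {0: [1], 1: [0]}, v_bar_sq=1.0, T=1.0, cutoff=2.0)
```

Because every ordered pair (i, j) is visited once in each direction and the weight is symmetric in d, the total wet volume is conserved by construction.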
Rendering. High-quality renderings of the painted objects are generated with a software implementation of the EWA splatting algorithm [ZPvG01]. Each paint event only affects a local part of the surface. Thus, we can achieve high frame rates by only locally updating the rendered image, unsplatting samples that have been killed and splatting newly added or resurrected samples.

When rotating or translating the object, the system switches to a hardware implementation of the EWA splatting algorithm similar to [BK03] for performance reasons.

8. Results

Inspired by the Art on Cows project, we set up our own virtual Art on Bunnies project and asked a number of artists to paint the Stanford Bunny using our painting system. The artists all used the same irregularly sampled bunny model consisting of 97k point samples. We provided them with a set of 12 round and flat brushes. The painting system runs on a 3 GHz PC with a GeForce FX 5900 graphics board.
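The per-event local update described under Rendering reduces to bookkeeping over the pixels a paint event touched. A toy sketch of that bookkeeping (the splat/unsplat callbacks below are hypothetical stand-ins for the renderer's EWA splatting operations, not the paper's implementation):

```python
def apply_paint_event(framebuffer, killed, added, unsplat, splat):
    """Update only the screen region a paint event touched: remove the
    contributions of killed samples, then draw newly added or resurrected
    ones.  `unsplat`/`splat` stand in for the EWA splatting operations
    and return the set of pixels they touched."""
    dirty = set()
    for s in killed:
        dirty |= unsplat(framebuffer, s)
    for s in added:
        dirty |= splat(framebuffer, s)
    return dirty  # region that must be re-displayed

# Toy one-pixel-per-sample splatting over a dict framebuffer.
def splat(fb, s):
    fb[s['px']] = s['color']
    return {s['px']}

def unsplat(fb, s):
    fb.pop(s['px'], None)
    return {s['px']}

fb = {(1, 2): 'red'}
dirty = apply_paint_event(fb,
                          killed=[{'px': (1, 2), 'color': 'red'}],
                          added=[{'px': (3, 4), 'color': 'blue'}],
                          unsplat=unsplat, splat=splat)
```

The returned dirty region is what keeps frame rates high: only those pixels are re-composited and re-displayed, rather than the whole image.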

c The Eurographics Association 2004.


Adams et al. / Interactive 3D Painting on Point-Sampled Objects

A selection of the resulting bunnies is displayed in Figures 10, 11, 12 and 13 and in the accompanying video. Figure 10 shows the sampling density of the painted Day-and-Night Bunny. The sampling density is increased locally to preserve sharp painted edges. The dynamic resampling strategy also allows for fine painted detail such as the flowers and the bee in Figure 12. The entire bee covers an area about the size of a single point sample of the original model. Reflective paint was used for the Caesar Bunny (Figure 11). You can see diffusion effects on the Savannah Bunny (Figure 11). Note the imprints left by the virtual brush hairs in the painted water on the Beach Bunny (see Figure 13 and the video). Depending on the amount of detail, the resulting bunnies consist of 300k to 800k point samples.

One of the artists painted the Stanford Dragon (Figure 14). Environment mapping is used for the reflecting dragon ball. The eyes of the dragon are laid in mosaic.

Figure 10: Left: Day-and-Night Bunny. Middle: sampling density on the back of the painted bunny. The sampling density is higher where necessary to represent fine detail. You can see higher sampling rates around the sun's boundary. Right: original sampling density in the same region.

Figure 11: Several bunnies painted using our painting system. From left to right and top to bottom: Cloud Bunny, Nemo Bunny, Caesar Bunny, Mondriaan Bunny, Savannah Bunny and Flower-Power Bunny.

Figure 12: Close-ups of the Day-and-Night Bunny. Note the very fine detail. Right: sampling density around the bee.

Figure 13: The Beach Bunny. Right: geometric detail on the water.

Figure 14: Left: the Fire Dragon. Top right: close-up of the reflecting dragon ball. Bottom right: close-up of one of the eyes painted with the mosaic effect.

9. Conclusion and Future Work

We presented a novel painting system for 3D objects. Our system provides virtual brushes, various paint types, and an intuitive user interface. In order to overcome parameterization problems of existing painting applications, we employ a unified sample-based approach to represent geometry and appearance of the 3D object surface as well as the brush surface. Our paint transfer model locally approximates the object surface with one or more planes, also handling highly curved surfaces without distortions. Dynamic resampling of the point-sampled object surface allows the artist to apply arbitrarily fine painted detail.

In our current implementation, collision handling of the brush is performed with respect to the original object geometry. However, user feedback suggests that the actual thickness of applied paint should be considered in order to be felt by the user. Therefore, we intend to integrate a height field to represent paint thickness. This height field would also support the incorporation of more advanced paint transfer models [BLL03]. User feedback has also shown that more intuitive depth cues should be provided. Although our system gives depth information such as the brush shadow, it might be useful to add stereo vision.

Acknowledgments. We thank Michael Waschbüsch for his work on the renderer and the artists Silke Lang and Christian Ratti for their enthusiasm and feedback. The first author is funded as a Research Assistant by the Fund for Scientific Research - Flanders, Belgium (Aspirant F.W.O.-Vlaanderen).

References

[ABCO∗01] Alexa M., Behr J., Cohen-Or D., Fleishman S., Levin D., Silva C. T.: Point set surfaces. In Proceedings of IEEE Visualization 2001 (2001), pp. 21–28.

[ABL95] Agrawala M., Beers A. C., Levoy M.: 3D painting on scanned surfaces. In 1995 Symposium on Interactive 3D Graphics (Apr. 1995), pp. 145–150.

[AD03] Adams B., Dutré P.: Interactive boolean operations on surfel-bounded solids. ACM Transactions on Graphics 22, 3 (July 2003), 651–656.

[BBS94] Berman D. F., Bartell J. T., Salesin D. H.: Multiresolution painting and compositing. In Proceedings of SIGGRAPH 94 (July 1994), Computer Graphics Proceedings, Annual Conference Series, pp. 85–90.

[BD02] Benson D., Davis J.: Octree textures. In Proceedings of ACM SIGGRAPH 2002 (2002), pp. 785–790.

[BK03] Botsch M., Kobbelt L.: High-quality point-based rendering on modern GPUs. In Proceedings of Pacific Graphics 2003 (2003), pp. 335–343.

[BLL03] Baxter W., Liu Y., Lin M. C.: A Viscous Paint Model for Interactive Applications. Tech. rep., University of North Carolina at Chapel Hill, 2003.

[BSLM01] Baxter W., Scheib V., Lin M. C., Manocha D.: DAB: Interactive haptic painting with 3D virtual brushes. In Proceedings of ACM SIGGRAPH 2001 (2001), pp. 461–468.

[BWL04] Baxter W., Wendt J., Lin M. C.: IMPaSTo, a realistic, interactive model for paint. In Proceedings of the Third International Symposium on Non-Photorealistic Animation and Rendering (NPAR) for Art and Entertainment (2004). To appear.

[CT02] Chu N. S.-H., Tai C.-L.: An efficient brush model for physically-based 3D painting. In Proceedings of Pacific Graphics 2002 (2002), pp. 413–422.

[DGPR02] DeBry D., Gibbs J., Petty D. D., Robins N.: Painting and rendering textures on unparameterized models. In Proceedings of ACM SIGGRAPH 2002 (2002), pp. 763–768.

[Gar99] Gärtner B.: Fast and robust smallest enclosing balls. In Proceedings of the European Symposium on Algorithms 1999 (1999), pp. 325–338.

[GEL00] Gregory A. D., Ehmann S. A., Lin M. C.: inTouch: Interactive multiresolution modeling and 3D painting with a haptic interface. In Proceedings of the IEEE Conference on Virtual Reality 2000 (2000), pp. 45–52.

[HH90] Hanrahan P., Haeberli P.: Direct WYSIWYG painting and texturing on 3D shapes. In Proceedings of ACM SIGGRAPH 1990 (1990), pp. 215–223.

[Hoc70] Hockney R.: The potential calculation and some applications. In Methods in Computational Physics (1970), Alder B., Fernbach S., Rotenberg M. (Eds.), vol. 9, Academic Press, New York, pp. 136–211.

[IC01] Igarashi T., Cosgrove D.: Adaptive unwrapping for interactive texture painting. In 2001 ACM Symposium on Interactive 3D Graphics (Mar. 2001), pp. 209–216.

[JTK∗99] Johnson D., Thompson II T. V., Kaplan M., Nelson D. D., Cohen E.: Painting textures with a haptic interface. In Proceedings of the IEEE Conference on Virtual Reality 1999 (1999), pp. 282–285.

[KSD03] Kim L., Sukhatme G. S., Desbrun M.: Haptic editing of decoration and material properties. In Proceedings of the Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems 2003 (2003), pp. 213–220.

[LCF00] Lewis J. P., Cordner M., Fong N.: Pose space deformation: a unified approach to shape interpolation and skeleton-driven deformation. In Proceedings of ACM SIGGRAPH 2000 (2000), pp. 165–172.

[PKKG03] Pauly M., Keiser R., Kobbelt L. P., Gross M.: Shape modeling with point-sampled geometry. In Proceedings of ACM SIGGRAPH 2003 (2003), pp. 641–650.

[RKK97] Ruspini D. C., Kolarov K., Khatib O.: The haptic display of complex graphical environments. In Proceedings of ACM SIGGRAPH 1997 (1997), pp. 345–352.

[RL00] Rusinkiewicz S., Levoy M.: QSplat: A multiresolution point rendering system for large meshes. In Proceedings of ACM SIGGRAPH 2000 (2000), pp. 343–352.

[RSPS04] Reuter P., Schmitt B., Pasko A., Schlick C.: Interactive solid texturing using point-based multiresolution representations. In Journal of WSCG 2004 (2004), vol. 12, pp. 363–370.

[WI00] Wong H. T., Ip H. H.: Virtual brush: a model-based synthesis of Chinese calligraphy. Computers & Graphics 24, 1 (2000), 99–113.

[XLTP03] Xu S., Lau F. C., Tang F., Pan Y.: Advanced design for a realistic virtual brush. In Proceedings of Eurographics 2003 (2003), pp. 533–542.

[XTLP02] Xu S., Tang M., Lau F., Pan Y.: A solid model based virtual hairy brush. Computer Graphics Forum 21, 3 (2002), 299–308.

[YLO02] Yeh J., Lien T., Ouhyoung M.: On the effects of haptic display in brush and ink simulation for Chinese painting and calligraphy. In Proceedings of IEEE Pacific Graphics 2002 (2002), pp. 439–441.

[ZPKG02] Zwicker M., Pauly M., Knoll O., Gross M.: Pointshop 3D: an interactive system for point-based surface editing. In Proceedings of ACM SIGGRAPH 2002 (2002), pp. 322–329.

[ZPvG01] Zwicker M., Pfister H., van Baar J., Gross M.: Surface splatting. In Proceedings of ACM SIGGRAPH 2001 (2001), pp. 371–378.
