\documentclass[12pt]{article}
\topmargin=-.25in
\oddsidemargin=0in
\evensidemargin=0in
\textwidth=6.5in
\textheight=8.5in
\newcommand{\Bez}{B{\'e}zier}
\title{Contouring with $C^1$ data (part 1)}
\author{Josh Levenberg}
\date{February 13, 2003}
\begin{document}
\maketitle
\section*{Segue} % -------------------------------------
Last semester I talked about two different problems:
\begin{enumerate}
\item How to render a regularly sampled surface $(x, y, f(x,y))$, and
\item How to construct a (signed) distance function $f(x,y)$ to a subset
$C$ of the plane -- that is, $f(x,y) = \min_{(x_c,y_c)\in C}
\| (x,y) - (x_c,y_c) \|$ (possibly times $-1$)
\end{enumerate}
Observe:
\begin{itemize}
\item You could render a distance function using method (1) -- it would
end up looking like a cone near any isolated points of $C$, and an extruded
V near a line segment in $C$.
\item The zero set of the distance function to $C$ is precisely the set $C$.
Contouring is all about finding the zero set of functions.
\end{itemize}
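For concreteness, a minimal sketch of the two kinds of distance functions just mentioned (the function names are my own, for illustration):

```python
import math

def distance_to_point(x, y):
    """Distance to the isolated point C = {(0, 0)}; rendering the graph
    (x, y, f(x, y)) gives a cone with its apex at the origin."""
    return math.hypot(x, y)

def signed_distance_circle(x, y, r=1.0):
    """Signed distance to the circle C = {x^2 + y^2 = r^2}: negative
    inside, positive outside, and zero precisely on C."""
    return math.hypot(x, y) - r
```

The zero set of \verb|signed_distance_circle| is exactly the circle, illustrating the second observation above.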
\section*{Motivation} % -------------------------------------
John Strain has a paper that uses distance functions to track the
evolution of a moving interface: given the initial interface $C_0$ you
can construct its signed distance function $f_0$ (so it is negative in
one fluid and positive in the other). Instead of evolving the
interface itself, we can evolve the distance function, to get $\tilde f_h$, an
approximation to the distance function to $C_h$, the interface at time
$h$. We can then find the zero set of $\tilde f_h$ to get $C_h$, and repeat. The
idea is that it is easier to contour $\tilde f_h$ than other ways of dealing
with cusps and topology changes in $C_t$. Of course, Strain's method
incorporates other buzzwords like ``quad-trees'' and ``semi-Lagrangian.''
\section*{Focus} % -------------------------------------
I have concentrated on the problem of determining the contours of a
function where we have both positional and derivative (gradient) data.
I have primarily looked at solutions that use only local data -- for
speed and scalability reasons. My main tools are splines and adaptive
meshes.
\section*{1D} % -------------------------------------
Consider the 1 dimensional problem: we have a black box which will
tell us the value or derivative of $f$ at any specified point and we
want to find all roots of $f$ in a given interval. Of course, we'd like
to minimize the number of calls to the black box, and have some bound
on how far the roots of $f$ are from the output of the algorithm.
My approach divides this into two stages: approximation of $f$ by a
spline (i.e., a piecewise cubic), and then finding the roots of a cubic
on an interval.
\subsection*{Approximating $f$ by a spline} % ----
When we approximate $f$ by a spline, we introduce an approximation
error. This moves the roots by an amount proportional to the
approximation error ($O(h^4)$, where $h$ is the distance to the
nearest sample, since I use cubic approximation) and inversely
proportional to the magnitude of the derivative in the interval
(double roots get special handling). Therefore we want to have more
samples near roots, with extra samples if the derivative is small as
well. Of course, we don't know a priori where the roots are, so some
sampling is needed just to ensure we don't miss any roots.
My approximation algorithm goes like this:
\begin{enumerate}
\item Query the function at the end points of the interval.
\item If we can tell there are no roots in this interval, discard it.
\item If the approximation error is low, contour the corresponding cubic.
\item Otherwise, bisect the interval and repeat with each piece.
\end{enumerate}
Here the adaptive mesh results from the repeated bisection of the
interval. I use a cubic approximation within the interval: it is
completely determined by the values and derivatives of the function at
its endpoints. Since adjacent cubics use the same value and
derivative at their shared point, they abut with $C^1$ continuity. I use
the cubic \Bez\ spline basis to represent these cubics since that
basis is stable and simplifies root finding.
\subsection*{Background: cubic 1-d \Bez\ splines} % ----
A cubic \Bez\ spline $s(t) : [0,1] \to \mathbf{R}$ is
determined by four values $s_0, \ldots, s_3$.
The formula is $s(t) = \sum_{i=0}^{3} s_i C(3,i)\, t^i (1-t)^{3-i}$,
where $C(3,i)$ is the binomial coefficient.
Main facts:
\begin{itemize}
\item $s(0)=s_0$
\item $s(1)=s_3$
\item $s'(0)= 3(s_1-s_0)$
\item $s'(1)= 3(s_3-s_2)$
\item $s(t) = c_0(t) s_0 + \cdots + c_3(t) s_3$ where $c_i(t)=C(3,i)\, t^i
(1-t)^{3-i} \ge 0$ and $\sum_i c_i(t) = (t+1-t)^3 = 1$
for all $t$ in $[0,1]$, so $(t, s(t))$ lies in the convex hull of
the points $(i/3, s_i)$.
\item There is an algorithm (de Casteljau subdivision) for computing the
coefficients of the spline restricted to a subinterval.
\end{itemize}
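The subinterval algorithm in the last item is de Casteljau subdivision; a short sketch (written for the cubic case, though it works for any degree):

```python
def subdivide(s, u):
    """de Casteljau subdivision: given Bezier coefficients s of a cubic
    on [0, 1], return the coefficients of its restrictions to [0, u] and
    [u, 1], each reparametrized back to [0, 1]."""
    a = list(s)
    left, right = [a[0]], [a[-1]]
    while len(a) > 1:
        # Repeatedly average adjacent coefficients at ratio u.
        a = [(1.0 - u) * x + u * y for x, y in zip(a, a[1:])]
        left.append(a[0])
        right.append(a[-1])
    return left, right[::-1]
```

For example, the coefficients $(0, 1, 2, 3)$ represent the linear function $3t$, and splitting at $u = 1/2$ yields the coefficients of $1.5t$ and $1.5 + 1.5t$.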
\subsection*{1D root finding} % ----
Simple algorithm for finding the roots of $s(t)$:
\begin{enumerate}
\item if all the $s_i$ have the same sign, there are no roots
\item if the interval is smaller than $\epsilon$, output the midpoint as a root
\item otherwise bisect the interval and repeat for each half
\end{enumerate}
This has linear convergence (adds one binary digit with each iteration).
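A sketch of the three steps above (the midpoint split is itself done by de Casteljau averaging, so the sign test applies to each half's own coefficients; same-sign coefficients guarantee no root by the convex hull property, while a sign change only indicates a possible root):

```python
def bezier_roots(s, a=0.0, b=1.0, eps=1e-10):
    """Roots of the cubic with Bezier coefficients s over [a, b], found
    by the coefficient sign test plus repeated bisection."""
    if all(c > 0 for c in s) or all(c < 0 for c in s):
        return []                       # step 1: no sign change, no roots
    if b - a < eps:
        return [0.5 * (a + b)]          # step 2: interval small enough
    # step 3: split at the midpoint (de Casteljau at u = 1/2), recurse.
    t = list(s)
    left, right = [t[0]], [t[-1]]
    while len(t) > 1:
        t = [0.5 * (x + y) for x, y in zip(t, t[1:])]
        left.append(t[0])
        right.append(t[-1])
    m = 0.5 * (a + b)
    return (bezier_roots(left, a, m, eps) +
            bezier_roots(right[::-1], m, b, eps))
```

Note that a root landing exactly on a subdivision point can be reported from both halves; deflation (discussed below) is one way to keep roots isolated.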
To get quadratic convergence, replace step `3' with:
\begin{enumerate}
\item[3a.] Find the intersection of the convex hull of the $s_i$ with the
x-axis.
\item[3b.] If the new interval is less than $70\%$ the length of
the previous interval, find the new $s_i$ and go to step 1
\item[3c.] Otherwise bisect and repeat for each half of the new
interval
\end{enumerate}
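Step 3a does not need an explicit hull construction for four points: every hull edge joins two control points, so intersecting the segment between each opposite-sign pair of control points with the axis and taking the extreme crossings yields the hull's intersection interval (crossings from interior pairs fall inside it). A sketch, with my own function name:

```python
def hull_interval(s):
    """Intersection of the convex hull of the control points (i/3, s_i)
    with the axis s = 0, returned as a t-interval (lo, hi); returns None
    when every coefficient has the same strict sign (so no roots)."""
    n = len(s) - 1
    pts = [(i / n, si) for i, si in enumerate(s)]
    lo, hi = 1.0, 0.0
    for i, (ti, si) in enumerate(pts):
        if si == 0.0:                    # control point on the axis
            lo, hi = min(lo, ti), max(hi, ti)
        for tj, sj in pts[i + 1:]:
            if si * sj < 0.0:            # segment crosses the axis
                t = ti + (tj - ti) * si / (si - sj)
                lo, hi = min(lo, t), max(hi, t)
    return (lo, hi) if lo <= hi else None
```

By the convex hull property, every root of the cubic lies inside the returned interval, which is what makes the shrink-or-bisect loop of steps 3a--3c safe.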
Refinements: Can avoid error accumulation by only subdividing the
original interval. Can keep roots isolated by factoring them out as
they are found (``deflation'')~--- this can be done entirely in the \Bez\ basis.
The above algorithm is robust and fast. About the only improvements
to my implementation I could think of are:
\begin{enumerate}
\item base the error analysis on how far the approximated root is from a sample
\item subdivide at a more clever point
\end{enumerate}
Neither of these would generalize to higher dimensions.
\section*{2D} % -------------------------------------
Let's consider the 2-d case. The general approach from 1-d carries
over, only the details are harder.
\subsection*{Adaptive mesh} % ----
For an adaptive mesh, I use a binary triangle tree (which I talked
about last semester; short version: it's (basically) a restricted
quad-tree with a specific triangulation).
\subsection*{Cubic approximation} % ----
I still do a piecewise cubic approximation, but it's much harder to get
$C^1$ continuity: the degrees of freedom do not match up. A
2-d cubic has only 10 degrees of freedom, but on a triangle we have to
match the values and gradients at the three corners (9 conditions)
and the cross boundary derivative along each edge (3 more conditions),
for a total of 12.
It may be possible to create a cubic $C^1$ approximation using a
global approach, but this is slow and not guaranteed to work.
Using a bicubic (a tensor product of two cubics) does not solve the
problem: we would then have 16 degrees of freedom, but would have to match
the value and gradient at four points (12 conditions) and the
cross boundary derivative on four sides (8 more conditions). Also,
building an adaptive mesh out of square domains is much more difficult
than out of triangular domains.
The solution: divide each triangle into three by adding the centroid.
This gives more control points, isolates the sides from each other, and
gives us plenty of degrees of freedom [Clough and Tocher 1965]. I use
degree lowering to set the cross boundary derivative, which means I
can in general reproduce quadratics but not all cubics. There are
methods for estimating that derivative so that you may reproduce
cubics as well.
Detail: I actually use a different but compatible (in the $C^1$ sense)
interpolant for the diamonds in the binary triangle tree: this uses 4
cubics instead of 6 to approximate the area of two triangles. This
results in less work (and maybe even better approximations).
\subsection*{Contouring a cubic in 2D} % ----
Next problem is contouring a cubic on a triangle. Again I use
splines~--- this time triangular cubic splines. [Grandine and Klein
1997] has an algorithm for contouring tensor product splines, which
I've adapted to triangular splines. The approach goes something like this:
\begin{enumerate}
\item Define a 2-d direction as ``up.''
\item Define the height function by dot product with the up vector.
\item Consider the points where the contour is perpendicular to the up
vector; these correspond to critical points of the height function on
the contour. Between any two critical values, the contours are
topologically simple.
\item Figure out the topology at the critical points.
\item Use cubic root finding to find a bunch of points on the contours.
\item Compute the normal and curvature at those points.
\item Find cubic splines that match [de Boor, H\"ollig, and Sabin
1987]~--- 6th order accurate!
\item Join the splines together according to the topology
\end{enumerate}
Step 3 requires solving an equation like:
\begin{eqnarray*}
f&=&0 \\
\frac{\partial f}{\partial s} &=& 0
\end{eqnarray*}
for $f$ a cubic in $s$ and $t$. I use a generalization of the 1D root
finder, [Sherbrooke and Patrikalakis 1993] modified for cubics instead
of bicubics. Unfortunately, this only has linear convergence, but I
have another modification that I think will give quadratic
convergence.
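To make the system concrete, here is a plain Newton iteration on it (an illustration only, not the projected-polyhedron solver above), using the hypothetical example $f(s,t) = s^2 + t - 1$: its contour is the parabola $t = 1 - s^2$, whose turning point with respect to $s$ is $(0, 1)$.

```python
def solve_turning_point(f, fs, ft, fss, fst, s, t, iters=25):
    """Newton's method for the simultaneous system f = 0, df/ds = 0;
    the Jacobian rows are the gradients of f and of df/ds."""
    for _ in range(iters):
        r1, r2 = f(s, t), fs(s, t)          # residuals of the system
        a, b = fs(s, t), ft(s, t)           # row 1: (f_s, f_t)
        c, d = fss(s, t), fst(s, t)         # row 2: (f_ss, f_st)
        det = a * d - b * c
        if det == 0.0:
            break                           # singular Jacobian; give up
        s -= (d * r1 - b * r2) / det        # Cramer's rule update
        t -= (a * r2 - c * r1) / det
    return s, t

# Example: f(s, t) = s^2 + t - 1, starting from (0.7, 0.3).
s, t = solve_turning_point(
    lambda s, t: s * s + t - 1.0,   # f
    lambda s, t: 2.0 * s,           # f_s
    lambda s, t: 1.0,               # f_t
    lambda s, t: 2.0,               # f_ss
    lambda s, t: 0.0,               # f_st
    0.7, 0.3)
```

Newton needs a good starting point and analytic derivatives; the convex-hull-based solver trades that for guaranteed bracketing of all solutions.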
Three sources of error:
\begin{enumerate}
\item approximation error: from using spline instead of original function:
\[
\begin{array}{c}
\left. \begin{array}{l}
2 \mbox{ with an adaptive mesh} \\
4 \mbox{ with a regular grid}
\end{array} \right\}
\mbox{ times as many triangles} \\
\mbox{to scale triangles by } \frac{1}{2} \\
\mbox{to reduce the error by a factor of }
\left\{ \begin{array}{l}
8 \mbox{ with standard Clough-Tocher} \\
16 \mbox{ if you reproduce cubics}
\end{array} \right.
\end{array}
\]
\item root finding error: whenever we execute the operation ``find all
roots at a given height'': quadratic convergence, so the time is
$O(\log(-\log \mbox{error}))$. This is the same amount of time as the
simultaneous solver, assuming my modifications give quadratic
convergence there too.
\item spline fitting error: when we construct the splines representing
the solution in step 7, the error is $O(h^6)$ where $h$ is the stepsize
\end{enumerate}
Algorithmic complexity is approximately:
\[ O\left(
(\mbox{length of solution})
*\frac{1}{\sqrt[3]{\mbox{error1}}}\right)\]
function and gradient evaluations plus:
\[ O\left((\mbox{length of solution})
*\frac{1}{\sqrt[3]{\mbox{error1}}}
*\left(\log \left(-\log
\frac{\mbox{error2}}{\sqrt[3]{\mbox{error1}}}\right) +
\frac{\sqrt[3]{\mbox{error1}}}{\sqrt[6]{\mbox{error3}}}
\right)\right)\]
other work. If you can reproduce cubics, the
$\sqrt[3]{\mbox{error1}}$ are replaced by
$\sqrt[4]{\mbox{error1}}$.
Total error is approximately error1 $+$ error2 $+$ error3; generally
error1 $\gg$ error2, error3.
Note that I haven't done careful timing to verify these running time estimates.
Note that there are many bad cases: consider $f(x,y)=y(x^2+y^2-1)$.
Problems happen whenever the gradient is $0$. My solution is to look at
the behavior of the function $2\epsilon$ above and below the problem
area and then extrapolate to the problem point.
\end{document}