Loren on the Art of MATLAB

Computational Geometry in MATLAB R2009a, Part I

Posted by Loren Shure

I'm pleased to introduce Damian Sheehy as this week's guest blogger. Damian is a geometry developer at The MathWorks; his background is in geometric modeling and mesh generation.


A Creature Called Epsilon

When I set out to write this post, I reviewed some of Loren's past blogs to refresh my sense of what works well here. When I read her posts on A Glimpse into Floating-Point Accuracy, Another Lesson in Floating Point, and Collinearity, I knew I was in familiar territory. The chances are, you too have had a brush with epsilon at some point in your programming past.

I had my first encounter with ulp (unit in the last place) and his accomplice eps when I was a graduate student in the early 90s. My advisor and I were working on an algorithm that was based on a Delaunay triangulation. The triangulation algorithm would often "fall over" due to numerical problems. I wasted effort trying to "fine tune" the tolerances to get our collection of datasets to triangulate successfully. As you know, this is like trying to push a wrinkle out of a fully-fitted carpet. Sure enough, before long my advisor would present a new dataset from a research collaborator and I was back in business. The frustrating part was the lack of information on how to deal with these failures, because they were rarely addressed in texts or technical publications. As I was finishing up at graduate school, and thereafter, things began to change for the better. Technical journals and conferences dealing with geometric computing began to solicit research papers on robustness. The papers that appeared through the years were encouraging and I had a reassuring sense that help was on the way.

Why Geometric Computing is Susceptible to Robustness Issues

In Loren's blog on Collinearity, we realized the collinearity test for three points could be viewed as computing the area of a triangle formed by the points and then checking for zero area. This reduced the geometric test to computing a determinant. In fact, this geometric test can also be used to tell whether a query point lies to the left or the right of an oriented line defined by two points. Here, give it a try . . .

p1 = [1 1];
p2 = [5 5];
p3 = [2 4];
plotPointLine(p1,p2,p3);
axis equal;
pointLineOrientation(p1,p2,p3)
ans =
LEFT
q1 = [1 1];
q2 = [5 5];
q3 = [4 2];
pointLineOrientation(q1,q2,q3)
ans =
RIGHT
dbtype pointLineOrientation
1     function orient = pointLineOrientation(p1,p2,p3)
2         mat = [p1(1)-p3(1) p1(2)-p3(2); ...
3                p2(1)-p3(1) p2(2)-p3(2)];
4         detmat = det(mat);
5         if detmat == 0
6             orient = 'ON';
7         elseif detmat < 0
8             orient = 'RIGHT';
9         else
10            orient = 'LEFT';
11        end

dbtype plotPointLine
1     function plotPointLine(p1,p2,p3)
2     P = [p1;p2;p3];
3     plot(P(1:2,1),P(1:2,2));
4     hold on;
5     plot(p3(1),p3(2),'*r')
6     ptlabels = arrayfun(@(n) {sprintf('  %d', n)}, (1:3)');
7     Hpl = text(P(:,1),P(:,2),ptlabels, 'FontWeight', 'bold');

If the point lies to the left of the line, the sign of the determinant will be positive – corresponding to a triangle of positive area. Conversely, if the point lies to the right, the determinant will be negative. As we saw before, if the query point is on the line all three points are collinear and the determinant is zero. Many problems in geometric computing reduce to evaluating the sign of a determinant. The construction of a 2D Delaunay triangulation is based on two geometric tests: the point-line orientation test we just saw, and a point-in-circle test that decides whether a query point lies within the circumcircle defined by the three vertices of a triangle. It is surprising that two simple geometric tests you learned in high school, plus some programming knowledge, are all you need to write a 2D Delaunay triangulation algorithm. However, if you were to try this for real you would find things can sometimes go wrong – very wrong.
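
To make that second predicate concrete, here is a minimal sketch of a point-in-circumcircle test written in the same spirit as pointLineOrientation. The function name and layout are just for illustration; it assumes the triangle vertices a, b, c are supplied in counterclockwise order, and like the orientation test it reduces to the sign of a determinant.

function inside = pointInCircle(a, b, c, p)
% Is the query point p strictly inside the circumcircle of triangle a-b-c?
% Assumes a, b, c are given in counterclockwise order (otherwise the sign flips).
mat = [a(1)-p(1)  a(2)-p(2)  (a(1)-p(1))^2 + (a(2)-p(2))^2; ...
       b(1)-p(1)  b(2)-p(2)  (b(1)-p(1))^2 + (b(2)-p(2))^2; ...
       c(1)-p(1)  c(2)-p(2)  (c(1)-p(1))^2 + (c(2)-p(2))^2];
inside = det(mat) > 0;    % positive determinant means p is inside the circumcircle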

Robust Delaunay Triangulations

The integrity of the Delaunay triangulation algorithm hinges on the correct evaluation of the determinants used in the geometric tests. Problems arise when the determinant is ambiguously close to zero. Inconsistencies creep in due to numerical roundoff: orientation tests can return an incorrect outcome, and the algorithm can subsequently fail.
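
One quick way to probe this behavior is the rough sketch below: it evaluates the same orientation determinant in two algebraically equivalent ways – the textbook expansion and MATLAB's det – for a third point perturbed by a few ulps around a collinear configuration, and counts how often the two signs disagree. The particular points and perturbation range are arbitrary choices; a nonzero count shows that roundoff alone can yield inconsistent orientation results, while a zero count only means this particular probe did not hit a bad case.

p1 = [0.5 0.5];  p2 = [12 12];               % two points on the line y = x
orientA = @(p3) sign((p1(1)-p3(1))*(p2(2)-p3(2)) - (p1(2)-p3(2))*(p2(1)-p3(1)));
orientB = @(p3) sign(det([p1-p3; p2-p3]));   % same determinant, evaluated via det
disagreements = 0;
for i = -128:128
    for j = -128:128
        p3 = [24 + i*eps(24), 24 + j*eps(24)];   % perturb a third point by a few ulps
        if orientA(p3) ~= orientB(p3)
            disagreements = disagreements + 1;
        end
    end
end
disagreements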

The Qhull computational geometry library, which was developed in the early 1990s, identified these numerical precision problems during the computation. Qhull communicated the problems to the user through a warning or error message, and provided some helpful interactive options to try to work around the issues.

For example, if we compute the Delaunay triangulation of the corner points of a unit square using the Qhull-based delaunayn method, a numerical precision warning is issued; this is because all four points are co-circular. Qhull provides an option to work around these problems; the 'QJ' option adds noise to the points to avoid this degenerate scenario.

X = [0 0; 1 0; 1 1; 0 1]
tri = delaunayn(X, {'QJ'})
X =
     0     0
     1     0
     1     1
     0     1
Warning: qhull precision warning:
The initial hull is narrow (cosine of min. angle is 1.0000000000000000).
A coplanar point may lead to a wide facet.  Options 'QbB' (scale to unit box)
or 'Qbb' (scale last coordinate) may remove this warning.  Use 'Pp' to skip
this warning.  See 'Limitations' in http://www.qhull.org/html/qh-impre.htm

 
tri =
     1     2     4
     4     2     3

While this workaround is helpful, it is not guaranteed to be robust, and failures can still arise in practice. Fortunately, technology has progressed since the early 1990s.

In Loren's earlier blog on Collinearity, follow-up comments cited numerical inaccuracies associated with the computation of a determinant, and recommended the rank/SVD-based test as more numerically reliable. But reducing numerical roundoff does not guarantee the correct outcome of the geometric test either; the implementation is still vulnerable to numerical failure. In the past decade researchers have focused attention on robustness issues like these, and on robustly computing Delaunay triangulations in particular. For Delaunay triangulations, the generally accepted solution is based on the notion of Exact Geometric Computing (EGC).

One reply posted on Loren's Collinearity blog highlighted this solution, but it didn't attract much attention. The idea is based on evaluating the determinant using exact arithmetic. Since this is computationally expensive, a two-step process is applied: the determinant is computed using floating point arithmetic in the usual way, and a corresponding error bound is worked out along with this computation. The error bound is used to filter out the ambiguous cases, and EGC is then applied to those cases to ensure a correct outcome.
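
As a rough illustration of that two-step idea, here is a sketch of a filtered orientation predicate. It is not the CGAL implementation: the function name and the error-bound constant are deliberately conservative assumptions of mine, and the exact fallback uses the Symbolic Math Toolbox (sym with the 'f' flag converts doubles exactly), whereas CGAL relies on specialized exact arithmetic.

function orient = filteredOrientation(p1, p2, p3)
% Sketch of a floating-point filter: trust the floating-point sign only when
% it exceeds a forward error bound; otherwise recompute the sign exactly.
ax = p1(1)-p3(1);  ay = p1(2)-p3(2);
bx = p2(1)-p3(1);  by = p2(2)-p3(2);
d = ax*by - ay*bx;                              % approximate determinant
errBound = 8*eps*(abs(ax*by) + abs(ay*bx));     % conservative bound (an assumption)
if abs(d) > errBound
    orient = sign(d);                           % filter succeeded; the sign is certain
else
    % Ambiguous case: redo the computation exactly on the original coordinates
    p1s = sym(p1,'f');  p2s = sym(p2,'f');  p3s = sym(p3,'f');
    dExact = (p1s(1)-p3s(1))*(p2s(2)-p3s(2)) - (p1s(2)-p3s(2))*(p2s(1)-p3s(1));
    orient = double(sign(dExact));
end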

In R2009a we adopted 2D and 3D Delaunay triangulations from the Computational Geometry Algorithms Library (CGAL) to provide more robust, faster, and memory-efficient solutions in MATLAB. CGAL employs EGC and floating point filters to guarantee numerical robustness. Let's see how EGC performs for our collinear test. We will use the new class DelaunayTri, which is based on CGAL, to experiment a little.

Collinear Test Revisited

Let's take a look at Loren's collinear test again; first, choose three points we know are collinear. If we attempt to construct a Delaunay triangulation using DelaunayTri, then a triangle should not be formed because the three points are collinear.

format long
p1 = [1 1]; p2 = [7500 7500]; p3 = [750000 750000];

Here's the test based on the determinant computation

collinear = (det([p2-p1 ; p3-p1]) == 0)
collinear =
     1

and the test based on rank computation.

collinear = (rank ([p1 ; p2 ; p3]) < 2)
collinear =
     1
P = [p1;p2;p3];

Here's the test based on EGC.

dt = DelaunayTri(P)
dt = 
  DelaunayTri

  Properties:
      Constraints: []
                X: [3x2 double]
    Triangulation: []

The Triangulation property is empty ([]) since the points are collinear; this is what we expect. Now move point p1 by a small amount to make the three points non-collinear.

p1 = [1 1+eps(5)];

Here's the test based on the determinant computation

collinear = (det([p2-p1 ; p3-p1]) == 0)	% Gives incorrect result
collinear =
     1

and the test based on rank computation.

collinear = (rank ([p1 ; p2 ; p3]) < 2) % Gives incorrect result
collinear =
     1

Finally, here's the EGC-based test.

P = [p1;p2;p3];
dt = DelaunayTri(P);
collinear = isempty(dt(:,:))            % Gives the correct result
collinear =
     0

In this instance a triangle was constructed, which indicates the three points are not considered to be collinear. We know this is the correct assessment.

What Conclusions Can We Draw From This?

Well, EGC plays a very important role in the robust implementation of algorithms like Delaunay triangulations, Voronoi diagrams, and convex hulls. These algorithms have a long history of being notoriously susceptible to numerical precision issues.

This raises an interesting question: should we adopt exact geometric tests for general algorithm development within MATLAB? For example, should we be using exact collinearity tests, exact point-line orientation tests, and the like?

In general we should not, because the environment we are working in has finite precision. If we used EGC extensively we would realize that while it is numerically correct, it may fail to capture our programming intent. In the flow of an algorithm, we take the output from one step of the computation and pass it as input to the next.

What happens if we pass inexact input to an exact geometric test? Suppose we begin our computation with three collinear points, and next we apply a rotation matrix to reorient the points in a particular direction. In doing so we introduce inaccuracies; will the exact geometric test right those inaccuracies? No, it will not; see for yourself.

p1 = [1 1];
p2 = [7500 7500];
p3 = [750000 750000];
P = [p1;p2;p3];
T = [cos(pi/3) sin(pi/3); -sin(pi/3) cos(pi/3)];
P2 = P*T;
dt = DelaunayTri(P2);
collinear = isempty(dt(:,:))
collinear =
     0

A triangle was constructed from points {p1, p2, p3}, so in terms of EGC the three points are considered to be noncollinear.

While the outcome is technically correct it's probably not what you expected or hoped for. In the context of Delaunay triangulations and their applications, the consequences of imprecise input are relatively benign. EGC is a powerful tool for addressing a long-standing robustness problem in this domain.

In our programming world of finite precision a judicious choice of tolerance is generally sufficient to capture the correct behavior. Now, that raises the question: what's a judicious choice of tolerance? What would be a suitable tolerance for use in the collinearity test, for example? Well, in this context, a good tolerance is one that captures the loss of precision in the computation of the determinant in a dynamic or relative sense. It should work when the coordinates of the points are small and it should work when they are large. A predefined fixed value will not work over a wide range of coordinate values. When I compute a tolerance-sensitive determinant to be used in a geometric test, I examine the products that are produced during the expansion. I then choose the product with the largest magnitude and use it to scale eps.

For example, when computing the determinant of matrix [a b; c d], I choose the magnitude,

     mag = max(abs(a*d), abs(b*c))

and then compute the relative tolerance,

     relativeTol = eps(mag);

and I apply the tolerance as follows.

     collinear = (abs(det([p2-p1 ; p3-p1])) <= relativeTol)

(For simplicity, the products a*d and b*c are not reused to compute det.)
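
Packaged as a single helper, the test might look like the following sketch (the function name isCollinearTol is mine, not part of MATLAB).

function tf = isCollinearTol(p1, p2, p3)
% Collinearity test using a relative tolerance scaled by the largest
% product in the determinant expansion, as described above.
v1 = p2 - p1;   v2 = p3 - p1;
a = v1(1);  b = v1(2);  c = v2(1);  d = v2(2);
mag = max(abs(a*d), abs(b*c));         % magnitude of the dominant product
tf  = abs(a*d - b*c) <= eps(mag);      % eps(mag) adapts to the coordinate scale

Applied to the rotated points P2 from the example above, isCollinearTol(P2(1,:), P2(2,:), P2(3,:)) would likely report the points as collinear despite the roundoff introduced by the rotation, although the outcome naturally depends on the data.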

The Bottom Line?

So, what's the point of telling you all about Exact Geometric Computing? Ohhh, I am hoping this overview gives you some insight into numerical issues in geometric computing. But the important point for you, the user, is you no longer need to worry about numerical robustness issues when constructing 2D/3D Delaunay triangulations, Voronoi diagrams, or convex hulls in MATLAB. Since other applications like nearest neighbor queries and scattered data interpolation are built on these features, you can expect robustness, performance improvements, and memory efficiency there as well. If you would like to learn more, check out the video and the Delaunay triangulation demo highlighting the new computational geometry features in R2009a, along with the reference pages for the new features.

Coming up Next

Next week I will talk about an important application area of Delaunay triangulations in MATLAB, and that's Scattered Data Interpolation. We introduced new interpolation features in R2009a that improve usability, robustness, and performance, so stay tuned for more insight into that. In the meantime, please let me know your views, thoughts, and opinions here.


Get the MATLAB code

Published with MATLAB® 7.8

15 Comments (Oldest to Newest)

Hi,

I want to explain my problem; maybe in a future release you can modify the computational geometry tools.

I am using delaunay in 3D space; I have always used the mex version of qhull. All I want is speed, because I don’t care about slivers since in my application I only need the triangles of the tetrahedrons.

I was very happy when I saw that the new tool allows for about 3x faster construction of the Delaunay tessellation of uniform datasets. But I suddenly became sad when I tried it on a sparse dataset (for those who don’t know, these are critical for Delaunay algorithms); it is terribly, very terribly slow. For a few thousand points I had to force it to stop before finishing, after minutes of computation.

EGC is a great thing, but there should be a way to turn it off; can you please add it in the next release?

I think also adding some more features from CGAL would make MATLAB even more interesting than it is now.

Thank you for your attention and the tool you provided.

Regards

Luigi

http://giaccariluigi.altervista.org/blog/

Hi Luigi,

Thank you for your feedback; you have touched on an important issue here. Most likely your data-set is highly degenerate, with many data points aligned in a regular manner. The output from meshgrid is a typical example of a highly-degenerate dataset. During the incremental construction of the Delaunay, this data leads to many cases where a newly inserted point lies on the circumsphere of an existing tetrahedron within the triangulation. This invokes EGC to resolve potential ambiguity in the point-in-circumsphere test, and you end up paying a performance penalty in return for a robustness guarantee.
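
For illustration, a dataset of this kind is easy to construct (the grid size below is an arbitrary choice):

[xg, yg, zg] = meshgrid(1:10, 1:10, 1:10);   % a gridded, highly degenerate point set
P = [xg(:) yg(:) zg(:)];
dt = DelaunayTri(P);                         % exercises the exact in-circumsphere fallback heavily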

Research and development on EGC and floating-point filters is on-going, and I am happy to report that the performance in this scenario is considerably improved in the next release of MATLAB (R2009b).

If we were to provide an option in DelaunayTri to bypass EGC, you may get lucky and get a triangulation, or you may not… In a production environment a somewhat slower valid solution is always better than an invalid one or no solution at all.

As you point out, the triangulation is 3-4 times faster in the general case, and you only pay a penalty when you hit a highly-degenerate data-set. In practice this is not too common, and it’s exactly this type of data that leads to failure in a pure floating-point Delaunay triangulation algorithm.

If you can send me your data I would be more than happy to report the improvements you can expect in the next release (R2009b).

Based on your impressive posts to the File Exchange you are in a good position to take advantage of the new Computational Geometry tools in MATLAB.

If you have any further enhancement requests please feel free to drop me an email, and keep up the good work!

Damian

Dear Damian,

Will the boolean operations for polyhedra from CGAL be included in MATLAB in the future? I am currently using a crude, nonrobust algorithm for computing intersection of polyhedra. I would like to find intersections of polyhedra in a robust and efficient way.

Thank you,
Kyle

Hi Kyle,

Being a geometry buff I personally would like to have boolean support for polyhedra in MATLAB. But in practice, we have to weigh up the real need of a feature against its usefulness to our customers at large; otherwise MATLAB would grow out of control fairly quickly. Ultimately, it’s requests from users like you that get our attention focused on features like this. At this point we do not have near-term plans to provide polyhedral booleans, and I will track your enhancement request as a vote in favor.

Best regards,

Damian

Hi Kyle,

You could potentially use a 3D DelaunayTri to compute the intersection. But the implementation may not be simple. I can’t say how it would perform relative to the functions you are currently using as I have not tried or looked at geom3d. But if you were performing the intersection computation repeatedly on the same pair of polyhedra, you may be able to make some gains.

Here’s the approach… Given two polyhedra P and Q, construct DelaunayTri triangulations Tp and Tq representing each. Test the boundary edges of Tp against Tq (and vice versa). You have 3 scenarios:
1) Both edge vertices are contained in Tq.
2) One vertex is contained in Tq and the other is not – compute the intersection point.
3) Neither edge vertex is contained in Tq – the edge could potentially pass through Tq or lie completely outside it. More analysis is needed, e.g. a ray intersection test, to qualify this case. The intersection volume would then be given by the convex hull of the interior points and intersection points.

OK, you get the idea. It’s not too difficult to get an approximate implementation up and running, but the real effort involves detecting and handling step 3 efficiently. DelaunayTri has a fast pointLocation test that will handle steps 1 and 2 very efficiently. But whether you can ignore step 3 depends on your accuracy requirements and the “shape” of your polyhedra.
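
A rough sketch of scenarios 1 and 2 using pointLocation might look like this; the example vertex arrays and variable names are purely illustrative, and the polyhedra are assumed to be convex and given as N-by-3 vertex sets.

P = [0 0 0; 2 0 0; 0 2 0; 0 0 2; 2 2 2];     % hypothetical vertices of polyhedron 1
Q = [1 1 1; 3 1 1; 1 3 1; 1 1 3; 3 3 3];     % hypothetical vertices of polyhedron 2
Tp = DelaunayTri(P);
Tq = DelaunayTri(Q);
e   = edges(Tp);                             % edges of Tp as pairs of vertex indices
vin = ~isnan(pointLocation(Tq, Tp.X));       % true where a vertex of Tp lies inside Tq
bothInside = vin(e(:,1)) & vin(e(:,2));      % scenario 1: both endpoints interior
crossing   = xor(vin(e(:,1)), vin(e(:,2)));  % scenario 2: clip these edges against Tq
% Scenario 3 (both endpoints outside Tq) still needs the extra ray-intersection analysis.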

For more ideas see Christer Ericson’s excellent book “Real-Time Collision Detection”. If you want to have a shot at implementing a Delaunay-based algorithm feel free to contact me off-line with any questions.

Damian

Math is something that is created by humans as a tool to describe our nature. All theoretical physics and even computational algorithms originate from some form of mathematical formulation. Where does the knowledge of math, number, counting, and geometry come from in the first place? Do you think math is an intrinsic property of mother nature and so it is also imprinted in our mind, naturally? Is it possible to describe nature using something other than math (e.g., planetary motions, fluid dynamics, mechanics)?

Hi Alex,

This is an interesting question and I’m sure it’s one that most scientific-minded people have pondered on. Unfortunately, working for a math software company doesn’t give me any greater insight than anyone else who appreciates the beauty of math and science. To find answers to questions like these I subscribe to New Scientist magazine, but it often leaves me with more questions than answers and it appears in my mailbox faster than I can read it. Here’s an interesting and somewhat related article that appeared in last week’s edition:

http://www.newscientist.com/article/mg20527535.100-mind-over-matter-how-your-body-does-your-thinking.html

Damian

Hello,
I had posted this earlier on the support site, I was hoping somebody would offer me a solution on how to make it work:
I have two vectors x, y:
x =

1.0000 1.0000 0.6667 0.3333 0.5000

y =

0 1.0000 0.3333 0.6667 0.5000

It seems that with all this literature about EGC it would compute the hull, but the point (0.5,0.5) is always left out. Is there any way to include tolerances in convhull computations? e.g., if a point is 1e-8 or closer to the hull, include it in the hull, maybe…
Thank you,
Raghu

The convhull function accurately computes the convex hull of the data set you provide. It does not take into account the loss of precision that may have been incurred when you derived this data. Likewise, it does not have an internal tolerance that relaxes the condition of convexity. Depending on the relative magnitude of the inaccuracies in your input data, this issue can also arise when computations are performed using standard floating point arithmetic.

Geometric computations in floating point arithmetic typically use internal tolerances to offset the loss of precision in the computation. This may capture intent more favorably in the presence of inaccurate input, but it’s a double-edged sword. This approach is inherently fragile in degenerate and near-coincidence scenarios and can lead to failure of the algorithm. If you wish to try the floating point computation you can call convhulln([x' y']) instead of convhull.

Thanks for the clarification. In my particular application, there are a lot of cases where collinearity necessarily occurs. Is it then not a good idea to expect such points to be included in the convex hull (e.g., (0.5, 0.5) between (0.3333, 0.6667) and (0.6667, 0.3333))? All points are generated internally, so they should be double precision; I don’t think much can be done to improve how well they are generated.

Thank you,

Raghu

Hi Raghu,

Did you try convhulln instead of convhull? It is not clear what you mean by “Is it then not a good idea to expect such points to be included in the Convex Hull”. One can view this as included in the output definition of the convex hull or included in the interior of the convex hull. Either way, why is it not a good idea? If you have compelling reasons why the function should behave in a specific manner then we can consider filing an enhancement, but you have to justify your reasoning.

Thanks,

Damian

Hello,

Sorry for the confusion. I was not sure if convhull includes points on the hull that are not extremal, but it seems to do so from the following example:

x = [double(0) double(1) double(2) double(0) double(0)]

x =

0 1 2 0 0

>> y = [double(0) double(1) double(2) double(1) double(0)]

y =

0 1 2 1 0

k = convhull(x,y)

k =

1
2
3
4
1

The point I’m having trouble with is as follows:

x = [double(1/3) double(1/2) double(2/3) double(1) double(1/3)]

x =

0.3333 0.5000 0.6667 1.0000 0.3333

>> y = [double(2/3) double(1/2) double(1/3) double(1) double(2/3)]

y =

0.6667 0.5000 0.3333 1.0000 0.6667

k = convhull(x,y)

k =

1
3
4
1

convhulln([x’,y’])

ans =

3 4
4 1
1 3

I fully expected the point (0.5,0.5) to be on the hull, but neither convhull nor convhulln includes it. With inpolygon I get:

[inP onP] = inpolygon(double(1/2),double(1/2), x(k), y(k))

inP =

1

onP =

1

Of course, inpolygon and convhull might be completely different algorithms and hence the disparity, which was why I felt that if there were a tolerance parameter included in convhull it could be easier to detect points on the hull boundary that are not extremal. In my particular application, the assumption of general position of points is not valid, which is why the non-extremal points also matter in the computation.

Once again, thanks for getting back, please do let me know if I can provide more information.

Thank you,

Raghu

Raghu,

Thank you for providing a more detailed example. Capturing intent is always a challenge in an edge-case scenario like the one you provided. It is tempting to suggest a tolerance, and while that may work for your case it may have shortcomings in other respects. Users may attempt to use large tolerances, and then the notion of convexity is no longer valid. The Alpha Shape may be a more appropriate construct for addressing computations like yours. This is basically a generalization of a convex hull that admits concavities. Imagine the 2D convex hull as the boundary defined by a ruler as it traverses tightly around your point set. In contrast, the 2D Alpha Shape is the boundary defined by a disk of radius R rather than a ruler. By varying R you can get a family of different shapes. Conceptually, this would appear to be more appropriate, but I cannot comment on how well it would work with your data set. It is a more worthwhile enhancement to consider as opposed to adding a tolerance to convhull.

You may be able to use other computational geometry tools to solve your problem. Here’s an example I came up with based on a triangulation of the dataset. The function triangulates the data and then “peels away” the sliver triangles that (typically) lie on the boundary, revealing the interior points that lie in close proximity. This function may not work for your particular data set; for example, the distribution of points may give rise to slivers in the interior. In any case, it’s something to consider.

function k = myconvhull(x,y,rtol)
s1 = warning('off','MATLAB:delaunay:DupPtsDelaunayWarnId');
s2 = warning('off','MATLAB:TriRep:PtsNotInTriWarnId');
tri = delaunay(x(:),y(:));     % Triangulate the points
tr = TriRep(tri,x(:),y(:));   % Use a TriRep to represent the triangulation
[~, ccrad] = tr.circumcenters(); % Compute the radius of circumcenters
idx = find(ccrad > rtol);     % Remove sliver triangles (assumed to be on the boundary) 
tri(idx,:) = [];
tr = TriRep(tri,x(:),y(:));   % Create a new TriRep and use it to compute
fb = tr.freeBoundary();       % the free boundary; this is an approximation 
k = fb(:,1);                  % of the convex hull 
k(end+1) = k(1);

Example usage

x = [double(1/3) double(1/2) double(2/3) double(1) double(1/3)]
y = [double(2/3) double(1/2) double(1/3) double(1) double(2/3)]
>> k = myconvhull(x,y,10e10)

k =

     1
     2
     3
     4
     1

Thank you very much for this piece of code. I’ll try to use this to loosen some of the constraints on the intermediate points.
Thank you,
Raghu

These postings are the author's and don't necessarily represent the opinions of MathWorks.