Gregg's blog

I’ve written my last two blogs about the pitfalls and insight needed to properly translate CAD data. I’ve discussed how “sharing” of geometry inside the data structure is a hidden but much-used form of design intent, and how geometry forms are inherently linked to high-level algorithms inside the modeler itself. But I haven’t discussed the healing operations that the Spatial translators perform in order to properly translate the different CAD formats. If you use our translators you know they exist, and people commonly ask about their purpose and efficacy.

To understand InterOp healing we have to start by borrowing a concept from any undergraduate Data Structure and Algorithms class. Generally, one views a software system as two distinct but highly inter-related concepts: a data structure and an acting set of algorithms or operators. In our case the data structure is a classic Boundary Representation structure (B-rep) which geometrically and topologically models wire, sheet and solid data. An operator is an action on that data, for example, an algorithm to determine if a point is inside the solid or not.  But the system’s operators are more than just a set of actions. Implicitly, the operators define a set of rules that the structure must obey. Not all the rules are enforced in the structure itself; actually, many can’t be. But they exist and it’s healing in InterOp that properly conditions the B-rep data to adhere to these rules upon translation.
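To make that split concrete, here is a toy sketch (emphatically not ACIS code; every name is hypothetical) of the two halves: a minimal B-rep fragment and a "rule" interface that the operators implicitly expect the data to obey. Healing is what conditions incoming data until rules like these hold.

```cpp
// Toy B-rep fragment: topology (vertices, edges, faces) pointing at geometry.
#include <memory>
#include <string>
#include <vector>

struct Surface { /* geometric carrier of a face: plane, sphere, spline, ... */ };
struct Curve   { /* geometric carrier of an edge */ };

struct Vertex { double xyz[3]; double tolerance; };
struct Edge   { std::shared_ptr<Curve>   geometry; Vertex* start; Vertex* end; };
struct Face   { std::shared_ptr<Surface> geometry; std::vector<Edge*> loop; };
struct Body   { std::vector<Face> faces; };

// A rule is not stored in the data structure; it is an expectation that the
// operators (Booleans, point containment, remove face, ...) place on the data.
struct Rule {
    virtual ~Rule() = default;
    virtual std::string name() const = 0;
    virtual bool holds(const Body& body) const = 0;
};
```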

As always, a few examples best describe the point. I picked three ACIS rules that are, hopefully, easily understandable.

All 3D edge geometry must be projectable to the surface. Anybody can define a spline-based EDGE curve and a surface and write it to SAT. Basically, jot down a bunch of control points, knot vectors, what have you, and put it in a file that obeys the SAT format. But in order for it to work properly, geometric rules for edge geometries exist. Specifically, the edge geometry must be projectable to the surface. In short, you can’t have this:

[Figure: Edge Curve (not projectable onto the surface)]

There are many reasons in ACIS for this, but primarily, if the curve is not projectable then point-perp operations are not well-behaved. If they’re not well-behaved, finding the correct tolerance (the distance between the curve and the surface) is problematic. If one cannot define correct tolerances, then water-tightness is not achieved and simple operators, like querying whether a point is inside the body, fail.
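As a concrete illustration of this rule, here is a hedged sketch that samples the edge curve and checks that each sample has a well-behaved projection (point-perp) onto the surface within a bounded gap. The evaluator callbacks are stand-ins, not a real kernel API.

```cpp
// Sample the edge curve; for each sample, ask the surface for its closest
// (point-perp) projection and compare the gap against an allowed tolerance.
#include <array>
#include <cmath>
#include <functional>

using Point3 = std::array<double, 3>;

bool edge_projectable(
    const std::function<Point3(double)>& curve_eval,            // t in [0,1] -> point on edge curve
    const std::function<bool(const Point3&, Point3&)>& project, // point-perp onto the surface
    double max_gap, int samples = 50)
{
    for (int i = 0; i <= samples; ++i) {
        const double t = static_cast<double>(i) / samples;
        const Point3 p = curve_eval(t);
        Point3 foot;
        if (!project(p, foot)) return false;   // projection not well-behaved
        const double dx = p[0] - foot[0], dy = p[1] - foot[1], dz = p[2] - foot[2];
        if (std::sqrt(dx * dx + dy * dy + dz * dz) > max_gap)
            return false;                      // gap too large for a legal edge tolerance
    }
    return true;
}
```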

Edge and Face geometry cannot be self-intersecting. A great many solid modeling algorithms work by firing rays and analyzing intersections with different edge and face geometries. In order for any conclusion to be drawn, the results of the intersection must be quantifiable. The problem with self-intersecting geometries is just that: how do you quantify the results in Figure 3? The key observation here: imagine you are walking along the curve in Figure 3, starting from the left side. At the start, the material is on the right side, but after the self-intersection the material changes to the left side. You cross the self-intersection again and the material switches to the right again. This causes endless grief in understanding the results of an intersection.
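A naive way to see how such a self-intersection is even detected: chord the curve and test non-adjacent chords against each other. This is only a sketch of the idea, not how a production kernel does it.

```cpp
// Chord a planar curve and look for crossings between non-adjacent chords.
#include <array>
#include <functional>
#include <vector>

using Point2 = std::array<double, 2>;

static bool segments_cross(const Point2& a, const Point2& b,
                           const Point2& c, const Point2& d)
{
    // Signed-area test; strict crossings only (touching/collinear cases ignored).
    auto orient = [](const Point2& p, const Point2& q, const Point2& r) {
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]);
    };
    return orient(a, b, c) * orient(a, b, d) < 0.0 &&
           orient(c, d, a) * orient(c, d, b) < 0.0;
}

bool curve_self_intersects(const std::function<Point2(double)>& eval, int samples = 200)
{
    std::vector<Point2> pts;
    for (int i = 0; i <= samples; ++i)
        pts.push_back(eval(static_cast<double>(i) / samples));
    for (size_t i = 0; i + 1 < pts.size(); ++i)
        for (size_t j = i + 2; j + 1 < pts.size(); ++j)  // skip adjacent chords
            if (segments_cross(pts[i], pts[i + 1], pts[j], pts[j + 1]))
                return true;
    return false;
}
```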

Tolerances of Vertices cannot entirely consume neighboring edges. For a B-rep model to be considered water-tight, tolerances of faces and edges must be understood. Today many kernels have global tolerances plus optional tolerances applied to edge curves and vertices. These tolerances vary depending on neighboring conditions, usually obeying some upper bound. You can think of these tolerances as the “caulking” that keeps the model water-tight. Depending on the quality of the geometry or the tolerances of the originating modeling system, you might need more “caulking” or less; respectively, larger or smaller tolerances on edges and vertices. However, in order to realize a robust Boolean engine, again, rules apply. Consider this:

Above we have Edge Curve 2 encapsulated completely inside the gray tolerant vertex. Again, I can easily write this configuration to SAT format; however, Booleans cannot process it. It yields horrific ambiguity when building the intersection graphs in the internal stages of Booleans.
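The rule itself is easy to state in code. Here is a sketch with illustrative types: each vertex carries a tolerance ball, and if that ball covers a whole neighboring edge (the chord between the end points stands in for a proper arc-length bound), the configuration is flagged.

```cpp
// Flag an edge whose end-vertex tolerance swallows the entire edge.
#include <array>
#include <cmath>

struct TolerantVertex { std::array<double, 3> xyz; double tolerance; };

bool vertex_consumes_edge(const TolerantVertex& start, const TolerantVertex& end)
{
    double chord2 = 0.0;
    for (int k = 0; k < 3; ++k) {
        const double d = start.xyz[k] - end.xyz[k];
        chord2 += d * d;
    }
    const double chord = std::sqrt(chord2);
    // If either tolerance ball covers the whole edge, the Boolean engine's
    // intersection graphs become ambiguous.
    return start.tolerance >= chord || end.tolerance >= chord;
}
```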

So this is a list of just three rules; it’s far from comprehensive. But the main point is this: we know that not everything that ends up in an IGES file comes from a mathematically rigorous surfacing or solid modeling engine. Perhaps people are translating their home-grown data into a system like ACIS so they can perform operations that they could not in their originating system. But in order to perform these operations, the data must conform to the rules of the system. To simply marshal the data and obey a file format, but disregard the rules, is doing just half the job.

That’s why healing matters.  

'Form' in mathematics manifests itself in all manner of perspectives and discussions. From your earliest mathematics courses, professors drilled home the discipline: “return all answers in simplest form”. My youthful efforts to dismiss the need yielded discussions such as: “OK, please graph this equation: y = (x² − 1)/(x + 1)”. In a quick second I would naively note that at x = −1 the equation is undefined, and then I would start plotting points. But alas, this is why form is important. (x² − 1)/(x + 1) factors to (x − 1)(x + 1)/(x + 1), which simplifies to x − 1 whenever x isn’t −1. Wait, that’s a line. My eighth grade algebra teacher, Mr. Sower, was right: simplest form is important.

As you advance in your coursework, you start to define forms of equations by their mathematical representation and to understand the advantages and disadvantages of each. Farin, in his book Practical Linear Algebra, does a nice job outlining the three main forms of the equation of a line and the advantages of each in computer graphics:

  • Explicit Form: y = mx + b. This is the form in every basic algebra book. It’s very conceptual; the coefficients have clear geometric meaning. In computer graphics it’s the preferred form for Bresenham’s line-drawing algorithm and scan-line fill algorithms.
  • Implicit Form: a · (x − p) = 0, given a point p on the line and a vector a that is perpendicular to the line. The implicit form is very useful for determining whether an arbitrary point is on the line.
  • Parametric Form: l(t) = p + t·v, where p is a point on the line and v is a direction vector. The scalar value t is a parameter. In this form, we can easily calculate points along the line by use of the parameter t. (A small code sketch of the latter two forms follows this list.)
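For completeness, a small code illustration of the latter two forms; the names are ad hoc and not taken from Farin’s book.

```cpp
// Implicit form: a . (x - p) = 0, with a perpendicular to the line.
// Parametric form: l(t) = p + t * v, which generates points directly.
#include <array>
#include <cmath>

using Vec2 = std::array<double, 2>;

bool on_line_implicit(const Vec2& p, const Vec2& a, const Vec2& x, double eps = 1e-9)
{
    const double d = a[0] * (x[0] - p[0]) + a[1] * (x[1] - p[1]);
    return std::fabs(d) <= eps;   // zero (within eps) means x lies on the line
}

Vec2 point_at(const Vec2& p, const Vec2& v, double t)
{
    return { p[0] + t * v[0], p[1] + t * v[1] };
}
```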

I’m not certain when I internalized that inherent in mathematics is the art, strategy and beauty of 'form'. (I’m a slow learner; it wasn’t Mr. Sower’s fault.) But as my career developed into the commercial implementation of B-rep modeling kernels and their translation technologies, 'form', again, became a principal concern.

So, for the purpose of this discussion we define the 'form' of geometric curves and surfaces in three ways: analytic, B-spline and procedural representations. All three of the major solid modeling kernels, ACIS, CGM, and Parasolid, maintain their geometry in one of these three forms, or sometimes as dual representations. [1]

  • Analytic: geometry that can be represented explicitly by an equation (algebraic formula), for example planes, spheres, and cylinders. These equations are very lightweight and they intrinsically hold characteristics of the surface, for example the centroid of a sphere.
  • B-spline: geometry represented by smooth, piecewise-defined polynomial functions (in parametric form). Generally used to represent free-form surfaces. Advantages of B-splines are their ability to represent many types of geometry and the ease of computing bounding boxes.
  • Procedural: geometry represented as an implicit equation or algorithm. For example, the IGES specification defines tabulated cylinders and offset surfaces as procedural surfaces. The advantages are precision and the retention of progenitor data to understand design intent.

From this perspective, each of the major kernels has thought long and hard about the best form for each geometry type. In some cases it’s self-evident and easy. If you are modeling a sphere, the analytic form is clearly best. It’s a lighter-weight representation and the full extents of the surface are defined. Even more, imagine doing a calculation requiring the distance between a point and a sphere. In this “special” case, you simply compute the distance between the point and the centroid of the sphere, subtract the radius, and you’re done. If the sphere were in the form of a B-spline, the same query is much more computationally expensive. Despite this, some translation solutions still don’t get this preferred form right. Now imagine you’re building a CMM application and you purchased the solution that translates spheres to B-splines. Your app is horribly slow.
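To put a number on that “special” case, here is a minimal sketch of the analytic calculation; the Sphere type and function name are illustrative, not a kernel API. The B-spline version of the same query would require an iterative point-perp against a free-form surface.

```cpp
// Distance from a point to an analytic sphere: one subtraction and one norm.
#include <array>
#include <cmath>

struct Sphere { std::array<double, 3> center; double radius; };

double distance_to_sphere(const std::array<double, 3>& p, const Sphere& s)
{
    double d2 = 0.0;
    for (int k = 0; k < 3; ++k) {
        const double d = p[k] - s.center[k];
        d2 += d * d;
    }
    // Unsigned distance to the sphere's surface.
    return std::fabs(std::sqrt(d2) - s.radius);
}
```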

Although spheres are a trivial example, more complex geometries become intriguing. In what form should you prefer a helical surface? Or an offset surface? ACIS has preferred multiple versions of helical surfaces over the years. Early on, the preferred version was a procedural sweep surface with either a procedural rail curve or a B-spline rail curve. (The rail curve is what defines the helical nature of the surface.) If the surface was translated in from Parasolid, it came in as a generic B-spline surface. But the need to understand different characteristics of the helical surface soon became apparent. For example, the hidden-line-removal algorithm and the intersectors all needed to understand the pitch and handedness to work efficiently with the geometry. To that end, ACIS moved to a procedural surface definition with an analytic representation of the helical rail curve.

The offset surface is an excellent example where the CGM developers and the ACIS developers came to different conclusions. In ACIS the offset surface is procedural: evaluate the progenitor and step along the surface normal by the offset distance. ACIS chose this representation for its precision and compactness of data. In addition, in ACIS, if you offset an offset surface, the progenitor of the original offset becomes the progenitor for the second or third or fourth offset, and more geometry sharing is possible. But all of this comes at a cost. Procedural surfaces, although exact, may carry a performance penalty and may introduce unwanted discontinuities. The CGM developers decided the best strategy here was to create B-splines for offsets.
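Here is a hedged sketch of that procedural evaluation idea (evaluate the progenitor, then step along its unit normal by the offset distance). The evaluator callbacks are stand-ins, not ACIS or CGM API.

```cpp
// Evaluate a procedural offset surface at (u, v) from its progenitor.
#include <array>
#include <cmath>
#include <functional>

using Point3 = std::array<double, 3>;

Point3 evaluate_offset(
    const std::function<Point3(double, double)>& progenitor_point,   // P(u, v)
    const std::function<Point3(double, double)>& progenitor_normal,  // N(u, v), not necessarily unit
    double offset, double u, double v)
{
    const Point3 p = progenitor_point(u, v);
    Point3 n = progenitor_normal(u, v);
    const double len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    for (double& c : n) c /= len;   // normalize the progenitor normal
    return { p[0] + offset * n[0], p[1] + offset * n[1], p[2] + offset * n[2] };
}
```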

So what does this all have to do with translation? The key point is that you need to understand the preferred forms for each of the modeling kernels. In each of these systems you can easily slip in geometry in non-optimal forms, causing immense grief in downstream modeling operations. I spoke earlier about the translator solution that goofed up even a simple conversion of spheres. And the CMM application that purchased that translation solution? In short, don’t let that be you.

[1] For this discussion I’m going to leave off polyhedral form.


In my last post, I introduced our idea of B-rep Health and the notion of "legal" but bad B-rep modeling data. Literally, the day after publishing that post, a beautiful, classic case came into development where a "remove face" operation failed due to unhealthy B-rep data. And again, as we see so many times, the culprit was bad translation (an unknown third-party translator). It’s a nice example, not just because the pathology can be described so conceptually (it can, as you will see), but because it shows the subtle, implicit information that is maintained inside a B-rep data structure: information you might not even know is used. And lastly, it shows why the fundamentals of B-rep data translation are so important.

So consider a modeling scenario like this: start with a basic shape that we call a wiggle. It’s a block with a free-form (B-spline) surface as the top face (picture 1). Fillet one of the edges along the top, creating a filleting surface as shown (picture 2). Now build some form of a feature that cuts the filleting surface in two. Here we simply build a notch in the body (picture 3).

Now translate the part to IGES and import it into a different modeling kernel, like ACIS or CGM. From here, it’s not uncommon that one would "defeature" the part, perhaps for a CAM operation. This involves taking the notch and removing it, which should produce the original wiggle with the filleted edge. Of course, I wouldn’t be writing this blog if something didn’t go wrong. One would expect this to always work. Well, there can be trouble; but first, let’s take a quick look at how the remove algorithm works.

The remove algorithm is simple: you unhook and delete the input faces (the faces which are to be removed). You then extend the neighboring faces (called the moat ring), intersecting them with each other and using the curves generated from the surface/surface intersections to heal the gap and build the needed edges. So in this case, we will intersect (and possibly extend) surface A and surface B shown below.

Now we are at the key point of the analysis. Surface A and surface B are the exact same surface, and it’s ill-fated to try to intersect a surface with itself (this should be self-evident). Before translation, in the original B-rep, the face presiding over surface A and the face presiding over surface B are different, but they both point to the exact same geometric surface underneath. This is called "sharing". If the surface is shared, the remove algorithm knows the two faces sit on the same geometry and doesn’t attempt the ill-fated intersection. Everything is taken into account and the remove operation works with the original B-rep. OK, but what happened during translation? And here is where good translation matters. Let’s now look at how this model got translated.
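To make that identity test concrete, here is a very rough sketch of the healing step with purely illustrative types. A real remove operation extends and intersects only the relevant neighboring faces, but the same-surface guard is the point.

```cpp
// Intersect moat-ring faces pairwise, skipping pairs that share one surface.
#include <memory>
#include <vector>

struct Surface { /* spline or analytic data */ };
struct Face    { std::shared_ptr<Surface> geometry; };
struct IntersectionCurve { /* curve used to heal the gap and build new edges */ };

// Stand-in for the kernel's surface/surface intersector.
IntersectionCurve intersect_surfaces(const Surface&, const Surface&) { return {}; }

std::vector<IntersectionCurve> heal_gap(const std::vector<Face*>& moat_ring)
{
    std::vector<IntersectionCurve> curves;
    for (size_t i = 0; i < moat_ring.size(); ++i)
        for (size_t j = i + 1; j < moat_ring.size(); ++j) {
            if (moat_ring[i]->geometry == moat_ring[j]->geometry)
                continue;   // shared surface: never intersect a surface with itself
            curves.push_back(intersect_surfaces(*moat_ring[i]->geometry,
                                                *moat_ring[j]->geometry));
        }
    return curves;
}
```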

If you have weak translation, you might do something like this. (And this, I believe, is the scenario behind this bug.) The translator had some method of processing faces (and this could have been done when writing to IGES) that went face by face, writing out each face and the surface underneath it. If two different faces pointed to the same surface, it didn’t care; it just processed the surfaces as if they were unique. Basically, the translator didn’t bother to share. Now the future remove operation "thinks" they’re different surfaces, and this causes the intersectors endless grief. I suppose you could go back to the unnamed company and tell them this is bad, please fix it. Perhaps they will tell you they get sharing correct in some cases but not all (after all, sharing is not a complex topic; it’s a concept that was built into even the first B-rep technologies), and that for them it’s simply a performance benefit to reduce size and processing time. The model’s OK without it. Maybe they will get to fixing it, maybe they won’t. After all, even without precise sharing, the model got translated and passes any industrial checker. But performance and checking!? That’s not the point. If you’re modeling, it’s an entirely different story. Modeling operations will not work, as you are removing key information from the B-rep that these operations need. [1]
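And here, schematically, is what the weak translator effectively does; the types are illustrative, but the effect is real. Every face ends up with its own copy of the surface, so the identity test in the sketch above can never succeed after the round trip.

```cpp
// Duplicate the shared surface per face instead of preserving sharing.
#include <memory>

struct Surface { /* spline data */ };
struct Face    { std::shared_ptr<Surface> geometry; };

void translate_without_sharing(const Face& src_a, const Face& src_b,
                               Face& dst_a, Face& dst_b)
{
    // Original model: src_a.geometry == src_b.geometry (one shared surface).
    dst_a.geometry = std::make_shared<Surface>(*src_a.geometry);  // copy #1
    dst_b.geometry = std::make_shared<Surface>(*src_b.geometry);  // copy #2
    // Now dst_a.geometry != dst_b.geometry: the remove operation "thinks"
    // they are different surfaces and attempts the ill-fated intersection.
}
```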

Ok, so maybe this turned out to be a rant; and having a rather intense, five-year-old son, I can’t believe I have to come to work and talk about "sharing". But these things matter, along with so many other fundamental principles that need to be taken into account during translation (future blogs).  I’ve learned that working in a company that has both a modeling product and a translation product greatly helps with the insight (and motivation) you need to get translation right. As I said in my last post, choose your translation solution wisely.



[1] I should follow up by saying that in ACIS we could add a check to always see if two surfaces are the same prior to intersection (comparing the data definitions, i.e., knot vectors, control points, etc.). But the next billion surface/surface intersections will not have identical surfaces, and you would have introduced an unneeded check that always has to be done. We don’t want to go there!


We’ve known for a long time that the integrity of B-rep data plays a major role in the success of downstream modeling operations; but it has always been a difficult task communicating this back to our users in a meaningful manner. For ACIS we have had an external geometry and topology checker since the beginning of the product; it serves the function of defining illegal state(s) of the model.  It has, and still does, serve its purpose. But I knew something was amiss when I kept seeing in-house debugging tools written by developers that reported back B-rep pathologies that were never a part of our external checker. The ACIS developer would see a pathology using these tools and, more often than not, deal with the situation by placing a “fix” in the algorithm to detect the pathology and correct the situation by some form of data manipulation (e.g., re-computing secondary geometry) or expanding the algorithm to handle more numerical inaccuracies / bad data. Although this is one way of doing business (and it shields the application developers from the immediate problem), you trend towards a slower modeler on good data, and bloated B-rep data size on bad data. What’s worse, the application developer never understands why.

So it’s been a long-standing issue with ACIS, and I imagine with other modelers too: how do you assess data that is legal, but simply bad? After observing ACIS developers using their in-house tools, the notion of B-rep health (and a future operator) started to crystallize, largely based on the following:

Principle #1: There is a notion of legal, but unhealthy, B-rep data. In B-rep modeling, we would all like to live in a black and white world. Tell me, is the data bad or is it not? Well, things are not that simple. For example, it’s not uncommon that we receive models containing edges only slightly longer than the modeling tolerance (sliver edges). They almost never reflect design intent and they cause significant difficulties in downstream operations, especially during Boolean operations when the tolerance of the blank (the second model in the Boolean operation) might be larger than the length of the sliver edge in the tool (the first model in the Boolean operation). But in the pure definition of the B-rep they are not illegal. They can have a purpose and sometimes do reflect design intent. So we say they are legal, but largely bad (or unhealthy).
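As an illustration of how a health check (as opposed to a legality check) might flag such data, here is a sketch that reports edges that are legal but barely longer than the modeling tolerance. The threshold factor is made up for the example; it is not an ACIS value.

```cpp
// Report indices of edges whose length is legal but suspiciously close to
// the modeling tolerance (candidate sliver edges).
#include <cstddef>
#include <vector>

struct EdgeInfo { double length; };

std::vector<std::size_t> find_sliver_edges(const std::vector<EdgeInfo>& edges,
                                           double modeling_tolerance,
                                           double factor = 10.0)
{
    std::vector<std::size_t> suspects;
    for (std::size_t i = 0; i < edges.size(); ++i)
        if (edges[i].length > modeling_tolerance &&        // still legal...
            edges[i].length < factor * modeling_tolerance) // ...but likely unhealthy
            suspects.push_back(i);
    return suspects;
}
```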

Principle #2: The health of your B-rep data is context-sensitive. What might be unhealthy B-rep data for a future local operation is not necessarily unhealthy data for a future Boolean operation. For example, almost all local operations (move, taper, remove face) require surface extensions; Booleans do not. So imagine a B-spline surface with a parameterization such as the one below:

Figure 1: Converging parameter lines

The surface will pass any industry-standard checker; by all mathematical requirements, it’s legal in the state that it’s in. Booleans should be fine, as well as other operations such as Body Point distance. So, for a lot of application domains the model will work. But extend this surface, and you get this:

Figure 2: Surface after extension

It quickly self-intersects. Any operation that requires an extension will fail (depending on the extension distance required). In all my discussions with the development team, we describe this surface (before extension) as being legal, but unhealthy. (Principle #1 again as well.)

Principle #3: B-rep health is measured as a continuous spectrum. We just discussed two cases that are legal, but unhealthy. It’s also the case that many forms of pathologies in B-rep data are very localized. In the case of high curvature on a surface or bad parameterization, it might not affect the next 100 modeling operations that you perform, because you never hit it precisely. If only 1 of 1,000 surfaces in your model has high curvature, how unhealthy, really, is your data? Additionally, as I stated earlier, ACIS has a great deal of code to deal with bad data, so your modeling operation might work, albeit less efficiently. So it’s not a discrete value we can state; again, it’s not all good or all bad. For us, it’s best expressed as a continuous spectrum. I have been using the analogy I see every year when I use TurboTax® to do my taxes. When you complete your taxes, they give you an indicator of your chances of being audited. They do not say your tax return is wrong or illegal, or that you will get audited; just the chance that you might.

Principle #4: Make sure the basics are right, or forget about it. We recently had model data from a customer in which all analytic geometries were represented in the form of B-splines. That is, what should have been an analytic sphere was represented as a (poorly crafted) B-spline. Although not illegal, this has obvious, serious implications: it’s much heavier on model size for just the representation of the surface itself, not to mention you now need p-curves, etc. All downstream atomic modeling operations like point-perp and intersections go through general algorithms and don’t benefit from special casing; surface extensions are not natural; and so on. But none of this, really, is the main point. Assessing the B-rep health of this model was akin to checking the cholesterol level of a patient suffering total organ failure. The model, in this case, was the product of a third-party translator. The lesson here: pick your InterOp solution wisely! The basics have to be right, or the measurement of health is all nonsense. And by using the term basics, I do not mean to imply they are self-evident or easy. We have done a great deal of work on the ACIS translators to make sure the fundamentals are maintained. (Like a math book with the word “elementary” in the title; never, ever, associate that with “oh, this will be an easy read”. Fundamentals and basics can be very difficult.)

So, as you might assume, (or if you have attended our 3D Insiders’ Summits) a B-rep health operator is coming out in ACIS R23. I hope this gives you an idea of the thought process behind the work. It should also foreshadow potential behaviors, such as context setting and how data might be returned. Additionally, this effort will take on many forms, from the operator itself, to the continual advancement of the healing operators in our InterOp translation suite, and eventually, other various forms of correction. For now, you can play with an early version of the operator in Spatial Labs.

Good health!


[Photo: Spatial Developers in a Team Room]

So Stefanie's right; but a slight introduction before the criteria. We were floundering big time with Agile. Our original belief was a take-no-prisoners, do-everything-the-book-said, all-R&D strategy. What we ended up with were endless meetings and mind-numbing philosophical debates on “what Agile really was”. It didn’t help when we heard outside sources say, “well, I’ve seen it before and it was like this”. But these outside sources were hapless in helping us install it at Spatial. I personally felt we were chasing a ghost.

So this is what we did: first, we carved out one significant 'delivery' of R&D that needed to be done by the next release (later to be called an epic). I let the rest of the R&D activities, the smaller miscellaneous ones (bug fixes, minor enhancements), go on their merry way unencumbered by Agile. I assigned six people to this epic delivery, for a six-week duration (later to be called a sprint), and gave them the following conditions and promises:

#1 You will work on this epic and this epic only.
No more having a delivery team schedule a bunch of disparate activities for an iteration: minor bug fixes, major project work, along with all the other crap that happens in the daily life of developers. If someone in the group was a specialist who only knew a certain area of code, I would prioritize whatever needed to be done on that code for later. Or, I would MAKE someone else (not on the epic) learn on the job. (This is why, largely, we only did this with a subset of our R&D staff.) But the major point was to allow them to do Agile on one specific epic, and to not be interrupted!

#2 You will all go in one room and you will not work from your offices or cubes.
BUT, and this is very important, we will never, ever, take your private office or cube away. (I had to get a personal promise from our CEO.) It was this condition that started our notion of a Team Room.

#3 The epic will be well defined and relatively narrow or specific.
I wouldn’t let an epic (or team room) start unless we had some reasonable definition and a prototype worked out. These prototypes might have been done individually or by a small group of people, but we had a pretty good idea of how we wanted to solve the problem before the team room started. Hence, it was reasonably well-defined and the group could start running on day one.

#4 All the resources needed for the epic will be in the team room.
There were to be NO external dependencies. I didn’t want to hear, “we’re blocked, Fred from the blah-blah group needs to deliver this code before we can move on”. If Fred had something that needed to be done for that epic, he was in the team room for the entire six weeks. (This meant it was his ass on the line as well.) This included QA, documentation, and a position we created called the 'Team Room PM'. The Team Room PM was the priority man; he was on the team and made all final decisions. (But to be sure, the epic was planned and scheduled by the higher-level PM group outside of development.) This did mean we had to think carefully about who was to be in the room. (Hence, number three is important.)

#5 It ends in six weeks.
After it’s over, you can spend time working independently: prototyping future epics, scanning code, fixing bugs, reading, and most importantly, thinking. You can go back into a team room when the next one starts and you’re ready.

#6 If you fail, fail quickly and decisively.
I’m a big believer in having an environment where people feel 'safe to fail'. It’s not that I wanted epics or team rooms to go bust; it’s more that I wanted transparency. We work on a very complex and difficult piece of software. I’ve had what I thought were great ideas that didn’t pan out, and if they didn’t, you had to be man enough to say, “well, rats, that didn’t work”; and management needs to know when to stop feeding a dead horse. (Of course, the horse could be the project or you!)

#7 Lastly, the rest of XP and Agile is up to you.
Pair program, don’t pair program. Unit test, don’t unit test. Play planning poker, don’t play planning poker. Decide your own iteration schedule: one-day iterations, four-day iterations, one-hour iterations . . . I don’t care! This might have been exhaustion on my part, but this is where it all got interesting . . . all the things that we were trying to force earlier, especially pair programming and teaminess (not a word), now happened naturally. We didn’t have to force them or set up goofy metrics to measure how closely we were adhering. (One brain-dead idea from the first year was to award an iPad to the developer who logged the most pairing hours.)

P.S. The one aspect of Agile (or XP, if you like) I didn’t address was 'vertical slicing'. Okay, there is a lot of XP/Agile that I didn’t address, but I’m a big believer that vertical slicing is a most central and important concept. I didn’t want the team rooms to ignore this, but again, I didn’t want to force it. The question in my head was . . . if conditions #1 - 7 were in place, would vertical slicing become a natural practice, like pair programming did?

You’ll have to wait for the next post to find out!
