3D Modeler

It’s Easier Using 3D Solids

By David

In my last post I constructed a thin-walled 3D body of a teapot from surface data using surfacing operations.  Whilst the code I used worked, it was more complicated than I expected when I started, so I took a second look at the workflow.  Once again, here is the description of the input data:

Color      Description
Blue       Sides
Green      Rim
Turquoise  Spout
Magenta    Tip of Spout
Yellow     Handle
Orange     Lid
Red        Base

The simplification I came up with was to convert my model to 3D as early as possible, before cutting any sheets or thickening.  That way I would have to do much less work myself and make use of the solid based algorithms already built into the geometry kernel.
Remaking the Teapot

With this in mind, the first thing to note about the surface data is that the sides of the teapot stop in horizontal planes, top and bottom, which makes it very easy to construct a solid from them by closing the volume with planar faces.

The teapot has six parts which can be made into separate solids in this way. Shown here in an exploded view are the main part of the body, the handle and the spout. Each solid was constructed by combining the surface patches and then closing the volume with planar faces, a combination of two straightforward operations. The lid, the rim and the tip of the spout are converted into solids in the same way, but are omitted for clarity.

Uniting these three solids into one is trivially easy with a geometry kernel; there are none of the inside/outside ambiguities we had with sheet bodies, because for solids the distinction is unambiguous. Next, it is a simple matter to shell the body to the required thickness, remembering to specify a different thickness for the faces that make up the spout. You can see the hole at the base of the spout, which was created automatically by the shelling operation.
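The shape of this combine, cap, unite and shell sequence can be sketched as a toy data structure in JavaScript. To be clear, this is a mock for illustration only: none of these function names come from any real kernel API, and a real shell operation does far more than tag faces with a thickness.

```javascript
// Toy stand-ins for the kernel operations described above.
// A "solid" is just a named list of faces; this only shows the
// workflow shape, not real geometry.
const sew = (...faceSets) => faceSets.flat();          // combine patches
const cap = faces => [...faces, 'planar-cap-top', 'planar-cap-bottom'];
const unite = (...solids) => ({ faces: solids.flatMap(s => s.faces) });
// Shell: give every face a wall thickness, with per-face overrides
// (the spout is thinner than the body in the article).
const shell = (solid, t, overrides = {}) =>
  solid.faces.map(f => ({ face: f, thickness: overrides[f] ?? t }));

const body   = { faces: cap(sew(['side-1', 'side-2'])) };
const spout  = { faces: cap(sew(['spout-1'])) };
const handle = { faces: cap(sew(['handle-1'])) };

const teapot  = unite(body, spout, handle);
const shelled = shell(teapot, 2.0, { 'spout-1': 1.0 }); // thinner spout
console.log(shelled.length); // → 10
```

The point of the sketch is how little application code is left once everything is a solid: each step is a single kernel call, and inside/outside never has to be specified.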

Having shelled out the main section of the pot, the solid rim is united with the body.  The match between the rim and the shelled-out body was very good, both in position and in curvature. The lid is shelled out separately; however, I kept its handle solid, as would be the case in a real teapot.

Those of you familiar with the Utah teapot might know that the original data set did not have a base; it was added later, and some people view it as impure.  I did not use the base for two reasons: firstly, the surface patches exhibit high curvature, making them harder to shell out to the required thickness; secondly, and more fundamentally, the curvature should not be there at all.  I have never seen a real teapot with a curved base.

In Conclusion

The workflow I was interested in was relatively straightforward with what proved to be a very clean data set.  Using sheet body operations I was able to complete the workflow, but there were some issues that were not immediately obvious when I started.  Converting the whole problem into 3D solids as early as possible removed the ambiguities of “inside” and “outside” and used the geometry kernel in a more elegant way, with much simpler code.


Making 3D Solids from Surface Data

By David

Recently I was looking at some surfacing operations and I thought it would make an interesting case study to complete the entire workflow of building a 3D solid model from a set of surfaces.  My idea was simply to join the surfaces together and then thicken the resulting sheet body to make a thin walled solid.

I chose the Utah teapot, created by Martin Newell at the University of Utah in 1975.  The data was originally created for graphics developers to use as test data for lighting and rendering algorithms; it is simply a set of surfaces representing the exterior of the teapot, extending as far as the visible portion but without any of the internal details such as the inner faces.

Creating the Surfaces from the Input Data
The data consists of 32 Bezier patches together with a code snippet to read them.  I adapted the code to read in the patches and created a corresponding set of NURBS sheet bodies with relative ease.  I have found it convenient to split the patches into 7 groups to identify them.

Color      Description
Blue       Sides
Green      Rim
Turquoise  Spout
Magenta    Tip of the Spout
Yellow     Handle
Orange     Lid
Red        Base
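At the heart of any reader for this data is the evaluation of a bicubic Bezier patch from its 4×4 grid of control points. Here is a minimal JavaScript sketch using de Casteljau's algorithm; the flat test patch at the end is my own, not part of the teapot data, and this is not Spatial's actual reader.

```javascript
// Evaluate one cubic Bezier span (4 control points) at parameter t
// by repeated linear interpolation (de Casteljau's algorithm).
function deCasteljau1D(pts, t) {
  const p = pts.map(q => q.slice()); // copy so the input is untouched
  for (let r = 3; r >= 1; r--) {
    for (let i = 0; i < r; i++) {
      for (let k = 0; k < 3; k++) {
        p[i][k] = (1 - t) * p[i][k] + t * p[i + 1][k];
      }
    }
  }
  return p[0];
}

// Evaluate a bicubic patch: reduce each row in u, then the
// resulting column of four points in v.
function evalPatch(grid, u, v) {
  const col = grid.map(row => deCasteljau1D(row, u));
  return deCasteljau1D(col, v);
}

// A flat 4x4 patch over the unit square at z = 0: rows vary in y,
// columns in x, so by linear precision evalPatch returns [u, v, 0].
const flat = [0, 1, 2, 3].map(i =>
  [0, 1, 2, 3].map(j => [j / 3, i / 3, 0]));
console.log(evalPatch(flat, 0.5, 0.5)); // approximately [0.5, 0.5, 0]
```

The 32 teapot patches are exactly this kind of 4×4 grid of [x, y, z] control points, which is why converting them to NURBS sheet bodies is straightforward.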

Whilst the data is sufficient for graphical applications, a couple of aspects of the surfaces are different from what I would expect from a 3D modeler.  These differences are where the workflow gets interesting.

“Inside” and “Outside” for Sheet Bodies

The first issue, which a rendering application does not reveal, is that some of the surface patches intersect; both the spout and the handle protrude through to the inside of the teapot sides.

Calculating the intersections between the surfaces is straightforward; however, when modeling with sheet bodies there is no concept of “inside” and “outside”, so typically the application has to decide which pieces are required and tell the geometry kernel what to keep.

The image shows in red the portions of the spout and teapot sides that are not required; in my workflow I had to specify explicitly that I did not wish to keep them.

Thickening Surfaces into Solids
Offsetting a surface is a standard geometric operation and the basis of a thickening operation; however, as the offset distance approaches the radius of curvature, cusps are introduced, and as the offset increases further, self-intersections occur.  A function of the geometry kernel is to devise strategies to avoid these artifacts in the result of the operation whenever possible.
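To see why the radius of curvature matters, consider the simplest possible case: a circle offset inward along its normals. This toy JavaScript sketch has nothing to do with the kernel's actual strategies; it only demonstrates the degeneracy the text describes.

```javascript
// Offset a point of a circle of radius r inward along its unit normal.
// A circle's radius of curvature is r everywhere, so an inward offset
// by d yields a circle of radius r - d: fine while d < r, degenerate
// (a single point) at d = r, and "inside out" beyond that.
function offsetCirclePoint(r, theta, d) {
  const p = [r * Math.cos(theta), r * Math.sin(theta)];
  const n = [-Math.cos(theta), -Math.sin(theta)]; // inward unit normal
  return [p[0] + d * n[0], p[1] + d * n[1]];
}

const r = 1.0;
for (const d of [0.5, 1.0, 1.5]) {
  const a = offsetCirclePoint(r, 0, d);
  const b = offsetCirclePoint(r, Math.PI, d);
  // Distance across the offset "circle" is 2*|r - d|: it shrinks to
  // zero at d = r, then grows again as the offset passes through itself.
  console.log(d, Math.hypot(a[0] - b[0], a[1] - b[1]));
}
```

On a general surface the radius of curvature varies from point to point, which is exactly why thickening a tightly curved patch, such as the teapot rim below, needs a different strategy.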

An example of this is the rim, where the patch actually represents both sides of the teapot wall.  In fact it does not make sense to thicken the rim at all; instead, the strategy I used was to construct a blend surface which joins the inside of the teapot side with the inside of the rim.

After construction of this blending surface a closed volume is defined, which can then be combined with the side walls into a single solid body.

Completing the Model
To complete my model, the handle and the lid were straightforward, along the lines already described, though the spout required a slightly different thickness and some extra work to make the tip match up.

The workflow I envisaged at the start turned out to be more complex than I thought, and on reviewing the code I had written it seemed too involved for a relatively simple model.  There is a much simpler way to achieve the same result, and that will be the subject of my next article.


How High Can You Jump?

By Guest

By Kevin Tatterson

I’ve invented a new phrase: "highly correlated single metric". An hcsm is a single test for assessing the fitness of someone or something, whose result would be highly correlated with the result of conducting numerous tests.

For example, there are many different types of fitness tests. SWAT teams and the Navy SEALs have a number of tests for assessing fitness: max number of push-ups in 1-2 minutes, long jump, pull-ups, timed sprints, vertical jump, and more. Studies show, however, that the vertical jump test is highly correlated with overall athletic performance.

Imagine that: admission into elite Special Forces could depend entirely on how high you can jump. It’s reasonable, right? I mean, the facts state that a strong correlation exists! We could streamline the admission process down to a simple, single, vertical jump test – we’d save time, money, and remove having to think. In fact, we could fully automate the test and get rid of any human involvement. Just walk up to a special machine that measures your vertical jump – if you’re high enough, you’re in!

Of course I jest. If this silly fiction became reality, everyone who wanted to be in the Special Forces would simply practice their vertical jump – nothing else. As a result, their overall fitness would suffer, and our Special Forces would be composed of people who could do nothing well – except jump.

What is funny about the hcsm is the human desire for one. We seem to be programmed to believe hcsm’s exist – and almost always to believe the marketers who spin them. Given these tendencies, all of us should think carefully when presented with hcsm-style comparative data.

Here are some places you can find hcsm’s:

• CPU’s. Five-plus years ago, the hcsm was clock speed. Today, with multiple cores and ever-evolving chip architectures, clock speed is just a small part of the equation. Industry-provided benchmarks try to fill the hcsm need (BAPCo Sysmark, SPEC, etc.) – but they can be controversial. Consider that AMD recently resigned from BAPCo because its Sysmark metric doesn’t show any benefit for AMD’s latest chip, which they spent the last three-plus years designing and bringing to market (AMD integrated GPU capabilities into their Llano APU).

• GPU’s / video cards. Fifteen or so years ago, when the first generation of 3-D games was really taking off (Quake, Unreal, etc.), GPU benchmarks were written to compare 3-D video card performance. Guess what happened? For a brief period of time, GPU chips were designed to maximize the benchmark metrics – but provided little real-world performance improvement. Wisely, benchmarks have since evolved to reflect real-world performance (Futuremark 3DMark).

• LCD televisions. Buying an LCD TV with the “best specs” has turned into a sad experience. Marketers want you to focus on hcsm’s like “contrast ratio” and “response time”. However, these metrics have all but lost their meaning, with Sony claiming an “infinite contrast ratio” and “response times” being measured differently by different manufacturers (GTG = gray-to-gray, or BWB = black-white-black). Maximum PC published a great article debunking these myths.

• Interviewing. The book Sway: The Irresistible Pull of Irrational Behavior (written by Ori & Ron Brafman) states that the hcsm for interviewing is a technical interview. Reducing the interview process to being strictly technical, however, is akin to the Special Forces focusing solely on the vertical jump test – with all the negative consequences.

• Supercars. The latest season of Top Gear BBC has poked fun at the fact that supercar manufacturers are obsessed with using the Nürburgring test track as the ultimate hcsm. As a result, other real-world automotive features and comforts are ignored. Ironically, however, the only numerical measurement Top Gear BBC produces and ranks is: lap times around their own test track.

• Software Quality. Almost every VP of R&D and Director of Quality has strong hcsm tendencies for their product’s quality. Few agree on the hcsm, though: bug flow rates, the number of static code analysis issues, the number of open defects, code complexity, customer satisfaction surveys, and so on. (Admittedly, it is typical to use more than one of these metrics.) Whatever the case, all of these measures can be influenced by outside factors and/or “fudged”.

As you can see, on its own, the hcsm is actually pretty weak. It leaves many questions unanswered, even good metrics can be faked, and marketers spin their own meaningless hcsm’s.

Now to the crux point: what would be the highly correlated single metric for a 3D Modeler? I surveyed some of Spatial’s most experienced Modeling developers/staff for their thoughts:

• Jeff Happoldt & Karthick Chilaka gave the same response: "Market share. Strong market share indicates that the Modeler must work. If it’s been accepted by the majority of the market, it must have the best qualities."

• Vivekan Iyengar: "Given a complex model, push numerous cutting planes through it in all three dimensions. This will result in many difficult, near coincident intersections. This heavily tests surface/surface intersections."

• John Sloan: "This isn’t possible. Every Modeler is its own ecosystem – each has evolved via selection pressure, and their environments have dictated their growth. As a result, every Modeler will have different strengths and weaknesses. Forced to choose, however, I’d evaluate Boolean/Intersection robustness."

• My answer: "The size of the release package. The thinking: assuming all Modelers have equal functionality, a concise package implies careful architecting."

Here’s the real story, though: when I asked my coworkers this question, they balked (heck, even I balked at my own question). They didn’t like the feeling of being cornered into a "single metric". Intuitively, they wanted to evaluate many different aspects of the Modeler.

I guess that’s just it. A highly correlated single metric should be thought of as nothing more than what it is: a statistic. You’d be foolish to make decisions based solely on hcsm’s. In my opinion, the best selection decisions are made under careful scrutiny, in real world situations – and that takes time and effort.


Solid Modeling and AJAX3D

By Gregg

I’m going to follow up on my first blog, A Look At Spatial Labs, by writing about the technology behind the web site, specifically how the AJAX design pattern was used to create a dynamic, 3D graphical viewport.

We knew when we first created the Spatial Labs site that we wanted the graphics viewport to act and look like it would in a typical desktop application; not just in presentation (for example, a sophisticated coloring model, shadows, gridlines, view, pan, rotate, etc.) but in its behavior upon user interaction with the model. We wanted to be able to select faces and edges on the model, perform modeling operations and update the viewport as smoothly and “flicker free” as possible. This, of course, meant that we needed to build an interactive (dynamic) web site as opposed to a simple site of static HTML pages. In the next blog I’ll discuss the array of options for the 3D graphics themselves, but for now I would like to limit the discussion to the latter: how can one make a graphics-based website dynamic, as if you were running a typical desktop application?

There is nothing new or earth-shattering here; dynamic web sites showed up years ago, largely due to a concept called AJAX. Although this is not a blog about general web application development, a little introduction is needed (as AJAX is the key); and in order to save you from chasing the link, I’m going to shamelessly copy the first paragraph of the Wikipedia article which introduces AJAX.

Ajax (pronounced /ˈeɪdʒæks/; shorthand for Asynchronous JavaScript and XML) [1] is a group of interrelated web development methods used on the client-side to create interactive web applications. With Ajax, web applications can retrieve data from the server asynchronously in the background without interfering with the display and behavior of the existing page. Data is usually retrieved using the XMLHttpRequest object. Despite the name, the use of XML is not needed, and the requests need not be asynchronous. [2]

(I love Wikipedia. Cuts right to the chase. I own multiple AJAX books, not one has that clean and clear of an introduction.)

Along with the introduction from Wikipedia, a little more detail is needed. The “presentation” (text, images, etc.) of a typical website is provided to you by what’s called the browser’s DOM. (You’ll have to chase this link, if you want.) Think of it as “a tree structure of HTML, with a JavaScript interface for manipulating it”. AJAX works by having the browser contact the server asynchronously, with the server returning information back (maybe little “updated” pieces of HTML). The browser, through the use of JavaScript, updates the DOM with this information. As the DOM is a tree structure, the browser might hack off a limb and replace it with this new bit of HTML just returned from the server. This updates the “presentation” in a dynamic way; one did not go back and reload the entire DOM when only a small part changed.
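The "hack off a limb and replace it" idea is easy to show on a plain tree structure. This JavaScript sketch stands in for what the browser does to the real DOM; no browser APIs are used, and the node shape is invented purely for illustration.

```javascript
// Replace the subtree whose id matches with a new fragment, the way
// an AJAX callback patches one branch of the DOM instead of
// reloading the whole page.
function replaceSubtree(node, id, fragment) {
  if (node.id === id) return fragment;
  return {
    ...node,
    children: (node.children || []).map(c => replaceSubtree(c, id, fragment)),
  };
}

const page = {
  id: 'root',
  children: [
    { id: 'header', children: [] },
    { id: 'viewport', children: [{ id: 'old-model', children: [] }] },
  ],
};

// Pretend the server just returned an updated piece of "HTML":
const fresh = { id: 'new-model', children: [] };
const updated = replaceSubtree(page, 'old-model', fresh);
console.log(updated.children[1].children[0].id); // → "new-model"
```

Only the branch that changed is touched; the header, and everything else in the tree, is carried over untouched. That is the whole trick behind flicker-free updates.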

OK, anybody spending any time on the net should recognize AJAX at work all over the place. (Think of all those blue rotating circles in the middle of pages.) You’re hard-pressed these days to find a website that does not have some AJAX in it.

Enough with the basics of AJAX; how does this contribute to a dynamic (flicker-free) graphics-based web site? The key observation here is that the DOM is a tree structure that can be updated by the browser via JavaScript. Well, the scene graph of the viewport is a tree structure as well. Instead of text, button controls and images (as for HTML), it’s a tree of polygons, polylines, color nodes, etc. And if one has JavaScript access, one should be able to update the graphics in the exact same manner as the HTML DOM. Again, in a later blog, I’ll get into the array of technologies available for mechanically placing the 3D graphics in the browser; but now we are starting to get an idea of some of the prerequisites.  (There are many technologies for this, but I will tell you now: the more comprehensive and powerful the scene graph and JavaScript interface, the better.) I have to give credit to the paper AJAX3D - The Open Platform For Rich 3D Web Applications by Tony Parisi. He observed it. We read it and put a Solid Modeler behind it.

As with all high-level concepts, upon implementation come the details. Ohhh, the details. One has to be careful with AJAX. You can overuse it and make your site more annoying than worthwhile. You don’t have control over everybody’s network speed, and people using your site will be anywhere in the broad spectrum of performance. We had to figure out when we really needed to go to the server and when we didn’t. We knew that less would probably be safer and better. Of course, in our case, the 3D modeler is on the server (not in the browser), so when one fillets an edge (as in one of the examples in Spatial Labs) we have to go to the server for that. But what about other areas of interaction: rotation, pan, zoom, picking, etc.? In our implementation these are on the client side, including picking. But in Solid Modeling apps, nothing is trivial. What if one wanted to select a FACE and get all the EDGEs belonging to that FACE? We could go back to the server and get this information … or what if we made the scene graph more intelligent and embedded this kind of information in it? Then it would all be client-side and, well, one less blue rotating circle.
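The FACE-to-EDGE question can be answered entirely on the client if the scene graph carries the connectivity alongside the render data. A sketch in JavaScript follows; the structure and every name in it are invented for illustration and are not Spatial's actual format.

```javascript
// A scene graph whose nodes carry B-rep connectivity, so that
// "which EDGEs bound this FACE?" never needs a server round trip.
const sceneGraph = {
  faces: {
    F1: { polygons: ['p1', 'p2'], edges: ['E1', 'E2', 'E3'] },
    F2: { polygons: ['p3'],       edges: ['E3', 'E4'] },
  },
  edges: {
    E1: { polyline: 'l1' }, E2: { polyline: 'l2' },
    E3: { polyline: 'l3' }, E4: { polyline: 'l4' },
  },
};

// Picking a face highlights its bounding edges purely client-side.
function edgesOfFace(graph, faceId) {
  const face = graph.faces[faceId];
  return face ? face.edges : [];
}

console.log(edgesOfFace(sceneGraph, 'F1')); // → ["E1", "E2", "E3"]
```

The trade-off is exactly the one discussed above: embedding this information makes the scene graph heavier to transmit, but each lookup afterwards costs nothing, which is one less blue rotating circle.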

In the end we went through many iterations on this subject and are still refining as we go; basically adding more and more intelligence (in a very compact way) to our scene graph. We found it really key to understand what information is best on the client and what should be on the server. In the end, I don’t think the refinement will ever stop. It’s become one of the more intriguing problems of our design.

In the next blog I’ll discuss the array of technologies one can use to display the graphics in the browser, and how we converged on one solution while keeping the doors open for others. I added an AVI so you can get a feel for the performance.

© 2013 Spatial Corp. ACIS and SAT are registered trademarks of Spatial Corp.