3D Graphics

Figure 1: SpaceClaim Saw (model courtesy of SpaceClaim Corporation)

In my last post, 3D Graphics and Internet Browsers, I gave an overview of the different technologies we have experimented with to provide the graphical viewport inside the web browser. Missing from that conversation was any consideration of performance, especially in the initial loading and viewing of industrial-sized parts.

You can go to the site now and test performance. I'm sure experiences will vary, as everybody's internet connection is different. Our tests show (from working at home, coffee shops, etc.) that the SpaceClaim Saw, shown in Figure 1, can be loaded off the server, visualized, and ready to model in anywhere from 5 to 10 seconds (sometimes longer on a really slow connection). Now, this part is small by today's standards (45,000 facets), so I consider the performance we're experiencing somewhat pedestrian. We know the industry is going to demand better. We've directed our latest work in this direction, and this is what we have found.

The visualization data being sent to the browser is in the form of XML. I've mentioned we are using an XML schema called X3D. The key point here, however, is not really X3D; it's XML, and XML is represented as plaintext. Our first attempt at optimization (beyond the basics: preprocessing the visualization data and storing it alongside the B-rep, representing visualization data with four significant digits instead of eight, and optimizing the tessellation routines) was simply to byte-compress the XML before sending it to the client. XML, given its inherent structure and plaintext representation, compresses nicely. To do this we had two choices: use IIS's built-in compression capabilities, or gzip the data ourselves. Two weeks ago Spatial Labs was relying on IIS's compression mechanism; last week we reconfigured the site to gzip the data manually and let the browser decompress it for the plug-in. The visualization data for the SpaceClaim Saw is 4.5 MB before compression and 860 KB after. That's good, but upon testing we learned that manually gzipping the file didn't buy us any significant improvement over IIS's built-in compression. (It sped things up a little; however, it's hard to measure internet performance, as establishing consistency over the net is difficult for obvious reasons.)
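To make the manual path concrete, here is a minimal sketch of serving pre-gzipped X3D. It assumes a Node.js-style server purely for illustration (our production site sits on IIS, as described above), and the file name is hypothetical:

```javascript
// Sketch: serve the X3D payload gzipped so the browser (or plug-in)
// inflates it transparently on arrival. Illustrative only.
var http = require('http');
var zlib = require('zlib');
var fs   = require('fs');

http.createServer(function (req, res) {
  fs.readFile('saw.x3d', function (readErr, xml) {        // hypothetical file
    if (readErr) { res.statusCode = 500; return res.end(); }
    zlib.gzip(xml, function (zipErr, compressed) {
      if (zipErr) { res.statusCode = 500; return res.end(); }
      res.writeHead(200, {
        'Content-Type':     'model/x3d+xml',
        'Content-Encoding': 'gzip'    // client decompresses before parsing
      });
      res.end(compressed);
    });
  });
}).listen(8080);
```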

Figure 2: Small Engine Block

Digging into the problem a little further led to an obvious conclusion: it's not just the time and size of the transmission; the plug-in was spending a great deal of time parsing the XML and converting the plaintext numerical data to actual doubles and integers. And looking at our visualization data, a healthy 90% of it is really numbers. The performance hit becomes more apparent on the bigger models, such as the two engine blocks we have on Spatial Labs. The small engine block (98,000 facets), shown in Figure 2, can take 10 to 15 seconds, with most of that time spent converting the plaintext data. (The compressed XML is 1.5 MB, which should take only a couple of seconds to download.)
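To illustrate where that time goes, here is a minimal sketch of the conversion step, assuming the coordinates arrive as one long attribute string (as in an X3D Coordinate node); the function name is ours:

```javascript
// Sketch of the plaintext bottleneck: every coordinate has to be
// tokenized and run through parseFloat before a single triangle
// can be drawn.
function parsePoints(pointAttribute) {
  var tokens = pointAttribute.split(/[\s,]+/);
  var coords = new Float32Array(tokens.length);
  for (var i = 0; i < tokens.length; i++) {
    coords[i] = parseFloat(tokens[i]);   // one conversion per number
  }
  return coords;
}

// 98,000 facets means hundreds of thousands of these conversions.
var coords = parsePoints('0.1 0.2 0.3, 1.05 2.33 0.0');
```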

So this takes us down the path of using a binary format for our visualization data instead of an XML-based plaintext format. Luckily, our chosen XML format, X3D, supports a binary form: X3DB. However, this ties back to the plug-in one chooses for the browser: unfortunately, not all X3D (VRML) browsers will accept and work with binary XML forms. If I had to rewrite my last post (discussing the various available plug-ins), I would include the ability to work with binary data as a necessary condition. But it doesn't end here; there is one more piece to the puzzle. We've concluded that binary is good and compressed binary is better, but to reach the best performance one could achieve, you need to work with various forms of geometric compression as well. We're not at a point where we can talk details now, but you can see this is a rich area, with work dating back many years (Java3D). Hopefully in a future post we'll address these technical challenges in depth.
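While we can't detail our scheme, here is a minimal sketch of one classic idea from that body of work: quantize coordinates to 16-bit integers within a known bounding box and delta-code them, so the client only has to undo two cheap transforms. This is purely illustrative (and simplified to a single axis), not our actual format:

```javascript
// Sketch: rebuild one axis of vertex data from delta-coded, 16-bit
// quantized values. Real schemes interleave axes, use smarter
// prediction, and entropy-code the result.
function dequantizeAxis(deltas, bboxMin, bboxSize) {
  var step = bboxSize / 65535;           // 16-bit quantization step
  var out = new Float32Array(deltas.length);
  var running = 0;
  for (var i = 0; i < deltas.length; i++) {
    running += deltas[i];                // undo the delta coding
    out[i] = bboxMin + running * step;   // undo the quantization
  }
  return out;
}

// The deltas would arrive as an Int16Array view over the binary payload.
var xs = dequantizeAxis(new Int16Array([12000, 5, -3, 40]), -50.0, 100.0);
```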

Concluding, and going back to my last post one more time: we speculated that the future was going the way of HTML5 and WebGL, completely zero-deployment and very much JavaScript based. Now this is where things get interesting. We can use the browser to decompress, and we've learned that works well. However, we would have to use JavaScript to parse the binary stream and feed it to WebGL calls, and we would have to write any geometric-decompression logic in JavaScript as well. It's starting to become a pretty tall order for what you need to code in JavaScript. All of this adds up to some acknowledgment that browser plug-ins aren't THAT bad.
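For a feel of that JavaScript/WebGL path, here is a minimal sketch, assuming a hypothetical URL and that the payload is a raw array of 32-bit floats:

```javascript
// Sketch of the plug-in-free pipeline: fetch binary visualization
// data and hand it to WebGL, with no plaintext parsing in between.
var canvas = document.getElementById('viewport');          // hypothetical id
var gl = canvas.getContext('webgl') ||
         canvas.getContext('experimental-webgl');

var xhr = new XMLHttpRequest();
xhr.open('GET', '/models/saw.bin', true);                  // hypothetical URL
xhr.responseType = 'arraybuffer';                          // binary, not text
xhr.onload = function () {
  // Any geometric decompression would run right here, in JavaScript,
  // before the data reaches the GPU.
  var vertices = new Float32Array(xhr.response);
  var buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
};
xhr.send();
```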

Your thoughts?


I promised in my last post, Solid Modeling and AJAX3D, that I would follow up with a discussion of the various technologies we have experimented with for placing 3D graphics in a browser. As most browsers extend their capabilities through plug-ins (I'll stay clear of using the word "ActiveX"), we're effectively discussing various forms of such, and we will finish with HTML5 and WebGL (a new, non-plug-in-based approach).

You have to start the discussion by acknowledging that this is an extremely diverse and rapidly changing area of technology. I'm writing this in the spring of 2011; plug-ins we (Spatial Labs) were working with a year ago are now obsolete, and our current solution might even be on its last legs (HTML5 is coming?). Despite the dynamic technical environment, the requirements are clear and have stayed fairly consistent. Let's start with them:

The most obvious and basic: the presentation of 3D graphics. The graphics should be fairly advanced, supporting complex color models (shaders would be even better), rotation controls, and selection. In our solution we based our client's 3D graphics on classical polygonal definitions (triangles, polylines, etc.). There are solutions out there that can represent B-splines, but we have stayed clear of dealing with them directly in the client.

An application programming interface to the plug-in, accessible through JavaScript. This is how we manipulate the scene graph in order to make dynamic updates. In our solution we learned quickly that this needs to be a high-level interface. JavaScript is a squirrelly language (my word for "loosely typed") and difficult to debug; you don't want to write an excessive amount of it.

Support for a graphics meta-language, either directly in the plug-in (can it read VRML?) or through an industrially tested JavaScript library that will parse the meta-language and make the appropriate calls to the plug-in's API. We found that the meta-language didn't need to be extremely popular or prolific across the industry, as we never actually persisted the visualization data or exchanged it with other applications. The meta-language simply serves as the transmission format from the server and helps in creating non-modeling graphics, reducing the need to code such things in JavaScript. (The slicing planes we visualized in our Multi-proc Lab were created with five lines of our meta-language; a sketch of the flavor follows this list of requirements.)

The last requirement I will mention is the size of the plug-in and the cross-browser availability of the solution in general. Our website is currently plug-in based, which necessitates a 4 MB download. We didn't want to go any larger than that, as the drop-off rate of users increases the longer the plug-in takes to download (if you still have them at this point).
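As promised above, here is a sketch of the economy a meta-language buys you. This is not our actual slicing-plane markup, just the flavor of a few lines of X3D-style nodes, wrapped in the JavaScript that would hand it to the scene:

```javascript
// Illustrative only: a translucent slicing plane expressed in a few
// lines of X3D-style markup rather than hand-coded polygons.
var slicingPlane =
  '<Transform translation="0 0 25">' +
  '  <Shape>' +
  '    <Appearance>' +
  '      <Material diffuseColor="0.2 0.4 1" transparency="0.6"/>' +
  '    </Appearance>' +
  '    <Box size="100 100 0.1"/>' +
  '  </Shape>' +
  '</Transform>';

// addToScene is a stand-in for whatever "create from string" call
// a given plug-in's JavaScript API exposes.
addToScene(slicingPlane);
```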

So, based on the requirements, I made a table summarizing various common plug-ins:

Table: Common Plug-ins

We actually started off with O3D. (Although Google invested heavily in this plug-in, they have since placed their efforts in HTML5.) We moved data from ACIS to COLLADA to O3D. However, when we started to build in dynamic scene manipulation, the amount of custom JavaScript quickly became overwhelming. It was with O3D that we fully understood the need for a well-coupled and well-supported meta-language. (You can think of O3D as an OpenGL-style interface; and like OpenGL, it's just an interface, with no corresponding graphics language.) We investigated both Flash and Silverlight, as they are certainly the most ubiquitous plug-ins on the net (well, Flash, really). Unfortunately, their 3D capabilities are lacking. Nonetheless, Autodesk offered up Project Dragonfly based on Flash.

Based on our experience with O3D, we started to look into different meta-languages and settled on X3D. It's the next generation of VRML and fairly powerful. Because the meta-language has power, it simplifies the plug-in's API, again reducing the JavaScript you need to write. There are several commercial plug-ins tightly coupled with X3D, and we eventually settled on the BS Contact plug-in from Bitmanagement. It includes very refined rotation controls and is cross-browser. BS Contact is the plug-in currently required for our site, Spatial Labs.

All the technologies mentioned so far are plug-in based, and as we discuss the online technologies with various customers, they all ask for a non-plug-in solution. This, of course, leads us to HTML5 and WebGL. Although still in development and not currently supported by Microsoft's Internet Explorer, the technology has a great deal of momentum. I've included an interesting example here:

And what about support for graphical meta-languages? I think I've made it clear how important they are to a dynamic website. No, HTML5 and WebGL do not come with any tightly coupled meta-language. Nonetheless, several powerful JavaScript libraries exist to provide these services. X3DOM, provided by Fraunhofer in Germany, marries X3D and WebGL. Other high-level JavaScript libraries (mentioned below) provide abstractions and generally higher-level calls:

Clearly, the main issues going forward for HTML5 and WebGL center around performance and cross-browser consistency. Try running 3DTin on Chrome and then on Safari. You'll see Chrome behave vastly better, thanks to its JavaScript engine and its interface to OpenGL. (To be fair, Safari is not quite there yet, but could be within the next couple of releases.)

In the end, there is a rich array of technologies out there, and it's changing daily. (This blog should become outdated pretty quickly!) All I know is that the technology we are coding against today will certainly not be the preferred solution tomorrow. Abstracting your site over the various technologies might be well worth the effort.

What technologies are YOU using successfully today?


I'm going to follow up on my first blog, A Look At Spatial Labs, by writing about the technology behind the web site, specifically how the AJAX design pattern was used to create a dynamic 3D graphical viewport.

We knew when we first created the Spatial Labs site that we wanted the graphics viewport to act and look like it would in a typical desktop application; not just in presentation (for example, a sophisticated coloring model, shadows, gridlines, view, pan, rotate, etc.) but in its behavior upon user interaction with the model. We wanted to be able to select faces and edges on the model, perform modeling operations, and update the viewport as smoothly and "flicker free" as possible. This, of course, meant that we needed to build an interactive (dynamic) web site as opposed to a simple site of static HTML pages. In the next blog I'll discuss the array of options for the 3D graphics themselves, but for now I'd like to limit the discussion to the latter point: how can one make a graphics-based website dynamic, as if you were running a typical desktop application?

There is nothing new or earth-shattering here; dynamic web sites showed up years ago, largely due to a concept called AJAX. Although this is not a blog about general web application development, a little introduction is needed (as AJAX is the key), and to save you from chasing the link, I'm going to shamelessly copy the first paragraph of Wikipedia's introduction to AJAX.


Ajax (pronounced /ˈeɪdʒæks/; shorthand for Asynchronous JavaScript and XML) [1] is a group of interrelated web development methods used on the client-side to create interactive web applications. With Ajax, web applications can retrieve data from the server asynchronously in the background without interfering with the display and behavior of the existing page. Data is usually retrieved using the XMLHttpRequest object. Despite the name, the use of XML is not needed, and the requests need not be asynchronous. [2]


(I love Wikipedia. It cuts right to the chase. I own multiple AJAX books, and not one has that clean and clear an introduction.)

Along with the introduction from Wikipedia, a little more detail is needed. The "presentation" (text, images, etc.) of a typical website is provided to you by what's called the browser's DOM. (You'll have to chase this link, if you want.) Think of it as a tree structure of HTML plus a JavaScript interface for manipulating it. AJAX works by having the browser contact the server asynchronously and the server return information back (maybe little "updated" pieces of HTML). The browser, through the use of JavaScript, updates the DOM with this information. As the DOM is a tree structure, the browser might hack off a limb and replace it with the new bit of HTML just returned from the server. This updates the "presentation" in a dynamic way; one does not go back and reload the entire DOM when only a small part changed.
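Here is a minimal sketch of that round trip, with a hypothetical URL and element id, just to make the mechanics concrete:

```javascript
// Sketch of the AJAX round trip: ask the server for a small piece
// of updated HTML and graft it into the DOM, no page reload.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/status/fillet', true);   // true = asynchronous
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    // "Hack off a limb" of the DOM tree and replace it.
    document.getElementById('opStatus').innerHTML = xhr.responseText;
  }
};
xhr.send();
```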

OK, anybody spending any time on the net should recognize AJAX at work all over the place. (Think of all those blue rotating circles in the middle of pages.) You’re hard-pressed these days to find a website that does not have some AJAX in it.

Enough with the basics of AJAX; how does this contribute to a dynamic (flicker-free) graphics-based web site? The key observation here is that the DOM is a tree structure that can be updated by the browser via JavaScript. Well, the scene graph of the viewport is a tree structure as well. Instead of text, button controls, and images (as with HTML), it's a tree of polygons, polylines, color nodes, etc. And if one has JavaScript access, one should be able to update the graphics in the exact same manner as the HTML DOM. Again, in a later blog I'll get into the array of technologies available to mechanically place the 3D graphics in the browser, but now we are starting to get an idea of some of the prerequisites. (There are many technologies for this, but I will tell you now: the more comprehensive and powerful the scene graph and JavaScript interface, the better.) I have to give credit to the paper AJAX3D - The Open Platform For Rich 3D Web Applications by Tony Parisi. He observed it. We read it and put a solid modeler behind it.
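As a sketch of the parallel, assume a technology where the scene-graph nodes are addressable like DOM elements (X3DOM works this way; a plug-in exposes its own equivalent JavaScript interface). All ids and field values here are illustrative:

```javascript
// Sketch: the scene graph updated exactly like the HTML DOM.
function highlightFace(faceId) {
  var material = document.getElementById('mat-' + faceId);
  material.setAttribute('diffuseColor', '1 0 0');   // repaint one branch
}

// Replacing geometry is the same move: swap the Coordinate node's
// "point" field and only that limb of the tree gets redrawn.
function updatePoints(coordId, newPointString) {
  document.getElementById(coordId).setAttribute('point', newPointString);
}
```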

As with all high-level concepts, upon implementation come the details. Ohhh, the details. One has to be careful with AJAX: you can overuse it and make your site more annoying than worthwhile. You don't have control over everybody's network speed, and the people using your site will fall anywhere in a broad spectrum of performance. We had to figure out when we really needed to go to the server and when we didn't; we knew that less would probably be safer and better. Of course, in our case the 3D modeler is on the server (not in the browser), so when one fillets an edge (as in one of the examples on Spatial Labs) we have to go to the server for that. But what about other areas of interaction: rotation, pan, zoom, picking, etc.? In our implementation these are on the client side, including picking. But in solid modeling apps, nothing is trivial. What if one wanted to select a FACE and get all the EDGEs belonging to that FACE? We could go back to the server for this information ... or what if we made the scene graph more intelligent and embedded this kind of information in it? Then it would all be client side and, well, one less blue rotating circle.
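A sketch of what that embedded intelligence might look like, with purely illustrative identifiers:

```javascript
// Sketch: a compact FACE -> EDGE adjacency map carried alongside the
// polygons, so topology queries on pick stay entirely client side.
var topology = {
  'face-17': { edges: ['edge-3', 'edge-4', 'edge-9', 'edge-12'] },
  'face-18': { edges: ['edge-4', 'edge-5', 'edge-13'] }
};

function highlightEdge(edgeId) {
  // stand-in for the scene-graph update sketched earlier
}

// On pick, resolve the adjacent edges locally: no server round trip,
// one less blue rotating circle.
function onFacePicked(faceId) {
  topology[faceId].edges.forEach(highlightEdge);
}
```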

We went through many iterations on this subject and are still refining as we go, basically adding more and more intelligence (in a very compact way) to our scene graph. We found it really key to understand what information is best kept on the client and what should stay on the server. In the end, I don't think the refinement will ever stop; it's become one of the more intriguing problems of our design.

In the next blog I'll discuss the array of technologies one can use to display the graphics in the browser, and how we converged on one solution while keeping the door open for others. I've added an AVI so you can get a feel for the performance.

