A geometry kernel is a big thing. It’s a huge thing. Maybe even big enough to see from space. By most accounts, even the Great Wall of China is not visible from space. However, other huge infrastructure is: highways, airports, bridges, and dams. This is the scale for this post. Decades of work, in a five topic flyover.
1. Creating 3D models
If you asked a person on the street what a geometry kernel was for, odds are they'd reply "creating 3D models". Most 3D ACIS Modeler-enabled applications create 3D models. And if you've read this far, you've probably created a 3D model at some point in your life. The basic steps haven't changed much in the last 25 years.
Sketching is the process of creating a collection of curves: circles, ellipses, lines, and B-splines. Sketching is just connecting the dots: users input points and tangents, then apply constraints on a 2D grid. The resulting collection of curves matches part of a design, or defines a new one.
Curves can create surfaces by extruding, revolving, sweeping, or skinning. Surfaces can be trimmed and stitched to create solids.
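As a concrete illustration of the extrusion step, here is a minimal, kernel-free sketch in plain Python (no ACIS APIs involved): a closed 2D profile is swept along the z-axis, producing the side faces of a prism plus its two caps.

```python
def extrude(profile, height):
    """Extrude a closed 2D polygon (a list of (x, y) points in order)
    into a solid prism, returned as a list of planar faces.
    Each face is a list of (x, y, z) vertices."""
    bottom = [(x, y, 0.0) for x, y in profile]
    top = [(x, y, height) for x, y in profile]
    # Caps: the bottom loop is reversed so its normal points outward (down).
    faces = [bottom[::-1], top]
    n = len(profile)
    for i in range(n):
        j = (i + 1) % n
        # One quadrilateral side face per profile edge.
        faces.append([bottom[i], bottom[j], top[j], top[i]])
    return faces

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
solid = extrude(square, 2.0)
print(len(solid))  # 2 caps + 4 sides = 6 faces
```

A real kernel does far more here (analytic surfaces, tolerances, topology records), but the shape of the operation is the same: a profile curve plus a direction yields a set of faces that bound a solid.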
Primitive solids such as spheres, tori, and cuboids can be constructed directly from a few parameters.
Finally, overlapping simple solids can be combined with Boolean operations such as union and subtract to make a complex 3D model. In 3D ACIS Modeler, 3D models can also be non-manifold, combining solids, two-sided sheet bodies, and wires.
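The Boolean idea can be sketched without any B-rep machinery at all, using implicit (point-membership) solids: each solid is a predicate answering "is this point inside?", and Booleans simply combine predicates. This is only a conceptual sketch, not how a B-rep kernel like ACIS implements Booleans.

```python
def sphere(cx, cy, cz, r):
    # Inside test for a sphere of radius r centered at (cx, cy, cz).
    return lambda x, y, z: (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= r * r

def box(x0, y0, z0, x1, y1, z1):
    # Inside test for an axis-aligned box.
    return lambda x, y, z: x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1

def union(a, b):
    return lambda x, y, z: a(x, y, z) or b(x, y, z)

def subtract(a, b):
    return lambda x, y, z: a(x, y, z) and not b(x, y, z)

# A cube with a spherical bite taken out of one corner:
model = subtract(box(0, 0, 0, 2, 2, 2), sphere(2, 2, 2, 1))
print(model(1, 1, 1))   # True: deep inside the cube
print(model(2, 2, 2))   # False: removed by the sphere
```

A B-rep kernel does the much harder job of computing the bounding faces of the result; the membership view above only answers inside/outside queries.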
While all the above is standard fare for a geometry kernel, creating stable, fast high-level APIs for a geometry kernel is no small feat. One of the training exercises at Spatial for new 3D ACIS Modeler developers is writing a function to create a solid tetrahedron using low-level interfaces. It's surprisingly hard; try it for yourself.
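The real exercise uses ACIS's low-level topology interfaces; the following is only a kernel-free sketch of the bookkeeping involved. Four vertices yield four triangular faces and six edges, and a correctly assembled solid should satisfy Euler's formula V - E + F = 2.

```python
from itertools import combinations

def tetrahedron(p0, p1, p2, p3):
    verts = [p0, p1, p2, p3]
    # Each triangular face omits one of the four vertices.
    faces = [tuple(v for j, v in enumerate(verts) if j != i) for i in range(4)]
    # Collect the unordered edges; each is shared by exactly two faces.
    edges = {frozenset(pair) for f in faces for pair in combinations(f, 2)}
    return verts, edges, faces

v, e, f = tetrahedron((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
print(len(v), len(e), len(f))      # 4 6 4
print(len(v) - len(e) + len(f))    # Euler's formula: V - E + F = 2
```

The hard part in the actual exercise is not this counting but building the full topology graph (coedges, loops, shells, lump, body) with consistent orientations, which is exactly why it makes a good training problem.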
I cover the other 4 essentials in my eBook. Please click below to download.
1. Don't Reinvent the Wheel
You're designing a new product, but you have an old part that's almost right for the job. Don't redesign it from scratch; import it from its source format and tweak it to meet your current needs.
2. Keep it Tight
If you're designing a part for manufacturing, you need the end product to be water-tight. If your model is "leaky", your production run might just sink before it ever gets launched. If you're re-using data files and they are not in the best of shape, importing them may help to patch some of those holes during the healing phase and make for clear sailing during production. In this way, data re-use not only extends the life of the model, but it may very well make your model ship-shape to boot!
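For a faceted model, there is a classic and simple water-tightness test, sketched here in plain Python (this is an illustrative check, not the healing technology in 3D InterOp): in a closed manifold triangle mesh, every edge is shared by exactly two triangles, and any edge used only once is a boundary, i.e. a "leak".

```python
from collections import Counter

def leaky_edges(triangles):
    """triangles: list of (i, j, k) vertex-index triples.
    Returns the edges not shared by exactly two triangles."""
    edge_count = Counter()
    for i, j, k in triangles:
        for a, b in ((i, j), (j, k), (k, i)):
            edge_count[frozenset((a, b))] += 1
    return [tuple(e) for e, n in edge_count.items() if n != 2]

# A closed tetrahedron: 4 faces, every edge used twice -> watertight.
tet = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(leaky_edges(tet))           # []
# Remove one face and its three edges become boundary edges -> leaky.
print(len(leaky_edges(tet[:3])))  # 3
```

Healing for B-rep data is far more involved (gap closing, tolerance adjustment, surface extension), but the goal is the same: no open boundaries where the design intends a closed solid.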
3. Beauty AND/OR the Beast
Sometimes you just want to show off your model, so all you really need is a "glam shot" of your little beauty; other times you need to work with your model, which means loading the whole big beast into your application. You should be able to load only the stuff you want, when you want it.
4. Garbage In, Garbage Out...or Maybe Not!
If you have models that are a little "dirty", don't throw them out! Clean them up with healing during translation. Models are precious resources and they can have a very long shelf-life. But over the course of time, old data formats fall out of favor and make way for new ones. That doesn’t mean you have to take your old models to the thrift store, but you might need to translate them into a new format. With high-quality translations that include healing, you can effectively refurbish your old models and make them new again. Data re-use can make your old junk into a new treasure.
5. Maximize Your Resources
You've already made an investment in model design, now it's time to get the most from your model by maximizing the number of applications that can work with it. High fidelity translations between a wide variety of formats naturally increase your opportunities to re-use your model.
Valuable development resources should focus on solving your customers’ needs. Adding data interoperability shouldn't be a burden. A simple interface allows your developers to maximize their time working on adding value to your application.
Design and production require different kinds of data at different points in the process. Getting the kinds of data you need - and no more - just when you need it maximizes your ability to collaborate all along the way.
Some of the models you work with are big, I mean HUGE, monstrous even. Maximize your time working with your model rather than going for coffee by using your machine's resources to the fullest extent possible. That means choosing an interoperability component that is multi-process and multi-thread enabled.
Leah Morgan is a Senior Developer in 3D InterOp
Learn More about 3D InterOp and Data Translation
Posted: April 2nd, 2013 |
Debugging problems is really easy once you "have them under glass". Get all the input data, get all the code, build it on your computer, and you can bisect down on the problem in the debugger until you have fixed it. (Ok, this is an oversimplification. Assume that you are really smart, can talk to someone who knows the code you are looking at, and have an unlimited supply of time and coffee :-).)
In ACIS, journalling really helps with these problems, but it has its limitations. Journalling is a feature of ACIS where you can call APIs with a special option setting which tells them to print a scheme script describing the operation and save a sat file for the inputs. What are the problems? It is time-consuming and unusual work to support. (How often do most ACIS programs manipulate strings? The C string library is really dangerous, which makes this even more fun.) In addition, only APIs are journalled, but customers can call all sorts of functionality. There are sg functions which are lower level and generally are not journalled. Then there are callbacks (e.g., MESH_MANAGERs) which we don't even try to journal.
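The core idea of journalling can be sketched in a few lines. The names here (`journal`, `extrude_api`) are purely illustrative, not ACIS APIs; in ACIS the record is a scheme script plus saved sat files, not an in-memory list.

```python
import functools

journal = []        # stand-in for the scheme script a real journal would write
journalling = True  # stand-in for the ACIS option that turns journalling on

def journalled(fn):
    """Wrap an operation so each call is also appended as a replayable line."""
    @functools.wraps(fn)
    def wrapper(*args):
        if journalling:
            journal.append(f"({fn.__name__} {' '.join(map(str, args))})")
        return fn(*args)
    return wrapper

@journalled
def extrude_api(profile_id, height):
    return f"body-{profile_id}"   # stand-in for the real modelling work

extrude_api(7, 2.5)
print(journal)   # ["(extrude_api 7 2.5)"]
```

The limitation described above falls straight out of this picture: only calls that go through the wrapper get recorded, so lower-level functions and callbacks invoked behind the scenes leave no trace in the journal.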
I dislike using C++ samples because they consume a lot of time, but they have their place. Scheme scripts (or JS scripts for the CGM Component) can distill a customer situation down to just a series of solid modelling operations. A sample application often illustrates context. Sometimes an API call that seems to reveal a bug as a scheme script was actually an application attempting a workflow that doesn't make sense. Experience maintaining a solid modeler gives a person a warped perspective about what is natural.
I write this because I spent the better part of a day debugging a sample application which showed a problem that could have been journalled. I didn't know the problem could have been journalled, because I couldn't reproduce it. I had the sat file and the same code level of ACIS and a description of what the user was trying to do. It turned out that using facet_options_expert rather than facet_options_precise caused a significant change in the answer. Since the customer was using a GLOBAL_MESH_MANAGER that they wrote, I assumed some of the difference in behavior could be caused by that.
I guess it takes practice asking the right questions and getting a little creative about how to debug a problem. What is the craziest thing you have had to do to reproduce a bug?
Posted: March 20th, 2013 |
Try to buy a single-core laptop today and you'll have a difficult time even finding one. The leading computer manufacturers offer at least dual-core in the base models of their economy lines, even for laptops. Let that sink in: our days of single-core machines, even laptops, are over. Many of the leading mobile products are also at least dual-core, with higher-end products having even more cores.
As a developer, it's exciting to have this hardware available and to know it'll only get better. It's even more exciting to be able to use it for development. Perhaps even more enjoyable is to ultimately see your customers using your software to push their multi-core hardware to its limits. If you've ever looked at the processor utilization on a multi-core machine and watched as every one of its processors maxed out, chewing through all that work you were throwing at it, you know that feeling of satisfaction. It's the satisfaction you get from knowing that you're utilizing the hardware to its fullest.
However, all the cores in the world aren’t going to just magically make the software you run faster – that software has to support it. 3D InterOp from Spatial has been around for many years and was largely established prior to the multi-core revolution. Therefore, file translations have been inherently sequential. Just because the file translation process has always been sequential doesn’t mean it needs to be. In fact, we are already exploring and implementing multiple strategies for how to catapult 3D InterOp into the multi-core world.
Two distinct high-level strategies exist for taking advantage of multi-core machines: multi-threading and multi-processing. Multi-threading runs multiple threads within a single process, all sharing the same memory. Multi-processing spreads the work across multiple separate processes, each with its own memory space. Which strategy should be used? Which offers the most bang for the buck? Which performs better? Which scales better? These questions can't be answered without understanding what your code does and how it does it.
So let's dive into the code. First, is it thread-safe? If it is, then multi-threading is an option. If not, then you have to decide whether you're willing to invest the time and resources necessary to make it thread-safe. If that is not possible, then you're left with the multi-process strategy. Since multi-processing uses separate processes, no memory is shared and thread-safety is a non-issue. Next you'll need to determine what exactly you want to parallelize. Often this will be a performance bottleneck that you know most of your users encounter. From there you have to analyze the particular algorithms, determine how to split up the work, and begin parallelizing.
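To make the two strategies concrete, here is a minimal sketch using Python's executor abstraction: independent "translations" are submitted as tasks, and swapping the executor class is essentially the only code change between the thread and process strategies. The `translate` function is a stand-in, not a 3D InterOp call.

```python
from concurrent.futures import ThreadPoolExecutor  # or ProcessPoolExecutor

def translate(path):
    # Stand-in for a real file translation: independent, no shared state.
    return path.replace(".x_t", ".sat")

files = ["a.x_t", "b.x_t", "c.x_t"]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(translate, files))
print(results)   # ['a.sat', 'b.sat', 'c.sat']
```

If `translate` were not thread-safe, switching to `ProcessPoolExecutor` would buy isolation at the cost of inter-process communication and per-process startup, which is exactly the trade-off discussed above.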
The above is a very superficial introduction to how to start parallelizing your application and is by no means complete. Each application will have its own set of complications when it comes to parallelization. There are lots of resources online to get you started, and if you're even thinking of going down this road, the sooner you start the better.
Please stay tuned for part 2 of this series to be posted after our R24 release.
How are you taking advantage of the multi-core revolution?
Which strategy do you prefer?
What issues and roadblocks have you encountered?
Posted: March 13th, 2013 |
It always struck me that 20,000 Leagues Under the Sea (ok, Vingt mille lieues sous les mers) described some aspects of *real* submarines and their usage before the invention of what we consider modern submarines. If Wikipedia (http://en.wikipedia.org/wiki/Submarine) is to be believed, there were experiments with submarines as early as 1620, but wide-scale use of submarines came in World War I, well after the book's publication in 1870. Moreover, details of modern submarines and their usage were suggested in the book: they were not powered by people, were actually capable of traveling long distances, and could be used for sinking ships.
Perhaps a more trivial example: I could swear I remember seeing things recognizable as iPads (ok, tablet computers) in either Star Trek, The Jetsons, or maybe some other sci-fi shows: Buck Rogers, Blake's 7, . . . It seems like controls on spaceships are never keyboards, but touch-sensitive, or voice- or mind-controlled. The sci-fi people guessed wrong about the tape drives all over the place, too. (Do you remember this too? My memory is a bit fuzzy on the specifics.)
The point is that sometimes people imagine what will be useful in the future before it actually happens. Knowing which flights of fancy are not overly fantastic can be commercially valuable. It is also possible to underestimate progress. I think a lot of people underestimated Moore's law for a long time. Balancing what can be done with what is technically impressive is an odd sort of art, requiring both restraint and boldness.
Some caution is in order, though: not everything fiction gets right is pleasant. In The World Set Free, H. G. Wells hinted at the use of radioactivity to create nuclear weapons. In The War in the Air, he described aerial combat reminiscent of the dogfighting that actually occurred in WWI.
A lot of our customers write the software that will be used to design things of the future. So with some indirection, understanding what fantastic things are ahead could be helpful for us in our planning.
Do you have any more examples where fiction predicted the future correctly?
More lucratively, do you have any predictions, and rationale/vision supporting them?
Posted: February 28th, 2013 |