Try to buy a single-core laptop today and you’ll have a difficult time even finding one. The leading computer manufacturers offer at least dual-core processors in the base models of their economy lines, even for laptops. Let that sink in: the days of single-core machines, even laptops, are over. Many leading mobile products are also at least dual-core, with higher-end products having even more cores.
As a developer, it’s exciting to have this hardware available and to know it’ll only get better. It’s even more exciting to be able to use it for development. Perhaps most enjoyable of all is to see your customers using your software to push their multi-core hardware to its limits. If you’ve ever looked at the processor utilization on a multi-core machine and watched every one of its processors max out, chewing through all the work you were throwing at it, you know that feeling of satisfaction. It’s the satisfaction of knowing that you’re utilizing the hardware to its fullest.
However, all the cores in the world aren’t going to magically make your software faster; the software has to be written to take advantage of them. 3D InterOp from Spatial has been around for many years and was largely established prior to the multi-core revolution; as a result, file translations have been inherently sequential. But just because the file translation process has always been sequential doesn’t mean it needs to be. In fact, we are already exploring and implementing multiple strategies for catapulting 3D InterOp into the multi-core world.
Two distinct high-level strategies exist for taking advantage of multi-core machines: multi-threading and multi-processing. Multi-threading is the use of multiple threads within a single process. Multi-processing is the use of multiple cooperating processes on a given computer. Which strategy should be used? Which offers the most bang for the buck? Which performs better? Which scales better? These questions can’t be answered without understanding what your code does and how it does it.
So let’s dive into the code. First, is it thread-safe? If it is, then multi-threading is an option. If not, you have to decide whether you’re willing to invest the time and resources necessary to make it thread-safe. If that isn’t possible, you’re left with the multi-process strategy. Since multi-processing uses separate processes, no memory is shared, and thread-safety is therefore a non-issue. Next you’ll need to determine exactly what you want to parallelize. Often this will be a performance bottleneck that you know most of your users encounter. From there you have to analyze the particular algorithms, determine how to split up their work, and begin parallelizing.
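To make the two strategies concrete, here is a minimal Python sketch. The `translate_file` function is a hypothetical stand-in for a real translation call, not actual InterOp code; the point is only the difference in how the work is farmed out.

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def translate_file(path):
    """Stand-in for a real file translation; simulates CPU-bound work."""
    return path.upper()

def run_threaded(files, workers=4):
    # Multi-threading: one process, shared memory.
    # Requires the translation code itself to be thread-safe.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(translate_file, files))

def run_multiprocess(files, workers=4):
    # Multi-processing: separate processes, no shared memory.
    # Thread-safety is a non-issue, at the cost of process start-up
    # and data-transfer overhead between processes.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(translate_file, files))

if __name__ == "__main__":
    files = ["part_a.iges", "part_b.step", "part_c.sat"]
    print(run_threaded(files))
```

Python is used here purely for illustration; the same trade-off applies in any language, for example C++ with `std::thread` versus spawned worker processes.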
The above is a very superficial introduction to how to start parallelizing your application and is by no means complete. Each application will have its own set of complications when it comes to parallelization. There are lots of resources online to get you started, and if you’re even thinking of going down this road, the sooner you start the better.
Please stay tuned for part 2 of this series to be posted after our R24 release.
How are you taking advantage of the multi-core revolution?
Which strategy do you prefer?
What issues and roadblocks have you encountered?
Posted: March 13th, 2013
In my previous post, Creating a Better Documentation Experience, I covered highlights of our online documentation. This time we will dive into search methods, categories and navigation.
Using the Search Engine
The first, most obvious manner in finding what you need is to use the site Search. On the left side of your screen, the Navigation Side Bar contains a Search box with two buttons under it labeled Go and Search.
If you know the title of the article, the Search autocompletes your entry with one or more page titles (case-sensitive). You can then press the Go button and the Search takes you directly to that article.
Otherwise, press Search if you simply want to search the text of all articles (this option is case-insensitive). You will be presented with a listing of search results, or a message indicating that no matches were found.
For more tips on using the site Search, visit the Help on Searching page, which can be accessed by clicking Help on the Main Page (to the right of the Search box) or entering keywords such as help on searching in the Search.
Browsing by Category
Another method of locating what you need is browsing by Category. Spatial develops its product documentation on a MediaWiki platform; therefore, each page is categorized so that you may find it by browsing the Category pages. You may choose to browse the complete list of categories, or those specific to a product, such as ACIS, InterOp, or RADF.
Simply entering Category:ACIS Docs in the Search box will take you directly to that category and you will see that it has several subcategories, such as Advanced Blending, Components, and Local Operations.
Likewise, at the bottom of every technical article, notice that one or more categories are listed. This guides you to other similarly categorized pages and other related categories.
More Navigation Tips
And finally, as you browse through the articles, notice that many of them have breadcrumb navigation near the top of the page, or a See Also section near the end of the page. Following the breadcrumb trail takes you to a “parent” level page (usually a Portal page), while the links in the See Also section take you to related/recommended pages.
I hope this post helps you discover some key areas of our online product documentation, and helps you find what you need quickly. If you have any specific requests for future blog posts about our documentation, please leave a comment for us below.
Posted: December 12th, 2012
As I was sitting here trying to come up with a topic for this post, I was thinking that while I have a million things going on, none of them are post-worthy in and of themselves, and I'm sure nobody wants to read a general post about being busy. Then I had an epiphany: there is something bigger going on that ties it all together.
3D InterOp is going through a paradigm shift.
The longstanding objective of InterOp has been to convert CAD data from one format to another while retaining the highest quality. The interface was originally designed with this very simple objective in mind: give the user a small, clean interface, independent of input or output format.
This all works pretty well; the interface is certainly easy to use. When we added the CGM modeler, though, it presented us with some new challenges. Being newly componentized, CGM doesn't have all of the somewhat clunky add-ons that we've put into ACIS to support additional types of incoming data, for instance, product structure and PMI. We were faced with a question: do we add these in the same way as we've done in the past, so that we can translate all data into one format, even when the fit isn't very clean? The question was particularly relevant because we knew we'd be adding graphical data soon, which didn't have anywhere to go in either ACIS or CGM.
This is where we come to our paradigm shift. We found ourselves asking how people will really use the data and how do we modify the interface with this in mind?
For geometry, this part always came for free. You convert files into a modeler, which then provides a full range of APIs for doing something with the new data - query it, change it, whatever you want. As long as the data is usable by the modeler, InterOp's job is done.
So we had mostly avoided this question, but faced with adding new types of data to both CGM and ACIS, we had to truly address it. Even if we add all new data, like graphical data, into the modeler, we have to make sure there are APIs that allow the user to get it back out and use it. That starts to make things very complicated.
We decided to go for a cleaner approach that was very focused on making sure people had a targeted way of using every type of incoming data. Through this examination, we came to a few key realizations:
- The objective of 3D InterOp is not simply to convert from one format to another, but rather to query the source document for different "containers" of data, converting only when necessary.
- Rather than one size fits all, the interfaces for reading such containers should vary with their complexity and downstream use.
- If the data is very simple, then a direct API is a great way to access the data, so, for instance, we've added new APIs for extracting product structure and graphical data in memory. This means that applications can put the data directly into their internal representation without any file interaction, saving steps and time. Here the interface is a little more involved because the user is exposed to more.
- If the data is more complex, the obvious case being geometry, then you need to put it into something that knows how to represent it and that offers tools for operating on it (the modeler). So here, InterOp's primary responsibility is getting the data into the modeler in the way it expects so it is ready for downstream usage. The user interface for this is very simple because all the work goes on behind the scenes.
- There will also be metadata that connects all the different containers together, e.g. attributes and PMI. We're still working out this part.
This is a really cool way of looking at things because it allows us to expand the InterOp interface to handle new data in a concise and flexible way. That's the big picture - which means that in my smaller picture, helping to roll this all out to our customers, there is certainly a lot going on.
Example: Extracting a Single Instance from a Product Structure
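The sketch below illustrates the idea in Python. The `Instance` class and `find_instance` helper are hypothetical stand-ins for a product-structure tree, not the actual 3D InterOp API; the point is that a single occurrence can be pulled out of the assembly tree by its path, without converting the whole document.

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    # One occurrence of a part or sub-assembly in the product structure.
    name: str
    transform: tuple = (0.0, 0.0, 0.0)      # placement, simplified to a translation
    children: list = field(default_factory=list)

def find_instance(root, path):
    """Walk the product-structure tree and return the instance at the
    given path, e.g. ("Car", "Engine", "Piston_1"), or None if absent."""
    if not path or root.name != path[0]:
        return None
    node = root
    for name in path[1:]:
        node = next((c for c in node.children if c.name == name), None)
        if node is None:
            return None
    return node

# Build a toy assembly and pull out a single instance.
piston = Instance("Piston_1", (0.1, 0.0, 0.0))
engine = Instance("Engine", children=[piston])
car = Instance("Car", children=[engine])

hit = find_instance(car, ("Car", "Engine", "Piston_1"))
```

In a real application the extracted instance would carry its accumulated transform and references into the geometry container, rather than the toy translation shown here.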
My last two blogs have been about different pitfalls and the insight needed to properly translate CAD data. I discussed how “sharing” of geometry inside the data structure is a hidden but much-used form of design intent, and how geometry forms are inherently linked to high-level algorithms inside the modeler itself. But I haven’t discussed the healing operations that the Spatial translators perform in order to properly translate the different CAD formats. If you use our translators you know they exist, and people commonly ask about their purpose and efficacy.
To understand InterOp healing, we have to start by borrowing a concept from any undergraduate Data Structures and Algorithms class. Generally, one views a software system as two distinct but highly inter-related concepts: a data structure and an accompanying set of algorithms, or operators. In our case the data structure is a classic Boundary Representation structure (B-rep), which geometrically and topologically models wire, sheet, and solid data. An operator is an action on that data: for example, an algorithm to determine whether a point is inside the solid or not. But the system’s operators are more than just a set of actions. Implicitly, the operators define a set of rules that the structure must obey. Not all the rules are enforced in the structure itself; in fact, many can’t be. But they exist, and it is healing in InterOp that properly conditions the B-rep data to adhere to these rules upon translation.
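To make the data-structure-plus-operator split concrete, here is a toy 2D analogue of the point-inside query: even-odd ray casting against a polygon. This is only a sketch; the real point-in-solid operator works on 3D B-rep data and must handle tolerances and degenerate ray hits (a ray through a vertex, a point exactly on the boundary) that this example deliberately ignores. Those are exactly the kinds of implicit rules the surrounding text is about.

```python
def point_in_polygon(pt, poly):
    """Even-odd ray casting: fire a ray in +x from pt and count how many
    polygon edges it crosses. Odd count means the point is inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                    # edge straddles the ray
            # x-coordinate where the edge crosses the ray's horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

Notice that the operator silently assumes well-formed input: a self-intersecting or gappy boundary makes the crossing count meaningless, which is the 2D version of why healing must condition the data before operators can run.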
As always, a few examples best describe the point. I picked three ACIS rules that are, hopefully, easily understandable.
All 3D edge geometry must be projectable to the surface. Anybody can define a spline-based EDGE curve and a surface and write them to SAT. Basically, jot down a bunch of control points, knot vectors, what have you, and put it in a file that obeys the SAT format. But in order for the data to work properly, geometric rules for edge geometries exist. Specifically, the edge geometry must be projectable to the surface. In short, you can’t have this:
There are many reasons in ACIS for this, but primarily, if the curve isn’t projectable then point-perp operations are not well-behaved. If they’re not well-behaved, finding the correct tolerance (the distance between the curve and the surface) is problematic. If one cannot define correct tolerances, then water-tightness is not achieved and simple operators, like querying whether a point is inside the body, fail.
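As a rough illustration (a sketch, not InterOp code), one can estimate an edge tolerance by sampling the edge curve and measuring how far each sample sits from its point-perp projection onto the surface. The surface here is just the plane z = 0, and the sample points are made up; the whole exercise is only meaningful when the projection is well-behaved.

```python
import math

def edge_tolerance(curve_points, surface_project):
    """Largest distance between sampled curve points and their point-perp
    projections onto the surface. Meaningful only when the projection is
    well-behaved (a unique nearest point for every sample)."""
    return max(math.dist(p, surface_project(p)) for p in curve_points)

# Toy surface: the plane z = 0, whose point-perp projection just drops z.
def project_to_plane(p):
    return (p[0], p[1], 0.0)

# Samples along an edge curve that hovers slightly above the plane.
samples = [(0.0, 0.0, 1e-4), (0.5, 0.0, 2e-4), (1.0, 0.0, 5e-5)]
tolerance = edge_tolerance(samples, project_to_plane)
```

When the projection is not well-behaved, different samples may snap to wildly different surface points, and the number this computes no longer bounds the true gap between curve and surface.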
Edge and face geometry cannot be self-intersecting. A great deal of solid-modeling algorithms work by firing rays and analyzing intersections with different edge and face geometries. In order for any conclusion to be drawn, the results of the intersection must be quantifiable. The problem with self-intersecting geometries is just that: how do you quantify the results in Figure 3? The key observation here: imagine you are walking along the curve in Figure 3, starting from the left side. At the start, the material is on the right side, but after the self-intersection the material changes to the left side. You cross the self-intersection again and the material switches back to the right. This causes endless grief in understanding the results of an intersection.
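An industrial-strength self-intersection check works on the exact curve and surface geometry with tolerances, but the flavor of it can be shown with a simple polyline sketch: test every pair of non-adjacent segments for a proper crossing. The figure-eight test data below is made up for illustration.

```python
def _orient(a, b, c):
    # Sign of the cross product (b - a) x (c - a).
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _segments_cross(p1, p2, q1, q2):
    # Proper intersection: each segment strictly straddles the other's line.
    return (_orient(p1, p2, q1) * _orient(p1, p2, q2) < 0 and
            _orient(q1, q2, p1) * _orient(q1, q2, p2) < 0)

def polyline_self_intersects(pts):
    """Check every pair of non-adjacent segments in a polyline
    approximation of the curve."""
    segs = list(zip(pts, pts[1:]))
    for i in range(len(segs)):
        for j in range(i + 2, len(segs)):  # skip the shared-vertex neighbor
            if _segments_cross(*segs[i], *segs[j]):
                return True
    return False

# A "figure eight" polyline whose first and last segments cross at (1, 1).
figure_eight = [(0, 0), (2, 2), (2, 0), (0, 2)]
```

This is exactly the material-flips-sides situation described above: once the walk crosses itself, left and right of the curve stop meaning anything consistent.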
Tolerances of vertices cannot entirely consume neighboring edges. For a B-rep model to be considered water-tight, tolerances of faces and edges must be understood. Today many kernels have global tolerances plus optional tolerances applied to edge curves and vertices. These tolerances vary depending on neighboring conditions, usually obeying some upper bound. You can think of these tolerances as the “caulking” that keeps the model water-tight. Depending on the quality of the geometry or the tolerances of the originating modeling system, you might need more “caulking” or less; respectively, larger or smaller tolerances on edges and vertices. However, in order to realize a robust Boolean engine, again, rules apply. Consider this:
Above we have Edge Curve 2 encapsulated completely inside the gray tolerant vertex. Again, I can easily write this configuration to the SAT format; however, Booleans cannot process it. It yields horrific ambiguity when building the intersection graphs in the internal stages of Booleans.
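As a sketch of the rule (with made-up numbers, not ACIS internals): an edge is in trouble whenever the tolerance spheres of its two end vertices cover its entire length. A check like this is trivial to state, which is exactly why it belongs to the implicit rules of the system rather than the file format.

```python
import math

def edge_consumed(v1, v2, tol1, tol2):
    """True when the tolerance spheres of an edge's end vertices cover its
    whole length: the configuration Booleans cannot process, which healing
    must repair by shrinking tolerances or removing the degenerate edge."""
    length = math.dist(v1, v2)
    return tol1 + tol2 >= length

# An edge of length 0.001 swallowed by a vertex tolerance of 0.002.
bad = edge_consumed((0, 0, 0), (0.001, 0, 0), 0.002, 0.0)
```
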
This is a list of just three rules; it’s far from comprehensive. But the main point is this: we know that not everything that ends up in an IGES file comes from a mathematically rigorous surfacing or solid-modeling engine. Perhaps people are translating their home-grown data into a system like ACIS so they can perform operations that they could not in their originating system. But in order to perform those operations, the data must conform to the rules of the system. To simply marshal the data and obey a file format, while disregarding the rules, is doing just half the job.
That’s why healing matters.
Posted: June 15th, 2012
Two weeks ago, Spatial hosted a booth at the CONTROL Exhibition in Stuttgart, Germany. I hate to follow John's recent post with another one about a trade show, but this one is worth discussing - let's just call it "Interesting Shows - part 2."
For anybody not familiar with it, CONTROL is a huge show aimed at the dimensional metrology market. Whenever I go to trade shows, I am amazed at the scale of the market (4 huge buildings for this one) and the specificity of the vendors.
The range of devices was quite interesting. There were many varieties of bridge CMMs, but there was also a wide range of hand-held measurement machines. One was a small metal ball with mirrors inside. You put the ball on the part you wish to measure, and a nearby camera shoots a laser at the ball, which reflects it back. A similar idea was a wand that looked like the ones used for frisking at airport security. You poke the point you want to measure, and again a camera measures specific points on the wand, allowing it to infer the location of the point you poked. After wandering the halls for a few days, a simple understanding of all of it gelled in my mind.
All that these devices do is measure points in space
Of course they do that with tremendous variety, which is how they differentiate themselves from each other. Differentiation can be on the accuracy of measurement, point gathering speed, physical access (e.g. you can't put the wing of an airplane in a bridge machine, so you use a hand held device), and much more. But the one thing they have in common is that they're still all trying to do one basic thing - give you three very, very accurate coordinates, many, many times over.
As a small indicator of just how hard this actually is, I saw a few vendors selling only the granite slabs that go into the CMMs. Imagine - there are entire companies whose only business is to make sure that they give you something very flat on which to put your measurement machine. Now that's accurate.
I realize that to anybody working in this market, this is a simple and obvious concept, but sometimes working on software components, you get so focused on what a specific customer's application is doing that you only see the trees and not the forest -- or maybe the points and not the cloud :-).
Which brings me to the software side of things. The hardware is a major investment and differentiator in the CMM market, but good software is essential to run it. A good CMM program will do things like help the programmer and/or machine operator easily determine which points to measure, it'll tell the machine how to do that in the most optimal way, and it will analyze the gathered points and report the results back to the user.
Obviously, Spatial is very involved in this part of the measurement market, particularly as more and more systems are moving to measuring and comparing against 3D parts rather than 2D drawings. One thing in particular struck me throughout the show: almost every discussion I had turned to the subject of PMI (or GD&T) at some point. There was a time not so long ago when using PMI in CMM applications was a new idea. When we first added PMI to our 3D InterOp product line, we had many customers excited about it, but mostly in principle. Very few were actually doing anything with it. Today the discussion is totally different. We're seeing applications do everything from driving automatic test-plan creation to performing automatic post-process comparison between the gathered points and the tolerances originally specified by the designer.
Getting out to see the physical products in person is a tremendous help to anybody working in software. For me, I finally internalized both the simplicity and the complexity of dimensional metrology and how we fit into it.
Anybody out there have suggestions for another good educational experience in your market?