As I was sitting here trying to come up with a topic for this post, I was thinking that while I have a million things going on, none of them are post-worthy in and of themselves, and I'm sure nobody wants to read a general post about being busy. Then I had an epiphany: there is something bigger going on that ties it all together.
3D InterOp is going through a paradigm shift.
The longstanding objective of InterOp has been to convert CAD data from one format to another while retaining the highest quality. The interface was originally designed with this very simple objective in mind -- give the user a small, clean interface, independent of input or output format.
This all works pretty well, and the interface is certainly easy to use. When we added the CGM modeler, though, it presented us with some new challenges. Being newly componentized, CGM doesn't have all of the somewhat clunky add-ons that we've put into ACIS to support additional types of incoming data, for instance product structure and PMI. We were faced with a question: do we add these in the same way as we've done in the past, so that we can translate all data into one format, even when the result isn't very clean? The question was particularly relevant because we knew we'd be adding graphical data soon, which didn't have anywhere to go in either ACIS or CGM.
This is where we come to our paradigm shift. We found ourselves asking how people will really use the data, and how we should modify the interface with this in mind.
For geometry, this part always came for free. You convert files into a modeler, which then provides a full range of APIs for doing something with the new data - query it, change it, whatever you want. As long as the data is usable by the modeler, InterOp's job is done.
So we had mostly avoided this question, but faced with adding new types of data to both CGM and ACIS, we had to truly address it. Even if we add all new data, like graphical data, into the modeler, we have to make sure there are APIs that allow the user to get it back out and use it. That starts to make things very complicated.
We decided to go for a cleaner approach that was very focused on making sure people had a targeted way of using every type of incoming data. Through this examination, we came to a few key realizations:
- The objective of 3D InterOp is not simply to convert from one format to another, but rather to query the source document for different "containers" of data, converting only when necessary.
- Rather than one size fits all, the interfaces for reading such containers should vary with their complexity and downstream use.
- If the data is very simple, then a direct API is a great way to access the data, so, for instance, we've added new APIs for extracting product structure and graphical data in memory. This means that applications can put the data directly into their internal representation without any file interaction, saving steps and time. Here the interface is a little more involved because the user is exposed to more.
- If the data is more complex, the obvious case being geometry, then you need to put it into something that knows how to represent it and that offers tools for operating on it (the modeler). So here, InterOp's primary responsibility is getting the data into the modeler in the way it expects so it is ready for downstream usage. The user interface for this is very simple because all the work goes on behind the scenes.
- There will also be metadata that connects all the different containers together, e.g. attributes and PMI. We're still working out this part.
This is a really cool way of looking at things because it allows us to expand the InterOp interface to handle new data in a concise and flexible way. That's the big picture - which means that in my smaller picture, helping to roll this all out to our customers, there is certainly a lot going on.
Below is an example of extracting a single instance from a product structure.
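As a conceptual sketch of what "querying a container" for product structure might look like, here is a small, self-contained Python model. The type and function names here are hypothetical and heavily simplified for illustration; they are not the actual 3D InterOp API.

```python
# Conceptual sketch (NOT the actual 3D InterOp API): a product structure is a
# tree of instances, where each instance carries a placement transform and a
# reference to a shared part definition.
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str  # shared definition, e.g. "bolt"

@dataclass
class Instance:
    part: Part
    transform: tuple            # placement of this occurrence (simplified to a translation)
    children: list = field(default_factory=list)

def find_instance(node, part_name, path=()):
    """Walk the assembly tree and return the path (root..leaf) to the
    first occurrence of the named part, or None if absent."""
    path = path + (node,)
    if node.part.name == part_name:
        return path
    for child in node.children:
        found = find_instance(child, part_name, path)
        if found:
            return found
    return None

# Build a toy assembly: assembly -> bracket -> bolt
bolt = Instance(Part("bolt"), (0.0, 0.0, 5.0))
bracket = Instance(Part("bracket"), (10.0, 0.0, 0.0), [bolt])
root = Instance(Part("assembly"), (0.0, 0.0, 0.0), [bracket])

occurrence = find_instance(root, "bolt")
# Compose the transforms along the path to place this single instance in space.
position = tuple(sum(n.transform[i] for n in occurrence) for i in range(3))
print(position)  # (10.0, 0.0, 5.0)
```

The point of the "containers" approach is exactly this kind of direct, in-memory query: the application can walk the structure and place one occurrence without ever converting the whole document into a modeler.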
Two weeks ago, Spatial hosted a booth at the CONTROL Exhibition in Stuttgart, Germany. I hate to follow John's recent post with another one about a trade show, but this one is worth discussing - let's just call it "Interesting Shows - part 2."
For anybody not familiar with it, CONTROL is a huge show aimed at the dimensional metrology market. Whenever I go to trade shows, I am amazed at the scale of the market (4 huge buildings for this one) and the specificity of the vendors.
The range of devices was quite interesting. There were many varieties of bridge CMMs, but there was also a wide range of hand held measurement machines. One was a small metal ball with mirrors inside. You put the ball on the part you wish to measure, and a nearby camera shoots a laser at the ball, which reflects it back. A similar idea was a wand that looked like the ones used for frisking at airport security. You poke the point to measure, and again a camera measures specific points on the wand which allow it to infer the location of the point you poked. After wandering the halls for a few days, a simple understanding of all of it gelled in my mind.
All that these devices do is measure points in space.
Of course they do that with tremendous variety, which is how they differentiate themselves from each other. Differentiation can be on the accuracy of measurement, point gathering speed, physical access (e.g. you can't put the wing of an airplane in a bridge machine, so you use a hand held device), and much more. But the one thing they have in common is that they're still all trying to do one basic thing - give you three very, very accurate coordinates, many, many times over.
As a small indicator of just how hard this actually is, I saw a few vendors selling only the granite slabs that go into the CMMs. Imagine - there are entire companies whose only business is to make sure that they give you something very flat on which to put your measurement machine. Now that's accurate.
I realize that to anybody working in this market, this is a simple and obvious concept, but sometimes working on software components, you get so focused on what a specific customer's application is doing that you only see the trees and not the forest -- or maybe the points and not the cloud :-).
Which brings me to the software side of things. The hardware is a major investment and differentiator in the CMM market, but good software is essential to run it. A good CMM program will do things like help the programmer and/or machine operator easily determine which points to measure, it'll tell the machine how to do that in the most optimal way, and it will analyze the gathered points and report the results back to the user.
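That last analysis step boils down to a very small idea: compare each gathered point with its nominal position and check the deviation against a tolerance. Here is a minimal, self-contained sketch of that idea, purely illustrative and not any vendor's actual software:

```python
# Minimal sketch of post-process point analysis: for each measured point,
# compute its deviation from the nominal position and flag out-of-tolerance hits.
import math

def report_deviations(measured, nominal, tolerance):
    """Return (deviation, within_tolerance) for each measured/nominal pair."""
    results = []
    for m, n in zip(measured, nominal):
        deviation = math.dist(m, n)  # Euclidean distance (Python 3.8+)
        results.append((deviation, deviation <= tolerance))
    return results

nominal = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0)]          # where the designer says the points are
measured = [(0.01, 0.0, 0.0), (100.0, 0.2, 0.0)]        # what the CMM actually gathered
results = report_deviations(measured, nominal, tolerance=0.05)
print(results)  # each entry: (deviation, within_tolerance)
```

Real CMM software does vastly more (point-to-surface fitting, best-fit alignment, full GD&T evaluation), but at its core it is this comparison, repeated many, many times over.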
Obviously, Spatial is very involved in this part of the measurement market, particularly as more and more systems are moving to measuring and comparing against 3D parts rather than 2D drawings. One thing in particular struck me throughout the show - almost every discussion I had turned to the subject of PMI (or GD&T) at some point. There was a time not so long ago when using PMI in CMM applications was a new idea. When we first added PMI to our 3D InterOp product line, we had many customers excited about it, but mostly in principle. Very few were actually doing anything with it. Today the discussion is totally different. We're seeing applications do everything from driving automatic test plan creation to running automatic post-process comparisons between the gathered points and the tolerances originally specified by the designer.
Getting out to see the physical products in person is a tremendous help to anybody working in software. For me, I finally internalized both the simplicity and the complexity of dimensional metrology and how we fit into it.
Anybody out there have suggestions for another good educational experience in your market?
John's recent post on documentation and behavior driven development reminded me of an interesting experience I had last fall in developing training documentation. Our annual 3D Insiders' Summit (early bird registration is now open, by the way. We hope to see you there!) always gives the sales team a rare opportunity to come together from around the world in one geographic location with a large chunk of the development team. We decided to take the time to have some introductory CGM training for the TAMs (Technical Account Managers), and through the process of elimination, I somehow landed the task of organizing it.
Unfortunately, we were challenged by a number of issues. We only had a day and a half. Most of the developers and TAMs were busy in the months prior preparing presentations and demos for the Summit, including me. Amongst our team, we had varying levels of hands-on experience with CGM, and I had the least experience of all. Given these constraints, how could I ensure that we would make the most of our short time with development?
The first thing I did, of course, was to procrastinate for a few months. What's that saying, "I work best under pressure"? If that's true, there was going to be some good stuff coming for sure. With three weeks left, it hit me . . . people have extended their trips by two days to come to this training, which I haven't even started preparing. Panic! What could we do with the least amount of effort possible? I worked with development to gather any and all presentations we had lying around and threw them together into one messy PowerPoint - something like 60 slides, I think. Uggh, nobody is going to have time to fix this, I don't know how to do it, and if we don't, it will be soooooo boring to sit through.
Hmm, let's avoid that topic for now. Maybe some hands-on exercises would help. I agreed to create a sequence of exercises demonstrating a (very, very) simple CAM mold and die workflow. Brilliant idea, Stef. I've never programmed with CGM before, and my ACIS is even a bit rusty. Oh well, dive in . . .
Early on, I had a pleasant surprise. The team working on componentizing CGM had spent a lot of time thinking about things they'd like to do differently from ACIS, and one of those was a strong documentation structure right from the beginning. The structure is oriented towards hands-on cases, FAQs and tutorials (documentation driven development, as John mentioned), with less emphasis on theory and technical articles. Their work had paid off. I was expecting to need a lot of help, given my novice state, but I was able to develop the whole workflow with only their documentation. I made some mistakes along the way, but I was able to sort them out on my own without insider help.
One problem, though, was that despite the smooth development process, it was still enough work that it wouldn't fit into a two-day training and leave us time to talk with development. Then somebody had the brilliant idea that we should assign the exercises as homework. I decided to turn my whole experience into the homework, mistakes and all. It took me a few hours to create a sequence of 15 assignments, with helpful documentation links, screenshots and hints, but no explanations from me.
Fig. 2 Above: We’re getting ready to create a mold for this swept body. We’ll use a draft to taper the sides of the part for extraction from a mold. Before drafting, we first need to pick faces for the draft.
- Pick the ribbon faces as shown in the picture below. (Hint: the little man is looking in the -X direction from (10, 0, 1) and in the +Z direction from (0, 0, -1).)
The idea worked pretty well. Most people did the homework. Some flew through it in a day, and some ran into difficulties and weren't able to finish. But everyone came into the training with a lot of questions and basic knowledge. During the training time, we skimmed through the messy presentation, spending most of the time asking development about the finer points and harder technical problems. The training seemed truly customized for the audience because, in a sense, we created it as we went. John, what would this be called? CDT (Customer Driven Training), PDD (Panic Driven Development), LOOE (Lucky One-Off Experience)?
I'd be curious to know about your most valuable training experience.
Last August, I made a huge change in my life - I decided to forego a stable, mature relationship and go long-distance. No, not my husband . . . Spatial. My family and I moved to Ireland and I began working for Spatial remotely. At first it was really hard. I missed our time together (all those meetings in the board room, sigh), sharing common experiences (no more bathroom chat, sniff), and all the little things you take for granted until they're gone (bagel Fridays, never running out of milk for your (decaf) coffee, a printer). What made it even harder was the magnitude of the distance - I had moved to a country far away from everyone (only 1 Spatial customer), 7 hours away from headquarters, and not a single decent cup of decaf to be found in the whole country. On top of that, I'd taken on a new role as a Technical Account Manager. I'd never worked directly with customers before, I hadn't done development in quite some time, and now I was responsible for ensuring their success . . . from Ireland! The first week of working in my new 'office' (a cheap IKEA desk in the corner of the living room), I was asking myself, "What have I done?"
Joking aside, the change has been extremely interesting and probably not dissimilar to what our customers experience every day. We sell technically advanced products with somewhat of a learning curve and for the majority of my workday, I'm on my own if I get stuck.
A few things I've learned to do:
- Leverage every resource available - our docs, our samples, keeping up with the latest packages, free viewers, Wikipedia, our internal wiki, you name it.
- Be prompt - When I was in development, I would often focus intently on one project and let other emails and requests slide. This allowed me to concentrate, and somehow I could always get caught up afterwards. I can't do that anymore, because I know I only have a short window of time to interact with people (whether customers or developers), and missing it could cause big delays. I now think of myself as the person who keeps everything moving, and I try to do whatever communication I can to ensure that even if questions aren't answered, at least the other party can still proceed with their work. Heck, I've organized my Inbox for the first time in 10 years.
- Do my homework - On the flip side of replying to every email, I also try to make sure that when I do have time to look at a problem, I take it as far as I possibly can. Did I look at that file in CATIA? Did I open it with the latest version of InterOp? Have I tried an old one too? Have I looked at it in both ACIS and CGM? Should I look up affine transforms before I write to somebody to ask how to scale them? Is that file really corrupt? Maybe I'd better download it again to check.
- Ask the dumb question - When I've gotten as far as I can, I have to get on the phone and in blunt terms, explain to somebody that no, I really don't know how to scale a transform from mm to inches and what does affine mean, anyway? I don't have much time for communication, so the more direct I can be about my shortcomings, the better the likelihood that I'll get what I want. And often it turns out that the reason I can't find the answer is because the problem isn't straightforward, and, similar to many challenges we get from our customers, the asking of the question gives development new information about how to improve our products.
- Make the most of contact time - Skype, IM, phone calls. If I'm on the computer late at night doing something personal, taking 5 minutes to talk to somebody can eliminate an hour of working solo the next day (and I can go to yoga!)
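As a postscript to that transform question: an affine transform is a linear part (rotation, scaling, shear) plus a translation, and converting a placement from mm to inches means scaling both pieces by 1/25.4. Here is a minimal sketch of the idea, with made-up function names rather than any ACIS or CGM API:

```python
# An affine transform maps p -> A @ p + t, where A is a 3x3 linear part and
# t is a translation. Changing units from mm to inches scales the whole
# mapping: both A and t are multiplied by the conversion factor.

MM_PER_INCH = 25.4

def scale_affine(linear, translation, factor):
    """Scale an affine transform so its output units change by `factor`.
    (Illustrative helper, not a real InterOp call.)"""
    new_linear = [[factor * a for a in row] for row in linear]
    new_translation = [factor * t for t in translation]
    return new_linear, new_translation

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
translation_mm = [25.4, 50.8, 0.0]   # a placement expressed in millimeters

A_in, t_in = scale_affine(identity, translation_mm, 1.0 / MM_PER_INCH)
print(t_in)  # approximately [1.0, 2.0, 0.0]: the same placement in inches
```

Note that if the transform also carries units in its linear part (a scaled model rather than a scaled placement), you have to decide which of the two you actually want to convert; mixing them up is exactly the kind of dumb question worth asking out loud.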
It's funny how getting further away from Spatial has actually brought me closer to the customers and prospects I work with. These may be things that they've already learned to do. I'd encourage any of you out there to keep doing more of the same: use all of Spatial's resources to go as far as you can, don't hesitate to call or email, ask your TAM lots of questions, even ones that seem dumb, and above all, go to yoga.
I recently found a really interesting technical article describing the difference between semantic and visual PMI.
First some background . . .
For the first few years that Spatial 3D InterOp offered PMI, there was one topic that really confused me: semantic PMI. What did the term semantic mean when applied to dimensioning? Actually, my lack of understanding went deeper than that: what was the big deal about dimensioning and PMI anyway? (I'm no ME.) I had to do some catch-up to understand what on earth a geomtol was and why it was important. Prior to then, I thought PMI was just +/- some tolerance on the length of an edge, right?
So I learned that it is more complicated than that, and that there wasn't even a standard way of representing PMI. In fact, not only wasn't there a standard, there wasn't even a common ideology on the structure. There have historically been two competing ideologies: semantic and graphical. Spatial started offering semantic initially to meet the automation needs of our CAM and measurement customers, while graphical PMI was more popular at the time. In more recent years, these two ideologies have started to merge, as we'll show in upcoming releases (sorry for the marketing; it's hard to talk about this topic without discussing our product line).
So about the article . . .
As Fischer nicely explains, "semantic data captures the meaning" whereas graphical is presented "for human consumption." In computer science terms, you get a class structure in memory which represents the PMI, providing access to its specifics, such as geometric tolerance type or magnitude, through a class method or property - this enables automatic creation of machine paths and test plans. The article also discusses some of the inherent difficulties with semantic PMI, which we struggle with too, by the way, stemming from the lack of a common standard.
An example of this that we've seen is in ProE/Creo, which allows you to put tolerances on driving dimensions. Driving dimensions are the various dimensions defining the features which ultimately result in the final solid, but they may not necessarily be dimensions that are meaningful to the final solid. See the example below in which I've created a geomtol between the solid and a construction plane. Ok, this is a very simple example of a feature, but the significant point is still illustrated: unless you understand the relationship between that plane and the solid (i.e. the feature, i.e. ProE's "secret sauce"), that dimension and geomtol are meaningless. This is an inherently different style of tolerancing than what is used in either UG/NX or V5, which makes standardizing the data between them difficult.
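To make the semantic-versus-graphical distinction concrete, here is a minimal Python sketch of what "a class structure in memory" can look like. The types and names are illustrative only, not Spatial's actual classes:

```python
# Sketch of the two PMI ideologies (illustrative, not Spatial's classes):
# semantic PMI is a structured object a program can query and act on;
# graphical PMI is just strokes and text positioned for a human to read.
from dataclasses import dataclass
from enum import Enum

class TolType(Enum):
    FLATNESS = "flatness"
    POSITION = "position"

@dataclass
class SemanticGeomTol:
    tol_type: TolType       # what kind of geometric control this is
    magnitude: float        # tolerance zone size, in model units
    attached_faces: list    # which geometry it governs

@dataclass
class GraphicalAnnotation:
    polylines: list         # strokes that merely *draw* the symbol
    position: tuple         # where to render it in 3D space

# A downstream CMM application can act on the semantic form directly:
tol = SemanticGeomTol(TolType.FLATNESS, 0.05, ["face_12"])
plan = ""
if tol.tol_type is TolType.FLATNESS:
    plan = f"probe {tol.attached_faces[0]}, fit plane, check range <= {tol.magnitude}"
print(plan)
```

There is no comparable way to drive a test plan from a `GraphicalAnnotation`; a program would have to reverse-engineer the meaning from the drawn symbol, which is exactly why automation-oriented customers want the semantic form.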
Anyway, to make a long story short, in my quest to understand a topic that confused me, I found out that there is general confusion and inconsistency on the topic . . . but people are working on it.