I recently found a really interesting technical article describing the difference between semantic and visual PMI.

First some background . . .

For the first few years that Spatial 3D InterOp offered PMI, there was one topic that really confused me: semantic PMI.  What did the term semantic mean when applied to dimensioning?  Actually, my lack of understanding went deeper than that: what was the big deal about dimensioning and PMI anyway?  (I'm no ME.)  I had to do some catch-up to understand what on earth a geomtol was and why it was important.  Before that, I thought PMI was just +/- some tolerance on the length of an edge, right?

So I learned that it is more complicated than that, and that there wasn't even a standard way of representing PMI.  In fact, not only was there no standard, there wasn't even a common ideology about the structure.  There have historically been two competing ideologies: semantic and graphical.  Spatial initially offered semantic PMI to meet the automation needs of our CAM and measurement customers, while graphical PMI was more popular at the time.  In more recent years these two ideologies have started to merge, as we'll show in upcoming releases (sorry for the marketing; it's hard to talk about this topic without discussing our product line).

So about the article . . .

As Fischer nicely explains, "semantic data captures the meaning" whereas graphical data is presented "for human consumption."  In computer science terms, you get a class structure in memory that represents the PMI and provides access to its specifics, such as the geometric tolerance type or magnitude, through a class method or property; this is what enables automatic creation of machine paths and test plans.  The article also discusses some of the inherent difficulties with semantic PMI (which we struggle with too, by the way), stemming from the lack of a common standard.
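To make that concrete, here is a minimal C++ sketch of what "semantic" access might look like.  The class and method names (GeomTol, GetToleranceType, GetMagnitude, GetReferencedFaces, NeedsFinishPass) are hypothetical illustrations, not the actual 3D InterOp API; the point is simply that the tolerance is queryable data rather than a picture.

    #include <vector>

    // Hypothetical semantic PMI object: the callout is data, not a drawing annotation.
    class GeomTol {
    public:
        enum class Type { Flatness, Position, Perpendicularity, Profile };

        Type   GetToleranceType() const { return m_type; }
        double GetMagnitude()     const { return m_magnitude; }   // e.g. 0.05 (mm)
        const std::vector<int>& GetReferencedFaces() const { return m_faceIds; }

    private:
        Type             m_type      = Type::Flatness;
        double           m_magnitude = 0.0;
        std::vector<int> m_faceIds;   // faces of the B-rep the callout applies to
    };

    // Because the meaning is queryable, downstream code can act on it automatically,
    // e.g. add a finishing pass when a face carries a tight flatness callout.
    bool NeedsFinishPass(const GeomTol& tol) {
        return tol.GetToleranceType() == GeomTol::Type::Flatness
            && tol.GetMagnitude() < 0.01;
    }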

An example of this that we've seen is in ProE/Creo, which allows you to put tolerances on driving dimensions.  Driving dimensions are the various dimensions defining the features which ultimately result in the final solid, but they are not necessarily dimensions that are meaningful to the final solid.  See the example below, in which I've created a geomtol between the solid and a construction plane.  OK, this is a very simple example of a feature, but the significant point is still illustrated: unless you understand the relationship between that plane and the solid (i.e. the feature, i.e. ProE's "secret sauce"), that dimension and geomtol are meaningless.  This is an inherently different style of tolerancing than what is used in either UG/NX or V5, which makes standardizing the data between them difficult.

 

Anyway, to make a long story short, in my quest to understand a topic that confused me, I found out that there is general confusion and inconsistency on the topic . . . but people are working on it.

 

Contributed by Paul Cardosi, CFO, Spatial

Are you old enough to remember 1986? If you are reading this blog, that year has more significance in your life than you realize! In late 1986 three aspiring entrepreneurs flew back to Boulder, CO from New York after pitching their new idea for a 3D software company to an investor.  Once back in Boulder they settled into their routine of coding all day in their ‘office’ above Murphy’s Pub and later heading downstairs for some brews while convincing each other of future success.

The second part of this routine would soon change!  An investor called and wanted to fund their company, which they named Spatial Technology, laying the foundation for the first commercial 3D geometric modeling kernel – ACIS.

In 2011 Spatial celebrated 25 years in business, which is a great achievement for a venture-backed software company.  In 2000 Spatial got a new owner (Dassault Systèmes), but many aspects of the startup culture remain at Spatial, like cheap soda, bagel Fridays and a free old-school Pac-Man machine (the R&D guys always seem to have the high score). Unfortunately there isn’t a pub downstairs!

For the record I was still at ‘college’ in the UK in 1986 so I have a good excuse for having a vague memory. I think Talking Heads, Space Hoppers and Top Gun were all the rage!

I recently met one of those 1986 aspiring entrepreneurs, who told me of Spatial’s UK influence. Three early employees played a key role in Spatial’s ACIS product, a cornerstone of Spatial’s longevity. They were Alan, Charles and Ian (the ACI in ACIS), who joined Spatial from a UK solid modeling development partner called Three Space, Ltd. They were instrumental in getting Spatial on the right path to success.

I would ask that you join me and raise a virtual glass to say Happy 25th Birthday Spatial!

Answer: when it’s a 'HappyPathPoint'.

Happy New Year everyone! I left off my last post with a dilemma: I wanted to design the interface for a function that people will call when they want to find the closest point on a surface to a test point. The problem is that the answer to this question is usually, but not always, a single, isolated point. Every now and then the answer is a set of (more than one) points. For example, all the points on a spherical surface are 'closest' to its center. So how does one design the interface so that the person writing the client code naturally accounts for this subtlety, while not forcing him to deal with an awkward interface?

I already stated that the wrong thing to do is to simply document the subtlety and hope the customer will notice it. A better solution is to (in addition to documentation) make the name of the function reflect the subtlety: prefer 'FindAClosestPoint' over 'FindClosestPoint'. The problem with this idea is that it doesn’t actually deal with the problem – how is the customer to tell if he’s on the happy path, or has run into the complex condition? And again, it relies on the customer noticing that something is up (rather than having the compiler tell him).

My preferred solution (which we first thought of while working on an ACIS project to make our internal interfaces more robust) is to introduce a new object called a 'HappyPathPoint'; the idea is that the function returns this object rather than a raw point:
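A minimal sketch of what such a class might look like (the Point and Surface types and the exact signatures are placeholders for illustration, not the real ACIS interface):

    #include <vector>

    struct Point { double x, y, z; };   // placeholder geometry type
    class Surface;                      // placeholder for a real surface class

    // Illustrative sketch only.
    class HappyPathPoint {
    public:
        // True in the typical case: the answer is a single, isolated point.
        bool IsPoint() const;

        // The isolated point; by contract, only valid after IsPoint() has been checked.
        Point Value() const;

        // The general answer, for the rare case of several equally close points.
        const std::vector<Point>& PointSet() const { return m_set; }

    private:
        std::vector<Point> m_set;
        mutable bool       m_checked = false;   // supports the contract check on Value()
    };

    // The query returns the wrapper rather than a raw Point.
    HappyPathPoint FindAClosestPoint(const Surface& surface, const Point& testPoint);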

As you can see, HappyPathPoint encapsulates both possible answers: if IsPoint() returns true, then the answer is (the typical case of) an isolated point and it’s safe for the customer to use the Value() method to get the point. If IsPoint() returns false, however, then the answer is (the extremely rare case of) a more general point set, and the customer should use a different branch in his analysis code:
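Client code along these lines (again, just a sketch):

    // Given some Surface 'surface' and Point 'testPoint' from earlier in the analysis:
    HappyPathPoint result = FindAClosestPoint(surface, testPoint);
    if (result.IsPoint()) {
        // Typical case: a single closest point.
        Point closest = result.Value();
        // ... continue the analysis with 'closest' ...
    } else {
        // Rare case: several equally close points (e.g. the center of a sphere).
        const std::vector<Point>& candidates = result.PointSet();
        // ... decide what the algorithm should do with the whole set ...
    }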

In reality, the customer’s initial implementation for the branch where IsPoint() returns false will simply be to throw an error (and put a project in the backlog to think about what his algorithm should do). This is still a big improvement over the case where the customer’s code ignores the subtlety – what was previously a subtlety is now simply an extra question about the answer, and the code for dealing with the atypical case is isolated at the point where that case is first introduced.

One interesting question in all this is "What should the behavior of Value() be if it is called before IsPoint() has been called?" I see three possibilities:

A) Silently return an arbitrary point in the point set
B) In release builds, return an arbitrary point; when contract checks are on, throw
C) Always throw

My preferred answer is B, especially if it is used in conjunction with a code coverage tool that ensures that your test suite will always hit the (invalid) call to Value() (with contract checks turned on, of course). The reason that this works is that the question I asked was NOT "What should the behavior of Value() be if it is called when IsPoint() is false?" That case will only show up in the atypical situations, and so typically won’t be hit in customer testing. The contract is that the customer must validate that the HappyPathPoint truly represents an isolated point before calling Value(), even if the HappyPathPoint happens to be an isolated point. This ensures that the client code is correctly written, even if none of its test cases hit the atypical case.
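In code, option B might look something like the sketch below; CONTRACT_CHECKS_ON stands in for whatever mechanism your build uses to turn contract checks on and off, and the member names continue the earlier sketch:

    #include <stdexcept>

    // IsPoint() records that it was called, so Value() can enforce the contract.
    bool HappyPathPoint::IsPoint() const {
        m_checked = true;
        return m_set.size() == 1;
    }

    Point HappyPathPoint::Value() const {
    #ifdef CONTRACT_CHECKS_ON
        // The caller must check IsPoint() first, even when the answer really is an
        // isolated point; that way the check fires in ordinary test runs, not only
        // in the rare multi-point case.
        if (!m_checked)
            throw std::logic_error("HappyPathPoint::Value() called before IsPoint()");
    #endif
        return m_set.front();   // release behavior: an arbitrary point from the set
                                // (the sketch assumes the set is never empty)
    }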

For those of you who speak C#, you should see similarities with the 'nullable' class that allows one to wrap a value class up into something that can be tested against null. As with nullable, the compiler will warn you if you try to use a HappyPathPoint where a Point is expected. HappyPathPoint is more powerful than nullable, however: nullable doesn’t protect (as a contract check) against an untested call to Value(), and it doesn’t provide an alternative answer.

I hope that I’ve convinced at least someone out there that this is a good idea. I don’t think it’s overly burdensome, and it makes the subtlety obvious in the interface. It is important not to adopt this idea as a one-off, however – if you’re going to use it, it should be implemented uniformly across your interface. The code snippets above are just an example; an interesting thing to try might be to write a HappyPath template class.
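For what it's worth, here is one hedged sketch of what such a template might look like; how the 'unhappy' alternative is represented (a set of T here) would depend on the problem at hand:

    #include <stdexcept>
    #include <utility>
    #include <vector>

    // Generic wrapper: T is the happy-path answer; the full answer is modeled as a set of T.
    template <typename T>
    class HappyPath {
    public:
        explicit HappyPath(std::vector<T> answers) : m_answers(std::move(answers)) {}

        // True when the answer is the simple, typical one.
        bool IsSimple() const {
            m_checked = true;
            return m_answers.size() == 1;
        }

        // The simple answer; contract-checked as discussed above.
        T Value() const {
            if (!m_checked)
                throw std::logic_error("Value() called before IsSimple()");
            return m_answers.front();
        }

        // The full answer for the atypical case.
        const std::vector<T>& All() const { return m_answers; }

    private:
        std::vector<T> m_answers;
        mutable bool   m_checked = false;
    };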

The last thing I want to say is that we’ve been thinking about this idea for years, and still don’t have a good name!! Any ideas for a better one? 'PointCandidate' maybe?

 


To finish up this series of posts: what Gregg's post described happened a few years ago. Since that first team room, we've kept and dropped some Agile practices, and each development team has evolved its own processes. But the original idea has stuck. He did have to coerce developers into the room the first time. But it worked. Why? Because he correctly identified the criteria for making them successful, and, well, I guess people liked that.

Now most Spatial developers still work in team rooms . . . but by choice. Some occasionally, some full time.  I guarantee that if you walk past three empty developer offices, the developers are huddled together over laptops somewhere in a dark conference room, and Gregg is often with them, skipping a meeting. The rest of the company got so fed up with not having any conference rooms that they built a custom team room for the die-hards. In that first team room, Spatial had its first real breakthrough on getting a multi-threaded ACIS API to scale well (api_entity_point_distance), something we'd been trying to do for years. And I believe there may have been a few unit tests written too . . .

 


Spatial Developers in a Team Room

So Stefanie's right, but a slight introduction before the criteria. We were floundering big time with Agile. Our original belief was a take-no-prisoners, do-everything-the-book-said, all-R&D strategy. What we ended up with were endless meetings and mind-numbing philosophical debates on “what Agile really was”. It didn’t help when we heard outside sources say, “well, I’ve seen it before and it was like this”. But these outside sources were hapless in helping us install it at Spatial. I personally felt we were chasing a ghost.

So this is what we did: first, we carved out one significant 'delivery' of R&D that needed to be done by the next release (later to be called an epic). I let the rest of the R&D activities, the smaller miscellaneous ones (bug fixes, minor enhancements), go on their merry way unencumbered by Agile. I assigned six people to this epic delivery, for a six-week duration (later to be called a sprint), and gave them the following conditions and promises:

#1 You will work on this epic and this epic only. No more having a delivery team schedule a bunch of disparate activities for an iteration: minor bug fixes, major project work, along with all the other crap that happens in the daily life of developers. If someone in the group was a specialist who only knew a certain area of code, I would prioritize whatever needed to be done on that code later. Or, I would MAKE someone else (not on the epic) learn on the job. (This is why, largely, we only did this with a subset of our R&D staff.) But the major point was to allow them to do Agile on one specific epic, and to not be interrupted!

#2 You will all go in one room and you will not work from your offices or cubes. BUT, and this is very important, we will never, ever, take your private office or cube away. (I had to get a personal promise from our CEO.) It was this condition that started our notion of a Team Room.

#3 The epic will be well defined and relatively narrow or specific. I wouldn’t let an epic (or team room) start unless we had some reasonable definition and a prototype worked out. These prototypes might have been done individually or by a small group of people, but we had a pretty good idea of how we wanted to solve the problem before the team room started. Hence, it was reasonably well-defined and the group could start running on day one.

#4 All the resources needed for the epic will be in the team room. There were to be NO external dependencies. I didn’t want to hear, “we’re blocked, Fred from the blah-blah group needs to deliver this code before we can move on”. If Fred had something that needed to be done for that epic, he was in the team room for the entire six weeks. (This meant it was his ass on the line as well.) This included QA, documentation, and a position we created called 'Team Room PM'. The Team Room PM was the priority man. He was on the team and made all final decisions. (But to be sure, the epic was planned and scheduled by the higher-level PM group outside of development.) This did mean we had to think carefully about who was to be in the room. (Hence, number three is important.)

#5 It ends in six weeks. After it’s over, you can spend time working independently: prototyping future epics, scanning code, fixing bugs, reading, and most importantly, thinking. You can go back into a team room when the next one starts and you’re ready.

#6 If you fail, fail quickly and decisively. I’m a big believer in having an environment where people feel 'safe to fail'. It’s not that I wanted epics or team rooms to go bust; it’s more that I wanted transparency. We work on a very complex and difficult piece of software. I’ve had what I thought were great ideas that didn’t pan out, and if they didn’t, you had to be man enough to say, “well, rats, that didn’t work”; and management needs to know when to stop feeding a dead horse. (Of course, the horse could be the project or you!)

#7 Lastly, the rest of XP and Agile is up to you. Pair program, don’t pair program. Unit test, don’t unit test. Play planning poker, don’t play planning poker. Decide your own iteration schedule: one-day iterations, four-day iterations, one-hour iterations . . . I don’t care! This might have been exhaustion on my part, but this is where it all got interesting . . . all the things that we were trying to force earlier, especially pair programming and teaminess (not a word), now happened naturally. We didn’t have to force them or set up goofy metrics to measure how well we were adhering. (One brain-dead idea from the first year was to award an iPad to the developer who logged the most pairing hours.)

P.S. The one aspect of Agile (or XP, if you like) I didn’t address was 'vertical slicing'. Okay, there is a lot of XP/Agile that I didn’t address, but I’m a big believer that vertical slicing is a most central and important concept. I didn’t want the team rooms to ignore this, but again, I didn’t want to force it. The question in my head was . . . if conditions #1 - 7 were in place, would vertical slicing become a natural practice, like pair programming did?

You’ll have to wait for the next post to find out!
