For this post, I thought I would talk about SPAR 2012, which I attended in Houston last week.   

For those of you who are not familiar with it, SPAR is a conference/trade show for the medium and long-range scanning industries.  From a geometry perspective, this means dealing with point clouds.  Lots of point clouds.  Lots of really big point clouds.  For example, at one of the talks I attended, the speaker discussed dealing with thousands of terabytes of data.  Another speaker noted that just developing the systems to manage and archive the huge amount of data being produced is going to be a major challenge, let alone processing it all.

As an example of this, the very first thing I noticed when I walked into the exhibit hall was the huge number of companies selling mobile terrestrial scanners.  These are laser scanning units that you strap onto the back of a van, an SUV, an ultralight airplane, or a UAV - there was even a lidar-equipped Mini Cooper on display.  You then drive (or fly) along the path you want to scan, acquiring huge amounts of data.  The data is then processed to tell you about, for example, potholes or highway signs or lane lines on roads (the scanners are often coupled with photographic systems), or vegetation incursions on power lines (typically from aerial scans).  When I attended two years ago, this was a fairly specialized industry; there were only a few vans on display, the companies that made the hardware tended to be the ones doing the scans, and they also wrote the software to display and interpret the data.

This year, it seemed like this sector had commoditized: there were at least eight vehicles on display, the starting price of scanning units had come down to about $100K, and it seemed that there were vendors everywhere you looked on the display floor (and yes, it does sound odd to me that I’m calling a $100K price point “commoditized”).  Another thing that I was looking for, and think I saw, was a bifurcation into hardware and software vendors.  I asked several of the hardware vendors about analysis; this year they uniformly told me that they output their data in one of several formats that can be read by standard software packages.  I view this specialization as a sign of growing maturity in the scanning industry; it shows that the industry is moving past the pioneer days when a hardware manufacturer had to do it all.

On the software side, I saw a LOT of fitting pipes to point clouds.  This is because a large part of the medium-range scanning market (at least as represented at SPAR) is capturing “as built” models of oil platforms and chemical plants, especially the piping.  The workflow is to spend a few days scanning the facility, and then send the data to a contractor who spends a few months building a CAD model of the piping, from which renovation work on the facility can be planned.  One of the sub-themes that ran through many of the talks at the conference was “be careful of your data – even though the scanner says it’s accurate to 1mm, you’re probably only getting ½ inch of accuracy”.  This was driven home to us at Spatial a few years ago when we bought a low-end scanner to play around with and discovered that a sharp black/white transition caused a “tear” in the surface mesh produced by the scanner (due to differential systematic errors between white and black).

A practical example of this was discussed in a talk by one of the service providers; he gave a case study of a company that tried to refit a plant using the workflow described above.  Early on they discovered that the purported “as built” model (built by humans from the scanned data) wasn’t accurate enough to do the work – new piping that should have fit according to the model wouldn’t fit in reality (for example, all the small-diameter pipes had been left out of the model completely).  This is because a real-world plant isn’t a bunch of clean vertical and horizontal cylinders; pipes sag, they’re stressed (bent) to make them fit pieces of equipment, and so on.  The company went back and had the job re-done, this time keeping a close tie between the original scans and the model at all stages.  I really appreciated the attention to detail shown in this and other talks; in my opinion it’s just good engineering to understand and control for the systematic errors that are introduced at every stage of the process.

Two more quick observations:

  • Several people mentioned the MS Kinect scanner (for the gaming console) as a disruptive technology.  My gut is telling me that there’s a good chance it will truly commoditize the scanning world, and that photogrammetry might take over from laser scanning.
  • I didn’t expect my former life as a particle physicist to be relevant at a scanning conference.  Imagine my surprise when I saw not one but TWO pictures of particle accelerators show up in talks (one of them in a plenary session!)

Next year’s SPAR conference is in Colorado Springs – I hope to see you there!


John's recent post on documentation and behavior driven development reminded me of an interesting experience I had last fall in developing training documentation.  Our annual 3D Insiders' Summit (early-bird registration is now open, by the way - we hope to see you there!) always gives the sales team a rare opportunity to come together from around the world with a large chunk of the development team.  We decided to take the time to have some introductory CGM training for the TAMs (Technical Account Managers), and through the process of elimination, I somehow landed the task of organizing it.

Unfortunately, we were challenged by a number of issues.  We only had a day and a half.  Most of the developers and TAMs were busy in the months prior preparing presentations and demos for the Summit, including me.  Amongst our team, we had varying levels of hands-on experience with CGM, and I had the least experience of all.  Given these constraints, how could I ensure that we would make the most of our short time with development?

The first thing I did, of course, was to procrastinate for a few months.  What's that saying, "I work best under pressure"?  If that's true, there was going to be some good stuff coming for sure.  With three weeks left, it hit me . . . people have extended their trips by two days to come to this training, which I haven't even started preparing.  Panic!  What could we do with the least amount of effort possible?  I worked with development to gather any and all presentations we had lying around and threw them together into one messy PowerPoint - something like 60 slides, I think.  Uggh, nobody is going to have time to fix this, I don't know how to do it, and if we don't, it will be soooooo boring to sit through.

Hmm, let's avoid that topic for now.  Maybe some hands-on exercises would help.  I agreed to create a sequence of exercises demonstrating a (very, very) simple CAM mold and die workflow.  Brilliant idea, Stef.  I've never programmed with CGM before, and my ACIS is even a bit rusty.  Oh well, dive in . . .

Early on, I had a pleasant surprise.  The team working on componentizing CGM had spent a lot of time thinking about things they'd like to do differently from ACIS, and one of those was a strong documentation structure right from the beginning.  The structure is oriented towards hands-on cases, FAQs, and tutorials (documentation driven development, as John mentioned), with less emphasis on theory and technical articles.  Their work had paid off.  I was expecting to need a lot of help, given my novice state, but I was able to develop the whole workflow with only their documentation.  I made some mistakes along the way, but I was able to sort them out on my own without insider help.

One problem, though, was that despite the smooth development process, it was still enough work that it wouldn't fit into a two-day training and leave us time to talk with development.  Then somebody had the brilliant idea that we should assign the exercises as homework.  I decided to turn my whole experience into the homework, mistakes and all.  It took me a few hours to create a sequence of 15 assignments, with helpful documentation links, screenshots, and hints, but no explanations from me.  Here's an example:

Fig. 2: We're getting ready to create a mold for this swept body.  We'll use a draft to taper the sides of the part for extraction from a mold.  Before drafting, we first need to pick faces for the draft.

- Pick the ribbon faces as shown in the picture below.  (Hint: the little man is looking in the -X direction from (10, 0, 1) and in the +Z direction from (0, 0, -1).)

The idea worked pretty well.  Most people did the homework.  Some flew through it in a day, and some ran into difficulties and weren't able to finish.  But everyone came into the training with a lot of questions and basic knowledge.  During the training time, we skimmed through the messy presentation, spending most of the time asking development about the finer points and harder technical problems.  The training seemed truly customized for the audience because in a sense, we created it as we went.  John, what would this be called?  CDT (customer driven training), PDD (Panic Driven Development), LOOE (Lucky, One-Off Experience)?

I'd be curious to know about your most valuable training experience.

 

In my last post, I introduced our idea of B-rep Health and the notion of "legal" but bad B-rep modeling data. Literally, the day after publishing that post, a beautiful, classic case came into development where a "remove face" operation failed due to unhealthy B-rep data. And again, as we see so many times, the culprit was bad translation (an unknown third-party translator). It’s a nice example, not just because the pathology can be described so conceptually (it can, as you will see), but because it shows the subtle, implicit information that is maintained inside a B-rep data structure; information you might not know is used. And lastly, it shows why the fundamentals of B-rep data translation are so important.

So consider a modeling scenario like this: start with a basic shape that we call a wiggle. It’s a block with a free-form (B-spline) surface as the top face (picture 1). Fillet one of the edges along the top, creating a filleting surface as shown (picture 2). Now build some form of a feature that cuts the filleting surface in two. Here we simply build a notch in the body (picture 3).

Now translate the part to IGES and import it into a different modeling kernel, like ACIS or CGM. From here, it’s not uncommon that one would "defeature" the part, perhaps for a CAM operation. This involves taking the notch and removing it, which should produce the original wiggle with the filleted edge.  Of course, I wouldn’t be writing this blog if something didn’t go wrong; one would expect this to always work, but there can be trouble. First, though, let’s take a quick look at how the remove algorithm works.

The remove algorithm is simple: you unhook and delete the input faces (the faces which are to be removed). You then extend the neighboring faces (called the moat ring), intersect them with each other, and use the curves generated from the surface/surface intersections to heal the gap and build the needed edges. So in this case, we will intersect (and possibly extend) surface A and surface B shown below.

Now we are at the key point of the analysis. Surface A and surface B are the exact same surface, and it’s ill-fated to try to intersect a surface with itself (this should be self-evident). Before translation – in the original B-rep – the face presiding over surface A and the face presiding over surface B are different, but they both point to the exact same geometric surface underneath. This is called "sharing". When the surfaces are shared, the remove algorithm knows they are the same and doesn’t do the ill-fated intersection. Everything is taken into account and the remove operation works with the original B-rep. Ok, but what happened during translation? And here is where good translation matters. Let’s now look at how this model got translated.
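
To make the moat-ring step and the sharing guard concrete, here is a minimal C++ sketch. It is only an illustration of the idea under assumed names – Face, Surface, intersect_surfaces and heal_gap are stand-ins I invented, not the ACIS or CGM API.

    #include <cstddef>
    #include <memory>
    #include <vector>

    struct Surface { /* geometric definition, e.g. B-spline data */ };
    struct Curve   { /* curve produced by a surface/surface intersection */ };
    struct Face    { std::shared_ptr<Surface> surf; };  // topology pointing at (possibly shared) geometry

    // Hypothetical surface/surface intersector, stubbed out for the sketch.
    std::vector<Curve> intersect_surfaces(const Surface&, const Surface&) { return {}; }

    // After the input faces have been unhooked and deleted, extend the neighboring
    // "moat ring" faces and intersect them pairwise; the resulting curves heal the
    // gap and become the new edges.
    std::vector<Curve> heal_gap(const std::vector<Face*>& moat_ring)
    {
        std::vector<Curve> new_edges;
        for (std::size_t i = 0; i < moat_ring.size(); ++i) {
            for (std::size_t j = i + 1; j < moat_ring.size(); ++j) {
                const Face* a = moat_ring[i];
                const Face* b = moat_ring[j];
                // The sharing guard: if both faces point at the same underlying
                // surface object, skip the pair rather than intersecting the
                // surface with itself.
                if (a->surf == b->surf)
                    continue;
                const std::vector<Curve> curves = intersect_surfaces(*a->surf, *b->surf);
                new_edges.insert(new_edges.end(), curves.begin(), curves.end());
            }
        }
        return new_edges;
    }

With shared surfaces (the original B-rep), the guard fires and the self-intersection never happens; once the sharing is lost, the same loop happily asks the intersector to intersect a surface with itself.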

If you have weak translation, you might do something like this (and this, I believe, is the scenario behind this bug): the translator had some method of processing faces (this could have been done when writing to IGES) that went face by face, writing out each face and the surface underneath it. If two different faces pointed to the same surface, it didn’t care; it just processed the surfaces as if they were unique. Basically, the translator didn’t bother to share. Now the future remove operation "thinks" they’re different surfaces, and this causes the intersectors endless grief. I suppose you could go back to the unnamed company and tell them this is bad, please fix it. Perhaps they will tell you they get sharing correct in some cases but not all (after all, sharing is not a complex topic; it’s a concept that was built into even the first B-rep technologies). But for them it’s simply a performance benefit that reduces size and processing time, and the model’s OK without it. Maybe they will get to fixing it, maybe they won’t. After all, even if you don’t have precise sharing, the model got translated and passes any industrial checker. But performance and checking!? That’s not the point. If you’re modeling, it’s an entirely different story. Modeling operations will not work, because you are removing key information from the B-rep that these operations need. [1]
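
Here is a hypothetical sketch of the difference – not any real IGES writer, and the helper names are made up. The careful exporter remembers which surfaces it has already written, so two faces that share a surface reference the same exported entity; the weak exporter writes the surface again for every face, and the sharing silently disappears.

    #include <memory>
    #include <unordered_map>
    #include <vector>

    struct Surface {};
    struct Face { std::shared_ptr<Surface> surf; };

    using EntityId = int;

    // Hypothetical file-writing helpers, stubbed out for the sketch.
    static EntityId write_surface_entity(const Surface&) { static EntityId next = 0; return ++next; }
    static void     write_face_entity(EntityId /*surface_id*/) {}

    // Weak translation: every face gets its own freshly written surface, so the
    // importing kernel sees N distinct surfaces where the original B-rep had one.
    void export_without_sharing(const std::vector<Face>& faces)
    {
        for (const Face& f : faces)
            write_face_entity(write_surface_entity(*f.surf));
    }

    // Careful translation: each surface is written once and reused, preserving the
    // sharing that downstream modeling operations (like remove) rely on.
    void export_with_sharing(const std::vector<Face>& faces)
    {
        std::unordered_map<const Surface*, EntityId> written;
        for (const Face& f : faces) {
            auto it = written.find(f.surf.get());
            if (it == written.end())
                it = written.emplace(f.surf.get(), write_surface_entity(*f.surf)).first;
            write_face_entity(it->second);
        }
    }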

Ok, so maybe this turned out to be a rant; and having a rather intense five-year-old son, I can’t believe I have to come to work and talk about "sharing". But these things matter, along with so many other fundamental principles that need to be taken into account during translation (the subject of future blogs). I’ve learned that working in a company that has both a modeling product and a translation product greatly helps with the insight (and motivation) you need to get translation right. As I said in my last post, choose your translation solution wisely.



[1] I should follow up by saying that in ACIS we could add a check that always tests whether two surfaces are the same prior to intersection (comparing the data definition, i.e. knot vectors, control points, etc.). But the next billion surface/surface intersections will not have identical surfaces, and you have now introduced an unneeded check that always has to be done. We don’t want to go there!
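
For concreteness, a data-level comparison would look roughly like the sketch below; BsplineSurface here is a hypothetical struct invented for illustration, not an ACIS class. The point is that the check has to walk knot vectors and control points, which is exactly the cost you don’t want to pay on every intersection when the surfaces are almost never identical.

    #include <cstddef>
    #include <vector>

    struct Point3 { double x, y, z; };

    struct BsplineSurface {
        std::vector<double> knots_u, knots_v;
        std::vector<Point3> control_points;   // stored row-major
    };

    // Exact comparison of two surface definitions, element by element.
    bool same_definition(const BsplineSurface& a, const BsplineSurface& b)
    {
        if (a.knots_u != b.knots_u || a.knots_v != b.knots_v ||
            a.control_points.size() != b.control_points.size())
            return false;
        for (std::size_t i = 0; i < a.control_points.size(); ++i)
            if (a.control_points[i].x != b.control_points[i].x ||
                a.control_points[i].y != b.control_points[i].y ||
                a.control_points[i].z != b.control_points[i].z)
                return false;
        return true;
    }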

 


Beyond TDD

By John S.

Today, I’ll be diving into an alphabet soup of TLAs.  For this I apologize in advance – those TDD guys started it!!  First, though, a little reminder: Spatial is a software component company, rather than an end-user product company.  So most of the discussion below is in the context of developing components that will be used by customers in their applications, as opposed to developing applications that will be used by end users to get something done.

For those of you who are unfamiliar with it, TDD stands for "Test Driven Development".  It’s a methodology espoused by the agile/extreme community that advocates using unit tests to drive the interface design of your software.  A core principle of TDD is that it is a design (as opposed to testing) methodology – the idea is that if you use tests to drive the interface design, then testability is built into the application.

When we started using agile methods in our ACIS product five or six years ago, one of the techniques we adopted was TDD.  And over the course of the next few years I began to see a pattern: we would discover, while writing the documentation for new functionality, that the interface we’d come up with during TDD often wasn’t quite right.  It’s the usual effect of writing things down – only when you’re documenting how a customer is supposed to use an interface function do you discover that you only have an 80% understanding of what you want.  Writing the documentation down makes you work through the nasty and subtle 20% that’s the hard part, and lets you understand what it was that you didn’t understand when you thought you understood what it was that you wanted.  (Understand? :)  This led us to the concept of something we called "Documentation Driven Development" (DDD).  The idea is that, when putting a new piece of functionality into ACIS, we should write the documentation first.  This documentation can then be used to drive the stories, which drive the acceptance tests, which drive the software development (which is where TDD comes in).  In retrospect, this is pretty obvious; the alternative of writing the stories before you write the documentation leads to stories that might not be relevant to the actual requirements.  Not surprisingly, when I googled “Documentation Driven Development” yesterday I got a lot of hits – the part in the first one where he talks about writing sample code after the code is written is exactly the same thing we went through.

But wait!  There’s more!!!

The same 80/20 argument that I applied to the interface functions above also applies to the documentation itself.  The best way to know if your new software component will fulfill the needs of customer applications is to try to write an application against it.  This is just the usual "eat your own dog food" principle.  In the same way that stories without documentation lead to a tendency to miss the forest (documentation) for the trees (stories), documentation without an application can lead to a set of documented functions that don’t quite fit together when you try to build an app.  This led us to generalize DDD to the concept of "Application Driven Development" (AppDD), where a sample application is used to drive development of component software.

Note that nothing above is new.  The Wikipedia article on TDD refers to methodologies such as Acceptance Test Driven Development (ATDD) and Behavior Driven Development (BDD); these and a host of others are all pushing the general idea of driving development based on application scenarios.  In fact, classic Agile methodology says that the acceptance tests should drive the need for the interfaces that are developed using TDD, and that stories should represent vertical slices.  What I think might be new is the following:

When you think in terms of vertical slices (i.e. write a story), the vertical slice needs to extend into your customer’s work environment. 

If you’re developing a software component (such as ACIS), then the story should be "as an application developer, I want to introduce a CreateBlock feature into my application", NOT "as an application developer, I want to be able to call an ACIS function to create a block."  If you’re developing a mold-design application named MoldApp, then the story should be "As a user, I want to be able to import a mold I designed with MoldApp into MachineApp, so that the tool paths for cutting the die can be calculated."  The best way to do this is to have a sample version of your customers’ environment within your organization, and implement your stories within that environment.

Next time, I’ll talk about how we are applying this principle in our CGM product.


A Guilty Secret

By Eric

Ok.  I admit it.  I sometimes use text-based debugging.  It’s ugly, and when there is a better way, I jump to the alternative, but sometimes "printf"s hidden behind a preprocessor define are the fastest way to figure out what is going on.
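
As a minimal sketch of what I mean (the macro name and build flag are made up, not anything from ACIS): the trace lines compile away entirely unless the build defines the flag, so they cost nothing in a normal build.

    #include <cstdio>

    // Compiles to nothing unless ENABLE_TRACE_LOG is defined for the build.
    #ifdef ENABLE_TRACE_LOG
      #define TRACE_LOG(...) std::fprintf(stderr, __VA_ARGS__)
    #else
      #define TRACE_LOG(...) ((void)0)
    #endif

    // Hypothetical call site: note what the code was doing and with what values.
    void add_point(double x, double y, double z)
    {
        TRACE_LOG("add_point: (%g, %g, %g)\n", x, y, z);
        // ... the real work that uses x, y, z ...
    }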

Generally I prefer using trace breakpoints, visual breakpoints (described in an earlier blog), or assertions to test hypotheses about what went wrong.  Other times, a fancy tool (a memory access checker, profiler, or thread safety checker) is just the thing.  Basically, I look at the bug description, reproduce the issue, and then visualize it.  Given reasonable knowledge about what the code is trying to do, a picture usually gets me a short list of what could be wrong.  Then I try to eliminate possibilities.

But sometimes the test cases needed to reproduce a problem are too big for visual breakpoints to tell the whole story.  In these cases, "some breadcrumbs" from the call stack are just the thing I need.  If the edge facets went wrong, I log all the places where the faceter made an AF_POINT.  If the quad tree didn’t work, I log each step of its creation to see where we made an unnecessary split or failed to make a necessary one.
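
A sketch of the breadcrumb idea, with made-up names rather than the real faceter code: append one stable, plain-text line per event so the log from a good run and the log from a bad run can be compared afterwards.

    #include <cstdio>

    // Append one breadcrumb line per event, tagged with the step that produced it,
    // so a missing or extra point stands out when two runs are compared.
    void crumb_point(const char* step, double x, double y, double z)
    {
    #ifdef ENABLE_BREADCRUMBS
        if (std::FILE* f = std::fopen("breadcrumbs.txt", "a")) {
            std::fprintf(f, "%s point (%.6f, %.6f, %.6f)\n", step, x, y, z);
            std::fclose(f);
        }
    #else
        (void)step; (void)x; (void)y; (void)z;
    #endif
    }

Run it once on a case that works and once on the failing case, and the first line where the two breadcrumb files diverge usually points at the step that went wrong – which is where the diff tool below comes in.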

A good text editor and a "diff"-like program give you leverage that you wouldn’t otherwise get.  Together with pictures of the problem, this can be just the thing.

So: do you have any guilty secrets related to debugging?
