Can you solve this problem in 120 seconds or less?

By Stefanie

IF

• 5+3+2 = 151022
• 9+2+4 = 183652
• 8+6+3 = 482466
• 5+4+5 = 202541

THEN

• 7+2+5 = ?

Do you ever get these riddles sent to you, usually some arithmetic problem?

I'm not particularly good at them, nor can I ever, ever, ever solve those infuriating little brain teasers where you get the metal stick off of the metal ball.

I always wonder what skill it is that makes some people good at these and others not. Is it lateral thinking, exploring indirectly related ideas until you hit on the right one? Is it being extremely observant, noticing every detail about the problem in question so that you can then ask why that detail is present and whether it is important, like a crime detective?

A few days ago, I found myself in this same situation, except that this time the challenge had to do with work… A customer sent in a CATIA product (assembly) file that failed to load for no obvious reason. Now, I am sort of new at troubleshooting foreign data files - I've always been more internally focused. But I'm trying to get better, so I'm playing with things a little more, learning more about the customer side of 3D InterOp, and therefore . . . translation problems. So far, it's fun. Every time I think I'm stuck, I've got some tiny lead to explore that eventually helps me understand the larger problem. Not this time . . .

Our CATIA V5 reader is extremely robust, so I thought, well, something is definitely wrong with this file. Load it into V5 - no problem. Hmmm . . . Get a tea. Repeat the process with the same result. Hmmmm . . . Now I'm stuck. So I gave up on my riddle and went to ask the expert.

At first, he was stumped too. "This is weird, our reader doesn't usually do that unless the file is really corrupt." Then the wheels in his brain started working . . .

In the specified 120 seconds, he found the problem. How? Through prior experience? No, he'd never seen this problem before. Through a guess? No, I actually watched him follow some sort of weird lateral/observant thinking process, taking little tiny baby steps until one small guess pointed him in the right direction.

What were his steps?

1. Translate the file in InterOp - fails
2. Try various InterOp options - still fails
3. Open the file in CATIA V5 - looks fine (here is where I got stuck)
4. Save the file from CATIA V5 to STEP (why did he think of that? is this where the lateral thinking comes in?)
5. Translate the file from STEP to ACIS in InterOp - translates correctly and looks perfect (interesting)
6. Start to save the file from CATIA V5 again, but don't actually do it. (Why?)
7. Stop, think. (Why is he stopping? Oh, the .CATProduct* extension is not on the save list. Weird)
8. Open a different CATIA assembly file in CATIA V5. Open the save dialog box to see if the .CATProduct extension is available, which it is. (Where is he going with this?)
9. Open a CATIA part file in CATIA V5. Open the save dialog box to see what is there - only the .CATPart extension is available, not .CATProduct. (Oh, I think I'm seeing…)
10. Save the customer file to .CATPart (I wonder why .CATProduct isn't available for this file...)
11. Translate in InterOp - looks perfect
12. Change the extension of the customer file to .CATPart.
13. Translate in InterOp - looks perfect (Wow, cool)
14. Reopen the renamed file in CATIA V5 to see if it now has problems - nope, looks fine too (So the file extension was probably tampered with; nothing is actually wrong with the data.)
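The rename trick in steps 12-14 can be sketched in a few lines of Python. The file name here is hypothetical, and the actual InterOp translation step is omitted, since that invocation depends entirely on your installation:

```python
# Sketch of the extension-rename workaround (steps 12-14). The file name is
# a stand-in for the customer's file; the InterOp translation itself is not
# shown because it depends on your setup.
import shutil
from pathlib import Path

source = Path("customer_assembly.CATProduct")  # stand-in for the customer file
source.touch()                                 # create a dummy file for this sketch

# Copy the file under the extension that matches what the data actually is.
renamed = source.with_suffix(".CATPart")
shutil.copyfile(source, renamed)

# At this point you would feed 'renamed' to the InterOp translator
# instead of the original .CATProduct file.
```

Copying rather than renaming in place keeps the original evidence intact, which matters when you are still diagnosing whether the data or the extension is at fault.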

So this was a pretty weird case, and obviously I don't expect to come across it again soon. But I still learned one general trick for troubleshooting CAD translation failures - reroute the file through as many paths as you can think of: saving, opening, closing, translating.

I'm still not sure what the term for this skill is, but I feel I'm a little closer to solving the next riddle.

I'm curious - how do YOU go about solving a challenge such as this, and how do you improve those skills?

*Note - .CATProduct = CATIA assembly/product structure file, .CATPart = CATIA part file

From Tests to Test Strategy - Still Seeking Perfection

By Stefanie

Since we're on a bit of a theme here, I thought I'd continue to explore the topic of testing.

For years now, I have looked for the optimum way to do quality assurance for a complex product (like 3D software components). Having managed both a QA and a development team and worked in both roles as well, I have seen the problem from multiple points of view.

From reading books and forums, and from talking to software engineers outside Spatial, the traditional standalone QA tester group still seems to be the most common approach.

We had such a group for many years at Spatial but found it to have some problems:

• Development tends to rely somewhat on the safety cushion of knowing that another group is checking their work. While Spatial developers take pride in product quality and make efforts to overcome this tendency, it is natural, and it is reinforced by the rest of the organization, which sees two separate groups with separate responsibilities and applies pressure accordingly.
• The schedule seems to naturally separate into a waterfall structure, with implementation done first and QA done second. Despite best intentions, this usually leads to schedule compression and quality compromises at the end of the release cycle.
• For an extremely complex product (3D modeling and CAD translation libraries), the people who develop the product are the ones who understand its behavior best. Often requirements are not well understood until our developers work together with customer developers to find the line between our components and their applications. Injecting an additional person, no matter how competent, into this dialog and expecting them to keep up has always been hard. This can also be seen in the pattern I discussed in my last post - our best tests come from customers rather than from us.

What is the alternative?

Rather than a standalone group, we made each development team (each of which includes a QA engineer) responsible for the overall quality of its output. This had a number of benefits. First, I think ownership of output quality did increase. Our testing has definitely advanced since then, both in coverage and in level of automation. Another benefit was that the schedule problem was eliminated entirely, because each team includes testing in all of its planning.

However, we have found that this approach also has one big drawback: QA work is always done in the context of whatever project the development team is implementing. Which is great, because the project gets full attention! But unfortunately nobody has time to sit down with the overall product and play with it, poke it, break it, create samples, and assess its usability and consistency as a whole. In my opinion, Agile completely fails to answer this question. While it places a huge emphasis on developer testing (or at least XP does), which is great, I've only read passing references to the fact that you also need system-level QA, with no further clarification. Where do you put it? How do those testers get really involved in the process? How do they keep up with the highly technical developer-to-developer discussions without transitioning into a development role themselves (which has happened in a few cases)?

I've actually become pretty comfortable in the belief that there is no perfect answer (which, if you know me, you'd realize is not an easy conclusion for me to accept). I think a healthy tension can be created by accepting and being mindful of the limitations of each approach and oscillating back and forth between embedded and standalone testing. A good example: our RADF team develops a framework on top of our components for application development. Is that really so different from product-level, standalone testing? We have somehow restarted our standalone QA efforts without even knowing it!

Desperately Seeking System Tests

By Stefanie

In one of my former lives as QA manager, one of the problems that continually grated on my nerves was the mysterious nature of our nightly regression test failures. Our test suite was incredibly fragile. In analyzing failures during our convergence period to determine whether they truly indicated a potential customer problem or were just whiny tests, I was constantly forced to make decisions based on vague error information emitted from black-box tests of unknown origin. A lot of manual analysis was required for every release.

In my quest for a better way, I read xUnit Test Patterns by Gerard Meszaros, and holy cow, was I a convert. While the practices of unit testing and test-driven development were conceived to improve code development, I liked some of the ideas so much that I wanted to see if they could be applied at the system testing level.

The book is huge and not simple to summarize (and available for free online in an early form), but in the end, my epiphany came down to one simple idea:

The purpose of testing is to give you feedback about your product. The more effectively a test can do that, the better it performs its job.

What do I mean by an effective test? The test is constructed such that it gives focused and meaningful feedback to the developer about the undesirable behavior it has detected. A few traits that make a test more effective are:

• Is expressively written and named
• Behaves expressively at runtime
• Tests only one thing at a time, ideally even one part of the code
• Is self-checking

For comparison, typical test names, which give me the shivers, are a.stp, a1.stp, a2.stp, bigfile.stp, bigfile2.stp, reallybigfile3.stp. And they fail with errors like "error code 3", "error code 127", … How about some English for crying out loud? How about not making me memorize bizarre sequences of alphanumerics to know why our product is failing?
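To make the contrast concrete, here is a minimal sketch of an expressive, self-checking test, written with Python's built-in unittest. The helper function and the data are stand-ins invented for illustration; the point is the naming and the failure message, not the API:

```python
# A sketch of an expressive, self-checking test. 'surface_count' and the
# model data are hypothetical stand-ins for a real translation result.
import unittest

def surface_count(translated_model):
    # Hypothetical helper: a real suite would query the translated model.
    return len(translated_model["surfaces"])

class StepReaderSphericalSurfaceTest(unittest.TestCase):
    def test_single_spherical_surface_is_translated(self):
        # Contrast with "a2.stp failed with error code 3": the test name and
        # the assertion message say exactly which behavior broke.
        model = {"surfaces": ["spherical"]}  # stand-in for a translated file
        self.assertEqual(
            surface_count(model), 1,
            "Expected exactly one surface after translating a file "
            "containing a single spherical face",
        )
```

When this test fails, the report itself tells the developer what was being checked and what went wrong, with no error-code lookup table required.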

There are other ideas I've forgotten, but I think you get the idea… Tests should ideally all be expressive and single-minded, so you reduce debug time and can understand your coverage and behavior at a glance. Tests that aren't written this way are much less valuable and should be avoided as much as possible. This seems so logical to me that I think it holds true for both unit and system-level tests.

What have we found to be the reality? Sadly, in applying these ideas, we’ve found that such an approach is totally insufficient for building a comprehensive test plan.

For example, let’s say in the 3D InterOp product line that you want to enhance a reverse engineered reader to extract a particular type of surface for the first time.

According to my interpretation of test driven or requirements driven development, one would spend time thoroughly analyzing the input file format – how this type of surface is stored. One would also spend time with the source CAD system in order to generate a variety of instances of this particular surface type, so as to exercise the format and the reader. This sounds pretty thorough.

Next step, just to make sure we’re safe, which of course we are because we wrote awesome CAD translation tests, we gather every customer file we can find, old, new, large, small to "industrialize." I will refer to this as our "monolithic, mysterious test suite."

Then what happens? Sigh. We discover that we’ve only … just … begun. Sadly, our designed tests gave us almost no indication of how the code would behave in a realistic customer situation.

Bummer, I like tests with good names.

So what do we do now? Do we need to go back to our old, fragile ways? Not exactly. We've made two big changes:

1. We still do design tests. But we do not rely on them for product validation. They are used mainly by developers to validate that the code does what they expected.
2. We now write tools that automatically analyze our run-of-the-mill industrial test suites so that we can understand their composition and begin to classify tests more specifically. This has been an incredibly positive step toward better understanding our coverage and requirements, and toward prioritizing development.
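As a toy sketch of the second change (assuming nothing about Spatial's actual tools): STEP files are plain text in the ISO 10303-21 format, so even a short script can scan an opaque suite and report which standard surface entities each file actually exercises:

```python
# Toy classifier for an opaque STEP test suite. The entity keywords are
# standard ISO 10303-21 names; the sample data is synthetic, made up
# purely for illustration.
import re
from collections import Counter

SURFACE_ENTITIES = [
    "PLANE",
    "CYLINDRICAL_SURFACE",
    "SPHERICAL_SURFACE",
    "TOROIDAL_SURFACE",
    "B_SPLINE_SURFACE_WITH_KNOTS",
]

def classify_step_text(text):
    """Count occurrences of each known surface entity in a STEP file body."""
    counts = Counter()
    for entity in SURFACE_ENTITIES:
        counts[entity] = len(re.findall(r"\b%s\b" % entity, text))
    return counts

# Example on a synthetic STEP fragment:
sample = (
    "#10=CYLINDRICAL_SURFACE('',#5,2.0);\n"
    "#11=PLANE('',#6);\n"
    "#12=PLANE('',#7);\n"
)
counts = classify_step_text(sample)
```

Run over a whole directory of a.stp, a1.stp, bigfile2.stp and friends, a report like this at least tells you what your mysterious suite covers, even if the file names never will.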

I don’t even use the phrase "monolithic, mysterious test suite" around these test suites that much anymore. I grudgingly admit that they’re valuable, not because they’re expressive (they’re still not) but because they contain cases that we didn’t think of, lots and lots and lots of them – many more than we could ever write ourselves.

© 2013 Spatial Corp. ACIS and SAT are registered trademarks of Spatial Corp.