Thursday, July 5, 2007

Test it until it works

After graduate school, my first real job was with an aerospace subcontractor whose products were split 50/50 between military and commercial customers. On my first manufacturing project I started to run behind on shipments because the yield was slipping. When that happened, the more seasoned engineers (who had originally started the project) told me to go through the "marginally failed" units and "test them until they work."

The rationale behind this advice was:

  1. The spec was extraordinarily tight for the product (blame was placed on sales & marketing).
  2. We were up against the accuracy limits of the system.
  3. It didn't really matter if the positioning of the cannon was off by a couple of arc-seconds; the customer had redundant systems in place.

Thinking that this was how it must be done in the "real world," I did what they told me to do and got back on schedule. Granted, I also found some other ways to correct the yield, so going through the marginal failures was only a contributing factor. But it always bothered me.

Setting aside the questionable moral grounds of this situation, let's look at that rationale list from a test engineer's perspective:
  1. Spec too tight. Marketing should certainly know to what tolerance the product can be tested. If they don't, then it is the job of test engineering to inform them. And if marketing plays the word game of "it is guaranteed by design," then it should not need to be tested, now should it?
    Of course, if marketing knows the limits and chooses to ignore them, then you have much bigger problems...
  2. Limited test accuracy. If you are trying to test to a spec that sits at the limit of what you can measure, you have a serious problem. Buy a more accurate tester, build one if you can't buy it, or do enough test system qualification to verify your accuracy; you have no business operating anywhere near those limits (see the sketch after this list). And remember that test equipment manufacturers play "specsmanship" games of their own, so you cannot always trust their numbers.
  3. The customer doesn't really need that accuracy. It is certainly possible that the customer has over-specified what they need. They may have backup systems in place in case the accuracy isn't there; they may have written an over-tight spec because they don't entirely trust the product (or the company); or they may just be clueless. But you can't get into the game of second-guessing the customer. That will get you in deep trouble somewhere down the line.
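
To put some numbers on point 2: the tester's measurement uncertainty has to be small compared to the spec window, and marginal units should be guardbanded out, not retested into passing. Below is a minimal Python sketch of that arithmetic. The function names, the example figures, and the 4:1 test uncertainty ratio (TUR) rule of thumb are my illustrative assumptions, not numbers from the actual project.

```python
# A minimal sketch of guardbanding, assuming a symmetric spec window
# and a known total measurement uncertainty. Names and numbers here
# are illustrative, not details from the project described above.

def guardbanded_limits(spec_low, spec_high, uncertainty):
    """Tighten the test limits by the measurement uncertainty so that
    a unit reading 'pass' is in spec even in the worst case."""
    test_low = spec_low + uncertainty
    test_high = spec_high - uncertainty
    if test_low >= test_high:
        raise ValueError("uncertainty consumes the whole spec window; "
                         "this tester cannot verify this spec")
    return test_low, test_high

def test_uncertainty_ratio(spec_low, spec_high, uncertainty):
    """One common definition of TUR: the spec span over the span of
    the measurement uncertainty. 4:1 is a common rule of thumb."""
    return (spec_high - spec_low) / (2.0 * uncertainty)

# Hypothetical example: a +/-10 arc-second positioning spec measured
# with +/-4 arc-seconds of uncertainty. TUR = 20 / 8 = 2.5, well short
# of 4:1, and guardbanding leaves only a +/-6 arc-second test window.
low, high = guardbanded_limits(-10.0, 10.0, 4.0)
print(f"test limits: {low:+.1f} to {high:+.1f} arc-sec")
print(f"TUR: {test_uncertainty_ratio(-10.0, 10.0, 4.0):.2f}")
```

Note that "testing until it works" is guardbanding in reverse: instead of shrinking the pass window to account for uncertainty, it uses repeated measurements to walk marginal units across the limit.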


So, did I really screw up as a test engineer (though I wasn't called a test engineer back then), or was I just doing what I was ordered to do? I think I will just plead 'youthful transgression' and try not to let it happen again.
