Monday, July 23, 2007

Medical test system accuracy

Medical test systems, like any measurement system, have a finite accuracy and precision. Do doctors know the accuracy of their tests? Do they inform their patients?

For example, I have a friend who is diabetic. She went in to get her "A1C" tested, and it came back at 6.4%. Since the American Diabetes Association recommends keeping that number at 6.5 or below, she felt pretty good.

So I did some checking on Wikipedia and found the following quote:
"Interpreting results is somewhat difficult because (1) laboratory results can differ depending on the analytical technique and (2) biological variation between individuals can be up to 1 percentage point (i.e., two individuals with the same average blood sugar can have A1C values that differ by up to 1 percentage point)."

So the uncertainty is at least ±0.5 percentage points, which means her "true" value could plausibly be anywhere from about 5.9 to 6.9 - straddling that 6.5 threshold. The Wikipedia article also notes that the results can be off if there was a recent change in diet.
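To make the arithmetic concrete, here is a quick back-of-the-envelope check in Python (my own sketch, using only the numbers from this post):

    # Back-of-the-envelope check: does a reading of 6.4 really clear a 6.5 threshold
    # once you allow for roughly +/-0.5 percentage points of variation?

    measured = 6.4       # reported A1C, in percent
    uncertainty = 0.5    # rough half-width of the lab/biological variation
    threshold = 6.5      # the ADA recommendation mentioned above

    low, high = measured - uncertainty, measured + uncertainty
    print(f"True value could plausibly be anywhere from {low:.1f} to {high:.1f}")

    if high <= threshold:
        print("Comfortably at or below the recommendation")
    elif low > threshold:
        print("Clearly above the recommendation")
    else:
        print("Inconclusive - the interval straddles the threshold")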


This particular doctor did not discuss uncertainty levels, or maybe he didn't know. Maybe he just assumed that 6.4 really is 6.4. So I guess the lesson to take away from this is that you should always ask your doctor about the test itself. If he doesn't know, then do your own research.

Tuesday, July 17, 2007

Linux on test systems


So I just downloaded the first in a series of Agilent white papers on using Linux in test systems. The paper gives a brief history of Linux, comments on licensing structures, lists available tools, and discusses instrument control from within Linux.

In a clever marketing move, Agilent is releasing these papers episodically. The other papers in the series are:
  • "Using Linux to Control LXI Instruments through VXI-11"
  • "Using Linux to Control LXI Instruments through TCP Communication"
  • "Using Linux to Control USB Instruments"
  • "Using Linux in Soft Real-Time Applications"

Normally I would skip these papers. They tend to be self-serving and talk about things that are pretty obvious. But I would like to hold Agilent to a higher standard (that may be naive of me). Plus, in my test group we've seriously considered using Linux on test systems - so I'm interested in what they have to say.

The paper also lists four reasons why you'd use Linux in a test system:
  1. Threat of software obsolescence. What happens when the version of Windows your system runs on is no longer supported?
  2. Flexibility/performance. The Linux OS can be customized.
  3. Stability. It's very unlikely to go belly-up on you.
  4. Leveraging know-how and software. There is a lot of UNIX experience out there that can be extended to Linux.
There are also a couple of practical reasons that we've considered at work:
  1. Technicians and operators are less likely to screw around with the computer (browse the net, download games) if it doesn't run Windows.
    • As a corollary, engineers won't try to run other programs, analyze data with Excel, or remotely log in to their own computers on a Linux test station.
  2. Security is better. Some of our new test systems will likely be hooked up to the internet for remote access, and there are just too many viruses out there looking for systems running Windows.
At any rate, I will definitely be reading the other papers in the series. I'll post my comments on them as they come out.
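In the meantime, just to set a baseline for the TCP paper, here is roughly what talking to an LXI instrument over a raw socket from Linux boils down to. This is a minimal sketch in Python (my choice, not necessarily Agilent's); the IP address is a placeholder, and port 5025 is just the commonly used raw-SCPI socket port - check your instrument's manual.

    import socket

    # Minimal sketch: ask an LXI instrument to identify itself over a raw TCP
    # ("SCPI socket") connection. The address and port below are placeholders.
    INSTRUMENT_IP = "192.168.1.50"
    SCPI_PORT = 5025   # commonly used raw-SCPI port; confirm with the instrument manual

    with socket.create_connection((INSTRUMENT_IP, SCPI_PORT), timeout=5) as sock:
        sock.sendall(b"*IDN?\n")                 # standard SCPI identification query
        reply = sock.recv(4096).decode().strip()
        print("Instrument says:", reply)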

Monday, July 16, 2007

Where does testing belong?

I was once tasked to head a newly formed testing group. It consisted of me, another programmer, two contractors, and a mechanical engineer. The group's responsibilities were to support existing test and manufacturing systems, create new test stations as appropriate, and automate certain manufacturing steps that were still being done partly by hand. The VP of Engineering wanted to call it Test and Automation, until I pointed out that it might not be wise to say I work for the T&A group....

But where does testing belong? I've seen three different choices in the half-dozen or so companies I've worked for:
  • A completely separate group. Each member of the group supports one or more projects.
  • Part of the engineering department.
  • Part of the manufacturing department.
I think it really depends on the size and focus of the company. If you're in a small startup, all the engineers wear multiple hats. Sometimes you're called on to test, sometimes to help with product design, and sometimes to just get customer samples out the door. A separate test group may not make sense.

Of course, if you already have three or more test engineers then you may want to group them together. Get those guys together and they share ideas and solutions, just like any other engineering team.

If the company's products are mature, then test is probably part of production. Your job is to get product tested and out the door, and you may be called on to debug test stations, process flow, or maybe the product itself.

If test is part of the engineering team, then the test systems may evolve almost as fast as the product itself. When the design engineers add a new feature, or marketing has new specs or a new customer, test engineering has plenty of work to do.

At least, those are my thoughts on the subject.

Saturday, July 14, 2007

Design of Experiments (DOE)

I recently finished a course on Design of Experiments. It was really fascinating stuff, and very applicable to manufacturing issues. In fact, the "final exam" consisted of a group of us picking a topic and building up an experiment - that design will be implemented where I work over the next couple of weeks. I wholeheartedly recommend reading up on this subject, or taking a class (especially if your employer pays for it).

The question I now have is, how applicable is this to test engineering? Is it the responsibility of the test engineer to get involved in DOEs? Should test engineers consider learning how to use stat software like Minitab or JMP (both of which have extensive DOE support)?

I think it really depends on where you work. For example, if part of your job as a test engineer is dealing with SPC charts and processing test data, then DOE is a logical extension of that work. Or if you have a test system with multiple settings which need to be optimized, then DOE would be helpful. However, if you spend most of your day writing code for test systems, then it's probably just a "nice to know" thing.
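As a tiny illustration of the "multiple settings to optimize" case, here is a sketch of a two-level full factorial design for three made-up test-station settings. Minitab or JMP will generate (and analyze) this for you, but it's worth seeing how simple the design matrix itself is. The factor names and levels here are entirely hypothetical.

    from itertools import product

    # Two-level full factorial design for three hypothetical test-station settings.
    factors = {
        "laser_power":   (10, 20),    # low and high levels, arbitrary units
        "dwell_time_ms": (50, 100),
        "temperature_C": (25, 40),
    }

    runs = list(product(*factors.values()))   # 2^3 = 8 combinations
    print("run  " + "  ".join(factors))
    for i, levels in enumerate(runs, start=1):
        print(f"{i:3d}  " + "  ".join(str(v) for v in levels))

    # In practice you would randomize the run order, measure the response for
    # each run, and then fit a model (main effects plus interactions).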


BTW, the course instructor said that if you read any single book on the subject, it should be Understanding Industrial Designed Experiments by R. Launsby and S. Schmidt. It is, of course, at Amazon.com.

Friday, July 13, 2007

USB instruments - new Keithley DMM

So I just found out today that Keithley has a new digital multimeter (DMM) out that a) connects via USB and b) costs less than $1000. The 2100/120 appears to have most of the standard functionality you'd expect from a good DMM. I've used plenty of Keithley's equipment in the past and have had very few complaints. One of these days I'll post something comparing the different test equipment manufacturers, but not today.
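As an aside, here is roughly what talking to a USB instrument looks like from code. This sketch uses Python with the PyVISA library (one convenient option, not something Keithley ships with the meter); the resource string is a placeholder - you'd grab the real one from list_resources().

    import pyvisa   # third-party VISA wrapper; it needs a VISA library installed underneath

    rm = pyvisa.ResourceManager()
    print(rm.list_resources())   # USB instruments show up as USB0::<vendor>::<model>::<serial>::INSTR

    # Placeholder resource string for illustration; use the one printed above.
    dmm = rm.open_resource("USB0::0x05E6::0x2100::1234567::INSTR")
    print(dmm.query("*IDN?"))           # ask the meter to identify itself
    print(dmm.query("MEAS:VOLT:DC?"))   # take one DC voltage reading (typical SCPI for a DMM)
    dmm.close()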

What interests me the most is that it's USB compliant. In my experience, test engineers are pretty conservative about switching to new technologies. It makes no sense to switch to something until you're sure it won't screw up your measurements. When I worked at HP I was part of a team introducing a new spectrometer that had an Ethernet port. That port was used for internal access to the embedded code, among other things, and I thought it would be great for customers to have TCP/IP access to the instrument instead of just HP-IB. The idea was shot down by marketing because it was considered "too new".

The GPIB interface has been around for a LONG time, and even longer as HP-IB (I think it's in the HP/Agilent handbook that you have to call it HP-IB). Is it time to get rid of it? The downsides include:
  • Those cables can be very expensive
  • You have to buy and configure a separate interface card
  • The cables and connectors are bulky
  • Interface speed can be a limiting factor
Of course there are a couple of upsides as well: the cables can be well-shielded, and connectors can be daisy-chained.


But personally, I've already used two instruments with USB interfaces, and I really like the idea. Now that other companies appear to be offering USB, I'm ready to kick the GPIB habit. Is there a 12-step program for that?

Thursday, July 12, 2007

Visual Basic vs. LabVIEW vs. C++

I would hazard a guess and say that most test systems use LabVIEW as the development environment (with LabWindows as a subcategory). Why? I've programmed test stations in all three languages over the last 15 years (for testing, I first used C++ in 1994, VB v2 in 1996, and LV v4 in 1996), and I used to be a big fan of VB. Now I've used LV almost exclusively for the past 4 years.

Does it have to do with "getting there first?" Well, let's see. C++ has been around since 1985 (and regular C since 1972). LabVIEW was created in 1986 for the Macintosh platform and was ported to Windows in 1992. Visual Basic was introduced in 1991. Looking at dates, either C++ or LV wins.

Is it because LV has a lot more language support? I don't think so...C++ has a worldwide user base, and Microsoft has historically had plenty of support for its flagship programming language.

Is it because LV is focused more on testing? Maybe. C++ is a general-purpose language, which could be both a strength and a weakness. VB was historically used a lot for database interfacing and front-end GUI apps. LV has its diagram that looks like an electrical schematic, ready support for all kinds of test instruments, and National Instruments' (NI) claim that it is designed for "test, measurement, and control applications."


Personally, I think it has more to do with having a champion than anything else. C++, while created by one person, is a universally accepted standard language and lots of companies build compilers for it. Visual Basic is supported just by Microsoft, but I think they've always wanted it to be an alternative to C++ while at the same time promoting Windows as an OS.

But LabVIEW has always been about testing. NI has always claimed it is the perfect tool for test apps, they continually push to build a community of users for it (free introductory seminars across the country, user groups in larger cities), and they keep developing new test tools for it (motion control, database tools, vision). Yes, LV is supported by a single company, and if NI ever goes belly up a LOT of people are in trouble. But sometimes a single backer with deep pockets can help.

Comments on this are appreciated.

Friday, July 6, 2007

From Test To Sales

The career of Field Applications Engineer has its own entry on Wikipedia. Also known as an applications engineer or sales engineer, a field apps engineer is usually the liaison between the customer and engineering. He must have a technical background and a good understanding of the product, but he must also be able to communicate well with the customer. After all, he is part of the sales department.

I was a field apps engineer for two years, and I did it part time for a year with a different company. Furthermore, I've met several other apps engineers who transitioned from test engineering. Granted this is purely anecdotal evidence, but is there a good career path from one to the other?

Let's look at why a test engineer might do well in this position:
  • He has a solid technical background.
  • If the company has multiple product lines, he probably has written tests or helped to test those products, so he has a breadth of knowledge.
  • A test engineer who has seen various product failures can help customers who may have similar problems.
  • A test engineer has seen the negatives of the product (i.e., failures) but is still focused on the product (making it work right by correcting failures, or at least weeding out bad products). With this attitude, a field applications engineer can build a layer of trust with the customer while at the same time helping to sell him on the product.

But in the end it all still depends on the person.

Thursday, July 5, 2007

Test it until it works

After graduate school, I worked for an aerospace subcontractor firm as my first real job. The company’s products were split 50/50 between the military and commercial fields. On my first manufacturing project I started to run behind on shipments because the yield was slipping. When that happened, I was told by more seasoned engineers (who had initially started this project) to go through the "marginally failed" units and "test them until they work."

The rationale behind this statement was that:

  1. The spec was extraordinarily tight for the product (blame was placed on sales & marketing).
  2. We were up against the accuracy limits of the system.
  3. It didn't really matter if the positioning of the cannon was off by a couple arc-seconds. They had redundant systems in place.
Thinking that this is how it must be done in the "real world," I did what they told me to do and got back on schedule. Granted, I figured out some other things to do to correct the yield, but going through the marginal failures was a contributing factor. Still, it always bothered me.

Stepping aside from the questionable moral grounds of this situation, let's look at that rationale list from a test engineer's perspective.
  1. Spec too tight. Marketing should certainly know to what tolerance the product can be tested. If they don't, then it is the job of test engineering to inform them. If marketing plays the word game of "it is guaranteed by design" then it should not need to be tested, now should it?
    Of course, if marketing knows the limits and chooses to ignore them, then you have much bigger problems...
  2. Limited test accuracy. If you are trying to test to a spec that is at the limit of what you can measure, then you have serious problems. Buy a more accurate tester, build one if you can't buy it, or do sufficient test system qualification to verify your accuracy (one common mitigation, guard-banding, is sketched after this list). You have no business being anywhere near those limits. Test equipment manufacturers themselves can play "specsmanship" games, so you cannot always trust their numbers.
  3. The customer doesn't really need that accuracy. I'm sure it's possible that the customer has over-specified what they need. They may have other backup systems in place in case the accuracy isn't there, they may have an over-tight spec because they don't entirely trust the product (or the company), or they may just be clueless. But you can't get into the game of second-guessing the customer. That'll get you into deep trouble somewhere down the line.
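On point 2, one standard defensive practice is guard-banding: pull the pass/fail limits in by the measurement uncertainty so that anything that passes is in spec even in the worst case. A minimal sketch in Python, with invented numbers:

    # Guard-banding sketch: shrink the acceptance limits by the measurement
    # uncertainty so a passing unit is in spec even in the worst case.
    # All numbers here are invented for illustration.

    spec_low, spec_high = 9.90, 10.10   # the (too-tight?) product spec
    uncertainty = 0.03                  # the test system's measurement uncertainty

    test_low = spec_low + uncertainty
    test_high = spec_high - uncertainty
    print(f"Guard-banded test limits: {test_low:.2f} to {test_high:.2f}")

    def passes(measurement: float) -> bool:
        """Pass only if the reading falls inside the guard-banded limits."""
        return test_low <= measurement <= test_high

    print(passes(10.08))   # False - too close to the upper spec limit to be sure
    print(passes(10.00))   # True

The cost, of course, is yield: some good-but-marginal units get rejected, which is exactly the pressure this post is about.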


So, did I really screw up as a test engineer (although I wasn't called a test engineer back then), or was I just doing what I was ordered to do? I think I will just plead 'youthful transgression' and try not to let that happen again.

Sunday, July 1, 2007

The first post

Let's get this first post out of the way.

Similar to how I felt when I created my first website a dozen years ago, I only wanted to start a blog about something I knew. I did not want a generic "read about me and what I think" creation - that's not who I am. Recently, I have accepted the fact that I am a test engineer - not a researcher doing testing because it needs to be done, not a programmer who likes technical details - a test engineer. So I googled "Test Engineer Blog" and found a handful of blogs about software testing but not much about other types of testing. I decided to try and fill the niche.

I chose Savvy as part of the title for three reasons: 1) the definition [shrewdly informed; experienced and well-informed; canny] fits what I think a test engineer needs to be, 2) I wanted a word that would stand out without sounding pretentious, 3) Captain Jack Sparrow made the word cool again.