Thursday, December 27, 2007

Statistical Analysis packages - JMP

When I posted on DOEs, I briefly mentioned a statistical analysis package called JMP (pronounced "jump"). I want to write more about that.

For some people, test engineering is about putting together the system. You build it, you ship it out to your customer (internal or external), and you move on to the next project. For others, test engineering is about managing the test process. The job revolves around SPC (statistical process control) chores, preventive maintenance, & setting up new test runs.

But sometimes test engineering involves analyzing the results of the testing. I've spent plenty of time putting together graphs in Excel. But that sort of work can overload you when you have huge data sets & multiple sets of variables to consider. You need a more serious piece of software than just a spreadsheet for filtering down the data, looking at box plots, and plotting trends.
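
For a sense of what that looks like outside a spreadsheet, here is a minimal sketch using pandas and matplotlib to filter a large result set and draw box plots by lot - a stand-in for the kind of interactive slicing JMP does. The file name and column names are hypothetical.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical test results: one row per device, with lot, temperature,
    # and a measured parameter. Column names are placeholders.
    df = pd.read_csv("wafer_test_results.csv")

    # Filter down to one test condition, then compare lots with box plots.
    subset = df[(df["temp_C"] == 25) & (df["status"] == "PASS")]
    subset.boxplot(column="threshold_current_mA", by="lot_id")
    plt.title("Threshold current by lot (25 C, passing parts)")
    plt.suptitle("")  # drop pandas' automatic super-title
    plt.show()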

I know a guy who is a great statistician; he loves Minitab. JMP is the only package of that kind I've ever used, but he says it's a good piece of software as well. I know I like it, and more and more I've been using it instead of Excel when I need to examine data.

There are numerous books published by SAS (the company that wrote JMP) on how to use the software. I picked up one of them, Elementary Statistics Using JMP, and have read about half of it by skipping around to specific sections. The book is clearly written, well organized, and worth the money.

Tuesday, December 25, 2007

Peace On Earth

It's been the Christmas season here in New England. Snow on the ground, shoppers in the malls. And while I know that plenty of people who read this post do not celebrate this holiday, I would like to wish everyone peace on earth. I don't think anyone can disagree that the world needs more of it.

Thursday, December 13, 2007

Vendor books about testing - National Instruments

Back in October I posted about recent test system manuals Keithley and Agilent had written. In a post last month I mentioned that NI had also put out a manual that I would eventually review as well. Here it is.


First Thoughts
NI is very good at marketing. They interact well with customers, get knowledgeable sales people embedded with key industries, and support their hardware and software. So when I say they excel at marketing it is truly meant as a compliment. Yet this proficiency also hurts them. Read on and you'll see.


Sections
The manual has four sections and 14 chapters divided among them. The first is just an introduction, the second discusses test system guidelines, the third goes over improving a system, and the last one consists of case studies.

Section 1
This single chapter reads more like a position paper on NI's superiority than an introduction to a test system guide. Pity. On a single page (1-5), for example, the author references three different marketing white papers. My hopes for the manual diminished.

Section 2
There were two saving graces to this section. Chapter five has a good overview of different buses, and chapter seven reviews the PXI standard. Otherwise it is more marketing than substance.

Section 3
These three chapters were something of a revelation. The marketing was minimized in favor of looking at 1) ways to speed up a test, 2) measurement accuracy, and 3) system longevity. Cool.

Section 4
I liked the first case study in this section. Describing software-defined radio testing, it was short & to the point. But the other three case studies were all but useless. Okay, so Microsoft used LV and a PXI chassis to test the XBox - why not spend a few pages describing the test architecture or the obstacles that were overcome in the design? Each case study reads like an extended press release.


Summary
Unfortunately, this testing manual is more like the Agilent manual (bad) than the Keithley manual (good). It pushes a theme of "NI products are the best thing since sliced bread." The only time it mentions Agilent is to take them to task for their lack of support for IEEE 1394 (VEE isn't mentioned at all). The manual could have used a good editor - the exact same graph, bandwidth as a function of latency, shows up an improbable FIVE times under different titles.

In other words, if it weren't for Section 3, I would write off the whole manual as a waste of space on my hard drive.

Outsourcing a test station, part 2

Last month I posted about my experiences with outsourcing test systems. Here's another update: training.

The schedule has slipped - they usually do - but it was a result of scheduling conflicts and money issues, not the system itself; the system was ready. Regardless, I did have an engineer from the contract manufacturer fly out to be trained on the system. He seemed like a good guy & knowledgeable, but there was a definite language gap. Furthermore, I only had two days to show him a system I've been using for a couple of years.

Speaking slowly and struggling for words, I think I eventually taught him enough that he can run the station when there are no big problems. We started with an overview of what the system does, the separate components, and the basic procedure. He spent half a day just testing devices. We also went over common maintenance issues and problem points to check when it won't run. But there were some things that didn't translate well.

But now I think I have the answer. I'm using my digital camera (a nice 5 megapixel Canon) to film common tasks and maintenance fixes. Maybe my narration will help, maybe not. But if a picture is worth a thousand words, how much is a high-res AVI file worth?

Sunday, December 9, 2007

Virtual Instruments

I said in a post last month that I would read & review Designing Next Generation Test Systems - An In-Depth Developers Guide from National Instruments. I'm practically done now & will post my thoughts in a couple of days. But parts of this manual neatly dovetailed with a conversation I had earlier this week about virtual instruments.

NI is big on the concept of a virtual instrument - use the computer in place of the benchtop instrument to do the measurements. I've used this concept for potentiometers and oscilloscopes. But I just don't think this works in all cases, or even most cases. I have two reasons to back this opinion.

Complicated real-world measurements
There are some properties that are more than just a voltage or current. You need a good deal of physical hardware to actually acquire the data. Several examples I'm familiar with include optical spectrometers, digital communications analyzers, and (more esoteric) high energy particle detectors. A good deal of additional circuitry, physical devices, and sometimes patented techniques are involved.

Test Expertise
Hardware companies that build test equipment often have a good deal of knowledge and experience making that kind of measurement. That expertise is often built into the benchtop instrument that performs the measurement. In most of those situations I would rather have the actual instrument than spend time and effort trying to duplicate that expertise myself.


I am not saying that virtual instruments are invalid. I think they work well for simple measurements or for measurement techniques that are well established (e.g. - the modern triggered oscilloscope was invented over 60 years ago). But sometimes you need the actual hardware.

Monday, December 3, 2007

LabVIEW Certifications

In 1999, near the height of the internet bubble, Microsoft Press published a book titled After the Gold Rush, with the more descriptive subtitle, “Creating a True Profession of Software Engineering.” I started writing this post as a diatribe against National Instruments (NI), but ended up writing about how they are supporting what this book proposed.


The Book

Steve McConnell wrote the book. If you don’t know who McConnell is, then you don’t spend much time working on major software projects (or you do it in isolation). Several of the books he’s written over the years (Code Complete and Software Project Survival Guide to name a couple) I recommend as required reading for serious programmers. I don’t know if I would label him a genius – I’ve only talked with him at a couple of seminars about 7 years ago, so I hardly know him – but he has a unique knack for gathering the best practices of a particular subject and positioning them in an organized fashion under one tidy roof.

In other words, he knows his stuff. And his thesis in this particular book was that software engineering needs to be licensed like other engineering professions, or like dentists, doctors, attorneys, nurses, etc. There are too many programmers who learned how to code here and there, follow no standard software conventions, write mediocre (or worse) code, and yet still consider themselves professionals. His argument makes good sense.


NI Certification Gripe

NI has a fairly elaborate certification process that has evolved over the past decade or so. Exams cost ~$200 and are valid for only two years - after that, you have to take the test again. My first impression was that this was a real scam: get companies to buy into the idea that they must have NI-certified programmers, forcing programmers to cough up money every couple of years or risk job stagnation, which leads to a nice revenue stream for NI.

I have an MS in physics that I completed over a dozen years ago – do I have to go back to my alma mater every couple of years and re-certify myself? No, because getting that degree implies a certain level of competency. If someone wants to gauge me on those matters, they can talk with me or look at my body of work since school. I have a driver’s license that I renew every few years, but I don’t have to prove I can parallel park when I show up at the DMV. It is assumed that I drive on a regular basis and as such keep my skills up to date.

Once I’ve taken the LV certification test & proven my skills, why should I pay to take it again every two years? Prospective employers can talk with me about various LV topics, or look at my resume. Furthermore, my resume shows that I’ve been regularly employed and have kept my skills current. To take the argument further, Microsoft certifications for server admins (MCSA) or system engineers (MCSE) do not require recertification. Yes, they roll out new systems every few years, but people with an existing certification can upgrade via a subset of the full test suite.


Counterview

But then I started relating NI’s certification efforts to McConnell’s thesis of licensing software engineers. Certification adds some legitimacy to programming in LV. It weeds out some people who write substandard code yet try to pass themselves off as experienced users. It allows those engineers with the certification to (maybe) command a better salary.

So I’ve modified my view. I still think National Instruments is exploiting the certification process, but they’re also doing a good thing for the LV development community. And since LabVIEW belongs to them, that’s how it works.

Thursday, November 29, 2007

Linux on test systems, pt 3


The third paper in a new series of Agilent white papers on using Linux in test systems has just been released: "Using Linux to Control LXI Instruments Through TCP." As has become my custom, here is my review.

The previous paper in this series discussed using Linux to control LXI instruments via VXI-11. While that paper gave me the impression that VXI-11 was the best way to control instruments, the new paper says that TCP (via a direct socket connection) is better when measurement timing is tight.

The author gives a very brief overview of the layers of the network protocol stack and then dives right into the gritty details (including a quick discussion of Nagle's algorithm). The paper provides several extensive code examples. The examples are in C, but they could easily be ported to LabWindows, or wrapped up as a separate object to use in LabVIEW.
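
If you want to try the direct-socket approach without the paper in hand, here is a minimal sketch in Python rather than the paper's C. The IP address is a placeholder, and port 5025 is the raw-SCPI port many LXI instruments use - check your instrument's documentation.

    import socket

    # Send a SCPI query to an LXI instrument over a raw TCP socket.
    HOST, PORT = "192.168.1.100", 5025  # placeholders

    with socket.create_connection((HOST, PORT), timeout=2.0) as sock:
        # Disable Nagle's algorithm so small command packets go out immediately
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        sock.sendall(b"*IDN?\n")          # standard SCPI identification query
        reply = sock.recv(4096).decode().strip()
        print(reply)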

I liked this paper better than the last one. To use a Thanksgiving metaphor, there was less marketing feathering and more engineering meat on the bones. I would recommend it as a useful read if you're looking at using Linux with LXI instruments, and now I'm feeling more optimistic about the remaining papers.

----------------------

The remaining papers in the series are "Using Linux to Control USB Instruments" and "Using Linux in Soft Real-Time Applications". I'll be reviewing those as they are released. Since these papers have been released just about once per month, I expect to see the next one sometime around the end of the year.

Monday, November 19, 2007

Outsourcing a test station, part 1

Over the past few months one of my major projects has been building a test system that we are shipping to a contract manufacturer in Asia. The station is similar to our in-house systems, but it required different considerations because it will operate independently of them. It has been a lot of work to this point, but it is finally nearing completion.

This is not the first time I have built a station that was shipped to a contract manufacturer - when I worked for Dupont several years ago, our contract manufacturer had the equipment in house to build our products but didn't have the equipment or software to test them. I think this underlines something unique about test engineering: it is oftentimes easier to build something than it is to test it. When you test something you are verifying that what you built meets certain requirements. You must have confidence in the data, so extra care goes into the measurements. I think THAT is why test systems are built by the contracting firm and then shipped out - often the test system is specialized to suit your product, and you have to trust the data.


If the schedule holds, the station will ship out sometime next month, and I will fly out to help set it up and verify it after the new year. I will write more about this experience as the project progresses.

I do NOT expect to post more to the blog the rest of this week. Thanksgiving is coming up, and I have plans.

Tuesday, November 13, 2007

Agilent vs. Keithley

While spending time reviewing the testing handbooks by Agilent and Keithley, I started thinking more about the differences between the two companies. Here's a quick synopsis of publicly available information:

Keithley
Employees: 650
Founded: 1946
Operating Income: ~$10 million
Net Income: ~$8.4 million

Agilent
Employees: 19,390
Founded: 1999 (split from HP, founded in 1939)
Operating Income: ~$465 million
Net Income: ~$3.31 billion

Now of course I realize that Agilent does more than just make test & measurement hardware - for example, they also have an investment group. I also realize that Agilent makes instruments for a lot more applications than Keithley.

I've bought & used instruments from both companies. I think both make good products, have good tech support, and do a good job of knowing their customers. But still, I find it very interesting that Keithley, such a small company by comparison, holds its own so well against a huge conglomerate like Agilent. I guess there's something to be said for being small and focused.

Wednesday, November 7, 2007

Off-the-shelf Test Systems

In mid-October someone asked in a comment about using commercial off-the-shelf (COTS) test systems vs. building your own systems. At the time I replied with this:
"That's a broad subject that deserves some thinking - maybe I'll post on it in a week or so. But in general, I'd rather not reinvent the wheel if I can help it. I've used several 'canned' systems, especially motion control and generic test instruments, in the past. But often the things I've had to test were unique enough that I had to build my own system, or at least use the off-the-shelf solutions as a sub-assembly for my final test station."
I have time to post on it now, so here are three different experiences I've had with such systems.


Building a COTS system
When I worked at HP/Agilent, I helped create the Passive Component Test system for a new Optical Spectrum Analyzer (which is now obsolete). It was a built-in software app that used the OSA to test common parameters for optical components. To configure the tests, you needed a script. I wrote an Excel script creator in VBA that made it easier for the user to configure the test setup. It worked very well, and I know of at least four different companies that used it (I talked with them at a trade show a year later).

Full system
Several years ago I started work at a company that had just purchased a test system from Palomar Technologies. The system handled the optical fiber alignment, test setup, and specific manufacturing steps after testing, and it had a pseudo-BASIC script language for customizing tests. For further customization I wrote a LabVIEW front end that controlled aspects of the testing.

Multiple Vendors→One System
One of my current test systems is a conglomeration from three different sources. The main system (motion control, vision recognition, basic data handling) is from an established vendor of such systems. Second, the front-end software (controlling the test infrastructure) was written by an engineering company, based on their standard product but customized for our use. Third, I have written quite a bit of code to further customize the front end of the software.


Summary
Any "off-the-shelf" system I have used, or helped build, has required customization. What your company makes, how it uses the data, how it grades those devices - all of those features are unique. Furthermore, unless you are testing final product the test system needs to be integrated within other manufacturing steps. That leads to further modifications.

The only exception I can think of is if the test system vendor sells you a test system that they have also sold to a direct competitor that makes the same product. That is a completely different issue.

Saturday, November 3, 2007

CMMI for testing

There was an article in the September issue of Evaluation Engineering about CMMI ("Capability Maturity Model Integration"). I flagged it for future reading and just had a chance to finish it today.

I flagged this article because I have experience with the CMMI. The division I worked in at HP/Agilent years ago was classed at CMM level 2, and I worked on a couple of projects aimed at moving the department to level 3. I call it "CMM" instead of "CMMI" because the older nomenclature was in use back then. Working in a project group that adhered to those standards was very enjoyable and a great learning experience (we used Rational Rose for the heavy lifting, before it was bought by IBM). Testing, and specifically software testing, has a very specific role to fill within such models, and its significance is not overlooked.

In general the article is a cogent overview of the CMMI and how it is applied. It also makes a good point that test engineers involved in creating software - especially for more complicated projects involving multiple people - should learn how to apply the model and use tools associated with it. Many test engineers for hardware testing do NOT have a software background, and don't necessarily have exposure to best practices for programming. But believe me, the CMMI is worth using.

Of course, the author is from NI, so I expected some marketing and was not disappointed. The author discussed how NI Requirements Gateway can be used to implement the CMMI, and he also referenced NI products like LabVIEW and TestStand extensively. But this didn't really bother me - he works for NI and that's his job. Evaluation Engineering is free to access, so I expect a modest amount of vendor bias.

No, what really bugged me is that right at the beginning of the article he called the CMMI "Component maturity model integration" instead of "Capability maturity model integration." If you're going to write about something, please get the acronym right. In the engineering world there are way too many acronyms and abbreviations, and doing something like this confuses the issue further.

Thursday, November 1, 2007

Vendor books about testing - marketing

Yesterday I posted my review of an Agilent guide to test systems. Eric, who works at National Instruments and runs The Automated Test Blog, added a comment about a test systems book that NI has here. So, I downloaded it and skimmed it quickly. I'll probably review that one as well for completeness' sake (thanks for the heads up, Eric).

Of course, originally I wanted to compare the books from Agilent and Keithley to see if they reflected a difference between the two companies themselves: Agilent is much more of a marketing behemoth than it was as HP many years ago. To be honest, I have a bias. I worked in the Test & Measurement group at HP for a few years before and after the switch to Agilent, and I saw firsthand the large amount of resources that went into marketing. But that is a post for another day.


I must tread lightly with this sort of thing. I've had a few marketing/salespeople contact me about products they make. Maybe they want to sell me their products, look for free advertising on my blog, or just honestly offer information. It could be a blend of those reasons. But I'm an end user of test equipment nowadays, and no one pays me to do this blog. From an ethical point of view I should treat all requests equally. That is, only talk about things I experience, not show unwarranted bias toward one vendor or another, and not lambast someone or something without reason.

Or at least I'll try.

Tuesday, October 30, 2007

Vendor books about testing - Agilent

About a month ago I talked about testing handbooks recently published by Agilent and Keithley. On October 15th I reviewed the Keithley book. Now I'm going to review the Agilent book, "Test-System Development Guide: A Comprehensive Handbook for Test Engineers," which is available here.


First Thoughts
This book, released in May 2007, is partly a repackaging of other white papers, many of which you can find here. I suspect that it was repackaged like this to compile what had been written separately as well as to heavily promote the LXI interface. I previously posted about Agilent's big stake in LXI, so I won't get into that again.

Other than the marketing-oriented aspects, I found the guide to be somewhat useful.


Sections
This handbook has four main sections, each with numerous subsections. The first discusses test system design. The second covers LAN networking issues. The third is devoted to LXI. The fourth and final section lists some details of RF/microwave testing.

Section 1 - Test System Design
This section is devoted to going over the various aspects and theory of a test system. Parts of it I found insulting (it appeared aimed at a pure beginner), some of the things they talk about I have posted about in my rules for building systems (here and here), and some of it was actually pretty good.

For about 15 pages the guide discusses software architecture: defining the requirements, controlling instruments, storing data. It's all very general, but I found it extremely funny that whenever LabVIEW was mentioned, their competing product, Agilent VEE, was listed first.

Section 2 - Networking Choices
Here the guide covers networking considerations for a test system. This might be a bit of overkill for some people, since it is aimed at the test engineer who knows very little about networking.

Section 3 - LXI: The Future of Test
Yes, that was the actual title of this section. Somewhat presumptuous, and very much market-speak, but that is what the section is called.

Section 4 - RF/Microwave Test Systems
I have no real experience with this kind of testing, so I cannot speak to its accuracy or whether it was worthwhile. To be honest, I skimmed this section.

Summary
When you compare this book to the Keithley book, you can see that they have two completely different intents. The Agilent guide is polished, views testing from a general point of view, and serves as a vehicle for pushing LXI. The Keithley guide is not so polished, goes over the guts of testing (e.g. - the many pages devoted to discussing passive and active components), and includes numerous examples.

In short, the Agilent book is written for a manager, VP, or someone looking for general information about testing. The Keithley book is written for the engineer. If you're a test engineer, I recommend reading both, filing away the Agilent book, and putting the Keithley book on the shelf for frequent reference.

There is a book about test engineering that is supposedly a college-level intro coursebook. Maybe I'll take a look at it for comparison. There is also a free handbook on LXI interfaces available. I haven't looked at it yet myself, but it may be worthwhile.

Tuesday, October 23, 2007

Wafer level testing and digging into details

Over the past 4-5 years part of my job has been testing devices at the wafer level. Before that I was a complete novice to that type of testing, so I was then, and still am, on the lookout for wafer probing information.

I had a few minutes yesterday to glance through Test & Measurement World and saw an article detailing some of the recent history of wafer testing [this is part 3, so follow the links to read parts 1 & 2]. It was really pretty interesting. Of course, if you have no involvement with chip testing your eyes may glaze over.

Last night I had time to dig into the article more. The author mentioned a presentation made by Vikas Sharma at the 2007 IEEE Semiconductor Wafer Test Workshop. So I navigated over to the archives for this past June and found the entire presentation. Even better, there were several other presentations there that I can use. Yes, most of those engineers are probing logic chips and testing things much more complicated than what I do, but parts of it are still applicable.

In a post a few weeks ago I talked about the need to lift your head and look around once in a while. What I did last night is a prime example. You never know what you'll find, and there is a certain satisfaction to be gained from discovering a little nugget of treasure.

Friday, October 19, 2007

Why you need to test

Close to a decade ago I spent a couple of years as a software test engineer for a large company. Before that (and after as well) my experience had been in hardware testing, so the job was a real eye-opening experience. To school myself on the ins and outs of the field, I attended conferences and seminars, ordered books from Amazon, and read all I could online about the profession.

One of the things I found so surprising was the loss incurred because people did not test - why would a company not test? I was astounded by a list on a website that documented some of the more outrageous consequences of not testing. I had forgotten about that list (the full & updated list is here), but a blog I read the other day reminded me of it.

40 IT failures caused by software bugs by ZDNet's Michael Krigsman -- Rick Hower, who runs the Software QA Test Resource Center has compiled a lengthy listing of “major computer system failures caused by software bugs” Here are several entries from that list: A September 2006 news report indicated problems with software utilized in a state government’s primary election, resulting in periodic unexpected rebooting of voter check-in machines, [...]



I posted this today because I had to fight this "why should we test?" attitude recently. In software testing, one of the main reasons for shortchanging the test process is time. Testing slows the software life cycle, and there is a push to just release the software and fix the bugs as they crop up.

The reasons are different in hardware testing. Yes, test time is a concern, as is cost. But yield is an even bigger issue - especially at medium to high production volumes. Production managers tend to think that "every parameter tested is a chance for the yield to go down," so they naturally want to reduce the amount of testing. (The arithmetic backs them up: if each of ten independent tests passes 99% of parts, overall yield is already down to roughly 90%.) They are judged on how close to 100% yield they get, and in a narrow view of the company, the business depends on how good the yield is.

Testing is important. If you subscribe to continuous product improvement (Kaizen) mantras, you have to have the data. And not every piece of data is used to make a pass/fail decision. Engineers need the data to judge how well the product or process is working, where there is room for improvement, where you can save effort.

It isn't just about testing the product before shipping it to make sure it's okay. It is about making the product better over time. Morally, you can't ask for much more than that.

Linux on test systems, pt 2


Back in mid-July I talked about a new series of Agilent white papers on using Linux in test systems. Well, the second paper in the series, "Using Linux to Control LXI Instruments through VXI-11," has just come out.

The paper begins by defining VXI-11: the GPIB equivalent for controlling instruments via Ethernet, added by the VXI Alliance in 2000. The other method the VXI group added, direct TCP socket communication, is a lower-level protocol. This paper maintains that VXI-11 is better in most cases.

It then proceeds to talk about Remote Procedure Calls (RPC). VXI-11 is based on RPC, so Linux supports VXI-11 directly (no $500 GPIB cards or expensive cabling required). To use RPC, Agilent promotes the rpcgen code generator, and they supply several different code examples built on the generated code.
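
The paper's examples are rpcgen-generated C. As a rough illustration of the same idea from a higher level, here is a sketch that assumes the third-party python-vxi11 package; the IP address is a placeholder, and the exact method names may differ - treat this as a sketch, not a reference.

    import vxi11  # assumption: the third-party python-vxi11 package is installed

    # Open a VXI-11 link to an LXI instrument over plain Ethernet -
    # no GPIB card or special cabling involved.
    instr = vxi11.Instrument("192.168.1.100")  # placeholder instrument IP
    print(instr.ask("*IDN?"))                  # standard SCPI identification query
    instr.close()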


In general, the white paper was well organized, the author knew the subject, and the narrative flowed well from beginning to end. But this was not a Linux paper. Other than stating that you can use RPC in Linux (obvious), the paper is really just about VXI-11. To cap it off, the name "Linux" is only mentioned FOUR times in the text of the paper.

The paper is really a veiled push for communicating via VXI-11 regardless of the operating system. But as I stated in my original post for this series: white papers tend to be self-serving. They are usually generated by the marketing department. And Agilent certainly has a vested interest in using Ethernet to access test equipment. All the big names in test equipment are members of the LXI Consortium (LAN eXtensions for Instrumentation), but Agilent was an early proponent of the standard and the first company to have LXI-certified equipment.

----------------------

The remaining papers in the series are "Using Linux to Control LXI Instruments through TCP Communication", "Using Linux to Control USB Instruments", and "Using Linux in Soft Real-Time Applications". I'll be reviewing those as they are released. Hopefully the remaining papers will have more substance relative to Linux. Unfortunately, I now harbor doubts.

Monday, October 15, 2007

Vendor books about testing - Keithley

A couple of weeks ago I talked about testing handbooks recently published by Agilent and Keithley. The following is a brief review of the Keithley book, "Understanding New Developments in Data Acquisition, Measurement, and Control." (http://www.keithley.com/news/prod031407)

First Thoughts
This book is called a "first edition," although portions felt like they were written some time ago and pasted into this new book. Also, I couldn't find a PDF copy available online, which is very retro. Finally, there is no summary at the beginning of the book. There isn't even a page listing when & where it was published. Clearly the book did NOT come from the marketing department.

Sections
The book has nine sections and three appendices. Only the ninth section is listed as "examples," but sections 6, 7, and 8 are all about different applications as well: temperature, strain, and current measurements.

Hardware
Sections 1 & 2 discuss hardware concerns when building or upgrading a test station. They include mentions of Keithley hardware, but they also cover processors, bus architectures, and networking.

Software
Section 3 discusses software. There was some marketing influence here - several pages were devoted to the Keithley script programming tool - but they also spent time covering open source issues, which I think is commendable. They also talked about IVI software; Keithley is a sponsor member, but so are Agilent, NI, Tektronix, and Rohde & Schwarz.

More Hardware
In sections 4 & 5 they examine electronic components (from resistors to op-amps) and how they relate to test engineering concerns. Very basic EE stuff, but good to go over as a refresher. Section 6 covers DAQ in some detail, including ground loops (which have bitten me on at least one occasion). As previously mentioned, the remaining sections go over some details of measuring temperature (which I've had to do), strain (which I don't do), and a few other applications.

Summary
This book felt like an introductory survey more than anything else. It didn't delve deeply into any single topic, yet it presented an overview of a variety of topics and mentioned things that warrant further research. It also includes a large variety of application examples. I liked it. It was fairly straightforward, with a minimum of marketing fluff, and was aimed at test engineers.


My next post on this topic will be about the Agilent book.

Monday, October 8, 2007

Types of test engineers

Back in late August I wrote about sustaining engineering and its relationship to test engineering. Recent events caused me to think about that some more, and I think I have a good list of the different types of test engineering positions.

Sustain
See the original post. In short, he runs existing test systems. He may add minor modifications or improvements as necessary.

Research
The engineer is part of a research group, either in an advanced R&D group (e.g. - Bell Labs), a government or international lab (CERN), or a university setting. The test systems he builds are very specialized and are likely only used for a limited time.

Create
His primary responsibility is building new systems. After that he hands them off to someone else. This is usually the province of a contractor, although most test engineers have done this at one point or another.

Software Test
There is a LOT of information out there about software testing. I only have a couple of years of software test experience, so I hardly qualify as an expert, but in general software test is all about testing software - use cases, limit testing, memory leaks, boundary testing, etc. There is very little hardware involved, other than the computer itself.

Of course, you may be in a position where your job duties are a mixture of these categories. I've certainly done that in the past.

---------------------------

Okay, I've written enough on the subject of test engineering as a career for a while. As I wrote on October 2nd, my next couple of posts will be on those test handbooks written by Agilent and Keithley.

Legacy test systems

Over the past few months I’ve written a few times about different types of test engineering. Last Friday I faced a seamier side of test engineering: legacy systems.

I don’t want to discuss the work in detail and bore everyone, so here is a synopsis:

- I’m overseeing a test system that has to be ready in a month.

- The system has hardware that was designed decades ago, and some of the software was written a good dozen years ago, based on software that is even older.

- I scheduled a consultant to fly in from California on Monday to work on a newer part of the system.

- I’ve had an engineer and an intern do some preliminary setup work.


On Thursday afternoon I was performing last-minute checks and couldn’t get the auto alignment system to work – the camera would not capture any images. Friday morning I dug into it further & found that the entire motion control system (the 20+ year-old part of the system) was motionless. I talked with an engineer at the firm responsible for the system, and he was wracking his brain trying to remember some of the details. Great.

The whole motion system is controlled through a single serial port. After having little luck with phone support, I tried wiring pin 2 to pin 3 (output to input) to test the port itself. It was fine. I tried swapping out card cage boards with systems that were working. No luck. I even swapped out the image grabber card, but that didn’t work either. After an entire day of work, I finally found the problem: a combination of a flaky PCI slot and an incorrect software setting.
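
For what it's worth, the pin 2-to-3 loopback check is easy to script. Here is a minimal sketch assuming the third-party pyserial package; the port name is a placeholder.

    import serial  # assumption: the third-party pyserial package is installed

    # With pins 2 and 3 (TX and RX) jumpered at the connector, anything written
    # to the port should come straight back.
    PORT = "COM1"  # placeholder; on Linux this might be /dev/ttyS0

    with serial.Serial(PORT, baudrate=9600, timeout=1) as port:
        port.write(b"PING\r\n")
        echo = port.read(6)
        print("Port OK" if echo == b"PING\r\n" else "No echo; got %r" % (echo,))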

The point of all this is that sometimes the systems you use for testing are not your own. They’re old. Yes, they work, but they can be finicky. The documentation is incomplete, you don't have access to the source code, replacement parts are hard to find. Sometimes that’s just part of the job.

Tuesday, October 2, 2007

Vendor books about testing

In the past month I've come across two different manuals about building test systems. They are both from big companies in the T&M industry, Keithley and Agilent. They both came out within the past few months. The page count is north of 200 for each book. And they both appear to have useful content in and around the marketing stuff.

That last item is important to me. Sometimes test companies can give you plenty of useful information in books like these - their engineers built the test equipment, and they've done plenty of research on how to use it for testing. But sometimes there's so much marketing fluff that it's hard to separate the useful from the dubious, so I end up ignoring the whole thing.


So, I've decided that I'm going to read through these books and review them. Hopefully I'll learn something, and I might as well pass my opinions of the books to anyone else interested in them. I'll post the first review (probably the Keithley book) next week.

The names of the books (and links to them) are:
Keithley
"Understanding New Developments in Data Acquistion, Measurement, and Control"
http://www.keithley.com/news/prod031407

Agilent
"Test-System Development Guide: A Comprehensive Handbook for Test Engineers"
PDF File
http://cp.literature.agilent.com/litweb/pdf/5989-5367EN.pdf
Hard copy http://www.home.agilent.com/agilent/editorial.jspx?action=download&cc=US&lc=eng&ckey=1244104&nid=-536900530.0.00&id=1244104&cmpid=20580

Monday, October 1, 2007

Rules for building test systems (part 2)

Back on September 12th I posted a list of five rules for building test systems. Here are a few more...


Leave something for Phase 2.
You must resist the temptation to add new features to the system. Release the test system to production, and THEN work on your plans for the upgrade.

Do NOT hard-code specs.
The specs will always change. Put the specs into a database that you can query, a spreadsheet file, or at least some sort of configuration file (see the sketch after this list).

Log what happens.
Make sure the software logs what happens - you cannot always rely on the technician running the test to write down what happened, when, and why.

Be aware of the environment.
Your test equipment's readings or the DUT itself may drift as temperature, humidity, or pressure changes. You have to take this into account.

Know your accuracy.
You need to know the specs of the test equipment in your system. To be more specific, you need to know the specs for how you use the equipment, because vendors will sometimes list different specs for different conditions.
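
To make the "don't hard-code specs" rule concrete, here is a minimal sketch of pulling test limits from a configuration file with Python's configparser. The file name, section, parameter names, and limit values are all hypothetical.

    import configparser

    # specs.ini (hypothetical), kept outside the test code:
    # [laser_diode]
    # threshold_current_max_mA = 35.0
    # slope_efficiency_min = 0.08

    config = configparser.ConfigParser()
    config.read("specs.ini")

    # Limits are pulled at run time; editing the .ini file changes the spec
    # without touching (or re-releasing) the test software.
    ith_max = config.getfloat("laser_diode", "threshold_current_max_mA")
    slope_min = config.getfloat("laser_diode", "slope_efficiency_min")

    def grade(ith_mA, slope):
        """Pass/fail decision against the externally defined limits."""
        return ith_mA <= ith_max and slope >= slope_min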

ATML (Automatic Test Markup Language)

A few weeks ago I came across a standard called Automatic Test Markup Language (ATML). I did some research, got to the official ATML website, and found out that it is currently under discussion by an IEEE working group. Basically, it is a formalized way of communicating information about tests between hardware, software, databases, and across the internet. It is based on XML and includes schemas for test results, diagnostics, test descriptions, instruments, etc.

I like what I've read about it so far, and I've already introduced it to the other people in my group. The plan is to implement it in work going forward. We currently use a mixture of home-grown XML and a small number of plain text files, so moving to ATML is a logical progression.
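
For a feel of what a structured, XML-based result record looks like, here is a minimal sketch using Python's xml.etree.ElementTree. The element and attribute names are illustrative only - the real ATML schemas define their own structure.

    import xml.etree.ElementTree as ET

    # Build a simple test-result record (element names are made up, not ATML's).
    result = ET.Element("TestResult", {"uut": "WAFER-0042", "station": "PROBE-1"})
    meas = ET.SubElement(result, "Measurement", {"name": "threshold_current"})
    ET.SubElement(meas, "Value", {"units": "mA"}).text = "21.7"
    ET.SubElement(meas, "Outcome").text = "Pass"

    ET.ElementTree(result).write("result.xml", encoding="utf-8", xml_declaration=True)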

The thing that bothers me is: why didn't I hear about this before? It has been in the works since at least 2005, there was an article about it in Evaluation Engineering, and even National Instruments wrote about it to promote their TestStand software. Perhaps I was so buried in the details of my current company over the past two years that I never lifted my head to see what else was happening.

That is something that engineers, especially test engineers, need to do. New testing tools keep coming out, better versions of existing instruments, new ways to test - we need to stay abreast of these things.

Monday, September 24, 2007

Configuration control for test systems

Any firm writing software is concerned with configuration control. There has to be a way to manage what the latest version of the program really is. Especially with multiple programmers, you need to know what code is safe to run. Similarly, after the software is released the company needs to manage the configuration of the software out in the field & any revisions or upgrades it makes.

Document control is an important subject for any manufacturing firm. Work instructions, specs, hardware designs, test plans - these all have to be managed. Where I currently work we have a nice system called Omnify that performs these tasks well, but it took a lot of work to get to that point.

Hardware systems are not immune to configuration management issues, for many of the same reasons. Do a Google search for "hardware configuration control," and you will find items such as discussions about control issues on the old NASA Apollo program, telecommunications standards for configuring hardware on a network, and plenty of ads for software that helps to manage hardware configuration issues.

Having said all that, I have to conclude that similar concerns apply to test stations. For many test engineers this isn't a big issue. There is only one test system, they update it when they need to, and all is good. But what if you have several test stations that, for one reason or another, are different?

This is something I have struggled with lately. I have two test stations that test the same things and run almost the same software. Both stations use an optical spectrometer, but they use different spectrometer models, which leads to slightly different software. When I upgrade the software on the stations, I need to maintain separate software images. When the technician does calibrations, I make sure he follows separate work instructions for each station. If the firmware for a spectrometer is upgraded, it must be done separately and logged as a separate activity.

The long-term solution to this problem is to upgrade both stations to the same model of spectrometer and therefore the same software. But in the short term it requires rigorous configuration control. That's just the way it is.

Wednesday, September 19, 2007

Autotestcon 2007

I worked for about 3.5 years at a company that built subsystems for aerospace applications; its business was about half military and half civilian. Autotestcon claims to be "the United States’ largest conference focused on automatic test systems for US military systems." I never knew anyone who went to it, but that job ended over a decade ago, so my information is dated.

Anyway, this convention is happening right now & lasts until tomorrow. It's at the convention center in Baltimore's Inner Harbor (a pretty nice location). I scrolled through the list of technical sessions and was impressed. I have been to trade shows where there was a lot more marketing than actual learning, but the signal-to-noise ratio appears to be higher for this event.

So, if anyone reading this went to the show this year, or has gone in past years, let me know what you thought of it. I'll probably write an entry sometime down the road about trade shows for testing, and I'd appreciate the input.

Tuesday, September 18, 2007

Falling in love (with the system)

It is a thing of beauty. After months of meetings, charts, designs, purchase orders, assembly, phone calls, programming, and debugging...it WORKS. For an engineer, few moments are more satisfying than seeing the fruit of your labors run smoothly.

But there is a particular trap that a test engineer can fall into. Usually the system does not ship out the door - it's still there. The engineer will probably be running it, at least until he trains a technician to use it. As he uses it, he sees ways to speed it up, features to add, neat little things it could do. THIS is the trap: the test engineer spends so much time modifying/improving the system that he neglects his other tasks.


I will be honest and admit that I did this a time or two, especially when I was younger. My first love was a calorimeter I built in graduate school. I worked for months to create the right design, weeks on the code (Fortran!) to verify I was extracting the data correctly, and more weeks writing software filters for the data. It was anticlimactic to then just sit, take data, and analyze it. That's what I needed to do for my thesis - but I was happier playing with the code and the hardware. Eventually I had to force myself to do the actual testing.


Test systems are like children. They grow up and become productive - you have to let them go. But if you start taking pictures for your family album... then that's just weird.

Wednesday, September 12, 2007

Rules for building test systems

I've decided to create a list of rules to follow when building a test system. I've built up an informal checklist over the years of what I do when I put together a new system, but it's time to codify it. The items on the list are generic and not specific to any one industry. Maybe I'll create a separate list someday for actively running a test system, but not today.


Know what you are testing, or work closely with someone who does.
A test system built without knowledge of what it will test, and how, will not work.

Document everything.
Eventually a technician will need to know how it works, unless you want to run the station yourself forever.

Respect Murphy.
Anything that can go wrong will. It's really just applied statistics. So plan for that when building the system.

Create PM plans and schedules.
Most people think of PM (preventive maintenance) in terms of manufacturing systems: you should have a schedule for when parts need to be oiled, when accelerometers have to be recalibrated, when to clean off gear assemblies. But test systems, especially high volume ones, need this maintenance as well.

Run it with actual parts.
NEVER proclaim the system is ready to go before you have run it through its paces thoroughly with actual components. I cannot stress this enough.



There's my list. Please comment or email me if you have suggestions of your own. I'll probably update it in a month.

Tuesday, September 11, 2007

Patents for testing

Through several conversations with patent attorneys, I’ve learned something about patenting test methods.

Patents are not cheap. If you work for a large corporation that regularly submits patents, this is not a noticeable issue. But for everyone else, there has to be a very good justification to apply for a patent.

A company will apply for a patent for several reasons. The patent can provide a barrier to a competitor trying to enter a new market. It can protect the company from competitors who may try to use the research that company has done without paying a license fee. Some companies may not actively use the patents, but they can make money off the patents by licensing them. Patents can be a source of pride – listing all the patents a company has applied for is equivalent (in certain industries) to beating your chest.

But there is a catch: you have to be able to prove that the patent is being violated. For example, suppose you have an innovative new manufacturing process. This process is an intermediate step that deposits certain chemicals on the product, and later that layer is removed. The process saves money and improves the product, but there is no way to prove from the finished product that it was used. Because of that, your competitors could use the same process (which they read about in your patent application) and simply claim that they don't. Short of going into their manufacturing facility, you can't prove otherwise. So your company might be better off classifying it as a trade secret and not patenting it.

A similar conundrum can apply with test methods. You have a new way to test your product. It's clever, it saves money, it's faster. But how can you prove your competitors test their products that way?

This is something that I'm wrestling with right now. The only saving grace is that if this is the only way to reasonably test the product, then I can probably apply for the patent. We'll see.

Tuesday, September 4, 2007

There is no "test engineering" major

To my knowledge there is no "Test Engineering" degree offered by any reputable 4-year college. In fact, I doubt that most test engineers went to college planning on being a test engineer. Of the many test engineers I've known, their degrees have been in EE, physics, CS, and ME - and that list is roughly ordered by how common each degree is. Are college students aware of test engineering as a specific position? Probably not.

This is different for CS majors. They've probably been exposed to software testing theories in classes. There are scores of books, websites, and blogs on the subject. They may have even interned as an SQA (software quality assurance) engineer. I've met Microsoft testers, and they've said that most newly hired programmers start out in a testing position before they do anything else.

But I think hardware testing is much more dependent on exactly what you are testing, so it is harder to teach in a classroom setting. Yes, there are a few books that present an overview of the subject (like Test Engineering: A Concise Guide to Cost-effective Design, Development and Manufacture by O'Connor), but it's just an overview. You'll get exposure to the basic tools (oscilloscopes, DAQ cards, etc.) in EE classes. You may learn about statistical methods in a stats class or an industrial engineering class. But to really learn the specifics of testing in a certain field, you have to dig into the details. For example, you'll never learn the details of fiber optic testing in an undergraduate class. You either learn that on the job or maybe from a book (like the excellent Fiber Optic Test and Measurement by Derickson).


Of course, what you major in during college is not necessarily a predictor of what you'll do in life. I once worked with a manager of the Integrity Program for the F-22 at Lockheed Martin in Ft. Worth, TX. It was a fairly prominent engineering position with a good deal of responsibility. He had a BA in philosophy....

Thursday, August 30, 2007

Sustaining engineering

When you ask about the duties of a test engineer, the answers fall into one of two categories: creating test stations and maintaining test stations. Many times these duties will be handled by the same person/group - sort of a "you built it, you own it" theory. I've been in several of those situations before. I have also worked for places where the test system was conceived in the R&D group and then turned over to production. Production modified it and maintained it. In that case the person maintaining the system was a sustaining engineer.

I found a Dept. of Defense definition of sustaining engineering as follows:
The technical effort to support an in-service system in its operational environment. This effort spans those technical tasks (engineering and logistics investigations and analyses) to ensure continued operation and maintenance of a system with managed (i.e. known) risk.


I would consider this to be a fairly accurate summary. So, the point of my post is: how much of this is test engineering and how much is manufacturing engineering? The sustaining engineer is not creating a test system, but he is responsible for the system's upkeep, making improvements if needed, and monitoring/analyzing the data it creates. These duties are directly related to the test system. But in theory the test system should have been engineered well enough that it can be turned over to manufacturing without a lot of care and feeding. I mean, Agilent may build and sell a $200k test system, but they do not sell an engineer with that system.

Personally, I think the answer is that the sustaining engineer is part of the test department (if the dept is large enough to handle that subdivision of labor). He may have had a hand in the creation of the system, and now he helps run it. Ideally, he still has time to work on other projects because the test station has few problems.

Of course, if your test group primarily does R&D work, then all bets are off. Any test system you build will probably be used a few dozen times and then scrapped in favor of something new...

Tuesday, August 21, 2007

Test group interns, part 3 of 3

In the past two posts I've discussed two different interns (Bill and John) I've worked with in test departments. One performed well and the other...not so much. Some of the lessons drawn from those experiences might apply to any intern, but I maintain that interning within a test group is a very specific experience that has specific requirements. Proceeding from this supposition, here are three things I think you need to do for an intern to succeed in a test environment.

--------------------------------

Hire an intern with broad technical interests. Building and running test stations requires the engineer to be something of a jack of all trades. If the station is automated, you need to know software. You may be called upon to build some fixturing, so you may need to know a bit of mechanical engineering. If you need to debug a circuit, then an electronics background is handy. This can be fun stuff, but only if you _like_ doing different things. All Bill liked to do was program. He did that well, but not much else.


Scope the assignments. There are no courses that teach how to test - it is a very specific skill that is learned from experience. Bill often had little direction from the manager (who was busy looking for a new job at that time) and he floundered. I gave John detailed instructions, and he flourished.


Play to his strengths. This is a corollary to the first point. The intern may have interests in different fields, but he is still getting his degree in a specific discipline. Bill did a great job with the project that involved a lot of software. Since John is an EE major, I've given him tasks that minimize his programming. Choose projects that play to what the intern can reasonably do.

Saturday, August 18, 2007

Test group interns, part 2 of 3

I've had some experience with using interns in test groups over the past eight years. What I want to do is present two cases - one that worked poorly and one that is working well - and then draw a few test-group-specific conclusions.

--------------------------------

This summer we've had two interns in my test group. There are several good engineering schools here in the New England area, and they run programs that send students out for an entire semester. The firms get good yet inexpensive help and the students get college credit. We had an intern in our group last year and he worked out well.

One of our interns (call him John) was tasked to work with me on a couple of different projects I have. John is an EE major who has interned with two other companies in the area (with a good track record) and has excellent grades in school. He's a nice, easy-going guy, fairly responsible for his age, and very eager to work with some serious technology.

I have two major projects I am managing. Both involve wafer-level testing, although the second is more software-oriented than the first. The first thing I did was make a general plan of what I'd like John to do: learn the basics of the test stations, then get some practice time in. Since most of the first project can be viewed as separate mini-projects, I decided that I could parcel out some of that work to him.

I forwarded a collection of documents - work instructions and a couple of presentations I had made for the stations - for him to read. I spent time showing him how the testing worked. Before I went on vacation for a week I asked a production engineer to help him do some testing by himself, and I forwarded detailed instructions of some work he could do.

When I got back from vacation I was pleasantly surprised to see that he had spent most of his spare time in the clean room working on the tasks I had assigned. He had not completed them, but he showed me how far he had gotten and had clear plans for what remained. Since that time I have assigned him other small projects with clear instructions, and he has completed them successfully.


Part 1 of 3 talked about an unsuccessful intern.
Part 3 of 3 will draw a few conclusions.

Thursday, August 16, 2007

Test group interns, part 1 of 3

I've had some experience with using interns in test groups over the past eight years. What I want to do is present two cases - one that worked poorly and one that is working well - and then draw a few test-group-specific conclusions.

--------------------------------

In the late 1990s Hewlett Packard had a program (they probably still do) called SEED: Student Employment and Educational Development. Every summer the company hired standout undergraduates as interns. In my division it was a very organized program, the students hired were bright and articulate, and several of the people I worked with were former SEEDs themselves.

One summer my test group retained a SEED intern - let's call him Bill. Bill was a smart guy pursuing a BS in CS at a great school. He had a good deal of experience with designing web sites and Java programming (the language had only been released a couple of years before). He had even done some volunteer work for Red Hat debugging Linux - they gave him stock options, which paid out very well when Red Hat went public in 1999. In other words, he was a very clever computer geek.

Bill did very well with one project he was assigned. The tasks were to a) write a Java program that interfaced with a spectrum analyzer, b) repeatedly run the analyzer through a series of rigorous tasks, c) track the free memory of the analyzer over a long period of time, and d) find any memory leaks and what triggered them. He wrote the program, ran it, and found some important problems.

Bill liked programming, but he wasn't interested in data analysis, hardware (EE stuff), or miscellaneous paperwork. When he had to do those tasks he performed poorly. For example, when he was asked to analyze data from a series of tests, I spent a lot of time walking him through the advanced Excel features he needed. If I wasn't there showing him what to do and how to do it, he would tend to chat online with friends and play games. He needed a lot of supervision and guidance.

To be fair, one of Bill's problems was a lack of planning on the manager's part. In my opinion, the manager didn't always assign tasks that played to Bill's strengths, and sometimes the goals and the steps toward them were vague. But at the end of the day it was Bill's responsibility to try to figure out things he didn't understand, and to actively seek out help when he needed it (instead of spending the day playing Diablo).


Part 2 of 3 will talk about a successful intern.
Part 3 of 3 will draw a few conclusions.

Monday, August 13, 2007

LabVIEW programming style

So I have to give props to Peter Blume for his book The LabVIEW Style Book. Yes, it repeats LV programming best practices that are available from other sources. Yes, it is printed in grayscale, when color would have been much more useful. Yes, it is self-serving: he runs a software consulting firm that is mentioned frequently in the book.

Aside from those gripes, this book is a great resource for three specific reasons:

  1. It covers specific aspects of LV programming chapter by chapter. These divisions are clear, logical, and discussed in detail.
  2. He uses copious examples.
  3. He devotes much of chapter 10 to the VI Analyzer, which I have found to be an incredibly useful tool. For some reason NI never pushed this tool, but the book's popularization of it may go some way towards improving the overall standard of LV code.

In fact, I liked the book so much that I started a study group with the other engineers in my department. We cover one chapter per month and have gotten through four of the ten so far.


I should post a disclaimer here as well. About four or five years ago I interviewed with his company and almost started working there. Then I got wind of how little money they would offer and bailed. I was disappointed with the firm for a while, but I still like this book.

Friday, August 10, 2007

Future posts

So I've been posting to this blog for a couple of months now, and I've used up everything I had written over the past six months. Of course I still have more topics I want to write about, and a couple of other people have suggested ideas. So here's a list of things I'll blog about sometime in the next month or three:
  • Test engineer salaries
  • Career path for engineers
  • Useful background/experience for test engineering
  • Typical issues faced by a test engineer
  • Sustaining engineering vs. test engineering
  • Falling in love with the test system
  • Interns
  • Outsourcing test
  • LabVIEW style
This list is as much for me as for anyone reading this. I have a long text file where I've been writing down ideas - this was a good way for me to sift through that list, filter out what I've already written about, throw away the bad ideas, and summarize what was left. Furthermore, everything on the list is of current importance to me: I've either dealt with it in the past year or will be dealing with it soon.

So, take it for what it is worth.

New toy: LabVIEW 8.5

So I downloaded a new toy a couple of days ago - I just couldn't wait the couple of weeks for my CDs to come through the mail - but with everything going on at work, I haven't had a lot of time to play with it. Hopefully that will change this weekend. There are a few things I've noted about it so far:

  • NI finally added conditional for-loop exiting. They've needed this for a while. After all, an endless loop is one of the classic software bugs, and in LV the while loop is the prime contributor to this problem. (A text-language analogue of the pattern appears after this list.)
  • They've made some changes to shared re-entrant VIs. I haven't looked at this in much detail yet, but I've always been a little ambivalent about re-entrant VIs in general. I know there are developers who rely on that feature all the time, using a subVI as a sort of variable for holding data. To me that seems like a crutch and an invitation for complicated bugs. I'll play with the new changes and see if they change my mind.
  • NI is anal enough to change the names of VIs because they misspelled "zeros" - previous versions of LV include a VI called Find All Zeroes of f(x). I got a real chuckle out of that.
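
Since LabVIEW diagrams don't paste into a blog post, here is a rough text-language analogue (Python) of why a bounded loop with a conditional exit beats an open-ended while loop. The settling check and the 50-iteration cap are made-up numbers, just for illustration.

    import random

    def read_measurement():
        # Stand-in for an instrument read; returns noisy values for the sketch.
        return random.gauss(1.0, 0.05)

    # Risky pattern: "while not settled" loops forever if the reading never settles.
    # Safer pattern: the loop is bounded, and the exit condition can still end it
    # early - roughly what LabVIEW 8.5's conditional for-loop exit gives you.
    value = None
    for attempt in range(50):          # hard upper bound on iterations
        value = read_measurement()
        if abs(value - 1.0) < 0.01:    # conditional exit once the reading settles
            break
    else:
        print("Reading never settled after 50 attempts")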

Of course, NI almost always releases a new version of LV right around the beginning of August, to coincide with NI Week down in Austin, TX. I went last year but was on vacation this year. So I wasn't really surprised when the new version came out, but it's fun to have a new toy nevertheless.

Thursday, August 2, 2007

Feature Space for software testing

I haven't written a post about software testing in a while, and this is something I've thought about in the past.

When I did software testing for HP, one group of errors I specifically tried to account for was corner cases. The common thinking was that it was always relatively easy to find the many bugs that sit 'in the middle,' so to speak. But product quality was a high priority at HP, so we wanted to dig out the odd little bugs that hid in the corners. Besides, an expensive piece of test equipment, especially one in a lab, gets used in all sorts of odd ways over the course of its life. And a corner case is, by definition, a situation that only happens under extreme conditions.

After a while I started thinking about extending this idea. What if you defined a corner case in terms of how the system is used in general, instead of in terms of specific parameter settings? If the usage of a program - or software embedded in test equipment, or even a test station - is a three-dimensional space, then what are the dimensions? To frame it as a physics question, what are the degrees of freedom?

One dimension would have to be the number of functions - all the distinct things that a user can get the software to do. If you use just a few of the software's features (typically the most common ones), then you are not likely to find a corner case bug. The most common features are usually thoroughly vetted by the design team. But if you use a lot of those features, in different orders, then you may find something bad.

The second would be time. That is, the more a program is actively running (not just idling), the more likely it is to encounter a problem. It may be a memory leak, but it could be some other problem as well.

The third and final degree of freedom is the number of users of the system. I've written programs that were buggy, but that was fine because I was the only one using them. I ran them a certain way, to do certain things, and they worked just fine. But if I gave one of those programs to someone else, sure enough they would crash it. This can be generalized to: "The number of ways in which a program can be used and abused increases in proportion to the number of users of that program." And it applies to inanimate users as well - if a program can be accessed and used by some other application, then that application is itself a user.


So, lay these three degrees of freedom on orthogonal axes and you have a Feature Space Cube (FSC). Down by the origin you won't find many interesting bugs. The farther out you go, the more interesting things can get. Think about the ramifications for a while, and the concept can be a useful way to picture a system, not to mention entertaining in a geeky sort of way.
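
As a thought experiment, here is a minimal sketch of what probing the far corner of the cube might look like, written in Python against a hypothetical "instrument" object (the class, methods, and timings below are all made up for illustration): many features called in random order, over a stretch of time, by several concurrent users.

    import random
    import threading
    import time

    class FakeInstrument:
        """Stand-in for the software under test."""
        def reset(self):        pass
        def configure(self):    pass
        def measure(self):      return random.random()
        def save_results(self): pass

    def simulated_user(features, duration_s):
        # One "user": call features in a random order until time runs out.
        stop = time.time() + duration_s
        while time.time() < stop:
            random.choice(features)()   # features axis: random mix and order
            time.sleep(0.01)            # time axis: keep it actively running

    if __name__ == "__main__":
        inst = FakeInstrument()
        features = [inst.reset, inst.configure, inst.measure, inst.save_results]
        users = [threading.Thread(target=simulated_user, args=(features, 5.0))
                 for _ in range(3)]     # users axis: three concurrent callers
        for u in users:
            u.start()
        for u in users:
            u.join()
        print("Survived this trip out to the corner - this run, anyway.")

A real stress test would also log what was called and in what order, so that a crash found out in the far corner can actually be reproduced.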

Monday, July 23, 2007

Medical test system accuracy

Medical test systems, like any system, have a certain level of accuracy and precision. Do doctors know about the accuracy of their tests? Do they inform their patients?

For example, I have a friend who is diabetic. She went in to get her "A1C" tested, and it was measured to be 6.4%. Since the American Diabetes Association recommends that number be 6.5 or less, she felt pretty good.

So I did some checking on Wikipedia and found the following quote:
"Interpreting results is somewhat difficult because (1) laboratory results can differ depending on the analytical technique and (2) biological variation between individuals can be up to 1 percentage point (i.e., two individuals with the same average blood sugar can have A1C values that differ by up to 1 percentage point)."

So the error is at least ±0.5 percentage points. The Wikipedia article also states that the results can be off if there has been a recent change in diet.
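
Put in code form (a back-of-the-envelope sketch in Python, taking the ±0.5 point figure above at face value), that single reading doesn't actually settle whether she's under the ADA target:

    reading = 6.4
    uncertainty = 0.5          # at least, per the biological-variation quote
    target = 6.5               # ADA recommendation

    low, high = reading - uncertainty, reading + uncertainty
    print(f"True A1C is plausibly anywhere from {low:.1f} to {high:.1f}")
    if high <= target:
        print("Confidently at or under the target")
    elif low > target:
        print("Confidently over the target")
    else:
        print("The interval straddles the target - this reading can't tell you")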


This particular doctor did not discuss uncertainty levels, or maybe he didn't know. Maybe he just assumed that 6.4 really is 6.4. So I guess the lesson to take away from this is that you should always ask your doctor about the test itself. If he doesn't know, then do your own research.

Tuesday, July 17, 2007

Linux on test systems


So I just downloaded the first in a series of Agilent white papers on using Linux in test systems. The paper gives a brief history of Linux, comments on licensing structures, lists available tools, and discusses instrument-control options within Linux.

In a clever marketing move, Agilent is releasing these papers episodically. The other papers in the series are:
  • "Using Linux to Control LXI Instruments through VXI-11"
  • "Using Linux to Control LXI Instruments through TCP Communication"
  • "Using Linux to Control USB Instruments"
  • "Using Linux in Soft Real-Time Applications"

Normally I would skip papers like these - they tend to be self-serving and to cover things that are pretty obvious. But I would like to hold Agilent to a higher standard (that may be naive of me). Plus, my test group has seriously considered using Linux on test systems, so I'm interested in what they have to say.

The paper also lists four reasons why you'd use Linux in a test system:
  1. Threat of software obsolescence. What happens if the version of Windows your system runs is no longer supported?
  2. Flexibility/performance. The Linux OS can be customized.
  3. Stability. It's very unlikely to go belly-up on you.
  4. Leveraging know-how and software. There is a lot of UNIX experience out there that can be extended to Linux.
There are also a couple of practical reasons that we've considered at work:
  1. Technicians and operators are less likely to screw around with the computer (browse the net, download games) if it doesn't run Windows.
    • As a corollary, engineers won't try to run other programs, analyze data with Excel, or log in remotely to their own computers from a Linux test station.
  2. Security is better. Some of our new test systems will likely be hooked into the internet for remote access, and there are just too many viruses out there looking for systems running Windows.
At any rate, I will definitely be reading the other papers in the series. I'll post my comments on them as they come out.
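
In the meantime, the TCP-control paper's topic is simple enough to sketch from memory. Here is roughly what raw-socket SCPI control looks like from a Linux box, assuming an LXI instrument listening on the conventional SCPI port 5025 (the IP address below is a placeholder, not any particular instrument):

    import socket

    def scpi_query(host, command, port=5025, timeout=5.0):
        """Send one SCPI command over a raw TCP socket and return the reply."""
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall((command + "\n").encode("ascii"))
            return sock.recv(4096).decode("ascii").strip()

    if __name__ == "__main__":
        print(scpi_query("192.168.1.50", "*IDN?"))   # placeholder address

No drivers, no interface card - just the network stack, which is a big part of the appeal of using Linux with LXI instruments in the first place.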

Monday, July 16, 2007

Where does testing belong?

I was once tasked to head a newly formed testing group. It consisted of me, another programmer, two contractors, and a mechanical engineer. The group's responsibilities were to support existing test and manufacturing systems, create new test stations as appropriate, and automate certain manufacturing steps that were still being done partly by hand. The VP of Engineering wanted to call it Test and Automation, until I pointed out that it might not be wise to say I work for the T&A group....

But where does testing belong? I've seen three different arrangements in the half-dozen or so companies I've worked for:
  • A completely separate group. Each member of the group supports one or more projects.
  • Part of the engineering department.
  • Part of the manufacturing department.
I think it really depends on the size and focus of the company. If you're in a small startup, all the engineers wear multiple hats. Sometimes you're called on to test, sometimes to help with product design, and sometimes to just get customer samples out the door. A separate test group may not make sense.

Of course, if you already have three or more test engineers then you may want to group them together. Get those guys together and they share ideas and solutions, just like any other engineering team.

If the company's products are mature, then test is probably part of production. Your job is to get product tested and out the door, and you may be called on to debug test stations, process flow, or maybe the product itself.

If test is part of the engineering team, then the test systems may evolve almost as fast as the product itself. When the design engineers add a new feature, or marketing has new specs or a new customer, test engineering has plenty of work to do.

At least, those are my thoughts on the subject.

Saturday, July 14, 2007

Design of Experiments (DOE)

I recently finished a course on Design of Experiments. It was really fascinating stuff, and very applicable to manufacturing issues. In fact, the "final exam" consisted of a group of us picking a topic and building up an experiment - that design will be implemented over the next couple of weeks where I work. I wholeheartedly recommend reading up on this subject, or taking a class (especially if your employer pays for it).

The question I now have is, how applicable is this to test engineering? Is it the responsibility of the test engineer to get involved in DOEs? Should test engineers consider learning how to use stat software like Minitab or JMP (both of which have extensive DOE support)?

I think it really depends on where you work. For example, if part of your job as a test engineer is dealing with SPC charts and processing test data, then DOE is a logical extension of that work. Or if you have a test system with multiple settings which need to be optimized, then DOE would be helpful. However, if you spend most of your day writing code for test systems, then it's probably just a "nice to know" thing.
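
For anyone who hasn't seen a designed experiment laid out, the basic building block is easy to picture. Here is a minimal sketch of a two-level full-factorial run table in Python, with three made-up factors; a real DOE package (JMP, Minitab) adds randomization of run order, replicates, and the analysis of results, but the core idea is just every combination of high and low settings:

    from itertools import product

    # Hypothetical factors and their low/high levels - placeholders only.
    factors = {
        "temperature":  (25, 85),     # degrees C
        "bias_voltage": (1.0, 3.3),   # volts
        "soak_time":    (10, 60),     # seconds
    }

    runs = list(product(*factors.values()))   # 2^3 = 8 combinations
    print(f"{len(runs)} runs:")
    for i, levels in enumerate(runs, start=1):
        settings = ", ".join(f"{name}={value}"
                             for name, value in zip(factors, levels))
        print(f"  run {i}: {settings}")

Eight runs for three factors; the payoff of the formal DOE methods is telling you which factors (and interactions) actually matter without having to run every combination of a much larger set.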


BTW, the course instructor said that if you read only one book on the subject, it should be Understanding Industrial Designed Experiments by R. Launsby and S. Schmidt. It is, of course, available at Amazon.com.

Friday, July 13, 2007

USB instruments - new Keithley DMM

So I just found out today that Keithley has a new digital multimeter (DMM) out that a) connects via USB and b) costs less than $1000. The 2100/120 appears to have most of the standard functionality you'd expect from a good DMM. I've used plenty of Keithley's equipment in the past and have had very few complaints. One of these days I'll post something comparing the different test-equipment manufacturers, but not today.

What interests me the most is that it's USB compliant. In my experience, test engineers are pretty conservative about switching to new technologies. It makes no sense to switch to something until you're sure it won't screw up your measurements. When I worked at HP I was part of a team introducing a new spectrometer that had an Ethernet port. That port was used for internal access to the embedded code, among other things, and I thought it would be great for customers to have TCP/IP access to the instrument instead of just HP-IB. The idea was shot down by marketing because it was considered "too new".

The GPIB interface has been around for a LONG time, and even longer as HP-IB (I think it's in the HP/Agilent handbook that you have to call it HP-IB). Is it time to get rid of it? The downsides include:
  • Those cables can be very expensive
  • You have to buy and configure a separate interface card
  • The cables and connectors are bulky
  • Interface speed can be a limiting factor
Of course there are a couple of upsides as well: the cables can be well-shielded, and connectors can be daisy-chained.


But personally, I've already used two instruments with USB interfaces, and I really like the idea. Now that other companies appear to be offering USB, I'm ready to kick the GPIB habit. Is there a 12-step program for that?
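
For the curious, here is roughly what driving a USB instrument looks like through a VISA layer (Python with the PyVISA package, in this sketch); the resource string below is a placeholder rather than the actual ID of the 2100:

    import pyvisa

    rm = pyvisa.ResourceManager()
    print(rm.list_resources())          # USB instruments show up as USB0::... entries

    # Placeholder resource string - copy the real one from list_resources().
    dmm = rm.open_resource("USB0::0x05E6::0x2100::12345678::INSTR")
    print(dmm.query("*IDN?"))           # identification string
    print(dmm.query("MEAS:VOLT:DC?"))   # one DC voltage reading via SCPI
    dmm.close()

The nice part is that the code looks the same whether the cable underneath is USB, LAN, or even GPIB - only the resource string changes.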

Thursday, July 12, 2007

Visual Basic vs. LabVIEW vs. C++

I would hazard a guess and say that most test systems use LabVIEW as the development environment (with LabWindows as a subcategory). Why? I've programmed test stations in all three languages over the last 15 years (for testing, I first used C++ in 1994, VB v2 in 1996, and LV v4 in 1996), and I used to be a big fan of VB. Now I've used LV almost exclusively for the past 4 years.

Does it have to do with "getting there first"? Well, let's see. C++ has been around since 1985 (and regular C since 1972). LabVIEW was created in 1986 for the Macintosh and was ported to Windows in 1992. Visual Basic was introduced in 1991. Looking at the dates, either C++ or LV wins.

Is it because LV has a lot more language support? I don't think so...C++ has a worldwide user base, and Microsoft has historically had plenty of support for its flagship programming language.

Is it because LV is focused more on testing? Maybe. C++ is a general-purpose language, which is both a strength and a weakness. VB was historically used a lot for database interfacing and front-end GUI apps. LV has its diagram that looks like an electrical schematic, ready support for all kinds of test instruments, and National Instruments' (NI) claim that it is designed for "test, measurement, and control applications."


Personally, I think it has more to do with having a champion than anything else. C++, while created by one person, is a universally accepted standard language, and lots of companies build compilers for it. Visual Basic is supported only by Microsoft, but I think they've always wanted it to be an alternative to C++ while also promoting Windows as an OS.

But LabVIEW has always been about testing. NI has always claimed it is the perfect tool for test applications, it continually pushes to build a community of users (free introductory seminars across the country, user groups in the larger cities), and it keeps developing new test-related tools (motion control, database tools, vision). Yes, LV is supported by a single company, and if NI ever goes belly up a LOT of people are in trouble. But sometimes a single backer with deep pockets can help.

Comments on this are appreciated.

Friday, July 6, 2007

From Test To Sales

The career of Field Applications Engineer has its own entry on Wikipedia. Also known as an applications engineer or sales engineer, the FAE is usually a liaison between the customer and engineering. He must have a technical background and a good understanding of the product, but he must also be able to communicate well with the customer. After all, he is part of the sales department.

I was a field apps engineer for two years, and I did it part time for a year with a different company. Furthermore, I've met several other apps engineers who transitioned from test engineering. Granted this is purely anecdotal evidence, but is there a good career path from one to the other?

Let's look at why a test engineer might do well in this position:
  • He has a solid technical background.
  • If the company has multiple product lines, he probably has written tests or helped to test those products, so he has a breadth of knowledge.
  • A test engineer who has seen various product failures can help customers who may have similar problems.
  • A test engineer has seen the negatives of the product (i.e., failures) but is still focused on the product (making it work right by correcting failures, or at least weeding out bad units). With that attitude a field applications engineer can build a layer of trust with the customer while at the same time helping to sell him on the product.

But in the end it all still depends on the person.

Thursday, July 5, 2007

Test it until it works

After graduate school, my first real job was with an aerospace subcontractor. The company's products were split 50/50 between the military and commercial markets. On my first manufacturing project I started to run behind on shipments because the yield was slipping. When that happened, the more seasoned engineers (who had initially started the project) told me to go through the "marginally failed" units and "test them until they work."

The rationale behind this advice was:

  1. The spec was extraordinarily tight for the product (blame was placed on sales & marketing).
  2. We were up against the accuracy limits of the system.
  3. It didn't really matter if the positioning of the cannon was off by a couple of arc-seconds. There were redundant systems in place.

Thinking that this is how it must be done in the "real world," I did what they told me to do and got back on schedule. Granted, I figured out some other things to correct the yield, but passing the marginal failures was a contributing factor. And it always bothered me.

Stepping aside from the questionable moral grounds of this situation, let's look at that rationale list from a test engineer's perspective.
  1. Spec too tight. Marketing should certainly know to what tolerance the product can be tested. If they don't, then it is the job of test engineering to inform them. If marketing plays the word game of "it is guaranteed by design" then it should not need to be tested, now should it?
    Of course, if marketing knows the limits and chooses to ignore them, then you have much bigger problems...
  2. Limited test accuracy. If you are trying to test to a spec that is at the limit of what you can measure, then you have serious problems. Buy a more accurate tester, build one if you can't buy one, or do enough test system qualification to verify your accuracy - you have no business operating that close to your limits. Test equipment manufacturers can play "specsmanship" games of their own, so you cannot always trust their numbers. (A sketch of one common mitigation, guard-banding, follows this list.)
  3. The customer doesn't really need that accuracy. I'm sure it's possible that the customer has over-specified what they need. They may have backup systems in place if the accuracy isn't there, or they may have an over-tight spec because they don't entirely trust the product (or the company). Or they may just be clueless. But you can't get into the game of second-guessing the customer. That will get you in deep trouble somewhere down the line.
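
For what it's worth, a standard answer to point 2 - and not what we did back then - is guard-banding: pull the pass/fail limits in by the measurement uncertainty, so a unit can't pass on measurement error alone. Here is a minimal sketch in Python, with made-up numbers:

    spec_low, spec_high = 9.0, 11.0   # what marketing promised (arbitrary units)
    measurement_uncertainty = 0.4     # what the test system is actually good for

    guard_low = spec_low + measurement_uncertainty
    guard_high = spec_high - measurement_uncertainty

    def disposition(measured_value):
        """Pass only if the reading clears the spec by more than the uncertainty."""
        if guard_low <= measured_value <= guard_high:
            return "pass"
        if spec_low <= measured_value <= spec_high:
            return "marginal - the tester can't tell pass from fail here"
        return "fail"

    for value in (10.0, 10.8, 11.2):
        print(value, "->", disposition(value))

"Test it until it works" is exactly what happens to that marginal bucket when there is no guard band and schedule pressure wins.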


So, did I really screw up as a test engineer (although I wasn't called a test engineer back then), or was I just doing what I was ordered to do? I think I will just plead 'youthful transgression' and try not to let that happen again.