Thursday, December 27, 2007
For some people, test engineering is about putting together the system. You build it, you ship it out to your customer (internal or external), and you move on to the next project. For others, test engineering is about managing the test process. The job revolves around SPC chores, preventive maintenance, & setting up new test runs.
But sometimes test engineering involves analyzing the results of the testing. I've spent plenty of time putting together graphs in Excel. But that sort of work can overload you when you have huge data sets & multiple sets of variables to consider. You need a more serious piece of software than just a spreadsheet for filtering down the data, looking at box plots, and plotting trends.
I know a guy who is a great statistician. He loves Minitab. The only package like that I've ever used is JMP, but he says it's a good piece of software as well. I know I like it, and more and more I've been using it instead of Excel when I need to examine data.
There are numerous books published by SAS (the company that wrote JMP) on how to use the software. I picked up one of them, Elementary Statistics Using JMP, and have read about half of it by skipping around to specific sections. The book is knowledgeable, well-organized and worth the money.
Tuesday, December 25, 2007
Thursday, December 13, 2007
NI is very good at marketing. They interact well with customers, get knowledgeable sales people embedded with key industries, and support their hardware and software. So when I say they excel at marketing it is truly meant as a compliment. Yet this proficiency also hurts them. Read on and you'll see.
There are four sections, with 14 chapters divided among them. The first is just an introduction, the second discusses test system guidelines, the third goes over improving a system, and the last consists of case studies.
This single chapter reads more like a position paper arguing that NI is the best ever than like an introduction to a test system guide. Pity. For example, on just a single page (1-5) the author referenced three different marketing white papers. My hopes for the manual diminished.
There were two saving graces to this section. Chapter five has a good overview of different buses, and chapter seven reviews the PXI standard. Otherwise it is more marketing than substance.
These three chapters were somewhat of a revelation. The marketing was minimized in favor of looking at 1) ways to speed up a test, 2) measurement accuracy, and 3) system longevity. Cool.
I liked the first chapter in the manual. Describing software-defined radio testing, it was short & to the point. But the other three case studies were all but useless. Okay, so Microsoft used LV and a PXI chassis to test the Xbox - why not spend a few pages describing the test architecture or the obstacles that were overcome in the design? Each case study reads like an extended press release.
Unfortunately, this testing manual is more like the Agilent manual (bad) than the Keithley manual (good). It pushes the theme that NI products are the best thing since sliced bread. The only time it mentions Agilent is to take them to task for their lack of support for IEEE 1394 (VEE isn't mentioned at all). The manual could have used a good editor - the exact same graph, bandwidth as a function of latency, shows up an improbable FIVE times under different titles.
In other words, if it wasn't for section 3 I would write off the whole manual as a waste of space on my hard drive.
The schedule has slipped - schedules usually do - but it was a result of scheduling conflicts and money issues; the system itself was ready. Regardless, I did have an engineer from the contract manufacturer fly out to be trained on the system. He seemed like a good guy & knowledgeable, but there was a definite language gap. Furthermore, I only had two days to show him a system I've been using for a couple of years.
Speaking slowly and struggling for words, I think I eventually taught him enough that he can run the station when there are no big problems. We started with an overview of what the system does, the separate components, and the basic procedure. He spent half a day just testing devices. We also went over common maintenance issues and problem points to check when the system won't run. But there were some things that didn't translate well.
But now I think I have the answer. I'm using my digital camera (a nice 5-megapixel Canon) to film common tasks and maintenance fixes. Maybe my narration will help, maybe not. But if a picture is worth a thousand words, how much is a high-res AVI file worth?
Sunday, December 9, 2007
NI is big on the concept of a virtual instrument - use the computer in place of the benchtop instrument to do the measurements. I've used this concept for potentiometers and oscilloscopes. But I just don't think this works in all cases, or even most cases. I have two reasons to back this opinion.
Complicated real-world measurements
There are some properties that are more than just a voltage or current. You need a good deal of physical hardware to actually acquire the data. Several examples I'm familiar with include optical spectrometers, digital communications analyzers, and (more esoteric) high energy particle detectors. A good deal of additional circuitry, physical devices, and sometimes patented techniques are involved.
Hardware companies that build test equipment often have a good deal of knowledge and experience making that kind of measurement. That information often is built into the desktop instrument that performs that measurement. In most of those sorts of situations I would rather have the actual instrument than spend time and effort trying to duplicate that expertise myself.
I am not saying that virtual instruments are invalid. I think they work well for non-complicated measurements or measurement techniques that are well-established (e.g., the modern triggered oscilloscope was invented over 60 years ago). But sometimes you need the actual hardware.
Monday, December 3, 2007
In 1999, near the height of the internet bubble, Microsoft Press published a book titled After the Gold Rush, with the more descriptive subtitle, “Creating a True Profession of Software Engineering.” I started writing this post as a diatribe against National Instruments (NI), but ended up writing about how they are supporting what this book proposed.
Steve McConnell wrote the book. If you don’t know who McConnell is, then you don’t spend much time working on major software projects (or you do it in isolation). Several of the books he’s written over the years (Code Complete and Software Project Survival Guide to name a couple) I recommend as required reading for serious programmers. I don’t know if I would label him a genius – I’ve only talked with him at a couple of seminars about 7 years ago, so I hardly know him – but he has a unique knack for gathering the best practices of a particular subject and positioning them in an organized fashion under one tidy roof.
In other words, he knows his stuff. His thesis in this particular book was that software engineering needs to be licensed like other engineering professions, or like dentists, doctors, attorneys, and nurses. There are too many programmers who learned how to code here and there, follow no standard software conventions, write mediocre (or worse) code, and yet still consider themselves professionals. The thesis makes good sense.
NI Certification Gripe
NI has a fairly elaborate certification process that has evolved over the past decade or so. Exams cost ~$200 and are valid for only two years - after that, you have to take the test again. My first impression was that this was a real scam: get companies to buy into the idea that they must have NI-certified programmers, force programmers to cough up money every couple of years or risk job stagnation, and enjoy a nice revenue stream.
I have a MS in PHYSICS that I completed over a dozen years ago – do I have to go back to my alma mater every couple of years and re-certify myself? No, because getting that degree implies a certain level of competency. If someone wants to gauge me on those matters, they can talk with me or look at my body of work since school. I have a driver’s license that I renew every few years, but I don’t have to prove I can parallel park when I show up at the DMV. It is assumed that I drive on a regular basis and as such keep my skills up to date.
Once I’ve taken the LV certification test & proven my skills, why should I pay to take it again every two years? Prospective employers can talk with me about various projects I've completed and judge my skills from my track record.
But then I started relating NI’s certification efforts to McConnell’s thesis of licensing software engineers. Certification adds some legitimacy to programming in LabVIEW, and it nudges the LV community toward the kind of professional standards McConnell proposed.
So I’ve modified my view. I still think National Instruments is exploiting the certification process, but they’re also doing a good thing for the LV development community. And since LabVIEW belongs to them, that’s how it works.
Thursday, November 29, 2007
The third paper in a new series of Agilent white papers on using Linux in test systems has just been released: "Using Linux to Control LXI Instruments Through TCP." As has become custom, here is my review.
The previous paper in this series discussed using Linux to control LXI instruments via VXI-11. While that paper gave me the impression that VXI-11 was the best way to control instruments, the new paper says that TCP (via a direct socket connection) is better when measurement times are short.
The author gives a very brief overview of the TCP/IP protocol layers and then dives right into the gritty details (including a quick discussion of Nagle's algorithm). The paper provides several extensive code examples. The examples are in C, but they could easily be ported to LabWindows, or you could wrap them up as a separate object to use in LabVIEW.
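The core idea is simple enough to sketch. Here is a minimal, hypothetical Python version of the direct-socket approach (the function name and details are mine, not from the paper, and its C examples would look different): open a TCP connection to the instrument's SCPI port (LXI instruments conventionally listen on port 5025, but check your instrument's manual), disable Nagle's algorithm so short commands aren't held back by packet coalescing, and exchange newline-terminated commands.

```python
import socket

def scpi_query(host, command, port=5025, timeout=5.0):
    """Send one SCPI command over a raw TCP socket and return the reply.

    Disabling Nagle's algorithm (TCP_NODELAY) keeps short commands from
    sitting in the kernel while it waits to coalesce packets -- the
    latency issue the paper's Nagle discussion is about.
    """
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        sock.sendall(command.encode("ascii") + b"\n")
        # Read until the newline that terminates the instrument's response.
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
            if data.endswith(b"\n"):
                break
        return b"".join(chunks).decode("ascii").strip()
```

Against a real instrument this would be called as something like `scpi_query("192.168.1.50", "*IDN?")`, returning the identification string.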
I liked this paper better than the last one. To use a Thanksgiving metaphor, there was less marketing feathering and more engineering meat on the bones of the paper. I would really recommend this as a useful paper to read if you were looking at using Linux and LXI, and now I'm feeling more optimistic about the remaining papers.
The remaining papers in the series are "Using Linux to Control USB Instruments" and "Using Linux in Soft Real-Time Applications". I'll be reviewing those as they are released. Since these papers have been released just about once per month, I expect to see the next one sometime around the end of the year.
Monday, November 19, 2007
This is not the first time I have built a station that was shipped to a contract manufacturer - when I worked for Dupont several years ago our contract manufacturer had the equipment in house to build our products but didn't have the equipment or software to test it. I think this underlines something unique about test engineering: it is oftentimes easier to build something than it is to test it. When you are testing something you are verifying that what you built meets certain requirements. You must have confidence in the data, so extra care goes into the measurements. I think THAT is why test systems are built by the contracting firm and then shipped out - often the test system is specialized to suit your product, and you have to trust the data.
If the schedule holds, the station will ship out sometime next month, and I will fly out to help set it up and verify it after the new year. I will write more about this experience as the project progresses.
I do NOT expect to post more to the blog the rest of this week. Thanksgiving is coming up, and I have plans.
Tuesday, November 13, 2007
Keithley
Founded: 1946
Operating Income: ~$10 million
Net Income: ~$8.4 million

Agilent
Founded: 1999 (split from HP, which was founded in 1939)
Operating Income: ~$465 million
Net Income: ~$3.31 billion
Now of course I realize that Agilent does more than just make test & measurement hardware - for example, they also have an investment group. I also realize that Agilent makes instruments for a lot more applications than Keithley.
I've bought & used instruments from both companies. I think both make good products, have good tech support, and do a good job of knowing their customers. But still, I find it very interesting that Keithley, such a small company by comparison, holds its own so well against a huge conglomerate like Agilent. I guess there's something to be said for being small and focused.
Wednesday, November 7, 2007
"That's a broad subject that deserves some thinking - maybe I'll post on it in a week or so. But in general, I'd rather not reinvent the wheel if I can help it. I've used several 'canned' systems, especially motion control and generic test instruments, in the past. But often the things I've had to test were unique enough that I had to build my own system, or at least use the off-the-shelf solutions as a sub-assembly for my final test station."

I have time to post on it now, so here are three different experiences I've had with such systems.
Building a COTS system
When I worked at HP/Agilent, I helped create the Passive Component Test system for a new Optical Spectrum Analyzer (which is obsolete by now). It was a built-in software app that used the OSA to test common parameters of optical components. To configure the tests, you needed a script, so I wrote an Excel-based script creator in VBA that made it easier for the user to configure the test setup. It worked very well, and I know of at least four different companies that used it (I talked with them at a trade show a year later).
Several years ago I started work with a company that had just purchased a test system from Palomar Technologies. This system handled the optical fiber alignment, test setup, and specific manufacturing steps after testing. This system had a "pseudo-basic" script language for customizing tests. For further customization I wrote a LabVIEW front end that controlled aspects of the testing.
Multiple Vendors→One System
One of my current test systems is a conglomeration from three different sources. The main system (motion control, vision recognition, basic data handling) is from an established vendor of such systems. Second, the front-end software (controlling the test infrastructure) was written by an engineering company, based on their standard product but customized for our use. Third, I have written quite a bit of code to further customize the front end.
Any "off-the-shelf" system I have used, or helped build, has required customization. What your company makes, how it uses the data, how it grades devices - all of those features are unique. Furthermore, unless you are testing final product, the test system needs to be integrated with other manufacturing steps. That leads to further modifications.
The only exception I can think of is if the test system vendor sells you a test system that they have also sold to a direct competitor that makes the same product. That is a completely different issue.
Saturday, November 3, 2007
I flagged this article because I have experience with the CMMI. The division I worked in at HP/Agilent years ago was assessed at CMM level 2, and I worked on a couple of projects aimed at moving the department to level 3. I say "CMM" instead of "CMMI" because back then the older nomenclature was in use. Working in a project group that adhered to those standards was very enjoyable and a great learning experience (we used Rational Rose for the heavy lifting, before it was bought by IBM). Testing, and specifically software testing, has a very specific role to fill within such models, and its significance is not underrated.
In general the article is a cogent overview of the CMMI and how it is applied. It also makes a good point that test engineers involved in creating software - especially for more complicated projects involving multiple people - should learn how to apply the model and use tools associated with it. Many test engineers for hardware testing do NOT have a software background, and don't necessarily have exposure to best practices for programming. But believe me, the CMMI is worth using.
Of course, the author is from NI so I expected some marketing and was not disappointed. The author discussed how NI Requirements Gateway can be used to implement the CMMI, and he also referenced NI programs like LabVIEW and TestStand extensively. But this didn't really bother me - he works for NI and that's his job. Evaluation Engineering has free access, so I expect a modest amount of bias.
No, what really bugged me is that right at the beginning of the article he called the CMMI "Component maturity model integration" instead of "Capability maturity model integration." If you're going to write about something, please get the acronym right. In the engineering world there are way too many acronyms and abbreviations, and doing something like this confuses the issue further.
Thursday, November 1, 2007
Of course, originally I wanted to compare the books from Agilent and Keithley to see if they reflected a difference between the two companies themselves: Agilent is much more of a marketing behemoth than it was as HP many years ago. To be honest, I have a bias. I worked in the Test & Measurement group at HP for a few years before and after the switch to Agilent, and I saw firsthand the large amount of resources that went into marketing. But that is a post for another day.
I must tread lightly with this sort of thing. I've had a few marketing/salespeople contact me about products they make. Maybe they want to sell me their products, look for free advertisement on my blog, or just honestly offer information. It could be a blend of those reasons. But I'm an end user of test equipment nowadays, and no one pays me to do this blog. From an ethical point of view I should treat all requests equally. That is, only talk about things I experience, not show unwarranted bias towards one vendor or another, and not lambast someone or something without reason.
Or at least I'll try.
Tuesday, October 30, 2007
This book, released in May 2007, is partly a repackaging of other white papers, many of which you can find here. I suspect that it was repackaged like this to compile what had been written separately as well as to heavily promote the LXI interface. I previously posted about Agilent's big stake in LXI, so I won't get into that again.
Other than the marketing-oriented aspects, I found the guide to be somewhat useful.
There are 4 main sections of this handbook - each section has numerous subsections. The first discusses test system design. The second section covers LAN networking issues. The third is devoted to LXI. The fourth and final section lists some details of RF/Microwave testing.
Section 1 - Test System Design
This section is devoted to going over the various aspects and theory of a test system. Parts of it I found insulting (it appeared aimed at a pure beginner), some of the things they talk about I have posted about in my rules for building systems (here and here), and some of it was actually pretty good.
For about 15 pages the guide discusses software architecture: defining the requirements, controlling instruments, storing data. It's all very general, but I found it extremely funny that whenever LabVIEW was mentioned, Agilent's competing product VEE was listed first.
Section 2 - Networking Choices
Here the guide covers networking considerations for a test system. This might be a bit of overkill for some people, since it is aimed for the test engineer who knows very little about networking basics.
Section 3 - LXI: The Future of Test
Yes, that was the actual title of this section. Somewhat presumptuous, and very much market-speak, but that is what the section is called.
Section 4 - RF/Microwave Test Systems
I have no real experience with this kind of testing, so I cannot speak to its accuracy or whether it was worthwhile. To be honest, I skimmed this section.
When you compare this book to the Keithley book, you can see that they have two completely different intents. The Agilent guide is polished, views testing from a general point of view, and serves as a vehicle for pushing LXI. The Keithley guide is not so polished, goes over the guts of testing (e.g., the many pages devoted to discussing passive and active components), and includes numerous examples.
In short, the Agilent book is written for a manager, VP, or someone looking for general information about testing. The Keithley book is written for the engineer. If you're a test engineer, I recommend reading both, filing away the Agilent book, and putting the Keithley book on the shelf for frequent reference.
There is a book about test engineering that is supposedly a college-level intro coursebook. Maybe I'll take a look at it for comparison. There is also a free handbook on LXI interfaces available. I haven't looked at it yet myself, but it may be worthwhile.
Tuesday, October 23, 2007
I had a few minutes yesterday to glance through Test & Measurement World and saw an article detailing some of the recent history of wafer testing [this is part 3, so follow the links to read parts 1 & 2]. It was really pretty interesting. Of course, if you have no involvement with chip testing your eyes may glaze over.
Last night I had time to dig into the article more. The author mentioned a presentation made by Vikas Sharma at the 2007 IEEE Semiconductor Wafer Test Workshop. So I navigated over to the workshop archives for this past June and found the entire presentation. Even better, there were several other presentations there that I can use. Yes, most of those engineers are probing logic chips and testing things much more complicated than what I do, but parts are still applicable.
In a post a few weeks ago I talked about the need to lift your head and look around once in a while. What I did last night is a prime example. You never know what you'll find, and there is a certain satisfaction to be gained from discovering a little nugget of treasure.
Friday, October 19, 2007
One of the things I found so surprising was the losses incurred because people did not test - why would a company not test? I was astounded by a list on a website that documented some of the more outrageous consequences of not testing. I had forgotten about that list (the full & updated list is here), but a blog I read the other day reminded me of it.
40 IT failures caused by software bugs by ZDNet's Michael Krigsman -- Rick Hower, who runs the Software QA Test Resource Center has compiled a lengthy listing of “major computer system failures caused by software bugs” Here are several entries from that list: A September 2006 news report indicated problems with software utilized in a state government’s primary election, resulting in periodic unexpected rebooting of voter check-in machines, [...]
The reason I posted this today is that I had to fight this "why should we test?" attitude recently. In software testing, one of the main excuses for shorting the test process is time. Testing slows the software life cycle, and there is a push to just release the software and fix the bugs as they crop up.
The reasons are different in hardware testing. Yes, test time is a concern, as is cost. But yield is an even bigger issue - especially at medium to high production volumes. Production managers tend to think that "every parameter tested is a chance for the yield to go down." So they naturally want to reduce the amount of testing. They are judged on how close to 100% yield they get. And in a narrow view of the company, the business really does depend on how good the yield is.
Testing is important. If you subscribe to continuous product improvement (Kaizen) mantras, you have to have the data. And not every piece of data is used to make a pass/fail decision. Engineers need the data to judge how well the product or process is working, where there is room for improvement, where you can save effort.
It isn't just about testing the product before shipping it to make sure it's okay. It is about making the product better over time. Morally, you can't ask for much more than that.
Back in mid-July I talked about a new series of Agilent white papers on using Linux in test systems. Well, the second paper in the series, "Using Linux to Control LXI Instruments through VXI-11," has just come out.
The paper begins by defining VXI-11: the GPIB equivalent for controlling instruments via Ethernet. It was added by the VXI Alliance in 2000. The other method the VXI group added, direct TCP socket communications, is a lower-level protocol. This paper maintains that VXI-11 is better for most cases.
It then proceeds to talk about Remote Procedure Calls (RPC). VXI-11 is based on RPC, so Linux supports it directly (no $500 GPIB cards or expensive cabling required). To generate the RPC plumbing, Agilent promotes the rpcgen code generator, and the paper supplies several code examples built on the generated stubs.
In general, the white paper was well organized, the author knew the subject, and the narrative flowed well from beginning to end. But this was not a Linux paper. Other than stating that you can use RPC in Linux (obvious), the paper is really just about VXI-11. To cap it off, the name "Linux" is mentioned only FOUR times in the text of the paper.
The paper is really a veiled push for communicating via VXI-11 regardless of the operating system. But as I stated in my original post for this series: white papers tend to be self-serving. They are usually generated by the marketing department. And Agilent certainly has a big stake in using Ethernet to access test equipment. All the big names in test equipment are members of the LXI Consortium (LAN eXtensions for Instrumentation), but Agilent was an early proponent of the standard and was the first company to have LXI-certified equipment.
The remaining papers in the series are "Using Linux to Control LXI Instruments through TCP Communication", "Using Linux to Control USB Instruments", and "Using Linux in Soft Real-Time Applications". I'll be reviewing those as they are released. Hopefully the remaining papers will have more substance relative to Linux. Unfortunately, I now harbor doubts.
Monday, October 15, 2007
This book is called a "first edition," although portions felt like they were written some time ago and pasted into this new book. Also, I couldn't find a PDF copy available online, which is very retro. Finally, there is no summary at the beginning of the book - there isn't even a page listing when and where it was published. Clearly the book did NOT come from the marketing department.
The book has 9 sections and three appendices. Only the ninth section is listed as "examples," but sections 6,7, and 8 are all about different applications as well: temperature, strain, and current measurements.
Sections 1 & 2 discuss hardware concerns when building or upgrading a test station. They include mentions of Keithley hardware, but they also cover processors, bus architectures, and networking.
Section 3 discussed software. There was some marketing influence here - several pages were devoted to the Keithley script programming tool - but they also spent time on open source issues, which I think is commendable. They also talked about IVI software; Keithley is a sponsor member of the IVI Foundation, but so are Agilent, NI, Tektronix, and Rohde & Schwarz.
In sections 4 & 5 they examined electronic components (from resistors to op-amps) and how they might relate to test engineering concerns. Very basic, EE stuff, but good to go over as a refresher. Section 6 covered DAQ in some detail, including ground loops (which have bitten me on at least one occasion). As previously mentioned, the remaining sections go over some details in measuring temperature (which I've had to do), strain (which I don't do), and a few other applications.
This book felt like more of an introductory survey than anything else. It didn't delve deeply into any single topic, yet it presented an overview of a variety of topics and mentioned things that warrant further research. They also had a large variety of application examples. I liked it. It was fairly straightforward, with a minimum of marketing fluff, and was aimed at test engineers.
My next post on this topic will be about the Agilent book.
Monday, October 8, 2007
See the original post. In short, he runs existing test systems. He may add minor modifications or improvements as necessary.
The engineer is part of a research group, either in an advanced R&D group (e.g., Bell Labs), a federal lab (CERN), or a university setting. The test systems he builds are very specialized and are likely only used for a limited time.
His primary responsibility is building new systems. After that he hands them off to someone else. This is usually the province of a contractor, although most test engineers have done this at one point or another.
There is a LOT of information out there about software testing. I only have a couple of years of software test experience, so I hardly qualify as an expert, but in general software test is all about testing software - use cases, limit testing, memory leaks, boundary testing, etc. There is very little hardware involved, other than the computer itself.
Of course, you may be in a position where your job duties are a mixture of these categories. I've certainly done that in the past.
Okay, I've written enough on the subject of test engineering as a career for a while. As I wrote on October 2nd, my next couple of posts will be on those test handbooks written by Agilent and Keithley.
Over the past few months I’ve written a few times about different types of test engineering. Last Friday I faced a seamier side of test engineering: legacy systems.
I don’t want to discuss the work in detail and bore everyone, so here is a synopsis:
- I’m overseeing a test system that has to be ready in a month.
- The system has hardware that was designed decades ago, and some of the software was written a good dozen years ago, based on software even older.
- I scheduled a consultant to fly in from
- I’ve had an engineer and an intern do some preliminary setup work.
On Thursday afternoon I was performing last-minute checks and couldn’t get the auto-alignment system to work – the camera would not capture any images. Friday morning I dug into it further and found that the entire motion control system (the 20-plus-year-old part of the system) was motionless. I talked with an engineer at the firm responsible for the system, and he was racking his brain trying to remember some of the details. Great.
The whole motion system is controlled through a single serial port. After having little luck with the phone support, I tried wiring pin 2 to pin 3 (output to input) to test the port itself. It was fine. I tried swapping out card cage boards with systems that were working. No luck. I even swapped out the image grabber card, but that didn’t work either. After an entire day of work, I finally found the problem: a combination of a flaky PCI slot and an incorrect software setting.
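The pin-2-to-pin-3 jumper trick generalizes into a quick automated check. Here is a hypothetical Python sketch of the idea (the function and probe string are my own invention, not from any vendor library): write a known byte string out the port and verify the same bytes come back. With TX jumpered to RX, a healthy port echoes its own output.

```python
def check_loopback(port, probe=b"LOOPBACK-TEST\r\n"):
    """Return True if `port` echoes `probe` back unchanged.

    `port` is any object with write()/read() methods -- for example a
    pyserial Serial instance opened with a short timeout, on a port
    whose TX (pin 2) and RX (pin 3) lines are jumpered together.
    """
    port.write(probe)
    echoed = port.read(len(probe))
    return echoed == probe
```

With pyserial this would look roughly like `check_loopback(serial.Serial("COM1", timeout=1))`. If it returns False, the problem is upstream of the cable: the port hardware, its driver, or (as in my case) the slot it lives in.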
The point of all this is that sometimes the systems you use for testing are not your own. They’re old. Yes, they work, but they can be finicky. The documentation is incomplete, you don't have access to the source code, replacement parts are hard to find. Sometimes that’s just part of the job.
Tuesday, October 2, 2007
That last item is important to me. Sometimes test companies can give you plenty of useful information in books like these - their engineers built the test equipment, and they've done plenty of research on how to use it for testing. But sometimes there's so much marketing fluff that it's hard to separate the useful from the dubious, so I end up ignoring the whole thing.
So, I've decided that I'm going to read through these books and review them. Hopefully I'll learn something, and I might as well pass my opinions of the books to anyone else interested in them. I'll post the first review (probably the Keithley book) next week.
The names of the books (and links to them) are:
"Understanding New Developments in Data Acquisition, Measurement, and Control"
"Test-System Development Guide: A Comprehensive Handbook for Test Engineers"
Hard copy http://www.home.agilent.com/agilent/editorial.jspx?action=download&cc=US&lc=eng&ckey=1244104&nid=-536900530.0.00&id=1244104&cmpid=20580
Monday, October 1, 2007
Leave something for Phase 2.
You must resist the temptation to add new features to the system. Release the test system to production, and THEN work on your plans for the upgrade.
Do NOT hard-code specs. The specs will always change. Put the specs into a database that you can query, a spreadsheet file, or at least some sort of configuration file.
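As a minimal sketch of the "no hard-coded specs" rule, here's how specs might live in a plain INI-style file read with Python's standard configparser (the section names, keys, and limits are all invented for illustration):

```python
import configparser

# Hypothetical spec file contents; in practice this text lives in a
# file next to the test software so limits can change without a rebuild.
SPEC_FILE = """
[optical_power]
min_dbm = -3.0
max_dbm = 1.5

[center_wavelength]
min_nm = 1549.8
max_nm = 1550.2
"""

def load_specs(text):
    """Parse spec sections into a dict of {test_name: {limit: value}}."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    return {s: {k: float(v) for k, v in cfg[s].items()}
            for s in cfg.sections()}

specs = load_specs(SPEC_FILE)
```

When marketing tightens a limit, you edit one file and redeploy nothing.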
Log what happens. Make sure the software logs what happens - you cannot always rely on the technician running the test to write down what happened when and why.
Be aware of the environment.
Your test equipment or the DUT itself may change as the temperature, humidity or pressure change. You have to take this into account.
Know your accuracy.
You need to know the specs of the test equipment in your system. To be more specific, you need to know the specs for how you use the equipment, because vendors will sometimes list different specs for different conditions.
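Vendors often quote accuracy in the form ±(a% of reading + b% of range), with different coefficients per range and temperature band. A quick sketch of turning that into a worst-case error bound (the spec numbers below are invented, not from any real datasheet):

```python
def worst_case_error(reading, measurement_range, pct_of_reading, pct_of_range):
    """Worst-case error for a spec of the common form
    +/-(a% of reading + b% of range). Percentages given as fractions."""
    return abs(reading) * pct_of_reading + measurement_range * pct_of_range

# Hypothetical DMM spec: +/-(0.005% of reading + 0.002% of range)
# on the 10 V range, measuring 7.5 V:
err = worst_case_error(7.5, 10.0, 0.00005, 0.00002)
```

Run this for the range and conditions you actually use, not the headline number on the datasheet cover.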
I like what I've read about it so far, and I've already introduced it to the other people in my group. The plan will be to implement it in work going forward. We currently use a mixture of home-grown XML and a small number of plain-text files. Moving to ATML is a logical progression.
The thing that bothers me is why didn't I hear about this before? It has been in the works since at least 2005, there was an article about it in Evaluation Engineering, and even National Instruments wrote about it to promote their Test Stand software. Perhaps I was so buried in the details of my current company over the past two years that I never lifted my head to see what else was happening.
That is something that engineers, especially test engineers, need to do. New testing tools keep coming out, better versions of existing instruments, new ways to test - we need to stay abreast of these things.
Monday, September 24, 2007
Document control is an important subject for any manufacturing firm. Work instructions, specs, hardware designs, test plans - these all have to be managed. Where I currently work we have a nice system called Omnify that performs these tasks well, but it took a lot of work to get to that point.
Hardware systems are not immune to configuration management issues, for many of the same reasons. Do a Google search for "hardware configuration control," and you will find items such as discussions about control issues on the old NASA Apollo program, telecommunications standards for configuring hardware on a network, and plenty of ads for software that helps to manage hardware configuration issues.
Having said all that, I have to conclude that similar concerns apply to test stations. For many test engineers this isn't a big issue. There is only one test system, they update it when they need to, and all is good. But what if you have several test stations that, for one reason or another, are different?
This is something I have struggled with lately. I have two different test stations that test the same things and run almost the same software. Both stations use an optical spectrometer, but they use different spectrometer models and that leads to slightly different software. When I upgrade the software on the stations, I need to maintain separate software images. When the technician does calibrations, I make sure there are different work instructions he follows for each station. If the firmware for the spectrometer is upgraded, it must be done separately and logged as separate activities.
The long term solution to this problem is upgrade both stations to the same model of spectrometer and therefore the same software. But in the short term it requires rigorous configuration control. That's just the way it is.
Wednesday, September 19, 2007
Anyway, this convention is happening right now & lasts until tomorrow. It's at the convention center in Baltimore's Inner Harbor (a pretty nice location). I scrolled through the list of technical sessions and was impressed by the list. I have been to trade shows where there was a lot more marketing than actual learning, but the signal-to-noise ratio appears to be higher for this event.
So, if there's anyone reading this that went to this show this year, or has gone in past years, let me know what you thought of the show. I'll probably write an entry sometime down the road about trade shows for testing, and I'd appreciate the input.
Tuesday, September 18, 2007
But there is a particular trap that a test engineer can fall into. Usually the system does not ship out the door - it's still there. The engineer will probably be running it, at least until he trains a technician to use it. As he uses it, he sees ways to speed it up, features to add, neat little things it could do. THIS is the trap: the test engineer spends so much time modifying/improving the system that he neglects his other tasks.
I will be honest and admit that I did this a time or two, especially when I was younger. My first love was a calorimeter I built in graduate school. I worked for months to create the right design, weeks on the code (Fortran!) to verify I was extracting the data correctly, more weeks writing software filters for the data. It was anticlimactic to actually just sit and take data and analyze that data. That's what I needed to do for my thesis - but I was happier playing with the code and the hardware. Eventually I had to force myself to do the actual testing.
Test systems are like children. They grow up and become productive - you have to let them go. But if you start taking pictures for your family album... then that's just weird.
Wednesday, September 12, 2007
Know what you are testing, or work closely with someone who does.
A test system built without knowledge of what it will test, and how, will not work.
Eventually a technician will need to know how it works, unless you want to run the station yourself forever.
Anything that can go wrong will. It's really just applied statistics. So plan for that when building the system.
Create PM plans and schedules.
Most people think of PM (preventive maintenance) in terms of manufacturing systems: you should have a schedule for when parts need to be oiled, when accelerometers have to be recalibrated, when to clean off gear assemblies. But test systems, especially high volume ones, need this maintenance as well.
Run it with actual parts.
NEVER proclaim the system is ready to go before you have run it through its paces thoroughly with actual components. I cannot stress this enough.
There's my list. Please comment or email me if you have suggestions of your own. I'll probably update it in a month.
Tuesday, September 11, 2007
Patents are not cheap. If you work for a large corporation that regularly submits patents, this is not a noticeable issue. But for everyone else, there has to be a very good justification to apply for a patent.
A company will apply for a patent for several reasons. The patent can provide a barrier to a competitor trying to enter a new market. It can protect the company from competitors who may try to use the research that company has done without paying a license fee. Some companies may not actively use the patents, but they can make money off the patents by licensing them. Patents can be a source of pride – listing all the patents a company has applied for is equivalent (in certain industries) to beating your chest.
But there is a catch: you have to be able to prove that the patent is being violated. For example, suppose you have an innovative new manufacturing process. This process is an intermediate step that deposits certain chemicals on the product, and later that layer is removed. This process saves money and improves the product, but technically there is no real way to prove that you do it. Because of that, your competitors could use the same process (that they read about in your patent application) and claim that they do not do that. Short of going into their manufacturing facility, you can't prove it. So, your company might be better off just classifying it as a trade secret and not patent it.
A similar conundrum can apply with test methods. You have a new way to test your product. It's clever, it saves money, it's faster. But how can you prove your competitors test their products that way?
This is something that I'm wrestling with right now. The only saving grace is that if this is the only way to reasonably test the product, then I can probably apply for the patent. We'll see.
Tuesday, September 4, 2007
This is different if they are CS majors. In that case, they've probably been exposed to software testing theories in classes. There are scores of books, websites, and blogs on the subject. They may have even interned as a SQA (software quality assurance) engineer. I've met Microsoft testers, and they've said that most programmers hired start out in a testing position before they do anything else.
But I think hardware testing is a lot more dependent on exactly what you are testing, so the nature of it is harder to teach in a classroom setting. Yes, there are a few books that present an overview of the subjects (like Test Engineering: A Concise Guide to Cost-effective Design, Development and Manufacture by O'Connor), but it's just an overview. You'll get exposure to the basic tools (oscilloscope, DAQ cards, etc) in EE classes. You may learn about statistical methods in a stats class or industrial engineering class. But to really learn about the specifics of testing in a certain field, then you have to dig into the details. For example, you'll never learn about the details of fiber optic testing in an undergraduate class. You either learn that on the job or maybe from a book (like the excellent Fiber Optic Test and Measurement by Derickson).
Of course, what you major in during college is not necessarily a predictor of what you'll do in life. I once worked with a manager of the Integrity Program for the F-22 at Lockheed Martin in Ft. Worth, TX. It was a fairly prominent engineering position with a good deal of responsibility. He had a BA in philosophy....
Thursday, August 30, 2007
I found a Dept. of Defense definition of sustaining engineering as follows:
The technical effort to support an in-service system in its operational environment. This effort spans those technical tasks (engineering and logistics investigations and analyses) to ensure continued operation and maintenance of a system with managed (i.e. known) risk.
I would consider this to be a fairly accurate summary. So, the point of my post is "how much of this is test engineering and how much is it manufacturing engineering?" The sustaining engineer is not creating a test system, but he is responsible for a system's upkeep, making improvements if needed, and monitoring/analyzing the data it creates. These duties are directly related to the test system. But in theory the test system should have been engineered well enough that it can be turned over to manufacturing without a lot of care and feeding. I mean, Agilent may build a $200k test system that they sell, but they do not sell an engineer with that system.
Personally, I think the answer is that the sustaining engineer is part of the test department (if the dept is large enough to handle that subdivision of labor). He may have had a hand in the creation of the system, and now he helps run it. Ideally, he still has time to work on other projects because the test station has few problems.
Of course, if your test group primarily does R&D work, then all bets are off. Any test system you build will probably be used a few dozen times and then scrapped in favor of something new...
Tuesday, August 21, 2007
Hire an intern with broad technical interests. Building and running test stations requires the engineer to be something of a jack of all trades. If the station is automated, you need to know software. You may be called upon to build some fixturing, so you may need to know a bit of mechanical engineering. If you need to debug a circuit, then an electronics background is handy. This can be fun stuff, but only if you _like_ doing different things. All Bill liked to do was program. He did that well, but not much else.
Scope the assignments. There are no courses that teach how to test - it is a very specific skill that is learned from experience. Bill often had little direction from the manager (who was busy looking for a new job at that time) and he floundered. I gave John detailed instructions, and he flourished.
Play to his strengths. This is a corollary to the first point. The intern may have interests in different fields, but he is still getting his degree in a specific discipline. Bill did a great job with the project that involved a lot of software. Since John is a EE major, I've given him tasks that minimize his programming. Choose projects that play to what the intern can reasonably do.
Saturday, August 18, 2007
This summer we've had two interns in my test group. There are several good engineering schools here in the New England area, and they run programs that send students out for an entire semester. The firms get good yet inexpensive help and the students get college credit. We had an intern in our group last year and he worked out well.
One of our interns (call him John) was tasked to work with me on a couple of different projects I have. John is an EE major who has interned with two other companies in the area (with a good track record) and has excellent grades in school. He's a nice, easy-going guy, fairly responsible for his age, and very eager to work with some serious technology.
I have two major projects I am managing. Both projects involve wafer-level testing, although the second is more software-oriented than the first. The first thing I did was make a general plan of what I'd like him to do. First he needed to learn the basics of the test stations and then get some practice time in. Since most of the first project can be viewed as separate mini-projects, I decided that I could parcel out some of that work to John.
I forwarded a collection of documents - work instructions and a couple of presentations I had made for the stations - for him to read. I spent time showing him how the testing worked. Before I went on vacation for a week I asked a production engineer to help him do some testing by himself, and I forwarded detailed instructions of some work he could do.
When I got back from vacation I was pleasantly surprised to see that he had spent most of his spare time in the clean room working on the tasks I had assigned. He had not completed them, but he showed me how far he had gotten and had clear plans for what remained. Since that time I have assigned him other small projects with clear instructions, and he has completed them successfully.
Part 1 of 3 talked about an unsuccessful intern.
Part 3 of 3 will draw a few conclusions.
Thursday, August 16, 2007
In the late 1990s Hewlett Packard had a program (they probably still do) called SEED: Student Employment and Educational Development. Every summer the company hired standout undergraduate students as summer interns. In my division it was a very organized program, the students hired were bright and articulate, and several of the people I worked with were former SEEDs themselves.
One summer my test group retained a SEED intern - let's call him Bill. Bill was a smart guy pursuing a BS in CS at a great school. He had a good deal of experience with designing web sites and Java programming (Java had only been released a couple of years before). He even did some volunteer work for Red Hat debugging Linux - they gave him stock options which paid out very well when Red Hat went public in 1999. In other words, he was a very clever computer geek.
Bill did very well with one project he was assigned. The tasks were a) write a Java program that interfaced with a spectrum analyzer, b) repeatedly run the analyzer through a series of rigorous tasks, c) track the free memory of the analyzer over a long period of time, and d) find any memory leaks and what triggers them. He wrote the program, ran it, and found some important problems.
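The analyzer-control half of that project needs real hardware, but the leak-detection half - deciding whether free memory is trending downward across many iterations rather than just dipping once - is pure logic. A sketch in Python (Bill's actual implementation was Java; this is my reconstruction of the idea, not his code):

```python
def memory_slope(samples):
    """Least-squares slope of free-memory samples, in bytes per iteration.

    A persistently negative slope across many iterations of the same task
    sequence is the signature of a leak; a one-off dip is not.
    """
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def looks_leaky(samples, tolerance=0.0):
    """Flag a leak when memory drops faster than `tolerance` per iteration."""
    return memory_slope(samples) < -tolerance
```

Feed it the free-memory reading taken after each pass through the task sequence, and correlate any flagged runs with the tasks that triggered them.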
Bill liked programming, but he wasn't interested in data analysis, hardware (EE stuff) or miscellaneous paperwork. When he had to do those tasks he performed poorly. For example, when he was asked to analyze some data from a series of tests, I spent a lot of time walking him through using advanced features in Excel. If I wasn't there showing him what to do and how, he would tend to chat online with friends and play games. He needed a lot of supervision and guidance.
To be fair, one of Bill's problems was a lack of planning on the manager's part. In my opinion, he didn't always assign tasks that played to Bill's strengths, and sometimes the goals and steps towards those goals were vague. But at the end of the day it was Bill's responsibility to try and figure out things he didn't understand, and to actively seek out help when he needed it (instead of spending the day playing Diablo).
Part 2 of 3 will talk about a successful intern.
Part 3 of 3 will draw a few conclusions.
Monday, August 13, 2007
Aside from those gripes, this book is a great resource for three specific reasons:
- It covers specific aspects of LV programming chapter by chapter. These divisions are clear, logical, and discussed in detail.
- He uses copious examples.
- He devotes much of chapter 10 to the VI Analyzer, which I have found to be an incredibly useful tool. For some reason NI never pushed this tool, but the book's popularization of it may go some way towards improving the overall standard of LV code.
I should post a disclaimer here as well. Back about 4-5 years ago I interviewed with his company and almost started work there. Then I got wind of how little money they would offer and bailed. I was disappointed with the firm for a while, but I still like this book.
Friday, August 10, 2007
- Test engineer salaries
- Career path for engineers
- Useful background/experience for test engineering
- Typical issues faced by a test engineer
- Sustaining engineering vs. test engineering
- Falling in love with the test system
- Outsourcing test
- Labview style
So, take it for what it is worth.
- NI finally added conditional for-loop exiting. They've needed this for a while. After all, an endless loop is one of the classic software bugs, and in LV the while loop is the prime contributor to this problem.
- They've changed shared re-entrant VIs some. I haven't looked at this in much detail yet, but I've always been a little ambivalent about the re-entrant status of VIs in general. I know there are developers who rely on that feature all the time to use a subVI as a sort of variable for holding data. To me that seems like a bit of a crutch and asking for complicated bugs. I'll play with the new changes to see if it has changed my mind any.
- NI is anal enough to change the names of VIs because they misspelled "zeros" - previous versions of LV include a VI called Find All Zeroes of f(x). I got a real chuckle out of that.
Of course, NI almost always releases a new version of LV right around the beginning of August to coincide with NI Week down in Austin, TX. I went last year but was on vacation this year. So I wasn't really surprised the new version came out. But it's fun to have a new toy nevertheless.
Thursday, August 2, 2007
When I did software testing for HP, one group of errors I specifically tried to account for were corner cases. The common thinking was that it was always relatively easy to find many bugs - they were in the middle, so to speak. But product quality was always top of mind at HP, so we wanted to dig out the odd little bugs that 'hid in the corners.' Besides that, an expensive piece of test equipment, especially one in a lab, can be used in all sorts of odd ways over the course of its life. And a corner case, by definition, is a situation that happens under extreme conditions.
After a while I started thinking about extending this idea. What if you referred to a corner case in terms of how the system is used in general, instead of adjusting specific parameters? If the usage of a program, or software embedded in test equipment, or even a test station, is a three-dimensional space, then what are the dimensions? To frame it as a physics question, what are the degrees of freedom?
One dimension would have to be the number of functions - all the distinct things that a user can get the software to do. If you use just a few of the software's features (typically the most common ones), then you are not likely to find a corner case bug. The most common features are usually thoroughly vetted by the design team. But if you use a lot of those features, in different orders, then you may find something bad.
The second would be time. That is, the more a program is actively running (not just idling), the more likely it is to encounter a problem. It may be a memory leak, but it could be some other problem as well.
The third and final degree of freedom is the number of users of the system. I've written programs that were buggy, but that was fine because I was the only one using them. I ran them a certain way, to do certain things, and they worked just fine. But if I gave such a program to someone else, sure enough they would crash it. This can be generalized to, "The number of ways in which a program can be used and abused increases in proportion to the number of users of that program." And this applies to inanimate users as well. If a program can be accessed and used by some other application, then that program is itself a user.
So, lay these three degrees of freedom on orthogonal axes and you have a Feature Space Cube (FSC). Down by the origin you won't find many interesting bugs. The farther out you go, the more interesting things can get. Think about the ramifications for a while, and the concept can be a useful way to picture a system, not to mention entertaining in a geeky sort of way.
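To make the picture concrete, here's a toy sketch of the FSC in Python. Everything here is illustrative - the normalizing maxima and the scoring are invented, since the cube is a mental model rather than a formal metric:

```python
def fsc_coordinate(features_used, total_features,
                   hours_run, max_hours,
                   n_users, max_users):
    """Place a usage pattern in the Feature Space Cube.

    Each axis is normalized to [0, 1]; the maxima are whatever bounds
    make sense for your system (these are assumptions, not standards).
    """
    return (features_used / total_features,
            min(hours_run / max_hours, 1.0),
            min(n_users / max_users, 1.0))

def corner_score(coord):
    """0 near the origin (well-trodden usage, boring bugs);
    approaching 1 at the far corner, where the interesting bugs hide."""
    return sum(coord) / 3

# A station exercising 45 of 50 features, run 2000 of a possible
# 2400 hours, shared by 8 of 10 potential users:
point = fsc_coordinate(45, 50, 2000, 2400, 8, 10)
```

A test plan that only ever samples points near the origin is, by this picture, guaranteed to miss the corners.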
Monday, July 23, 2007
For example, I have a friend who is diabetic. She went in to get her "A1C" tested and it was measured to be 6.4%. Since the American Diabetes Association recommends that number be 6.5 or less, she felt pretty good.
So I did some checking on Wikipedia and found the following quote:
"Interpreting results is somewhat difficult because (1) laboratory results can differ depending on the analytical technique and (2) biological variation between individuals can be up to 1 percentage point (i.e., two individuals with the same average blood sugar can have A1C values that differ by up to 1 percentage point)."
So the error is at least ±0.5 percentage points. The Wikipedia article also states that the results can be off if there was a recent change in diet.
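The arithmetic is trivial but worth making explicit - a ±0.5-point band around her 6.4 reading straddles the 6.5 target:

```python
# Quick check: with a +/-0.5-point uncertainty band around a 6.4 A1C
# reading, can the true value exceed the 6.5 recommendation?
reading = 6.4
half_band = 0.5   # half of the ~1-point biological variation quoted above
low, high = reading - half_band, reading + half_band
exceeds_target = high > 6.5   # the band reaches up to 6.9, well past 6.5
```

So "6.4, you're fine" is really "somewhere between roughly 5.9 and 6.9" - a very different statement.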
This particular doctor did not discuss uncertainty levels, or maybe he didn't know. Maybe he just assumed that 6.4 really is 6.4. So I guess the lesson to take away from this is that you should always ask your doctor about the test itself. If he doesn't know, then do your own research.
Tuesday, July 17, 2007
So I just downloaded the first in a series of Agilent white papers on using Linux in test systems. The paper has a brief history of Linux, comments on licensing structures, lists available tools, and discusses controlling tools within Linux.
In a clever marketing move, Agilent is releasing these papers episodically. The other papers in the series are:
- "Using Linux to Control LXI Instruments through VXI-11"
- "Using Linux to Control LXI Instruments through TCP Communication"
- "Using Linux to Control USB Instruments"
- "Using Linux in Soft Real-Time Applications"
Normally I would skip these papers. They tend to be self-serving and talk about things that are pretty obvious. But I would like to hold Agilent to a higher standard (that may be naive of me). Plus, in my test group we've seriously considered using Linux on test systems - so I'm interested in what they have to say.
The paper also lists four reasons why you'd use Linux in the test system:
- Threat of software obsolescence. What happens if the version of Windows you run on your system becomes obsolete?
- Flexibility/performance. The Linux OS can be customized.
- Stability. It's very unlikely to go belly-up on you.
- Leveraging know-how and software. There is a lot of UNIX experience out there that can be extended to Linux.
To those four I would add two of my own:
- Technicians and operators are less likely to screw around with the computer (browse the net, download games) if it doesn't run Windows.
- As a corollary, engineers won't try to run other programs, analyze data with Excel, or remote login to their own computers on a Linux test station.
Monday, July 16, 2007
But where does testing belong? I've seen three different choices in the half-dozen or so companies I've been with:
- A completely separate group. Each member of the group supports one or more projects.
- Part of the engineering department.
- Part of the manufacturing department.
Of course, if you already have three or more test engineers then you may want to group them together. Get those guys together and they share ideas and solutions, just like any other engineering team.
If the company's products are mature, then test is probably part of production. Your job is to get product tested and out the door, and you may be called on to debug test stations, process flow, or maybe the product itself.
If test is part of the engineering team, then the test systems may evolve almost as fast as the product itself. When the design engineers add a new feature, or marketing has new specs or a new customer, test engineering has plenty of work to do.
At least, those are my thoughts on the subject.
Saturday, July 14, 2007
The question I now have is, how applicable is this to test engineering? Is it the responsibility of the test engineer to get involved in DOEs? Should test engineers consider learning how to use stat software like Minitab or JMP (both of which have extensive DOE support)?
I think it really depends on where you work. For example, if part of your job as a test engineer is dealing with SPC charts and processing test data, then DOE is a logical extension of that work. Or if you have a test system with multiple settings which need to be optimized, then DOE would be helpful. However, if you spend most of your day writing code for test systems, then it's probably just a "nice to know" thing.
BTW, the course instructor said that if you read any single book on the subject, it should be Understanding Industrial Designed Experiments by R. Launsby and S. Schmidt. It is, of course, at Amazon.com.
Friday, July 13, 2007
What interests me the most is that it's USB compliant. In my experience, test engineers are pretty conservative about switching to new technologies. It makes no sense to switch to something until you're sure it won't screw up your measurements. When I worked at HP I was part of a team introducing a new spectrometer that had an Ethernet port. That port was used for internal access to the embedded code, among other things, and I thought it would be great for customers to have TCP/IP access to the instrument instead of just HP-IB. The idea was shot down by marketing because it was considered "too new".
The GPIB interface has been around for a LONG time, and even longer as HP-IB (I think it's in the HP/Agilent handbook that you have to call it HP-IB). Is it time to get rid of it? The downsides include
- Those cables can be very expensive
- You have to buy and configure a separate interface card
- The cables and connectors are bulky
- Interface speed can be a limiting factor
But personally, I've already used two instruments with USB interfaces, and I really like the idea. Now that other companies appear to be offering USB, I'm ready to kick the GPIB habit. Is there a 12-step program for that?
Thursday, July 12, 2007
Does it have to do with "getting there first?" Well, let's see. C++ has been around since 1985 (and regular C since 1972). LabVIEW was created in 1986 for the Macintosh platform and was ported to Windows in 1992. Visual Basic was introduced in 1991. Looking at the dates, C++ got there first, so that can't be the explanation.
Is it because LV has a lot more language support? I don't think so...C++ has a worldwide user base, and Microsoft has historically had plenty of support for its flagship programming language.
Is it because LV is focused more on testing? Maybe. C++ is a general-purpose language, which could be both a strength and a weakness. VB was historically used a lot for DB interfacing and front-end GUI apps. LV has its diagram that looks like an electrical schematic, ready support for all kinds of test instruments, and National Instruments' (NI) claim that it is designed for "test, measurement, and control applications."
Personally, I think it has more to do with having a champion than anything else. C++, while created by one person, is a universally accepted standard language and lots of companies build compilers for it. Visual Basic is supported just by Microsoft, but I think they've always wanted it to be an alternative to C++ while at the same time promoting Windows as an OS.
But LabVIEW has always been about testing. NI has always claimed it is the perfect tool for test apps, they continually push building a community of users for it (free introduction seminars across the country, user groups in larger cities), and developing new test tools for it (motion control, DB tools, vision). Yes, LV is supported by a single company and if NI ever goes belly up a LOT of people are in trouble. But sometimes a single backer with deep pockets can help.
Comments on this are appreciated.
Friday, July 6, 2007
I was a field apps engineer for two years, and I did it part time for a year with a different company. Furthermore, I've met several other apps engineers who transitioned from test engineering. Granted this is purely anecdotal evidence, but is there a good career path from one to the other?
Let's look at why a test engineer might do well in this position:
- He has a solid technical background.
- If the company has multiple product lines, he probably has written tests or helped to test those products, so he has a breadth of knowledge.
- A test engineer who has seen various product failures can help customers who may have similar problems.
- A test engineer has seen the negatives of the product (i.e. - failures) but is still focused on the product (making it work right by correcting failures, or at least weeding out bad products). With this attitude a field applications engineer can build a layer of trust with the customer while at the same time help to sell him on the product.
But in the end it all still depends on the person.
Thursday, July 5, 2007
After graduate school, I worked for an aerospace subcontractor firm as my first real job. The company’s products were split 50/50 between the military and commercial fields. On my first manufacturing project I started to run behind on shipments because the yield was slipping. When that happened, I was told by more seasoned engineers (who had initially started this project) to go through the "marginally failed" units and "test them until they work."
- The spec was extraordinarily tight for the product (blame was placed on sales & marketing).
- We were up against the accuracy limits of the system.
- It didn't really matter if the positioning of the cannon was off by a couple arc-seconds. They had redundant systems in place.
Stepping aside from the questionable moral grounds of this situation, let's look at that rationale list from a test engineer's perspective.
- Spec too tight. Marketing should certainly know to what tolerance the product can be tested. If they don't, then it is the job of test engineering to inform them. If marketing plays the word game of "it is guaranteed by design" then it should not need to be tested, now should it?
Of course, if marketing knows the limits and chooses to ignore them, then you have much bigger problems...
- Limited test accuracy. If you are trying to test to a spec that is at the limit of what you can measure, then you have serious problems. Buy a more accurate tester, build one if you can't buy it, or do sufficient test system qualification to verify your accuracy. You have no business being anywhere near those limits. Test equipment manufacturers themselves can play "specsmanship" games, so you cannot always trust their numbers.
- The customer doesn't really need that accuracy. I'm sure it's possible that the customer has over-specified what they need. They may have other backup systems in place if the accuracy is not there, they may have an over-tight spec because they don't entirely trust the product (or the company). Or they may just be clueless. But you can't get into the game of second-guessing the customer. That'll get you in deep trouble, somewhere down the line.
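On the "limited test accuracy" point above: the standard defense is guard banding - tightening the pass limits by the measurement uncertainty so that a "pass" verdict survives the worst case. A minimal sketch, with invented numbers:

```python
def guard_banded_limits(lower_spec, upper_spec, uncertainty):
    """Tighten pass limits by the measurement uncertainty so a unit that
    passes is inside spec even in the worst case. If the tightened window
    collapses, the tester simply isn't accurate enough for this spec."""
    lo = lower_spec + uncertainty
    hi = upper_spec - uncertainty
    if lo >= hi:
        raise ValueError("test uncertainty consumes the entire spec window")
    return lo, hi

# Hypothetical: a spec of 10.0 +/- 0.1 tested on equipment good to +/-0.02
limits = guard_banded_limits(9.9, 10.1, 0.02)
```

Notice what happens if you run this with the aerospace story's numbers - a spec at the very limit of the tester's accuracy makes the window collapse, and the function refuses to hand you limits at all. That refusal is the honest answer.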
So, did I really screw up as a test engineer (although I wasn't called a test engineer back then), or was I just doing what I was ordered to do? I think I will just plead 'youthful transgression' and try not to let that happen again.