Tuesday, October 30, 2007
This book, released in May 2007, is partly a repackaging of other white papers, many of which you can find here. I suspect that it was repackaged like this to compile what had been written separately as well as to heavily promote the LXI interface. I previously posted about Agilent's big stake in LXI, so I won't get into that again.
Other than the marketing-oriented aspects, I found the guide to be somewhat useful.
There are four main sections in this handbook, and each section has numerous subsections. The first discusses test system design, the second covers LAN networking issues, the third is devoted to LXI, and the fourth and final section lists some details of RF/microwave testing.
Section 1 - Test System Design
This section goes over the various aspects and theory of a test system. Parts of it I found insulting (it appeared aimed at a pure beginner), some of it overlaps with what I have posted in my rules for building systems (here and here), and some of it was actually pretty good.
For about 15 pages the guide discusses software architecture: defining the requirements, controlling instruments, and storing data. It's all very general, but I found it extremely funny that whenever they mentioned LabVIEW, their competing product Agilent VEE was always listed first.
Section 2 - Networking Choices
Here the guide covers networking considerations for a test system. This might be a bit of overkill for some people, since it is aimed at the test engineer who knows very little about networking basics.
Section 3 - LXI: The Future of Test
Yes, that was the actual title of this section. Somewhat presumptuous, and very much market-speak, but that is what the section is called.
Section 4 - RF/Microwave Test Systems
I have no real experience with this kind of testing, so I cannot speak to its accuracy or whether it was worthwhile. To be honest, I skimmed this section.
When you compare this book to the Keithley book, you can see that they have two completely different intents. The Agilent guide is polished, views testing from a general point of view, and serves as a vehicle for pushing LXI. The Keithley guide is not so polished, goes over the guts of testing (e.g., the many pages devoted to discussing passive and active components), and includes numerous examples.
In short, the Agilent book is written for a manager, VP, or someone looking for more information about testing. The Keithley book is written for the engineer. If you're a test engineer, I would recommend reading both of them, file away the Agilent book, and put the Keithley book on the shelf for frequent referencing.
There is a book about test engineering that is supposedly a college-level intro coursebook. Maybe I'll take a look at it for comparison. There is also a free handbook on LXI interfaces available. I haven't looked at it yet myself, but it may be worthwhile.
Tuesday, October 23, 2007
I had a few minutes yesterday to glance through Test & Measurement World and saw an article detailing some of the recent history of wafer testing [this is part 3, so follow the links to read parts 1 & 2]. It was really pretty interesting. Of course, if you have no involvement with chip testing your eyes may glaze over.
Last night I had time to dig into the article more. The author mentioned a presentation made by Vikas Sharma at the 2007 IEEE Semiconductor Wafer Test Workshop. So I navigated over to the workshop archives from this past June and found the entire presentation. Even better, there were several other presentations there that I can use. Yes, most of those engineers are probing logic chips and testing things much more complicated than what I do, but parts of it are still applicable.
In a post a few weeks ago I talked about the need to lift your head and look around you once in a while. What I did last night is a prime example. You never know what you'll find, and there is a certain satisfaction to be gained from discovering a little nugget of treasure.
Friday, October 19, 2007
One of the things I found so surprising was the loss incurred because people did not test - why would a company not test? I was astounded by a list on a website that documented some of the more outrageous consequences of not testing. I had forgotten about that list (the full & updated list is here), but a blog I read the other day reminded me of it.
40 IT failures caused by software bugs by ZDNet's Michael Krigsman -- Rick Hower, who runs the Software QA Test Resource Center, has compiled a lengthy listing of “major computer system failures caused by software bugs”. Here are several entries from that list: A September 2006 news report indicated problems with software utilized in a state government’s primary election, resulting in periodic unexpected rebooting of voter check-in machines, [...]
I posted this today because I had to fight this "why should we test?" attitude recently. In software testing, one of the main reasons for shorting the test process is time. Testing slows the software life cycle, and there is a push to just release the software and fix the bugs as they crop up.
The reasons are different in hardware testing. Yes, test time is a concern, as well as cost. But yield is an even bigger issue - especially in medium to high production volumes. Production managers tend to think that "every parameter tested is a chance for the yield to go down," so they naturally want to reduce the amount of testing. They are judged on how close to 100% yield they get. Furthermore, in a narrow view of the company, the business depends on how good the yield is.
Testing is important. If you subscribe to continuous product improvement (Kaizen) mantras, you have to have the data. And not every piece of data is used to make a pass/fail decision. Engineers need the data to judge how well the product or process is working, where there is room for improvement, where you can save effort.
It isn't just about testing the product before shipping it to make sure it's okay. It is about making the product better over time. Morally, you can't ask for much more than that.
Back in mid-July I talked about a new series of Agilent white papers on using Linux in test systems. Well, the second paper in the series, "Using Linux to Control LXI Instruments through VXI-11," has just come out.
The paper begins by defining VXI-11: the GPIB equivalent for controlling instruments via Ethernet. It was added by the VXI Alliance in 2000. The other method the VXI group added, direct TCP socket communications, is a lower-level protocol. This paper maintains that VXI-11 is better for most cases.
It then proceeds to talk about Remote Procedure Calls (RPC). VXI-11 is based on RPC, so Linux supports VXI-11 directly (no $500 GPIB cards or expensive cabling required). To use RPC, Agilent promotes the rpcgen code generator, and they supply several different code examples using generated code.
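To make the paper's comparison more concrete, here is a minimal sketch of the *other* approach it mentions, direct TCP socket communication: a SCPI query sent over a plain socket. This is my own illustrative example, not code from the paper. Since I can't ship an instrument with a blog post, a loopback mock stands in for the device; with real LXI hardware you would connect to the instrument's IP address (typically on port 5025 for the raw SCPI socket) instead. The instrument name and reply string are made up.

```python
import socket
import threading

def mock_instrument(server_sock):
    # Stand-in for an instrument's raw SCPI socket (real LXI boxes
    # commonly listen on TCP port 5025). Answers one *IDN? query.
    conn, _ = server_sock.accept()
    with conn:
        cmd = conn.makefile("r").readline().strip()
        if cmd == "*IDN?":
            conn.sendall(b"Acme,Model42,0,1.0\n")

def query(host, port, command):
    # Send one SCPI command and return the newline-terminated reply.
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(command.encode("ascii") + b"\n")
        return s.makefile("r").readline().strip()

# Demo against the loopback mock instead of real hardware.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=mock_instrument, args=(server,), daemon=True).start()

reply = query("127.0.0.1", port, "*IDN?")
print(reply)
```

The appeal of the raw-socket style is obvious from the sketch: no RPC stubs, no generated code. The tradeoff, as the paper argues, is that VXI-11 gives you a standardized protocol (service discovery, locking, aborts) that you would otherwise have to reinvent.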
In general, the white paper was organized, the author knew the subject, and the narrative flowed well from beginning to end. But this paper was not a Linux paper. Other than stating that you can use RPC in Linux (obvious), the paper is really just about VXI-11. To cap it off, the name "Linux" is only mentioned FOUR times in the text of the paper.
The paper is really a veiled push for communicating via VXI-11 regardless of the operating system. But as I stated in my original post for this series: white papers tend to be self serving. They are usually generated by the marketing department. And Agilent certainly has a big stake in using Ethernet to access test equipment. All the big names in test equipment are members of the LXI consortium (LAN eXtensions for Instrumentation), but Agilent was an early proponent of the standard. Also, they were the first company to have LXI-certified equipment.
The remaining papers in the series are "Using Linux to Control LXI Instruments through TCP Communication", "Using Linux to Control USB Instruments", and "Using Linux in Soft Real-Time Applications". I'll be reviewing those as they are released. Hopefully the remaining papers will have more substance relative to Linux. Unfortunately, I now harbor doubts.
Monday, October 15, 2007
This book is called a "first edition," although portions felt like they were written some time ago and pasted into this new book. Also, I couldn't find a PDF copy of this available online, which is very retro. Finally, there is no summary at the beginning of the book. There isn't even a page listing when & where it was published. Clearly the book did NOT come from the marketing department.
The book has nine sections and three appendices. Only the ninth section is listed as "examples," but sections 6, 7, and 8 are all about different applications as well: temperature, strain, and current measurements.
Sections 1 & 2 discuss hardware concerns when building or upgrading a test station. They include mentions of Keithley hardware, but they also cover processors, bus architectures, and networking.
Section 3 discussed software. There was some marketing influence here - several pages were devoted to the Keithley script programming tool - but they also spent time covering open source issues, which I think is commendable. They also talked about IVI software. Keithley is a sponsor member of the IVI Foundation, but so are Agilent, NI, Tektronix, and Rohde & Schwarz.
In sections 4 & 5 they examined electronic components (from resistors to op-amps) and how they relate to test engineering concerns. Very basic EE stuff, but good to go over as a refresher. Section 6 covered DAQ in some detail, including ground loops (which have bitten me on at least one occasion). As previously mentioned, the remaining sections go over some details in measuring temperature (which I've had to do), strain (which I don't do), and a few other applications.
This book felt like more of an introductory survey than anything else. It didn't delve deeply into any single topic, yet it presented an overview of a variety of topics and mentioned things that warrant further research. They also had a large variety of application examples. I liked it. It was fairly straightforward, with a minimum of marketing fluff, and was aimed at test engineers.
My next post on this topic will be about the Agilent book.
Monday, October 8, 2007
See the original post. In short, he runs existing test systems. He may add minor modifications or improvements as necessary.
The engineer is part of a research group, either in an advanced R&D group (e.g., Bell Labs), a federal lab (e.g., CERN), or a university setting. The test systems he builds are very specialized and are likely only used for a limited time.
His primary responsibility is building new systems. After that he hands them off to someone else. This is usually the province of a contractor, although most test engineers have done this at one point or another.
There is a LOT of information out there about software testing. I only have a couple of years of software test experience, so I hardly qualify as an expert, but in general software test is all about testing software - use cases, limit testing, memory leaks, boundary testing, etc. There is very little hardware involved, other than the computer itself.
Of course, you may be in a position where your job duties are a mixture of these categories. I've certainly done that in the past.
Okay, I've written enough on the subject of test engineering as a career for a while. As I wrote on October 2nd, my next couple of posts will be on those test handbooks written by Agilent and Keithley.
Over the past few months I’ve written a few times about different types of test engineering. Last Friday I faced a seamier side of test engineering: legacy systems.
I don’t want to discuss the work in detail and bore everyone, so here is a synopsis:
- I’m overseeing a test system that has to be ready in a month.
- The system has hardware that was designed decades ago, and some of the software was written a good dozen years past, based on software even older.
- I scheduled a consultant to fly in from
- I’ve had an engineer and an intern do some preliminary setup work.
On Thursday afternoon I was performing last minute checks and couldn’t get the auto alignment system to work – the camera would not capture any images. Friday morning I dug into it further & found that the entire motion control system (the 20+ year-old part of the system) was motionless. I talked with an engineer at the firm responsible for the system, and he was wracking his brain trying to remember some of the details. Great.
The whole motion system is controlled through a single serial port. After having little luck with phone support, I tried wiring pin 2 to pin 3 (output to input) to test the port itself. It was fine. I tried swapping out card cage boards with systems that were working. No luck. I even swapped out the image grabber card, but that didn’t work either. After an entire day of work, I finally found the problem: a combination of a flaky PCI slot and an incorrect software setting.
The point of all this is that sometimes the systems you use for testing are not your own. They’re old. Yes, they work, but they can be finicky. The documentation is incomplete, you don't have access to the source code, replacement parts are hard to find. Sometimes that’s just part of the job.
Tuesday, October 2, 2007
That last item is important to me. Sometimes test companies can give you plenty of useful information in books like these - their engineers built the test equipment, and they've done plenty of research on how to use it for testing. But sometimes there's so much marketing fluff that it's hard to separate the useful from the dubious, so I end up ignoring the whole thing.
So, I've decided that I'm going to read through these books and review them. Hopefully I'll learn something, and I might as well pass my opinions of the books to anyone else interested in them. I'll post the first review (probably the Keithley book) next week.
The names of the books (and links to them) are:
"Understanding New Developments in Data Acquistion, Measurement, and Control"
"Test-System Development Guide: A Comprehensive Handbook for Test Engineers"
Hard copy http://www.home.agilent.com/agilent/editorial.jspx?action=download&cc=US&lc=eng&ckey=1244104&nid=-536900530.0.00&id=1244104&cmpid=20580
Monday, October 1, 2007
Leave something for Phase 2.
You must resist the temptation to add new features to the system. Release the test system to production, and THEN work on your plans for the upgrade.
Do NOT hard-code specs.
The specs will always change. Put the specs into a database that you can query, a spreadsheet file, or at least some sort of configuration file.
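As a sketch of what this can look like in practice (the spec names, limit values, and file layout here are all hypothetical), the limits live in a JSON file that the test software loads at run time, so a spec change is a data edit rather than a software release:

```python
import json
import tempfile

# Hypothetical spec file contents - in real use this would live in
# version control or a database, not be generated inline like this.
specs_json = '{"vout_min": 4.75, "vout_max": 5.25, "ripple_max_mv": 50}'

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write(specs_json)
    spec_path = f.name

def load_specs(path):
    # The test software reads limits at startup instead of baking them in.
    with open(path) as fh:
        return json.load(fh)

def vout_passes(measured, specs):
    return specs["vout_min"] <= measured <= specs["vout_max"]

specs = load_specs(spec_path)
print(vout_passes(5.10, specs))  # True
print(vout_passes(5.40, specs))  # False
```

When the spec tightens to 5.0 V ± 2%, you edit the file and rerun - no rebuild, no redeploy.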
Log what happens.
Make sure the software logs what happens - you cannot always rely on the technician running the test to write down what happened, when, and why.
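A minimal sketch of the idea, using Python's standard logging module (the station name, serial number, and messages are invented for illustration; a real station would point the handler at a file or database rather than an in-memory buffer):

```python
import io
import logging

# In-memory buffer for this demo; swap in logging.FileHandler on a
# real test station so every run leaves a timestamped record.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
log = logging.getLogger("test_station")
log.setLevel(logging.INFO)
log.addHandler(handler)

log.info("DUT SN-0042 loaded")                     # hypothetical serial
log.info("Vout = 5.102 V (limits 4.75-5.25 V)")
log.warning("camera capture timed out, retry 1")

record = buf.getvalue()
print(record)
```

Six months later, when someone asks why a batch was retested, the log answers instead of anyone's memory.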
Be aware of the environment.
Your test equipment or the DUT itself may change as the temperature, humidity or pressure change. You have to take this into account.
Know your accuracy.
You need to know the specs of the test equipment in your system. To be more specific, you need to know the specs for how you use the equipment, because vendors will sometimes list different specs for different conditions.
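Many instrument datasheets quote accuracy as ±(% of reading + % of range), which means the usable uncertainty depends on which range you're on, not just what you measured. A quick sketch of the arithmetic (the meter and its spec numbers below are made up, not any particular vendor's):

```python
def dmm_uncertainty(reading, range_fs, pct_of_reading, pct_of_range):
    # Worst-case uncertainty for a spec quoted as
    # +/-(% of reading + % of range). range_fs is the range full scale.
    return reading * pct_of_reading / 100 + range_fs * pct_of_range / 100

# Hypothetical example: 5 V measured on the 10 V range of a meter
# specified at +/-(0.0035% of reading + 0.0005% of range).
u = dmm_uncertainty(5.0, 10.0, 0.0035, 0.0005)
print(f"+/-{u * 1e6:.0f} uV")  # +/-225 uV
```

Note that the same reading taken on the 100 V range would carry ten times the range term - exactly the "different specs for different conditions" trap.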
I like what I've read about ATML so far, and I've already introduced it to the other people in my group. The plan is to implement it in work going forward. We currently use a mixture of home-grown XML and a small number of plain text files. Moving to ATML is a logical progression.
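To show what the progression from home-grown XML looks like, here is a toy result document in the spirit of ATML's test-results idea. To be clear, the element and attribute names below are my own invention for illustration - the real IEEE 1671 schemas define their own namespaces and structure, which you'd generate from the published XSDs rather than by hand:

```python
import xml.etree.ElementTree as ET

# Illustrative only: a simplified, NOT schema-conformant result document.
results = ET.Element("TestResults", uut="SN-0042")
group = ET.SubElement(results, "TestGroup", name="PowerSupply")
test = ET.SubElement(group, "Test", name="Vout", outcome="Passed")
ET.SubElement(test, "Value", units="V").text = "5.102"
ET.SubElement(test, "Limits", low="4.75", high="5.25")

xml_text = ET.tostring(results, encoding="unicode")
print(xml_text)
```

The win over a home-grown format isn't the XML itself - we already have that - it's that a shared schema means other tools (and other companies' testers) can read your results without a custom parser.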
The thing that bothers me is why I didn't hear about this before. It has been in the works since at least 2005, there was an article about it in Evaluation Engineering, and even National Instruments wrote about it to promote their TestStand software. Perhaps I was so buried in the details of my current company over the past two years that I never lifted my head to see what else was happening.
That is something that engineers, especially test engineers, need to do. New testing tools keep coming out, better versions of existing instruments, new ways to test - we need to stay abreast of these things.