Thursday, November 29, 2007
Linux on test systems, pt 3
The third paper in a new series of Agilent white papers on using Linux in test systems has just been released: "Using Linux to Control LXI Instruments Through TCP." As has become custom, here is my review.
The previous paper in this series discussed using Linux to control LXI instruments via VXI-11. While that paper left me with the impression that VXI-11 was the best way to control instruments, the new paper says that a direct TCP socket connection is better when measurement times are short.
The author gives a very brief overview of the seven-layer OSI networking model and then dives right into the gritty details (including a quick discussion of Nagle's algorithm). The paper provides several extensive code examples. The examples are in C, but they could be ported easily to LabWindows, or you could wrap them up as a separate object to use in LabVIEW.
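To make the idea concrete, here is a minimal sketch (mine, not taken from the paper) of what direct socket control of an LXI instrument looks like under Linux: open a TCP connection, turn off Nagle's algorithm so short SCPI commands go out immediately, send a query, and read the reply. The IP address is a placeholder, and port 5025 is just the common convention for SCPI-over-socket; check your instrument's documentation for the actual values.

/* Minimal sketch: direct TCP (socket) control of an LXI instrument.
 * The address 192.168.1.50 and port 5025 are placeholders; error
 * handling is abbreviated for clarity. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    /* Disable Nagle's algorithm so short SCPI commands are sent
     * right away instead of being coalesced into larger packets. */
    int flag = 1;
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5025);                        /* typical SCPI socket port */
    inet_pton(AF_INET, "192.168.1.50", &addr.sin_addr); /* placeholder instrument IP */

    if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* Send an identification query and print the response. */
    const char *cmd = "*IDN?\n";
    write(sock, cmd, strlen(cmd));

    char buf[256];
    ssize_t n = read(sock, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("Instrument ID: %s", buf);
    }

    close(sock);
    return 0;
}

The TCP_NODELAY call is the key detail the paper's Nagle discussion is getting at: without it, many small command/response exchanges can stall waiting for the stack to batch data.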
I liked this paper better than the last one. To use a Thanksgiving metaphor, there was less marketing feathering and more engineering meat on the bones. I would recommend it to anyone looking at using Linux with LXI instruments, and now I'm feeling more optimistic about the remaining papers.
----------------------
The remaining papers in the series are "Using Linux to Control USB Instruments" and "Using Linux in Soft Real-Time Applications". I'll be reviewing those as they are released. Since these papers have been released just about once per month, I expect to see the next one sometime around the end of the year.
Monday, November 19, 2007
Outsourcing a test station, part 1
Over the past few months one of my major projects has been building a test system that we are shipping to a contract manufacturer in Asia. The station is similar to our in-house systems, but it required different considerations because it will operate independently of them. It has been a lot of work to this point, but it is finally nearing completion.
This is not the first time I have built a station that was shipped to a contract manufacturer - when I worked for DuPont several years ago, our contract manufacturer had the equipment in house to build our products but didn't have the equipment or software to test them. I think this underlines something unique about test engineering: it is oftentimes easier to build something than it is to test it. When you are testing something you are verifying that what you built meets certain requirements. You must have confidence in the data, so extra care goes into the measurements. I think THAT is why test systems are built by the contracting firm and then shipped out - often the test system is specialized to suit your product, and you have to trust the data.
If the schedule holds, the station will ship out sometime next month, and I will fly out to help set it up and verify it after the new year. I will write more about this experience as the project progresses.
I do NOT expect to post more to the blog the rest of this week. Thanksgiving is coming up, and I have plans.
Tuesday, November 13, 2007
Agilent vs. Keithley
While spending time reviewing the testing handbooks by Agilent and Keithley, I started thinking more about the differences between the two companies. Here's a quick synopsis of publicly available information:
Keithley
Employees: 650
Founded: 1946
Operating Income: ~$10 million
Net Income: ~$8.4 million
Agilent
Employees: 19,390
Founded: 1999 (split from HP, founded in 1939)
Operating Income: ~$465 million
Net Income: ~$3.31 billion
Now of course I realize that Agilent does more than just make test & measurement hardware - for example, they also have an investment group. I also realize that Agilent makes instruments for a lot more applications than Keithley.
I've bought & used instruments from both companies. I think both companies make good products, have good tech support, and do a good job of knowing their customers. But still, I find it very interesting that Keithley, such a small company by comparison, holds its own so well against a huge conglomerate like Agilent. I guess there's something to be said for being small and focused.
Wednesday, November 7, 2007
Off-the-shelf Test Systems
In mid-October someone asked in a comment about using commercial off-the-shelf (COTS) test systems vs. building your own systems. At the time I replied with this:
"That's a broad subject that deserves some thinking - maybe I'll post on it in a week or so. But in general, I'd rather not reinvent the wheel if I can help it. I've used several 'canned' systems, especially motion control and generic test instruments, in the past. But often the things I've had to test were unique enough that I had to build my own system, or at least use the off-the-shelf solutions as a sub-assembly for my final test station."I have time to post on it now, so here are three different experiences I've had with such systems.
Building a COTS system
When I worked at HP/Agilent, I helped create the Passive Component Test system for a new Optical Spectrum Analyzer (which is obsolete by now). It was a built-in software app that used the OSA to test common parameters for optical components. To configure the tests, you needed a script, so I wrote an Excel-based script creator in VBA that made it easier for the user to configure the test setup. It worked very well, and I know of at least four different companies that used it (I talked with them at a trade show a year later).
Full system
Several years ago I started work with a company that had just purchased a test system from Palomar Technologies. This system handled the optical fiber alignment, test setup, and specific manufacturing steps after testing. This system had a "pseudo-basic" script language for customizing tests. For further customization I wrote a LabVIEW front end that controlled aspects of the testing.
Multiple Vendors→One System
One of my current test systems is a conglomeration from three different sources. The main system (motion control, vision recognition, basic data handling) is from an established vendor of such systems. Second, the front-end software (controlling the test infrastructure) was written by an engineering company, based on their standard product but customized for our use. Third, I have written quite a bit of code to further customize the front end of the software.
Summary
Any "off-the-shelf" system I have used, or helped build, has required customization. What your company makes, how it uses the data, how it grades those devices - all of those features are unique. Furthermore, unless you are testing final product the test system needs to be integrated within other manufacturing steps. That leads to further modifications.
The only exception I can think of is if the test system vendor sells you a test system that they have also sold to a direct competitor that makes the same product. That is a completely different issue.
Saturday, November 3, 2007
CMMI for testing
There was an article in the September issue of Evaluation Engineering about CMMI ("Capability Maturity Model Integration"). I flagged it for future reading and just had a chance to finish it today.
I flagged this article because I have experience with the CMMI. The division I worked in at HP/Agilent years ago was assessed at CMM level 2, and I worked on a couple of projects aimed at moving the department to level 3. I called it "CMM" instead of "CMMI" because back then the older nomenclature was in use. Working in a project group that adhered to those standards was very enjoyable and a great learning experience (we used Rational Rose for the heavy lifting, before it was bought by IBM). Testing, and specifically software testing, has a very specific role to fill within such models, and its significance should not be underestimated.
In general the article is a cogent overview of the CMMI and how it is applied. It also makes a good point that test engineers involved in creating software - especially for more complicated projects involving multiple people - should learn how to apply the model and use tools associated with it. Many test engineers for hardware testing do NOT have a software background, and don't necessarily have exposure to best practices for programming. But believe me, the CMMI is worth using.
Of course, the author is from NI so I expected some marketing and was not disappointed. The author discussed how NI Requirements Gateway can be used to implement the CMMI, and he also referenced NI programs like LabVIEW and TestStand extensively. But this didn't really bother me - he works for NI and that's his job. Evaluation Engineering has free access, so I expect a modest amount of bias.
No, what really bugged me is that right at the beginning of the article he called the CMMI "Component maturity model integration" instead of "Capability maturity model integration." If you're going to write about something, please get the acronym right. In the engineering world there are way too many acronyms and abbreviations, and doing something like this confuses the issue further.
Thursday, November 1, 2007
Vendor books about testing - marketing
Yesterday I posted my review of an Agilent guide to test systems. Eric, who works at National Instruments and runs The Automated Test Blog, added a comment about a test systems book that NI has here. So, I downloaded it and skimmed it quickly. I'll probably review that one as well for completeness' sake (thanks for the heads-up, Eric).
Of course, originally I wanted to compare the books from Agilent and Keithley to see if they reflected a difference between the two companies themselves: Agilent is much more of a marketing behemoth than it was as HP many years ago. To be honest, I have a bias. I worked in the Test & Measurement group at HP for a few years before and after the switch to Agilent, and I saw firsthand the large amount of resources that went into marketing. But that is a post for another day.
I must tread lightly with this sort of thing. I've had a few marketing/salespeople contact me about products they make. Maybe they want to sell me their products, get free advertising on my blog, or just honestly offer information. It could be a blend of those reasons. But I'm an end user of test equipment nowadays, and no one pays me to do this blog. From an ethical point of view I should treat all requests equally. That is, only talk about things I experience, not show unwarranted bias towards one vendor or another, and not lambast someone or something without reason.
Or at least I'll try.