I started writing about test executive software last month, and a couple of weeks ago I wrote about off-the-shelf software. Now I want to write about my experiences with "rolling your own" test executive.
I've worked with homegrown test executives at two different companies. At the first company, the test executive evolved out of a couple of programs for testing different features of the same product. At the second company, I worked with a test executive that had been written several years before I started. I'll address each one in turn.
The executive I wrote myself was somewhat rudimentary. For the products we were manufacturing, there were six distinct tests you could perform, and each test had anywhere from about 3 to 20 adjustable numerical parameters. This just screamed out for scripting, so that's what I did. The end result had the configurability I needed, and it was used regularly by test technicians, but it was missing several features of an off-the-shelf test executive (a rough sketch of the script format follows the list):
- It was only usable within the LabVIEW framework I had written. For example, it couldn't interface with modules written in C++ without a lot of extra coding.
- There was no programmatic logic in the scripting. A script was just a list of which tests to perform and with what settings: no if-then branching or looping allowed.
- Some reporting tools existed, but mostly in simple formats (saving CSV files or bitmaps of graphs).
- It was written with a specific product in mind, and had to be overhauled to test any other type of product.
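To give a sense of what that scripting layer looked like, here's a rough sketch in Python. This is illustrative only: the real executive was written in LabVIEW, and the test names and parameters below are made up. The key point is that a script was nothing more than an ordered list of tests and their numerical settings, run straight through.

```python
# Hypothetical sketch of the declarative script format: an ordered list of
# tests with numerical parameters, executed in sequence with no branching
# or looping. (The real executive was LabVIEW; these names are invented.)
script = [
    {"test": "frequency_response", "params": {"start_hz": 20, "stop_hz": 20000, "points": 100}},
    {"test": "output_power", "params": {"level_dbm": 10, "tolerance_db": 0.5}},
]

def run_script(script, test_functions):
    """Run each step in order; each test function returns a dict of results."""
    results = []
    for step in script:
        test_fn = test_functions[step["test"]]      # look up the measurement routine
        results.append(test_fn(**step["params"]))   # run it with the script's parameters
    return results
```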
The other test executive was written in .NET and made use of Measurement Studio for certain graphical presentations. The scripting tools had quite a bit of programmatic control, and the reporting tools were more extensive. But it had a different set of problems:
- Because it had been designed as a reconfigurable tool, it was horribly complicated to use. We didn't let technicians near it without a lot of training.
- It was only usable within the .NET framework and didn't play well with other modules.
- It, too, was written with a specific product in mind, and had to be overhauled to test any other type of product.
That's a brief synopsis of the two in-house test executives. So what's the point I'm trying to make? I'm not really saying that you should run out and buy a copy of TestStand. It's certainly a nice piece of software, but it's expensive and may be overkill for what you need. The lesson I've learned is that if you get to the point where you want to develop something of your own to reuse over and over, learn from how the off-the-shelf software does it. Specifically:
- Don't make it overly complicated
- Make it generic enough to use for different tasks
- Use programmable logic in the scripting (see the sketch after this list)
- Have plenty of reporting tools
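Putting those points together, a reusable script runner might look something like the sketch below. This is a hypothetical Python example, not taken from any real product or from either of the executives above: test routines are registered generically so the core isn't tied to one product, scripts get simple retry and abort logic, and every step feeds a common CSV report.

```python
import csv

# Hypothetical sketch: a generic registry of test routines, per-step
# retry/abort logic, and a shared CSV report. Names are illustrative only.
TESTS = {}

def register(name):
    """Decorator that adds a measurement routine to the registry,
    keeping the executive independent of any one product's tests."""
    def wrap(fn):
        TESTS[name] = fn
        return fn
    return wrap

def run(script, report_path="results.csv"):
    """Run a script with per-step retries and a critical-failure abort,
    writing every result to a CSV report."""
    with open(report_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["test", "passed", "value"])
        for step in script:
            attempts = 1 + step.get("retries", 0)
            for _ in range(attempts):
                result = TESTS[step["test"]](**step.get("params", {}))
                if result["passed"]:
                    break
            writer.writerow([step["test"], result["passed"], result["value"]])
            if not result["passed"] and step.get("critical", False):
                break  # abort the rest of the script on a critical failure

@register("output_power")
def output_power(level_dbm, tolerance_db):
    measured = 9.8  # stand-in for an actual instrument reading
    return {"passed": abs(measured - level_dbm) <= tolerance_db, "value": measured}
```

The registry is the piece that addresses the "generic enough for different tasks" point: adding a new product means registering its test routines, not overhauling the executive itself.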