As I have written before, for about two years in the late 90s I worked for Hewlett-Packard/Agilent in software testing. Granted, my group tested firmware that was being installed into complicated test systems, rather than huge web apps or desktop programs. But it was software testing.
HP was thorough. When I started with the test group, it was just finding its footing. So management sent us to in-house training classes and paid for us to attend several training seminars. I even went to three different software test conferences.
Since I left Agilent, all my jobs have been testing hardware. But I still like to pay attention now and then to software testing issues. That's why the Jolt Awards for Testing Tools popped up on my radar. More interesting still was a commentary on those awards asking why there hasn't been a big leap in the kinds of testing tools developers have. Andrew Binstock, the author, makes the point that the types of tools being awarded (code inspection, unit testing, UI testing, browser testing, load and performance testing) are the same types of tools that were being used a decade ago. Indeed, they are the same kinds of tools I used back in 1998 and '99.
This question reminded me of something I read last week. I've been working my way through the book Physics of the Future by Michio Kaku. It's an interesting read, if a little overhyped at times. One of the topics covered in the book is the related subject of AI and the Singularity. Dr. Kaku makes a point in this section that, while computer processor speed and memory keep improving by leaps and bounds, the software is still being written by people. Visionaries who predict the coming robotic revolution, or the end of history as we know it, miss the point that the code still requires human creativity to develop.
I think that's what is happening with software test tools. The hardware keeps improving, but the software improvements simply don't follow Moore's Law in the same way.