I don’t normally talk about my work, but if I generalise this I reckon I can make a half-decent point about modern software practices.
I work with a set of tools provided by a best-in-class software house. These tools aren’t cheap, and we expect them to be good.
But we’ve been having quality problems lately, which has had me contacting the supplier for support.
I’ll spare you the details, but basically: if I ran an upgrade of this software as the wrong user, it brought the whole system down. Testing for this condition with either:
- if user = “” then…
- if response from task is “insufficient privileges”
seems like a simple and obvious thing to want to do.
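To make that concrete, here is a minimal sketch of the kind of guard clause I mean. The function name, the required account name, and the error message are all my own assumptions; I don't know the tool's real API:

```python
def check_upgrade_user(user: str, required: str = "admin") -> None:
    """Refuse to run the upgrade as the wrong (or an empty) user.

    Hypothetical guard: 'required' is an assumed account name, not the
    vendor's actual one.
    """
    if not user:
        raise PermissionError("insufficient privileges: no user supplied")
    if user != required:
        raise PermissionError(
            f"insufficient privileges: expected '{required}', got '{user}'"
        )
```

A few lines like this at the top of the upgrade script, and the wrong-user case fails fast with a clear message instead of taking the system down.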
Yet here’s the response I got:
“Generally all testing are done with the same single administrator user.”
Hmmm… OK. Now, I did a computing A-level, and in it we studied software testing. It was basic practice that you tested with:
- valid data
- invalid data
- data from the boundary of validity
So, for a percentage value which only has valid values between 0 and 100 you would test with, say:
- 20, 54 and 78
- -13, 1000
- -1, 0, 1, 99, 100, 101
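The three classes above translate directly into a handful of assertions. This is a sketch against a hypothetical validator (`is_valid_percentage` is my own name, not from any real library), using the exact values from the lists:

```python
def is_valid_percentage(value: float) -> bool:
    """Hypothetical validator: accepts only values in the range [0, 100]."""
    return 0 <= value <= 100

# Valid data: should all be accepted
assert all(is_valid_percentage(v) for v in [20, 54, 78])

# Invalid data: should all be rejected
assert not any(is_valid_percentage(v) for v in [-13, 1000])

# Boundary data: just outside fails, the edges and just inside pass
assert [is_valid_percentage(v) for v in [-1, 0, 1, 99, 100, 101]] == [
    False, True, True, True, True, False,
]
```

Eleven values instead of one, and the boundary cases are exactly where off-by-one bugs (a `<` where a `<=` was meant) get caught.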
You didn’t say “I’m going to test with the value 50 every time”, because that would rarely tell you anything useful about whether or not your software functions.
I’m assured that the quality assurance processes for this software are excellent. However, missing a simple case like this makes me wonder not just what other simple errors go untested in my expensive, best-in-class software, but what has happened to software development processes that this is allowed to happen.
Back to school, folks…