That made configuration testing less important, since there were fewer configurations to test for. Under these conditions, the type of software testing that platforms like Sauce Labs deliver today was done as part of the broader debugging process. With small teams of programmers, relatively few environment variables for a given software program, and little pressure to release code on a frequent basis, an ad hoc approach to software testing worked well enough.
For the first time, at least in the consumer market, programmers could write for a single hardware platform. PCs were not identical, of course. But programmers faced increasing pressure to release software that worked well on any type of computer advertised as PC-compatible. Another change was increasing demand for more frequent software releases.
Another was the growing importance of the Internet, which provided a much faster way to distribute new versions of programs. And then there was the advent of open source, heralded by projects like Linux. These changes raised the stakes for software testing. Releasing software that worked on any PC required careful configuration testing of the many possible environment variables. At the same time, the fact that users had come to expect more frequent releases meant that programming teams had to optimize their testing processes so they could deliver faster.
And while the Linux crowd showed that it was possible to develop complex software by releasing code to the public and asking users to help find defects, the companies that started trying to sell Linux in the early 1990s quickly learned that better configuration testing and other quality assurance were needed to make open source commercially viable.
The pressures described above are what ushered in tools like Selenium. But today, developers face a new set of needs, and those needs require even more sophisticated innovations. For instance, take Continuous Delivery, which puts enormous pressure on programmers to test and update code on an ongoing basis.
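To make the connection to Selenium concrete, here is a minimal sketch of the kind of cross-browser smoke test it automates. This is illustrative only: the URL, browser list, and helper names are assumptions, the sketch assumes the Selenium Python bindings with Chrome and Firefox installed locally, and a real Continuous Delivery pipeline would typically run the same checks against a remote grid or a hosted service rather than local browsers.

```python
# Illustrative cross-browser smoke test using Selenium WebDriver (Python bindings).
# Assumes Chrome and Firefox are installed locally; Selenium 4 resolves the
# matching drivers automatically. All names and the target URL are examples.
from selenium import webdriver

URL = "https://example.com"  # hypothetical page under test


def browsers():
    """Yield (name, driver) pairs, one per browser configuration under test."""
    yield "chrome", webdriver.Chrome()
    yield "firefox", webdriver.Firefox()


def smoke_test(driver):
    """Load the page and confirm the browser rendered a title at all."""
    driver.get(URL)
    return bool(driver.title)


if __name__ == "__main__":
    for name, driver in browsers():
        try:
            print(f"{name}: {'PASS' if smoke_test(driver) else 'FAIL'}")
        except Exception as exc:  # keep going if one configuration breaks
            print(f"{name}: ERROR ({exc})")
        finally:
            driver.quit()
```

The point of the loop is the one made above: each additional browser, operating system, or version is another configuration to verify, which is why automating the matrix became necessary once ad hoc testing stopped scaling.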
Great ones know what to rewrite and reuse. Linus Torvalds didn't actually try to write Linux from scratch. Instead, he started by reusing code and ideas from Minix, a tiny Unix-like operating system for PC clones. The second time around, maybe you know enough to do it right. So if you want to get it right, be ready to start over at least once. If you have the right attitude, interesting problems will find you.
ER says that in a software culture that encourages code-sharing, an interesting problem finding the right person is a natural way for a project to evolve. When you lose interest in a program, your last duty to it is to hand it off to a competent successor. This is what the original author of the 'popclient' program did: he lost interest in it and handed complete responsibility over to ER. Treating your users as co-developers is your least-hassle route to rapid code improvement and effective debugging.
ER thinks that Linus' cleverest and most consequential hack was not the construction of the Linux kernel itself, but rather his invention of the Linux development model.
Release early. Release often (RERO). And listen to your customers. The rationale for this is ER's observation, quoted below: given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix will be obvious to someone.
In the cathedral-builder view of programming, bugs and development problems are tricky, insidious, deep phenomena. It takes months of scrutiny by a dedicated few to develop confidence that you've winkled them all out.
Thus the long release intervals, and the inevitable disappointment when long-awaited releases are not perfect. In the bazaar view, by contrast, bugs are assumed to be shallow, or at least to turn shallow quickly when exposed to many eager co-developers. Accordingly you release often in order to get more corrections, and as a beneficial side effect you have less to lose if an occasional botch gets out the door. Surprisingly, this cost is strongly affected by the number of users.

You can, in fact you must, plan every aspect of a cathedral years in advance and in fine detail.
No such thing with software projects. Requirements are always in a state of flux as user needs, market realities and company goals are prone to change. Worse, the requirements are often based on what we assume users will find valuable, but invariably users are of a different mind. The classical cathedral-like approach of creating long and convoluted specs that lead to long and convoluted projects culminating in complex, feature-rich products is therefore self-defeating.
The uncomfortable truth is this: unanticipated needs almost always arise once a system is in operation. So the approach suggested is to treat your product as a living organism, an experiment: start it simple and relatively bare, and build into it the ability to grow and adapt quickly.