Because of its potentially instant worldwide audience, a WebSite's quality and reliability are crucial. The special nature of Web applications and WebSites poses unique software testing challenges. Webmasters, Web application developers, and WebSite quality assurance managers need tools and methods that measure up to these new needs. Mechanized testing via special-purpose Web testing software offers the potential to meet these challenges.
WebSites are something entirely new in the world of software quality! Within minutes of going live, a Web application can have many thousands more users than a conventional, non-Web application. The immediacy of the Web creates an expectation of quality and rapid application delivery, but the technical complexities of a WebSite and variations among browsers make testing and quality control more difficult, and in some ways more subtle. Automated testing of WebSites is both an opportunity and a challenge.
A WebSite is like any other piece of software: no single, all-inclusive quality measure applies, and even a combination of metrics may not capture everything that matters. Yet verifying the user-critical impressions of "quality" and "reliability" takes on new importance.
Dimensions of Quality. There are many dimensions of quality, and each measure pertains to a particular WebSite in varying degrees.
Impact of Quality. Quality is in the mind of the user. A poor-quality WebSite, one with many broken pages and faulty images, with CGI-Bin error messages, etc., may cost a company dearly in poor customer relations, lost corporate image, and even lost revenue. Very complex WebSites can sometimes overload the user.
The combination of WebSite complexity and low quality is potentially lethal to an E-commerce operation. Unhappy users will quickly depart for a different site! And they won't leave with any good impressions.
A WebSite can be complex, and that complexity -- which is what provides the power, of course -- can be an impediment in assuring WebSite Quality. Add in the possibilities of multiple page authors and very rapid updates and changes, and the problem compounds.
Here are the major parts of WebSites as seen from a Quality perspective.
Browser. The browser is the viewer of a WebSite and there are so many different browsers and browser options that a well-done WebSite is probably designed to look good on as many browsers as possible. This imposes a kind of de facto standard: the WebSite must use only those constructs that work with the majority of browsers. But this still leaves room for a lot of creativity, and a range of technical difficulties.
Display Technologies. What you see in your browser is actually composed from many sources: HTML markup, client-side scripts and applets, server-side CGI-Bin scripts, and, in many applications, database access behind the scenes.
Some access to information from the database may be appropriate, depending on the application, but that interaction is typically checked by other means.
Navigation. Users move to and from pages, click on links, click on images (thumbnails), etc. Navigation in a WebSite is often complex and has to be quick and error-free.
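As a sketch of what automated navigation checking might look like, the Python fragment below (standard library only; the starting URL is a placeholder) collects every link on one page and reports any that fail to respond. A real tool would also follow redirects and crawl beyond a single page.

    # Sketch: collect the links on one page and flag any that do not respond.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen
    from urllib.error import URLError, HTTPError

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)

    def check_links(page_url):
        html = urlopen(page_url).read().decode("utf-8", errors="replace")
        collector = LinkCollector()
        collector.feed(html)
        for href in collector.links:
            target = urljoin(page_url, href)
            try:
                with urlopen(target, timeout=10) as resp:
                    status = resp.status
            except (HTTPError, URLError) as exc:
                status = getattr(exc, "code", str(exc))
            print(f"{status}\t{target}")

    check_links("http://www.example.com/")  # placeholder URL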
Object Mode. The display you see changes dynamically; the only constants are the "objects" that make up the display. These aren't real objects in the OO sense, but they have to be treated that way. So the quality test tools have to be able to handle URL links, forms, tables, anchors, and buttons of all types in an "object-like" manner, so that validations are independent of representation.
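One way to picture "object mode" is to locate page elements by their names or visible text rather than by their position on the screen. The sketch below builds a small index of named form fields and link texts, so a check keeps working even if the layout changes; the page content and element names are illustrative only.

    # Sketch of object-mode addressing: find elements by name or text, not position.
    from html.parser import HTMLParser

    class ObjectIndex(HTMLParser):
        """Builds a simple index of named form fields and link texts."""
        def __init__(self):
            super().__init__()
            self.fields = {}          # input name -> attributes
            self.links = {}           # link text -> href
            self._href = None
            self._text = []
        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "input" and "name" in attrs:
                self.fields[attrs["name"]] = attrs
            elif tag == "a":
                self._href = attrs.get("href")
                self._text = []
        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)
        def handle_endtag(self, tag):
            if tag == "a" and self._href is not None:
                self.links["".join(self._text).strip()] = self._href
                self._href = None

    page = '<form><input name="qty" value="1"></form> <p><a href="/cart">View cart</a></p>'
    index = ObjectIndex()
    index.feed(page)

    # Validations are independent of where the objects appear on the page.
    assert "qty" in index.fields, "form field 'qty' is missing -- flag as an error"
    assert "View cart" in index.links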
Server Response. How fast the WebSite host responds influences whether a user (i.e., the person at the browser) moves on or stays. Obviously, Internet loading affects this too, but this factor is often outside the Webmaster's control, at least in terms of how the WebSite is written; instead, it is more an issue of server hardware capacity and throughput. Yet if a WebSite becomes very popular -- this can happen overnight! -- loading and tuning are real issues that are often imposed -- perhaps not fairly -- on the Webmaster.
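A minimal response-time probe, assuming only the standard library and a placeholder URL, might look like the sketch below; a real measurement would repeat the request many more times and from representative network locations.

    # Sketch: time how long the server takes to deliver a page.
    import time
    from urllib.request import urlopen

    def time_page(url, samples=5):
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            with urlopen(url, timeout=30) as resp:
                resp.read()                      # include transfer time, not just first byte
            timings.append(time.perf_counter() - start)
        return min(timings), sum(timings) / len(timings), max(timings)

    best, mean, worst = time_page("http://www.example.com/")   # placeholder URL
    print(f"best {best:.3f}s  mean {mean:.3f}s  worst {worst:.3f}s")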
Interaction & Feedback. For passive, content-only sites the only issue is availability, but for a WebSite that interacts with the user, how fast and how reliable that interaction is can be a big factor.
Concurrent Users. Do multiple users interact on a WebSite? Can they get in each other's way? While WebSites often resemble conventional client/server software structures, with multiple users at multiple locations a WebSite can be much different from, and much more complex than, a conventional application.
Assuring WebSite quality requires conducting sets of tests, automatically and repeatably, that demonstrate required properties and behaviors. Here are some required elements of tools that aim to do this.
Test Sessions. Typical test sessions share several common characteristics; the most important is operation in object mode.
Object mode operation is essential to protect an investment in tests and to assure tests' continued operation when WebSite pages change. When buttons and form entries change location -- as they often do -- the tests should still work.
When a button or other object is deleted, that error should be sensed! Adding objects to a page clearly implies re-making the test.
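One hedged way to sense such changes is to keep a recorded inventory of the objects a test expects and compare it against what the page currently offers; the object names below are illustrative.

    # Sketch: compare the objects a test expects against the objects now on the page.
    expected = {"qty", "add_to_cart", "checkout"}       # recorded when the test was made
    current  = {"qty", "add_to_cart", "coupon_code"}    # extracted from the live page

    deleted = expected - current
    added   = current - expected

    if deleted:
        print("ERROR: objects deleted, test is broken:", sorted(deleted))
    if added:
        print("NOTE: new objects present, test should be re-made:", sorted(added))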
Test Context. Tests need to operate from the browser level for two reasons: (1) this is where users see a WebSite, so tests based in browser operation are the most realistic; and (2) tests based in browsers can be run locally or across the Web equally well. Local execution is fine for quality control, but not for performance measurement work, where response times that include the Web-variable delays of real-world usage are essential.
Confirming validity of what is tested is the key to assuring WebSite quality -- and is the most difficult challenge of all. Here are four key areas where test automation will have a significant impact.
Operational Testing. Individual test steps may involve a variety of checks on individual pages in the WebSite, from confirming that a page displays without errors to verifying that its links, forms, and other objects behave as expected.
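As an illustration of per-page operational checks, the sketch below verifies that a page answers with a success status, carries an HTML content type, and contains an expected piece of text; the URL and title are placeholders.

    # Sketch: a few basic operational checks against a single page.
    from urllib.request import urlopen

    def check_page(url, expected_title):
        with urlopen(url, timeout=30) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            assert resp.status == 200, f"unexpected status {resp.status}"
            ctype = resp.headers.get("Content-Type", "")
            assert ctype.startswith("text/html"), f"unexpected content type {ctype}"
        assert expected_title in body, "expected title text not found"
        return True

    check_page("http://www.example.com/", "Example Domain")   # placeholder values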
Test Suites. Typically you may have dozens or hundreds (or thousands?) of tests, and you may wish to run them in a variety of modes: singly, in batches, unattended, or on a repeating schedule.
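A test suite at its simplest is a list of such checks run together. The toy runner below, with purely illustrative names, executes them in sequence and can repeat unattended at an interval, which is one way to cover several of the modes mentioned above.

    # Sketch: run a list of test callables, once or repeatedly, and tally results.
    import time
    import traceback

    def run_suite(tests, repeat=1, pause_seconds=0):
        for cycle in range(repeat):
            passed = failed = 0
            for test in tests:
                try:
                    test()
                    passed += 1
                except Exception:
                    failed += 1
                    traceback.print_exc()
            print(f"cycle {cycle + 1}: {passed} passed, {failed} failed")
            if pause_seconds and cycle + 1 < repeat:
                time.sleep(pause_seconds)

    # Example usage with trivial stand-in tests:
    run_suite([lambda: None, lambda: None], repeat=1)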
Content Validation. Apart from how a WebSite responds dynamically, the content itself should be checkable, either exactly or approximately, for example by comparing a page against a saved reference copy or by confirming that selected fragments are still present.
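Exact versus approximate content checks can be pictured as, on one hand, comparing a cryptographic hash of the whole page against a stored reference and, on the other, confirming that selected fragments still appear; both are sketched below with placeholder values.

    # Sketch: exact check (hash of the whole page) vs. approximate check (key fragments).
    import hashlib
    from urllib.request import urlopen

    def fetch(url):
        with urlopen(url, timeout=30) as resp:
            return resp.read()

    def exact_match(url, reference_sha256):
        return hashlib.sha256(fetch(url)).hexdigest() == reference_sha256

    def approximate_match(url, required_fragments):
        body = fetch(url).decode("utf-8", errors="replace")
        return all(fragment in body for fragment in required_fragments)

    # Placeholder URL and fragments:
    print(approximate_match("http://www.example.com/", ["Example Domain"]))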
Load Simulation. Load analysis needs to proceed by having a special-purpose browser act like a human user. This assures that the performance-checking experiment indicates true performance -- not performance under simulated but unrealistic conditions.
Sessions should be recorded live, or edited from live recordings, to assure faithful timing. There should be adjustable speed-up and slow-down ratios and intervals.
Load generation should proceed from these recorded sessions, played back singly, in combination, or as multiple expanded copies.
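Under the assumption that a session has been recorded as a list of (delay, URL) pairs, the sketch below replays it with an adjustable speed ratio and runs several copies concurrently to generate load; the recorded session shown is placeholder data.

    # Sketch: replay a recorded session with a speed ratio, in several concurrent copies.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    # A recorded session: (seconds to wait before the request, URL). Placeholder data.
    session = [
        (0.0, "http://www.example.com/"),
        (2.0, "http://www.example.com/"),
    ]

    def replay(session, speed_ratio=1.0):
        """speed_ratio > 1 plays the session back faster than it was recorded."""
        timings = []
        for delay, url in session:
            time.sleep(delay / speed_ratio)
            start = time.perf_counter()
            with urlopen(url, timeout=30) as resp:
                resp.read()
            timings.append(time.perf_counter() - start)
        return timings

    def generate_load(session, users=10, speed_ratio=1.0):
        with ThreadPoolExecutor(max_workers=users) as pool:
            results = [pool.submit(replay, session, speed_ratio) for _ in range(users)]
            return [f.result() for f in results]

    for user_timings in generate_load(session, users=3, speed_ratio=2.0):
        print(["%.3f" % t for t in user_timings])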
All of these needs and requirements impose constraints on the test automation tools used to confirm the quality and reliability of a WebSite. At the same time they present a real opportunity to amplify human tester/analyst capabilities. Better, more reliable WebSites should be the result.