The instant worldwide audience of any Web browser enabled application -- or WebSite -- makes its quality and reliability crucial factors in its success. Correspondingly, the nature of WebSites and Web applications poses unique software testing challenges. Webmasters, Web application developers, and WebSite quality assurance managers need tools and methods that meet their specific needs. Mechanized testing via special-purpose Web testing software offers the potential to meet these challenges. Our technical approach, based on existing Web browsers, offers a clear solution to most of the technical needs for assuring WebSite quality.
WebSites impose some entirely new challenges in the world of software quality! Within minutes of going live, a Web application can have many thousands more users than a conventional, non-Web application. The immediacy of the Web creates immediate expectations of quality and rapid application delivery, but the technical complexities of a WebSite and variances among browsers make testing and quality control more difficult, and in some ways more subtle, than "conventional" client/server or application testing. Automated testing of WebSites is both an opportunity and a challenge.
As with any complex piece of software, there is no single, all-inclusive quality measure that fully characterizes a WebSite (by which we mean any Web browser enabled application).
Dimensions of Quality. There are many dimensions of quality; each measure will pertain to a particular WebSite in varying degrees. Here are some common measures:
Impact of Quality. Quality is in the mind of the WebSite user. A poor-quality WebSite -- one with many broken pages, faulty images, Cgi-Bin error messages, etc. -- may cost a lot in poor customer relations, lost corporate image, and even lost sales revenue. Very complex, disorganized WebSites can sometimes overload the user.
The combination of WebSite complexity and low quality is potentially lethal to a company's goals. Unhappy users will quickly depart for a different site, and they probably won't leave with a good impression.
A WebSite can be quite complex, and that complexity -- which is what provides the power, of course -- can be a real impediment in assuring WebSite Quality. Add in the possibilities of multiple WebSite page authors, very-rapid updates and changes, and the problem compounds.
Here are the major pieces of WebSites as seen from a Quality perspective.
Browser. The browser is the viewer of a WebSite and there are so many different browsers and browser options that a well-done WebSite is probably designed to look good on as many browsers as possible. This imposes a kind of de facto standard: the WebSite must use only those constructs that work with the majority of browsers. But this still leaves room for a lot of creativity, and a range of technical difficulties. And, multiple browsers' renderings and responses to a WebSite have to be checked.
Display Technologies. What you see in your browser is actually composed from many sources:
Some access to information from the database may be appropriate, depending on the application, but this is typically found by other means.
Navigation. Users move to and from pages, click on links, click on images (thumbnails), etc. Navigation in a WebSite is often complex and has to be quick and error free.
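Link navigation checks of this kind can be mechanized. The sketch below (plain Python standard library, not eValid's actual mechanism; the page content and the `fetch` callable are illustrative assumptions) extracts every href on a page and flags the ones whose targets do not resolve:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag found on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def broken_links(page_html, fetch):
    """Return the links on page_html whose targets are unreachable.

    `fetch` is any callable that returns an HTTP status code for a
    URL; in a live tool it would issue a real request, but keeping it
    pluggable lets the check run against canned responses too.
    """
    collector = LinkCollector()
    collector.feed(page_html)
    return [url for url in collector.links if fetch(url) >= 400]
```

A real navigation test would repeat this page by page, following each good link to reach the next page's link set.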
Object Mode. The display you see changes dynamically; the only constants are the "objects" that make up the display. These aren't real objects in the OO sense, but they have to be treated that way. So, the quality test tools have to be able to handle URL links, forms, tables, anchors, and buttons of all types in an "object-like" manner, so that validations are independent of representation.
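One way to picture this object-like handling is an index of a page's testable elements keyed by name rather than by screen position. The following is a minimal sketch in Python (an illustration of the idea, not eValid's implementation; the tag set and key-selection rules are assumptions):

```python
from html.parser import HTMLParser

class ObjectMap(HTMLParser):
    """Index a page's testable 'objects' (links, form fields,
    buttons) by their name/id/href, ignoring where they happen
    to be rendered on screen."""
    TAGS = {"a", "input", "select", "textarea", "button", "form"}

    def __init__(self):
        super().__init__()
        self.objects = {}

    def handle_starttag(self, tag, attrs):
        if tag in self.TAGS:
            a = dict(attrs)
            key = a.get("name") or a.get("id") or a.get("href")
            if key:
                self.objects[key] = tag

def has_object(page_html, key):
    """True if the page still contains the named object, wherever
    it is laid out."""
    m = ObjectMap()
    m.feed(page_html)
    return key in m.objects
```

Because lookup is by name, a validation written against this map keeps working when the page's layout changes.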
Server Response. How fast the WebSite host responds influences whether a user (i.e. someone on the browser) moves on or gives up. Obviously, InterNet loading affects this too, but this factor is often outside the Webmaster's control at least in terms of how the WebSite is written. Instead, it seems to be more an issue of server hardware capacity and throughput. Yet, if a WebSite becomes very popular -- this can happen overnight! -- loading and tuning are real issues that often are imposed -- perhaps not fairly -- on the WebMaster.
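Measuring server response comes down to timing the round trip as the user experiences it. A hedged sketch (Python; the `action` callable stands in for a real page fetch so the timing harness itself can be exercised without network access):

```python
import time

def timed(action):
    """Run a zero-argument action (e.g. a page fetch) and return
    (result, elapsed_seconds).

    time.perf_counter() is monotonic and high-resolution, which is
    what a response-time measurement needs; wall-clock time is not.
    """
    start = time.perf_counter()
    result = action()
    elapsed = time.perf_counter() - start
    return result, elapsed
```

Run across the live Web, the elapsed value naturally includes Internet loading as well as server processing time, which is exactly the user-visible quantity of interest.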
Interaction & Feedback. For passive, content-only sites the only real quality issue is availability. For a WebSite that interacts with the user, the big factor is how fast and how reliable that interaction is.
Concurrent Users. Do multiple users interact on a WebSite? Can they get in each other's way? While WebSites often resemble client/server structures, with multiple users at multiple locations a WebSite can be much different, and much more complex, than a conventional client/server application.
Assuring WebSite quality requires conducting sets of tests, automatically and repeatably, that demonstrate required properties and behaviors. Here are some required elements of tools that aim to do this.
Test Sessions. Typical elements of tests involve these characteristics:
Object mode operation is essential to protect an investment in test suites and to assure that test suites continue operating when WebSite pages experience change. In other words, when buttons and form entries change location on the screen -- as they often do -- the tests should still work.
However, when a button or other object is deleted, that error should be sensed! Adding objects to a page clearly implies re-making the test.
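The distinction drawn above -- moved objects should pass, deleted or added objects should be flagged -- amounts to comparing a recorded object inventory against the current page's. A small illustrative sketch (Python; the set-based representation is an assumption, not eValid's internal format):

```python
def object_regressions(baseline, current):
    """Compare a recorded collection of page-object names against
    the names found on the current page.

    Objects that merely moved on screen appear in both collections
    and are not reported. Deletions should make the test fail;
    additions imply the test needs re-making.
    """
    baseline, current = set(baseline), set(current)
    return {
        "deleted": sorted(baseline - current),  # sensed as an error
        "added": sorted(current - baseline),    # test needs re-making
    }
```

An empty report in both categories means the existing test suite can keep running unchanged.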
Test Context. Tests need to operate from the browser level for two reasons: (1) this is where users see a WebSite, so tests based in browser operation are the most realistic; and (2) tests based in browsers can be run locally or across the Web equally well. Local execution is fine for quality control, but performance measurement work has to run across the Web, so that the measured response time includes the Web-variable delays reflective of real-world usage.
Confirming validity of what is tested is the key to assuring WebSite quality -- the most difficult challenge of all. Here are four key areas where test automation will have a significant impact.
Operational Testing. Individual test steps may involve a variety of checks on individual pages in the WebSite:
Test Suites. Typically you may have dozens or hundreds (or thousands?) of tests, and you may wish to run tests in a variety of modes:
Content Validation. Apart from how a WebSite responds dynamically, the content should be checkable either exactly or approximately. Here are some ways that content validation could be accomplished:
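The exact-versus-approximate distinction can be illustrated with two simple checks (a Python sketch under assumed semantics, not the product's validation engine): an exact check compares the page against a recorded digest, while an approximate check only requires that certain key phrases survive:

```python
import hashlib

def exact_match(page_text, recorded_digest):
    """Exact validation: the page must be byte-identical to the
    recorded baseline, compared via a SHA-256 digest so the
    baseline itself need not be stored."""
    return hashlib.sha256(page_text.encode()).hexdigest() == recorded_digest

def approximate_match(page_text, required_phrases):
    """Approximate validation: the page merely has to contain a set
    of key phrases, tolerating cosmetic changes elsewhere."""
    return all(phrase in page_text for phrase in required_phrases)
```

Exact matching is brittle on pages with dynamic content (dates, ad banners), which is precisely where the approximate form earns its keep.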
Load Simulation. Load analysis needs to proceed by having a special-purpose browser act like a human user. This assures that the performance-checking experiment indicates true performance -- not performance under simulated but unrealistic conditions. There are many "http torture machines" that generate large numbers of http requests, but that is not necessarily the way real-world users generate requests.
Sessions should be recorded live or edited from live recordings to assure faithful timing. There should be adjustable speed up and slow down ratios and intervals.
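Adjustable speed-up and slow-down ratios can be applied to a recorded session by rescaling its wait intervals while leaving everything else alone. A simplified sketch (Python; the two-field (command, value) script form is an assumption standing in for a real recorded session):

```python
def scale_waits(script, ratio):
    """Replay helper: multiply every recorded Wait interval (in
    milliseconds) by `ratio` -- below 1.0 to speed a session up,
    above 1.0 to slow it down -- leaving all other commands
    untouched so the session stays faithful to the recording."""
    out = []
    for command, value in script:
        if command == "Wait":
            out.append(("Wait", int(value * ratio)))
        else:
            out.append((command, value))
    return out
```

Because only the timing changes, the same recording can serve both stress runs (compressed waits) and realistic-pacing runs (waits as recorded).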
Load generation should proceed from:
Considering all of these disparate requirements, it seems evident that no single product can support all of these goals. However, there is one common theme: the majority of the work is based on "...what does it [the WebSite] look like from the point of view of the user?" That is, from the point of view of someone using a browser to look at the WebSite.
This observation led our group to conclude that it would be worthwhile trying to build certain test features into a "test enabled web browser", which we called eValid.
Browser Based Solution. With this as a starting point we determined that the browser based solution had to meet these additional requirements:
User Interface. How the user interacts with the product is very important, in part because in some cases the user will be someone very familiar with WebSite browsing and not necessarily a testing expert. The design we implemented takes this reality into account.
Figure 1. eValid Menu Functions.
Operational Features. Based on prior experience, the user interface for eValid had to provide for several kinds of capabilities already known to be critical for a testing system. Many of these are critically important for automated testing because they assure an optimal combination of test script reliability and robustness.
A side benefit of this was that playbacks were reliable, independent of the rendering choices made by the user. A script plays back identically, independent of browser window size, type-font choices, color mappings, etc.
Figure 2. Illustration of eValid Validate Selected Text Feature.
Test Wizards. In most cases manual scripting is too laborious to use, and making a recording to achieve a certain result can be equally impractical. We built in several test wizards that mechanize some of the most common script-writing chores.
Here is a sample of the output of this wizard, applied to our standard sample test page example1.html:
# Static Simple Link Test Wizard starting...
GotoLink 0 "http://www.e-valid.com/Products/example1/example1.html#bottom" ""
GotoLink 0 "http://www.e-valid.com/Products/example1/example1.html#target" ""
GotoLink 0 "http://www.e-valid.com/Products/example1/example1.html#notdefined" ""
GotoLink 0 "http://www.e-valid.com/Products/example1/example1.html" ""
GotoLink 0 "http://www.sr-corp.com/Products/Web/CAPBAK/example1/example1.notoutside.html" ""
GotoLink 0 "http://www.e-valid.com/Products/example1/example1.html#top" ""
GotoLink 0 "http://www.e-valid.com/Products/example1/example1.html" ""
# Static Simple Link Test Wizard ended.
Figure 3. Sample of Output of Link Test Wizard.
Here is a sample of the output of this wizard, applied to our standard test page: example1.html:
# Form Test Wizard starting...
InputValue 0 69 "SELECT-ONE" "list" "Top of Page " "0" ""
InputValue 0 90 "RADIO" "check" "buying-now" "TRUE" ""
InputValue 0 92 "RADIO" "check" "next-month" "TRUE" ""
InputValue 0 94 "RADIO" "check" "just-looking" "TRUE" ""
InputValue 0 96 "RADIO" "check" "no-interest" "TRUE" ""
InputValue 0 103 "CHECKBOX" "concerned" "Yes" "TRUE" ""
InputValue 0 105 "CHECKBOX" "info" "Yes" "TRUE" ""
InputValue 0 107 "CHECKBOX" "evaluate" "Yes" "TRUE" ""
InputValue 0 109 "CHECKBOX" "send" "Yes" "TRUE" ""
InputValue 0 121 "SELECT-MULT" "product" "All eValid Products||Site Analysis||Regression Testing||Advanced Application Monitoring||Performance, Load/Testing" "0,1,2,3,4" ""
InputValue 0 131 "SELECT-ONE" "immediacy" "Never" "2" ""
InputValue 0 141 "TEXT" "name" "eValid" "" ""
InputValue 0 143 "TEXT" "phone" "eValid" "" ""
InputValue 0 145 "TEXT" "email" "eValid" "" ""
InputValue 0 147 "TEXT" "number" "eValid" "" ""
InputValue 0 150 "TEXTAREA" "comment" "eValid\\\\eValid" "" ""
SubmitClick 0 156 "submit" "SUBMIT DATA" ""
GoBackTo 0 1 "http://www.e-valid.com/Products/example1/example1.html" ""
# Form Test Wizard ended.
Figure 4. Sample of Output of FORM Test Wizard.
The idea is that this script can be processed automatically to produce the result of varying combinations of pushing buttons. As is clear, the wizard will have pushed all buttons, but only the last-applied one in a set of radio buttons will be left in the TRUE state.
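Enumerating the "varying combinations of pushing buttons" is a small combinatorial exercise: one choice per radio group, and each checkbox independently on or off. A sketch of such a combination generator (Python; the data shapes are illustrative, not the wizard's actual processing):

```python
from itertools import product

def form_combinations(radio_groups, checkboxes):
    """Enumerate the form states a wizard-generated script could be
    varied over: exactly one selection per radio group, and every
    on/off assignment of the checkboxes."""
    radio_choices = product(*radio_groups.values())
    for radios in radio_choices:
        for boxes in product([True, False], repeat=len(checkboxes)):
            # Merge the radio selections with the checkbox states
            # into one form-state dictionary.
            yield dict(zip(radio_groups, radios)) | dict(zip(checkboxes, boxes))
```

Each yielded dictionary corresponds to one variant of the wizard's script, with the named radio value set TRUE and the listed checkboxes toggled accordingly.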
Performance Testing Illustration. To illustrate how eValid measures timing we have built a set of Public Portal Performance Profile TestSuites that have these features:
E-Commerce Illustration. This example shows a typical E-Commerce product ordering situation. The script automatically places an order and uses the Validate Selected Text sequence to confirm that the order was processed correctly. In a real-world example this is the equivalent of (i) selecting an item for the shopping basket, (ii) ordering it, and (iii) examining the confirmation page's order code to assure that the transaction was successful. (The final validation step of confirming that the ordered item was actually delivered to a specific address is also part of what eValid can do -- see below.)
Figure 5. Sample Input Form For E-Commerce Example.
Figure 6. Response Page for E-Commerce Example.
ProjectID "Project"
GroupID "Group"
TestID "Test"
LogID "AUTO"
ScreenSize 1280 960
FontSize 0
InitLink "http://www.e-valid.com/Products/example1/example1.html"
Wait 3838
InputValue 0 141 "TEXT" "name" "Mr. Software" "" ""
Wait 3847
InputValue 0 143 "TEXT" "phone" "415-861-2800" "" ""
Wait 5168
InputValue 0 145 "TEXT" "email" "info@sr-corp.com" "" ""
Wait 2123
InputValue 0 147 "TEXT" "number" "9999" "" ""
Wait 3265
InputValue 0 150 "TEXTAREA" "comment" " Testing" "" ""
SubmitClick 0 156 "submit" "SUBMIT DATA" "" NAV
Wait 5135
ValidateSelectedText 0 12 278 "88899999" ""
# End of script.
Figure 7. Script for E-Commerce Test Loop.
Obviously we need to expand the capability to the Mozilla class of browsers, and possibly others as well. In addition, certain user-control functions have to be refined to get the best use out of the product set.