
Canoo WebTest White Paper

Testing is an important part of any serious development effort. For web applications it is crucial.

Defects in your corporate website may be merely annoying at one time, but at other times they can cost you real money, lower your market value, and may even put you out of business.

Canoo WebTest helps you to reduce the defect rate of your web application.

What our customers care about

Quality
Quality improvements are hard to achieve if you cannot see the effects of your measures.
Canoo WebTest measures the externally observable quality of your application.
Development Risk
Is the development team on track? What progress has it made? What does it mean if they say that 80% is working? Is it really?
Canoo WebTest reports the real progress in terms of running Use Cases.
Operations Risk
Can we put our application into production safely? Will it work? Will it cause any harm while running?
Canoo WebTest tells you whether it will work.
Delivery
Did the development team really deliver everything they promised?
Canoo WebTest tells you what was delivered and whether it works as expected.
Costs
The costs for testing must not exceed its benefits.
Canoo WebTest is free of charge, tests are easy and quick to write. They can be run countless times unsupervised and automatically. In fact it is cheaper and faster than testing manually.

What programmers care about

As programmers we want to be sure that our web application works as expected. We want to validate our work. We need some backing so that we can boldly say: "Yes, we have done it correctly. Yes, it works. Yes, we are finished with this. No, we have not broken any old functionality."

If we apply the full set of tests to the system every day then it is easy to find the cause of any reported defect, because it must be something we checked in yesterday.

If testing finds a defect, we want to solve it quickly. Therefore, we need to reproduce the unexpected behavior. What were the steps that led to this error? What was the sequence? What were the intermediate results? How much easier would it be to track down the error if we only had this information!

No matter how hard we try, there will always be defects that slip through our testing. They get reported by our users. We want to make sure that their feedback does not get lost, that the defect really gets solved, that it never appears again in future releases. The best solution is to write an automated test that exposes the bug. It will fail as long as the bug is unsolved. It will stay forever in our suite of tests.

We have to read a lot of documentation every day. Bad experience made us suspicious about the correctness of any external documentation. We don't really like writing documentation ourselves because we know that it is only a matter of time until it is out of sync with the system and all our effort will be wasted. If the documentation is done via automated tests, it is assured to be up to date, making it a reliable source of information. We are much more motivated to invest our time for this.

The same holds true for requirements specifications. It would be really convenient if we could automatically prove that we comply with the requirements spec. Therefore the spec needs to be formal enough to allow automated compliance tests. It must still be easy to understand so that the customer, the requirements analyst and the development team can all easily understand the spec. The specification language needs to be flexible enough to express page contents, workflow and navigational structures.

You may claim that all of the above would be really helpful but impossible to implement under the constraints of real projects. We have done it ourselves and we have helped others do it. The effect is tremendous: on the quality of the system, on the satisfaction of the customer, and on the motivation and self-esteem of the development crew.

Testing is not for free, but it pays off.

How Canoo WebTest works

Canoo WebTest lets you specify test steps like

get the login page
validate the page title to be Login Page
fill scott in the username text field
fill tiger in the password field
hit the ok button
validate the page title to be Home Page

The example steps above make up a sequence of steps that only make sense if executed in exactly this order and within one user session. We call this a use case or a scenario. Canoo WebTest offers the appropriate abstraction for this. Refer to the Manual Overview and the API Doc for a complete list of step types.

Converting the textual description into a Canoo WebTest is easy, as you see below. Note how close it is to the textual description.

The example as a Canoo WebTest
<target name="login" > 
  <webtest name="normal" >
    &config;
    <steps>
      <invoke        description="get Login Page"
        url="login.jsp"   />
      <verifyTitle   description="we should see the login title"
        text="Login Page" />
      <setInputField description="set user name"
        name="username"
        value="scott"     />
      <setInputField description="set password"
        name="password"
        value="tiger"     />
      <clickButton   description="Click the submit button"
        label="let me in" />
      <verifyTitle   description="Home Page follows if login ok"
        text="Home Page"  />
    </steps>
  </webtest>
</target>

This is XML and you will get all the support from your preferred XML editor, including syntax highlighting and code completion based on the WebTest.dtd. Canoo WebTest leverages the advantages of XML even further. You may have noticed the line &config;. This is an XML entity that refers to the content of a file. The XML parser inlines the file at test execution time. It is one of the possible ways to share common settings for all test steps. Here the settings for protocol, host, port and webapp name are shared.
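The entity mechanism can be sketched as follows; the file name and the exact `config` attributes shown are illustrative and may differ in your setup.

```xml
<!-- at the top of the test file: declare the entity, pointing it
     at a shared settings file (file name is illustrative) -->
<!DOCTYPE project SYSTEM "WebTest.dtd" [
    <!ENTITY config SYSTEM "includes/config.xml">
]>

<!-- includes/config.xml: inlined by the XML parser wherever &config; appears;
     attribute names are a sketch of typical shared settings -->
<config protocol="http"
        host="localhost"
        port="8080"
        basepath="myapp" />
```

Every test that references &config; then runs against the same protocol, host, port and webapp name; changing the target server means editing one file.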

If you are familiar with the ANT build automation tool you will have recognized that Canoo WebTest makes use of this. If ANT is totally new to you, we recommend having a look at the ANT description at The Jakarta Project. Canoo WebTest exploits ANT's ability to structure a "build" into modules that can either be called separately or as a whole. That way, you can run any WebTest in isolation. You can also group tests into a testsuite that again can be part of a bigger testsuite. In the end you have a tree of testsuites, where each node and subtree can be executed.
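The tree structure can be sketched with plain ANT targets (all target names here are illustrative): each leaf target contains one webtest, and a suite is simply a target that depends on other targets.

```xml
<!-- leaf targets such as "login", "badLogin" and "bookmark" each
     contain one <webtest>; suites just chain them together -->
<target name="loginSuite" depends="login, badLogin, bookmark" />

<target name="full" depends="loginSuite, orderSuite"
        description="the root of the testsuite tree" />
```

Running `ant badLogin` executes one test in isolation; `ant full` walks the whole tree.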

The execution of the individual test steps is currently implemented using the HtmlUnit API, again an Open Source package. Test results are reported either as plain text or in XML format for later presentation via XSLT. Standard reporting XSLT stylesheets come with the Canoo WebTest distribution. They can easily be adapted to your corporate style and reporting requirements.

A sidebar: do you think the above example is so easy that you do not need an automated test for it? Consider the following variations:

Bookmark
What if I try to get the Home Page directly without login?
Other pages
We have to test that no page is shown without proper login and that we get the requested page after proper login.
Bad Login
Bad login should keep us on the login page.
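The "Bad Login" variation, for instance, could be written as a sketch along these lines (the target name and button label are illustrative):

```xml
<target name="badLogin">
  <webtest name="bad login keeps us on the login page">
    &config;
    <steps>
      <invoke        description="get Login Page"
        url="login.jsp" />
      <setInputField name="username" value="scott" />
      <setInputField name="password" value="wrong" />
      <clickButton   label="let me in" />
      <verifyTitle   description="we must still be on the login page"
        text="Login Page" />
    </steps>
  </webtest>
</target>
```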

This is quite a number of scenarios to be tested. Now imagine a manual tester checking all of this. Very soon they will get bored and inattentive, not to mention that resetting the session for every single test requires a lot of work. Will they really check all the possible variations again at every full test?

Pragmatic Considerations

Test automation is key to better quality. Manual checks are more flexible and less expensive to do once. They are more expensive and less reliable when tests need to be done over and over again. We advise doing manual checks for everything that cannot break after it has worked once. Everything else should be automated if the automation can be done without excessive cost. We feel that testing with Canoo WebTest reaches the break-even point for 90% of our tests after a few test runs at the latest.

We want to use what we already know. We don't want to learn a new language for the test automation. We want to rely on standard formats.

Functional testing can be classified as either data-driven or record/replay. Canoo WebTest follows the data-driven approach. Record/replay is appealing at first because you can create a lot of tests in a short time. A proxy logs the pages you request and stores the results. It can then replay the requests and compare the results against the stored ones. You typically have to tweak this procedure to tell the program which parts of the page are expected to change; the current date and time are the most obvious examples. Every small change to your webapp causes a lot of these tests to fail. These failures must be processed manually to separate the "real" failures from the "false negatives". Doing this is almost as tedious and error-prone as manual testing and is therefore discouraged by testing professionals.

Any automated test should fit snugly into your build process. If you are already using ANT for your build automation, integrating Canoo WebTest is no effort at all. An example of this is Canoo WebTest itself: it contains a selftest that is written with Canoo WebTest, and every new build of Canoo WebTest triggers that selftest. You can explore this behavior online, starting at the Build Info link of the Canoo WebTest distribution page. Note that this is very convenient for nightly builds and even for use with a continuous integration platform like CruiseControl or Travis CI.

If your build process is not ANT based, calling Canoo WebTest is still easy. It means starting a Java Application. This can easily be done with every build script language that we know.

"Regression testing" means asserting that everything that worked yesterday still works today. To achieve this, our tests must not depend on random data. Also, the expected result must be clear in advance, as opposed to the "guru checks output" approach, where a specialist validates changing results. Tests must give a thumbs-up indication when successful and a detailed error indication otherwise. This is pretty much like compiler messages.

Functional tests do not replace unit tests; they work together hand in hand. Consider the following example: your webapp displays an html table that is filled with data from the database. The maximum number of rows should be 20, and if there is more data available, a link should be shown that points to the page containing the next 20 entries. If there is no data, no table should be shown, but the message "sorry, no data". We would test this with a) no data b) one row c) 5 rows d) exactly 20 rows e) 21 rows f) 40 rows g) 41 rows. A naive way of testing this would be to manipulate the database (maybe by using an administration servlet that we can call via "invoke") prior to calling the page. But this is not only very slow but also a little dangerous. What if two tests run concurrently against the same test database? They will mutually destroy their test setup. What if the test run breaks? Is the state of the test database rolled back? The whole job is difficult for a functional test, but easy and quick for a unit test. A unit test can easily call the table rendering and assert the proper "paging" without even having a database! What is left for the functional test is to assert that the table rendering logic was called at all.

There is a lot more to say about unit testing. Refer to JUnit and the annotated references for further information.

Canoo WebTest is an Open Source Java project and totally based on Open Source packages. If you are not satisfied with any of the functionality, you can adapt it to your requirements. Having the sources, you even gain the ability to start the test in the debugger, revealing everything that goes on.

Canoo WebTest is free of charge. The downside is that there is no guaranteed support. However, you can ask Canoo for individual support incidents, a support contract, or on-site help for introducing automated testing in your project.

Canoo WebTest is not restricted to any special technology on the server side. It makes no difference whether you use Servlets, JSP, ASP, CGI, PHP or whatever, as long as it produces html. Canoo WebTest executes client-side JavaScript just like your browser does. In fact, it uses HtmlUnit for this purpose, which can be seen as a faceless browser that is able to mimic the behavior of e.g. MSIE or Firefox.

Browser dependencies are the menace of web programming. One possibility is to check manually against all the "supported browsers". Our approach is to validate that our html complies with the specification. A full and pedantic validation is outside the scope of Canoo WebTest, but every validation step calls the NekoHTML parser (part of HtmlUnit) and will warn you about improper html. That has proven to be very helpful. If your manual tests reveal that certain html constructs produce different behavior in your supported browsers (like empty table cells in IE and Netscape), you can set up a test that checks against the usage of these constructs.

Advanced Topics

We found Canoo WebTests to be easy to understand, maintain and create, even for non-developers. We had testers, assistants, novice programmers, business-process analysts and even managers and customers writing tests. This opens another opportunity: if the customer is able to understand or even write the tests, then they can serve as a requirements collection. Our preferred way of dealing with requirements is: "Whatever you write in a test, we will make it run. We promise nothing else but this."

If we get the tests written in advance, they serve as a requirements specification. While implementing, they give us feedback on how far we are. After implementation, they document what we have done. That documentation is always up to date, as we can prove by the click of a button. The format of this documentation may be unfamiliar (as it is not MS-Word), but it has "the power of plain text" (cf. The Pragmatic Programmer). It can easily be transformed into other formats, e.g. by using XSLT.

It is good practice to care for the quality of your tests no less than you do for the quality of your production code. The first point here is to avoid duplication. Canoo WebTest combines the options of XML and ANT for helping you with this.

Canoo WebTest allows defining modules that can be reused in a number of tests. A common example is a sequence of validation steps that you apply to almost every page. These steps check against error indications like http errors, java stack traces, "sorry, we cannot...", etc. It may also contain a check for the copyright statement that is supposed to appear on every page. The samples that come with Canoo WebTest show how to do this.
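Such a shared module can again be realized as an XML entity that is inlined into the steps of every test. The step names below follow the Canoo WebTest step library; the texts and the file name are illustrative:

```xml
<!-- includes/verifyNoErrors.xml: inlined via &verifyNoErrors; after every page
     (file name, texts and copyright line are illustrative) -->
<not description="no java stack trace on the page">
    <verifyText text="java.lang.Exception" />
</not>
<not description="no application error message">
    <verifyText text="sorry, we cannot" />
</not>
<verifyText description="the copyright statement must appear on every page"
    text="Copyright" />
```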

Sometimes we have to test the same scenario for a number of different languages, each with different classes of users, and each of these combinations with different user settings, etc. That can easily lead to so many test combinations that copy/paste would make the tests unmaintainable. Canoo WebTest uses ANT's mechanisms to allow calling tests with overriding parameters. Again, the distribution contains a comprehensive example. Although all the test combinations get tested, the test description contains the scenario only once, plus the information about the variation of calling parameters.
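A sketch of this mechanism using ANT's antcall (the property and target names are illustrative):

```xml
<!-- run the same scenario once per language by overriding a parameter -->
<target name="loginAllLanguages">
    <antcall target="login">
        <param name="language" value="en" />
    </antcall>
    <antcall target="login">
        <param name="language" value="de" />
    </antcall>
</target>

<!-- inside the "login" scenario, steps refer to the parameter as ${language},
     e.g. <invoke url="login.jsp?lang=${language}" /> -->
```

The scenario is written once; each antcall runs it with a different parameter set.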

Canoo WebTest can be used to do automated tracking of your project. If your tests capture all the requirements, then every test run gives you feedback on how much you have achieved so far. The history of test reports reflects your team's productivity in terms of delivered functionality. The last report always shows the current state of your project in the most reliable metric we know: running and tested use cases.