Why traditional testing tools do not work well for Web 2.0

The emergence of new Web 2.0 technologies has also transformed the world of testing. In the early Web days, testing tools were based on protocol-level recording: they captured the HTTP requests sent from the browser to the server and the responses sent back. Dynamic values returned by the server, such as session IDs, had to be correlated manually.
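Manual correlation typically meant extracting a dynamic value from a live response and substituting it into subsequently replayed requests. A minimal sketch, using hypothetical recorded traffic (the `JSESSIONID` cookie, the login response, and the cart request are all illustrative, not from any particular tool):

```python
import re

# Hypothetical response captured at replay time; the session ID is freshly
# generated by the server and differs from the one seen during recording.
login_response = "Set-Cookie: JSESSIONID=A1B2C3D4; Path=/"

# Recorded follow-up request, still carrying the stale value from recording.
recorded_request = "GET /cart HTTP/1.1\r\nCookie: JSESSIONID=RECORDED_VALUE\r\n"

def correlate(response: str, request: str) -> str:
    """Extract the live session ID and substitute it into the replayed request."""
    session_id = re.search(r"JSESSIONID=([^;]+)", response).group(1)
    return re.sub(r"(JSESSIONID=)\S+", r"\g<1>" + session_id, request)

print(correlate(login_response, recorded_request))
```

Real scripts had to repeat this kind of extract-and-substitute logic for every dynamic value the application produced, which is what made correlation so labor-intensive.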

As applications became more complex, so did the scripting. Correlation began to require advanced scripting skills and deep application expertise, and script creation became a complex, time-consuming process.

QA organizations then started shifting to UI-level recording, which focuses on verifying specific objects in the browser. Testing tools no longer needed to operate at the lower transport layer; they could instead focus on the objects in the DOM.
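At this level a test asserts that a particular object exists in the parsed document rather than inspecting raw HTTP traffic. A minimal sketch of that idea using only the standard library (the page fragment and the `checkout` id are hypothetical):

```python
from html.parser import HTMLParser

# Hypothetical page fragment; a UI-level tool verifies objects in the DOM,
# not the bytes on the wire.
PAGE = '<div><button id="checkout">Check out</button></div>'

class ObjectFinder(HTMLParser):
    """Record whether an element with the target id appears in the document."""
    def __init__(self, target_id: str):
        super().__init__()
        self.target_id = target_id
        self.found = False

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("id") == self.target_id:
            self.found = True

finder = ObjectFinder("checkout")
finder.feed(PAGE)
print(finder.found)
```

Commercial UI-level tools wrap this same pattern in recorders and richer locators, but the underlying check is a DOM lookup rather than a protocol comparison.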

However, Ajax introduced a new set of complexities: client-side processing and asynchronous communication. UI-level testing tools that focused only on the DOM no longer worked either. Record-and-replay tools needed to add JavaScript-rendering agents on top of the DOM to support the multitude of different toolkits used to build Web 2.0 applications.
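The asynchronous part of the problem is that the DOM is no longer in its final state when the page "loads"; a tool must poll or wait until a callback has updated it. A sketch of the generic poll-until-ready pattern such tools rely on (the simulated delayed update stands in for an Ajax callback):

```python
import time

def wait_for(condition, timeout: float = 5.0, interval: float = 0.05) -> bool:
    """Poll until an asynchronously updated condition holds, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulate a DOM node that an Ajax callback fills in after a short delay.
state = {"ready_at": time.monotonic() + 0.2}
result = wait_for(lambda: time.monotonic() >= state["ready_at"])
print(result)
```

A plain record-and-replay script that asserted immediately after navigation would fail here, which is exactly why Ajax broke the earlier generation of tools.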

This approach in turn presented its own insurmountable challenge: with new toolkits becoming available every month and old toolkits being constantly updated and revamped, no vendor could keep up and provide a reasonable level of support for the new functionality.

Additionally, conventional GUI automation tools were simply too “heavy”: they could typically automate only a single user session per operating-system session. A successful performance testing solution needs a driver that can run many concurrent user sessions simultaneously.
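The contrast can be sketched in a few lines: when each virtual user is a lightweight function rather than a full desktop session, one process can drive many of them concurrently. The `virtual_user` function below is a hypothetical stand-in for a per-user protocol-level script:

```python
from concurrent.futures import ThreadPoolExecutor

def virtual_user(user_id: int) -> str:
    """Hypothetical virtual user: would issue this user's HTTP requests here."""
    return f"user-{user_id}: ok"

# One process drives ten concurrent sessions, something a GUI automation
# tool bound to one desktop session per user cannot do.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(virtual_user, range(10)))

print(len(results))
```

Load-testing tools scale this same structure to thousands of virtual users, which is why performance testing stayed at the protocol level even as functional testing moved to the UI.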