Tuesday 18 September 2012

The Future of Mobile Testing

Six or so years ago, Jason Huggins and I were talking about the next generation of web testing tools. This wasn't a conversation about the then-unreleased Selenium 1.0, or even about WebDriver, which was a new and shiny thing I was working on at ThoughtWorks. This was about the generation of tools that would come after those.

The fact that we can do automated testing on the Web is a happy accident. When Microsoft and Netscape put JavaScript into browsers and standardised the DOM, they didn't do so with an eye to making it easy to write tests. They wanted new and whizzy features that only worked in their browser, in a fight to win the browser wars. Each browser implemented the other's features and then added more in a bid to gain the edge. Being able to build something like Selenium on top of all this was never meant to happen.

That wasn't the conversation that Jason and I were having. We were talking about what the next generation of testing tools would use; the ones that would make Selenium and WebDriver totally redundant. It was obvious to both of us that accessibility APIs would be The Way Forward. After all, users with one form of disability or another make up a small, but important, percentage of web users, and their equal access to information and applications is enshrined in law. Not only is making an app accessible a groovy and lovely thing to do, it's also often a legal requirement.

The next generation of tools, we reasoned, would build upon this accidental automation infrastructure in the same way that we used the DOM and JS: to provide an API that can be used to drive and query an application from outside that application.
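To make that concrete, here's the shape such an API already takes on the web with WebDriver (a minimal Java example; the URL and locators are placeholders). The next generation would offer the same kind of interface, just backed by accessibility calls rather than the DOM and JS:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class DriveAndQuery {
      public static void main(String[] args) {
        // Drive the application from outside it...
        WebDriver driver = new FirefoxDriver();
        driver.get("http://localhost:8080/app");  // placeholder URL

        // ...query its state...
        WebElement heading = driver.findElement(By.tagName("h1"));
        System.out.println(heading.getText());

        // ...and simulate user actions against it.
        driver.findElement(By.id("next")).click();  // placeholder locator

        driver.quit();
      }
    }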

Microsoft led the way when .NET 3.0 shipped with an API called UI Automation. I was working on a project with Mike Two, who hacked together a proof of concept against the desktop app we were building before flying to India to be closer to the dev team there. Some time later, White appeared, which took those concepts and followed through on them. Brilliant stuff.

Then it went quiet.

Until, that is, the mobile revolution started. For an amazing number of users, their primary contact with the Web will be a mobile device, probably running either Android or iOS. The problem is that neither of these platforms has "making it easy to write an automated end-to-end test" baked in as a concept. Increasingly, however, they do provide the keys to accidental testability: their accessibility frameworks, which are often even called something useful like "UI Automation".
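On Android, for example, an accessibility service can already walk the UI tree of the foreground app and fire actions at it, which is precisely the raw material a testing tool needs. A rough sketch using the real framework classes (the "OK" label is just an example, and a real tool would do far more):

    import android.accessibilityservice.AccessibilityService;
    import android.view.accessibility.AccessibilityEvent;
    import android.view.accessibility.AccessibilityNodeInfo;

    // A minimal sketch: the same plumbing that serves screen readers can
    // locate a widget in another app and act on it.
    public class SketchService extends AccessibilityService {
      @Override
      public void onAccessibilityEvent(AccessibilityEvent event) {
        AccessibilityNodeInfo root = getRootInActiveWindow();
        if (root == null) {
          return;
        }
        // Query the UI tree for nodes by their visible text...
        for (AccessibilityNodeInfo node :
            root.findAccessibilityNodeInfosByText("OK")) {
          // ...and drive them from outside the application.
          node.performAction(AccessibilityNodeInfo.ACTION_CLICK);
        }
      }

      @Override
      public void onInterrupt() {}
    }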

All this means that the next generation of tools is coming.

There is, however, a missing piece. We have the WebDriver APIs for testing web-based content, and for testing native content we have the accessibility APIs (which can be wrapped to look webdriver-ish if desired) (and I think it is desired) (but I'm biased). But how do these two gel? How do we test a "hybrid" app, composed of both native and web-based content? In my view, this gap is best bridged in two ways: by augmenting the accessibility API so that a webdriver instance can be returned from any WebView found via it, and by allowing the returned WebElement instances to also implement the equivalent of UIAElement, so that they can be the target of OS-level simulated user input.
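To sketch what I mean in code (everything here apart from the Selenium types is invented; it's the shape the bridge could take, not an API that exists):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    // Hypothetical: a webdriver-ish wrapper over a platform accessibility API.
    interface AccessibilityDriver {
      NativeElement findElement(By by);
      // Hypothetical: OS-level simulated input aimed at an element rather
      // than at raw coordinates; bridged WebElements would be valid targets.
      void sendSystemTap(WebElement target);
    }

    // Hypothetical: an element in the native UI tree.
    interface NativeElement {
      void click();
    }

    // Hypothetical: a WebView found via the accessibility tree hands back
    // a WebDriver scoped to the web content it hosts.
    interface WebViewElement extends NativeElement {
      WebDriver getWebDriver();
    }

    class HybridSketch {
      static void test(AccessibilityDriver app) {
        // The native side of the app, driven through accessibility calls...
        app.findElement(By.name("Sign in")).click();

        // ...then across the bridge into the hosted web content...
        WebViewElement webView =
            (WebViewElement) app.findElement(By.className("WebView"));
        WebDriver web = webView.getWebDriver();

        // ...and back out again: the returned WebElement doubles as the
        // equivalent of a UIAElement, a target for OS-level input.
        WebElement buy = web.findElement(By.id("buy-now"));
        app.sendSystemTap(buy);
      }
    }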

There. Problem solved.