Exploring Mobile UI Testing

In this series of posts, which will be published on a rather regular basis, I'll try to describe my journey in the Mobile UI Testing world with the findings, anecdotes, and of course failures that led to the current state of our testing frameworks for both Android and iOS. I won't be sharing a lot of code, but rather some snippets to give a rough idea of how it all works and the rationale behind certain choices.

Quick links to already published articles (will be updated):


When I first discovered the world of UI Automation Testing I was extremely overwhelmed. Up until then, I had just been developing product features as a junior developer, and I must admit that testing wasn't my strong suit. Sure, I was very efficient at manually testing specific scenarios on mobile and desktop applications just before a major release, but I never had the incentive to go deep into automated testing, mostly because at the time it was something I had only heard of during my studies and it had never really clicked.

To me, the testing pyramid was still unknown, and unit tests were those files in the green-backgrounded folder rows of the IDE that only contained ExampleUnitTest.kt with a simple assertEquals(4, 2+2).

A couple of years into my first job, I was offered a new opportunity, something different from what I was used to. Rather than working on product features, my task was going to be testing them with automated tests, by porting to Android the existing broken-in toolchain and pipelines they already had for iOS. It was huge. My prior experience in this field was close to zero, so by the time my first interview arrived I had started dusting off my testing books and catching up as much as I could.

Fast forward to the big day, and still today I remember how I felt when I first saw a phone rack with several physical iOS devices running thousands of UI tests in parallel. At that point I had a rough idea of how things worked, thanks to the countless nights spent bashing my head against Espresso and UIAutomator on Android, but seeing everything in action felt somehow magical. And to be fair it still does now, day after day, several years in, when I see Android virtual devices (AVDs) and iOS simulators running the test suites our team maintains.

A lot of water has passed under the bridge. Over the years I have changed companies and even gone back to "pure" product development for a while, with a strongly opinionated automated testing strategy that eventually led me to my current position as Mobile QA Automation Lead and largely impacted the foundation of the current testing frameworks we use every day.

When I joined Wise Emotions (now Telepass Digital) back in March 2020 as a QA Automation Tester, my task was in fact to implement the first E2E UI tests from scratch, on both Android and iOS.

During most of its first year of existence, however, the QA team was composed solely of manual testers with no coding experience, except for myself and our current QA Manager. Manually validating features was fine while the tester-to-developer ratio was still reasonable (we were a startup of ~30 people at the time), but it was obvious that at some point the number of changes and additions that needed testing - on top of regression checks - was going to bury us and eventually risk slowing down the development process.

Having to manually test dozens of different behaviors of a single screen, given the high number of possible combinations of account types (in the Telepass world, for example), could make a tester spend a considerable amount of time on setup and teardown, as every test run requires a clean state every single time. Automated tests on top of the unit tests written by the development team were therefore the most obvious next step to start preventing regressions, and the subsequent surprises once shipped to production.

With that in mind, these were the main points we identified back then, which still define our automation testing strategy today.

No real network involved

We wanted to achieve E2E testing with a mocked environment that would help us catch any regression or undocumented change in API calls. Network mocks would also be stored in a dedicated repository as plain JSON files and fetched at compile time during builds, so that any commit of the target applications could be tested with relevant responses.
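To give a rough idea of the approach (all names here are hypothetical, not our actual API), canned JSON responses can be registered against endpoint paths and served locally instead of hitting the real network, with unexpected calls failing loudly so that undocumented API changes surface immediately:

```kotlin
// Hypothetical sketch of endpoint-keyed mocking. In a real setup the
// JSON bodies would live as plain files in a dedicated repository and
// be bundled into the test build at compile time.
object MockRegistry {
    private val mocks = mutableMapOf<String, String>()

    fun register(path: String, jsonBody: String) {
        mocks[path] = jsonBody
    }

    // Returns the canned response; an unmocked endpoint fails the
    // test immediately, which is how regressions and undocumented
    // API calls get caught.
    fun respond(path: String): String =
        mocks[path] ?: error("Unmocked endpoint: $path")
}

fun main() {
    MockRegistry.register("/v1/account", """{"type":"premium"}""")
    println(MockRegistry.respond("/v1/account")) // {"type":"premium"}
}
```

In practice a local HTTP server or an interceptor in the app's networking stack would play the role of `MockRegistry`, but the key property is the same: every response is deterministic and versioned alongside the tests.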

Similar APIs between Android and iOS

Android and iOS would need to expose similar APIs. Our automation team is today split into two subteams, one for Android and one for iOS, but we'd love for them to eventually work in a cross-platform manner.

To achieve this, the APIs exposed by our custom libraries would need to abstract away as much of the heavy lifting as possible, such as mapping and interacting with UI elements, allowing someone who has never written a line of Swift/Kotlin and/or interacted with Espresso/UIAutomator/XCTest to onboard in a very short amount of time.
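As a hypothetical sketch of the shape such an abstraction might take (this is not our actual library), test code can talk to named screen objects while a platform driver hides the Espresso/UIAutomator or XCTest details; a fake driver stands in here for the real platform implementation:

```kotlin
// The driver interface is the only platform-specific piece: an
// Android implementation would delegate to Espresso/UIAutomator,
// an iOS one to XCTest. Test authors never see those details.
interface UiDriver {
    fun tap(elementId: String)
    fun readText(elementId: String): String
}

// A screen object exposes intent-level actions, so writing a test
// requires no knowledge of the underlying UI toolkit.
class LoginScreen(private val driver: UiDriver) {
    fun tapSubmit() = driver.tap("submit_button")
    fun errorMessage() = driver.readText("error_label")
}

// In-memory fake used here only to make the sketch self-contained.
class FakeDriver : UiDriver {
    val taps = mutableListOf<String>()
    override fun tap(elementId: String) { taps += elementId }
    override fun readText(elementId: String) = "Invalid credentials"
}

fun main() {
    val screen = LoginScreen(FakeDriver())
    screen.tapSubmit()
    println(screen.errorMessage()) // Invalid credentials
}
```

With this split, a cross-platform tester only ever writes against screen objects, and each subteam maintains its own driver.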

Thoroughly written test cases for every test

Every test would refer to a well-documented test case: a page containing requirements, JIRA tickets, Figma mockups, network mocks, and all the information needed to implement it right away, without having to spend countless hours browsing stories or pinging designers just to make sure the screen and behavioral references are up to date.

This set of requirements is mandatory: we dislike uncertainty, we want to know exactly what happens when interacting with elements on the screen, and we want to avoid "I don't know"s when asked "what if I do X with Y?". Any tester would probably go insane having to hunt down requirements.

JIRA, CI integrations, and reporting

By the time the first automated tests were implemented, a proper workflow had to be in place to continuously test our applications on every push to the main branches, with the help of a CI platform and remote machines (both VMs and bare metal) to run AVDs and Simulators.

A test report page with screenshots and videos was also a big item on our to-do list, as stack traces are sometimes rather cryptic and hard to grasp compared to a video that shows, for example, the automation tapping the wrong button or idly waiting for a component that, for whatever reason, isn't there.

Lastly, to keep track of the test cases that needed to be implemented, or even created for new features, we decided to build a custom JIRA integration to combine the efforts of both manual and automation testers. This helped uphold the good practice of building and maintaining a consistent and exhaustive test book.

This implementation, a webhook handler with some custom logic (which I enjoyed coding in Rust 🦀), is rather new: we started using it just a few weeks ago, so it still needs some fine-tuning, but I'll eventually report on the impact it (hopefully) is going to have.


Looking back today, I can tell it has been quite the journey. I started implementing all the points by myself and encountered many bumps along the way, especially on iOS, mostly because until two years ago I had never touched a single line of Swift, if you exclude some random bugfixes at my previous job, and I had little understanding of xcodebuild and the whole fascinating world behind it (I enjoy playing with Gradle a lot, and it's the same here). But it was worth it, as today I lead a team of talented SDETs who maintain and improve the foundation of the two internal libraries with their ideas and skills.

But enough with the introductions: in the next post, let's take a look at Network Isolation and why it's vital to our testing frameworks.

Niccolò Forlini

Senior Mobile Engineer