How the QA team ensures ODK is reliable

On a recent Insiders call, we shared a little bit about the quality assurance (QA) process and types of testing we do across the different tools. Dominika (@dbemke), Wiktor (@wkobus), and I wanted to give more insight into how we ensure ODK is reliable. Our goal is to do the heavy lifting on testing, so you can focus on what matters - collecting quality data and making a positive impact.

New features in any piece of software typically go through design, build, and test phases. Most software projects involve QA only in the test phase, but at ODK, we do things a little differently: QA is involved throughout the software life cycle.

We use different types of testing, from test-driven design to manual exploratory testing, to ensure every feature works as intended and is reliable. We are never 100% sure something works perfectly because ODK has so many diverse use cases. Fortunately, we have crash reporting that alerts the development team to any problems with new features, and fantastic users who report issues as soon as they run into them.

:sparkles: Design phase
We work closely with the rest of the team throughout the product design process to make sure that, based on our experience, the feature will be useful and that the design is likely to be reliable.

:rocket: Build phase
Before any code can be added to ODK, it is first reviewed and approved by another member of the development team to make sure it’s correct, performant, and secure. The approval process requires that the code pass the many automated tests we have in ODK. Many of these tests were created to catch bugs found over the project’s 16-year history.
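To give a flavor of what those automated tests look like, here is a simplified, hypothetical sketch in Python (not code from the actual Collect or Central test suites). A regression test typically pins down a previously reported bug so it can never silently come back; the `parse_geopoint()` helper and the whitespace bug below are made up for illustration:

```python
# Hypothetical example: a regression test for a bug where extra whitespace
# in a geopoint answer used to break parsing. The parse_geopoint() helper
# below is a stand-in, not real ODK code.
import pytest


def parse_geopoint(answer: str) -> tuple[float, float]:
    """Parse 'latitude longitude ...' into a (lat, lon) pair."""
    parts = answer.split()
    return float(parts[0]), float(parts[1])


@pytest.mark.parametrize(
    "answer",
    [
        "54.35 18.64",         # normal input
        "  54.35   18.64  ",   # extra whitespace: the previously reported bug
        "54.35 18.64 0.0 5.0",  # altitude and accuracy included
    ],
)
def test_geopoint_parsing_survives_messy_input(answer):
    lat, lon = parse_geopoint(answer)
    assert lat == pytest.approx(54.35)
    assert lon == pytest.approx(18.64)
```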

:dart: Testing phase
This is where we spend most of our time. We start by understanding the goal: we review the feature’s specification documents and user needs, and make sure we understand the problems we are solving. We ask a lot of questions so that nothing is overlooked and the feature works as intended. Once we understand the goal, we go through various types of testing:

  • Functional testing - We manually test each aspect of the feature to ensure it behaves according to the requirements, and not always in the office: sometimes we go on a field trip to make sure the features really work in the field! Recent functional tests include:

    • In Collect, when the new counter question was added, we made sure adding and subtracting worked and the question respected constraints.
    • In Central, when Entities upload via CSV was added, we verified that only roles like Project Manager or Administrator could upload Entity data this way.
  • Explore edge cases - We try to break things and find the unhappy paths where things can go wrong:

    • Extreme input - What happens when a user creates a form with 25,000 questions or sends a submission with 1,000 high-resolution images? (There’s a sketch after this list showing one way we can generate a form like that.)
    • Boundary testing - How does the feature behave at the limits of acceptable input (e.g., minimum and maximum allowed values)? (See the boundary-test sketch after this list.)
    • Concurrency - What happens if multiple actions are performed at once (e.g., syncing data while filling out a form, or multiple users attempting to edit the same form or project settings at the same time)?

  • Test different devices - Collect works on over 20,000 different devices. We can’t test every one, but we test on various real devices, emulators, and different Android versions to catch device-specific issues. Central is tested with different versions of Chrome, Firefox, and Safari so we can be sure it works well on your computer.

  • Test case creation - We write manual test cases that cover all aspects of the feature. For example, for a barcode scanning feature in Collect, we write test cases for: scanning a valid barcode; scanning an invalid, inverted, or unreadable barcode; behavior when the camera fails to open; and performance when scanning in low-light conditions.

  • Reporting issues - Any issues identified are filed on GitHub, detailing the steps to reproduce them, the expected outcome versus the actual result, and any relevant logs, screenshots, or short videos.

  • Integration testing - Since ODK includes multiple tools (e.g., Collect, Central, Web Forms, pyODK), we do integration testing to ensure everything works seamlessly together, even when you are using an older version of one tool with a newer version of another. (See the pyODK sketch after this list.)

  • Security testing - ODK hires independent security companies that test the platform for security issues (more on this soon!).

  • Regression testing - We manually test both the new feature and the app’s previously working features to ensure nothing is broken by the new code.

  • Exploratory testing - We perform unscripted testing by interacting with the app in unpredictable ways. This type of testing is based on our experience and intuition with the app. For example, we test what happens if a user switches between apps while scanning a QR code, or what happens when multiple users are submitting forms to the same project simultaneously from different devices.
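A few of the checks above lend themselves to small scripts. For extreme input, we often generate the oversized test data rather than typing it by hand. Here is a minimal, hypothetical sketch that produces a 25,000-question form definition with pandas; the column layout follows the standard XLSForm format, but the file name and question wording are made up:

```python
# Hypothetical sketch: generate an XLSForm with 25,000 text questions
# to see how the tools cope with a very large form definition.
import pandas as pd

N_QUESTIONS = 25_000

survey = pd.DataFrame({
    "type": ["text"] * N_QUESTIONS,
    "name": [f"question_{i}" for i in range(N_QUESTIONS)],
    "label": [f"Question number {i}" for i in range(N_QUESTIONS)],
})

settings = pd.DataFrame({
    "form_title": ["Extreme input test"],
    "form_id": ["extreme_input_test"],
})

# Writing .xlsx requires the openpyxl package to be installed.
with pd.ExcelWriter("extreme_input_test.xlsx") as writer:
    survey.to_excel(writer, sheet_name="survey", index=False)
    settings.to_excel(writer, sheet_name="settings", index=False)
```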
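For boundary testing, we walk the edges of whatever constraint a feature enforces. As a simplified, hypothetical sketch (the `is_within_constraint()` helper is a stand-in for the real constraint logic inside the apps), boundary checks for a counter question limited to 0–10 might look like this:

```python
# Hypothetical sketch: boundary tests for a counter constrained to 0..10.
# is_within_constraint() is a stand-in for the app's real constraint logic.
import pytest

MINIMUM, MAXIMUM = 0, 10


def is_within_constraint(value: int) -> bool:
    return MINIMUM <= value <= MAXIMUM


@pytest.mark.parametrize("value, expected", [
    (MINIMUM - 1, False),  # just below the minimum
    (MINIMUM, True),       # exactly the minimum
    (MINIMUM + 1, True),   # just above the minimum
    (MAXIMUM - 1, True),   # just below the maximum
    (MAXIMUM, True),       # exactly the maximum
    (MAXIMUM + 1, False),  # just above the maximum
])
def test_counter_boundaries(value, expected):
    assert is_within_constraint(value) is expected
```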
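And for integration testing across tools, pyODK comes in handy: we can point it at a Central server running one version and confirm that projects, forms, and submissions created with other tools still come back as expected. This is only a sketch; it assumes a pyODK configuration file with the server URL and credentials is already in place, and it simply lists what the server returns:

```python
# Hypothetical sketch: use pyODK to confirm that a Central server still
# returns projects, forms, and submissions created by other ODK tools.
# Assumes credentials are configured in the default pyODK config file.
from pyodk.client import Client

with Client() as client:
    projects = client.projects.list()
    print(f"Server has {len(projects)} project(s)")

    forms = client.forms.list()
    for form in forms:
        submissions = client.submissions.list(form_id=form.xmlFormId)
        print(f"{form.xmlFormId}: {len(submissions)} submission(s)")
```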

So now you know everything we do to ensure ODK is as reliable as possible for you. Any questions? Let us know below.
