XFormTest - An XLSForm testing library just for you

XFormTest is a tool for creating and running automated tests to ensure the quality and stability of individual XForms and XLSForms.

I looked far and wide, but I was unable to find a tool out there which would allow me to run tests on my ODK forms. This meant a lot of manual testing for our team before we felt confident enough to use our forms in data collection. To save time and create peace of mind, I've created this tool.

Quick start

  1. Prerequisites: (i) Java 8, and (ii) access to an XLSForm (Excel) to XForm (XML) converter, such as pyxform, XLSForm Online, or XLSForm Offline.
  2. Download XFormTest.
  3. Download this example XLSForm and convert it to XML.
  4. Run the pre-made tests in the example form, replacing "x.y.z" with the version number in the file name of the downloaded jar (full commands are sketched below):
    java -jar xform-test-x.y.z.jar xlsxExample.xml
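
For reference, here is roughly what steps 3 and 4 look like on the command line, assuming you use pyxform's xls2xform command for the conversion (XLSForm Online or Offline work just as well) and that the downloaded example file is named xlsxExample.xlsx:

    # Convert the example XLSForm (Excel) to an XForm (XML); requires pyxform: pip install pyxform
    xls2xform xlsxExample.xlsx xlsxExample.xml

    # Run the tests written into the form; replace x.y.z with the version you downloaded
    java -jar xform-test-x.y.z.jar xlsxExample.xml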

Quick start video

Read the docs
Much more information is available at: http://xform-test.pma2020.org


XFormTest Web App
I'm happy to announce that there's now a web application for using XFormTest, in addition to the CLI: https://xform-test.herokuapp.com/

After some DNS updates, this will be accessible at http://xform-test.pma2020.org/, and the docs will be at http://xform-test-docs.pma2020.org/

So this looks interesting! Quick noobish question: what's the difference between "a test" and the validation rule triggers you get when you use the xls -> xml converters?

Are these unit tests?

Hey Amit, thanks!

Validation vs Testing
Are you familiar with the difference between a "syntax error" and "logic error"?

Think of the validation that runs in an XLS (XLSForm) -> XML (XForm) converter (a form validator) as checking mostly for syntax errors. It wants to know whether it can generate a valid XForm that will, at the very least, be able to fully load in the client (e.g. ODK Collect).

Think of a form tester as checking not for syntax errors, but for logic errors. It isn't answering whether the form can simply load successfully; it's trying to answer the question "does my form work as intended?"

Are these unit tests?
Short answer:
No; they are primarily automated blackbox tests that emulate data entry through an entire form.

Long answer:
The tests that XFormTest is currently designed to run are not unit tests. A unit test is designed to test a small unit of functionality. The equivalent of a unit test in an XLSForm would, I think, be checking whether a single cell containing logic (e.g. a constraint, relevant, or choice_filter) functions as intended.

While I'm not foremost an expert on QA, to the best of my knowledge XFormTest's primary functionality (called value assertions) is a set of "automated blackbox tests". By blackbox, I mean the tests don't know anything about the nature of the code inside the form; they just exercise a specific workflow, such as navigating the form and entering data. You can test such workflows manually by loading the form on a phone, holding it in your hands, and going through it question by question. XFormTest aims to ease that burden by letting you write the test case essentially once, inside the form, and then execute it "automatically" by running XFormTest on the file again whenever you like.

Yes, this helps. But how would it know that logically it does what I want? A quick example (something that happened to me just now)... I have a field that is filtered by another field... but that second field showed up later in the survey (so the first question always had 0 options). So I would see this as logically incorrect, but not syntax-wise.

Would your test suite detect that?

I guess that's actually a better way to ask: "what use case errors does your test protect us from?"

Hopefully that's a bit clearer?

It's sounding awesome though!

It's a lot simpler than you might think!

I'll give you an example. Let's imagine that your filter questions are about foods.

1. Which foods do you like?

  • strawberries
  • steak
  • potatoes
  • waffles

In your test case column, let's say you want to emulate that the respondent chooses strawberries and waffles. You would enter:
strawberries waffles

2...9
(questions I won't show here)

10. Earlier, you said that you liked the following foods. Of these remaining foods, which is your favorite?

  • strawberries
  • waffles

In the test case column, I can write:

waffles

If for some reason your choice_filter is completely broken and none of the options show up, XFormTest will error when it tries to enter "waffles", and it will let you know exactly where the problem was. If, however, your choice_filter is broken in such a way that all the options show up, it wouldn't catch that particular problem, because it would still be able to enter "waffles", and that's all that the test case cares about.
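
To make that concrete, here is a rough sketch of how the relevant XLSForm rows could look. The test column name shown here (test_case) is purely illustrative rather than what XFormTest actually expects; check the docs for the real column name:

    survey sheet
    type                  | name     | label                                 | choice_filter            | test_case
    select_multiple foods | liked    | 1. Which foods do you like?           |                          | strawberries waffles
    select_one foods      | favorite | 10. Of these, which is your favorite? | selected(${liked}, name) | waffles

    choices sheet
    list_name | name         | label
    foods     | strawberries | strawberries
    foods     | steak        | steak
    foods     | potatoes     | potatoes
    foods     | waffles      | waffles

When XFormTest walks the form, it "answers" Q1 with strawberries and waffles, then tries to answer Q10 with waffles from whatever options the choice_filter leaves available.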

As you can see, the objective is to emulate data entry. As you can imagine, it won't catch all possible errors; it's really hard to get complete automated test coverage. But it will leave you a lot more covered than you otherwise would be.

@Joseph_E_Flack_IV Interesting! So in your example listed above, would one still need to write strawberries/waffles into Q1 and then waffles into Q10, and then automate the simulation of those entries?

If so then it IS more or less a unit test (to my way of thinking) because I could create a trillion tests that would cover all edge cases just to make sure everything runs smoothly!

Interesting tool! Thanks for creating it and for sharing it! How many tests do you run typically in your Use Case?

Yep, I think you got it. You can write several test cases. Here are some examples:

  1. Passing test case
    Q1 - strawberries waffles
    Q10 - waffles

  2. Passing test case
    Q1 - strawberries waffles
    Q10 - strawberries

  3. Failing test case
    Q1 - strawberries waffles
    Q10 - strawberry (fails because misspelled)

  4. Failing test case
    Q1 - strawberries waffles
    Q10 - potatoes (if your choice_filter is working correctly, this will fail because potatoes was not one of the selections in Q1; see the sketch below)
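
To tie that back to the spreadsheet, here's test case 4 written with the same illustrative test_case column as before (again, check the docs for the column name XFormTest actually reads). Because potatoes is filtered out of Q10's choice list, the test errors at that row and reports where it failed:

    type                  | name     | label | choice_filter            | test_case
    select_multiple foods | liked    | Q1    |                          | strawberries waffles
    select_one foods      | favorite | Q10   | selected(${liked}, name) | potatoes    <- test errors here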

Fair enough! I'd call these unit tests, although I see how, strictly speaking, they are not (in that they aren't testing functions so much as testing outcomes). In R, which is my primary language, even these thingies are unit tests... you can write tests at whatever level of specificity you want, from the lowest-level C++ code to a massive "did anything at all fail" level of thing. Anyway, this looks awesome! Great job!