We've just open-sourced our AI-powered survey instrument evaluation toolkit, and we're inviting people to try it out and, ideally, help us (a) evaluate the quality and cost-effectiveness of AI review vs. human review and (b) improve the toolkit (there's lots of room for improvement in how XLSForms are parsed, for example). You can find all of the details in this blog post and more technical details in the GitHub repo.
This is very much a work-in-progress, but we're hopeful that it can be the start of something that ultimately improves the quality of survey instruments — and therefore data quality. Fingers crossed! Feedback and contributions very much welcome.