Enumerator training advice and best practices?

Hello, survey community!

After 4 years in Senegal running an RCT with IPA, I am now supporting a US-based survey organization that is looking to improve its enumerator training practices and wants to learn from what is already being done in the survey world.

Since documentation on training practices is much sparser than documentation on technical topics, I'm asking you to share your thoughts, experiences, and clever solutions. I'm also hoping this question encourages more learning from the community about the training best practices that are out there somewhere! Training our enumerators is arguably the most important, but often most neglected, piece of the quality-data puzzle.

THE SITUATION: The organization works with local partners in about a dozen developing countries to conduct large-scale, semi-annual, quantitative household surveys. The core questionnaire remains the same across rounds, with some minor revisions and a handful of new questions. There is a high level of language diversity in the countries in which they work. Both within and between countries, different languages are often used in training, written training and reference materials, the questionnaire itself, and the field interviews. There is also a high level of cultural diversity, varying contexts across and within countries, and a wide span of formal education levels among field agents. A few years ago, HQ prepared a substantial set of training materials, namely PowerPoint presentations, in order to standardize content and to lessen the training-prep burden on country partners.

THE PROBLEM: As is common, the main challenges are the TIME and the QUALITY of the training; there is never enough time, and quality could always be higher. Training preparation takes time, due to translation issues (not all of the languages have a written form) and to updating training materials to reflect the most recent questionnaire revisions. The training itself is facilitated by a mix of HQ and in-country coordination and technical staff, individuals who are experienced in the subject but are not trained facilitators. As such, they may be less effective, or at least less efficient, than professional facilitators at teaching the content to enumerators in the time allotted.

THE MAIN QUESTIONS: We are looking to learn from your experiences, both good and bad, around the following four questions:

  1. What have you found to be efficient methods for updating the language and content of training materials for subsequent rounds and new countries? (Automated? Semi-automated? Better structured?)

  2. What are the best ways to deal with multilingual users in trainings? (Language group break-offs? Codify unwritten languages? Audio recordings?)

  3. What strategies have you used to deal with a limited, and seemingly insufficient, number of in-person training hours? (Pre- or post-training activities? E-learning? Follow-ups?)

  4. How do you try to build "communities of practice" for capacity development, allowing and encouraging your team members to communicate their own questions and experiences to benefit others?

Thank you in advance for all of your help!
And if you'd be open to sharing even more, let me know and I'll follow up with you.

Cheers,
Sarah

Hi Sarah,

I know this is late in coming, but you might want to look at the
training guides at https://opendatakit.org/help/training-guides/ and
reach out individually to the folks who wrote those.

Yaw

On Tue, Feb 28, 2017 at 12:29 PM, wrote:

Sarah,

Since it's been a few years, I'm wondering if you (or others) have any experiences you'd want to share?

While I'm not deploying surveys across multiple countries at scale, I think my context is very similar to what you describe with regard to language diversity. I'm hoping to do a side project with a group of cashew farmers, but they are mostly illiterate and monolingual in a local language. I asked a couple of weeks ago whether they could create a WhatsApp group and still haven't heard if they succeeded. My time is limited and I don't want to become tech support for them. Still, I think a basic survey would be eye-opening for the entire group: farmers could take a GPS perimeter of their field to calculate the actual area in hectares, then submit their contact info and annual harvest data. Many people here have no idea how many hectares they are planting, or even worse, are wildly inaccurate in their estimates.
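
As a very rough sketch of what that form could look like in XLSForm (all names here are made up, not from any existing form), you could pair ODK's `geoshape` question type with the `area()` function, which returns square meters:

```
| type      | name       | label                             | calculation                           |
|-----------|------------|-----------------------------------|---------------------------------------|
| text      | farmer     | Farmer name and contact           |                                       |
| geoshape  | boundary   | Walk the perimeter of your field  |                                       |
| calculate | area_ha    |                                   | round(area(${boundary}) div 10000, 2) |
| note      | area_note  | Your field is ${area_ha} hectares |                                       |
| integer   | harvest_kg | Annual cashew harvest (kg)        |                                       |
```

Dividing by 10,000 converts square meters to hectares, and showing the result back in a note gives farmers immediate feedback on the size of their plot.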

Tyler

I too would be very interested in hearing about others' experience and best practices. I've mostly been involved in train-the-trainer contexts myself but I have had the opportunity to observe or read reports from some field staff trainings. Here are some of my observations:

  • where possible, co-designing forms with field staff can go a long way in reducing training needs. Field staff can advise on things like what order it's appropriate to ask questions in, what kind of introduction is expected, how big their devices will be if they're using their own, how they're most comfortable expressing dates, etc.
  • getting buy-in from field staff is really important. If they understand and believe in the goals of the project, they can be creative problem-solvers if needed. This may seem like an obvious point but it can feel really tempting to jump into mechanics and neglect this.
  • relatedly, field staff benefit a lot from seeing the big picture of the form(s) they're going to be using. This could be using a high-level workflow diagram or a very granular flowchart like @Dalerhoda has shown examples of in this post. This again makes data collectors more like partners in the work rather than just followers.
  • where possible, use hints, labels, images, audio, and video within forms instead of external training materials. Those will get translated along with the form and won't go out of date. You can even have an option at the beginning of the form to see a tutorial, shown or hidden based on a question like "Are you practicing? (yes/no)" or "Do you want to see instructions? (yes/no)".
  • think about the end-to-end workflow and limit the Collect interface as much as you can to reduce what data collectors have to think about. For example, if you don't want data collectors to be saving partially-filled drafts, use settings to prevent saving drafts and hide the draft button. This guide provides some example configuration ideas.
  • spend your training time role playing and/or piloting. This will let you focus on real problems and misunderstandings.
  • use the audit log at least for training and initial data collection. This will let you see how data collectors navigate your form and help identify sections that may not be working as intended.
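
To make the in-form tutorial idea above concrete, here's an illustrative XLSForm sketch (the question, group, and list names are invented) that shows or hides a group of instruction notes based on an opening question:

```
survey sheet:
| type             | name     | label                                    | relevant            |
|------------------|----------|------------------------------------------|---------------------|
| select_one yesno | show_tut | Do you want to see instructions?         |                     |
| begin_group      | tutorial | Tutorial                                 | ${show_tut} = 'yes' |
| note             | tut_nav  | Swipe left to move to the next question. |                     |
| note             | tut_req  | Questions marked * must be answered.     |                     |
| end_group        |          |                                          |                     |

choices sheet:
| list_name | name | label |
|-----------|------|-------|
| yesno     | yes  | Yes   |
| yesno     | no   | No    |
```

Because the instructions live in the form itself, they get translated alongside the questionnaire through the usual `label::language` columns rather than in separate training documents.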

Unfortunately I think the geoshape question type is one of the least intuitive in ODK. This is something we want to improve on both in Collect and web forms.

For now, you could do something like embed an image in the form that shows which controls to press (e.g. tap the "play" triangle, select the second "automatic" option, etc) right before the geoshape question. I think this is a case where practice with a trained facilitator would be ideal so that the facilitator can identify common mistakes and come up with suggestions on how to address them either in form design or further practice.

You could take an entirely other approach and instead use a repeat to capture points for the different vertices of the plot. You could have simple audio prompts like "walk to one corner of your field" with a picture of what a corner looks like (you might need community input on how to phrase this so it makes sense). You can use a technique like this one to ask a question like "are you back where you started?" to end the repeat.
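
A rough XLSForm sketch of that repeat-based alternative might look like this (names are invented; actually ending the repeat from the "back where you started?" answer would follow the technique linked above):

```
| type             | name       | label                                               |
|------------------|------------|-----------------------------------------------------|
| begin_repeat     | corners    | Field corners                                       |
| geopoint         | corner     | Walk to a corner of your field and record the point |
| select_one yesno | back_start | Are you back where you started?                     |
| end_repeat       |            |                                                     |
```

For non-literate users, each of these labels could also be paired with a `media::audio` column so the prompt is read aloud in the local language.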

You could also try things like making the form a bit fun and friendly by doing things like showing pictures of smiling community members or healthy cashew plants at the beginning or end. If data collectors have some numeracy and you already have estimated field sizes, you could show a comparison of the estimated and actual area.
