Discussion Series: Implementing ODK 2.0

ODeeKers,

Early on, when the internet came to your house on an AOL CD through snail
mail, when the Spam Wars raged in every inbox, when dragging your 11 pound
laptop into a Guatemalan cyber cafe and asking to plug a Cat5 cable into
the network hub was both bizarre and obscene, I tried to change the
nomenclature of the day. You may recall that in the mid-90s, no one quite
knew what to call this thing. Luddite news anchors would snark through
reports of new software, websites, scams and schemes of the nascent network
now ubiquitously called "The Internet". They called it "The Web, The Net,
The InterWeb, The World Wide Web, the WebNet" and many other abbreviations
and portmanteaus. As an early adopter, I thought it would be great to have
a cool way to talk about sending a message over the WebNet, and I didn't
like the word "E-mail". So, I started asking people to "Zap me with a
Zoltar".

"Here's my address, corvuscorax@juno.com," I would say, "Zap me with a
Zoltar tomorrow, and we'll talk." This was still at the time when people
would ask you if there were any spaces in your email address, or which
letters were capitalized. They would tell you to go to a web site at
H-T-T-P-Colon-Backslash-Backslash-Dubbleyou-Dubbleyou-Dubbleyou-.... before
telling you the address. At the time, everything was so up in the air and
so dumb that calling emails *Zoltars* didn't seem all that crazy. Now, it
sounds stupid, I know, but there are still a few choice friends and family
members who will humor me to this day by zapping my inbox with a friendly
Zoltar.

My point here is that not all ideas are good ideas. Though they may sound
good at the time, it's the spin cycle of heavy use that centrifuges out the
fluff and nonsense and leaves you with something you can use and depend on.
If you looked at all of the ODK users, I'm sure you would see a spectrum,
some of whom are early adopters, many of whom just want something that
works. Me, I am a reckless early adopter, first in line for whatever
personal jet-pack, sub-dermal cerebral stimulator, or gadget that just
rolled off of the bespoke Akihabara electronics production line. And so it
was that I produced my first digital data collection system in 2006, to
support research on war refugees in Uganda
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1448368. This thing
had "early adopter" written all over it. We used smart phones that probably
cost $800 apiece once we had jacked them up with super-size batteries and
memory cards; they ran Windows ME (?!?!), and I built the survey in a sort
of chopped-down version of Visual Basic called Visual CE
http://www.syware.com/products/visual_ce.php.

(I'm coming around to talking about ODK pretty soon here, so stick with me)

This Uganda survey was a monster: on paper it was 25 pages long and had
its share of skip logic and data constraints. The software I used to build
it had a sort of weird conceit where every element you added (a label, a
list of options, a text entry box) was given an X,Y coordinate. That is,
every single item had to be plotted on a Cartesian plane. So, I had to
imagine the screen dimensions, place the elements for each question in that
"screen", and then when the user had completed the question, move their
view to the coordinates of the next "screen". As I added questions, the
Cartesian plane grew and grew and grew. In the end, the entire survey
covered a virtual surface that was as large as the side of a barn. It
pushed up against the hard limit of 32,000 pixels in a single dimension. It was as if the
phone was a little window that the user looked through, and as they clicked
buttons, I would slide this barn sized survey around behind the window.
When you wanted to edit the thing, you had to drag around the Cartesian
plane until you found the one label or drop-down list you needed to change.
I still have nightmares about this thing.

Despite all of the trouble, the survey turned out great. We could query the
database after every data sync, and look for errors that could be fixed
before data collection was complete. This, at a time when people doing
research in developing countries were shipping hundreds of pounds of paper
back to their universities where graduate students chained in basements
performed double data entry for months. Principal Investigators would get
their first look at data collected over the summer while leaves turned
orange in the crisp Autumn air.

This proof of concept was an incredible risk, an expensive and difficult
gamble that only in hindsight could be shown to save a ton of money and
time while providing faster and higher quality data. Way to go, Us! This
was when I worked at Tulane with Phuong Pham and Patrick Vinck who now run
the KoBo Project http://www.kobotoolbox.org/ at the Harvard Humanitarian
Initiative http://hhi.harvard.edu/, and you have to give them credit for
foresight and stone-cold nerve in taking the risk to prove the point that
paper was outdated, inaccurate, and expensive.

The result was a bit of funding to really get going with digital data
collection, and to come up with a better system than the pixel counting
horror movie that I first used. I was charged with figuring it out, and my
first thought was that it had to be better and easier to work with. It had
to be open source, and it had to be adaptable to a variety of surveys. Some
poking around led me to Open Data Kit, and a few conversations with Yaw
sealed the deal. I wrote my next survey in XML, again for Uganda, and it
was many times easier. Even then, this was a risk. There weren't a ton of
people using ODK, but it had a university program behind it, so there
wasn't a danger of the old bait-and-switch of suddenly having to pay for it
(looking at you, Magpi); if you wanted to add something to it, you could
edit the code; and it had a charismatic leader in Gaetano Borriello.

I met Gaetano when we were both on a panel at the "Soul of the New
Machine: Human Rights, Technology & New Media" conference at UC Berkeley.
He embodied a virtue that I aspire to, Problem Solving Through Risk. As
previously mentioned in this discussion series, it was quite risky to
imagine that smartphones were the best solution to data collection in
developing countries at a time when only nerds and trust fundees had
smartphones, and everyone else thought that we should dumb down to collect
data on those little flip-phones. Gaetano knew that you aimed up at better
technology, not down at cheaper tech. I begged him to let me present
first, so that I could be the opening act instead of having to follow him
and the other luminaries on the panel. Afterwards, we had this deep
troubleshooting conversation in which I presented a haunting ODK-related
issue. He leaned into it the same way I lean into a seemingly unsolvable
problem, joyfully grinding through the symptoms, ruling out unknowable
variables, and finally concluding it could only be a simple but unavoidable
hardware issue particular to that brand of phone. Few people can have this
kind of conversation; most people see a technical issue as a problem to
sweat through, not an opportunity to make things better.

Now, when you take up ODK Collect to do your data collection, it's as
smooth as silk. There is a huge community asking and answering questions,
and a ton of ancillary support mechanisms like KoboForm, ODK Aggregate, and
custom versions of Collect. My point is, because of people like Gaetano,
the road is smooth enough that data collection is democratized and open to
people who don't want to edit 5000 lines of XML by hand, they can just jump
in and get to work.

There are, however, always those persons who seek the limits of the field.
They will ask "Can it do this?" and "Can it do that?" until someone says
No. There was a lot of that early on in ODK and a lot of features were
added and extra tools developed to accommodate the needs of researchers.
Now, ODK Collect can do almost everything you can imagine.

Almost.

But what if you want to query a database in the middle of a survey to
populate a list of choices? What if you want to modify the layout and look
of the question screen? What if you want to collect longitudinal data,
linking data collected last year with data you collect this year? ODK
Collect stores all of its collected data in discrete XML files; you can't
query them or do anything really clever. There is no real database behind
ODK Collect while you are working on the phone. If you want to take things
to the next level, Open Data Kit has your back! It's time to look at ODK
2.0.

I've been working with ODK 2.0 https://opendatakit.org/use/2_0_tools/ for
a year now, and it has that seat-of-your-pants feeling of experimental
excitement that can only come from Problem Solving Through Risk, from
trying something new. I'm happy to say that the results of
using this in a very large and demanding field survey are very positive,
and I hope I can encourage more people to pick it up, and even to
contribute to its further development.

In brief, an NGO called PATH http://www.path.org/ has a malaria
elimination program http://sites.path.org/macepa/ whose data requirements
exceed the capabilities of ODK Collect. They decided, in a move whose sheer
nerve I'm not sure everyone fully understood, to go with data
collection using ODK 2.0. They needed to be able to do things like query a
CSV file mid-survey, and they needed to record a ton of data about every
member of a household, and then later on do things like populate a list of
choices with "All Female members of the household between the ages of 12
and 49 who tested Negative for Malaria". This kind of complex work
requires a database behind the survey, and the flexibility to push the
boundaries of the survey's capabilities.
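
To make that concrete, here is a minimal sketch of the kind of query such a
household-member table makes possible. The table and column names below
(household_members, sex, age, malaria_result) are hypothetical, purely for
illustration; ODK 2.0 keeps its data in SQLite on the device, but this is
not the actual schema the tools generate.

    # A minimal sketch, NOT the actual ODK 2.0 schema: table and column
    # names are hypothetical, for illustration only.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE household_members (
            member_id      TEXT PRIMARY KEY,
            household_id   TEXT,
            name           TEXT,
            sex            TEXT,     -- 'F' or 'M'
            age            INTEGER,
            malaria_result TEXT      -- 'positive' or 'negative'
        )
    """)
    conn.executemany(
        "INSERT INTO household_members VALUES (?, ?, ?, ?, ?, ?)",
        [
            ("m1", "hh1", "Alice", "F", 23, "negative"),
            ("m2", "hh1", "Ben",   "M", 30, "negative"),
            ("m3", "hh1", "Carol", "F", 14, "positive"),
            ("m4", "hh1", "Dina",  "F", 40, "negative"),
        ],
    )

    # "All Female members of the household between the ages of 12 and 49
    # who tested Negative for Malaria" -- the kind of mid-survey filter
    # that is easy with a database and hopeless with loose XML instances.
    choices = conn.execute(
        """
        SELECT name FROM household_members
        WHERE household_id = ?
          AND sex = 'F'
          AND age BETWEEN 12 AND 49
          AND malaria_result = 'negative'
        """,
        ("hh1",),
    ).fetchall()
    print([name for (name,) in choices])   # ['Alice', 'Dina']

In ODK 2.0 you would typically express a filter like this in the form
definition rather than in raw SQL, but the point is the same: the answers
live in a real database that later questions can query.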

Not only did PATH decide to go with ODK 2.0, they went so far as to develop
a super cool front end on top of ODK Survey that adds capabilities for
epidemiological sampling and navigation. You can even try it out yourself;
it's in the Google Play store (Yay, Open Source!) and it's called Episample
https://play.google.com/store/apps/details?id=org.path.episample.android.
(The developer is a wonderful Ethiopian gentleman named Belendia Serda, whom
you should all know as one of the most talented ODK developers out
there.)

Taking the risk to develop custom software on top of ODK 2.0, and to deploy
it in the field in something as critical as a mass drug trial in a public
health program shows the kind of forward thinking and enthusiastic adoption
of open-source technology that has made ODK great in the last ten years.
The results have been very positive, and the data collected is able to
demonstrate the clear success of the malaria elimination program.

Now, I have a lot to say about the actual implementation, the pitfalls,
late-night struggles, hilarious gaffes and mistakes I made and fixed in the
service of bringing an ODK 2.0 project to shimmering life against
incredible odds with all the chips on the table at the 11th hour and
without a net, but I may have written more than I intended about the
philosophical idea behind such a scheme, and I think I might be just as
well off to leave it open to questions from the peanut gallery. Since this
is a discussion series, let's discuss ODK 2.0:

  • How do you decide if you should stick with ODK Collect, or go on to
    ODK Survey?
  • What are the best features of ODK 2.0 tools?
  • What areas of development are most needed for wider adoption?

I would love to hear your questions and thoughts, so please Zap me with a
Zoltar and I'll answer with more directness and brevity than I have brought
to bear on my opening comments.

Best from Washington DC,

Neil Hendrick

Hello Neil,

Great to read your thoughts about the initial development of ODK and the
next stage of ODK. With this Zoltar I want to pose the following questions:

  1. Is ODK 2.0 going through the same development path as ODK? I mean, do
    the 3 principles laid out by Yaw in the first series apply to ODK 2.0?

  2. The implementation of a database behind the survey sounds great. We have
    been very busy implementing ODK and selling it to the organization and have
    yet to try ODK 2.0. We are heavily using ODK on the geospatial side. A
    corner we still haven't explored is the editing of existing polygons and
    registering their mutations/changes. This clearly calls for a database, as
    spatial data needs to be sideloaded and edited in the field. In this case
    we are dealing with data edits and synchronization to a record in an
    existing database, rather than the collection of new entities. Would this
    be suitable for ODK 2.0?

A final thought. I see the most fun part of these developments not as the
technology itself but as all the progress occurring around the technology.
This is particularly the case for open source and scalable entrepreneurship
in developing countries, where individuals and organizations have to
undergo strong changes and address questions that until recently were only
in the realm of international powers. I find that, more than international
monetary aid, this is a real way to bridge the north-south gap and reach
those UN development goals.

Cheers,

Juan


Dear Neil,

thanks for facilitating this part of the discussion series. The first
subtopic especially has my interest, since I'm dealing with that choice right
now: How do you decide if you should stick with ODK Collect, or go on to
ODK Survey?

There are several issues that would either speed or delay my move to ODK
Survey. The most important two are listed below:

  • Ease of adding value and variable labels to your dataset.

Together with colleagues I've been implementing some surveys in Zambia, The
Gambia, Ghana and Indonesia, all with ODK Collect. One of the big pitfalls
for me is getting from a .csv to an SPSS or STATA file in which all labels
are added. Maybe I have not yet found the right resource online on how to
save time and have this done automatically. Somehow it should not be too
difficult since all labels (both value as well as variable labels) are
included in the survey form. If this is easier done in ODK 2.0 I'd love to
try it out.

  • Possibilities to convert working/existing survey forms from ODK
    Collect format to ODK 2.0 format.

What I've seen online is that it is not directly possible to re-use
existing survey forms. One of the reasons for me to stick with ODK Collect
is that we've got quite a few working forms at our disposal, and re-designing
these in a different format would make me think twice about whether or
not to move to ODK 2.0. If there were a programme available that would
convert forms from ODK Collect format to ODK 2.0 format, I'd come round.

Best from Wageningen, the Netherlands
Anja Wolsky

Dear Neil and Tino,

thank you both for the very useful feedback and showing me alternatives for
adding value and variable labels for SPSS. It's a real pity to hear that
that feature is not incorporated in ODK2. I hope it will happen in the near
future as that would really add to the user-friendliness of ODK.

@Neil: do you have any idea if and when ODK Collect will stop functioning?
Or, in other words: when will we be forced to move to ODK 2.0? (Not that I
don't want to start; I just want to know the time frame for getting myself
updated and for getting familiar with ODK 2.0.)

Cheers,
Anja

Hi all,

Enjoyed this post and love this group! My team and I are new to ODK. I
definitely relate to the Problem Solving through Risk approach as it's
exactly what we're doing...after a few weeks watching teams of doctors open
and hand-sum hundreds of error-ridden Excel sheets, I've convinced a group
of high-level public health officials that we really ought to pilot this,
but we have no developer support and I, with about a year's experience
programming in R and a few techy phone-a-friends, have ended up in the
position of spearheading the whole thing. I'm pretty sure we're going to
pull it off, but I'm hoping nobody finds out I'm actually up at night
watching YouTube videos about how servers work so I can present solutions
in AM meetings.

So far, ODK Collect seems sufficient for what we need...we're collecting
simple data (3 different forms, each about 2 paper pages) from about 150
users to start and I don't think we need most of the functionality offered
in 2.0 over 1.x. However, we will eventually want to scale to many more
collectors at different tiers (though the surveys will remain pretty
simple), and the ability to control who sees what data is going to be
important to us down the line. It's also important that we have easy
web-based as well as device-based collection (we'll use Enketo to start,
but it sounds like this may be easier in 2.0?). I've gotten a good handle on
XLS Form and Collect and we're working slowly through the Aggregate setup
process, likely hosting on AWS. It seems that 2.0 is in a pretty nascent
phase and may require more html/css/etc experience to implement? I also
like that 1.x has more extensive tutorials available and has been largely
debugged, as I am working a bit blind and rely very much on the online
community for support. I also will not be around forever (American on
contract in India) and need as straightforward and unbreakable a system as
possible to leave to the team.

The question: since we're building from scratch now, should I be starting
with 2.0 rather than anticipate migrating from the Collect/Aggregate suite
in the future, even if this future is far away? Should we just stick with
Collect since it seems a bit simpler, and not worry about upgrading just
for the sake of it? What additional skills would I have to learn to
implement 2.0 over collect/aggregate? Any guidance in making this decision
would be welcome.

Thanks!!

Best,
Jordan


Scan is probably the feature that most compels me to try out ODK 2.0.
While it was the tyranny of paper forms that drove me to ODK in the first
place, there are some cases where paper is just easier for the
fieldworkers and for participants.

I'm going to try to find the time in the new year to install the whole 2.0
suite and try things out. It is useful to hear about other successful cases,
especially while the tools are still in the alpha stage of development.

The underlying open-source credentials are really valuable in convincing me
and others I've spoken to to spend the time to skill up on these things -
very noble, of course, but it also allows people to take the tools in so
many interesting directions.

Hi Juan,
thanks for the reply, I'll answer by the numbers:

  1. Is ODK 2.0 going through the same development path as ODK? I mean, do
    the 3 principles laid out by Yaw in the first series apply to ODK 2.0?

I'm sure they do, though that's more of a question for the actual
developers in the ODK group at UW.

  2. The implementation of a database behind the survey sounds great. We
    have been very busy implementing ODK and selling it to the organization and
    have yet to try ODK 2.0. We are heavily using ODK on the geospatial side. A
    corner we still haven't explored is the editing of existing polygons and
    registering their mutations/changes. This clearly calls for a database, as
    spatial data needs to be sideloaded and edited in the field. In this case
    we are dealing with data edits and synchronization to a record in an
    existing database, rather than the collection of new entities. Would this
    be suitable for ODK 2.0?

Yes, but it would take some development. As long as you are writing it into
a table in the db that the survey is writing to, the survey can query that
database.
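
To illustrate the idea (this is a conceptual sketch, not the actual ODK 2.0
schema or sync API): if the existing polygons are sideloaded into a table in
the same database the survey writes to, a field edit becomes an update to an
existing row that can later be synchronized back, rather than a brand-new
record. The table and column names here are hypothetical.

    # Conceptual sketch only -- hypothetical table of sideloaded parcel
    # polygons that field workers edit, then sync back as changed rows.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE parcels (
            parcel_id TEXT PRIMARY KEY,
            owner     TEXT,
            geometry  TEXT,               -- e.g. WKT or GeoJSON polygon
            edited    INTEGER DEFAULT 0   -- flag rows changed in the field
        )
    """)

    # Sideload the existing records before going to the field.
    conn.execute(
        "INSERT INTO parcels (parcel_id, owner, geometry) VALUES (?, ?, ?)",
        ("p-001", "Mwansa",
         "POLYGON((28.28 -15.41, 28.29 -15.41, 28.29 -15.42, 28.28 -15.41))"),
    )

    # In the field: record a mutation to the existing entity, not a new one.
    conn.execute(
        "UPDATE parcels SET geometry = ?, edited = 1 WHERE parcel_id = ?",
        ("POLYGON((28.28 -15.41, 28.30 -15.41, 28.29 -15.42, 28.28 -15.41))",
         "p-001"),
    )

    # A later sync step would push only the edited rows back to the server.
    for row in conn.execute("SELECT parcel_id, owner FROM parcels WHERE edited = 1"):
        print(row)   # ('p-001', 'Mwansa')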

Polygons are one of those things that a lot of people have been looking at
for a while. Usually, because they want to be able to define an area, like
the borders of a refugee camp, or an area of land within which you want to
randomly sample some trees or something. I thought that there was a
solution for this already, maybe using GeoODK http://geoodk.com/index.php.

A final thought. I see the most fun part of these developments not as the
technology itself but as all the progress occurring around the technology.
This is particularly the case for open source and scalable entrepreneurship
in developing countries, where individuals and organizations have to
undergo strong changes and address questions that until recently were only
in the realm of international powers. I find that, more than international
monetary aid, this is a real way to bridge the north-south gap and reach
those UN development goals.

Agreed.

thanks for your comments, Juan. Keep up the good work.

~Neil

Anja,

These are good questions, and I hope I can help in answering them.

  • Ease of adding value and variable labels to your dataset.

One of the big pitfalls for me is getting from a .csv to an SPSS or STATA
file in which all labels are added. Maybe I have not yet found the right
resource online on how to save time and have this done automatically. Somehow
it should not be too difficult since all labels (both value as well as
variable labels) are included in the survey form.

Adding labels for analysis really is important. And you're right, they are
in the XML survey template, so it should be easy enough to handle that. For
this purpose, there is a great tool that is part of KoBoToolbox
http://www.kobotoolbox.org/, and it's compatible with ODK Collect
surveys.
KoBo has a survey design application (KoBoForm) that lets you build surveys
in a visual environment. It has some wonderful features, including the
ability to upload existing XML surveys that you have used in ODK Collect.
Now, KoBoForm has a newer, better version, but the SPSS labels function
hasn't been implemented in it yet. You can still access the old
version of KoBoForm http://www.kobotoolbox.org/koboform/oldkoboform.html
and use that SPSS export function to create an SPSS script that will label
your SPSS database with all of the questions and options from your XML
Survey.
There is a quick YouTube tutorial https://youtu.be/uBXPdc9e3gw that will
help you figure it out. It's easy.
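
If you would rather script it yourself, the same idea can be sketched in a
few lines: pull the question labels and choice labels out of the XForm XML
and write them out as SPSS VARIABLE LABELS / VALUE LABELS syntax. This is
only a rough sketch under simplifying assumptions (inline labels rather
than itext translations, no groups or repeats), not a supported ODK or
KoBo tool.

    # Rough sketch, NOT an official ODK/KoBo tool: extract labels from a
    # simple XForm (inline labels, no itext, no groups/repeats) and emit
    # SPSS VARIABLE LABELS / VALUE LABELS syntax.
    import xml.etree.ElementTree as ET

    XHTML = "{http://www.w3.org/1999/xhtml}"
    XF = "{http://www.w3.org/2002/xforms}"

    def xform_to_spss_syntax(xform_path):
        body = ET.parse(xform_path).getroot().find(XHTML + "body")
        var_lines, val_lines = [], []
        for ctrl in body.iter():
            if ctrl.tag not in (XF + "input", XF + "select1", XF + "select"):
                continue
            var = (ctrl.get("ref") or "").split("/")[-1]  # crude: last path segment
            label = ctrl.find(XF + "label")
            if var and label is not None and label.text:
                var_lines.append('VARIABLE LABELS %s "%s".' % (var, label.text))
            pairs = []
            for item in ctrl.findall(XF + "item"):         # choices of a select
                v = item.find(XF + "value")
                l = item.find(XF + "label")
                if v is not None and l is not None and v.text and l.text:
                    code = v.text if v.text.isdigit() else '"%s"' % v.text
                    pairs.append('%s "%s"' % (code, l.text))
            if var and pairs:
                val_lines.append("VALUE LABELS %s %s." % (var, " ".join(pairs)))
        return "\n".join(var_lines + val_lines)

    # Example: print(xform_to_spss_syntax("my_survey.xml")) and paste the
    # output into an SPSS syntax window after importing the .csv export.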

If this is easier done in ODK 2.0 I'd love to try it out.

It's not. In ODK 2.0, the labels for all of the questions and options are
kept in a formdef.json file, while the collected data is kept in an actual
database file called 000000001.db.

  • Possibilities to convert working/existing survey forms from ODK
    Collect format to ODK 2.0 format.

What I've seen online is that it is not directly possible to re-use
existing survey forms. One of the reasons for me to stick with ODK Collect
is that we've got quite a few working forms at our disposal, and re-designing
these in a different format would make me think twice about whether or
not to move to ODK 2.0. If there were a programme available that would
convert forms from ODK Collect format to ODK 2.0 format, I'd come round.

No, you have to start from scratch. In fact, ODK 2.0 Surveys are
definitely more complex and require more effort to design. There is a cool
"Application Designer" that you can install locally, and it really helps
with the process, but there's a different set of rules and it takes some
work to gain that competence. That's the tradeoff for the additional
features and power of working with ODK 2.0.

Best from Lusaka,
Neil

ODK Collect and the 1.x workflows will continue to function.

The number of updates to the tools will depend upon:

  • user contributions to add new functionality
  • user contributions to fix newly discovered bugs
  • user contributions to support newly released Android OS versions.

Date-time treatment, for example, needs a full re-write in ODK Collect --
but the core team is NOT doing that re-write. If the community contributes
changes to fix it, we will incorporate them, but, for now, it partially
works and partially doesn't.

Many users are just gathering data and don't need -- and more importantly,
don't want -- the ability to revise already-collected data.

For those users, ODK Collect is the ideal solution.

The 2.0 tools will inter-operate with the 1.x workflow.

ODK Survey, which provides similar functionality to ODK Collect, is easier
to customize and extend.
We anticipate some users migrating to the new 2.0 tools because of their
ease of customization.


--
Mitch Sundt
Software Engineer
University of Washington
mitchellsundt@gmail.com

Jordan,

If the 1.0 generation of tools works for you (sounds like it does),
I'd keep using them.

If a particular use-case of 2.0 (e.g., bi-directional data
sync) is a must-have, then 2.0 is worth a look. The caveat is that 2.0
is an alpha, so you should have a good technical team in place to
support your campaign.

Yaw


On Fri, Nov 27, 2015 at 12:12 PM, Jordan Levinson jord.levinson@gmail.com wrote:

Hi all,

Enjoyed this post and love this group! My team and I are new to ODK. I
definitely relate to the Problem Solving through Risk approach as it's
exactly what we're doing...after a few weeks watching teams of doctors open
and hand-sum hundreds of error-ridden Excel sheets, I've convinced a group
of high-level public health officials that we really ought to pilot this,
but we have no developer support and I, with about a year's experience
programming in R and a few techy phone-a-friends, have ended up in the
position of spearheading the whole thing. I'm pretty sure we're going to
pull it off, but I'm hoping nobody finds out I'm actually up at night
watching YouTube videos about how servers work so I can present solutions in
AM meetings.

So far, ODK Collect seems sufficient for what we need...we're collecting
simple data (3 different forms, each about 2 paper pages) from about 150
users to start and I don't think we need most of the functionality offered
in 2.0 over 1.x. However, we will eventually want to scale to many more
collectors at different tiers (though the surveys will remain pretty
simple), and the ability to control who sees what data is going to be
important to us down the line. It's also important that we have easy
web-based as well as device-based collection (we'll use Enketo to start, but
sounds like this may be easier in 2.0?) I've gotten a good handle on XLS
Form and Collect and we're working slowly through the Aggregate setup
process, likely hosting on AWS. It seems that 2.0 is in a pretty nascent
phase and may require more html/css/etc experience to implement? I also like
that 1.x has more extensive tutorials available and has been largely
de-bugged, as I am working a bit blind and rely very much on the online
community for support. I also will not be around forever (American on
contract in India) and need as straightforward and unbreakable a system as
possible to leave to the team.

The question: since we're building from scratch now, should I be starting
with 2.0 rather than anticipate migrating from the Collect/Aggregate suite
in the future, even if this future is far away? Should we just stick with
Collect since it seems a bit simpler, and not worry about upgrading just for
the sake of it? What additional skills would I have to learn to implement
2.0 over collect/aggregate? Any guidance in making this decision would be
welcome.

Thanks!!

Best,
Jordan

On Monday, October 12, 2015 at 10:30:39 PM UTC+5:30, Neil Hendrick wrote:

ODeeKers,

Early on, when the internet came to your house on an AOL CD through snail
mail, when the Spam Wars raged in every inbox, when dragging your 11 pound
laptop into a Guatemalan cyber cafe and asking to plug a Cat5 cable into the
network hub was both bizarre and obscene, I tried to change the nomenclature
of the day. You may recall in the mid 90s, that no one quite knew what to
call this thing. Luddite news anchors would snark through reports of new
software, websites, scams and schemes of the nascent network now
ubiquitously called "The Internet". They called it "The Web, The Net, The
InterWeb, The World Wide Web, the WebNet" and many other abbreviations and
portmanteaus. As an early adopter, I thought it would be great to have a
cool way to talk about sending a message over the WebNet, and I didn't like
the word "E-mail". So, I started asking people to "Zap me with a Zoltar".

"Here's my address, corvu...@juno.com," I would say, "Zap me with a Zoltar
tomorrow, and we'll talk." This was still at the time when people would ask
you if there were any spaces in your email address, or which letters were
capitalized. They would tell you to go to a web site at
H-T-T-P-Colon-Backslash-Backslash-Dubbleyou-Dubbleyou-Dubbleyou-.... before
telling your the address. At the time, everything was so up in the air and
so dumb that calling emails Zoltars didn't seem all that crazy. Now, it
sounds stupid, I know, but there are still a few choice friends and family
members who will humor me to this day by zapping my inbox with a friendly
Zoltar.

My point here is that not all ideas are good ideas, though they may sound
good at the time, it's the spin cycle of heavy use that centrifuges out the
fluff and nonsense and leaves you with something you can use and depend on.
If you looked at all of the ODK users, I'm sure you would see a spectrum,
some of whom are early adopters, many of whom just want something that
works. Me, I am a reckless early adopter, first in line for whatever
personal jet-pack, sub-dermal cerebral stimulator, or gadget that just
rolled off of the bespoke Akihabara electronics production line. And so it
was that I produced my first digital data collection system in 2006, to
support research on war refugees in Uganda. This thing had "early adopter"
written all over it. We used smart phones that probably cost $800 a piece
once we had jacked them up with super size batteries and memory cards, they
ran WindowsME (?!?!), and i built the survey in a sort of chopped down
version of Visual Basic called Visual CE.

(I'm coming around to talking about ODK pretty soon here, so stick with
me)

This Uganda survey was a monster, on paper it was 25 pages long and had
it's share of skip logic and data constraints. The software I used to build
it had a sort of weird conceit where every element you added (a label, a
list of options, a text entry box) was given an X,Y coordinate. That is,
every single item had to be plotted on a Cartesian plane. So, I had to
imagine the screen dimensions, place the elements for each question in that
"screen", and then when the user had completed the question, move their view
to the coordinates of the next "screen". As I added questions, the Cartesian
plane grew and grew and grew. In the end, the entire survey covered a
virtual surface that was as large as the side of a barn. It pushed the hard
limit of 32,000 pixels maximum dimension. It was as if the phone was a
little window that the user looked through, and as they clicked buttons, I
would slide this barn sized survey around behind the window. When you wanted
to edit the thing, you had to drag around the Cartesian plane until you
found the one label or drop-down list you needed to change. I still have
nightmares about this thing.

Despite all of the trouble, the survey turned out great. We could query
the database after every data sync, and look for errors that could be fixed
before data collection was complete. This, at a time when people doing
research in developing countries were shipping hundreds of pounds of paper
back to their universities where graduate students chained in basements
performed double data entry for months. Principal Investigators would get
their first look at data collected over the summer while leaves turned
orange in the crisp Autumn air.

This proof of concept was an incredible risk, an expensive and difficult
gamble that only in hindsight could be shown to save a ton of money and time
while providing faster and higher-quality data. Way to go, Us! This was when
I worked at Tulane with Phuong Pham and Patrick Vinck, who now run the KoBo
Project at the Harvard Humanitarian Initiative, and you have to give them
credit for foresight and stone-cold nerve in taking the risk to prove the
point that paper was outdated, inaccurate, and expensive.

The result was a bit of funding to really get going with digital data
collection, and to come up with a better system than the pixel-counting
horror movie that I first used. I was charged with figuring it out, and my
first thought was that it had to be better and easier to work with. It had
to be open source, and it had to be adaptable to a variety of surveys. Some
poking around led me to Open Data Kit, and a few conversations with Yaw
sealed the deal. I wrote my next survey in XML, again for Uganda, and it was
many times easier. Even then, this was a risk. There weren't a ton of people
using ODK, but it had a university program behind it, so there wasn't a
danger of the old bait-and-switch to having to pay for it (looking at you,
Magpi); if you wanted to add something to it, you could edit the code; and it
had a charismatic leader in Gaetano Borriello.

I met Gaetano when we were both on a panel at the "Soul of the New
Machine: Human Rights, Technology & New Media" conference at UC Berkeley. He
embodied a virtue that I aspire to: Problem Solving Through Risk. As
previously mentioned in this discussion series, it was quite risky to
imagine that smartphones were the best solution to data collection in
developing countries at a time when only nerds and trust fundees had
smartphones, and everyone else thought that we should dumb down to collect
data on those little flip-phones. Gaetano knew that you aimed up at better
technology, not down at cheaper tech. I begged him to let me present first,
so that I could be the opening act instead of having to follow him and the
other luminaries on the panel. Afterwards, we had this deep troubleshooting
conversation in which I presented a haunting ODK-related issue. He leaned in
to it the same way I lean into a seemingly unsolvable problem, joyfully
grinding through the symptoms, ruling out unknowable variables, and finally
concluding that it could only be a simple but unavoidable hardware issue
particular to that brand of phone. Few people can have this kind of
conversation; most people see a technical issue as a problem to sweat
through, not an opportunity to make things better.

Now, when you take up ODK Collect to do your data collection, it's as
smooth as silk. There is a huge community asking and answering questions,
and a ton of ancillary support mechanisms like KoBoForm, ODK Aggregate, and
custom versions of Collect. My point is, because of people like Gaetano, the
road is smooth enough that data collection is democratized and open to
people who don't want to edit 5,000 lines of XML by hand; they can just jump
in and get to work.

There are, however, always those persons who seek the limits of the field.
They will ask "Can it do this?" and "Can it do that?" until someone says No.
There was a lot of that early on in ODK, and a lot of features were added and
extra tools developed to accommodate the needs of researchers. Now, ODK
Collect can do almost everything you can imagine.

Almost.

But what if you want to query a database in the middle of a survey to
populate a list of choices? What if you want to modify the layout and look
of the question screen? What if you want to collect longitudinal data,
linking data collected last year with data you collect this year? ODK
Collect stores all of its collected data in discrete XML files; you can't
query them or do anything really clever. There is no real database behind
ODK Collect while you are working on the phone. If you want to take things
to the next level, Open Data Kit has your back! It's time to look at ODK
2.0.
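
To make that concrete, here's a little sketch of the difference (my own
illustration, nothing that ships with ODK; the field, table, and file names
are all made up). With Collect, each finalized record is its own instance
XML file on the SD card, so answering even a simple question on the device
means cracking open every file; with a database behind the survey, it's a
single query.

    import glob
    import sqlite3
    import xml.etree.ElementTree as ET

    # ODK Collect: one XML file per finalized record.
    refused = 0
    for path in glob.glob("/sdcard/odk/instances/*/*.xml"):
        instance = ET.parse(path).getroot()
        if instance.findtext("consent") == "no":   # hypothetical field name
            refused += 1

    # Database-backed model (the ODK 2.0 idea): the same question is one query.
    conn = sqlite3.connect("household.db")         # hypothetical database name
    (refused,) = conn.execute(
        "SELECT COUNT(*) FROM household WHERE consent = 'no'"
    ).fetchone()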

I've been working with ODK 2.0 for a year now, and it has that
seat-of-your-pants feeling of experimental excitement that can only come from
Problem Solving Through Risk, from trying something new. I'm
happy to say that the results of using this in a very large and demanding
field survey are very positive, and I hope I can encourage more people to
pick it up, and even to contribute to its further development.

In brief, an NGO called PATH has a malaria elimination program whose data
requirements exceed the capabilities of ODK Collect. In a move whose sheer
nerve I'm not sure everyone fully understood, they decided to go with data
collection using ODK 2.0. They needed to be able to do things
like query a CSV file mid-survey, and they needed to record a ton of data
about every member of a household, and then later on do things like populate
a list of choices with "All Female members of the household between the ages
of 12 and 49 who tested Negative for Malaria". This kind of complex work
requires a database behind the survey, and the flexibility to push the
boundaries of the survey's capabilities.
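
To give a flavor of what that means in practice, here's a rough sketch of the
kind of query a database-backed survey makes possible. This is not PATH's
schema and not ODK 2.0 syntax; the table and column names are mine, invented
for illustration.

    import sqlite3

    conn = sqlite3.connect("survey_data.db")   # hypothetical on-device database
    eligible = conn.execute(
        """
        SELECT member_id, member_name
          FROM household_members              -- filled in earlier in the visit
         WHERE sex = 'female'
           AND age BETWEEN 12 AND 49
           AND rdt_result = 'negative'
        """
    ).fetchall()
    # 'eligible' now holds exactly the people to present as the choice list.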

Not only did PATH decide to go with ODK 2.0, they went so far as to
develop a super cool front end on top of ODK Survey that adds capabilities
for epidemiological sampling and navigation. You can even try it out
yourself; it's in the Google Play Store (Yay, Open Source!) and it's called
Episample. (The developer is a wonderful Ethiopian gentleman named Belendia
Serda, whom you should all know as one of the most talented ODK
developers out there.)

Taking the risk to develop custom software on top of ODK 2.0, and to
deploy it in the field in something as critical as a mass drug trial in a
public health program, shows the kind of forward thinking and enthusiastic
adoption of open-source technology that has made ODK great over the last ten
years. The results have been very positive, and the data collected
demonstrates the clear success of the malaria elimination program.

Now, I have a lot to say about the actual implementation, the pitfalls,
late-night struggles, hilarious gaffes, and mistakes I made and fixed in the
service of bringing an ODK 2.0 project to shimmering life against incredible
odds with all the chips on the table at the 11th hour and without a net, but
I may have written more than I intended about the philosophical idea behind
such a scheme, and I think I might be just as well off to leave it open to
questions from the peanut gallery. Since this is a discussion series, let's
discuss ODK 2.0:

  • How do you decide if you should stick with ODK Collect, or go on to ODK
    Survey?
  • What are the best features of ODK 2.0 tools?
  • What areas of development are most needed for wider adoption?

I would love to hear your questions and thoughts, so please Zap me with a
Zoltar and I'll answer with more directness and brevity than I have brought
to bear on my opening comments.

Best from Washington DC,

Neil Hendrick


Hi Neil,
Thanks for the great backgrounder on the 2.0 tools. Just to clarify on the
SPSS labeling feature in KoBoToolbox: it is indeed part of the new tools;
there's no need to use the old tool for this. Anyone who needs to build a fully
labeled SPSS file can download the labels from the deployed data collection
project dashboard. The labels are created for each language contained in
the form and stored in separate SPS files (one for each language).

We're also working on getting similar syntax files for R and Stata
integrated, and on including the variable definition and data import steps in
the syntax as well.

Best,
Tino

On Mon, Nov 2, 2015 at 10:41 AM, Neil Hendrick wrote:

Anja,

These are good questions, and I hope I can help in answering them.

  • Ease of adding value and variable labels to your dataset.

One of the big pitfalls for me is getting from a .csv to an SPSS or Stata
file in which all labels are added. Maybe I have not yet found the right
resource online on how to save time and have this done automatically. Somehow
it should not be too difficult, since all labels (both value and
variable labels) are included in the survey form.

Adding labels for analysis really is important. And you're right, they are
in the XML survey template, so it should be easy enough to handle that. For
this purpose, there is a great tool that is part of KoBoToolbox
http://www.kobotoolbox.org/, and it's compatible with ODK Collect
surveys.
KoBo has a survey design application (KoBoForm) that lets you build
surveys in a visual environment. It has some wonderful features, including
the ability to upload existing XML surveys that you have used in ODK
Collect.
Now, KoBoForm has a newer, better version, but the SPSS labels function
hasn't been implemented in the new version. You can still access the old
version of KoBoForm http://www.kobotoolbox.org/koboform/oldkoboform.html
and use its SPSS export function to create an SPSS script that will label
your SPSS database with all of the questions and options from your XML
survey.
There is a quick YouTube tutorial https://youtu.be/uBXPdc9e3gw that will
help you figure it out. It's easy.
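
If you'd rather roll your own, the same trick is only a few lines of
scripting. Here's a rough sketch (mine, not the KoBo exporter): it assumes a
simple XForm whose question and choice labels are written inline (no
multi-language itext blocks) and whose choice codes are numeric, and it prints
SPSS VARIABLE LABELS / VALUE LABELS syntax you could paste into a .sps file.
The form file name is hypothetical.

    import xml.etree.ElementTree as ET

    XF = "{http://www.w3.org/2002/xforms}"   # default namespace of the form body

    def spss_label_syntax(xform_path):
        root = ET.parse(xform_path).getroot()
        var_specs, val_specs = [], []
        for q in root.iter():
            if q.tag not in (XF + "input", XF + "select1", XF + "select"):
                continue
            name = q.get("ref", "").rsplit("/", 1)[-1]        # /data/sex -> sex
            label = (q.findtext(XF + "label") or "").strip()
            if name and label:
                var_specs.append(f'{name} "{label}"')
            # Assumes numeric choice codes; quote the values for string variables.
            choices = [((it.findtext(XF + "value") or "").strip(),
                        (it.findtext(XF + "label") or "").strip())
                       for it in q.findall(XF + "item")]
            choices = [(v, l) for v, l in choices if v and l]
            if name and choices:
                pairs = " ".join(f'{v} "{l}"' for v, l in choices)
                val_specs.append(f"{name} {pairs}")
        syntax = "VARIABLE LABELS\n  " + "\n  /".join(var_specs) + " ."
        if val_specs:
            syntax += "\nVALUE LABELS\n  " + "\n  /".join(val_specs) + " ."
        return syntax

    print(spss_label_syntax("my_survey.xml"))   # hypothetical form file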

If this is easier done in ODK 2.0, I'd love to try it out.

It's not. In ODK 2.0, the labels for all of the questions and options are
kept in a formdef.json file, while the data live in an actual database
file called 000000001.db.
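
That said, formdef.json is plain JSON, so if you did want to pull the labels
out yourself for analysis, a short script will do it. The sketch below
doesn't hard-code the exact schema (which can shift between tool versions);
it just walks the file and collects any object that carries both a "name" and
a "display" entry, which should catch the question prompts. Adjust the keys
if your formdef looks different.

    import json

    def collect_labels(node, out):
        # Walk the whole JSON tree and grab anything that looks like a named
        # prompt with display text attached.
        if isinstance(node, dict):
            if "name" in node and "display" in node:
                out.append((node["name"], node["display"]))
            for value in node.values():
                collect_labels(value, out)
        elif isinstance(node, list):
            for value in node:
                collect_labels(value, out)

    labels = []
    with open("formdef.json") as f:          # the form definition on the device
        collect_labels(json.load(f), labels)
    for name, display in labels:
        print(name, display)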

  • Possibilities to convert working/existing survey forms from ODK
    Collect format to ODK 2.0 format.

What I've seen online is that it is not directly possible to re-use
existing survey forms. One of the reasons for me to stick with ODK Collect
is that we've got quite a few working forms at our disposal, and re-designing
these in a different format would make me think twice about whether or
not to move to ODK 2.0. If there were a program available that would
convert forms from ODK Collect format to ODK 2.0 format, I would come around.

No, you have to start from scratch. In fact, ODK 2.0 Surveys are
definitely more complex and require more effort to design. There is a cool
"Application Designer" that you can install locally, and it really helps
with the process, but there's a different set of rules and it takes some
work to gain that competence. That's the tradeoff for the additional
features and power of working with ODK 2.0.

Best from Lusaka,
Neil
