4.4 million in printing costs, 5,000 respondents & 99% completed

September 07, 2014
Every RE was given a backpack to carry their materials and protect them from the elements. This is the supply closet after we collected all the materials.

By Tony Fuller                       
MSc-GH student
(Week 3- August 31 – September 6, 2014)

These numbers do a great job of encapsulating my week. While spending that amount on printing sounds absurd, there's more under the surface that I haven't fully explained. One obvious fact is that the amount is in Ugandan shillings, which translates to around $1,700 USD. I'm not saying the amount isn't exorbitant, and anyone who knows me knows I hate printing anything, but it's much better than what it could have been had we not employed technology. Our printing costs were mainly for supplemental documents to our main survey tool, which ran through an application on a smartphone. To fully grasp the other numbers, I need to provide a short synopsis of the methodology of our study.

In an earlier post, I alluded to the fact that the study I am currently working on, with my partner Tu, is a study meant to quantify the burden of surgical conditions in Uganda. To get at this information, we used a survey tool developed by SOSAS, which is designed to be performed at the household level. Uganda is a country of about 36 million people, so doing a survey that is generalizable on the national level means that the sample must be large. From our sample size calculations, that number turned out to be around 5,000 respondents. With the help of the Uganda Bureau of Statistics (UBOS), which performs the national census, and our mentors, we used a sampling design that allowed us to get data in each of the regions of Uganda so that not only would the data be generalizable at the national level, but also at the regional level. Uganda is broken up into four main regions and each has a number of districts within it; furthermore, each district has distinct enumeration areas (EAs) that have been defined by UBOS. For our study, we had 105 EAs that we sampled, 24 households within each EA, and surveyed two people per household. This equates to 5,040 respondents.
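The arithmetic behind that 5,040 figure is just the product of the three sampling stages. As a minimal sketch (the variable names here are mine, not from the study protocol):

```python
# Multistage sampling design: regions -> districts -> EAs -> households -> respondents.
# The three numbers below are the sampled counts described above.
enumeration_areas = 105          # EAs sampled across Uganda's four regions
households_per_ea = 24           # households surveyed within each EA
respondents_per_household = 2    # people interviewed per household

total_respondents = enumeration_areas * households_per_ea * respondents_per_household
print(total_respondents)  # 5040
```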

Even before the finer details of this study were developed, we knew that we wanted to utilize technology to aid in the process. This decision paid off spectacularly, and I am proud to say that as of this morning we have collected 99% of the data in a little over two weeks. If we hadn't used technology, this process could have taken well over a few months. Specifically, the technology we used was smartphones, solar chargers, wireless routers, and servers to store our data. Each research enumerator (RE) on our study, of which we had 100, was provided with a smartphone (thanks to a partnership with PMA 2020 Ugandan Project leader Dr. Makumbi) with the application Open Data Kit (ODK) installed on it, along with either an additional phone battery or a solar charger. This allowed each RE to simply download the survey from the server and gather data from each respondent directly on their phone. Nightly, they uploaded their data to the server, which we then downloaded each morning and checked for errors. The extra batteries and solar chargers meant REs were never without a fully charged phone. This process was not without its own issues, but those challenges will be discussed in another post.

While technology can truly help a project, as it did for us, what matters most is the people using it. All of our REs were familiar with the technology and the application we used, as they had already used it for PMA 2020. We also spent a significant amount of time before going to the field training our REs on how to use the technology in the context of our study. During this training we ran mock interviews, which allowed us to work out the issues ahead of time so that REs wouldn't hit them in the field and get stuck. Lastly, we hired a data manager for our project, Sam K, who I worked with closely each morning to check and clean the data. This gave us the ability to correct data errors (i.e., missing data or wrong household numbers being entered) in real time while the REs were still in the field. On Tuesday and Wednesday of this week, we had all the REs who were finished bring their supplies back, and because of this real-time data processing, we have a data set free of errors and don't need to call back any of the REs to follow up on mistakes. We are waiting on two more REs to complete their surveying, and then we will officially be done with the data collection portion of this study.
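To give a flavor of what those morning checks involved, here is a hedged sketch of a record-level validation pass like the one Sam and I ran. The field names (ea_id, household_no, respondent_age) and the rules are illustrative only, not the actual SOSAS instrument variables:

```python
# Hypothetical nightly data-quality check on uploaded survey records.
# Two of the real-world error types mentioned above are modeled here:
# missing data and out-of-range household numbers.

def check_record(record, valid_households=range(1, 25)):
    """Return a list of error descriptions for one survey record."""
    errors = []
    # Flag missing required fields
    for field in ("ea_id", "household_no", "respondent_age"):
        if record.get(field) is None:
            errors.append(f"missing {field}")
    # Flag household numbers outside the 24 sampled per EA
    hh = record.get("household_no")
    if hh is not None and hh not in valid_households:
        errors.append(f"household_no {hh} out of range")
    return errors

records = [
    {"ea_id": "EA-017", "household_no": 12, "respondent_age": 34},
    {"ea_id": "EA-017", "household_no": 30, "respondent_age": None},
]
for i, rec in enumerate(records):
    for err in check_record(rec):
        print(f"record {i}: {err}")
```

Flagging errors the same morning meant an RE could revisit a household while still deployed in that EA, which is exactly what made the call-backs unnecessary.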

It is truly an amazing feeling to have been here only a little over three weeks and to have the most involved part of this study nearly complete. Now it's time to start the data collection for my own study!