3Vs Crowdsourcing Framework for Elections launched

[Guest post cross-posted from Ihub Research. About the author: Angela Crandall studies the uptake and increasing utility of ICTs in East Africa from a user's perspective. Angela is currently a Research Manager at iHub and co-lead of Waza Experience, an iHub community initiative aimed at prompting underprivileged children to explore innovation and entrepreneurship concepts grounded in real-world experience.]
iHub Research is pleased to publish the results of our research on developing a Crowdsourcing Framework for Elections. Over the past 6 months, we have been looking at a commonly held assumption that crowdsourced information (collected from citizens through online platforms such as Twitter, Facebook, and text messaging) captures more information about the on-the-ground reality than traditional media outlets like television and newspapers. We used Kenya's General Elections on March 4, 2013 as a case study to compare information collected from the crowd with results collected by traditional media and other sources.
The three main goals of this study were to:
1) test the viability of passive crowdsourcing in the Kenyan context,
2) determine which information-gathering mechanism (passive crowdsourcing on Twitter, active crowdsourcing on Uchaguzi, or online publications of traditional media) produced the best real-time picture of the on-the-ground reality, and
3) develop a framework to help aspiring crowdsourcers determine whether crowdsourcing is viable in their context and, if so, which techniques will offer verifiable and valid information.
Download the Final Report here.
Download the 3Vs Crowdsourcing Framework here.
Download the 3Vs Framework Visual here.

By conducting a quantitative analysis of the data collected from Twitter during the Kenyan election period, we found that 'passive crowdsourcing' (data mining of already-generated online information) is indeed viable in the Kenyan election context, but only with machine learning techniques. Mining Kenyan Twitter data during an election is a valuable technique for finding timely, local information. However, it is only practical if the user has knowledge of machine learning, since without such techniques the manual process can take up to 270 working days.
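To give a sense of what automating this filtering step involves, here is a minimal sketch of flagging election-incident tweets. The keyword list and sample tweets are invented for illustration; the study's actual pipeline relied on machine-learning classifiers rather than simple keyword matching.

```python
# Minimal sketch of passive crowdsourcing: filtering a tweet stream
# for election-related incident reports. Keywords and sample tweets
# are invented for illustration; the study used machine-learning
# classifiers, not keyword matching.

INCIDENT_KEYWORDS = {"violence", "queue", "ballot", "delay", "clash"}

def is_incident_report(tweet: str) -> bool:
    """Flag a tweet if it mentions any incident-related keyword."""
    words = {w.strip(".,!?#@").lower() for w in tweet.split()}
    return bool(words & INCIDENT_KEYWORDS)

sample_tweets = [
    "Long queue at the polling station in Kibera #KenyaDecides",
    "Beautiful sunny morning in Nairobi today",
    "Reports of a clash near the tallying centre",
]

flagged = [t for t in sample_tweets if is_incident_report(t)]
print(len(flagged))  # prints 2: two of the three samples are flagged
```

Even a crude automated pass like this narrows the corpus to candidate reports for human verification, which is why manual review of an entire election's Twitter stream (estimated at up to 270 working days) is not a realistic alternative.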
The second objective of the study was to understand what information, if any, Twitter provided beyond traditional media sources and other crowdsourcing platforms such as Uchaguzi. We found that Twitter reported incidents as fast as, or faster than, traditional media (as measured in days), though these reports had the disadvantage of not being verified beforehand, as reports from traditional media or Uchaguzi are. Twitter also contained localized information useful to particular interest groups that may not be broadcast by traditional media; aggregating such content could string together newsworthy information on a grander scale.
Our third objective was to determine whether particular conditions need to be in place for crowdsourcing using online and mobile technologies to be a viable way to gather information during an election. By looking at the particular case of the 2013 Kenyan election, we found that there are indeed factors and considerations useful in assessing whether there will be an adequate online 'crowd' to source information from. These include, among others: 1) the availability of, and access to, the Internet; 2) the adoption and penetration of mobile telephony; and 3) the extent and culture of social media usage. We further found that it is crucial for aspiring crowdsourcers to consider what type of data is required before deciding how to gather it. For our project, for instance, we desired data from multiple sources for comparative analysis, so we used passive crowdsourcing alongside an existing active crowdsourcing project, Uchaguzi.
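The three enabling conditions above can be thought of as a simple checklist an aspiring crowdsourcer runs before committing to a project. The sketch below is purely illustrative; the field names and all-or-nothing logic are our simplification, not part of the 3Vs framework itself.

```python
# Illustrative checklist of the three enabling conditions discussed
# above. Field names and the strict all-three-must-hold rule are a
# simplification for demonstration, not taken from the 3Vs framework.

from dataclasses import dataclass

@dataclass
class ElectionContext:
    internet_access: bool       # availability of, and access to, the Internet
    mobile_penetration: bool    # adoption and penetration of mobile telephony
    social_media_culture: bool  # extent and culture of social media usage

def crowdsourcing_viable(ctx: ElectionContext) -> bool:
    """An adequate online 'crowd' needs all three conditions to hold."""
    return (ctx.internet_access
            and ctx.mobile_penetration
            and ctx.social_media_culture)

print(crowdsourcing_viable(ElectionContext(True, True, True)))   # prints True
print(crowdsourcing_viable(ElectionContext(True, False, True)))  # prints False
```

In practice each factor is a matter of degree rather than a yes/no flag, which is exactly the kind of context-specific judgment the framework prompts the user to make.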
Based on these findings, we designed a '3Vs Crowdsourcing Framework for Elections' for practitioners such as journalists and crowdmappers. The framework aims to provide guidelines for any crowdsourcer, new or experienced, who is interested in assessing whether crowdsourcing is a viable option in a particular location and, if so, what type of crowdsourcing is appropriate. It helps users carry out an effective crowdsourcing activity by prompting the potential crowdsourcer to investigate the factors that facilitate the sharing of information by 'ordinary citizens,' who generate the bulk of crowdsourced information.
We are grateful to Canada’s International Development Research Centre for their support in funding this research.