
Social media and geospatial technology offer access to huge amounts of data, but the vast ethical implications are often ignored

(21 June 2018) We sat down with Gabrielle Berman, our expert on research ethics, to chat about her two new discussion papers which explore the ethics of using new technologies to generate evidence about children. The papers, written collaboratively with UNICEF’s Office of Innovation, highlight the advantages and risks of using these technologies to gather data about children. They also provide useful guidance for researchers – especially those unfamiliar with technology – on the questions they should be asking in order to protect children’s rights.

Twelve-year-old Waibai Buka (second left) teaches her friends how to use a computer tablet provided by UNICEF, at a school in Baigai, northern Cameroon.


What inspired you to produce these two discussion papers?

During staff trainings, we kept receiving requests from staff who wanted technical advice on using these technologies for data collection and evidence generation. Most didn’t know where to start or what to consider when thinking about using these technologies. The papers were the initial foray into an incredibly complex and important area for UNICEF: using technology for evidence generation.

Why focus on social media and geospatial technology specifically?

We started with social media and geospatial technology because these were the two that were the most prevalent in the organisation at the time, and there was the most demand for guidance. We should now also be considering the ethical implications of technologies like biometrics, blockchain and wearable technology. We have already started receiving requests from staff for technical advice around the ethical implications of these new technologies.

The papers were written in collaboration with the Office of Innovation. What were the benefits of the process?

Without this collaboration, the papers wouldn’t have the same status. The dialogue and relationships that were established through the collaboration of the two offices are just as important as the papers themselves. Working together also meant that we could establish an advisory group which spanned everything from ICT for Development, to communications, to data and analytics. In this way, people with all sorts of expertise across the organisation could provide input into the papers.

What are some of the most common misconceptions about the use of these technologies for evidence generation?

It depends on what side of the fence you’re on. From my perspective, one of the biggest misconceptions is that technology is unequivocally good, meaning these technologies could be used without appropriate reflection on the implications and potential impacts. However, for those on the technology side, one of the biggest issues is that technology won’t be used for fear of the complexity of the ethical implications. The benefit of collaborating with the Innovation office was that we had two different perspectives, both equally valid. Through dialogue, we acknowledged the benefits as much as the risks and agreed that what was required was reflective practice. It’s not an absolute yes or no, but rather an “if”, a “how can we do this?”, and a “what strategies do we need to consider?”

Adolescent girls look at social media posts while attending a "Lifeskills" event in Union Development & Culture Community Centre in Djibouti.


What is the biggest challenge with regard to ensuring ethical compliance when using these technologies?

The biggest challenge is understanding the implications of the technology when you’re not a technology native. It’s incredibly difficult for staff in the field to know the type of questions they should be asking. They also have to receive responses in plain English in order to start thinking through the ethical issues and the potential mitigation strategies. Part of the challenge is to change thinking and empower those who aren’t tech natives to feel comfortable enough to ask questions and interrogate the potential implications of the technology. We must avoid abdication of responsibility to tech experts and remind staff that they are in fact the experts on potential implications for children. To be a child advocate, it’s incredibly important to ask the right questions, to understand, and to take responsibility for the technology and its implications.

"We must avoid abdication of responsibility to tech experts and remind staff that they are in fact the experts on potential implications for children."

How can we empower people to feel like they can ask the right questions?

Firstly, we need to provide them with guidance. But importantly, we need to get stakeholders around the same table, including the social media companies, the data scientists, and the communities we work with. We should bring these people with different perspectives together, acknowledging everyone’s expertise, and engage in dialogue on what the potential ethical issues are and how they can be mitigated. This joint risk assessment is a key way to start a constructive dialogue on the issues and potential mitigation strategies.

How can we mitigate the threat of data and algorithms informing policy without appropriate engagement and dialogue?

Firstly, it’s important to appreciate the implications of the data sets on which algorithms are based and to be aware that the algorithms may have built-in biases. For example, certain populations are more likely to be monitored, and so arrest data are likely to be higher for these groups.

Following from this, we need to understand that certain populations may be excluded from the data sets. Algorithms are based on training data, so unless all communities are included in the data, the outcomes and predictions are not going to be representative of these communities. For example, when data is gathered via smartphones, those who don’t have a smartphone are excluded.

Thirdly, we must recognise that modelling looks at trends only and does not consider individuals. Algorithms may see these trends as a whole but, like any type of quantitative data, they will miss the qualitative nuances underpinning these findings. While it may be very easy to adopt big data sets and use them to determine policies, we must not forget that there are very real risks in making decisions based on quantitative trends alone. The Convention on the Rights of the Child very clearly says that a child has the right to have a voice on matters that affect them. If we start basing policy exclusively on quantitative data, we are not giving voice to the nuances that may explain the data, or to the individual whose circumstances may differ from the broader findings.

It’s very important that we acknowledge the value of big data, but we must also acknowledge that individuals still need to have a voice. Listening to children’s voices should never be replaced by a purely quantitative approach, so while data is a very valuable tool, it is only one tool.

"We must not forget that there are very real risks in making decisions based on quantitative trends alone."

Are there any other ethical risks that stem from using this type of data?

With social media data in particular, if you run algorithms against the data, you may start using it for purposes other than those people originally intended when they submitted it. The moment you take data out of context, it may lose its contextual integrity, in which case we have to ask whether it is still relevant. This idea of contextual integrity needs to be interrogated.

Adolescent girls use cellphones and tablets in a solar kiosk providing internet connectivity in the Za’atari camp for Syrian refugees, Jordan.


Does ensuring ethical compliance also provide an opportunity to educate people on technology and online privacy?

I don’t see it as an opportunity but rather an obligation stemming from our use of these technologies. If you’re using third-party data in a public way, it must also be announced publicly, in the spirit of full disclosure. We must recognise the importance of transparency in the work we do, particularly when it may be difficult if not impossible to secure informed consent. With social media projects, where you’re actively using the platform to engage young people, it’s incredibly important that you actually provide information around privacy. You cannot guarantee that children have access to child-friendly explanations, and so it’s our responsibility to educate and to be clear about the risks involved.

Are there any additional potential risks specifically associated with geospatial technology?

Geospatial technology has been invaluable in both development and humanitarian contexts, but we need to think about where we source the data, how useful it is, whether it’s a two-way dialogue, and whether we can respond to any requests for help in humanitarian contexts. We particularly need to be concerned with the potential risks involved and the security of this data in humanitarian contexts. In these situations, we’re often dealing with vulnerable populations. Because of this, we must be very careful to ensure that access is limited to those who absolutely need the data, particularly any raw and individual data, with specific consideration for the safety of the populations that may be captured by the technology. For this reason, we really need to think about the interoperability of geospatial data systems: Which partners are we working with? Who in these organisations has access to the data? Why do they have access? Do we have sufficient security measures? These types of reflections are necessary to fulfil our obligations to protect the communities that we work with.

You can download the paper on ethical considerations when using geospatial technology and social media now.