
Digital Impact

11 Episodes

22 minutes | Dec 21, 2020
Internet Sleuthing 2.0: Standards for Digital Open Source Investigations
Digital Impact 4Q4: Alexa Koenig on Expanding Digital Open Source Investigations SUBSCRIBE TO THIS PODCAST ON iTUNES. TRANSCRIPT BELOW. 00:00 CHRIS DELATORRE: This is Digital Impact 4Q4, I’m Chris Delatorre. Today’s four questions are for Alexa Koenig, Executive Director of the Human Rights Center at UC-Berkeley. In 2018, with help from a Digital Impact Grant, the Center established guidelines used to inform the Berkeley Protocol, an ambitious undertaking to standardize digital open source information for criminal and human rights investigations. The Protocol explores how digital information is gathered across a myriad of platforms in order to address human rights violations and other serious breaches of international law, including international crimes. This month, the Center released the Protocol in partnership with the United Nations Human Rights Office on the 75th anniversary of the Nuremberg Trials. 00:58 CHRIS DELATORRE: Alexa, establishing legal and ethical norms is critical to this process. How is digital information useful for justice and accountability purposes, and how much of this are we seeing in the courts right now? 01:12 ALEXA KOENIG: Thanks, Chris, first of all, so much for having me. I’m really excited to have a chance to speak with you a little bit about what we’ve been doing at the Human Rights Center at UC Berkeley with support from the Digital Impact grant. To answer your question of how digital information can be useful for justice and accountability, I think the reason it’s becoming increasingly important for that purpose, and for documenting and getting justice for human rights violations, mass atrocities, and war crimes, really boils down to one fundamental thing. More and more of our communication, of course, is happening across digital platforms and in online spaces. So a lot of the communication that helps us to build war crimes cases and human rights cases is actually happening now in digital space. “As more cases begin to pile through the system, we’re going to see this kind of content increase in value for justice and accountability purposes.” The gold standard for getting someone convicted of an international crime is to gather really three kinds of evidence. The first is testimonial information, which is what people have to say about what’s happened to them or to their communities. The second is physical information, which would be, of course, the murder weapon or the soil samples if there are allegations of a chemical weapons attack. And the third is documentary evidence. Historically, this would have been contracts that were issued or written commands being given from a high-level person to the people on the ground in a conflict zone. But more and more, what we’re seeing are posts to sites like Twitter or Facebook, or their counterparts in other parts of the world. Ideally, as a war crimes investigator or human rights investigator, you find information from all of those sources. And hopefully, they point in the same direction with regards to the who, the what, the when, the where, the why, and the how of what’s happened. For international criminal investigators, we need all those things, not only to be able to explain how something happened and who’s responsible, but also to show how we know that the facts we are claiming are in fact true. With regards to how much we’re seeing in the courts right now, it really depends on which courts you’re talking about. 
So domestically, we’ve certainly seen a rise in the introduction of social media content over the past 15 years. But with international courts, like the International Criminal Court, we’ve really seen a growing awareness since about 2014, 2015. And there have been two milestone cases that have really demonstrated how powerful social media content can be for court processes. The first was what’s referred to as the Al-Mahdi case. And it was a case that was brought in the International Criminal Court to try and get justice for destruction of cultural heritage property in Mali. In that particular situation, the prosecution used satellite imagery and other digital information that you could get out there to try and establish where these crimes happened and who might have been responsible. A second milestone really came in 2017, and again, in 2018, when an arrest warrant was issued for a guy named Al-Werfalli from Libya, who was accused of the extrajudicial killing of 33 people across that state. So because that became the basis for the warrant of arrest, I think much of us in the international justice community are sort of holding our breath to see how the court ultimately deals with information in those two cases. In the Al-Werfalli situation, the basis for the arrest warrants was seven videos that were posted to social media. So it’ll be the first time we really get a chance to see how powerful those kinds of videos and that kind of social media content can be for establishing an evidentiary record. Finally, I think we’ve really seen it also in the international Criminal Court’s strategic plan, their last two strategic plans, which have established that they need to be building more capacity within the courts to do these sorts of investigations. A war crimes case might take five, 15, even 30 years before the atrocity lands in a court of law. And I think as more and more cases begin to pile through the system—and we’ve seen more and more international cases that are being brought for atrocities that happened in the era of social media—we’re going to see that this kind of content has increased in value for justice and accountability purposes. 05:10 CHRIS DELATORRE: Developing a protocol like this must require collaboration across cultures and professions—no small task in this current social and political climate. According to the Protocol, one of the biggest challenges is an overwhelming amount of information online, some of which may be compromised or misattributed. Your question of how do we know what we’re claiming is true really resonates, especially now. I mean my mind goes straight to social media—disinformation, computational propaganda, cyber harassment. How do you preserve the chain of custody for this kind of information? In other words, how do you find the info you need on social media, and then how do you verify the content is real? Are you currently working for instance with private tech companies to ensure efficacy across existing platforms? “Trying to find relevant information [on social media] is a little bit like trying to find a needle in a haystack made out of needles.” 06:04 ALEXA KOENIG: You raise such an important point that the information ecosystem in which we’re working as we try and find evidence of human rights violations and war crimes is one that’s really replete with misinformation and disinformation today. 
And so, I think a very healthy skepticism on the part of the legal community, and particularly of judges, is, how do we know that a video that claims to be from Syria, say in 2018, is in fact from Syria, is in fact from 2018, and can be helpful to proving the facts that it claims to help establish? In terms of how we find that content, it’s obviously a really tricky environment in which to work. We now have over 6000 tweets going out every second, more than 500 hours of video being posted to YouTube every minute. So I have a colleague who’s explained that trying to find relevant information is a little bit like trying to find a needle in a haystack made out of needles. So that takes a tremendous amount of creativity. And as you said, collaboration to, and really cooperation across the international justice community, to know what to look for, what platforms might be hosting that information, to have the language skills to find it in the native languages of where these atrocities have taken place, and to know the rich array of tools that have been developed to help us with advanced searching across multiple social media sites. In terms of how we verify it, and in this kind of information ecosystem, it’s so critical for us to understand the facts that we’re coming across. Of course, for human rights advocates, facts are our currency and that currency is being devalued right now, as more and more people throw doubt on the kinds of information that we’re finding online. Our reputations are critical for the legitimacy of the work that we’re doing. So we take a really detailed three-step process to verifying that content, which is very much spelled out in the protocol. The first is technical. So if we find a video or a photograph, we look to see, is there metadata? So information about that item and when it was created and how it was captured, that can help us confirm or disprove what we’ve been told about that video or photograph. Unfortunately, when a video or photograph is put up on social media, however, most social media companies will strip the metadata, the pieces that say what camera this was shot on, what the date was, what the geo-coordinates were of where the person was standing when they captured it. So, a big part of the process that we then engage in is really building that information back in. The first way we would do that is to look at the content of the video or the photograph, and see if what we are seeing in that video or photograph is consistent with what we’ve been told something is. There’s a very, I would say, infamous video out there called “Syrian Hero Boy” that claims to be a young boy in Syria rescuing a young girl while they’re being fired at in a particular conflict. Many media were fooled into believing that that video was authentic, and circulated it, of course, saying that this is a really great example of people doing heroic things in times of crisis. However, it turned out that that was actually shot on a film set in Malta. And while the director had had really, I think, in support of what many of us would consider a very positive intent in shooting that film, he says it was to bring attention to the atrocities in Syria because the international community at the time really wasn’t paying as much attention as was warranted. I think it became deeply problematic and made people much more aware that they have to be careful in that verification process. 
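The first, technical step Koenig describes here, checking whether a file still carries metadata, can be illustrated with a short script. This is a minimal sketch rather than a tool from the Berkeley Protocol: it assumes Python with the Pillow library and a locally saved image file, and it simply prints whichever EXIF fields (capture date, camera model, GPS position) survive, or notes that the hosting platform has stripped them.

```python
# Minimal sketch: inspect whatever EXIF metadata survives in an image file.
# Assumes the Pillow library (pip install Pillow) and a local file path.
# Illustrates the "technical" verification step only; it is not the
# Berkeley Protocol's own tooling.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def inspect_metadata(path: str) -> dict:
    """Return a dict of human-readable EXIF tags, or {} if they were stripped."""
    with Image.open(path) as img:
        exif = img.getexif()
    fields = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    # GPS data, if present, sits in a nested IFD (0x8825).
    gps_ifd = exif.get_ifd(0x8825)
    if gps_ifd:
        fields["GPSInfo"] = {GPSTAGS.get(k, k): v for k, v in gps_ifd.items()}
    return fields

if __name__ == "__main__":
    # "downloaded_photo.jpg" is a hypothetical file name used for illustration.
    meta = inspect_metadata("downloaded_photo.jpg")
    if not meta:
        print("No metadata found - likely stripped by the hosting platform.")
    for key in ("DateTime", "Model", "GPSInfo"):
        print(key, "->", meta.get(key, "absent"))
```

When the script reports nothing, the investigator falls back on the second and third steps Koenig outlines: analyzing the content itself and tracing the source.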
The reporters who worked on that story and who shared that story noticed that there were signs in the visual imagery that suggested this was, in fact, Syria. However, the reporters that ultimately didn’t share that video further, were the ones that went into the third step of the process, which is to really analyze the source of that video. This can be the huge differentiator. So, you always want to trace the origins of a particular item back to the original person who shot it, and make an assessment of how reliable they are for that particular piece of content. In terms of working with private tech companies, we’ve been talking with them at the Human Rights Center at Berkeley since about 2013, with regards to a lot of these issues, trying to help them understand how valuable a lot of the information that they host really can be for justice and accountability. A lot of the information that we’re interested in is the very information that they’re most likely to take down because it violates their terms of service or their user guidelines. It can often be very graphic material, very contested material. And they take it down for, sometimes, really good reasons. We don’t want people inadvertently exposed to information that will be emotionally upsetting or distressing, potentially even harmful. They also will take it down for privacy considerations. But that can be so valuable to establishing, not only the legal, but the historical record of some important moments, not only geopolitically, but in terms of human lives and human experiences. So a big part of what we’ve been increasingly working on is trying to figure out how that information doesn’t get destroyed, but somehow is preserved for later accountability efforts. 11:13 CHRIS DELATORRE: You touched on the importance of language skills—now this fascinates me. In essence we seem to be talking about learning a new language altogether. Now, you’re also working with technologists to improve the quality of information found on social media—by better identifying misinformation, for instance. You’ve mentioned an 89% success rate. This reminds me of a conversation we had with Niki Kilbertus, a researcher at Max Planck Institute for Intelligent Systems earlier this year. He says that creating a culture of inclusivity is just as important as getting the numbers and the predictions right. You’ve made clear that this protocol isn’t a toolkit per se but a set of living guidelines—which seems to leave some room for interpretation. First, how are you matching the speed and scale of online information to preserve this success rate? And second, how do you get a fact-based conversation going that’s not only accurate but also one that people can care about and really get behind? 12:19 ALEXA KOENIG: I love that interview with Niki, and this idea of creating a culture of inclusivity. I think that’s ultimately been one of the most exciting and rewarding things about working in the online open source investigations community. Many of the methods that we use for legal purposes were really pioneered by journalists. And I think you’re absolutely right that when we’re talking about lawyers and journalists working together, we often speak different languages, and have to come together and figure out how we collaborate more efficiently. 
When we’re talking about bringing computer scientists and ethicists and others into the conversation, it becomes even more incumbent on all of us to figure out how we translate our areas of practice, so that ideally, we’re all working together to create something that’s even stronger than what we could produce on our own. I think it’s at the intersection of those disciplines where true innovation can happen. And I think that the more we learn to communicate across those disciplines, the more we’re actually able to have some kind of human rights impact. As you said, we’ve really had to focus on developing the protocol to be focused on principles and not tools, in part for that reason. A lot of the process was really trying to figure out what are the principles that regardless of your area of practice or your profession, are the ones that are common to sourcing facts in social media and other online places? The idea also was that because tools change so quickly, the pace of technology is such that if we were to focus on certain tools or platforms, it may be that Facebook today is a primary source of information, but we’re already seeing that begin to shift as people change where they communicate to places like Parler, et cetera. So, in order to not have the protocol be basically outdated by the very time it was produced, we figured we would at least start setting the foundation of what people needed to know to communicate more effectively. A lot of that meant that we had to focus on, first, even define [inaudible] terminology, like what is an online open source investigation? How does that differ from online open source information generally? As lawyers, we love definitions. They help us understand what we’re talking about, but it also helps us a communication tool. As I mentioned earlier with the speed and scale, with that many tweets going out, that many hours of video to YouTube, that many posts to Facebook, it’s not a human scale where we can go through that much information just using our more traditional methods. FURTHER VIEWING The Berkeley Protocol was launched in 2020 on the 75th anniversary of the Nuremberg Trials. So increasingly, what the Justice and Accountability community has had to think about is how we can bring automation into the process. Marc Faddoul, and other researchers at UC Berkeley’s Information School and at the nonprofit, AlgoTransparency, have really tried to figure out ways to automate the detection of misinformation. So one project that we worked on this past fall with our human rights centers investigations lab, which is a consortium of students who are kind of doing this online fact-finding, and then verifying that information, has been to help try and clean up that information ecosystem by supporting their work to come up with an algorithm to detect misinformation online, and to try and basically make it as accurate as possible. A big piece of how we get people to care, I think, really comes about figuring out how we help a general public merge both the ideas in their heads with the feelings in their hearts. So I think whenever we try and convince people about what has happened on the ground and a site of atrocity, is really about not only bringing in trustworthy data, but getting people to care about that data more generally. I think a lot of the research has shown that people care about people and not numbers. And they particularly care about people with whom they can identify and who remind them of themselves or those they love. 
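The kind of automated triage Koenig mentions can be sketched in a few lines of Python. This is an illustrative example only, not the AlgoTransparency model or any algorithm built by the Investigations Lab: it assumes scikit-learn and a small, hypothetical set of labeled posts, and it scores new posts so that likely misinformation can be routed to a human reviewer rather than rejected automatically.

```python
# Illustrative sketch of automated misinformation triage: a TF-IDF text
# classifier trained on a tiny labeled sample. This is NOT the system
# described in the interview; it only shows how automation can pre-filter
# content at scale for human verification. Assumes scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = likely misinformation, 0 = likely credible).
posts = [
    "Miracle cure confirmed, doctors don't want you to know",
    "Ministry of health releases updated casualty figures with sources",
    "Video proves the attack never happened, share before it's deleted",
    "UN fact-finding mission publishes verified satellite imagery report",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score new content; anything above a threshold is routed to a human reviewer,
# never auto-rejected - verification remains a human judgment.
new_posts = ["Leaked footage shows what the media is hiding"]
for text, score in zip(new_posts, model.predict_proba(new_posts)[:, 1]):
    print(f"{score:.2f}  {text}")
```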
So I think the storytelling part of this, that taking these disparate facts and videos and photographs, and bringing them back together like pieces in a puzzle to show a bigger picture, is a big part of what we’re trying to do—really merge that quantitative work with the qualitative storytelling. 16:20 CHRIS DELATORRE: Now let’s talk about your process. It has a lot in common with previous initiatives to develop standards around investigating torture, summary executions, sexual violence in conflict. As you’ve pointed out, these efforts were essential steps to help lawyers and judges really understand how to evaluate new investigative techniques, and to also guide first responders and civil society groups on how to legally collect information. How do you plan on getting the Berkeley Protocol into the hands of everyone using content to build human rights cases—technologists, data practitioners, activists, judicial servants? And lastly, how can our listeners get involved today? “How do we make sure this isn’t just some kind of elite exercise but is something that really empowers the people who are most impacted?” 17:06 ALEXA KOENIG: Thanks for asking all of that. I think getting this out really does depend on figuring out how we reach as broad a public as possible. And by public, I mean those audiences who can, and hopefully learn something from the protocol, and adopt and adapt pieces of it for the kinds of work that they’re trying to do in the human rights space. One of the biggest strategic decisions we made early on was to approach the United Nations Office of the High Commissioner for Human Rights, to see if they would be interested in coming on board and partnering with us in this bigger effort. They had increasingly been seeing, in their fact-finding teams, the need for deeper engagement with social media and other online information. And they have been an extraordinary asset and incredible partner in trying to build this from the ground up. It’s also really dependent on—we’ve done over 150 interviews with different kinds of experts in the space, whether they’re human rights investigators, war crimes investigators, journalists, technologists, etc., to better understand how the protocol could be designed to be as effective and as efficient as possible, but also to begin to answer the questions that as a community of practice, we knew none of us could answer on our own. So we also hosted a series of workshops to deal with some of the stickier unknowns, and begin to build the norms, and begin to build a networks that I think are really necessary for implementing some of the ethical guidelines and the principles that the protocol pulls together. Another big piece of this has been ensuring its accessibility. So we’re really excited, this is being translated into every language of the United Nations. We’re hoping by, at the latest, late spring, we’ll be able to get this out to a much broader global audience. Of course, many of the communities that have been hardest hit by human rights, atrocities, et cetera, don’t necessarily speak English or French. And we wanted to make sure, from the outset, that this was something that people could use in very different parts of the world. Another piece has been developing networks to support awareness and distribution, and of course, creating training so people even know how to use the protocol and how they can integrate it into their workflows. 
At this point, we’ve trained audiences as diverse as Interpol, parts of the International Criminal Court, different civil society groups, and activists all over the world, as well as war crimes investigators. We’ve also set up a professional training. We’re in partnership with the Institute for International Criminal Investigations, which tends to target investigative reporters and particularly war crimes and international criminal investigators, so that they ultimately know how some of these methods that were really pioneered by journalists can be brought in in a way that helps build those evidentiary foundations of cases. And of course, I don’t want to leave out the students who have been extraordinary pioneers in helping us think through how this sort of new frontier of human rights investigations can really be run. They’ve been so central, whether they’re working with us at Berkeley or in other universities around the world, to thinking through the ethics piece of this. How do these methods adapt to different parts of the world in very different contexts? How do we make sure that this isn’t just some kind of elite exercise, but is something that really empowers the people who are most impacted, to get the information they want to share with the world out to people that they want to communicate it to and into the hands of people in positions of power who can do something to respond to the atrocities that they have been experiencing? We’re also part of an international network known as Amnesty International Digital Verification Corps, which is a collaboration of seven universities around the world, trying to train students to be kind of a next generation of human rights defenders. And of course, we’re also working with student groups at places as diverse as Stanford or partners at UC Santa Cruz, UCLA, et cetera. As far as how people can get involved, I think taking one of our trainings are some of the other amazing trainers out there from First Draft News, from Bellingcat, et cetera. Mostly, educating ourselves about how we responsibly deal with social media content, how we can do some basic fact-checking to know what we’re looking at and make sure we don’t share this information. Another would be to really maybe even sign up for our newsletter. We try and share information about different resources about once a month at humanrights.berkeley.edu. And finally, supporting the work. I think like many organizations doing this, we’re a relatively small nonprofit on the Berkeley campus, which is why the Digital Impact Grant from the Digital Civil Society Lab at Stanford was so incredibly invaluable to doing this work. So anyone who wants to follow some of the work that we’re doing or be in conversation about it, we would love to have you follow us on Twitter @HRCBerkeley or @KAlexaKoenig. 21:53 CHRIS DELATORRE: Alexa Koenig, Executive Director of the Human Rights Center at UC-Berkeley, thank you. Digital Impact is a program of the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society. Follow this and other episodes at digitalimpact.io and on Twitter @dgtlimpact with #4Q4Data. The post Internet Sleuthing 2.0: Standards for Digital Open Source Investigations appeared first on Digital Impact.
18 minutes | Nov 20, 2020
A New Approach to Solving the Paradox of Platform Neutrality
Digital Impact 4Q4: Alison Carlman and Alix Guerrier on the Paradox of Platform Neutrality SUBSCRIBE TO THIS PODCAST ON iTUNES. 00:00 CHRIS DELATORRE: This is Digital Impact 4Q4, I’m Chris Delatorre. Today’s four questions are for Alix Guerrier, CEO of GlobalGiving, and Alison Carlman, Director of Evidence and Learning at GlobalGiving. In 2019, the global crowdfunding platform introduced a research program and long-term discovery process to address what it calls the “Neutrality Paradox,” a phenomenon experienced by platforms moderating user-generated content. Operating under the guise of neutrality, platforms like GlobalGiving are often forced to take a stand on various issues, which presents an inherent paradox of neutrality. A year later, the project has taken a surprise turn. Today, two of the architects are here to talk about what this could mean for platforms and the future of content moderation. 00:55 CHRIS DELATORRE: Alix, one of GlobalGiving’s core values is being “always open.” The platform was designed to be neutral, to give everyone a voice. Which, as we see perhaps most vividly with social media platforms, ultimately requires making value judgements—deciding what is and isn’t allowed, for instance. Now it’s 20 years in, and you’re seeing a problem with neutrality, specifically for philanthropy intermediaries like GlobalGiving. Your new strategy, Ethos, could change all of that. Why the shift and why GlobalGiving? And how new is this concept? Should private sector platforms be following suit? 01:37 ALIX GUERRIER: Well first, Chris, I just want to thank you for giving us this chance to talk about the work. It’s great to have a chance to talk to you live and I know that Alison and I are pretty proud of the work that she’s been leading and that we’ve been doing on this topic. So it’s great to have a chance to share a little bit more about it. And I also want to thank you for starting with one of our core values, one of our four, “always open.” And that is the idea that great ideas can come from anywhere. Among our corporate values, it really is quite a foundational one for us. “Not all the right approaches are known. And so you want openness to allow for new ideas, new approaches that you’ve never heard of.” When our founders started GlobalGiving at the World Bank, when they were executives at the World Bank, the idea for this crowdfunding platform and this giving platform really grew out of this knowledge that no one person holds all the answers. Right? Even for the super smart people working at the World Bank, not all the answers are known. Not all the right approaches are known. And so you want that openness to allow for new ideas, new approaches that you’ve never heard of, to see the light of day and get funding and perhaps scale. So, openness is core to our identity. If you’ll forgive me for making this analogy. You know, I studied physics when I was an undergrad and there’s this fact in physics that at the very beginning of the universe there were many forces—magnetism, the forces that hold atomic nuclei together—that actually were the same force, that they started out being the same thing. And then as the universe evolved they sort of split off and differentiated. And actually, I think that that can be true—if it’s a little bit of a forced analogy—about the idea of openness and this other concept that you introduced which is neutrality. That, for a small scale, they start to look the same. 
But then as we grew that’s when it became clear that although it’s critical that we hold onto the idea of openness, this idea of neutrality really started to fall apart. And what happens as you grow is that, even though we can be neutral for 90% or 99% or 99.5%, that small fraction of cases where neutrality fails starts to take over much more time — much more of our time. And so that’s what we’ve experienced as we’ve grown as a platform, that those cases where it really is impossible to maintain neutrality. And I’ll just give you one example. And the example is, let’s say a nonprofit partner wants to come onto GlobalGiving to promote a project that is founded on an idea of vaccine hesitancy. Well, that’s a topic where, regardless of what you decide, even a sort of hands off, “let the project stand” approach, you’re actually not being neutral. You’re making a stand, either proactively or implicitly, on the validity of this idea of vaccine hesitancy. And so that’s just one example of where neutrality fails. And so, as you grow, those cases, although they take their small percentage, start to demand more and more attention. FURTHER READING How can online platforms be held accountable for not doing enough to remove harmful content? GlobalGiving has a plan. The reason that we started to work on this and how we came to Ethos was when we realized that we needed a more systematic way of thinking about this. In terms of novelty, you know, you asked how new is this concept. I think it’s based on tools that have existed for a long time. We’ve used human-centered design. We’ve incorporated ideas coming out of restorative justice. But putting them together on this topic is new. It’s showing that there’s actually quite a bit that we can do. And, you know, you also, I think, quite intuitively linked this to some of the challenges that are facing our for-profit cousins. You know, platforms that everyone out there is familiar with. And I very much do think that there are lessons here for those platforms to take away. 06:12 CHRIS DELATORRE: Alison, you explained the word “neutral” pretty well in a reflection earlier this year. You wrote, “Neutrality is, at best, a failed principle that has proved inadequate in practice, and, at worst, is a blunt tool used deliberately by those looking to avoid accountability and controversy.” In exploring neutrality, you identified a grounding concept that test groups agreed on. Can you tell us about that and how it influenced the shift to Ethos? 06:48 ALISON CARLMAN: Thanks, Chris. Yes, you’re right. So, you know, we’ve been working on this question for almost two years now. We’ve been working with our peers and stakeholders about basically how to create a framework for making these decisions. And so we co-developed some prototype tools earlier this year and I tested them with platform leaders in three countries, back when we could still travel, if you remember those days. And, you know, as we tested these tools, I actually watched some of them fail very publicly. I won’t forget watching a workshop in London in February where participants sat around tables and they tested out one of the original tools which was the values prioritization exercise. So, small groups sat around roleplaying as platform leaders facing dilemmas. And, you know, one part of the exercise is simply to sort and prioritize cards that have — had these values words on them. And after 20 minutes, some of the groups only got through one or two words. “We had to go beyond values. 
We had to move beyond the ‘why’ of working together because it wasn’t enough.” So, I learned this wasn’t going to work. We were going to need something different. People would get into these drawn out debates about how we interpret the term “transparency,” for example, and what that means for the hypothetical dilemma that they were facing. So, talking about values alone wasn’t helpful for getting to a solution. And I think that may even be why platforms that have these very clear mission statements and clear corporate values are still facing these challenging dilemmas. Because values alone aren’t helping us get to decisions. Because if we can’t get small groups of like-minded people to agree about how to interpret values when they face a dilemma, there’s no way that we can get competing stakeholders to agree. So, we had to go beyond values. We had to move beyond the ‘why’ of working together because it wasn’t enough. Instead, we had to start talking about how we will work together. And that “how” is what we mean by Ethos. 08:49 CHRIS DELATORRE: Ok, so what exactly does Ethos look like in this case? 08:54 ALISON CARLMAN: Ethos is this set of guiding principles that came out of our research with our own stakeholders. So that was our non-profit partners, our corporate partners, donors, and funders. And it’s an agreement about how we’re going to engage with everyone at the table with empathy and curiosity in order to uphold everyone’s integrity. Because integrity was that grounding principle that everyone could agree on. Not just the personal integrity and individual’s integrity making a decision, but also the integrity of the business model in the organization, so it can continue to run. So, the Ethos process is this how-to guide for helping groups engage in mindful inquiries through interviews and group conversations. And it’s designed to help them come to a creative resolution. The actual process begins by exploring and really getting to understand the root cause of the problem. And stopping to understand what power dynamics are at play. So really naming the problem and framing it well. And that helps us identify and then speak with the right stakeholders so we can conduct interviews, analyze and synthesize our findings, and then we can present that to a group of five or seven stakeholders that we call the Ethos Council. So this is a group that then meets and makes a recommendation to our leadership team. And in doing this, in our testing so far this process has led to more creative, more confident decisions that better uphold individual people’s integrity and also the integrity of the organization. “It’s an agreement about how to engage with everyone at the table with empathy and curiosity in order to uphold everyone’s integrity.” 10:32 CHRIS DELATORRE: Alix, for some, this all might seem counterintuitive, right? Sometimes remaining neutral makes sense. Journalists, scientists—these are professionals trained to observe and report, to avoid expressing opinions, to advance the objectivity of truth, if you will. Is neutrality dead or is there still a place for it on an intermediary platform like yours—or elsewhere? FURTHER READING GlobalGiving established a collaboration of more than 100 peers and the verdict is in: neutrality doesn’t work. 11:00 ALIX GUERRIER: That’s a good question. It has a pretty straightforward answer. The answer is neutrality is dead and it has a place on our platform and others. 
I’m reminded of an exercise that we did in the process of working on this, where we had a bunch of folks in the room from different organizations. And we labelled one side of the room with the phrase “neutrality is essential.” And then on the other side, we put “neutrality is garbage.” And that’s the actual word that we used, “neutrality is garbage.” And we asked people to go stand in the place that represented their thoughts on this. And guess what? People went to both sides. And the way to make sense of this seemingly, sort of, contradictory result is to think about specificity. You know, as an example, I used to be a classroom teacher—a math teacher, as a matter of fact. In the classroom when I was a teacher, I was very opinionated about the best way to teach math. I had a philosophy that developed over time and was based on evidence, and it was a certain perspective on the way that I thought was the right way to help kids learn. Now, here at GlobalGiving, we support a lot of organizations that are education-focused. And we are neutral—I think quite appropriately so—with respect to different pedagogical approaches. We don’t take a stand on them. This is where the idea of openness comes in. It’s, in fact, a critical part of our mission to provide a way for multiple approaches. Not just one, right? Not just my [inaudible] ideas about how to teach math, but multiple ideas to flourish. And so, we’re literally neutral with respect to pedagogical approach. So, there’s a place for it. Now, where it falls apart is where a platform, its leaders, try to take that idea of neutrality—which does have an appropriate place in specific instances—and kind of stretch it to be a blanket or a shield to cover everything. Especially under the ultimately false hope that it protects them from criticism for some of the dangerous, bad ideas that come onto the platform. So, it’s really that approach which, unfortunately, is pretty common still today. This idea that, as a platform, it can be neutral and, in fact, we’re going to be so neutral that you don’t even have to bother us and we’re, you know, safe from any criticism for anything that happens on our platform. It’s really that idea of neutrality that we’re attacking here. “As soon as we started talking about it to other platforms, people’s eyes lit up, they wanted to contribute.” 14:00 CHRIS DELATORRE: This question is for you both. Right now, you’re collaborating with peers and stakeholders to develop a library of tools that will be available to the public soon. How is GlobalGiving working with other platforms to make this available and how can organizations get involved? 14:30 ALIX GUERRIER: Well, so I’ll — I’ll let you in on a secret that when we first started to work on this, it seemed pretty risky because of the sorts of issues that we were tackling. And part of our thought process was let’s bring in some other partner, some other organizations to work on this. As protection, you know? To sort of widen the target in case we get criticism, or too much criticism for this. But as it turns out, that was pretty wise, even if we had this — a little bit of a protective motivation. Because the fact was that, as soon as we started talking about it to other platforms, people’s eyes lit up, they wanted to contribute. And we have such a better result now and such a better understanding through bringing this community of partners together. And it’s a partnership that we’re using to try things out, test things out, and improve the output. 
15:42 ALISON CARLMAN: Alix is right. I’ve had the opportunity to work with so many people, including our friends at Candid and Charity Navigator, and betterplace.org in Germany. I.G. Advisors hosted the event I talked about in London, and we’ve involved a lot of our nonprofit partners as well. Folks who work in Mexico and Kenya and Indonesia and Palestine, for example. And we’ve had a really brilliant design strategist, Eli MacClaren, who’s been helping us make sure that this process is really community-led and user-driven. So we’ve involved more than a hundred people in developing this solution, so it’s not just for one of us and it’s not by one of us either. It’s for all of us. And you know, the folks at the Bill & Melinda Gates Foundation have been involved from the very beginning as well. And just this fall, they helped fund the user research we’re conducting now. And they’re helping us develop a public-facing toolkit that we’ll hopefully launch next spring. So, it will be available to anybody who wants to try and implement a better way of addressing these high-stakes dilemmas. So, if folks who are listening are interested in getting involved, in helping us test, or in becoming potential new users, we would love for you to get in touch with us. You can visit globalgiving.org or you could google “GlobalGiving Ethos” and you’ll get our contact information and more information on that page. And you can also follow us on social media @globalgiving. 17:10 CHRIS DELATORRE: Alix Guerrier, CEO of GlobalGiving, and Alison Carlman, Director of Evidence and Learning at GlobalGiving, thank you. Digital Impact is a program of the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society. Follow this and other episodes at digitalimpact.io and on Twitter @dgtlimpact with #4Q4Data. The post A New Approach to Solving the Paradox of Platform Neutrality appeared first on Digital Impact.
21 minutes | Nov 13, 2020
Investing in Digital Infrastructure: A Roadmap for Funders
Digital Impact 4Q4: Chantal Forster on Investing in Technology for the Social Sector SUBSCRIBE TO THIS PODCAST ON iTUNES. 00:00 CHRIS DELATORRE: This is Digital Impact 4Q4, I’m Chris Delatorre. Today’s four questions are for Chantal Forster, Executive Director of the Technology Association of Grantmakers, or TAG. A new guide from TAG, NetHope, NTEN, and TechSoup looks at how nonprofit professionals can invest in what it describes as three core elements of digital infrastructure. 00:30 CHRIS DELATORRE: Chantal, let’s first demystify this idea of digital infrastructure. What exactly does it mean, what does it mean for funders specifically, and why is it so important for the social sector? 00:46 CHANTAL FORSTER: You know, it feels rather invisible, doesn’t it? If we don’t see it, how vital is this thing that we call digital infrastructure, right? But, you know, imagine this: you’re driving to work, if you do drive to work, if you haven’t done that in a while. If you do, you’ll hardly even notice the road that you’re on. That road literally becomes almost invisible, right, an invisible part of our everyday lives upon which we’re totally reliant. You just don’t even think about it until it becomes riddled with potholes or too small to support the local population. I think about digital infrastructure in the same way. Our modern lives are completely dependent upon it for education, for business, for a thousand personal uses. And yes, also for nearly every element of how a nonprofit operates, fundraises, provides its services. And Chris, you know, we barely even pause to consider that until something goes wrong, until this pandemic hit and payments to the nonprofits can’t go out the door because a funder is still cutting paper checks. So, digital infrastructure is around us. We rely on it entirely, and yet, we don’t even realize that until something goes wrong. FURTHER READING The collective strength of civil society depends on building the digital capacity of both funders and their nonprofit partners. And I think there’s a very important cautionary note about when something like digital infrastructure goes wrong, right, or falls apart. Michael Brennan, a program officer for technology and society at the Ford Foundation, wrote a really great publication a while back, and this is a quote from Michael: “If digital infrastructure fails, the consequences will be the same as when physical infrastructure fails or falls apart. People with privilege and resources will find other ways to navigate the world, while those on the margins bear the brunt through higher costs, decreased access and a related lack of opportunity.” So, that gets to the heart of an equity issue about digital infrastructure. That’s really important for us to realize, and many of us have already seen this with inequitable access to education, for example. So, you know, we’re at a very important moment of recognition and awareness of our dependency on something that we’re calling digital infrastructure in this report. “Funders themselves have realized their own dependency on tech through the sudden reliance on remote work.” What is digital infrastructure? When you recognize it’s there and you see it, what is it? You know, grantmakers that I represent through the Technology Association of Grantmakers have started realizing that social change in the digital era requires an investment in technology. 
And it’s not just the tools, it’s tools and technology, it is digital skills and capacity, it is data-sharing platforms, it’s responsible data practices. It is the resourcing models to support these things—tools, tech, capacity building programs, data-sharing collaboratives, reporting platforms—however we’re going to resource that. So digital infrastructure is also figuring out what the resourcing model for this is. It’s also providing access to reliably connect to the internet at sustainable cost, regardless of one’s location globally. And then lastly, it is also a policy and regulatory framework regarding rights and equitable access to digital infrastructure. So, that’s six things at a minimum for digital infrastructure: tools and technology for the social sector to support its operations and its mission and to collaborate across organizations and across funders; the skills and capacity building to leverage those tools and tech; data-sharing frameworks, collaboratives, and reporting platforms (that’s the third); the resourcing models to support all of that; access to the internet in an equitable and reliable fashion; and then lastly, policy and regulatory frameworks for rights and equity as part of that digital infrastructure. How do we do this, right? And I think that’s the last piece. We talked about what digital infrastructure is, and then there’s the question of how we make that happen. We don’t have the time here to do a complete roadmap or blueprint for this, but several organizations are working on it. So how do we deliver on digital infrastructure? One thing I want to note is that cross-sector partnership is vital. This includes social sector partners, public sector partners, and yes, private sector partners. There’s a role for open source platforms here as well. I’ve personally lived the benefits of open source software: when I ran the digital team for the mayor of Albuquerque, New Mexico, years ago, we had limited budgets in local government. And so, we ended up developing an open source collective with other local and state institutions to share a platform and co-develop shared solutions on that platform. In the long term, there are some challenges to maintaining these systems, but the cost savings are extraordinary. And it is something that then becomes built by the community. So, to me, a comprehensive and lasting form of digital infrastructure includes various forms of partnership between public, social, and private sectors. “Recognizing their own dependency on tech will invariably cause funders to increase funding, and provide tools and capacity for nonprofits as well.” 06:28 CHRIS DELATORRE: In the funding report, you talk about tech underinvestment for both funders themselves as well as their grantees. Why is this the case? 06:39 CHANTAL FORSTER: Oh, that’s an interesting one, Chris. You know, there are undoubtedly several factors at play here. We can talk about just a few reasons. I think, firstly, if you think about the mission-driven nature of the social sector and the people who make decisions, shaping and driving this mission, historically, there have been fewer people comfortable with technology, let alone familiar with the strategic potential of tech. Not just plugging in a tool here or there but the strategic potential of tech to scale their missions. So, I’ve been intrigued to see how new philanthropy is entering this space, and changing the way that many funders think about the role of tech. 
Some of our new philanthropy funders’ benefactors are extremely tech savvy. They started some of the Silicon Valley firms, and so they’re looking at the role of tech to support the mission of philanthropy and the mission of their nonprofits in entirely new ways. Another factor on the nonprofit side, and why we’ve seen underinvestment, is that you can’t avoid talking about the overhead myth. Historically, tech was considered overhead or an indirect cost, and therefore investment in tech—baseline investment, let alone innovative investment—was limited to a portion of the nonprofit’s budget so small as to barely even equip the org, let alone be leveraged for innovation of the mission. So, we had a real watershed moment in 2019, and that was the work of the Ford, Hewlett, MacArthur, Open Society, and Packard Foundations. You probably recall this, right, when they committed together to funding overhead, so-called overhead, at much higher rates. But I think, you know, enter the pandemic, and we are seeing a shift at TAG, the Technology Association of Grantmakers. I’m starting to see that funders themselves have realized their own dependency on tech through the sudden reliance on remote work, the numerous calls from their nonprofit partners and grantees who may need tech or training, and the reliance on paperless systems. We’re seeing funders really recognize that tech is a vital part of not just their operations, but their mission, and certainly their nonprofits’ operations and missions as well. 09:18 CHRIS DELATORRE: Something our listeners might ask is, how is this need different between funders and grantees? What would adequate or even strategic investment look like? 09:35 CHANTAL FORSTER: Earlier, I alluded to the fact that funders have underinvested in tech historically, and are now shifting as a result of the pandemic. When we first published the funders’ guide—and by we, I mean TAG in partnership with NetHope, TechSoup, and NTEN—when we published that funding report for digital infrastructure, we only had anecdotal data around how funders were starting to think about tech differently. But I now have some real data for you, some quantitative data from the 2020 State of Philanthropy Tech Survey, which TAG just wrapped up and will be launching soon. We are now seeing funders, as a result of the pandemic, increase their own tech budgets. So, here’s some data for you: 51% of the foundations—and we had 233 foundations respond throughout North America, as well as a few foundations in the EU—51% of those foundations who responded said that their IT budgets will moderately or significantly increase in 2021. So, 51% of foundations have realized in a year like 2020 that perhaps technology is a strategic linchpin of their operations, but also of their mission. And so, that’s a very important moment. And I predict that recognizing their own dependency on technology will invariably cause funders to increase funding, and provide tools and capacity for nonprofits as well. In fact, I’m already seeing that. Again, from the State of Philanthropy Tech Survey, we’re seeing this shift underway as well. We had a question that asked respondents: in what new ways has your organization supported grantees and nonprofit partners in response to society’s challenges in 2020? So, new ways that funders were supporting grantees and nonprofits. 
Some of the leading answers: 61% were streamlining applications. Fifty-one percent were moving to paperless payments, 47% were streamlining reporting, and 41% were removing funding restrictions. But here’s what’s interesting. When we talk about thinking differently about tech for nonprofits, 28% of funders said they were now providing training and technical assistance directly to their nonprofit partners and grantees. Twenty-two percent said they were actually providing tech and tools. There’s a foundation in San Francisco that provided Zoom licenses and Slack licenses to every single one of their grantees on very short notice so that those nonprofits could continue their operations in a fully remote fashion. So, that is brand new. And I suspect that we will see funders think about their nonprofits’ tech needs—a greater recognition of them, so yes, thinking about them, but also thinking about supporting them in entirely new ways. So, I believe we have a real watershed moment. A watershed moment when civil society realizes it operates in a digital era and invents new ways of partnering outside the sector and with nonprofits in the sector. New ways of partnering to leverage this digital reality to support the mission. “The pandemic was an extraordinary moment of recognition. One of the big takeaways for the social sector was that the baseline of digital readiness is lower than we all realized.” 13:25 CHRIS DELATORRE: Assuming it’s safe to say that every organization is in a different place when it comes to capacity for technology—tools and skills, for instance—how can funding or support strategies consider not only differences in digital maturity across organizations, but also the collective needs of the sector as a whole? 13:50 CHANTAL FORSTER: You know, Chris, the pandemic was an extraordinary moment of recognition. And I think one of the big takeaways for the social sector was that the baseline of digital readiness is lower than we all realized. There was a study by TechSoup that we talked about in the funding roadmap, the funders’ guide, which found that nonprofits of all sizes reported a variety of needs: a lack of skills to understand options and make technology decisions, a lack of access to trainers or consultants who understand the sector, a lack of support or prioritization for tech from their own board of directors, aging hardware, and concerns about data management, specifically in regards to regulations and their own security and privacy for that data. So, these needs, of course, vary by the type of org, the size of org, the geography in which that org operates. But there’s a real invitation here for funders to recognize that investment in a nonprofit’s mission also requires an investment in digital access and tools and skills. So, as part of that report, NetHope, TechSoup, NTEN, and TAG, whom I represent, invited funders to consider three things. First, to really be bold. This is a catalytic moment. Be bold in considering new funding approaches: providing tools, tech, and unrestricted giving, right, so that indirect costs can be covered and a nonprofit can make choices about when to invest strategically in technology or to innovate with technology. Number two is really important. Oftentimes, funders who are including a line item of funding for tech perhaps don’t realize that nonprofits actually need access to expertise. So, number two is that we invite funders to also provide access to expertise. 
This is the result of a survey that NTEN has conducted for 15 years, I believe. And that survey shows that nonprofit staff report often having the tools they need, but not the training on how to use them well. So, it’s not just the tools and tech, but it is access to training, to expertise, to strategic expertise as well. And then thirdly, we invite funders to recognize that investing in digital infrastructure for nonprofits is not a one-and-done, right? Digital maturity requires ongoing learning and support, and really is a relationship between the funder and the nonprofit whose mission they’ve chosen to invest in. There’s an important way that funders can go about assessing where nonprofits are, and I want to share an example. NetHope and NTEN both provide digital maturity assessments that funders can use to assess themselves—it’s always good to look internally as well—and then leverage that digital maturity assessment by working with their grantees to assess themselves, and then meet them where they are in terms of funding, expertise, and a longer-term commitment to their maturity. This is an approach that everyone from the smallest family office to a midsize community foundation or even a large private independent foundation can take. I’m really proud of the several TAG members, such as the Pierce Family Foundation here in Chicago or the Northwest Area Foundation in Minneapolis, who are already using such assessments with their nonprofits to help them reflect together and then invest in the tools, tech, capacity, and training that they may need. So, we’re talking about a real groundswell, Chris. There’s a real groundswell of recognition that in the digital era, civil society means shared investment, and that tech is not an afterthought. In fact, as we speak, I’m leading a task force on digital infrastructure to catalyze this investment. It’s just a six-month task force. For the next six months, TAG is convening funders, tech and data providers, as well as orgs representing nonprofit tech—NetHope, NTEN, TechSoup—to align the variety of efforts underway, and catalyze action and investment in this work. There’s a real call to action right now. And, you know, it’s an honor to work hard on building back better together. There’s a quote I want to share from a key member of that task force—John Mohr, who’s the CIO at the MacArthur Foundation. Talking with me about the need for investment in digital infrastructure for civil society, he said, “We are at an exceptional time. All foundations are being called to action and have an obligation to deliver and respond to the needs of the world. Investing in shared platforms and integrated solutions to reduce the burden on grantees is not only a call, but also an obligation.” You are welcome to access the report at tagtech.org. It’s included right there on the front page. You are also welcome to find us on Twitter @tagtechorg. And you’re welcome to follow me on Twitter as well @cocoforster, that is C-O-C-O F-O-R-S-T-E-R. Thank you so much for having me, Chris. It is a real honor and a pleasure to be taking advantage of and catalyzing this moment that we find ourselves in, to invest very deeply in the work of civil society. 20:16 CHRIS DELATORRE: Chantal Forster, Executive Director of the Technology Association of Grantmakers, thank you. 
Digital Impact is a program of the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society. Follow this and other episodes at digitalimpact.io and on Twitter @dgtlimpact with #4Q4Data. The post Investing in Digital Infrastructure: A Roadmap for Funders appeared first on Digital Impact.
25 minutes | Oct 31, 2020
Rethinking Civic Space in an Age of Intersectional Crises
Digital Impact 4Q4: Poonam Joshi & Ben Hayes on Reclaiming Civic Space SUBSCRIBE TO THIS PODCAST ON iTUNES. 00:00 CHRIS DELATORRE: This is Digital Impact 4Q4, I’m Chris Delatorre. Today’s four questions are for Poonam Joshi, Director of Global Dialogue’s Funders’ Initiative for Civil Society, or FICS, and Ben Hayes, Director of AWO, a legal firm and consulting agency working on data rights. In 2019, FICS launched a strategic review aimed at helping funders realize their potential to disrupt and reform the drivers of closing civic space through collaborative and targeted interventions. The organization has published the first in a series of recommendations considering how a range of factors created the conditions for an accelerated dismantling of civic space worldwide. Today, we’re sitting with the authors of the report, who are calling for new ways to expand the space for civic participation. CHRIS DELATORRE: Poonam, let’s start with you. Most of those you interviewed felt that existing funder initiatives were insufficient in responding to the scale of the challenges now facing civil society. What is your vision for building a global movement that pushes against the drivers of closing civic space and how likely are governments to agree? 01:20 POONAM JOSHI: Thank you, Chris. I think the challenge is that in recent years we’ve all been aware of this phenomenon but we’ve all been looking downstream. So, a lot of the action has been around legal defense, protection, security—really necessary interventions, but not ones that would go to the heart of what would really make a difference. And through this research we had a real a-ha moment in identifying not just a range of drivers but three drivers that seem to cut across all of the issues that are going to be really contested over the next decade, from combating climate change to economic and social inequality and the future of democracy. And so, the first sort of basis for this global movement is do we have a shared analysis? And we’re really calling upon civil society funders to galvanize their focus around the three dominant drivers of closing civic space. The first one was governments using and abusing counter-terrorism and security laws and discourse and tools for political and civic repression. The second was around ideological threats to democracy and civic space by the far right and religious right. And the third was around abuse and concentration of economic power. And I think COVID has elevated the importance of that first driver. And although there are movements that exist around challenging economic power and combating the far right, what you currently have is no equivalent movement around looking at how the discourse, the laws, the tools around security—and here I’m talking about everything in terms of legal frameworks: if you look at the national security legislation introduced in Hong Kong or the anti-terror bill in the Philippines; the discourse, you know we’ve seen it in the US with the Trump administration characterizing racial justice protestors as a threat to security. We’re seeing it in the proliferation of surveillance technologies that are now being harnessed to look at problematic actors in the civic space, and not just in the sort of fora of terrorism and extremism. FURTHER READING Poonam Joshi (FICS) and Ben Hayes (AWO) are asking funders to realize their potential through collaborative and targeted interventions. 
“What opportunities does technology offer in terms of radical new forms of accountability?” What you don’t have is an equivalent movement to the climate movement, to the labor rights movement, to #MeToo, to the demand for racial justice and equality—that’s focusing on what it means to start questioning the logic of misuse of security powers and discourse and tools. In terms of where that would start, we can see pockets of movement building and resistance. We’ve seen it most vividly in the US with Black Lives Matter. There are equivalent micro-movements in other countries from within the communities—indigenous communities, feminists, women, peace and security groups, those working on environmental justice who have been on the receiving end of police brutality and militarization for years. How do we start resourcing them at the scale required so they’re not just looking at protection and security but they’re starting to question who’s behind these laws and who’s supporting this at the international level within the UN and other international bodies? The next phase would be to then link those movements globally but also link them vertically up to those international spaces where you have a handful of coalitions that have been looking at how the UN and other bodies have been driving proliferation of laws globally around counter-terrorism and security. But how do you link up these movements into a global networked response? In terms of how governments will respond, for years both authoritarian and western governments have benefited from the dominant focus on countering terrorism and security. But I think what we are beginning to see, particularly in the context of COVID, is some real disquiet from amongst western democracies about how the proliferation of emergency powers and new forms of surveillance technologies are going to equip particularly authoritarian-leaning actors with the tools and discourse and the laws they need to shut down civic space in their countries. So, the challenge is going to be how to convert the sympathy of those western democracies into some concrete changes at the international level, but also for them to hold those authoritarian actors to account on the international stage. “From where we are now, the challenge is really about looking at the way security technologies are currently impacting civic space.” 05:58 CHRIS DELATORRE: Ben, you’ve said the future of civic space will be shaped by crisis, that crisis “moves the window of regressive forces.” Let’s look at how tech can be both a problem and a solution here. Where does tech fit into this reimagining of security? Are we at risk of replacing one ill with another? 06:23 BEN HAYES: Thanks, Chris. I think I’ll start by sort of unpacking a little bit more about what we meant by crisis moving the window of regressive forces, and we know from history that crisis tends to favor forces on the right, right? So, whether that’s economic shock doctrine driven by neoliberal economists, the launch of the War on Terror, which Poonam mentioned in the introduction [inaudible] emergency powers that were then quickly made permanent, or the exploitation of economic crisis by authoritarian populists, which we’ve seen increasingly in recent years. We wrote our paper on the future of civic space before the pandemic hit. 
And obviously none of us saw it coming but it did reaffirm something that had seemed fairly obvious since the global financial crisis of 2008: that politics and social and material life more broadly were going to be shaped by the response to crisis. Now, whether that was climate change, recurring economic crisis, financial shocks, social polarization, geopolitical conflicts, whatever. So, there are reasons to be pretty pessimistic about the world’s direction of travel in 2020 but it’s not to say that it’s inevitable that we’re sliding into chaos or spiraling inexorably into disaster. The reason we did sort of frame the research we did around future crises was really to make the point that looking ahead, civil society should be thinking about how crisis [inaudible] change may play out, and what that means for their own strategies and progressive change. So, if we then look at tech, I’m not sure it’s necessarily about replacing one kind of security with another as such, but more about responding more coherently to the challenges and opportunities that certain technologies pose in the context of civil society and civic space. I think probably all of your listeners by now have woken up to what Evgeny Morozov called “the net delusion” and how the utopian promise of the internet—global connectedness and so on—would quickly give way to sort of surveillance and social control, and pushback against activism and civil society. I think that’s one point. I think we’re also collectively in the process of understanding the implications of what in the last few years has been termed surveillance capitalism as a mode of governance that potentially limits the capacity of civil society to push for radical social change on issues related to democracy, economy, and security. So if we then think about the sites of struggle in which civil society is present—so the streets, the workplace, education settings, government, the digital arena, the public sphere and so on—we then have to ask ourselves how is technology changing those spaces, and not least because—you know I think again as we’ve all seen that this sort of new normal engendered by COVID-19 is clearly accelerating these changes. Then I think the question becomes, you know, how is it that states are mediating or failing to mediate those spaces in ways which constrain or undermine civic action. As Poonam mentioned, we can ask the same question about ideological actors—those regressive forces you mentioned. How are they using these spaces to push their own agendas in ways which are often explicitly designed to limit or undermine the capacity for progressive or radical social change? In turn, I think the question then becomes what opportunities does technology offer in terms of radical new forms of accountability—so be that for the way we’re governed, be that the way security is constructed and practiced, or be that in the defense of civic space itself. I don’t think we’ve got all of the answers yet or at least sort of how to link those answers together. But what we tried to do with the paper is point towards the need to up our game from simply advocating for human rights and good government and an enabling environment for civil society and so on. Because the world is so much more messy and complex than it was even a decade ago. I think the final thing to say is yes, that sort of techno-utopian vision still persists within Silicon Valley and elsewhere. 
And there really is a danger that we say, look, these problems that we have with unaccountable police forces or the misuse of certain police powers might be solved if we came up with another way of governing at a distance that marshaled technology in ways to sort of take certain actors out of the picture or rethink the way security is conceived and practiced. But I think from where we are now, the challenge is really about looking at the way security technologies are currently impacting civic space, and then thinking constructively about how to develop a shared understanding of the harms and appropriate responses to these very messy challenges. “To get this on the agenda of governments, you have to have the public behind you on a massive scale.” 11:48 CHRIS DELATORRE: Poonam, one of your main objectives is to identify current initiatives on civic space that need scaling up. According to the Carnegie Endowment for International Peace, one of the factors limiting international response to closing civic space is a failure of funders to commit the necessary resources to addressing the issue. How do we convey to civic space funders the critical nature of investing in infrastructure and capacity? What would an effective response look like? 12:23 POONAM JOSHI: I think in terms of that urgency, and something we’ve been trying to get across in the aftermath of this report, there was a very strong sense for many of those we interviewed that we just had a very limited window for action. Many, particularly those working around technology and climate change, at most gave us a decade to really ensure that progressive forces were resourced and galvanized on par with some of the malign actors that we’re seeing already trying to advance their visions and values over the next decade. And that spans from governments—like China, who are experimenting in exporting repression, but also companies—the fossil fuel industry, still engaging in climate science denialism but also attacks on civil society. So, we’re up against considerable odds here and when we asked the movements on the frontline what they needed, yes, they want us to galvanize to counter those drivers, but they also want philanthropy to invest in positive visions. And one of the things that we highlight in the report is a section on the playbook of the far right and the neoliberals. Why have they been so successful in positioning themselves now at the kind of vanguard of shaping the systems over the next decade and beyond? And what we found was that both the far right and religious right and particular neoliberal actors were incredibly good at investing in three things and these are the three things we think progressive funders should be investing in. First and foremost, movements. Not project funding, not short-term funding but long-term, core, flexible support to movements on the ground but also connections between those movements at the transnational level. One of the ironies of nationalists like [Viktor] Orbán and [Jair] Bolsonaro and [Narendra] Modi, is even though they’re talking about sovereignty, they’re extremely good at sharing tactics, tools, and coming together in transnational spaces in a way that progressives simply aren’t. So, investing in those movements through the grassroots funders that already exist or trying to support those movements directly. Secondly, investing in visionary thinking. So, supporting progressives, not just to talk about what we don’t want but what we do want instead of current models of democracy or economy or security. 
And then thirdly, narratives. Because of this limited window of opportunity we have, as we were interviewing people, they were saying that time is not on your side. To get this on the agenda of governments, you have to have the public behind you on a massive scale. And so that requires yes, doing work on policy and advocacy and trying to convince government officials of our agendas, but actually it’s about getting the public on board. And this is something the far right, religious right, are extraordinarily good at doing. So, for progressive funders, that means funding narrative change work and strategic communications at scale and making sure those resources are available for a whole range of civic actors that currently just don’t have access to that kind of expertise, technology, or tools. So that would be a starting point for what funders should do. “The question then becomes what opportunities does technology offer in terms of radical new forms of accountability?” 16:06 CHRIS DELATORRE: The last question is for you both. It’s clear these are critical times for defending civic space, and there’s still plenty we can do. I think the question on many people’s minds is: how can we match the scale of what we’re up against? How are these critical efforts being advanced right now inside and outside of the sector, and what kinds of organizations or initiatives should practitioners be aligning themselves with? Ben, let’s start with you. 16:40 BEN HAYES: That’s a great question. And I think if we give an honest response to that, the answer is quite challenging for us. So if we’re talking about the acceleration of authoritarianism, the dramatic capabilities that states now have using surveillance technology—Poonam mentioned China but look at the way technology has been able to manage repression at scale of, say, the Uighur community, that would have been unimaginable I think just sort of a decade ago—just the sophistication of the technology, right? I think the honest answer is that there is a massive mismatch between what we’re up against in terms of the scale of the problem and the resources that civil society has. But I think there’s also a number of trends that we can already see that give us cause for optimism. And I think the first of those, perversely, is that as closing civic space comes to affect more activists and movements and organizations and philanthropic funders, there is this growing realization that these problems are systemic, intersectional and profound in ways that Poonam mentioned. And it is something that is affecting democratic and illiberal regimes alike. So now we’re starting to see humanitarians talking about closing humanitarian space, doctors criticizing over-securitized responses to COVID, educators pushing back on ideological interference in the education system, tech workers demanding divestment or withdrawal from repressive technologies or toxic partnerships. And so, I think in much the same way as Ed Snowden—whose revelations didn’t result in the reform of surveillance policies he had hoped for, we all had hoped for—his legacy was to radicalize a generation of software engineers. And I think this sort of overtly authoritarian turn of the last few years is radicalizing people outside of traditional activist and civil society spaces in ways that the sector has to recognize and engage with. 
I think this is already resulting in the kinds of innovation and cross-sectoral alliances that are going to be needed moving forward, both within professional civil society and outside it, relationships that simply weren’t there before. And perhaps most importantly I think there’s a sort of realization that activists and social movements that are on the sharp end of the repression and institutional brutality that the term “civic space” so poorly captures in a way, are potentially much better vehicles for elaborating change and alternatives than sort of professional civic space advocates have managed. Poonam already mentioned Black Lives Matter and how they’ve already shifted the conversation on how to address overtly securitized, unaccountable police forces in ways that just weren’t a part of the conversation a couple of years ago. The overwhelming lesson, and one that I think many people have already heeded, is that for those of us who are engaged in trying to reimagine or develop a shared vision of democracy, economy, security that’s fit for the decades ahead, it is going to be about listening and engaging with social movements in ways that civil society traditionally hasn’t done and hasn’t done well. 20:32 CHRIS DELATORRE: Poonam, same question. How are these efforts being advanced right now, and what kinds of organizations or initiatives should practitioners align themselves with? 20:44 POONAM JOSHI: I think across the three drivers, we’re seeing mobilization at different levels that would advance a whole range of issues including civic space. In relation to abuse and concentration of economic power, we’re obviously seeing a growing climate justice movement globally, but we’ve seen a resurgence of labor rights activism in the context of COVID, and there are existing networks and initiatives that funders could support. But there are also a number of philanthropic entry points for those new to this issue. And in particular I’d like to point out FORGE, which is a new pooled fund looking at issues of human rights around the global economy. There’s the Edge Funders Network, which is a space for funders to come together to figure out where they can focus their efforts around issues of climate change and environmental justice. But I know that climate—the intersection between climate and civic space is one of the issues that they’re looking at. I think that when you’re looking at the far right and the religious right, there’s been some amazing work done by a coalition of LGBTI, feminist, and sexual and reproductive rights groups over recent years in tracing where the money is coming from for systematic campaigns across the US, Latin America, Europe, and parts of Sub-Saharan Africa targeted at rolling back women’s rights, LGBTI rights, etc. And the coalition that’s behind that, which is being led by the global philanthropy partnership, is trying to do something very ambitious next year, which is bringing together 200 funders united by the fact that the movements that we are engaged with are on the receiving end of a global, systematic attack on human rights. So, that’s certainly another entry point that funders could align themselves with. When you’re talking about digital threats to democracy, there are some really interesting developments that are trying to match the scale of what we’re up against that Ben mentioned. 
So, for example, Luminate has set up a new fund called Reset, which is looking at countering digital threats to democracy, and within that looking at the heart of the power of the tech giants and how to start fragmenting that power in ways that they can actually be held to account. But you’ve also got initiatives like Ariadne that is trying to get all of its members to incorporate a focus on tech and power in their work. So, the stepping stone for much of this work is going to be convincing not just the five or six funders that already fund this work, but figuring out how you build a community of funders, for example, willing to fund at the intersection of tech, threats to democracy, and threats from security. So, I think that’s certainly something that’s worth your listeners checking out. In relation to security itself, we’ve seen a massive gap in resources, so FICS is going to be pivoting over the next six months to set up a new fund on securitization and civic space to provide a vehicle for those funders who can see that all of the issues that they care about and the grantees that they support are being criminalized, are being delegitimized, are being surveilled, and who would like to do more: in addition to helping them with digital and physical security, they want to help them boost their efforts to counter what’s coming from the upstream level. So, FICS is going to be launching that fund in March of next year and through that we’re hoping to support both international groups and frontline movements to take united action against securitization. 24:45 CHRIS DELATORRE: Poonam Joshi, Director of Funders’ Initiative for Civil Society at Global Dialogue, and Ben Hayes, Director of AWO, thank you. Digital Impact is a program of the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society. Follow this and other episodes at digitalimpact.io and on Twitter @dgtlimpact with #4Q4Data. The post Rethinking Civic Space in an Age of Intersectional Crises appeared first on Digital Impact.
15 minutes | Oct 1, 2020
Can a Machine Learn Inclusivity? That Depends on the Teacher
Digital Impact 4Q4: Niki Kilbertus on Closing the Fairness Gap in Machine Learning SUBSCRIBE TO THIS PODCAST ON iTUNES. 00:00 CHRIS DELATORRE: This is Digital Impact 4Q4, I’m Chris Delatorre. Today’s four questions are for Niki Kilbertus, doctoral student and researcher at Max Planck Institute for Intelligent Systems. With funding from Digital Impact, Kilbertus is developing an online library to help counteract bias. With racial unrest shedding new light on AI’s fairness problem, his open source tool, Fairensics, is aiming for a more holistic fix. 00:36 CHRIS DELATORRE: Ensuring fairness in artificial intelligence. This seems like a lofty expectation. But despite the challenges, ending discrimination in digital services is something you’ve set out to do. Where does the concept of fairness get lost in the stages of app development and what would a failsafe look like? 00:57 NIKI KILBERTUS: Right. That’s a tall order, I agree. I think that the most important point here is that we’re not just trying to fight this specific technical detail of one algorithm or multiple algorithms in isolation. I mean, these issues of fairness and injustice and discrimination are much larger than just a technical algorithm or a technical solution. These are things that are deeply ingrained and rooted in our society and sort of our collective thinking almost. I think therefore it’s sort of delusional to think that the challenge of achieving fairness or avoiding discrimination in machine learning and in AI is any smaller than the general discussion we’re having about these issues. “When it comes to these narrow statistical definitions, there will never be a satisfying standalone check for fairness.” Algorithms can certainly exacerbate or amplify and even create harms by themselves. And if the question is where does it happen in the stages, that could happen anywhere. And that is precisely the issue. We cannot just put our finger on, oh it’s the data and only if we get the data right everything will be fine. Or it’s the optimization technique that we’re choosing. It could happen in all the design choices, from the very beginning to unexpected or unforeseen consequences in the long future as sort of an aftermath. And there’s no real failsafe solution I think that we can hope for. But what is crucial is to bring as many perspectives and as many viewpoints to the table while developing these things. So we can already start thinking about each of the stages. What could go wrong? How can we anticipate what could be going wrong? And what really matters here is to bring the people to the table that are affected by the algorithms. It’s not enough to just know that people are suffering hardship in society that may be exacerbated by algorithms, and sort of factor them into our decisions, with me sort of trying to think about the people that could be harmed. We really need these people at the table when it comes to making design decisions and when it comes to developing these algorithms, and also when it comes to the decisions of whether they should be deployed or not. So I think that’s the broader issue that we need to tackle, also as a research community. 03:14 CHRIS DELATORRE: You shifted your research in 2016 when ProPublica exposed racial bias in the court system—specifically a risk-assessment tool that exhibited bias against Black defendants. The software developer disagreed, but according to you, they were both right. As it turns out, a difference in approaches had resulted in mismatched conclusions. 
How often is this the case? In the past, you’ve said it’s a problem of scholars relying on different models and often being unaware of each other’s work. If good intentions aren’t enough, then how can we work toward a credible standard for assessing ethics going forward? 03:53 NIKI KILBERTUS: First things first. I think in this specific example, we’re talking about two very concrete and narrowly defined statistical notions of discrimination. So, when I said both parties were right in the past, I’m really referring to these definitions that can be checked sort of in a binary fashion, either they’re satisfied or they’re not. And while they’re way too narrow to do the term “discrimination” more broadly justice, they can nevertheless be useful criteria to figure out that something may be amiss. “The people we want to protect from algorithmic discrimination are the people who are underrepresented at the top.” So, typically what happens in these statistical criteria is that one splits all the data points that one has observed into two groups according to some sensitive attribute—for example into racial or gender groups, so it could also be multiple, not just two. And then within each of these groups we compute various statistics. For example, we can look at what fraction of people in each group got positive decisions, or how many errors were made in each of the groups. And it seems quite natural that we wouldn’t want an automated decision-making system to make errors at different rates in the different groups. The issue here is that errors can be measured in different ways. And that’s precisely what happened here. So, one group, ProPublica, decided to measure error rates in one way, and the algorithm designer decided to measure error rates in another way. And both of them seemed like reasonable fairness criteria. You don’t want errors to be made in any way, disparately between the two groups. And later, I think in 2016, researchers proved mathematically that you cannot have both at the same time—both of these criteria for matching the error rates. And that is unless it’s a very simple or a very unlikely scenario. And this is precisely what happened here. So, ProPublica pointed out that one criterion isn’t satisfied, and it seems as if this criterion intuitively is very important to satisfy fairness. And the algorithm designer countered that they looked at another criterion that they did satisfy. So now we know that they could not have satisfied both, that they sort of had to make a choice. And that already shows that when it comes to these narrow statistical definitions, there will never be a satisfying standalone check for fairness. They can be good pointers, you know, as basic checks, that something is amiss here, that something seems odd. But they can never be a certification of, you know, this thing satisfies fairness. I think realizing that there is not this one-size-fits-all kind of solution is really important in trying to design these systems. “We need to give the very people who suffer from the hardship more agency and more power to make decisions.” 06:40 CHRIS DELATORRE: When you began developing Fairensics a few years ago, you say there really was no way to see what others in the machine learning community were doing. But this changed when companies like Amazon, Google, and others were found to have AI systems that perpetuated certain types of discrimination. What happened next changed the AI fairness landscape. 
You describe the research community and industry as starting to “pull in the same direction in terms of trying to solve and mitigate the problem,” IBM and Microsoft included. This makes me think of the California Gold Rush when hundreds of thousands migrated west to find their fortune. Now, let’s assume that all of this new activity is primarily driven by the concept of fairness. How can the research community help to guarantee transparency where everyone plays by the same set of rules, and what are the implications otherwise? Are we looking at another Wild West? 07:40 NIKI KILBERTUS: I think that really is the million-dollar question here. Again, bringing about sort of larger-scale structural societal changes that we certainly desperately need at this point in time is a much bigger endeavor than just the algorithm development. But I think you’re right that there is a danger of viewing this sort of from a purely technocratic perspective and saying that, you know, we’ll fix the problematic algorithms by developing even more algorithms. And now with all these parties—industry, governments, researchers—coming to the table, there is a danger in sort of making sure that people, as I may have said, pull in the same direction: who sets this direction and how can we make sure we set the direction properly? I think that there won’t be a good top-down approach to sort of set this direction properly. And the reason I don’t think this is going to work is that the people we want to protect from algorithmic discrimination are precisely the people who are heavily underrepresented at the top. So, even if industry and governments and researchers get together and even if they could somehow magically decide on a single direction on what they should do to build fair systems, that decision would largely exclude all the voices of the very people they are trying to support and protect. So, put very bluntly, only the people being harmed can actually tell us how to improve their situation. We shouldn’t think that we can do this and that there’s sort of one right way and it’s a logical path and we can figure it out just by thinking about it. We really need to talk to these people and we really need to get them to the decision table and have them included in the decision processes and their viewpoints represented. This is something that needs to start very early on in our systems, from kindergarten to university, from management to faculty positions. We need to actively seek to work against our inherent biases and the status quo and to give the very people who suffer from the hardship more agency and more power to make decisions. So, we shouldn’t just ponder sort of how we can achieve fairness in a purely logical fashion and in specific algorithms but we really need to make sure that there’s a larger diversity of viewpoints represented when we make these decisions. “We need to avoid the technocratic message of ‘we will fix algorithms and thereby inequity will go away.'” 10:05 CHRIS DELATORRE: You see a disconnect between the technical and ethical aspects of developing AI. You’ve suggested that scholars are entrusting or consigning matters of ethics to policymakers and private companies. Something you said about the ProPublica piece resonated: you asked why we need pretrial risk assessments in the first place. Given that the research community should be focusing more on the why, what responsibility do researchers have to address inequality in machine learning, and where does that responsibility end? 
10:40 NIKI KILBERTUS: As researchers we have two types of responsibility. First, we need to work on the inequity that we have within our own community. I think that machine learning here is on a path where underrepresented groups are really having their voices heard—not by everyone in the community but I think that there’s a general change going on and I hope that this will allow us to really increase diversity within our own ranks. I mentioned this before, I think it’s really important to have these viewpoints diversely represented in the research community, so we can pick out the right problems and right questions to even ask in the first place before trying to jump ahead and just develop solutions to problems that may not even be the actual issues. This is not just the responsibility to society at large but also towards our own culture and colleagues. So this is even just among friends sort of being respectful to everyone. And with that I hope that we can take a fresher look at what we should be doing as a machine learning research community, what are the right questions in terms of injustice and unfairness, and where are maybe the points where we can actually make a change. So, there are really great role models in the community that try to hammer home this point and fight tirelessly against all the backlash they’re getting. At the risk of maybe forgetting many and mispronouncing some of the names, there are people like Rediet Abebe, Timnit Gebru, Joy Buolamwini, Deb Raji, Shakir Mohammed, William Isaac, and many more. So people should really follow them and support their voices and listen to them and see what we really should be doing as a community. FURTHER READING “There is no consensus on how discrimination in machine learning algorithms should be assessed or prevented.” Learn more about the origin and aims of the Fairensics project. And on a broader scope, or as a second responsibility, I think we need to make sure that we don’t oversell what we can do within machine learning algorithms. So, avoid the technocratic message of ‘we will fix algorithms and thereby inequity will go away.’ So, I think we have a responsibility to communicate these things in a more nuanced fashion to the broader public and really highlight that the key issues, the underlying structural issues, will not go away just because we developed a new optimization algorithm. I think especially for researchers, there rarely comes this sort of handing-off moment that seems more like part of the role of a software engineer at a company who develops an algorithm to specification and then hands it off and no longer feels responsible. And so even if we include that, maybe not in a research role but in a software development role, I do not think that responsibility ends at any point. I think we all have a responsibility, especially when we’re working in research, to constantly keep questioning our approaches, constantly try to think outside the box, and constantly ask ourselves what may we have missed, what could there be that we have missed. And as soon as we find one of these things, definitely speak up, mention it, and also be willing to accept and state publicly that we have made mistakes when we realize them in hindsight. So, I don’t like to think about sort of this end-of-the-day, handing-it-off, washing-your-hands kind of situation. I think we have to constantly be skeptical of the approaches we are developing. 
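As a rough illustration of the group-wise error-rate criteria Kilbertus describes above, here is a minimal sketch in Python. The records, decisions, and group split are invented purely for illustration, and this is not the Fairensics code itself:

import numpy as np

def group_rates(y_true, y_pred, mask):
    # False positive rate, false negative rate, and positive predictive value for one group.
    yt, yp = y_true[mask], y_pred[mask]
    tp = np.sum((yp == 1) & (yt == 1))
    fp = np.sum((yp == 1) & (yt == 0))
    tn = np.sum((yp == 0) & (yt == 0))
    fn = np.sum((yp == 0) & (yt == 1))
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    return fpr, fnr, ppv

# Hypothetical outcomes (1 = the event occurred) and automated decisions (1 = flagged high risk).
y_true = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0])
group_a = np.arange(y_true.size) < 8   # first half of the records
group_b = ~group_a                     # second half

for name, mask in [("group A", group_a), ("group B", group_b)]:
    fpr, fnr, ppv = group_rates(y_true, y_pred, mask)
    print(f"{name}: FPR={fpr:.2f}  FNR={fnr:.2f}  PPV={ppv:.2f}")

# One side can compare error rates (FPR, FNR) across groups while the other compares
# predictive value (PPV); when base rates differ, the impossibility result mentioned
# above says both kinds of parity cannot hold at once outside degenerate cases.

In the ProPublica episode described earlier, the two sides were, roughly speaking, each reporting one of these checks; which check a team chooses is a design decision, not a certification of fairness.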
14:14 CHRIS DELATORRE: Niki Kilbertus, doctoral student and researcher at Max Planck Institute for Intelligent Systems, thank you. Digital Impact is a program of the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society. Follow this and other episodes at digitalimpact.io and on Twitter @dgtlimpact with #4Q4Data. The post Can a Machine Learn Inclusivity? That Depends on the Teacher appeared first on Digital Impact.
23 minutes | Apr 15, 2020
GDPR Compliance and Closing Civic Space
Digital Impact 4Q4: Vera Franz and Ben Hayes on the Good and Bad of GDPR Compliance SUBSCRIBE TO THIS PODCAST ON iTUNES. 00:00 CHRIS DELATORRE: This is Digital Impact 4Q4, I’m Chris Delatorre. Today’s four questions are for Vera Franz, Deputy Director of Open Society Foundations’ Information Program and Ben Hayes, Director of AWO, a legal firm and consulting agency working on data rights. In February, OSF published a report looking at how the EU General Data Protection Regulation—or GDPR—impacts non-governmental organizations in practical terms. The report offers practical guidance based on compliance challenges, and specifically addresses the importance of defending social sector organizations against attempts by governments and corporations to misuse the GDPR against them. 00:50 CHRIS DELATORRE: Vera, we’re coming up on the 2-year anniversary of the GDPR. We’ve seen a lot of changes, not only in the European Union but also here in the United States. Of course, I’m referring to the CCPA—the California Consumer Privacy Act, which went into effect this year. Now, your report focuses on the GDPR but I would recommend it to anyone in the social sector dealing with data, regardless of where they operate. Early in the report, you point out a general absence of advice specifically geared toward the social sector—commercial interests notwithstanding. Why is it so important to help CSOs consider the broadest context of compliance for either of these regulations? 01:33 VERA FRANZ: Thanks Chris and thanks for having us on your podcast. So, when we started out with our research for this report, one thing we observed is that NGOs were tying themselves up in knots over their mailing lists, which resulted in the flooding of our inboxes with re-consent requests, which were mostly unnecessary. So, in essence we saw NGOs over-complying with GDPR and at the same time in my role at OSF I support a lot of work to support GDPR vis-a-vis big tech—the big corporations, including Facebook and Google—and I observed there that some of these big companies were under-complying. That’s not me saying this, this is European regulators stating as much as well. “Civil society is in the business of going after some of the most powerful in society…to hold them to account. And in the process, we make powerful enemies.” So, we observed this phenomenon and this was an indication for us that some guidance for civil society was really needed. And as your question suggests, data protection compliance is not a discrete [inaudible] exercise for NGOs. I think—or we think, as the outcome of this report—it’s really about two big things. First, it is about—for civil society—living by our values, and more specifically protecting our constituents and partners. They can be marginalized, vulnerable members of society or whistleblowers or similar. And protecting them means protecting their data against abuse once those data sets are in our systems. More importantly, GDPR compliance is also about protecting the resilience of our own organizations—and I guess by extension our space for action. Or, in other words, minimizing our attack surfaces. FURTHER READING A new report from Open Society Foundations looks at how the GDPR has impacted the social sector. As we all know, civil society is in the business of going after some of the most powerful in society, be they governments or corporations, to hold them to account. And in the process of doing so, we make powerful enemies. 
And we believe that if we don’t get data protection compliance right, our opponents may use this against us. Now to be clear, the use of law against civil society is nothing new. It goes back many years, even in the digital space. I remember 10 or 15 years ago, the Russian government abusing—going after NGOs by using their illegal Microsoft Office licenses as an excuse, so this is a well-known tactic. But we are in an age today where the environment for civil society is really getting more hostile, including in Western democracies, so including in the EU. OSF of course is painfully aware of that change in climate as well. And so what we did with this report—or what we wanted to do—is to find out whether a new body of law, the GDPR, would be abused in this new climate of growing hostility, and if so how. So that’s what we really tried to do with this report. 04:49 CHRIS DELATORRE: Ben, to Vera’s point—and as you mention in the report—with all of the good the GDPR represents for the sector, the regulation can also be used against NGOs. Could an oppressive government use the GDPR against nonprofits—for instance, to shut down an organization it doesn’t like—and also to Vera’s point, are free societies immune? Could the regulation be weaponized by a corrupt administration in the US or UK? 05:18 BEN HAYES: Thanks Chris and thanks again for having us on the podcast. You know, this is something that motivated us to write the report, right? And I guess with the caveat that you’ve already given, you know we’re huge fans of the GDPR in the sense that we think it’s right that the European Union has tried to set a high bar for all entities handling personal data, to do so in a responsible and accountable way. But as Vera said, as with all regulations there is a significant potential for misuse or abuse and fortunately we haven’t seen too much of that yet but we do document a few cases in the report. One that provides a good example of how this can and has played out is an investigative journalist collective in Romania called RISE, which basically published a bunch of data alleging, demonstrating the involvement of government officials in corruption. And shortly after this material was released, the Romanian data protection authority—which is supposed to be an independent branch of the Romanian government—sent the RISE collective a letter demanding an explanation of all of the sources, requesting access to the data, using requirements in the GDPR that shouldn’t in our view have been applied in this case but do exist—asking why the data subjects were not informed about the potential use of their data and so forth and actually threatening this group of journalists with a 20 million euro fine. So you know that happens. There’s no doubt that the threat of this is real. “It’s not necessarily just repressive governments we need to worry about. There’s quite a crossover in the way GDPR might be used.” It’s good to report also that there was some strong pushback from civil society. A bunch of digital rights organizations—Privacy International, European Digital Rights, and others—wrote to the European Data Protection Board, which is sort of the preeminent body established under the GDPR to provide guidance on the implementation of the regulation, and just said look, this is manifestly an abuse of the GDPR, it’s clearly not what the regulation is intended to do, the protections that should apply to a group like RISE have clearly been ignored. 
And to its credit, the EDPB, the Data Protection Board, wrote publicly to the Romanian data protection authority setting out its concerns in this case. There are a few others. As I say it’s not a massive trend that we should be particularly frightened of but there are a few other cases that we document in the report. I guess just on this I think it’s not necessarily just repressive governments that we need to worry about. If you look at the way regulation has been—to use the word we use in the report—weaponized against civil society, we do see a link between the way malevolent actors have used—as Vera said, like the example with the Russian government—the way malevolent actors have used regulatory requirements to go after civil society actors. Just two quick examples: I don’t know if you guys have followed the stuff around deplatforming. So, you get activists basically writing to financial service providers and saying this particular user of PayPal or whatever it is is an extremist or is associated with terrorism and ergo you as the platform should cut your ties with this organization. And you know because of the way the publicity machine works, we’re seeing that quite a lot. And there’s quite a crossover in the way that the GDPR might be used. We at AWO actually act already for a couple of nonprofits that have been subjected to what are in our view vexatious complaints to regulators that concern various matters. But it could be something as simple as not having a GDPR-compliant privacy policy on your website or it could be an allegation of something more serious. But of course what happens is the regulator then—as it’s duty-bound to do—will then follow up with the civil society organization concerned, which can have a huge impact on them. 10:11 CHRIS DELATORRE: Vera, in light of what Ben just said, right now the future of journalism seems to depend on achieving a balance between free expression and data protection. Deplatforming is a good example of mediating this effect. The GDPR includes exemptions for media organizations, but not all organizations providing support services to journalists are considered as such. What does the GDPR mean for journalists and the organizations who support them? Anyone out there who may be working in this field, what would you say to them right now? 10:48 VERA FRANZ: Yeah, I think journalism more generally, and investigative journalism and research more specifically, have a very interesting, I would say, relationship with data protection and data privacy. Because if you think about it, on the one hand, investigative journalism and research are aiming to create greater transparency to expose injustices, corruption, etc. and they may be, in doing so, running into data protection problems and data privacy problems. Yet of course, in order to do their work, they at the same time rely on strong privacy and data protection frameworks—for example to protect their sources—so it’s a very interesting space. And I guess free expression and data protection are indeed two rights that need to be balanced. “We saw the rise of NGOs providing research support services to investigative journalists. And it’s currently unclear under GDPR whether [they] can rely on the journalistic exemption.” The good thing is this balancing exercise is something we’re very familiar with in the human rights and social justice community and civil society more generally speaking, as often it is about balancing different rights. 
But going back to the journalism question, how this tension is [inaudible], as you suggested, to exemptions for data protection or for expression for journalism. And there is an interesting question of who falls under it. And there’s an interesting example we came across in our report. So Global Witness, which is an anti-corruption investigation reporting outfit, they covered corruption by a mining company active in Africa. And the founder of that company and others associated with it brought Global Witness to court for the violation of data protection. Now, interestingly and crucially, the UK data protection regulator, which weighed in as these court proceedings were happening, clarified that the journalistic exemption in data protection applied not only to conventional media organizations but also to civil society organizations engaged in journalism and public interest reporting, such as Global Witness. And I think the relevance of this cannot be overstated, as we have many NGOs today investigating and covering injustices. So this is a very important clarification. But to be clear, there are challenges that remain at the intersection of free expression and data protection and with a focus on journalism. So, one of the most interesting challenges we identified is that in recent years we saw the rise of NGOs providing research support services to investigative journalists. And it’s currently unclear under GDPR whether these entities actually can rely on the journalistic exemption. The reason is that they gather, analyze, and visualize data but they don’t publish; they support others to publish. And some national implementations of the GDPR have stated that the journalistic exemption only applies to activities of entities that intend to publish. So this is an interesting open question that we found, a gray area so to say. We also explored if these newer types of NGOs would be covered by other exemptions, such as archiving and research exemptions, but that’s also challenging, not clear. And so ultimately what we did in our report is that we called on the European data protection regulators and also the EU Agency for Fundamental Rights to write updated guidance on the relationship between data protection and free expression. 14:56 CHRIS DELATORRE: Ben, let’s shift to how CSOs are using data protection laws to push back on attempts to shrink civic space. The report looks at subject access requests, which are derived from the right to access data collected by governments and companies. How are civil society organizations using this practice to protect the rights of individuals and how can they assess their vulnerabilities to those who would weaponize it against them? 15:22 BEN HAYES: Thanks, Chris. I think if we take an expansive view of civic space and say you know let’s look at all the ways in which big tech, big data is transforming our democracy and our economy and the relationship between civil society and power, SARs emerge as like a super interesting tool—SARs, subject access requests—for civil society organizations, not just to find out what exactly it is that particular entities are doing with data, which is essentially the rationale for having subject access requests. I mean, enshrining that within the law. But also to pursue more interesting and creative means of pushing back against some of these companies. “Some of this is pretty complicated but there is some base-level stuff all civil society organizations can and should be doing.” So, I’ll give you sort of three quick examples. 
Most people—I’m sure most of your listeners—will be fully aware of the Cambridge Analytica case but what they may not realize is that all of the litigation within that began with a single subject access request—actually by a US citizen, Professor David Carroll—and he’d learned about Cambridge Analytica, heard that they may have been involved in Trump’s election campaign and instructed a UK lawyer—actually Ravi Naik, my partner at AWO—to make a subject access request on his behalf. And Cambridge Analytica—I think this was almost certainly at the beginning of their downfall—they actually replied and said, you know, basically you’re a US citizen. You have no more right to your data and are no more entitled to a response than a Taliban in a cave. Which is you know an absolutely astonishing response to someone exercising their legal right. Also erroneous under the law because it doesn’t matter where you’re situated if you make a subject access request to a European data controller. But all of that and the failure, the failure of Cambridge Analytica to respond as they were legally obliged to do, to their subject access request, opened the door for all of the litigation that followed. Similarly we’ve got a couple of great organizations actually that are using subject access requests just to push back on the gig economy. So, you have companies like Uber that will say we’re not obliged to provide our drivers with full employment contracts because they’re not employees because their job descriptions are different or their responsibilities are different. Or we’re not obliged to comply with certain environmental regulations because the nature of our business is such that we fall outside and all this and they come up with all sorts of ways of trying to exempt themselves from the law that many people think ought to apply. And what unions and organizations working with them are now doing is basically organizing gig economy workers to submit subject access requests, then creating data trusts to house the responses—to create an evidence base that pushes back precisely on the kinds of arguments these guys are using in courts. And those involved think this can actually be a more effective way of getting to where we need to be than going through lengthy legal proceedings that could ultimately take years to achieve. And just the third one I was going to mention was facial recognition. You know, in the news almost every day we’ve got companies like Clearview popping up and making lots of waves in the digital rights community. Again, the use of SARs and the demand for data controllers to facilitate subject access requests is leading to tangible changes in some activities of those companies and in my view can potentially—or can and will potentially lead to some very interesting litigation. Just the flip side of that, you asked what can civil society do to make themselves more resilient and again this sort of goes back to the conversation we were having at the beginning. It is the case that the regulation applies equally to all entities and we are starting to see, you know, vexatious, malevolent actors using subject access requests who have no interest in getting the data but, like the people we would call the good guys, want to use the SAR process as a way to get legal leverage into a civil society organization’s activities. And you know this is really why we—this was one of the main motivations for drafting the report focusing on civil society resilience. 
Again, there’s a couple of cases your listeners can refer to in the report but the key thing being that if you don’t as a civil society organization have a robust policy in place for A, how you’re doing your data management, B, how you’re responding to subject access requests, and C, you know, take significant care when you’re doing those things, you are opening yourself up to regulatory pressure and potentially litigation in exactly the same way the Cambridge Analyticas of this world have. I’ll just finish with a plug for the report again. It sets out a bunch of recommendations, best practices, things that in our experience civil society organizations, NGOs, have really struggled to deal with or haven’t really thought about, right? Some of this is pretty complicated but there is some base-level stuff that we think all civil society organizations can and should be doing. So hopefully those from that community who are listening will check out the report and find it useful. And I’ll give you the website for AWO, that’s www.awo.agency. 21:53 VERA FRANZ: Yes, and if you’re interested in learning more about Open Society Foundations’ work to support civil society, including in the digital age, how we support work to hold digital power to account, to protect information democracy, go to www.opensocietyfoundation.org and follow me on Twitter @vfranz73. Thank you. 22:19 CHRIS DELATORRE: Vera Franz, Deputy Director of Open Society Foundations’ Information Program and Ben Hayes, Director of AWO, thank you. Digital Impact is a program of the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society. Follow this and other episodes at digitalimpact.io and on Twitter @dgtlimpact with #4Q4Data. The post GDPR Compliance and Closing Civic Space appeared first on Digital Impact.
14 minutes | Feb 11, 2020
Location Privacy: How Protected Are We?
Digital Impact 4Q4: Jeffrey Warren on Geolocation Privacy SUBSCRIBE TO THIS PODCAST ON iTUNES. 00:00 CHRIS DELATORRE: This is Digital Impact 4Q4, I'm Chris Delatorre. Today's four questions are for Jeffrey Warren, Co-founder of Public Lab, a nonprofit network of citizen scientists dedicated to creating awareness and accountability around environmental concerns. With funding from a 2018 Digital Impact grant, the Public Lab team is developing a set of management tools for privacy-first, user-friendly data input, archiving, and sharing among social sector organizations. The Community Data Privacy Toolkit, or CDPT, would mitigate the abuse and misuse of location data, a growing challenge that puts activists and the communities they serve at risk. 00:51 CHRIS DELATORRE: Jeff, this is a topic most of us might be too intimidated to explore on our own. But location tracking affects everyone with a mobile phone—through Google Maps and apps for social networking, banking and weather, to name a few. If an invisible “someone” knows where we are at every moment, how can we better protect ourselves? What weaknesses should we be aware of? 01:16 JEFF WARREN: Well, I think location privacy is something that has been increasingly in people's awareness. The New York Times has done a couple of great pieces where they've used, I think, either leaked or court-ordered data showing literally what data points tech companies have on our location on a day-to-day or even minute-by-minute basis. And it is scary to see how they can pinpoint what building you're in, how long you're there, the route that you drove and where you stopped. And there certainly are promises of responsible use, but that's maybe more comforting for some people than others. “We develop specific tools but really what we're trying to do is develop a set of norms around location privacy.” I think when it comes down to it, there are two parts to it. One of them is we do need better transparency, we do need to demand better transparency. We've gotten a few more tools now where you can set location access with more granularity on apps, with recent versions of Android, for example. But beyond putting your phone in a metal wallet or something—I'm getting a little paranoid in that respect but—you don't have a lot of granularity. So I think the real responsibility is on the people who are designing systems and that's why with this project we really sought to come up with a framework—I mean we develop specific tools but really what we're trying to do is develop a set of norms around location privacy. And get people thinking about the privacy of their location as opposed to just their social security number. FURTHER READING The CDPT includes tools to produce and manage semi-anonymous personal data and geodata and to view and display such data, many of which are key to protecting environmental and human rights advocates. I think people tend to think of location privacy as all or nothing, on or off. And that's actually literally how apps are built for the most part. Like I think the only granularity you get now on Android is whether an app can access your location only while it's in use or whether it can do it anytime it wants. Whereas what we've come up with is a kind of zoom-level-based granularity. So you could share a more specific location or a less specific location. And that dimension I think is a really powerful way to think about location privacy. Because it shouldn't be an all-or-nothing debate.
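To make the zoom-level idea concrete, here is a minimal sketch (in Python) of the kind of coordinate truncation Warren describes later in the conversation. It is an illustration only, not Public Lab's implementation; the function name and the approximate cell sizes are assumptions.

import math

def blur_location(lat: float, lon: float, precision: int) -> tuple[float, float]:
    """Truncate latitude/longitude to `precision` decimal places.
    Fewer decimals means a coarser, more private location: roughly
    ~110 km per cell at 0 decimals, ~11 km at 1, ~1 km at 2, ~110 m at 3."""
    factor = 10 ** precision
    # math.floor keeps southern/western coordinates inside their grid cell
    # (plain truncation toward zero would not).
    return math.floor(lat * factor) / factor, math.floor(lon * factor) / factor

# Example: share a neighborhood-scale location instead of an exact address.
print(blur_location(40.714728, -73.998672, 2))   # (40.71, -74.0)
print(blur_location(40.714728, -73.998672, 1))   # (40.7, -74.0)

Because the stored value and the shared value are the same truncated pair, this sketch also lines up with Warren's later point that the backend should not quietly keep more precision than the user sees.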
03:41 CHRIS DELATORRE: Josh Levinger's work in Gaza while at the MIT Center for Civic Media is one example of how blurring location data can keep individuals safe. But traditionally vulnerable populations aren't the only ones at risk. Let's say you're organizing around an environmental catastrophe—like an oil spill—or responding to a human rights crisis in a dangerous part of the world. What is it that makes location blurring so effective for regional activist networks—for human rights defenders and environmental activists who put themselves in harm's way? 04:17 JEFF WARREN: You're totally right. Public Lab is an environmental organization. We work with environmental justice groups on pollution, on issues that affect people, where health and wellbeing are at risk. And so the privacy element is really important, but environmental catastrophes are fundamentally spatial. So you could be upstream or downstream from something. You could be upwind or downwind of a plume of smoke. You might be in a particular relationship to something else in a given watershed, so the groundwater is flowing across something that you're worried about. So, those sorts of spatial distinctions are really important to be able to communicate and organize around. “It goes deeper than just being more diverse and inclusive. It has ramifications for how code is written and how systems are built.” And so what we really wanted to do was to create a vocabulary around the sharing of partial location. Which is to say, I'm not going to tell you exactly where I am but I'm going to give you a general sense that I am in a neighborhood, you know, downstream from this other place, or something like that. Because people need to make spatial arguments in their advocacy work in order to hold polluters accountable and in order to record that kind of harm. So, the variable location or blurred location approach should be kind of familiar. I mean, you know, you could imagine if you blur someone's face, you can tell it's a person. And you can sort of generally see something about them without seeing exactly who they are. You might be able to tell some things about them. Or if you obfuscate data you might round the data so you're not giving such a precise location. And that rounding is exactly what we're doing with blurred location. We're rounding the coordinates. 06:10 CHRIS DELATORRE: Public Lab's approach to project development emphasizes community engagement, specifically through the use of open source code. It's creating a seamless user experience that's been particularly challenging for you. You've explored a number of existing options, from postal codes to Airbnb's “in this area” model. What benefits have you found from using a community-based model to propel this technology forward? 06:39 JEFF WARREN: Well, what we've found is that there's an enormous energy from what is typically a younger and more diverse community of people around the world who are learning how to code, and more than that they're setting new norms for how to treat one another. They're really flying in the face of the status quo, in fact. And by creating spaces that are really welcoming and mutually supportive we've been really lucky to work with hundreds of new programmers to build systems like this one. But to your question, I think it goes deeper than just being more diverse and inclusive. It really has ramifications for how code is written and how systems are built.
By contrast you can look at, say, facial recognition or other systems that are envisioned and developed primarily by the technology community, which is primarily white, primarily male, primarily, say, in the Bay Area, for example. And there's a sort of failure to understand really critical issues around facial recognition that get at privacy, at vulnerability, at rights, things that just maybe aren't front-of-mind for that group of people in the way they are for other demographics. FURTHER READING The New York Times: “Your Apps Know Where You Were Last Night, and They're Not Keeping It Secret.” Tech companies use smartphone locations to help third parties, including advertisers. They say it's anonymous but the data shows how personal it is. So, by having a more diverse group and inviting people in through an open source model, where anyone can learn the skills they'll need to contribute and is supported and welcomed in, we get a much more diverse group of contributors working on this code and that informs the actual work. They're not just executing the work that we've written down line by line or something. They're bringing new ideas, new perspectives to the work and I think that really does play into how the systems we build—how they work, how they're designed, how they look. I would note that I think there are real challenges to extending that into the space of design. Because the open source model is primarily one of writing code, whereas a lot of what we're dealing with here is about doing design—interface design or user experience design work—that is very smooth and effective and intuitive. So, I think we're still learning about how to do a community-based model for design work. But, you know, I think it's great that we're trying and I think we've made a lot of progress on the code side to start with and to build off.
And that shouldn’t be the case.” So there’s that inversion. I also think there’s a couple basic things, like don’t make a mysterious algorithm when it’s not needed. You know, people think, well, we’ll have this model and it will be really complex for managing the privacy, and the user won’t have to think about it, and so forth. No, I think the technological systems that we create should follow the human understanding. So, it’s not like we have some sort of crude metaphor or just sort of something happening behind a curtain but you can actually see how it works and hopefully if it’s well designed it’s simple enough that you’re not juggling a lot. You should be able—in this case, with location privacy—you should be able to understand and act on the amount of privacy you get without having to do a calculation or something, without having to run code or anything like that. And I think we use the simple model of truncating the latitude and longitude coordinates so you can just glance at the coordinates and you get a sense—it’s not hard to translate that into an amount of privacy. So, I think that idea that what’s happening in the backend, we’re storing these latitudes and longitudes—it’s not being done differently than is being shown to you. There’s a commonality. And so with that comes an honesty about how much information has actually been stored. Often when we think about privacy or we think about designing systems, we think about what is shown to people as being different from what is actually stored. And that shouldn’t be the case, you know? If you are telling people that you’re only—if you’re showing people that you’re only storing this much privacy, it should not be the case that in your database, you actually have a lot more precision and you’re sort of the master of that extra secret information. No, it should be transparent through from what’s shown to the user down to what’s in the backend. So, I think that’s really important. I guess in that sense that mix of transparency and accountability, local community control, those are all part of the Public Lab model writ large. So, I think it’s an important thing to note that this idea of location privacy is just one facet of a broader set of work that Public Lab does and it sort of speaks to the values. If you’d like to learn more about Public Lab, you can follow us on Twitter @publiclab. And as I’m stepping away from Public Lab, you can continue to follow my work at @jywarren on Twitter. And you can learn more about Public Lab in general at publiclab.org. 13:05 CHRIS DELATORRE: Jeffrey Warren, Co-founder of Public Lab, thank you. Digital Impact is a program of the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society. Follow this and other episodes at digitalimpact.io and on Twitter @dgtlimpact with #4Q4Data. The post Location Privacy: How Protected Are We? appeared first on Digital Impact.
17 minutes | Feb 2, 2020
Upholding Transparency in the Age of Misinformation
Digital Impact 4Q4: Rachel Smith on the Neutrality Paradox SUBSCRIBE TO THIS PODCAST ON iTUNES. 00:00 CHRIS DELATORRE: This is Digital Impact 4Q4, I'm Chris Delatorre. Today's four questions are for Rachel Smith, Co-founder and Executive Director of GlobalGiving UK. As misinformation continues to spread and controversial issues become more frequent, many platforms for social good are struggling to remain efficient while maintaining the public trust. To address ethical challenges and increasingly complex dilemmas, GlobalGiving launched a research inquiry into what it calls the Neutrality Paradox—a problem faced by tech platforms that are forced to make difficult decisions that may not reflect a position of neutrality, and to do so while upholding transparency. The aim of the inquiry is to explore how these issues are affecting charitable giving platforms, in particular, and to find practical solutions that encourage responsible, ethical giving practices. Rachel, who leads GlobalGiving's Evidence and Learning initiative, joins us today to share more. 01:06 CHRIS DELATORRE: Rachel, the Neutrality Paradox sounds like science fiction, like something out of Star Trek. But in fact, as you explained in a blog post last year, it's a problem we're all experiencing right now. What distinguishes the paradox in digital contexts and what are the implications for civil society organizations? 01:28 RACHEL SMITH: Hi, Chris. Well, this is an interesting and complex topic, as we will discover as we go through. In the context of philanthropy platforms, which is the position that GlobalGiving is coming at this from—we're a nonprofit organization and we exist to support humanitarian purposes, really. So the paradox really comes from these contrasting ideals, where technology on the one hand gives us spaces that are open, that are democratic, we hope, and that allow for different ideas, diverse ideas to emerge. Whilst as a philanthropy platform we also hold a huge amount of responsibility to ensure that organizations that we put forward onto our platform as valid and reliable and doing good work, impactful work, are going to be supported by people—the public or maybe institutions—that are putting a lot of trust in us. So that also means we really need to think about what happens when there are controversies that come about—about an organization or about a particular theme or topic that might indeed do harm to other people if we don't take a stance or make a decision around it. “We need to say that being open is possible but we also need to be dealing with problematic content and making decisions about what to do with that content.” So, I think in our case, and for civil society organizations in general thinking about technology, we need to go beyond just saying, well, technology is giving us space and is giving us tools to allow connections to happen regardless, and to allow content to flow regardless of what it is—and we need to keep our standards very high. And so we've got some serious challenges on our hands. So on the one hand we've got partners, for example, working in the Middle East. They're working in complex contexts and they're actually fighting for openness and neutrality, because if they are removed from platforms like ours it actually creates an even more difficult environment for them to be able to share the realities of the contexts they're working in, to continue to maintain funding—and that actually has real-life consequences.
On the other hand, we have partners, for example in Germany, who talked about the fact that their country's history informs how they pay attention to warning signs around content that may indeed be unhelpful or even harmful. And so in those cases some of our peers in Germany would much more quickly want to close down particular content because they're worried about how those kinds of topics might escalate. There really is a lot of responsibility that we hold. We need to think about how to protect people, particularly those that are marginalized or particularly vulnerable, and so it isn't OK for us to say simply that being open is possible with technology—we need to say being open is possible but we also need to be dealing with problematic content and making decisions about what to do with that content. So that's the dilemma and that's the paradox we're facing. 04:54 CHRIS DELATORRE: Something I find challenging—and some listeners might relate to this—is understanding the difference between content moderation and censorship. Machine learning, for instance, could do so much to identify hate groups or extremist networks as they emerge. But as we've seen, algorithmic solutions are prone to human error. How can social good platforms design and adopt new tools that maintain integrity and an awareness of inherent bias? 05:22 RACHEL SMITH: I was struck when I was reading Lucy Bernholz's Blueprint for 2020, and in particular picked up her thoughts around the digital landscape changing to become increasingly intersectional, you know, around digital, human rights, ethics, and I think that's the starting point and the foundation for thinking about how tools and platforms need to develop and design themselves. So for social good platforms, we need to be building spaces for content to flow, for connections to be made, for funds to flow, for example, that are built on principles of good civil society, of social justice, of responsibility, of ethics. And it's important for us to consider that when we're thinking of content moderation as well. So we shouldn't take it lightly that we moderate content and that we do consider how a particular organization might be positioning itself when it shares information about an impact program that it's running, for example. And we have to think about that kind of power and bias that exists within those decisions. So as a platform we are of course moderating what is posted, what is said, and in allowing—or even in disallowing—content, we're actually taking a stance about what we, as an organization that runs a platform, believe is fair and true. “As we deal with each of these dilemmas or complex issues we actually find that we need to adapt, to change our responses.” It's a really complicated situation and if we want to think about how to build digital tools and the algorithms that sit behind these platforms responsibly, we have to go beyond thinking about the regulatory and legislative environment and definitely build for that, but also critically think about our values and also the implicit biases that we're building into tools or decision-making processes, whether that's machine-based decisions or whether that's human. FURTHER READING As Rachel Smith suggests, misinformation and hate speech aren't reserved for Facebook and Twitter. Here's how social good platforms can stay one step ahead.
Platforms, funding intermediaries, and philanthropists really need to take a step back and know their values before they get started on this. They need to make sure that they're explicit about those values, and then begin to build digital solutions that actually embed those values. And I think critically we also need to be prepared to adapt and change, because environments change, our organizations evolve, and we learn. As we deal with each of these dilemmas or complex issues we actually find that we need to adapt, to change our responses. 08:02 CHRIS DELATORRE: Vetting is a big part of being a philanthropy intermediary. But it's impossible to track every connection and relationship, or to know what position an organization will take on a particular issue. What do you mean when you say philanthropy stewards must be both proactive and reactive in their vetting processes? 08:22 RACHEL SMITH: So, like many other philanthropy platforms, GlobalGiving has a robust, up-front vetting process—a proactive vetting process, if you like—that holistically screens organizations that we might fund through our platform. So, mostly we're doing that based on legal determinants: we're looking at whether the organization is registered, we're looking at the organization's history of managing funds or its history of delivering social impact or environmental impact programs. “That really has drawn us to the heart of the issue—how do we make decisions that affect people and do so in a responsible way?” Where we see challenges is that obviously political, social, and regulatory environments are changing ever more rapidly and new information emerges. So that's where we need to take this more reactive approach. And when I say reactive, it's not that we are unprepared. In fact, that's partly what the Neutrality Paradox work is trying to address—the fact that actually oftentimes we have felt unprepared, and so when we need to be reactive what we need is a set of tools and principles to follow to help us react smartly, essentially. Some of the initial research from our Neutrality Paradox inquiry has helped us to determine some categories of dilemmas. Anything and everything from how we handle affiliations with controversial people or organizations—so that might be an organization that we have vetted as being perfectly good and solid but it emerges that there are affiliations with a particular group or a person or perhaps some political links—to things like conflicting ethical and legal standards. And so what we need to be able to do is to smartly navigate through and make these decisions in a transparent way. So I wanted to illustrate this—and it also gives a bit of insight into why GlobalGiving decided to start this inquiry into the Neutrality Paradox. An Indian organization had passed our vetting processes and demonstrated positive impact in their work, and last year we were approached by an investigative journalist who was examining the practices of this particular organization. And here was the challenge for us. It was a legal one and an ethical one. The organization was operating within the law of India. However, arguably they didn't meet global human rights standards in the work that they were doing. So, this case actually became the catalyst for this Neutrality Paradox work because it showed us that we often don't have a clear set of principles and protocols that help us make these balanced decisions.
So, in one sense we've used data and digital to create a space for this organization to raise funds and for us to complete vetting. But in another sense we were now faced with a very human dilemma about what set of principles that exist out there in the world we should, or might, use in order to make a decision about whether this organization was legitimate or not. And so that really has drawn us to the heart of the issue—how do we make decisions that affect people and do so in a responsible way? 11:40 CHRIS DELATORRE: GlobalGiving is leading this effort but you aren't going it alone. Here's how you frame it in your post: “It is our goal to collaboratively develop a standard, transparent set of tools and resources that strike the right balance between openness and curation, free speech and moderation, independence and trust. They should balance corporate values and business requirements, external frameworks and internal standards.” How are you calling on others to assist in designing solutions for the social sector at large, and what might a new set of standards look like? 12:15 RACHEL SMITH: This is an issue that we feel really deeply at GlobalGiving, and one that we would have explored anyway as an individual organization. But we wanted to explore very early on who else might also be facing these challenging, paradoxical issues. And so rather than design behind closed doors for ourselves, we wanted to go out and really examine what the rest of the philanthropy sector might be thinking about in this regard. So, over the past few months we have interviewed, surveyed, and co-created different solutions with predominantly other social good platforms, but also big foundations and other kinds of philanthropists, and indeed content-curating platforms at large. And through that we've been able to speak to many organizations and collect more than 50 examples of actual dilemmas faced by these organizations and what they did in order to deal with them. In some cases they were successful in navigating through these issues, and in other cases they really struggled to make decisions and perhaps even received some public scrutiny around their decision making. This has really been helpful for us because what we've been able to do is create an understanding, a foundation, that's based on the reality of many organizations in this field of digital philanthropy. And so what we've been able to do, I think, is start to develop a set of standards—or what we're calling a manifesto—which we think will be something philanthropy platforms and intermediaries can potentially commit to and use to build and guard their work. Our goals now are really to continue to develop that set of standards, to socialize it, to see what resonates and what doesn't, and to make it available to the wider sector. Ultimately, the Neutrality Paradox work is really not about GlobalGiving designing for itself but about starting a conversation and continuing that conversation to a place where we've both got practical protocols and abilities within our organizations to be responsible in the decisions that we make—but also to push an agenda forward and to demonstrate the trust that can be placed in the sector because of our transparency in these processes. So, you'll see coming from GlobalGiving and our collaborative partners over the next few months a couple of things.
This manifesto that I mentioned will set out some shared principles of responsibility and this need for a more proactive and dynamic approach to navigating issues. And you'll also see a set of tools and resources that we are currently prototyping and testing with some of our philanthropy platform peers to help organizations design solutions that mitigate some of these complex dilemmas. What you'll see when that launches is that the approach is not just to wait until these dilemmas and issues come about but actually to examine your organization deeply before they even emerge. What we found really—and I think what is true to say, to conclude really—is that all organizations come at this Neutrality Paradox concept in different ways. Every organization has a different history and a different set of values, but there are commonalities and there is a shared commitment to ensuring that philanthropy is ethical. If you're interested in continuing to learn more about the Neutrality Paradox work that we're doing, we are obviously very open to discussion and collaboration with others. And so we would welcome anyone getting in touch with us, either through our Twitter handle @GlobalGiving or by going to our website, which is globalgiving.org. And for any listeners who are interested in following me and my thoughts, you can follow me at @rachelgguk. 16:34 CHRIS DELATORRE: Rachel Smith, Co-founder and Executive Director of GlobalGiving UK, thank you. Digital Impact is a program of the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society. Follow this and other episodes at digitalimpact.io and on Twitter @dgtlimpact with #4Q4Data. The post Upholding Transparency in the Age of Misinformation appeared first on Digital Impact.
16 minutes | Jan 21, 2020
Can Data Make Bad Housing Practice a Thing of the Past?
Digital Impact 4Q4 Podcast: Dan Kass on Housing Data SUBSCRIBE TO THIS PODCAST ON iTUNES. 00:00 CHRIS DELATORRE: This is Digital Impact 4Q4. I'm Chris Delatorre. Today's four questions are for Dan Kass, Co-founder of JustFix.nyc. With funding from a 2018 Digital Impact grant, Dan and his team are leading a data coalition whose aim is to further housing justice in New York City. The Housing Data Coalition, or HDC, is working to make public data more accessible and actionable for housing justice groups. 00:37 CHRIS DELATORRE: Dan, housing speculation is known to drive displacement because it often requires evicting tenants. Meanwhile, speculators are profiting from the loss of hard-working families who may not be able to afford the rapidly rising market prices of urban centers like New York City. How can more accessible and transparent public data make predatory housing practices a thing of the past? 01:03 DAN KASS: Hey Chris. Well, first of all thanks for having me today. I'm really excited to get to share the work that we do, both at JustFix as well as at HDC. So, as many folks are probably already keenly aware, we're in the midst of a housing crisis in this country, both in New York City and in many cities all over the US. We're seeing a lot of tensions and difficulties with just providing folks an easy, safe, affordable place to live. You know, across the board we're seeing trends of decreasing opportunities for homeownership, and so increasing numbers of folks who are renting their apartments. And in New York, as well as a lot of other places, we have never had a higher homeless population than we do right now. And a big driver of this housing crisis at the moment is just this huge imbalance of access and power between tenants and the real estate industry on the other side. So, if you're an average tenant—and again, this should be almost anyone who's living in a city at the moment—it's incredibly difficult for you to know who your landlord is, what's happening in your neighborhood, crucial information about your building that informs the larger context around your living situation. If you're a tenant who needs to go to housing court, the vast majority of the time you don't have access to free legal representation. While of course on the other side, almost all landlords have that—you know, paid attorneys who are in housing court every single day. So, our focus at JustFix and HDC is to take public data, to take this sort of information, and make it more accessible to give people more power. “There was a lot of this sort of constant wheel reinvention that was happening and a big lack of shared resources.” So, we're working regularly with data that was provided at the city level, by the state, and by the courts, and using that to, again, make it available in a way that it isn't currently. So, just straight up making the data available, but also transforming that data into tools that can proactively equip tenants and tenant organizers to actually take action. So, that includes data on shell company networks and real estate finance that can give folks a much better lens onto who the actors in the space are, where to be targeting their efforts, and what the trends in their neighborhood are in real time. And so we think that's not the solution to the housing crisis, but an incredibly important tool to have in the toolkit.
03:43 CHRIS DELATORRE: In a promotional video for JustFix.NYC, you describe how thousands of tenants have used the app to document issues and build legal cases against neglectful and deceitful landlords, to “navigate the complex bureaucracy of the city services as well as connect with people in your community who can assist with these issues.” Where and how do JustFix and the HDC intersect? How does participating in this process empower tenants and community leaders who may not have a voice otherwise? 04:17 DAN KASS: Yeah, so JustFix began in 2015 with my [fellow] co-founders George and Ashley. And we are a nonprofit. So our mission as a nonprofit is to utilize data and technology to support tenants and tenant advocates in fighting displacement. But pretty quickly after we began working, it was very clear that there was already a group of people in many different places across the city who were doing some version of this work—researchers, data scientists, and data folks working at our partner organizations. And so a lot of people were already doing really interesting work and really critical research around housing data. Our observation, though, was that there was a lot of this sort of constant wheel reinvention happening and a big lack of shared resources. FURTHER READING The HDC provides opportunities for coalition members to connect, learn, and give mutual support to a variety of projects involving housing data. So you would see the same projects kind of popping up again and again, trying to, you know, make use of the data in a very simple sense. And so part of the inspiration around HDC was for all of us just to sort of get in the same room and really start to map out where our work lies, ways that we could collaborate, as well as ways in which we could continue to work on our respective projects in a more dispersed way. One of the major projects of the Housing Data Coalition is a tool called NYCdb, which stands for NYC database. It's a tool that helps collect all of this different, disparate data from all of these different sources that are relevant to doing housing justice work and puts it in the same place. So all of our projects can now be built on top of NYCdb, and we have good peer-review strategies and different sorts of ways in which the universe of building these projects has become that much easier. But it all is sort of a moot point without having a really meaningful co-design process, as you said, that empowers tenants and tenant leaders to have a real, active voice in the development of these tools. So, beyond these shared resources that we focus on at HDC, we're also constantly focused on creating a space where tenants, tenant organizers, technologists, designers, and academics can come together, but also on bringing tenants and tenant leaders into those spaces and really focusing on ways to have transparency, shared language, and decision-making frameworks that allow the people who are closest to this problem to also be the most active participants in developing the solution. 07:13 CHRIS DELATORRE: JustFix began working with tenants in New York City in 2015. Co-founder George Clement says, “Through our aggregation of data, we have tenants on opposite sides of the city that are dealing with similar issues but have the same landlords.
Taking collective action can be the most powerful way to not just get one repair made but to enact meaningful system-wide change.” You also see this project as an opportunity to understand how the HDC can serve as a model for addressing community data needs. Do you see similar initiatives happening elsewhere in the country, and is there a plan to replicate the project in other cities or to address other issues? 07:59 DAN KASS: Yeah, so a great question. That's actually been something, through the Digital Impact grant, both within HDC and also in the course of our work at JustFix, that we've really taken on in 2019. So for the past several months, we've really started to understand what the ecosystem and landscape look like in different cities and to establish what the need looks like in other places. And it's very encouraging. “They want to be working on projects… but it's not always clear how they can contribute in the most effective way.” So, what we see in other places we think is very similar to what we were seeing at the beginning stages of HDC. We have some really incredible folks working in the tenants' rights community, in the civic technology community, and elsewhere who are doing interesting work and trying to support housing justice wherever they're living, whether it's in Los Angeles or Chicago. Just last week I was in San Francisco meeting with folks there. However, what we're seeing is a very similar thing to, again, what we really started with HDC, which was creating space, shared language, and principles to have a more collaborative environment in doing this work, and really to be able to share a co-design process that allows technologists to collaborate with tenants and tenant organizers. I think a very common thing that folks who are really interested in civic technology feel is, you know, a huge amount of interest—they want to be working on projects, they want to be contributing. But it's not always clear how they can contribute in the most effective way. And so by taking the open-source work we've done here in New York, the things that we're writing up about the projects that we're doing here, we see a way to almost sort of promote these projects in other cities, build similar groups of folks who are local to that place to really start to do this work on their own, and figure out how we can support them in that process. FURTHER READING Digital Impact grantee JustFix.nyc is using the urban housing crisis to lay the groundwork for a nationwide movement built on public data. 10:09 CHRIS DELATORRE: Brooklyn Borough President Eric Adams says, “Information cannot stay in the hands of the numerical minority. It must go out to the countless number of majority people who are in need of that information.” The HDC is addressing housing justice from a variety of standpoints. A lack of access to information seems to be the driving factor behind that. Two-part question. First, what about people who can't get online? Can tenants and advocates participate without access to Wi-Fi? And second, is having that access enough? How can tenants and community partners use what they learn to help themselves and each other? 10:50 DAN KASS: Yeah, I think that's a great question, and that's a question that we've been asking ourselves since the very, very early days of starting our work at JustFix.
You know, to me it goes back to an amazing observation that I'm going to paraphrase from danah boyd at Data & Society, about the digital divide we currently see and how the vast majority of technology that's currently being developed is being developed for very specific populations and more tech-savvy users. She has this quote talking about how, if you aren't actively combating that bias and inequity in the process of building your technology and developing your tools, then you're only furthering them. That simply perpetuating the status quo is only making things worse. “Our focus at JustFix—and I think what's reflected in our tools—is a desire to just go beyond simply displaying information.” So for example, one of our tools helps a tenant in the process of bringing a case against their landlord in court. So, we streamline the housing court process for someone who doesn't have legal representation. If we only made that tool easier for someone who was already more likely to be able to use the technology, we're only furthering that gap and widening it. So our focus is very much on meaningful distribution. So, we really partner with organizing groups, local neighborhood groups that can really put these tools in the hands of the folks who need them the most. So we're not looking to just pick up any type of user, anyone who's going to use our tools; we specifically, through our nonprofit mission, have a focus on the people who are the most at risk of displacement or, you know, who face repeated and flagrant harassment from landlords, who are in the process of being evicted, and things like that. But it also speaks to the need for accessibility and usability. So we focus on creating SMS-based tools that are deeply accessible. Even my grandma knows how to text and send emojis, and so we see a huge opportunity to deliver services through that platform. And we also create pathways, and this was sort of learned through our design experience over the past several years of deploying our tools, for advocates, caseworkers, paralegals, anyone who might be supporting tenants, to be able to use the tools on their behalf. So, a percentage of the folks who benefit from our tools are not using them directly, but we have folks who are already helping them, and we're making those advocates' lives a lot easier and streamlining their workflows as well. And towards your second question, I think that's another really great, deeply existential question about our work—creating access and just making information easier to get. So, of course, that's an important step in the right direction, but we don't pretend that that's the solution. Our focus at JustFix—and I think what's reflected in our tools—is a desire to just go beyond simply displaying information. There's a lot of tools that you could potentially think of that will just display, you know, here's the violations in your building, here's some, you know, data points, here's like some good visualizations. We think that's great but we really want to translate that into actions. So, how can we leverage this data in a way to provide more personalized sets of tools, to use that information that we have to streamline these processes? So for example, as you heard in that quote from George, we're able to use our data analysis to connect landlord buildings. Oftentimes landlords register shell company networks to sort of hide what buildings they're involved in.
We’re able to use data analysis to connect those buildings together really meaningfully for the first time to build more proactive class-action litigation, to inform more proactive code enforcement from folks at city agencies. So, we’re really looking to not just open up access and provide information, which is of course important, but to really bring that to the next level in terms of having that actually result in concrete actions that can really change some of the underlying dynamics of landlord harassment, evictions, and displacement in New York. If you’re looking to learn more, our website is justfix.nyc. We’re on twitter @justfixnyc. And if you’re interested in learning more about the housing data coalition, that website is housingdatanyc.org. 15:42 CHRIS DELATORRE: Dan Kass, Co-founder of JustFix.nyc, thank you. Digital Impact is a program of the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society. Follow this and other episodes at digitalimpact.io and on Twitter @dgtlimpact with #4Q4Data. The post Can Data Make Bad Housing Practice a Thing of the Past? appeared first on Digital Impact.
12 minutes | Dec 20, 2019
Crowdfunding a Space for Civic Engagement
Digital Impact 4Q4 Podcast: Asha Curran on GivingTuesday SUBSCRIBE TO THIS PODCAST ON iTUNES. 00:00 CHRIS DELATORRE: This is Digital Impact 4Q4. I'm Chris Delatorre. Today's four questions are for Asha Curran, CEO of GivingTuesday. Launched in 2012, the global initiative mobilizes resources, unmeasurable acts of kindness, and hundreds of millions in charitable donations in the United States alone. This year, $511 million was donated on GivingTuesday. Together with offline donations, the total for 2019 is projected at $1.97 billion. Last year, Asha joined us to talk about how the movement is collaborating across sectors to safeguard the data privacy of thousands of donors worldwide. A monumental task for a platform as distributed as this one. In fact, the distributed nature of GivingTuesday is the biggest factor driving its growth, in that anyone can participate. How will the global initiative make effective use of its data while also ensuring privacy and security for so many? 01:08 CHRIS DELATORRE: Asha, you've described the way people express their generosity now as fundamentally different, not just from a generation ago but even from five, 10, two years ago; you say that it's changing all the time. You also mentioned the importance of peer networks in driving that generosity. With social networks like Facebook under fire for betraying the public trust, do you see any differences in how people are giving online this year versus last year, and how is GivingTuesday using insights about giving behavior to improve your digital infrastructure? 01:50 ASHA CURRAN: Yeah. First, well, thanks for having me back, Chris. I don't know. It's interesting, I don't see the sort of existential concerns about Facebook or other big platforms filtering down in any concrete way, either on the donor side or on the nonprofit side. Certainly, I think there's concern from nonprofits that they're not getting as much of their donor data as they used to. “We're not creating a brand new model, we're creating a new model for this industry, one that is desperately needed.” I think what we're really seeing in terms of people giving online is a continuation of trends that are amplified on GivingTuesday, right? So GivingTuesday sort of lifts up and spotlights trends that are happening anyway. So, people using social tools to come together and give collectively is a really strong theme with GivingTuesday, kind of the main thing that we think motivates and inspires people; and donors then sharing their donation activity, really bringing giving more into the public square, has been on a real upward trajectory across all of GivingTuesday's eight years. More diverse giving to multiple causes, so no longer the sort of December 31st, sitting down at your desk, writing a check to the same old nonprofit year after year, but really spreading the generosity around. And then giving money as one reflection of a variety of generous behaviors, and social media as a means of connecting those threads. So, especially for young people, their generosity is really spread around: they might give a donation, they might give through crowdfunding, they might volunteer, they might march, they might be activists. And it's useful, I think, to see donation in the bigger generosity context. 03:37 CHRIS DELATORRE: The GivingTuesday Data Collaborative started with a simple idea, which was to understand how much money was being donated just in the United States, but you encountered a number of challenges.
For instance, there wasn’t a centralized repository where all this data was held. A lack of collaboration in the social sector made it even more challenging. How are you working to encourage collaboration and transparency while also ensuring privacy for so many donors across the globe? Has the journey led you to new models of governance? 04:12 ASHA CURRAN: Absolutely. And in fact, at my Stanford PACS Fellowship was really sort of the place where we started thinking through a lot of those big questions and trying to figure all of these governance questions out, really prioritizing security and privacy, while at the same time, trying to build what essentially will be the sector’s largest global data collaborative. Which is essentially the same kind of analytics platform for the giving economy that all other industries have. So, we’re not creating a brand new model, we’re just creating a new model for this industry, one that is desperately needed. “That’s where you start seeing individual citizens start to really understand the power of their own agency.” And with that kind of data collaborative, we currently have over 60 partners, of both nonprofit and for-profit, you know, payment processors, different kinds of platforms. And with those kinds of tools and that kind of data, we’re able to collectively examine some of the questions and answers that commercial entities have been doing for decades. So, who gives to which causes? What motivates them to act? How can they be retained and engaged? Why do supporters stay? Why do they go? What other ways are they expressing their generosity? So, there’s a big opportunity here for the sector I think to look at our information, both qualitative and quantitative, collectively, and find information, find answers that benefit everybody, that benefit everybody in every corner of the sector. 05:41 CHRIS DELATORRE: You were recently appointed new board chair of guardian.org. I’m thinking of the example of philanthropy and local journalism. Julie Sandorf recently wrote a piece in the Stanford Social Innovation Review where she points to the need to draw on many different types of donations in order to build resilience, instead of relying too heavily on advertising. But this requires new strategies and insights, right? How can the data from GivingTuesday reveal good places or new strategies for regional movements to focus their campaigns and limited fundraising resources? 06:23 ASHA CURRAN: Yeah. So, obviously, that’s—that particular that particular thing, media, local media, local journalism, cause-related journalism, it’s all very near and dear to my heart because of my work with the Guardian. From a GivingTuesday perspective, it’s really interesting, right? One of the things that’s happened with GivingTuesday that we never expected that has only been going from strength to strength in the recent years is the rise of these regional GivingTuesday movements, so tiny towns, big cities, you know, entire states coming together to create this generosity movement that draws together all of the sectors of that community, the local government, for-profits, nonprofits, schools, houses of worship. And because of that or because of all of those sectors coming together in high, high levels of collaboration and civic pride and civic engagement, it really—one of the things that that has done, to your point, is to work to surface what issues are—what challenges, what opportunities are happening in those local communities. 
The social sector has a lot to learn from the innovation network that has emerged from GivingTuesday. Article by Asha Curran. Now the truth is, data-wise, that's still tough. It's really tough to get hyperlocal with philanthropic data, but qualitatively, the stories that come out of those community campaigns are so strong that they really do work to amplify the unique challenges and opportunities of each of those local communities. And I do think it's really important that nonprofits, in particular, collaborate with one another, much more than they are now, to form a sort of resilient and vibrant civil society in any regional community where everything is interconnected. And then also to form those relationships with other sectors of that local community. That's where you start seeing really strong civic engagement and where you start seeing individual citizens really understand the power of their own agency—even if they're not billionaires, even if they're not holding elected office—and really start understanding that they have significant power to impact where they live and the civic space that they share with others. 08:38 CHRIS DELATORRE: I want to lead this last question with something you said about the mission of GivingTuesday as it relates to local communities. You said, “We're all driving toward a common goal, which is a more generous human society. And yet, GivingTuesday in each of these places really reflects the local identity and feeling of those different places.” Why is understanding digital technology so crucial to creating a more generous human society? What are some examples of how online giving looks unique from one country to another? 09:16 ASHA CURRAN: So, I think understanding digital technology is crucial because that is the means by which we are connecting all of these different threads right now. It is equally important to understand the dynamics behind the technology. Why is it that people are so attracted to movements now? Why is it that people want to co-create and adapt and co-own the things that they are passionate about and the causes that they want to become involved in and make an impact in? And so, you know, understanding all of that, not just how to open a Twitter account, not just how to have a working donate button, but really the dynamics that underpin it all; all of that is really, really crucial, and the sector is frankly still quite behind in rapidly adapting to all of those changes that we're seeing. So, I think one of the opportunities of GivingTuesday is that it's a global learning platform, right? So online giving, and giving in general, and generosity in general, looks different in South Africa and Brazil and Russia and Croatia, but all of those places have a massive amount to learn from one another. So, it's also about using technology as the means by which we, as social sector actors, connect with one another, and the intentionality that we bring to that. Are we using it to share best practices, to share ideas, to be transparent, and to be careful, and to be thoughtful? That's where I think the game-changing nature of this movement comes in. 10:49 CHRIS DELATORRE: Asha Curran, CEO of GivingTuesday, thank you. To learn more about GivingTuesday, visit GivingTuesday.org and follow Asha's work on Twitter @RadioFreeAsha and @GivingTuesday. Digital Impact is a program of the Digital Civil Society Lab at the Stanford Center on Philanthropy and Civil Society.
Follow this and other episodes at digitalimpact.io and on Twitter @dgtlimpact with #4Q4Data. The post Crowdfunding a Space for Civic Engagement appeared first on Digital Impact.
12 minutes | Sep 6, 2019
Data Incidents: Design With Responsibility in Mind
Digital Impact 4Q4 Podcast: Jos Berens and Stuart Campo on Humanitarian Data SUBSCRIBE TO THIS PODCAST ON iTUNES. 00:00 CHRIS DELATORRE: This is Digital Impact 4Q4, I'm Chris Delatorre. Today, we're joined by Stuart Campo and Jos Berens at the UNOCHA Centre for Humanitarian Data. In March, the Centre introduced guidelines to help UNOCHA staff better assess the sensitivity of the data they handle in different crisis contexts. The Centre defines data incidents as events involving the management of data that have caused harm, or have the potential to cause harm, to crisis-affected populations, organizations, and other individuals or groups. This could be a physical breach of infrastructure, unauthorized disclosure of data, or the use of humanitarian data for non-humanitarian purposes. 00:51 CHRIS DELATORRE: Jos, can you walk us through an example of a data incident (real or imagined)? Sarah Telford, who heads the UNOCHA Centre for Humanitarian Data, has described the sector as reticent about sharing. Why is it so difficult for humanitarian organizations to be open about these incidents when they happen? 01:10 JOS BERENS: Sure, Chris. So, a hypothetical example of a humanitarian data incident would be a response setting in which we have an armed actor looking to identify the location of an ethnic minority in order to do harm. Now if that armed actor were to raid an office of a humanitarian organization and seize several hard drives on the premises, that could be the beginning of an incident. If those hard drives contain unencrypted beneficiary data on members of the ethnic minority that this armed group is looking to target, including their location, then you can see how that could become an issue. Even if unique beneficiary identifiers have been removed, that individual privacy protection would not prevent group-level targeting of the ethnic minority. And so, if the armed actor were to target this minority — and this could lead to injury or even death — that would be an example of a humanitarian data incident. Now, it's important to note that data incidents do not always need to be caused intentionally by an outside bad actor. They can also be caused by accident, often due to staff unawareness of risks associated with humanitarian data management. “You can't manage or prevent an incident if you don't understand how it arises in the first place.” Now, to the second part of your question regarding openness about incidents. A key reason why organizations are currently not being very transparent about humanitarian data incidents is that there's no clear incentive for this transparency. While on the other hand there are definitely disincentives, including possible exposure to similar incidents, damage to the reputation of an organization, a chilling effect on data sharing, and other consequences. 03:20 CHRIS DELATORRE: Stuart, you've laid out four aspects of data incidents — a threat source, a threat event, a vulnerability, and an adverse impact. How can this breakdown help organizations, not only in terms of managing crises but also in taking preventative measures with data? 03:39 STUART CAMPO: Thanks, Chris. We reviewed a number of different models for risk assessment and risk modeling, and ultimately landed on this approach, which is borrowed from the National Institute of Standards and Technology of the US Department of Commerce, or NIST. We think this is relevant to the humanitarian sector because it's a clear approach that's straightforward to adapt.
Let’s think about the example Jos just shared, where these four aspects are really easy to identify. In the scenario that he described, the threat source is the armed group that’s looking to target members of this population. The threat event is the actual raid of the facility where the hard drives holding the data are located. The primary vulnerability that allows this to become an incident is the fact that the hard drives contain data that’s unencrypted. We might also identify the absence of robust physical security measures and related protective measures for the data as vulnerabilities that contributed to this incident.

GUIDANCE NOTE ON DATA INCIDENT MANAGEMENT
“Humanitarians have not had a common understanding of what comprises a data incident, nor is there a minimum technical standard for how these incidents should be prevented and managed.”

Finally, the adverse impact is the actual misuse of the data, whether it’s the beneficiary data in its rawest form or the aggregated form cleaned of some of the identifiers Jos mentioned. The threat actor using this data to target, and potentially injure or kill, members of the population is the impact that ultimately defines this incident.

So, how does this breakdown help humanitarian organizations manage data incidents more effectively? As one of our collaborators on these notes, Nathaniel Raymond of the Yale Jackson Institute, often says, “You can’t ‘do no harm’ if you don’t know the harm.” The same holds true for data incidents. You can’t manage or prevent an incident if you don’t understand how it arises in the first place. By unpacking the source of the threat, the threat event itself, the underlying vulnerabilities, and the adverse impacts that characterize an incident, humanitarian organizations can improve their understanding of how these incidents unfold and put measures in place to prevent them. In our experience, organizations need to spend more time thinking about the vulnerabilities and related predisposing conditions that allow threat sources and events to cause adverse impacts, rather than focusing on the more nebulous specter of what different threats and events might look like. Most vulnerabilities are, sadly, weaknesses in internal systems, procedures, and practices that are largely avoidable if addressed directly. And so that’s a really good place for organizations to start.

06:05 CHRIS DELATORRE: Jos, responding to Stuart’s point, preventing and responding to data incidents is clearly a need for the humanitarian community. How can organizations prioritize response and prevention techniques in their strategic planning and workflows?

06:19 JOS BERENS: So, we’ve identified three areas of investment to improve humanitarian data incident management, and these are areas of investment for both individual organizations and networks. The first area of investment is to establish a common understanding of what a humanitarian data incident is, and that starts by understanding the causal chain that can lead to data incidents for specific offices and systems. The second component is identifying key threat actors and vulnerabilities, and understanding existing security controls and their effectiveness. And the third component is mapping existing data incident management capacity and determining whether it is positioned appropriately.
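The four risk factors Stuart walks through above can be restated as a simple record. This is a minimal sketch loosely patterned on the NIST risk factors he mentions, not the Centre’s or NIST’s implementation; the class and field names are hypothetical, and the values restate the scenario discussed earlier.

```python
# A minimal sketch: threat source, threat event, vulnerabilities, and adverse
# impact captured as a plain record, filled in with the hypothetical scenario.
from dataclasses import dataclass
from typing import List

@dataclass
class DataIncident:
    threat_source: str          # who or what initiates the threat
    threat_event: str           # the action that exploits a weakness
    vulnerabilities: List[str]  # internal weaknesses that let the event cause harm
    adverse_impact: str         # the resulting harm to people or organizations

hypothetical_incident = DataIncident(
    threat_source="Armed actor seeking to locate an ethnic minority",
    threat_event="Raid on a field office and seizure of hard drives",
    vulnerabilities=[
        "Beneficiary data stored unencrypted on local hard drives",
        "Weak physical security at the premises",
    ],
    adverse_impact="Group-level targeting of the minority, risking injury or death",
)

# Stuart's advice is to focus on the internal, largely avoidable weaknesses;
# writing an incident down in this form forces them to be listed explicitly.
for weakness in hypothetical_incident.vulnerabilities:
    print("Review controls for:", weakness)
```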
Once clear definitions and processes are articulated, it’s important to invest in staff awareness and to support a culture of open dialogue about incidents, in which proactive reporting and management of incidents is incentivized and not punished.

“The guidance note will offer examples of how to develop and use predictive analytics ethically in humanitarian action.”

The second area of investment is to follow the steps for data incident management. That starts by taking measures to put security controls in place to mitigate the risk of data incidents, and sharing that best practice with partners. Second, it’s important to build on existing work in the humanitarian sector to fill governance gaps which can create vulnerabilities for the organization. Third, it’s important to engage with organizational partners to set up information channels around data incidents to share lessons learned. And the fourth component is to share known vulnerabilities in a controlled manner with trusted counterparts for cross-organizational learning.

The third area of investment is to support continuous learning. That is done by supporting the development of improved data incident management, for example by organizing training and drills for staff based on scenarios that are likely to occur in different operational settings. These exercises should occur regularly, and they may even involve multiple organizations training and drilling together. And finally, it’s critical to document cases of data incidents so that we can learn over time.

08:59 CHRIS DELATORRE: Stuart, the next set of notes from your project will focus on data responsibility in public-private partnerships and on predictive analytics and ethics. How will this build on what you’ve learned about the use of data in the humanitarian community? How can our listeners connect with the project?

09:15 STUART CAMPO: As with the first two notes in the series, including the one we’re discussing on this podcast, we’ve really been drawing on the Centre’s experience managing data in different response environments, as well as that of our collaborators and different partners in the sector. That includes so-called traditional humanitarian actors as well as non-traditional actors. Public-private partnerships have been a major topic of interest in recent months, and I think you’ve covered that extensively on some of the episodes you’ve produced this year. Our goal with the note is to highlight what good practice looks like, rather than focusing on examples of poor or risky practice, because we think it’s more constructive to look to the city on the hill and then help think about how to get there. We want to help colleagues within OCHA and other humanitarian organizations, as well as our private sector…
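Jos’s closing point, that incidents should be documented so organizations can learn over time and shared in a controlled manner with trusted counterparts, might translate in practice into a simple incident register. The sketch below is hypothetical; the field names are not taken from the OCHA guidance note.

```python
# Hypothetical entry in a data incident register. Keeping such records supports
# the documentation, controlled sharing, and scenario-based training described
# in the conversation above.
import json

incident_record = {
    "context": "Field office in a response setting (hypothetical)",
    "threat_source": "External armed actor",
    "threat_event": "Seizure of storage media during a raid",
    "vulnerabilities": ["Unencrypted beneficiary data", "Weak physical security"],
    "adverse_impact": "Potential group-level targeting of an affected population",
    "corrective_actions": ["Encrypt data at rest", "Review office security plan"],
    "shared_with_trusted_partners": False,  # shared only in a controlled manner
    "lessons_learned": "Encrypt beneficiary data before deployment to the field.",
}

print(json.dumps(incident_record, indent=2))
```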