• How AI is destroying our moral & civil efficacy ft. Elizabeth Adams

  • Oct 19, 2021
  • Length: 27 mins
  • Podcast
  • Summary

  • How often do we trust the technology around us? Should we ever? Elizabeth Adams, CEO and founder of EMA Advisory Services, wants to know, especially as it relates to AI surveillance. Smartphones, social media, and facial and voice recognition are commonplace for many. But do we know what, if any, ethical considerations shaped their development? That’s why Elizabeth is on a mission to fight for ethical, human-centric AI. Join us as we uncover hard truths about the role civic tech plays in our communities.

Key Takeaways:

[1:56] Elizabeth, a long-time technologist, shares how she came to be involved in the ethical use of AI. After being part of the working poor for many years, she decided to focus on giving a voice to the voiceless.

[4:31] How does bias get coded into facial recognition? Systems sold to and trained for law enforcement can be biased in ways that flag Black and Brown people as more suspicious. This can do irreversible harm to communities that have traditionally faced discrimination.

[6:00] It’s not just facial recognition technology that can be biased and ultimately harmful; other computer vision technologies can be too. Elizabeth discusses an example from COVID, in which a computer vision system misidentified an infrared thermometer as a firearm more often for darker-skinned users than for lighter-skinned ones. When this type of technology is in the hands of governing bodies, it can be dangerous to civilians.

[6:20] Elizabeth’s work with AI is first and foremost about making tech, especially surveillance tech, safe for citizens. That work took root in the city of Minneapolis, where she zeroed in on civic tech initiatives. Elizabeth explains that civic tech is when the government and the community work together in shared leadership on decisions about which technologies should be used to help govern society.

[7:27] Elizabeth discusses POSTME (Public Oversight of Surveillance Technology and Military Equipment), the coalition she founded in Minneapolis. The murder of George Floyd by former police officer Derek Chauvin in 2020 sent a shockwave across the world, one that resulted in public demand for greater accountability and oversight of the way citizens, and especially communities of color, are policed. As a technologist focused on civic tech, Elizabeth uses her expertise, coupled with the power of advocacy, to change the kinds of tech that police in Minneapolis can use.

[10:41] Often, those doing the surveillance are too removed from those being policed. This is especially dangerous for Black and Brown communities: if the police don’t know the people they’re supposed to be serving, they often fail to distinguish between who is a threat and who isn’t.

[13:49] Clearview AI is a facial recognition technology designed for use by law enforcement. When it was adopted by the city of Minneapolis, Elizabeth’s coalition discovered the tech was using data in clearly unethical ways. In February 2021, the Minneapolis City Council voted unanimously to ban the use of facial recognition technology. Though hard-won, this was a big victory for Elizabeth and her team.

[16:01] So what business does AI-driven facial recognition have in the hands of the law? Elizabeth explains how it could be used for good, from helping find a missing person with dementia to identifying the perpetrator of a crime.

[19:18] Whether bias is coded into the AI itself or lies with those using it, we need more attention to how we govern it, and that needs to start at the design stage.

[20:11] As consumers, we trust new technologies too easily and forget to ask who may be harmed by them. Elizabeth gives the example of Hello Barbie, an AI-powered doll launched in 2015 and later discontinued amid concerns that it could not only speak to kids but listen to them too.

[23:02] Elizabeth and other leading technologists have given so much to society, but no one asks what they have given up. Time, educational goals, and personal moments with family are all sometimes sacrificed to the work of creating new, ethical AI that is safe for everyone.

[25:20] With endless opportunities to innovate, we need to ask: what is a technology’s purpose, and who is it serving? How can it bring us together, and who might it hurt?

Quotes:

“I made a decision that I would definitely focus on those who are the voiceless, those who have no seat at the table and have no decision-making power or shared decision-making power at the table.” - [2:23] Elizabeth

“It starts in the design session with the data. And if the data is not diverse, then the system output will not be able to identify diverse people.” - [4:50] Elizabeth

“Often, those doing the surveillance are too removed from those being policed.” - [10:41] Jo

“I don't think that we can live in a world post 9/11 here in the US without some sort of surveillance. However, it needs to be ...