This week on the show, hosts of the podcast Mystery AI Hype Theater 3000, Emily M. Bender (Professor of Linguistics at the University of Washington) and Alex Hanna (Director of Research at the Distributed AI Research Institute), join Cayden to help make sense of the inescapable hype around all things “Artificial Intelligence.” What’s being oversold? What are the actual threats? And what exactly are we talking about when we say A.I.? Hype or not, the advancement of technology in the workplace has strong implications for workers, which some unions and communities are already organizing to protect themselves against.
Also on the show, Cayden and his guests will look at the ongoing campus protests for divestment from Israel and ceasefire in Gaza.
Support this show and others like it by becoming a Patreon member: patreon.com/convergencemag
[00:00:00] Cayden Mak: Welcome to Block and Build, a podcast from Convergence Magazine. I’m your host and the publisher of Convergence, Cayden Mak. On this show, we’re building a roadmap for people and organizations trying to unite anti-fascist forces in order to build the influence of a progressive trend while blocking the rise of authoritarianism in the United States.
[00:00:24] This week, we are looking at some big headlines. Campus encampment protests continue to spread across the country this week, and I saw this morning that Sciences Po in Paris has started the first encampment in Europe. Students across dozens of campuses are demanding that their schools divest from Israel and calling for an immediate ceasefire in Gaza.
[00:00:42] Over 500 arrests have already happened, including of students, professors, and members of the press. Texas Governor Greg Abbott even went so far as to call in state troopers to UT Austin, escalating a situation that absolutely did not need to be escalated. Meanwhile, Biden signed a $95 billion military aid package to Ukraine, [00:01:00] Israel, and Taiwan on Wednesday.
[00:01:01] Also on Wednesday, the U.S. State Department refused to openly support U.N. calls for a probe into the discovery of mass graves at the Nasser Medical Complex and Al-Shifa Hospital in Gaza. And there are a few items in the world of tech that you may have missed amongst all of this. First, Biden found some time on Wednesday to sign the unusually fast-moving bill, certainly fueled by xenophobia, which will ban TikTok in the U.
[00:01:24] S. unless its owner, ByteDance, agrees to sell it to a U.S.-based company within the next year, reaffirming the George W. Bush-era stance: civilian surveillance is fine, but only if we’re the ones who get to do it. Google employees last week also staged a protest under the banner of No Tech for Apartheid, demanding that the company end a cloud computing contract with Israel.
[00:01:45] So far, 50 employees have been fired, some of whom, protest organizers say, were just non-participating bystanders. And on Monday, hundreds of Kaiser Permanente nurses, members of the California Nurses Association/National Nurses United, gathered outside of a San Francisco [00:02:00] hospital to protest the use of AI in their workplace.
[00:02:03] Joining me to discuss some of these stories, as well as how AI hype and tech surveillance are wrapped up in all of this, are the Director of Research at the Distributed AI Research Institute, Alex Hanna, and Professor of Linguistics at the University of Washington, Emily M. Bender. Collectively, they are the hosts of a podcast I listen to religiously: Mystery AI Hype Theater 3000.
[00:02:24] Thank you both so much for joining us today. I think, Alex, one of the things we were talking about yesterday that seems apropos is that you and I know each other from way back, when we were graduate student labor organizers. So a lot of the campus protest stuff feels particularly timely to discuss with you.
[00:02:42] So yeah, Alex, thank you so much for joining us.
[00:02:46] Alex Hanna: Yeah. Thanks for having us on, Cayden. Really great to be on here.
[00:02:50] Cayden Mak: Yeah. And Emily, it’s been a pleasure to meet you, and thank you for joining us as well.
[00:02:53] Emily M. Bender: Yeah. My pleasure. I’m excited to be part of this project.
[00:02:56] Cayden Mak: Awesome. One of the things that, [00:03:00] before we jump into the headlines and these news items, I really want to take some time to do is pull apart what we’re really talking about.
[00:03:09] We’re talking about AI and large language models, or LLMs: terminology that the listeners of this podcast may have heard but may not really know what it means in practice. And Emily, I’m going to toss this to you first to see if you want to take a first crack at demystifying what this stuff is.
[00:03:27] Just ’cause I think, yeah, it’s easy to get lost in, right?
[00:03:31] Emily M. Bender: It’s easy to get lost in, and it’s not an accident that you get lost in it. Because AI, artificial intelligence, is a marketing term. So we are being sold something when something is called AI. And it’s a complex set of things that we’re being sold.
[00:03:45] Part of it is the idea that all of these different, diverse technologies are actually one thing. And that way, anytime you come across something that seems impressive (hey, the camera on my phone can figure out where to focus on a face, right? Or hey, the spell [00:04:00] checker on my computer got a lot better),
[00:04:01] then we think it’s all one thing that’s getting smarter and smarter, when in fact it’s a bunch of disparate technologies. Artificial intelligence also conjures up notions of sentient, autonomous entities that might therefore have accountability, and it allows people to displace accountability.
[00:04:17] And probably the most oppressive thing right now to people is the large language models like ChatGPT. And those are an enormous parlor trick, an enormous carbon-intensive, water-thirsty parlor trick, which is basically technology designed to come up with plausible sequences of words based on a very large training corpus.
[00:04:38] And that’s all it is. But because language is inherent in how we exist, how we relate to each other, we immediately imagine a mind behind the text when we encounter it, and so we project our own sentience, our own feelings, our own thoughts onto these machines. And then if ChatGPT is AI, and the system that’s [00:05:00] being used to surveil public spaces for gunshots is AI, then the apparent effectiveness of ChatGPT makes some of that other technology seem way more plausible than it is.
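A minimal sketch of what “plausible sequences of words based on a very large training corpus” means in practice, assuming nothing about how ChatGPT itself is built (it uses neural networks at vastly larger scale): a toy bigram model in Python that only counts which words follow which in a training text and then samples likely continuations. The corpus and function names here are invented for illustration.

```python
# Toy "language model": count which word follows which in a training corpus,
# then sample a plausible continuation. This is the core trick, scaled way down.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat purred on the warm mat".split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)  # record observed continuations

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # pick a plausible next word
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat purred on the mat and the cat"
```

The output can look fluent without there being any understanding behind it, which is the “mind behind the text” illusion described here.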
[00:05:12] Cayden Mak: We’re really talking about something that’s pretty extensive, and one of the things that you brought up in that little introduction is that we are talking about things that are dealing with natural language, ostensibly, like the language that you and I are using to talk right now, but also other kinds of data: supposed gunshots, loud noises.
[00:05:31] I wonder if one or both of you could talk a little bit to like where these things are gathering data from and why that should be alarming to organizers.
[00:05:40] Emily M. Bender: Yeah, absolutely everywhere. But I think maybe Alex has some more specific thoughts on that.
[00:05:46] Alex Hanna: Yeah, absolutely. The data is coming from wherever they can get it, which is basically whatever is not nailed down on the internet.
[00:05:55] Groups have been collecting this, and there have been a few places in which these data [00:06:00] sets have been collected together. So for instance, there’s a dataset called Common Crawl, which is maintained by a San Francisco nonprofit and had been this great thing for researchers for years. And then somebody at OpenAI,
[00:06:15] I think they may have been the first, was like, oh, we could use this for training these language models. And then they basically went ahead and used that as a way to feed language models. Then you’ve also been seeing deals that OpenAI has been doing with Axel Springer and Reddit and all these other places where they have a bunch of user data.
[00:06:37] And those data are effectively, sorry, Emily’s cat is braying so loudly on the air.
[00:06:47] Emily M. Bender: I promise I’m not a cat.
[00:06:48] Cayden Mak: For folks only listening to the audio: there was a moment on video where it did look a little bit like Emily was turning into a cat.
[00:06:55] Alex Hanna: That’s incredible.
[00:06:56] Incredible. So yeah, the data is coming from these different [00:07:00] places, and because it’s internet data, the over-representation of English you would expect is exactly what’s happening: it’s really just edging out any language that would be used on the African continent, or Indigenous languages, or really the languages in what computer scientists call the long tail, effectively languages that don’t have much data. And speaking of tails, Emily’s cat is purring at an opportune moment.
[00:07:29] And yeah, the data are effectively coming from all these places. And because those data are also coming from places like Reddit and social media, there’s also no guarantee of privacy on these things. There was a researcher at Google, Nicholas Carlini, who has done a few studies on this. And on an open data set, where the system was also open, his team basically showed that if you had part of a piece of private data, like a [00:08:00] name and maybe part of a phone number, and you prompted one of these large language models with that, it could expose or output the rest of that information, like the rest of the phone number or an address.
[00:08:12] And so there’s really scary stuff around this. There are probably some guardrails internally, but there’s no guarantee that these systems won’t pop out something private if you have part of the information.
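A hedged sketch of the kind of probe that line of research describes: prompt a model with part of a piece of private data and check whether its continuation reproduces the rest. The `generate` function below is a stand-in for whatever text-generation call a given system exposes, not a real library API, and the example strings are invented.

```python
# Sketch of a training-data extraction check (in the spirit of Carlini et al.).
# `generate` is a placeholder for any text-generation call; not a specific real API.
def looks_memorized(generate, known_prefix: str, secret_suffix: str) -> bool:
    """Prompt with a partial piece of private data and test whether the
    model's continuation reproduces the rest of it verbatim."""
    continuation = generate(known_prefix)
    return secret_suffix in continuation

# Hypothetical usage:
#   known_prefix  = "Jane Doe's phone number is 555-01"
#   secret_suffix = "23"
# If looks_memorized(model_generate, known_prefix, secret_suffix) returns True,
# the model has regurgitated memorized training data.
```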
[00:08:25] Emily M. Bender: Yeah. And it’s collected at a scale, because the whole thing that makes these work, the whole heart of the parlor trick, is absolutely enormous amounts of data.
[00:08:33] And so there’s this push to get to data sets that are just too big to actually document, to actually understand what’s in there. And so it has to be collected indiscriminately. And this is also true for the image models. And the flip side of this is there’s no transparency, right? So not only are companies gathering everything, like Alex said, that’s not nailed down, which is a great way to put it,
[00:08:54] But they also have no requirement to disclose what it is that they’re using. [00:09:00] And this is problematic in a few ways. One is it means you can’t actually make an informed decision about whether or not to use the thing, but it also means that they go around claiming emergent capabilities.
[00:09:09] Like, look, we didn’t train it to do X, Y, Z, but it can. And every time they’re making that claim without actually sharing the data, it’s like, for all we know, the thing is in the data set. And it reminds me of magicians who never explain their tricks. It’s exactly equivalent to that.
[00:09:26] Yeah,
[00:09:27] Cayden Mak: No, it’s interesting. The things that you’re describing sound like they’re literally designed to obfuscate what the actual process is, which to me raises the question: we’ve seen this sort of proliferation of AI, with companies and other folks pitching their new tools to us as AI. How much of this is actual advancement and how much of this is hype?
[00:09:51] Maybe it’s hard to tell, but obviously this is something that you all talk about all the time. And I’m curious, for our listeners, how can we, [00:10:00] what assumptions can we make about that?
[00:10:02] Emily M. Bender: So you can assume that if they aren’t telling you otherwise, it’s built on stolen data.
[00:10:06] You can assume, if they aren’t telling you otherwise, that there are people in the background who are doing a lot of precarious work to do the data labeling, or possibly even to actually do the thing at runtime. Amazon recently had to say they’re discontinuing their Just Walk Out thing.
[00:10:21] And in the media coverage of that, we learned that the so-called automated systems that were supposedly based on computer vision tracking you around the store, like luxury surveillance in spades, right? Actually involved a thousand employees in India reviewing those videos so that they could actually charge you the right thing.
[00:10:37] And one of the car companies, Alex, was it Cruise or was it?
[00:10:41] Alex Hanna: It was Cruise. It was Cruise. Kyle Vogt, the CEO of Cruise, basically, there was a New York Times article that said that at certain kinds of decision moments, they would rely on workers to decide whether a car did one thing or another. And rather embarrassingly, [00:11:00] the CEO admitted it on Hacker News, which is one of these weird VC fanboy sites
[00:11:07] that’s hosted by a venture capital accelerator named Y Combinator. And if you live in the Bay Area, you may know Y Combinator’s CEO, Garry Tan, who’s like a real kind of fascist in San Francisco politics. Anyways, broad aside. But yeah, he admitted on Hacker News that, yeah, okay, we only use it for two to four percent of the time.
[00:11:31] And I’m like, okay, that’s still hundreds, if not thousands, of decisions being made. This thing is not fully automated at all.
[00:11:39] Emily M. Bender: He’s also saying that it wouldn’t be, in quotes, cost-effective to try to get that number down. And this is not surprising, right? If you think about
[00:11:47] the problem: the self-driving cars are perpetually right around the corner, right? And they never fully arrive because, speaking of long tails, the actual situations you have to deal with are too many [00:12:00] to enumerate. And so it’s always going to require a human touch. And the problem, sorry, cat in the way, one of the big issues here is that things that get sold as AI or sold as automation are really just a way to treat the labor of other people as something mechanistic and fully dehumanized, because it’s got this computer skin over it.
[00:12:22] And we see that also with Amazon, sorry, the Amazon Mechanical Turk system, which was deliberately set up to do that and named very on the nose.
[00:12:32] Cayden Mak: Yeah. I think that does also speak back to this question of what is this moment that we’re seeing in the discourse? Because I remember talking about Mechanical Turk labor when I was in grad school, like over a decade ago. We had a faculty member in our program who was specifically interested in labor and in virtual worlds.
[00:12:51] And that was something that she taught about a lot to her undergraduates. And so it’s wild to me that this is happening right now, because it seems like [00:13:00] this has been a conversation on some level for a long time. And so it seems to me that there’s something perhaps investor-driven about this, as opposed to there actually having been breakthroughs in technology.
[00:13:13] Alex Hanna: That’s the thing. And I’m guessing that person was Silvia Lindtner? I’m just, no, it was
[00:13:19] Cayden Mak: Oh gosh, what
[00:13:21] Alex Hanna: is name? It’s okay.
[00:13:22] Cayden Mak: Yeah, it’s been a long time. It’s been a long time since I thought about that stuff. .
[00:13:25] Alex Hanna: It’s okay. I’m like Michigan Digital Studies Labor?
[00:13:28] No, this is at
[00:13:29] Cayden Mak: SUNY Buffalo. Oh, okay. Okay. Stephanie Rothenberg.
[00:13:32] Alex Hanna: Oh, okay. Just going ahead and speculating now. I have Anna, my cat, here. Yeah, it has been a conversation for quite a long time. Amazon Mechanical Turk was released, I think, in the late two thousands, and, if not the first crowd working platform, quickly became the best-known one, and there have been a few studies of that.
[00:13:58] The big one was the [00:14:00] book by Mary Gray and Siddharth Suri, Ghost Work, which talks about the use of crowd working around the world: this rise of remote work in which people do these very small data annotations. And it’s been persistent, I think. More recently, people have connected it to large language models themselves.
[00:14:23] So for instance, there was great reporting from Karen Hao on the workers in Kenya that were basically doing screening of ChatGPT outputs for $2 a day, basically to ensure that you didn’t have awful things, beheadings, swearing, whatever, like Nazi propaganda, in your streams.
[00:14:48] And that’s still going on. For this stuff to work at all, that labor is being done. And it’s being done in a place that is not here, it’s [00:15:00] over there someplace. But that over there is not that far. It could be down the street. There are people that do this in the Bay Area.
[00:15:08] There are people that do this in different parts of the West. There are refugees that do this in Bulgaria and in Germany. It’s not like it’s over there. It’s just hidden, in terms of thinking about the labor economy of these AI systems.
[00:15:27] Cayden Mak: And that’s interesting, because it sounds like there are these labor issues on both ends of this weird pipe. And I know, Alex, you mentioned you were at Labor Notes last week, and that this discussion manifested as this protest on Monday at Kaiser in San Francisco, but also that the other end of the AI labor issues pipe is coming home to roost, literally, in the backyard of a lot of these companies. And I know I actually saw the statement from NNU [00:16:00] on probably your LinkedIn page before we talked about it.
[00:16:03] But I wonder if we could talk a little bit about some of the concerns that these nurses are raising. I think there may be some things in there that are also generalizable to other industries. But yeah, I was wondering if you could talk a little bit about that statement, about the Kaiser Permanente nurses’ concerns, and how it’s part of a broader trend, maybe.
[00:16:24] Alex Hanna: Yeah, absolutely. AI in general is being used as a cudgel to discipline workers or threaten them with firing or with downsizing. That’s across the board, and certain professions or certain unions are seeing this as more existential than others. I think the nurses, National Nurses United, are spot on.
[00:16:47] Also spot on is the Writers Guild of America, where AI became a huge part of the story, because the producers and the AMPTP, the streamers, they [00:17:00] wanted to say, let’s talk about having AI in the writers’ room. We’re not going to implement it. We’re just going to talk about it.
[00:17:07] And the WGA said, fuck no, we’re not bringing this into the writers’ room, because what’s going to happen is you’re going to come to us with a shitty AI-generated script, you’re going to force us to rewrite that, and the rewriting rate is much cheaper than the rate for actually producing a script. And then you’re going to basically cut writers’ rooms more dramatically than they’ve already been cut in the streaming era.
[00:17:33] And so the strike wasn’t only about residuals, which was a huge issue. It was also about AI in the writers’ room. And then SAG-AFTRA followed with a weaker win, but they did get a win: that they couldn’t do body scans of, typically, background actors to use them as digital replicas, which would have really cut out a lot of work for background actors.
[00:17:54] Yeah. And so the nurses are really saying, we’re marching in solidarity with the WGA, with [00:18:00] SAG-AFTRA, and really seeing this, because AI in healthcare is everywhere. My sister’s a nurse. She was at a conference in San Diego, and she’s like, they can’t stop talking about LLMs.
[00:18:13] And in her job now, she works at Ohio State, and she’s just getting this AI stuff all over her job. She’s like, I’m sick of it. And so this statement from National Nurses United is really strong, because it’s saying, we’re not against science or progress, but these things are aimed at just making Silicon Valley and Wall Street richer.
[00:18:34] And I’m like, yeah, what a great analysis. Absolutely true. And there are so many examples of AI in nursing. That includes, I’m reading their statement here, quantifying nursing workload and patient acuity levels. So that would basically try to say nurses can take on more work when they actually can’t.
[00:18:56] It’s actually going to make the nurse-to-patient ratio worse. [00:19:00] But there’s also this prediction of how sick patients are, and we’ve seen cases in which insurance companies have been relying on AI. There’s this class action lawsuit leveled against UnitedHealth Group, in which they were using a tool developed by one of their subsidiaries, named naviHealth.
[00:19:20] And they had this thing called nH Predict. So basically, for patients that had Medicare Advantage, if they encountered an acute health event, like a fall or a stroke or a heart attack, it would then predict how many days of care they were allowed. And it was just giving them very few days of care, 20 days for a fall, when they’re still going through physical therapy and can’t do things alone,
[00:19:46] when their provider would say they need a hundred days of care. And they’re using this as a means of efficiency in the back office. And so that puts more pressure on nurses, but they’re like, these patients need this care. So there’s [00:20:00] that.
[00:20:00] There’s also remote patient monitoring. So again, this is another case of, instead of patient monitoring in the hospital or in the facility, they’re like, no, let’s just toss it to someone over there. Again, it’s another one of these cases of outsourcing. And then also these automated charting and nursing plans.
[00:20:19] And then there are also things like prediction of in-hospital conditions like sepsis, which is a major problem, where it could be useful in some cases, but it’s being used in ways that produce massive false alarms, and that’s really tiring nurses out. So the nurses are really spot on in saying, you don’t need to be putting AI, or things called AI, into these things; you need to be staffing these places up and not trying to cut corners wherever it seems convenient for you.
[00:20:52] And the president of National Nurses United, Deborah Burger, really put it plainly at Labor Notes. She said, [00:21:00] hey, AI is a tool of class warfare. And I’m like, yeah, let’s not mince words here.
[00:21:06] Cayden Mak: I love that. Yeah. No, sometimes you’ve got to call a spade a spade.
[00:21:10] Alex Hanna: Yeah.
[00:21:13] Cayden Mak: This raises some really interesting questions for those of us whose jobs maybe aren’t currently being impinged upon by AI, but who, as consumers, are surrounded by it.
[00:21:26] Like, just as private citizens doing whatever, all of the tools that I use for work are trying to foist AI assistants on me.
[00:21:35] I just
[00:21:35] Alex Hanna: went to my messaging app on my phone, Google Messages, and at the top it says Gemini view on your phone. I’m like, no, stay away.
[00:21:45] Emily M. Bender: And Meta just replaced their search function with their AI everywhere. And I’m like, I just want to go look up my friend’s wall. I don’t want to talk to your AI,
[00:21:53] Cayden Mak: yeah. Let me find my friends on Instagram so I can see their baby pictures. I don’t, leave me alone. Yeah.
[00:21:57] Emily M. Bender: Yeah, exactly. And like LinkedIn, you start writing and it’s like, [00:22:00] would you like to rewrite this with AI? No, I would not. So yeah, it’s constantly being foisted on us everywhere. And I think it’s worthwhile to look at that as basically pushing the results of labor exploitation onto me as a consumer.
[00:22:16] I don’t want any part of that. It’s also pushing, if we’re talking about text synthesis, it is pushing synthetic information. I talk about spills of synthetic information into the information ecosystem. A while back, there was somebody who used a large language model, maybe ChatGPT, to extrude something in the shape of a mushroom foraging guide
[00:22:34] and self-publish it on Amazon.
[00:22:37] Cayden Mak: Oh, no.
[00:22:37] Emily M. Bender: And I was reading some news coverage of it and an actual mycologist was saying it had dangerous things in there, like advising that people taste a mushroom to identify it.
[00:22:46] Cayden Mak: Seems like terrible advice.
[00:22:48] Emily M. Bender: Yeah. No. And so this is because these systems are designed to mimic human language use and they’re designed to mimic form.
[00:22:55] So you can make something that looks like a legal contract. You can make something that looks like a [00:23:00] medical diagnosis. And some of the time it’s going to, by chance, be right. But in the cases where you actually need the advice, need the information, you are not in a position to check it. And so when I see LinkedIn, and Meta, and everybody pushing this all the time, it’s like walking past a faucet that’s dripping, where you’ve only seen it for a moment, but just think about what’s happening over time: that is an ongoing drip.
[00:23:28] And in this case, it’s not just the waste. If you have clean water dripping, that’s wasted water, but in fact, it’s actually toxic stuff that’s getting dripped into our information ecosystem. My reaction to it is to turn off the faucets whenever I can to alert the people around me to what’s going on.
[00:23:45] And also, it’s such a pain, but to try to opt out anytime I have the chance to opt out of my data being collected.
[00:23:52] Cayden Mak: Yeah, that’s great. Super useful. And especially because we know it’s not just a consumer technology, right? One of the things that has been [00:24:00] coming out over the past couple weeks,
[00:24:01] and the reason that there was this big protest at Google, is because of the ways that this technology is being used in military and policing contexts.
[00:24:11] Emily M. Bender: Yeah, though by this technology, again, we’re talking about different technologies, but technologies that are sold as the same thing.
[00:24:17] Cayden Mak: That is a good correction.
[00:24:18] That it’s being pitched to us as the same friendly thing. Yeah. Useful thing. Yeah. But they are using different data sets to do different things.
[00:24:28] Emily M. Bender: Yeah, exactly. And it’s the same cloud compute. It’s the same sort of culture of, sure, we can automate everything.
[00:24:35] Sure, we can collect all the data. So certainly there’s sameness there, and you’re absolutely right that it’s not just consumer technology, but in fact a bunch of things that are far more directly destructive.
[00:24:45] Cayden Mak: Yeah. Maybe we can talk a little bit about these protests at Google.
[00:24:51] It’s remarkable to me that Google would go so far as to just straight up fire 50 people. It seems like a lot of folks.
[00:24:57] Alex Hanna: Not to me. It’s, [00:25:00] I think, yeah, they’ve fired other people before, for protests against Customs and Border Protection. That was in, gosh, what was it,
[00:25:10] 2020 or 2021, when they fired the Thanksgiving Four, or Five, depending how you count. And Google has been very vocal in basically supporting Israel and also quashing dissenters. They have an Israeli office. And after these people were fired,
[00:25:32] they put an Israeli exec in charge of all Google Research. I want to call out Mai Ubeid, who was someone that was doing a Google internship, a disabled woman. And then she was killed in Gaza, and nary a word about her from Sundar Pichai, the CEO, or anybody else.
[00:25:54] And so Google has had this contract with the Israeli [00:26:00] government called Project Nimbus, and it has been something that organizers have been fighting for years now, I think since 2021, so three years into this contract. Google argued that they weren’t actually supplying any kind of support for military purposes, which would go against their AI principles, which, I should say, are only in place because of activists who fought against another project, called Project Maven, against which there was major organizing. And then Google was shown, by The Intercept at first, to be suggesting that AI tools, again in quotes, AI, could be used for military targeting.
[00:26:42] And then it was shown by Time, by Billy Perrigo, that they were actually having their contract support the Israeli military. And the people that got fired are part of this longer campaign that’s been going on, No Tech for Apartheid. Oh, it was, [00:27:00] the producer, Josh, just said the CBP protests were in November 2019.
[00:27:05] Thank you for fact-checking that. Having some recency bias here. And yeah, so it’s been part of a longer-term campaign. People associated with the campaign interrupted Google Cloud Next last year, before the assault on Gaza began, or before this latest assault on Gaza began, and it has been continuous.
[00:27:25] There have been actions throughout. And so this firing: what they ended up doing was sitting in the office of the CEO of Google Cloud, Thomas Kurian, and then afterwards Google sent a very aggressive letter to all Google workers saying, do not bring your political opinions to work, and X, Y, Z. Even though, as we know from Howard Zinn’s great quote, you can’t be neutral on a moving train.
[00:27:51] And if your train is building weapons for the Israeli military, you certainly can’t be neutral. [00:28:00]
[00:28:00] Cayden Mak: Yeah. I think the other thing that we chatted a bit about in preparation for this episode was this report from +972 Magazine, which people may have seen, about Lavender, this system that I guess the IOF is using to target civilians in Gaza, and what it does for the Israeli military, because it’s not as though we’ve seen any actual discretion being used by the IOF in Gaza, right?
[00:28:29] So what is it that systems like this actually do? And I think that the question is perhaps a little more social and ideological than it is actually technological.
[00:28:39] Alex Hanna: Yeah. One thing, the way that this report starts, and the thing that’s very illustrative, is that it starts with this quote from this person named Brigadier General Y.S., who is confirmed to now be the head of Unit 8200, which is like [00:29:00] Israel’s signals intelligence unit.
[00:29:00] And effectively what he says in this is that we can’t produce targets fast enough, which is a monstrous thing to say. It makes me sick to
[00:29:11] Cayden Mak: my stomach actually.
[00:29:12] Alex Hanna: Yeah. And the quote from the article says that humans are the bottleneck, both for locating new targets and in the decision-making to approve the targets.
[00:29:22] And it is an ideological thing to say we need some kind of technology that’s going to produce these targets en masse. And these tools, Lavender and The Gospel and this other one with the gross name, Where’s Daddy?, effectively, instead of targeting where a suspected Hamas or Palestinian Islamic Jihad operative would be, provide a way to
[00:29:51] target entire homes, and to target junior operatives, basically saying, we’re not going to waste [00:30:00] our expensive American-provided smart bombs on them. We’re going to drop dumb bombs on them and take out entire buildings trying to target one person, and just go ahead and allow the acceptable rate of civilians being killed to be incredibly high.
[00:30:18] Emily M. Bender: Yeah. That reporting is extremely well done and extremely difficult to read because of just the absolute awfulness in there. But what we’re seeing is that the technology is functioning, on the one hand, to speed things up. They want to scale destruction, and apparently don’t want to just take all the bombs and drop them
[00:30:38] indiscriminately, so they’re pretending to pick specific targets to go after. And so they need to do this target selection process. And as Alex was saying, that’s slow if you do it by hand. So can we automate that? Can we do this faster? And then on the other hand, it displaces accountability.
[00:30:55] Cayden Mak: If you
[00:30:56] Emily M. Bender: can, if you can say it was the machine that did the picking, and [00:31:00] here’s where I think the apparent value of calling it AI comes in, because if you were rolling dice to pick a target, you might not feel as able to say, random chance told me, so that’s how I’m picking this one.
[00:31:14] But if you think it’s an algorithm, and the algorithm is called AI, then that maybe somehow makes it a little bit more palatable, either to the people who are engaging in this or to the public around them, who are meant to be kept in support of it. So the function here of so-called AI is to speed up the destruction and then also to displace accountability.
[00:31:36] And some of the discourse around this online was people saying, oh, nothing in there is really AI, so what is your point here? And I think that these are people who felt very defensive about people criticizing AI as a bad technology, which I don’t think anybody is doing. I think people are criticizing the actions of the people using the technology.
[00:31:57] And the fact that they’re calling it AI [00:32:00] is part and parcel with how calling things AI is really just an excuse to do things that are bad in some way. And this is just the most extreme example that I can imagine, the most tragic.
[00:32:13] Alex Hanna: It should be mentioned that they also revealed that they had been using Google Photos as a way to pick targets.
[00:32:20] There’s a way in which Google Photos basically groups people together. Even if you open it now, it’ll show a face, and they’ll use it effectively as a means to say, look at this person.
[00:32:40] And so what they had done is take grainy CCTV images of people in Gaza, and then a picture of a suspected operative, and effectively use it as a targeting system, which is wild. And this is a [00:33:00] stock functionality of Google Photos, and Google
[00:33:03] is taking no action on this. And I’m reading from this other recent report, also at +972, that was published yesterday by Sophia Goodfriend. She talks about how Google Photos says in their terms of service that you can’t use this to cause immediate harm to people, and effectively they’ve turned a blind eye to that.
[00:33:25] Emily M. Bender: It says something about who they think counts as people. No, there’s a lot of
[00:33:29] Cayden Mak: very chilling implications to violating their own policies in this way. And the other thing that strikes me as immediately relevant here is thinking about the ways in which some of this technology is deployed by domestic law enforcement.
[00:33:46] And in the realm of surveillance and data collection, those are things that are difficult to see in the same way, right? There’s a way in which the [00:34:00] technology itself obfuscates the effect that it has on us socially. And I know, Emily, when we were preparing for this, you mentioned this ShotSpotter thing. I don’t know if folks live in cities that have ShotSpotter, but here in Oakland it’s been a really big, contentious issue because, as I know from our colleagues at MediaJustice, ShotSpotter is not very good at even identifying what is or is not a gunshot.
[00:34:24] But it is allegedly a technology that will help police respond to gunshots, the discharge of a firearm. But
[00:34:32] Emily M. Bender: Yeah. So the idea behind it is supposedly: place a whole bunch of microphones and then do machine learning, which is a sort of anthropomorphizing name for pattern recognition over data sets, to be able to identify when a sound picked up by those microphones, and they make a big deal about how it’s multiple different microphones, so it’s triangulating,
[00:34:52] when a sound picked up by those microphones is a gunshot, so that you can then send the police in, with the idea that they are responding to an active [00:35:00] shooter situation, which definitely does not sound like a safe way to deploy people who have firearms themselves.
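To make the “multiple microphones, so it’s triangulating” part of that description concrete, here is a toy Python sketch of acoustic multilateration: given microphone positions and the times a loud sound reached each one, grid-search for the source location whose arrival-time differences best match the observations. This illustrates the general idea only, not ShotSpotter’s actual system; the positions, times, and grid are invented, and the separate step of classifying the sound as a gunshot is not shown.

```python
# Toy acoustic source localization from time differences of arrival (TDOA).
import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second

mics = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 400.0], [400.0, 400.0]])
true_source = np.array([130.0, 250.0])
arrival_times = np.linalg.norm(mics - true_source, axis=1) / SPEED_OF_SOUND

def locate(mics, arrival_times, grid_step=5.0):
    """Grid-search the area for the point whose predicted arrival-time
    differences between microphones best match the observed ones."""
    best_point, best_err = None, np.inf
    coords = np.arange(0.0, 401.0, grid_step)
    for x in coords:
        for y in coords:
            candidate = np.array([x, y])
            predicted = np.linalg.norm(mics - candidate, axis=1) / SPEED_OF_SOUND
            # Compare differences relative to the first mic, since the
            # absolute emission time of the sound is unknown.
            err = np.sum(((predicted - predicted[0])
                          - (arrival_times - arrival_times[0])) ** 2)
            if err < best_err:
                best_point, best_err = candidate, err
    return best_point

print(locate(mics, arrival_times))  # approximately [130. 250.]
```

Whether the detected sound actually is gunfire is a separate classification problem, and that is the part with the accuracy concerns raised above.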
[00:35:15] And I had, on learning about it, thought, how long until someone gets killed by police coming in in this situation? And in fact, that has happened. In 2021, and I’m looking at some reporting from South Side Weekly, done together with Wired by Max Blaisdell and Jim Daley, that appeared on April 24th, so in 2021, police fatally shot a 13-year-old boy named Adam Toledo while responding to a ShotSpotter alert. That was
[00:35:34] fully predictable. And as far as I know, there’s never been any released evaluation from ShotSpotter, like, how do they show this actually works, right? An open evaluation of this. And in this article, there’s a completely appalling quote from the ShotSpotter spokesperson, who refers to their singular purpose to close the public safety gap by [00:36:00] enabling law enforcement agencies globally to more efficiently and effectively respond to incidents of criminal gunfire where gunshot wound victims’ lives are in the balance.
[00:36:09] And if you actually cared about somebody who was bleeding out from a gunshot wound, it’s an ambulance that you would be sending, right? Not the cops. So the alarming thing in this reporting, though, is that apparently, when citizens of a city organize and manage to get the city to drop its contracts with ShotSpotter, the microphones stay, and the data keeps being collected, and apparently keeps being shared with the police.
[00:36:36] So the police aren’t paying for it anymore. The city’s not paying for it anymore, apparently, but that’s not stopping ShotSpotter from still doing the surveillance on behalf of these cities, which is super alarming, and I think a super important thing to know, both in terms of what has to be done if you’re in a city that’s had the stuff installed,
[00:36:55] but also just thinking long-term: any time we accept an additional layer [00:37:00] of surveillance, it is going to take extra work to dig it back out. And part of the economic thing behind that is that what these companies are selling is not the technology but the service. So the city didn’t buy the microphones.
[00:37:16] The city doesn’t have to do the microphone upkeep. ShotSpotter is selling the service. And so ShotSpotter is apparently able to decide whether they leave the microphones on or not.
[00:37:27] Alex Hanna: Yeah. There’s another example of that too, in San Diego, where General Electric provided these smart streetlights that had a recording capability as well.
[00:37:39] And thanks to a coalition of folks in San Diego, they were able to go ahead and cut that contract. But GE still owns all those data, and the city can’t do anything and can’t ask them to remove it. And that was all in the fine print that the city didn’t care to read. [00:38:00]
[00:38:01] Emily M. Bender: So we need to really think carefully that any, anytime there’s data collection, anytime something is sold as smart, the question to ask is, what data is being collected behind this, and where is it going, and then how do I opt out?
[00:38:14] And, just want to point out that these companies change their names. So the Southside Weekly Reporting is referring to ShotSpotter as ShotSpotter, but they also noticed that in 2023, they rebranded as SoundThinking. So just want to put that on people’s radar.
[00:38:28] Cayden Mak: SoundThinking is a heck of a rebrand too, right?
[00:38:30] It sounds like it could be anything from, I don’t know, like an e-learning company, or
[00:38:38] Alex Hanna: sounds like something you could buy at Sharper Image.
It’s like a Brooks Brothers neck pillow with speakers in it or something, I don’t know. And the
[00:38:50] Emily M. Bender: associated meditation music that’s been preselected for you. Yeah.
[00:38:55] Alex Hanna: Yeah. I can’t wait for the SoundThinking Headspace [00:39:00] collab to come out.
[00:39:00] Cayden Mak: Oh yeah. Yeah. It’s coming for sure.
[00:39:03] Speaking of that, there’s a thing that I would love to close on and touch on a little bit: the ideological work, or the frameworks, that are offered to us about AI, and the way in which we’re biased. I don’t remember which one of you in prep used the term automation bias.
[00:39:23] But really, talk about the work that does for these companies to make their technology seem appealing and interesting, and what is the work that it’s also doing on us culturally, right? I think that’s a deeply troubling question, and one that I really would love the listeners of our show to be thinking about as their work interfaces with this stuff.
[00:39:48] Emily M. Bender: Yeah. So automation bias refers to our tendency as people to look at anything automated and to think that it must be objective or more correct because a computer did it, right? And some of this [00:40:00] comes from our experience with learning arithmetic and then using a calculator: if we get the wrong answer out of a calculator, it is almost certainly because we used it wrong.
[00:40:08] And so we have this notion that the computer is right and we are fallible. And then you turn to these systems that are trained on massive datasets collected indiscriminately from everything on the internet that wasn’t nailed down, as Alex says, and all of these kinds of automation, which are effectively reproducing the patterns in that data.
[00:40:27] It’s full of bias. And you have folks like Ali Alkhatib and Abeba Birhane who point out that that kind of use of software, where you’re taking something that catalogs patterns from the past and then using it to make decisions about the future, is inherently conservative. It’s saying we approve of taking these patterns from the past to create the future.
[00:40:47] But when it is laundered through a computer, it just looks like the way things are. And another big example of this, and I’m thinking now of Safiya Noble’s work, especially her book Algorithms of Oppression: Google sells itself as organizing the world’s [00:41:00] information,
[00:41:00] Cayden Mak: right?
[00:41:00] Emily M. Bender: And so it is presented as just this
[00:41:02] sort of objective, anonymous, clever algorithmic way of collecting access to all the information, which it then just hands over to you. And so then when, for example, as in Noble’s work, if you search for “black girls” and you get lots of porn coming up as a result, then that’s just how the world is, right?
[00:41:19] Maybe that’s what people have put on the internet. Maybe that’s what people want to see. She says something like: people tend to think that what rises to the top in Google search results is the most correct or the most popular answer, when in fact it’s all driven by Google’s advertising model. But it’s presented to us as, well,
[00:41:34] it’s a computer, so therefore it is objective and correct and just showing you what’s out there. And it’s really important to keep that in view, and to always keep all the people in mind as you’re thinking about this technology. Who designed it? Who’s choosing to deploy it? Who did all of the data work in the background to make it go?
[00:41:52] Who produced the original data that it was collected from? Who chose how to collect that data? And when you keep those people in mind, I think it’s much easier to see that these [00:42:00] things are nowhere near as unbiased or objective, as if that could exist at all.
[00:42:04] Alex Hanna: Yeah. And I would just add on: there’s an ideology of AI hype, and of what kind of thing hype is.
[00:42:11] Hype is this idea that you need to get in on this thing, and if you don’t get in on it, you’re going to be left behind, you’re going to be called a Luddite. And that kind of appeal is maybe not as appealing for end consumers, or shouldn’t be appealing for most people on the left, I would argue, but is very appealing to middle managers and CEOs and people who are in control of corporate entities, government agencies, and whatnot.
[00:42:40] And this ideology of hype really gives in to that, and also cedes a certain kind of power, to technological artifice and to the people who are creating these things, right? That we should be worshiping people like Sam Altman and Sundar Pichai and Yann LeCun, or whoever these people are that [00:43:00] are the wise men, and they are almost universally men, who are developing these technologies. And it’s really a god complex,
[00:43:10] honestly. They say they have created a new intelligence, and therefore you really need to be paying attention to what they’re doing. And so hype has that functionality, and hype also ties into these cultural narratives about robots and things that can know everything, when they are incredibly limited in what they actually can do.
[00:43:31] And that is a fiction that I think we need to subject to our political-economic analysis, just like anything else.
[00:43:39] Cayden Mak: Yeah, that’s absolutely right. Let’s wrap up there, because I think there’s a lot to chew on here, especially for folks who are maybe not so familiar with this world and have seen a lot of this rhetoric popping up.
[00:43:54] But one of the things I like to ask people is: what are the things that you are reading, the media experiences [00:44:00] that have been grounding you or helping you keep going? Because you slog through AI hell constantly.
[00:44:07] Emily M. Bender: Yeah. We invite people to come join us on that slog over at Mystery AI Hype Theater 3000, where we survive it with ridicule as praxis, as Alex coined it. But for a specific other thing to shout out:
[00:44:20] I really enjoyed Dr. Joy Buolamwini’s book, Unmasking AI, which is part memoir, part current events, and part her own science story. And it’s incredibly well written. I also want to flag that she is an amazing speaker and poet, and she did her own recording for the audiobook.
[00:44:39] So if you’re an audio type, it’s an excellent one.
[00:44:42] Cayden Mak: Nice. Alex, do you have anything to share?
[00:44:45] Alex Hanna: Oh, I just try to not do anything AI-related in my other time. I am listening to Kim Stanley Robinson’s The Ministry for the Future, which is really depressing. It’s a great book, but it’s about the near-term
[00:45:00] consequences of climate change. But I am also listening to and watching a lot of Dimension 20, which is an actual play Dungeons and Dragons show, as well as Worlds Beyond Number, which is another actual play Dungeons and Dragons podcast. Love to get into those fantasy worlds, worlds written by humans.
[00:45:20] Cayden Mak: Ah, yes. Based: worlds written by humans. Thank you so much for joining me today, Dr. Bender, Alex. You all are great. We’ll put a link to Mystery AI Hype Theater 3000 in the show notes so that you can check out Alex and Emily’s work and listen to that show. Are there other places that people can find y’all, other things you want to plug? Alex, I imagine maybe
DAIR?
[00:45:46] Alex Hanna: Oh yeah. So DAIR sponsors the podcast, so check out our institute. That’s dair-institute.org, DAIR hyphen institute dot org. Emily and I [00:46:00] are also coming out with a book, hopefully available in spring of 2025. And we’ll post all about that on our socials once it’s available for pre-order.
[00:46:12] Emily M. Bender: Yeah, absolutely. It’s super fun to be working on that book with Alex. And you can find me, I’m super findable through web search, at the University of Washington. And I’m on many of the socials as Emily M. Bender in lots of places.
[00:46:24] Cayden Mak: Fantastic. Thank you so much again for joining us. This show is published by Convergence, a magazine for radical insights.
[00:46:30] I’m Cayden Mak. Our producer is Josh Elstro. If you have something to say or a question, feel free to drop me a line. You can send me an email, which we will consider running on an upcoming Mailbag episode, at mailbag@convergencemag.com. And if you would like to support the work that we do here at Convergence, bringing our movements together to strategize, struggle, and win in this crucial historical moment, you can become a member at patreon.
[00:46:53] com slash convergencemag. This also gives you access to the live stream of us recording this show. Even a [00:47:00] few bucks a month goes a long way to making sure that our small independent team can continue to build a map for our movements. And in the meantime, we will see you on the internet.