Google’s GenAI Bots Are Struggling. But So Are Its Humans

This week on Gadget Lab, we talk about the rocky rollout of Google’s Gemini image generator, and some internal tensions recently reported by company employees.
Smartphone displaying Google Gemini logo
Photograph: Rafael Henrique/Getty Images

The last few months have been rough for Google. Company executives have been in the hot seat because of some embarrassing missteps, the most awkward of which was the bungled launch of Google’s latest image generator. The company launched it as part of its suite of GenAI tools named Gemini, but then quickly pulled it back after the generator produced some seriously weird results.

This week, we welcome WIRED senior writer Paresh Dave back to the show to talk about Gemini’s strange outputs. We also talk about some of the staffing pains Google has been going through recently, including layoffs and accusations of discrimination.

Show Notes

Read more about the “woke AI” controversy.
Read Bloomberg’s story about Google’s layoffs on its trust and safety team.
Read Paresh’s story about the Googler with a disability who alleges workplace discrimination at the company.
Listen to our broader discussion about tech layoffs on episode 633.

Recommendations

Paresh recommends the food blog The Fancy Navajo. Lauren recommends Lauren Mechling’s story in The Guardian about journalism and The Le Carré Cast podcast, particularly the episode about the secret life of the famous spy author. Mike recommends the film collection “And the Razzie Goes to …” on the Criterion Channel.

Paresh can be found on social media @peard33.bsky.social. Lauren Goode is @LaurenGoode. Michael Calore is @snackfight. Bling the main hotline at @GadgetLab. The show is produced by Boone Ashworth (@booneashworth). Our theme music is by Solar Keys.

How to Listen

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:

If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for Gadget Lab. If you use Android, you can find us in the Google Podcasts app just by tapping here. We’re on Spotify too. And in case you really need it, here's the RSS feed.

Transcript

Note: This is an automated transcript, which may contain errors.

Michael Calore: Lauren.

Lauren Goode: Mike.

Michael Calore: Do you think that chatbots are too woke?

Lauren Goode: Mike, this intro is terrible already.

Michael Calore: I'm sorry. I do not make the rules.

Lauren Goode: Who makes the rules then?

Michael Calore: Some artificially intelligent agent that has been coded with biases apparently.

Lauren Goode: The future is so bright.

Michael Calore: Uh-huh.

Lauren Goode: Well, to answer your question, I think the power of chatbots is at the moment overstated, and I think we have yet to see what these companies are actually building AI for. But we should probably talk about what is going on with the so-called woke chatbots.

Michael Calore: I agree. So let's get to it.

Lauren Goode: Let's do it.

[Gadget Lab intro theme music plays]

Michael Calore: Hi everyone. Welcome to Gadget Lab. I am Michael Calore, WIRED's director of consumer tech and culture.

Lauren Goode: And I'm Lauren Goode. I'm a senior writer at WIRED.

Michael Calore: And once again, we are joined this week by WIRED senior writer Paresh Dave. Paresh, welcome back to the show.

Paresh Dave: Thank you, Mike.

Michael Calore: It's always good to have you here. And you were just here a couple of weeks ago, so it's like a quick repeat.

Paresh Dave: Yes. But we have important stuff to talk about.

Michael Calore: We do. Today we are going to be talking about Google. The last few months have been rough for Google. Company executives have been in the hot seat because of some embarrassing missteps. The one that was probably the most awkward was the bungled launch of Google's latest image generator. The company launched it and then quickly pulled it back after it produced some seriously weird results that everybody was sharing on social media.

Later on in the show, we're going to talk about some of the staffing pains that Google has been going through in recent months, but first we're going to dig into the product side of things. So Paresh, if you can help us out and bring us all up to speed on what Google Gemini is and how the most recent launch played out.

Paresh Dave: Yes. So Gemini, which used to be known as Bard, is a chatbot where users, just like with ChatGPT, can enter prompts and get back responses. One new feature that you were just alluding to that they rolled out was an image generator, similar to what people might know as Stable Diffusion or Dall-E, where you enter a prompt and you get an image or a series of images back. And what happened is, basically, when you would put in a prompt that involved, let’s say, a race or something from history, people started noticing that Nazis started becoming Black. Vikings started having very strange looks. It didn’t make sense, and it offended some people. And that became a huge firestorm of criticism for Google over the last couple of weeks.

Lauren Goode: So for a long time, the concern with some of these biases around AI tools was that they would favor white people over people of color. In this instance, the opposite was happening.

Paresh Dave: Exactly. They had, in the words of some conservatives, turned on the woke engine, and not in a good way.

Michael Calore: But really what was happening probably was an overcorrection, right?

Paresh Dave: Yes. So there are a couple of things here that are interesting behind the scenes, right? There’s one dial where you’re saying how random you want your response to be. These generators, these generative AI technologies, basically choose, as someone put it to me, the average of the internet. But sometimes you want to stray from the average a little bit, and how much you want to deviate is one of the important dials that these companies control. And so that could have been tuned a little bit off.

And then, second, there are these filters that companies like Google put on top of the responses. Let’s say you ask for an image of a CEO. The image generator, choosing the average of the internet, is probably going to give you a picture of a white man. And Google has this filter that says: wait a second, before we share that picture we just generated with the user, let’s do some behind-the-scenes math and add some randomness to it. Let’s make sure there’s a person with a darker skin tone. Let’s make sure there’s a picture with someone maybe wearing a dress or a sweater or a Mark Zuckerberg hoodie, whatever it is. Add some more randomness. Let’s make sure it’s not just a picture of a white man. And it’s that filter that seems to have gone awry, likely because there wasn’t enough testing before it launched.
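[To make the two dials Paresh describes concrete, here is a minimal, hypothetical sketch in Python of a post-generation prompt filter of the general kind he outlines. Every name, list, and trigger word below is invented for illustration; none of this is Google’s actual implementation.]

```python
import random

# Dial 1 (hypothetical): temperature, i.e., how far outputs may stray
# from "the average of the internet." Unused below; shown for orientation.
TEMPERATURE = 0.8

# Dial 2 (hypothetical): a filter that rewrites person-depicting prompts
# to inject variety before the image model ever sees them.
DIVERSITY_MODIFIERS = [
    "with a darker skin tone",
    "who is a woman",
    "of East Asian descent",
    "wearing a hoodie",
]

# Invented trigger list; a real system would use something far more robust.
PERSON_WORDS = {"person", "ceo", "doctor", "viking", "soldier"}


def diversify_prompt(prompt: str) -> str:
    """Randomly append a demographic modifier when a prompt depicts people.

    The reported failure mode lives in this function: with no check for
    historically or contextually specific requests, the modifier gets
    applied to every prompt that mentions a person.
    """
    if any(word in prompt.lower() for word in PERSON_WORDS):
        return f"{prompt}, {random.choice(DIVERSITY_MODIFIERS)}"
    return prompt


if __name__ == "__main__":
    print(diversify_prompt("a portrait of a CEO"))            # the intended case
    print(diversify_prompt("a 10th-century Viking warrior"))  # the overcorrection
```

[As a sketch, it also suggests why more testing might have caught the problem: running even a handful of historical prompts through such a filter would surface the overcorrection immediately.]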

Lauren Goode: And that’s been one of the critiques of not just this Gemini image generator but also the way that Google rolled out Bard, its text-based chatbot, last spring. Its name has since been changed. But the critique is that Google is rushing here; they’re rushing to launch products. What are they rushing for? Who are they chasing? What are they concerned about?

Paresh Dave: I think they would admit that themselves, that pretty much since the release of ChatGPT at the end of 2022, Google has been freaking out. They didn't expect that consumers would start to use these products the way that they have. They pushed up these timelines that they had for rolling out generative AI technologies to their services by two years, three years in some cases. That's a huge leap for them. And so they're trying to do two or three years of work in what's been now not even 18 months, and we're seeing some of the consequences of that.

Michael Calore: Right. So the executives spoke up in this case, right? There was outrage, and some executives talked. Who should we talk about first? Should we talk about Sundar, Google’s CEO?

Paresh Dave: Yeah, probably. So Sundar wrote a memo to the entire company that was shared with some media. And his point was basically: I know some of these responses have offended people, and it’s unacceptable. This behavior that Gemini was exhibiting in creating these images was unacceptable. And he said, “We got it wrong.” I think in some sense he’s trying to make sure there’s not backlash in Congress against Google, and that he doesn’t get dragged to a hearing where he has to explain why Gemini went woke. So that’s happening a little bit.

I think part of it is like a rallying cry to his own employees: we need to find a way to work even harder. We’re already working hard; let’s work even harder. Let’s not just do this work even faster, but somehow also make sure that we’re perfect when we do it. He did admit that no AI is perfect, especially at this emerging stage, but said that Google is held to a higher bar: we need to find a way to improve, we’ll review this case, and we’ll see the results of this internal audit at the end, maybe in a few weeks. But the key thing he’s worried about is losing users’ trust. Because if users feel like they can get better-quality answers elsewhere, they’re going to go to other services, and that’s what he has to protect against.

Lauren Goode: And then Sergey Brin spoke out as well. This is the Google cofounder. He was at something called the AGI House.

Michael Calore: I'm sorry.

Lauren Goode: I'm waiting for my invite. He said, “We definitely messed up on the image generation. I think it was mostly due to just not thorough testing. It definitely, for good reasons, upset a lot of people.” So this just adds to the fray. How significant is it that the Google cofounder has spoken up now?

Paresh Dave: It’s a bit funny, because from what I understand, he’s been in the office very regularly. So Sergey Brin, one of the original Google cofounders, stepped away from the company a little bit once Sundar Pichai became CEO, stepping back from day-to-day responsibilities. But basically since the launch of ChatGPT, Sergey has been in the office almost every day, working closely with the teams building these models. Maybe he’s spending too much time on the foundational part of it and not enough on the product side; it’s unclear what the issue is, but he’s in the midst of it. So he probably played a role in approving some of these things that then launched to consumers. So it’s funny to have him come out and say this when he should be owning up, in part, to his own share of the mistakes.

Michael Calore: Right. He was using “we” and not as much “I.”

Paresh Dave: Uh-huh.

Lauren Goode: How often do either of you guys use these image generators?

Michael Calore: Rarely. I would say rarely. Maybe once a week. I have a Pixel phone, and Gemini is on my phone and I do have access to it. I don't have the top-tier access to it, so I can't use it for everything, but I can make images, and I've made a couple of images just to show somebody what it looks like. I haven't used those images for anything. My wife runs a business, and she uses the image generator for her newsletters and communications on her social media accounts.

Lauren Goode: Paresh, do you use them?

Paresh Dave: No, I wouldn’t say that I regularly turn to image generators, in part because of compensation. Take the Adobe tool Mike has mentioned: they’ve said that all the content they train on, the original images, is paid for, and the artists or photographers are compensated. But that’s not true for all the tools. And so that part sketches me out; I want to make sure content creators like us are somehow compensated. I think how that shakes out is potentially important. So that’s one hesitation.

And then, two: going to another tool when I can just search for a GIF right in the text messaging app or something, that still feels easier. But as these tools get more integrated into the apps I already use, I’ll be more inclined to use them. And I’d say, third, they’re just slow. I don’t want to sit there for 30 seconds waiting for an image to generate.

Michael Calore: Who has 30 seconds, really?

Paresh Dave: No one. No one.

Lauren Goode: Sometimes the results just aren't very good. They still look really strange.

Michael Calore: Yes. And surreal and crunchy, like a little too many colors, and the lines are too thick.

Lauren Goode: Yeah.

Paresh Dave: When you generate an image, does it say “generated by AI” or anything on your phone?

Michael Calore: Yeah, there's a little watermark on it. Yeah.

Paresh Dave: Yeah. Which I think is also a thing that I'd want to see broadly in all the products before I get more excited about using them. I think that disclosure is really important.

Michael Calore: Lauren, you use these image generators all the time, right?

Lauren Goode: All the time? No, I really don't. I probably use them about as frequently as you guys both do.

Michael Calore: OK.

Lauren Goode: There was one time, actually, Mike, you will remember and appreciate this. Our listeners should know that Mike and I sometimes go on bike rides together. We talk, we gossip; we basically tape the podcast, you just can never hear it. And there was one time recently when we were on our regular route and I said, “Oh, that’s the DMV.” We had ridden by it a million times, and I was like, “I never knew that’s where the DMV was.” And we got into a conversation with our colleague Tom about this, and just for fun, we generated an image of two journalists riding a bicycle past the DMV.

Michael Calore: Department of Motor Vehicles.

Lauren Goode: The Department of Motor Vehicles, a very important landmark. And this makes me laugh every time. I don’t know why, you guys, tandem bicycles make me laugh so hard.

Michael Calore: Yeah, they put us on a tandem.

Lauren Goode: They put us on a tandem and we were wearing business wear.

Michael Calore: And you were in the stoker seat.

Lauren Goode: Was I?

Michael Calore: Yeah.

Lauren Goode: I don’t know what it is about tandem bikes; they just make me giggle. I’m not going to have a giggle fit. I’m not. And our faces were all distorted.

Michael Calore: Yes, that's normal.

Lauren Goode: It was very strange.

Paresh Dave: Was your prompt “in a bicycle” or “on bicycles”?

Lauren Goode: On bikes, I think, or maybe it was just “bike riding.” But yeah. Well, just to bring it back to the topic at the top of the show: no, I haven’t found them to be particularly useful. Not yet, anyway.

Michael Calore: OK. Well, on that note, I do want to ask one more question before we take a break. These filters that we’re talking about that push the chatbots in different ways, we can all agree that those are good things, right? Because these tech companies should not be putting these tools out to just run rampant and spit out whatever kind of output they feel like. So it’s a good thing to have some filter mechanism on them. But if you put too many controls on them, they become boring and less useful. So where is the balance here? Where are these companies going to land with regard to what people are going to be able to generate, whether they’re going to be able to ask for things like specific races, specific genders, specific things that are representative of people, and actually get the results they want?

Paresh Dave: I don’t know if I agree with your premise, though. Most people, or maybe many people, would say what Google is doing is a potentially valiant effort to increase inclusion, to make people feel included in these products or what they generate. But there are other ways to do it. Rather than having these filters, they could ask you follow-up questions as part of the image generator prompt process: “OK, you’ve asked for two people on a bicycle, do you care what their skin color is?” Do you care about this? Just make it part of asking you, and draw it out of you, so you can get what you want out of the experience. Or they could just have features with little toggles, right?

Right now they probably don’t do that because it costs so much to generate these images. They’re probably losing money on some of these products, and so they don’t have those features. And that’s why I think it’s important to remember that we’re in that experimental phase. But there are other ways to do it that are not as in-your-face.

Michael Calore: That are not as in-your-face.

Paresh Dave: Yes.

Michael Calore: OK. All right. Well thanks for that. Let's take a break and we'll come back with more.

[Break]

Michael Calore: It’s not just the chatbots that are having a hard time at Google. It’s the humans too. Last week, Bloomberg reported that Google laid off around 10 workers out of about 250 from its trust and safety team, so a small portion of that team. Google and its parent company, Alphabet, have actually been making a lot of staffing cuts as part of a wider effort to trim costs. We talked about this broader trend of layoffs on this show two weeks ago; it’s episode 633 if you want to know more about that. But Google’s recent layoffs hit that trust and safety team at a time when those particular workers are under pressure to help correct the public face-plant of the Gemini image generator rollout. I think the best place to start this segment is to talk about what trust and safety teams do. Most of the big tech platforms have these folks working for them. So Lauren, you’ve spent a lot of time reporting on trust and safety teams. I would like to ask you to walk us through what these teams do.

Lauren Goode: Trust and safety encompasses a wide range of issues on social media or service-based websites from the nuts and bolts of things like authentication to enforcing content moderation policies. For example, you can't tweet something that incites violence or an insurrection, or we'll kick you off our website, or you can't spam someone on a dating website or dating app. It also encompasses child sexual abuse material, which is a very real and disturbing problem online. And it also means ensuring that AI tools are safe and trustworthy.

Now, a lot of these big tech companies use a combination of human content moderators, some of whom are employees, many of whom are contractors, and “automated tools,” which essentially means AI. Generally speaking, these tech companies, even the big ones, probably don’t have nearly enough people to really keep things safe on the internet. So moderating generative AI content, the kind that might be spit out by ChatGPT or Gemini, is now this interesting ouroboros, because AI is used as part of trust and safety, and now it’s also the AI that needs monitoring.

Michael Calore: Right. So we don’t know exactly how this team at Google breaks down, but they’re spending time looking at all of Google’s different platforms and all of its different products. And the fact that trust and safety has been so busy on AI is what makes these layoffs disconcerting to people on the outside.

Lauren Goode: Right. Why lay off trust and safety workers right now?

Michael Calore: Right.

Paresh Dave: But there are two things to remember. It’s not just the number of people. You can have a ton of people on trust and safety, but if they’re not empowered or listened to, that’s a big problem. And that’s been one of the complaints we’ve been hearing coming out of Google the past couple of years: that they’re either not given enough time to investigate things thoroughly, or they’re pulled in a bunch of different directions so they’re giving everything half their attention, or they’re just not listened to, and their suggestions and ideas and criticisms are ignored, and then things launch and there are problems. So I think that’s important to keep in mind as well.

And then there might be layoffs in trust and safety, but one place where Google has maybe been beefing up a little bit is compliance, which is adjacent. And the reason is that there are just so many new regulations around the world pulling Google in sometimes conflicting directions. Regulators will say, “Google, we want you to protect users’ data more and be more private.” And on the other hand, they’re saying, “We want you to better foster competition,” which means opening up access to users’ data. So some of this work is shifting to the compliance side, which veterans of the trust and safety team are also concerned about, because they’re not sure the right things get prioritized when the work moves to more of a compliance function.

Lauren Goode: Uh-huh. And this has an impact on startups too, not just giant tech companies like Google. Because if you’re a little leaner right now and you’re hiring compliance officers, people who have knowledge of that space, because you need to make sure your product is in line with regulations, then maybe that’s a resource that’s not going toward your trust and safety team.

Michael Calore: Right.

Lauren Goode: It sounds like maybe I’m saying the regulation is bad. It’s not, but it has an impact.

Michael Calore: Right. Paresh, I want to ask you about the story you wrote for WIRED this week about a Google employee who is bringing legal complaints against the company because she had such a poor experience working there as a person with a disability. Can you tell us a little bit about her story?

Paresh Dave: Yes. So I’ve covered the story of Jolan Hall, a Googler who started out on the YouTube content moderation team, taking down political misinformation during the 2020 election as well as Covid misinformation. She has since moved over to responsible AI, and she alleges that for over three years now she has suffered racism and audism at the company because she’s Black and she’s deaf. Through asking around, she believes she’s the only person who’s ever worked at Google with those characteristics, Black and deaf. And of course she’s female, which is another thing that’s tough in the Google environment, because for years now there have been allegations that Google hasn’t done enough to respect the views of Black women in its workforce, and they leave the company at a disproportionately higher rate than women of other races.

So at the core of her story are these two things that are just galling when you think about them. One: when she was on content moderation, her interpreter was pushed out of the room. She had been relying on her interpreter to help understand speech in videos that she was trying to decide whether to remove from YouTube for violating policies. And Google said she wasn’t allowed to have her interpreter there because it would create confidentiality concerns, without specifying much more. So Google hired her to be a content moderator and then said she couldn’t have her American Sign Language interpreter there. It just doesn’t make sense. Google’s response is that it provides a range of accommodations and that it is committed to fostering an inclusive workplace. But certainly that situation raises concerns.

And then, two, to go back to our discussion about AI and responsibility: when she moved to the responsible AI team, she was doing this research project where she was interviewing users over Google Meet, the video chat tool very similar to Zoom. And she realized her 90-minute conversation with this user, and the two interpreters who were involved, didn’t get recorded properly at all, because Google Meet isn’t capable of capturing someone who is not vocalizing, speaking through their mouth. And because she was signing through her interpreter, all of that was lost. And it’s a known thing. There are complaints on Google’s support forums about this going back a couple of years, even before she encountered the issue, and Google has said, “This isn’t a priority, we’re not going to really address this issue.”

And I think that goes to show a couple of things. One, yeah, there are a lot of things that people want, but if one of Google’s own employees, someone who needs these kinds of accommodations, can’t get anything, is there hope for the rest of us? It also shows how Google has to think about some of these priorities and how to build that inclusion into its products. I think it’s tough: how do you get the resources to do all of this, and how do you decide what to do first?

And in this case, this worker is saying that it amounts to unlawful discrimination.

Michael Calore: Right.

Lauren Goode: And she is still working there.

Paresh Dave: She is. She wants to see things improve, she doesn't want to quit, and she's equally committed to trying to push Google in the direction that she thinks it should be going.

Michael Calore: Right. And that's a good effort. The law says that companies have to make reasonable accommodations for people who have disabilities, and that could be anything from a special keyboard to a special desk that's set up the way that you need it set up. And also things like interpreters, right?

Paresh Dave: Uh-huh.

Michael Calore: One of the details in your story that I thought was really interesting was that the American Sign Language interpreters come from a staffing agency. Interpreters are all they provide, right?

Paresh Dave: Yeah. Jolan says they come from Deaf Services of Palo Alto, based here in Silicon Valley, and it raises that question about the confidentiality concern. Interpreters abide by a code of ethics, a code of professional conduct, whose literal first tenet is that confidentiality is paramount. You don’t talk about clients. I’ve spoken to a couple of interpreters; they refuse to discuss clients. It seems they really abide by that, but apparently that isn’t good enough for Google. And if it isn’t good enough, why not hire the interpreters in-house and make them sign the same NDAs that any other worker does? It’s a little unclear why Google doesn’t do that.

Lauren Goode: The ironic part about this is that in the past, Google has really boasted about its live transcription technology and some of its other accessibility features, like live captions, that are supposed to be useful to the deaf and hard-of-hearing community. There’s a dissonance between those product announcements and how the company is allegedly treating a worker who is in the deaf community.

Paresh Dave: Yeah. And I think there’s still a lot of work happening to improve transcriptions, but the company, as we know, is oriented around all things Gemini right now, and inevitably that means some of these other services are going to get the squeeze. That’s possibly what’s happening here: something like improving the captions on YouTube, which would help a deaf content moderator, just doesn’t get prioritized, and as a result, she can’t do the job properly.

Lauren Goode: Uh-huh.

Michael Calore: Right. Paresh, in your reporting for this story, I’m sure you got a sense of the general vibe at Google among employees, with, like you said, this all-in attitude on AI tools coming at the same time the whole industry is contracting.

Paresh Dave: Yeah, it’s a little funny. People who’ve been at Google for a long time and survived the layoffs of the last couple of years (Google now has 8,000 fewer employees than it did a year ago), those veterans are getting angry, disappointed, and frustrated with the direction the company is going. But at the same time, Google is hiring tons of new people. There are tons of job postings for roles related to large language models and services like Gemini, in addition to other things. And it feels like there’s still a lot of enthusiasm to join Google. It’s still an attractive place to work, and those newcomers are really jazzed about working on services that touch billions of users. So it’s weird. It’s a little divided in some ways.

Lauren Goode: Uh-huh. Yeah, the contraction of the industry, the layoffs in tech, have hit tech workers hard because of, I think, some of the promises they’ve been fed over the years about this being a really rich industry focused on progress and growth. They were also promised a lot of perks with some of these jobs, and now we’re seeing that big tech companies are mature. They grew really, really fast, particularly during the pandemic, and now they’re obsessed with efficiencies, which Paresh talked about on our earlier podcast. And this is happening against the bigger backdrop of quite a good jobs market right now. It just feels like it’s hitting tech particularly hard.

Michael Calore: Yeah. All right, thank you both for an invigorating conversation. Listeners who are interested in reading Paresh's story about the Googler who is Black and deaf and is alleging unlawful discrimination, you can find that story on WIRED.com. You can also read Lauren's series of stories. It's like two or three, four stories now.

Lauren Goode: Just a couple at this point, but my DMs are open.

Michael Calore: Yes. So please read Lauren's stories about tech layoffs and the impact that's having on the tech industry and send her DMs if you have more stories to tell.

Lauren Goode: Paresh too.

Michael Calore: Paresh too.

Lauren Goode: Our DMs are open. Can I say that once more?

Michael Calore: Yeah.

Lauren Goode: Just kidding.

Michael Calore: OK, let's take another break and we'll come right back with our recommendations, which are human generated, we promise.

[break]

Michael Calore: All right, welcome back. This is the third segment where we do our recommendations. We go around the table and we tell listeners something that they might find interesting. Paresh, as our guest, you get to go first. What is your recommendation?

Paresh Dave: Thank you. So I had the opportunity to try a Native American restaurant in Oakland last week, and I was blown away by a blue corn cake with a mixed berry compote. It got me curious about Native American recipes in general, but also about how to make this blue corn cake at home. And I found one that looked pretty close, which I have not tried yet, on a blog called The Fancy Navajo. So my recommendation is The Fancy Navajo blog. I think one of the wilder recipes on there was a blue corn popsicle. Very intrigued. And it is Women’s History Month, so I’ll note that the restaurant I went to, Wahpepah’s Kitchen, is owned by a female chef who says she was the first Indigenous chef on the cooking program Chopped. And The Fancy Navajo is run by a female food blogger based out of Phoenix, Alana Yazzie.

Michael Calore: Awesome.

Lauren Goode: That sounds amazing. I'm going to try to go to that restaurant this weekend.

Michael Calore: I want the popsicle.

Lauren Goode: Yeah.

Michael Calore: Can you make some and bring them in?

Paresh Dave: I do have the little popsicle maker things, so maybe. Once it gets warmer. It's a little cold right now.

Michael Calore: Dude, anytime is a good time for a popsicle.

Paresh Dave: Anytime is a good time for ice cream, not a popsicle.

Lauren Goode: Paresh, you're cooking a lot on Instagram these days. I don't know. I think you're hiding a secret talent that needs to be brought out into the open here at the office.

Michael Calore: OK, Lauren, what is your delicious recommendation?

Lauren Goode: Not as delicious as that, I have to say. I have two recommendations. One’s a quick one. There’s a story out in The Guardian this week, written by Lauren Mechling, about a bunch of small independent news co-ops, as she calls them, that are cropping up as big media contracts. Consider reading that story and subscribing to some of these smaller outlets. She mentions 404 Media, which is doing a great job. Defector is another one.

My second recommendation is a longer one. It’s an episode of The Le Carré Cast, for those of you who are fans of the late writer John le Carré. John le Carré wrote a lot of spy novels. He was incredibly prolific, and he was also just a great writer. And so there’s an entire podcast dedicated to him. This one particular episode, from late 2023, features his biographer, Adam Sisman. Sisman wrote an earlier biography of John le Carré, and then, after le Carré died, wrote a follow-up that really dug into all the dark-side stuff. John le Carré, real name David Cornwell, had a very traumatic childhood, and in adulthood he was also a prolific philanderer.

Michael Calore: Really?

Lauren Goode: Yeah. Yeah. And there were lots of letters and lots of women coming out of the woodwork, and this inspired some of le Carré's writing and some of the stories that are now famous. And I just found listening to this podcast to be completely fascinating and a really interesting portrayal of the writer.

So yeah, if you’re into spy novels, if you’re into John le Carré, if you’re into podcasts, I recommend checking this one out: “The Secret Life of John le Carré.” And also, I just have to note, it feels like it was probably so much easier in the 1950s to be a spy. In the books, it’s like, oh, let’s just cross over to East Berlin. Someone shines a flashlight in your face and here’s your fake passport, and they’re like, “Go forth, sir.” No one’s like, “I don’t know. I saw you posting on Twitter. You look pretty familiar.”

Michael Calore: Yeah, we have not been tracking you on GPS.

Lauren Goode: Right, exactly.

Michael Calore: Geofence warrants really screwed that up for everybody.

Lauren Goode: I guess so, yeah. Text messaging, wireless networks, that whole thing.

Michael Calore: Uh-huh.

Paresh Dave: And the new thing: the push alert. The push-alert way of tracking people, which you should read the WIRED story about.

Lauren Goode: That's right. Yeah. With all of the push alerts, a push alert could come through from something related to an abortion in a state where that is not legal. Yeah, there's a really good story on WIRED about that this week, so check that out. We'll link to that in the show notes.

Michael Calore: Three recommendations. Solid.

Lauren Goode: OK, Mike, what's your recommendation?

Michael Calore: OK, if you are a subscriber to the Criterion Channel, which is the streaming service from the film preservationists at Criterion, then you are in luck, because it’s Oscar season. And yes, you can go and stream all the Oscar movies, but this weekend you may be interested in streaming some of the things available on the Criterion Channel in its new collection of Razzie winners. The Razzies, also known as the Golden Raspberry Awards, are awarded every year for the worst movie, and Criterion has rounded up many of the past winners into a collection called And the Razzie Goes To… So you can open up this collection and watch all of these awesome Razzie-winning movies, and it is delightful. You get movies such as Cocktail, the Tom Cruise bartender movie from the 1980s. Showgirls, Paul Verhoeven’s movie about strippers, which is fantastic, by the way. Heaven’s Gate, the three-and-a-half-hour Michael Cimino movie with Kris Kristofferson in the lead. Ishtar. Does anybody remember Ishtar?

Lauren Goode: I don't.

Paresh Dave: It sounds familiar.

Lauren Goode: No. Was that...

Michael Calore: Warren Beatty and Dustin Hoffman.

Lauren Goode: I was going to say Costner. OK.

Michael Calore: Prince’s Under the Cherry Moon, which is terrible. I love Prince, but that movie is unwatchable.

Lauren Goode: What's so bad about it?

Michael Calore: It's unwatchable. It's just a very poorly made movie. Sorry, Prince. RIP. Best guitar player ever.

Lauren Goode: And Anil Dash is going to have something to say to you.

Michael Calore: I’m sure he’d agree with me. Under the Cherry Moon: objectively bad. The Wicker Man, not the 1973 original, but the Neil LaBute remake starring Nic Cage. “The bees, the bees! Not the bees!” You know this movie. It’s terrible, but it’s so bad that it’s good. There are a lot of movies in this collection that are so bad they’re good. There are also a lot of movies that are just really bad, like Gigli, the movie with J Lo and-

Lauren Goode: Oh, with J Lo.

Michael Calore: Yeah.

Lauren Goode: Yeah. What else? Give us another one.

Michael Calore: Another one that's bad?

Lauren Goode: Yeah.

Michael Calore: Xanadu, the Olivia Newton-John movie. Cocktail is pretty bad. Ishtar is pretty bad. Barb Wire, the Pam Anderson vehicle from the mid-’90s; that one’s actually OK. Freddie Got Fingered, directed by and starring Tom Green, is a fantastic movie. If you have not seen it, you must see Freddie Got Fingered. It won a Razzie. It made sense at the time.

Paresh Dave: So it's bad.

Michael Calore: It’s bad, but it’s 23 years old, and watching it now, it’s much better than it was when it came out. It’s also just extremely unhinged and weird, so I highly recommend it. Anyway, yes, Criterion people, go to your app, find And the Razzie Goes To… and watch these cinematic delights being served up for you by these great historical film preservationists.

Lauren Goode: Are you going to tune into the actual Oscars this weekend?

Michael Calore: Yeah, probably. Yeah. Maybe time delayed so I can skip all the nonsense.

Paresh Dave: An hour earlier this year, don't forget.

Lauren Goode: Oh, right, because we're jumping ahead?

Michael Calore: Yeah, right. It's daylight savings on the 10th.

Lauren Goode: Yeah. That's right.

Michael Calore: Yeah. And a new moon apparently.

Lauren Goode: And we've got a total eclipse coming up next month.

Michael Calore: A lot of astrological and astronomical things happening.

Lauren Goode: That's right. We're just going to pivot this podcast to astrology.

Michael Calore: We will not. All right, well, that is our show, astrology or not. Paresh, thank you for joining us.

Paresh Dave: Thank you for having me. And good luck to those of us who have to deal with the loss of an hour this Sunday.

Michael Calore: That's right.

Lauren Goode: Yes. What about Kazu? Is Kazu going to keep you up?

Paresh Dave: I think he's not going to be happy.

Lauren Goode: We're praying for you.

Michael Calore: And thanks to all of you for listening. If you have feedback, you can skeet all of us on Bluesky. Just check the show notes. Our producer is Boone Ashworth, and we will be back with a new show next week. Until then, goodbye.

[Gadget Lab outro theme music plays]