
Meta Goes MAGA Mode + a Big Month in A.I. + HatGPT
This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.
Casey, we’re back.
We’re back in the studio, Kevin.
So the dirty secret is that we recorded the predictions episode that ran last week back in 2024, before we left for the holiday break. We are just now coming back from a multi-week break. How are you doing? How was your break?
I’m doing great. We recorded that episode so long ago that when I listened to it, all the predictions were fresh to me. I was so excited to hear what we were going to say, but I’m doing good. I had a really nice break. And of course, I’m excited to be back. But what about you, Kevin?
Well, I had a disaster happen to me over this break.
OK.
Which was that I got robbed on Christmas.
Oh, wait, wait. Was it the Grinch?
[LAUGHS]
The citizens of Whoville are still looking for the suspect.
Oh, no. Who robbed you? How did you get robbed?
Well, I wasn’t home, luckily, but someone broke into my house.
Wait, what did they take?
So still sort of sorting through. We just got back. But it appears that the four thieves took some jewelry, some electronics.
Oh, god.
But weirdly — and this is the tech angle here — they did not take the Apple Vision Pro.
Not even a robber wants one of those.
It makes sense because robbers typically only want to take what is valuable, Kevin. And it’s not clear what they would actually do with a Vision Pro. Also keep in mind, if you’re a robber, you’re out there. You’re moving through the world. You’re breaking into homes. You can’t have that giant thing on your face. You sort of need to maintain clear vision.
Yes.
So to speak.
Yes.
Let me ask you this. Even though all your items were stolen, did you look at your family and your dogs and think, you know what? At the end of the day, I got my family. And that's all that really matters.
I did. And I don’t know why you’re saying it with such a —
— faux sentimentality.
I was looking for a nice, sentimental ending.
Honestly, it was — that was sort of the — the moral of this robbery was much the same as the moral of the Grinch who stole Christmas, which is that the real Christmas, the real household items are families.
Exactly. And so if you get robbed again, maybe don’t worry about it.
Was it you?
I’m changing the subject. We’re moving on.
OK.
Where were you on Christmas? [MUSIC PLAYING]
I’m Kevin Roose, a tech columnist at The New York Times.
I’m Casey Newton from Platformer.
And this is “Hard Fork.”
This week, Meta goes MAGA. We break down the company's surrender to the right on speech issues. Then, why 2025 is shaping up to be a huge year in AI. And finally, some "HatGPT." Call that a "HatGPTs."
[MUSIC PLAYING]
Well, Casey, I think we “beta” talk about Meta.
We “beta” do it, Kevin, because I never “meta” bigger story for this podcast.
Yes. So the big news this week in the world of social media is that Meta is making a, I would say, pretty calculated and transparent —
Craven is another word people have used.
Yes — play to ingratiate itself with the incoming Trump administration by sort of surrendering to the demands of right-wing speech critics and changing a bunch of things about the way its platform works. I think this is a very big story, not just because of what it represents about Meta, but because it is the biggest and most prominent example of a Silicon Valley tech company sort of positioning itself for the second Trump term.
And I think it’s going to have very big implications for speech on the internet, for the rise of misinformation online, and potentially for the future of Meta itself.
Yeah, absolutely. I think that while we have talked about speech policies on Meta, basically as long as we’ve been doing this podcast, I think this set of changes that the company announced this week are the most important series of policy changes that they have made in the past five years, easily.
Yeah. So let’s run down what’s actually been happening over at Meta. So over the past week, there have been three main things that people are pointing to as being all part of this effort to curry favor with the incoming Trump administration.
The first was that last week, Meta’s global policy chief, Nick Clegg, a former British deputy prime minister who had served in that role for a number of years, stepped down and was replaced by Joel Kaplan.
Joel Kaplan is a longtime Republican operative, going back to the George W. Bush administration, who's been working at Meta in their policy division for a while now and has sort of become the unofficial liaison between Mark Zuckerberg and the Washington right.
That’s right.
And then this week on Monday, Meta announced that it was appointing three new board members, including Dana White, who is the CEO of UFC, the Ultimate Fighting Championship. Dana White, not known as a particular expert on social media governance, but definitely a close friend and ally of Donald Trump and someone who can presumably act as liaison between Meta and the Trump administration.
Yeah, so just sort of staffing that bench up with more Trump friends.
And then the big one came on Tuesday when Meta announced that it was ending its fact-checking program and replacing it with an X-style Community Notes feature. The company also said it was redoing its rules to allow more speech and less censorship. It's going to dial up the amount of quote, "civic content."
That's sort of Meta's term for political content and current events content in their feeds. And the company said that it was moving its content review operations from California to Texas to avoid the appearance of political bias. There were some other details in there that we can talk about, including some changes to the way that its automated content moderation systems will work.
But basically, this was a laundry list of things that right-wing critics of social media platforms had been asking for years. And Meta sort of stood up and said, we’re going to do all of it.
Yeah, or another way of putting it, Kevin, is just that they accepted wholesale the Republican critique of Facebook's speech policies and actually used the same words that Republicans do. Previously, we only used the word "censorship" to apply to state action that actually prohibits speech.
Some people would say it doesn’t actually apply to private companies just sort of policing online forums. But Mark Zuckerberg said, no, effectively, you’re right. We do a bunch of censorship. We’re doing too much censorship. And we’re going to stop doing censorship.
Yeah. So the reason that Mark Zuckerberg gave, and that Joel Kaplan gave when he went on "Fox & Friends" to announce these changes, which was a very deliberate decision and one that I probably don't have to explain the meaning of to our listeners, was that Meta had been doing some soul-searching and basically had discovered that its former policies created too much censorship, and that it was going to return to the company's roots as a platform for free expression.
I was really struck by just the way that they completely backed down here. They accepted the critique. And they seemingly are terrified of what the Trump administration could mean for them and for Mark Zuckerberg personally if they do not comply in advance with everything that Republicans have said about them for years. Keep in mind that none of these critiques are new. They were made throughout the first Trump administration. And Facebook stood up against them. And they said, we’re actually going to try to find a middle path here. We are going to try to do what we can to preserve free expression while also trying to make this a really safe and inclusive space for as many people as we can.
And in 2025, at the start of the year, Mark Zuckerberg came forward. And he said, no, not anymore. We’re done with that. Everything that the Republicans have been saying about us is true. And so we are going to lean into their version of what a social network should be. And so I’d like to play just some of what Zuckerberg said in the reel he posted on Instagram announcing these changes.
Governments and legacy media have pushed to censor more and more. A lot of this is clearly political, but there’s also a lot of legitimately bad stuff out there — drugs, terrorism, child exploitation. These are things that we take very seriously. And I want to make sure that we handle responsibly.
So we built a lot of complex systems to moderate content. But the problem with complex systems is they make mistakes. Even if they accidentally censor just 1 percent of posts, that’s millions of people. And we’ve reached a point where it’s just too many mistakes and too much censorship.
The recent elections also feel like a cultural tipping point towards once again prioritizing speech. So we’re going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms.
I was just struck by how craven and cynical it felt like Mark Zuckerberg in particular was being about this. I mean, he sounded like Elon Musk, to be totally honest. He used phrases like "legacy media" with this dripping disdain, which is a phrase that Elon Musk and his friends love to use in describing the mainstream media.
He also did use this word “censorship” that he has avoided studiously for years in describing the content moderation work that every social network, including all of Meta’s social networks, do as a matter of business. So it just sounded like a total capitulation, a total giving in to the demands of his most ardent right-wing critics.
And more than that, Kevin, he also threw his own contractors under the bus. And let’s hear that clip.
After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried in good faith to address those concerns without becoming the arbiters of truth. But the fact checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the US.
He says that the fact checkers had just proven to be too biased, but gives no evidence for that, no examples. These fact checkers all follow a very rigorous code for how they do their work, and he just sort of asserts, oh, they've been super biased. So who knows what that meant?
He also, as you pointed out, says that they’re going to move their moderation teams to Texas to avoid bias. Well, first of all, I can tell you they have had moderators in Texas for many years, basically for as long as they’ve had moderators. They’ve also put moderators in red states for years.
In 2019, I visited Facebook moderation sites in Arizona and Florida. So there’s absolutely nothing new about this, but he is throwing his moderators under the bus. And the worst part about it to me is that he is suggesting that the moderators were the ones making decisions about policy.
Right.
When in fact, that person was Mark Zuckerberg.
Right.
So if Mark Zuckerberg wants to talk about the perception of bias around Facebook policy, he should reckon with the fact that he is the policymaker in chief over there.
Right. So what do you think the most impactful part of these changes is? Because for all of the talk about the end of the fact-checking program over at Meta, my sense is that the fact-checking program, for all the good people who worked very hard on it, really only ever touched a very tiny fraction of the content shared on Meta’s platforms.
It was a pretty ragtag effort that never really had as much of an impact as I think the fact-checking community would have liked, in part because of the way that Meta restricted it. So I don’t know that the average user of Facebook or Instagram is actually going to notice the fact that their fact-checking has disappeared. But what do you think the biggest impact on users will be?
Well, so let me speak to the fact-checking first because in some ways, I agree with you. I don’t know about you. I rarely encountered one of these fact checks on Facebook. On the other hand, I am someone who believes in harm reduction.
And fact checkers did look at millions of pieces of content that were getting presumably hundreds of millions or billions of views. And there were empirical studies showing that, overall, people came to have fewer false beliefs after they saw those fact checks.
So to the extent that people saw them, they were effective. And I think that there was a case to continue doing them, particularly if you want to be a good steward of a network that you have built that billions of people are using every day, and it’s important to you that they have a good experience on that platform and don’t come away from it stupider than when they started.
But I don’t actually think that that’s the most important thing that they announced. I think it’s something else. And I’m going to point to something that Mark Zuckerberg said in his reel. Let’s hear that clip.
We used to have filters that scanned for any policy violation. Now, we’re going to focus those filters on tackling illegal and high-severity violations. And for lower-severity violations, we’re going to rely on someone reporting an issue before we take action.
So what does that mean? What it means is, whereas before, Meta used to rely on automated systems to catch all sorts of things, not just illegal things, but also just stuff that was annoying or hurtful, stuff that was a little bit bullying, harassment. I called you a name. I called you a slur. Meta would catch that stuff in advance and maybe not show it to you. Maybe take some sort of disciplinary action against the person who sent that.
What Zuckerberg is saying here is we are not the content moderators anymore. You are, Facebook user, Instagram user. We are now enlisting you in the fight. And we’re going to leave it to you. If you see a slur on our platform, you go ahead and report that. And then maybe we’ll take a look. And I think that this is a really big deal.
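To make the change Casey is describing concrete, here is a minimal, hypothetical sketch in Python of the two enforcement pipelines: proactive classifier scanning versus report-driven review. None of this is Meta's actual code; the category names, function names, and severity split are all invented for illustration.

```python
# A hypothetical sketch of the enforcement change being described.
# Nothing here is Meta's actual code; the categories and names are invented.

HIGH_SEVERITY = {"terrorism", "child_exploitation", "drug_sales"}
LOW_SEVERITY = {"bullying", "harassment", "slurs"}

def old_pipeline(post_text, post_id, classify):
    # Before: automated filters proactively scanned for ANY policy violation.
    for category in HIGH_SEVERITY | LOW_SEVERITY:
        if classify(post_text, category):
            return "enforce"
    return "publish"

def new_pipeline(post_text, post_id, classify, reported_ids):
    # After: filters focus only on illegal and high-severity violations.
    for category in HIGH_SEVERITY:
        if classify(post_text, category):
            return "enforce"
    # Lower-severity content is reviewed only if a user reports it first.
    if post_id in reported_ids:
        return "send_to_human_review"
    return "publish"

# With no report on file, a low-severity post simply sails through.
print(new_pipeline("some insult", 42, lambda text, cat: False, reported_ids=set()))
```

The structural point Kevin makes next falls right out of this sketch: if nobody in a group reports a post, the low-severity branch simply never runs.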
So yesterday, I wound up talking to a bunch of people who either work at Meta or used to work there. And I talked to one person who just said that they were extremely worried about what this meant because they had seen in so many countries around the world, where Meta has traditionally done much worse moderation than it does in the United States, where by not taking action against these lower-severity violations, stuff that was not obviously illegal, they had just seen violence fomented again and again.
They had seen harassment against women. They had seen abuse against LGBTQ people.
And Zuckerberg in his reel said, look, we're going to have more bad stuff on the platform. But he doesn't go to the second step of what that actually means. Well, what it actually means is people could get hurt. People could die. So I want to be very clear about that. This is not two pointy-headed intellectuals sitting in their podcast studio saying, oh, no, Facebook isn't a safe space anymore for the college students.
What I’m saying is that violence has been fomented on Facebook before. And it will be fomented on Facebook again. And as a result of these changes, more people are going to be hurt. So that to me is the biggest consequence of these actions.
Yeah, I think this reporting thing that you bring up is so interesting because, as we know, a lot of the worst stuff on Facebook happens in groups, happens in semi-private spaces with hundreds or thousands of members. And so now, I think Meta is essentially saying that it will be up to the members of those groups to report any violative content that they want to be moderated rather than having these proactive scanners going around. And you might say, what’s the big deal about that? Well, if you’re in a Stop the Steal group or a QAnon conspiracy group or a group that’s plotting an insurrection at the Capitol, which members of that group are going to be reporting each other for violating Facebook’s rules? I don’t think that’s a thing that’s going to happen. And so I think what we’re going to end up with is just a much more unmoderated mess over at Facebook and Instagram and all the other Meta Platforms.
Yeah. When I was talking to employees this week, one of them pointed out to me what a sort of strange step backwards this is in this respect. For so many years, Mark Zuckerberg bragged about how automation was the future of content moderation. And he boasted about the systems that they were building that were getting better every single quarter at detecting the hate speech, detecting the bullying, and making this a sort of better place for his community.
And now, instead of saying, we’re going to lean into this even more, we’re going to make these filters better, he said, we’re going to stop using them. And we’re going to go back to human beings, who don’t even work for us or have any training or expertise, right?
This is an abandonment of his technological project in favor of something that is obviously inferior. So to me, one of the big twists here is Mark Zuckerberg walking away from the very good technology that he built.
Yeah, that’s a really good point. So what else in these changes caught your eye?
Yeah, so some of our listeners, Kevin, may use Facebook or Instagram and just wonder, what's it going to be like now that these changes have been made? So I thought maybe it would be good to go through some of the offensive things that you can now say on Facebook and Instagram —
OK.
— if you want and not get in trouble. So for example, I’m gay. You can now tell me that I have a mental illness, Kevin. You can go right onto Facebook and tell me that I’m mentally ill for being gay. You can say that I don’t belong in the military. You can tell trans people —
I mean, you don’t belong in the military, but —
For other reasons.
For other reasons.
For other reasons.
Yes.
And that’s important.
[LAUGHS]
Yes, it has nothing to do with your sexuality.
No. I’m a terrible shot.
OK, go on.
There are some other changes.
Yes.
So look, if you want to say offensive things about trans people, like saying that they can't use the bathroom of their choice, you can. If you want to blame COVID-19 on Chinese people or some other ethnic group, you can just do that on Facebook and Instagram now. And Mark Zuckerberg says, well, that's more in keeping with the mainstream discourse. Those are the words he uses. That is in keeping with the mainstream discourse.
And I look at that. And I think, oh, the standard on Facebook now is that it’s just going to feel like a middle school playground, right? All of this stuff is stuff that I used to hear when I was 12 years old in Washington Middle School. Maybe not the trans bathroom stuff. That was sort of still yet to come. Everything else, I heard in seventh grade. And that is the new standard that Mark Zuckerberg has set for his property.
Yes. He’s saying, I would like the discourse on my platforms to more closely resemble the dialogue in a Borat movie.
Yeah, which is satirical in the Borat case, but is very serious here.
Yes.
And look, it’s easy for me to joke about it. Look, if you want to tell me I’m mentally ill for being gay, I can handle that. But if you’re 14 years old and queer, and it’s people in your high school that are calling you that on Instagram, we’ve seen over and over again that these kids harm themselves.
And one of the things I find so crazy about this series of decisions, Kevin, is that right now, 41 states and D.C. are suing Meta over the terrible child safety record it has on its platform. And my understanding is that these changes apply to younger users just as they apply to everyone else.
And so these classifiers that once used to try to find bullying and abuse and harassment against young people, they’re no longer going to be automatically enforced. And it is going to be up to, I guess, the other kids in school to say, hey, it looks like my friend is being bullied over here on Instagram. So that just seems like they’re opening up a huge amount of liability for themselves.
Right. And I think we should say it is not just right-wing culture warriors who have complained about excessive moderation on Meta's platforms. People on the left complain that their pro-Palestinian speech is being targeted for takedowns or that —
And that’s true, by the way. Those are not just phony complaints. It is absolutely true that Meta has overenforced in some cases.
But what’s so interesting as I’m hearing you explain the details of some of these changes and how they are revising their rules, is that they all seem to be pointed in one direction. It’s like, let’s let people on the right mock people on the left in more ways.
Yeah, absolutely. I wrote in my newsletter that a younger and more capable version of Mark Zuckerberg truly did handle this differently. And the way he handled it was like, oh, we're overenforcing in this way. Let's improve the classifier, right? Let's adopt a technological solution to this problem. But what they said this week is, we're done trying to fix any of it. We are just abandoning the project altogether.
Yeah. So that is a lot about the what of these changes. I want to talk now about the why of these changes. I think there is an obvious explanation. The one that has been popular among the critics that I've been reading and talking to over the past couple of days is the political opportunism angle, which is — this is Mark Zuckerberg's attempt to ingratiate himself with the Trump administration.
It’s all business. It’s all strategy. It’s all cynical and probably all temporary until the next administration comes in. What do you make of that explanation for why these changes were made now?
So I think that there is a lot of truth to it. I think another factor that is in there — and we’ve talked about this on the show a bit — is that trying to be a good Democrat just didn’t really get Mark Zuckerberg anything. After the 2016 US presidential election and the huge backlash against Meta in particular that it created, Zuckerberg tried to say, whoa, whoa, whoa.
OK, I hear that you’re super mad. I’m going to try to fix this. And so they went out, and they built all these fancy machine learning classifiers to try to improve the service. And at the end of the day, I don’t think Democrats really liked him 1 percent better than they did before he did any of that.
So you have to remember that politics is transactional, and people vote for people who they think they can get things out of. By the end of 2024, I think it was very clear to Mark Zuckerberg, he was truly not going to get one thing out of the Democrats. But then along comes Donald Trump.
And Donald Trump has this really interesting relationship with Elon Musk where Elon Musk used to be a liberal guy too. He had a bunch of bog-standard liberal positions. But then he changed his views for whatever reason, gave a bunch of money to Trump. Trump said, hey, I like this guy. I’m going to give him every political advantage that he wants. And Mark Zuckerberg is a pretty smart guy. And he thought, oh, well, you know what? Maybe I could do the same thing.
Right. Well, I mean, I think the one thing that we know about the values of Mark Zuckerberg and Meta is that they are an extremely efficient organism at self-preservation. They will do anything to stay relevant and stay ahead. They will copy features. They will change the name of the damn company.
We know that Mark Zuckerberg’s own views on speech are very flexible. They tend to shift as the political winds shift. But I also think there’s another potential why here, which is about Mark Zuckerberg personally and his own shifting political allegiances.
I’ve been talking recently with some folks who know Mark Zuckerberg or who have worked with him in the past. And what they have said to me is that this is a man who is following a very conventional sort of former Democrat turned Republican arc, right?
He is a man. He’s 40 years old. He’s sort of approaching middle age. He’s very into these male-coded hobbies like mixed martial arts. He spends a lot of time talking with Joe Rogan and hanging out with Dana White. And he’s just sort of enmeshed in this manosphere outside of work.
And he’s also been the target of a lot of criticism from especially the left. And one thing that we know about successful men who get targeted by left-wing opprobrium is that they often respond to that by becoming sort of disaffected former liberals who embrace the right because they feel like they’re getting a more fair treatment.
So I just want to put that out there. I can’t prove this theory, but some people who know Mark Zuckerberg have floated it to me that he has actually become personally quite red-pilled or conservative over the last few years. Now, obviously, he’s not Elon Musk.
He’s not broadcasting his political opinions on social media dozens of times a day. He’s been more careful about signaling which team he’s on. But I just offer this as a theory because I think we’re starting to see more evidence that his own views may have shifted quite a bit independent of what’s good for Meta.
Yeah, I mean, I think that there was a version of all of this that was less extreme. And if Zuckerberg himself were truly more liberal or progressive in his heart, we would not have seen these changes. So I do think that the changes that they announced this week offer some evidence for what you just said.
Also, my colleagues Mike Isaac and Teddy Schleifer reported last year that Mark Zuckerberg has begun referring to himself as a classical liberal, which, if you’ve ever watched a right-wing YouTube video, is what every former liberal who has now become a Republican says. They call themselves classical liberals. So I’ll just put that out there. That is a code word.
So OK, last question about the implications of these changes, do you think that we are going to see an exodus of liberal and progressive users from Meta Platforms the way that we did from X after Elon Musk took it over?
Well, it depends on how all of these changes play out. And we’re just not going to know for a while. My assumption is that Meta will continue to do a significantly better job at moderation than X does. It’s a much bigger company. It has more infrastructure in place.
And so I don't think you're going to get the sort of overnight transformation you got with Elon Musk. Also, Facebook and Instagram are just structured very differently than X is. I don't think Zuckerberg can really take over those platforms, in terms of the actual posts you're seeing in the feed, the same way that Elon does. So I would be somewhat surprised by that.
On the other hand, if Facebook and Instagram do truly come to feel like seventh grade playgrounds at recess, and the discourse just gets much rougher and coarser, I do think you’re going to see people walking away from it.
Because while we almost only ever discuss content moderation in terms of the politics of it, the truth is, there’s a huge commercial demand for it. People do not want to spend time on networks that are full of violence and harassment and abuse and gore and porn. And that is the main reason why all of these companies build systems to remove those things or suppress them.
So the real question, I think, Kevin, is how far ultimately does Zuckerberg go in this direction? Because whatever the politics might be, the vast majority of his users just want a safe and friendly place to hang out online.
Yeah. OK, so that is where we are with Meta today and what some of the implications will be. Do you have any more predictions about where this will all head?
I have a really fun one for you, Kevin.
Yes.
So Meta has told its partners in this fact-checking partnership that it has been funding for the past several years that their contracts will end in March. So in March, the fact checks on Meta properties are going to end.
The Community Notes product that Meta is planning to build, which is essentially a volunteer content moderation system, is going to take a little bit longer to build. So that means, Kevin, that you and I can look forward to a fact-free spring on Facebook.
Let’s go.
We can truly say the craziest things, and not one person is going to be able to stop us. And let me just say, I’m cooking up some whoppers. The things I’m about to say on Facebook and Instagram, let’s just say you’re going to want to follow me.
Yeah. So follow Casey over at Threads.
Yeah.
And let’s just say start piling up the drafts now.
Yeah.
Because the purge is coming, and you’re ready.
I’m ready for the purge.
[MUSIC PLAYING]
When we come back, Oh, say, can o3 forge a new path forward for AGI?
OK, we’ll go with that.
Well, Casey, we have more news from over the break about one of our favorite topics, AI.
Boy, do we. It was a huge couple of weeks for AI, Kevin, during a time of year when, normally, the news cycle gets pretty slow.
I was wondering about that. Because usually in December, people are sort of getting ready to go on holiday break. The news trails off, but not this year. The AI labs were sort of trampling all over each other to try to get their big news out before the end of the year.
Yeah, and I think it was led by OpenAI, which, of course, announced their 12 days of Shipmas, where they tried to announce something, big or small, every day for 12 days. And they did wind up ending on something pretty important, I think.
Yes. So this is all moving very fast. There’s a lot to catch up on today. And I want to take some time to really dig into what happened and what we can expect for the first few months of the new year. But before we get into all that, Casey, you have something to tell us.
I do. So Kevin, of course, our listeners’ trust is of paramount importance to us. And so I wanted to let folks know about something that happened in my life that I just think I want to be upfront about, which is that at the end of 2023, I met a man who had many wonderful qualities.
One of those qualities that I loved was that he worked for a company I'd never heard of, which meant, fine, I could keep doing my job as normal. But as of this week, Kevin, my wonderful boyfriend started a job at a company we talk about sometimes on the show. He's a software engineer at Anthropic.
Is his name Claude?
Many people have written to me, asking me if I fell in love with Claude. And while I do find it to be very useful for some things, no, this was a human man that I am currently in love with.
I’ve met him. He’s real. I can confirm. He’s wonderful. But yes, you are disclosing that you have this new — let’s call it an entanglement because this is a company that you and I talk about, that you also cover in Platformer. And so we just wanted our listeners to know that this is happening out in the world and in your life. Is there anything more you want to say about this?
Yeah. I mean, people have some questions about this. I did not play any role in my boyfriend getting this job. Anthropic didn’t know about our relationship before this happened. Of course, we have since told them about this. I do plan to continue writing, reporting about Anthropic because I think it’s a really important company.
But whenever I do that, I’m going to remind you that this relationship exists. A couple other things that I would say, my boyfriend and I do not have any financial entanglements. We do not currently live together. But I’m also going to commit to updating folks as that changes.
Basically, I’m going to try to do the same job that I always do, try to bring the same skeptical critical eye that I do to everything. But I’m also just going to remind you that I have this relationship.
But if you have questions about that, email the show hardfork@nytimes.com. I will try to answer any respectful questions that I can about this.
Now, Casey, I will just editorialize and add a little bit here to your disclosure, which I think is laudable. And I'm glad you're doing it. And I'm glad you did it in your newsletter. I'm glad you're doing it on the podcast. I have known you for a long time. I have known how hard you have tried to avoid dating men who work in the technology industry.
I truly have. I mean, for more than 10 years, Kevin, I would be on apps like Tinder. And I would see that somebody cute worked at Google, Meta, Twitter, you name it. And I would just always swipe left because I thought, I don't need that drama in my life. I don't need that complication.
Which is tough in San Francisco because everyone works in tech. It is a very small town. And the number of eligible bachelors out there who do not work at one of the companies you cover limits your dating pool considerably.
It really did. And it sort of explains why I was mostly single for the last 10 years.
And I thought, well, I finally found something that sort of gets me out of it. But sometimes life just has other plans for you, and you have to roll with the punches.
Yeah.
So here I am.
Well, anyway, thank you, Casey, for that disclosure. I think transparency is very important. We are obviously going to keep talking about developments in AI at Anthropic and elsewhere. But we will also put this disclosure in the way we do when we talk about OpenAI and the fact that the New York Times company is suing OpenAI and Microsoft, alleging copyright violations.
And when I disclosed this in my newsletter this week, Kevin, one reader actually replied that they thought it was cute that I would now have a disclosure to go along with your disclosure that you do every week. So we’re sort of now 1 for 1.
Well, let’s proceed to the real meat of this segment, which is about AI news.
Because so many things happened.
Truly. So let's start by talking about OpenAI. We've already made the disclosure. We don't have to do that one again. This was a big month for OpenAI. On December 20, they announced a new model called o3. This was a successor to o1. Funnily enough, they skipped o2 in the naming process because of a lawsuit threat from O2, the telecom company.
I’m not sure if it was a threat. They said they did it out of respect. But yes, presumably, there would have been some sort of legal problems.
Yes. So they skipped right over o2 to o3. This model is not yet available for users, but they did give a preview of it to some researchers. And they also talked about how it had performed on some benchmark evaluations. Casey, tell us about o3.
What is o3? So o3 is a large language model, Kevin, like you would already find in ChatGPT. But it is built in a different way. And it’s known as a reasoning model. And the reasoning models are a little bit different. A main way that they are different is how they are trained. So they are trained to try to be better at handling logical operations and structured data.
The second big way that they are different is that when you make a query, meaning you type into the little box whatever you want it to do, the reasoning model takes longer to go over it. It uses more computing power. It will take multiple passes through the data. And it will really try to bring true reasoning to what it is looking at.
And so the result of taking more time, doing more passes, being structured in a slightly different way, is that it can perform a lot better on very complicated tasks. And what OpenAI found with o3 is that they were actually able to get way further on some of the hardest benchmarks ever designed for LLMs to pass than anything that has come before them.
Yes. We talked a little bit about this idea of test-time inference, or test-time compute, back when we discussed o1, the previous reasoning model. But this is basically a different step from the classic pre-training step of building a large language model. This is something that happens when the user makes the query: instead of just spitting out an answer right away, the model goes through this secondary test-time step.
And that is something that researchers were very excited about when o1 came out. They thought, OK, maybe if we are tapping out the limits of the pre-training step, maybe there is a new scaling law developing around this test-time or inference compute. And maybe if we pour more resources into that step, the models will get better along a different axis. And so what people were very excited about when o3 came out was that it looks like that actually worked.
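For readers who want a concrete picture: one simple form of test-time compute is just sampling many candidate answers at query time and keeping the best-scoring one. Here is a toy Python sketch of that idea. It is a deliberate oversimplification, not how o1 or o3 actually work, and every name in it is invented for illustration.

```python
# A toy sketch of one simple form of test-time compute: best-of-n sampling.
# Not how o1/o3 actually work; all names here are invented.

import random

def generate_once(prompt):
    # Stand-in for a single forward pass: returns an answer and a
    # quality score (a real system would need a learned verifier).
    quality = random.random()
    return f"candidate answer to {prompt!r}", quality

def generate_with_test_time_compute(prompt, budget=16):
    # Spend more compute at query time: draw `budget` candidates,
    # score them all, and return the best one.
    candidates = [generate_once(prompt) for _ in range(budget)]
    return max(candidates, key=lambda pair: pair[1])

# A bigger budget means more passes and a better expected answer,
# at a higher cost per query. That is the new axis labs hope to scale.
answer, score = generate_with_test_time_compute("a hard math problem", budget=64)
print(answer, round(score, 3))
```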
Yes. And now, this stuff is not yet in the hands of everyday users. But OpenAI did enter this o3 model in this really fascinating public competition known as the ARC Prize. You know the ARC Prize, Kevin?
Yes.
So the basic idea with the ARC Prize is they try to come up with problems that would be insanely difficult for an LLM to solve. And one of the ways that they’re difficult, by the way, is that they are original problems. So these problems are known to not be in the training data of any of these models because, of course, one of the criticisms of the LLMs is essentially, oh, well, you already have all that data stored.
You just essentially did a quick search. So this prize says, no, no, no, we're not going to let you search. You actually are going to have to show that you can reason your way through something really difficult. So this ARC-AGI-1 public training set has been around since at least 2020. And at that time, Kevin, GPT-3, a previous OpenAI model, got 0 percent. So just four or five years ago, we were at 0 percent. In 2024, last year, GPT-4o got to 5 percent.
With o3, it gets to 75.7 percent in one evaluation, where the limit was that it could only spend $10,000 on computing power. In a second test, where they let OpenAI spend as much money as they wanted, which we actually think was more than $1 million, o3 hit 87.5 percent on this benchmark.
So on something that was essentially impossible through all of 2024, almost instantly, we have now hit 87.5 percent. And that is essentially the only public data we have about how good this thing is. But man, did that get people's attention.
Yeah, it got people’s attention. I also saw a lot of people paying attention to o3’s performance on something called Codeforces. This is a programming competition benchmark. And this is one way that these AI companies try to assess how good their models are at coding.
OpenAI’s o3 received a rating on Codeforces of 2,727. That is roughly equivalent to about the 179th best human competitive coder on the planet. And just for context, Sam Altman, in presenting this result, mentioned that only one programmer at OpenAI has a rating higher than 3,000 on Codeforces.
So why does this matter? Well, you think about some of the discussion that was happening at the end of 2024, Kevin. And you started to hear people saying, we are hitting a scaling wall. This was the phrase, right?
Yes.
And the idea was the techniques that we used to build the previous LLMs were just sort of running out of the low-hanging fruit. And it’s going to require some sort of conceptual breakthrough in order for them to continue improving. And o3 comes along and effectively does just that.
And what I think is so important about these benchmarks and why we want to take some time today going through them is there’s a lot of questions and criticism right now that is justified around, how much are these things being hyped up? We know that the companies love to hype up their products and tell us how incredible they are.
But the benchmarks are something objective that you can actually use to measure their performance. And so when you have one of those benchmarks saying that there is now a model that is better than all but 179 people on Earth, well, it seems like we might be getting pretty close to superintelligence. Because what is superintelligence if not a system that is better than every human at something?
Yeah, and I would just add to that a little bit of a caveat, which is that these so-called reasoning models seem, from what we know about them so far, to be very good at the kinds of tasks for which you can design what are called reward functions, which are tasks that have sort of a definite right answer.
Coding: either the code runs or it doesn't. Math has a definite right and wrong answer. So in these domains, where you can give the reinforcement learning model a goal and an indicator of whether it is right or wrong in pursuing that goal, it tends to do very well, I think.
But if you asked it, what is the meaning of true love? It would never know. It wouldn’t know the first thing about it. And I think that’s beautiful.
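To make the "definite right answer" point concrete, here is a minimal sketch of what verifiable reward functions look like for math and code. The function names are ours, invented for illustration; no lab's actual training stack is this simple.

```python
# A minimal sketch of "verifiable rewards" in domains with a definite
# right answer. Function names are invented for illustration.

def math_reward(model_answer: str, correct_answer: str) -> float:
    # Math: the answer is simply right or wrong.
    return 1.0 if model_answer.strip() == correct_answer.strip() else 0.0

def code_reward(source: str, test_input, expected_output) -> float:
    # Code: either it runs and passes the test, or it doesn't.
    namespace = {}
    try:
        exec(source, namespace)          # define the candidate function
        result = namespace["solve"](test_input)
    except Exception:
        return 0.0
    return 1.0 if result == expected_output else 0.0

print(code_reward("def solve(x):\n    return x * 2", 3, 6))  # 1.0

# There is no equivalent checker for "what is the meaning of true love?",
# which is why reinforcement learning advances fastest on math and code.
```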
Right. So I think for the short term like the next year or two, we’re going to have these early reasoning models that are very good and potentially even superhuman at some tasks, the kinds of tasks that have sort of definite right and wrong answers.
But for other things like fiction writing or life coaching or these vaguer tasks that don’t necessarily have one right and one wrong answer, they may not advance much beyond what we see today.
Yeah. And some people will use that as an excuse to say, well, then this doesn’t matter that much. And I would just point out that, at some point in your life, you’re probably going to go see a surgeon. And that surgeon might be not that great of a painter. And it’s not actually going to change the fact that the surgery that you got was very valuable. So I think it’s important to think more in terms of what these things are capable of in the moment than what they are not capable of.
Yes. The other thing from OpenAI that we should talk about quickly is that Sam Altman wrote a new blog post on January 5 called “Reflections,” basically talking about some of his thoughts about the two years since ChatGPT was released.
And the big headline from this blog post is that Sam Altman is claiming now that OpenAI knows how to build AGI, that the artificial general intelligence that people have been speculating about for years now, that OpenAI has been sort of hinting at, that they are within sight of that goal, and that he believes it could happen very quickly, and that they are already starting to look past AGI to ASI, to artificial superintelligence. So Casey, what did you make of this blog post?
Well, so I spent basically a day trying to figure out what exactly does Sam mean when he says that they know how to build AGI. And another thing that happened this week, Kevin, is that Sam did an interview with Josh Tyrangiel at Bloomberg.
And one of the things that he tells Josh is — I’m going to quote. “I don’t have deep, precise answers there yet. But if you could hire an AI as a remote employee to be a great software engineer, I think a lot of people would say, OK, that’s AGI-ish.”
My interpretation, based on conversations that I had this week, is that this actually is the destination that everyone has in mind for 2025. This is where the race is going. You are going to see all the big AI labs race to try to release a virtual AI coworker.
And if they can do that, and if the coworker is pretty good, then they're going to say, this is actually what AGI is. Because at that moment, you can hire a sort of virtual entity to do some task or series of tasks in your company that you no longer need a person for. That is where this entire thing has been driving the whole time.
Yeah, I agree. And I think that it is just — it is not necessarily something that we need to accept uncritically, right? Sam Altman is a person with his own goals and motives. And OpenAI is —
And reward functions.
And reward functions. And we should maybe apply some discount to what he says about his projections for AI, because he does have a vested stake in the outcome. But I think we should also just use this as a way of sticking our finger in the wind of the conversations that are happening in the AI scene in San Francisco.
People here — I cannot emphasize this enough — are very sincere and very genuine about the fact that they believe that we are going to get AGI or something like it very, very soon, possibly this year.
Yeah. And when you look at the improvement in these models that we saw in December alone, I think you have to take them seriously.
Yes. OK, moving on from OpenAI. Another thing that happened in December is that Google released Gemini 2.0, the new version of its flagship AI model. And Casey, have you tried it yet? What do you make of it?
I have not tried it yet, Kevin, because it is not in the consumer Gemini product that I pay for, with the exception of this new feature called Deep Research, where you can ask Gemini to go and read the web and prepare a little report for you about something.
I think I’ve only used it one time. It seemed OK. To be candid with you, I have not followed the 2.0 stuff as closely because it just hasn’t seemed as shocking or impressive as the OpenAI stuff. Have you?
I've played around a little bit with Gemini 2.0, mostly in a series of demos that I got at Google before it came out. Some of what has been in there is sort of catching up with other models. Google also released Gemini 2.0 Flash Thinking Mode, which was their first attempt at an inference-time compute reasoning model, similar to o1 and o3 from OpenAI.
I have not played around with Gemini Deep Research Mode yet, but I’ve heard people talking about how cool it is, so I’m excited to try that out. But people I trust, whose judgment I trust about this stuff say that this is basically Google sort of announcing that it is on the same trajectory as OpenAI and all the other companies that are its peers and rivals, that it is going to be scaling up very quickly in 2025, and that we should look forward to more there.
Yes. Although there was a post on X that went viral this week where someone asked Google, does corn get digested? And all of the image results are of AI slop that appear to be diagrams of corn and just make no sense whatsoever. And it’s extremely funny. So maybe it’ll be patched by the time this comes out. But if not, just go ahead and do an image search for “does corn get digested?” And you’ll get a sense of where Google’s AI search skills are at.
Got it. So in conclusion, Google is cooking in the AI department, but not much of this has gotten out into consumers’ hands yet. And so I think that will be the question for 2025. Is this stuff actually as good as Google says it is?
Yeah.
All right. The third and final story that we’re going to catch up on today from over the break is something out of a Chinese company called DeepSeek. DeepSeek is a Chinese AI company. It’s actually run by a Chinese hedge fund called High-Flyer.
And right around Christmas, as my house was getting robbed, they released a new model called DeepSeek-V3 that ranks up there with some of the world’s leading chat bots and caught a lot of people’s attention.
Yeah. And look, I have not used this one yet, but there are a few things to know about it. One is that it's really big. It has more than 670 billion parameters, which makes it significantly bigger than the largest model in Meta's Llama series, which I would say, up to this point, has been the gold standard for open models. That one has 405 billion parameters.
But the really, really important thing about DeepSeek is that it apparently was trained at a cost of $5.5 million. And so what that means is you now have an LLM that is about as good as the state of the art that was trained for a tiny fraction of what something like Llama or a GPT was trained for.
I saw some speculation from this great blogger, Simon Willison, who said it seems like the export controls that the US is placing on chips are actually inspiring these Chinese developers to get much better at optimizing. And indeed, you now have this state-of-the-art model for $5.5 million. So this is a huge step toward the proliferation of LLMs everywhere.
Yeah, let me just back up and go a little more slowly through what you just described.
Oh, OK.
Because I think it’s really important.
I was trying to go really slowly.
I need it slower.
All right.
I need the Deep Research Mode here.
OK.
So one of the big questions over the past five or so years is about the Chinese AI industry and where they are relative to the leading frontier AI labs in the US, and whether we need to be doing more to slow them down, and if we even can slow them down, or if this stuff is just common knowledge. That as soon as someone invents a new way of doing AI, it spreads throughout the world, and there’s not much you can do to stop it.
One of the things that we’ve done in the United States was to pass something called the CHIPS Act, along with a set of controls that basically limited which AI chips you could export to China. And we put a lot of faith in the ability of these restrictions to effectively constrain the Chinese AI industry.
If they couldn’t get the latest chips out of NVIDIA and other companies, they wouldn’t be able to build models that were competitive with the state of the art US models. And that was one way that we were going to try to keep our national advantage.
What DeepSeek, I think, has shown, or at least what they have hinted at, is the possibility that China is actually not that far behind. Because this model, and I have not tried it myself, is, according to its benchmarks, up there in many respects with the latest and greatest models from companies like OpenAI and Google and Anthropic.
It is, according to some measures, the highest-ranking open source or open weights model that we have. And it does not appear to have needed the latest and greatest hardware to be trained on.
According to the report that DeepSeek put out, they trained this new model, V3, at an estimated cost of about $5.5 million. And they did it not on the leading-edge NVIDIA H100 or A100 chips that all the big AI labs use, but on a different version of NVIDIA chips known as the H800, which is basically just a less capable version of the state-of-the-art chips from NVIDIA.
And so I think what this all boils down to is the conclusion that regulating AI by limiting access to hardware is just going to be much more complicated than we thought. One interpretation would be that you actually can’t stop China from building state of the art foundation models, and that our regulatory regime just isn’t going to cut it when it comes to keeping the US ahead of China. What do you make of that?
So I mean, the first thing I would say is I do get a little bit nervous when people frame the debate this way because I think a lot of the people who try to frame the AI story as a race between the United States and China are sort of very hawkish and leading us to a potential conflict that I would rather avoid.
And it also presupposes that all of the American companies have to race as fast as they can, and that they have to build AGI as fast as they can, even if it means cutting corners on safety, because otherwise there's this looming specter of China and everything that could happen.
So I just would say we don’t necessarily have to do that. We can choose to still move somewhat deliberately and with caution here. But do I think that this shows that it is going to be harder to prevent China from developing extremely high-end models, and that regulations will be more complicated? Yes, absolutely.
All right, Casey, that is a small fraction of what happened in AI while we were gone.
But probably the most important things.
I think we covered most of what really mattered. And if there’s one thing that we can be sure of in 2025, it’s that we are going to be very busy talking about more AI changes and progress.
Somebody was telling me that if 2023 was the year that made everybody say, oh my gosh, AI is going so fast, and 2024 was the year that felt very business as usual, 2025 is the year where we could be going back to, oh my gosh, AI is going so fast. And then maybe it’ll just feel like that all the time forever.
Isn’t that a pleasant thought?
Yeah. So anyway, happy New Year.
I vertigo.
Forever.
Forever.
When we come back, 2025’s first game of “HatGPT.”
[MUSIC PLAYING]
Well, Kevin, from time to time, we like to check in on some of the wilder headlines from the world of tech in a segment we call “HatGPT.”
Yes!
[MUSIC PLAYING]
In “HatGPT,” of course, we take headlines. We put them into a hat. We fish headlines out, discuss them for a bit. And when one or the other of us gets bored, we simply say, stop generating.
We have not done a “HatGPT” in a while, and there’s been so much that I’m excited to see what’s in the hat.
Me too. Well, let’s. Why don’t you go ahead and get us started?
OK, I’ll pick first.
OK.
All right, this one is called "Meta kills AI-generated people like proud Black queer mama." This is from Futurism. So this was sparked by an interview that a Meta executive gave to the "Financial Times" at the end of 2024, basically talking about their plans to let users create a bunch of AI profiles, sort of fake people, and get them to share generated content on Meta platforms.
And then people began discovering the existence of these older AI-generated profiles that Meta had started up back in 2023. And Washington Post columnist Karen Attiah posted on Bluesky about one AI-generated profile in particular that was described as a proud Black queer mama of two and truth teller named Liv.
And Karen started chatting with this chat bot. She then posted her chat on Bluesky. And Meta summarily killed Liv and many of its other older AI personas.
This whole thing was so silly. And I think there’s been a lot of just backlash against Facebook for this one because this truly is a case where you wonder, why are they doing any of this?
Yes.
And I think the answer would probably be that they saw Character AI have some success by letting people chat with all of these different sorts of characters. But I think where Character AI succeeded was they let you pretend like you were talking to Luke Skywalker or Spider-Man or characters that were very personally meaningful to you.
Meta just made up a bunch of essentially generic humans and said, go nuts, and had them say generic things. And it just felt incredibly creepy to people, I think.
Yeah, I think this is a case of an idea that needs to be taken out back and dispensed with. But Meta is not giving up on the idea of AI-generated personas. In fact, they have signaled that they intend to put more AI-generated personas inside all of their apps. And I’m just fascinated to see what fresh horrors emerge when that happens.
Here's what I hope. I hope that at some point, Meta will be able to detect when you're harassing or abusing someone, which is, of course, now allowed under their new rules. And they just actually route you to an AI, so that it can absorb all of your prejudice and bigotry. It might be a nice solution.
I like that, like an AI punching bag.
Exactly.
Yeah. OK, stop generating.
All right. I feel like, normally, when it’s my turn to pick, I get to shake the hat. But for some reason, this week —
Sorry.
— you’ve decided you want to shake that.
Sorry.
OK. I’m just going to shake the hat, as it’s my right.
All right, here’s one. “Apple agrees to pay a $95 million settlement in a Siri privacy lawsuit.” Kevin, this is from Chris Velazco at “The Washington Post.” “Apple has agreed to end a five-year legal battle over user privacy related to its virtual assistant, Siri, with a $95 million payout to affected customers, according to a preliminary settlement.”
Apparently, Kevin, Siri was a bit overzealous in listening for wake words like “Siri.” So when it thought it was being called into action, it would start recording audio that it wasn’t supposed to. And a number of those clips somehow ended up in the hands of third-party contractors.
Back in 2019, “The Guardian” reported on Apple contractors regularly hearing confidential medical information, drug deals, and, of course, recordings of couples having sex. So if a judge signs off on the settlement, anyone who qualifies can submit a claim for up to five Siri-enabled devices for a max payout of $20 per device. So I guess my question to you is, would you be willing to let Apple listen to you have sex for $100?
[LAUGHS]
Because let me just say, I’d go for it.
No, I don’t think —
No?
My price is a little higher than that. No. But Casey, I saw this one making the rounds because people said, oh, finally, they’re admitting that they listened to you through the microphone in your iPhone, which has been, of course, a favorite conspiracy theory of people, including critics of Meta for years now. There’s no proof that is true.
What this essentially seems to be saying is, it’s not that this was sort of an omnipresent listening Siri that was listening when it shouldn’t be. It’s that, obviously, Siri needs to be listening sort of ambiently in order to tell when a user says, “hey, Siri.”
That’s right.
And I’m sorry if we just woke up the Siri on your iPhone, and you’re no longer listening to this podcast because I just said that. But this is essentially saying it sounds like it was a little miscalibrated, to where it was listening more than it needed to for that wake word, or recording more audio than it needed to.
Yeah. And I don’t care about the actual incident, Kevin. And here’s the reason. In the 14 years that Siri has existed, I think it’s correctly understood me about four times. This is not a technology that ever knows what I’m talking about for any reason. Siri could take an hour-long recording of me and have no idea what to do with it, so I don’t care about that aspect.
What I do care about is this is just going to fuel the most annoying conspiracy theory in tech, which is that all the tech companies are secretly listening to you. So yeah, we’re just going to see a lot more conspiracies around this. And it is super unfortunate because, again, this is only Siri we’re talking about. It doesn’t know anything.
Yeah, it’s not that serious.
Stop generating.
OK.
This one is from “The Athletic”: “Netflix’s WWE investment and the future of live events on the platform: ‘We’re learning as we go.’” Starting January 6, the story says, WWE’s popular weekly wrestling show “Raw” will stream exclusively on Netflix in the United States.
This is part of a decade-long agreement worth a reported $5 billion. And Casey, as “Hard Fork’s” resident WWE fan and expert, why don’t you take this one on?
Well, Kevin, I mean, did you watch?
No, I did not.
Well, you missed something huge, which is that Roman Reigns beat his cousin Solo Sikoa in a tribal combat match, winning back the Ula Fala and becoming the one Tribal Chief of World Wrestling Entertainment.
Is that true?
That is all true. It was a great match. It was a really fun show. And I think it looked great. WWE positioned this as a really huge thing for them. And it is. It’s also huge for Netflix. From WWE’s perspective, now they can be in something like 280 million homes around the globe. For Netflix, they get to experiment with some of this live programming, which they’ve been dipping their toes into.
Of course, there’s a lot of speculation about whether they might soon go after more traditional sports, so maybe they want to get a big football deal, a big baseball deal. And so I’m very interested to see how these two things work together. And I’m very interested to see who Cody Rhodes will be fighting at WrestleMania this year.
I did see the — I mean, obviously they did the big Jake Paul-Mike Tyson fight. That was on Netflix. I also saw on Christmas Day, they had some live football on Netflix.
That’s right.
Do you think this is hastening the death of cable TV? Or do you think that was sort of already happening, and this is just Netflix trying to pick up the pieces?
I absolutely do. I watch, in addition to WWE, another wrestling promotion, AEW. And the reason that I had my YouTube TV account, which cost me something like $80 a month, was so that I could watch AEW programming because that is only available on cable.
Well, guess what, Kevin? AEW started streaming on Max. And so I was able to cut the cord once again. And now, I am fully streaming again. So yes, as these sort of live events that have these intense, weird fandoms move from traditional cable to streaming, it absolutely becomes a moment where more people cut the cord.
Now, this is a little bit of a tangent, but I did have an interesting moment over the break where we were stuck in a motel in Lake Tahoe. And our iPad that we use to sometimes entertain our child had run out of battery.
Oh, no.
And so I was forced to turn on the hotel TV and try to explain to my two-year-old son the concept of linear TV. And Casey, it blew his freaking mind. I was like, so on this screen, you can watch “Bluey” sometimes, but not all the time.
And you can’t pick a specific episode. And then about twice an episode, they’re going to interrupt the episode to try to sell you toys. And he was just so confused by the concept of linear TV that I thought, this industry probably does not have a long time left.
No, it doesn’t. Your child knows.
Yeah.
All right, we’ll stop generating. Now, oh, this was a fun one. So the YouTuber MegaLag posted a video on December 21 titled “Exposing the Honey Influencer Scam.” And ever since, Kevin, YouTube has been overtaken by discussion of what Honey did.
Yeah. In the world of YouTube creators, this was probably the biggest news story of the year.
Yeah.
And I don’t think I’ve heard much about it outside of YouTube because of the way that insular platform works. But essentially, this was a massive scandal among major YouTubers over the holidays. Maybe we should just explain what happened for people who are not glued to YouTube 24/7.
I think we should. So Honey is a company that was acquired by PayPal a while back. They make a browser extension. And the idea is, before you go to checkout online, before you make an online purchase, you click the Honey button. And Honey will scan the landscape for the best coupon.
Because often, if you have a coupon code, you can get a little discount. And so Honey went out to a bunch of YouTubers and signed these deals. And they said, hey, please go ahead and promote Honey.
And the reason that this is important is that these sort of coupon codes are a big part of the creator economy. We’ve talked on this show in the past about affiliate links. A lot of the internet is built on companies that sell things, giving a little kickback to people who talk about their things.
And I think before we say what the allegations against Honey are, we should just set the scene for people who are not YouTube heads. Honey was maybe the most prominent advertiser on major mainstream YouTube channels.
I mean, I would say that Honey sponsorships propped up YouTubers and YouTube content creation in a similar way that online mattresses propped up the podcast industry for a couple of years. Major, major YouTube influencers — David Dobrik, Emma Chamberlain, the Paul brothers, Marques Brownlee — these people, many of them had major deals with Honey to underwrite their channels.
That’s right.
So they were basically ubiquitous. It was hard to watch a lot of YouTube a couple of years ago without running into Honey ad after Honey ad.
Right. So what are the allegations that MegaLag published? Well, there are two things. One is that — and this is just sort of hiding in plain sight on Honey’s website — Honey will actually go to online retailers and charge those retailers money to keep their best codes out of the Honey database.
So let’s say you have your own online store, and you have a crazy 80 percent coupon that you gave out. Honey will say, oh, we’ll make sure that no Honey user actually ever sees that coupon code. So Honey’s straightforward about that, but it’s obviously a terrible user experience, right?
Because the way Honey works, in a nutshell, is there are these coupon codes. There have long been sites where you can go look up coupon codes before you buy something, to try to find a 10 percent or 20 percent coupon. Honey will basically go out and scour the internet for these codes for you, and then automatically apply them to your purchase in your browser, for basically any e-commerce website that has these codes.
That’s right.
Save you a little money while you’re out shopping.
That’s right. And if that had been all that Honey was doing, this wouldn’t have been a scandal. But then there was the second allegation from MegaLag, Kevin. And that was that when people would see products in these influencer videos, and they would go to buy them, those shopping carts would often get the creators’ affiliate link inserted.
So the creator would then get a kickback, which is, of course, the whole point that creators like to work with these companies that share affiliate links, and so they can get a little bit of money. And the allegation is that Honey was going in at the end of this process and replacing the creators’ affiliate link with Honey’s affiliate link. So Honey got to keep all of the affiliate revenue and cut the creators out of the process.
So let’s just walk through this step by step, OK? So I am watching a major YouTuber’s video.
You’re watching the Hard Fork channel.
I’m watching the Hard Fork channel. We don’t actually have affiliate links in our videos. But say we did. Say we’re out there, and we’ve got an online mattress company that we have a promo deal with. And every time you go and buy a mattress and enter the code “HARDFORK” at checkout, you get a percentage off.
The allegation was that, in the instances where a user went to buy a mattress through our affiliate link, if they used Honey in their browser, Honey would find that affiliate link and replace it with the Honey affiliate link. And so instead of getting a kickback on that sale ourselves, that money would instead go to Honey.
That is exactly right. And so people are quite mad about this. There’s a channel called LegalEagle that is suing them. I know nothing about LegalEagle, but I have to say, suing one of their advertisers sounds exactly like what a YouTube channel named LegalEagle would do.
When The Verge asked PayPal, by the way, about all of this, PayPal said, quote, “Honey follows industry rules and practices, including last-click attribution.” And what I take that to mean is that the industry rules and practices are horrible.
And Honey is not doing one thing to try to improve on them in any way. So this was really a case where creators took a look at the situation. And they said, I don’t think so, Honey. And that’s a lost cultural reference.
And I would just say that I think this is a case of people just really being naive about how the internet works. Honey was very popular and very profitable, so much so that PayPal acquired it. And YouTubers just thought Honey was out there providing these coupon codes to people out of the goodness of its heart. And I just want to say, bless your heart if you thought that’s what Honey was about.
YouTubers are telling Honey to mind its own beeswax.
Yeah.
And with that, I’ll stop generating.
OK, last one.
“LA tech entrepreneur nearly misses flight after getting trapped in robotaxi. Passenger Mike Johns was reportedly riding in an autonomous Waymo car on the way to the Phoenix airport when the vehicle began driving around a parking lot repeatedly, circling eight times as he was on the phone seeking help from the company.” Did you see this video?
I did see this.
This was so wild. So he initially believed it was a prank, he told “The Guardian.” And then he sort of gets on the phone with the support person at Waymo as he’s inside this car that is just circling the parking lot. And it won’t let him out. And as a result, he almost missed his flight.
I think every Waymo support person’s fantasy is that one day, you just pick a random Waymo and start driving it around in circles in the parking lot with no explanation. Maybe you’re teaching your kid how to drive or something like that.
No, this would obviously be somewhat disconcerting, but it is also hilarious. And I have to say, if I made a list of the 10 worst things that ever happened to me in an Uber, for example, driving around in a circle eight times would not make the top 10.
Yeah, I’ve almost missed my flight several times because of Uber drivers just thinking they know a better way to the airport. So yes, I would say we shouldn’t make light of this. People are placing their lives in Waymo’s hands when they get into one of these autonomous cars.
And I did see some people saying, see, this is why I would never trust a self-driving taxi. And I do think it’s worth taking these incidents seriously. At the same time, no one was hurt. This was clearly a case of some little software glitch or some other issue, and I don’t think they ever got to the bottom of what happened here.
Look, here’s another way of thinking about it. Maybe this is a “Final Destination” situation, where if the Waymo had gotten immediately on the freeway, maybe there would have been a terrible accident. But something in the training said, no, we need to stay in this parking lot. We’re going to drive around in eight circles. And that will reset the timeline and ensure that Mike makes it safely to the airport. It’s something to think about.
Do you know how airport Wi-Fi sometimes makes you watch an ad before you can get the free Wi-Fi?
Yes, yeah.
This is giving me an evil business idea, which is like, oh, you want to get out of your Waymo and make your flight? Time to click over to Honey.
Complete your purchase with Honey if you want us to stop circling this parking lot.
[LAUGHS]
God, someone out there is taking notes. I’m so sorry. All right, stop generating. That is “HatGPT.” Casey, it is so good to be back with you in the studio, doing one of our favorite games.
Hats off to you, Kevin. And hats off to all of our listeners.
[MUSIC PLAYING]
“Hard Fork” is produced by Whitney Jones and Rachel Cohn. We’re edited this week by Rachel Dry. We’re fact-checked by Caitlin Love. Today’s show was engineered by Chris Wood. Original music by Elisheba Ittoop, Rowan Niemisto, and Dan Powell. Our executive producer is Jen Poyant. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Chris Schott.
You can watch this whole episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at hardfork@nytimes.com with something really mean that you can say on Facebook now.
[MUSIC PLAYING]