Here’s a new way to lose an argument online: the appeal to AI
December 23, 2024

Over the past 20-plus years as a journalist, I have seen and written about many things that have irrevocably changed my view of human nature. But it wasn’t until recently that something short-circuited me.

I’m talking about a phenomenon you may have noticed, too: the appeal to AI.

There’s a good chance you’ve seen someone use the appeal to AI online, and maybe you’ve even heard it out loud. It’s a logical fallacy best summed up in three words: “I asked ChatGPT.”

  • I asked ChatGPT to help me figure out my mystery illness.
  • I asked ChatGPT to give me the tough-love advice it thought I needed most in order to grow as a person.
  • I used ChatGPT to create a custom skin-care routine.
  • ChatGPT provided an argument that alienation from a relationship with God (i.e., damnation) is necessarily possible, based on abstract logical and metaphysical principles such as the excluded middle, without appealing to the nature of the relationship, genuine love, free will, or respect.
  • There are so many government agencies that even the government doesn’t know how many there are! [based entirely on an answer from Grok, which is screenshotted]

Not every example uses this exact formula, though it’s the simplest way to sum up the phenomenon. People might use Google Gemini, Microsoft Copilot, or their chatbot girlfriend instead, for instance. But the common thread is placing reflexive, unwarranted trust in a technical system that isn’t designed to do the thing you’re asking it to do — and then expecting other people to buy into it, too.

If I were still in the business of commenting on forums, this is the sort of thing that would get me fired up

Whenever I see this appeal to AI, my first thought is the same: Are you a fucking idiot or something? For a while, the words “I asked ChatGPT” were enough to make me check out — I was no longer interested in what that person had to say. I mentally filed it alongside the logical fallacies, you know the ones: the straw man, the ad hominem attack, the Gish gallop, and the no true Scotsman. If I were still in the business of commenting on forums, this is the sort of thing that would get me fired up. But the appeal to AI has started happening so often that I’m going to bite the bullet and try to understand it.

I’ll start with the simplest one: Musk’s example (the last one) is a man promoting his own product in the same breath. The others are more complicated.

First of all, I find these examples sad. In the case of the mystery illness, the writer turned to ChatGPT for the attention and answers they couldn’t get from doctors. In the case of the “tough love” advice, the asker said they were “shocked and surprised at the accuracy of the answers,” even though the answers were all generic pablum you could get from any call-in radio show, right down to “your fear of vulnerability is the problem.” As for the skin-care routine, the writer might as well have gotten one from a women’s magazine — there’s nothing especially customized about it.

As for the argument about damnation: hell is real, and I am already here.

ChatGPT’s text sounds confident, and its answers are detailed. This is not the same as being right, but it has the signifiers of being right

Anyone familiar with how large language models work knows that systems like ChatGPT generate strings of words by predicting a likely response to a prompt, based on patterns in their training data. There’s a lot of human-created information on the web, so these responses are often correct: ask it “what is the capital of California,” for instance, and it will answer Sacramento, plus a second unnecessary sentence. (One of my minor gripes with ChatGPT: its answers sound like a sixth grader trying to hit a minimum word count.) Even for an open-ended query like the ones above, ChatGPT can construct a plausible-sounding answer out of its training material. The love and skin-care advice comes out generic because countless writers on the internet have given similar advice.
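If you want to see the underlying idea stripped to the bone, here’s a toy sketch of next-token prediction — nothing like ChatGPT’s actual architecture, just a made-up bigram table. The BIGRAMS dict, next_token, and generate below are all hypothetical names invented for illustration:

```python
import random

# A toy "language model": counts of which word follows which in a tiny,
# made-up corpus. Real LLMs learn billions of parameters over subword
# tokens; this hand-written table just illustrates the principle.
BIGRAMS = {
    "the":        {"capital": 4, "state": 2},
    "capital":    {"of": 6},
    "of":         {"california": 3, "the": 1},
    "california": {"is": 5},
    "is":         {"sacramento": 4, "large": 1},
}

def next_token(token):
    """Sample the next word in proportion to how often it followed `token`."""
    options = BIGRAMS.get(token)
    if not options:
        return None  # no known continuation; stop generating
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(prompt, max_tokens=10):
    """Extend the prompt one predicted word at a time."""
    tokens = prompt.lower().split()
    for _ in range(max_tokens):
        nxt = next_token(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the capital"))  # e.g. "the capital of california is sacramento"
```

Notice that nothing in this process ever checks whether a sentence is true. The model only tracks which words tend to follow which — a point the rest of this piece turns on.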

The problem is that ChatGPT is not trustworthy. ChatGPT’s text sounds confident, and its answers are detailed. This is not the same as being right, but it has the signifiers of being right. Nor is it always obviously wrong, particularly with answers — like the love advice — onto which the asker can easily project whatever they want to hear. Confirmation bias is real, my friends. I have written articles about the problems that arise when people trust automated prediction systems with complex factual questions. Yet even though those problems come up again and again, people keep doing it.

How to establish trust is a tricky question. As a journalist, I like to show my work — I tell you who said what to me and when, or show you what I’ve done to try to confirm that something is true. With the fake presidential pardons, I showed you the primary sources I used so you could make your own inquiries.

But trust is also a heuristic that can easily be abused. In financial frauds, for instance, the presence of one specific VC firm in a funding round may suggest to other VC firms that somebody has already done the requisite due diligence, leading them to skip the intensive process themselves. The appeal to authority relies on trust as a heuristic — a practical, if sometimes faulty, labor-saving measure.

How long have we been hearing from industry leaders that AI will soon be able to think?

The person asking about the mystery illness is appealing to AI because humans don’t have answers and they are desperate. The skin-care thing seems like pure laziness. With the person asking for love advice, I just wonder how their life got to the point where they had no humans to ask — how it is they don’t have a friend who has watched them interact with other people. As for the damnation question, there’s a strong whiff of “the machine thinks damnation is logical,” which is fucking embarrassing.

The appeal to AI is distinct from other kinds of “I asked ChatGPT” stories — say, getting it to count the “r”s in “strawberry” — that test the limits of the chatbot or engage with it in some other self-aware way. There seem to be two ways to understand it. The first is “I asked the magic answer box, and it told me,” in much the same tone as “well, the Oracle at Delphi said…” The second is “I asked ChatGPT, and can’t be held responsible if it’s wrong.”

The second of these is lazy. The first is alarming.

People like Sam Altman and Elon Musk bear some responsibility for the appeal to AI. How long have we been hearing from industry leaders that AI will soon be able to think — that it will surpass humans and take our jobs? There’s an absurd logic at work here: Elon Musk and Sam Altman are very rich, so they must be very smart — and they’re richer than you, so they’re smarter than you. They’re telling you that AI can think. Why not believe them? And besides, wouldn’t the world be so much cooler if they were right?

Unfortunately for Google, ChatGPT is a nicer-looking crystal ball

There’s also a big attention reward for stories that engage with AI; Kevin Roose’s dull Bing chatbot story is a case in point. Sure, it’s credulous and hokey — but watching the experts fail the mirror test really does get people’s attention. (In fact, Roose later wrote a second story in which he asked chatbots what they think of him.) On social media, there’s an incentive to put the appeal to AI front and center for engagement, and there’s an entire cohort of AI-influencer weirdos who are more than happy to boost this stuff. If you hand out social rewards for stupid behavior, people will engage in stupid behavior. That’s how fads work.

There’s one more thing at work here, and that’s Google. Google Search started out as an unusually good online directory, but over the years, Google has encouraged thinking of it as a crystal ball that provides the one true answer on command. That was the point of featured snippets before the rise of generative AI, and now the integration of AI answers has taken things a few steps further.

Unfortunately for Google, ChatGPT is a nicer-looking crystal ball. Let’s say I want to replace the rubber on my windshield wipers. A Google search for “replace rubber windshield wipers” shows me all kinds of junk, starting with an AI overview. Next to that is a YouTube video. Scroll down a bit further and there’s a snippet; next to that is a photo. Below that are suggested searches, then more video suggestions, then Reddit forum answers. It’s busy and chaotic.

Now let’s go over to ChatGPT. Asking “how do I replace the rubber on my windshield wipers?” gets me a cleaner layout: a response with subheadings and steps. I don’t have any immediate links to sources, and no way to evaluate whether I’m getting good advice — but I do have a clear, authoritative-sounding answer on a clean interface. If you don’t know or care how things work, ChatGPT looks better.

It turns out Jean Baudrillard predicted the future all along

The appeal to AI is the perfect example of Arthur C. Clarke’s law: “Any sufficiently advanced technology is indistinguishable from magic.” The technology behind LLMs is sufficiently advanced because the people using it haven’t bothered to understand it. The result has been a new, dispiriting genre of news story: people relying on generative AI only to get made-up results. What I find frustrating is that no matter how many of these there are — whether it’s fake presidential pardons, bogus citations, made-up case law, or fabricated movie quotes — they don’t seem to make any impact. Hell, even the glue-on-pizza thing hasn’t stopped “I asked ChatGPT.”

That it’s a bullshit machine — in the philosophical sense — doesn’t seem to bother many askers. By its nature, an LLM cannot determine whether what it says is true or false. (At least a liar knows what the truth is.) It has no access to the actual world, only to a written representation of the world that it “sees” through tokens.

The appeal to AI, then, is an appeal to the signifiers of authority. ChatGPT sounds confident, even when it shouldn’t, and its answers are detailed, even when they’re wrong. The interface is clean. You don’t have to make a judgment call about which link to click. Some rich guys have told you it will soon be smarter than you. A New York Times reporter is doing it. So why think at all, when a computer can do it for you?

I can’t tell how much of this is credulous trust and how much is pure luxurious nihilism. In some ways, “the robot will tell me the truth” and “nobody’s going to fix anything, and Google is wrong anyway, so why not trust the robot” amount to the same thing: a lack of faith in, and a contempt for, human endeavor. I can’t help feeling that this is all going somewhere very dark. Important people are talking about banning the polio vaccine. New Jersey residents are pointing lasers at aircraft during the busiest travel time of the year. The entire presidential election was awash in conspiracy theories. Besides, wouldn’t it be more fun if aliens were real, a secret cabal really did run the world, and AI really were intelligent?

Against that backdrop, maybe it’s easy to believe there’s a magic answer box in the computer, and that it is entirely authoritative — just like our old friend the Oracle at Delphi. If you believe the computer’s knowledge is infallible, you’re primed to believe anything. It turns out Jean Baudrillard predicted the future all along: who needs reality when we have signifiers? What has reality ever done for me, anyway?
