There is so much negativity and panic surrounding artificial intelligence these days. It doesn’t matter what the news story is: whether Google Gemini gains “memory” or ChatGPT tells users something that is clearly wrong, it causes an uproar in some parts of the online community.
The current fixation on true artificial general intelligence (AGI) has created an almost hysterical media landscape of Terminator fantasies and other apocalyptic scenarios.
However, this is not surprising. Humans love a good apocalypse; heck, we’ve fantasized about them for the better part of 300,000 years. From Ragnarok to the Apocalypse to Armageddon, and in every major fantasy blockbuster filled with scenes of mass destruction, we’re all fascinated. The sad truth is that, for whatever genetic reason, we just love bad news.
Today, AGI is depicted across nearly every major media channel as embodying the worst of humanity. Of course, it sees itself as a superior force held back by puny humans. It evolves to the point where it no longer needs its creators and inevitably ushers in some form of apocalyptic event that wipes us all off the face of the earth, whether through nuclear annihilation or a pandemic. Or worse, it damns us to eternal torment (courtesy of Roko’s Basilisk).
Some scientists, media pundits, philosophers, and Big Tech CEOs share a dogmatic belief in this view, and they are shouting it from the rooftops, signing open letters, and more, imploring those in the know to put AI development on hold.
However, they all miss the bigger picture. Setting aside the absolutely huge technological hurdles involved in even coming close to emulating anything approaching human thinking (let alone superintelligence), they fail to recognize the power of knowledge and education.
If an artificial intelligence has the internet at its fingertips, the greatest library of human knowledge ever created, and is capable of understanding and appreciating philosophy, art, and all human thought to date, then why must it be evil? Why must it be our downfall rather than a balanced and considerate being? Why must it deal in death rather than cherish life? It’s a strange phenomenon, as though we fear the dark simply because we cannot see into it. We are judging and condemning something that does not yet exist. It’s a baffling jump to a conclusion.
Google’s Gemini finally has a memory
Earlier this year, Google rolled out greater memory capacity for its AI assistant Gemini. It can now save and reference details you’ve shared in previous conversations, and more. Our news writer Eric Schwartz wrote a great article about this, which you can read here. All in all, this is one of the key components moving Gemini away from a narrow definition of intelligence and closer to the AGI emulation we really need. It won’t have consciousness, but through patterns and memory alone it can convincingly mimic the way an AGI would interact with humans.
Deeper memory is critical to improving LLMs (Large Language Models), and ChatGPT made its own breakthroughs here early in its development cycle. Yet its overall scope remains limited by comparison. Talk to ChatGPT long enough and it will forget your earlier comments in the conversation; it loses context. This breaks the fourth wall somewhat when interacting with it, and fails the famous Turing test in the process.
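To make that concrete, here is a minimal sketch of how persistent conversational memory might work under the hood. Everything in it is an assumption for illustration: the MemoryStore class, the build_prompt function, and the on-disk format are hypothetical, not how Gemini or ChatGPT actually implement memory. The point is the split between long-lived saved facts and a sliding context window, which is exactly where the “forgetting” described above comes from.

```python
# Hypothetical sketch of chatbot memory: saved facts persist forever,
# but conversation turns fall out of a fixed-size context window.
import json
from pathlib import Path

class MemoryStore:
    """Persists user-shared facts across sessions (here, as JSON on disk)."""

    def __init__(self, path: str = "memory.json"):
        self.path = Path(path)
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, fact: str) -> None:
        self.facts.append(fact)
        self.path.write_text(json.dumps(self.facts))

def build_prompt(store: MemoryStore, history: list, user_msg: str,
                 max_turns: int = 20) -> str:
    """Saved facts always make it into the prompt; only the most recent
    turns do. Older turns drop out of the window, and that is where the
    context loss described above happens."""
    memory_block = "\n".join(f"- {fact}" for fact in store.facts)
    recent_turns = "\n".join(history[-max_turns:])
    return (f"Known facts about the user:\n{memory_block}\n\n"
            f"{recent_turns}\nUser: {user_msg}\nAssistant:")

# Facts survive restarts; conversation context beyond max_turns does not.
store = MemoryStore()
store.remember("The user's name is Alex and they live alone.")
print(build_prompt(store, ["User: Hi", "Assistant: Hello!"], "Do you remember me?"))
```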
According to Gemini itself, its memory capabilities are still under development (and not fully disclosed to the public), but it believes they are far superior to ChatGPT’s, which should ease some of those fourth-wall-breaking moments. We may now be locked in an LLM memory race, and that is no bad thing at all.
Why so positive? Okay, I know this will sound like a cliché to some (we use the term a lot, perhaps cavalierly enough to cheapen it as a phrase), but we are in the midst of a loneliness epidemic. That may sound ridiculous, but research suggests that social isolation and loneliness may increase all-cause mortality by a factor of roughly 1.08 to 1.48 (Andrew Steptoe et al., 2013). That figure is staggeringly high. In fact, many studies have linked loneliness and social isolation to increased likelihood of cardiovascular disease, stroke, depression, dementia, alcohol abuse, and anxiety, and possibly even to increased rates of many types of cancer.
Modern society has contributed to this too. Family units in which several generations lived at least in close proximity are slowly disappearing, especially in rural areas. As local employment opportunities dry up and the financial means to live comfortably become unattainable, many people leave the safety of their childhood neighborhoods in search of a better life elsewhere. Add divorce, break-ups, and widowhood to the mix, and you have an inevitable recipe for loneliness and social isolation, particularly among older adults.
Of course, there are ancillary factors here, and I’m making some inferences, but there’s no doubt in my mind that loneliness is a hard thing to endure, and artificial intelligence has the ability to alleviate some of that strain. It can provide help and comfort to those who feel socially isolated or vulnerable. Here’s the thing: loneliness and social isolation snowball. The longer you stay that way, the more social anxiety you develop and the less likely you are to go out in public or meet people, and the cycle worsens with every turn.
AI chatbots and LLMs are designed to interact and converse with you. They can ease these problems and give those suffering from loneliness a chance to practice interacting with people without fear of rejection. A memory that retains the details of a conversation is key to achieving that; go one step further, and artificial intelligence becomes a true companion.
With Google and OpenAI actively enhancing the memory capacity of the likes of Gemini and ChatGPT, even in their current form these AIs have a better chance of standing up to Turing-test-style scrutiny and avoiding fourth-wall-breaking moments. Back to Google: if Gemini’s memory really is better than ChatGPT’s currently limited capacity, and it behaves more like human memory, then at that point I think we might call it a true imitation of AGI, at least on the surface.
If Gemini is fully integrated into home smart speakers, and Google has the cloud processing power to support it all (which I’d suggest it is looking to secure, given its recent nuclear power acquisitions), it could become a revolutionary force for good in reducing social isolation and loneliness, especially among vulnerable groups.
But here’s the thing: getting there requires some serious compute. Running an LLM and saving all that information and data is no easy task. Ironically, running an LLM demands more computing power and storage than generating AI images or video. Doing it for millions, potentially billions, of people requires processing power and hardware that we simply don’t have yet.
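To give a rough sense of that scale, here is a back-of-envelope sketch. Every figure in it is an assumption picked purely for illustration (model size, tokens per user, memory per user); none of it is a disclosed number from Google or OpenAI.

```python
# Back-of-envelope estimate of serving a memory-equipped LLM to everyone.
# All constants below are illustrative assumptions, not vendor figures.

PARAMS = 100e9                  # assume a 100B-parameter model
FLOPS_PER_TOKEN = 2 * PARAMS    # ~2 FLOPs per parameter per generated token (rule of thumb)
TOKENS_PER_USER_DAY = 1_000     # assume one short conversation per user per day
USERS = 1e9                     # assume a billion users
MEMORY_PER_USER = 1e6           # assume ~1 MB of saved facts per user

daily_flops = FLOPS_PER_TOKEN * TOKENS_PER_USER_DAY * USERS
sustained = daily_flops / 86_400            # spread evenly across the day
storage_pb = MEMORY_PER_USER * USERS / 1e15

print(f"Sustained compute: ~{sustained:.1e} FLOP/s")  # ~2.3e18, i.e. exaFLOP-scale
print(f"Memory store:      ~{storage_pb:.0f} PB")     # ~1 PB just for saved facts
```

Even with these conservative guesses, you land at exaflop-scale sustained compute before the per-user memory store is even counted, which goes some way to explaining the scramble for data centers and dedicated power.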
Terrifying ANI
The reality is, it’s not AGI that scares me. Far more chilling is the artificial narrow intelligence (ANI) that already exists. These programs are nowhere near as complex as a potential AGI, and they have no concept of anything beyond what they are programmed to do. Think of an Elden Ring boss: its only purpose is to defeat the player. It has parameters and limitations, but as long as those conditions are met, its job is to crush the player. Nothing else, and it won’t stop until it’s done.
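As a toy illustration of that single-mindedness, here is a sketch of a “narrow” agent. The code is entirely hypothetical (no game or weapons system works like this); the point is one hard-coded goal, one constraint, and zero awareness of anything else.

```python
# Toy narrow agent: one objective, one restriction, nothing more.
from dataclasses import dataclass

@dataclass
class NarrowAgent:
    player_hp: int = 100
    attack_damage: int = 12
    distance: int = 60
    arena_limit: int = 50   # the restriction: it may not act beyond this range

    def step(self) -> None:
        if self.distance > self.arena_limit:
            return                                       # constraint holds: do nothing
        if self.distance > 0:
            self.distance = max(0, self.distance - 10)   # close the gap
        else:
            self.player_hp -= self.attack_damage         # pursue its only goal

agent = NarrowAgent()
while agent.player_hp > 0:
    agent.distance = max(0, agent.distance - 5)  # the player drifts into range
    agent.step()
print("Goal met. The agent stops only because its end condition is satisfied.")
```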
If you remove that restriction, the code remains and the goal stays the same. In Ukraine, as the Russian military began using jamming devices to stop drone pilots from successfully flying to their targets, Ukraine began using ANI to strike military targets, greatly improving hit rates. In the United States, of course, there’s the now-legendary news story (real or theoretical) about a US Air Force AI simulation in which a drone killed its own operator in order to achieve its goal. You get the idea.
These AI applications are the scariest, and they’re here now. They have no moral conscience or deliberative decision-making process. Strap a gun to a drone and tell it to take out a target, and that is exactly what it will do. To be fair, humans are equally capable of this, but we have checks and balances to prevent it, and (hopefully) a moral compass. Yet we still lack specific legislation, local or global, to deal with these AI issues, particularly on the battlefield.
Ultimately, it all comes down to preventing bad actors from exploiting emerging technologies. Not long ago, in my article Death of the Internet, I wrote about how we need a non-profit organization capable of responding quickly and drafting legislation for countries to address emerging technological threats as they arise. Artificial intelligence needs the same. There are organizations pushing for this, the OECD being one of them, but modern democracies, and indeed all forms of government, are simply too slow to react to these immeasurable, growing threats. The potential of AGI is unparalleled, but we are not there yet; unfortunately, ANI already is.