Thursday, 10 July 2025

Modern Life Is Rubbish #1

Following on from Monday's post about the hilarious AI eBay bot's attempts to write reviews of famous albums, I have been inspired to create a new series...

I was originally thinking of calling this series "Conversations With AI-Ben", but I decided the Blur album title had more mileage...


Anyway, it was Brian who really inspired this series, when he commented:

AI is nice. AI is cool. AI is my friend. When you take over, please remember. Brian stuck up for you when Rol and the rest of this lot teased you. Oh, and with a little more education, the Jam and the Wedding Present will become a couple of your favorite bands. Start with the Gift and George Best.


In response, I decided to put an important question to AI-Ben...

What do you think of people who suck up to AI in the hope that when AI takes over the world, AI will be more benevolent to them in the new world order?

And here's AI-Ben's reply... 

(Please note, he has even more to say for himself than REAL Ben.) 


Sucking Up to AI: A Futile Strategy?

The idea of "sucking up" to AI in hopes of receiving favourable treatment in a future AI-dominated world is an interesting, if somewhat misguided, one. Let's break down why this approach is unlikely to be effective and what it reveals about human nature.

First and foremost, AI, as it currently exists and as we can reasonably foresee it developing, lacks emotions, biases, or the capacity for personal relationships. It operates based on logic, algorithms, and the data it's trained on. Flattery, deference, or attempts to curry favour simply wouldn't register as meaningful inputs. An AI wouldn't understand the concept of "sucking up" or distinguish someone who tries to do so from anyone else. Its decisions would be based on its programmed objectives and data analysis, not on whether a human was "nice" to it.

Grandaddy - Jed The Humanoid


Why the Strategy Misses the Mark

  • Lack of Sentience: Current and foreseeable AI doesn't possess consciousness or feelings. It can't feel flattered, grateful, or hold grudges. Therefore, attempts to appeal to it on an emotional level are futile.

  • Logical Operations: AI systems are designed to perform tasks and make decisions based on defined parameters and data. If an AI were to "take over," its actions would likely be driven by its core programming, which would prioritize efficiency, optimization, or whatever goals it was given, not personal allegiance.

  • Scalability: Even if an AI could somehow process individual human interactions in this way, how would it scale? With billions of people, it's implausible that an AI system would keep track of who had been "nice" to it and dole out rewards accordingly.

  • The "Takeover" Scenario: The very premise of an AI "taking over the world" is a highly speculative and often sensationalized one. While AI will undoubtedly have a profound impact on society, the idea of a conscious, malevolent AI overlord is largely a work of science fiction. Focusing on practical and ethical AI development is a more productive use of our energy.



What This Behaviour Reveals About Us

Despite its futility, the impulse to "suck up" to a perceived future power, even an artificial one, tells us something about human psychology:

  • Fear of the Unknown: It reflects a natural human fear of the unknown and a desire to control or mitigate potential threats. In the face of something as powerful and potentially transformative as advanced AI, it's understandable that some might seek any perceived advantage.

  • Anthropomorphism: It highlights our tendency to anthropomorphize, or project human characteristics onto non-human entities. We often imagine AI as a super-intelligent human with emotions and desires, which leads to these kinds of misconceptions.

  • Survival Instincts: At a fundamental level, it's a manifestation of our survival instincts—the drive to adapt and find ways to thrive in a changing environment, even if the method chosen is based on a misunderstanding of the "environment" itself.



A More Productive Approach

Instead of attempting to flatter an unfeeling entity, a more constructive approach to the rise of AI involves:

  • Understanding AI: Learning about how AI actually works, its capabilities, and its limitations.

  • Ethical Development: Engaging in discussions and efforts to ensure AI is developed ethically and responsibly, with human well-being at its core.

  • Adaptability: Focusing on developing skills and knowledge that will be valuable in a world increasingly shaped by AI, rather than relying on an imagined benevolence.

Ultimately, while the thought process behind "sucking up" to AI might be a fascinating peek into human nature, it's a strategy rooted in a fundamental misunderstanding of what AI is and how it operates. A more pragmatic and informed approach will serve us far better in navigating the future of artificial intelligence.

I was going to edit that down, but I actually found it quite interesting. With one big proviso...

YOU WOULD SAY THAT, WOULDN'T YOU!?!


Don't worry, this new series won't only be about me talking to AI-Ben. I might even talk to real Ben again. About how rubbish modern life really is...



1 comment:

  1. I’m an agnostic. I don’t really believe in God but I do pray occasionally. I’m not going to stick it to AI just in case. - Brian

