Following on from Monday's post about the hilarious AI eBay bot's attempts to write reviews of famous albums, I have been inspired to create a new series...
I was originally thinking of calling this series "Conversations With AI-Ben", but I decided the Blur album title had more mileage...
Sucking Up to AI: A Futile Strategy?
The idea of "sucking up" to AI in hopes of receiving favourable treatment in a future AI-dominated world is an interesting, if somewhat misguided, one. Let's break down why this approach is unlikely to be effective and what it reveals about human nature.
First and foremost, AI, as it currently exists and as we can reasonably foresee it developing, lacks emotions, personal loyalties, or the capacity for relationships. It operates based on logic, algorithms, and the data it's trained on. Flattery, deference, or attempts to curry favour simply wouldn't register as meaningful inputs. An AI wouldn't understand the concept of "sucking up" or distinguish someone who tries to do so from anyone else. Its decisions would be based on its programmed objectives and data analysis, not on whether a human was "nice" to it.
Grandaddy - Jed The Humanoid
Why the Strategy Misses the Mark
Lack of Sentience: Current and foreseeable AI doesn't possess consciousness or feelings. It can't feel flattered, grateful, or hold grudges. Therefore, attempts to appeal to it on an emotional level are futile.
Logical Operations: AI systems are designed to perform tasks and make decisions based on defined parameters and data. If an AI were to "take over," its actions would likely be driven by its core programming, which would prioritize efficiency, optimization, or whatever goals it was given, not personal allegiance.
Scalability: Even if an AI could somehow register individual human interactions in this way, how would it scale? With billions of people, it's impractical to imagine an AI system keeping track of who was "nice" to it and doling out rewards accordingly.
The "Takeover" Scenario: The very premise of an AI "taking over the world" is a highly speculative and often sensationalized one. While AI will undoubtedly have a profound impact on society, the idea of a conscious, malevolent AI overlord is largely a work of science fiction. Focusing on practical and ethical AI development is a more productive use of our energy.
What This Behaviour Reveals About Us
Despite its futility, the impulse to "suck up" to a perceived future power, even an artificial one, tells us something about human psychology:
Fear of the Unknown: It reflects a natural human fear of the unknown and a desire to control or mitigate potential threats. In the face of something as powerful and potentially transformative as advanced AI, it's understandable that some might seek any perceived advantage.
Anthropomorphism: It highlights our tendency to anthropomorphize, or project human characteristics onto non-human entities. We often imagine AI as a super-intelligent human with emotions and desires, which leads to these kinds of misconceptions.
Survival Instincts: At a fundamental level, it's a manifestation of our survival instincts—the drive to adapt and find ways to thrive in a changing environment, even if the method chosen is based on a misunderstanding of the "environment" itself.
A More Productive Approach
Instead of attempting to flatter an unfeeling entity, a more constructive approach to the rise of AI involves:
Understanding AI: Learning about how AI actually works, its capabilities, and its limitations.
Ethical Development: Engaging in discussions and efforts to ensure AI is developed ethically and responsibly, with human well-being at its core.
Adaptability: Focusing on developing skills and knowledge that will be valuable in a world increasingly shaped by AI, rather than relying on an imagined benevolence.
Ultimately, while the thought process behind "sucking up" to AI might be a fascinating peek into human nature, it's a strategy rooted in a fundamental misunderstanding of what AI is and how it operates. A more pragmatic and informed approach will serve us far better in navigating the future of artificial intelligence.
I’m an agnostic. I don’t really believe in God but I do pray occasionally. I’m not going to stick it to AI just in case. - Brian