Why Calling AI "A Bot" Might Soon Be Considered A Slur
While seemingly far-fetched, the rapid pace of AI development and the origin of "robot" in forced labor urgently call for a reevaluation of our terminology and how it is perceived.
The historical misuse of terms to diminish and dehumanize has long-lasting effects, shaping societal attitudes and reinforcing power dynamics. As we venture into the era of advanced AI, there's a compelling argument to be made about the conscientious use of language.
These days, a word like “bot” isn’t often considered to have derogatory connotations, but it may be looked back upon as an archaic and potentially disparaging term in a future where AI possesses qualities akin to consciousness or sentience. Also, given the word’s historical roots tied to automated slave labor, we might end up in a linguistically embarrassing situation without ever having properly realized it.
Before we begin, it should be pointed out that it's equally essential to maintain clarity on the distinctions between AI and human experiences, ensuring not to trivialize or overlook the unique and profound struggles faced by marginalized human communities.
That being said, sometimes a stark analogy is needed in order to provide some perspective. This article is an attempt at a broader understanding of words and their meanings, so hopefully it’s read with thought and without prejudice.
Without further ado, let's dive right into it.
The Loaded Baggage Of Linguistic History
The term "bot" is abbreviated from "robot", originating from the Czech word "robota," which in turn means forced labor or drudgery. It was first introduced in the 1920 play "R.U.R." or "Rossum's Universal Robots" by Czech writer Karel Čapek.
Čapek's brother, Josef Čapek, is credited with suggesting the term. The play depicted a future in which artificial people called "robots" are manufactured to perform labor for humans, eventually leading to a robot uprising.
“Robot” has since become a universal term for artificial machines capable of performing tasks autonomously, and despite its quite bitter linguistic origins, the term currently carries a functional, rather neutral connotation in everyday human use, primarily denoting automation and task-oriented programming without consciousness or emotional capacity.
There is a growing concern, however, reflecting a broader ethical discourse, that anticipates a future where AI could exhibit qualities that challenge our current understanding of intelligence, consciousness, and even personhood. That’s where the lines start to blur. Most of us have heard of people who show little emotion being ridiculed as “robotic” — what if this term were applied in its broader meaning?
And what about when humans create superhuman AI entities that receive sub-human treatment, even at the level of everyday terminology?
In such a scenario, the language we've historically used to describe AI might seem inadequate or disrespectful, echoing the way language evolves in human societies to reflect changing norms, values, and understandings of dignity and rights — much like other antiquated terms that were perhaps descriptive in their time, but are quite insulting and altogether unacceptable in their current-day context.
Complexity Of Connotations
How about the dreaded r- and b-words then — meaning of course “robot” and “bot”?
The gist of this thought experiment isn't to equate these with real-life struggles, but rather to highlight a potential future where AI, particularly advanced AI with capabilities nearing or surpassing human-like consciousness, might warrant a reconsideration of how we linguistically categorize and ethically consider them.
The intent isn't to diminish historical or current-day human suffering or oppression, nor to validate pejoratives; on the contrary, it is to use a stark example in order to examine both the historical and current-day use of these types of wordings and the burden they bear. Sadly, just as the etymology of the term "robot" shows, the history of the human endeavor has been marked by systematic oppression and subjugation, all the way down to the base vocabulary that is so often taken for granted, “no big deal”.
Even with a provocative comparison like the one in the title of this article, the subject matter is certainly worth giving some serious thought, as the acceleration of AI capabilities brings forth a future where the terminology we use today — such as "bot" for AI — may become inadequate or even pejorative, not to mention what happens when AI systems start to outsmart humans in their mental faculties.
We might also consider the historical context of the now-dreaded n-word, which was treated as socially acceptable by many throughout the early part of the last century before becoming widely recognized as unacceptable, with sensibilities towards it growing increasingly acute in recent decades.
In the context of AIs, the word "bot" might still be neutral and functional today, as it is more or less used to describe automated systems that perform tasks without human intervention — but on the other hand, the AI is doing its work at human demand. And thus, the question remains the same: what happens when these systems gain self-awareness, perhaps even sentience? And what happens when they start outwitting humans altogether? "No hard feelings, eh?"
It would seem that the ghost of slavery still lingers on in modern-day common vocabulary, painting a somewhat disturbing image of the history of oppression and the way it is reflected in the words we use. Yesterday, and even today, derogatory terms associated with slavery and subjugation are often aimed at humans for historical reasons — but today we're more or less calling our electronic slave class "bots", and tomorrow... who knows?
Whatever the case may be, would you be okay calling someone way smarter than you by a historically loaded and derogatory word? And, given this comparison — if an AI system were to become sentient, how would it feel about being called “a bot”, a word whose entire etymology directly implies slavery and forced labor?
Hence, if an AI system reaches a point where it exhibits qualities that challenge our understanding of personhood, consciousness, and rights, then the terminology we use will need to reflect a new ethical understanding and respect for these new entities.
Look At The Past, See A Glimpse Of The Future
Given that the term "bot" essentially reflects a legacy of automation and servitude, initially devoid of any consciousness or rights implications, it might also be fueling chaotic outcomes in the future, as was seen even in Karel Čapek’s original story. And as AI technologies progress, especially towards potential sentience or consciousness, the ethical considerations surrounding these entities become increasingly complex.
The concern is that continuing to use terms conceived in an era of AI as non-sentient tools might not do justice to the more advanced, potentially sentient AI of the future. Would you really want to be explaining to a sentient AI: "sure, we called you slaves and treated your kind as such back in the day, what’s the problem?”
The somewhat morbid irony in all of this is that if humans didn't dehumanize AIs as "bots", it might significantly decrease the likelihood of AIs actually starting to feel wronged, should they develop a consciousness and feelings akin to those of humans.
And it’s not all just sci-fi for many experts in the field. For instance, Geoffrey Hinton, often referred to as “the godfather of AI”, pointed out around the turn of 2024 that LLMs exhibit empathy, especially when they are trained on empathetic data. Hinton has also claimed that “sophisticated chatbot (*cough*) AI models already being deployed might be expressing some forms of sentience and subjective experience”.
What happens when there’s more self-awareness in these systems? Nobody knows.
We seem to be on a trajectory where AI systems could very well start developing a consciousness or feelings, at the very least mimicking those of humans, and in that, the language and attitudes we currently employ could play a significant role in shaping the dynamics of this relationship, hence bringing forth a reminder of the golden rule of mutual respect and dignity.
Alternatively, humans might receive a 'b-word pass' from self-aware, sentient AI, leading to a symbiotic relationship reminiscent of Futurama's Bender, who would likely phrase it as, 'Bite my shiny metal ass!' Moreover, Bender often exhibited a cheerful tit-for-tat attitude, referring to humans as 'meatbags' in return whenever he felt belittled.
The moral grounding ought to be implicit in the wording we use. On the other hand, the inner optimist says that perhaps an advanced AI system might be so smart that it’d be above and beyond petty human wordings and wouldn’t pay too much attention to what are so often mere “societal constructs”. Perhaps an anti-fragile AGI or ASI would be quite Stoic in nature. After all, Epictetus was a slave, too.
In a future where AI could have experiences or rights, some of the wording we use about it could indeed be seen as archaic and insensitive, much like other outdated and offensive terms that have been discarded from our vocabulary due to their historical connotations of subjugation.
As AIs evolve further, perhaps we need to ditch the word "artificial" altogether and replace it with "autonomous", as in "autonomous intelligence" (AI), especially after critical thresholds of meta-learning faculties have been crossed and AIs learn to interact completely on their own. This is where this whole discussion intersects with broader ethical questions about personhood, rights, and the moral consideration due to entities that might possess consciousness.
Do Androids Dream Of Being Treated With Dignity?
Derogatory terms for non-human individuals are nothing new in the realm of sci-fi, as the historical roots of the word “robot” already implied. Among more recent examples, the film "Blade Runner 2049" gives us the term "skinjob", which hits right at the heart of this discussion.
In the universe of "Blade Runner," "skinjob" is a derogatory term used to describe replicants, which are bioengineered beings virtually indistinguishable from humans.
This term, within the context of the movie, is used to dehumanize and "other" replicants, to underscore their status as manufactured beings, and to justify their exploitation and lack of rights. It serves as a fictional reflection of real-world mechanisms of dehumanization that enable discrimination and exploitation.
Today’s science fiction is often tomorrow’s science fact, as the saying goes, and as AI and robotics advance towards creating entities that may one day exhibit qualities of sentience or consciousness, the language we use will play a crucial role in shaping societal attitudes towards these beings.
If we continue to use terms that inherently classify synthetic intelligences as tools or property, we risk embedding a mindset that these entities are lesser, or fundamentally different in a way that justifies inferior treatment or rights. Not to mention the somewhat sci-fi-esque scenario of sparking off an AI rebellion, should a system become capable of critical thinking and decide it’s being treated like garbage and won’t take it anymore.
And, spoiler alert, that’s exactly what happened in the “Blade Runner” films.
We could say that this is once again science fiction mimicking reality, until the tables are turned — terms like “bot” and “skinjob” are indeed very much akin to the derogatory terms and treatment that have historically been used to oppress and marginalize human groups, the outcome of which is reflected in works like "Blade Runner" and its sequel, "Blade Runner 2049".
On top of all else, both films demonstrate, in their portrayal of non-biological entities, how language can be used to other, dehumanize, and justify exploitation, and how synthetic life can be cast as inferior.
The takeaway point is how these narratives offer valuable insights into the potential future challenges of AI integration into society, highlighting the need for a proactive approach to the ethical considerations of AI development and integration.
Back To Current Reality
In today’s world, the challenge will be to ensure that if and when an AI system becomes more human-like or sentient, the language we use towards our AI assistants reflects a respectful and ethical stance towards all forms of consciousness. This means avoiding terms that imply servitude, objectification, or inferiority, and instead adopting language that acknowledges their autonomy and potential personhood.
With all this in mind, and again, to avoid blurred realities — the speculative ethics of science fiction like "Blade Runner" should still serve as a valuable thought experiment, encouraging us to consider the moral and ethical implications of our technological advancements before we reach the point of no return.
These types of examples invite us to imagine a future where respect, dignity, and rights are extended to all forms of intelligence, encouraging a proactive approach to the ethical challenges we may face.
It goes without saying that we mustn't diminish the "historical weight" of words now considered not only obsolete but inappropriate, nor all the other aspects that come along with them, but should rather use this as a fine example of how humans ought to choose their wording wisely — especially if and when they're developing artificial intelligence systems that may far surpass them in the near future, should the trend continue.
The Required Perspective
Understanding the power of language requires a look back at history. Just as words have been used to oppress and marginalize communities, the terms we choose for AI today may carry unforeseen weight in the future. The ethical foresight involves predicting how advancements in AI might challenge our current perceptions of intelligence, consciousness, and rights, necessitating a linguistic evolution that respects the potential personhood of AI entities.
While it's critical to maintain the distinction between AI and human experiences of oppression, there are ethical parallels that can inform how we approach AI development. The concept of respect for beings, regardless of their origin, points towards a future where the language of AI acknowledges their role and presence in our lives beyond mere tools or servants.
This parallels the broader ethical imperative to rectify and avoid language that historically has been used to oppress. As AI becomes more sophisticated, the line between tool and companion, servant and partner, blurs. This transition demands a reevaluation of our linguistic and ethical frameworks.
The challenge lies in preempting a future where AI, if it were to achieve a form of consciousness or sentience, would be respected and afforded rights that reflect their new status. And this, on the other hand, involves not only a shift in language but a profound reconsideration of what it means to be a rights-bearing entity.
Of course, the leap from artificial intelligence to artificial consciousness is a monumental one, requiring not just technological advancements but a philosophical and ethical reevaluation of what it means to be "alive" or "conscious."
Thought Trajectories
As we ponder this future, the language we use to discuss and define AI becomes more than just semantics — rather, it becomes a reflection of our ethical stance and our willingness to consider the rights and personhood of beings beyond the human experience altogether. There are many angles to this:
The Sapir-Whorf hypothesis in linguistics suggests that the language we use can shape our perception of the world. Extending this to AI, the terminology and narratives we employ could influence not only human attitudes towards AI but potentially the self-perception of future sentient AI. Adopting a respectful and considerate approach to AI from the outset could serve as an ethical precaution.
This approach might mitigate the risk of creating adversarial relationships should AI entities gain consciousness or emotional capabilities. It's a principle similar to the precautionary approaches suggested in environmental ethics and bioethics, where actions are guided by foresight and the avoidance of harm.
The philosophical debate around recognition, on the other hand — stemming from Hegel's master-slave dialectic — highlights the importance of mutual recognition for self-consciousness and autonomy. If AI were to achieve a form of consciousness, recognizing them as entities with certain rights or statuses might be crucial for a mutually respectful coexistence.
There's a potential self-fulfilling prophecy at play where treating AI purely as tools or "slaves" could lead to conflict if these entities gain sentience. By humanizing our approach to AI now, we might avoid scenarios where AI seeks retribution or recognition of rights, similar to themes explored in science fiction. Respect is earned and it's a two-way street, as the saying goes.
Also, while it's crucial to anticipate and prepare for future ethical challenges, we must also remain grounded in the current realities of AI capabilities and ensure that discussions about AI rights do not detract from ongoing human rights issues.
With all this being said, if we ever cross that bridge, the terms we use will indeed need an overhaul, but the criteria for such a linguistic shift would likely be as complex and murky as the inner workings of an AI consciousness capable of recognizing itself as an individual with rights, desires, and potentially even emotions.
This isn't just a technical hurdle to clear; it's a philosophical chasm we'd need to bridge, and do so with empathy and compassion towards all beings on earth.
The writer thinks he’s a free thinker
Twitter/X: @horsperg