As people struggle with the reality that the services they provide in return for a living wage are not essential to the survival of their society, other, non-human intelligences have been working. I always thought that the deluge of artificial intelligence (AI) and its attack on human error would first destroy the supposedly complicated and esteemed jobs in society. For instance, I thought machines would make better accountants, engineers, medical practitioners and lawyers, because “human error” is perhaps less appreciated in such roles.
Personal bias deluded me into believing that instantaneous human creativity was the only thing that could not be emulated by AI… until Sophia the robot was granted citizenship by Saudi Arabia and thrown into the marketing industry as what appears to be an influencer of sorts… I wondered if she would eventually dabble in a little creative copywriting, and so down the rabbit hole I went. Watching YouTube videos, I was first bombarded with Grammarly ads promising to improve my writing… They got me by saying, “this sentence is grammatically correct, but wordy”… Could this be the AI I had come in search of, algorithms finding me, targeting me, the Google AI telling the YouTube AI, “…Struggling writer alert, advertise that other writing AI to him…”? A quick search revealed that the company, Grammarly, is selling a product “powered by an advanced system that combines rules, patterns, and artificial intelligence techniques like machine learning, deep learning, and natural language processing to improve your writing”. I’ll declare now, I didn’t subscribe, perhaps as a protest against language-generating technology, its ghost-writing tendencies, and my feeling that it is an attempt to mess with a fundamental of being human… “our” language. So prepare yourself for a sometimes grammatically correct, wordy and imperfectly human analysis.
While the celebrity writers and journalists seem to be on hiatus, waiting for commissions from a dying print industry, the doers have been attempting to perfect an AI-based language-generating technology sophisticated enough to write, read and reason at a skill level indistinguishable from a human being (you should be sceptical at this point, but read on). That technology will probably go to market by the end of this year, 2020, but reviewing the recent flawless improvement of the “Translate” functions on social media, I think the tech has already been deployed online. Absolutely fascinating, I must say. What will be the future of journalism, a human art form that once had a value per word, and whose purpose also involved the responsibility of informing the public on the objective truths of what happened or what is happening? Writers, I’ve long suspected, were just tools in the continued commercialisation of truths that are scandalous, fear-inducing and decadent… with little regard for personality and, oftentimes, lacking the valuable nuance to frame information in ways that would benefit “the people”. And so animate tools they remain, subject only to the will of the masters of such tools, to be used against the interests of the people. Almost machine-like, the writers churn out articles framed towards an end that is not always the objective truth, and frighteningly so, in a time when social media users cannot discern between the ramblings of a bot and a real, credible personality (whatever that is for now).
The questions about whether non-human creations can think, or display some intelligence distinct from or similar to that of humans, have been with thinkers of the Western world since before Descartes, the French philosopher, mathematician and scientist whose writings date from the 1600s. Following the return of the crusaders from the Arab world with all sorts of knowledge and creations from the golden age said to have lasted from the 8th century to the 13th century, the Western world was introduced to the concept of the humanoid automaton. An automaton is a moving mechanical device made in imitation of a human being, which performs a range of functions according to a predetermined set of coded instructions. These machines could display a kind of physical intelligence, observed in their “self-moving”, but could not necessarily be said to think. It can be argued that this was an artificial intelligence, though of course very different from what machine thinking is about today. Descartes, in the Meditations, wonders through the meditator whether “beneath the cloaks and gowns he sees outside his window there are humans or automata”. In the 20th century, the English mathematician, logician and computer scientist Alan Turing became fascinated with machine intelligence and the possibility of that intelligence being indistinguishable from human intelligence, and he developed a test.
“The Turing test, developed in 1950, is a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine’s ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test results do not depend on the machine’s ability to give correct answers to questions, only how closely its answers resemble those a human would give.”
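The structure of the quoted protocol can be sketched in a few lines of code. This is a hypothetical illustration only, with stand-in reply functions playing the human and the machine and a coin flip hiding them behind text channels; it is not a real chatbot, just the shape of the game.

```python
import random

# A toy sketch of the imitation game's structure (illustrative only):
# the evaluator exchanges text with two hidden partners, one human and
# one machine, and must guess which channel carries the human.

def imitation_game(evaluator, human_reply, machine_reply, questions):
    # Hide the partners behind randomly assigned channels "A" and "B".
    partners = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        partners = {"A": machine_reply, "B": human_reply}
    # Text-only conversation: each entry is (question, reply on A, reply on B).
    transcript = [(q, partners["A"](q), partners["B"](q)) for q in questions]
    guess = evaluator(transcript)  # evaluator names the channel it thinks is human
    return partners[guess] is human_reply  # True only if the evaluator found the human

# Stand-in partners: a "human" who echoes and a "machine" that shouts.
human = lambda q: q
machine = lambda q: q.upper()
random.seed(0)  # fixed seed so the hidden channel assignment is repeatable
found_human = imitation_game(lambda t: "A", human, machine, ["how are you?"])
```

If the evaluator cannot do better than chance at returning `True` over many rounds, the machine has, by Turing's criterion, passed.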
It is safe to note that back in 2014, a computer program called Eugene Goostman, which at the time simulated a 13-year-old Ukrainian boy, is said to have passed the Turing test at an event organised by the University of Reading, England. Humans could not distinguish whether or not an artificial intelligence was responsible for the responses that were given. A year later, in 2015, a “cheating/dating” site, Ashley Madison, was hacked, and it was revealed that the company had created a small army of bots to trick its users and inflate its numbers prior to the hack. Investigations showed 70,000 female bots were programmed to send male users fake messages, essentially working as a sales team. Most of the men could not recognise that they were interacting not with a human but with a machine intelligence… the Turing test, going live on the internet. There are many implications to these developments, implications that have been brewing for the last few years, revolving around the very structure of reality and how it is constructed in the minds of the people. Propaganda, fake news, and the ability to structure text designed to deceive, targeting certain groups in an effort to influence their decision-making, are all very real and very now.
OpenAI is a startup co-founded by Elon Musk, who in 2018 warned humans about the very real dangers that may be looming if AI is not regulated and “democratised”, as he put it. The company is devoted to ensuring artificial general intelligence is safe for humanity, they say. In 2019, OpenAI announced it had created a “neural network for natural language processing” called GPT-2. In an almost apocalypse-averse decision, OpenAI chose not to release the language generator to the public. Among the reasons cited was that the tool could produce text realistic enough that it was, in some cases, hard to distinguish from human writing. Its creators worried GPT-2 could be “… appropriated as an easy way for bad actors to crank out lots of fake news or propaganda…”
What is it about fake news that we fear? Does it have to do with an inherent, very human observation that stories have a way of shaping reality, whether they are true or… not? “Fake news” has certainly become a widespread problem, for both the public and the people in power. For the public, because making informed decisions requires information that is trustworthy and beneficial. Questions about the source of the information, its credibility and its usefulness for one’s life should become the first questions while rowing your boat down the stream of information that 21st-century living has become. Fake news, or what can be considered fake news, is now illegal under the Disaster Management Act, which in a sense can be restricting, almost bordering on censorship, for inquisitive writers and thinkers who desperately need to interrogate possible solutions to a health crisis that is affecting more people as it increasingly infects them. While we remain uncertain through Covid-19, the celebrity entrepreneur types and their minions have been improving the language-generating technology, despite the potential risks. OpenAI announced late last month that GPT-2’s successor, GPT-3, is complete.
A paper published by OpenAI researchers describes GPT-3 as an autoregressive language model with 175 billion parameters. “Parameter” refers to an attribute a machine learning model learns from its training data. A statistical model is autoregressive if it predicts future values based on past values; in this sense I understand it as the intelligence to utilise and predict diction, word choice, grammar and colloquial phrases as a human would. Now the question would be: what kind of human intelligence would be simulated? A 13-year-old Ukrainian boy, a forty-something African female professor, a Chinese sage master of the Tao, a person with really bad grammar but great natural reasoning intelligence, the CEO of a multinational company? A machine is not bound by the physical limitations that humans must endure… and so a machine can be all these things, limited only by the parameters programmed by the humans.
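The idea of autoregressive prediction can be illustrated with something far smaller than GPT-3. The toy below is hypothetical and in no way OpenAI’s method: it “trains” a bigram model whose only parameters are word-pair counts, then generates text by repeatedly predicting the most likely next word from the previous one. GPT-3 does the same kind of next-token prediction, only conditioned on far longer contexts and with 175 billion learned parameters.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    # The "parameters" of this toy model: counts of which word follows which.
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=5):
    # Autoregressive loop: each new word is predicted from the one before it.
    out = [start]
    for _ in range(length):
        nxt = counts.get(out[-1])
        if not nxt:
            break  # no observed continuation for this word
        out.append(nxt.most_common(1)[0][0])  # greedy: most likely next word
    return " ".join(out)

corpus = "the machine writes and the machine reads and the machine reasons"
model = train_bigram(corpus)
sample = generate(model, "the", length=3)
```

Scaled up from one previous word to thousands of previous tokens, and from counts to learned weights, this predict-the-next-word loop is what lets such a model reproduce diction, grammar and colloquial phrasing.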
The OpenAI team notes that GPT-3 performed well when tasked with translation, answering questions, and doing “reading comprehension-type exercises that required filling in the blanks where words had been removed”. They also say the model was able to do “on-the-fly reasoning,” and that it generated sample news articles 200 to 500 words long that were hard to tell apart from articles written by people. And so I ask: what value will journalism have in the future, when machine intelligence can and will be used to reason away our reality, and, depending on who is in power, to reason away our oppression?
The authors and researchers acknowledge that GPT-3 could be misused in several ways, including to generate misinformation and spam, to enable phishing, to abuse legal and governmental processes, and even to produce fake academic essays; all of these are already happening. It has been making news that those more proficient at writing have offered their services to construct academic papers for a fee; will there still be a need for them when AI will probably be cheaper and more convenient to access in the future? The human spirit, and whatever semblance is left in it to create art, poetry and essays, will be a valuable asset in a future devoid of human imperfection. But my vanities as a would-be celebrity writer have stirred curiosity as to how this language-generating technology could have enhanced this reading experience for you.
A fear of the working class since the first industrial revolution has been… “the machines are taking our jobs”, and with that, taking away the security of having an income, and with that, the chance at feeling like a celebrity. This exposes us to ourselves: valuing security over freedom, envious of progress but without the means to save our imperfect livelihoods from human error. Philosophically and strategically, it appears the human error is the very act of teaching language to an artificial intelligence that has the potential to be smarter than the average human… and since we are living in “unprecedented times”, the next “logical” thing to be sold to us is an upgrade: artificial, but intelligent. Enter the transhumanists.