A little over a month ago, after weathering a sudden deluge of chatbot spam about how artificial intelligence (AI) can purportedly crank out content and comments for me here on Ditchwalk, I wrote a short post titled AI Is the New Crypto. In that post I pegged AI generally as the next investment craze, and sure enough over the intervening weeks there has been no end of announcements — including from the biggest tech companies in the world — about AI and chatbots revolutionizing the internet, and indeed life as we know it. Well maybe, but probably not for the better.
I am still receiving AI and chatbot spam on a daily basis, but the very nature of those slick pitches prompted me to think about the utility of conversational AI. Since time began, technological advances have been sold as a solution to a problem, with the operative word being sold. Where advances in science and technology are routinely held out as beneficial, however, it is a core premise of the information age that computers will not merely make life better, but save us from ourselves by augmenting or even replacing our feeble human minds. And yet again and again, the only real problem that computer technology seems to solve is inventing new ways to separate people from their money.
If you’re old enough you may remember that cell phones were originally sold on the basis of personal security, particularly for children — thus motivating parents to actively encourage kids to carry around a self-monitoring device that could also incidentally be used to convey targeted commercial messages and induce transactions. Well now here we are decades later, and how safe and secure does your modern smartphone make you feel? Or have you come to view that infernal, co-dependent and perpetually vulnerable technology as yet another threat arrayed against you?
When it comes to solving actual problems in our lives — including global problems like climate change — the truth is we already know what to do. Don’t have enough money in the bank? Slash your spending until things improve. Want to make the planet a better place to live? Stop using plastic. Want to increase your sense of well-being and decrease your exposure to online threats? Delete the apps on your smartphone.
The problem with all of these solutions, of course, is not merely that they exact a cost in terms of ease; they also expose the lie that advances in science and technology are inherently beneficial. Nuclear fission, anyone? DDT? Thalidomide? Guns? Knives?
Even terms like ‘information age’ and ‘artificial intelligence’ make you feel like you’re doing something futuristic when you use a computer, when in reality you’re probably checking out of collective society and indulging your inner narcissist. So other than the novelty of chatbot software, and perhaps some stunt value related to the Turing test, what actual human problem is being solved by the introduction of convincing conversational AI? Who needs more and better fake chats?
In terms of economics, chatbots will liberate companies and governments at all levels from having to hire and pay real human beings to answer questions, but that evolution has been underway for decades. (Try calling a living, breathing person at Google or the IRS.) As an end user I have long been frustrated by automated computerized interactions, so any improvement there will be of obvious benefit — particularly when those interactions are obligatory, like paying taxes.
As chatbots become increasingly indistinguishable from human beings, however, I think disclosure will be important, so end users know whether they are talking to a human being or a machine. Currently, large corporations seem to be open about whether their online chat is automated or live, but as chatbots become more sophisticated there may also be a profit-driven temptation to obscure, if not outright lie about, that distinction. It is in grappling with the fuller implications of deception that I also think we can now circle back and identify groups for whom chatbot AI actually solves a pressing problem.
The answer I came to — as demonstrated by the wave of chatbot spam swamping my inbox — is that chatbots are primarily of value to people who want to deceive human beings about who they are talking to. If you can create a chatbot or conversational AI which convinces people that the text is being generated by a real human being, not only can you put that deception to work twenty-four hours a day at little to no cost to you, but the very fact that you can conduct conversations via an automated third-party means you will also limit your own risk. And risk is a perpetual concern for those individuals who live in the shadows.
Take pedophiles, for example. Between cultural sanctions and outright criminal exposure, taking to a keyboard to attract and groom children for sexual exploitation is a risky proposition. With the newest generation of chatbots, however, pedophiles can create programs that will not only entice children but also improve over time — both at messaging and filtering — while simultaneously making it that much harder for law enforcement to track and identify abusers. (It is entirely possible that a pedophile chatbot is not even against the law in most countries, or is protected as software, free speech, or both.)
Chatbots will be equally useful to human traffickers, drug traffickers, and every kind of con artist you can imagine. From winnowing responses to the most vulnerable and susceptible respondents, to modifying interactions on the fly in order to appeal to unique individuals, artificial intelligence will make online exploitation and abuse cheaper and more effective than ever before, while keeping the actual creeps and criminals at a remove from those online conversations. Are you a sleazy businessperson looking to scam the elderly or people of diminished capacity out of their limited funds? All you have to do is deploy a chatbot which detects telltale linguistic indicators and uses slippery, leading verbiage to help you increase your success rate when you finally step in to close the deal. (Then again you may never have to. Couple chatbots with ransomware or virtual kidnappings and all you need is a server and a bank account in a country that doesn’t care if you break any laws beyond its own borders.)
Because the legal system inevitably lags behind any technological advance — at times with tragic consequences — one thing we could do right now would be to insist on full disclosure any time a chatbot is being used. Criminals won’t care, of course, but there is a gray area where otherwise legal businesses might profit handsomely from pretending that automated conversations actually come from real human beings. And in fact we already know that, because seven years ago the trashy dating site Ashley Madison got caught doing exactly that.
As gold diggers and gigolos the world over have known since the dawn of time, there is a lot of money to be made by making people feel special. Whether or not prostitution is illegal where you live, there are also escort services and matchmakers and dating apps playing on the same dynamics and emotional needs, and who’s to say what is and isn’t okay among consenting adults? (Proving once again that there is nothing new under the sun, fifty years ago this year the movie Westworld was released.)
Because I am not a criminal, I am not trying to generate a profit, and I have no interest in writing stunt posts about interactions with AI, I find the entire subject of chatbots meaningless to me as a person. At a societal level, however, it should be clear that chatbots which are capable of deceiving human beings represent an important milestone in the accelerating breakdown of the internet as a resource and social space. Like deep fakes, chatbots can be deployed across borders to destabilize entire nations, and because chatbots don’t require human participation they will never fatigue in their pursuits. The spamming of copy-and-paste social media messages will soon give way to individually tailored messages of equal or greater number, thus making content moderation by machine AI — or even prohibition by law — all but impossible.
Likewise, where it used to take committed swarms of socially maladjusted online delinquents to target individuals for harassment, including threats of violence which have driven people out of their homes and into hiding, that can now be accomplished with the push of a few buttons. From governmental or commercial persecution down to personal bullying using open-source tools, it will soon be possible to subject individuals to abuse with multiple instances of chatbots all leveling the same false claims or spreading the same abusive messages. And if those tools are aimed at kids, how are kids going to know what they’re facing?
Fortunes will certainly be made, but how chatbots and AI will improve the human experience escapes me. Despite endless claims that autonomous vehicles and drones would change the world, they have not, and any gains have been offset by new problems. In fact, driving AI doesn’t really work very well except in the most constrained environments, and even mega-companies like Amazon — which has every incentive to succeed — have been forced to pull back on their drone armies because they can’t clear regulatory hurdles and can’t stimulate demand. (Electric vehicles will change the world, but that revolution in motive power doesn’t inherently have anything to do with AI.)
It may well be that chatbots prove to be as interesting to talk to as real human beings, and if you find that bar too low we can even posit that chatbots will prove to be more interesting on average. But how much chatting does anyone really want to do, and to what end? When it comes to putting chatbots to work I think the benefits will be relatively narrow precisely because most people don’t want to spend any more time looking up information or filling out applications or conducting transactions than they have to. As for the ills that chatbots perpetrate, even if they are eventually regulated I do not see any limit. From widespread scams to tailored and targeted attacks, we’re all going to have to internalize the new reality that we can’t assume we’re talking to a human being in any online interaction, including interactions with people we know in real life.
— Mark Barrett