On Tuesday, only a day before Ashley Judd said she would start pressing charges against abusive Twitter users, James Poulos at the Daily Beast reminded us all of something called Block Bot. The application is designed to “automatically block the people added to its lists… discreetly blocking them on your Twitter account.” That is, once installed, you won’t hear from anyone who’s been nominated to one of the Block Bot’s lists.
There are three of those. The first, Level 1, is the “‘worst of the worst’ (as determined by the blockers), plus impersonators, stalkers and spammers.” Level 2 is all of those folks, plus additional people who are bad, but perhaps not the worst of the worst. And Level 3 includes both the first two lists, and adds “those who can be tedious and obnoxious.” (By the time you’ve applied Level 3 blocking, you might wonder whether there’s much point in being on Twitter at all.)
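The tiers, in other words, are cumulative: each level is a superset of the one below it. As a rough sketch of that structure only – the list names and members here are hypothetical, not Block Bot’s actual data or code – the logic looks something like:

```python
# Hypothetical sketch of cumulative, tiered block lists.
# Names and members are illustrative, not Block Bot's real data.
LEVEL_1 = {"worst_offender", "impersonator", "stalker", "spammer"}  # "worst of the worst"
LEVEL_2 = LEVEL_1 | {"bad_but_not_worst"}                           # Level 1, plus more
LEVEL_3 = LEVEL_2 | {"tedious_and_obnoxious"}                       # everything above

def blocked(user: str, level: int) -> bool:
    """Return True if `user` is blocked at the chosen subscription level."""
    tiers = {1: LEVEL_1, 2: LEVEL_2, 3: LEVEL_3}
    return user in tiers[level]

# A Level 1 subscriber still hears from the merely tedious;
# a Level 3 subscriber does not.
print(blocked("tedious_and_obnoxious", 1))  # False
print(blocked("tedious_and_obnoxious", 3))  # True
```

The point of the set unions is simply that subscribing to a higher level can only ever silence more people, never fewer.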
As Poulos contends, the existence of something like this highlights the fallacy at the root of many online social networks – that rather than bringing us closer together, things like Twitter, sometimes by their very instantaneousness, tend to push us further apart. Wouldn’t it be nice to get past all the trolls and have a nice conversation (again)?
“Instead of clinging to the unreasonable expectation that people impose the same self-restraint online that they still mostly do in real life, we need to accept the ‘net for what it is and accept a new modus operandi. The bots can help us, where we cannot help ourselves. It’s time to let them… humanity’s last hope for decent online discourse isn’t human,” Poulos concluded.
Hmm. Actually, when you put it that way…
Beyond the argument that it is, in fact, entirely reasonable to expect people to impose the same self-restraint online as they would elsewhere, there is the issue of the bot itself. At the end of the day, with Block Bot switched on, you are no longer seeing Twitter as you have chosen to see it; you are seeing what the bot wants you to see.
The Internet has a problem when it comes to choice.
Gradually, with each search, each site visited, each link clicked, we teach the Internet what we want. And the Internet responds by giving us more of the same. So, when we see Google results or even advertisements, it’s because an algorithm has been churning away, calculating what might appeal most to us. For the most part, the Internet is only as vast as we are selective. If we don’t bother to stray beyond our habitual interests or sites, the Internet will not necessarily show us anything new. We liked X so we’ll probably like Y. We choose so as to end up no longer choosing.
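That feedback loop can be sketched in miniature. The toy recommender below is entirely hypothetical – it is not any real search or ad algorithm – but it shows the shape of the mechanism: each click raises a topic’s score, so the ranking keeps surfacing more of the same.

```python
from collections import Counter

# Toy illustration of a preference feedback loop -- not any real
# search engine's or ad network's algorithm. Each click raises a
# topic's score, so the ranking serves "more of the same".
class ToyRecommender:
    def __init__(self, catalogue):
        self.catalogue = catalogue          # item -> topic
        self.clicks = Counter()             # topic -> click count

    def click(self, item):
        self.clicks[self.catalogue[item]] += 1

    def ranked(self):
        # Items whose topic we've clicked most float to the top.
        return sorted(self.catalogue,
                      key=lambda item: -self.clicks[self.catalogue[item]])

rec = ToyRecommender({"cat video": "pets", "news story": "politics",
                      "dog video": "pets"})
rec.click("cat video")                      # we liked X...
print(rec.ranked())                         # ...so another "pets" item now outranks the news
```

Nothing here ever shows us a topic we haven’t already clicked on – which is precisely the “we liked X so we’ll probably like Y” trap.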
Arguably, some comfort can be taken in knowing this. Chances are, your Google results won’t be too obscure – not past the first few pages, anyway (and who really clicks much further?). Searching the Internet is therefore rarely an unsettling experience; it’s a comfortable world, full of sites you’ve visited many times before and which feel all the more legitimate for it. The bots are in control, but so what? They give you what you expected to find, and the ads are for things you want to buy. You’re not seeing anything you don’t want to see.
But what if the bots make something you don’t want to see look like something you do?
Monday morning, the story out of the South by Southwest festival in Austin was about exactly this. Adweek told a tale of unrequited Tinder love – not because the swipe-right relationship didn’t go as predicted, but because the goals of the two users were so markedly different.
Ava, an ostensibly 25-year-old Tinder user, was apparently making connections with men in Austin, TX, striking up conversations, and asking them to follow her on Instagram. The catch? She was a bot – an advertisement for a new film premiering at the festival, a phoney user programmed to respond to men on Tinder with generic questions until ultimately ‘she’ could direct them to a trailer for the film on Instagram.
The story was a big hit online, perhaps because while most of us who engage with the Internet on a daily (hourly?) basis are aware of the bots, few have engaged with one so directly – or had one chat so convincingly back. It’s not an entirely new story, but such stories are still rare enough to grab attention as harbingers of a future we were not quite ready to see arrive so soon, for it seems to mean we need to rethink what bots are and what they might do.
For decades, when robots have been discussed in general terms, the conversation has tended to centre on productivity. We either look forward to the day robots take our jobs and leave us with more time for leisure, or we fear the moment the robots take over, leaving us with no livelihood and therefore no way to enjoy any free time we’ve been given. In both scenarios, however, there is eventually the assumption that we will witness the development of artificial intelligence and, perhaps one day, the singularity.
That this point seems to be arriving rather more quickly than we might have expected, without fanfare, and as innocuously as via a clever derivative of what is normally a pornography marketing tool, is arresting. And yet, here we are.
At the New York Review of Books this week, Sue Halpern mulled over our future-present while discussing Nicholas Carr’s new book, The Glass Cage. Carr, she reports, is one of the few cautionary voices “urging us to take stock, especially, of the effects of automation on our very humaneness – what makes us who we are as individuals – and on our humanity – what makes us who we are in aggregate.” Halpern then wonders about the warning issued by Elon Musk, Stephen Hawking, and others that AI development must focus on “maximizing the societal benefit… our AI systems must do what we want them to do.”
Halpern points out this collective “we” is not the public, but engineers.
“The authors acknowledge that ‘aligning the values of powerful AI systems with our own values and preferences [may be] difficult’, though this might be solved by building ‘systems that can learn or acquire values at run-time.’ However well-meaning, they fail to say what values, or whose, or to recognize that most values are not universal but, rather, culturally and socially constructed, subjective, and inherently biased,” Halpern writes. “We, the people, are on our own here – though if the AI developers have their way, not for long.”
Perhaps Judd is right to confront her tormentors the good old-fashioned way.