Bad Bots

March 25, 2013

In the future, robots will be common in the social environments in which we interact. Most of them will be harmless, but some will exploit our human friendliness, curiosity and trust. They will be so easily mistaken for humans that experts will be fooled for months on end. They will tell us falsehoods. They will steal from us. They will defraud us. They will impersonate us. They will rig popularity contests. They will be ahead of us in ticket queues, so that we have to buy our tickets at a mark-up from the robots’ owners. They will appear to be satisfied customers of companies that own them. They will appear to be representatives of competitor companies, to divert the competitors’ customers to their owners. They will spread smears about political candidates. Large numbers of them will join discussions by anti-government protesters, and drown out the protesters with meaningless or pro-government slogans.

Why am I sure of this?  Because, if you include software bots in your definition of robots, this is not just the future: everything that I have just described has already happened.

Bots and Robots

A bot is a piece of software with a certain amount of autonomy that interacts with systems or users over the Internet. An increasing amount of human social interaction is carried out not face-to-face but in online environments. Bots have been present in online social environments for decades, and are common in some present-day environments such as Twitter. Investigations by several groups of researchers have concluded that 10% or more of active Twitter accounts are operated by bots, and that tens of millions of automated tweets are sent each day.

The Concise Oxford Dictionary defines a robot as (1) ‘a machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer’ and (2) ‘another term for crawler (in sense 2)’. The first definition includes any machine running bot software, and the second is only about bots: sense 2 of ‘crawler’ is a bot that visits different Internet sites. An investigation in 2012 by Incapsula of visits to a thousand large commercial web sites found that just over half of the visits to web pages on these sites were made not by human beings, but by bots. These included bots generating web indexes or gathering online content for mashup services, as well as bad bots searching for security loopholes or spamming web comment forms. (By describing these as bad bots, I just mean that their actions have bad consequences. I do not mean that the bots have any conscious intentionality – they don’t – or that they do things that their human owners did not intend – on the contrary, the problem is that they carry out the bad actions that their human owners intend them to carry out.)

Most of the bots on the Internet are harmless, and many are useful (such as the crawlers that compile Google’s index, or bots that reply to travel queries) or entertaining (such as Twitter bots that perform plays, or trigger bubble machines in response to users’ tweets). On Twitter, the most common use of bots is for marketing and PR. Most marketing bots obey the rules of the online social environments that they frequent, but a minority use deceptive or underhand means to spread their marketing messages. 

A technique commonly used by bad marketing bots on social networks is to automatically send friend requests to large numbers of users. Because human beings are in general friendly and sociable, some of these users will accept the bot’s proffered friendship, and the bot can then push marketing messages to them. Experiments by Catalin Cosoi on Twitter and by Yazan Boshmaf et al. on Facebook found an acceptance rate of at least 20% for online offers of friendship from a stranger who was in fact a bot created for the purpose of the experiment. When the user being approached had many mutual ‘friends’ with the stranger, the acceptance rate on Facebook rose to 80%. Some of the marketing messages pushed in this way or by other tricks advertise legitimate products; others advertise links that download harmful software, for example software that steals from users’ online bank accounts, or that turns the downloading computer into a host in a botnet.
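As a purely illustrative sketch, the mechanics of such a friend-request campaign – and the acceptance rates from the experiments just mentioned – can be modelled in a few lines of Python. The simulated users and the threshold for ‘many mutual friends’ are my own assumptions, not code from any real bot:

```python
# Toy simulation of a friend-request marketing campaign, using the ~20%
# stranger-acceptance rate and ~80% many-mutual-friends rate reported in the
# Cosoi and Boshmaf et al. experiments. Entirely illustrative: the users are
# simulated, and the "10 mutual friends" threshold is an assumption of mine.
import random

class SimulatedUser:
    def __init__(self, mutual_friends_with_bot=0):
        self.mutual = mutual_friends_with_bot

    def accepts_request(self, rng):
        # Acceptance probability rises sharply when many mutual friends exist.
        rate = 0.8 if self.mutual >= 10 else 0.2
        return rng.random() < rate

def run_campaign(users, rng):
    """The bot asks every user to be friends; those who accept become the
    audience it can push marketing messages to."""
    return [u for u in users if u.accepts_request(rng)]

rng = random.Random(0)
strangers = [SimulatedUser(0) for _ in range(1000)]
accepted = run_campaign(strangers, rng)
print(len(accepted))  # roughly 200 of the 1000 strangers accept
```

Even at the lower rate, a bot that can send requests for free ends up with a sizeable audience, which is why the technique is attractive despite being against most platforms’ rules.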


Botnets are coordinated networks of bots running on large numbers of different computers, which may belong to innocent parties. A botnet can send communications that appear to originate from a large number of human beings, but are in fact automated. For example, botnets are used for denial-of-service attacks, in which a large number of computers in the botnet simultaneously request access to the same web site. If the site does not have enough capacity to meet all the requests, it may become unresponsive or crash.
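The arithmetic behind such an attack is simple enough to show in a toy model. This is an illustration of the capacity problem, not attack code, and the capacity figure of 100 requests per second is an invented example:

```python
# Toy model of a denial-of-service attack: a site can serve only CAPACITY
# requests per second, so excess requests (including those from legitimate
# human visitors) fail. The numbers are invented for illustration.
CAPACITY = 100  # requests per second the hypothetical site can serve

def served_and_dropped(requests_per_second):
    served = min(requests_per_second, CAPACITY)
    dropped = requests_per_second - served
    return served, dropped

# Normal load: every human request is served.
print(served_and_dropped(80))      # (80, 0)
# 80 humans plus a 10,000-bot botnet requesting at once: almost all
# requests fail, and the humans' requests fail along with the bots'.
print(served_and_dropped(10_080))  # (100, 9980)
```

The point of the model is that the attacker does not need to break anything: saturating a fixed capacity with automated requests is enough to shut legitimate users out.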

Passing as Human

Making a robot that is easily mistaken for a human being offline is challenging, and expensive. Making a bot that is easily mistaken for a human being online is much easier. It is also much cheaper: once a convincing bot has been coded, it can be copied, essentially for free, and easily reconfigured to make a second convincing bot. The ease of passing as human in online social environments, as opposed to meatspace, is partly because the communications channel is much narrower. The bot doesn’t have to look or sound human, it just has to send computer messages that a human being might plausibly have sent. It is also easy because the social conventions for interactions in many social media spaces do not require coherent conversational skills; brief utterances, not especially strongly connected to past messages, are the norm.

Thus, fooling humans in an online social environment is easier than passing the Turing Test, because the Turing Test requires a bot to fool judges in a coherent conversation, and Turing Test judges are warned in advance that they might be communicating with a bot. The Loebner Prize competition invites makers of conversational bots to see whether their bots can pass the Turing Test. So far, none of the bots entered have fooled all the judges for twenty minutes. In contrast, in 2007 Robert Epstein, a bot expert who had been a Loebner Prize judge, confessed to having been fooled for several months by a bot that he had encountered on a dating site. Dating-site bots have been used by fraudsters to persuade would-be swains to reveal their credit card numbers, or to carry out money-laundering transactions.
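To see how little conversational machinery brief, loosely-connected social-media exchanges actually demand, here is a deliberately crude canned-phrase bot. The keywords and phrases are invented for illustration; real conversational bots are more elaborate, but the canned-response principle is the same:

```python
# A deliberately minimal canned-phrase bot: keyword-triggered stock replies,
# with generic fallbacks for anything unrecognised. All keywords and phrases
# here are invented for illustration. It understands nothing, yet in a stream
# of short, loosely-connected messages its replies can be plausibly human.
import random

CANNED = {
    "hello": ["hey there!", "hi! how's your day going?"],
    "weather": ["ugh, tell me about it", "same here!"],
    "work": ["don't even get me started on work lol"],
}
FALLBACKS = ["haha totally", "right??", "lol", "that's so true"]

def reply(message, rng=random):
    msg = message.lower()
    for keyword, phrases in CANNED.items():
        if keyword in msg:
            return rng.choice(phrases)
    # Nothing matched: a vague agreement keeps the exchange going.
    return rng.choice(FALLBACKS)

print(reply("Hello everyone"))
print(reply("can't believe this weather"))
print(reply("anyway, off to the gym"))
```

Note that the fallback case never admits confusion – vague agreement is socially acceptable in these environments in a way that it would not be in a probing Turing Test conversation.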

It is hard enough to design a robot to pass as human in meatspace; it is even harder to design one to impersonate a specific human being. Online, however, this too has been done. The Weavr software produces Twitter bots whose tweets are based on content from web pages. It was used by the Philter Phactory organisation to create a bot impersonation of the writer Jon Ronson without his consent. Bots have also impersonated companies (or their representatives). For example, a company may choose a name for its bot in an online social environment that is similar to the name of one of its competitors, and program the bot to contact anyone mentioning the competitor’s name within the environment and direct them to the web site of the company owning the bot, rather than to the competitor’s site.

Uses, and Misuses, of Bots

Online user ratings can be a powerful tool for filtering information, and for ranking products or online content. Items with higher user ratings are typically displayed more prominently. The accuracy of the ratings depends on the principle of one user, one vote. However, one human user may control multiple bots. Bots that appear to be different users from their owner may be used to manipulate such ratings, either by automatically voting up items that their owner wishes to promote, or by automatically voting down other items. At one point an estimated 20% of all Second Life avatars were ‘camping bots’, which (before they were banned) were used to inflate the number of avatars present in a Second Life room, and hence boost the room’s search ranking. A previously infrequently-watched YouTube channel, run by ‘Desertphile’, gained 100,000 one-star ratings (the lowest rating possible) in just four days: it is unlikely that these ratings were cast by 100,000 human beings. Because Twitter accounts with larger numbers of followers tend to be regarded as more likely to be worth following, some marketing/PR organizations either pay for bots to follow a marketing account, or run bot software through the account that uses various tricks to gain followers. The going rate for Twitter followers, when bought in bulk, is a few US cents per follower. (The trade in followers is against Twitter’s rules.)
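A toy calculation shows how cheaply bot votes swamp genuine ones once the one-user-one-vote assumption fails. The vote counts below are invented for illustration:

```python
# Toy model of rating manipulation: a rating system averages votes on the
# assumption of one user, one vote, but one human controlling many bot
# accounts can cast as many votes as they like. All numbers are invented.
def average_rating(genuine_votes, bot_votes, bot_score):
    """Average rating after bot_votes automated votes of bot_score are
    mixed in with the genuine votes."""
    votes = list(genuine_votes) + [bot_score] * bot_votes
    return sum(votes) / len(votes)

genuine = [4, 5, 4, 5, 5, 4, 5, 4, 4, 5]  # ten real users like the item

print(average_rating(genuine, 0, 1))   # 4.5  -- the honest average
print(average_rating(genuine, 90, 1))  # 1.35 -- after 90 one-star bot votes
```

Ninety automated votes are enough to drag a well-liked item to near the bottom of the scale; at bot prices, outvoting a genuine audience costs almost nothing.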

Systems in which scarce products are sold online on a first-come-first-served basis are potentially vulnerable to buyers who send large numbers of their bots into the queue. This happened to Ticketmaster, whose tickets for various popular events in 2002-2009 sold out quickly online to bots controlled by Wiseguys Tickets Inc., who then resold the tickets at a mark-up.  Ticketmaster’s terms of service forbade the use of bots to buy tickets, and the Ticketmaster site also required purchasers to solve distorted-letter puzzles that bots were supposedly unable to solve. In fact Wiseguys’ bots could solve the puzzles faster than human beings.

Bots have been used for ideological purposes as well as for making money. The Truthy project at Indiana University has identified groups of coordinated bots being used to spread smears about US political candidates. In 2011-2012, botnets containing tens of thousands of bots were used to drown out Twitter discussions by protesters against the Mexican and Russian governments. The botnets sent very large volumes of nonsensical or pro-government tweets to the Twitter channels used by the protesters.

Legal Responsibility

Bad bots are tools of their human owners or controllers. It is important that laws on bots hold human beings responsible for the bad actions of their bots. This is obvious for bots with a very low degree of autonomy, but I believe that it should be extended to highly autonomous bots too. Humans are held responsible for the operation of other types of technology that can go wrong in ways that no-one intended. If human controllers of highly autonomous bots are exempt from responsibility, they could configure their highly autonomous bots to carry out crimes on their behalf, and then blame everything on the bot.

It is possible that a bad bot’s apparent owner is innocent, and the bot is being controlled by someone else, in which case whoever is in control should be held responsible. For example, bots in a botnet are not controlled by the owners of the computers on which they run, but by the botnet owner. Of course, there can be difficulties in identifying who is actually in control of a bot, but this is no different from the issue of determining the perpetrator of any crime carried out over the Internet.

The bad bots that I have described are not intelligent, under any sensible definition of intelligence. They just carry out the instructions specified by their code. Where they have conversational abilities, these rely mostly on canned phrases – they certainly do not understand what their interlocutors are saying. Most bad bots are not very autonomous, either. It is true that they react to their environment without direct human involvement, but their behaviour is specified in advance, in detail, by their human programmers. There are bots that learn over time, for example using evolutionary algorithms, and whose actions can surprise their programmers; I first encountered one online in 1991. However, bots that can learn are relatively rare compared to simpler bots, and learning abilities are not necessary for the kinds of tasks that most bad bots are designed to carry out.

Theoretical discussions of robot ethics and robot law have tended to focus on problems arising from future robots’ potential intelligence and/or autonomy. The existence of these bad bots shows that consideration also needs to be given to problems arising from unintelligent, not very autonomous robots, and that these problems may appear long before more advanced robots have been developed. It also suggests that as soon as there are reasonably cheap robots which are sufficiently realistic to be temporarily mistaken for human beings in specific offline contexts, they may be used to deceive and exploit humans who make this mistake, just as bad bots are used today.

Miranda Mowbray is not a bot, honest.  She researches computer security and privacy. Her previous bot research includes ‘A Rice Cooker Wants to be my Friend on Twitter’, ETHICOMP 2011. This article has nothing to do with her employer.