When I hear about bots, or software designed to act like humans on social networks, the first thing that pops into my mind is automated accounts spewing spam across the Internet.
Not all bots are bad, though. In fact, some Twitter bots are circulating valuable news and information, according to a forthcoming research article by doctoral candidate Tetyana Lokot and assistant professor Nicholas Diakopoulos, both of the Philip Merrill College of Journalism at the University of Maryland.
One bot, for example, will tweet you the forecast for a particular geographic area if you ask for it. Another will suggest changes to headlines that use passive voice. Yet another tweets about Shakespeare-related news.
In their ambitious study, Lokot and Diakopoulos analyzed 238 news bots on Twitter. They hand-picked 60 bots using Google and Twitter searches, then developed an algorithm that identified 178 more. The authors wanted to know how the bots worked along four dimensions:
- Inputs and sources: what content the bot uses
- Outputs: what sort of tweets the bot produces
- Algorithm: how the bot translates the input into the output
- Intent/function: what service the bot provides
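The four dimensions above map naturally onto the stages of a simple bot pipeline. As a rough illustration only — the function names and sample data below are my own invention, not drawn from the paper or any real bot — a minimal sketch in Python might look like this:

```python
# Illustrative sketch of a news bot organized around the paper's four
# dimensions. All names and sample data are assumptions for this example.

def fetch_items(source):
    """Input/source: pull raw items. Here a static list stands in for
    an RSS feed or API call."""
    return source

def select_and_format(items, topic):
    """Algorithm: filter the input down to one topic and shape each
    item into tweet-sized text."""
    picked = [i for i in items if topic.lower() in i["title"].lower()]
    return [f"{i['title']} {i['url']}"[:280] for i in picked]

def publish(tweets):
    """Output: a real bot would call the Twitter API here; we just
    return the tweets. Intent/function: inform a niche audience."""
    return tweets

if __name__ == "__main__":
    sample = [
        {"title": "Shakespeare folio found in attic", "url": "http://example.com/1"},
        {"title": "Stocks rally on jobs report", "url": "http://example.com/2"},
    ]
    for t in publish(select_and_format(fetch_items(sample), "shakespeare")):
        print(t)
```

The niche bots described in the paper differ mostly in the middle step: what they filter on and how they transform the input.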
The findings suggest that bots can be a useful way to curate and share information, and the results also point to some significant room for improvement.
For 45 percent of the bots analyzed, for example, Lokot and Diakopoulos couldn’t determine what sources the bots used to generate their content. That lack of transparency about sourcing raises ethical questions for bots whose purpose is disseminating news.
For newsrooms, bots represent a potential way to engage audiences and increase content distribution without hand-curating content. They also provide a way to disseminate valuable information to niche groups that wouldn’t be reached by a general news product. The diversity of bots described by Lokot and Diakopoulos should give newsrooms plenty of food for thought. Via email, Lokot gave me a few recommendations on creating Twitter bots (see here, here, and here).
Inputs and sources
When analyzing where bots retrieved data, Lokot and Diakopoulos found that most relied on either a single website or multiple websites. @TCEurope, for example, posts articles from TechCrunch Europe. Other, less common data sources included databases and standalone tweets: @stopandfrisk draws on the NYCLU’s database of stop-and-frisk incidents, and @clearcongress uses Congressional tweets.
The bots that Lokot and Diakopoulos gathered by searching on Google and Twitter tended to be commentary bots, or bots that add content to the original data source.
One example highlighted was @DrunkBuzzfeed, a bot that splices parts of different BuzzFeed headlines together for entertainment:
This Cringeworthy Video Crucial Things To Consider The Hashtag Symbol With Your Hands
— Drunk Buzzfeed (@DrunkBuzzfeed) April 13, 2015
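The remixing behind a commentary bot like this can be as simple as splicing the front of one headline onto the back of another. The paper doesn’t describe @DrunkBuzzfeed’s actual method, so the following is purely a hypothetical sketch of one way such a remix could work:

```python
import random

def remix(headline_a, headline_b, rng=random):
    """Splice the opening words of one headline onto the closing words
    of another -- a crude stand-in for how a commentary bot might
    recombine source material. Not the real bot's algorithm."""
    a_words = headline_a.split()
    b_words = headline_b.split()
    # Cut each headline at a random interior point, keeping at least
    # one word from each side.
    cut_a = rng.randint(1, max(1, len(a_words) - 1))
    cut_b = rng.randint(1, max(1, len(b_words) - 1))
    return " ".join(a_words[:cut_a] + b_words[cut_b:])

print(remix("This Cringeworthy Video Will Shock You",
            "Crucial Things To Consider Before Buying A Home"))
```

Every output word comes from one of the two source headlines, which is why the results read as recognizably BuzzFeed-flavored nonsense.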
The other bots analyzed by the authors – those gathered using the automated strategy – were more likely to be niche or topical bots. Niche bots catered to a beat not generally covered by the news media (e.g. Shakespeare) and topical bots focused on a specific subject area (e.g. politics or business).
Many of the bots that Lokot and Diakopoulos analyzed aggregated information across multiple sources, such as tweeting news stories from a variety of sources that mentioned a particular geographic area. Other bots rebroadcasted information from one source, such as Reddit, to Twitter.
Lokot and Diakopoulos found fewer examples of bots that analyzed data or reacted to others. @Treasury_io was an interesting exception of a bot engaged in data analysis; the bot “processes data from a US Treasury database and turns useful bits of the daily reports into tweets.” The previously mentioned weather bot is an example of reacting to others – the bot replies to requests for forecast information.
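A reactive bot of this kind boils down to parsing each mention for a request and composing a reply. Here is a minimal, hypothetical sketch of that step — the mention format, function names, and stubbed forecast source are invented for illustration; a real bot would fetch mentions and post replies through the Twitter API:

```python
import re

def parse_forecast_request(mention_text):
    """Extract a requested location from a mention such as
    '@weatherbot forecast for Baltimore'; return None if the
    mention isn't a forecast request."""
    m = re.search(r"forecast (?:for|in) ([\w .'-]+)", mention_text,
                  re.IGNORECASE)
    return m.group(1).strip() if m else None

def compose_reply(user, mention_text, forecast_lookup):
    """Build a reply tweet. forecast_lookup stands in for a call to
    a real weather API."""
    place = parse_forecast_request(mention_text)
    if place is None:
        return None
    return f"@{user} Forecast for {place}: {forecast_lookup(place)}"

# A stubbed forecast source in place of a real weather API.
fake_weather = lambda place: "sunny, high of 72"
print(compose_reply("newsfan", "@weatherbot forecast for Baltimore",
                    fake_weather))
# prints: @newsfan Forecast for Baltimore: sunny, high of 72
```

Returning None for non-requests matters in practice: a reply bot that responds to every mention, relevant or not, quickly reads as spam.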
Overwhelmingly, the purpose of the examined bots was to inform the public. Other functions that appeared less frequently included bots aiming to increase accountability, such as one that shared Congressional tweets mentioning firearms, and bots that aimed to surface breaking news content.
Lokot and Diakopoulos are careful to caution that their analysis does not cover all bots, because it’s not possible to survey the universe of news bots. Rather, their work serves as a snapshot of the variety of different types of bots from their sample. The strength of the project is the breadth of possibilities covered. News organizations will find the overview fertile ground for brainstorming on how they might start, or continue, experimenting with bots.
Scholars and news organizations interested in continuing this line of research could collaborate to analyze:
- How does the public perceive news bots? Do they recognize bots as automated content providers? Are they skeptical of content shared by a news bot? These questions could be answered through survey and experimental work targeting Twitter users who may encounter bots. Some work has begun examining these questions: Clerwall (2014), for example, suggests that “the software-generated content is perceived as descriptive and boring, it is also considered to be objective.”
- How do news bots affect the bottom line? Does having several niche news bots increase traffic, or return visits, to a news organization’s website? By partnering with news organizations that are introducing a news bot, scholars could analyze any changes in their traffic figures over time.
- How does bot content differ from what a journalist would select? By comparing bot-generated content streams to hand-curated content, we could better understand the strengths and limitations of bot-curated news.
Tetyana Lokot & Nicholas Diakopoulos. (2015). News bots: Automating news and information dissemination on Twitter. Digital Journalism. doi: 10.1080/21670811.2015.1081822