Tuesday, July 6, 2021

Study shows AI-generated fake reports fool experts

It doesn’t take a human mind to produce misinformation convincing enough to fool experts in such critical fields as cybersecurity. iLexx/iStock via Getty Images
Priyanka Ranade, University of Maryland, Baltimore County; Anupam Joshi, University of Maryland, Baltimore County, and Tim Finin, University of Maryland, Baltimore County

Takeaways

· AIs can generate fake reports that are convincing enough to trick cybersecurity experts.

· If widely used, these AIs could hinder efforts to defend against cyberattacks.

· These systems could set off an AI arms race between misinformation generators and detectors.

If you use social media websites such as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation – flagged and unflagged – has been aimed at the general public. Now imagine misinformation – information that is false or misleading – in scientific and technical fields like cybersecurity, public safety and medicine.

There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community. We found that it’s possible for artificial intelligence systems to generate false information in critical fields like medicine and defense that is convincing enough to fool experts.

General misinformation often aims to tarnish the reputation of companies or public figures. Misinformation within communities of expertise has the potential for scary outcomes such as delivering incorrect medical advice to doctors and patients. This could put lives at risk.

To test this threat, we studied the impacts of spreading misinformation in the cybersecurity and medical communities. We used artificial intelligence models dubbed transformers to generate false cybersecurity news and COVID-19 medical studies, and we presented the cybersecurity misinformation to cybersecurity experts for testing. We found that transformer-generated misinformation was able to fool them.

Transformers

Much of the technology used to identify and manage misinformation is powered by artificial intelligence. AI allows computer scientists to fact-check large amounts of information quickly, given that there's far too much for people to vet without the help of technology. Although AI helps people detect misinformation, it has ironically also been used to produce misinformation in recent years.

A block of text on a smartphone screen
AI can help detect misinformation like these false claims about COVID-19 in India – but what happens when AI is used to generate the misinformation? AP Photo/Ashwini Bhatia

Transformers, like BERT from Google and GPT from OpenAI, use natural language processing to understand text and produce translations, summaries and interpretations. They have been used for tasks such as storytelling and answering questions, pushing the boundaries of how humanlike machine-generated text can be.

Transformers have aided Google and other technology companies by improving their search engines, and have helped the general public with such common problems as writer's block.

Transformers can also be used for malevolent purposes. Social networks like Facebook and Twitter have already faced the challenge of AI-generated fake news on their platforms.

Critical misinformation

Our research shows that transformers also pose a misinformation threat in medicine and cybersecurity. To illustrate how serious this is, we fine-tuned the GPT-2 transformer model on open online sources discussing cybersecurity vulnerabilities and attack information. A cybersecurity vulnerability is a weakness in a computer system, and a cybersecurity attack is an act that exploits a vulnerability. For example, if the vulnerability is a weak Facebook password, an attack exploiting it would be a hacker figuring out your password and breaking into your account.

We then seeded the model with the opening sentence or phrase of an actual cyberthreat intelligence sample and had it generate the rest of the threat description. We presented this generated description to cyberthreat hunters, who sift through lots of information about cybersecurity threats. These professionals read the threat descriptions to identify potential attacks and adjust the defenses of their systems.
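This seed-then-complete workflow can be illustrated with a toy word-level Markov chain. This is only a stand-in for GPT-2 – the real model is vastly more capable – but the principle of sampling a plausible continuation from patterns learned in a training corpus is the same. The miniature corpus and all names below are invented for illustration.

```python
import random

def train_markov(corpus):
    """Build a word-level bigram table: word -> list of observed next words."""
    words = corpus.split()
    table = {}
    for prev, nxt in zip(words, words[1:]):
        table.setdefault(prev, []).append(nxt)
    return table

def generate(table, seed, length=10, rng=None):
    """Complete a seed phrase by repeatedly sampling a plausible next word."""
    rng = rng or random.Random(0)
    out = seed.split()
    for _ in range(length):
        options = table.get(out[-1])
        if not options:  # dead end: no continuation was ever observed
            break
        out.append(rng.choice(options))
    return " ".join(out)

# A miniature "threat report" corpus stands in for the fine-tuning data.
corpus = ("the attacker exploits a vulnerability in the server "
          "the attacker gains access to the server and exfiltrates data")
table = train_markov(corpus)
print(generate(table, "the attacker", length=6))
```

Seeded with "the attacker", the toy model produces a fluent-looking continuation stitched from fragments of its training text – the same reason transformer output can read like a genuine threat report.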

We were surprised by the results. The cybersecurity misinformation examples we generated were able to fool cyberthreat hunters, who are knowledgeable about all kinds of cybersecurity attacks and vulnerabilities. Consider one example from our study: a crucial-seeming piece of cyberthreat intelligence involving the airline industry, generated entirely by our model.

A block of text with false information about a cybersecurity attack on airlines
An example of AI-generated cybersecurity misinformation. The Conversation, CC BY-ND

This misleading description contains incorrect information concerning cyberattacks on airlines with sensitive real-time flight data. The false information could keep cyber analysts from addressing legitimate vulnerabilities in their systems by shifting their attention to fake software bugs. If a cyber analyst acted on the fake information in a real-world scenario, the airline in question could face a serious attack that exploits a real, unaddressed vulnerability.

A similar transformer-based model can generate information in the medical domain and potentially fool medical experts. During the COVID-19 pandemic, preprints of research papers that have not yet undergone a rigorous review are constantly being uploaded to such sites as medRxiv. They are not only being described in the press but are being used to make public health decisions. Consider the following, which is not real but was generated by our model after minimal fine-tuning of the default GPT-2 on some COVID-19-related papers.

A block of text showing health care misinformation.
An example of AI-generated health care misinformation. The Conversation, CC BY-ND

The model was able to generate complete sentences and form an abstract allegedly describing the side effects of COVID-19 vaccinations and the experiments that were conducted. This is troubling both for medical researchers, who consistently rely on accurate information to make informed decisions, and for members of the general public, who often rely on public news to learn about critical health information. If accepted as accurate, this kind of misinformation could put lives at risk by misdirecting the efforts of scientists conducting biomedical research.


An AI misinformation arms race?

Although examples like these from our study can be fact-checked, transformer-generated misinformation makes it harder for industries such as health care and cybersecurity to adopt AI for coping with information overload. For example, automated systems are being developed to extract data from cyberthreat intelligence that is then used to inform and train automated systems to recognize possible attacks. If these automated systems process such false cybersecurity text, they will be less effective at detecting true threats.

We believe the result could be an arms race as people spreading misinformation develop better ways to create false information in response to effective ways to recognize it.

Cybersecurity researchers continuously study ways to detect misinformation in different domains. Understanding how to automatically generate misinformation helps in understanding how to recognize it. For example, automatically generated information often has subtle grammatical mistakes that systems can be trained to detect. Systems can also cross-correlate information from multiple sources and identify claims lacking substantial support from other sources.
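The cross-correlation idea in the last sentence can be sketched as a simple corroboration count: a claim reported by fewer than some threshold of independent sources gets flagged for scrutiny. Real systems match claims semantically rather than by exact string, and the reports and threshold below are made up for illustration.

```python
def flag_unsupported(claims_by_source, min_sources=2):
    """Flag claims that appear in fewer than min_sources independent sources."""
    support = {}
    for source, claims in claims_by_source.items():
        for claim in claims:
            support.setdefault(claim, set()).add(source)
    return [claim for claim, sources in support.items()
            if len(sources) < min_sources]

reports = {
    "source_a": ["patch released for CVE-1", "new worm spreading"],
    "source_b": ["patch released for CVE-1"],
    "source_c": ["airline systems breached"],  # no corroboration anywhere
}
print(flag_unsupported(reports))
```

Here the CVE-1 patch claim is corroborated by two sources and passes, while the uncorroborated claims are flagged as candidates for fact-checking.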

Ultimately, everyone should be more vigilant about what information is trustworthy and be aware that hackers exploit people’s credulity, especially if the information is not from reputable news sources or published scientific work.

Priyanka Ranade, PhD Student in Computer Science and Electrical Engineering, University of Maryland, Baltimore County; Anupam Joshi, Professor of Computer Science & Electrical Engineering, University of Maryland, Baltimore County, and Tim Finin, Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Wednesday, June 23, 2021

Space tourism is here – 20 years after the first stellar tourist, Jeff Bezos’ Blue Origin plans to send civilians to space

Astronaut Tracy Caldwell Dyson on the International Space Station with a view many more are likely to see soon. NASA/Tracy Caldwell Dyson/Wikimedia Commons
Wendy Whitman Cobb, US Air Force School of Advanced Air and Space Studies

For most people, getting to the stars is nothing more than a dream. But on May 5, 2021 – the 60th anniversary of the first crewed suborbital spaceflight – that dream became a little more achievable.

The space company Blue Origin announced that it would start selling tickets for suborbital flights to the edge of space. The first flight is scheduled for July 20, and Jeff Bezos’ company is auctioning off a single ticket to the highest bidder.

But whoever places the winning bid won’t be the first tourist in space.

On April 28, 2001, Dennis Tito, a wealthy businessman, paid US$20 million for a seat on a Russian Soyuz spacecraft to be the first tourist to visit the International Space Station. Only seven civilians have followed suit in the 20 years since, but that number is poised to double in the next 12 months alone.

NASA has long been hesitant to play host to space tourists, so Russia – looking for sources of money post-Cold War in the 1990s and 2000s – has been the only option available to those looking for this kind of extreme adventure. However, it seems the rise of private space companies is going to make it easier for regular people to experience space.

From my perspective as a space policy analyst, recent announcements from companies like Blue Origin and SpaceX are the opening of an era in which more people can experience space. Hoping to build a future for humanity in space, these companies are seeking to use space tourism as a way to demonstrate both the safety and reliability of space travel to the general public.

Three men floating in the International Space Station
Dennis Tito, on the left beside two Russian astronauts, was the first private citizen to ever go to space – and he spent more than a week on the International Space Station. NASA/Wikimedia Commons

The development of space tourism

Flights to space like Dennis Tito’s are expensive for a reason. A rocket must burn a lot of costly fuel to travel high and fast enough to enter Earth’s orbit.

A cheaper possibility is a suborbital launch, in which the rocket goes high enough to reach the edge of space and then comes right back down. This is the kind of flight that Blue Origin is now offering. Because the rocket never needs to reach orbital speed, these launches are far more accessible, yet passengers still experience weightlessness and incredible views.

The difficulty and expense of either option has meant that, traditionally, only nation-states have been able to explore space. This began to change in the 1990s as a series of entrepreneurs entered the space arena. Three companies led by billionaire CEOs have emerged as the major players: Blue Origin, SpaceX and Virgin Galactic. Though none have taken paying, private customers to space, all anticipate doing so in the very near future.

British billionaire Richard Branson has built his brand on not just business but also his love of adventure. In pursuing space tourism, Branson has brought both of those to bear. He established Virgin Galactic after licensing the technology behind SpaceShipOne – the vehicle that won the Ansari X Prize as the first privately built, reusable crewed spacecraft to reach space. Since then, Virgin Galactic has sought to design, build and fly a larger SpaceShipTwo that can carry up to six passengers on a suborbital flight.

A silvery ship that looks like a fighter plane with elongated tail fins.
The VSS Unity spacecraft is one of the ships that Virgin Galactic plans to use for space tours. AP Photo/Matt Hartman

The going has been harder than anticipated. While Branson predicted opening the business to tourists in 2009, Virgin Galactic has encountered some significant hurdles – including the death of a pilot in a crash in 2014. After the crash, engineers found significant problems with the design of the vehicle, which required modifications.

Elon Musk and Jeff Bezos, respective leaders of SpaceX and Blue Origin, began their own ventures in the early 2000s.

Musk, fearing that a catastrophe of some sort could leave Earth uninhabitable, was frustrated at the lack of progress in making humanity a multiplanetary species. He founded SpaceX in 2002 with the goal of first developing reusable launch technology to decrease the cost of getting to space. Since then, SpaceX has found success with its Falcon 9 rocket and Dragon spacecraft. SpaceX’s ultimate goal is human settlement of Mars; sending paying customers to space is an intermediate step. Musk says he hopes to show that space travel can be done easily and that tourism might provide a revenue stream to support development of the larger, Mars-focused Starship system.

Bezos, inspired by the vision of physicist Gerard O’Neill, wants to expand humanity and industry not to Mars but to space itself. Blue Origin, established in 2004, has proceeded slowly and quietly in also developing reusable rockets. Its New Shepard rocket, first successfully flown in 2015, will be the spaceship taking tourists on suborbital trips to the edge of space this July. For Bezos, these launches represent an effort at making space travel routine, reliable and accessible as a first step to enabling further space exploration.

A large silvery rocket standing upright on a launchpad.
SpaceX has already started selling tickets to the public and has future plans to use its Starship rocket, a prototype of which is seen here, to send people to Mars. Jared Krahn/Wikimedia Commons, CC BY-SA

Outlook for the future

Blue Origin is not the only company planning to offer passengers the opportunity to go to space – and some upcoming flights will even orbit the Earth.

SpaceX currently has two tourist launches planned. The first is scheduled for as early as September 2021, funded by billionaire businessman Jared Isaacman. The other trip, planned for 2022, is being organized by Axiom Space. These trips will be costly for wannabe space travelers, at $55 million for the flight and a stay on the International Space Station. The high cost has led some to warn that space tourism – and private access to space more broadly – might reinforce inequality between rich and poor.

A white domed capsule with windows in the Texas desert.
The first tourist to fly on a privately owned spaceship will ride in Blue Origin’s New Shepard Crew Capsule, seen here after a test flight in Texas. NASA Flight Opportunities/Wikimedia Commons

While Blue Origin is already accepting bids for a seat on the first launch, it has not yet announced the cost of a ticket for future trips. Passengers will also need to meet several physical qualifications, including weighing 110 to 223 pounds (50 to 101 kg) and measuring between 5 feet and 6 feet, 4 inches (1.5 to 1.9 meters) in height. Virgin Galactic, which continues to test SpaceShipTwo, has no specific timetable, but its tickets are expected to be priced from $200,000 to $250,000.

Though these prices are high, it is worth considering that Dennis Tito’s $20 million ticket in 2001 could soon pay for roughly 100 suborbital flights on Blue Origin. The experience of viewing the Earth from space, though, may prove to be priceless for a whole new generation of space explorers.

This is an updated version of an article originally published on April 28, 2021. It has been updated to include the announcement by Blue Origin.

Wendy Whitman Cobb, Professor of Strategy and Security Studies, US Air Force School of Advanced Air and Space Studies

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Friday, June 18, 2021

Fast computers, 5G networks and radar that passes through walls are bringing ‘X-ray vision’ closer to reality

Seeing through walls has long been a staple of comics and science fiction. Something like it could soon be a reality. Paul Gilligan/Photodisc via Getty Images
Aly Fathy, University of Tennessee

Within seconds of striking a city, earthquakes can cause immense destruction: Houses crumble, high-rises turn to rubble, people and animals are buried in the debris.

In the immediate aftermath of such carnage, emergency personnel desperately search for any sign of life in what used to be a home or office. Often, however, they find they have been digging in the wrong pile of rubble, and precious time has passed.

Imagine if rescuers could see through the debris to spot survivors under the rubble, measure their vital signs and even generate images of the victims. This is rapidly becoming possible using see-through-wall radar technology. Early versions of the technology, which indicate whether a person is present in a room, have been in use for several years, and some can measure vital signs – albeit under far better conditions than through rubble.

I’m an electrical engineer who researches electromagnetic communication and imaging systems. I and others are using fast computers, new algorithms and radar transceivers that collect large amounts of data to enable something much closer to the X-ray vision of science fiction and comic books. This emerging technology will make it possible to determine how many occupants are present behind a wall or barrier, where they are, what items they might be carrying and, in policing or military uses, even what type of body armor they might be wearing.

These see-through-wall radars will also be able to track individuals’ movements, and heart and respiration rates. The technology could also be used to determine from a distance the entire layout of a building, down to the location of pipes and wires within the walls, and detect hidden weapons and booby traps.

See-through-wall technology has been under development since the Cold War as a way to replace drilling holes through walls for spying. There are a few commercial products on the market today, like Range-R radar, that are used by law enforcement officers to track motion behind walls.

How radar works

Radar stands for radio detection and ranging. Using radio waves, a radar sends a signal that travels at the speed of light. If the signal hits an object – a plane, for example – it is reflected back toward a receiver, and an echo appears on the radar’s screen after a time delay proportional to the object’s distance. This echo can then be used to estimate the location of the object.

In 1842, Christian Doppler, an Austrian physicist, described a phenomenon now known as the Doppler effect or Doppler shift, where the change in frequency of a signal is related to the speed and direction of the source of the signal. In Doppler’s original case, this was the light from a binary star system. This is similar to the changing pitch of a siren as an emergency vehicle speeds toward you, passes you and then moves away. Doppler radar uses this effect to compare the frequencies of the transmitted and reflected signals to determine the direction and speed of moving objects, like thunderstorms and speeding cars.

The Doppler effect can be used to detect tiny motions, including heartbeats and chest movement associated with breathing. In these examples, the Doppler radar sends a signal to a human body, and the reflected signal differs based on whether the person is inhaling or exhaling, or even based on the person’s heart rate. This allows the technology to accurately measure these vital signs.
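The two quantities described above – range from the echo's time delay, and speed from its Doppler shift – reduce to short formulas. The sketch below uses the standard relations for a radar that transmits and receives at the same location (range is c·t/2 because the echo travels out and back; the Doppler shift of an echo from a target closing at speed v is 2vf/c). The example numbers are illustrative.

```python
C = 3.0e8  # speed of light, in meters per second

def range_from_delay(delay_s):
    """One-way range to the target: the echo travels out and back, so c * t / 2."""
    return C * delay_s / 2

def doppler_shift(speed_mps, tx_freq_hz):
    """Frequency shift of an echo from a target closing at speed v: 2 * v * f / c."""
    return 2 * speed_mps * tx_freq_hz / C

# An echo arriving 2 microseconds after transmission is from ~300 m away.
print(range_from_delay(2e-6))
# A chest wall moving 10 mm/s, illuminated at 5 GHz, shifts the echo by ~0.33 Hz.
print(doppler_shift(0.01, 5e9))
```

The tiny shift in the second example is why detecting breathing and heartbeats demands a very sensitive, very stable receiver.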

How radar can go through walls

Like cellphones, radars use electromagnetic waves. When a wave hits a solid barrier like drywall or wood, a fraction of it is reflected off the surface, but the rest travels through the wall – especially at relatively low radio frequencies. The wave can be strongly reflected back if it hits a metal object or even a human, because the human body’s high water content makes it highly reflective.

If the radar’s receiver is sensitive enough – a lot more sensitive than ordinary radar receivers – it can pick up the signals that are reflected back through the wall. Using well-established signal processing techniques, the reflections from static objects like walls and furniture can be filtered out, allowing the signal of interest – like a person’s location – to be isolated.
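One common form of that filtering is background subtraction: average many radar frames so that static reflections (walls, furniture) form a clutter estimate, then subtract it from each new frame so only moving reflectors remain. A minimal pure-Python sketch, with made-up sample values standing in for receiver measurements:

```python
def average_frames(frames):
    """Estimate static clutter as the per-sample mean over many frames."""
    n = len(frames)
    return [sum(samples) / n for samples in zip(*frames)]

def remove_clutter(frame, clutter):
    """Subtract the static background, leaving only the changing echoes."""
    return [sample - c for sample, c in zip(frame, clutter)]

# A strong static wall echo sits at sample 0 in every frame;
# a person's echo appears at sample 2 only in the last frame.
frames = [
    [5.0, 0.0, 0.0],
    [5.0, 0.0, 0.0],
    [5.0, 0.0, 0.0],
    [5.0, 0.0, 2.0],
]
clutter = average_frames(frames)
print(remove_clutter(frames[-1], clutter))  # wall suppressed, person stands out
```

After subtraction, the wall's contribution cancels to zero while the person's echo survives – the same principle, at much larger scale, that lets a sensitive receiver isolate a moving target behind a wall.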

A diagram showing a square on the left, a vertical rectangle in the middle and a sphere on the right. A series of four diminishing sine waves pass from the square to the wall, the wall to the sphere, the sphere back to the wall and from the wall to the square.
The key to using radar to track objects on the other side of a wall is having a very sensitive antenna that can pick up the greatly diminished reflected radio waves. Abdel-Kareem Moadi, CC BY-ND

Turning data into images

Historically, radar technology has been limited in its ability to aid in disaster management and law enforcement because it hasn’t had sufficient computational power or speed to filter out background noise from complicated environments like foliage or rubble and produce live images.

Today, however, radar sensors can often collect and process large amounts of data – even in harsh environments – and generate high-resolution images of targets. By using sophisticated algorithms, they can display the data in near real-time. This requires fast computer processors to rapidly handle these large amounts of data, and wideband circuits that can rapidly transmit data to improve the images’ resolution.

Recent developments in millimeter wave wireless technology, from 5G to 5G+ and beyond, are likely to help further improve this technology, providing higher-resolution images through order-of-magnitude wider bandwidth. The wireless technology will also speed data processing times because it greatly reduces latency, the time between transmitting and receiving data.

My laboratory is developing fast methods to remotely characterize the electrical properties of walls, which helps calibrate the radar waves and optimize the antennas so the waves pass through the wall more easily – essentially making the wall transparent to them. We are also developing the software and hardware to carry out the radar systems’ big data analyses in near real-time.

On the left, a laboratory set up showing a cinderblock wall and a foil-covered cardboard silhouette of a person, and, on the right, a radar image showing a corresponding silhouette in a three-dimensional space
This laboratory wall-penetrating radar provides more detail than today’s commercial systems. Aly Fathy

Better electronics promise portable radars

Radar systems at the low frequencies usually required to see through walls are bulky, because antenna size scales with the wavelength of the electromagnetic signal – and lower frequencies mean longer wavelengths. Scientists have therefore been pushing see-through-wall radar technology to higher frequencies in order to build smaller, more portable systems.

In addition to providing a tool for emergency services, law enforcement and the military, the technology could also be used to monitor the elderly and read vital signs of patients with infectious diseases like COVID-19 from outside a hospital room.

One indication of see-through-wall radar’s potential is the U.S. Army’s interest. The Army is looking for technology that can create three-dimensional maps of buildings and their occupants in almost real-time. It is even looking for see-through-wall radar that can create images of people’s faces that are accurate enough for facial recognition systems to identify the people behind the wall.

Whether or not researchers can develop see-through-wall radar that’s sensitive enough to distinguish people by their faces, the technology is likely to move well beyond blobs on a screen to give first responders something like superhuman powers.


Aly Fathy, Professor of Electrical Engineering, University of Tennessee

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, April 11, 2021

Top Stories

Embrace the unexpected: To teach AI how to handle new situations, change the rules of the game

Most of today’s AIs come to a grinding halt when they encounter unexpected conditions, like a change in the rules of a game. LightFieldStudios/iStock via Getty Images
Mayank Kejriwal, University of Southern California

My colleagues and I changed a digital version of Monopoly so that instead of getting US$200 each time a player passes Go, the player is charged a wealth tax. We didn’t do this to gain an advantage or trick anyone. Our purpose was to throw a curveball at the artificial intelligence agents that play the game.

Our aim is to help the agents learn to handle unexpected events, something AIs to date have been decidedly bad at. Giving AIs this kind of adaptability is important for futuristic systems like surgical robots, but also algorithms in the here and now that decide who should get bail, who should get approved for a credit card and whose resume gets through to a hiring manager. Not dealing well with the unexpected in any of those situations can have disastrous consequences.

AI agents need the ability to detect, characterize and adapt to novelty in human-like ways. A situation is novel if it challenges, directly or indirectly, an agent’s model of the external world, which includes other agents, the environment and their interactions.

While most people do not deal with novelty in the most perfect way possible, they are able to learn from their mistakes and adapt. Faced with a wealth tax in Monopoly, a human player might realize that she should have cash handy for the IRS as she is approaching Go. An AI player, bent on aggressively acquiring properties and monopolies, may fail to realize the appropriate balance between cash and nonliquid assets until it’s too late.

Adapting to novelty in open worlds

Reinforcement learning is the field that is largely responsible for “superhuman” game-playing AI agents and applications like self-driving cars. Reinforcement learning uses rewards and punishment to allow AI agents to learn by trial and error. It is part of the larger AI field of machine learning.

The learning in machine learning implies that such systems are already capable of dealing with limited types of novelty. Machine learning systems tend to do well on input data that are statistically similar, although not identical, to those on which they were originally trained. In practice, mild violations of this condition are fine, so long as nothing too unexpected happens.
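The "statistically similar" condition can be made concrete with a crude check: compare an incoming value against the summary statistics of the training data and flag extreme deviations. This toy z-score detector is only an illustration of the idea, not how production systems detect novelty.

```python
import statistics

def looks_novel(train_data, new_value, z_threshold=3.0):
    """Flag an input whose z-score against the training data is extreme."""
    mean = statistics.mean(train_data)
    std = statistics.stdev(train_data)
    z = abs(new_value - mean) / std
    return z > z_threshold

# Training inputs cluster tightly around 10.
train = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]
print(looks_novel(train, 10.1))  # False: statistically similar to training data
print(looks_novel(train, 25.0))  # True: far outside anything seen before
```

A system that trusts its model on the first input but defers or adapts on the second is, in a very small way, treating novelty as something to detect rather than ignore.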

Such systems can run into trouble in an open world. As the name suggests, open worlds cannot be completely and explicitly defined. The unexpected can, and does, happen. Most importantly, the real world is an open world.

However, the “superhuman” AIs are not designed to handle highly unexpected situations in an open world. One reason may be the use of modern reinforcement learning itself, which eventually leads the AI to be optimized for the specific environment in which it was trained. In real life, there are no such guarantees. An AI that is built for real life must be able to adapt to novelty in an open world.

Novelty as a first-class citizen

Returning to Monopoly, imagine that certain properties are subject to rent protection. A good player, human or AI, would recognize those properties as bad investments compared with properties that can earn higher rents, and would avoid purchasing them. However, an AI that has never before seen this situation, or anything like it, will likely need to play many games before it can adapt.

Before computer scientists can even start theorizing about how to build such “novelty-adaptive” agents, they need a rigorous method for evaluating them. Traditionally, most AI systems are tested by the same people who build them. Competitions are more impartial, but to date, no competition has evaluated AI systems in situations so unexpected that not even the system designers could have foreseen them. Such an evaluation is the gold standard for testing AI on novelty, similar to randomized controlled trials for evaluating drugs.

In 2019, the U.S. Defense Advanced Research Projects Agency launched a program called Science of Artificial Intelligence and Learning for Open-world Novelty, called SAIL-ON for short. It is currently funding many groups, including my own at the University of Southern California, for researching novelty adaptation in open worlds.

One of the many ways in which the program is innovative is that a team can either develop an AI agent that handles novelty, or design an open-world environment for evaluating such agents, but not both. Teams that build an open-world environment must also theorize about novelty in that environment. They test their theories and evaluate the agents built by another group by developing a novelty generator. These generators can be used to inject unexpected elements into the environment.
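In that spirit, a novelty generator can be as simple as a function that rewrites one entry in the environment's rule configuration before a game starts. The rule names and values below are invented for illustration; they are not GNOME's actual interface.

```python
import random

# Baseline rules of the (hypothetical) game environment.
DEFAULT_RULES = {
    "pass_go_reward": 200,     # classic Monopoly: collect $200 for passing Go
    "rent_multiplier": 1.0,
    "jail_fine": 50,
}

# Candidate novelties: each replaces one rule with an unexpected value.
NOVELTIES = [
    ("pass_go_reward", -150),  # a wealth tax instead of a reward
    ("rent_multiplier", 0.5),  # rent protection halves all rents
    ("jail_fine", 200),
]

def inject_novelty(rules, rng=None):
    """Return a copy of the rules with one unexpected change applied."""
    rng = rng or random.Random()
    key, value = rng.choice(NOVELTIES)
    novel_rules = dict(rules)  # leave the original rule set untouched
    novel_rules[key] = value
    return novel_rules

print(inject_novelty(DEFAULT_RULES, random.Random(0)))
```

An agent is then evaluated on games played under the mutated rules it has never seen, which is exactly the kind of unexpected element a novelty generator injects.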

Under SAIL-ON, my colleagues and I recently developed a simulator called Generating Novelty in Open-world Multi-agent Environments, or GNOME. GNOME is designed to test AI novelty adaptation in strategic board games that capture elements of the real world.

Diagram of a Monopoly game with symbols indicating players, houses and hotels
The Monopoly version of the author’s AI novelty environment can trip up AIs that play the game by introducing a wealth tax, rent control and other unexpected factors. Mayank Kejriwal, CC BY-ND

Our first version of GNOME uses the classic board game Monopoly. We recently demonstrated the Monopoly-based GNOME at a top machine learning conference. We allowed participants to inject novelties and see for themselves how preprogrammed AI agents performed. For example, GNOME can introduce the wealth tax or rent protection “novelties” mentioned earlier, and evaluate the AI following the change.

By comparing how the AI performed before and after the rule change, GNOME can quantify just how far off its game the novelty knocked the AI. If GNOME finds that the AI was winning 80% of the games before the novelty was introduced, and is now winning only 25% of the games, it will flag the AI as one that has lots of room to improve.
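That before-and-after comparison amounts to a one-line metric. A sketch using the article's own numbers, with the 20-point flagging threshold chosen arbitrarily for illustration:

```python
def novelty_impact(win_rate_before, win_rate_after):
    """Absolute drop in win rate after a novelty is introduced."""
    return win_rate_before - win_rate_after

def needs_improvement(before, after, max_drop=0.20):
    """Flag an agent whose performance degrades by more than max_drop."""
    return novelty_impact(before, after) > max_drop

# The agent won 80% of games before the novelty and 25% after.
print(round(novelty_impact(0.80, 0.25), 2))  # 0.55
print(needs_improvement(0.80, 0.25))         # True: lots of room to improve
```

A truly novelty-adaptive agent would show a drop near zero, or recover quickly as it accumulates games under the new rules.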

The future: A science of novelty?

GNOME has already been used to evaluate novelty-adaptive AI agents built by three independent organizations also funded under this DARPA program. We have also built GNOMEs based on poker, and “war games” that are similar to Battleship. In the next year, we will also be exploring GNOMEs for other strategic board games like Risk and Catan. This research is expected to lead to AI agents that are capable of handling novelty in different settings.


Making novelty a central focus of modern AI research and evaluation has produced, as a byproduct, an initial body of work in support of a science of novelty. Not only are researchers like ourselves exploring definitions and theories of novelty, but we are exploring questions that could have fundamental implications. For example, our team is exploring the question of when a novelty is expected to be impossibly difficult for an AI to handle. In the real world, if such a situation arose, the AI would ideally recognize it and call in a human operator.

In seeking answers to these and other questions, computer scientists are now trying to enable AIs that can react properly to the unexpected, including black-swan events like COVID-19. Perhaps the day is not far off when an AI will be able to not only beat humans at their existing games, but adapt quickly to any version of those games that humans can imagine. It may even be capable of adapting to situations that we cannot conceive of today.

Mayank Kejriwal, Research Assistant Professor of Computer Science, University of Southern California

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, December 27, 2020

Top Stories

An AI tool can distinguish between a conspiracy theory and a true conspiracy – it comes down to how easily the story falls apart

In the age of social media, conspiracy theories are collective creations. AP Photo/Ted S. Warren
Timothy R. Tangherlini, University of California, Berkeley

The audio on the otherwise shaky body camera footage is unusually clear. As police officers search a handcuffed man who moments before had fired a shot inside a pizza parlor, an officer asks him why he was there. The man says to investigate a pedophile ring. Incredulous, the officer asks again. Another officer chimes in, “Pizzagate. He’s talking about Pizzagate.”

In that brief, chilling interaction in 2016, it becomes clear that conspiracy theories, long relegated to the fringes of society, had moved into the real world in a very dangerous way.

Conspiracy theories, which have the potential to cause significant harm, have found a welcome home on social media, where forums free from moderation allow like-minded individuals to converse. There they can develop their theories and propose actions to counteract the threats they “uncover.”

But how can you tell if an emerging narrative on social media is an unfounded conspiracy theory? It turns out that it’s possible to distinguish between conspiracy theories and true conspiracies by using machine learning tools to graph the elements and connections of a narrative. These tools could form the basis of an early warning system to alert authorities to online narratives that pose a threat in the real world.

The culture analytics group at the University of California, which Vwani Roychowdhury and I lead, has developed an automated approach to determining when conversations on social media reflect the telltale signs of conspiracy theorizing. We have applied these methods successfully to the study of Pizzagate, the COVID-19 pandemic and anti-vaccination movements. We’re currently using these methods to study QAnon.

Collaboratively constructed, fast to form

Actual conspiracies are deliberately hidden, real-life actions of people working together for their own malign purposes. In contrast, conspiracy theories are collaboratively constructed and develop in the open.

Conspiracy theories are deliberately complex and reflect an all-encompassing worldview. Instead of trying to explain one thing, a conspiracy theory tries to explain everything, discovering connections across domains of human interaction that are otherwise hidden – mostly because they do not exist.

People are susceptible to conspiracy theories by nature, and periods of uncertainty and heightened anxiety increase that susceptibility.

While the popular image of the conspiracy theorist is of a lone wolf piecing together puzzling connections with photographs and red string, that image no longer applies in the age of social media. Conspiracy theorizing has moved online and is now the end product of collective storytelling. The participants work out the parameters of a narrative framework: the people, places and things of a story and their relationships.

The online nature of conspiracy theorizing provides an opportunity for researchers to trace the development of these theories from their origins as a series of often disjointed rumors and story pieces to a comprehensive narrative. For our work, Pizzagate presented the perfect subject.

Pizzagate began to develop in late October 2016 during the runup to the presidential election. Within a month, it was fully formed, with a complete cast of characters drawn from a series of otherwise unlinked domains: Democratic politics, the private lives of the Podesta brothers, casual family dining and satanic pedophilic trafficking. The connecting narrative thread among these otherwise disparate domains was the fanciful interpretation of the leaked emails of the Democratic National Committee dumped by WikiLeaks in the final week of October 2016.

AI narrative analysis

We developed a model – a set of machine learning tools – that can identify narratives based on sets of people, places and things and their relationships. Machine learning algorithms process large amounts of data to determine the categories of things in the data and then identify which categories particular things belong to.

We analyzed 17,498 posts from April 2016 through February 2018 on the Reddit and 4chan forums where Pizzagate was discussed. The model treats each post as a fragment of a hidden story and sets about uncovering the narrative. The software identifies the people, places and things in the posts and determines which are major elements, which are minor elements and how they’re all connected.
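
As a rough sketch of the co-occurrence step behind such a narrative graph: entities mentioned in the same post get an edge, weighted by how many posts they share. The fixed lexicon and simple string matching below are hypothetical stand-ins for the machine-learned entity extraction the model actually uses:

```python
# Minimal sketch of narrative-graph construction from posts.
# Entity extraction here is a toy lexicon lookup (an assumption);
# the real system learns to identify people, places and things.

from collections import defaultdict
from itertools import combinations

LEXICON = {"podesta", "wikileaks", "pizzeria", "dnc", "satanism"}

def extract_entities(post):
    """Return the known entities mentioned in a post."""
    return {w.strip(".,").lower() for w in post.split()} & LEXICON

def build_graph(posts):
    """Edge weights = number of posts in which two entities co-occur."""
    edges = defaultdict(int)
    for post in posts:
        for a, b in combinations(sorted(extract_entities(post)), 2):
            edges[(a, b)] += 1
    return dict(edges)

posts = [
    "The DNC emails on WikiLeaks mention Podesta.",
    "Podesta and the pizzeria are linked, they say.",
]
print(build_graph(posts))
```

Heavily weighted edges and frequently mentioned entities then point to the major elements and layers of the narrative.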

The model determines the main layers of the narrative – in the case of Pizzagate, Democratic politics, the Podesta brothers, casual dining, satanism and WikiLeaks – and how the layers come together to form the narrative as a whole.

To ensure that our methods produced accurate output, we compared the narrative framework graph produced by our model with illustrations published in The New York Times. Our graph aligned with those illustrations, and also offered finer levels of detail about the people, places and things and their relationships.

Sturdy truth, fragile fiction

To see if we could distinguish between a conspiracy theory and an actual conspiracy, we examined Bridgegate, a political payback operation launched by staff members of Republican Gov. Chris Christie’s administration against the Democratic mayor of Fort Lee, New Jersey.

As we compared the results of our machine learning system on the two separate collections of posts – one about Pizzagate, one about Bridgegate – two distinguishing features of a conspiracy theory’s narrative framework stood out.

First, while the narrative graph for Bridgegate took from 2013 to 2020 to develop, Pizzagate’s graph was fully formed and stable within a month. Second, Bridgegate’s graph survived having elements removed, implying that New Jersey politics would continue as a single, connected network even if key figures and relationships from the scandal were deleted.

The Pizzagate graph, in contrast, was easily fractured into smaller subgraphs. When we removed the people, places, things and relationships that came directly from the interpretations of the WikiLeaks emails, the graph fell apart into what in reality were the unconnected domains of politics, casual dining, the private lives of the Podestas and the odd world of satanism.
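
The fragility test described here amounts to deleting one layer’s nodes and counting the connected components that remain. Here is a minimal sketch in plain Python, using a toy graph that is our own illustrative assumption rather than the study’s actual data:

```python
# Sketch of the graph fragility test: count connected components
# before and after removing a set of nodes. The toy edges below are
# illustrative; in Pizzagate's graph, WikiLeaks-derived links were
# the only bridges between otherwise unrelated domains.

from collections import defaultdict, deque

def components(edges, removed=frozenset()):
    """Count connected components after deleting the `removed` nodes."""
    adj = defaultdict(set)
    nodes = set()
    for a, b in edges:
        if a in removed or b in removed:
            continue
        adj[a].add(b)
        adj[b].add(a)
        nodes |= {a, b}
    seen, count = set(), 0
    for start in nodes:
        if start in seen:
            continue
        count += 1  # new component found; flood-fill it with BFS
        queue = deque([start])
        while queue:
            n = queue.popleft()
            if n not in seen:
                seen.add(n)
                queue.extend(adj[n] - seen)
    return count

edges = [("wikileaks", "dnc"), ("wikileaks", "podesta"),
         ("wikileaks", "pizzeria"), ("wikileaks", "satanism"),
         ("dnc", "clinton"), ("podesta", "emails"),
         ("pizzeria", "comet"), ("satanism", "rituals")]

print(components(edges))                         # 1 - one connected story
print(components(edges, removed={"wikileaks"}))  # 4 - it falls apart
```

A robust graph like Bridgegate’s stays in one piece under such deletions; a conspiracy theory’s graph shatters into its unconnected domains.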

In the illustration below, the green planes are the major layers of the narrative, the dots are the major elements of the narrative, the blue lines are connections among elements within a layer and the red lines are connections among elements across the layers. The purple plane shows all the layers combined, showing how the dots are all connected. Removing the WikiLeaks plane yields a purple plane with dots connected only in small groups.

The layers of the Pizzagate conspiracy theory combine to form a narrative, top right. Remove one layer, the fanciful interpretations of emails released by WikiLeaks, and the whole story falls apart, bottom right. Tangherlini, et al., CC BY

Early warning system?

There are clear ethical challenges that our work raises. Our methods, for instance, could be used to generate additional posts to a conspiracy theory discussion that fit the narrative framework at the root of the discussion. Similarly, given any set of domains, someone could use the tool to develop an entirely new conspiracy theory.


However, this weaponization of storytelling is already occurring without automatic methods, as our study of social media forums makes clear. There is a role for the research community to help others understand how that weaponization occurs and to develop tools for people and organizations who protect public safety and democratic institutions.

Developing an early warning system that tracks the emergence and alignment of conspiracy theory narratives could alert researchers – and authorities – to real-world actions people might take based on these narratives. Perhaps with such a system in place, the arresting officer in the Pizzagate case would not have been baffled by the gunman’s response when asked why he’d shown up at a pizza parlor armed with an AR-15 rifle.

Timothy R. Tangherlini, Professor of Danish Literature and Culture, University of California, Berkeley

This article is republished from The Conversation under a Creative Commons license. Read the original article.