Sunday, December 27, 2020

Top Stories

An AI tool can distinguish between a conspiracy theory and a true conspiracy – it comes down to how easily the story falls apart

In the age of social media, conspiracy theories are collective creations. AP Photo/Ted S. Warren
Timothy R. Tangherlini, University of California, Berkeley

The audio on the otherwise shaky body camera footage is unusually clear. As police officers search a handcuffed man who moments before had fired a shot inside a pizza parlor, an officer asks him why he was there. The man says to investigate a pedophile ring. Incredulous, the officer asks again. Another officer chimes in, “Pizzagate. He’s talking about Pizzagate.”

In that brief, chilling interaction in 2016, it becomes clear that conspiracy theories, long relegated to the fringes of society, had moved into the real world in a very dangerous way.

Conspiracy theories, which have the potential to cause significant harm, have found a welcome home on social media, where forums free from moderation allow like-minded individuals to converse. There they can develop their theories and propose actions to counteract the threats they “uncover.”

But how can you tell if an emerging narrative on social media is an unfounded conspiracy theory? It turns out that it’s possible to distinguish between conspiracy theories and true conspiracies by using machine learning tools to graph the elements and connections of a narrative. These tools could form the basis of an early warning system to alert authorities to online narratives that pose a threat in the real world.

The culture analytics group at the University of California, which Vwani Roychowdhury and I lead, has developed an automated approach to determining when conversations on social media reflect the telltale signs of conspiracy theorizing. We have applied these methods successfully to the study of Pizzagate, the COVID-19 pandemic and anti-vaccination movements. We’re currently using these methods to study QAnon.

Collaboratively constructed, fast to form

Actual conspiracies are deliberately hidden, real-life actions of people working together for their own malign purposes. In contrast, conspiracy theories are collaboratively constructed and develop in the open.

Conspiracy theories are deliberately complex and reflect an all-encompassing worldview. Instead of trying to explain one thing, a conspiracy theory tries to explain everything, discovering connections across domains of human interaction that are otherwise hidden – mostly because they do not exist.

People are susceptible to conspiracy theories by nature, and periods of uncertainty and heightened anxiety increase that susceptibility.

While the popular image of the conspiracy theorist is of a lone wolf piecing together puzzling connections with photographs and red string, that image no longer applies in the age of social media. Conspiracy theorizing has moved online and is now the end product of collective storytelling. The participants work out the parameters of a narrative framework: the people, places and things of a story and their relationships.

The online nature of conspiracy theorizing provides an opportunity for researchers to trace the development of these theories from their origins as a series of often disjointed rumors and story pieces to a comprehensive narrative. For our work, Pizzagate presented the perfect subject.

Pizzagate began to develop in late October 2016 during the runup to the presidential election. Within a month, it was fully formed, with a complete cast of characters drawn from a series of otherwise unlinked domains: Democratic politics, the private lives of the Podesta brothers, casual family dining and satanic pedophilic trafficking. The connecting narrative thread among these otherwise disparate domains was the fanciful interpretation of the leaked emails of the Democratic National Committee dumped by WikiLeaks in the final week of October 2016.

AI narrative analysis

We developed a model – a set of machine learning tools – that can identify narratives based on sets of people, places and things and their relationships. Machine learning algorithms process large amounts of data to determine the categories of things in the data and then identify which categories particular things belong to.

We analyzed 17,498 posts from April 2016 through February 2018 on the Reddit and 4chan forums where Pizzagate was discussed. The model treats each post as a fragment of a hidden story and sets about uncovering the narrative. The software identifies the people, places and things in the posts and determines which are major elements, which are minor elements and how they’re all connected.
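As a rough illustration of the kind of graph involved, here is a minimal sketch in Python (using the networkx library). It is not the actual pipeline described in this article: entities mentioned in the same post become nodes, and each co-occurrence strengthens the edge between them. The posts and entity lists are invented stand-ins for what an automated extraction step would produce.

from itertools import combinations
import networkx as nx

# Hypothetical, hand-labeled entities per post (stand-ins for NLP output).
posts = [
    {"Podesta", "emails", "WikiLeaks"},
    {"Comet Ping Pong", "Podesta", "emails"},
    {"WikiLeaks", "DNC", "emails"},
]

G = nx.Graph()
for entities in posts:
    for a, b in combinations(sorted(entities), 2):
        # Add or strengthen the edge each time two entities appear in the same post.
        weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)

# Major elements can be approximated by weighted degree (how strongly connected a node is).
strength = {n: sum(d["weight"] for _, _, d in G.edges(n, data=True)) for n in G.nodes}
print(sorted(strength.items(), key=lambda kv: -kv[1]))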

The model determines the main layers of the narrative – in the case of Pizzagate, Democratic politics, the Podesta brothers, casual dining, satanism and WikiLeaks – and how the layers come together to form the narrative as a whole.

To ensure that our methods produced accurate output, we compared the narrative framework graph produced by our model with illustrations published in The New York Times. Our graph aligned with those illustrations, and also offered finer levels of detail about the people, places and things and their relationships.

Sturdy truth, fragile fiction

To see if we could distinguish between a conspiracy theory and an actual conspiracy, we examined Bridgegate, a political payback operation launched by staff members of Republican Gov. Chris Christie’s administration against the Democratic mayor of Fort Lee, New Jersey.

As we compared the results of our machine learning system using the two separate collections, two distinguishing features of a conspiracy theory’s narrative framework stood out.

First, while the narrative graph for Bridgegate took from 2013 to 2020 to develop, Pizzagate’s graph was fully formed and stable within a month. Second, Bridgegate’s graph survived having elements removed, implying that New Jersey politics would continue as a single, connected network even if key figures and relationships from the scandal were deleted.

The Pizzagate graph, in contrast, was easily fractured into smaller subgraphs. When we removed the people, places, things and relationships that came directly from the interpretations of the WikiLeaks emails, the graph fell apart into what in reality were the unconnected domains of politics, casual dining, the private lives of the Podestas and the odd world of satanism.
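The fragility test itself can be sketched in a few lines: delete the nodes belonging to one layer and count how many disconnected pieces remain. The toy graph and the "WikiLeaks layer" below are illustrative assumptions, not the study's actual data.

import networkx as nx

def components_after_removal(graph, nodes_to_remove):
    """Count connected components once the given nodes are removed."""
    trimmed = graph.copy()
    trimmed.remove_nodes_from(nodes_to_remove)
    return nx.number_connected_components(trimmed)

# Toy narrative graph in which one layer ties otherwise separate domains together.
G = nx.Graph()
G.add_edges_from([
    ("Podesta", "emails"), ("emails", "WikiLeaks"),
    ("WikiLeaks", "Comet Ping Pong"), ("Comet Ping Pong", "satanism"),
    ("WikiLeaks", "DNC"), ("DNC", "politics"),
])
wikileaks_layer = {"emails", "WikiLeaks"}

print(nx.number_connected_components(G))             # 1: a single connected story
print(components_after_removal(G, wikileaks_layer))  # 3: the story falls apart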

In the illustration below, the green planes are the major layers of the narrative, the dots are its major elements, the blue lines are connections among elements within a layer, and the red lines are connections among elements across layers. The purple plane combines all the layers, showing how the dots are all connected. Removing the WikiLeaks plane yields a purple plane with dots connected only in small groups.

The layers of the Pizzagate conspiracy theory combine to form a narrative, top right. Remove one layer, the fanciful interpretations of emails released by WikiLeaks, and the whole story falls apart, bottom right. Tangherlini, et al., CC BY

Early warning system?

There are clear ethical challenges that our work raises. Our methods, for instance, could be used to generate additional posts to a conspiracy theory discussion that fit the narrative framework at the root of the discussion. Similarly, given any set of domains, someone could use the tool to develop an entirely new conspiracy theory.


However, this weaponization of storytelling is already occurring without automatic methods, as our study of social media forums makes clear. There is a role for the research community to help others understand how that weaponization occurs and to develop tools for people and organizations who protect public safety and democratic institutions.

Developing an early warning system that tracks the emergence and alignment of conspiracy theory narratives could alert researchers – and authorities – to real-world actions people might take based on these narratives. Perhaps with such a system in place, the arresting officer in the Pizzagate case would not have been baffled by the gunman’s response when asked why he’d shown up at a pizza parlor armed with an AR-15 rifle.

Timothy R. Tangherlini, Professor of Danish Literature and Culture, University of California, Berkeley

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Wednesday, December 23, 2020

Top Stories

How tech firms have tried to stop disinformation and voter intimidation – and come up short

Facebook and the other social media platform companies are facing a reckoning for their handling of disinformation. AP Photo/Noah Berger
Scott Shackelford, Indiana University

Neither disinformation nor voter intimidation is anything new. But tools developed by leading tech companies including Twitter, Facebook and Google now allow these tactics to scale up dramatically.

As a scholar of cybersecurity and election security, I have argued that these firms must do more to rein in disinformation, digital repression and voter suppression on their platforms, including by treating these issues as a matter of corporate social responsibility.

Earlier this fall, Twitter announced new measures to tackle disinformation, including false claims about the risks of voting by mail. Facebook has likewise vowed to crack down on disinformation and voter intimidation on its platform, including by removing posts that encourage people to monitor polling places.

Google has dropped the Proud Boys domain that Iran allegedly used to send messages to some 25,000 registered Democrats threatening them if they did not change parties and vote for Trump.

But such self-regulation, while helpful, can go only so far. The time has come for the U.S. to learn from the experiences of other nations and hold tech firms accountable for ensuring that their platforms are not misused to undermine the country’s democratic foundations.

Voter intimidation

On Oct. 20, registered Democrats in Florida, a crucial swing state, and in Alaska began receiving emails purportedly from the far-right group Proud Boys. The messages were filled with threats up to and including violent reprisals if the recipients did not vote for President Trump and change their party affiliation to Republican.

Less than 24 hours later, on Oct. 21, U.S. Director of National Intelligence John Ratcliffe and FBI Director Christopher Wray gave a briefing in which they publicly attributed this attempt at voter intimidation to Iran. This verdict was later corroborated by Google, which has also claimed that more than 90% of these messages were blocked by spam filters.

The rapid timing of the attribution was reportedly the result of the foreign nature of the threat and the fact that it was coming so close to Election Day. But it is important to note that this is just the latest example of such voter intimidation. Other recent incidents include a robo-call scheme targeting largely African American cities such as Detroit and Cleveland.

It remains unclear how many of these messages actually reached voters and how in turn these threats changed voter behavior. There is some evidence that such tactics can backfire and lead to higher turnout rates in the targeted population.

Disinformation on social media

Effective disinformation campaigns typically have three components:

  • A state-sponsored news outlet to originate the fabrication
  • Alternative media sources willing to spread the disinformation without adequately checking the underlying facts
  • Witting or unwitting “agents of influence”: that is, people who advance the story in other outlets

Pages from the U.S. State Department’s Global Engagement Center report, released Aug. 5, 2020. Russia is using a well-developed online operation to spread disinformation, according to the State Department. AP Photo/Jon Elswick

The advent of cyberspace has put the disinformation process into overdrive, both speeding the viral spread of stories across national boundaries and platforms and fueling a proliferation in the types of traditional and social media willing to run with fake stories.

To date, the major social media firms have taken a largely piecemeal and fractured approach to managing this complex issue. Twitter announced a ban on political ads during the 2020 U.S. election season, in part over concerns about enabling the spread of misinformation. Facebook opted for a more limited ban on new political ads one week before the election.

The U.S. has no equivalent of the French law barring any speech that could influence the vote on the day before an election.

Effects and constraints

The impacts of these efforts have been muted, in part due to the prevalence of social bots that spread low-credibility information virally across these platforms. No comprehensive data exists on the total amount of disinformation or how it is affecting users.

Some recent studies do shed light, though. For example, one 2019 study found that a very small number of Twitter users accounted for the vast majority of exposure to disinformation.

Tech platforms are constrained from doing more by several forces. These include fear of perceived political bias and a strong belief among many, including Mark Zuckerberg, in a robust interpretation of free speech. A related concern of the platform companies is that the more they’re perceived as media gatekeepers, the more likely they will be to face new regulation.

The platform companies are also limited by the technologies and procedures they use to combat disinformation and voter intimidation. For example, Facebook staff reportedly had to manually intervene to limit the spread of a New York Post article about Hunter Biden’s laptop computer that could be part of a disinformation campaign. This highlights how the platform companies are playing catch-up in countering disinformation and need to devote more resources to the effort.

Regulatory options

There is a growing bipartisan consensus that more must be done to rein in social media excesses and to better manage the dual issues of voter intimidation and disinformation. In recent weeks, we have already seen the U.S. Department of Justice open a new antitrust case against Google, which, although it is unrelated to disinformation, can be understood as part of a larger campaign to regulate these behemoths.


Another tool at the U.S. government’s disposal is revising, or even revoking, Section 230 of the 1990s-era Communications Decency Act. This law was designed to shield tech firms, as they developed, from liability for the content that users post to their sites. Many, including former Vice President Joe Biden, argue that it has outlived its usefulness.

Another option to consider is learning from the EU’s approach. In 2018, the European Commission succeeded in getting tech firms to adopt the “Code of Practice on Disinformation,” which committed these companies to boost “transparency around political and issue-based advertising.” However, these measures to fight disinformation, and the EU’s related Rapid Alert System, have so far not been able to stem the tide of these threats.

Instead, there are growing calls to pass a host of reforms to ensure that the platforms publicize accurate information, protect sources of accurate information through enhanced cybersecurity requirements and monitor disinformation more effectively. Tech firms in particular could be doing more to make it easier to report disinformation, contact users who have interacted with such content with a warning and take down false information about voting, as Facebook and Twitter have begun to do.

Such steps are just a beginning. Everyone has a role in making democracy harder to hack, but the tech platforms that have done so much to contribute to this problem have an outsized duty to address it.

Scott Shackelford, Associate Professor of Business Law and Ethics; Executive Director, Ostrom Workshop; Cybersecurity Program Chair, IU-Bloomington, Indiana University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Monday, December 21, 2020

What's cellular about a cellphone?

Daniel Bliss, Arizona State University

Editor’s note: Daniel Bliss is a professor of electrical engineering at Arizona State University and the director of the Center for Wireless Information Systems and Computational Architecture. In this interview, he explains the ideas behind the original cellular networks and how they evolved over the years into today’s 5G (fifth generation) and even 6G (sixth generation) networks.

Daniel Bliss provides a brief history of cellular networks.

How did wireless phones work before cellular technology?

The idea of wireless communications is quite old. Famously, the Marconi system could talk all the way across the Atlantic Ocean. It would have one system, which was the size of a building, talking to another system, which was the size of a building. But in essence, it just made a radio link between the two. Eventually people realized that’s a really useful capability. So they put up a radio system, say at a high point in the city, and then everybody – well, those few who had the right kind of radio system – talked to that high point. So if you like, there was only one cell – it wasn’t cellular in any sense. But because the amount of data you can send over time is a function of how far away you are, you want to get these things closer together. And so that’s the invention of the cellular system.

The CenturyLink building in Minneapolis has a microwave antenna on top that was used in early wireless phone networks. Mulad via Wikimedia Commons, CC BY-SA

How are cellular systems different?

The farther your phone and the base station are from each other, the harder it is to send a signal across. If you just have one base station and you’re too far away from it, it just doesn’t work. So you want to have many base stations and talk to the one that’s closest to you.

If you draw a boundary between those base stations and look down on it on a map, you see these different little cells, each with a tower your phone is supposed to talk to. That’s where the technology gets its name. The amazing thing that happened during the development of cellular systems is that the network automatically switches which base station the phone talks to as its location changes, such as while driving. It’s really remarkable that this system works as well as it does, because it’s pretty complicated and you don’t even notice.
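A minimal sketch of that selection-and-handoff idea, with made-up tower locations and distance standing in for measured signal strength (real networks use signal-quality measurements and handover thresholds):

import math

# Hypothetical base station locations along a road, in kilometers.
base_stations = {"tower_a": (0.0, 0.0), "tower_b": (3.0, 0.0), "tower_c": (6.0, 0.0)}

def serving_cell(phone_xy):
    """Attach to the closest base station (a stand-in for the strongest signal)."""
    return min(base_stations, key=lambda name: math.dist(phone_xy, base_stations[name]))

# A phone driving along the road is handed over from tower to tower automatically.
for x in range(7):
    print(f"{x} km -> {serving_cell((float(x), 0.5))}")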

Cellular technology gets its name from diagrams of the networks, which are divided into cells. This diagram shows cellular phone towers at the corners of each hexagonal cell. Greensburger via Wikimedia Commons

What are the major improvements to cellular networks that have enabled faster data rates?

If you go back to the first-generation cellular systems, those were primarily analog systems. It was just a way of converting your voice to an analog signal.

The second-generation systems focused on taking your voice, digitizing it and then sending it as a data link to improve stability and security. Almost by accident, it could also send data across. People found that it’s really useful to send a photo or send some other information as well. So they started using the same link to send data, but then complained that it wasn’t fast enough.

Subsequent generations of cellular networks allocated increasingly wider bandwidths using different techniques and were powered by a denser network of base stations. We tend to notice the big tall towers. But if you start looking around, particularly in a city, you’ll notice these boxes sitting on the sides of buildings all over the place. They are actually cellular base stations that are much lower down. They’re intended to reach people within just a kilometer or a half-kilometer.

The easiest way to achieve much higher data rates is for your phone to be close to a signal source. The other way is to have antenna systems that are pointing radio waves at your phone, which is one of the things that’s happening in 5G.

5G networks are still being rolled out around the country, but work on 6G technologies is already underway. What can we expect from that?

We don’t really know which of the technologies being developed right now will be used in 6G networks, but I can talk about what I think is going to happen.

6G networks will allow a much broader set of user types. What do I mean by that? Cellular systems, from the very start, were designed for humans to communicate. So they had certain constraints on what you needed. But humans are now a minority of users, because we have so many machines talking to each other too, such as smart appliances. These machines have varying needs. Some want to send lots of data, and some need to send almost no data and maybe send nothing for months at a time. So 6G technologies need to work well for humans as well as a broad range of devices.

Another piece of this is that we often think about communication systems as being the only users of the radio frequency spectrum, but it’s very much not true. Radars use spectrum too, and pretty soon you won’t be able to buy a car that doesn’t have a suite of radars on it for safety or autonomous driving. There’s also position navigation and timing, which are necessary for, say, cars to know the distance between each other. So with 6G, you’ll have these multi-function systems.


And then there is a push to go to yet higher frequencies. These frequencies work for only very, very short links. But a lot of our problems are over very short links. You can potentially send really huge amounts of data over short distances. If we can get the prices down, then it can potentially replace your Wi-Fi.

We can also expect a refinement of the technologies currently used in 5G – such as improving the pointing of the antenna to your phone, as I mentioned earlier.

Daniel Bliss, Professor of Electrical Engineering, Arizona State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Friday, September 18, 2020

Nuclear threats are increasing – here's how the US should prepare for a nuclear event

A visitor to the Hiroshima Peace Memorial Museum views a photo of the aftermath of the 1945 bombing. Carl Court/Getty Images
Cham Dallas, University of Georgia

On the 75th anniversary of the bombings of Hiroshima and Nagasaki, some may like to think the threat from nuclear weapons has receded. But there are clear signs of a growing nuclear arms race and that the U.S. is not very well-prepared for nuclear and radiological events.

I’ve been studying the effects of nuclear events – from detonations to accidents – for over 30 years. This has included direct involvement in research, teaching and humanitarian efforts during multiple expeditions to Chernobyl- and Fukushima-contaminated areas. I am now involved in the formation of a Nuclear Global Health Workforce, which I proposed in 2017.

Such a group could bring together nuclear and nonnuclear technical and health professionals for education and training, and help to meet the preparedness, coordination, collaboration and staffing requirements necessary to respond to a large-scale nuclear crisis.

What would this workforce need to be prepared to manage? For that we can look back at the legacy of the atomic bombings of Hiroshima and Nagasaki, as well as nuclear accidents like Chernobyl and Fukushima.

The Hiroshima Prefecture Industrial Promotion Hall after the blast. Maarten Heerlien/Flickr, CC BY-SA

What happens when a nuclear device is detonated over a city?

Approximately 135,000 and 64,000 people died, respectively, in Hiroshima and Nagasaki. The great majority of deaths happened in the first days after the bombings, mainly from thermal burns, severe physical injuries and radiation.

The great majority of doctors and nurses in Hiroshima were killed or injured, and therefore unable to assist in the response. This was largely due to the concentration of medical personnel and facilities in inner urban areas. The same concentration exists today in the majority of American cities, a chilling reminder of how difficult a medical response to a nuclear event would be.

What if a nuclear device were detonated in an urban area today? I explored this issue in a 2007 study modeling a nuclear weapon attack on four American cities. As in Hiroshima and Nagasaki, the majority of deaths would happen soon after the detonation, and the local health care response capability would be largely eradicated.

Models show that such an event, particularly in an urban area, will not only destroy existing public health protections but will most likely make it extremely difficult to respond to the disaster and to recover and rebuild those protections.

Very few medical personnel today have the skills or knowledge to treat the kind and the quantity of injuries a nuclear blast can cause. Health care workers would have little to no familiarity with the treatment of radiation victims. Thermal burns would require enormous resources to treat even a single patient, and a large number of patients with these injuries would overwhelm any existing medical system. There would also be a massive number of laceration injuries from the breakage of virtually all glass in a wide area.

Officials in protective gear check for signs of radiation on children who are from the evacuation area near the Fukushima Daini nuclear plant in Koriyama in this March 13, 2011 photo. Reuters/Kim Kyung-Hoon/Files

Getting people out of the blast and radiation contamination zones

A major nuclear event would create widespread panic, as large populations would fear the spread of radioactive materials, so evacuation or sheltering in place must be considered.

For instance, within a few weeks after the Chernobyl accident, more than 116,000 people were evacuated from the most contaminated areas of Ukraine and Belarus. Another 220,000 people were relocated in subsequent years.

The day after the Fukushima earthquake and tsunami, over 200,000 people were evacuated from areas within 20 kilometers (12 miles) of the nuclear plant because of the fear of the potential for radiation exposure.

The evacuation process in Russia, Ukraine, Belarus and Japan was plagued by misinformation, inadequate and confusing orders, and delays in releasing information. There was also trouble evacuating everyone from the affected areas. Elderly and infirm residents were left in areas near radioactive contamination, while many others were moved unnecessarily from uncontaminated areas (resulting in many deaths from winter conditions). All of these troubles led to a loss of public trust in the government.

However, an encouraging and not generally known fact about nuclear fallout is that the area receiving dangerous levels of radioactive fallout is only a fraction of the total area in a circle around the detonation zone. For instance, in planning for a hypothetical low-yield (10-kiloton) nuclear bomb over Washington, D.C., only limited evacuations are planned. Despite projections of 100,000 fatalities and about 150,000 casualties, the casualty-producing radiation plume would be expected to be confined to a relatively small area. (Using a clock-face analogy, the danger area would typically take up only a two-hour slot on the circle around the detonation, dictated by wind: for example, 2-4 o'clock.)
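Put roughly in numbers (an illustrative reading of the analogy, not an official hazard model), a two-hour slot on a twelve-hour clock face covers only about a sixth of the circle around the detonation point:

# Two hours out of twelve on the clock face: the fallout sector's share of the circle.
sector_fraction = 2 / 12
print(f"about {sector_fraction:.0%} of the surrounding area")   # about 17%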

People upwind would not need to take any action, and most of those downwind, in areas receiving radiation levels too low to cause radiation-related health problems, would need to seek only “moderate shelter.” That means basically staying indoors for a day or so, or until emergency authorities give further instructions.

The long-term effects of radiation exposure

The Radiation Effects Research Foundation, which was established to study the effects of radiation on survivors of Hiroshima and Nagasaki, has been tracking the health effects of radiation for decades.

According to the Radiation Effects Research Foundation, about 1,900 excess cancer deaths can be attributed to the atomic bombs, including about 200 cases of leukemia and 1,700 solid cancers. Japan has conducted very detailed cancer screening programs after Hiroshima, Nagasaki and Fukushima.

But the data on many potential health effects from radiation exposure, such as birth defects, are actually quite different from the prevailing public perception, which has been derived not from validated science education but from entertainment outlets (I teach a university course on the impact of media and popular culture on disaster knowledge).

While it has been shown that intense medical X-ray exposure has accidentally produced birth defects in humans, there is doubt about whether birth defects occurred in the descendants of Hiroshima and Nagasaki atomic bomb survivors. The most respected long-term investigations have concluded that there are no statistically significant increases in birth defects among the children of atomic bomb survivors.

Looking at data from Chernobyl, where the release of airborne radiation was 100 times as much as Hiroshima and Nagasaki combined, there is a lack of definitive data for radiation-induced birth defects.

A wide-ranging WHO study concluded that there were no differences in rates of mental retardation and emotional problems in Chernobyl radiation-exposed children compared to children in control groups. A Harvard review on Chernobyl concluded that there was no substantive proof regarding radiation-induced effects on embryos or fetuses from the accident. Another study looked at the congenital abnormality registers for 16 European regions that received fallout from Chernobyl and concluded that the widespread fear in the population about the possible effects of radiation exposure on the unborn fetus was not justified.

Indeed, the most definitive Chernobyl health impact in terms of numbers was the dramatic increase in elective abortions, both near to and at significant distances from the accident site.

In addition to rapid response and evacuation plans, a Nuclear Global Health Workforce could help health care practitioners, policymakers, administrators and others understand myths and realities of radiation. In the critical time just after a nuclear crisis, this would help officials make evidence-based policy decisions and help people understand the actual risks they face.

What’s the risk of another Hiroshima or Nagasaki?

Today, the risk of a nuclear exchange – and its devastating impact on medicine and public health worldwide – has only escalated compared to previous decades. Nine countries are known to have nuclear weapons, and international relations are increasingly volatile. The U.S. and Russia are heavily investing in the modernization of their nuclear stockpiles, and China, India and Pakistan are rapidly expanding the size and sophistication of their nuclear weapon capabilities. The developing technological sophistication among terrorist groups and the growing global availability and distribution of radioactive materials are also especially worrying.


In recent years, a number of government and private organizations have held meetings (all of which I attended) to devise large-scale medical responses to a nuclear weapon detonation in the U.S. and worldwide. They include the National Academy of Sciences, the National Alliance for Radiation Readiness, National Disaster Life Support Foundation, Society for Disaster Medicine and Public Health, and the Radiation Injury Treatment Network, which includes 74 hospitals nationwide actively preparing to receive radiation-exposed patients.

Despite the gloomy health outcomes that many people assume would follow any large-scale nuclear event, there are a number of concrete steps the U.S. and other countries can take to prepare. It’s our obligation to respond.

This article is an updated version of one originally published in 2015, with links to more recent research and updated information on the threat of nuclear incidents.

Cham Dallas, University Professor Department of Health Policy & Management, University of Georgia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Monday, September 14, 2020

The US has lots to lose and little to gain by banning TikTok and WeChat

Banning TikTok and WeChat would cut off many Americans from popular social media. AP Photo/Mark Schiefelbein
Jeremy Straub, North Dakota State University

The Trump administration’s recently announced bans on Chinese-owned social media platforms TikTok and WeChat could have unintended consequences. The orders bar the apps from doing business in the U.S. or with U.S. persons or businesses after Sept. 20 and require divestiture of TikTok by Nov. 12.

The executive orders are based on national security grounds, though the threats cited are to citizens rather than the government. Foreign policy analysts see the move as part of the administration’s ongoing wrestling match with the Chinese government for leverage in the global economy.

Whatever the motivation, as someone who researches both cybersecurity and technology policy, I am not convinced that the benefits outweigh the costs. The bans threaten Americans’ freedom of speech, and may harm foreign investment in the U.S. and American companies’ ability to sell software abroad, while delivering minimal privacy and cybersecurity benefits.

National security threat?

The threats posed by TikTok and WeChat, according to the executive orders, include the potential for the platforms to be used for disinformation campaigns by the Chinese government and to give the Chinese government access to Americans’ personal and proprietary information.

TikTok is an immensely popular social media platform that allows people to share short video clips. Aaron Yoo/Flickr, CC BY-NC-SA

The U.S. is not the only country concerned about Chinese apps. The Australian military accused WeChat, a messaging, social media and mobile payment app, of acting as spyware, saying the app was caught sending data to Chinese Intelligence servers.

Disinformation campaigns may be of particular concern, due to the upcoming election and the impact of the alleged “sweeping and systematic” Russian interference in the 2016 elections. The potential for espionage is less pronounced, given that the apps access basic contact information and details about the videos Americans watch and the topics they search on, and not more sensitive data.

But banning the apps and requiring Chinese divestiture also has a national security downside. It damages the U.S.‘s moral authority to push for free speech and democracy abroad. Critics have frequently contended that America’s moral authority has been severely damaged during the Trump administration and this action could arguably add to the decline.

Protecting personal information

The administration’s principal argument against TikTok is that it collects Americans’ personal data and could provide it to the Chinese government. The executive order states that this could allow China to track the locations of federal employees and contractors, build dossiers of personal information for blackmail and conduct corporate espionage.

Skeptics have argued that the government hasn’t presented clear evidence of privacy issues and that the service’s practices are standard in the industry. TikTok’s terms of service do say that it can share information with its China-based corporate parent, ByteDance.

WeChat is a messaging, social media and mobile payment app that is nearly ubiquitous in China. Albert Hsieh/Flickr, CC BY-NC

The order against WeChat is similar. It also mentions that the app captures the personal and proprietary information of Chinese nationals visiting the United States. However, some of these visiting Chinese nationals have expressed concern that banning WeChat may limit their ability to communicate with friends and family in China.

While TikTok and WeChat do raise cybersecurity concerns, they are not significantly different from those raised by other smart phone apps. In my view, these concerns could be better addressed by enacting national privacy legislation, similar to Europe’s GDPR and California’s CCPA, to dictate how data is collected and used and where it is stored. Another remedy is to have Google, Apple and others review the apps for cybersecurity concerns before allowing new versions to be made available in their app stores.

Freedom of speech

Perhaps the greatest concern raised by the bans is their impact on people’s ability to communicate, and whether they violate the First Amendment. Both TikTok and WeChat are communications channels, and TikTok also publishes and hosts content.

While the courts have allowed some regulation of speech, to withstand a legal challenge the restrictions must advance a legitimate government interest and be “narrowly tailored” to do so. National security is a legitimate governmental interest. However, in my opinion it’s questionable whether a real national security concern exists with these specific apps.

In the case of TikTok, banning an app that is being used for political commentary and activism would raise pronounced constitutional claims and likely be overturned by the courts.

Whether the bans hold up in court, the executive orders instituting them put the U.S. in uncomfortable territory: the list of countries that have banned social media platforms. These include Egypt, Hong Kong, Turkey, Turkmenistan, North Korea, Iran, Belarus, Russia and China.

Though the U.S. bans may not be aimed at curtailing dissent, they echo actions that harm free speech and democracy globally. Social media gives freedom fighters, protesters and dissidents all over the world a voice. It enables citizens to voice concerns and organize protests about monarchies, sexual and other human rights abuses, discriminatory laws and civil rights violations. When authoritarian governments clamp down on dissent, they frequently target social media.


Risk of retaliation

The bans could also harm the U.S. economy because other countries could ban U.S. companies in retaliation. China and the U.S. have already gone through a cycle of reciprocal company banning, in addition to reciprocal consulate closures.

The U.S. has placed Chinese telecom firm Huawei on the Bureau of Industry Security Entity List, preventing U.S. firms from conducting business with it. While this has prevented Huawei from selling wireless hardware in the U.S., it has also prevented U.S. software sales to the telecom giant and caused it to use its own chips instead of buying them from U.S. firms.

Over a dozen U.S. companies urged the White House not to ban WeChat because it would hurt their business in China.

Other countries might use the U.S. bans of Chinese firms as justification for banning U.S. companies, even though the U.S. has not taken action against them or their companies directly. These trade restrictions harm the U.S.‘s moral authority, harm the global economy and stifle innovation. They also cut U.S. firms off from the high-growth Chinese market.

TikTok is in negotiations with Microsoft and Walmart and an Oracle-led consortium about a possible acquisition that would leave the company with American ownership and negate the ban.

Oversight, not banishment

Though the TikTok and WeChat apps do raise some concerns, it is not apparent that cause exists to ban them. The issues could be solved through better oversight and the enactment of privacy laws that could otherwise benefit Americans.

Of course, the government could have other causes for concern that it hasn’t yet made public. Given the consequences of banning an avenue of expression, if other concerns exist the government should share them with the American public. If not, I’d argue less drastic action would be more appropriate and better serve the American people.

Jeremy Straub, Assistant Professor of Computer Science, North Dakota State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.