Saturday, January 7, 2023

Beyond Section 230: A pair of social media experts describes how to bring transparency and accountability to the industry

Social media regulation – and the future of Section 230 – are top of mind for many in Congress. Pavlo Conchar/SOPA Images/LightRocket via Getty Images
Robert Kozinets, USC Annenberg School for Communication and Journalism and Jon Pfeiffer, Pepperdine University

One of Elon Musk’s stated reasons for purchasing Twitter was to use the social media platform to defend the right to free speech. The ability to defend that right, or to abuse it, lies in a specific piece of legislation passed in 1996, at the pre-dawn of the modern age of social media.

The legislation, Section 230 of the Communications Decency Act, gives social media platforms some truly astounding protections under American law. Section 230 has also been called the most important 26 words in tech: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

But the more that platforms like Twitter test the limits of their protection, the more American politicians on both sides of the aisle have been motivated to modify or repeal Section 230. As a social media professor and a social media lawyer with a long history in this field, we think change to Section 230 is coming – and we believe that it is long overdue.

Born of porn

Section 230 had its origins in the attempt to regulate online porn. One way to think of it is as a kind of “restaurant graffiti” law. If someone scrawls offensive graffiti in a restaurant’s bathroom stall, or uses it to expose another person’s private information and secrets, the restaurant owner can’t be held responsible. There are no consequences for the owner. Roughly speaking, Section 230 extends the same lack of responsibility to the Yelps and YouTubes of the world.


But in a world where social media platforms stand to monetize and profit from the graffiti on their digital walls – which contains not just porn but also misinformation and hate speech – the absolutist stance that they have total protection and total legal “immunity” is untenable.

A lot of good has come from Section 230. But the history of social media also makes it clear that the law is far from perfect at balancing corporate profit with civic responsibility.

We were curious about how current thinking in legal circles and digital research could give a clearer picture about how Section 230 might realistically be modified or replaced, and what the consequences might be. We envision three possible scenarios to amend Section 230, which we call verification triggers, transparent liability caps and Twitter court.

Verification triggers

We support free speech, and we believe that everyone should have a right to share information. When people who oppose vaccines share their concerns about the rapid development of RNA-based COVID-19 vaccines, for example, they open up a space for meaningful conversation and dialogue. They have a right to share such concerns, and others have a right to counter them.

What we call a “verification trigger” should kick in when the platform begins to monetize content related to misinformation. Most platforms try to detect misinformation, and many label, moderate or remove some of it. But many monetize it as well through algorithms that promote popular – and often extreme or controversial – content. When a company monetizes content with misinformation, false claims, extremism or hate speech, it is not like the innocent owner of the bathroom wall. It is more like an artist who photographs the graffiti and then sells it at an art show.

Twitter began selling verification check marks for user accounts in November 2022. By verifying a user account is a real person or company and charging for it, Twitter is both vouching for it and monetizing that connection. Reaching a certain dollar value from questionable content should trigger the ability to sue Twitter, or any platform, in court. Once a platform begins earning money from users and content, including verification, it steps outside the bounds of Section 230 and into the bright light of responsibility – and into the world of tort, defamation and privacy rights laws.
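To make the mechanism concrete, here is a minimal sketch of how such a trigger might be checked. The dollar threshold and the idea of tallying revenue attributed to flagged content are our own illustrative assumptions; no specific figure is proposed above.

```python
# Hypothetical "verification trigger" check. The threshold and the notion of
# tallying revenue attributable to flagged content are illustrative assumptions;
# the proposal above names no dollar figure.

TRIGGER_THRESHOLD_USD = 10_000.0  # placeholder value


def immunity_retained(revenue_from_flagged_content_usd: float) -> bool:
    """Return True while the platform's earnings from content flagged as
    misinformation or hate speech stay below the trigger threshold."""
    return revenue_from_flagged_content_usd < TRIGGER_THRESHOLD_USD


# Example: a platform that has earned $25,000 from flagged content.
print(immunity_retained(25_000.0))  # False -> exposure to tort and defamation suits begins
```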

Transparent caps

Social media platforms currently make their own rules about hate speech and misinformation. They also keep secret a lot of information about how much money they make off of content, such as a given tweet. This leaves both what is disallowed and what is valued opaque.

One sensible change to Section 230 would be to expand its 26 words to clearly spell out what is expected of social media platforms. The added language would specify what constitutes misinformation, how social media platforms need to act, and the limits on how they can profit from it. We acknowledge that this definition isn’t easy, that it’s dynamic, and that researchers and companies are already struggling with it.

But government can raise the bar by setting some coherent standards. If a company can show that it’s met those standards, the amount of liability it has could be limited. It wouldn’t have complete protection as it does now. But it would have a lot more transparency and public responsibility. We call this a “transparent liability cap.”

Twitter court

Our final proposed amendment to Section 230 already exists in a rudimentary form. Like Facebook and other social platforms, Twitter has content moderation panels that determine standards for users on the platform, and thus standards for the public that shares and is exposed to content through the platform. You can think of this as “Twitter court.”

Effective content moderation involves the difficult balance of restricting harmful content while preserving free speech.

Though Twitter’s content moderation appears to be suffering from changes and staff reductions at the company, we believe that panels are a good idea. But keeping panels hidden behind the closed doors of profit-making companies is not. If companies like Twitter want to be more transparent, we believe that should also extend to their own inner operations and deliberations.

We envision extending the jurisdiction of “Twitter court” to neutral arbitrators who would adjudicate claims involving individuals, public officials, private companies and the platform. Rather than people taking cases of defamation or privacy violation to actual court, Twitter court would suffice under many conditions. Again, this is a way to pull back some of Section 230’s absolutist protections without removing them entirely.

How would it work – and would it work?

Since 2018, platforms have had limited Section 230 protection in cases of sex trafficking. A recent academic proposal suggests extending these limitations to incitement to violence, hate speech and disinformation. House Republicans have also suggested a number of Section 230 carve-outs, including those for content relating to terrorism, child exploitation or cyberbullying.

Our three ideas of verification triggers, transparent liability caps and Twitter court offer a practical place to start reform. They could be implemented individually, but they would have even greater authority if they were implemented together. The clarity of verification triggers and transparent liability caps would help set meaningful standards balancing public benefit with corporate responsibility in a way that self-regulation has not been able to achieve. Twitter court would provide a real option for people to arbitrate rather than simply watch misinformation and hate speech bloom while platforms profit from it.

Adding a few meaningful options and amendments to Section 230 will be difficult because defining hate speech and misinformation in context, and setting limits and measures for the monetization of content, will not be easy. But we believe these definitions and measures are achievable and worthwhile. Once enacted, these strategies promise to make online discourse stronger and platforms fairer.

Robert Kozinets, Professor of Journalism, USC Annenberg School for Communication and Journalism and Jon Pfeiffer, Adjunct Professor of Law, Pepperdine University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, November 6, 2022

Mass migration from Twitter is likely to be an uphill battle – just ask ex-Tumblr users

The turmoil inside Twitter headquarters is sparking discussion of a mass exodus of users. What will happen if there is a rush to the exits? AP Photo/Jeff Chiu
Casey Fiesler, University of Colorado Boulder

Elon Musk announced that “the bird is freed” when his US$44 billion acquisition of Twitter officially closed on Oct. 27, 2022. Some users on the microblogging platform saw this as a reason to fly away.

Over the course of the next 48 hours, I saw countless announcements on my Twitter feed from people either leaving the platform or making preparations to leave. The hashtags #GoodbyeTwitter, #TwitterMigration and #Mastodon were trending. The decentralized, open source social network Mastodon gained over 100,000 users in just a few days, according to a user counting bot.

As an information scientist who studies online communities, this felt like the beginning of something I’ve seen before. Social media platforms tend not to last forever. Depending on your age and online habits, there’s probably some platform that you miss, even if it still exists in some form. Think of MySpace, LiveJournal, Google+ and Vine.

When social media platforms fall, sometimes the online communities that made their homes there fade away, and sometimes they pack their bags and relocate to a new home. The turmoil at Twitter is causing many of the company’s users to consider leaving the platform. Research on previous social media platform migrations shows what might lie ahead for Twitter users who fly the coop.


Several years ago, I led a research project with Brianna Dym, now at University of Maine, where we mapped the platform migrations of nearly 2,000 people over a period of almost two decades. The community we examined was transformative fandom, fans of literary and popular culture series and franchises who create art using those characters and settings.

We chose it because it is a large community that has thrived in a number of different online spaces. Some of the same people writing Buffy the Vampire Slayer fan fiction on Usenet in the 1990s were writing Harry Potter fan fiction on LiveJournal in the 2000s and Star Wars fan fiction on Tumblr in the 2010s.

By asking participants about their experiences moving across these platforms – why they left, why they joined and the challenges they faced in doing so – we gained insights into factors that might drive the success and failure of platforms, as well as what negative consequences are likely to occur for a community when it relocates.

‘You go first’

Regardless of how many people ultimately decide to leave Twitter, and even how many people do so around the same time, creating a community on another platform is an uphill battle. These migrations are in large part driven by network effects, meaning that the value of a new platform depends on who else is there.

In the critical early stages of migration, people have to coordinate with each other to encourage contribution on the new platform, which is really hard to do. It essentially becomes, as one of our participants described it, a “game of chicken” where no one wants to leave until their friends leave, and no one wants to be first for fear of being left alone in a new place.

For this reason, the “death” of a platform – whether from a controversy, disliked change or competition – tends to be a slow, gradual process. One participant described Usenet’s decline as “like watching a shopping mall slowly go out of business.”

It’ll never be the same

The current push from some corners to leave Twitter reminded me a bit of Tumblr’s adult content ban in 2018, which reminded me of LiveJournal’s policy changes and new ownership in 2007. People who left LiveJournal in favor of other platforms like Tumblr described feeling unwelcome there. And though Musk did not walk into Twitter headquarters at the end of October and flip a virtual content moderation switch to the “off” position, there was an uptick in hate speech on the platform as some users felt emboldened to violate its content policies on the assumption that major policy changes were on the way.

So what might actually happen if a lot of Twitter users do decide to leave? What makes Twitter Twitter isn’t the technology, it’s the particular configuration of interactions that takes place there. And there is essentially zero chance that Twitter, as it exists now, could be reconstituted on another platform. Any migration is likely to face many of the challenges previous platform migrations have faced: content loss, fragmented communities, broken social networks and shifted community norms.

But Twitter isn’t one community, it’s a collection of many communities, each with its own norms and motivations. Some communities might be able to migrate more successfully than others. So maybe K-Pop Twitter could coordinate a move to Tumblr. I’ve seen much of Academic Twitter coordinating a move to Mastodon. Other communities might already simultaneously exist on Discord servers and subreddits, and can just let participation on Twitter fade away as fewer people pay attention to it. But as our study implies, migrations always have a cost, and even for smaller communities, some people will get lost along the way.

The ties that bind

Our research also pointed to design recommendations for supporting migration and how one platform might take advantage of attrition from another platform. Cross-posting features can be important because many people hedge their bets. They might be unwilling to completely cut ties all at once, but they might dip their toes into a new platform by sharing the same content on both.

Ways to import networks from another platform also help to maintain communities. For example, there are multiple ways to find people you follow on Twitter on Mastodon. Even simple welcome messages, guides for newcomers and easy ways to find other migrants could make a difference in helping resettlement attempts stick.

And through all of this, it’s important to remember that this is such a hard problem by design. Platforms have no incentive to help users leave. As long-time technology journalist Cory Doctorow recently wrote, this is “a hostage situation.” Social media platforms lure people in with their friends, and then the threat of losing those social networks keeps people on the platforms.

But even if there is a price to pay for leaving a platform, communities can be incredibly resilient. Like the LiveJournal users in our study who found each other again on Tumblr, your fate is not tied to Twitter’s.

Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, June 14, 2022

EU law would require Big Tech to do more to combat child sexual abuse, but a key question remains: How?

European Commissioner for Home Affairs Ylva Johansson announced a set of proposed regulations requiring tech companies to report child sexual abuse material. AP Photo/Francisco Seco
Laura Draper, American University

The European Commission recently proposed regulations to protect children by requiring tech companies to scan the content in their systems for child sexual abuse material. This is an extraordinarily wide-reaching and ambitious effort that would have broad implications beyond the European Union’s borders, including in the U.S.

Unfortunately, the proposed regulations are, for the most part, technologically unfeasible. To the extent that they could work, they require breaking end-to-end encryption, which would make it possible for the technology companies – and potentially the government and hackers – to see private communications.

The regulations, proposed on May 11, 2022, would impose several obligations on tech companies that host content and provide communication services, including social media platforms, texting services and direct messaging apps, to detect certain categories of images and text.

Under the proposal, these companies would be required to detect previously identified child sexual abuse material, new child sexual abuse material, and solicitations of children for sexual purposes. Companies would be required to report detected content to the EU Centre, a centralized coordinating entity that the proposed regulations would establish.

Each of these categories presents its own challenges, which combine to make the proposed regulations impossible to implement as a package. The trade-off between protecting children and protecting user privacy underscores how combating online child sexual abuse is a “wicked problem.” This puts technology companies in a difficult position: required to comply with regulations that serve a laudable goal but without the means to do so.

Digital fingerprints

Researchers have known how to detect previously identified child sexual abuse material for over a decade. This method, first developed by Microsoft, assigns a “hash value” – a sort of digital fingerprint – to an image, which can then be compared against a database of previously identified and hashed child sexual abuse material. In the U.S., the National Center for Missing and Exploited Children manages several databases of hash values, and some tech companies maintain their own hash sets.

The hash values for images uploaded or shared using a company’s services are compared with these databases to detect previously identified child sexual abuse material. This method has proved extremely accurate, reliable and fast, which is critical to making any technical solution scalable.
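As a rough illustration of that matching step, the sketch below checks an uploaded image’s fingerprint against a set of known hashes. It is only a schematic: real systems such as Microsoft’s PhotoDNA use perceptual hashes that survive resizing and re-compression, whereas this example uses an ordinary cryptographic hash, and the database contents are placeholders.

```python
import hashlib

# Placeholder database of fingerprints of previously identified material.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest serving as the image's 'digital fingerprint'."""
    return hashlib.sha256(image_bytes).hexdigest()


def matches_known_material(image_bytes: bytes) -> bool:
    """Compare an uploaded image's fingerprint against the known-hash database."""
    return fingerprint(image_bytes) in KNOWN_HASHES


# Example: scan an upload before it is stored or shared.
upload = b"...raw image bytes..."
if matches_known_material(upload):
    print("Match found: flag for reporting.")
```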

The problem is that many privacy advocates consider it incompatible with end-to-end encryption, which, strictly construed, means that only the sender and the intended recipient can view the content. Because the proposed EU regulations mandate that tech companies report any detected child sexual abuse material to the EU Centre, this would violate end-to-end encryption, thus forcing a trade-off between effective detection of the harmful material and user privacy.


Recognizing new harmful material

In the case of new content – that is, images and videos not included in hash databases – there is no such tried-and-true technical solution. Top engineers have been working on this issue, building and training AI tools that can accommodate large volumes of data. Google and child safety nongovernmental organization Thorn have both had some success using machine-learning classifiers to help companies identify potential new child sexual abuse material.

However, without independently verified data on the tools’ accuracy, it’s not possible to assess their utility. Even if the accuracy and speed are comparable with hash-matching technology, the mandatory reporting will again break end-to-end encryption.

New content also includes livestreams, but the proposed regulations seem to overlook the unique challenges this technology poses. Livestreaming technology became ubiquitous during the pandemic, and the production of child sexual abuse material from livestreamed content has dramatically increased.

More and more children are being enticed or coerced into livestreaming sexually explicit acts, which the viewer may record or screen-capture. Child safety organizations have noted that the production of “perceived first-person child sexual abuse material” – that is, child sexual abuse material of apparent selfies – has risen at exponential rates over the past few years. In addition, traffickers may livestream the sexual abuse of children for offenders who pay to watch.

The circumstances that lead to recorded and livestreamed child sexual abuse material are very different, but the technology is the same. And there is currently no technical solution that can detect the production of child sexual abuse material as it occurs. Tech safety company SafeToNet is developing a real-time detection tool, but it is not ready to launch.

Detecting solicitations

Detection of the third category, “solicitation language,” is also fraught. The tech industry has made dedicated efforts to pinpoint indicators necessary to identify solicitation and enticement language, but with mixed results. Microsoft spearheaded Project Artemis, which led to the development of the Anti-Grooming Tool. The tool is designed to detect enticement and solicitation of a child for sexual purposes.

As the proposed regulations point out, however, the accuracy of this tool is 88%. In 2020, popular messaging app WhatsApp delivered approximately 100 billion messages daily. If the tool identifies even 0.01% of the messages as “positive” for solicitation language, human reviewers would be tasked with reading 10 million messages every day to identify the 12% that are false positives, making the tool simply impractical.
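The scale problem is easy to reproduce with back-of-the-envelope arithmetic; the short calculation below simply restates the figures above (100 billion messages a day, a 0.01% flag rate, 88% accuracy).

```python
messages_per_day = 100_000_000_000  # WhatsApp's approximate daily volume in 2020
flag_rate = 0.0001                  # 0.01% of messages flagged for solicitation
accuracy = 0.88                     # reported accuracy of the Anti-Grooming Tool

flagged = messages_per_day * flag_rate       # messages routed to human review
false_positives = flagged * (1 - accuracy)   # flags that turn out to be wrong

print(f"Flagged per day:         {flagged:,.0f}")          # 10,000,000
print(f"False positives per day: {false_positives:,.0f}")  # 1,200,000
```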

As with all the above-mentioned detection methods, this, too, would break end-to-end encryption. But whereas the others may be limited to reviewing a hash value of an image, this tool requires access to all exchanged text.

No path

It’s possible that the European Commission is taking such an ambitious approach in hopes of spurring technical innovation that would lead to more accurate and reliable detection methods. However, without existing tools that can accomplish these mandates, the regulations are ineffective.

When there is a mandate to take action but no path to take, I believe the disconnect will simply leave the industry without the clear guidance and direction these regulations are intended to provide.

Laura Draper, Senior Project Director at the Tech, Law & Security Program, American University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Friday, April 15, 2022

Elon Musk’s bid spotlights Twitter’s unique role in public discourse – and what changes might be in store

Twitter may not be a darling of Wall Street, but it occupies a unique place in the social media landscape. AP Photo/Richard Drew
Anjana Susarla, Michigan State University

Twitter has been in the news a lot lately, albeit for the wrong reasons. Its stock growth has languished and the platform itself has largely remained the same since its founding in 2006. On April 14, 2022, Elon Musk, the world’s richest person, made an offer to buy Twitter and take the public company private.

In a filing with the Securities and Exchange Commission, Musk stated, “I invested in Twitter as I believe in its potential to be the platform for free speech around the globe, and I believe free speech is a societal imperative for a functioning democracy.”

As a researcher of social media platforms, I find that Musk’s potential ownership of Twitter and his stated reasons for buying the company raise important issues. Those issues stem from the nature of the social media platform and what sets it apart from others.

What makes Twitter unique

Twitter occupies a unique niche. Its short chunks of text and threading foster real-time conversations among thousands of people, which makes it popular with celebrities, media personalities and politicians alike.

Social media analysts talk about the half-life of content on a platform, meaning the time it takes for a piece of content to reach 50% of its total lifetime engagement, usually measured in number of views or popularity-based metrics. The average half-life of a tweet is about 20 minutes, compared with five hours for Facebook posts, 20 hours for Instagram posts, 24 hours for LinkedIn posts and 20 days for YouTube videos. The much shorter half-life illustrates the central role Twitter has come to occupy in driving real-time conversations as events unfold.
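For a concrete reading of that definition, here is a minimal sketch that estimates a post’s half-life from a cumulative engagement curve; the timestamps and view counts are invented for illustration.

```python
def half_life_minutes(cumulative_engagement: list[tuple[int, int]]) -> int:
    """Return the first time (in minutes since posting) at which a post has
    reached at least 50% of its total lifetime engagement.

    `cumulative_engagement` is a chronological list of
    (minutes_since_posting, total_views_so_far) pairs.
    """
    total = cumulative_engagement[-1][1]
    for minutes, views in cumulative_engagement:
        if views >= total / 2:
            return minutes
    return cumulative_engagement[-1][0]


# Hypothetical engagement curve for a single tweet.
tweet = [(5, 300), (10, 650), (20, 800), (60, 1000), (24 * 60, 1200)]
print(half_life_minutes(tweet))  # 10 -> half of all engagement arrived within minutes
```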

Twitter’s ability to shape real-time discourse, as well as the ease with which data, including geo-tagged data, can be gathered from Twitter has made it a gold mine for researchers to analyze a variety of societal phenomena, ranging from public health to politics. Twitter data has been used to predict asthma-related emergency department visits, measure public epidemic awareness, and model wildfire smoke dispersion.

Tweets that are part of a conversation are shown in chronological order, and, even though much of a tweet’s engagement is frontloaded, the Twitter archive provides instant and complete access to every public tweet. This positions Twitter as a historical chronicler of record and a de facto fact checker.

Changes on Musk’s mind

A crucial issue is how Musk’s ownership of Twitter, and private control of social media platforms generally, affect the broader public well-being. In a series of deleted tweets, Musk made several suggestions about how to change Twitter, including adding an edit button for tweets and granting automatic verification marks to premium users.

There is no experimental evidence about how an edit button would change information transmission on Twitter. However, it’s possible to extrapolate from previous research that analyzed deleted tweets.

There are numerous ways to retrieve deleted tweets, which allows researchers to study them. Some studies show significant personality differences between users who delete their tweets and those who don’t, and these findings suggest that deleting tweets is a way for people to manage their online identities.

Analyzing deleting behavior can also yield valuable clues about online credibility and disinformation. Similarly, if Twitter adds an edit button, analyzing the patterns of editing behavior could provide insights into Twitter users’ motivations and how they present themselves.

Studies of bot-generated activity on Twitter have concluded that nearly half of accounts tweeting about COVID-19 are likely bots. Given partisanship and political polarization in online spaces, allowing users – whether they are automated bots or actual people – the option to edit their tweets could become another weapon in the disinformation arsenal used by bots and propagandists. Editing tweets could allow users to selectively distort what they said, or deny making inflammatory remarks, which could complicate efforts to trace misinformation.

Twitter’s content moderation and revenue model

To understand Musk’s motivations and what lies next for social media platforms such as Twitter, it’s important to consider the gargantuan – and opaque – online advertising ecosystem involving multiple technologies wielded by ad networks, social media companies and publishers. Advertising is the primary revenue source for Twitter.

Musk’s vision is to generate revenue for Twitter from subscriptions rather than advertising. Without having to worry about attracting and retaining advertisers, Twitter would have less pressure to focus on content moderation. This would make Twitter a sort of freewheeling opinion site for paying subscribers. Twitter has been aggressive in using content moderation in its attempts to address disinformation.

Musk’s description of a platform free from content moderation issues is troubling in light of the algorithmic harms caused by social media platforms. Research has shown a host of these harms, such as algorithms that assign gender to users, potential inaccuracies and biases in algorithms used to glean information from these platforms, and the impact on those looking for health information online.

Testimony by Facebook whistleblower Frances Haugen and recent regulatory efforts such as the online safety bill unveiled in the U.K. show there is broad public concern about the role played by technology platforms in shaping popular discourse and public opinion. Musk’s potential bid for Twitter highlights a whole host of regulatory concerns.

Because of Musk’s other businesses, Twitter’s ability to influence public opinion in the sensitive aviation and automobile industries would automatically create a conflict of interest, not to mention affecting the disclosure of material information necessary for shareholders. Musk has already been accused of delaying disclosure of his ownership stake in Twitter.

Twitter’s own algorithmic bias bounty challenge concluded that there needs to be a community-led approach to build better algorithms. A very creative exercise developed by the MIT Media Lab asks middle schoolers to re-imagine the YouTube platform with ethics in mind. Perhaps it’s time to ask Twitter to do the same, whoever owns and manages the company.


Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Monday, November 22, 2021

What is the metaverse? 2 media and information experts explain

Are these people interacting in some virtual world? Lucrezia Carnelos/Unsplash
Rabindra Ratan, Michigan State University and Yiming Lei, Michigan State University

The metaverse is a network of always-on virtual environments in which many people can interact with one another and digital objects while operating virtual representations – or avatars – of themselves. Think of a combination of immersive virtual reality, a massively multiplayer online role-playing game and the web.

The metaverse is a concept from science fiction that many people in the technology industry envision as the successor to today’s internet. It’s only a vision at this point, but technology companies like Facebook are aiming to make it the setting for many online activities, including work, play, studying and shopping. Facebook is so sold on the concept that it is renaming itself Meta to highlight its push to dominate the metaverse.

The best-selling science fiction novel ‘Snow Crash’ gave the world the word ‘metaverse.’ RA.AZ/Flickr, CC BY

Metaverse is a portmanteau of meta, meaning transcendent, and verse, from universe. Sci-fi novelist Neal Stephenson coined the term in his 1992 novel “Snow Crash” to describe the virtual world in which the protagonist, Hiro Protagonist, socializes, shops and vanquishes real-world enemies through his avatar. The concept predates “Snow Crash” and was popularized as “cyberspace” in William Gibson’s groundbreaking 1984 novel “Neuromancer.”

There are three key aspects of the metaverse: presence, interoperability and standardization.

Presence is the feeling of actually being in a virtual space, with virtual others. Decades of research have shown that this sense of embodiment improves the quality of online interactions. This sense of presence is achieved through virtual reality technologies such as head-mounted displays.

Interoperability means being able to seamlessly travel between virtual spaces with the same virtual assets, such as avatars and digital items. ReadyPlayerMe allows people to create an avatar that they can use in hundreds of different virtual worlds, including in Zoom meetings through apps like Animaze. Meanwhile, blockchain technologies such as cryptocurrencies and nonfungible tokens facilitate the transfer of digital goods across virtual borders.

Standardization is what enables interoperability of platforms and services across the metaverse. As with all mass-media technologies – from the printing press to texting – common technological standards are essential for widespread adoption. International organizations such as the Open Metaverse Interoperability Group define these standards.

Why the metaverse matters

If the metaverse does become the successor to the internet, who builds it, and how, is extremely important to the future of the economy and society as a whole. Facebook is aiming to play a leading role in shaping the metaverse, in part by investing heavily in virtual reality. Facebook CEO Mark Zuckerberg explained in an interview his view that the metaverse spans nonimmersive platforms like today’s social media as well as immersive 3D media technologies such as virtual reality, and that it will be for work as well as play.

Hollywood has embraced the metaverse in movies like ‘Ready Player One.’

The metaverse might one day resemble the flashy fictional Oasis of Ernest Cline’s “Ready Player One,” but until then you can turn to games like Fortnite and Roblox, virtual reality social media platforms like VRChat and AltspaceVR, and virtual work environments like Immersed for a taste of the immersive and connected metaverse experience. As these siloed spaces converge and become increasingly interoperable, watch for a truly singular metaverse to emerge.

This article has been updated to include Facebook’s announcement on Oct. 28, 2021, that it is renaming itself Meta.

Rabindra Ratan, Associate Professor of Media and Information, Michigan State University and Yiming Lei, Doctoral student in Media and Information, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.