Friday, July 25, 2025

‘Democratizing space’ is more than just adding new players – it comes with questions around sustainability and sovereignty

A group of people gaze up at the Moon in Germany. AP Photo/Markus Schreiber
Timiebi Aganaba, Arizona State University; Adam Fish, UNSW Sydney; Niiyokamigaabaw Deondre Smiles, University of British Columbia, and Tony Milligan, King's College London

“India is on the Moon,” S. Somanath, chairman of the Indian Space Research Organization, announced in August 2023. The announcement meant India had joined the short list of countries to have visited the Moon, and the applause and shouts of joy that followed signified that this achievement wasn’t just a scientific one, but a cultural one.

A group of cheering, smiling people hold signs depicting the Chandrayaan-3 lander.
India’s successful lunar landing prompted celebrations across the country, like this one in Mumbai. AP Photo/Rajanish Kakade

Over the past decade, many countries have established new space programs, including multiple African nations. India and Israel – nations that were not technical contributors to the space race in the 1960s and ‘70s – have attempted landings on the lunar surface.

With more countries joining the evolving space economy, many of our colleagues in space strategy, policy, ethics and law have celebrated the democratization of space: the hope that space is now more accessible for diverse participants.

We are a team of researchers based across four countries with expertise in space policy and law, ethics, geography and anthropology who have written about the difficulties and importance of inclusion in space.

Major players like the U.S., the European Union and China may once have dominated space and seen it as a place to try out new commercial and military ventures. Emerging new players in space, like other countries, commercial interests and nongovernmental organizations, may have other goals and rationales. Unexpected new initiatives from these newcomers could shift perceptions of space from something to dominate and possess to something more inclusive, equitable and democratic.

We address these emerging and historical tensions in a paper published in May 2025 in the journal Nature, in which we describe the difficulties and importance of including nontraditional actors and Indigenous peoples in the space industry.

Continuing inequalities among space players

Not all countries’ space agencies are equal. Newer agencies often don’t have the same resources behind them that large, established players do.

The U.S. and Chinese programs receive much more funding than those of any other country. Because they send up satellites and propose new ideas most frequently, they are in a position to establish conventions for satellite systems, landing sites and resource extraction that everyone else may have to follow.

Sometimes, countries have operated on the assumption that owning a satellite would give them the appearance of soft or hard geopolitical power as a space nation – and ultimately help them gain relevance.

A small boxlike satellite ejected into orbit around Earth from a larger spacecraft.
Small satellites, called CubeSats, are becoming relatively affordable and easy to develop, allowing more players, from countries and companies to universities and student groups, to have a satellite in space. NASA/Butch Wilmore, CC BY-NC

In reality, student groups of today can develop small satellites, called CubeSats, autonomously, and recent scholarship has concluded that even successful space missions may negatively affect the international relationships between some countries and their partners. The respect a country expects to receive may not materialize, and the costs to keep up can outstrip gains in potential prestige.

Environmental protection and Indigenous perspectives

Usually, building the infrastructure necessary to test and launch rockets requires a remote area with established roads. In many cases, companies and space agencies have placed these facilities on lands where Indigenous peoples have strong claims, which can lead to land disputes, like in western Australia.

Many of these sites have already been subject to human-made changes through past mining and resource extraction, and many have been ground zero for tensions with Indigenous peoples over land use. Within these contested spaces, disputes are rife.

Because of these tensions around land use, it is important to include Indigenous claims and perspectives. Doing so can help make sure that the goals of protecting the environments of outer space and Earth are not cast aside while building space infrastructure here on Earth.

Some efforts are driving this more inclusive approach to engagement in space, including initiatives like “Dark and Quiet Skies,” a movement that works to ensure that people can stargaze and engage with the night sky without light pollution or radio interference. This movement and other inclusive approaches operate on the principle of reciprocity: that more players getting involved with space can benefit all.

Researchers have recognized similar dynamics within the larger space industry. Some scholars have come to the conclusion that even though the space industry is “pay to play,” commitments to reciprocity can help ensure that players in space exploration who may not have the financial or infrastructural means to support individual efforts can still access broader structures of support.

The downside of more players entering space is that this expansion can make protecting the environment – both on Earth and beyond – even harder.

The more players there are, at both private and international levels, the more difficult sustainable space exploration could become. Even with good will and the best of intentions, it would be difficult to enforce uniform standards for the exploration and use of space resources that would protect the lunar surface, Mars and beyond.

It may also grow harder to police the launch of satellites and dedicated constellations. Limiting the number of satellites could prevent space junk, protect the satellites already in orbit and allow everyone to have a clear view of the night sky. However, this would have to compete with efforts to expand internet access to all.

The amount of space junk in orbit has increased dramatically since the 1960s.

What is space exploration for?

Before tackling these issues, we find it useful to think about the larger goal of space exploration, and what the different approaches are. One approach would be the fast and inclusive democratization of space – making it easier for more players to join in. Another would be a more conservative and slower “big player” approach, which would restrict who can go to space.

The conservative approach is liable to leave developing nations and Indigenous peoples firmly on the outside of a key process shaping humanity’s shared future.

But a faster and more inclusive approach to space would not be easy to run. With more serious players, it would be harder to come to an agreement about regulations, as well as about the larger goals for human expansion into space.

Narratives around emerging technologies, such as those required for space exploration, can change over time, as people begin to see them in action.

Technology that we take for granted today was once viewed as futuristic or fantastical, and sometimes with suspicion. For example, at the end of the 1940s, George Orwell imagined a world in which totalitarian systems used tele-screens and videoconferencing to control the masses.

Earlier in the same decade, Thomas J. Watson, then president of IBM, notoriously predicted that there would be a global market for about five computers. We as humans often fear or mistrust future technologies.

However, not all technological shifts are detrimental, and some technological changes can have clear benefits. In the future, robots may perform tasks too dangerous, too difficult or too dull and repetitive for humans. Biotechnology may make life healthier. Artificial intelligence can sift through vast amounts of data and turn it into reliable guesswork. Researchers can also see genuine downsides to each of these technologies.

Space exploration is harder to squeeze into one streamlined narrative about the anticipated benefits. The process is just too big and too transformative.

To return to the question of whether we should go to space, our team argues that it is not a question of whether or not we should go, but rather a question of why we do it, who benefits from space exploration and how we can democratize access for broader segments of society. Including a diversity of opinions and viewpoints can help find productive ways forward.

Ultimately, it is not necessary for everyone to land on one single narrative about the value of space exploration. Even our team of four researchers doesn’t share a single set of beliefs about its value. But bringing more nations, tribes and companies into discussions around its potential value can help create collaborative and worthwhile goals at an international scale.The Conversation

Timiebi Aganaba, Assistant Professor of Space and Society, Arizona State University; Adam Fish, Associate Professor, School of Arts and Media, UNSW Sydney; Niiyokamigaabaw Deondre Smiles, Adjunct Professor, University of British Columbia, and Tony Milligan, Research Fellow in the Philosophy of Ethics, King's College London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, July 14, 2024

In a battle of AI versus AI, researchers are preparing for the coming wave of deepfake propaganda

AI-powered detectors are the best tools for spotting AI-generated fake videos. The Washington Post via Getty Images
John Sohrawardi, Rochester Institute of Technology and Matthew Wright, Rochester Institute of Technology

An investigative journalist receives a video from an anonymous whistleblower. It shows a candidate for president admitting to illegal activity. But is this video real? If so, it would be huge news – the scoop of a lifetime – and could completely turn around the upcoming elections. But the journalist runs the video through a specialized tool, which tells her that the video isn’t what it seems. In fact, it’s a “deepfake,” a video made using artificial intelligence with deep learning.

Journalists all over the world could soon be using a tool like this. In a few years, everyone could even use such tools to root out fake content in their social media feeds.

As researchers who have been studying deepfake detection and developing a tool for journalists, we see a future for these tools. They won’t solve all our problems, though, and they will be just one part of the arsenal in the broader fight against disinformation.

The problem with deepfakes

Most people know that you can’t believe everything you see. Over the last couple of decades, savvy news consumers have gotten used to seeing images manipulated with photo-editing software. Videos, though, are another story. Hollywood directors can spend millions of dollars on special effects to make up a realistic scene. But using deepfakes, amateurs with a few thousand dollars of computer equipment and a few weeks to spend could make something almost as true to life.

Deepfakes make it possible to put people into movie scenes they were never in – think Tom Cruise playing Iron Man – which makes for entertaining videos. Unfortunately, it also makes it possible to create pornography without the consent of the people depicted. So far, those people, nearly all women, are the biggest victims when deepfake technology is misused.

Deepfakes can also be used to create videos of political leaders saying things they never said. The Belgian Socialist Party released a low-quality phony video of President Trump insulting Belgium (not an actual deepfake), which got enough of a reaction to show the potential risks of higher-quality deepfakes.

University of California, Berkeley’s Hany Farid explains how deepfakes are made.

Perhaps scariest of all, they can be used to create doubt about the content of real videos, by suggesting that they could be deepfakes.

Given these risks, it would be extremely valuable to be able to detect deepfakes and label them clearly. This would ensure that fake videos do not fool the public, and that real videos can be received as authentic.

Spotting fakes

Deepfake detection as a field of research began a little over three years ago. Early work focused on detecting visible problems in the videos, such as deepfakes that didn’t blink. With time, however, the fakes have gotten better at mimicking real videos and become harder to spot for both people and detection tools.

There are two major categories of deepfake detection research. The first involves looking at the behavior of people in the videos. Suppose you have a lot of video of someone famous, such as President Obama. Artificial intelligence can use this video to learn his patterns, from his hand gestures to his pauses in speech. It can then watch a deepfake of him and notice where it does not match those patterns. This approach has the advantage of possibly working even if the video quality itself is essentially perfect.
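
The general idea behind this behavior-based approach can be sketched in a few lines. The snippet below is a simplified illustration under stated assumptions, not any published detector: it assumes some upstream model has already turned each frame of a person’s authentic footage into numeric behavioral features (head pose, gesture timing and the like), builds a statistical profile from them, and scores a new clip by how far it strays from that profile.

```python
# A minimal sketch of behavior-based deepfake screening (illustrative only).
# Assumes per-frame behavioral features (e.g., head pose, gesture timing) have
# already been extracted by some upstream model into numeric vectors.
import numpy as np

def build_profile(reference_features: np.ndarray):
    """Summarize a person's authentic footage as per-feature mean and std."""
    return reference_features.mean(axis=0), reference_features.std(axis=0) + 1e-8

def anomaly_score(profile, candidate_features: np.ndarray) -> float:
    """Average absolute z-score of a candidate clip against the profile."""
    mean, std = profile
    z = np.abs((candidate_features - mean) / std)
    return float(z.mean())

# Toy usage with synthetic features standing in for real extracted behavior.
rng = np.random.default_rng(0)
profile = build_profile(rng.normal(0.0, 1.0, size=(500, 10)))
authentic_clip = rng.normal(0.0, 1.0, size=(100, 10))
off_profile_clip = rng.normal(1.5, 1.0, size=(100, 10))   # shifted "mannerisms"
print(f"score for authentic-like clip: {anomaly_score(profile, authentic_clip):.2f}")
print(f"score for off-profile clip:    {anomaly_score(profile, off_profile_clip):.2f}")
```

A higher score marks a clip whose mannerisms drift away from the person’s established profile and is worth a closer look.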

SRI International’s Aaron Lawson describes one approach to detecting deepfakes.

Other researchers, including our team, have focused on differences that all deepfakes have compared to real videos. Deepfake videos are often created by merging individually generated frames to form videos. Taking that into account, our team’s methods extract the essential data from the faces in individual frames of a video and then track them through sets of consecutive frames. This allows us to detect inconsistencies in the flow of the information from one frame to another. We use a similar approach for our fake audio detection system as well.
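
As a rough illustration of this frame-consistency idea (a sketch of the general approach, not the detection tool described here), the snippet below assumes a hypothetical face_features() extractor and simply measures how much the extracted face data jumps between consecutive frames. Real videos tend to change smoothly, while frame-by-frame synthesis can leave larger discontinuities.

```python
# Illustrative sketch of frame-to-frame consistency checking (not the authors' tool).
# Assumes `face_features(frame)` -- a hypothetical upstream extractor -- returns a
# fixed-length vector describing the face in one video frame.
import numpy as np

def inconsistency_score(frames, face_features) -> float:
    """Mean change in face features between consecutive frames.

    A higher score means larger frame-to-frame jumps, which is more suspicious.
    """
    feats = np.stack([face_features(f) for f in frames])
    deltas = np.linalg.norm(np.diff(feats, axis=0), axis=1)
    return float(deltas.mean())

# Toy usage with random "frames" and a stand-in extractor.
rng = np.random.default_rng(1)
frames = [rng.normal(size=(64, 64)) for _ in range(30)]
stand_in_extractor = lambda frame: frame.mean(axis=0)[:16]   # placeholder features
print(f"inconsistency score: {inconsistency_score(frames, stand_in_extractor):.3f}")
```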

These subtle details are hard for people to see, but show how deepfakes are not quite perfect yet. Detectors like these can work for any person, not just a few world leaders. In the end, it may be that both types of deepfake detectors will be needed.

Recent detection systems perform very well on videos specifically gathered for evaluating the tools. Unfortunately, even the best models do poorly on videos found online. Improving these tools to be more robust and useful is the key next step.

Who should use deepfake detectors?

Ideally, a deepfake verification tool should be available to everyone. However, this technology is in the early stages of development. Researchers need to improve the tools and protect them against hackers before releasing them broadly.

At the same time, though, the tools to make deepfakes are available to anybody who wants to fool the public. Sitting on the sidelines is not an option. For our team, the right balance was to work with journalists, because they are the first line of defense against the spread of misinformation.

Before publishing stories, journalists need to verify the information. They already have tried-and-true methods, like checking with sources and getting more than one person to verify key facts. So by putting the tool into their hands, we give them more information, and we know that they will not rely on the technology alone, given that it can make mistakes.

Can the detectors win the arms race?

It is encouraging to see teams from Facebook and Microsoft investing in technology to understand and detect deepfakes. This field needs more research to keep up with the speed of advances in deepfake technology.

Journalists and the social media platforms also need to figure out how best to warn people about deepfakes when they are detected. Research has shown that people remember the lie, but not the fact that it was a lie. Will the same be true for fake videos? Simply putting “Deepfake” in the title might not be enough to counter some kinds of disinformation.

Deepfakes are here to stay. Managing disinformation and protecting the public will be more challenging than ever as artificial intelligence gets more powerful. We are part of a growing research community that is taking on this threat, in which detection is just the first step.The Conversation

John Sohrawardi, Doctoral Student in Computing and Informational Sciences, Rochester Institute of Technology and Matthew Wright, Professor of Computing Security, Rochester Institute of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Saturday, January 7, 2023

Beyond Section 230: A pair of social media experts describes how to bring transparency and accountability to the industry

Social media regulation – and the future of Section 230 – are top of mind for many in Congress. Pavlo Conchar/SOPA Images/LightRocket via Getty Images
Robert Kozinets, USC Annenberg School for Communication and Journalism and Jon Pfeiffer, Pepperdine University

One of Elon Musk’s stated reasons for purchasing Twitter was to use the social media platform to defend the right to free speech. The ability to defend that right, or to abuse it, lies in a specific piece of legislation passed in 1996, at the pre-dawn of the modern age of social media.

The legislation, Section 230 of the Communications Decency Act, gives social media platforms some truly astounding protections under American law. Section 230 has also been called the most important 26 words in tech: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

But the more that platforms like Twitter test the limits of their protection, the more American politicians on both sides of the aisle have been motivated to modify or repeal Section 230. As a social media professor and a social media lawyer with a long history in this field, we think change to Section 230 is coming – and we believe that it is long overdue.

Born of porn

Section 230 had its origins in the attempt to regulate online porn. One way to think of it is as a kind of “restaurant graffiti” law. If someone draws offensive graffiti, or exposes someone else’s private information and secret life, in the bathroom stall of a restaurant, the restaurant owner can’t be held responsible for it. There are no consequences for the owner. Roughly speaking, Section 230 extends the same lack of responsibility to the Yelps and YouTubes of the world.

Section 230 explained.

But in a world where social media platforms stand to monetize and profit from the graffiti on their digital walls – which contains not just porn but also misinformation and hate speech – the absolutist stance that they have total protection and total legal “immunity” is untenable.

A lot of good has come from Section 230. But the history of social media also makes it clear that it is far from perfect at balancing corporate profit with civic responsibility.

We were curious about how current thinking in legal circles and digital research could give a clearer picture about how Section 230 might realistically be modified or replaced, and what the consequences might be. We envision three possible scenarios to amend Section 230, which we call verification triggers, transparent liability caps and Twitter court.

Verification triggers

We support free speech, and we believe that everyone should have a right to share information. When people who oppose vaccines share their concerns about the rapid development of RNA-based COVID-19 vaccines, for example, they open up a space for meaningful conversation and dialogue. They have a right to share such concerns, and others have a right to counter them.

What we call a “verification trigger” should kick in when the platform begins to monetize content related to misinformation. Most platforms try to detect misinformation, and many label, moderate or remove some of it. But many monetize it as well through algorithms that promote popular – and often extreme or controversial – content. When a company monetizes content with misinformation, false claims, extremism or hate speech, it is not like the innocent owner of the bathroom wall. It is more like an artist who photographs the graffiti and then sells it at an art show.

Twitter began selling verification check marks for user accounts in November 2022. By verifying a user account is a real person or company and charging for it, Twitter is both vouching for it and monetizing that connection. Reaching a certain dollar value from questionable content should trigger the ability to sue Twitter, or any platform, in court. Once a platform begins earning money from users and content, including verification, it steps outside the bounds of Section 230 and into the bright light of responsibility – and into the world of tort, defamation and privacy rights laws.

Transparent caps

Social media platforms currently make their own rules about hate speech and misinformation. They also keep secret a lot of information about how much money the platform makes off of content, like a given tweet. This makes what isn’t allowed and what is valued opaque.

One sensible change to Section 230 would be to expand its 26 words to clearly spell out what is expected of social media platforms. The added language would specify what constitutes misinformation, how social media platforms need to act, and the limits on how they can profit from it. We acknowledge that this definition isn’t easy, that it’s dynamic, and that researchers and companies are already struggling with it.

But government can raise the bar by setting some coherent standards. If a company can show that it’s met those standards, the amount of liability it has could be limited. It wouldn’t have complete protection as it does now. But it would have a lot more transparency and public responsibility. We call this a “transparent liability cap.”

Twitter court

Our final proposed amendment to Section 230 already exists in a rudimentary form. Like Facebook and other social platforms, Twitter has content moderation panels that determine standards for users on the platform, and thus standards for the public that shares and is exposed to content through the platform. You can think of this as “Twitter court.”

Effective content moderation involves the difficult balance of restricting harmful content while preserving free speech.

Though Twitter’s content moderation appears to be suffering from changes and staff reductions at the company, we believe that panels are a good idea. But keeping panels hidden behind the closed doors of profit-making companies is not. If companies like Twitter want to be more transparent, we believe that should also extend to their own inner operations and deliberations.

We envision extending the jurisdiction of “Twitter court” to neutral arbitrators who would adjudicate claims involving individuals, public officials, private companies and the platform. Rather than going to actual court for cases of defamation or privacy violation, Twitter court would suffice under many conditions. Again, this is a way to pull back some of Section 230’s absolutist protections without removing them entirely.

How would it work – and would it work?

Since 2018, platforms have had limited Section 230 protection in cases of sex trafficking. A recent academic proposal suggests extending these limitations to incitement to violence, hate speech and disinformation. House Republicans have also suggested a number of Section 230 carve-outs, including those for content relating to terrorism, child exploitation or cyberbullying.

Our three ideas of verification triggers, transparent liability caps and Twitter court may be an easy place to start the reform. They could be implemented individually, but they would have even greater authority if they were implemented together. The increased clarity of verification triggers and transparent liability caps would help set meaningful standards balancing public benefit with corporate responsibility in a way that self-regulation has not been able to achieve. Twitter court would provide a real option for people to arbitrate rather than to simply watch misinformation and hate speech bloom and platforms profit from it.

Adding a few meaningful options and amendments to Section 230 will be difficult because defining hate speech and misinformation in context, and setting limits and measures for monetization of context, will not be easy. But we believe these definitions and measures are achievable and worthwhile. Once enacted, these strategies promise to make online discourse stronger and platforms fairer.The Conversation

Robert Kozinets, Professor of Journalism, USC Annenberg School for Communication and Journalism and Jon Pfeiffer, Adjunct Professor of Law, Pepperdine University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, November 6, 2022

Mass migration from Twitter is likely to be an uphill battle – just ask ex-Tumblr users

The turmoil inside Twitter headquarters is sparking discussion of a mass exodus of users. What will happen if there is a rush to the exits? AP Photo/Jeff Chiu
Casey Fiesler, University of Colorado Boulder

Elon Musk announced that “the bird is freed” when his US$44 billion acquisition of Twitter officially closed on Oct. 27, 2022. Some users on the microblogging platform saw this as a reason to fly away.

Over the course of the next 48 hours, I saw countless announcements on my Twitter feed from people either leaving the platform or making preparations to leave. The hashtags #GoodbyeTwitter, #TwitterMigration and #Mastodon were trending. The decentralized, open source social network Mastodon gained over 100,000 users in just a few days, according to a user counting bot.

As an information scientist who studies online communities, this felt like the beginning of something I’ve seen before. Social media platforms tend not to last forever. Depending on your age and online habits, there’s probably some platform that you miss, even if it still exists in some form. Think of MySpace, LiveJournal, Google+ and Vine.

When social media platforms fall, sometimes the online communities that made their homes there fade away, and sometimes they pack their bags and relocate to a new home. The turmoil at Twitter is causing many of the company’s users to consider leaving the platform. Research on previous social media platform migrations shows what might lie ahead for Twitter users who fly the coop.

Elon Musk’s acquisition of Twitter has caused turmoil within the company and prompted many users to consider leaving the social media platform.

Several years ago, I led a research project with Brianna Dym, now at the University of Maine, where we mapped the platform migrations of nearly 2,000 people over a period of almost two decades. The community we examined was transformative fandom, fans of literary and popular culture series and franchises who create art using those characters and settings.

We chose it because it is a large community that has thrived in a number of different online spaces. Some of the same people writing Buffy the Vampire Slayer fan fiction on Usenet in the 1990s were writing Harry Potter fan fiction on LiveJournal in the 2000s and Star Wars fan fiction on Tumblr in the 2010s.

By asking participants about their experiences moving across these platforms – why they left, why they joined and the challenges they faced in doing so – we gained insights into factors that might drive the success and failure of platforms, as well as what negative consequences are likely to occur for a community when it relocates.

‘You go first’

Regardless of how many people ultimately decide to leave Twitter, and even how many people do so around the same time, creating a community on another platform is an uphill battle. These migrations are in large part driven by network effects, meaning that the value of a new platform depends on who else is there.

In the critical early stages of migration, people have to coordinate with each other to encourage contribution on the new platform, which is really hard to do. It essentially becomes, as one of our participants described it, a “game of chicken” where no one wants to leave until their friends leave, and no one wants to be first for fear of being left alone in a new place.

For this reason, the “death” of a platform – whether from a controversy, disliked change or competition – tends to be a slow, gradual process. One participant described Usenet’s decline as “like watching a shopping mall slowly go out of business.”

It’ll never be the same

The current push from some corners to leave Twitter reminded me a bit of Tumblr’s adult content ban in 2018, which reminded me of LiveJournal’s policy changes and new ownership in 2007. People who left LiveJournal in favor of other platforms like Tumblr described feeling unwelcome there. And though Musk did not walk into Twitter headquarters at the end of October and flip a virtual content moderation lever to the “off” position, there was an uptick in hate speech on the platform as some users felt emboldened to violate the platform’s content policies under an assumption that major policy changes were on the way.

So what might actually happen if a lot of Twitter users do decide to leave? What makes Twitter Twitter isn’t the technology, it’s the particular configuration of interactions that takes place there. And there is essentially zero chance that Twitter, as it exists now, could be reconstituted on another platform. Any migration is likely to face many of the challenges previous platform migrations have faced: content loss, fragmented communities, broken social networks and shifted community norms.

But Twitter isn’t one community, it’s a collection of many communities, each with its own norms and motivations. Some communities might be able to migrate more successfully than others. So maybe K-Pop Twitter could coordinate a move to Tumblr. I’ve seen much of Academic Twitter coordinating a move to Mastodon. Other communities might already simultaneously exist on Discord servers and subreddits, and can just let participation on Twitter fade away as fewer people pay attention to it. But as our study implies, migrations always have a cost, and even for smaller communities, some people will get lost along the way.

The ties that bind

Our research also pointed to design recommendations for supporting migration and how one platform might take advantage of attrition from another platform. Cross-posting features can be important because many people hedge their bets. They might be unwilling to completely cut ties all at once, but they might dip their toes into a new platform by sharing the same content on both.

Ways to import networks from another platform also help to maintain communities. For example, there are multiple ways to find people you follow on Twitter on Mastodon. Even simple welcome messages, guides for newcomers and easy ways to find other migrants could make a difference in helping resettlement attempts stick.

And through all of this, it’s important to remember that this is such a hard problem by design. Platforms have no incentive to help users leave. As long-time technology journalist Cory Doctorow recently wrote, this is “a hostage situation.” Social media lures people in with their friends, and then the threat of losing those social networks keeps people on the platforms.

But even if there is a price to pay for leaving a platform, communities can be incredibly resilient. Like the LiveJournal users in our study who found each other again on Tumblr, your fate is not tied to Twitter’s.The Conversation

Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, June 14, 2022

EU law would require Big Tech to do more to combat child sexual abuse, but a key question remains: How?

European Commissioner for Home Affairs Ylva Johansson announced a set of proposed regulations requiring tech companies to report child sexual abuse material. AP Photo/Francisco Seco
Laura Draper, American University

The European Commission recently proposed regulations to protect children by requiring tech companies to scan the content in their systems for child sexual abuse material. This is an extraordinarily wide-reaching and ambitious effort that would have broad implications beyond the European Union’s borders, including in the U.S.

Unfortunately, the proposed regulations are, for the most part, technologically unfeasible. To the extent that they could work, they require breaking end-to-end encryption, which would make it possible for the technology companies – and potentially the government and hackers – to see private communications.

The regulations, proposed on May 11, 2022, would impose several obligations on tech companies that host content and provide communication services, including social media platforms, texting services and direct messaging apps, to detect certain categories of images and text.

Under the proposal, these companies would be required to detect previously identified child sexual abuse material, new child sexual abuse material, and solicitations of children for sexual purposes. Companies would be required to report detected content to the EU Centre, a centralized coordinating entity that the proposed regulations would establish.

Each of these categories presents its own challenges, which combine to make the proposed regulations impossible to implement as a package. The trade-off between protecting children and protecting user privacy underscores how combating online child sexual abuse is a “wicked problem.” This puts technology companies in a difficult position: required to comply with regulations that serve a laudable goal but without the means to do so.

Digital fingerprints

Researchers have known how to detect previously identified child sexual abuse material for over a decade. This method, first developed by Microsoft, assigns a “hash value” – a sort of digital fingerprint – to an image, which can then be compared against a database of previously identified and hashed child sexual abuse material. In the U.S., the National Center for Missing and Exploited Children manages several databases of hash values, and some tech companies maintain their own hash sets.

The hash values for images uploaded or shared using a company’s services are compared with these databases to detect previously identified child sexual abuse material. This method has proved extremely accurate, reliable and fast, which is critical to making any technical solution scalable.
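
As a simplified illustration of this hash-and-compare pattern: production systems use perceptual hashes such as PhotoDNA, which tolerate resizing and re-encoding, whereas the ordinary cryptographic hash in the sketch below only catches byte-identical copies. The hash set is a hypothetical stand-in for the real databases.

```python
# Simplified sketch of hash-based matching. Real systems use perceptual hashes
# (e.g., PhotoDNA); an exact cryptographic hash is used here only for illustration
# and would miss any re-encoded or resized copy.
import hashlib

def file_hash(data: bytes) -> str:
    """Digital fingerprint of an uploaded file."""
    return hashlib.sha256(data).hexdigest()

def is_known_match(upload: bytes, known_hashes: set[str]) -> bool:
    """Compare the upload's fingerprint against a database of known hashes."""
    return file_hash(upload) in known_hashes

# Toy usage with a stand-in hash database.
known_hashes = {file_hash(b"previously identified file contents")}
print(is_known_match(b"previously identified file contents", known_hashes))  # True
print(is_known_match(b"new, never-seen content", known_hashes))              # False
```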

The problem is that many privacy advocates consider it incompatible with end-to-end encryption, which, strictly construed, means that only the sender and the intended recipient can view the content. Because the proposed EU regulations mandate that tech companies report any detected child sexual abuse material to the EU Centre, this would violate end-to-end encryption, thus forcing a trade-off between effective detection of the harmful material and user privacy.

Here’s how end-to-end encryption works, and which popular messaging apps use it.

Recognizing new harmful material

In the case of new content – that is, images and videos not included in hash databases – there is no such tried-and-true technical solution. Top engineers have been working on this issue, building and training AI tools that can accommodate large volumes of data. Google and child safety nongovernmental organization Thorn have both had some success using machine-learning classifiers to help companies identify potential new child sexual abuse material.

However, without independently verified data on the tools’ accuracy, it’s not possible to assess their utility. Even if the accuracy and speed are comparable with hash-matching technology, the mandatory reporting will again break end-to-end encryption.

New content also includes livestreams, but the proposed regulations seem to overlook the unique challenges this technology poses. Livestreaming technology became ubiquitous during the pandemic, and the production of child sexual abuse material from livestreamed content has dramatically increased.

More and more children are being enticed or coerced into livestreaming sexually explicit acts, which the viewer may record or screen-capture. Child safety organizations have noted that the production of “perceived first-person child sexual abuse material” – that is, child sexual abuse material of apparent selfies – has risen at exponential rates over the past few years. In addition, traffickers may livestream the sexual abuse of children for offenders who pay to watch.

The circumstances that lead to recorded and livestreamed child sexual abuse material are very different, but the technology is the same. And there is currently no technical solution that can detect the production of child sexual abuse material as it occurs. Tech safety company SafeToNet is developing a real-time detection tool, but it is not ready to launch.

Detecting solicitations

Detection of the third category, “solicitation language,” is also fraught. The tech industry has made dedicated efforts to pinpoint indicators necessary to identify solicitation and enticement language, but with mixed results. Microsoft spearheaded Project Artemis, which led to the development of the Anti-Grooming Tool. The tool is designed to detect enticement and solicitation of a child for sexual purposes.

As the proposed regulations point out, however, the accuracy of this tool is 88%. In 2020, popular messaging app WhatsApp delivered approximately 100 billion messages daily. If the tool identifies even 0.01% of the messages as “positive” for solicitation language, human reviewers would be tasked with reading 10 million messages every day to identify the 12% that are false positives, making the tool simply impractical.
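
A quick back-of-the-envelope calculation, using only the figures cited above, shows the scale of that review burden:

```python
# Back-of-the-envelope check of the scale described above, using the article's figures.
daily_messages = 100_000_000_000     # ~100 billion WhatsApp messages per day (2020)
flag_rate = 0.0001                   # 0.01% of messages flagged as "positive"
false_positive_share = 0.12          # ~12% of flags wrong, given 88% accuracy

flagged = daily_messages * flag_rate
false_positives = flagged * false_positive_share
print(f"messages flagged per day: {flagged:,.0f}")          # 10,000,000
print(f"of which false positives: {false_positives:,.0f}")  # 1,200,000
```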

As with all the above-mentioned detection methods, this, too, would break end-to-end encryption. But whereas the others may be limited to reviewing a hash value of an image, this tool requires access to all exchanged text.

No path

It’s possible that the European Commission is taking such an ambitious approach in hopes of spurring technical innovation that would lead to more accurate and reliable detection methods. However, without existing tools that can accomplish these mandates, the regulations are ineffective.

When there is a mandate to take action but no path to take, I believe the disconnect will simply leave the industry without the clear guidance and direction these regulations are intended to provide.The Conversation

Laura Draper, Senior Project Director at the Tech, Law & Security Program, American University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Friday, April 15, 2022

Elon Musk’s bid spotlights Twitter’s unique role in public discourse – and what changes might be in store

Twitter may not be a darling of Wall Street, but it occupies a unique place in the social media landscape. AP Photo/Richard Drew
Anjana Susarla, Michigan State University

Twitter has been in the news a lot lately, albeit for the wrong reasons. Its stock growth has languished and the platform itself has largely remained the same since its founding in 2006. On April 14, 2022, Elon Musk, the world’s richest person, made an offer to buy Twitter and take the public company private.

In a filing with the Securities and Exchange Commission, Musk stated, “I invested in Twitter as I believe in its potential to be the platform for free speech around the globe, and I believe free speech is a societal imperative for a functioning democracy.”

As a researcher of social media platforms, I find that Musk’s potential ownership of Twitter and his stated reasons for buying the company raise important issues. Those issues stem from the nature of the social media platform and what sets it apart from others.

What makes Twitter unique

Twitter occupies a unique niche. Its short chunks of text and threading foster real-time conversations among thousands of people, which makes it popular with celebrities, media personalities and politicians alike.

Social media analysts talk about the half-life of content on a platform, meaning the time it takes for a piece of content to reach 50% of its total lifetime engagement, usually measured in number of views or popularity-based metrics. The average half-life of a tweet is about 20 minutes, compared to five hours for Facebook posts, 20 hours for Instagram posts, 24 hours for LinkedIn posts and 20 days for YouTube videos. The much shorter half-life illustrates the central role Twitter has come to occupy in driving real-time conversations as events unfold.
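
To make the metric concrete, here is a minimal sketch of how a half-life could be estimated from timestamped engagement events. The data and the simple midpoint method are illustrative assumptions, not how any platform actually computes the figure.

```python
# A minimal sketch of estimating a post's engagement half-life from timestamped
# engagement events (hypothetical data; platforms compute this from their own logs).
def half_life_hours(event_times_hours: list[float]) -> float:
    """Time by which a post has received 50% of its total lifetime engagement."""
    events = sorted(event_times_hours)
    midpoint_index = (len(events) - 1) // 2
    return events[midpoint_index]

# Toy usage: most engagement arrives early, then a long tail follows.
tweet_events = [0.05, 0.1, 0.2, 0.25, 0.3, 0.4, 0.6, 1.5, 4.0, 12.0]
print(f"estimated half-life: {half_life_hours(tweet_events):.2f} hours")
```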

Twitter’s ability to shape real-time discourse, as well as the ease with which data, including geo-tagged data, can be gathered from Twitter has made it a gold mine for researchers to analyze a variety of societal phenomena, ranging from public health to politics. Twitter data has been used to predict asthma-related emergency department visits, measure public epidemic awareness, and model wildfire smoke dispersion.

Tweets that are part of a conversation are shown in chronological order, and, even though much of a tweet’s engagement is frontloaded, the Twitter archive provides instant and complete access to every public Tweet. This positions Twitter as a historical chronicler of record and a de facto fact checker.

Changes on Musk’s mind

A crucial issue is how Musk’s ownership of Twitter, and private control of social media platforms generally, affect the broader public well-being. In a series of deleted tweets, Musk made several suggestions about how to change Twitter, including adding an edit button for tweets and granting automatic verification marks to premium users.

There is no experimental evidence about how an edit button would change information transmission on Twitter. However, it’s possible to extrapolate from previous research that analyzed deleted tweets.

There are numerous ways to retrieve deleted tweets, which allows researchers to study them. Some studies show significant personality differences between users who delete their tweets and those who don’t, and these findings suggest that deleting tweets is a way for people to manage their online identities.

Analyzing deleting behavior can also yield valuable clues about online credibility and disinformation. Similarly, if Twitter adds an edit button, analyzing the patterns of editing behavior could provide insights into Twitter users’ motivations and how they present themselves.

Studies of bot-generated activity on Twitter have concluded that nearly half of accounts tweeting about COVID-19 are likely bots. Given partisanship and political polarization in online spaces, allowing users – whether they are automated bots or actual people – the option to edit their tweets could become another weapon in the disinformation arsenal used by bots and propagandists. Editing tweets could allow users to selectively distort what they said, or deny making inflammatory remarks, which could complicate efforts to trace misinformation.

Twitter’s content moderation and revenue model

To understand Musk’s motivations and what lies next for social media platforms such as Twitter, it’s important to consider the gargantuan – and opaque – online advertising ecosystem involving multiple technologies wielded by ad networks, social media companies and publishers. Advertising is the primary revenue source for Twitter.

Musk’s vision is to generate revenue for Twitter from subscriptions rather than advertising. Without having to worry about attracting and retaining advertisers, Twitter would have less pressure to focus on content moderation. This would make Twitter a sort of freewheeling opinion site for paying subscribers. Twitter has been aggressive in using content moderation in its attempts to address disinformation.

Musk’s description of a platform free from content moderation issues is troubling in light of the algorithmic harms caused by social media platforms. Research has shown a host of these harms, such as algorithms that assign gender to users, potential inaccuracies and biases in algorithms used to glean information from these platforms, and the impact on those looking for health information online.

Testimony by Facebook whistleblower Frances Haugen and recent regulatory efforts such as the online safety bill unveiled in the U.K. show there is broad public concern about the role played by technology platforms in shaping popular discourse and public opinion. Musk’s potential bid for Twitter highlights a whole host of regulatory concerns.

Because of Musk’s other businesses, Twitter’s ability to influence public opinion in the sensitive aviation and automobile industries would automatically create a conflict of interest, not to mention affecting the disclosure of material information necessary for shareholders. Musk has already been accused of delaying disclosure of his ownership stake in Twitter.

Twitter’s own algorithmic bias bounty challenge concluded that there needs to be a community-led approach to build better algorithms. A very creative exercise developed by the MIT Media Lab asks middle schoolers to re-imagine the YouTube platform with ethics in mind. Perhaps it’s time to ask Twitter to do the same, whoever owns and manages the company.

Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Monday, November 22, 2021

What is the metaverse? 2 media and information experts explain

Are these people interacting in some virtual world? Lucrezia Carnelos/Unsplash
Rabindra Ratan, Michigan State University and Yiming Lei, Michigan State University

The metaverse is a network of always-on virtual environments in which many people can interact with one another and digital objects while operating virtual representations – or avatars – of themselves. Think of a combination of immersive virtual reality, a massively multiplayer online role-playing game and the web.

The metaverse is a concept from science fiction that many people in the technology industry envision as the successor to today’s internet. It’s only a vision at this point, but technology companies like Facebook are aiming to make it the setting for many online activities, including work, play, studying and shopping. Facebook is so sold on the concept that it is renaming itself Meta to highlight its push to dominate the metaverse.

A book cover with a graphical representation of a massive stone gate with a pair of large unicorn friezes on either side, a futuristic cityscape on the far side of the gate and a male figure standing in the gate facing the city with a sword raised
The best-selling science fiction novel ‘Snow Crash’ gave the world the word ‘metaverse.’ RA.AZ/Flickr, CC BY

Metaverse is a portmanteau of meta, meaning transcendent, and verse, from universe. Sci-fi novelist Neal Stephenson coined the term in his 1992 novel “Snow Crash” to describe the virtual world in which the protagonist, Hiro Protagonist, socializes, shops and vanquishes real-world enemies through his avatar. The concept predates “Snow Crash” and was popularized as “cyberspace” in William Gibson’s groundbreaking 1984 novel “Neuromancer.”

There are three key aspects of the metaverse: presence, interoperability and standardization.

Presence is the feeling of actually being in a virtual space, with virtual others. Decades of research have shown that this sense of embodiment improves the quality of online interactions. This sense of presence is achieved through virtual reality technologies such as head-mounted displays.

Interoperability means being able to seamlessly travel between virtual spaces with the same virtual assets, such as avatars and digital items. ReadyPlayerMe allows people to create an avatar that they can use in hundreds of different virtual worlds, including in Zoom meetings through apps like Animaze. Meanwhile, blockchain technologies such as cryptocurrencies and nonfungible tokens facilitate the transfer of digital goods across virtual borders.

Standardization is what enables interoperability of platforms and services across the metaverse. As with all mass-media technologies – from the printing press to texting – common technological standards are essential for widespread adoption. International organizations such as the Open Metaverse Interoperability Group define these standards.

Why the metaverse matters

If the metaverse does become the successor to the internet, who builds it, and how, is extremely important to the future of the economy and society as a whole. Facebook is aiming to play a leading role in shaping the metaverse, in part by investing heavily in virtual reality. Facebook CEO Mark Zuckerberg explained in an interview his view that the metaverse spans nonimmersive platforms like today’s social media as well as immersive 3D media technologies such as virtual reality, and that it will be for work as well as play.

Hollywood has embraced the metaverse in movies like ‘Ready Player One.’

The metaverse might one day resemble the flashy fictional Oasis of Ernest Cline’s “Ready Player One,” but until then you can turn to games like Fortnite and Roblox, virtual reality social media platforms like VRChat and AltspaceVR, and virtual work environments like Immersed for a taste of the immersive and connected metaverse experience. As these siloed spaces converge and become increasingly interoperable, watch for a truly singular metaverse to emerge.

This article has been updated to include Facebook’s announcement on Oct. 28, 2021 that it is renaming itself Meta.The Conversation

Rabindra Ratan, Associate Professor of Media and Information, Michigan State University and Yiming Lei, Doctoral student in Media and Information, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Saturday, November 6, 2021

Facebook whistleblower Frances Haugen testified that the company’s algorithms are dangerous – here’s how they can manipulate you

Whistleblower Frances Haugen called Facebook’s algorithm dangerous. Matt McClain/The Washington Post via AP
Filippo Menczer, Indiana University

Former Facebook product manager Frances Haugen testified before the U.S. Senate on Oct. 5, 2021, that the company’s social media platforms “harm children, stoke division and weaken our democracy.”

Haugen was the primary source for a Wall Street Journal exposé on the company. She called Facebook’s algorithms dangerous, said Facebook executives were aware of the threat but put profits before people, and called on Congress to regulate the company.

Social media platforms rely heavily on people’s behavior to decide on the content that you see. In particular, they watch for content that people respond to or “engage” with by liking, commenting and sharing. Troll farms, organizations that spread provocative content, exploit this by copying high-engagement content and posting it as their own, which helps them reach a wide audience.

As a computer scientist who studies the ways large numbers of people interact using technology, I understand the logic of using the wisdom of the crowds in these algorithms. I also see substantial pitfalls in how the social media companies do so in practice.

From lions on the savanna to likes on Facebook

The concept of the wisdom of crowds assumes that using signals from others’ actions, opinions and preferences as a guide will lead to sound decisions. For example, collective predictions are normally more accurate than individual ones. Collective intelligence is used to predict financial markets, sports, elections and even disease outbreaks.

Throughout millions of years of evolution, these principles have been coded into the human brain in the form of cognitive biases that come with names like familiarity, mere exposure and bandwagon effect. If everyone starts running, you should also start running; maybe someone saw a lion coming and running could save your life. You may not know why, but it’s wiser to ask questions later.

Your brain picks up clues from the environment – including your peers – and uses simple rules to quickly translate those signals into decisions: Go with the winner, follow the majority, copy your neighbor. These rules work remarkably well in typical situations because they are based on sound assumptions. For example, they assume that people often act rationally, it is unlikely that many are wrong, the past predicts the future, and so on.

Technology allows people to access signals from much larger numbers of other people, most of whom they do not know. Artificial intelligence applications make heavy use of these popularity or “engagement” signals, from selecting search engine results to recommending music and videos, and from suggesting friends to ranking posts on news feeds.

Not everything viral deserves to be

Our research shows that virtually all web technology platforms, such as social media and news recommendation systems, have a strong popularity bias. When applications are driven by cues like engagement rather than explicit search engine queries, popularity bias can lead to harmful unintended consequences.

Social media like Facebook, Instagram, Twitter, YouTube and TikTok rely heavily on AI algorithms to rank and recommend content. These algorithms take as input what you like, comment on and share – in other words, content you engage with. The goal of the algorithms is to maximize engagement by finding out what people like and ranking it at the top of their feeds.

A primer on the Facebook algorithm.

On the surface this seems reasonable. If people like credible news, expert opinions and fun videos, these algorithms should identify such high-quality content. But the wisdom of the crowds makes a key assumption here: that recommending what is popular will help high-quality content “bubble up.”

We tested this assumption by studying an algorithm that ranks items using a mix of quality and popularity. We found that in general, popularity bias is more likely to lower the overall quality of content. The reason is that engagement is not a reliable indicator of quality when few people have been exposed to an item. In these cases, engagement generates a noisy signal, and the algorithm is likely to amplify this initial noise. Once the popularity of a low-quality item is large enough, it will keep getting amplified.
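
The dynamic can be illustrated with a toy simulation, a sketch of the general mechanism rather than the model used in the study: items are ranked by a weighted mix of quality and engagement, only top-ranked items get shown, and a handful of early, essentially random engagements decide which items the ranking keeps surfacing.

```python
# Toy simulation of popularity bias (illustrative only, not the study's model).
import random

random.seed(42)

# Each item has an intrinsic quality in [0, 1]; a few items happen to pick up an
# early, essentially random engagement before anyone has really seen them.
items = [{"quality": random.random(), "engagements": random.choice([0, 1])} for _ in range(20)]

def score(item, popularity_weight=0.8):
    # Rank by a mix of intrinsic quality and raw engagement counts.
    return (1 - popularity_weight) * item["quality"] + popularity_weight * item["engagements"]

for _ in range(2000):
    # Only the current top-ranked items get shown; each view converts into a new
    # engagement with probability equal to the item's quality.
    for item in sorted(items, key=score, reverse=True)[:3]:
        if random.random() < item["quality"]:
            item["engagements"] += 1

best = max(items, key=lambda i: i["quality"])
top = max(items, key=score)
print(f"highest-quality item: quality={best['quality']:.2f}, engagements={best['engagements']}")
print(f"top-ranked item:      quality={top['quality']:.2f}, engagements={top['engagements']}")
```

Because an item that never received those early engagements can barely surface no matter how good it is, the ranking ends up amplifying initial noise rather than quality.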

Algorithms aren’t the only thing affected by engagement bias – it can affect people too. Evidence shows that information is transmitted via “complex contagion,” meaning the more times people are exposed to an idea online, the more likely they are to adopt and reshare it. When social media tells people an item is going viral, their cognitive biases kick in and translate into the irresistible urge to pay attention to it and share it.

Not-so-wise crowds

We recently ran an experiment using a news literacy app called Fakey. It is a game developed by our lab that simulates a news feed like those of Facebook and Twitter. Players see a mix of current articles from fake news, junk science, hyperpartisan and conspiratorial sources, as well as mainstream sources. They get points for sharing or liking news from reliable sources and for flagging low-credibility articles for fact-checking.

We found that players are more likely to like or share and less likely to flag articles from low-credibility sources when players can see that many other users have engaged with those articles. Exposure to the engagement metrics thus creates a vulnerability.

The wisdom of the crowds fails because it is built on the false assumption that the crowd is made up of diverse, independent sources. There may be several reasons this is not the case.

First, because of people’s tendency to associate with similar people, their online neighborhoods are not very diverse. The ease with which social media users can unfriend those with whom they disagree pushes people into homogeneous communities, often referred to as echo chambers.

Second, because many people’s friends are friends of one another, they influence one another. A famous experiment demonstrated that knowing what music your friends like affects your own stated preferences. Your social desire to conform distorts your independent judgment.

Third, popularity signals can be gamed. Over the years, search engines have developed sophisticated techniques to counter so-called “link farms” and other schemes to manipulate search algorithms. Social media platforms, on the other hand, are just beginning to learn about their own vulnerabilities.

People aiming to manipulate the information market have created fake accounts, like trolls and social bots, and organized fake networks. They have flooded the network to create the appearance that a conspiracy theory or a political candidate is popular, tricking both platform algorithms and people’s cognitive biases at once. They have even altered the structure of social networks to create illusions about majority opinions.

Dialing down engagement

What to do? Technology platforms are currently on the defensive. They are becoming more aggressive during elections in taking down fake accounts and harmful misinformation. But these efforts can be akin to a game of whack-a-mole.

A different, preventive approach would be to add friction. In other words, to slow down the process of spreading information. High-frequency behaviors such as automated liking and sharing could be inhibited by CAPTCHA tests, which require a human to respond, or fees. Not only would this decrease opportunities for manipulation, but with less information people would be able to pay more attention to what they see. It would leave less room for engagement bias to affect people’s decisions.
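
As a minimal sketch of what such friction could look like in code: once an account shares faster than some threshold, require a human check before allowing more shares. The hourly threshold and the CAPTCHA step here are hypothetical placeholders, not any platform’s actual policy.

```python
# A minimal sketch of rate-based friction for sharing (hypothetical threshold).
import time
from collections import defaultdict, deque

SHARE_LIMIT_PER_HOUR = 30                      # assumed threshold, for illustration
recent_shares = defaultdict(deque)             # user_id -> timestamps of recent shares

def allow_share(user_id: str, passed_captcha: bool = False, now: float | None = None) -> bool:
    """Allow a share unless the account is sharing at high frequency without a CAPTCHA."""
    now = time.time() if now is None else now
    window = recent_shares[user_id]
    while window and now - window[0] > 3600:   # drop shares older than an hour
        window.popleft()
    if len(window) >= SHARE_LIMIT_PER_HOUR and not passed_captcha:
        return False                           # high-frequency sharing: require a human check
    window.append(now)
    return True
```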

It would also help if social media companies adjusted their algorithms to rely less on engagement signals and more on quality signals to determine the content they serve you. Perhaps the whistleblower revelations will provide the necessary impetus.

This is an updated version of an article originally published on Sept. 20, 2021.The Conversation

Filippo Menczer, Luddy Distinguished Professor of Informatics and Computer Science, Indiana University

This article is republished from The Conversation under a Creative Commons license. Read the original article.