Tuesday, July 21, 2020


How fake accounts constantly manipulate what you see on social media – and what you can do about it

All is not as it appears on social media. filadendron/E+ via Getty Images
Jeanna Matthews, Clarkson University

Social media platforms like Facebook, Twitter and Instagram started out as a way to connect with friends, family and people of interest. But anyone on social media these days knows it’s increasingly a divisive landscape.

Undoubtedly you’ve heard reports that hackers and even foreign governments are using social media to manipulate and attack you. You may wonder how that is possible. As a professor of computer science who researches social media and security, I can explain – and offer some ideas for what you can do about it.

Bots and sock puppets

Social media platforms don’t simply feed you the posts from the accounts you follow. They use algorithms to curate what you see based in part on “likes” or “votes.” A post is shown to some users, and the more those people react – positively or negatively – the more it will be highlighted to others. Sadly, lies and extreme content often garner more reactions and so spread quickly and widely.
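The dynamic is easy to see in a toy model. The sketch below is a deliberate simplification, not any platform’s actual algorithm; the field names, weights and posts are invented for illustration.

```python
# A toy model of engagement-driven ranking: posts that draw more reactions,
# positive or negative, get pushed higher. Real platforms use far more
# signals; these field names and numbers are invented.

def rank_feed(posts):
    """Order posts by total reactions; an angry reaction counts as much as a like."""
    engagement = lambda p: p["likes"] + p["shares"] + p["angry"]
    return sorted(posts, key=engagement, reverse=True)

feed = [
    {"id": "measured-take", "likes": 40, "shares": 5, "angry": 2},
    {"id": "outrage-bait", "likes": 90, "shares": 300, "angry": 500},
]
print([p["id"] for p in rank_feed(feed)])  # outrage-bait ranks first
```

Notice that the ranking has no notion of accuracy at all: the divisive post wins purely on reaction volume.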

A 2018 file photo showing a business center building in St. Petersburg, Russia, known as the ‘troll factory,’ one of a web of companies allegedly controlled by Yevgeny Prigozhin, who has reported ties to Russian President Vladimir Putin. AP Photo/Dmitri Lovetsky

But who is doing this “voting”? Often it’s an army of accounts, called bots, that do not correspond to real people. In fact, they’re controlled by hackers, often on the other side of the world. For example, researchers have reported that more than half of the Twitter accounts discussing COVID-19 are bots.

As a social media researcher, I’ve seen thousands of accounts with the same profile picture “like” posts in unison. I’ve seen accounts post hundreds of times per day, far more than a human being could. I’ve seen an account claiming to be an “All-American patriotic army wife” from Florida post obsessively about immigrants in English, but whose account history showed it used to post in Ukrainian.
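Warning signs like these can be turned into simple heuristics. The sketch below is purely illustrative; the threshold, field names and account names are invented, and real bot detection is far more sophisticated.

```python
# Two of the red flags above, as toy heuristics: posting volume no human
# could sustain, and multiple accounts sharing one profile picture.
# Threshold, field names and account names are invented for illustration.

from collections import Counter

HUMAN_POSTING_LIMIT = 200  # posts per day; far beyond plausible human activity

def flag_suspicious(accounts):
    # Count how many accounts reuse each profile-image hash.
    image_uses = Counter(a["image_hash"] for a in accounts)
    return [
        a["name"]
        for a in accounts
        if a["posts_per_day"] > HUMAN_POSTING_LIMIT
        or image_uses[a["image_hash"]] > 1
    ]

accounts = [
    {"name": "army_wife_1776", "posts_per_day": 450, "image_hash": "abc123"},
    {"name": "patriot_mom_88", "posts_per_day": 30, "image_hash": "abc123"},
    {"name": "jane_doe", "posts_per_day": 4, "image_hash": "f00d42"},
]
print(flag_suspicious(accounts))  # the first two share a profile picture
```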

Fake accounts like this are called “sock puppets” – suggesting a hidden hand speaking through another identity. In many cases, this deception can easily be revealed with a look at the account history. But in some cases, there is a big investment in making sock puppet accounts seem real.

Now defunct, the ‘Jenna Abrams’ account was created by hackers in Russia.

For example, Jenna Abrams, an account with 70,000 followers, was quoted by mainstream media outlets like The New York Times for her xenophobic and far-right opinions. But “she” was not a living, breathing person; the account was an invention controlled by the Internet Research Agency, a Russian government-funded troll farm.

Sowing chaos

Trolls often don’t care about the issues as much as they care about creating division and distrust. For example, researchers in 2018 concluded that some of the most influential accounts on both sides of divisive issues, like Black Lives Matter and Blue Lives Matter, were controlled by troll farms.

More than just fanning disagreement, trolls want to encourage a belief that truth no longer exists. Divide and conquer. Distrust anyone who might serve as a leader or trusted voice. Cut off the head. Demoralize. Confuse. Each of these is a devastating attack strategy.

Even as a social media researcher, I underestimate the degree to which my opinion is shaped by these attacks. I think I am smart enough to read what I want, discard the rest and step away unscathed. Still, when I see a post that has millions of likes, part of me thinks it must reflect public opinion. The social media feeds I see are affected by it and, what’s more, I am affected by the opinions of my real friends, who are also influenced.

Entire societies are being subtly manipulated into believing they are on opposite sides of many issues when legitimate common ground exists.

I have focused primarily on U.S.-based examples, but the same types of attacks are playing out around the world. By turning the voices of democracies against each other, authoritarian regimes may begin to look preferable to chaos.

Founder and CEO of Facebook Mark Zuckerberg in Brussels, Feb. 17, 2020. Kenzo Tribouillard/AFP via Getty Images

Platforms have been slow to act. Sadly, misinformation and disinformation drive usage and are good for business. Failure to act has often been justified with concerns about freedom of speech. Does freedom of speech include the right to create 100,000 fake accounts with the express purpose of spreading lies, division and chaos?

Taking control

So what can you do about it? You probably already know to check the sources and dates of what you read and forward, but common-sense media literacy advice is not enough.

First, use social media more deliberately. Choose to catch up with someone in particular, rather than consuming only the default feed. You might be amazed to see what you’ve been missing. Help your friends and family find your posts by using features like pinning key messages to the top of your feed.

Second, pressure social media platforms to remove accounts with clear signs of automation. Ask for more controls to manage what you see and which posts are amplified. Ask for more transparency in how posts are promoted and who is placing ads. For example, complain directly to Facebook about its news feed, or tell legislators about your concerns.

Third, be aware of the trolls’ favorite issues and be skeptical of them. They may be most interested in creating chaos, but they also show clear preferences on some issues. For example, trolls want to reopen economies quickly without real management to flatten the COVID-19 curve. They also clearly supported one of the 2016 U.S. presidential candidates over the other. It’s worth asking yourself how these positions might be good for Russian trolls, but bad for you and your family.

Perhaps most importantly, use social media sparingly, like any other addictive, toxic substance, and invest in more real-life community building conversations. Listen to real people, real stories and real opinions, and build from there.


Jeanna Matthews, Full Professor, Computer Science, Clarkson University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, July 12, 2020


Scientists tap the world's most powerful computers in the race to understand and stop the coronavirus



It takes a tremendous amount of computing power to simulate all the components and behaviors of viruses and cells. Thomas Splettstoesser scistyle.com, CC BY-ND
Jeremy Smith, University of Tennessee
In “The Hitchhiker’s Guide to the Galaxy” by Douglas Adams, the haughty supercomputer Deep Thought is asked whether he can find the answer to the ultimate question concerning life, the universe and everything. He replies that, yes, he can do it, but it’s tricky and he’ll have to think about it. When asked how long it will take him he replies, “Seven-and-a-half million years. I told you I’d have to think about it.”
Real-life supercomputers are being asked somewhat less expansive questions but tricky ones nonetheless: how to tackle the COVID-19 pandemic. They’re being used in many facets of responding to the disease, including to predict the spread of the virus, to optimize contact tracing, to allocate resources and provide decisions for physicians, to design vaccines and rapid testing tools and to understand sneezes. And the answers are needed in a rather shorter time frame than Deep Thought was proposing.
The largest number of COVID-19 supercomputing projects involves designing drugs. It’s likely to take several effective drugs to treat the disease. Supercomputers allow researchers to take a rational approach and aim to selectively muzzle proteins that SARS-CoV-2, the virus that causes COVID-19, needs for its life cycle.
The viral genome encodes proteins needed by the virus to infect humans and to replicate. Among these are the infamous spike protein that sniffs out and penetrates its human cellular target, but there are also enzymes and molecular machines that the virus forces its human subjects to produce for it. Finding drugs that can bind to these proteins and stop them from working is a logical way to go.


The Summit supercomputer at Oak Ridge National Laboratory has a peak performance of 200,000 trillion calculations per second – equivalent to about a million laptops. Oak Ridge National Laboratory, U.S. Dept. of Energy, CC BY

I am a molecular biophysicist. My lab, at the Center for Molecular Biophysics at the University of Tennessee and Oak Ridge National Laboratory, uses a supercomputer to discover drugs. We build three-dimensional virtual models of biological molecules like the proteins used by cells and viruses, and simulate how various chemical compounds interact with those proteins. We test thousands of compounds to find the ones that “dock” with a target protein. Those compounds that fit, lock-and-key style, with the protein are potential therapies.
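That screening loop can be sketched in miniature. The code below is an illustration only: the scoring function is a placeholder (real pipelines compute physics-based binding energies with dedicated docking codes), and the compound names and energies are invented.

```python
# Virtual screening in miniature: score every compound in a library against
# a target, then keep the top-ranked candidates for experimental testing.
# The scoring function is a stand-in and the energies are invented.

def docking_score(compound, target):
    # Placeholder for a physics-based docking calculation.
    # Convention: lower (more negative) means tighter binding.
    return compound["energy"]

def screen(library, target, top_n=2):
    ranked = sorted(library, key=lambda c: docking_score(c, target))
    return [c["name"] for c in ranked[:top_n]]

library = [
    {"name": "compound_A", "energy": -9.2},
    {"name": "compound_B", "energy": -4.1},
    {"name": "compound_C", "energy": -7.8},
]
print(screen(library, target="spike_protein"))  # the two tightest binders
```

In a real campaign the library holds thousands to billions of entries and each score is an expensive simulation, which is exactly why supercomputers are needed.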
The top-ranked candidates are then tested experimentally to see if they indeed do bind to their targets and, in the case of COVID-19, stop the virus from infecting human cells. The compounds are first tested in cells, then animals, and finally humans. Computational drug discovery with high-performance computing has been important in finding antiviral drugs in the past, such as the anti-HIV drugs that revolutionized AIDS treatment in the 1990s.

World’s most powerful computer

Since the 1990s the power of supercomputers has increased by a factor of a million or so. Summit at Oak Ridge National Laboratory is presently the world’s most powerful supercomputer, and has the combined power of roughly a million laptops. A laptop today has roughly the same power as a supercomputer had 20-30 years ago.
However, in order to gin up speed, supercomputer architectures have become more complicated. They used to consist of single, very powerful chips on which programs would simply run faster. Now they consist of thousands of processors performing massively parallel processing in which many calculations, such as testing the potential of drugs to dock with a pathogen or cell’s proteins, are performed at the same time. Persuading those processors to work together harmoniously is a pain in the neck but means we can quickly try out a lot of chemicals virtually.
Further, researchers use supercomputers to figure out by simulation the different shapes formed by the target binding sites and then virtually dock compounds to each shape. In my lab, that procedure has produced experimentally validated hits – chemicals that work – for each of 16 protein targets that physician-scientists and biochemists have discovered over the past few years. These targets were selected because finding compounds that dock with them could result in drugs for treating different diseases, including chronic kidney disease, prostate cancer, osteoporosis, diabetes, thrombosis and bacterial infections.


Scientists are using supercomputers to find ways to disable the various proteins – including the infamous spike protein (green protrusions) – produced by SARS-CoV-2, the virus responsible for COVID-19. Thomas Splettstoesser scistyle.com, CC BY-ND

Billions of possibilities

So which chemicals are being tested for COVID-19? A first approach is trying out drugs that already exist for other indications and that we have a pretty good idea are reasonably safe. That’s called “repurposing,” and if it works, regulatory approval will be quick.
But repurposing isn’t necessarily being done in the most rational way. One idea researchers are considering is that drugs that work against protein targets of some other virus, such as the flu, hepatitis or Ebola, will automatically work against COVID-19, even when the SARS-CoV-2 protein targets don’t have the same shape.


ACE2 acts as the docking receptor for the SARS-CoV-2 virus’s spike protein and allows the virus to infect the cell. The Conversation, CC BY-SA

The best approach is to check if repurposed compounds will actually bind to their intended target. To that end, my lab published a preliminary report of a supercomputer-driven docking study of a repurposing compound database in mid-February. The study ranked 8,000 compounds in order of how well they bind to the viral spike protein. This paper triggered the establishment of a high-performance computing consortium against our viral enemy, announced by President Trump in March. Several of our top-ranked compounds are now in clinical trials.
Our own work has now expanded to about 10 targets on SARS-CoV-2, and we’re also looking at human protein targets for disrupting the virus’s attack on human cells. Top-ranked compounds from our calculations are being tested experimentally for activity against the live virus. Several of these have already been found to be active.
Also, we and others are venturing out into the wild world of new drug discovery for COVID-19 – looking for compounds that have never been tried as drugs before. Databases of billions of these compounds exist, all of which could probably be synthesized in principle but most of which have never been made. Billion-compound docking is a tailor-made task for massively parallel supercomputing.

Dawn of the exascale era

Work will be helped by the arrival of the next big machine at Oak Ridge, called Frontier, planned for next year. Frontier should be about 10 times more powerful than Summit. Frontier will herald the “exascale” supercomputing era, meaning machines capable of 1,000,000,000,000,000,000 calculations per second.
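The arithmetic behind these comparisons is easy to check. The laptop figure below is an assumed round number; the Summit figure is its stated peak.

```python
# Back-of-the-envelope check on the scales involved.

summit = 200e15    # Summit's peak: ~200,000 trillion calculations per second
laptop = 200e9     # a modern laptop: roughly 200 billion (assumed round figure)
exascale = 1e18    # one quintillion calculations per second

print(summit / laptop)    # about a million: "a million laptops"
print(exascale / summit)  # 5: so Frontier at ~10x Summit clears exascale
```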
Although some fear supercomputers will take over the world, for the time being, at least, they are humanity’s servants, which means that they do what we tell them to. Different scientists have different ideas about how to calculate which drugs work best – some prefer artificial intelligence, for example – so there’s quite a lot of arguing going on.
Hopefully, scientists armed with the most powerful computers in the world will, sooner rather than later, find the drugs needed to tackle COVID-19. If they do, then their answers will be of more immediate benefit, if less philosophically tantalizing, than the answer to the ultimate question provided by Deep Thought, which was, maddeningly, simply 42.
Jeremy Smith, Governor's Chair, Biophysics, University of Tennessee
This article is republished from The Conversation under a Creative Commons license. Read the original article.

Wednesday, July 8, 2020


Scientific fieldwork 'caught in the middle' of US-Mexico border tensions

The political border cuts in two a region rich in biological and cultural diversity. John Moore/Getty Images News via Getty Images
Taylor Edwards, University of Arizona

Imagine you’re a scientist, setting out camera traps to snap pictures of wildlife in a remote area of southern Arizona. You set out with your gear early in the morning, but it took longer than expected to find all the locations with your GPS. Now, on your hike back, it’s really starting to heat up.

You try to stick to the shaded, dry washes, and as you round a bend, you’re surprised to see several people huddled under a scraggly mesquite tree against the side of the steep ravine: Mexican immigrants crossing the border. They look dirty and afraid, but so do you.

“¿Tienes agua?” they timidly ask, and you see their empty plastic water containers.

This fictionalized scenario reflects a composite of real incidents experienced by U.S. and Mexican researchers, including me, on both sides of the border in the course of their fieldwork. While giving aid may be the moral thing to do, there can be consequences. Humanitarian aid workers in Arizona have been arrested for leaving food and water for migrants in similar situations, and such arrests have risen since 2017.

In the course of their fieldwork, researchers can encounter migrants, Border Patrol agents and drug traffickers. Loren Elliott/AFP via Getty Images

The U.S.-Mexico border is a region of significant biological and cultural diversity that draws researchers from a wide variety of disciplines, including geology, biology, environmental sciences, archaeology, hydrology, and cultural and social sciences. It is also an area of humanitarian crisis and contentious politics.

Migrants have always been a part of this area, but dangerous drug cartels and increasing militarization have added additional challenges for those who live and work here. U.S. and Mexican researchers are faced with ethical and logistical challenges in navigating this political landscape. To better understand these complex dynamics, my colleagues and I conducted an anonymous survey among researchers who work in the border region to learn how border politics affect collaboration and researchers’ ability to perform their jobs.

Camera traps meant to take photos of wildlife also capture images of the people traversing this landscape. Myles Traphagen, CC BY-ND

Border fieldwork comes with complications

Our binational, multidisciplinary group of concerned scientists distributed an anonymous, online survey to 807 members of the Next-Generation Sonoran Desert Researchers Network. From this group of academic professionals, college students and employees of nonprofit organizations and federal and state agencies who work in the U.S.-Mexico border region, we received 59 responses. While not yet published in a peer-reviewed journal, a summary of our results can be found on the N-Gen website, and the original data is available online.

Researchers in our pre-pandemic study reported feeling safe for the most part while working in the U.S.-Mexico border region. However, this may reflect the fact that they adjust their work to stay away from risky places.

Respondents noted the importance of knowing individuals and communities where they work. For instance, one U.S.-based researcher told us, “I feel safe in Mexico where I know landowners and they know me. I don’t feel safe in U.S. public lands due to Border Patrol’s extensive presence, their racial profiling ways and guns pulled on me.”

Many respondents reported having encountered situations during fieldwork when they felt their security was threatened; these incidents occurred roughly equally on both sides of the border. Participants did not express safety concerns about migrants themselves, but instead pointed to the militarization and criminal activity associated with the region.

Safety concerns on the Mexico side were primarily due to drug cartels and other criminal activity. Concerns in the U.S. centered on direct intimidation or “uneasy” or threatening encounters with U.S. Border Patrol, private landowners or militias.

As a result of safety concerns, many researchers from both countries reported their organization or employer had placed restrictions on working in the border areas of Mexico. In most cases, this meant limiting access to specific areas or requiring additional paperwork or approval through their institution.

Respondents reported logistical issues “altered or disrupted” their ability to perform fieldwork. These problems ranged from trouble crossing the border to difficulty obtaining necessary paperwork and permissions.

One researcher reported that permit delays for shipping scientific equipment across the border had stalled their research for over a year. More than half of respondents said these issues had increased in frequency or caused greater disruption to their work within the last three years.

Caught in the middle

Unsurprisingly, most researchers surveyed (69%) said they’ve encountered undocumented migrants while conducting fieldwork in the border region, although infrequently.

In situations of contact, migrants asked for assistance, such as food, water or a ride, a little over half of the time. Researchers drew a clear distinction between their willingness to offer food or water versus providing transportation.

Despite concerns about recent prosecutions of humanitarian aid workers in the border region, the threat was not sufficient to stop most respondents from taking action they viewed as moral or ethical.

“I would have pause given legal ramifications,” one person told us, “But I do not think this would change how I would act.” Survey respondents commented that they felt “caught in the middle” of an “impossible situation,” where the fear of prosecution conflicts with their moral imperative to help people in need.

A volunteer collects data as part of an ongoing Borderlands Sister Parks project in Rancho San Bernardino, Sonora, Mexico. Sky Island Alliance, CC BY-ND

Overall, our results suggest that research is affected by border policies in myriad ways: Restricted access to areas reduces scientists’ ability to collect the comprehensive data necessary for work such as biodiversity inventories.

Restrictions directly affecting the ability of researchers to collaborate over international boundaries can limit creativity and discovery. That can have long-term impacts, such as further separating countries’ ability to understand each other and foster meaningful partnerships catalyzed by science, including industrial innovation or ecological sustainability.

Societies have the right to enjoy the benefits of science. This requires that scientists are able to collaborate internationally and to fulfill their functions without discrimination or fear of repression or prosecution.


Taylor Edwards, Associate Staff Scientist, University of Arizona

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, June 23, 2020


Self-driving taxis could be a setback for those with different needs – unless companies embrace accessible design now

Wheelchair advocates and taxi drivers protest lack of accessibility and surge pricing in New York City on Tuesday, January 19, 2016. Richard Levine/Corbis via Getty Images
John Lunsford, Cornell University

Autonomous vehicles (AVs), like self-driving taxis, continue to garner media attention as industry and political stakeholders claim that they will improve safety and access to transportation for everyone. But for people who have different mobility needs and rely on human drivers for work beyond the task of driving, the prospect of driverless taxis may not sound like progress. Unless accommodations are built in to autonomous vehicle designs, companies risk undermining transportation access for the very communities this technology is promising to include.

The promise

A January 2020 joint report issued by the National Science and Technology Council and U.S. Department of Transportation paints a bright picture of an autonomous-enabled future. The report predicts autonomous vehicles will provide “improved quality of life, access and mobility for all citizens,” and that replacing the driver with an autonomous system will create safer transportation by removing the “possibility of human error.”

In addition, synchronizing vehicle movement with distance and traffic patterns would not only result in more efficient service, but safer roadway navigation. These advances should mean fewer cars, less traffic, more economical fuel use and increased vehicle availability.

More than driving

If done right, autonomous vehicles could improve access to transportation for everyone. But by not accounting for the many other kinds of labor a driver performs, current AVs may present problems for people with different needs.

Drivers perform work beyond driving. Justice Ender/Flickr

For older people, those with disabilities and even individuals in emergency situations, the driver bridges the gap between personal capability and vehicle accessibility.

Drivers help people to and from vehicles, as well as into and out of them. Drivers move and store luggage and mobility equipment like wheelchairs and walkers, and navigate emergency situations like cardiac arrest, allergic reaction or drug overdose.

Yet right now asking an AV interface for assistance would be like asking Siri to help you up if you’ve fallen down.

Two unequal systems

In the 1970s and the years that followed, Congress determined that redesigning transportation for accessibility was too costly. Instead, lawmakers fitted assistive devices to old transportation networks and expected private-sector taxi drivers to help. Some did; many didn’t.

Problems of discrimination led to the landmark Americans with Disabilities Act of 1990. The ADA made discrimination based on ability illegal – but access to transportation was still dependent on the driver.

Taxi access is already problematic due to a two-tiered system. mokee81/iStock via Getty Images Plus

Today, cities and companies are still struggling with accessibility. People with different needs remain vulnerable to the whims and prejudices of the driver. Too often people with different needs are denied assistance or transportation altogether.

It was only in 2016, for instance, that Boston’s taxis, Uber and later Lyft began integrating a small number of Wheelchair Accessible Vehicles into their fleets, and other companies like SilverRide have emerged to offer specialty services for older people.

But even with these additions, taxi, Uber and Lyft riders still experience cancellations and longer wait times in cities like Washington, D.C., Boston, Chicago, San Francisco and New York.

A 2019 study compared wait times for Wheelchair Accessible Vehicles (WAVs) with those for standard vehicles in New York City. The wait time for Uber WAV was more than two times as long, and for Lyft WAV more than five times as long. New York Lawyers for the Public Interest, Still Left Behind whitepaper, CC BY

While specialized vehicles are a valuable step toward accessible transportation, they also mean more cars on the road. A 2017 study found Uber and Lyft are increasing traffic congestion in cities, leading to increases in safety risks, transit times and pollution. To add to the traffic problem, the International Transport Forum predicts that traffic will likely increase even more as autonomous cars occupy the road alongside traditional ones.

The future

AV developers struggle with what accessibility should look like. Some leading AV companies focus on accessibility inside the car. Waymo and Lyft are working to communicate information to passengers with disabilities. Nissan’s Virtual Reality avatars may provide company, comfort and assistance to passengers in need.

Other AV companies approach accessibility by redesigning access. Startup May Mobility’s low-speed shuttle can deploy a wheelchair ramp. Tesla’s falcon-wing doors open vertically for easier access, and its Smart Summon feature allows owners to call their car to fetch them.

In my opinion, vehicle specialization should not be the path forward. A wheelchair ramp in one car and Braille in another will increase cars on the road, decrease availability and increase consumer cost. For AVs to fulfill the promise of accessibility and be environmentally efficient, all cars need to be similarly accessible – even if the mechanisms of accessibility are not always in use. This way AVs can more closely mirror the variety of tasks human drivers currently perform and do it reliably, without discrimination. Standard features could include push button or voice activated motorized doors with sliding ramps, an entry space instead of front seats and interior handrails.

A good place to start is for stakeholders to agree on what accessibility needs must be met, and to treat AV developments as pieces of an accessibility solution rather than as separate niche markets racing toward minimum accommodations. The Greenlining Institute, a nonprofit research and community equity organization, suggests that beyond physical capability, accessibility should also account for financial, cultural, technological, logistical, race, gender, age, class and geographic considerations. If autonomous vehicles are developed to handle the messiness and complexity taxi drivers currently deal with, society will be one step closer to real accessibility.


John Lunsford, PhD Candidate in Media, Technology and Society, Cornell University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, June 2, 2020


To safely explore the solar system and beyond, spaceships need to go faster – nuclear-powered rockets may be the answer

Over the last 50 years, a lot has changed in rocketry. The fuel that powers spaceflight might finally be changing too. CSA-Printstock/DIgital Vision Vectors via Getty Images
Iain Boyd, University of Colorado Boulder

With dreams of Mars on the minds of both NASA and Elon Musk, long-distance crewed missions through space are coming. But you might be surprised to learn that modern rockets don’t go all that much faster than the rockets of the past.

There are a lot of reasons that a faster spaceship is a better one, and nuclear-powered rockets are a way to do this. They offer many benefits over traditional fuel-burning rockets or modern solar-powered electric rockets, but there have been only eight U.S. space launches carrying nuclear reactors in the last 40 years.

However, last year the laws regulating nuclear space flights changed and work has already begun on this next generation of rockets.

Why the need for speed?

The first step of a space journey involves the use of launch rockets to get a ship into orbit. These are the large fuel-burning engines people imagine when they think of rocket launches and are not likely to go away in the foreseeable future due to the constraints of gravity.

It is once a ship reaches space that things get interesting. To escape Earth’s gravity and reach deep space destinations, ships need additional acceleration. This is where nuclear systems come into play. If astronauts want to explore anything farther than the Moon and perhaps Mars, they are going to need to be going very, very fast. Space is massive, and everything is far away.

There are two reasons faster rockets are better for long-distance space travel: safety and time.

Astronauts on a trip to Mars would be exposed to very high levels of radiation which can cause serious long-term health problems such as cancer and sterility. Radiation shielding can help, but it is extremely heavy, and the longer the mission, the more shielding is needed. A better way to reduce radiation exposure is to simply get where you are going quicker.

But human safety isn’t the only benefit. As space agencies probe farther out into space, it is important to get data from unmanned missions as soon as possible. It took Voyager 2 12 years just to reach Neptune, where it snapped some incredible photos as it flew by. If Voyager 2 had a faster propulsion system, astronomers could have had those photos and the information they contained years earlier.

Speed is good. But why are nuclear systems faster?

The Saturn V rocket was 363 feet tall and mostly just a gas tank. Mike Jetzer/heroicrelics.org, CC BY-NC-ND

Systems of today

Once a ship has escaped Earth’s gravity, there are three important aspects to consider when comparing any propulsion system:

  • Thrust – how fast a system can accelerate a ship
  • Mass efficiency – how much thrust a system can produce for a given amount of fuel
  • Energy density – how much energy a given amount of fuel can produce
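The “mass efficiency” entry above is captured by the ideal (Tsiolkovsky) rocket equation, which links exhaust velocity to the total change in velocity a ship can achieve on a given fuel load. The exhaust velocities below are round illustrative values, not any specific engine’s specifications.

```python
# Tsiolkovsky rocket equation: dv = v_e * ln(m0 / mf), where v_e is
# exhaust velocity, m0 the fueled (wet) mass and mf the dry mass.
# The velocities are round illustrative values, not real engine specs.

import math

def delta_v(exhaust_velocity, wet_mass, dry_mass):
    return exhaust_velocity * math.log(wet_mass / dry_mass)

chemical_ve = 4_500   # m/s: typical of chemical rockets
electric_ve = 25_000  # m/s: ion thrusters are far more mass-efficient

# Same ship, same 90% propellant fraction:
print(delta_v(chemical_ve, 100_000, 10_000))  # ~10,400 m/s
print(delta_v(electric_ve, 100_000, 10_000))  # ~57,600 m/s
```

The logarithm is the punchline: carrying more fuel helps only a little, while raising exhaust velocity helps directly.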

Today, the most common propulsion systems in use are chemical propulsion – that is, regular fuel-burning rockets – and solar-powered electric propulsion systems.

Chemical propulsion systems provide a lot of thrust, but chemical rockets aren’t particularly efficient, and rocket fuel isn’t that energy-dense. The Saturn V rocket that took astronauts to the Moon produced 35 million Newtons of force at liftoff and carried 950,000 gallons of fuel. While most of the fuel was used in getting the rocket into orbit, the limitations are apparent: It takes a lot of heavy fuel to get anywhere.

Electric propulsion systems generate thrust using electricity produced from solar panels. The most common way to do this is to use an electrical field to accelerate ions, such as in the Hall thruster. These devices are commonly used to power satellites and can have more than five times higher mass efficiency than chemical systems. But they produce much less thrust – about three Newtons, or only enough to accelerate a car from 0-60 mph in about two and a half hours. The energy source – the Sun – is essentially infinite but becomes less useful the farther away from the Sun the ship gets.
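The article’s 0-60 mph comparison follows directly from Newton’s second law. A minimal sketch, assuming a car mass of about 1,000 kg (a figure not given in the article, but one that reproduces its two-and-a-half-hour estimate):

```python
# Back-of-the-envelope check of the Hall-thruster comparison above.
THRUST_N = 3.0               # thrust of a Hall thruster, from the article
CAR_MASS_KG = 1000.0         # assumed car mass (not stated in the article)
TARGET_SPEED = 60 * 0.44704  # 60 mph converted to m/s (~26.8 m/s)

accel = THRUST_N / CAR_MASS_KG  # F = m*a  ->  a = F/m
time_s = TARGET_SPEED / accel   # v = a*t  ->  t = v/a
print(f"0-60 mph takes about {time_s / 3600:.1f} hours")
```

Three newtons is roughly the weight of a small apple, so the hours-long acceleration time is no surprise; the point is that a Hall thruster can sustain that gentle push for months using very little propellant.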

One of the reasons nuclear-powered rockets are promising is because they offer incredible energy density. The uranium fuel used in nuclear reactors has an energy density that is 4 million times higher than hydrazine, a typical chemical rocket propellant. It is much easier to get a small amount of uranium to space than hundreds of thousands of gallons of fuel.
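The “4 million times” figure can be sanity-checked with textbook energy densities. The two values below are standard approximations, not numbers from the article:

```python
# Rough check of the energy-density claim above.
URANIUM_J_PER_KG = 8.0e13    # complete fission of uranium-235 (approximate)
HYDRAZINE_J_PER_KG = 1.9e7   # heat of combustion of hydrazine (approximate)

ratio = URANIUM_J_PER_KG / HYDRAZINE_J_PER_KG
print(f"uranium stores roughly {ratio / 1e6:.0f} million times more energy per kg")
```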

So what about thrust and mass efficiency?

The first nuclear thermal rocket was built in 1967 and is seen in the background. In the foreground is the protective casing that would hold the reactor. NASA/Wikipedia

Two options for nuclear

Engineers have designed two main types of nuclear systems for space travel.

The first is called nuclear thermal propulsion. These systems are very powerful and moderately efficient. They use a small nuclear fission reactor – similar to those found in nuclear submarines – to heat a gas, such as hydrogen, and that gas is then accelerated through a rocket nozzle to provide thrust. Engineers from NASA estimate that a mission to Mars powered by nuclear thermal propulsion would be 20%-25% shorter than a trip on a chemical-powered rocket.

Nuclear thermal propulsion systems are more than twice as efficient as chemical propulsion systems – meaning they generate twice as much thrust using the same amount of propellant mass – and can deliver 100,000 Newtons of thrust. That’s enough force to get a car from 0-60 mph in about a quarter of a second.
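The quarter-second figure follows from the same F = ma arithmetic as the Hall-thruster comparison, again assuming a car mass of about 1,000 kg (an assumption, not a number from the article):

```python
# Sanity check of the nuclear thermal 0-60 mph figure above.
THRUST_N = 100_000.0         # nuclear thermal thrust, from the article
CAR_MASS_KG = 1000.0         # assumed car mass (not stated in the article)
TARGET_SPEED = 60 * 0.44704  # 60 mph converted to m/s

time_s = CAR_MASS_KG * TARGET_SPEED / THRUST_N  # t = m*v / F
print(f"0-60 mph in about {time_s:.2f} seconds")
```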

The second nuclear-based rocket system is called nuclear electric propulsion. No nuclear electric systems have been built yet, but the idea is to use a high-power fission reactor to generate electricity that would then power an electrical propulsion system like a Hall thruster. This would be very efficient, about three times better than a nuclear thermal propulsion system. Since the nuclear reactor could create a lot of power, many individual electric thrusters could be operated simultaneously to generate a good amount of thrust.

Nuclear electric systems would be the best choice for extremely long-range missions because they don’t require solar energy, have very high efficiency and can give relatively high thrust. But while nuclear electric rockets are extremely promising, there are still a lot of technical problems to solve before they are put into use.

An artist’s impression of what a nuclear thermal ship built to take humans to Mars could look like. John Frassanito & Associates/Wikipedia

Why aren’t there nuclear-powered rockets yet?

Nuclear thermal propulsion systems have been studied since the 1960s but have not yet flown in space.

Regulations first imposed in the U.S. in the 1970s essentially required case-by-case examination and approval of any nuclear space project from multiple government agencies and explicit approval from the president. Along with a lack of funding for nuclear rocket system research, this environment prevented further improvement of nuclear reactors for use in space.

That all changed when the Trump administration issued a presidential memorandum in August 2019. While upholding the need to keep nuclear launches as safe as possible, the new directive allows nuclear missions with lower amounts of nuclear material to skip the multi-agency approval process. Only the sponsoring agency, like NASA, for example, needs to certify that the mission meets safety recommendations. Larger nuclear missions would go through the same process as before.

Along with this revision of regulations, NASA received US$100 million in the 2019 budget to develop nuclear thermal propulsion. DARPA is also developing a space nuclear thermal propulsion system to enable national security operations beyond Earth orbit.

After 60 years of stagnation, it’s possible a nuclear-powered rocket will be heading to space within a decade. This exciting achievement will usher in a new era of space exploration. People will go to Mars and science experiments will make new discoveries all across our solar system and beyond.

The Conversation

Iain Boyd, Professor of Aerospace Engineering Sciences, University of Colorado Boulder

This article is republished from The Conversation under a Creative Commons license. Read the original article.