Saturday, May 16, 2020


Robots are playing many roles in the coronavirus crisis – and offering lessons for future disasters

A nurse (left) operates a robot used to interact remotely with coronavirus patients while a physician looks on. MIGUEL MEDINA/AFP via Getty Images
Robin R. Murphy, Texas A&M University; Justin Adams, Florida State University, and Vignesh Babu Manjunath Gandudi, Texas A&M University

A cylindrical robot rolls into a treatment room so that health care workers can remotely take the temperature and measure the blood pressure and oxygen saturation of patients hooked up to ventilators. Another robot, which looks like a pair of large fluorescent lights stood on end, travels through a hospital disinfecting with ultraviolet light. Meanwhile, a cart-like robot brings food to people quarantined in a 16-story hotel. Outside, quadcopter drones ferry test samples to laboratories and watch for violations of stay-at-home restrictions.

These are just a few of the two dozen ways robots have been used during the COVID-19 pandemic: providing health care in and out of hospitals, automating testing, supporting public safety and public works, and helping people continue their daily work and lives.

The lessons they’re teaching for the future are the same ones learned in previous disasters, then quickly forgotten as interest and funding faded. The best robots for a disaster are the ones, like those in these examples, that already exist in the health care and public safety sectors.

Research laboratories and startups are creating new robots, including one designed to allow health care workers to remotely take blood samples and perform mouth swabs. These prototypes are unlikely to make a difference now. However, the robots under development could make a difference in future disasters if momentum for robotics research continues.

Robots around the world

As roboticists at Texas A&M University and the Center for Robot-Assisted Search and Rescue, we examined over 120 press and social media reports from China, the U.S. and 19 other countries about how robots are being used during the COVID-19 pandemic. We found that ground and aerial robots are playing a notable role in almost every aspect of managing the crisis.

R. Murphy, V. Gandudi, Texas A&M; J. Adams, Center for Robot-Assisted Search and Rescue, CC BY-ND

In hospitals, doctors and nurses, family members and even receptionists are using robots to interact in real time with patients from a safe distance. Specialized robots are disinfecting rooms and delivering meals or prescriptions, handling the hidden extra work associated with a surge in patients. Delivery robots are transporting infectious samples to laboratories for testing.

Outside of hospitals, public works and public safety departments are using robots to spray disinfectant throughout public spaces. Drones are providing thermal imagery to help identify infected citizens and enforce quarantines and social distancing restrictions. Robots are even rolling through crowds, broadcasting public service messages about the virus and social distancing.

At work and home, robots are assisting in surprising ways. Realtors are teleoperating robots to show properties from the safety of their own homes. Workers building a new hospital in China were able to work through the night because drones carried lighting. In Japan, students used robots to walk the stage for graduation, and in Cyprus, a person used a drone to walk his dog without violating stay-at-home restrictions.

Helping workers, not replacing them

Every disaster is different, but the experience of using robots for the COVID-19 pandemic presents an opportunity to finally learn three lessons documented over the past 20 years. One important lesson is that during a disaster robots do not replace people. They either perform tasks that a person could not do at all, or could not do safely, or they take on tasks that free up responders to handle the increased workload.

The majority of robots being used in hospitals treating COVID-19 patients have not replaced health care professionals. These robots are teleoperated, enabling the health care workers to apply their expertise and compassion to sick and isolated patients remotely.

A robot uses pulses of ultraviolet light to disinfect a hospital room in Johannesburg, South Africa. MICHELE SPATARI/AFP via Getty Images

A small number of robots are autonomous, such as the popular UVD decontamination robots and meal and prescription carts. But the reports indicate that the robots are not displacing workers. Instead, the robots are helping the existing hospital staff cope with the surge in infectious patients. The decontamination robots disinfect better and faster than human cleaners, while the carts reduce the amount of time and personal protective equipment nurses and aides must spend on ancillary tasks.

Off-the-shelf over prototypes

The second lesson is that the robots used during an emergency are usually already in common use before the disaster. Technologists often rush out well-intentioned prototypes, but during an emergency, responders – health care workers and search-and-rescue teams – are too busy and stressed to learn to use something new and unfamiliar. They typically can’t absorb the unanticipated tasks and procedures, like having to frequently reboot or change batteries, that usually accompany new technology.

Fortunately, responders adopt technologies that their peers have used extensively and shown to work. For example, decontamination robots were already in daily use at many locations for preventing hospital-acquired infections. Sometimes responders also adapt existing robots. For example, agricultural drones designed for spraying pesticides in open fields are being adapted for spraying disinfectants in crowded cityscapes in China and India.

Workers in Kunming City, Yunnan Province, China refill a drone with disinfectant. The city is using drones to spray disinfectant in some public areas. Xinhua News Agency/Yang Zongyou via Getty Images

A third lesson follows from the second. Repurposing existing robots is generally more effective than building specialized prototypes. Building a new, specialized robot for a task takes years. Imagine trying to build a new kind of automobile from scratch. Even if such a car could be quickly designed and manufactured, only a few would be produced at first, and they would likely lack the reliability, ease of use and safety that come from months or years of feedback from continuous use.

Alternatively, a faster and more scalable approach is to modify existing cars or trucks. This is how robots are being configured for COVID-19 applications. For example, responders began using the thermal cameras already on bomb squad robots and drones – common in most large cities – to detect infected citizens running a high fever. While the jury is still out on whether thermal imaging is effective, the point is that existing public safety robots were rapidly repurposed for public health.

Don’t stockpile robots

The broad use of robots for COVID-19 is a strong indication that the health care system needed more robots, just like it needed more of everyday items such as personal protective equipment and ventilators. But while storing caches of hospital supplies makes sense, storing a cache of specialized robots for use in a future emergency does not.

This was the strategy of the nuclear power industry, and it failed during the Fukushima Daiichi nuclear accident. The robots stored by the Japan Atomic Energy Agency for an emergency were outdated, and the operators were rusty or no longer employed. Instead, the Tokyo Electric Power Company lost valuable time acquiring and deploying commercial off-the-shelf bomb squad robots, which were in routine use throughout the world. While the commercial robots were not perfect for dealing with a radiological emergency, they were good enough and cheap enough for dozens of robots to be used throughout the facility.

Robots in future pandemics

Hopefully, COVID-19 will accelerate the adoption of existing robots and their adaptation to new niches, but it might also lead to new robots. Laboratory and supply chain automation is emerging as an overlooked opportunity. Automating the slow COVID-19 test processing that relies on a small set of labs and specially trained workers would eliminate some of the delays currently being experienced in many parts of the U.S.

Automation is not particularly exciting, but just like the unglamorous disinfecting robots in use now, it is a valuable application. If government and industry have finally learned the lessons from previous disasters, more mundane robots will be ready to work side by side with the health care workers on the front lines when the next pandemic arrives.


Robin R. Murphy, Raytheon Professor of Computer Science and Engineering; Vice-President, Center for Robot-Assisted Search and Rescue (nfp), Texas A&M University; Justin Adams, President of the Center for Robot-Assisted Search and Rescue/Research Fellow - The Center for Disaster Risk Policy, Florida State University, and Vignesh Babu Manjunath Gandudi, Graduate Teaching Assistant, Texas A&M University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Monday, May 11, 2020


How Apple and Google will let your phone warn you if you've been exposed to the coronavirus

Apps that warn about close contact with COVID-19 cases are key to relaxing social distancing rules. Walter Bibikow/Stone via Getty Images
Johannes Becker, Boston University and David Starobinski, Boston University

On April 10, Apple and Google announced a coronavirus exposure notification system that will be built into their smartphone operating systems, iOS and Android. The system uses the ubiquitous Bluetooth short-range wireless communication technology.

There are dozens of apps being developed around the world that alert people if they’ve been exposed to a person who has tested positive for COVID-19. Many of them also report the identities of the exposed people to public health authorities, which has raised privacy concerns. Several other exposure notification projects, including PACT, BlueTrace and the Covid Watch project, take a similar privacy-protecting approach to Apple’s and Google’s initiative.

So how will the Apple-Google exposure notification system work? As researchers who study security and privacy of wireless communication, we have examined the companies’ plan and have assessed its effectiveness and privacy implications.

Recently, a study found that contact tracing can be effective in containing diseases such as COVID-19, if large parts of the population participate. Exposure notification schemes like the Apple-Google system aren’t true contact tracing systems because they don’t allow public health authorities to identify people who have been exposed to infected individuals. But digital exposure notification systems have a big advantage: They can be used by millions of people and rapidly warn those who have been exposed to quarantine themselves.

Bluetooth beacons

Because Bluetooth is supported on billions of devices, it seems like an obvious choice of technology for these systems. The protocol used for this is Bluetooth Low Energy, or Bluetooth LE for short. This variant is optimized for energy-efficient communication between small devices, which makes it a popular protocol for smartphones and wearables such as smartwatches.

Bluetooth allows phones that are near each other to communicate. Phones that have been near each other for long enough can approximate potential viral transmission. Christoph Dernbach/picture alliance via Getty Images

Bluetooth LE communicates in two main ways. Two devices can communicate with each other over the data channel, as when a smartwatch synchronizes with a phone. Devices can also broadcast useful information to nearby devices over the advertising channel. For example, some devices regularly announce their presence to facilitate automatic connection.

To build an exposure notification app using Bluetooth LE, developers could assign everyone a permanent ID and make every phone broadcast it on an advertising channel. Then, they could build an app that receives the IDs so every phone would be able to keep a record of close encounters with other phones. But that would be a clear violation of privacy. Broadcasting any personally identifiable information via Bluetooth LE is a bad idea, because messages can be read by anyone in range.

Anonymous exchanges

To get around this problem, every phone broadcasts a long random number, which is changed frequently. Other devices receive these numbers and store them if they were sent from close proximity. Because these numbers are long, unique and random, they reveal no personal information over Bluetooth LE.

Apple and Google follow this principle in their specification, but add some cryptography. First, every phone generates a unique tracing key that is kept confidential on the phone. Every day, the tracing key is used to derive a new daily tracing key. Though the tracing key could be used to identify the phone, the daily tracing key can’t be used to figure out the phone’s permanent tracing key. Then, every 10 to 20 minutes, the daily tracing key is used to derive a new rolling proximity identifier, which looks just like a long random number. This is what gets broadcast to other devices via the Bluetooth advertising channel.
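The specification defines these derivations with standard one-way cryptographic functions. The Python sketch below illustrates the chain of keys described above; the HMAC construction, labels and key lengths here are simplified stand-ins for the published scheme, not the exact algorithm.

```python
import hashlib
import hmac
import os

def generate_tracing_key() -> bytes:
    """Permanent per-phone secret; never leaves the device."""
    return os.urandom(32)

def daily_tracing_key(tracing_key: bytes, day_number: int) -> bytes:
    """Derive the day's key one-way, so it reveals nothing about the tracing key."""
    message = b"CT-DTK" + day_number.to_bytes(4, "little")
    return hmac.new(tracing_key, message, hashlib.sha256).digest()[:16]

def rolling_proximity_identifier(daily_key: bytes, interval: int) -> bytes:
    """Identifier broadcast during one 10-minute window of the day (interval 0-143)."""
    message = b"CT-RPI" + interval.to_bytes(1, "little")
    return hmac.new(daily_key, message, hashlib.sha256).digest()[:16]
```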

When someone tests positive for COVID-19, they can disclose a list of their daily tracing keys, usually from the previous 14 days. Everyone else’s phones use the disclosed keys to recreate the infected person’s rolling proximity identifiers. The phones then compare the COVID-19-positive identifiers with their own records of the identifiers they received from nearby phones. A match reveals a potential exposure to the virus, but it doesn’t identify the patient.
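Continuing the sketch above, the matching step is a purely local set comparison: a phone regenerates every identifier a disclosed daily key could have produced and checks for overlap with the identifiers it recorded from nearby devices.

```python
def identifiers_for_day(daily_key: bytes) -> set:
    # 144 ten-minute intervals per day
    return {rolling_proximity_identifier(daily_key, i) for i in range(144)}

def was_exposed(disclosed_daily_keys: list, recorded_identifiers: set) -> bool:
    """True if any identifier derived from a patient's disclosed keys matches
    one this phone heard nearby -- without ever learning who the patient is."""
    return any(identifiers_for_day(key) & recorded_identifiers
               for key in disclosed_daily_keys)
```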

The Australian government’s COVIDSafe app warns about close encounters with people who are COVID-19-positive, but unlike the Apple-Google system, COVIDSafe reports the contacts to public health authorities. Florent Rols/SOPA Images/LightRocket via Getty Images

Most of the competing proposals use a similar approach. The principal difference is that Apple’s and Google’s operating system updates automatically reach far more phones than a single app can. Additionally, by proposing a cross-platform standard, Apple and Google allow existing apps to piggyback on a common, compatible communication approach.

No plan is perfect

The Apple-Google exposure notification system is very secure, but it’s no guarantee of either accuracy or privacy. The system could produce a large number of false positives because being within Bluetooth range of an infected person doesn’t necessarily mean the virus has been transmitted. And even if an app records only very strong signals as a proxy for close contact, it cannot know whether there was a wall, a window or a floor between the phones.

However unlikely, there are ways governments or hackers could track or identify people using the system. Bluetooth LE devices use an advertising address when broadcasting on an advertising channel. Though these addresses can be randomized to protect the identity of the sender, we demonstrated last year that it is theoretically possible to track devices for extended periods of time if the advertising message and advertising address are not changed in sync. To Apple’s and Google’s credit, they call for these to be changed synchronously.

But even if the advertising address and a coronavirus app’s rolling identifier are changed in sync, it may still be possible to track someone’s phone. If there isn’t a sufficiently large number of other devices nearby that also change their advertising addresses and rolling identifiers in sync – a process known as mixing – someone could still track individual devices. For example, if there is a single phone in a room, someone could keep track of it because it’s the only phone that could be broadcasting the random identifiers.

Another potential attack involves logging additional information along with the rolling identifiers. Even though the protocol does not send personal information or location data, receiving apps could record when and where they received keys from other phones. If this were done on a large scale – such as by an app that systematically collects this extra information – it could be used to identify and track individuals. For example, if a supermarket recorded the exact date and time of incoming rolling proximity identifiers at its checkout lanes and combined that data with credit card swipes, store staff would have a reasonable chance of identifying which customers were COVID-19 positive.
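To make that linkage concrete, here is a hypothetical sketch of the attack; the log formats and the 30-second matching window are invented for illustration.

```python
from datetime import timedelta

def link_shoppers(identifier_log, swipe_log, window=timedelta(seconds=30)):
    """Pair each rolling proximity identifier overheard at a checkout lane
    with credit card swipes at the same lane around the same time. If a
    matched identifier later appears among a patient's disclosed keys,
    the cardholder is de-anonymized."""
    pairs = []
    for ident_time, lane, identifier in identifier_log:
        for swipe_time, swipe_lane, cardholder in swipe_log:
            if lane == swipe_lane and abs(ident_time - swipe_time) <= window:
                pairs.append((identifier, cardholder))
    return pairs
```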

And because Bluetooth LE advertising beacons use plain-text messages, it’s possible to send faked messages. This could be used to troll others by repeating known COVID-19-positive rolling proximity identifiers to many people, resulting in deliberate false positives.

Nevertheless, the Apple-Google system could be the key to alerting thousands of people who have been exposed to the coronavirus while protecting their identities, unlike contact tracing apps that report identifying information to central government or corporate databases.


Johannes Becker, Doctoral student in Electrical & Computer Engineering, Boston University and David Starobinski, Professor of Electrical and Computer Engineering, Boston University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, April 21, 2020


Linking self-driving cars to traffic signals might help pedestrians give them the green light

An autonomous vehicle has no driver to communicate with about whether it’s safe to cross. Saklakova/iStock/Getty Images Plus
Lionel Peter Robert Jr., University of Michigan

The Research Brief is a short take on interesting academic work.

The big idea

Automated vehicles don’t have human operators to communicate their driving intentions to pedestrians at intersections. My team’s research on pedestrians’ perceptions of safety shows their trust of traffic lights tends to override their fear of self-driving cars. This suggests one way to help pedestrians trust and safely interact with autonomous vehicles may be to link the cars’ driving behavior to traffic lights.

In a recent study by my team at the University of Michigan, we focused on communication via a vehicle’s driving behavior to study how people might react to self-driving cars in different situations. We set up a virtual-reality simulator that let people experience street intersections and make choices about whether to cross the street. In different simulations, self-driving cars acted either more or less like an aggressive driver. In some cases there was a traffic light controlling the intersection.

In the more aggressive mode, the car would stop abruptly at the last possible second to let the pedestrian cross. In the less aggressive mode, it would begin braking earlier, indicating to pedestrians that it would stop for them. Aggressive driving reduced pedestrians’ trust in the autonomous vehicle and made them less likely to cross the street.

However, this was true only when there was no traffic light. When there was a light, pedestrians focused on the traffic light and usually crossed the street regardless of whether the car was driving aggressively. This indicates that pedestrians’ trust of traffic lights outweighs any concerns about how self-driving cars behave.

Why it matters

Introducing autonomous vehicles might be one way to make roads safer. Drivers and pedestrians often use nonverbal communication to negotiate safe passage at crosswalks, though, and cars without drivers can’t communicate in the same way. This could in turn make pedestrians and other road users less safe, especially since autonomous vehicles aren’t yet designed to communicate with systems that make streets safer, such as traffic lights.

Other research being done in the field

Some researchers have tried to find ways for self-driving cars to communicate with pedestrians. They have tried to use parts that cars already have, such as headlights, or add new ones, such as LED signs on the vehicle.

However, unless every car does it the same way, this strategy won’t work. For example, unless automakers agreed on how headlights should communicate certain messages or the government set rules, it would be impossible to make sure pedestrians understood the message. The same holds for new technology like LED message boards on cars. There would need to be a standard set of messages all pedestrians could understand without learning multiple systems.

Even if the vehicles communicated in the same way, several cars approaching an intersection and making independent decisions about stopping could cause confusion. Imagine three to five autonomous vehicles approaching a crosswalk, each displaying its own message. The pedestrian would need to read each of these messages, on moving cars, before deciding whether to cross.

What if all vehicles were communicating with the traffic lights ahead, even before they’re visible? elenabs/iStock/Getty Images Plus

What’s next

Our results suggest a better approach would be to have the car communicate directly with the traffic signal, for two reasons.

First, pedestrians already look to and understand current traffic lights.

Second, a car can tell what a traffic light is doing much sooner by checking in over a wireless network than by waiting until its camera can see the light.

This technology is still being developed, and scholars at Michigan’s Mcity mobility research center and elsewhere are studying problems like how to send and prioritize messages between cars and signals. It might effectively put self-driving cars under traffic lights’ control, with ways to adapt to current conditions. For example, a traffic light might tell approaching cars that it was about to turn red, giving them more time to stop. On a slippery road, a car might ask the light to stay green a few seconds longer so an abrupt stop isn’t necessary.
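There is no settled message format for this yet. As a rough sketch, assuming a simplified broadcast loosely modeled on the Signal Phase and Timing (SPaT) messages used in connected-vehicle pilots, the interaction might look like this; the message fields and decision logic are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SignalPhase:
    """Simplified stand-in for a Signal Phase and Timing (SPaT) broadcast."""
    intersection_id: str
    phase: str               # "green", "yellow" or "red"
    seconds_remaining: float

def plan_approach(msg: SignalPhase, eta_seconds: float) -> str:
    """Decide before the light is even visible, mirroring the early,
    gradual braking that pedestrians in the study trusted."""
    if msg.phase == "green" and msg.seconds_remaining > eta_seconds:
        return "proceed"
    return "begin gradual braking"

# Example: the light will turn red before the car arrives.
print(plan_approach(SignalPhase("5th-and-Main", "green", 4.0), eta_seconds=9.0))
```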

To make this real, engineers and policymakers would need to work together on developing technologies and setting rules. Each would have to better understand what the other does. At the same time, they would need to understand that not every solution works in every region or society. For example, the best way for traffic lights and self-driving cars to communicate in Detroit might not work in Mumbai, where roads and driving practices are far different.


Lionel Peter Robert Jr., Associate Professor of Information, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Friday, March 13, 2020


How technology can combat the rising tide of fake science

A crop circle in Switzerland. Jabberocky/Wikimedia Commons
Chris Impey, University of Arizona

Science gets a lot of respect these days. Unfortunately, it’s also getting a lot of competition from misinformation. Seven in 10 Americans think the benefits from science outweigh the harms, and nine in 10 think science and technology will create more opportunities for future generations. Scientists have made dramatic progress in understanding the universe and the mechanisms of biology, and advances in computation benefit all fields of science.

On the other hand, Americans are surrounded by a rising tide of misinformation and fake science. Take climate change. Scientists are in almost complete agreement that people are the primary cause of global warming. Yet polls show that a third of the public disagrees with this conclusion.

In my 30 years of studying and promoting scientific literacy, I’ve found that college-educated adults have large holes in their basic science knowledge and are disconcertingly susceptible to superstition and beliefs that aren’t based on any evidence. One way to counter this is to make it easier for people to detect pseudoscience online. To this end, my lab at the University of Arizona has developed an artificial intelligence-based pseudoscience detector that we plan to release free of charge as a web browser extension and smartphone app.

Americans’ predilection for fake science

Americans are prone to superstition and paranormal beliefs. An annual survey done by sociologists at Chapman University finds that more than half believe in spirits and the existence of ancient civilizations like Atlantis, and more than a third think that aliens have visited the Earth in the past or are visiting now. Over 75% hold multiple paranormal beliefs. The survey shows that these numbers have increased in recent years.

Widespread belief in astrology is a pet peeve of my colleagues in astronomy. It has long had a foothold in popular culture through horoscopes in newspapers and magazines, and it’s currently booming. Belief is strong even among the most educated. My surveys of college undergraduates show that three-quarters of them think astrology is very or “sort of” scientific, and only half of science majors recognize it as not at all scientific.

Allan Mazur, a sociologist at Syracuse University, has delved into the nature of irrational belief systems, their cultural roots, and their political impact. Conspiracy theories are, by definition, resistant to evidence or data that might prove them false. Some are at least amusing. Adherents of the flat Earth theory turn back the clock on two millennia of scientific progress. Interest in this bizarre idea has surged in the past five years, spurred by social media influencers and the echo chamber nature of web sites like Reddit. As with climate change denial, many come to this belief through YouTube videos.

However, the consequences of fake science are no laughing matter. In matters of health and climate change, misinformation can be a matter of life and death. Over a 90-day period spanning December, January and February, people liked, shared and commented on posts from sites containing false or misleading information about COVID-19 142 times more often than they engaged with information from the Centers for Disease Control and Prevention and the World Health Organization.

Combating fake science is an urgent priority. In a world that’s increasingly dependent on science and technology, civic society can only function when the electorate is well informed.

Educators must roll up their sleeves and do a better job of teaching critical thinking to young people. However, the problem goes beyond the classroom. The internet is the first source of science information for 80% of people ages 18 to 24.

One study found that a majority of a random sample of 200 YouTube videos on climate change denied that humans were responsible or claimed that it was a conspiracy. The videos peddling conspiracy theories got the most views. Another study found that a quarter of all tweets on climate were generated by bots and they preferentially amplified messages from climate change deniers.

Technology to the rescue?

The recent success of machine learning and AI in detecting fake news points the way to detecting fake science online. The key is neural net technology. Neural nets are loosely modeled on the human brain. They consist of many interconnected processing units that identify meaningful patterns in data such as words and images. Neural nets already permeate everyday life, particularly in natural language processing systems like Amazon’s Alexa and Google’s language translation service.

At the University of Arizona, we have trained neural nets on handpicked popular articles about climate change and biological evolution, and the neural nets are 90% successful in distinguishing wheat from chaff. With a quick scan of a site, our neural net can tell if its content is scientifically sound or climate-denial junk. After more refinement and testing we hope to have neural nets that can work across all domains of science.
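The article doesn’t describe the lab’s architecture, but a minimal text classifier of the general kind described can be sketched in a few lines; the scikit-learn pipeline, network size and toy training set below are illustrative assumptions, not the actual system.

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy labeled corpus: 1 = scientifically sound, 0 = pseudoscience.
articles = [
    "Multiple independent datasets show global surface temperatures rising.",
    "Scientists admit climate change is a hoax invented to raise taxes.",
]
labels = [1, 0]

model = make_pipeline(
    TfidfVectorizer(),                       # text -> weighted word vectors
    MLPClassifier(hidden_layer_sizes=(64,),  # one small hidden layer
                  max_iter=500, random_state=0),
)
model.fit(articles, labels)
print(model.predict(["A new article to score for scientific soundness."]))
```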

Neural net technology under development at the University of Arizona will flag science websites with a color code indicating their reliability (left). A smartphone app version will gamify the process of declaring science articles real or fake (right). Chris Impey, CC BY-ND

The goal is a web browser extension that would detect when the user is looking at science content and deduce whether it’s real or fake. If it’s misinformation, the tool will suggest a reliable website on that topic. My colleagues and I also plan to gamify the interface with a smartphone app that will let people compete with friends and relatives to detect fake science. Data from the best of these participants will be used to help train the neural net.

Sniffing out fake science should be easier than sniffing out fake news in general, because subjective opinion plays a minimal role in legitimate science, which is characterized by evidence, logic and verification. Experts can readily distinguish legitimate science from conspiracy theories and arguments motivated by ideology, which means machine learning systems can be trained to do so as well.

“Everyone is entitled to his own opinion, but not his own facts.” These words of Daniel Patrick Moynihan, advisor to four presidents, could be the mantra for those trying to keep science from being drowned by misinformation.


Chris Impey, University Distinguished Professor of Astronomy, University of Arizona

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Wednesday, February 12, 2020


Hackers could shut down satellites – or turn them into weapons

Two CubeSats, part of a constellation built and operated by Planet Labs Inc. to take images of Earth, were launched from the International Space Station on May 17, 2016. NASA
William Akoto, University of Denver

Last month, SpaceX became the operator of the world’s largest active satellite constellation. As of the end of January, the company had 242 satellites orbiting the planet, with plans to launch 42,000 over the next decade. This is part of its ambitious project to provide internet access across the globe. The race to put satellites in space is on, with Amazon, U.K.-based OneWeb and other companies champing at the bit to place thousands of satellites in orbit in the coming months.

These new satellites have the potential to revolutionize many aspects of everyday life – from bringing internet access to remote corners of the globe to monitoring the environment and improving global navigation systems. Amid all the fanfare, a critical danger has flown under the radar: the lack of cybersecurity standards and regulations for commercial satellites, in the U.S. and internationally. As a scholar who studies cyber conflict, I’m keenly aware that this, coupled with satellites’ complex supply chains and layers of stakeholders, leaves them highly vulnerable to cyberattacks.

If hackers were to take control of these satellites, the consequences could be dire. On the mundane end of the scale, hackers could simply shut satellites down, denying access to their services. Hackers could also jam or spoof the signals from satellites, creating havoc for critical infrastructure. This includes electric grids, water networks and transportation systems.

Some of these new satellites have thrusters that allow them to speed up, slow down and change direction in space. If hackers took control of these steerable satellites, the consequences could be catastrophic. Hackers could alter the satellites’ orbits and crash them into other satellites or even the International Space Station.

Commodity parts open a door

Makers of these satellites, particularly small CubeSats, use off-the-shelf technology to keep costs low. The wide availability of these components means hackers can analyze them for vulnerabilities. In addition, many of the components draw on open-source technology. The danger here is that hackers could insert back doors and other vulnerabilities into satellites’ software.

The highly technical nature of these satellites also means multiple manufacturers are involved in building the various components. The process of getting these satellites into space is also complicated, involving multiple companies. Even once they are in space, the organizations that own the satellites often outsource their day-to-day management to other companies. With each additional vendor, the vulnerabilities increase as hackers have multiple opportunities to infiltrate the system.

CubeSats are small, inexpensive satellites. Svobodat/Wikimedia Commons, CC BY

Hacking some of these CubeSats may be as simple as waiting for one of them to pass overhead and then sending malicious commands using specialized ground antennas. Hacking more sophisticated satellites might not be that hard either.

Satellites are typically controlled from ground stations. These stations run computers with software vulnerabilities that can be exploited by hackers. If hackers were to infiltrate these computers, they could send malicious commands to the satellites.

A history of hacks

This scenario played out in 1998 when hackers took control of the U.S.-German ROSAT X-ray satellite. They did it by hacking into computers at the Goddard Space Flight Center in Maryland. The hackers then instructed the satellite to aim its solar panels directly at the sun. This effectively fried its batteries and rendered the satellite useless. The defunct satellite eventually crashed back to Earth in 2011. Hackers could also hold satellites for ransom, as happened in 1999 when hackers took control of the U.K.‘s Skynet satellites.

Over the years, the threat of cyberattacks on satellites has gotten more dire. In 2008, hackers, possibly from China, reportedly took full control of two NASA satellites, one for about two minutes and the other for about nine minutes. In 2018, another group of Chinese state-backed hackers reportedly launched a sophisticated hacking campaign aimed at satellite operators and defense contractors. Iranian hacking groups have also attempted similar attacks.

Although the U.S. Department of Defense and National Security Agency have made some efforts to address space cybersecurity, the pace has been slow. There are currently no cybersecurity standards for satellites and no governing body to regulate and ensure their cybersecurity. Even if common standards could be developed, there are no mechanisms in place to enforce them. This means responsibility for satellite cybersecurity falls to the individual companies that build and operate them.

Market forces work against space cybersecurity

SpaceX, headquartered in Hawthorne, Calif., plans to launch 42,000 satellites over the next decade. Bruno Sanchez-Andrade Nuño/Wikimedia Commons, CC BY

As they compete to be the dominant satellite operator, SpaceX and rival companies are under increasing pressure to cut costs. There is also pressure to speed up development and production. This makes it tempting for the companies to cut corners in areas like cybersecurity that are secondary to actually getting these satellites in space.

Even for companies that make a high priority of cybersecurity, the costs associated with guaranteeing the security of each component could be prohibitive. This problem is even more acute for low-cost space missions, where the cost of ensuring cybersecurity could exceed the cost of the satellite itself.

To compound matters, the complex supply chain of these satellites and the multiple parties involved in their management means it’s often not clear who bears responsibility and liability for cyber breaches. This lack of clarity has bred complacency and hindered efforts to secure these important systems.

Regulation is required

Some analysts have begun to advocate for strong government involvement in the development and regulation of cybersecurity standards for satellites and other space assets. Congress could work to adopt a comprehensive regulatory framework for the commercial space sector. For instance, it could pass legislation that requires satellite manufacturers to develop a common cybersecurity architecture.

They could also mandate the reporting of all cyber breaches involving satellites. There also needs to be clarity on which space-based assets are deemed critical in order to prioritize cybersecurity efforts. Clear legal guidance on who bears responsibility for cyberattacks on satellites will also go a long way to ensuring that the responsible parties take the necessary measures to secure these systems.

Given the traditionally slow pace of congressional action, a multi-stakeholder approach involving public-private cooperation may be warranted to ensure cybersecurity standards. Whatever steps government and industry take, it is imperative to act now. It would be a profound mistake to wait for hackers to gain control of a commercial satellite and use it to threaten life, limb and property – here on Earth or in space – before addressing this issue.


William Akoto, Postdoctoral Research Fellow, University of Denver

This article is republished from The Conversation under a Creative Commons license. Read the original article.