Tuesday, April 21, 2020


Linking self-driving cars to traffic signals might help pedestrians give them the green light

An autonomous vehicle has no driver to communicate with about whether it’s safe to cross. Saklakova/iStock/Getty Images Plus
Lionel Peter Robert Jr., University of Michigan

The Research Brief is a short take on interesting academic work.

The big idea

Automated vehicles don’t have human operators to communicate their driving intentions to pedestrians at intersections. My team’s research on pedestrians’ perceptions of safety shows their trust of traffic lights tends to override their fear of self-driving cars. This suggests one way to help pedestrians trust and safely interact with autonomous vehicles may be to link the cars’ driving behavior to traffic lights.

In a recent study by my team at the University of Michigan, we focused on communication via a vehicle’s driving behavior to study how people might react to self-driving cars in different situations. We set up a virtual-reality simulator that let people experience street intersections and make choices about whether to cross the street. In different simulations, self-driving cars acted either more or less like an aggressive driver. In some cases there was a traffic light controlling the intersection.

In the more aggressive mode, the car would stop abruptly at the last possible second to let the pedestrian cross. In the less aggressive mode, it would begin braking earlier, indicating to pedestrians that it would stop for them. Aggressive driving reduced pedestrians’ trust in the autonomous vehicle and made them less likely to cross the street.

However, this was true only when there was no traffic light. When there was a light, pedestrians focused on the traffic light and usually crossed the street regardless of whether the car was driving aggressively. This indicates that pedestrians’ trust of traffic lights outweighs any concerns about how self-driving cars behave.

Why it matters

Introducing autonomous vehicles might be one way to make roads safer. Drivers and pedestrians often use nonverbal communication to negotiate safe passage at crosswalks, though, and cars without drivers can’t communicate in the same way. This could in turn make pedestrians and other road users less safe, especially since autonomous vehicles aren’t yet designed to communicate with systems that make streets safer, such as traffic lights.

Other research being done in the field

Some researchers have tried to find ways for self-driving cars to communicate with pedestrians. They have tried to use parts that cars already have, such as headlights, or add new ones, such as LED signs on the vehicle.

However, unless every car does it the same way, this strategy won’t work. For example, unless automakers agreed on how headlights should communicate certain messages or the government set rules, it would be impossible to make sure pedestrians understood the message. The same holds for new technology like LED message boards on cars. There would need to be a standard set of messages all pedestrians could understand without learning multiple systems.

Even if the vehicles communicated in the same way, several cars approaching an intersection and making independent decisions about stopping could cause confusion. Imagine three to five autonomous vehicles approaching a crosswalk, each displaying its own message. The pedestrian would need to read each of these messages, on moving cars, before deciding whether to cross.

What if all vehicles were communicating with the traffic lights ahead, even before they’re visible? elenabs/iStock/Getty Images Plus

What’s next

Our results suggest a better approach would be to have the car communicate directly with the traffic signal, for two reasons.

First, pedestrians already look to and understand current traffic lights.

Second, a car can tell what a traffic light is doing much sooner by checking in over a wireless network than by waiting until its camera can see the light.

This technology is still being developed, and scholars at Michigan’s Mcity mobility research center and elsewhere are studying problems like how to send and prioritize messages between cars and signals. It might effectively put self-driving cars under traffic lights’ control, with ways to adapt to current conditions. For example, a traffic light might tell approaching cars that it was about to turn red, giving them more time to stop. On a slippery road, a car might ask the light to stay green a few seconds longer so an abrupt stop isn’t necessary.
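To make the idea a bit more concrete, here is a minimal sketch in Python of what such an exchange might look like. The message fields, names and numbers are invented for illustration; they are not drawn from the Mcity work or from any real vehicle-to-infrastructure standard.

```python
# A hypothetical sketch of a vehicle-to-infrastructure (V2I) exchange.
# Message fields, names and numbers are invented for illustration and are not
# taken from any real standard or from the Mcity research described above.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SignalPhaseMessage:
    """Broadcast by the traffic light to approaching vehicles."""
    phase: str                   # "green", "yellow" or "red"
    seconds_until_change: float  # time remaining in the current phase

@dataclass
class VehicleRequest:
    """Sent by a vehicle back to the signal controller."""
    vehicle_id: str
    request: str                 # e.g. "extend_green"
    extra_seconds: float

def plan_approach(msg: SignalPhaseMessage, distance_m: float, speed_mps: float,
                  slippery: bool) -> Tuple[str, Optional[VehicleRequest]]:
    """Decide whether to brake early or ask the signal to hold its green phase."""
    time_to_intersection = distance_m / speed_mps
    if msg.phase == "green" and msg.seconds_until_change < time_to_intersection:
        if slippery:
            # Road is slippery: ask the light to stay green a little longer
            # instead of forcing an abrupt stop.
            shortfall = time_to_intersection - msg.seconds_until_change
            return "maintain speed", VehicleRequest("car-42", "extend_green", shortfall)
        return "begin gentle braking now", None
    return "maintain speed", None

# Example: the light reports it will change in 4 seconds, but the car is 12 seconds away.
print(plan_approach(SignalPhaseMessage("green", 4.0), distance_m=240, speed_mps=20, slippery=False))
```

A real deployment would also need authenticated radio links and agreement on message formats, which is part of what the researchers and standards bodies are still working out.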

To make this real, engineers and policymakers would need to work together on developing technologies and setting rules. Each would have to better understand what the other does. At the same time, they would need to understand that not every solution works in every region or society. For example, the best way for traffic lights and self-driving cars to communicate in Detroit might not work in Mumbai, where roads and driving practices are far different.


Lionel Peter Robert Jr., Associate Professor of Information, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Friday, March 13, 2020


How technology can combat the rising tide of fake science

A crop circle in Switzerland. Jabberocky/Wikimedia Commons
Chris Impey, University of Arizona

Science gets a lot of respect these days. Unfortunately, it’s also getting a lot of competition from misinformation. Seven in 10 Americans think the benefits from science outweigh the harms, and nine in 10 think science and technology will create more opportunities for future generations. Scientists have made dramatic progress in understanding the universe and the mechanisms of biology, and advances in computation benefit all fields of science.

On the other hand, Americans are surrounded by a rising tide of misinformation and fake science. Take climate change. Scientists are in almost complete agreement that people are the primary cause of global warming. Yet polls show that a third of the public disagrees with this conclusion.

In my 30 years of studying and promoting scientific literacy, I’ve found that college-educated adults have large holes in their basic science knowledge and that they’re disconcertingly susceptible to superstition and beliefs that aren’t based on any evidence. One way to counter this is to make it easier for people to detect pseudoscience online. To this end, my lab at the University of Arizona has developed an artificial intelligence-based pseudoscience detector that we plan to freely release as a web browser extension and smartphone app.

Americans’ predilection for fake science

Americans are prone to superstition and paranormal beliefs. An annual survey done by sociologists at Chapman University finds that more than half believe in spirits and the existence of ancient civilizations like Atlantis, and more than a third think that aliens have visited the Earth in the past or are visiting now. Over 75% hold multiple paranormal beliefs. The survey shows that these numbers have increased in recent years.

Widespread belief in astrology is a pet peeve of my colleagues in astronomy. It’s long had a foothold in the popular culture through horoscopes in newspapers and magazines but currently it’s booming. Belief is strong even among the most educated. My surveys of college undergraduates show that three-quarters of them think that astrology is very or “sort of” scientific and only half of science majors recognize it as not at all scientific.

Allan Mazur, a sociologist at Syracuse University, has delved into the nature of irrational belief systems, their cultural roots, and their political impact. Conspiracy theories are, by definition, resistant to evidence or data that might prove them false. Some are at least amusing. Adherents of the flat Earth theory turn back the clock on two millennia of scientific progress. Interest in this bizarre idea has surged in the past five years, spurred by social media influencers and the echo chamber nature of websites like Reddit. As with climate change denial, many come to this belief through YouTube videos.

However, the consequences of fake science are no laughing matter. In matters of health and climate change, misinformation can be a matter of life and death. Over a 90-day period spanning December, January and February, people liked, shared and commented on posts from sites containing false or misleading information about COVID-19 142 times more often than they engaged with information from the Centers for Disease Control and the World Health Organization.

Combating fake science is an urgent priority. In a world that’s increasingly dependent on science and technology, civic society can only function when the electorate is well informed.

Educators must roll up their sleeves and do a better job of teaching critical thinking to young people. However, the problem goes beyond the classroom. The internet is the first source of science information for 80% of people ages 18 to 24.

One study found that a majority of a random sample of 200 YouTube videos on climate change denied that humans were responsible or claimed that it was a conspiracy. The videos peddling conspiracy theories got the most views. Another study found that a quarter of all tweets on climate were generated by bots and they preferentially amplified messages from climate change deniers.

Technology to the rescue?

The recent success of machine learning and AI in detecting fake news points the way to detecting fake science online. The key is neural net technology. Neural nets are loosely modeled on the human brain. They consist of many interconnected computer processors that identify meaningful patterns in data like words and images. Neural nets already permeate everyday life, particularly in natural language processing systems like Amazon’s Alexa and Google’s language translation capability.

At the University of Arizona, we have trained neural nets on handpicked popular articles about climate change and biological evolution, and the neural nets are 90% successful in distinguishing wheat from chaff. With a quick scan of a site, our neural net can tell if its content is scientifically sound or climate-denial junk. After more refinement and testing we hope to have neural nets that can work across all domains of science.
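As a rough illustration of this kind of text classification – not the Arizona team’s actual model or data – the sketch below trains a tiny neural network with the open-source scikit-learn library on a few invented snippets and then labels a new sentence.

```python
# A toy sketch of the general approach: train a small neural network to label
# text as scientifically sound or misleading. The training snippets, labels and
# model settings are invented placeholders, not the University of Arizona dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

texts = [
    "Multiple independent datasets show global temperatures rising since 1900.",
    "Climate change is a hoax invented to raise taxes.",
    "Peer-reviewed studies link CO2 emissions to ocean acidification.",
    "Scientists secretly admit the planet is actually cooling.",
]
labels = ["sound", "junk", "sound", "junk"]

# Turn each snippet into word-frequency features, then feed them to a small
# neural network (a multi-layer perceptron).
model = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(texts, labels)

print(model.predict(["Satellite records confirm a warming trend over recent decades."]))
```

A production system would be trained on thousands of labeled articles and evaluated on held-out examples before any accuracy claim like the 90% figure above could be made.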

Neural net technology under development at the University of Arizona will flag science websites with a color code indicating their reliability (left). A smartphone app version will gamify the process of declaring science articles real or fake (right). Chris Impey, CC BY-ND

The goal is a web browser extension that would detect when the user is looking at science content and determine whether it’s real or fake. If it’s misinformation, the tool will suggest a reliable website on that topic. My colleagues and I also plan to gamify the interface with a smartphone app that will let people compete with their friends and relatives to detect fake science. Data from the best of these participants will be used to help train the neural net.

Sniffing out fake science should be easier than sniffing out fake news in general, because subjective opinion plays a minimal role in legitimate science, which is characterized by evidence, logic and verification. Experts can readily distinguish legitimate science from conspiracy theories and arguments motivated by ideology, which means machine learning systems can be trained to, as well.

“Everyone is entitled to his own opinion, but not his own facts.” These words of Daniel Patrick Moynihan, advisor to four presidents, could be the mantra for those trying to keep science from being drowned by misinformation.


Chris Impey, University Distinguished Professor of Astronomy, University of Arizona

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Wednesday, February 12, 2020


Hackers could shut down satellites – or turn them into weapons

Two CubeSats, part of a constellation built and operated by Planet Labs Inc. to take images of Earth, were launched from the International Space Station on May 17, 2016. NASA
William Akoto, University of Denver

Last month, SpaceX became the operator of the world’s largest active satellite constellation. As of the end of January, the company had 242 satellites orbiting the planet with plans to launch 42,000 over the next decade. This is part of its ambitious project to provide internet access across the globe. The race to put satellites in space is on, with Amazon, U.K.-based OneWeb and other companies chomping at the bit to place thousands of satellites in orbit in the coming months.

These new satellites have the potential to revolutionize many aspects of everyday life – from bringing internet access to remote corners of the globe to monitoring the environment and improving global navigation systems. Amid all the fanfare, a critical danger has flown under the radar: the lack of cybersecurity standards and regulations for commercial satellites, in the U.S. and internationally. As a scholar who studies cyber conflict, I’m keenly aware that this, coupled with satellites’ complex supply chains and layers of stakeholders, leaves them highly vulnerable to cyberattacks.

If hackers were to take control of these satellites, the consequences could be dire. On the mundane end of the scale, hackers could simply shut satellites down, denying access to their services. Hackers could also jam or spoof the signals from satellites, creating havoc for critical infrastructure. This includes electric grids, water networks and transportation systems.

Some of these new satellites have thrusters that allow them to speed up, slow down and change direction in space. If hackers took control of these steerable satellites, the consequences could be catastrophic. Hackers could alter the satellites’ orbits and crash them into other satellites or even the International Space Station.

Commodity parts open a door

Makers of these satellites, particularly small CubeSats, use off-the-shelf technology to keep costs low. The wide availability of these components means hackers can analyze them for vulnerabilities. In addition, many of the components draw on open-source technology. The danger here is that hackers could insert back doors and other vulnerabilities into satellites’ software.

The highly technical nature of these satellites also means multiple manufacturers are involved in building the various components. The process of getting these satellites into space is also complicated, involving multiple companies. Even once they are in space, the organizations that own the satellites often outsource their day-to-day management to other companies. With each additional vendor, the vulnerabilities increase as hackers have multiple opportunities to infiltrate the system.

CubeSats are small, inexpensive satellites. Svobodat/Wikimedia Commons, CC BY

Hacking some of these CubeSats may be as simple as waiting for one of them to pass overhead and then sending malicious commands using specialized ground antennas. Hacking more sophisticated satellites might not be that hard either.

Satellites are typically controlled from ground stations. These stations run computers with software vulnerabilities that can be exploited by hackers. If hackers were to infiltrate these computers, they could send malicious commands to the satellites.

A history of hacks

This scenario played out in 1998 when hackers took control of the U.S.-German ROSAT X-Ray satellite. They did it by hacking into computers at the Goddard Space Flight Center in Maryland. The hackers then instructed the satellite to aim its solar panels directly at the sun. This effectively fried its batteries and rendered the satellite useless. The defunct satellite eventually crashed back to Earth in 2011. Hackers could also hold satellites for ransom, as happened in 1999 when hackers took control of the U.K.’s SkyNet satellites.

Over the years, the threat of cyberattacks on satellites has gotten more dire. In 2008, hackers, possibly from China, reportedly took full control of two NASA satellites, one for about two minutes and the other for about nine minutes. In 2018, another group of Chinese state-backed hackers reportedly launched a sophisticated hacking campaign aimed at satellite operators and defense contractors. Iranian hacking groups have also attempted similar attacks.

Although the U.S. Department of Defense and National Security Agency have made some efforts to address space cybersecurity, the pace has been slow. There are currently no cybersecurity standards for satellites and no governing body to regulate and ensure their cybersecurity. Even if common standards could be developed, there are no mechanisms in place to enforce them. This means responsibility for satellite cybersecurity falls to the individual companies that build and operate them.

Market forces work against space cybersecurity

SpaceX, headquartered in Hawthorne, Calif., plans to launch 42,000 satellites over the next decade. Bruno Sanchez-Andrade Nuño/Wikimedia Commons, CC BY

As they compete to be the dominant satellite operator, SpaceX and rival companies are under increasing pressure to cut costs. There is also pressure to speed up development and production. This makes it tempting for the companies to cut corners in areas like cybersecurity that are secondary to actually getting these satellites in space.

Even for companies that make a high priority of cybersecurity, the costs associated with guaranteeing the security of each component could be prohibitive. This problem is even more acute for low-cost space missions, where the cost of ensuring cybersecurity could exceed the cost of the satellite itself.

To compound matters, the complex supply chain of these satellites and the multiple parties involved in their management means it’s often not clear who bears responsibility and liability for cyber breaches. This lack of clarity has bred complacency and hindered efforts to secure these important systems.

Regulation is required

Some analysts have begun to advocate for strong government involvement in the development and regulation of cybersecurity standards for satellites and other space assets. Congress could work to adopt a comprehensive regulatory framework for the commercial space sector. For instance, lawmakers could pass legislation that requires satellite manufacturers to develop a common cybersecurity architecture.

They could also mandate the reporting of all cyber breaches involving satellites. There also needs to be clarity on which space-based assets are deemed critical in order to prioritize cybersecurity efforts. Clear legal guidance on who bears responsibility for cyberattacks on satellites will also go a long way to ensuring that the responsible parties take the necessary measures to secure these systems.

Given the traditionally slow pace of congressional action, a multi-stakeholder approach involving public-private cooperation may be warranted to ensure cybersecurity standards. Whatever steps government and industry take, it is imperative to act now. It would be a profound mistake to wait for hackers to gain control of a commercial satellite and use it to threaten life, limb and property – here on Earth or in space – before addressing this issue.


William Akoto, Postdoctoral Research Fellow, University of Denver

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Monday, February 10, 2020


AI could constantly scan the internet for data privacy violations, a quicker, easier way to enforce compliance

You leave bits of your personal data behind online, and companies are happy to trade in them. metamorworks/ iStock/Getty Images Plus
Karuna Pande Joshi, University of Maryland, Baltimore County

You’re trailing bits of personal data – such as credit card numbers, shopping preferences and which news articles you read – as you travel around the internet. Large internet companies make money off this kind of personal information by sharing it with their subsidiaries and third parties. Public concern over online privacy has led to laws designed to control who gets that data and how they can use it.

The battle is ongoing. Democrats in the U.S. Senate recently introduced a bill that includes penalties for tech companies that mishandle users’ personal data. That law would join a long list of rules and regulations worldwide, including the Payment Card Industry Data Security Standard that regulates online credit card transactions, the European Union’s General Data Protection Regulation, the California Consumer Privacy Act that went into effect in January, and the U.S. Children’s Online Privacy Protection Act.

Internet companies must adhere to these regulations or risk expensive lawsuits or government sanctions, such as the Federal Trade Commission’s recent US$5 billion fine imposed on Facebook.

But it is technically challenging to determine in real time whether a privacy violation has occurred, an issue that is becoming even more problematic as internet data moves to extreme scale. To make sure their systems comply, companies rely on human experts to interpret the laws – a complex and time-consuming task for organizations that constantly launch and update services.

My research group at the University of Maryland, Baltimore County, has developed novel technologies for machines to understand data privacy laws and enforce compliance with them using artificial intelligence. These technologies will enable companies to make sure their services comply with privacy laws and also help governments identify in real time those companies that violate consumers’ privacy rights.

Before machines can search for privacy violations, they need to understand the rules. Imilian/iStock/Getty Images Plus

Helping machines understand regulations

Governments generate online privacy regulations as plain text documents that are easy for humans to read but difficult for machines to interpret. As a result, the regulations need to be manually examined to ensure that no rules are being broken when a citizen’s private data is analyzed or shared. This affects companies that now have to comply with a forest of regulations.

Rules and regulations often are ambiguous by design because societies want flexibility in implementing them. Subjective concepts such as good and bad vary among cultures and over time, so laws are drafted in general or vague terms to allow scope for future modifications. Machines can’t process this vagueness – they operate in 1s and 0s – so they cannot “understand” privacy the way humans do. Machines need specific instructions to understand the knowledge on which a regulation is based.

One way to help machines understand an abstract concept is by building an ontology, or a graph representing the knowledge of that concept. Borrowing the concept of an ontology from philosophy, AI researchers have developed new computer languages, such as OWL, the Web Ontology Language. These languages can define concepts and categories in a subject area or domain, show their properties and show the relations among them. Ontologies are sometimes called “knowledge graphs,” because they are stored in graph-like structures.

An example of a simple knowledge graph. Karuna Pande Joshi, CC BY-ND
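For readers who want to see what a knowledge graph looks like in code, here is a minimal sketch built with the open-source rdflib library. The entities and properties are simplified illustrations, not the researchers’ actual ontology; the only regulatory detail included is GDPR’s real 72-hour breach-notification deadline.

```python
# A minimal sketch of a knowledge graph, built with the open-source rdflib
# library. The entities and properties below are simplified illustrations,
# not the actual privacy-law ontology developed by the researchers.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/privacy#")

g = Graph()
g.bind("ex", EX)

# Nodes (concepts) and edges (relations) of the graph, stored as triples.
g.add((EX.Provider, RDF.type, EX.Entity))
g.add((EX.Consumer, RDF.type, EX.Entity))
g.add((EX.Provider, EX.hasObligation, EX.ReportDataBreach))
g.add((EX.ReportDataBreach, EX.deadlineHours, Literal(72)))  # GDPR's 72-hour breach notification

for subject, predicate, obj in g:
    print(subject, predicate, obj)
```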

When my colleagues and I began looking at the challenge of making privacy regulations understandable by machines, we determined that the first step would be to capture all the key knowledge in these laws and create knowledge graphs to store it.

Extracting the terms and rules

The key knowledge in the regulations consists of three parts.

First, there are “terms of art”: words or phrases that have precise definitions within a law. They help to identify the entity that the regulation describes and allow us to describe its roles and responsibilities in a language that computers can understand. For example, from the EU’s General Data Protection Regulation, we extracted terms of art such as “Consumers and Providers” and “Fines and Enforcement.”

Next, we identified deontic rules: sentences or phrases that express what actors may, must or must not do. These are the subject of deontic logic, a branch of philosophical modal logic concerned with duty and permission. Deontic (or moral) rules mainly fall into four categories. “Permissions” define the rights of an entity/actor. “Obligations” define the responsibilities of an entity/actor. “Prohibitions” are conditions or actions that are not allowed. “Dispensations” are optional or nonmandatory statements.

The researchers’ application automatically extracted Deontic rules, such as permissions and obligations, from two privacy regulations. Entities involved in the rules are highlighted in yellow. Modal words that help identify whether a rule is a permission, prohibition or obligation are highlighted in blue. Gray indicates the temporal or time-based aspect of the rule. Karuna Pande Joshi, CC BY-ND

To explain this with a simple example, consider the following:

  • You have permission to drive.

  • But to drive, you are obligated to get a driver’s license.

  • You are prohibited from speeding (and will be punished if you do so).

  • You can park in areas where you have the dispensation to do so (such as paid parking, metered parking or open areas not near a fire hydrant).

Some of these rules apply to everyone uniformly in all conditions, while others may apply only partially, to a single entity, or under conditions agreed to by everyone.

Similar rules that describe do’s and don'ts apply to online personal data. There are permissions and prohibitions to prevent data breaches. There are obligations on the companies storing the data to ensure its safety. And there are dispensations made for vulnerable demographics such as minors.

A knowledge graph for GDPR regulations. Karuna Pande Joshi, CC BY-ND

My group developed techniques to automatically extract these rules from the regulations and save them in a knowledge graph.
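Here is one hypothetical way such extracted rules could be represented and checked in code. The field names, rule texts and conditions are invented examples rather than the output of the group’s actual pipeline.

```python
# A hypothetical sketch of how extracted deontic rules might be represented
# and checked. Field names, rule texts and conditions are invented examples,
# not the output of the researchers' extraction pipeline.
from dataclasses import dataclass

@dataclass
class DeonticRule:
    category: str        # "permission", "obligation", "prohibition" or "dispensation"
    actor: str           # the entity the rule applies to
    action: str          # the regulated action
    condition: str = ""  # optional qualifier, e.g. "without parental consent"

rules = [
    DeonticRule("obligation", "provider", "report_data_breach", "within 72 hours"),
    DeonticRule("prohibition", "provider", "process_child_data", "without parental consent"),
    DeonticRule("permission", "consumer", "request_data_deletion"),
]

def prohibitions_for(actor: str, action: str):
    """Return any prohibitions matching an actor and action (their conditions still need review)."""
    return [r for r in rules
            if r.category == "prohibition" and r.actor == actor and r.action == action]

print(prohibitions_for("provider", "process_child_data"))
```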

Third, we had to figure out how to include the cross-references that legal regulations often use to point to text in another section of the regulation or in a separate document. These are important knowledge elements that should also be stored in the knowledge graph.

Rules in place, scanning for compliance

After defining all the key entities, properties, relations, rules and policies of a data privacy law in a knowledge graph, my colleagues and I can create applications that reason about the data privacy rules using these knowledge graphs.

These applications can significantly reduce the time it will take companies to determine whether they are complying with the data protection regulations. They can also help regulators monitor data audit trails to determine whether companies they oversee are complying with the rules.
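For instance, once obligations are stored as triples, a compliance application could query them directly. The short, self-contained sketch below uses SPARQL through rdflib, with invented property names, to list every obligation recorded for a provider.

```python
# A self-contained sketch of querying a compliance knowledge graph with SPARQL
# via rdflib. The graph contents and property names are illustrative assumptions.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/privacy#")
g = Graph()
g.add((EX.Provider, EX.hasObligation, EX.ReportDataBreach))
g.add((EX.Provider, EX.hasObligation, EX.ObtainConsent))

results = g.query("""
    PREFIX ex: <http://example.org/privacy#>
    SELECT ?duty WHERE { ex:Provider ex:hasObligation ?duty . }
""")
for row in results:
    print(row.duty)  # prints each obligation recorded for the provider
```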

This technology can also help individuals get a quick snapshot of their rights and responsibilities with respect to the private data they share with companies. Once machines can quickly interpret long, complex privacy policies, people will be able to automate many mundane compliance activities that are done manually today. They may also be able to make those policies more understandable to consumers.

Karuna Pande Joshi, Assistant Professor of Information Systems, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Monday, February 3, 2020


How old should kids be to get phones?

Every kid should have their own cell phone. Or should they? Syda Productions/Shutterstock.com
Fashina Aladé, Michigan State University

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to curiouskidsus@theconversation.com.


What age should kids get phones? – Yuvi, age 10, Dayton, Ohio


If it seems like all your friends have smartphones, you may be on to something. A new report by Common Sense Media, a nonprofit that reports on technology and media for children, found that by the age of 11, more than half of kids in the U.S. have their own smartphone. By age 12, more than two-thirds do, and by 14, teens are just as likely as adults to own a smartphone. Some kids start much younger. Nearly 20% of 8-year-olds have their own smartphone!

So, what’s the right age for you? Well, I study the effects of media and technology on kids, and I’m here to say that there is no single right answer to this question. The best I can offer is this: When both you and your parents feel the time is right.

How to talk to your parents about a smartphone

Smartphones can help you stay in touch with your people. Tony Stock/Shutterstock.com

Here are some points to consider to help you and your parents make this decision.

Responsibility: Have you shown that you are generally responsible? Do you keep track of important belongings? Do you understand the value of money, and can you save up to buy things you want? These are all good signs that you may be ready for a phone. If not, it might be wise to wait a bit longer.

Safety: Do you travel to or from school or after-school activities without an adult? This is when phones often go from a “want” to a “need.” Sometimes parents report that they feel better knowing they can reach their children directly, and that their kids can reach them, too.

Social maturity: Do you treat your friends with kindness and respect? Do you understand the permanence of the internet, the fact that once something goes out onto the web, it can never truly be deleted? It is critically important that you have a grasp on these issues before you own a smartphone.

We all sometimes get angry and say hurtful things we don’t mean. But when you post something on the internet that you might not mean later, or might wish you could take back, even on a so-called anonymous app, it can have real and lasting harmful effects. In the era of smartphones, there have been huge increases in cyberbullying.

How will you use your smartphone? Twin Design/Shutterstock.com

Being smart about your smartphone

If you and your parents decide this is a good time to take that step, here are some tips to create a healthy relationship between you and your phone.

Parents should model good behavior! Your parents are the No. 1 most important influence in your life, and that goes for technology use as much as anything else. If parents are glued to their phones all day, guess what? Their children probably will be, too.

On the flip side, if parents model smartphone habits like putting the phone away during meals and not texting and driving, that will go a long way toward helping kids develop similar healthy behaviors.

Are you ready? 1shostak/Shutterstock.com

You and your parents should talk together about the importance of setting rules and limits around your phone use and screen time. Understanding why rules are made and set in place can help kids stick to a system.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

Fashina Aladé, Assistant Professor, Advertising and Public Relations, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.