Monday, May 18, 2020


How the Hubble Space Telescope opened our eyes to the first galaxies of the universe

The launch of the Hubble Space Telescope on April 24, 1990. This photo captures the first time that there were shuttles on both pads 39A and 39B. NASA
Rodger I. Thompson, University of Arizona

The Hubble Space Telescope launched on April 24, 1990, 30 years ago. That’s an impressive milestone, especially as its expected lifespan was just 10 years.

One of the primary reasons for the Hubble telescope’s longevity is that it can be serviced and improved with new observational instruments through Space Shuttle visits.

When Hubble, or HST, first launched, its instruments could observe ultraviolet light, with wavelengths shorter than the eye can see, as well as optical light, the wavelengths visible to humans. A maintenance mission in 1997 added an instrument to observe near infrared light, whose wavelengths are longer than people can see. Hubble’s new infrared eyes provided two major new capabilities: the ability to see farther into space than before and to peer deeper into the dusty regions of star formation.

I am an astrophysicist at the University of Arizona who has used near infrared observations to better understand how the universe works, from star formation to cosmology. Some 35 years ago, I was given the chance to build a near infrared camera and spectrometer for Hubble. It was the chance of a lifetime. The camera my team designed and developed has changed the way humans see and understand the universe. The instrument was built at Ball Aerospace in Boulder, Colorado, under our direction.

The light we can see with our eyes is part of a range of radiation known as the electromagnetic spectrum. Shorter wavelengths of light are higher energy, and longer wavelengths of light are lower energy. The Hubble Space Telescope sees primarily visible light (indicated here by the rainbow), as well as some infrared and ultraviolet radiation. NASA/JHUAPL/SwRI

Seeing further and earlier

Edwin Hubble, HST’s namesake, discovered in the 1920s that the universe is expanding and that the light from distant galaxies is shifted to longer, redder wavelengths, a phenomenon called redshift. The greater the distance, the larger the shift. This is because the farther away an object is, the longer its light takes to reach us here on Earth and the more the universe has expanded in that time.
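To make the idea concrete, the shift can be written as a simple relation: the observed wavelength equals (1 + z) times the emitted wavelength, where z is the redshift. Here is a minimal sketch in Python; the redshift value is a hypothetical example, not a measurement from this article.

```python
# Illustrative sketch of how redshift moves ultraviolet light into the infrared.
# The redshift value below is an assumed example, not a measurement from the article.

def observed_wavelength_nm(emitted_nm: float, z: float) -> float:
    """Wavelength seen on Earth for light emitted at emitted_nm by a galaxy
    at redshift z: lambda_observed = (1 + z) * lambda_emitted."""
    return (1.0 + z) * emitted_nm

# Hydrogen's Lyman-alpha line is emitted in the ultraviolet at about 121.6 nm.
# For a hypothetical galaxy at redshift z = 7, it arrives in the near infrared.
print(observed_wavelength_nm(121.6, 7.0))  # ~972.8 nm, longer than the eye can see
```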

Hubble’s ultraviolet and optical instruments had taken images of the most distant galaxies ever seen, in a region known as the Northern Hubble Deep Field, or NHDF, released in 1996. These images, however, had reached their distance limit: the redshift had moved all of the light of the most distant galaxies out of the visible and into the infrared.

One of the new instruments added to Hubble in the second maintenance mission has the awkward name of Near Infrared Camera and Multi-Object Spectrometer, or NICMOS, pronounced “Nick Moss.” The near infrared cameras on NICMOS observed regions of the NHDF and discovered even more distant galaxies, with all of their light in the near infrared.

A typical image taken with NICMOS. It shows a gigantic star cluster in the center of our Milky Way. NICMOS, thanks to its infrared capabilities, is able to look through the heavy clouds of dust and gas in these central regions. NASA/JHUAPL/SwRI

Astronomers have the privilege of watching events as they happened in the past, across an interval they call the “lookback time.” Our best measurement of the age of the universe is 13.7 billion years. The distance that light travels in one year is called a light year. The most distant galaxies observed by NICMOS were at a distance of almost 13 billion light years. This means the light that NICMOS detected had been traveling for 13 billion years, showing what those galaxies looked like 13 billion years ago, when the universe was only about 5% of its current age. These were some of the first galaxies ever created, and they were forming new stars at rates more than a thousand times the rate at which most galaxies form stars in the current universe.
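The 5% figure comes from simple subtraction and division, using the round numbers quoted above; a quick sketch:

```python
# Back-of-the-envelope check of the lookback-time figures quoted above.
age_of_universe_gyr = 13.7      # billions of years
light_travel_time_gyr = 13.0    # how long the NICMOS galaxies' light traveled

age_when_light_left_gyr = age_of_universe_gyr - light_travel_time_gyr
fraction_of_current_age = age_when_light_left_gyr / age_of_universe_gyr

print(round(age_when_light_left_gyr, 1))   # 0.7 billion years old
print(f"{fraction_of_current_age:.0%}")    # about 5% of its current age
```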

Hidden by dust

Although astronomers have studied star formation for decades, many questions remain. Part of the problem is that most stars are formed in clouds of molecules and dust. The dust absorbs the ultraviolet and most of the optical light emitted by forming stars, making it difficult for Hubble’s ultraviolet and optical instruments to study the process.

The longer, or redder, the wavelength of the light, the less is absorbed. That is why sunsets, where the light must pass through long lengths of dusty air, appear red.

The near infrared, however, has an even easier time passing through dust than red optical light does. NICMOS can look into star-forming regions with Hubble’s superior image quality to determine exactly where star formation occurs. A good example is the iconic Hubble image of the Eagle Nebula, also known as the Pillars of Creation.

The optical image shows majestic pillars that appear to host star formation throughout a large volume of space. The NICMOS image, however, shows a different picture: most of the pillars are transparent, with no star formation. Stars are being formed only at the tips of the pillars. The optical pillars are mostly just dust reflecting the light of a group of nearby stars.

The Eagle Nebula in visible light. NASA, ESA and the Hubble Heritage Team (STScI/AURA)
In this Hubble Space Telescope image is the Eagle Nebula’s Pillars of Creation. Here, the pillars are seen in infrared light, which pierces through obscuring dust and gas and unveils a more unfamiliar — but just as amazing — view of the pillars. NASA, ESA/Hubble and the Hubble Heritage Team

The dawning of the age of infrared

When NICMOS was added to HST in 1997, NASA had no plans for a future infrared space mission. That rapidly changed as the results from NICMOS became apparent. Based on the data from NICMOS, scientists learned that fully formed galaxies existed in the universe much earlier than expected. The NICMOS images also confirmed that the expansion of the universe is accelerating rather than slowing down, as previously thought. The NHDF infrared images were followed by the Hubble Ultra Deep Field images in 2005, which further showed the power of near infrared imaging of distant young galaxies. So NASA decided to invest in the James Webb Space Telescope, or JWST, a telescope much larger than HST and completely dedicated to infrared observations.

On Hubble, a near infrared imager was added to the third version of the Wide Field Camera, which was installed in May 2009. This camera used an improved version of the NICMOS detector arrays, with more sensitivity and a wider field of view. The James Webb Space Telescope has much larger versions of the NICMOS detector arrays, with broader wavelength coverage than their predecessors.

The James Webb Space Telescope, scheduled to be launched in March 2021, and the Wide Field Infrared Survey Telescope that will follow it form the bulk of NASA’s future space astronomy missions. These programs were all spawned by the near infrared observations made by HST, and they were enabled by the original investment in a near infrared camera and spectrometer to give Hubble its infrared eyes. With the James Webb Space Telescope, astronomers expect to see the very first galaxies that formed in the universe.


Rodger I. Thompson, Professor of Astronomy, University of Arizona

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, May 17, 2020


The lack of women in cybersecurity leaves the online world at greater risk

Women bring a much-needed change in perspective to cybersecurity. Maskot/Maskot via Getty Images
Nir Kshetri, University of North Carolina – Greensboro

Women are highly underrepresented in the field of cybersecurity. In 2017, women’s share in the U.S. cybersecurity field was 14%, compared to 48% in the general workforce.

The problem is more acute outside the U.S. In 2018, women accounted for 10% of the cybersecurity workforce in the Asia-Pacific region, 9% in Africa, 8% in Latin America, 7% in Europe and 5% in the Middle East.

Women are even less well represented in the upper echelons of security leadership. Only 1% of female internet security workers are in senior management positions.

I study online crime and security issues facing consumers, organizations and nations. In my research, I have found that internet security requires strategies beyond technical solutions. Women’s representation is important because women tend to offer viewpoints and perspectives that are different from men’s, and these underrepresented perspectives are critical in addressing cyber risks.

Perception, awareness and bias

The low representation of women in internet security is linked to the broader problem of their low representation in the science, technology, engineering and mathematics fields. Only 30% of scientists and engineers in the U.S. are women.

The societal view is that internet security is a job that men do, though there is nothing inherent in gender that predisposes men to be more interested in or more adept at cybersecurity. In addition, the industry mistakenly gives potential employees the impression that only technical skills matter in cybersecurity, which can give women the impression that the field is overly technical or even boring.

Women are also generally not presented with opportunities in information technology fields. In a survey of women pursuing careers outside of IT, 69% indicated that the main reason they didn’t pursue opportunities in IT was that they were unaware of them.

Organizations often fail to try to recruit women to work in cybersecurity. According to a survey conducted by IT security company Tessian, only about half of the respondents said that their organizations were doing enough to recruit women into cybersecurity roles.

Gender bias in job ads further discourages women from applying. Online cybersecurity job ads often lack gender-neutral language.

Good security and good business

Boosting women’s involvement in information security makes both security and business sense. Female leaders in this area tend to prioritize important areas that males often overlook. This is partly due to their backgrounds. Forty-four percent of women in information security fields have degrees in business and social sciences, compared to 30% of men.

Female internet security professionals put a higher priority on internal training and education in security and risk management. Women are also stronger advocates for online training, which is a flexible, low-cost way of increasing employees’ awareness of security issues.

Female internet security professionals are also adept at selecting partner organizations to develop secure software. Women tend to pay more attention to partner organizations’ qualifications and personnel, and they assess partners’ ability to meet contractual obligations. They also prefer partners that are willing to perform independent security tests.

Increasing women’s participation in cybersecurity is a business issue as well as a gender issue. According to an Ernst & Young report, by 2028 women will control 75% of discretionary consumer spending worldwide. Security considerations like encryption, fraud detection and biometrics are becoming important in consumers’ buying decisions. Product designs require a trade-off between cybersecurity and usability. Female cybersecurity professionals can make better-informed decisions about such trade-offs for products that are targeted at female customers.

Attracting women to cybersecurity

Attracting more women to cybersecurity requires governments, nonprofit organizations, professional and trade associations and the private sector to work together. Public-private partnership projects could help solve the problem in the long run.

A computer science teacher, center, helps fifth grade students learn programming. AP Photo/Elaine Thompson

One example is Israel’s Shift community, previously known as the CyberGirlz program, which is jointly financed by the country’s Defense Ministry, the Rashi Foundation and Start-Up Nation Central. It identifies high school girls with the aptitude, desire and natural curiosity to learn IT and helps them develop those skills.

The girls participate in hackathons and training programs, and they get advice, guidance and support from female mentors. Some of the mentors are from elite technology units of the country’s military. The participants learn hacking skills, network analysis and the Python programming language. They also practice simulating cyberattacks to find potential vulnerabilities. By 2018, about 2,000 girls had participated in the CyberGirlz Club and the CyberGirlz Community.

In 2017, cybersecurity firm Palo Alto Networks teamed up with the Girl Scouts of the USA to develop cybersecurity badges. The goal is to foster cybersecurity knowledge and develop interest in the profession. The curriculum includes the basics of computer networks, cyberattacks and online safety.

Professional associations can also foster interest in cybersecurity and help women develop relevant knowledge. For example, Women in Cybersecurity of Spain has started a mentoring program that supports female cybersecurity professionals early in their careers.

Some industry groups have collaborated with big companies. In 2018, Microsoft India and the Data Security Council of India launched the CyberShikshaa program in order to create a pool of skilled female cybersecurity professionals.

Some technology companies have launched programs to foster women’s interest in and confidence to pursue internet security careers. One example is IBM Security’s Women in Security Excelling program, formed in 2015.

Attracting more women to the cybersecurity field requires a range of efforts. Cybersecurity job ads should be written so that female professionals feel welcome to apply. Recruitment efforts should focus on academic institutions with high female enrollment. Corporations should ensure that female employees see cybersecurity as a good option for internal career changes. And governments should work with the private sector and academic institutions to get young girls interested in cybersecurity.

Increasing women’s participation in cybersecurity is good for women, good for business and good for society.


Nir Kshetri, Professor of Management, University of North Carolina – Greensboro

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Saturday, May 16, 2020


Robots are playing many roles in the coronavirus crisis – and offering lessons for future disasters

A nurse (left) operates a robot used to interact remotely with coronavirus patients while a physician looks on. MIGUEL MEDINA/AFP via Getty Images
Robin R. Murphy, Texas A&M University; Justin Adams, Florida State University, and Vignesh Babu Manjunath Gandudi, Texas A&M University

A cylindrical robot rolls into a treatment room to allow health care workers to remotely take temperatures and measure blood pressure and oxygen saturation of patients hooked up to a ventilator. Another robot, which looks like a pair of large fluorescent lights rotated vertically, travels throughout a hospital disinfecting with ultraviolet light. Meanwhile, a cart-like robot brings food to people quarantined in a 16-story hotel. Outside, quadcopter drones ferry test samples to laboratories and watch for violations of stay-at-home restrictions.

These are just a few of the two dozen ways robots have been used during the COVID-19 pandemic, ranging from health care in and out of hospitals and automation of testing to support for public safety and public works and the continuation of daily work and life.

The lessons they’re teaching for the future are the same lessons learned at previous disasters but quickly forgotten as interest and funding faded. The best robots for a disaster are the robots, like those in these examples, that already exist in the health care and public safety sectors.

Research laboratories and startups are creating new robots, including one designed to allow health care workers to remotely take blood samples and perform mouth swabs. These prototypes are unlikely to make a difference now. However, the robots under development could make a difference in future disasters if momentum for robotics research continues.

Robots around the world

As roboticists at Texas A&M University and the Center for Robot-Assisted Search and Rescue, we examined over 120 press and social media reports from China, the U.S. and 19 other countries about how robots are being used during the COVID-19 pandemic. We found that ground and aerial robots are playing a notable role in almost every aspect of managing the crisis.

R. Murphy, V. Gandudi, Texas A&M; J. Adams, Center for Robot-Assisted Search and Rescue, CC BY-ND

In hospitals, doctors and nurses, family members and even receptionists are using robots to interact in real time with patients from a safe distance. Specialized robots are disinfecting rooms and delivering meals or prescriptions, handling the hidden extra work associated with a surge in patients. Delivery robots are transporting infectious samples to laboratories for testing.

Outside of hospitals, public works and public safety departments are using robots to spray disinfectant throughout public spaces. Drones are providing thermal imagery to help identify infected citizens and enforce quarantines and social distancing restrictions. Robots are even rolling through crowds, broadcasting public service messages about the virus and social distancing.

At work and home, robots are assisting in surprising ways. Realtors are teleoperating robots to show properties from the safety of their own homes. Workers building a new hospital in China were able to work through the night because drones carried lighting. In Japan, students used robots to walk the stage for graduation, and in Cyprus, a person used a drone to walk his dog without violating stay-at-home restrictions.

Helping workers, not replacing them

Every disaster is different, but the experience of using robots during the COVID-19 pandemic presents an opportunity to finally learn three lessons documented over the past 20 years. One important lesson is that during a disaster, robots do not replace people. They either perform tasks that a person could not do, or could not do safely, or take on tasks that free up responders to handle the increased workload.

The majority of robots being used in hospitals treating COVID-19 patients have not replaced health care professionals. These robots are teleoperated, enabling the health care workers to apply their expertise and compassion to sick and isolated patients remotely.

A robot uses pulses of ultraviolet light to disinfect a hospital room in Johannesburg, South Africa. MICHELE SPATARI/AFP via Getty Images

A small number of robots are autonomous, such as the popular UVD decontamination robots and meal and prescription carts. But the reports indicate that the robots are not displacing workers. Instead, the robots are helping the existing hospital staff cope with the surge in infectious patients. The decontamination robots disinfect better and faster than human cleaners, while the carts reduce the amount of time and personal protective equipment nurses and aides must spend on ancillary tasks.

Off-the-shelf over prototypes

The second lesson is the robots used during an emergency are usually already in common use before the disaster. Technologists often rush out well-intentioned prototypes, but during an emergency, responders – health care workers and search-and-rescue teams – are too busy and stressed to learn to use something new and unfamiliar. They typically can’t absorb the unanticipated tasks and procedures, like having to frequently reboot or change batteries, that usually accompany new technology.

Fortunately, responders adopt technologies that their peers have used extensively and shown to work. For example, decontamination robots were already in daily use at many locations for preventing hospital-acquired infections. Sometimes responders also adapt existing robots. For example, agricultural drones designed for spraying pesticides in open fields are being adapted for spraying disinfectants in crowded urban cityscapes in China and India.

Workers in Kunming City, Yunnan Province, China, refill a drone with disinfectant. The city is using drones to spray disinfectant in some public areas. Xinhua News Agency/Yang Zongyou via Getty Images

A third lesson follows from the second. Repurposing existing robots is generally more effective than building specialized prototypes. Building a new, specialized robot for a task takes years. Imagine trying to build a new kind of automobile from scratch. Even if such a car could be quickly designed and manufactured, only a few cars would be produced at first and they would likely lack the reliability, ease of use and safety that comes from months or years of feedback from continuous use.

Alternatively, a faster and more scalable approach is to modify existing cars or trucks. This is how robots are being configured for COVID-19 applications. For example, responders began using the thermal cameras already on bomb squad robots and drones – common in most large cities – to detect infected citizens running a high fever. While the jury is still out on whether thermal imaging is effective, the point is that existing public safety robots were rapidly repurposed for public health.

Don’t stockpile robots

The broad use of robots for COVID-19 is a strong indication that the health care system needed more robots, just like it needed more of everyday items such as personal protective equipment and ventilators. But while storing caches of hospital supplies makes sense, storing a cache of specialized robots for use in a future emergency does not.

This was the strategy of the nuclear power industry, and it failed during the Fukushima Daiichi nuclear accident. The robots stored by the Japan Atomic Energy Agency for an emergency were outdated, and the operators were rusty or no longer employed. Instead, the Tokyo Electric Power Company lost valuable time acquiring and deploying commercial off-the-shelf bomb squad robots, which were in routine use throughout the world. While the commercial robots were not perfect for dealing with a radiological emergency, they were good enough and cheap enough for dozens of robots to be used throughout the facility.

Robots in future pandemics

Hopefully, COVID-19 will accelerate the adoption of existing robots and their adaptation to new niches, but it might also lead to new robots. Laboratory and supply chain automation is emerging as an overlooked opportunity. Automating the slow COVID-19 test processing that relies on a small set of labs and specially trained workers would eliminate some of the delays currently being experienced in many parts of the U.S.

Automation is not particularly exciting, but just like the unglamorous disinfecting robots in use now, it is a valuable application. If government and industry have finally learned the lessons from previous disasters, more mundane robots will be ready to work side by side with the health care workers on the front lines when the next pandemic arrives.


Robin R. Murphy, Raytheon Professor of Computer Science and Engineering; Vice-President, Center for Robot-Assisted Search and Rescue (nfp), Texas A&M University; Justin Adams, President of the Center for Robot-Assisted Search and Rescue/Research Fellow - The Center for Disaster Risk Policy, Florida State University, and Vignesh Babu Manjunath Gandudi, Graduate Teaching Assistant, Texas A&M University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Monday, May 11, 2020


How Apple and Google will let your phone warn you if you've been exposed to the coronavirus

Apps that warn about close contact with COVID-19 cases are key to relaxing social distancing rules. Walter Bibikow/Stone via Getty Images
Johannes Becker, Boston University and David Starobinski, Boston University

On April 10, Apple and Google announced a coronavirus exposure notification system that will be built into their smartphone operating systems, iOS and Android. The system uses the ubiquitous Bluetooth short-range wireless communication technology.

There are dozens of apps being developed around the world that alert people if they’ve been exposed to a person who has tested positive for COVID-19. Many of them also report the identities of the exposed people to public health authorities, which has raised privacy concerns. Several other exposure notification projects, including PACT, BlueTrace and the Covid Watch project, take a similar privacy-protecting approach to Apple’s and Google’s initiative.

So how will the Apple-Google exposure notification system work? As researchers who study security and privacy of wireless communication, we have examined the companies’ plan and have assessed its effectiveness and privacy implications.

Recently, a study found that contact tracing can be effective in containing diseases such as COVID-19, if large parts of the population participate. Exposure notification schemes like the Apple-Google system aren’t true contact tracing systems because they don’t allow public health authorities to identify people who have been exposed to infected individuals. But digital exposure notification systems have a big advantage: They can be used by millions of people and rapidly warn those who have been exposed to quarantine themselves.

Bluetooth beacons

Because Bluetooth is supported on billions of devices, it seems like an obvious choice of technology for these systems. The protocol used for this is Bluetooth Low Energy, or Bluetooth LE for short. This variant is optimized for energy-efficient communication between small devices, which makes it a popular protocol for smartphones and wearables such as smartwatches.

Bluetooth allows phones that are near each other to communicate. Records of phones that have been near each other for long enough can serve as a proxy for potential viral transmission. Christoph Dernbach/picture alliance via Getty Images

Bluetooth LE communicates in two main ways. Two devices can communicate with each other over the data channel, such as a smartwatch synchronizing with a phone. Devices can also broadcast useful information to nearby devices over the advertising channel. For example, some devices regularly announce their presence to facilitate automatic connection.

To build an exposure notification app using Bluetooth LE, developers could assign everyone a permanent ID and make every phone broadcast it on an advertising channel. Then, they could build an app that receives the IDs so every phone would be able to keep a record of close encounters with other phones. But that would be a clear violation of privacy. Broadcasting any personally identifiable information via Bluetooth LE is a bad idea, because messages can be read by anyone in range.

Anonymous exchanges

To get around this problem, every phone broadcasts a long random number, which is changed frequently. Other devices receive these numbers and store them if they were sent from close proximity. By using long, unique, random numbers, no personal information is sent via Bluetooth LE.

Apple and Google follow this principle in their specification, but add some cryptography. First, every phone generates a unique tracing key that is kept confidentially on the phone. Every day, the tracing key generates a new daily tracing key. Though the tracing key could be used to identify the phone, the daily tracing key can’t be used to figure out the phone’s permanent tracing key. Then, every 10 to 20 minutes, the daily tracing key generates a new rolling proximity identifier, which looks just like a long random number. This is what gets broadcast to other devices via the Bluetooth advertising channel.
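A minimal sketch of this kind of key cascade is shown below. It uses ordinary HMAC-SHA256 purely for illustration; the labels, key sizes and derivation steps are assumptions, not the exact cryptography in the Apple-Google specification.

```python
# Illustrative sketch of a tracing-key cascade, NOT the exact Apple-Google crypto.
# The labels, key sizes and derivation steps are assumptions for illustration.
import hashlib
import hmac
import os

def new_tracing_key() -> bytes:
    """Permanent secret generated once and kept on the phone."""
    return os.urandom(32)

def daily_tracing_key(tracing_key: bytes, day_number: int) -> bytes:
    """Derive a per-day key; knowing it does not reveal the permanent key."""
    return hmac.new(tracing_key, b"daily" + day_number.to_bytes(4, "big"),
                    hashlib.sha256).digest()[:16]

def rolling_proximity_identifier(daily_key: bytes, interval: int) -> bytes:
    """Derive the short-lived identifier broadcast over Bluetooth LE,
    regenerated every 10 to 20 minutes (interval = slot within the day)."""
    return hmac.new(daily_key, b"rpi" + interval.to_bytes(2, "big"),
                    hashlib.sha256).digest()[:16]

tk = new_tracing_key()
dtk = daily_tracing_key(tk, day_number=18400)          # one key per day
rpi = rolling_proximity_identifier(dtk, interval=42)   # looks like a random number
```

Because each step is a one-way function, disclosing a daily key lets others regenerate that day’s identifiers without revealing the phone’s permanent key.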

When someone tests positive for COVID-19, they can disclose a list of their daily tracing keys, usually from the previous 14 days. Everyone else’s phones use the disclosed keys to recreate the infected person’s rolling proximity identifiers. The phones then compare the COVID-19-positive identifiers with their own records of the identifiers they received from nearby phones. A match reveals a potential exposure to the virus, but it doesn’t identify the patient.
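Continuing the sketch above, matching amounts to regenerating every identifier an infected person’s phone could have broadcast from the disclosed daily keys and intersecting that set with the identifiers a phone has stored; again, this illustrates the idea rather than the production algorithm.

```python
# Illustration of exposure matching, continuing the sketch above.
INTERVALS_PER_DAY = 144  # one rolling identifier roughly every 10 minutes (assumption)

def identifiers_for_day(daily_key: bytes) -> set:
    """All rolling proximity identifiers a phone would have broadcast that day."""
    return {rolling_proximity_identifier(daily_key, i) for i in range(INTERVALS_PER_DAY)}

def check_exposure(disclosed_daily_keys: list, observed_identifiers: set) -> bool:
    """True if any identifier recorded from nearby phones matches one derived
    from the daily keys disclosed by a person who tested positive."""
    return any(identifiers_for_day(k) & observed_identifiers
               for k in disclosed_daily_keys)
```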

The Australian government’s COVIDSafe app warns about close encounters with people who are COVID-19-positive, but unlike the Apple-Google system, COVIDSafe reports the contacts to public health authorities. Florent Rols/SOPA Images/LightRocket via Getty Images

Most of the competing proposals use a similar approach. The principal difference is that Apple’s and Google’s operating system updates reach far more phones automatically than a single app can. Additionally, by proposing a cross-platform standard, Apple and Google allow existing apps to piggyback and use a common, compatible communication approach that could work across many apps.

No plan is perfect

The Apple-Google exposure notification system is very secure, but it’s no guarantee of either accuracy or privacy. The system could produce a large number of false positives because being within Bluetooth range of an infected person doesn’t necessarily mean the virus has been transmitted. And even if an app records only very strong signals as a proxy for close contact, it cannot know whether there was a wall, a window or a floor between the phones.
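Apps typically treat strong received signal strength (RSSI) as a stand-in for physical closeness. A common rough conversion is the log-distance path-loss model sketched below; the calibration constants are assumptions, and walls, windows, floors and bodies are exactly what throws the estimate off.

```python
# Rough distance estimate from Bluetooth signal strength (RSSI).
# tx_power (the RSSI expected at 1 meter) and the path-loss exponent are assumed
# calibration values; real environments vary widely, which is one source of
# false positives and false negatives.

def estimated_distance_m(rssi_dbm: float, tx_power_dbm: float = -59.0,
                         path_loss_exponent: float = 2.0) -> float:
    """Log-distance path-loss model: rssi = tx_power - 10 * n * log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

print(round(estimated_distance_m(-65.0), 1))  # ~2.0 m in open space
print(round(estimated_distance_m(-80.0), 1))  # ~11 m, or much closer through a wall
```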

However unlikely, there are ways governments or hackers could track or identify people using the system. Bluetooth LE devices use an advertising address when broadcasting on an advertising channel. Though these addresses can be randomized to protect the identity of the sender, we demonstrated last year that it is theoretically possible to track devices for extended periods of time if the advertising message and advertising address are not changed in sync. To Apple’s and Google’s credit, they call for these to be changed synchronously.

But even if the advertising address and a coronavirus app’s rolling identifier are changed in sync, it may still be possible to track someone’s phone. If there isn’t a sufficiently large number of other devices nearby that also change their advertising addresses and rolling identifiers in sync – a process known as mixing – someone could still track individual devices. For example, if there is a single phone in a room, someone could keep track of it because it’s the only phone that could be broadcasting the random identifiers.

Another potential attack involves logging additional information along with the rolling identifiers. Even though the protocol does not send personal information or location data, receiving apps could record when and where they received keys from other phones. If this was done on a large scale – such as an app that systematically collects this extra information – it could be used to identify and track individuals. For example, if a supermarket recorded the exact date and time of incoming rolling proximity identifiers at its checkout lanes and combined that data with credit card swipes, store staff would have a reasonable chance of identifying which customers were COVID-19 positive.
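As a toy illustration of that linkage risk, matching a log of received identifiers against payment timestamps is just a join on time. Every value below is invented.

```python
# Toy illustration of the linkage risk described above; all data is invented.
from datetime import datetime

# (time an identifier was heard at the checkout lane, rolling proximity identifier)
identifier_log = [(datetime(2020, 5, 11, 14, 3, 10), "id-a1b2"),
                  (datetime(2020, 5, 11, 14, 9, 45), "id-c3d4")]

# (time of credit card swipe, cardholder)
card_swipes = [(datetime(2020, 5, 11, 14, 3, 12), "customer 1042")]

def link(log, swipes, window_seconds=30):
    """Pair identifiers with cardholders seen within window_seconds of each other."""
    return [(ident, who) for t_id, ident in log for t_sw, who in swipes
            if abs((t_id - t_sw).total_seconds()) <= window_seconds]

print(link(identifier_log, card_swipes))  # [('id-a1b2', 'customer 1042')]
```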

And because Bluetooth LE advertising beacons use plain-text messages, it’s possible to send faked messages. This could be used to troll others by repeating known COVID-19-positive rolling proximity identifiers to many people, resulting in deliberate false positives.

Nevertheless, the Apple-Google system could be the key to alerting thousands of people who have been exposed to the coronavirus while protecting their identities, unlike contact tracing apps that report identifying information to central government or corporate databases.


Johannes Becker, Doctoral student in Electrical & Computer Engineering, Boston University and David Starobinski, Professor of Electrical and Computer Engineering, Boston University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, April 21, 2020


Linking self-driving cars to traffic signals might help pedestrians give them the green light

An autonomous vehicle has no driver to communicate with about whether it’s safe to cross. Saklakova/iStock/Getty Images Plus
Lionel Peter Robert Jr., University of Michigan

The Research Brief is a short take on interesting academic work.

The big idea

Automated vehicles don’t have human operators to communicate their driving intentions to pedestrians at intersections. My team’s research on pedestrians’ perceptions of safety shows their trust of traffic lights tends to override their fear of self-driving cars. This suggests one way to help pedestrians trust and safely interact with autonomous vehicles may be to link the cars’ driving behavior to traffic lights.

In a recent study by my team at the University of Michigan, we focused on communication via a vehicle’s driving behavior to study how people might react to self-driving cars in different situations. We set up a virtual-reality simulator that let people experience street intersections and make choices about whether to cross the street. In different simulations, self-driving cars acted either more or less like an aggressive driver. In some cases there was a traffic light controlling the intersection.

In the more aggressive mode, the car would stop abruptly at the last possible second to let the pedestrian cross. In the less aggressive mode, it would begin braking earlier, indicating to pedestrians that it would stop for them. Aggressive driving reduced pedestrians’ trust in the autonomous vehicle and made them less likely to cross the street.

However, this was true only when there was no traffic light. When there was a light, pedestrians focused on the traffic light and usually crossed the street regardless of whether the car was driving aggressively. This indicates that pedestrians’ trust in traffic lights outweighs any concerns about how self-driving cars behave.

Why it matters

Introducing autonomous vehicles might be one way to make roads safer. Drivers and pedestrians often use nonverbal communication to negotiate safe passage at crosswalks, though, and cars without drivers can’t communicate in the same way. This could in turn make pedestrians and other road users less safe, especially since autonomous vehicles aren’t yet designed to communicate with systems that make streets safer, such as traffic lights.

Other research being done in the field

Some researchers have tried to find ways for self-driving cars to communicate with pedestrians. They have tried to use parts that cars already have, such as headlights, or add new ones, such as LED signs on the vehicle.

However, unless every car does it the same way, this strategy won’t work. For example, unless automakers agreed on how headlights should communicate certain messages or the government set rules, it would be impossible to make sure pedestrians understood the message. The same holds for new technology like LED message boards on cars. There would need to be a standard set of messages all pedestrians could understand without learning multiple systems.

Even if the vehicles communicated in the same way, several cars approaching an intersection and making independent decisions about stopping could cause confusion. Imagine three to five autonomous vehicles approaching a crosswalk, each displaying its own message. The pedestrian would need to read each of these messages, on moving cars, before deciding whether to cross.

What if all vehicles were communicating with the traffic lights ahead, even before they’re visible? elenabs/iStock/Getty Images Plus

What’s next

Our results suggest a better approach would be to have the car communicate directly with the traffic signal, for two reasons.

First, pedestrians already look to and understand current traffic lights.

Second, a car can tell what a traffic light is doing much sooner by checking in over a wireless network than by waiting until its camera can see the light.

This technology is still being developed, and scholars at Michigan’s Mcity mobility research center and elsewhere are studying problems like how to send and prioritize messages between cars and signals. It might effectively put self-driving cars under traffic lights’ control, with ways to adapt to current conditions. For example, a traffic light might tell approaching cars that it was about to turn red, giving them more time to stop. On a slippery road, a car might ask the light to stay green a few seconds longer so an abrupt stop isn’t necessary.
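A sketch of what such an exchange might look like as data appears below. The message fields, names and decision logic are hypothetical, for illustration only; they are not drawn from an existing vehicle-to-infrastructure standard.

```python
# Hypothetical sketch of a traffic-signal-to-vehicle exchange; the message fields
# and decision logic are assumptions for illustration, not a real V2I standard.
from dataclasses import dataclass

@dataclass
class SignalPhaseMessage:
    intersection_id: str
    current_phase: str          # "green", "yellow" or "red"
    seconds_to_change: float    # time until the phase changes

@dataclass
class VehicleRequest:
    vehicle_id: str
    request: str                # e.g. "extend_green"
    reason: str                 # e.g. "slippery_road"

def plan_approach(msg: SignalPhaseMessage, seconds_to_intersection: float) -> str:
    """Decide early, before the light is even visible to the car's cameras."""
    if msg.current_phase == "green" and msg.seconds_to_change < seconds_to_intersection:
        return "begin braking early"   # the light will likely be red on arrival
    return "proceed"

# The light broadcasts its state; an approaching car plans around it and, on a
# slippery road, could ask for a few extra seconds of green.
print(plan_approach(SignalPhaseMessage("Main & 5th", "green", 4.0), 9.0))
request = VehicleRequest("car-17", "extend_green", "slippery_road")
```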

To make this real, engineers and policymakers would need to work together on developing technologies and setting rules. Each would have to better understand what the other does. At the same time, they would need to understand that not every solution works in every region or society. For example, the best way for traffic lights and self-driving cars to communicate in Detroit might not work in Mumbai, where roads and driving practices are far different.


Lionel Peter Robert Jr., Associate Professor of Information, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.