Monday, February 10, 2020


AI could constantly scan the internet for data privacy violations, a quicker, easier way to enforce compliance

You leave bits of your personal data behind online, and companies are happy to trade in them. metamorworks/iStock/Getty Images Plus
Karuna Pande Joshi, University of Maryland, Baltimore County

You’re trailing bits of personal data – such as credit card numbers, shopping preferences and which news articles you read – as you travel around the internet. Large internet companies make money off this kind of personal information by sharing it with their subsidiaries and third parties. Public concern over online privacy has led to laws designed to control who gets that data and how they can use it.

The battle is ongoing. Democrats in the U.S. Senate recently introduced a bill that includes penalties for tech companies that mishandle users’ personal data. If passed, it would join a long list of rules and regulations worldwide, including the Payment Card Industry Data Security Standard that governs online credit card transactions, the European Union’s General Data Protection Regulation, the California Consumer Privacy Act that went into effect in January, and the U.S. Children’s Online Privacy Protection Act.

Internet companies must adhere to these regulations or risk expensive lawsuits or government sanctions, such as the Federal Trade Commission’s recent US$5 billion fine imposed on Facebook.

But it is technically challenging to determine in real time whether a privacy violation has occurred, a problem that grows as internet data reaches extreme scale. To make sure their systems comply, companies rely on human experts to interpret the laws – a complex and time-consuming task for organizations that constantly launch and update services.

My research group at the University of Maryland, Baltimore County, has developed novel technologies for machines to understand data privacy laws and enforce compliance with them using artificial intelligence. These technologies will enable companies to make sure their services comply with privacy laws and also help governments identify in real time those companies that violate consumers’ privacy rights.

Before machines can search for privacy violations, they need to understand the rules. Imilian/iStock/Getty Images Plus

Helping machines understand regulations

Governments generate online privacy regulations as plain text documents that are easy for humans to read but difficult for machines to interpret. As a result, the regulations need to be manually examined to ensure that no rules are being broken when a citizen’s private data is analyzed or shared. This affects companies that now have to comply with a forest of regulations.

Rules and regulations often are ambiguous by design because societies want flexibility in implementing them. Subjective concepts such as good and bad vary among cultures and over time, so laws are drafted in general or vague terms to allow scope for future modifications. Machines can’t process this vagueness – they operate in 1s and 0s – so they cannot “understand” privacy the way humans do. Machines need specific instructions to understand the knowledge on which a regulation is based.

One way to help machines understand an abstract concept is by building an ontology, or a graph representing the knowledge of that concept. Borrowing the concept of ontology from philosophy, AI researchers have developed new computer languages, such as OWL, that can define concepts and categories in a subject area or domain, show their properties and show the relations among them. Ontologies are sometimes called “knowledge graphs,” because they are stored in graphlike structures.

An example of a simple knowledge graph. Karuna Pande Joshi, CC BY-ND
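To make the idea concrete, here is a minimal Python sketch of how a fragment of such a knowledge graph could be built with the open-source rdflib library. The namespace and class names are illustrative assumptions, not taken from the researchers’ actual system.

    from rdflib import Graph, Namespace, RDF, RDFS

    # Hypothetical namespace for privacy concepts (illustrative only).
    EX = Namespace("http://example.org/privacy#")

    g = Graph()
    g.bind("ex", EX)

    # Define two classes and a property, then state one fact:
    # a data controller processes personal data.
    g.add((EX.DataController, RDF.type, RDFS.Class))
    g.add((EX.PersonalData, RDF.type, RDFS.Class))
    g.add((EX.processes, RDF.type, RDF.Property))
    g.add((EX.AcmeCorp, RDF.type, EX.DataController))
    g.add((EX.AcmeCorp, EX.processes, EX.PersonalData))

    # Print the graph in Turtle, a common text format for RDF/OWL data.
    print(g.serialize(format="turtle"))

Languages such as OWL layer richer constructs on top of these basic subject-predicate-object triples, including class restrictions and axioms that automated reasoners can check.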

When my colleagues and I began looking at the challenge of making privacy regulations understandable by machines, we determined that the first step would be to capture all the key knowledge in these laws and create knowledge graphs to store it.

Extracting the terms and rules

The key knowledge in the regulations consists of three parts.

First, there are “terms of art”: words or phrases that have precise definitions within a law. They help to identify the entity that the regulation describes and allow us to describe its roles and responsibilities in a language that computers can understand. For example, from the EU’s General Data Protection Regulation, we extracted terms of art such as “Consumers and Providers” and “Fines and Enforcement.”

Next, we identified deontic rules: sentences or phrases that express what entities are required, permitted or forbidden to do – the subject of deontic logic, a branch of modal logic that deals with obligation and permission. Deontic rules mainly fall into four categories. “Permissions” define the rights of an entity/actor. “Obligations” define the responsibilities of an entity/actor. “Prohibitions” are conditions or actions that are not allowed. “Dispensations” are optional or nonmandatory statements.

The researchers’ application automatically extracted deontic rules, such as permissions and obligations, from two privacy regulations. Entities involved in the rules are highlighted in yellow. Modal words that help identify whether a rule is a permission, prohibition or obligation are highlighted in blue. Gray indicates the temporal or time-based aspect of the rule. Karuna Pande Joshi, CC BY-ND

To explain this with a simple example, consider the following:

  • You have permission to drive.

  • But to drive, you are obligated to get a driver’s license.

  • You are prohibited from speeding (and will be punished if you do so).

  • You can park in areas where you have the dispensation to do so (such as paid parking, metered parking or open areas not near a fire hydrant).

Some of these rules apply to everyone uniformly in all conditions, while others may apply only partially, to a single entity, or based on conditions agreed to by everyone.
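In code, these four categories can be captured with a small data structure. The following Python sketch encodes the driving rules above; the type and field names are our own illustrative choices, not the researchers’ actual schema.

    from dataclasses import dataclass
    from enum import Enum

    class DeonticType(Enum):
        PERMISSION = "permission"
        OBLIGATION = "obligation"
        PROHIBITION = "prohibition"
        DISPENSATION = "dispensation"

    @dataclass
    class DeonticRule:
        rule_type: DeonticType
        actor: str           # the entity the rule applies to
        action: str          # what is permitted, required or forbidden
        condition: str = ""  # optional qualifier, such as time or place

    driving_rules = [
        DeonticRule(DeonticType.PERMISSION, "person", "drive"),
        DeonticRule(DeonticType.OBLIGATION, "driver", "hold a driver's license"),
        DeonticRule(DeonticType.PROHIBITION, "driver", "speed",
                    condition="punishable if violated"),
        DeonticRule(DeonticType.DISPENSATION, "driver", "park",
                    condition="paid, metered or other permitted areas"),
    ]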

Similar rules that describe do’s and don'ts apply to online personal data. There are permissions and prohibitions to prevent data breaches. There are obligations on the companies storing the data to ensure its safety. And there are dispensations made for vulnerable demographics such as minors.

A knowledge graph for GDPR regulations. Karuna Pande Joshi, CC BY-ND

My group developed techniques to automatically extract these rules from the regulations and save them in a knowledge graph.
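A common starting point for this kind of extraction – shown here as a simplified sketch, not the group’s actual pipeline – is to classify each sentence by the modal verbs it contains:

    import re

    # Modal phrases that signal each deontic category.
    MODAL_PATTERNS = {
        "prohibition": r"\b(shall not|must not|may not|is prohibited from)\b",
        "obligation": r"\b(shall|must|is required to)\b",
        "permission": r"\b(may|is allowed to|has the right to)\b",
    }

    def classify(sentence: str) -> str:
        # Check prohibitions first: "shall not" would otherwise
        # match the obligation pattern "shall".
        for category in ("prohibition", "obligation", "permission"):
            if re.search(MODAL_PATTERNS[category], sentence, re.IGNORECASE):
                return category
        return "other"

    print(classify("The controller shall notify the supervisory authority."))
    # -> obligation

Real regulations demand far more than keyword matching – natural language processing is needed to pick out the actor, the action and any conditions – but the modal verbs supply a first label for each rule’s deontic category.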

Third, we had to figure out how to include the cross-references that are often used in legal regulations to refer to text in another section of the regulation or in a separate document. These are important knowledge elements that should also be stored in the knowledge graph.

Rules in place, scanning for compliance

After defining all the key entities, properties, relations, rules and policies of a data privacy law in a knowledge graph, my colleagues and I can create applications that reason about the data privacy rules using these knowledge graphs.

These applications can significantly reduce the time it will take companies to determine whether they are complying with the data protection regulations. They can also help regulators monitor data audit trails to determine whether companies they oversee are complying with the rules.
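As an illustration of what such an application might do, the following sketch queries a knowledge graph of extracted rules using SPARQL, the standard query language for RDF data. The file name and vocabulary are hypothetical, continuing the illustrative namespace from the earlier sketch.

    from rdflib import Graph

    g = Graph()
    g.parse("gdpr_rules.ttl", format="turtle")  # hypothetical file of extracted rules

    # List every obligation that applies to a data controller.
    query = """
        PREFIX ex: <http://example.org/privacy#>
        SELECT ?rule ?action WHERE {
            ?rule a ex:Obligation ;
                  ex:actor ex:DataController ;
                  ex:action ?action .
        }
    """
    for row in g.query(query):
        print(row.rule, row.action)

A compliance checker could then compare each returned obligation against a company’s logged data practices and flag any gaps for human review.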

This technology can also help individuals get a quick snapshot of their rights and responsibilities with respect to the private data they share with companies. Once machines can quickly interpret long, complex privacy policies, people will be able to automate many mundane compliance activities that are done manually today. They may also be able to make those policies more understandable to consumers.

Karuna Pande Joshi, Assistant Professor of Information Systems, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Monday, February 3, 2020


How old should kids be to get phones?

Every kid should have their own cell phone. Or should they? Syda Productions/Shutterstock.com
Fashina Aladé, Michigan State University

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to curiouskidsus@theconversation.com.


What age should kids get phones? – Yuvi, age 10, Dayton, Ohio


If it seems like all your friends have smartphones, you may be on to something. A new report by Common Sense Media, a nonprofit that reports on technology and media for children, found that by the age of 11, more than half of kids in the U.S. have their own smartphone. By age 12, more than two-thirds do, and by 14, teens are just as likely as adults to own a smartphone. Some kids start much younger. Nearly 20% of 8-year-olds have their own smartphone!

So, what’s the right age for you? Well, I study the effects of media and technology on kids, and I’m here to say that there is no single right answer to this question. The best I can offer is this: When both you and your parents feel the time is right.

How to talk to your parents about a smartphone

Smartphones can help you stay in touch with your people. Tony Stock/Shutterstock.com

Here are some points to consider to help you and your parents make this decision.

Responsibility: Have you shown that you are generally responsible? Do you keep track of important belongings? Do you understand the value of money, and can you save up to buy things you want? These are all good signs that you may be ready for a phone. If not, it might be wise to wait a bit longer.

Safety: Do you travel to or from school or after-school activities without an adult? This is when phones often go from a “want” to a “need.” Sometimes parents report that they feel better knowing they can reach their children directly, and that their kids can reach them, too.

Social maturity: Do you treat your friends with kindness and respect? Do you understand the permanence of the internet, the fact that once something goes out onto the web, it can never truly be deleted? It is critically important that you have a grasp on these issues before you own a smartphone.

We all get angry and sometimes say hurtful things we don’t mean. But when you post something on the internet that you might regret later, or might wish you could take back – even on a so-called anonymous app – it can have real and lasting harmful effects. In the era of smartphones, there have been huge increases in cyberbullying.

How will you use your smartphone? Twin Design/Shutterstock.com

Being smart about your smartphone

If you and your parents decide this is a good time to take that step, here are some tips to create a healthy relationship between you and your phone.

Parents should model good behavior! Your parents are the No. 1 most important influence in your life, and that goes for technology use as much as anything else. If parents are glued to their phones all day, guess what? Their children probably will be, too.

On the flip side, if parents model smartphone habits like putting the phone away during meals and not texting and driving, that will go a long way toward helping kids develop similar healthy behaviors.

Are you ready? 1shostak/Shutterstock.com

You and your parents should talk together about the importance of setting rules and limits around your phone use and screen time. Understanding why rules are made and set in place can help kids stick to a system.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

Fashina Aladé, Assistant Professor, Advertising and Public Relations, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Friday, January 31, 2020


3D printing of body parts is coming fast – but regulations are not ready

The technology of producing biological parts is advancing, raising new legal and regulatory questions. Philip Ezze, CC BY-SA
Dinusha Mendis, Bournemouth University and Ana Santos Rutschman, Saint Louis University

In the last few years, the use of 3D printing has exploded in medicine. Engineers and medical professionals now routinely 3D print prosthetic hands and surgical tools. But 3D printing has only just begun to transform the field.

Today, a quickly emerging set of technologies known as bioprinting is poised to push the boundaries further. Bioprinting uses 3D printers and techniques to fabricate the three-dimensional structures of biological materials, from cells to biochemicals, through precise layer-by-layer positioning. The ultimate goal is to replicate functioning tissue and material, such as organs, which can then be transplanted into human beings.

We have been mapping the adoption of 3D printing technologies in the field of health care, and particularly bioprinting, in a collaboration between the law schools of Bournemouth University in the United Kingdom and Saint Louis University in the United States. While the future looks promising from a technical and scientific perspective, it’s far from clear how bioprinting and its products will be regulated. Such uncertainty can be problematic for manufacturers and patients alike, and could prevent bioprinting from living up to its promise.

From 3D printing to bioprinting

Bioprinting has its origins in 3D printing. Generally, 3D printing refers to all technologies that use a process of joining materials, usually layer upon layer, to make objects from data described in a digital 3D model. Though the technology initially had limited applications, it is now a widely recognized manufacturing system that is used across a broad range of industrial sectors. Companies are now 3D printing car parts, education tools like frog dissection kits and even 3D-printed houses. Both the United States Air Force and British Airways are developing ways of 3D printing airplane parts.

The NIH in the U.S. has a program to develop bioprinted tissue that’s similar to human tissue to speed up drug screening. Paige Derr and Kristy Derr, National Center for Advancing Translational Sciences

In medicine, doctors and researchers use 3D printing for several purposes. It can be used to generate accurate replicas of a patient’s body part. In reconstructive and plastic surgeries, implants can be specifically customized for patients using “biomodels” made possible by special software tools. Human heart valves, for instance, are now being 3D printed through several different processes, although none have been transplanted into people yet. And there have been significant advances in 3D printing methods in areas like dentistry over the past few years.

Bioprinting’s rapid emergence is built on recent advances in 3D printing techniques to engineer different types of products involving biological components, including human tissue and, more recently, vaccines.

While bioprinting is not entirely a new field because it is derived from general 3D printing principles, it is a novel concept for legal and regulatory purposes. And that is where the field could get tripped up if regulators cannot decide how to approach it.

State of the art in bioprinting

Scientists are still far from accomplishing 3D-printed organs because it’s incredibly difficult to connect printed structures to the vascular systems that carry life-sustaining blood and lymph throughout our bodies. But they have been successful in printing nonvascularized tissue like certain types of cartilage. They have also been able to produce ceramic and metal scaffolds that support bone tissue by using different types of bioprintable materials, such as gels and certain nanomaterials. A number of promising animal studies, some involving cardiac tissue, blood vessels and skin, suggest that the field is getting closer to its ultimate goal of transplantable organs.

Researchers explain ongoing work to make 3D-printed tissue that could one day be transplanted into a human body.

We expect that advancements in bioprinting will increase at a steady pace, even with current technological limitations, potentially improving the lives of many patients. In 2019 alone, several research teams reported a number of breakthroughs. Bioengineers at Rice and Washington Universities, for example, used hydrogels to successfully print the first series of complex vascular networks. Scientists at Tel Aviv University managed to produce the first 3D-printed heart. It included “cells, blood vessels, ventricles and chambers” and used cells and biological materials from a human patient. In the United Kingdom, a team from Swansea University developed a bioprinting process to create an artificial bone matrix, using durable, regenerative biomaterial.

‘Cloneprinting’

Though the future looks promising from a technical and scientific perspective, current regulations around bioprinting pose some hurdles. From a conceptual point of view, it is hard to determine what bioprinting effectively is.

Consider the case of a 3D-printed heart: Is it best described as an organ or a product? Or should regulators look at it more like a medical device?

Regulators have a number of questions to answer. To begin with, they need to decide whether bioprinting should be regulated under new or existing frameworks, and if the latter, which ones. For instance, should they apply regulations for biologics, a class of complex pharmaceuticals that includes treatments for cancer and rheumatoid arthritis, because biologic materials are involved, as is the case with 3D-printed vaccines? Or would a regulatory framework for medical devices be better suited to the task of overseeing customized 3D-printed products, like splints for newborns suffering from life-threatening medical conditions?

In Europe and the U.S., scholars and commentators have questioned whether bioprinted materials should enjoy patent protection because of the moral issues they raise. An analogy can be drawn with Dolly the sheep, the famed cloned animal of more than 20 years ago. In that case, the U.S. Court of Appeals for the Federal Circuit held that cloned sheep cannot be patented because they are identical copies of naturally occurring sheep. It is a clear example of the parallels between cloning and bioprinting. Some people speculate that in the future there will be ‘cloneprinting,’ which could have the potential to revive extinct species or solve the organ transplant shortage.

Dolly the sheep’s example illustrates the court’s reluctance to traverse this path. Therefore, if, at some point in the future, bioprinters or indeed cloneprinters can be used to replicate not simply organs but also human beings using cloning technologies, a patent application of this nature could fail under current law. A study funded by the European Commission, led by Bournemouth University and due for completion in early 2020, aims to provide legal guidance on these and other intellectual property and regulatory questions.

On the other hand, if European regulators classify the product of bioprinting as a medical device, there will be at least some degree of legal clarity, as a regulatory regime for medical devices has long been in place. In the United States, the FDA has issued guidance on 3D-printed medical devices, but not on the specifics of bioprinting. More important, such guidance is not binding and only represents the thinking of a particular agency at a point in time.

Cloudy regulatory outlook

Those are not the only uncertainties that are racking the field. Consider the recent progress surrounding 3D-printed organs, particularly the example of a 3D-printed heart. If a functioning 3D-printed heart becomes available, which body of law should apply beyond the realm of FDA regulations? In the United States, should the National Organ Transplant Act, which was written with human organs in mind, apply? Or do we need to amend the law, or even create a separate set of rules for 3D-printed organs?

We have no doubt that 3D printing in general, and bioprinting specifically, will advance rapidly in the coming years. Policymakers should be paying closer attention to the field to ensure that its progress does not outstrip their capacity to safely and effectively regulate it. If they succeed, it could usher in a new era in medicine that could improve the lives of countless patients.


Dinusha Mendis, Professor of Intellectual Property and Innovation Law and Co-Director of the Jean Monnet Centre of Excellence for European Intellectual Property and Information Rights, Bournemouth University and Ana Santos Rutschman, Assistant Professor of Law, Saint Louis University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Friday, January 24, 2020


How Iran's military outsources its cyberthreat forces

Dorothy Denning, Naval Postgraduate School

In the wake of the U.S. killing of a top Iranian general and Iran’s retaliatory missile strike, should the U.S. be concerned about the cyberthreat from Iran? Already, pro-Iranian hackers have defaced several U.S. websites to protest the killing of General Qassem Soleimani. One group wrote “This is only a small part of Iran’s cyber capability” on one of the hacked sites.

Two years ago, I wrote that Iran’s cyberwarfare capabilities lagged behind those of both Russia and China, but that it had become a major threat that would only get worse. It had already conducted several highly damaging cyberattacks.

Since then, Iran has continued to develop and deploy its cyberattacking capabilities. It carries out attacks through a network of intermediaries, allowing the regime to strike its foes while denying direct involvement.

Islamic Revolutionary Guard Corps-supported hackers

Iran’s cyberwarfare capability lies primarily within Iran’s Islamic Revolutionary Guard Corps, a branch of the country’s military. However, rather than employing its own cyberforce against foreign targets, the Islamic Revolutionary Guard Corps appears to mainly outsource these cyberattacks.

According to cyberthreat intelligence firm Recorded Future, the Islamic Revolutionary Guard Corps uses trusted intermediaries to manage contracts with independent groups. These intermediaries are loyal to the regime but separate from it. They translate the Iranian military’s priorities into discrete tasks, which are then bid out to independent contractors.

Recorded Future estimates that as many as 50 organizations compete for these contracts. Several contractors may be involved in a single operation.

Iranian contractors communicate online to hire workers and exchange information. Ashiyane, the primary online security forum in Iran, was created by hackers in the mid-2000s in order to disseminate hacking tools and tutorials within the hacking community. The Ashiyane Digital Security Team was known for hacking websites and replacing their home pages with pro-Iranian content. By May 2011, Zone-H, an archive of defaced websites, had recorded 23,532 defacements by that group alone. Its leader, Behrouz Kamalian, said his group cooperated with the Iranian military, but operated independently and spontaneously.

Iran had an active community of hackers at least by 2004, when a group calling itself Iran Hackers Sabotage launched a succession of web attacks “with the aim of showing the world that Iranian hackers have something to say in the worldwide security.” It is likely that many of Iran’s cyber contractors come from this community.

Iran’s use of intermediaries and contractors makes it harder to attribute cyberattacks to the regime. Nevertheless, investigators have been able to trace many cyberattacks to persons inside Iran operating with the support of the country’s Islamic Revolutionary Guard Corps.

Cyber campaigns

Iran engages in both espionage and sabotage operations. Its hackers employ both off-the-shelf malware and custom-made software tools, according to a 2018 report by the Foundation for Defense of Democracies. They use spearphishing – luring specific individuals with fraudulent messages – to gain initial access to target machines, enticing victims to click on links that lead to phony sites where they hand over usernames and passwords, or to open attachments that plant “backdoors” on their devices. Once in, the attackers use various hacking tools to spread through networks and download or destroy data.
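One of the simplest technical tells of a phishing link is a mismatch between the web address a message displays and the address it actually points to. The check below is a generic illustration of that heuristic, not a tool drawn from the report.

    from urllib.parse import urlparse

    def domains_differ(display_url: str, actual_href: str) -> bool:
        """Flag links whose visible URL and real target disagree –
        a classic phishing tell."""
        shown = (urlparse(display_url).hostname or "").lower()
        target = (urlparse(actual_href).hostname or "").lower()
        return shown != target

    # An email displays a bank's address but links somewhere else entirely.
    print(domains_differ("https://mybank.com/login",
                         "https://mybank-login.example.net/"))  # True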

Iran’s cyber espionage campaigns gain access to networks in order to steal proprietary and sensitive data in areas of interest to the regime. Security companies that track these threats give them APT (Advanced Persistent Threat) names such as APT33, “kitten” names such as Magic Kitten and miscellaneous other names such as OilRig.

The group the security firm FireEye calls APT33 is especially noteworthy. It has conducted numerous espionage operations against oil and aviation industries in the U.S., Saudi Arabia and elsewhere. APT33 was recently reported to use small botnets (networks of compromised computers) to target very specific sites for their data collection.

Another group known as APT35 (aka Phosphorus) has attempted to gain access to email accounts belonging to individuals involved in a 2020 U.S. presidential campaign. Were they to succeed, they might be able to use stolen information to influence the election by, for example, releasing information publicly that could be damaging to a candidate.

In 2018, the U.S. Department of Justice charged nine Iranians with conducting a massive cyber theft campaign on behalf of the Islamic Revolutionary Guard Corps. All were tied to the Mabna Institute, an Iranian company behind cyber intrusions since at least 2013. The defendants allegedly stole 31 terabytes of data from U.S. and foreign entities. The victims included over 300 universities, almost 50 companies and several government agencies.

Cyber sabotage

Iran’s sabotage operations have employed “wiper” malware to destroy data on hard drives. They have also employed botnets to launch distributed denial-of-service attacks, where a flood of traffic effectively disables a server. These operations are frequently hidden behind monikers that resemble those used by independent hacktivists who hack for a cause rather than money.

Hacking groups tied to the Iranian regime have defaced websites, wiped data from PCs and attempted to infiltrate industrial control systems.

In one highly damaging attack, a group calling themselves the Cutting Sword of Justice attacked the Saudi Aramco oil company with wiper code in 2012. The hackers used a virus dubbed Shamoon to spread the code through the company’s network. The attack destroyed data on 35,000 computers, disrupting business processes for weeks.

The Shamoon software reappeared in 2016, wiping data from thousands of computers in Saudi Arabia’s civil aviation agency and other organizations. Then in 2018, a variant of Shamoon hit the Italian oil services firm Saipem, crippling more than 300 computers.

Iranian hackers have conducted massive distributed denial-of-service attacks. From 2012 to 2013, a group calling itself the Cyber Fighters of Izz ad-Din al-Qassam launched a series of relentless distributed denial-of-service attacks against major U.S. banks. The attacks were said to have caused tens of millions of dollars in losses relating to mitigation and recovery costs and lost business.

In 2016 the U.S. indicted seven Iranian hackers for working on behalf of the Islamic Revolutionary Guard Corps to conduct the bank attacks. The motivation may have been retaliation for economic sanctions that had been imposed on Iran.

Looking ahead

So far, Iranian cyberattacks have been limited to desktop computers and servers running standard commercial software. They have not yet affected industrial control systems running electrical power grids and other physical infrastructure. Were they to get into and take over these control systems, they could cause far more serious damage, such as the 2015 and 2016 power outages in Ukraine caused by Russian hackers.

One of the Iranians indicted in the bank attacks did get into the computer control system for the Bowman Avenue Dam in rural New York. According to the indictment, no damage was done, but the access would have allowed the dam’s gate to be manipulated had it not been manually disconnected for maintenance.

While there are no public reports of Iranian threat actors demonstrating a capability against industrial control systems, Microsoft recently reported that APT33 appears to have shifted its focus to these systems. In particular, the group has been attempting to guess passwords for the systems’ manufacturers, suppliers and maintainers. The access and information gained by succeeding might help them get into an industrial control system.

Ned Moran, a security researcher with Microsoft, speculated that the group may be attempting to gain access to industrial control systems in order to produce physically disruptive effects. Although APT33 has not been directly implicated in any incidents of cyber sabotage, security researchers have found links between code used by the group and code used in the Shamoon attacks to destroy data.

While it is impossible to know Iran’s intentions, the regime is likely to continue operating numerous cyber espionage campaigns while developing additional capabilities for cyber sabotage. If tensions between Iran and the United States mount, Iran may respond with additional cyberattacks, possibly ones that are more damaging than we’ve seen so far.


Dorothy Denning, Emeritus Distinguished Professor of Defense Analysis, Naval Postgraduate School

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Thursday, January 23, 2020


Screen time: Conclusions about the effects of digital media are often incomplete, irrelevant or wrong

Humans are barraged by digital media 24/7. Is it a problem? Bruce Rolff/Shutterstock.com
Byron Reeves, Stanford University; Nilam Ram, Pennsylvania State University, and Thomas N. Robinson, Stanford University

There’s a lot of talk about digital media. Increasing screen time has created worries about media’s impacts on democracy, addiction, depression, relationships, learning, health, privacy and much more. The effects are frequently assumed to be huge, even apocalyptic.

Scientific data, however, often fail to confirm what seems true based on everyday experiences. In study after study, screen time is often not correlated with important effects at a magnitude that matches the concerns and expectations of media consumers, critics, teachers, parents, pediatricians and even the researchers themselves. For example, a recent review of over 200 studies about social media concluded there was almost no effect of greater screen time on psychological well-being. A comprehensive study of adolescents reported small effects of screen time on brain development, and no relationship between media use and cognitive performance. A review of 20 studies about the effects of multitasking with media – that is, using two or more screens at the same time – showed small declines in cognitive performance because of multitasking but also pointed out new studies that showed the opposite.

As communication, psychological and medical researchers who study media effects, we are interested in how individuals’ engagement with digital technology influences people’s thoughts, emotions, behaviors, health and well-being.

Moving beyond ‘screen time’

Has the power of media over modern life been overstated? Probably not, but no one knows, because there is a severe lack of knowledge about what people are actually seeing and doing on their screens.

Individuals all around the world are now looking at pretty much the same screens and spending a lot of time with them. However, the similarities between us end there. Many different kinds of applications, games and messages flow across people’s screens. And, because it is so easy to create customized personal threads of experiences, each person ends up viewing very different material at different times. No two people share the same media experiences.

To determine the effects of media on people’s lives, whether beneficial or harmful, requires knowledge of what people are actually seeing and doing on those screens. But researchers often mistakenly depend on a rather blunt metric – screen time.

So many social media apps, so little time. Twin Design/Shutterstock.com

Reports of screen time, the most common way to assess media use, are known to be terribly inaccurate and describe only total viewing time. Today, on a single screen, you can switch instantly between messaging a neighbor, watching the news, parenting a child, arranging for dinner delivery, planning a weekend trip, talking on an office video conference and even monitoring your car, home irrigation and lighting. Add to that more troublesome uses – bullying a classmate, hate speech or reading fabricated news. Knowing someone’s screen time – their total dose of media – will not diagnose problems with any of that content.

A media solution based only on screen time is like advising someone who takes multiple prescription medications to cut their total number of pills in half. Which medications, and when?

Complex and unique nature of media use

What would be a better gauge of media consumption than screen time? Something that better captures the complexities of how individuals engage with media. Perhaps the details about specific categories of content – the names of the programs, software and websites – would be more informative. Sometimes that may be enough to highlight problems – playing a popular game more than intended, frequent visits to a suspicious political website or too much social time on Facebook.

Tracking big categories of content, however, is still not that helpful. My one hour of Facebook, for example, could be spent on self-expression and social comparison; yours could be filled with news, shopping, classes, games and videos. Further, our research finds that people now switch between content on their smartphones and laptops every 10 to 20 seconds on average. Many people average several hundred different smartphone sessions per day. The fast cadence certainly influences how people converse with each other and how engaged we are with information. And each bit of content is surrounded by other kinds of material. News read on Facebook sandwiches political content between social relationships, each one changing the interpretation of the other.

Screen time: work and play. Gorodenkoff/Shutterstock.com

A call for a Human Screenome Project

In this era of technology and big data, we need a DVR for digital life that records the entirety of individuals’ screen media experiences – what we call the screenome, analogous to the genome, microbiome and other “omes” that define an individual’s unique characteristics and exposures.

An individual’s screenome includes apps and websites, the specific content observed and created, all of the words, images and sounds on the screens, and their time of day, duration and sequencing. It includes whether the content is produced by the user or sent from others. And it includes characteristics of use, such as variations in how much one interacts with a screen, how quickly one switches between content, scrolls through screens, and turns the screen on and off.
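To picture what one element of a screenome might look like as data, consider this minimal Python sketch. The field names are illustrative assumptions on our part, not a schema from the Nature paper.

    from dataclasses import dataclass, field

    @dataclass
    class ScreenomeRecord:
        timestamp: float         # when this sample of the screen was taken
        app: str                 # app or website in the foreground
        duration_seconds: float  # how long the content stayed on screen
        user_produced: bool      # created by the user vs. sent by others
        words: list[str] = field(default_factory=list)  # text on the screen
        screen_on: bool = True   # whether the display was active

    # A time-ordered sequence of such records is one person's screenome.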

Without knowledge of the whole screenome, no one – including researchers, critics, educators, journalists or policymakers – can accurately describe the new media chaos. People need much better data – for science, policy, parenting and more. And it needs to be collected and supported by individuals and organizations who are motivated to share the information for all to analyze and apply.

The benefits from studying the human genome required developing the field of genomics. The same will be true for the human screenome, the unique individual record of experiences that constitute psychological and social life on digital devices. Researchers now have the technologies to begin a serious study of screenomics, which we describe in the journal Nature. Now we need the data – a collective effort to produce, map and analyze a large and informative set of screenomes. A Human Screenome Project could inform academics, health professionals, educators, parents, advocacy groups, tech companies and policymakers about how to maximize the potential of media and remedy its most pernicious effects.


Byron Reeves, Professor of Communication, Stanford University; Nilam Ram, Professor of Human Development and Family Studies, and Psychology, Pennsylvania State University, and Thomas N. Robinson, Professor of Pediatrics and of Medicine, Stanford University

This article is republished from The Conversation under a Creative Commons license. Read the original article.