Friday, March 13, 2020


How technology can combat the rising tide of fake science

A crop circle in Switzerland. Jabberocky/Wikimedia Commons
Chris Impey, University of Arizona

Science gets a lot of respect these days. Unfortunately, it’s also getting a lot of competition from misinformation. Seven in 10 Americans think the benefits from science outweigh the harms, and nine in 10 think science and technology will create more opportunities for future generations. Scientists have made dramatic progress in understanding the universe and the mechanisms of biology, and advances in computation benefit all fields of science.

On the other hand, Americans are surrounded by a rising tide of misinformation and fake science. Take climate change. Scientists are in almost complete agreement that people are the primary cause of global warming. Yet polls show that a third of the public disagrees with this conclusion.

In my 30 years of studying and promoting scientific literacy, I’ve found that college-educated adults have large holes in their basic science knowledge, and they’re disconcertingly susceptible to superstition and beliefs that aren’t based on any evidence. One way to counter this is to make it easier for people to detect pseudoscience online. To this end, my lab at the University of Arizona has developed an artificial intelligence-based pseudoscience detector that we plan to release freely as a web browser extension and smartphone app.

Americans’ predilection for fake science

Americans are prone to superstition and paranormal beliefs. An annual survey done by sociologists at Chapman University finds that more than half believe in spirits and the existence of ancient civilizations like Atlantis, and more than a third think that aliens have visited the Earth in the past or are visiting now. Over 75% hold multiple paranormal beliefs. The survey shows that these numbers have increased in recent years.

Widespread belief in astrology is a pet peeve of my colleagues in astronomy. It’s long had a foothold in the popular culture through horoscopes in newspapers and magazines but currently it’s booming. Belief is strong even among the most educated. My surveys of college undergraduates show that three-quarters of them think that astrology is very or “sort of” scientific and only half of science majors recognize it as not at all scientific.

Allan Mazur, a sociologist at Syracuse University, has delved into the nature of irrational belief systems, their cultural roots, and their political impact. Conspiracy theories are, by definition, resistant to evidence or data that might prove them false. Some are at least amusing. Adherents of the flat Earth theory turn back the clock on two millennia of scientific progress. Interest in this bizarre idea has surged in the past five years, spurred by social media influencers and the echo-chamber nature of websites like Reddit. As with climate change denial, many come to this belief through YouTube videos.

However, the consequences of fake science are no laughing matter. In matters of health and climate change, misinformation can be a matter of life and death. Over a 90-day period spanning December, January and February, people liked, shared and commented on posts from sites containing false or misleading information about COVID-19 142 times more than they did information from the Centers for Disease Control and the World Health Organization.

Combating fake science is an urgent priority. In a world that’s increasingly dependent on science and technology, civic society can only function when the electorate is well informed.

Educators must roll up their sleeves and do a better job of teaching critical thinking to young people. However, the problem goes beyond the classroom. The internet is the first source of science information for 80% of people ages 18 to 24.

One study found that a majority of a random sample of 200 YouTube videos on climate change denied that humans were responsible or claimed that it was a conspiracy. The videos peddling conspiracy theories got the most views. Another study found that a quarter of all tweets on climate were generated by bots and they preferentially amplified messages from climate change deniers.

Technology to the rescue?

The recent success of machine learning and AI in detecting fake news points the way to detecting fake science online. The key is neural net technology. Neural nets are loosely modeled on the human brain. They consist of many interconnected processing units that learn to identify meaningful patterns in data such as words and images. Neural nets already permeate everyday life, particularly in natural language processing systems like Amazon’s Alexa and Google’s language translation capability.

At the University of Arizona, we have trained neural nets on handpicked popular articles about climate change and biological evolution, and the neural nets are 90% successful in distinguishing wheat from chaff. With a quick scan of a site, our neural net can tell if its content is scientifically sound or climate-denial junk. After more refinement and testing we hope to have neural nets that can work across all domains of science.
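The idea behind training a classifier on handpicked articles can be illustrated in miniature. The sketch below trains a toy bag-of-words perceptron to separate reliable-sounding snippets from junk; the snippets, labels and vocabulary are invented for illustration and bear no relation to the actual training data or neural net architecture described above.

```python
# Toy bag-of-words text classifier, illustrating (in miniature) how a
# model can learn to separate reliable science writing from junk.
# The training snippets and labels below are invented for illustration.
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(samples, epochs=20):
    """samples: list of (text, label) with label 1 = reliable, 0 = junk."""
    weights = Counter()
    bias = 0.0
    for _ in range(epochs):
        for text, label in samples:
            counts = Counter(tokenize(text))
            score = bias + sum(weights[w] * c for w, c in counts.items())
            predicted = 1 if score > 0 else 0
            if predicted != label:            # perceptron update on mistakes
                step = 1 if label == 1 else -1
                for w, c in counts.items():
                    weights[w] += step * c
                bias += step
    return weights, bias

def classify(weights, bias, text):
    counts = Counter(tokenize(text))
    score = bias + sum(weights[w] * c for w, c in counts.items())
    return "reliable" if score > 0 else "junk"

samples = [
    ("peer reviewed evidence supports warming trend", 1),
    ("measured data confirm the greenhouse effect", 1),
    ("climate hoax exposed by secret insiders", 0),
    ("scientists hide the truth about the hoax", 0),
]
weights, bias = train(samples)
print(classify(weights, bias, "new evidence supports the warming data"))  # reliable
print(classify(weights, bias, "the truth about the climate hoax"))        # junk
```

A real neural net replaces the hand-counted words with learned representations and many layers of weights, but the core loop — predict, compare with the label, adjust — is the same.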

Neural net technology under development at the University of Arizona will flag science websites with a color code indicating their reliability (left). A smartphone app version will gamify the process of declaring science articles real or fake (right). Chris Impey, CC BY-ND

The goal is a web browser extension that would detect when the user is looking at science content and determine whether it’s real or fake. If it’s misinformation, the tool will suggest a reliable website on that topic. My colleagues and I also plan to gamify the interface with a smartphone app that will let people compete with their friends and relatives to detect fake science. Data from the best of these participants will be used to help train the neural net.

Sniffing out fake science should be easier than sniffing out fake news in general, because subjective opinion plays a minimal role in legitimate science, which is characterized by evidence, logic and verification. Experts can readily distinguish legitimate science from conspiracy theories and arguments motivated by ideology, which means machine learning systems can be trained to, as well.

“Everyone is entitled to his own opinion, but not his own facts.” These words of Daniel Patrick Moynihan, advisor to four presidents, could be the mantra for those trying to keep science from being drowned by misinformation.


Chris Impey, University Distinguished Professor of Astronomy, University of Arizona

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Wednesday, February 12, 2020


Hackers could shut down satellites – or turn them into weapons

Two CubeSats, part of a constellation built and operated by Planet Labs Inc. to take images of Earth, were launched from the International Space Station on May 17, 2016. NASA
William Akoto, University of Denver

Last month, SpaceX became the operator of the world’s largest active satellite constellation. As of the end of January, the company had 242 satellites orbiting the planet with plans to launch 42,000 over the next decade. This is part of its ambitious project to provide internet access across the globe. The race to put satellites in space is on, with Amazon, U.K.-based OneWeb and other companies chomping at the bit to place thousands of satellites in orbit in the coming months.

These new satellites have the potential to revolutionize many aspects of everyday life – from bringing internet access to remote corners of the globe to monitoring the environment and improving global navigation systems. Amid all the fanfare, a critical danger has flown under the radar: the lack of cybersecurity standards and regulations for commercial satellites, in the U.S. and internationally. As a scholar who studies cyber conflict, I’m keenly aware that this, coupled with satellites’ complex supply chains and layers of stakeholders, leaves them highly vulnerable to cyberattacks.

If hackers were to take control of these satellites, the consequences could be dire. On the mundane end of the scale, hackers could simply shut satellites down, denying access to their services. Hackers could also jam or spoof the signals from satellites, creating havoc for critical infrastructure, including electric grids, water networks and transportation systems.

Some of these new satellites have thrusters that allow them to speed up, slow down and change direction in space. If hackers took control of these steerable satellites, the consequences could be catastrophic. Hackers could alter the satellites’ orbits and crash them into other satellites or even the International Space Station.

Commodity parts open a door

Makers of these satellites, particularly small CubeSats, use off-the-shelf technology to keep costs low. The wide availability of these components means hackers can analyze them for vulnerabilities. In addition, many of the components draw on open-source technology. The danger here is that hackers could insert back doors and other vulnerabilities into satellites’ software.

The highly technical nature of these satellites also means multiple manufacturers are involved in building the various components. The process of getting these satellites into space is also complicated, involving multiple companies. Even once they are in space, the organizations that own the satellites often outsource their day-to-day management to other companies. With each additional vendor, the vulnerabilities increase as hackers have multiple opportunities to infiltrate the system.

CubeSats are small, inexpensive satellites. Svobodat/Wikimedia Commons, CC BY

Hacking some of these CubeSats may be as simple as waiting for one of them to pass overhead and then sending malicious commands using specialized ground antennas. Hacking more sophisticated satellites might not be that hard either.

Satellites are typically controlled from ground stations. These stations run computers with software vulnerabilities that can be exploited by hackers. If hackers were to infiltrate these computers, they could send malicious commands to the satellites.

A history of hacks

This scenario played out in 1998 when hackers took control of the U.S.-German ROSAT X-Ray satellite. They did it by hacking into computers at the Goddard Space Flight Center in Maryland. The hackers then instructed the satellite to aim its solar panels directly at the sun. This effectively fried its batteries and rendered the satellite useless. The defunct satellite eventually crashed back to Earth in 2011. Hackers could also hold satellites for ransom, as happened in 1999 when hackers took control of the U.K.‘s SkyNet satellites.

Over the years, the threat of cyberattacks on satellites has gotten more dire. In 2008, hackers, possibly from China, reportedly took full control of two NASA satellites, one for about two minutes and the other for about nine minutes. In 2018, another group of Chinese state-backed hackers reportedly launched a sophisticated hacking campaign aimed at satellite operators and defense contractors. Iranian hacking groups have also attempted similar attacks.

Although the U.S. Department of Defense and National Security Agency have made some efforts to address space cybersecurity, the pace has been slow. There are currently no cybersecurity standards for satellites and no governing body to regulate and ensure their cybersecurity. Even if common standards could be developed, there are no mechanisms in place to enforce them. This means responsibility for satellite cybersecurity falls to the individual companies that build and operate them.

Market forces work against space cybersecurity

SpaceX, headquartered in Hawthorne, Calif., plans to launch 42,000 satellites over the next decade. Bruno Sanchez-Andrade Nuño/Wikimedia Commons, CC BY

As they compete to be the dominant satellite operator, SpaceX and rival companies are under increasing pressure to cut costs. There is also pressure to speed up development and production. This makes it tempting for the companies to cut corners in areas like cybersecurity that are secondary to actually getting these satellites in space.

Even for companies that make a high priority of cybersecurity, the costs associated with guaranteeing the security of each component could be prohibitive. This problem is even more acute for low-cost space missions, where the cost of ensuring cybersecurity could exceed the cost of the satellite itself.

To compound matters, the complex supply chain of these satellites and the multiple parties involved in their management means it’s often not clear who bears responsibility and liability for cyber breaches. This lack of clarity has bred complacency and hindered efforts to secure these important systems.

Regulation is required

Some analysts have begun to advocate for strong government involvement in the development and regulation of cybersecurity standards for satellites and other space assets. Congress could work to adopt a comprehensive regulatory framework for the commercial space sector. For instance, they could pass legislation that requires satellite manufacturers to develop a common cybersecurity architecture.

They could also mandate the reporting of all cyber breaches involving satellites. There also needs to be clarity on which space-based assets are deemed critical in order to prioritize cybersecurity efforts. Clear legal guidance on who bears responsibility for cyberattacks on satellites will also go a long way to ensuring that the responsible parties take the necessary measures to secure these systems.

Given the traditionally slow pace of congressional action, a multi-stakeholder approach involving public-private cooperation may be warranted to ensure cybersecurity standards. Whatever steps government and industry take, it is imperative to act now. It would be a profound mistake to wait for hackers to gain control of a commercial satellite and use it to threaten life, limb and property – here on Earth or in space – before addressing this issue.


William Akoto, Postdoctoral Research Fellow, University of Denver

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Monday, February 10, 2020


AI could constantly scan the internet for data privacy violations, a quicker, easier way to enforce compliance

You leave bits of your personal data behind online, and companies are happy to trade in them. metamorworks/ iStock/Getty Images Plus
Karuna Pande Joshi, University of Maryland, Baltimore County

You’re trailing bits of personal data – such as credit card numbers, shopping preferences and which news articles you read – as you travel around the internet. Large internet companies make money off this kind of personal information by sharing it with their subsidiaries and third parties. Public concern over online privacy has led to laws designed to control who gets that data and how they can use it.

The battle is ongoing. Democrats in the U.S. Senate recently introduced a bill that includes penalties for tech companies that mishandle users’ personal data. That law would join a long list of rules and regulations worldwide, including the Payment Card Industry Data Security Standard that regulates online credit card transactions, the European Union’s General Data Protection Regulation, the California Consumer Privacy Act that went into effect in January, and the U.S. Children’s Online Privacy Protection Act.

Internet companies must adhere to these regulations or risk expensive lawsuits or government sanctions, such as the Federal Trade Commission’s recent US$5 billion fine imposed on Facebook.

But it is technically challenging to determine in real time whether a privacy violation has occurred, an issue that is becoming even more problematic as internet data moves to extreme scale. To make sure their systems comply, companies rely on human experts to interpret the laws – a complex and time-consuming task for organizations that constantly launch and update services.

My research group at the University of Maryland, Baltimore County, has developed novel technologies for machines to understand data privacy laws and enforce compliance with them using artificial intelligence. These technologies will enable companies to make sure their services comply with privacy laws and also help governments identify in real time those companies that violate consumers’ privacy rights.

Before machines can search for privacy violations, they need to understand the rules. Imilian/iStock/Getty Images Plus

Helping machines understand regulations

Governments generate online privacy regulations as plain text documents that are easy for humans to read but difficult for machines to interpret. As a result, the regulations need to be manually examined to ensure that no rules are being broken when a citizen’s private data is analyzed or shared. This affects companies that now have to comply with a forest of regulations.

Rules and regulations often are ambiguous by design because societies want flexibility in implementing them. Subjective concepts such as good and bad vary among cultures and over time, so laws are drafted in general or vague terms to allow scope for future modifications. Machines can’t process this vagueness – they operate in 1s and 0s – so they cannot “understand” privacy the way humans do. Machines need specific instructions to understand the knowledge on which a regulation is based.

One way to help machines understand an abstract concept is by building an ontology, or a graph representing the knowledge of that concept. Borrowing the concept of ontology from philosophy, AI researchers have developed new computer languages, such as OWL, that can define concepts and categories in a subject area or domain, show their properties and show the relations among them. Ontologies are sometimes called “knowledge graphs,” because they are stored in graphlike structures.

An example of a simple knowledge graph. Karuna Pande Joshi, CC BY-ND

When my colleagues and I began looking at the challenge of making privacy regulations understandable by machines, we determined that the first step would be to capture all the key knowledge in these laws and create knowledge graphs to store it.
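As a rough sketch of what such storage looks like, a knowledge graph can be modeled as a set of subject-predicate-object triples, in the spirit of RDF and OWL. The GDPR-flavored terms below are simplified illustrations, not the regulation’s actual vocabulary.

```python
# A minimal in-memory knowledge graph stored as (subject, predicate, object)
# triples. The GDPR-flavored names are invented for illustration.

graph = set()

def add(subject, predicate, obj):
    graph.add((subject, predicate, obj))

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None matches anything."""
    return {
        (s, p, o) for (s, p, o) in graph
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    }

# Concepts, categories and relations captured from a (hypothetical) regulation.
add("DataController", "subClassOf", "Entity")
add("DataProcessor", "subClassOf", "Entity")
add("DataController", "hasObligation", "ReportBreachWithin72Hours")
add("DataSubject", "hasPermission", "RequestErasure")

# Which obligations fall on data controllers?
print(query("DataController", "hasObligation", None))
```

A production system would use a standard triple store and an OWL reasoner rather than a Python set, but the pattern-matching query above is the essence of how knowledge graphs are interrogated.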

Extracting the terms and rules

The key knowledge in the regulations consists of three parts.

First, there are “terms of art”: words or phrases that have precise definitions within a law. They help to identify the entity that the regulation describes and allow us to describe its roles and responsibilities in a language that computers can understand. For example, from the EU’s General Data Protection Regulation, we extracted terms of art such as “Consumers and Providers” and “Fines and Enforcement.”

Next, we identified deontic rules: sentences or phrases that express the modal logic of duty and permission. Deontic (or moral) rules include sentences describing duties or obligations and mainly fall into four categories. “Permissions” define the rights of an entity or actor. “Obligations” define the responsibilities of an entity or actor. “Prohibitions” are conditions or actions that are not allowed. “Dispensations” are optional or nonmandatory statements.

The researchers’ application automatically extracted Deontic rules, such as permissions and obligations, from two privacy regulations. Entities involved in the rules are highlighted in yellow. Modal words that help identify whether a rule is a permission, prohibition or obligation are highlighted in blue. Gray indicates the temporal or time-based aspect of the rule. Karuna Pande Joshi, CC BY-ND

To explain this with a simple example, consider the following:

  • You have permission to drive.

  • But to drive, you are obligated to get a driver’s license.

  • You are prohibited from speeding (and will be punished if you do so).

  • You can park in areas where you have the dispensation to do so (such as paid parking, metered parking or open areas not near a fire hydrant).

Some of these rules apply to everyone uniformly in all conditions, while others may apply partially, to only one entity, or based on conditions agreed to by everyone.

Similar rules that describe do’s and don'ts apply to online personal data. There are permissions and prohibitions to prevent data breaches. There are obligations on the companies storing the data to ensure its safety. And there are dispensations made for vulnerable demographics such as minors.

A knowledge graph for GDPR regulations. Karuna Pande Joshi, CC BY-ND

My group developed techniques to automatically extract these rules from the regulations and save them in a knowledge graph.

Third, we had to figure out how to include the cross-references that are often used in legal regulations to point to text in another section or in a separate document. These are important knowledge elements that should also be stored in the knowledge graph.

Rules in place, scanning for compliance

After defining all the key entities, properties, relations, rules and policies of a data privacy law in a knowledge graph, my colleagues and I can create applications that can reason about the data privacy rules using these knowledge graphs.

These applications can significantly reduce the time it will take companies to determine whether they are complying with the data protection regulations. They can also help regulators monitor data audit trails to determine whether companies they oversee are complying with the rules.
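To illustrate the kind of reasoning involved, here is a minimal sketch that checks observed events against deontic rules stored as triples. The rule names and events are hypothetical; a real system would reason over a much richer graph.

```python
# Minimal compliance checker over deontic rules stored as triples.
# Rule and event names are invented for illustration.

rules = {
    ("Company", "prohibition", "ShareMinorDataWithoutConsent"),
    ("Company", "obligation",  "EncryptStoredData"),
    ("Company", "permission",  "ShareAnonymizedData"),
}

def check_event(actor, action):
    """Return a compliance verdict for an observed action."""
    if (actor, "prohibition", action) in rules:
        return "violation"
    if (actor, "permission", action) in rules:
        return "allowed"
    return "needs review"

def unmet_obligations(actor, performed_actions):
    """List obligations the actor has not yet satisfied."""
    return [o for a, kind, o in rules
            if a == actor and kind == "obligation"
            and o not in performed_actions]

print(check_event("Company", "ShareAnonymizedData"))           # allowed
print(check_event("Company", "ShareMinorDataWithoutConsent"))  # violation
print(unmet_obligations("Company", {"ShareAnonymizedData"}))   # ['EncryptStoredData']
```

Run over a data audit trail, checks like these are what would let a regulator flag violations in something close to real time.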

This technology can also help individuals get a quick snapshot of their rights and responsibilities with respect to the private data they share with companies. Once machines can quickly interpret long, complex privacy policies, people will be able to automate many mundane compliance activities that are done manually today. They may also be able to make those policies more understandable to consumers.

Karuna Pande Joshi, Assistant Professor of Information Systems, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Monday, February 3, 2020


How old should kids be to get phones?

Every kid should have their own cell phone. Or should they? Syda Productions/Shutterstock.com
Fashina Aladé, Michigan State University

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to curiouskidsus@theconversation.com.


What age should kids get phones? – Yuvi, age 10, Dayton, Ohio


If it seems like all your friends have smartphones, you may be on to something. A new report by Common Sense Media, a nonprofit that reports on technology and media for children, found that by the age of 11, more than half of kids in the U.S. have their own smartphone. By age 12, more than two-thirds do, and by 14, teens are just as likely as adults to own a smartphone. Some kids start much younger. Nearly 20% of 8-year-olds have their own smartphone!

So, what’s the right age for you? Well, I study the effects of media and technology on kids, and I’m here to say that there is no single right answer to this question. The best I can offer is this: When both you and your parents feel the time is right.

How to talk to your parents about a smartphone

Smartphones can help you stay in touch with your people. Tony Stock/Shutterstock.com

Here are some points to consider to help you and your parents make this decision.

Responsibility: Have you shown that you are generally responsible? Do you keep track of important belongings? Do you understand the value of money, and can you save up to buy things you want? These are all good signs that you may be ready for a phone. If not, it might be wise to wait a bit longer.

Safety: Do you travel to or from school or after-school activities without an adult? This is when phones often go from a “want” to a “need.” Sometimes parents report that they feel better knowing they can reach their children directly, and that their kids can reach them, too.

Social maturity: Do you treat your friends with kindness and respect? Do you understand the permanence of the internet, the fact that once something goes out onto the web, it can never truly be deleted? It is critically important that you have a grasp on these issues before you own a smartphone.

We all get angry and say hurtful things we don’t mean sometimes, but when you post something on the internet that you might not mean later, or might wish you could take back, even on a so-called anonymous app, it can have real and lasting harmful effects. In the era of smartphones, there have been huge increases in cyberbullying.

How will you use your smartphone? Twin Design/Shutterstock.com

Being smart about your smartphone

If you and your parents decide this is a good time to take that step, here are some tips to create a healthy relationship between you and your phone.

Parents should model good behavior! Your parents are the No. 1 most important influence in your life, and that goes for technology use as much as anything else. If parents are glued to their phones all day, guess what? Their children probably will be, too.

On the flip side, if parents model smartphone habits like putting the phone away during meals and not texting and driving, that will go a long way toward helping kids develop similar healthy behaviors.

Are you ready? 1shostak/Shutterstock.com

You and your parents should talk together about the importance of setting rules and limits around your phone use and screen time. Understanding why rules are made and set in place can help kids stick to a system.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

Fashina Aladé, Assistant Professor, Advertising and Public Relations, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Friday, January 31, 2020


3D printing of body parts is coming fast – but regulations are not ready

The technology of producing biological parts is advancing, raising new legal and regulatory questions. Philip Ezze, CC BY-SA
Dinusha Mendis, Bournemouth University and Ana Santos Rutschman, Saint Louis University

In the last few years, the use of 3D printing has exploded in medicine. Engineers and medical professionals now routinely 3D print prosthetic hands and surgical tools. But 3D printing has only just begun to transform the field.

Today, a quickly emerging set of technologies known as bioprinting is poised to push the boundaries further. Bioprinting uses 3D printers and techniques to fabricate the three-dimensional structures of biological materials, from cells to biochemicals, through precise layer-by-layer positioning. The ultimate goal is to replicate functioning tissue and material, such as organs, which can then be transplanted into human beings.

We have been mapping the adoption of 3D printing technologies in the field of health care, and particularly bioprinting, in a collaboration between the law schools of Bournemouth University in the United Kingdom and Saint Louis University in the United States. While the future looks promising from a technical and scientific perspective, it’s far from clear how bioprinting and its products will be regulated. Such uncertainty can be problematic for manufacturers and patients alike, and could prevent bioprinting from living up to its promise.

From 3D printing to bioprinting

Bioprinting has its origins in 3D printing. Generally, 3D printing refers to all technologies that use a process of joining materials, usually layer upon layer, to make objects from data described in a digital 3D model. Though the technology initially had limited applications, it is now a widely recognized manufacturing system that is used across a broad range of industrial sectors. Companies are now 3D printing car parts, education tools like frog dissection kits and even 3D-printed houses. Both the United States Air Force and British Airways are developing ways of 3D printing airplane parts.

The NIH in the U.S. has a program to develop bioprinted tissue that’s similar to human tissue to speed up drug screening. Paige Derr and Kristy Derr, National Center for Advancing Translational Sciences

In medicine, doctors and researchers use 3D printing for several purposes. It can be used to generate accurate replicas of a patient’s body part. In reconstructive and plastic surgeries, implants can be specifically customized for patients using “biomodels” made possible by special software tools. Human heart valves, for instance, are now being 3D printed through several different processes although none have been transplanted into people yet. And there have been significant advances in 3D print methods in areas like dentistry over the past few years.

Bioprinting’s rapid emergence is built on recent advances in 3D printing techniques to engineer different types of products involving biological components, including human tissue and, more recently, vaccines.

While bioprinting is not entirely a new field because it is derived from general 3D printing principles, it is a novel concept for legal and regulatory purposes. And that is where the field could get tripped up if regulators cannot decide how to approach it.

State of the art in bioprinting

Scientists are still far from accomplishing 3D-printed organs because it’s incredibly difficult to connect printed structures to the vascular systems that carry life-sustaining blood and lymph throughout our bodies. But they have been successful in printing nonvascularized tissue like certain types of cartilage. They have also been able to produce ceramic and metal scaffolds that support bone tissue by using different types of bioprintable materials, such as gels and certain nanomaterials. A number of promising animal studies, some involving cardiac tissue, blood vessels and skin, suggest that the field is getting closer to its ultimate goal of transplantable organs.

Researchers explain ongoing work to make 3D-printed tissue that could one day be transplanted into a human body.

We expect that advancements in bioprinting will increase at a steady pace, even with current technological limitations, potentially improving the lives of many patients. In 2019 alone, several research teams reported a number of breakthroughs. Bioengineers at Rice and Washington Universities, for example, used hydrogels to successfully print the first series of complex vascular networks. Scientists at Tel Aviv University managed to produce the first 3D-printed heart. It included “cells, blood vessels, ventricles and chambers” and used cells and biological materials from a human patient. In the United Kingdom, a team from Swansea University developed a bioprinting process to create an artificial bone matrix, using durable, regenerative biomaterial.

‘Cloneprinting’

Though the future looks promising from a technical and scientific perspective, current regulations around bioprinting pose some hurdles. From a conceptual point of view, it is hard to determine what bioprinting effectively is.

Consider the case of a 3D-printed heart: Is it best described as an organ or a product? Or should regulators look at it more like a medical device?

Regulators have a number of questions to answer. To begin with, they need to decide whether bioprinting should be regulated under new or existing frameworks, and if the latter, which ones. For instance, should they apply the regulations for biologics, a class of complex pharmaceuticals that includes treatments for cancer and rheumatoid arthritis, because biologic materials are involved, as is the case with 3D-printed vaccines? Or would a regulatory framework for medical devices be better suited to customized 3D-printed products, like splints for newborns suffering from life-threatening medical conditions?

In Europe and the U.S., scholars and commentators have questioned whether bioprinted materials should enjoy patent protection because of the moral issues they raise. An analogy can be drawn from the famed case of Dolly the sheep over 20 years ago. There, the U.S. Court of Appeals for the Federal Circuit held that cloned sheep cannot be patented because they are identical copies of naturally occurring sheep. The case illustrates the parallels between cloning and bioprinting. Some speculate that the future will bring “cloneprinting,” which could have the potential to revive extinct species or solve the organ transplant shortage.

Dolly the sheep’s example illustrates the courts’ reluctance to traverse this path. Therefore, if, at some point in the future, bioprinters or indeed cloneprinters can be used to replicate not simply organs but also human beings using cloning technologies, a patent application of this nature could fail under current law. A study funded by the European Commission, led by Bournemouth University and due for completion in early 2020, aims to provide legal guidance on these intellectual property and regulatory issues, among others.

On the other hand, if European regulators classify the product of bioprinting as a medical device, there will be at least some degree of legal clarity, as a regulatory regime for medical devices has long been in place. In the United States, the FDA has issued guidance on 3D-printed medical devices, but not on the specifics of bioprinting. More important, such guidance is not binding and only represents the thinking of a particular agency at a point in time.

Cloudy regulatory outlook

Those are not the only uncertainties roiling the field. Consider the recent progress surrounding 3D-printed organs, particularly the example of a 3D-printed heart. If a functioning 3D-printed heart becomes available, which body of law should apply beyond the realm of FDA regulations? In the United States, should the National Organ Transplant Act, which was written with human organs in mind, apply? Or do we need to amend the law, or even create a separate set of rules for 3D-printed organs?

We have no doubt that 3D printing in general, and bioprinting specifically, will advance rapidly in the coming years. Policymakers should be paying closer attention to the field to ensure that its progress does not outstrip their capacity to safely and effectively regulate it. If they succeed, it could usher in a new era in medicine that could improve the lives of countless patients.


Dinusha Mendis, Professor of Intellectual Property and Innovation Law and Co-Director of the Jean Monnet Centre of Excellence for European Intellectual Property and Information Rights, Bournemouth University, and Ana Santos Rutschman, Assistant Professor of Law, Saint Louis University

This article is republished from The Conversation under a Creative Commons license. Read the original article.