Friday, January 24, 2020


How Iran's military outsources its cyberthreat forces

Dorothy Denning, Naval Postgraduate School

In the wake of the U.S. killing of a top Iranian general and Iran’s retaliatory missile strike, should the U.S. be concerned about the cyberthreat from Iran? Already, pro-Iranian hackers have defaced several U.S. websites to protest the killing of General Qassem Soleimani. One group wrote “This is only a small part of Iran’s cyber capability” on one of the hacked sites.

Two years ago, I wrote that Iran’s cyberwarfare capabilities lagged behind those of Russia and China, but that the country had become a major threat that would only get worse. By then, it had already conducted several highly damaging cyberattacks.

Since then, Iran has continued to develop and deploy its cyberattacking capabilities. It carries out attacks through a network of intermediaries, allowing the regime to strike its foes while denying direct involvement.

Islamic Revolutionary Guard Corps-supported hackers

Iran’s cyberwarfare capability lies primarily within Iran’s Islamic Revolutionary Guard Corps, a branch of the country’s military. However, rather than employing its own cyberforce against foreign targets, the Islamic Revolutionary Guard Corps appears to mainly outsource these cyberattacks.

According to cyberthreat intelligence firm Recorded Future, the Islamic Revolutionary Guard Corps uses trusted intermediaries to manage contracts with independent groups. These intermediaries are loyal to the regime but separate from it. They translate the Iranian military’s priorities into discrete tasks, which are then bid out to independent contractors.

Recorded Future estimates that as many as 50 organizations compete for these contracts. Several contractors may be involved in a single operation.

Iranian contractors communicate online to hire workers and exchange information. Ashiyane, the primary online security forum in Iran, was created by hackers in the mid-2000s in order to disseminate hacking tools and tutorials within the hacking community. The Ashiyane Digital Security Team was known for hacking websites and replacing their home pages with pro-Iranian content. By May 2011, Zone-H, an archive of defaced websites, had recorded 23,532 defacements by that group alone. Its leader, Behrouz Kamalian, said his group cooperated with the Iranian military, but operated independently and spontaneously.

Iran had an active community of hackers at least by 2004, when a group calling itself Iran Hackers Sabotage launched a succession of web attacks “with the aim of showing the world that Iranian hackers have something to say in the worldwide security.” It is likely that many of Iran’s cyber contractors come from this community.

Iran’s use of intermediaries and contractors makes it harder to attribute cyberattacks to the regime. Nevertheless, investigators have been able to trace many cyberattacks to persons inside Iran operating with the support of the country’s Islamic Revolutionary Guard Corps.

Cyber campaigns

Iran engages in both espionage and sabotage operations. Its hackers employ both off-the-shelf malware and custom-made software tools, according to a 2018 report by the Foundation for Defense of Democracies. They rely on spearphishing – luring specific individuals with fraudulent messages – to gain initial access to target machines. Victims are enticed to click on links that lead to phony sites where they hand over usernames and passwords, or to open attachments that plant “backdoors” on their devices. Once in, the attackers use various hacking tools to spread through networks and download or destroy data.

Iran’s cyber espionage campaigns gain access to networks in order to steal proprietary and sensitive data in areas of interest to the regime. Security companies that track these threats give them APT (Advanced Persistent Threat) names such as APT33, “kitten” names such as Magic Kitten and miscellaneous other names such as OilRig.

The group the security firm FireEye calls APT33 is especially noteworthy. It has conducted numerous espionage operations against oil and aviation industries in the U.S., Saudi Arabia and elsewhere. APT33 was recently reported to use small botnets (networks of compromised computers) to target very specific sites for their data collection.

Another group, known as APT35 (aka Phosphorus), has attempted to gain access to email accounts belonging to individuals involved in a 2020 U.S. presidential campaign. Were its members to succeed, they might use the stolen information to influence the election – for example, by publicly releasing material that could damage a candidate.

In 2018, the U.S. Department of Justice charged nine Iranians with conducting a massive cyber theft campaign on behalf of the Islamic Revolutionary Guard Corps. All were tied to the Mabna Institute, an Iranian company behind cyber intrusions since at least 2013. The defendants allegedly stole 31 terabytes of data from U.S. and foreign entities. The victims included over 300 universities, almost 50 companies and several government agencies.

Cyber sabotage

Iran’s sabotage operations have employed “wiper” malware to destroy data on hard drives. They have also employed botnets to launch distributed denial-of-service attacks, where a flood of traffic effectively disables a server. These operations are frequently hidden behind monikers that resemble those used by independent hacktivists who hack for a cause rather than money.

Hacking groups tied to the Iranian regime have successfully defaced websites, wiped data from PCs and attempted to infiltrate industrial control systems.

In one highly damaging attack, a group calling itself the Cutting Sword of Justice hit the Saudi Aramco oil company with wiper code in 2012. The hackers used a virus dubbed Shamoon to spread the code through the company’s network. The attack destroyed data on 35,000 computers, disrupting business processes for weeks.

The Shamoon software reappeared in 2016, wiping data from thousands of computers in Saudi Arabia’s civil aviation agency and other organizations. Then in 2018, a variant of Shamoon hit the Italian oil services firm Saipem, crippling more than 300 computers.

Iranian hackers have conducted massive distributed denial-of-service attacks. From 2012 to 2013, a group calling itself the Cyber Fighters of Izz ad-Din al-Qassam launched a series of relentless distributed denial-of-service attacks against major U.S. banks. The attacks were said to have caused tens of millions of dollars in losses relating to mitigation and recovery costs and lost business.

In 2016 the U.S. indicted seven Iranian hackers for working on behalf of the Islamic Revolutionary Guard Corps to conduct the bank attacks. The motivation may have been retaliation for economic sanctions that had been imposed on Iran.

Looking ahead

So far, Iranian cyberattacks have been limited to desktop computers and servers running standard commercial software. They have not yet affected the industrial control systems that run electrical power grids and other physical infrastructure. Were Iranian hackers to get into and take over those control systems, they could cause far more serious damage, such as the 2015 and 2016 power outages that Russian hackers caused in Ukraine.

One of the Iranians indicted in the bank attacks did get into the computer control system for the Bowman Avenue Dam in rural New York. According to the indictment, no damage was done, but the access would have allowed the dam’s gate to be manipulated had it not been manually disconnected for maintenance.

While there are no public reports of Iranian threat actors demonstrating a capability against industrial control systems, Microsoft recently reported that APT33 appears to have shifted its focus to these systems. In particular, the group has been trying to guess the passwords of the companies that manufacture, supply and maintain them. Any access or information gained that way could help the hackers break into an industrial control system.

Ned Moran, a security researcher with Microsoft, speculated that the group may be seeking access to industrial control systems in order to produce physically disruptive effects. Although APT33 has not been directly implicated in any incidents of cyber sabotage, security researchers have found links between code used by the group and code used in the Shamoon attacks to destroy data.

While it is impossible to know Iran’s intentions, the regime is likely to continue running numerous cyber espionage campaigns while developing additional capabilities for cyber sabotage. If tensions between Iran and the United States mount, Iran may respond with additional cyberattacks, possibly ones more damaging than any seen so far.


Dorothy Denning, Emeritus Distinguished Professor of Defense Analysis, Naval Postgraduate School

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Thursday, January 23, 2020


Screen time: Conclusions about the effects of digital media are often incomplete, irrelevant or wrong

Humans are barraged by digital media 24/7. Is it a problem? Bruce Rolff/Shutterstock.com
Byron Reeves, Stanford University; Nilam Ram, Pennsylvania State University, and Thomas N. Robinson, Stanford University

There’s a lot of talk about digital media. Increasing screen time has created worries about media’s impacts on democracy, addiction, depression, relationships, learning, health, privacy and much more. The effects are frequently assumed to be huge, even apocalyptic.

Scientific data, however, often fail to confirm what seems true based on everyday experiences. In study after study, screen time is often not correlated with important effects at a magnitude that matches the concerns and expectations of media consumers, critics, teachers, parents, pediatricians and even the researchers themselves. For example, a recent review of over 200 studies about social media concluded there was almost no effect of greater screen time on psychological well-being. A comprehensive study of adolescents reported small effects of screen time on brain development, and no relationship between media use and cognitive performance. A review of 20 studies about the effects of multitasking with media – that is, using two or more screens at the same time – showed small declines in cognitive performance because of multitasking but also pointed out new studies that showed the opposite.

As communication, psychological and medical researchers who study media effects, we examine how individuals’ engagement with digital technology influences people’s thoughts, emotions, behaviors, health and well-being.

Moving beyond ‘screen time’

Has the power of media over modern life been overstated? Probably not, but no one knows, because there is a severe lack of knowledge about what people are actually seeing and doing on their screens.

People all around the world are now looking at pretty much the same screens and spending a lot of time with them. However, the similarities end there. Many different kinds of applications, games and messages flow across people’s screens. And, because it is so easy to create customized personal threads of experiences, each person ends up viewing very different material at different times. No two people share the same media experiences.

To determine the effects of media on people’s lives, whether beneficial or harmful, requires knowledge of what people are actually seeing and doing on those screens. But researchers often mistakenly depend on a rather blunt metric – screen time.

So many social media apps, so little time. Twin Design/Shutterstock.com

Reports of screen time, the most common way to assess media use, are known to be terribly inaccurate and describe only total viewing time. Today, on a single screen, you can switch instantly between messaging a neighbor, watching the news, parenting a child, arranging for dinner delivery, planning a weekend trip, talking on an office video conference and even monitoring your car, home irrigation and lighting. Add to that more troublesome uses – bullying a classmate, hate speech or reading fabricated news. Knowing someone’s screen time – their total dose of media – will not diagnose problems with any of that content.

A media remedy based only on screen time is like advising someone who takes multiple prescription medications to cut their total number of pills in half. Which medications, and when?

Complex and unique nature of media use

What would be a better gauge of media consumption than screen time? Something that better captures the complexities of how individuals engage with media. Perhaps the details about specific categories of content – the names of the programs, software and websites – would be more informative. Sometimes that may be enough to highlight problems – playing a popular game more than intended, frequent visits to a suspicious political website or too much social time on Facebook.

Tracking big categories of content, however, is still not that helpful. My one hour of Facebook, for example, could be spent on self-expression and social comparison; yours could be filled with news, shopping, classes, games and videos. Further, our research finds that people now switch between content on their smartphones and laptops every 10 to 20 seconds on average. Many people average several hundred different smartphone sessions per day. The fast cadence certainly influences how people converse with each other and how engaged we are with information. And each bit of content is surrounded by other kinds of material. News read on Facebook sandwiches political content between social relationships, each one changing the interpretation of the other.

Screen time: work and play. Gorodenkoff/Shutterstock.com

A call for a Human Screenome Project

In this era of technology and big data, we need a DVR for digital life that records the entirety of individuals’ screen media experiences – what we call the screenome, analogous to the genome, microbiome and other “omes” that define an individual’s unique characteristics and exposures.

An individual’s screenome includes apps and websites, the specific content observed and created, all of the words, images and sounds on the screens, and their time of day, duration and sequencing. It includes whether the content is produced by the user or sent from others. And it includes characteristics of use, such as variations in how much one interacts with a screen, how quickly one switches between content, scrolls through screens, and turns the screen on and off.
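To make the idea concrete, here is a purely hypothetical sketch – assuming nothing about any particular research toolset – of what one element of a screenome record might look like as a data structure. The field names are illustrative, chosen only to mirror the description above.

    # A hypothetical sketch of a single screenome element, mirroring the
    # description above. Field names are illustrative, not from any
    # published Screenomics software.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class ScreenomeSegment:
        start: datetime              # time of day the segment began
        duration_seconds: float      # how long before the next switch
        app_or_site: str             # which app or website was on screen
        content_kind: str            # e.g. "message", "news", "game", "video"
        user_produced: bool          # created by the user vs. sent by others
        screen_on: bool              # whether the screen was on at capture
        words_on_screen: list = field(default_factory=list)  # extracted text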

Without knowledge of the whole screenome, no one – including researchers, critics, educators, journalists or policymakers – can accurately describe the new media chaos. People need much better data – for science, policy, parenting and more. And it needs to be collected and supported by individuals and organizations who are motivated to share the information for all to analyze and apply.

The benefits from studying the human genome required developing the field of genomics. The same will be true for the human screenome, the unique individual record of experiences that constitute psychological and social life on digital devices. Researchers now have the technologies to begin a serious study of screenomics, which we describe in the journal Nature. Now we need the data – a collective effort to produce, map and analyze a large and informative set of screenomes. A Human Screenome Project could inform academics, health professionals, educators, parents, advocacy groups, tech companies and policymakers about how to maximize the potential of media and remedy its most pernicious effects.


Byron Reeves, Professor of Communication, Stanford University; Nilam Ram, Professor of Human Development and Family Studies, and Psychology, Pennsylvania State University, and Thomas N. Robinson, Professor of Pediatrics and of Medicine, Stanford University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Thursday, January 9, 2020


How can we make sure that algorithms are fair?

When algorithms make decisions with real-world consequences, they need to be fair. R-Type/Shutterstock.com
Karthik Kannan, Purdue University

Using machines to augment human activity is nothing new. Egyptian hieroglyphs show the use of horse-drawn carriages even before 300 B.C. Ancient Indian literature such as “Silapadikaram” has described animals being used for farming. And one glance outside shows that today people use motorized vehicles to get around.

Where in the past human beings augmented themselves in physical ways, the augmentation is now also becoming more intelligent. Again, all one needs to do is look to cars – engineers are seemingly on the cusp of self-driving cars guided by artificial intelligence. Other devices are in various stages of becoming more intelligent. Along the way, interactions between people and machines are changing.

Machine and human intelligences bring different strengths to the table. Researchers like me are working to understand how algorithms can complement human skills while at the same time minimizing the liabilities of relying on machine intelligence. As a machine learning expert, I predict there will soon be a new balance between human and machine intelligence, a shift that humanity hasn’t encountered before.

Such changes often elicit fear of the unknown, and in this case, one of the unknowns is how machines make decisions. This is especially so when it comes to fairness. Can machines be fair in a way that people understand?

When people are illogical

To humans, fairness is often at the heart of a good decision. Decision-making tends to rely on both the emotional and rational centers of our brains, what Nobel laureate Daniel Kahneman calls System 1 and System 2 thinking. Decision theorists believe that the emotional centers of the brain have been quite well developed across the ages, while brain areas involved in rational or logical thinking evolved more recently. The rational and logical part of the brain, what Kahneman calls System 2, has given humans an advantage over other species.

However, because System 2 was more recently developed, human decision-making is often buggy. This is why many decisions are illogical, inconsistent and suboptimal.

For example, preference reversal is a well-known yet illogical phenomenon that people exhibit: In it, a person who prefers choice A over B and B over C does not necessarily prefer A over C. Or consider that researchers have found that criminal court judges tend to be more lenient with parole decisions right after lunch breaks than at the close of the day.

Part of the problem is that our brains have trouble precisely computing probabilities without appropriate training. We often use irrelevant information or are influenced by extraneous factors. This is where machine intelligence can be helpful.

Machines are logical…to a fault

Well-designed machine intelligence can be consistent and useful in making optimal decisions. By its nature, it can be logical in the mathematical sense – it simply doesn’t stray from the program’s instructions. In a well-designed machine-learning algorithm, one would not encounter the illogical preference reversals that people frequently exhibit, for example. Within margins of statistical error, the decisions from machine intelligence are consistent.

The problem is that machine intelligence is not always well designed.

As algorithms become more powerful and are incorporated into more parts of life, scientists like me expect this new world, one with a different balance between machine and human intelligence, to be the norm of the future.

Judges’ rulings about parole can come down to what the computer program advises. THICHA SATAPITANON/Shutterstock.com

In the criminal justice system, judges use algorithms during parole decisions to calculate recidivism risks. In theory, this practice could overcome any bias introduced by lunch breaks or exhaustion at the end of the day. Yet when journalists from ProPublica conducted an investigation, they found these algorithms were unfair: White men with prior armed robbery convictions were rated as lower risk than African American women convicted of misdemeanors.

There are many more examples of machine learning algorithms later found to be unfair, including Amazon’s recruiting tool and Google’s image labeling.

Researchers have been aware of these problems and have worked to impose restrictions that ensure fairness from the outset. For example, an algorithm called CB (color blind) imposes the restriction that any discriminating variables, such as race or gender, should not be used in predicting the outcomes. Another, called DP (demographic parity), ensures that groups are proportionally fair: The proportion of people receiving a positive outcome is equal across the groups defined by the discriminating variable.
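As a rough illustration – not the implementation of any specific published algorithm – the two restrictions can be sketched in a few lines of Python, assuming a binary protected attribute and binary decisions:

    import numpy as np

    def color_blind_features(X, protected_col):
        """CB: drop the discriminating variable before the model ever sees it."""
        return np.delete(X, protected_col, axis=1)

    def demographic_parity_gap(decisions, group):
        """DP: compare positive-outcome rates across the two groups.
        A gap of zero means the groups are proportionally fair."""
        rate_0 = decisions[group == 0].mean()
        rate_1 = decisions[group == 1].mean()
        return abs(rate_0 - rate_1)

    # Toy example: six decisions, three from each group.
    decisions = np.array([1, 0, 1, 1, 1, 0])
    group = np.array([0, 0, 0, 1, 1, 1])
    print(demographic_parity_gap(decisions, group))  # 0.0 -> satisfies DP

Neither check says anything about the process that produced the decisions – a limitation taken up below.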

Researchers and policymakers are starting to take up the mantle. IBM has open-sourced many of their algorithms and released them under the “AI Fairness 360” banner. And the National Science Foundation recently accepted proposals from scientists who want to bolster the research foundation that underpins fairness in AI.

Improving the fairness of machines’ decisions

I believe that existing fair machine learning algorithms are weak in many ways. The weakness often stems from the criteria used to ensure fairness. Most algorithms that impose “fairness restrictions” such as demographic parity (DP) and color blindness (CB) focus on fairness at the outcome level: If there are two people from different subpopulations, the imposed restrictions ensure that the outcomes of decisions about them are consistent across the groups.

Beyond just the inputs and the outputs, algorithm designers need to take into account how groups will change their behavior to adapt to the algorithm. elenabsl/Shutterstock.com

While this is a good first step, researchers need to look beyond the outcomes alone and focus on the process as well. For instance, when an algorithm is used, the subpopulations that are affected will naturally change their efforts in response. Those changes need to be taken into account, too. Because they have not been taken into account, my colleagues and I focus on what we call “best response fairness.”

If the subpopulations are inherently similar, their effort level to achieve the same outcome should also be the same even after the algorithm is implemented. This simple definition of best response fairness is not met by DP- and CB-based algorithms. For example, DP requires the positive rates to be equal even if one of the subpopulations does not put in effort. In other words, people in one subpopulation would have to work significantly harder to achieve the same outcome. While a DP-based algorithm would consider it fair – after all, both subpopulations achieved the same outcome – most humans would not.

There is another fairness restriction, known as equalized odds (EO), that satisfies the notion of best response fairness – it ensures fairness even when the responses of the subpopulations are taken into account. However, to impose the restriction, the algorithm needs to know the discriminating variables (say, black/white), and it will end up setting explicitly different thresholds for the subpopulations – so the thresholds will be explicitly different for white and black parole candidates.
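To see why group-specific thresholds arise, consider the simplified sketch below. It matches only true positive rates across groups (an “equal opportunity”-style relaxation of full equalized odds), and the risk scores are invented purely for illustration:

    import numpy as np

    def threshold_for_tpr(scores, labels, target_tpr):
        """Highest score threshold whose true positive rate reaches target_tpr."""
        for t in sorted(set(scores), reverse=True):
            predicted_positive = scores >= t
            tpr = predicted_positive[labels == 1].mean()
            if tpr >= target_tpr:
                return t
        return min(scores)

    # Hypothetical risk scores and true outcomes for two subpopulations.
    scores_a = np.array([0.9, 0.8, 0.6, 0.4]); labels_a = np.array([1, 1, 0, 0])
    scores_b = np.array([0.7, 0.5, 0.3, 0.2]); labels_b = np.array([1, 1, 1, 0])

    t_a = threshold_for_tpr(scores_a, labels_a, target_tpr=0.66)
    t_b = threshold_for_tpr(scores_b, labels_b, target_tpr=0.66)
    print(t_a, t_b)  # different thresholds for the two groups

Because the thresholds depend on group membership, the procedure must know which group each person belongs to – the legal tension described next.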

While that would help increase fairness of outcomes, such a procedure may violate the notion of equal treatment required by the Civil Rights Act of 1964. For this reason, a California Law Review article has urged policymakers to amend the legislation so that fair algorithms that utilize this approach can be used without potential legal repercussion.

These constraints motivated my colleagues and me to develop an algorithm that is not only “best response fair” but also does not explicitly use discriminating variables. We demonstrate the performance of our algorithms theoretically and on simulated data sets and real sample data sets from the web. When we tested our algorithms with the widely used sample data sets, we were surprised at how well they performed relative to open-source algorithms assembled by IBM.

Our work suggests that, despite the challenges, machines and algorithms will continue to be useful to humans – for physical jobs as well as knowledge jobs. We must remain vigilant that any decisions made by algorithms are fair, and it is imperative that everyone understands their limitations. If we can do that, then it’s possible that human and machine intelligence will complement each other in valuable ways.


Karthik Kannan, Professor of Management and Director of the Krenicki Center for Business Analytics & Machine Learning, Purdue University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Sunday, January 5, 2020


5 milestones that created the internet, 50 years after the first network message

This SDS Sigma 7 computer sent the first message over the predecessor of the internet in 1969. Andrew 'FastLizard4' Adams/Wikimedia Commons, CC BY-SA
Scott Shackelford, Indiana University

Fifty years ago, a UCLA computer science professor and his student sent the first message over the predecessor to the internet, a network called ARPANET.

The log page showing the connection from UCLA to Stanford Research Institute on Oct. 29, 1969. Charles S. Kline/UCLA Kleinrock Center for Internet Studies/Wikimedia Commons

On Oct. 29, 1969, Leonard Kleinrock and Charley Kline sent Stanford Research Institute researcher Bill Duvall a two-letter message: “lo.” The intended message, the full word “login,” was truncated by a computer crash.

Much more traffic than that travels through the internet these days, with billions of emails sent and searches conducted daily. As a scholar of how the internet is governed, I know that today’s vast communications web is a result of governments and regulators making choices that collectively built the internet as it is today.

Here are five key moments in this journey.

Leonard Kleinrock shows the original document logging the very first ARPANET computer communication.

1978: Encryption failure

Early internet pioneers, in some ways, were remarkably farsighted. In 1973, a group of high school students reportedly gained access to ARPANET, which was supposed to be a closed network managed by the Pentagon.

Computer scientists Vinton Cerf and Robert Kahn suggested building encryption into the internet’s core protocols, which would have made it far more difficult for hackers to compromise the system.

But the U.S. intelligence community objected, though officials didn’t publicly say why. The only reason their intervention is public at all is that Cerf hinted at it in a 1983 paper he co-authored.

As a result, basically all of today’s internet users have to handle complex passwords and multi-factor authentication systems to ensure secure communications. People with more advanced security needs often use virtual private networks or specialized privacy software like Tor to encrypt their online activity.

In fairness, the computers of the era may not have had enough processing power to encrypt internet communications effectively. Built-in encryption could have slowed the network, making it less attractive to users – delaying, or even preventing, wider use by researchers and the public.

Vinton Cerf and Robert Kahn with President George W. Bush at the ceremony where Cerf and Kahn were given the Presidential Medal of Freedom for their contributions to developing the internet. Paul Morse/White House/Wikimedia Commons

1983: ‘The internet’ is born

For the internet to really be a global entity, all kinds of different computers needed to speak the same language to be able to communicate with each other – directly, if possible, rather than slowing things down by using translators.

Hundreds of scientists from various governments collaborated to devise what they called the Open Systems Interconnection standard. It was a complex method that critics considered inefficient and difficult to scale across existing networks.

Cerf and Kahn, however, proposed another way, called Transmission Control Protocol/Internet Protocol. TCP/IP worked more like the regular mail – wrapping up messages in packages and putting the address on the outside. All the computers on the network had to do was pass the message to its destination, where the receiving computer would figure out what to do with the information. It was free for anyone to copy and use on their own computers.
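As a modern illustration of that design – not anything from the original rollout – sending a message over TCP/IP today still amounts to addressing some bytes and handing them to the network. Python’s standard socket library is enough; the host and port below are placeholders:

    import socket

    def send_message(host: str, port: int, message: bytes) -> None:
        # Open a TCP connection to the addressed machine and hand over the bytes;
        # TCP/IP handles packaging, routing and reassembly along the way.
        with socket.create_connection((host, port)) as conn:
            conn.sendall(message)

    # Example (assumes something is listening at that address):
    # send_message("127.0.0.1", 9000, b"lo")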

TCP/IP – given that it both worked and was free – enabled the rapid, global scaling of the internet. A variety of governments, including the United States, eventually came out in support of OSI but too late to make a difference. TCP/IP made the internet cheaper, more innovative and less tied to official government standards.

1996: Online speech regulated

By 1996, the internet boasted more than 73,000 servers, and 22% of surveyed Americans were going online. What they found there, though, worried some members of Congress and their constituents – particularly the rapidly growing amount of pornography.

In response, Congress passed the Communications Decency Act, which sought to regulate indecency and obscenity in cyberspace.

The Supreme Court struck down portions of the law on free-speech grounds the next year, but it left in place Section 230, which stated: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Those 26 words, as various observers have noted, released internet service providers and web-hosting companies from legal responsibility for information their customers posted or shared online. This single sentence provided legal security that allowed the U.S. technology industry to flourish. That protection let companies feel comfortable creating a consumer-focused internet, filled with grassroots media outlets, bloggers, customer reviews and user-generated content.

Critics note that Section 230 also allows social media sites like Facebook and Twitter to operate largely without regulation.

1998: US government steps up

The TCP/IP addressing scheme required that every computer or device connected to the internet have its own unique address – which, for computational reasons, was a string of numbers like “192.168.2.201.”

But that’s hard for people to remember – it’s much easier to recall something like “indiana.edu.” There had to be a centralized record of which names went with which addresses, so people didn’t get confused, or end up visiting a site they didn’t intend to.
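That name-to-address mapping is still in daily use. Here is a small sketch using Python’s standard library; the addresses returned will be whatever DNS currently lists, and “indiana.edu” is simply the example name from the paragraph above:

    import socket

    def lookup_ipv4(hostname: str):
        """Return the IPv4 addresses currently registered for a hostname."""
        infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
        # Each entry ends with an (address, port) pair; keep the unique addresses.
        return sorted({info[4][0] for info in infos})

    # print(lookup_ipv4("indiana.edu"))  # whatever addresses DNS returns today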

For years, Jon Postel held the reins to the internet’s address system. Jon Postel/Flickr

Originally, starting in the late 1960s, that record was kept on a floppy disk by a man named Jon Postel. By 1998, though, he and others were pointing out that such a significant amount of power shouldn’t be held by just one person. That year saw the U.S. Department of Commerce lay out a plan to transition control to a new private nonprofit organization, the Internet Corporation for Assigned Names and Numbers – better known as ICANN – that would manage internet addresses around the world.

For nearly 20 years, ICANN did that work under a contract from the Commerce Department, though objections over U.S. government control grew steadily. In 2016, the Commerce Department contract expired, and ICANN’s governance continued its shift toward a broader, more globalized structure.

Other groups that manage key aspects of internet communications have different structures. The Internet Engineering Task Force, for instance, is a voluntary technical organization open to anyone. There are drawbacks to that approach, but a similar structure for the address system would have lessened both the reality and the perception of U.S. control.

This 2007 photo shows an Iranian nuclear enrichment facility in Natanz, which was apparently the target of the first known cyberweapon to cause physical damage. AP Photo/Hasan Sarbakhshian

2010: War comes online

In June 2010, cybersecurity researchers revealed the discovery of a sophisticated cyber weapon called Stuxnet, which was designed specifically to target equipment used by Iran’s effort to develop nuclear weapons. It was among the first known digital attacks that actually caused physical damage.

Almost a decade later, it’s clear that Stuxnet opened the eyes of governments and other online groups to the possibility of wreaking significant havoc through the internet. These days, nations use cyberattacks with increasing regularity, attacking a range of military and even civilian targets.

There’s certainly cause for hope for online peace and community, but these decisions – along with many others – have shaped cyberspace and with it millions of people’s daily lives. Reflecting on those past choices can help inform upcoming decisions – such as how international law should apply to cyberattacks, or whether and how to regulate artificial intelligence.

Maybe 50 years from now, events in 2019 will be seen as another key turning point in the development of the internet.

Correction: This article was updated Oct. 31, 2019, to clarify the description of ICANN’s governance system.


Scott Shackelford, Associate Professor of Business Law and Ethics; Director, Ostrom Workshop Program on Cybersecurity and Internet Governance; Cybersecurity Program Chair, IU-Bloomington, Indiana University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, December 3, 2019


Why a computer will never be truly conscious

What makes a brain tick is very different from how computers operate. Yurchanka Siarhei/Shutterstock.com
Subhash Kak, Oklahoma State University

Many advanced artificial intelligence projects say they are working toward building a conscious machine, based on the idea that brain functions merely encode and process multisensory information. The assumption goes, then, that once brain functions are properly understood, it should be possible to program them into a computer. Microsoft recently announced that it would spend US$1 billion on a project to do just that.

So far, though, attempts to build supercomputer brains have not even come close. A multi-billion-dollar European project that began in 2013 is now largely understood to have failed. That effort has shifted to look more like a similar but less ambitious project in the U.S., developing new software tools for researchers to study brain data, rather than simulating a brain.

Some researchers continue to insist that simulating neuroscience with computers is the way to go. Others, like me, view these efforts as doomed to failure because we do not believe consciousness is computable. Our basic argument is that brains integrate and compress multiple components of an experience, including sight and smell – which simply can’t be handled in the way today’s computers sense, process and store data.

Brains don’t operate like computers

Living organisms store experiences in their brains by adapting neural connections in an active process between the subject and the environment. By contrast, a computer records data in short-term and long-term memory blocks. That difference means the brain’s information handling must also be different from how computers work.

The mind actively explores the environment to find elements that guide the performance of one action or another. Perception is not directly related to the sensory data: A person can identify a table from many different angles, without having to consciously interpret the data and then ask their memory whether that pattern could be created by alternate views of an item identified some time earlier.

Could you identify all of these as a table right away? A computer would likely have real trouble. L to R: pashminu/Pixabay; FDR Presidential Library/Flickr; David Mellis/Flickr, CC BY

Another perspective on this is that the most mundane memory tasks are associated with multiple areas of the brain – some of which are quite large. Skill learning and expertise involve reorganization and physical changes, such as changing the strengths of connections between neurons. Those transformations cannot be replicated fully in a computer with a fixed architecture.

Computation and awareness

In my own recent work, I’ve highlighted some additional reasons that consciousness is not computable.

Werner Heisenberg. Bundesarchiv, Bild 183-R57262/Wikimedia Commons, CC BY-SA
Erwin Schrödinger. Nobel Foundation/Wikimedia Commons
Alan Turing. Wikimedia Commons

A conscious person is aware of what they’re thinking, and has the ability to stop thinking about one thing and start thinking about another – no matter where they were in the initial train of thought. But that’s impossible for a computer to do. More than 80 years ago, pioneering British computer scientist Alan Turing showed that there can be no general procedure for proving whether an arbitrary computer program will stop on its own – and yet that ability is central to consciousness.

His argument is based on a trick of logic in which he creates an inherent contradiction: Imagine there were a general process that could determine whether any program it analyzed would stop. The output of that process would be either “yes, it will stop” or “no, it won’t stop.” That’s pretty straightforward. But then Turing imagined that a crafty engineer wrote a program that included the stop-checking process, with one crucial element: an instruction to keep the program running if the stop-checker’s answer was “yes, it will stop.”

Running the stop-checking process on this new program would necessarily make the stop-checker wrong: If it determined that the program would stop, the program’s instructions would tell it not to stop. On the other hand, if the stop-checker determined that the program would not stop, the program’s instructions would halt everything immediately. That makes no sense – and the nonsense gave Turing his conclusion: There can be no way to analyze a program and be absolutely certain that it can stop. So it’s impossible to be certain that any computer can emulate a system that can definitely stop its train of thought and change to another line of thinking – yet certainty about that capability is an inherent part of being conscious.
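Turing’s construction can be written down almost directly as code. The sketch below assumes a hypothetical halts() function – the stop-checking process – which, as the argument shows, cannot actually exist:

    def halts(program, argument):
        """Hypothetical stop-checker: True if program(argument) eventually stops."""
        raise NotImplementedError("No general stop-checker can exist.")

    def troublemaker(program):
        """The crafty engineer's program: do the opposite of whatever the checker says."""
        if halts(program, program):
            while True:      # checker said "it will stop" -> run forever instead
                pass
        else:
            return           # checker said "it won't stop" -> stop immediately

    # Asking about troublemaker(troublemaker) forces halts() to be wrong
    # whichever answer it gives -- which is why no correct halts() can be written.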

Even before Turing’s work, German quantum physicist Werner Heisenberg showed that there is a distinct difference between the nature of a physical event and an observer’s conscious knowledge of it. This was interpreted by Austrian physicist Erwin Schrödinger to mean that consciousness cannot come from a physical process, like a computer’s, that reduces all operations to basic logic arguments.

These ideas are confirmed by medical research findings that there are no unique structures in the brain that exclusively handle consciousness. Rather, functional MRI shows that different cognitive tasks happen in different areas of the brain. This has led neuroscientist Semir Zeki to conclude that “consciousness is not a unity, and that there are instead many consciousnesses that are distributed in time and space.” That type of limitless brain capacity isn’t the sort of challenge a finite computer can ever handle.


Subhash Kak, Regents Professor of Electrical and Computer Engineering, Oklahoma State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.