International law professor Duncan Hollis has been making waves in the world of cybersecurity with his ideas about how countries around the world can work together to protect the Internet from cyberthreats. His calls for a new set of behavioral norms for cybersecurity have caught the attention of a United Nations Group of Governmental Experts (GGE) working on the issue. His proposal for a Red Cross for Cyberspace has also garnered praise on the international stage. We checked in with him to learn more about both ideas.

TLS: 5 years ago you wrote An e-SOS for Cyberspace, 52 HARV. INT’L L.J. 373 (2011) in which you proposed a duty to assist, much like an SOS call, for cyberthreats. Interest in the idea has spread among the more than 100 governments represented at last year’s Hague Conference on Cybersecurity, and a UN group has included a version of that duty in their proposed behavioral norms for cybersecurity. What do you think sparked this interest and where does the discussion now stand?

DH: So I think there’s a practical explanation and a theoretical explanation. The theoretical explanation is that cybersecurity has an attribution problem. Attribution, whether in criminal law or international law, works by identifying bad actors, punishing them, and thereby deterring others. But in cyberspace, the bad actors, especially the really sophisticated ones, are able to hide who they are, or make you think they’re someone else, or make you think you’re just having problems with your computer or network. When I first started writing in this area, I didn’t fully appreciate the depth of this attribution problem and focused on writing up proposals for how the law should regulate bad actors, assuming we could catch enough of them to deter others. But then a number of people pointed out to me that it doesn’t really work that way.

So then the question became, if law can’t regulate the problem by targeting the bad actors, what can it do? And my idea was that law can do one of two things. First, it can target the tool the bad actors are using to cause harm. That has all kinds of political problems: for example, the encryption fight we see now between Apple and the FBI. The other thing you can do is regulate victims, and there are all sorts of ways to do that. One is licensing: we could say you need a license to use your computer, to show you’re sophisticated enough not to cause harm. Another is monitoring: let the government watch over your system for you to make sure that nobody’s doing bad things. But both of those approaches have problems, whether in terms of their scale, their saleability, or even their effectiveness; the U.S. government hasn’t been able to protect its own systems, so it’s not clear it could protect yours. So my idea was, how about something more libertarian, such as, “if you think you’re in trouble, you can call for help, and when you call for help, people are required to come help.” My thinking was that an eSOS offers people something to rally around without having to lose their position on things like encryption, or what the right cybercrime laws should be, or how best to cooperate among law enforcement across borders. It was an attempt to step outside the box, to say, “how about this,” and the idea seemed to gain purchase.

More practically speaking, I was invited to present this idea at the inaugural cybernorms workshop sponsored by MIT, Harvard, and Toronto. Also invited were a number of senior US government officials, and some like-minded foreign officials, who heard me present the idea and seemed genuinely enthusiastic about it. So over the next couple of years I knew that certain people were trying to translate my academic idea into something that could work in a real governmental context. Thus, I was happy in 2015, five years after I wrote the article, to have the Dutch government invite me to present the idea at this huge global conference on cyberspace, and even happier when a UN group of governmental experts from more than 20 states, including the US, China, and Russia, came out with a document last summer that said, “here are the norms going forward that we think cyberspace has to have,” and one of them was that you have to assist in cases of severe attacks on critical infrastructure. It’s not an eSOS per se, but it is a duty-to-assist norm, which I hope might evolve into an eSOS system. Today, several governments, like Estonia and the Netherlands, remain interested in considering it. So it’s been an interesting thing to see my idea translated into the policy arena. Usually, as academics, we have an idea, we write an article, and maybe, at best, other academics will read and cite the article in coming up with their own ideas. This has been a really different kind of experience. It was really satisfying last November, for example, to be in Washington, D.C. and hear the Estonian foreign minister say, “Here are the four things we think cyberspace needs,” and number four was an eSOS.

TLS: Practically speaking, would the duty to assist apply to private actors, governments, or both?

DH: That needs to be worked out. There are these Computer Emergency Response Teams (CERTs) at both the corporate and national levels whose job is to defend against cyberattacks. One question is how to institutionalize their job and do it across borders. On the high seas, for example, you send out an SOS and whoever’s in a geographic position to be able to help you *has* to come help. I grew up on a sailboat in Cape Cod, and we always knew that if somebody got in trouble, we had to go help. I think there are cyber equivalents to that, what we might call technical proximity. Estonia, for example, experienced overwhelming DDoS attacks in 2007 that took banking systems, government websites, and media outlets all offline. You couldn’t find out what was happening on the news because all the news sites were down. The Estonian reaction was, “we think it’s coming from Russia,” and the Russian response was, “you know how attribution works; it could be coming from anywhere,” but the attacks kept coming. So the simple answer was that anyone who was technically proximate, that is, in a position to assist, had to do something to block the traffic or open up more bandwidth so people could access these Estonian websites. And so in that sense, I think it’s possible to create and institutionalize something like the Coast Guard, to help out if you really get into trouble, or something like my other idea, the Red Cross for cyberspace.

TLS: Let’s talk about the Red Cross for cyberspace, and whether you think these two ideas should be developed in tandem or separately. How might they work together and what potential conflicts exist?

DH: So, when I wrote about the eSOS, the response I got was, “Well, that’s really interesting. How are you going to do this?” One idea we’re exploring is how to institutionalize some federation of cyber-assistance organizations that would act as a tiered line of defense, much like the Red Cross. If we have a crisis here in Philadelphia, the Red Cross of Southeastern PA handles it. If it’s a big enough crisis, the American Red Cross comes in. If it’s really big, like the Haitian earthquake, a bunch of nations will put their Red Cross societies together to help. That’s the sort of tiered, coordinated effort I have in mind.

Part of what law can do is protect an institution like a global cyberfederation in much the same way that a Red Cross official is protected by their status. We can have that kind of system in cyberspace, where if someone gets in trouble, they have people they can turn to for help and, just as importantly, they can rest assured that in getting that help they won’t end up worse off. Absent some norms for assistance, there’s always the fear that those providing assistance will keep what they learn about you for later use or maybe hand over your data to law enforcement or national security agencies. Of course, if the help comes from private actors, there are other issues as well, such as what happens if the helper fails. Can they be immunized from liability because they were trying to help?

There’s a complicated series of problems with any kind of assistance regime. But the alternative is chaos. It’s the Wild West, where no one helps anyone, we have no trust, and it’s anarchy. I don’t think we want to live in a world with anarchy. It’s bad enough now, when you turn on your laptop and hear about the latest data breach, and find out you have to get an IRS PIN because someone has compromised your SSN, or your credit card information has been stolen, or your data is held hostage by ransomware that freezes you out of your network. In the last few months we’ve seen hospitals hit by ransomware attacks, frozen out of their own networks, and they pay the ransom because lives are at stake.

TLS: What comes next for each proposal, and what new work is on the horizon for you?

DH: So, the next thing for the cyberfederation idea is a more academic analysis, building on last summer’s TIME article, of how we could institutionalize a duty to assist and the pros and cons of such an approach. I’m now a nonresident scholar at the Carnegie Endowment for International Peace, and I’m doing that work through Carnegie. I also just hosted a high-level cybersecurity conference at TU Japan that gathered senior public and private officials to talk about these issues, including the Assistant Secretary for cyber policy at DHS, Microsoft’s Director of Cybersecurity Policy and Strategy, an adviser to the Japanese Cabinet on cybersecurity issues, and the former head of the U.S. Computer Emergency Response Team.

One thing I’d like to do is host a conversation on politics and international law in cyberspace here at Temple, maybe a year from now. My grant work on cybernorms, which was funded by the U.S. Government in cooperation with MIT’s Computer Science and Artificial Intelligence Lab, is coming to a close with the upcoming publication of Constructing Cybernorms. And I’d love to continue consulting with governments and other independent experts, industry, and think tanks on these questions. It’s a continuing, and tremendously important, conversation. Whether it’s Apple and encryption or hacked hospitals, cybersecurity is here to stay, so I think that’s where I’ll be doing most of my thinking for the foreseeable future.