Ciaran Martin founded the UK’s National Cyber Security Centre (NCSC) in 2016 and served as its first CEO until 2020, having led cyber security at GCHQ since 2013.

He is a distinguished former civil servant who has worked directly with five prime ministers and a variety of senior ministers across three political parties, and has held senior positions at HM Treasury and the Cabinet Office, as well as GCHQ.

Today, he is a professor at Oxford University’s Blavatnik School of Government and a fellow at Hertford College, Oxford, where he studied history as an undergraduate.

He is also the chair of CyberCX in the UK, as well as managing director at Paladin Capital, head of the SANS CISO Institute, and an adviser to Garrison Technology and Red Sift.

During a session at the Infosecurity Europe show this month, he gave a sneak peek of a paper, now published by the Blavatnik School of Government at Oxford, about the extent to which artificial intelligence (AI) might be disrupting a rough cyber security balance between attackers and defenders.

That balance has historically been governed by three principles, he maintains. First, computer systems where human safety is at risk tend to have fail-safes, as with air traffic control systems. Second, the most dangerous capabilities remain in the hands of the most capable actors, who tend to have some sense of rationality and escalatory risk, as with the leaders of the USSR and the US during the Cold War. And third, if you can use advanced code for bad, you can normally use it for (offsetting) good. His contention is that AI calls at least the second and third of these into question.

In a précis of the paper, he concludes: “The Digital Security Equilibrium is a useful concept if we wish to understand why cyberspace has remained a place of harm, contestation, but not catastrophe to date. It can remain that way, but it requires a sustained effort and smart policymaking over many years. And for now, the most worrying part is the growing accessibility of potent cyber capabilities to new actors.”

He went into more detail on this, and other matters, in a conversation with Computer Weekly at Infosec. What follows is a compressed and edited version of that conversation.

Would you say the biggest threat to our security is that companies are simply not willing to invest in cyber resilience?

I’m getting sympathetic to that view, but I’m not going to do a hatchet job on companies. I think that companies, by and large, try to behave rationally.

The first thing I’d say is there was a lot of hype in the past that there was going to be more and more catastrophe. In one sense, that means people sit up and take notice, particularly big businesses and so forth. On the other hand, I think it was accidentally a bit infantilising. When you and I were growing up during the Cold War, we might have been worried about the threat of nuclear Armageddon.

But also, we knew there wasn’t a thing we could do about it. And if you’re being told there’s this huge cyber risk and so forth, you think, “Hang on, what can I do about it? That’s why I pay taxes to the government”.

I think the second thing was – while personal data is really important and its theft and misuse can lead to serious harm – we have to balance things. We live in a country where companies, by and large, obey the law, and the legal balance has for some years been very onerous on data protection and very light on service disruption and resilience.

I think we do have to incentivise resilience more as well. Marks and Spencer is a good example. They are a well-run company that had been doing really well until the cyber attack. They’re not suddenly stupid or negligent when it comes to cyber. You have to look a bit deeper. What are their incentives? What have they been told to do? What are they legally mandated to prioritise? And now we’re thinking: resilience is king.

In your presentation, I got the impression you were saying that AI means it is undecided whether what you call the ‘security equilibrium’ holds. Is that right?

I don’t think AI gives you any magic new tools. There is a lot of hype about big red buttons that can bring down planes and all that stuff. It doesn’t really work that way. AI doesn’t take you there, but what it does do is massively lower the cost and other barriers to entry for doing something quite disruptive and bad.

But in terms of the capability battle, I’m optimistic. I think there’s a huge potential for AI in cyber security to make things better. In vulnerability scanning, for example, baddies do vulnerability scanning so they can exploit [vulnerabilities], goodies do it so they can patch. And by and large, that has to come out in our favour.
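To make the defensive half of that dual use concrete, here is a minimal, illustrative sketch of the “goodies” workflow he describes: enumerate what is installed, compare it against known-vulnerable versions, and report what needs patching. The advisory list, package names and version thresholds below are invented for the example; a real scanner would consume live feeds such as the CVE/NVD databases and use a proper version-comparison library.

```python
# Illustrative sketch only: a toy defensive vulnerability scan.
# The ADVISORIES data is hypothetical; real scanners use live CVE/NVD feeds.
from importlib.metadata import distributions

# Hypothetical advisories: package name -> highest version known vulnerable
ADVISORIES = {
    "requests": (2, 19, 0),
    "urllib3": (1, 24, 1),
}

def parse_version(version: str) -> tuple:
    """Crude numeric version parse; real tools use packaging.version."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def scan():
    """Flag installed packages at or below a known-vulnerable version."""
    findings = []
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name in ADVISORIES and parse_version(dist.version) <= ADVISORIES[name]:
            findings.append(f"{name} {dist.version}: patch required")
    return findings

if __name__ == "__main__":
    for finding in scan():
        print(finding)
```

The same loop, pointed at someone else’s systems and fed with exploit code rather than patch advice, is the attacker’s version – which is the point Martin is making: the technique is neutral, and the advantage depends on who uses it, and to what end.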

But does this not come down to people? Something like one-third of cyber security professionals in government are contractors because there has been a real problem recruiting and paying civil servants the kind of money they can make in the private sector.

My past gives me a luxury interpretation of this question because GCHQ was very good at retaining people. They weren’t paying Microsoft or CrowdStrike salaries, but they did pay them a bit more, and the mission was good and motivated them. Incentivising [a cyber security professional] to go into a major payments department like Work and Pensions or HMRC is going to be a bit different.

Having said that, I think people are really important. First of all, people as users are very important, and we have to try to give them sensible and meaningful things to control, and not ask them to take on the Russians on their own.

But I also think there’s a tendency to be very Cassandra-like about skills. I was warned when I was setting up the NCSC that it wasn’t going to work because there weren’t enough skills in the organisation or the economy. But there are great people out there, and retrainable people. You don’t need that many ninjas. You need layers. You need elite defence units, in government and in some of the major companies. We need good corporate cyber defences. We need a cyber-savvy workforce, and to know how to do the basics.

It is often said that the NCSC represented a fundamental shift. What was it a shift from and to?

To get highfalutin about it, if you look back at the history of this from Bletchley Park onwards, computing and computer security – on both sides, the poacher and the gamekeeper – were the preserve of the major global powers and governments, and that was it: the “crypto wars”, all of that.

Now, GCHQ has had a security mission since 1919. But it was about protecting Britain’s military and intelligence secrets – those were the only secrets that anybody cared about. But with mass digitisation, there is a shift into the open. You can’t protect an economy from behind barbed wire in a building with no access to cell phones. You just can’t do it. You can’t communicate with people, you can’t give them advice, you can’t respond to an incident.

The second thing was to be a bit more activist. There was an awful lot of passivity about public-private partnerships and about information sharing. So, it was from secret to open, and passive to active.

I saw Jeremy Fleming [the former director of GCHQ] speaking at Palo Alto Networks’ Ignite London event in March. He was surprised by a straw poll he took of the audience of cyber security professionals, which revealed they believed the AI advantage was with the attacker … and that, with more volatility, cyber security professionals tend to be more cautious. But he was still ‘broadly optimistic that the advantage is with the defender’, provided a high pace of technology deployment is kept up and organisations are agile. What do you make of that? Was his surprise perhaps due to his background in national security?

I broadly agree with him. There’s a tendency to pessimism in this subject. Objectively, who has the advantage? It’s too early to tell, as [the Chinese premier] Zhou Enlai is reputed to have said [about the French Revolution].

But secondly, it doesn’t have to be like this. What advantages do the baddies have? Fundamentally, recklessness and a lack of ethics. They are prepared to do things that we might not be prepared to do, and they want to cause harm. So it’s a different calculus for them. But what are our advantages? Well, firstly, the stability of rule of law and the market economies that turbocharge innovation. They didn’t build any of this tech. They are just cheating with other people’s tech.

A lot of this is about economics and business climate. And regulation and the posture of the country. Do you incentivise people to take security seriously? And if you do, then a major British corporate will say: “We’re well off, we’re booming, we’re a bit worried about this security business, so we’re gonna buy.” And if there’s a whole suite of really innovative stuff out there that there’s a market for, then we’re going to win. If none of that works, then they’re going to win.

And what we have in the UK, which I share in common with Jeremy, is the poacher-and-gamekeeper model at GCHQ. It is common in the Five Eyes, but not in continental Europe: the attackers and the defenders sit in the same place, so they can learn from each other, and so forth. GCHQ is primarily a foreign intelligence digital espionage agency, but many of the people who worked for me in the NCSC, and in its predecessor body, CESG, are focused on protection.

By the same token, the people who build tech are those who can secure it, as with Microsoft. And [at US defence level], secure by design is being kept by this Administration, and I am pleased about that.


