Ransomware attacks and the social responsibility to build resilient systems

15 May 2017

Ransomware screen

The heavier you lean on something, the more that thing should be designed to easily bear the weight.

It’s a rather basic engineering principle. But it’s not the one that most software companies currently apply to their products.

Instead the emphasis has been all about speed. Make it quick. Ship it. Fix and improve it. It’s a model that has been well-suited to rapid innovation, start-ups and upstarts. But it is a disastrous model from the point of view of security.

That matters less if it’s a nice-to-have service, although it still matters. But if the software you produce is something that people come to rely upon for essential services, then a key corporate responsibility is to ensure that it is resilient in the face of attack.

The context for all of this, of course, is the malware attacks of the last few days, which have seen the UK’s National Health Service computers infected with ransomware that has rendered them unusable and thrown the delivery of health services into chaos.

You may well ask what on earth the Health Service was doing placing its essential processes in the hands of obsolete Windows XP systems. It’s a valid question. But the answer isn’t always that IT departments are lazy and stupid, or that managers have been too stingy (although sometimes that is the case). People follow incentives, and badly designed systems often create perverse ones.

 

If you require customers en masse to do the right thing, you’ve designed a system with guaranteed failure built in

So if your company has a history of producing “upgrades” that are popularly held to make systems worse, not better, then guess what? People become very reluctant to upgrade until they know it’s safe to do so, and may simply resist the process altogether. If you improve the security in the latest version of your operating system, but add in a bunch of changes that customers actually don’t like, you can’t blame the customers if they don’t follow like sheep onto the software you think they should be using.

It’s a question of system design. If you require customers en masse to proactively do the right thing, you’ve designed a system with guaranteed failure built in. You can then blame the customers if you wish. But ultimately, you designed a system that depended on people being the way you want them to be, not the way that they are.

Apple has given at least one example of how you can get this right. Its iOS operating system, used on iPads and iPhones, is an illustration of how you can design a platform to be secure from the ground up. Apps are not allowed in unless they go through Apple’s App Store, where their code is reviewed and checked by Apple. And apps are generally restricted in what they can do outside their own ‘sandbox’, making malicious code very difficult to introduce.
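To make the ‘sandbox’ idea concrete, here is a minimal Swift sketch (the file paths are purely illustrative, not real Apple locations): an app may read and write freely inside its own container, but a write aimed outside that container is simply refused by the operating system.

```swift
import Foundation

// Inside the app's own sandbox container: reads and writes are allowed.
let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let note = docs.appendingPathComponent("note.txt")
try? "Saved inside the app's own container".write(to: note, atomically: true, encoding: .utf8)

// Outside the sandbox - e.g. another app's data (hypothetical path) - the
// operating system refuses the write and raises an error instead.
do {
    try "payload".write(toFile: "/private/var/mobile/Containers/Data/SomeOtherApp/secret.txt",
                        atomically: true, encoding: .utf8)
} catch {
    print("Blocked by the sandbox: \(error)")
}
```

It is that refusal at the platform level, rather than any vigilance on the part of individual users, that makes hostile code so much harder to spread.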

Apple developers have often moaned about those restrictions. But whilst we know it’s impossible to build a 100% secure system, the restrictions make the platform far more robust. So really it comes down to a series of choices by the technology company.

Let’s be clear. If a tech company designs a system which is intended to be relied upon by people, and whose failure will create significant real-world disruption, that company has a social responsibility to create a resilient system. That responsibility starts with the platform. It’s hard for an individual app developer to counter security flaws in Windows that may enable an attacker to render a computer and its data useless. But within the bounds of the design choices it can make, security needs to be a much higher-profile item.

In a recent video, I talked about the implications of growing levels of automation - the technical revolution that is likely to put between one third and one half of people out of their old jobs as those functions get taken over by AI.

Just think about how much more vulnerable a company makes itself by following that technological trend without sufficient care as to the robustness of the platform on which it builds its systems.

The fact that the exploit used in the recent attack was stolen from the US government’s NSA highlights that governments can’t be relied upon as honest partners in this endeavour. Even if the US government decided it had learnt its lesson (and who’s betting on that outcome?), others will always follow their own perceived incentives to find ways of subverting the security of IT systems in order to advance their own security interests or, in some cases, repressive agendas. Whilst the most effective approach to dealing with such issues is for the major companies to work in partnership with governments and other agencies, the companies still have to step up and take principal responsibility.

It does, however, underline Apple’s case against the FBI from last year. Apple contended that it shouldn’t build back doors into its software, on the grounds that whatever could be used by the FBI would eventually be used by criminals - a point that has been rather dramatically illustrated by the ransomware screens popping up on computer terminals over the last few days.

The difficult question is this - how do we hold software companies accountable for what they do in this area? When you have to be a highly sophisticated expert to tell the difference between a genuinely secure system and an insecure one, any company can claim in its marketing materials that its systems are secure. False claims only get exposed when it’s too late.

How should security and resilience be brought into the scrutiny that CSR and SRI analysts bring to bear on tech companies? It is long overdue that they became part of the overall equation of corporate social responsibility.

 

This post was first published on the Respectful Business Blog.