Wednesday, April 15, 2015

Backdoors, Sabotage or Just Plain Stupidity

Someone on your development team, or a contractor or a consultant, or one of your sys admins, or a bad guy who stole one of these people’s credentials, might have put a backdoor, a logic bomb, a Trojan or other “malcode” into your application code. And you don’t know it.

How much of a real problem is this? And how can you realistically protect your organization from this kind of threat?

The bad news is that it can be difficult to find malcode planted by a smart developer, especially in large legacy code bases. And it can be hard to distinguish between intentionally bad code and mistakes.

The good news is that, according to research by CERT’s Insider Threat Program, less than 5% of insider attacks involve someone intentionally tampering with software (for a fascinating account of real-world insider software attacks, check out this report from CERT). Which means that most of us are in much greater danger from sloppy design and coding mistakes in our code, and in the third party code that we use, than we are from intentional fraud or other actions by malicious insiders.

And the better news is that most of the work in catching and containing threats from malicious insiders is the same work that you need to do to catch and prevent security mistakes in coding. Whether it is sloppy/stupid or deliberate/evil, you look for the same things, for what Brenton Kohler at Cigital calls “red flags”:

  1. Stupid or small mistakes, accidental or “accidental”, in security code such as authentication and session management, access control, or in crypto or secrets handling
  2. Hard-coded URLs, IPs or other addresses, hard-coded user ids, passwords, password hashes or keys in the code or in configuration. Potential backdoors for insiders, whether they were intended for support purposes or not, are also holes that could be exploited by attackers (see the sketch after this list)
  3. Test code or debugging code or diagnostics
  4. Embedded shell commands
  5. Hidden commands, hidden parameters and hidden options
  6. Logic mistakes in handling money (like penny shaving) or risk limits or managing credit card details, or in command or control functions, or critical network-facing code
  7. Mistakes in error handling or exception handling that could leave the system open
  8. Missing logging or missing audit functions, and gaps in sequence handling
  9. Code that is overly tricky, or unclear, or that just doesn’t make sense. A smart bad guy will probably take steps to obfuscate what they are trying to do, and anything that doesn’t make sense should raise red flags. Even if this code isn’t intentionally malicious, you don’t want it in your system
  10. Self-modifying code. See above.
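
To make red flags #2 and #5 concrete, here is a small, entirely made-up Python sketch of the kind of code a reviewer should stop on: a login function with a hard-coded credential hash and a hidden “support_mode” parameter that bypasses the normal check. The function names, the parameter and the user-store stub are all hypothetical.

    import hashlib
    import hmac

    def check_user_store(username, password):
        # Stand-in for the real user store lookup; always fails in this sketch.
        return False

    # Red flag #2: a hard-coded credential hash buried in the code.
    # (This is the well-known SHA-256 of the string "password".)
    _SUPPORT_PASSWORD_SHA256 = "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8"

    def authenticate(username, password, params):
        # Normal path: check credentials against the user store.
        if check_user_store(username, password):
            return True
        # Red flag #5: a hidden, undocumented request parameter that enables a bypass.
        if params.get("support_mode") == "1":
            digest = hashlib.sha256(password.encode()).hexdigest()
            if hmac.compare_digest(digest, _SUPPORT_PASSWORD_SHA256):
                return True
        return False

    print(authenticate("anyone", "password", {"support_mode": "1"}))  # True: the backdoor works

Neither piece looks dramatic on its own, and a commit like this can easily be explained away as “support tooling” – which is exactly why reviewers need to know to flag it.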

Some of these issues can be found through static analysis. For example, Veracode explains how some common backdoors can be detected by scanning byte code.
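
Commercial tools like Veracode work on compiled byte code and do much deeper analysis; as a far more modest illustration of the same idea, here is a sketch of a lightweight source scan that flags a few of the textual red flags above (hard-coded IP addresses, possible hard-coded secrets, embedded shell commands, dynamic code execution). The patterns and file globs are illustrative assumptions, not a real rule set.

    import re
    import sys
    from pathlib import Path

    # A few illustrative patterns only -- a real scan would use a much richer rule set.
    RED_FLAG_PATTERNS = {
        "hard-coded IP address": re.compile(r"\b\d{1,3}(\.\d{1,3}){3}\b"),
        "possible hard-coded secret": re.compile(r"(password|passwd|secret|api_key)\s*=\s*['\"]", re.IGNORECASE),
        "embedded shell command": re.compile(r"\b(os\.system|subprocess\.(call|run|Popen))\s*\("),
        "dynamic code execution": re.compile(r"\b(eval|exec)\s*\("),
    }

    def scan(root):
        for path in Path(root).rglob("*.py"):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                for label, pattern in RED_FLAG_PATTERNS.items():
                    if pattern.search(line):
                        print(f"{path}:{lineno}: {label}: {line.strip()}")

    if __name__ == "__main__":
        scan(sys.argv[1] if len(sys.argv) > 1 else ".")

A scan like this is cheap to run on every check-in, but it only surfaces candidates for a human to look at – it proves nothing on its own.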

But there are limits to what tools can find, as Mary Ann Davidson at Oracle points out in a cranky blog post from 2014:

"It is in fact, trivial, to come up with a “backdoor” that, if inserted into code, would not be detected by even the best static analysis tools. There was an experiment at Sandia Labs in which a backdoor was inserted into code and code reviewers told where in code to look for it. They could not find it – even knowing where to look."

If you’re lucky, you might find some of these problems through fuzzing, although it’s hard to fuzz code and interfaces that are intentionally hidden.
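
One cheap fuzzing tactic is to probe an endpoint with a dictionary of likely hidden parameter names and watch for responses that differ from the baseline. The target URL, the credentials and the wordlist below are all hypothetical; this is a sketch of the idea, not a real fuzzer, and it only helps if you get lucky with the names.

    import itertools
    import urllib.error
    import urllib.parse
    import urllib.request

    # Hypothetical target -- point this at a test instance you own, never at production.
    TARGET = "http://localhost:8080/login"

    # Parameter names that often hide debug or support switches.
    SUSPECT_NAMES = ["debug", "test", "admin", "support", "internal", "bypass"]
    SUSPECT_VALUES = ["1", "true", "yes"]

    def fetch(extra_params):
        params = {"user": "nobody", "password": "wrong"}
        params.update(extra_params)
        url = TARGET + "?" + urllib.parse.urlencode(params)
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status, resp.read()
        except urllib.error.HTTPError as err:
            return err.code, err.read()
        except urllib.error.URLError as err:
            return "error", str(err.reason)

    baseline = fetch({})  # how the endpoint responds to a plain, failing login
    for name, value in itertools.product(SUSPECT_NAMES, SUSPECT_VALUES):
        result = fetch({name: value})
        if result != baseline:
            # A different status or body for an otherwise identical request deserves a look.
            print(f"{name}={value}: response differs from baseline")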

The only way that you can have confidence that your system is probably free of malcode – in the same way that you can have confidence that your code is probably free of security vulnerabilities and other bugs – is through disciplined and careful code reviews, by people who know what they are looking for. Which means that you have to review everything, or at least everything important: framework and especially security code, protocol libraries, code that handles confidential data or money, …

And to prevent programmers from colluding, you should rotate reviewers or assign them randomly, and spot check reviews to make sure that they are being done responsibly (that reviews are not just rubber stamps), as outlined in the DevOps Audit Defense Toolkit.
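
A trivial sketch of the rotation idea: assign reviewers at random from a pool, never the author, and flag a random sample of reviews for an independent spot check. The reviewer pool and the sampling rate are made-up values.

    import random

    REVIEWERS = ["alice", "bob", "carol", "dave", "erin"]
    SPOT_CHECK_RATE = 0.2  # roughly one in five reviews gets a second, independent look

    def assign_review(author, rng=random):
        # Never let the author review their own change; pick the reviewer at random
        # so no pair of developers can reliably count on reviewing each other.
        reviewer = rng.choice([r for r in REVIEWERS if r != author])
        spot_check = rng.random() < SPOT_CHECK_RATE
        return reviewer, spot_check

    if __name__ == "__main__":
        for change in ["PAY-101", "PAY-102", "PAY-103"]:
            reviewer, spot_check = assign_review("alice")
            note = " (flagged for spot check)" if spot_check else ""
            print(f"{change}: review assigned to {reviewer}{note}")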

And if the stakes are high enough, you may also need eyes from outside on your code, as the Linux Foundation’s Core Infrastructure Initiative is doing by paying experts to do a detailed audit of OpenSSL, NTP and OpenSSH.

You also need to manage code from check-in through build and test to deployment, to ensure that you are actually deploying what you checked in, built and tested, and that code has not been tampered with along the way. Carefully manage secrets and keys. Use checksums/signatures and change detection tools like OSSEC to watch out for unexpected or unauthorized changes to important configs and code.
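
A low-tech way to check that what you deploy is what you built is to record a checksum for every artifact at build time and verify it again at deploy time (change detection tools like OSSEC watch for the same kind of unexpected change continuously). A minimal sketch, with a hypothetical manifest file name:

    import hashlib
    import json
    from pathlib import Path

    MANIFEST = Path("build-manifest.json")  # hypothetical name, written by the build job

    def sha256(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def record(paths):
        # Run at build time: snapshot a checksum for every artifact that will ship.
        MANIFEST.write_text(json.dumps({str(p): sha256(p) for p in paths}, indent=2))

    def verify():
        # Run at deploy time: any artifact that changed (or disappeared) since the
        # build is a reason to stop the deployment and investigate.
        manifest = json.loads(MANIFEST.read_text())
        bad = [p for p, digest in manifest.items()
               if not Path(p).exists() or sha256(p) != digest]
        if bad:
            raise SystemExit(f"checksum mismatch, refusing to deploy: {bad}")
        print("all artifacts match the build manifest")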

This will help you to catch malicious insiders as well as honest mistakes, and attackers who have somehow compromised your network. The same goes for monitoring activity inside your network: watching out for suspect traffic to catch lateral movement should catch bad guys regardless of whether they came from the outside or the inside.

If and when you find something, the next problem is deciding if it is stupid/sloppy/irresponsible or malicious/intentional.

Cigital’s Kohler suggests that if you have serious reasons to fear insiders, you should rely on a small number of trusted people to do most of the review work, and that you try to keep what they are doing secret, so that bad developers don’t find out and try to hide their activity.

The rest of us, who are less paranoid, can be transparent and shine a bright light on the problem from the start.

Make it clear to everyone that your customers, shareholders and regulators require that code must be written responsibly, and that everybody’s work will be checked.

Include strict terms in employment agreements and contracts for everyone who could touch code (including offshore developers, contractors and sys admins) which state that they will not, under any circumstances, insert any kind of time bomb, backdoor or trap door, Trojan, Easter Egg or other malicious code into the system, and that doing so could result in severe civil penalties as well as possible criminal action.

Make it clear that all code and other changes will be reviewed for anything that could be malcode.

Train developers on secure coding and how to do secure code reviews so that they all know what to look for.

If everyone knows that malcode will not be tolerated, and that there is a serious and disciplined program in place to catch this kind of behavior, it is much less likely that someone will try to get away with it – and even less likely that they will be able to get away with it.

You can do this without destroying a culture of trust and openness. Looking out for malcode, like looking out for mistakes, simply becomes another part of your SDLC. Which is the way it should be.

