Monday 8 July 2013

File Integrity Monitoring – FIM and Why Change Management is the Best Security Measure You Can Implement

Introduction
With the growing awareness that cyber security is an urgent priority for any business, there is a ready market for automated, intelligent security defenses. The silver bullet against malware and data theft is still being developed (promise!) but in the meantime there are hordes of vendors out there who will sell you the next best thing.

The trouble is, who do you turn to? According to the Palo Alto firewall guy, his appliance is the main thing you need to protect your company’s intellectual property. Speak to the guy selling the FireEye sandbox, however, and he may well disagree, saying you need one of his boxes to protect your company from malware. Then again, the McAfee guy will tell you that endpoint protection is where it’s at – their Global Threat Intelligence approach should cover you for all threats.

In one respect they are all right, all at the same time – you do need a layered approach to security defenses and you can almost never have ‘too much’ security. So is the answer as simple as ‘buy and implement as many security products as you can’?

Cyber Security Defenses – Can You Have Too Much of a Good Thing?
Before you draw up your shopping list, be aware all this stuff is really expensive, and the notion of buying a more intelligent firewall to replace your current one, or of purchasing a sandbox appliance to augment what your MIMEsweeper already largely provides, demands a pause for thought. What is the best return on investment available, considering all the security products on offer?

Arguably, the best value for money security product isn’t really a product at all. It doesn’t have any flashing lights, or even a sexy looking case that will look good in your comms cabinet, and the datasheet features don’t include any impressive packets per second throughput ratings. However, what a good Change Management process will give you is complete visibility and clarity of any malware infection, any potential weakening of defenses plus control over service delivery performance too.

In fact, many of the best security measures you can adopt may come across as a bit dull (compared to a new piece of kit for the network, what doesn’t seem dull?) but, in order to provide a truly secure IT environment, security best practices are essential.

Change Management – The Good, The Bad and The Ugly (and The Downright Dangerous)
There are four main types of changes within any IT infrastructure:
  • Good Planned Changes (expected and intentional, which improve service delivery performance and/or enhance security)
  • Bad Planned Changes (intentional, expected, but poorly or incorrectly implemented which degrade service delivery performance and/or reduce security)
  • Good Unplanned Changes (unexpected and undocumented, usually emergency changes that fix problems and/or enhance security)
  • Bad Unplanned Changes (unexpected, undocumented, and which unintentionally create new problems and/or reduce security)
A malware infection, implanted intentionally by an Inside Man or an external hacker, also falls into the last category of Bad Unplanned Changes, as does a rogue Developer planting a Backdoor in a corporate application. The fear of a malware infection, be it a virus, a Trojan or the new buzzword in malware, an APT, is typically the main concern of the CISO, and it helps sell security products, but should it be?

A Bad Unplanned Change that unintentionally renders the organization more prone to attack is a far more likely occurrence than a malware infection, since every change made within the infrastructure has the potential to reduce protection. Developing and implementing a Hardened Build Standard takes time and effort, but undoing that painstaking configuration work only takes one clumsy engineer taking a shortcut or entering a typo. Every time a Bad Unplanned Change goes undetected, the once secure infrastructure becomes more vulnerable to attack, and when your organization is eventually hit by a cyber-attack the damage will be much, much worse.

To this end, shouldn’t we be taking Change Management much more seriously and reinforcing our preventative security measures, rather than putting our trust in another gadget which will still be fallible where Zero Day Threats, Spear Phishing and straightforward security incompetence are concerned?

The Change Management Process in 2013 – Closed Loop and Total Change Visibility
The first step is to get a Change Management Process – for a small organization, just a spreadsheet or a procedure to email everyone concerned to let them know a change is going to be made at least gives some visibility and some traceability if problems subsequently arise. Cause and Effect generally applies where changes are made – whatever changed last is usually the cause of the latest problem experienced.
This is why, once changes have been implemented, checks should be made to confirm that everything was implemented correctly and that the desired improvements have been achieved (this is what separates a Good Planned Change from a Bad Planned Change).

For a simple change, say a new DLL deployed to a system, this is easy to describe and straightforward to review and check. For more complicated changes, the verification process is correspondingly more complex. Unplanned Changes, Good and Bad, present a far more difficult challenge: what you can’t see, you can’t measure, and, by definition, Unplanned Changes are typically performed without any documentation, planning or awareness.

Contemporary Change Management systems utilize File Integrity Monitoring, providing zero tolerance of change. If a change is made, whether to a configuration attribute or to the filesystem, it will be recorded.
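To make the principle concrete, here is a minimal sketch of such a check in Python, comparing SHA-256 hashes of monitored files against a stored baseline (the watch list and baseline location are illustrative assumptions, not how any particular FIM product stores its data):

import hashlib
import json
import os

BASELINE_FILE = "fim_baseline.json"                    # illustrative location
MONITORED = ["/etc/ssh/sshd_config", "/etc/passwd"]    # example watch list

def file_hash(path):
    """Return the SHA-256 digest of a file's contents."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            sha.update(block)
    return sha.hexdigest()

def take_baseline():
    """Record the current hash of every monitored file."""
    baseline = {p: file_hash(p) for p in MONITORED if os.path.exists(p)}
    with open(BASELINE_FILE, "w") as f:
        json.dump(baseline, f, indent=2)

def scan():
    """Compare current hashes against the baseline and report every change."""
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    for path, old_hash in baseline.items():
        if not os.path.exists(path):
            print("MISSING: " + path)
        elif file_hash(path) != old_hash:
            print("CHANGED: " + path)

A real FIM agent would, of course, watch far more than two files and record who made each change, but the baseline-and-compare principle is the same.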

In advanced FIM systems, a time window or change template can be defined in advance of a change to provide a means of automatically aligning the details of the RFC (Request for Change) with the actual changes detected. This provides an easy means to observe all the changes made during a Planned Change and greatly improves the speed and ease of the verification process.

This also means that any changes detected outside of any defined Planned Change can immediately be categorized as Unplanned, and therefore potentially damaging, changes. Investigation becomes a priority task, but with a good FIM system, all the changes recorded are clearly presented for review, ideally with ‘Who Made the Change?’ data.
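As a simple illustration of that categorization logic (the change-window structure and field names below are assumptions for the example, not any specific product’s schema), each detected change can be matched against the approved RFC windows, and anything that falls outside them flagged as Unplanned:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class ChangeWindow:
    rfc_id: str      # reference to the approved Request for Change
    start: datetime  # window opens
    end: datetime    # window closes
    paths: set       # files/settings the RFC is expected to touch

def classify(change_path, change_time, windows):
    """Return the matching RFC id for a Planned change, or 'UNPLANNED'."""
    for w in windows:
        if w.start <= change_time <= w.end and change_path in w.paths:
            return w.rfc_id
    return "UNPLANNED"

# Example: a change to sshd_config detected outside any approved window
windows = [ChangeWindow("RFC-1042",
                        datetime(2013, 7, 8, 22, 0),
                        datetime(2013, 7, 8, 23, 0),
                        {"/etc/ssh/sshd_config"})]
print(classify("/etc/ssh/sshd_config", datetime(2013, 7, 9, 3, 15), windows))
# prints "UNPLANNED" - a priority candidate for investigation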

Summary
Change Management always features heavily in any security standard, such as the PCI DSS, and in any Best Practice framework such as the SANS Top Twenty, ITIL or COBIT.

If Change Management is not yet part of your IT processes, or your existing process is not fit for purpose, maybe this should be addressed as a priority? Coupled with a good Enterprise File Integrity Monitoring system, Change Management becomes a much more straightforward process, and it may just be a better investment right now than any flashy new gadget.

Monday 1 July 2013

File Integrity Monitoring – Use FIM to Cover All the Bases

Why use FIM in the first place?
Unlike anti-virus and firewalling technology, FIM is not yet seen as a mainstream security requirement. In some respects, FIM is similar to data encryption, in that both are undeniably valuable security safeguards to implement, but both are used sparingly, reserved for niche or specialized security requirements.

How does FIM help with data security?
At a basic level, File Integrity Monitoring will verify that important system files and configuration files have not changed, in other words, the files’ integrity has been maintained.

Why is this important? In the case of system files – program, application or operating system files – these should only change when an update, patch or upgrade is implemented. At other times, the files should never change.

Most security breaches involving theft of data from a system use either a keylogger to capture data as it is entered into a PC (the theft then being perpetrated via a subsequent impersonated access), or some kind of data-transfer conduit program used to siphon information off a server. In either case, some form of malware has to be implanted onto the system, generally operating as a Trojan, i.e. the malware impersonates a legitimate system file so that it can be executed and granted access privileges to system data.

In these instances, a file integrity check will detect the Trojan’s existence, and given that zero day threats and targeted APT (advanced persistent threat) attacks can evade anti-virus measures, FIM comes into its own as a must-have security defense. To give the necessary peace of mind that a file has remained unchanged, the file attributes governing security and permissions, as well as the file length and cryptographic hash value, must all be tracked.
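As a rough sketch of what that tracking involves (the attribute set and the use of SHA-256 are illustrative choices; commercial FIM tools differ in the details), a per-file record might capture the permissions, ownership, size and content hash together:

import hashlib
import os
import stat

def file_fingerprint(path):
    """Collect the attributes a FIM baseline would typically track for one file."""
    st = os.stat(path)
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            sha.update(block)
    return {
        "mode": stat.filemode(st.st_mode),   # permissions, e.g. '-rw-r--r--'
        "owner": st.st_uid,
        "group": st.st_gid,
        "size": st.st_size,
        "sha256": sha.hexdigest(),
    }

Any difference between two fingerprints of the same file indicates that its contents, permissions or ownership have changed.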

Similarly, for configuration files, computer configuration settings that restrict access to the host, or restrict privileges for users of the host must also be maintained. For example, a new user account provisioned for the host and given admin or root privileges is an obvious potential vector for data theft – the account can be used to access host data directly, or to install malware that will provide access to confidential data.
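On a Linux host, one simple check of this kind is to look for any account other than root that has been given UID 0 (a deliberately small, hypothetical example; a real audit would also cover group memberships, sudoers entries and their Windows equivalents):

def rogue_superusers(passwd_path="/etc/passwd"):
    """Return any account other than 'root' that has UID 0."""
    rogues = []
    with open(passwd_path) as f:
        for line in f:
            fields = line.strip().split(":")
            if len(fields) >= 3 and fields[2] == "0" and fields[0] != "root":
                rogues.append(fields[0])
    return rogues

print(rogue_superusers())   # an empty list is the expected, hardened state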

File Integrity Monitoring and Configuration Hardening
Which brings us to the subject of configuration hardening. Hardening a configuration is intended to counteract the wide range of potential threats to a host and there are best practice guides available for all versions of Solaris, Ubuntu, RedHat, Windows and most network devices. Known security vulnerabilities are mitigated by employing a fundamentally secure configuration set-up for the host.

For example, a key basic for securing a host is a strong password policy. On a Solaris, Ubuntu or other Unix/Linux host this is implemented by editing the /etc/login.defs file or similar, whereas a Windows host requires the necessary settings to be defined within the Local or Group Security Policy. In either case, the configuration settings exist in a form that can be analyzed and whose integrity can be verified for consistency (even if, in the Windows case, that form is a registry value or the output of a command-line program).
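To show how such a settings file can be checked programmatically (the threshold values below are examples of the kind of policy a hardening guide might require, not an official benchmark), a simple audit of /etc/login.defs might look like this:

# Example policy thresholds - illustrative values, not a formal benchmark
POLICY = {
    "PASS_MAX_DAYS": lambda v: int(v) <= 90,   # force regular password changes
    "PASS_MIN_DAYS": lambda v: int(v) >= 1,    # prevent immediate re-use
    "PASS_WARN_AGE": lambda v: int(v) >= 7,    # warn users before expiry
}

def audit_login_defs(path="/etc/login.defs"):
    """Report each policy setting as PASS, FAIL or MISSING."""
    found = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                parts = line.split()
                if len(parts) >= 2:
                    found[parts[0]] = parts[1]
    for key, check in POLICY.items():
        if key not in found:
            print("MISSING: " + key)
        elif not check(found[key]):
            print("FAIL: " + key + " = " + found[key])
        else:
            print("PASS: " + key + " = " + found[key])

The same approach extends to any other settings that make up a hardened build standard.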

Therefore file integrity monitoring ensures a server or network device remains secure in two key dimensions: protected from Trojans or other system file changes, and maintained in a securely defended or hardened state.

File integrity assured – but is it the right file to begin with?
But is it enough to just use FIM to ensure system and configuration files remain unchanged? Doing so guarantees that the system being monitored remains in its original state, but there is a risk of perpetuating a bad configuration, a classic case of ‘junk in, junk out’ computing. In other words, if the system was built from an impure source, FIM will simply preserve the flaws. The recent Citadel keylogger scam, for example, is estimated to have netted over $500M in funds stolen from bank accounts, where the affected PCs had been set up using pirated Windows Operating System DVDs, each one with keylogger malware included free of charge.

In the corporate world, OS images, patches and updates are typically downloaded directly from the manufacturer website, therefore providing a reliable and original source. However, the configuration settings required to fully harden the host will always need to be applied and in this instance, file integrity monitoring technology can provide a further and invaluable function.

The best Enterprise FIM solutions can not only detect changes to configuration files/settings, but also analyze the settings to ensure that best practice in security configuration has been applied.

In this way, all hosts can be guaranteed to be secure and set up in line not just with industry best practice recommendations for secure operation, but with any individual corporate hardened build standard.
A hardened build standard is a prerequisite for secure operations and is mandated by all formal security standards such as PCI DSS, SOX, HIPAA, and ISO27K.

Conclusion
Even if FIM is being adopted simply to meet the requirements of a compliance audit, there is a wide range of benefits to be gained over and above simply passing the audit.

Protecting host systems from Trojan or malware infection cannot be left solely to anti-virus technology. The AV blind-spot for zero day threats and APT-type attacks leaves too much doubt over system integrity not to utilize FIM for additional defense.

Preventing security breaches is still the first step to take, however, and hardening a server, PC or network device will fend off the vast majority of non-insider infiltrations. Using a FIM system with auditing capabilities for best practice secure configuration checklists makes expert-level hardening straightforward.

Don’t just monitor files for integrity – harden them first!