Wednesday 12 March 2014

File Integrity Monitoring – 3 Reasons Why Your Security Is Compromised Without It Part 3

Introduction

This final article in the series of three focuses on one of the key security best practices, and usually the hardest to implement, requiring wholesale organizational and cultural change within the IT team: Change Management.

FIM for Change Control and Change Management

The forensic nature of file integrity monitoring makes it the perfect technology to underpin a change management process. Change management processes are notoriously difficult to implement and operate. Make the process too thorough, and the bureaucracy required becomes such a turn-off for the IT team that they work around it as often as they use it. Make it too superficial, and the benefits to be gained become marginal.

However, even the most rigorously operated change management process has a glaring flaw: the assumption that changes are always implemented exactly as decreed by the approved request for change, or RFC.

Using enterprise-class file integrity monitoring ensures that all changes made will be reported. This provides a number of advantages over a traditional ‘trust-based’ change management process, not only simplifying the process but also strengthening the value it delivers.

Because all changes are reported, the blindness to change that a change management process seeks to address is eliminated. Granted, changes are only reported once they have been implemented, but you now have visibility of ALL changes, not just those documented in an RFC.

This means that even if the process is bypassed, the changes are still visible. Better still, if the wrong change is made, or a change is inaccurately implemented, it is immediately revealed. In this way the flaw in the ‘trust-based’ change management process is resolved: the trust that says, because we have diligently documented, reviewed and approved this request for change, it cannot and will not be incorrectly implemented (we hope!). Anything up to 70% of unplanned downtime is attributed to people and process issues; in other words, computers don’t tend to go wrong if they are left alone, but give people access to them and problems won’t be far behind.
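To make this concrete, the sketch below shows one way changes reported by a FIM tool could be reconciled against approved RFCs. The record fields and RFC structure are hypothetical and purely illustrative, not any particular product’s schema.

```python
from datetime import datetime

# Hypothetical records: changes as reported by a FIM tool, and approved
# RFCs with their change windows. Field names are illustrative only.
detected_changes = [
    {"file": "/etc/ssh/sshd_config", "user": "jsmith",
     "when": datetime(2014, 3, 10, 22, 15)},
    {"file": "/etc/passwd", "user": "root",
     "when": datetime(2014, 3, 11, 3, 40)},
]

approved_rfcs = [
    {"rfc": "RFC-1042", "files": {"/etc/ssh/sshd_config"},
     "start": datetime(2014, 3, 10, 22, 0),
     "end": datetime(2014, 3, 10, 23, 0)},
]

def unplanned(changes, rfcs):
    """Return every detected change not covered by an approved RFC window."""
    return [c for c in changes
            if not any(c["file"] in r["files"] and
                       r["start"] <= c["when"] <= r["end"]
                       for r in rfcs)]

for change in unplanned(detected_changes, approved_rfcs):
    print("UNPLANNED: %(file)s changed by %(user)s at %(when)s" % change)
```

Any change that cannot be matched to an RFC is precisely the kind of event a purely trust-based process would never surface.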

The Change Management Process Flipped – Manage by FIM?

Where file integrity monitoring is in place, an interesting alternative perspective on the change management process emerges.

If all changes are being recorded and presented clearly, detailing what was changed and who made the change, the change management process can be streamlined. Most IT professionals are savvy when it comes to planning changes: they will naturally schedule a change for the time that least inconveniences the business, with contingency measures prepared in case things go wrong.

In fact, it is the bureaucracy associated with a change management process that often kills it. Changes get delayed while the IT team wastes time documenting and reviewing RFCs that don’t warrant the examination, to the point where the costs of the process outweigh its benefits.

FIM allows more changes to be approved ‘after the event’, making the change management process more of a ‘checks and balances’ operation. Assuming that the overwhelming majority of changes are necessary and correctly implemented, this alternative process may be welcomed by IT teams under time and resource pressure. It won’t suit every organization, but it can serve as a valuable staging option while a more rigorous change management process is introduced to the business.

Summary

Taken as a whole, the argument for file integrity monitoring as an essential security defense is compelling. As a check and balance for the change management process, FIM provides the visibility of changes that is typically missing. FIM also brings more flexibility to a change management process, enabling shortcuts that turn it into an after-the-event, review-changes-and-sign-off procedure, which may be more palatable and more readily operable for some IT teams.

Wednesday 22 January 2014

File Integrity Monitoring – 3 Reasons Why Your Security Is Compromised Without It Part 2

Introduction
In part 1 of this series of articles we talked about the importance of using file integrity monitoring on system files as a backstop to anti-virus for detecting malware. Enterprise-level FIM goes further where configuration files are concerned, not only detecting and reporting changes to config settings but also identifying vulnerabilities.

Malware Detection – How Effective is Anti-Virus?
However, there are also a number of issues with using hardening checklists to eliminate vulnerabilities, in other words, to harden a system. First of all, checking a system for the presence of vulnerabilities is painstaking and time-consuming, and repeating the process across an estate of hundreds or thousands of servers requires significant resources.
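To give a sense of what this involves, the sketch below checks just a single, typical checklist item by hand. It assumes a Linux host, and the rule shown (disabling direct root login over SSH) is a common benchmark recommendation chosen purely for illustration.

```python
# Minimal sketch: verify one hardening-checklist item by hand.
# Assumes a Linux host; the rule checked (SSH root login disabled) is a
# common benchmark recommendation, used here for illustration only.

def check_ssh_root_login(path="/etc/ssh/sshd_config"):
    """Return True only if PermitRootLogin is explicitly set to 'no'."""
    with open(path) as config:
        for line in config:
            parts = line.strip().split()
            if len(parts) >= 2 and parts[0] == "PermitRootLogin":
                return parts[1].lower() == "no"
    return False  # setting absent: fail the check rather than assume safety

if __name__ == "__main__":
    print("PASS" if check_ssh_root_login() else "FAIL")
```

Now multiply that one check by several hundred per platform, and by every server in the estate, and the resource problem becomes obvious.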

The Vulnerability Scanner
Scanning systems such as Nessus, Rapid7, eEye or Qualys can be used to automatically probe a system and identify whether vulnerabilities are present. However, while a vulnerability scanner solves the time and resource problem of manual vulnerability detection, it creates a whole new range of problems of its own, while leaving one glaring flaw unresolved.

Scanning means that servers and workstations are interrogated via the network, typically using an automated series of scripts executed via psexec or ssh, working in conjunction with a dissolvable agent.
The first problem is that the dissolvable agent must be copied across the network to every host and, being dissolvable, this must be repeated for every scan, on every host. This burns up bandwidth and host resources.
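In spirit, the agentless approach amounts to something like the sketch below, which shells out to ssh to fingerprint files on each remote host. The hostnames, file paths and the direct call to sha1sum are all placeholders; a real scanner copies its dissolvable agent over and runs that instead.

```python
import subprocess

# Illustrative sketch of agentless interrogation over ssh.
# Hostnames and file paths are placeholders, not real infrastructure.
hosts = ["web01.example.com", "db01.example.com"]
files_to_check = ["/etc/passwd", "/etc/ssh/sshd_config"]

for host in hosts:
    for path in files_to_check:
        # Requires a privileged, network-reachable login on every host,
        # and repeats all of this traffic on every single scan.
        result = subprocess.run(
            ["ssh", host, "sha1sum", path],
            capture_output=True, text=True, timeout=30,
        )
        print(host, result.stdout.strip() or result.stderr.strip())
```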

Commands are run to query configuration settings and dump the contents of config files, while the dissolvable agent allows an MD5 or SHA1 hash to be calculated for each file as its ‘DNA fingerprint’.
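The fingerprint itself is simply a cryptographic hash of the file’s contents; a minimal sketch using Python’s standard hashlib:

```python
import hashlib

def file_fingerprint(path, algorithm="sha1"):
    """Return a hex digest of a file's contents: its 'DNA fingerprint'."""
    file_hash = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # Hash in chunks so large binaries need not fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            file_hash.update(chunk)
    return file_hash.hexdigest()

# Any change to the file, however small, yields a completely different
# digest, which is what makes the hash such a reliable fingerprint.
print(file_fingerprint("/etc/hosts"))
```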

And this represents a further problem: in order to verify the integrity of core system files and key configuration files, the scanner must log in with root, or near-root, privilege. This means that before you can check the security posture of your hosts, you first need to weaken security by allowing a root network login!

Finally, the results need to be analyzed by the scanning appliance, which means dragging all the gathered data back across the network and creating further load. For remote systems, the bandwidth usage and congestion problem is exaggerated further still.

For these reasons, scans always need to be scheduled outside normal working hours, to minimize server load and to be as gentle on the network as possible.

At best, this means a scan can be completed once a day for critical servers, although in a 24/7 operation, there won’t ever be a good time to scan.

This leaves some big decisions to be made. How much extra load are you prepared to place on your sensitive network infrastructure and host systems? How long will you tolerate your critical systems being left vulnerable to attack? How long are you comfortable leaving malware undetected on your key hosts?

Agent-Based FIM versus Agentless Scanner
Agent-based vulnerability detection systems such as Tripwire and NNT Change Tracker resolve these problems. An agent resident on the host removes any need for network-based interrogation, so no additional admin or root network access has to be granted to otherwise secure hosts.

The FIM agent also removes the repeated scanning load on the host and network. A one-time baseline is established, and thereafter only qualifying file changes require any activity from the agent, and therefore any use of host resources.
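Conceptually, the baseline is just a stored table of file fingerprints, and a change check is a comparison against it. A minimal sketch, in which the baseline file location and JSON format are illustrative choices:

```python
import hashlib
import json
import os

BASELINE_FILE = "baseline.json"  # illustrative location for stored hashes

def hash_file(path):
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

def build_baseline(paths):
    """One-time operation: record a fingerprint for each monitored file."""
    baseline = {p: hash_file(p) for p in paths if os.path.isfile(p)}
    with open(BASELINE_FILE, "w") as f:
        json.dump(baseline, f, indent=2)

def drift_from_baseline():
    """Report only files whose current state differs from the baseline."""
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    return [p for p, digest in baseline.items()
            if not os.path.isfile(p) or hash_file(p) != digest]

build_baseline(["/etc/hosts", "/etc/ssh/sshd_config"])
print("Changed or removed:", drift_from_baseline())
```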

Finally, an agent also provides a real-time detection capability. The best enterprise FIM agents have kernel monitoring capabilities, watching all filesystem activity and recording changes of interest as soon as they are made. Typically this applies to Linux, Windows and Solaris, but the best FIM solutions also extend to Mac OS X, and even Android and iOS.
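As a rough, user-space approximation of that real-time capability, the sketch below uses the third-party Python watchdog library to log modifications as they happen. An enterprise agent hooks filesystem activity at the kernel level rather than relying on a library like this, and the monitored path here is purely illustrative.

```python
# Rough user-space approximation of real-time change detection, using the
# third-party 'watchdog' library (pip install watchdog). An enterprise FIM
# agent hooks the kernel directly; this only illustrates the principle.
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class ChangeLogger(FileSystemEventHandler):
    def on_modified(self, event):
        if not event.is_directory:
            print("Changed:", event.src_path)

observer = Observer()
observer.schedule(ChangeLogger(), "/etc", recursive=True)  # illustrative path
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```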

Summary
FIM is well established as a means of detecting vulnerabilities, but there are still options to weigh up in the market. Agentless scanners and agent-based FIM solutions are commonly operated together, and it typically isn’t an either/or decision as to which technology is right for your network. In fact, most organizations see the benefit of a ‘second opinion’ on vulnerabilities, achieved by operating a vulnerability scanner in conjunction with a continuous FIM package.