
Wednesday, 18 December 2013

File Integrity Monitoring – 3 Reasons Why Your Security Is Compromised Without It Part 1

Introduction
This is a three-part series examining why File Integrity Monitoring is essential for the security of any business’ IT. This first part examines the need for malware detection, addressing the inevitable flaws in anti-virus systems.

Malware Detection – How Effective is Anti-Virus?
When malware hits a system - most commonly a Windows operating system, but increasingly Linux and Solaris systems are coming under threat (especially with the renewed popularity of Apple workstations running Mac OS X) - it will need to be executed in some way in order to do its evil deeds.
This means that some kind of system file – an executable, driver or DLL – has to be planted on the system. A Trojan will make sure it gets executed without further user intervention by replacing a legitimate operating system or program file. When the program runs, or the OS performs one of its regular tasks, the Trojan is executed instead.

On a user workstation, 3rd party applications such as internet browsers, PDF readers and mundane user packages like MS Word or Excel have all been targeted as vectors for intermediate malware. When the document or spreadsheet is opened, the malware can exploit vulnerabilities in the application, enabling further malware to be downloaded and executed.

Either way, there will always be a number of associated file changes. Legitimate system files are replaced or new system files are added to the system.

If you are lucky, you won’t be the first victim of this particular strain of malware and your AV system – provided it has been updated recently – will have the necessary signature definitions to identify and stop the malware.

When this is not the case, and bear in mind that millions of new malware variants are introduced every month, your system will be compromised, usually without you knowing anything about it, while the malware quietly goes about its business, damaging systems or stealing your data.

FIM – Catching the Malware That Anti-Virus Systems Miss
That is, of course, unless you are using file integrity monitoring.

Enterprise-level File Integrity Monitoring will detect any unusual filesystem activity. Unusual is important, because many files will change frequently on a system, so it is crucial that the FIM system is intelligent enough to understand what regular operation looks like for your systems and only flag genuine security incidents.

However, exclusions and exceptions should be kept to a minimum because FIM is at its best when operated with a ‘zero tolerance’ approach to changes. Malware is formulated with the objective of being effective, which means it must both be successfully distributed and operate without detection.

The challenge of distribution has seen much in the way of innovation. Tempting emails with malware bait in the form of pictures to be viewed, prizes to be won and gossip on celebrities have all been successful in spreading malware. Phishing emails provide a convincing reason to click and enter details or download forms, and specifically targeted Spear Phishing emails have been responsible for duping even the most cybersecurity-savvy user.

Whatever the vector used, once malware is welcomed into a system, it may then have the means to propagate within the network to other systems.

So early detection is of paramount importance. And you simply cannot rely on your anti-virus system to be 100% effective, as we have already highlighted.

FIM applies this ‘zero tolerance’ approach to filesystem changes. There is no second-guessing of what may or may not be malware, guaranteeing that all malware is reported and making FIM 100% effective in detecting any breach of this type.
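
To make the mechanism concrete, here is a minimal sketch (in Python, with illustrative paths, and not representing any particular vendor's product) of how a zero-tolerance integrity check works: build a baseline of cryptographic hashes for the monitored files, then flag anything that has been added, removed or modified since.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline(root: Path) -> dict[str, str]:
    """Hash every file under the monitored directory."""
    return {str(p): sha256_of(p) for p in root.rglob("*") if p.is_file()}

def compare(baseline: dict[str, str], current: dict[str, str]) -> list[str]:
    """Report added, removed and modified files - zero tolerance."""
    alerts = []
    for path, digest in current.items():
        if path not in baseline:
            alerts.append(f"ADDED:    {path}")
        elif baseline[path] != digest:
            alerts.append(f"MODIFIED: {path}")
    alerts.extend(f"REMOVED:  {p}" for p in baseline if p not in current)
    return alerts

# Illustrative usage against a hypothetical system directory
monitored = Path("/usr/local/bin")
baseline = build_baseline(monitored)
# ... later, on the next integrity check ...
for alert in compare(baseline, build_baseline(monitored)):
    print(alert)
```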

Summary
FIM is ideal as a malware detection technology as it is not prone to the ‘signature lag’ or ‘zero day vulnerabilities’ that are the Achilles’ Heel of anti-virus systems. As with most security best practices, the advice is always more is better, and operating anti-virus (even with its known flaws) in conjunction with FIM will give the best overall protection. AV is effective against legacy malware and its automated protection will quarantine most threats before they do any damage. But when malware does evade the AV, as some strains always will do, real-time FIM can provide a vital safety net.

Wednesday, 11 December 2013

Which File Integrity Monitoring Technology Is Best – Pure-Play FIM or SIEM FIM?

Introduction
Within the FIM technology market there are choices to be made. Agent-based or agentless is the most common choice, but even then there are both SIEM and ‘pure-play’ FIM solutions to choose between.


FIM – Agents or Agentless

There is never a clear advantage for either agent-based or agentless FIM. There is a balance to be found between the convenience of agentless FIM and the arguably superior operation of agent-based FIM, which offers
  • Real-time detection of changes – agentless FIM scanners can only be effective on a scheduled basis, typically once every day
  • Locally stored baseline data meaning a one-off full scan is all that is needed, while a vulnerability scanner will always need to re-baseline and hash every single file on the system each time it scans
  • Greater security by being self-contained, whereas an agentless FIM solution will require a logon and network access to the host under test
Conversely, proponents of the Agentless vulnerability scanner will cite the advantages of their technology over an agent-based FIM system, including
  • Up and running in minutes, with no need to deploy and maintain agents on end points, makes an agentless system easier to operate
  • No need to load any 3rd party software onto endpoints, an agentless scanner is 100% self-contained
  • Foreign or new devices being added to a network will always be discovered by an agentless scanner, while an agent-based system is only effective where agents have been deployed onto known hosts
For these reasons there is no outright winner of this argument and, typically, most organizations run both types of technology in order to benefit from all the advantages offered.
Using SIEM for FIM

The SIEM question is much easier to deal with. As with the agentless argument, a SIEM system may be operated without requiring any agent software on the endpoints, using WMI or the native syslog capabilities of the host. However, this is typically seen as an inferior solution to the agent-based SIEM package: an agent allows for advanced security functions such as hashing and real-time log monitoring.

For FIM, all SIEM vendors rely on a combination of host object access auditing and a scheduled baseline of the filesystem. Auditing filesystem activity can give real-time FIM capabilities, but it demands substantially more host resources than a lightweight agent. The native auditing of the OS also provides no hash values for files, so the forensic detection of a Trojan cannot be achieved to the extent that an enterprise FIM agent achieves it.

The SIEM vendors have moved to address this problem by providing a scheduled baseline and hash function using an agent. The result is a solution that is the worst of all options – an agent must be installed and maintained, but without the benefits of a real-time agent!

Summary

In summary, SIEM is best used for event log analysis and FIM is best used for File Integrity Monitoring. Whether you then decide to use an agent-based FIM solution or an agentless system is a tougher call. In all likelihood, the conclusion will be that a combination of the two is the only complete solution.

Monday, 4 November 2013

Is Your QSA Making You Less Secure?

Introduction
Most organizations will turn to a QSA when undertaking a PCI Compliance project. A Qualified Security Assessor is the guy you need to satisfy with any security measures and procedures you implement to meet compliance with the PCI DSS so it makes sense to get them to tell you what you need to do.
For many, PCI Compliance is about simply dealing with the PCI DSS in the same way they would deal with another deadlined project. When does the bank want us to be PCI Compliant and what do we need to do before we get audited in order to get a pass?

For many, this is where the problems begin because, of course, PCI compliance isn’t simply about passing an audit but about getting your organization sufficiently organized and aware of the need to protect cardholder data at all times. The cliché in PCI circles is ‘don’t take a checkbox approach to compliance’, but it is true. Passing the audit is a tangible goal, but it should only be a milestone along the way to maturing internal processes and procedures so that you operate a secure environment every day of the year, not just drag your organization through an annual audit.

The QSA Moral Maze
However, for many, the QSA is hired to ‘make PCI go away’ and this can sometimes present a dilemma. QSAs are in business and need to compete for work like any other commercial venture. They are typically fiercely independent and take their responsibility seriously for providing expert guidance, however, they also have bills to pay.

Some get caught by the conflict of interest between advising the implementation of measures and offering to supply the goods required. This presents a difficult choice for the customer – go along with what the QSA says, and buy whatever they sell you, or go elsewhere for any kit required and risk the valuable relationship needed to get through the audit. Whether this is for new firewalls, scanning or Pen Testing services, or FIM and Logging/SIEM products, too many Merchants have been left to make difficult decisions. The simple solution is to separate your QSA from supplying any other service or product for your PCI project, but make sure this is clarified up front.

The second common conflict of interest is one that affects any kind of consultant. If you are being paid by the day for your services, would you want the engagement to be shorter or longer? If you had the opportunity to influence the duration of the engagement, would you fight for it to be ended sooner, or be happy to let it run longer?

Let’s not be too cynical over this – the majority of Merchants have paid widely differing amounts for their QSA services but have been delighted with the value for money received. But we have had one experience recently where the QSA asked for repeated network and system architecture re-designs and recommended that firewalls be replaced with more advanced versions with better IPS capabilities. In both instances the QSA was giving accurate and proper advice; however, one unfortunate side-effect was that the Merchant delayed implementation of other PCI DSS requirements. The result in this case is that the QSA actually delayed security measures being put in place – in other words, the security expert’s advice was to prolong the organization’s weak security posture!

Conclusion
The QSA community is a rich source of security experience and expertise, and who better to help navigate an organization through a PCI Program than those responsible for conducting the audit for compliance with the standard? However, best practice is to separate the QSA from any other aspect of the project. Secondly, self-educate and help yourself by becoming familiar with security best practices – it will save time and money if you can empower yourself instead of paying by the day to be taught the basics. Finally, don’t delay implementing security measures – you know your systems better than anyone else, so don’t pay to prolong your project! Seize responsibility for de-scoping your environment where possible, then apply basic best practices to the remaining systems in scope – harden, implement change controls, measure effectiveness using file integrity monitoring and retain audit trails of all system activity. It’s simpler than your QSA might lead you to believe.

Thursday, 12 September 2013

Cyber Threat Sharing Bill and Cyber Incident Response Scheme – Shouldn’t We Start with System Hardening and FIM?

Background: Defending the Nation’s Critical Infrastructure from Cyber Attacks

In the UK, HM Government’s ‘Cyber Incident Response Scheme’ is closely aligned in intent and purpose to the forthcoming US Cyber Threat Sharing Bill.

The driver for both mandates is that, in order to defend against truly targeted, stealthy cyber attacks (APTs, if you like), there will need to be a much greater level of awareness and collaboration. This becomes a government issue when the nation’s critical infrastructures (defense, air traffic control, health service, power and gas utilities etc.) are concerned. Stuxnet proved that cyber attacks against critical national infrastructure can succeed and there isn’t a government anywhere in the world who doesn’t have concerns that they could be next.

The issues are clear: a breach could happen, despite best efforts to prevent it. But in the event that a security breach is discovered, identifying the nature of the threat and then properly communicating this to others at risk is time-critical. The damage may already be done at one facility, but heading it off before it affects other locations becomes the new priority: the impact of the breach can be isolated if swift and effective mitigating actions can be taken at other organizations subject to the same cyber threat.

As it stands, the US appears to be going further, legislating via the Cyber Threat Sharing Bill. The UK Government has created the Cyber Incident Response Scheme, but without this being a legislated and regulated requirement it may suffer from slow adoption. Why wouldn’t the UK do likewise if it is taking national cyber security as seriously?

System Hardening and FIM

Prevention Better Than Cure?

One other observation, based on experience with other ‘top down’ mandated security standards such as the PCI DSS, is that there is a temptation for the authorities to prioritize specific security best practices over others. Being able to give an ‘If you only do one thing for security, it’s this…’ message gets things moving in the right direction; however, it can also lead to a false sense of security amongst the community that, because the mandated steps have been taken, the organization is now ‘secure’.

In the case of the UK initiative, the collaboration with CREST is sound in that it provides a degree of ‘quality control’ over the resources recommended for usage. However, the concern is that the emphasis of the CREST scheme may be biased too heavily towards penetration testing. Whilst this is a good, basic security practice, pen testing is either too infrequent, or too automated (and therefore breeds complacency). Better than doing nothing? Absolutely – but the program should not be stopped there.

A truly secure environment is one where all security best practices are understood and embedded within the organization and operated constantly. Likewise, vulnerability monitoring and system integrity should be a non-stop process, not a quarterly or ad hoc pen test. Real-time file integrity monitoring, continuously assessing devices for compliance with a hardened build standard and identifying all system changes is the only way to truly guarantee security.

Wednesday, 4 September 2013

PCI DSS Version 3 and File Integrity Monitoring – New Standard, Same Problems

PCI DSS Version 3.0

PCI DSS Version 3 will soon be with us. Such is the anticipation that the PCI Security Standards Council have released a sneak preview ‘Change Highlights’ document.
The updated Data Security Standard highlights include a wagging finger statement which may be aimed at you if you are a Merchant or Acquiring Bank.

“Cardholder data continues to be a target for criminals. Lack of education and awareness around payment security and poor implementation and maintenance of the PCI Standards leads to many of the security breaches happening today”

In other words, a big part of the drive for the new version of the standard is to give it some fresh impetus. Just because the PCI DSS isn’t new, it doesn’t make it any less relevant today.


But What is the Benefit of the PCI DSS for Us?

To understand just how relevant cardholder data protection is, the hard facts are outlined in the recent Nilson Report: global card fraud losses now exceed $11 billion. It’s not all bad news if you are a card brand or issuing bank – the losses are made slightly more bearable by the fact that the total value of transactions now exceeds $21 trillion.

http://www.nilsonreport.com/publication_the_current_issue.php?1=1

“Card issuer losses occur mainly at the point of sale from counterfeit cards. Issuers bear the fraud loss if they give merchants authorization to accept the payment. Merchant and acquirer losses occur mainly on card-not-present (CNP) transactions on the Web, at a call center, or through mail order”

This is why the PCI DSS exists and needs to be taken seriously with all requirements fully implemented, and practised daily. Card fraud is a very real problem and as with most crimes, if you think it won’t happen to you, think again. Ignorance, complacency and corner-cutting are still the major contributors to card data theft.

The changes are very much in line with NNT’s methodology of continuous, real-time security validation for all in scope systems – the PCI SSC state that the changes in version 3 of the standard include “Recommendations focus on helping organizations take a proactive approach to protect cardholder data that focuses on security, not compliance, and makes PCI DSS a business-as-usual practice”

So instead of this being a ‘Once a year, get some scans done, patch everything, get a report done from a QSA then relax for another 11 months’ exercise, the PCI SSC are trying to educate and encourage merchants and banks to embed or entrench security best practices within their everyday operations, and be PCI Compliant as a natural consequence of this.

Continuous FIM – The Foundation of PCI Compliance

In fact, taking a continuous FIM approach as the starting point for security and PCI compliance makes much sense. It doesn’t take long to set up, it will only tell you if you need to take action when you need to do so, will help to define a hardened build standard for your systems and will drive you to adopt the necessary discipline for change control, plus it will give you full peace of mind that systems are being actively protected at all times, 100% in line with PCI DSS requirements.

Monday, 8 July 2013

File Integrity Monitoring – FIM and Why Change Management is the Best Security Measure You Can Implement

Introduction
With the growing awareness that cyber security is an urgent priority for any business, there is a ready market for automated, intelligent security defenses. The silver bullet against malware and data theft is still being developed (promise!) but in the meantime there are hordes of vendors out there who will sell you the next best thing.

The trouble is, who do you turn to? According to, say, the Palo Alto firewall guy, his appliance is the main thing you need to best protect your company’s intellectual property, although if you then speak to the guy selling the FireEye sandbox, he may well disagree, saying you need one of his boxes to protect your company from malware. Even then, the McAfee guy will tell you that endpoint protection is where it’s at – their Global Threat Intelligence approach should cover you for all threats.

In one respect they are all right, all at the same time – you do need a layered approach to security defenses and you can almost never have ‘too much’ security. So is the answer as simple as ‘buy and implement as many security products as you can’?

Cyber Security Defenses– Can You Have Too Much of a Good Thing?
Before you draw up your shopping list, be aware all this stuff is really expensive, and the notion of buying a more intelligent firewall to replace your current one, or of purchasing a sandbox appliance to augment what your MIMEsweeper already largely provides, demands a pause for thought. What is the best return on investment available, considering all the security products on offer?

Arguably, the best value for money security product isn’t really a product at all. It doesn’t have any flashing lights, or even a sexy looking case that will look good in your comms cabinet, and the datasheet features don’t include any impressive packets per second throughput ratings. However, what a good Change Management process will give you is complete visibility and clarity of any malware infection, any potential weakening of defenses plus control over service delivery performance too.

In fact, many of the best security measures you can adopt may come across as a bit dull (compared to a new piece of kit for the network, what doesn’t seem dull?) but, in order to provide a truly secure IT environment, security best practices are essential.

Change Management – The Good, The Bad and The Ugly (and The Downright Dangerous)
There are four main types of changes within any IT infrastructure
  • Good Planned Changes (expected and intentional, which improve service delivery performance and/or enhance security)
  • Bad Planned Changes (intentional, expected, but poorly or incorrectly implemented which degrade service delivery performance and/or reduce security)
  • Good Unplanned Changes (unexpected and undocumented, usually emergency changes that fix problems and/or enhance security)
  • Bad Unplanned Changes (unexpected, undocumented, and which unintentionally create new problems and/or reduce security)
A malware infection, implanted intentionally by an inside man or an external hacker, also falls into the last category of Bad Unplanned Changes, as does a rogue developer implanting a backdoor into a corporate application. The fear of a malware infection – be it a virus, Trojan or the new buzzword in malware, an APT – is typically the main concern of the CISO, and it helps sell security products, but should it be?

A Bad Unplanned Change that unintentionally renders the organization more prone to attack is a far more likely occurrence than a malware infection, since every change that is made within the infrastructure has the potential to reduce protection. Developing and implementing a Hardened Build Standard takes time and effort, but undoing painstaking configuration work only takes one clumsy engineer to take a shortcut or enter a typo. Every time a Bad Unplanned Change goes undetected, the once secure infrastructure becomes more vulnerable to attack so that when your organization is hit by a cyber-attack, the damage is going to be much, much worse.

To this end, shouldn’t we be taking Change Management much more seriously and reinforcing our preventative security measures, rather than putting our trust in another gadget which will still be fallible where Zero Day Threats, Spear Phishing and straightforward security incompetence are concerned?

The Change Management Process in 2013 – Closed Loop and Total Change Visibility
The first step is to get a Change Management Process – for a small organization, just a spreadsheet or a procedure to email everyone concerned to let them know a change is going to be made at least gives some visibility and some traceability if problems subsequently arise. Cause and Effect generally applies where changes are made – whatever changed last is usually the cause of the latest problem experienced.
Which is why, once changes are implemented, there should be some checks made that everything was implemented correctly and that the desired improvements have been achieved (which is what makes the difference between a Good Planned Change and a Bad Planned Change).

For simple changes, say a new DLL is deployed to a system, this is easy to describe and straightforward to review and check. For more complicated changes, the verification process is similarly much more complex. Unplanned Changes, Good and Bad, present a far more difficult challenge. What you can’t see, you can’t measure and, by definition, Unplanned Changes are typically performed without any documentation, planning or awareness.

Contemporary Change Management systems utilize File Integrity Monitoring to provide zero tolerance to changes: if a change is made – whether to a configuration attribute or to the filesystem – it will be recorded.

In advanced FIM systems, the concept of a time window or change template can be pre-defined in advance of a change to provide a means of automatically aligning the details of the RFC (Request for Change) with the actual changes detected. This provides an easy means to observe all changes made during a Planned Change, and greatly improve the speed and ease of the verification process.

This also means that any changes detected outside of any defined Planned Change can immediately be categorized as Unplanned, and therefore potentially damaging, changes. Investigation becomes a priority task, but with a good FIM system, all the changes recorded are clearly presented for review, ideally with ‘Who Made the Change?’ data.
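
As a sketch of that reconciliation logic (the RFC fields and values here are hypothetical, not any particular product's schema), each detected change is tested against the open planned-change windows, and anything falling outside them is immediately labelled unplanned:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlannedChange:          # hypothetical RFC record
    rfc_id: str
    start: datetime
    end: datetime
    paths: set[str]           # files the RFC is expected to touch

@dataclass
class DetectedChange:         # hypothetical FIM change event
    path: str
    detected_at: datetime
    changed_by: str

def classify(change: DetectedChange, rfcs: list[PlannedChange]) -> str:
    """Label a detected change as planned (matched to an RFC) or unplanned."""
    for rfc in rfcs:
        if (rfc.start <= change.detected_at <= rfc.end
                and change.path in rfc.paths):
            return f"PLANNED under {rfc.rfc_id}"
    return "UNPLANNED - investigate as a priority"

rfcs = [PlannedChange("RFC-1042",
                      datetime(2013, 7, 8, 22, 0),
                      datetime(2013, 7, 9, 2, 0),
                      {r"C:\App\bin\service.dll"})]
event = DetectedChange(r"C:\Windows\System32\drivers\etc\hosts",
                       datetime(2013, 7, 9, 3, 15), "SYSTEM")
print(classify(event, rfcs))   # -> UNPLANNED - investigate as a priority
```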

Summary
Change Management is always featured heavily in any security standard, such as the PCI DSS, and in any Best Practice framework such as SANS Top Twenty, ITIL or COBIT.

If Change Management is not yet part of your IT processes, or your existing process is not fit for purpose, maybe this should be addressed as a priority? Coupled with a good Enterprise File Integrity Monitoring system, Change Management becomes a much more straightforward process, and this may just be a better investment right now than any flashy new gadgets.

Monday, 1 July 2013

File Integrity Monitoring – Use FIM to Cover All the Bases

Why use FIM in the first place?
Unlike anti-virus and firewalling technology, FIM is not yet seen as a mainstream security requirement. In some respects, FIM is similar to data encryption, in that both are undeniably valuable security safeguards to implement, but both are used sparingly, reserved for niche or specialized security requirements.

How does FIM help with data security?
At a basic level, File Integrity Monitoring will verify that important system files and configuration files have not changed, in other words, the files’ integrity has been maintained.

Why is this important? In the case of system files – program, application or operating system files – these should only change when an update, patch or upgrade is implemented. At other times, the files should never change.

Most security breaches involving theft of data from a system will either use a keylogger to capture data being entered into a PC (the theft then perpetrated via a subsequent impersonated access), or some kind of data transfer conduit program, used to siphon off information from a server. In all cases, there has to be some form of malware implanted onto the system, generally operating as a Trojan i.e. the malware impersonates a legitimate system file so it can be executed and provided with access privileges to system data.

In these instances, a file integrity check will detect the Trojan’s existence, and given that zero-day threats or targeted APT (advanced persistent threat) attacks will evade anti-virus measures, FIM comes into its own as a must-have security defense measure. To give the necessary peace of mind that a file has remained unchanged, the file attributes governing security and permissions, as well as the file length and cryptographic hash value, must all be tracked.
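
As an illustration of what ‘tracked’ means in practice, this minimal sketch (paths are illustrative) captures the permissions, ownership, size and SHA-256 hash of a file so that a change to any one of them can raise an alert:

```python
import hashlib
import stat
from pathlib import Path

def file_fingerprint(path: Path) -> dict:
    """Capture the attributes an integrity check needs to compare over time."""
    st = path.stat()
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return {
        "mode":   stat.filemode(st.st_mode),   # permissions, e.g. -rw-r--r--
        "owner":  st.st_uid,
        "group":  st.st_gid,
        "size":   st.st_size,
        "sha256": digest,
    }

# Any difference between the stored and current fingerprint is a change
before = file_fingerprint(Path("/etc/passwd"))
# ... later, on the next integrity check ...
after = file_fingerprint(Path("/etc/passwd"))
changed = {k for k in before if before[k] != after[k]}
print("changed attributes:", changed or "none")
```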

Similarly, for configuration files, computer configuration settings that restrict access to the host, or restrict privileges for users of the host must also be maintained. For example, a new user account provisioned for the host and given admin or root privileges is an obvious potential vector for data theft – the account can be used to access host data directly, or to install malware that will provide access to confidential data.

File Integrity Monitoring and Configuration Hardening
Which brings us to the subject of configuration hardening. Hardening a configuration is intended to counteract the wide range of potential threats to a host and there are best practice guides available for all versions of Solaris, Ubuntu, RedHat, Windows and most network devices. Known security vulnerabilities are mitigated by employing a fundamentally secure configuration set-up for the host.

For example, a key basic for securing a host is via a strong password policy. For a Solaris, Ubuntu or other Linux host, this is implemented by editing the /etc/login.defs file or similar, whereas a Windows host will require the necessary settings to be defined within the Local or Group Security Policy. In either case, the configuration settings exist as a file that can be analyzed and the integrity verified for consistency (even if, in the Windows case, this file may be a registry value or the output of a command line program).
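
By way of illustration only, the sketch below parses a couple of password-ageing values from /etc/login.defs and compares them against thresholds from a hypothetical hardened build standard; a real checklist would of course contain many more checks:

```python
from pathlib import Path

# Hypothetical hardened build standard thresholds
POLICY = {"PASS_MAX_DAYS": ("<=", 90), "PASS_MIN_LEN": (">=", 8)}

def audit_login_defs(path: str = "/etc/login.defs") -> list[str]:
    """Compare password-ageing settings against the hardening policy."""
    settings = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            parts = line.split()
            if len(parts) >= 2:
                settings[parts[0]] = parts[1]

    findings = []
    for key, (op, threshold) in POLICY.items():
        if key not in settings:
            findings.append(f"{key} is not set - policy requires {op} {threshold}")
            continue
        value = int(settings[key])
        compliant = value <= threshold if op == "<=" else value >= threshold
        if not compliant:
            findings.append(f"{key}={value} violates policy ({op} {threshold})")
    return findings

for finding in audit_login_defs():
    print(finding)
```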

Therefore file integrity monitoring ensures a server or network device remains secure in two key dimensions: protected from Trojans or other system file changes, and maintained in a securely defended or hardened state.

File integrity assured – but is it the right file to begin with?
But is it enough to just use FIM to ensure system and configuration files remain unchanged? Doing so guarantees that the system being monitored remains in its original state, but there is a risk of perpetuating a bad configuration – a classic case of ‘junk in, junk out’ computing – if the system was built from an impure source in the first place. The recent Citadel keylogger scam is estimated to have netted over $500M in funds stolen from bank accounts where PCs were set up using pirated Windows operating system DVDs, each one with keylogger malware included free of charge.

In the corporate world, OS images, patches and updates are typically downloaded directly from the manufacturer website, therefore providing a reliable and original source. However, the configuration settings required to fully harden the host will always need to be applied and in this instance, file integrity monitoring technology can provide a further and invaluable function.

The best Enterprise FIM solutions can not only detect changes to configuration files/settings, but also analyze the settings to ensure that best practice in security configuration has been applied.

In this way, all hosts can be guaranteed to be secure and set-up in line with not just industry best practice recommendations for secure operation, but with any individual corporate hardened build-standard.
A hardened build-standard is a pre-requisite for secure operations and is mandated by all formal security standards such as PCI DSS, SOX, HIPAA, and ISO27K.

Conclusion
Even if FIM is being adopted simply to meet the requirements of a compliance audit, there is a wide range of benefits to be gained over and above simply passing the audit.

Protecting host systems from Trojan or malware infection cannot be left solely to anti-virus technology. The AV blind-spot for zero day threats and APT-type attacks leaves too much doubt over system integrity not to utilize FIM for additional defense.

But preventing breaches of security is the first step to take, and hardening a server, PC or network device will fend off all non-insider infiltrations. Using a FIM system with auditing capabilities for best practice secure configuration checklists makes expert-level hardening straightforward.

Don’t just monitor files for integrity – harden them first!

Wednesday, 19 June 2013

File Integrity Monitoring - View Security Incidents in Black and White or in Glorious Technicolor?

The PCI DSS and File Integrity Monitoring
FIM, or file integrity monitoring, has long been established as a keystone of information security best practices. Even so, there are still a number of common misunderstandings about why FIM is important and what it can deliver.

Ironically, the key contributor to this confusion is the same security standard that introduces most people to FIM in the first place by mandating the use of it - the PCI DSS.

PCI DSS Requirement 11.5 specifically uses the term 'file integrity monitoring' in relation to the need "to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly".

As such, since the term 'file integrity monitoring' is only mentioned in requirement 11.5, one could be forgiven for concluding that this is the only part FIM has to play within the PCI DSS.

In fact, the application of FIM is and should be much more widespread in underpinning a solid secure posture for an IT estate. For example, other key requirements of the PCI data security standard are all best addressed using file integrity monitoring technology, such as "Establish firewall and router configuration standards" (Req 1), "Develop configuration standards for all system components" (Req 2), "Develop and maintain secure systems and applications" (Req 6), "Restrict access to cardholder data by business need to know" (Req 7), "Ensure proper user identification and authentication management for non-consumer users and administrators on all system components" (Req 8) and "Regularly test security systems and processes" (Req 11).
Within the confines of Requirement 11.5 only, many interpret this requirement as a simple 'has the file changed since last week?' and, taken in isolation, this would be a legitimate conclusion to reach. However, as highlighted earlier, the PCI DSS is a network of linked and overlapping requirements, and the role for file integrity analysis is much broader, underpinning other requirements for configuration hardening, configuration standards enforcement and change management.

But this isn't just an issue with how merchants read and interpret the PCI DSS. The new wave of SIEM vendors in particular are keen to take this narrow definition as 'secure enough' and for good, if selfish, reasons.

Do everything with SIEM - or is FIM + SIEM the right solution?

PCI requirement 10 is all about logging and the need to generate the necessary security events, backup log files and analyze the details and patterns. In this respect a logging system is going to be an essential component of your PCI DSS toolset.

SIEM or Event log management systems all rely on some kind of agent or polled-WMI method for watching log files. When the log file has new events appended to it, these new events are picked up by the SIEM system, backed up centrally and analyzed for either explicit evidence of security incidents or just unusual activity levels of any kind that may indicate a security incident. This approach has been expanded by many of the SIEM product vendors to provide a basic FIM test on system and configuration files and determine whether any files have changed or not.

A changed system file could reveal that a Trojan or other malware has infiltrated the host system, while a changed configuration file could weaken the host's inherently secure 'hardened' state making it more prone to attack. The PCI DSS requirement 11.5 mentioned earlier does use the word 'unauthorized' so there is a subtle reference to the need to operate a Change Management Process. Unless you can categorize or define certain changes as 'Planned', 'Authorized' or expected in some way, you have no way to label other changes as 'unauthorized' as is required by the standard.

So in one respect, this level of FIM is a good means of protecting your secure infrastructure. However, in practice, in the real-world, 'black and white' file integrity monitoring of this kind is pretty unhelpful and usually ends up giving the Information Security Team a stream of 'noise' - too many spurious and confusing alerts, usually masking the genuine security threats.

Potential security events? Yes.
Useful, categorized and intelligently assessed security events? No.

So if this 'changed/not changed' level of FIM is the black and white view, what is the Technicolor alternative? If we now talk about true Enterprise FIM (to draw a distinction from basic, SIEM-style FIM), this superior level of FIM provides file changes that have been automatically assessed in context - is this a good change or a bad change?

For example, if a Group Policy security setting is changed, how do you know whether this increases or decreases the policy's protection? Enterprise FIM will not only report the change, but expose the exact details of what changed, whether it was a planned or unplanned change, and whether it violates or complies with your adopted Hardened Build Standard.
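
A sketch of that assessment for a numeric policy setting follows; the setting name and thresholds are hypothetical, but the point is that the alert states not just 'changed' but whether the change strengthens or weakens the hardened build standard:

```python
def assess_policy_change(setting: str, old: int, new: int,
                         minimum_required: int) -> str:
    """Judge whether a detected change strengthens or weakens protection."""
    if new < minimum_required:
        verdict = "VIOLATES the hardened build standard"
    elif new > old:
        verdict = "compliant - protection increased"
    else:
        verdict = "compliant - but protection reduced, review the RFC"
    return f"{setting}: {old} -> {new} ({verdict})"

# Example: 'Minimum password length' drops from 12 to 6, against a
# hypothetical standard requiring at least 8 characters
print(assess_policy_change("MinimumPasswordLength", 12, 6, 8))
```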

Better still, Enterprise FIM can give you an immediate snapshot of whether databases, servers, EPoS systems, workstations, routers and firewalls are secure - configured within compliance of your Hardened Build Standard or not. By contrast, a SIEM system is completely blind to how systems are configured unless a change occurs.

Conclusion

The real message is that trying to meet your responsibilities with respect to PCI Compliance requires an inclusive understanding of all PCI requirements. Requirements taken in isolation and too literally may leave you with a 'noisy' PCI solution, helping to mask rather than expose potential security threats. In conclusion, there are no short cuts in security - you will need the right tools for the job. A good SIEM system is essential for addressing Requirement 10, but an Enterprise FIM system will give you so much more than just ticking the box for Req 11.5.

Full color is so much better than black and white.

Wednesday, 12 June 2013

File Integrity Monitoring - FIM Could Just Save Your Business

Busted! The Citadel Cybercrime Operation

No guns were used, no doors were forced open, and no masks or disguises were worn, but up to $500 million has been stolen from businesses and individuals around the world. Reuters reported last week that one of the world's biggest ever cybercrime rings has just been shut down. The Citadel botnet operation, first exposed in August last year, shows that anyone who wants to think big when it comes to cybercrime can make truckloads of money without even leaving home.

It's a familiar story of basic identity theft - PCs used to access on-line bank accounts were infiltrated by keylogging malware known as Citadel. This allowed security credentials to be stolen and then used to steal money from the victims' bank accounts. The malware had been in operation for up to 18 months and had affected up to 5 million PCs.

Like any malware, until it has been discovered, isolated and understood, anti-virus technology cannot tackle malware like Citadel. So-called 'zero day' malware can operate undetected until such time as an anti-virus definition has been formulated to recognize the malware files and remove them.

This is why file integrity monitoring software is also an essential defense measure against malware. File integrity monitoring or FIM technology works on a 'zero tolerance' basis, reporting any changes to operating system and program filesystems. FIM ensures that nothing changes on your protected systems without being reported for validation. For example, a Windows Update will result in file changes, but provided you are controlling when and how updates get applied, you can then isolate any unexpected or unplanned changes, which could be evidence of a malware infection. Good FIM systems filter out expected, regular file changes and focus attention on those system and configuration files which, under normal circumstances, do not change.

A victimless crime? Maybe not if you're a business that has been affected

In a situation like this, banks will usually try and unravel the problem between themselves - bank accounts that have been plundered will have had money moved to another bank account and another bank account and so on, and attempts will be made to recover any misappropriated funds. Inevitably some of the cash will have been spent but there is also a good chance that large sums can be recovered.
Generally speaking, individuals affected by identity theft or credit card fraud will have their funds reimbursed by their bank and the banking system as a whole, so it often feels like a victimless crime has been perpetrated.

Worryingly though, in this case, an American Bankers Association spokesman has been reported as saying that 'banks may require business customers to incur the losses'. It isn't clear as to why the banks may be seeking to place blame on business customers in this case. It is reported that Citadel was present in illegally pirated copies of Windows, so the victims may well be guilty of using counterfeit software, but who is to blame, and how far down the line can the blame be passed? The business customer, their supplier of the pirated software, the wholesaler who supplied the supplier?

Either way, any business user of on-line banking technology (and the consensus of estimates suggests that around half of businesses do at least 50% of their banking on-line, and that this is increasing year on year) should take protecting access to their bank account seriously. It could well be that nobody else is looking out for you.

Conclusion

It may still be the case that 'Crime doesn't pay' but it seems that Cybercrime can pay handsomely. But for cybercrime to work, there needs to be a regular supply of victims and in this case, victims not using any kind of file integrity monitoring are leaving themselves exposed to zero-day malware which is currently invisible to anti-virus systems.

Good security is not just about installing AV software or even operating FIM but should be a layered and integrated approach. Leveraging security technology such as AV, FIM, firewalling, IDS and IPS should be done in conjunction with sound operating procedures to harden and patch systems regularly, verified with a separate auditing and governance function.
The biggest security threat is still complacency.


Wednesday, 29 May 2013

File Integrity Monitoring - Is FIM Better Than AV? Is a Gun Better Than a Knife?

Is a gun better than a knife?

I've been trying hard for an analogy, but this one kind of works. Which is better? A gun or a knife?
Both will help defend you against an attacker. A gun may be better than a knife if you are under attack from a big group of attackers running at you, but without ammunition, you are left defenseless. The knife works without ammunition and always provides a consistent deterrent, so in some respects, gives better protection than a gun.

Which is not a bad way to introduce the concept of FIM versus anti-virus technology. Anti-virus technology will automatically eliminate malware from a computer, usually before it has done any damage. Both at the point at which malware is introduced to a computer - through email, download or USB - and at the instant at which a malware file is accessed, the AV will scan for known malware. If the file is identified as a known virus, or even if it exhibits characteristics that are associated with malware, the infected files can be removed from the computer.

However, if the AV system doesn't have a definition for the malware at hand, then like a gun with an empty magazine, it can't do anything to help.

File Integrity Monitoring by contrast may not be quite so 'active' in wiping out known malware, but - like a knife - it never needs ammo to maintain its role as a defense against malware. A FIM system will always report potentially unsafe filesystem activity, albeit with intelligence and rules to ignore certain activities that are always defined safe, regular or normal.

AV and FIM versus the Zero Day Threat

The key points to note from the previous description of AV operation are that the virus must either be 'known', i.e. the virus has been identified and categorized by the AV vendor, or the malware must 'exhibit characteristics associated with malware', i.e. it looks, feels and acts like a virus. Anti-virus technology works on the principle that it has a regularly updated 'signature' or 'definition' list containing details of known malware. Any time a new file is introduced to the computer, the AV system has a look at the file and, if it matches anything on its list, the file gets quarantined.

In other words, if a brand new, never-been-seen-before virus or Trojan is introduced to your computer, it is far from guaranteed that your AV system will do anything to stop it. Ask yourself - if AV technology was perfect, why would anybody still be concerned about malware?

The lifecycle of malware can be anything from 1 day to 2 years. The malware must first be seen - usually a victim will notice symptoms of the infection and investigate before reporting it to their AV vendor. At that point the AV vendor will work out how to counteract the malware in the future, and update their AV system definitions/signature files with details of this new malware strain. Finally, the definition update is made available to the world; individual servers and workstations around the world update themselves and are thereafter rendered immune to this virus. Even if this process takes a day to conclude, that is a pretty good turnaround - after just one day the world is safe from the threat.

However, up until this time the malware is a problem. Hence the term 'Zero Day Threat' - the dangerous time is between 'Day Zero' and whichever day the inoculating definition update is provided.

By contrast, a FIM system will detect the unusual filesystem activity - either at the point at which the malware is introduced or when the malware becomes active, creating files or changing server settings to allow it to report back the stolen data.

Where is FIM better than AV?

As outlined previously, FIM needs no signatures or definitions to try and second guess whether a file is malware or not and it is therefore less fallible than AV.

Where FIM provides a distinct advantage over AV is that it offers far better preventative measures. Anti-virus systems are based on a reactive model - a 'try and stop the threat once the malware has hit the server' approach to defense.

An Enterprise FIM system will not only keep watch over the core system and program files of the server, watching for malware introductions, but will also audit all the server's built-in defense mechanisms. The process of hardening a server is still the number one means of providing a secure computing environment and prevention, as we all know, is better than cure. Why try and hope your AV software will identify and quarantine threats when you can render your server fundamentally secure via a hardened configuration?
Add to this that Enterprise FIM can be used to harden and protect all components of your IT estate - including Windows, Linux, Solaris, Oracle, SQL Server, firewalls, routers, workstations, POS systems and so on - and you are now looking at an absolutely essential IT security defense system.

Conclusion

This article was never going to be about whether you should implement FIM or AV protection for your systems. Of course, you need both, plus some good firewalling, IDS and IPS defenses, all wrapped up with solid best practices in change and configuration management, all scrutinized for compliance via comprehensive audit trails and procedural guidelines.

Unfortunately there is no real 'making do' or cutting corners when it comes to IT Security. Trying to compromise on one component or another is a false economy and every single security standard and best practice guide in the world agrees on this.

FIM, AV, auditing and change management should be mandatory components in your security defenses.

Tuesday, 7 May 2013

File Integrity Monitoring – Database Security Hardening Basics

The Database – The Mother Lode of Sensitive Data
Being the heart of any corporate application means your database technology must be implemented and configured for maximum security. Whilst the desire to ‘get the database as secure as possible’ appears to be a clear objective, what does ‘as secure as possible’ actually mean?

Whether you use Oracle 10g, Oracle 11g, DB2, Microsoft SQL Server, or even MySQL or PostgreSQL, a contemporary database is at least as complex as any modern server operating system. The database system will comprise a whole range of configuration parameters, each with security implications, including
  • User accounts and password settings
  • Roles and assigned privileges
  • File/object permissions
  • Schema structure
  • Auditing functions
  • Networking capabilities
  • Other security defense settings, for example, use of encryption
Hardened Build Standard for Oracle, SQL Server, DB2 and others
Therefore, just as with any Windows or Linux OS, there is a need to derive a hardened build standard for the database. This security policy or hardened build standard will be derived from collected best practices in security configuration and vulnerability mitigation/remediation, and just as with an operating system, the hardening checklist will comprise hundreds of settings to check and set for the database.
Depending on the scale of your organization, you may then need hardening checklists for Oracle 10g, Oracle 11g, SQL Server, DB2, PostgreSQL and MySQL, and maybe other database systems besides.

Automated Compliance Auditing for Database Systems
Potentially, there will be a requirement to verify that all databases are compliant with your hardened build standard, involving hundreds of checks across hundreds of database systems, so automation is essential, not least because the hardening checklists are complex and time-consuming to verify. There is also something of a conflict to manage, in as much as the user performing the checklist tests will necessarily require administrator privileges to do so. In order to verify that the database is secure, you potentially need to loosen security by granting admin rights to the user carrying out the audit. This is a further driver for moving the audit function to a secure and automated tool.

In fact, given that security settings could be changed at any time by any user with privileges to do so, verifying compliance with the hardened build standard should also become a regular task. Whilst a formal compliance audit might be conducted once a year, guaranteeing security 365 days a year requires automated tracking of security settings, providing continuous reassurance that sensitive data is being protected.
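
The comparison step itself is mechanical once the values have been collected. The sketch below assumes the settings have already been fetched from the database (for example via MySQL's SHOW VARIABLES or the equivalent catalog views in Oracle or SQL Server); the checklist entries are illustrative rather than a complete hardening standard:

```python
# Illustrative extract of a database hardening checklist: setting -> expected value
CHECKLIST = {
    "local_infile": "OFF",              # disable loading local files into the DB
    "require_secure_transport": "ON",   # force encrypted client connections
    "default_password_lifetime": "90",  # enforce password ageing
}

def audit_database(settings: dict[str, str]) -> list[str]:
    """Compare collected settings against the hardened build standard."""
    findings = []
    for name, expected in CHECKLIST.items():
        actual = settings.get(name, "<not set>")
        if actual != expected:
            findings.append(f"{name}: expected {expected}, found {actual}")
    return findings

# Settings as they might be returned by SHOW VARIABLES (illustrative values)
collected = {
    "local_infile": "ON",
    "require_secure_transport": "OFF",
    "default_password_lifetime": "0",
}
for finding in audit_database(collected):
    print(finding)
```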

Insider Threat and Malware Protection for Oracle and SQL Server Database Systems
Finally, there is also the threat of malware and insider threats to consider. A trusted developer will naturally have access to system and application files, as well as the database and its filesystem. Governance of the integrity of configuration and system files is essential in order to identify malware or an insider-generated application ‘backdoor’. Part of the answer is to operate tight scrutiny of the change management processes for the organization, but automated file integrity monitoring is also essential if disguised Trojans, zero day malware or modified bespoke application files are to be detected.

File Integrity Monitoring – A Universal Solution to Hardening Database Systems
In summary, the most comprehensive measure for securing a database system is to use automated file integrity monitoring. File integrity monitoring or FIM technology serves to analyze configuration files and settings, both for vulnerabilities and for compliance with a security best practice-based hardened build standard.
The FIM approach is ideal as it provides a snapshot audit capability for any database, delivering an audit report within a few seconds and showing where security can be improved. This not only automates the process, making a wide-scale estate audit simple, but also de-skills the hardening exercise to an extent. Since the best practice knowledge of how to identify vulnerabilities, and of which files need to be inspected, is stored within the FIM tool report, the user gets an expert assessment of their database security without needing to fully research and interpret hardening checklist materials.

Finally, file integrity monitoring will also identify Trojans and zero-day malware that may have infected the database system, and also any unauthorized application changes that may introduce security weaknesses.
Of course, any good FIM tool will also provide file integrity monitoring functions to Windows, Linux and Unix servers as well as firewalls and other network devices, performing the same malware detection and hardening audit reporting as described for database systems.

For fundamentally secure IT systems, FIM is still the best technology to use.

Wednesday, 10 April 2013

FIM for PCI DSS - Card Skimmers Still Doing the Business After All These Years

Card Skimming - Hardware or Software?
Simplest is still best - whether they are software-based (as in the so-called 'Dexter' or 'VSkimmer' Trojan - Google it for more information) or classic hardware interception devices, card skimming is still a highly effective means of stealing card data.
The hardware approach can be as basic as inserting an in-line card data capture device between the card reader and the EPOS system or Till. This sounds crude but in more advanced cases, the card skimming hardware is cunningly embedded within the card reader itself, often with a cell phone circuit to relay the data to the awaiting fraudster.
Software skimmers are potentially far more powerful. First of all, they can be distributed globally and clearly are not physically detectable like the hardware equivalent. Secondly, they provide access to both 'card present' i.e. POS transactions as well as 'card not present' transactions, for example, tapping into payments via an eCommerce website.

EMV or Chip and PIN - Effective up to a Point
Where implemented - which of course excludes the US at present - EMV technology (supporting 'Chip and PIN' authorizations) has resulted in big reductions in 'cardholder-present' fraud. A card skimmer would need not just the card details but also the PIN (Personal Identification Number) that unlocks them. Embedded card skimming technology can grab the PIN as it is entered too, hence the emphasis on requiring only approved PIN entry devices with anti-tampering measures built in. Alternatively, just use a video camera to record the user entering the PIN and write it down!
By definition, the EMV chip security and PIN entry requirement is only effective for face-to-face transactions where a PED (PIN Entry Device) is used. As a consequence, 'card not present' fraud is still increasing rapidly all over the world, proving that card skimming remains a potentially lucrative crime.
In a global market, easily accessible via the internet, software card skimming is a numbers game. It is also one that relies on a constantly renewing stream of card numbers since card fraud detection capabilities improve both at the acquiring banks and card brands themselves.

Card Skimming in 2013 - The Solution is Still Here
Recently reported research in SC Magazine suggests that businesses are subject to cyber attacks every 3 minutes. The source of the research is FireEye, a sandbox technology provider, and they are keen to stress that these malware events are ones that would bypass what they refer to as legacy defences - firewalls, anti-virus and other security gateways. In other words, zero-day threats, typically mutated or modified versions of Trojans or other malware, delivered via phishing attacks.
What is frustrating to the PCI Security Standards Council and the card brands (and no doubt to software companies like Tripwire, nCircle and NNT!) is that the six-year-old PCI DSS advocates a range of perfectly adequate measures to prevent any of these newly discovered Trojans (and buying a FireEye scanner isn't on the list!). All eCommerce servers and EPOS systems should be hardened and protected using file integrity monitoring. While firewalls and anti-virus are also mandatory, FIM is used to detect the malware these defenses miss which, as the FireEye report shows, is as common as ever. A Trojan like VSkimmer or Dexter will manifest as filesystem activity and, on a Windows system, will always generate registry changes.
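As a simple illustration of spotting that kind of registry change on a Windows system (a sketch only, using Python's standard winreg module; the HKLM Run key shown is just one common persistence location):

```python
import winreg

RUN_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"

def read_run_entries() -> dict[str, str]:
    """Snapshot the values under the HKLM Run key (a common autorun location)."""
    entries = {}
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
            except OSError:          # no more values under this key
                break
            entries[name] = str(value)
            index += 1
    return entries

baseline = read_run_entries()
# ... later, on the next integrity check ...
current = read_run_entries()
new_entries = {n: v for n, v in current.items() if n not in baseline}
if new_entries:
    print("Unexpected autorun entries:", new_entries)
```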
Other means of introducing skimming software are also blocked if the PCI DSS is followed correctly. Systems that store card data should be isolated from the internet where possible, USB ports should be disabled as part of the hardening process, and any network access should be reduced to the bare minimum required for operational activities. Even then, access to systems should be recorded and limited to unique usernames only (not generic root or Administrator accounts).
The PCI DSS may be old in Internet Years, but fundamentally sound and well-managed security best practices have never been as relevant or as effective as they are today.

Tuesday, 19 February 2013

Agentless FIM – Why File Integrity Monitoring Without Agents Is The Same, and Better, and Worse than using Agents

Introduction
Agent versus Agentless is a perennial debate for any monitoring requirement, and one that has been written about previously.
The summary of the previous assessment was that agent-based FIM is usually better due to the real-time detection of changes, negating the need for repeated full baseline operations, and due to the agent providing file hashing, even though there is an additional management overhead for the installation and maintenance of agent software.
But what about agentless systems that purport to provide hashing, seemingly covering all requirements and delivering the functionality of an agent-based FIM solution, but still without using an agent?

What Is So Scary About Agents Anyway?
The problem with all agents is one of maintenance. First, the agent itself needs to be deployed and installed on the endpoint. Usually this will also require other components, such as Java or Mono, to be enabled on the endpoint, and each of these has its own maintenance requirements.
WSUS/Windows Server Update Services and Ubuntu's Update Manager make it much easier these days to maintain packaged programs, but it is accepted that introducing more components to any system will only ever increase the range of 'things that can go wrong'.
So we’ll make that 1-0 to Agentless for ease of implementation and maintenance, even though both functions can be automated to a greater or lesser degree – good FIM solutions will automatically update their agent components if new versions are released.

System Resources – Which Option Is More Efficient?
No agent means the agentless system must operate on a polled basis, and operating on a polled basis means the monitoring system is blind to any security events or configuration changes that occur until the next poll. This could mean that security threats go undetected for hours, and in the case of rootkit malware, irreparable damage could have been done before anybody knows that there is a problem.
Poll intervals can be reduced, but the nature of an agentless system is that every attribute for every object or file being monitored must be gathered on every poll because, unlike an agent-based FIM solution, there is no means of tracking and recording changes as they happen.
The consequence of this is that agentless polls are as heavy in terms of system resources as the initial baselining operation of an agent-based system. Every single file and attribute must be recorded for every poll, regardless of whether changes have occurred or not.
Worse still, all the data collected must be dragged across the network to be analyzed centrally, and again, this load is repeated for every single poll. This also makes agentless scans slow to operate.
By contrast, an agent-based FIM solution will work through the full baseline process once only, and then use its vantage point on the endpoint host to record changes to the baseline in real-time as they occur. Being host-based also gives the agent access to the OS as changes are made, thereby enabling capture of ‘Who made the Change’ data too.
The agent therefore provides a far more efficient solution in terms of both host resources and network traffic, operating on a changes-only basis. If there are no changes to record, no host resources are used and no network capacity is consumed either. The agentless poll will always use a full baseline's worth of resources for every scheduled scan. This also makes running a report significantly slower than using an agent that already holds up-to-date baselines of the information needed for the report.
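As a rough illustration of the cost difference, a minimal polled scan might look like the sketch below (Python, standard library only, with /etc used purely as an example path and error handling kept to a minimum). Note that the full walk-and-hash is repeated on every poll, whether or not anything changed, and anything that happens between polls goes unseen until the next one.

    import hashlib, os, time

    def full_baseline(folder):
        """Read and hash every file under the folder - the expensive part."""
        baseline = {}
        for root, _, files in os.walk(folder):
            for name in files:
                path = os.path.join(root, name)
                try:
                    with open(path, "rb") as f:
                        baseline[path] = hashlib.sha256(f.read()).hexdigest()
                except OSError:
                    pass   # unreadable files skipped for brevity
        return baseline

    previous = full_baseline("/etc")          # initial baseline
    while True:
        time.sleep(3600)                      # poll interval - changes in between go unseen
        current = full_baseline("/etc")       # the same full cost is paid on every poll
        added   = current.keys() - previous.keys()
        removed = previous.keys() - current.keys()
        changed = {p for p in current.keys() & previous.keys() if current[p] != previous[p]}
        for path in sorted(added | removed | changed):
            print("File integrity change detected:", path)
        previous = current

An agent only ever performs the expensive full_baseline step once; after that it records the deltas as the operating system makes them, which is exactly the efficiency gap described above.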
This easily levels the scores up at 1-1.

Security Considerations of Agentless versus Agent-Based FIM Solutions
Finally, there is a further consideration for the agentless solution that doesn't apply to the agent-based FIM option. Because the agentless solution must log in and execute commands on the server to gather baseline information, the agentless solution's server needs an account with network access to the host. The account provisioned will need sufficiently high privileges to access the folders and files that need to be tracked and, by definition, these are typically the most sensitive objects on the server in terms of security governance. Private keys can help restrict access to a degree, but an agentless solution will always carry with it an additional, inherent security risk over and above that posed by agent-based technology.
I would call that a clear 2-1 to the Agent, being more efficient, faster and more effective in reporting threats in real-time.

File Hashing – What is the Advantage?
The classic approach to file integrity monitoring is to record all the file attributes for a file, then perform a comparison of the same data to see if any have changed.
Where more detail is needed of how a file's make-up or contents has changed - mainly relevant to Linux/Unix text-based configuration files or web application configuration files - the contents may be compared side-by-side to show the changes.
Using a file hash (more accurately, a cryptographic file hash) is an elegant way of summarizing a file's composition in a single, simple, effectively unique code. This provides several key benefits -
  • Regardless of the size and complexity (text or binary) of the file being tracked, a fixed-length code can be created for any file – comparing hash values is a simple but highly sensitive way to check whether there has been any change
  • The hash is effectively unique for each file and, due to the algorithms used to generate cryptographic hashes, even tiny changes result in significant variations in the hash values returned, making changes obvious
  • The hash is portable, so the same file held on different servers will return the same, identical hash value, providing a forensic-level 'DNA Fingerprint' for the file and version
Cryptographic hashing is therefore an important dimension of file integrity monitoring; however, the standard Windows OS programs and components do not offer a readily usable mechanism for delivering this function.
So a further big advantage of using an agent-based FIM solution is that cryptographic hashing can be provided on all platforms, unlike a pure agentless solution.
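For illustration, this is all a hash check amounts to - a minimal Python sketch (standard library only; the file path is just an example) that returns the 'DNA fingerprint' described above:

    import hashlib

    def sha256_of(path, chunk_size=65536):
        """Return the SHA-256 'DNA fingerprint' of a file, read in chunks so
        large binaries never need to be loaded into memory in one go."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # The same file on two servers returns the same fixed-length value; changing a
    # single byte anywhere in the file returns a completely different one.
    print(sha256_of("/etc/hosts"))     # example path

The point is not that the calculation is difficult, but that something must be present on the host to run it efficiently against thousands of files - which is exactly the role the agent plays.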
3-1 to the Agent and it looks like it is going to be hard for the agentless solution to get back in the game!

When is Agentless FIM Really the Same as an Agent-Based FIM Solution?
Most vendors, Tripwire among them, provide a clear-cut agent-based option and an agentless option, with the pros and cons of each understood.
The third option is where the best of the agentless and agent-based approaches come together to cover all capabilities. This kind of solution is positioned as agentless and yet delivers the agent-based features. It behaves like an agentless solution in as much as it operates on a scheduled full-scan basis, logging in to devices to track file characteristics, but because there is no need to pre-install an agent to run FIM, it feels agentless.
In practice the solution requires an Administrator logon to the servers to be scanned. The system then logs on and executes a sequence of scripted command-line operations to check file integrity, but it will also pipe across a program to help perform the file hashing. This program – some would say agent – is then deleted after the scan.
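A rough sketch of that push-scan-delete pattern is shown below, purely for illustration. It assumes the third-party paramiko SSH library, and the host name, credentials and helper program name are all hypothetical - the intent is only to show how much traffic and privilege the pattern involves.

    # Sketch of the 'temporary agent' pattern, using the third-party paramiko library.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("monitored-host.example.com", username="fim-admin", password="secret")

    sftp = client.open_sftp()
    sftp.put("hash_helper.py", "/tmp/hash_helper.py")   # push the helper across the network

    # Run the helper against the monitored paths and drag the full results back...
    stdin, stdout, stderr = client.exec_command("python /tmp/hash_helper.py /etc /usr/bin")
    results = stdout.read().decode()

    # ...then delete the temporary 'agent' again, ready to repeat at the next scan.
    client.exec_command("rm /tmp/hash_helper.py")
    sftp.close()
    client.close()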
So is this solution agentless? No, although it does remove the major hassle with an agent-based solution in that it automates the initial deployment of the agent.
What are the other benefits of this approach? None really. It is less secure than an installed agent - providing an Admin logon that can be used across the whole network is arguably weakening security before you even start.
It is massively less efficient than a local agent - piping programs across the network, then executing a bunch of scripts, then dragging all the results back across the network is hugely inefficient compared to an agent that runs locally, does its baselining and comparison locally, and then sends results back only when it needs to.
It is also fundamentally not a very effective way to keep your estate secure - which rather misses the point of doing it in the first place! The reason is that you only find out that security has been weakened or actually compromised when you next run a scan - always too late! An agent-based FIM solution will detect configuration drift and FIM changes in real-time - you know you have a potential risk within seconds of it arising, complete with details of who made the change.

Summary
So in summary, agentless is less efficient, less secure and less able to keep your estate secure (and the most effective agentless solutions still use a temporary agent anyway). The ease of deployment of agentless is tempting, but deployment can always be automated using any one of a number of software distribution solutions. Perhaps the best approach is still to keep both options available and choose the right one for each device class: firewall appliances will always need to be handled via scripted, agentless interrogation, while Windows servers can only truly be audited for vulnerabilities using a cryptographic-hashing, real-time change detection agent.

Wednesday, 10 October 2012

File Integrity Monitoring and SIEM – Combat the Zero Day Threats and Modern Malware that Anti-Virus Systems miss

Introduction
It is well known that Anti-Virus technology is fallible and will continue to be so by design. The landscape (Threatscape?) is always changing and AV systems will typically update their malware signature repositories at least once per day in an attempt to keep up with the new threats that have been isolated since the previous update.

So how secure does your organization need to be? 80%? 90%? Because if you rely on traditional anti-virus defenses this is the best you can hope to achieve unless you implement additional defense layers such as FIM (file integrity monitoring) and SIEM (event log analysis).

Anti-Virus Technology – Complete With Malware Blind spots
Any anti-virus software has an inherent weakness in that it relies on a library of malware ‘signatures’ to identify the viruses, Trojans and worms it is seeking to remove.

This repository of malware signatures is regularly updated, sometimes several times a day depending on the developer of the software being used. The problem is that the AV developer usually needs to have direct experience of any new strains of malware in order to counteract them. A 'zero day' threat is one that uses a new variant of malware yet to be identified by the AV system.

By definition, AV systems are blind to ‘zero day’ threats, even to the point whereby new versions of an existing malware strain may be able to evade detection. Modern malware often incorporates the means to mutate, allowing it to change its makeup every time it is propagated and so improve its effectiveness at evading the AV system.

Similarly, other automated security technologies that aim to block or remove malware, such as the sandbox or quarantine approach, all suffer from the same blind spot. If the malware is new - a zero day threat - then by definition there is no signature, because it has not been identified before. The unfortunate reality is that the unseen cyber-enemy also knows that new is best if they want their malware to evade detection, as evidenced by the fact that in excess of 10 million new malware samples will be identified in any six-month period.

In other words, most organizations typically have very effective defenses against known enemies – any malware that has been previously identified will be stopped dead in its tracks by the IPS, anti-virus system, or any other web/mail filtering with sandbox technology. However, it is also true that the majority of these same organizations have little or no protection against the zero day threat.

File Integrity Monitoring – The 2nd Line Anti-Virus Defense System for When Your Anti-Virus System Fails
File Integrity Monitoring serves to record any changes to the file system i.e. core operating system files or program components. In this way, any malware entering your key server platforms will be detected, no matter how subtle or stealthy the attack.

In addition, FIM technology will also ensure other vulnerabilities are screened out of your systems by verifying that best practices for securely configuring your operating systems have been applied.

For example, configuration settings such as user accounts, password policy, running services and processes, installed software, and management and monitoring functions are all potential vectors for security breaches. In the Windows environment, the Windows Local Security Policy has gradually been extended over time to place greater restrictions on numerous functions that have been exploited in the past, but this in itself is a highly complex area to configure correctly. Maintaining systems in this securely configured state is then impossible without automated file integrity monitoring technology.
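As one hedged example of what such automation can look like, the sketch below exports the Windows Local Security Policy using the built-in secedit tool and compares it against a stored baseline hash. It is a minimal Python illustration; the file locations are hypothetical, and a real FIM product tracks individual settings and attributes rather than a single hash of the whole export.

    # Sketch of a drift check against a hardened Local Security Policy baseline.
    import hashlib, subprocess

    EXPORT_FILE = r"C:\Temp\current_policy.inf"                  # hypothetical paths
    BASELINE_HASH_FILE = r"C:\Baselines\local_security_policy.sha256"

    # Export the current local security policy (account policy, user rights, audit settings).
    subprocess.run(["secedit", "/export", "/cfg", EXPORT_FILE], check=True)

    with open(EXPORT_FILE, "rb") as f:
        current_hash = hashlib.sha256(f.read()).hexdigest()

    with open(BASELINE_HASH_FILE) as f:
        baseline_hash = f.read().strip()

    if current_hash != baseline_hash:
        print("Local Security Policy has drifted from the hardened baseline")
    else:
        print("Local Security Policy matches the hardened baseline")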

Likewise SIEM or Security Information and Event Management systems are designed to gather and analyze all system audit trails/event logs and correlate these with other security information to present a true picture of whether anything unusual and potentially security threatening is happening.

It is telling that widely adopted and practiced security standards such as the PCI DSS place these elements at their core as a means of maintaining system security and verifying that key processes like Change Management are being observed.

Summary
Anti-virus technology is an essential and highly valuable line of defense for any organization. However, it is vital that the limitations and therefore vulnerabilities of this technology are understood and additional layers of security implemented to compensate. File Integrity Monitoring and Event Log Analysis are the ideal counterparts to an Anti-Virus system in order to provide complete security against the modern malware threat.

Friday, 7 September 2012

File Integrity Monitoring - FIM Agent Versus Agentless FIM

Introduction
The incessant escalation, both in malware sophistication and proliferation, means fundamental file integrity monitoring is essential to maintain malware-free systems. Signature-based anti-virus technologies are too fallible and are easily circumvented by zero-day malware or by selectively created and targeted advanced persistent threat (APT) viruses, worms or Trojans.

Any good security policy will recommend the use of regular file integrity checks on system and configuration files, and best practice-based security standards such as the PCI DSS (Requirement 11.5), NERC CIP (System Security R15-R19), Department of Defense Information Assurance (IA) Implementation (DODI 8500.2), Sarbanes-Oxley (Section 404) and FISMA - the Federal Information Security Management Act (NIST SP800-53 Rev4) - specifically mandate the need to perform regular checks for any unauthorized modification of critical system files, configuration files or content files, and to configure the software to perform critical file comparisons at least weekly.

However, file integrity monitoring needs to be deployed with a little advance planning and an understanding of how the file systems of your servers behave on a routine basis, in order to determine what unusual - and therefore potentially threatening - events look like.

The next question is then whether an Agentless or Agent-based approach is best for your environment. This article looks at the pros and cons of both options.

Agentless FIM for Windows and Linux/Unix Servers
Starting with the most obvious advantage, the first clear benefit of an agentless approach to file integrity monitoring is that it doesn't need any agent software to be deployed on the monitored host. This means that an agentless FIM solution like Tripwire or nCircle will always be the quickest option to deploy and to get results from. Not only that, but there is no agent software to update, or to potentially interfere with the server's operation.

The typical agentless file integrity monitoring solution for Windows and Linux/Unix will utilize a scripted, command-line interaction with the host to interrogate the salient files. At the simplest end of the scale, Linux files can be baselined using a cat command and a comparison done with subsequent samples to detect any changes. Alternatively, if a vulnerability audit is being performed in order to harden the server configuration, then a series of grep commands, used with regular expressions, will more precisely identify missing or incorrect configuration settings. Similarly, a Windows server can be interrogated using command-line programs; for example, the net.exe program can be used to expose the user accounts on a system, or even to assess the state or other attributes associated with a user account if piped with a find command, e.g. net.exe users guest |find.exe /i "Account active" will return an "Account active Yes" or "Account active No" result and establish whether the Guest account is enabled - a classic vulnerability for any Windows server.
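Wrapped in a script, that same check might look something like the sketch below - a Python example using only the standard library, run on or against a Windows host, purely illustrative of the agentless, scripted style of interrogation:

    # Sketch of an agentless-style scripted audit check, wrapping the command quoted above.
    import subprocess

    def guest_account_enabled():
        """Return True if the built-in Guest account is active - a classic vulnerability."""
        output = subprocess.run(["net", "users", "guest"],
                                capture_output=True, text=True).stdout
        for line in output.splitlines():
            if "Account active" in line:
                return line.strip().endswith("Yes")
        return False

    if guest_account_enabled():
        print("FAIL: Guest account is enabled")
    else:
        print("PASS: Guest account is disabled")

An agentless audit is essentially a long catalogue of checks like this one, executed remotely and repeated in full on every scan.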

Agent-Based File Integrity Monitoring
The key advantage of an agent for FIM is that it can monitor file changes in real-time. Because the agent is installed on the monitored host, OS activity can be watched, and any file activity observed and changes recorded. Clearly, any agentless approach will need to operate on a scheduled poll basis, and inevitably there is a trade-off between polling frequently enough to catch changes as they happen and limiting the increased load the monitoring places on the host and network. In practice, polling is typically run once per day on most FIM solutions, Tripwire for example, and this means that you risk being anything up to 24 hours late in identifying potential security incidents.
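By way of contrast, a bare-bones illustration of real-time detection - the kind of vantage point an agent enjoys - is sketched below. It assumes the third-party Python 'watchdog' package and an example path, and is not a substitute for a real agent; it simply shows that changes can be reported within seconds rather than at the next scheduled poll.

    # Minimal real-time change detection sketch using the third-party 'watchdog' package.
    import time
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    class ChangeReporter(FileSystemEventHandler):
        def on_any_event(self, event):
            # Created, modified, moved and deleted events are all reported as they happen.
            print("File integrity event:", event.event_type, event.src_path)

    observer = Observer()
    observer.schedule(ChangeReporter(), path="/etc", recursive=True)   # example path
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()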

The second major advantage of an agent-based file integrity solution is that the host does not need to be 'opened up' to allow monitoring. All critical system and configuration files are protected by the host's filesystem security; for example, the Windows System32 folder is always an 'Administrator access only' folder. In order to monitor the files in this location, any external scripted interaction will need to be provided with Admin rights over the host, which immediately means that the host must be made accessible via the network and an additional user or service account must be provisioned with Admin privilege, potentially introducing a new security weakness to the system. By contrast, an agent operates within the confines of the host, just pushing out file integrity changes as they are detected.

Finally, an agent offers a distinct advantage over the agentless approach in that it can provide a 'changes only' update across the network, and even then only when there is a change to report. The agentless solution will need to run through its complete checklist of queries in order to make any assessment of whether changes have occurred, and even using elaborate WMI or PowerShell scripts this still requires considerable resource usage on the host and the network when dragging results back.

Summary
Nobody likes installing and maintaining agents on their servers and, if this can be avoided, that is an attractive option to take. However, as outlined above, only an agent-based approach delivers real-time change detection, changes-only efficiency and monitoring that stays within the security confines of the host, so the convenience of agentless FIM has to be weighed against those benefits.

Wednesday, 15 August 2012

File Integrity Monitoring and SIEM - Why Layered Security Is Essential to Combat the APT

Every time the headlines are full of the latest Cyber Crime or malware Scare story such as the Flame virus, the need to review the security standards employed by your organization takes on a new level of urgency.

The 2012 APT (Advanced Persistent Threat)
The Advanced Persistent Threat differs from a regular hack or Trojan attack in that it is, as the name suggests, advanced in technology and technique, and persistent, in that it is typically a sustained theft of data over many months.

So far the APT has largely been viewed as government-sponsored cyber-espionage, given the resources needed to orchestrate such an attack - the recent Flame malware, for example, appears to have been a US- or Israeli-backed espionage initiative against Iran. However, the leading edge of technology always becomes the norm a year later, so expect APT attacks to reach the mainstream, with competitor-backed industrial espionage and 'hacktivist' groups like LulzSec and Anonymous adopting similar approaches.

The common vector for these attacks is a targeted spear phishing infiltration of the organization. Using Facebook, LinkedIn or other social media makes it much easier today to identify targets, and to work out what kind of phishing 'bait' is going to be most effective in duping the target into providing the all-important welcoming click on the tasty links or downloads offered.

Phishing is already a well-established tool for organized crime gangs, who utilize these same profiled spear phishing techniques to steal data. As an interesting aside regarding organized crime's use of 'cybermuscle', it is reported that prices for botnets are currently plummeting due to an oversupply of available robot networks. If you want to coerce an organization with the threat of disabling their web presence, arm yourself with a global botnet and point it at their site - DDoS attacks are easier than ever to orchestrate.

Something Must Be Done...

To be clear on what we are saying here, it isn't that AV or firewalls are no use - far from it. But the APT style of threat will evade both by design, and this is the first fact to acknowledge - like a recovering alcoholic, the first step is to admit you have a problem!

By definition, this kind of attack is the most dangerous, because any attack that is smart enough to skip past standard defense measures is definitely going to be one backed by a serious intent to damage your organization. (Note: don't assume that APT technology is therefore only an issue for blue chip organizations - that may have been the case, but now that the concepts and architecture of the APT are in the mainstream, the wider hacker and hacktivist communities will already have engineered their own interpretations of the APT.)
So the second fact to take on board is that there is an 'art' to delivering effective security and that requires a continuous effort to follow process and cross-check that security measures are working effectively.
The good news is that it is possible to automate the cross-checks and vigilance we have identified a need for, and in fact there are already two key technologies designed to detect abnormal occurrences within systems and to verify that security best practices are being operated.

FIM and SIEM - Security Measures Underwritten
File Integrity Monitoring or FIM serves to record any changes to the file system, i.e. core operating system files or program components, and to the system's configuration settings, i.e. user accounts, password policy, services, installed software, management and monitoring functions, registry keys and registry values, running processes, and security policy settings covering audit policy, user rights assignment and security options. FIM is designed both to verify that a device remains hardened and free of vulnerabilities at all times, and that the filesystem remains free of any malware.

Therefore even if some form of APT malware manages to infiltrate a critical server, well implemented FIM will detect file system changes before any rootkit protective measures that may be employed by the malware can kick in.

Likewise SIEM, or Security Information and Event Management, systems are designed to gather and analyze all system audit trails/event logs and correlate these with other security information to present a true picture of whether anything unusual and potentially security threatening is happening.
It is telling that widely adopted and practiced security standards such as the PCI DSS place these elements at their core as a means of maintaining system security and verifying that key processes like Change Management are being observed.

At the core of any comprehensive security standard is the concept of layered security - firewalling, IPS, AV, patching, hardening, DLP, tokenization, secure application development and data encryption, all governed by documented change control procedures and underpinned by audit trail analysis and file integrity monitoring. Even then with standards like the PCI DSS there is a mandated requirement for Pen Testing and Vulnerability Scanning as further checks and balances that security is being maintained.

Summary
In summary, your security policy should be built around the philosophy that technology helps secure your organization's data, but that nothing can be taken for granted. Only by practicing continuous surveillance of system activity can you truly maintain data security - very much the essence of the Art of Layered Security.