
Wednesday, 18 December 2013

File Integrity Monitoring – 3 Reasons Why Your Security Is Compromised Without It Part 1

Introduction
This is a three-part series examining why File Integrity Monitoring is essential for the security of any business’ IT. This first part examines the need for malware detection, addressing the inevitable flaws in anti-virus systems.

Malware Detection – How Effective is Anti-Virus?
When malware hits a system - most commonly a Windows operating system, though Linux, Solaris and, with the renewed popularity of Apple workstations, Mac OS X systems are increasingly under threat - it needs to be executed in some way in order to do its evil deeds.
This means that some kind of system file (an executable, driver or DLL) has to be planted on the system. A Trojan will make sure that it gets executed without further user intervention by replacing a legitimate operating system or program file. When the program runs, or the OS performs one of its regular tasks, the Trojan is executed instead.

On a user workstation, third-party applications such as internet browsers, PDF readers and mundane user packages like MS Word or Excel are targeted as an intermediate vector for malware. When the document or spreadsheet is opened, it can exploit vulnerabilities in the application, enabling malware to be downloaded and executed.

Either way, there will always be a number of associated file changes. Legitimate system files are replaced or new system files are added to the system.

If you are lucky, you won’t be the first victim of this particular strain of malware and your AV system – provided it has been updated recently – will have the necessary signature definitions to identify and stop the malware.

When this is not the case, and bear in mind that millions of new malware variants are introduced every month, your system will be compromised, usually without you knowing anything about it, while the malware quietly goes about its business, damaging systems or stealing your data.

FIM – Catching the Malware That Anti-Virus Systems Miss
That is, of course, unless you are using file integrity monitoring.

Enterprise-level File Integrity Monitoring will detect any unusual filesystem activity. ‘Unusual’ is important here, because many files change frequently on a system, so it is crucial that the FIM system is intelligent enough to understand what regular operation looks like for your systems and only flag genuine security incidents.

However, exclusions and exceptions should be kept to a minimum, because FIM is at its best when operated with a ‘zero tolerance’ approach to changes. Malware is built to be effective, and that means it must both be distributed successfully and operate without detection.

The challenge of distribution has seen much in the way of innovation. Tempting emails with malware bait in the form of pictures to be viewed, prizes to be won and gossip on celebrities have all been successful in spreading malware. Phishing emails provide a convincing reason to click and enter details or download forms, and specifically targeted Spear Phishing emails have been responsible for duping even the most cybersecurity-savvy user.

Whatever the vector used, once malware is welcomed into a system, it may then have the means to propagate within the network to other systems.

So early detection is of paramount importance. And you simply cannot rely on your anti-virus system to be 100% effective, as we have already highlighted.

FIM provides this ‘zero tolerance’ to filesystem changes. There is no second-guessing of what may or may not be malware: every unexpected filesystem change is reported, so malware that plants or replaces files will be flagged even when no anti-virus signature yet exists for it.
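To make this concrete, here is a minimal sketch (Python, with an assumed monitored path and a simple JSON file standing in for the baseline store) of the baseline-and-compare logic at the heart of any FIM tool: hash every file under a directory tree, then report anything added, removed or modified on the next run.

```python
# Minimal 'zero tolerance' change detection sketch: build a SHA-256 baseline
# of a directory tree, then re-scan and report every difference.
# The monitored path and baseline filename are illustrative assumptions.
import hashlib
import json
import os

def hash_file(path, chunk_size=65536):
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(root):
    """Map every file under 'root' to its hash."""
    baseline = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                baseline[path] = hash_file(path)
            except OSError:
                pass  # unreadable file: skipped in this sketch
    return baseline

def compare(old, new):
    """Report added, removed and modified files - no second-guessing."""
    for path in new.keys() - old.keys():
        print("ADDED:   ", path)
    for path in old.keys() - new.keys():
        print("REMOVED: ", path)
    for path in old.keys() & new.keys():
        if old[path] != new[path]:
            print("MODIFIED:", path)

if __name__ == "__main__":
    root = "/usr/local/bin"          # example monitored path
    baseline_file = "baseline.json"  # example baseline store
    if os.path.exists(baseline_file):
        with open(baseline_file) as f:
            compare(json.load(f), scan(root))
    else:
        with open(baseline_file, "w") as f:
            json.dump(scan(root), f)
```

A real FIM product adds scheduling or real-time hooks, a secure baseline store and change-approval workflow on top of this core comparison.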

Summary
FIM is ideal as a malware detection technology because it is not prone to the ‘signature lag’ or zero-day vulnerabilities that are the Achilles’ heel of anti-virus systems. As with most security best practices, the advice is that more is better, and operating anti-virus (even with its known flaws) in conjunction with FIM will give the best overall protection. AV is effective against legacy malware and its automated protection will quarantine most threats before they do any damage. But when malware does evade the AV, as some strains inevitably will, real-time FIM provides a vital safety net.

Wednesday, 11 December 2013

Which File Integrity Monitoring Technology Is Best – ‘Pure-Play’ FIM or SIEM FIM?

Introduction
Within the FIM technology market there are choices to be made. Agent-based or agentless is the most common choice, but even then there are both SIEM and ‘pure-play’ FIM solutions to choose between.


FIM – Agents or Agentless

There is never a clear advantage for either agent-based or agentless FIM. There is a balance to be struck between the convenience of agentless FIM and the arguably superior operation of agent-based FIM, which offers
  • Real-time detection of changes – agentless FIM scanners can only be effective on a scheduled basis, typically once every day
  • Locally stored baseline data, meaning a one-off full scan is all that is needed, while an agentless scanner will always need to re-baseline and hash every single file on the system each time it scans
  • Greater security by being self-contained, whereas an agentless FIM solution will require a logon and network access to the host under test
Conversely, proponents of the Agentless vulnerability scanner will cite the advantages of their technology over an agent-based FIM system, including
  • Up and running in minutes, with no need to deploy and maintain agents on end points, makes an agentless system easier to operate
  • No need to load any 3rd party software onto endpoints, an agentless scanner is 100% self-contained
  • Foreign or new devices being added to a network will always be discovered by an agentless scanner, while an agent-based system is only effective where agents have been deployed onto known hosts

For these reasons there is no outright winner of this argument and typically, most organizations run both types of technology in order to benefit from all the advantages offered.
Using SIEM for FIM

The SIEM question is much easier to deal with. Similar to the agentless argument, a SIEM system may be operated without requiring any agent software on the endpoints, using WMI or the native syslog capabilities of the host. However, this is typically seen as an inferior solution to the agent-based SIEM package, since an agent allows for advanced security functions such as hashing and real-time log monitoring.

For FIM, all SIEM vendors rely on a combination of host object access auditing and a scheduled baseline of the filesystem. The auditing of filesystem activity can give real-time FIM capabilities, but requires substantially more host resources than a lightweight agent. Native OS auditing also does not provide hash values for files, so the forensic detection of a Trojan cannot be achieved to the extent that an enterprise FIM agent provides.

The SIEM vendors have moved to address this problem by providing a scheduled baseline and hash function using an agent. The result is a solution that is the worst of all options – an agent must be installed and maintained, but without the benefits of a real-time agent!

Summary

In summary, SIEM is best used for event log analysis and FIM is best used for File Integrity Monitoring. Whether you then decide to use an agent-based FIM solution or an agentless system is a tougher call. In all likelihood, the conclusion will be that a combination of the two is the only complete solution.

Monday, 4 November 2013

Is Your QSA Making You Less Secure?

Introduction
Most organizations will turn to a QSA when undertaking a PCI Compliance project. A Qualified Security Assessor is the guy you need to satisfy with any security measures and procedures you implement to meet compliance with the PCI DSS, so it makes sense to get them to tell you what you need to do.
For many, PCI Compliance is about simply dealing with the PCI DSS in the same way they would deal with any other deadlined project: when does the bank want us to be PCI Compliant, and what do we need to do before we get audited in order to get a pass?

For many, this is where the problems often begin because, of course, PCI compliance isn’t simply about passing an audit but about getting your organization sufficiently organized and aware of the need to protect cardholder data at all times. The cliché in PCI circles is ‘don’t take a checkbox approach to compliance’, but it is true. Focusing on passing the audit is a tangible goal, but it should only be a milestone along the way to maturing internal processes and procedures in order to operate a secure environment every day of the year, not just to drag your organization through an annual audit.

The QSA Moral Maze
However, for many, the QSA is hired to ‘make PCI go away’ and this can sometimes present a dilemma. QSAs are in business and need to compete for work like any other commercial venture. They are typically fiercely independent and take seriously their responsibility for providing expert guidance; however, they also have bills to pay.

Some get caught by the conflict of interest between advising the implementation of measures and offering to supply the goods required. This presents a difficult choice for the customer – go along with what the QSA says, and buy whatever they sell you, or go elsewhere for any kit required and risk the valuable relationship needed to get through the audit. Whether this is for new firewalls, scanning or Pen Testing services, or FIM and Logging/SIEM products, too many Merchants have been left to make difficult decisions. The simple solution is to separate your QSA from supplying any other service or product for your PCI project, but make sure this is clarified up front.

The second common conflict of interest is one that affects any kind of consultant. If you are being paid by the day for your services, would you want the engagement to be shorter or longer? If you had the opportunity to influence the duration of the engagement, would you fight for it to be ended sooner, or be happy to let it run longer?

Let’s not be too cynical over this – the majority of Merchants have paid widely differing amounts for their QSA services but have been delighted with the value for money received. But we have had one experience recently where the QSA asked for repeated network and system architecture re-designs, and recommended that firewalls be replaced with more advanced versions with better IPS capabilities. In both instances the QSA was giving accurate and proper advice; however, one of the unfortunate side-effects was that the Merchant delayed implementation of other PCI DSS requirements. The result in this case is that the QSA actually delays security measures being put in place; in other words, the security expert’s advice is to prolong the organization’s weak security posture!

Conclusion
The QSA community is a rich source of security experience and expertise, and who better to help navigate an organization through a PCI Program than those responsible for conducting the audit for compliance with the standard? However, best practice is to separate the QSA from any other aspect of the project. Secondly, self-educate and help yourself by becoming familiar with security best practices – it will save time and money if you can empower yourself instead of paying by the day to be taught the basics. Finally, don’t delay implementing security measures – you know your systems better than anyone else, so don’t pay to prolong your project! Seize responsibility for de-scoping your environment where possible, then apply basic best practices to the remaining systems in scope – harden, implement change controls, measure effectiveness using file integrity monitoring and retain audit trails of all system activity. It’s simpler than your QSA might lead you to believe.

Wednesday, 4 September 2013

PCI DSS Version 3 and File Integrity Monitoring – New Standard, Same Problems

PCI DSS Version 3.0

PCI DSS Version 3 will soon be with us. Such is the anticipation that the PCI Security Standards Council have released a sneak preview ‘Change Highlights’ document.
The updated Data Security Standard highlights include a wagging finger statement which may be aimed at you if you are a Merchant or Acquiring Bank.

“Cardholder data continues to be a target for criminals. Lack of education and awareness around payment security and poor implementation and maintenance of the PCI Standards leads to many of the security breaches happening today”

In other words, a big part of the drive for the new version of the standard is to give it some fresh impetus. Just because the PCI DSS isn’t new, it doesn’t make it any less relevant today.


But What is the Benefit of the PCI DSS for Us?

To understand just how relevant cardholder data protection is, the hard facts are outlined in the recent Nilson Report. Its findings are that global card fraud losses now exceed $11 billion. It’s not all bad news if you are a card brand or issuing bank – the losses are made slightly more bearable by the fact that total transaction volume now exceeds $21 trillion.

http://www.nilsonreport.com/publication_the_current_issue.php?1=1

“Card issuer losses occur mainly at the point of sale from counterfeit cards. Issuers bear the fraud loss if they give merchants authorization to accept the payment. Merchant and acquirer losses occur mainly on card-not-present (CNP) transactions on the Web, at a call center, or through mail order”

This is why the PCI DSS exists and needs to be taken seriously with all requirements fully implemented, and practised daily. Card fraud is a very real problem and as with most crimes, if you think it won’t happen to you, think again. Ignorance, complacency and corner-cutting are still the major contributors to card data theft.

The changes are very much in line with NNT’s methodology of continuous, real-time security validation for all in scope systems – the PCI SSC state that the changes in version 3 of the standard include “Recommendations focus on helping organizations take a proactive approach to protect cardholder data that focuses on security, not compliance, and makes PCI DSS a business-as-usual practice”

So instead of this being a ‘Once a year, get some scans done, patch everything, get a report done from a QSA then relax for another 11 months’ exercise, the PCI SSC are trying to educate and encourage merchants and banks to embed or entrench security best practices within their everyday operations, and be PCI Compliant as a natural consequence of this.

Continuous FIM – The Foundation of PCI Compliance

In fact, taking a continuous FIM approach as the starting point for security and PCI compliance makes much sense. It doesn’t take long to set up; it only tells you to take action when action is genuinely needed; it helps to define a hardened build standard for your systems and drives you to adopt the necessary discipline for change control; and it gives you full peace of mind that systems are being actively protected at all times, 100% in line with PCI DSS requirements.

Monday, 1 July 2013

File Integrity Monitoring – Use FIM to Cover All the Bases

Why use FIM in the first place?
Unlike anti-virus and firewalling technology, FIM is not yet seen as a mainstream security requirement. In some respects, FIM is similar to data encryption, in that both are undeniably valuable security safeguards to implement, but both are used sparingly, reserved for niche or specialized security requirements.

How does FIM help with data security?
At a basic level, File Integrity Monitoring will verify that important system files and configuration files have not changed, in other words, the files’ integrity has been maintained.

Why is this important? In the case of system files – program, application or operating system files – these should only change when an update, patch or upgrade is implemented. At other times, the files should never change.

Most security breaches involving theft of data from a system will either use a keylogger to capture data being entered into a PC (the theft then perpetrated via a subsequent impersonated access), or some kind of data transfer conduit program, used to siphon off information from a server. In all cases, there has to be some form of malware implanted onto the system, generally operating as a Trojan i.e. the malware impersonates a legitimate system file so it can be executed and provided with access privileges to system data.

In these instances, a file integrity check will detect the Trojan’s existence, and given that zero-day threats or targeted APT (advanced persistent threat) attacks will evade anti-virus measures, FIM comes into its own as a must-have security defense. To give the necessary peace of mind that a file has remained unchanged, the file attributes governing security and permissions, as well as the file length and cryptographic hash value, must all be tracked.
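As an illustration of what ‘tracked’ means in practice, a per-file record might look something like the sketch below (Python; the field names and the example path are assumptions rather than any product’s actual schema).

```python
# Illustrative sketch of the per-file record an FIM baseline might keep:
# permissions, ownership, length and a cryptographic hash.
# Field names and the example path are assumptions, not a product schema.
import hashlib
import os
import stat

def fingerprint(path):
    st = os.stat(path)
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return {
        "path": path,
        "mode": stat.filemode(st.st_mode),  # e.g. '-rwxr-xr-x'
        "owner_uid": st.st_uid,
        "group_gid": st.st_gid,
        "size": st.st_size,
        "sha256": digest.hexdigest(),
    }

# Any difference between two fingerprints of the same path - permissions,
# ownership, length or hash - is flagged for review.
print(fingerprint("/bin/ls"))  # example path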

Similarly, configuration files and settings that restrict access to the host, or restrict privileges for its users, must also be maintained. For example, a new user account provisioned for the host and given admin or root privileges is an obvious potential vector for data theft – the account can be used to access host data directly, or to install malware that will provide access to confidential data.

File Integrity Monitoring and Configuration Hardening
Which brings us to the subject of configuration hardening. Hardening a configuration is intended to counteract the wide range of potential threats to a host and there are best practice guides available for all versions of Solaris, Ubuntu, RedHat, Windows and most network devices. Known security vulnerabilities are mitigated by employing a fundamentally secure configuration set-up for the host.

For example, a key basic for securing a host is a strong password policy. For a Solaris, Ubuntu or other Unix/Linux host, this is implemented by editing the /etc/login.defs file or similar, whereas a Windows host will require the necessary settings to be defined within the Local or Group Security Policy. In either case, the configuration settings exist as a file that can be analyzed and whose integrity can be verified for consistency (even if, in the Windows case, this ‘file’ may be a registry value or the output of a command line program).
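To illustrate, a password-policy hardening check of this kind boils down to parsing the settings and comparing them against your build standard. The sketch below (Python) uses a few illustrative /etc/login.defs thresholds as assumptions; a real checklist would take its values from your own hardened build standard or a published benchmark.

```python
# Sketch of a password-policy hardening check against /etc/login.defs.
# The expected values below are illustrative assumptions only.
EXPECTED = {
    "PASS_MAX_DAYS": lambda v: int(v) <= 90,
    "PASS_MIN_DAYS": lambda v: int(v) >= 1,
    "PASS_MIN_LEN":  lambda v: int(v) >= 8,
}

def check_login_defs(path="/etc/login.defs"):
    settings = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            parts = line.split(None, 1)
            if len(parts) == 2:
                settings[parts[0]] = parts[1]
    for key, rule in EXPECTED.items():
        value = settings.get(key)
        if value is None:
            print(f"MISSING: {key}")
        elif not rule(value):
            print(f"NON-COMPLIANT: {key} = {value}")
        else:
            print(f"OK: {key} = {value}")

check_login_defs()
```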

Therefore file integrity monitoring ensures a server or network device remains secure in two key dimensions: protected from Trojans or other system file changes, and maintained in a securely defended or hardened state.

File integrity assured – but is it the right file to begin with?
But is it enough to just use FIM to ensure system and configuration files remain unchanged? By doing so, there is a guarantee that the system being monitored remains in its original state, but there is a risk of perpetuating a bad configuration, a classic case of ‘junk in, junk out’ computing. In other words, if the system was built from an impure source, its ‘integrity’ is worthless. The recent Citadel keylogger scam is estimated to have netted over $500M in funds stolen from bank accounts where PCs were set up using pirated Windows operating system DVDs, each one with keylogger malware included free of charge.

In the corporate world, OS images, patches and updates are typically downloaded directly from the manufacturer website, therefore providing a reliable and original source. However, the configuration settings required to fully harden the host will always need to be applied and in this instance, file integrity monitoring technology can provide a further and invaluable function.

The best Enterprise FIM solutions can not only detect changes to configuration files/settings, but also analyze the settings to ensure that best practice in security configuration has been applied.

In this way, all hosts can be guaranteed to be secure and set-up in line with not just industry best practice recommendations for secure operation, but with any individual corporate hardened build-standard.
A hardened build-standard is a pre-requisite for secure operations and is mandated by all formal security standards such as PCI DSS, SOX, HIPAA, and ISO27K.

Conclusion
Even if FIM is being adopted simply to meet the requirements of a compliance audit, there is a wide range of benefits to be gained over and above simply passing the audit.

Protecting host systems from Trojan or malware infection cannot be left solely to anti-virus technology. The AV blind-spot for zero day threats and APT-type attacks leaves too much doubt over system integrity not to utilize FIM for additional defense.

But preventing breaches of security is the first step to take, and hardening a server, PC or network device will fend off the great majority of non-insider infiltrations. Using a FIM system with auditing capabilities for best practice secure configuration checklists makes expert-level hardening straightforward.

Don’t just monitor files for integrity – harden them first!

Wednesday, 19 June 2013

File Integrity Monitoring - View Security Incidents in Black and White or in Glorious Technicolor?

The PCI DSS and File Integrity Monitoring
FIM, or file integrity monitoring, has long been established as a keystone of information security best practices. Even so, there are still a number of common misunderstandings about why FIM is important and what it can deliver.

Ironically, the key contributor to this confusion is the same security standard that introduces most people to FIM in the first place by mandating the use of it - the PCI DSS.

PCI DSS Requirement 11.5 specifically uses the term 'file integrity monitoring' in relation to the need "to alert personnel to unauthorized modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly".

As such, since the term 'file integrity monitoring' is only mentioned in requirement 11.5, one could be forgiven for concluding that this is the only part FIM has to play within the PCI DSS.

In fact, the application of FIM is, and should be, much more widespread in underpinning a solid security posture for an IT estate. For example, other key requirements of the PCI data security standard are all best addressed using file integrity monitoring technology, such as "Establish firewall and router configuration standards" (Req 1), "Develop configuration standards for all system components" (Req 2), "Develop and maintain secure systems and applications" (Req 6), "Restrict access to cardholder data by business need to know" (Req 7), "Ensure proper user identification and authentication management for nonconsumer users and administrators on all system components" (Req 8), and "Regularly test security systems and processes" (Req 11).
Within the confines of Requirement 11.5 only, many interpret this requirement as a simple 'has the file changed since last week?' and, taken in isolation, this would be a legitimate conclusion to reach. However, as highlighted earlier, the PCI DSS is a network of linked and overlapping requirements, and the role for file integrity analysis is much broader, underpinning other requirements for configuration hardening, configuration standards enforcement and change management.

But this isn't just an issue with how merchants read and interpret the PCI DSS. The new wave of SIEM vendors in particular are keen to take this narrow definition as 'secure enough' and for good, if selfish, reasons.

Do everything with SIEM - or is FIM + SIEM the right solution?

PCI requirement 10 is all about logging and the need to generate the necessary security events, back up log files and analyze the details and patterns. In this respect a logging system is going to be an essential component of your PCI DSS toolset.

SIEM or Event log management systems all rely on some kind of agent or polled-WMI method for watching log files. When the log file has new events appended to it, these new events are picked up by the SIEM system, backed up centrally and analyzed for either explicit evidence of security incidents or just unusual activity levels of any kind that may indicate a security incident. This approach has been expanded by many of the SIEM product vendors to provide a basic FIM test on system and configuration files and determine whether any files have changed or not.

A changed system file could reveal that a Trojan or other malware has infiltrated the host system, while a changed configuration file could weaken the host's inherently secure 'hardened' state making it more prone to attack. The PCI DSS requirement 11.5 mentioned earlier does use the word 'unauthorized' so there is a subtle reference to the need to operate a Change Management Process. Unless you can categorize or define certain changes as 'Planned', 'Authorized' or expected in some way, you have no way to label other changes as 'unauthorized' as is required by the standard.
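As a rough sketch of how that labelling might work, the snippet below (Python) matches detected changes against a hypothetical approved-change manifest; the paths, change windows and ticket reference are invented for illustration only.

```python
# Sketch: labelling detected changes as 'Planned' or 'Unauthorized' by
# matching them against an approved-change manifest. The manifest format,
# paths, windows and ticket numbers are illustrative assumptions.
from datetime import datetime
from fnmatch import fnmatch

PLANNED_CHANGES = [
    {"pattern": "C:/Windows/System32/*",
     "start": datetime(2013, 6, 18, 22, 0),
     "end":   datetime(2013, 6, 19, 2, 0),
     "ticket": "CHG-1042"},  # hypothetical change ticket
]

def classify(change_path, change_time):
    for entry in PLANNED_CHANGES:
        if (fnmatch(change_path, entry["pattern"])
                and entry["start"] <= change_time <= entry["end"]):
            return f"Planned ({entry['ticket']})"
    return "Unauthorized"

# A change inside the approved window is planned; the same change at any
# other time is unauthorized and warrants investigation.
print(classify("C:/Windows/System32/drivers/example.sys",
               datetime(2013, 6, 18, 23, 15)))
print(classify("C:/Windows/System32/drivers/example.sys",
               datetime(2013, 6, 20, 14, 5)))
```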

So in one respect, this level of FIM is a good means of protecting your secure infrastructure. In practice, however, ‘black and white’ file integrity monitoring of this kind is pretty unhelpful and usually leaves the Information Security Team with a stream of ‘noise’: too many spurious and confusing alerts, usually masking the genuine security threats.

Potential security events? Yes.
Useful, categorized and intelligently assessed security events? No.

So if this 'changed/not changed' level of FIM is the black and white view, what is the Technicolor alternative? If we now talk about true Enterprise FIM (to draw a distinction from basic, SIEM-style FIM), this superior level of FIM delivers file change alerts that have been automatically assessed in context - is this a good change or a bad change?

For example, if a Group Policy Security Setting is changed, how do you know whether this increases or decreases the policy's protection? Enterprise FIM will not only report the change, but expose the exact details of what changed, whether it was a planned or unplanned change, and whether it violates or complies with your adopted Hardened Build Standard.

Better still, Enterprise FIM can give you an immediate snapshot of whether databases, servers, EPoS systems, workstations, routers and firewalls are secure - configured within compliance of your Hardened Build Standard or not. By contrast, a SIEM system is completely blind to how systems are configured unless a change occurs.
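A simple sketch of that contextual assessment is shown below (Python); the setting names and minimum values stand in for a corporate Hardened Build Standard and are assumptions for illustration only.

```python
# Sketch: assessing a detected settings change in context rather than just
# reporting 'changed'. Setting names and minimums are illustrative
# assumptions standing in for a Hardened Build Standard.
HARDENED_BUILD_STANDARD = {
    "MinimumPasswordLength": 12,   # assumed policy minimum
    "PasswordHistorySize": 24,     # assumed policy minimum
}

def assess_change(setting, old_value, new_value):
    minimum = HARDENED_BUILD_STANDARD.get(setting)
    if minimum is None:
        return f"{setting}: {old_value} -> {new_value} (not covered by the standard)"
    direction = "strengthens" if new_value > old_value else "weakens"
    status = "complies with" if new_value >= minimum else "VIOLATES"
    return (f"{setting}: {old_value} -> {new_value} "
            f"({direction} the setting and {status} the build standard)")

# A 'black and white' FIM would report both of these identically as
# 'changed'; contextual assessment separates the good change from the bad.
print(assess_change("MinimumPasswordLength", 8, 14))
print(assess_change("MinimumPasswordLength", 14, 6))
```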

Conclusion

The real message is that trying to meet your responsibilities with respect to PCI Compliance requires an inclusive understanding of all PCI requirements. Requirements taken in isolation and too literally may leave you with a 'noisy' PCI solution, helping to mask rather than expose potential security threats. In conclusion, there are no short cuts in security - you will need the right tools for the job. A good SIEM system is essential for addressing Requirement 10, but an Enterprise FIM system will give you so much more than just ticking the box for Req 11.5.

Full color is so much better than black and white.

Wednesday, 12 June 2013

File Integrity Monitoring - FIM Could Just Save Your Business

Busted! The Citadel Cybercrime Operation

No guns were used, no doors forced open, and no masks or disguises were used, but up to $500 million has been stolen from businesses and individuals around the world. Reuters reported last week that one of the world's biggest ever cybercrime rings has just been shut down. The Citadel botnet operation, first exposed in August last year, shows that anyone who wants to think big when it comes to cybercrime can make truckloads of money without even leaving home.

It's a familiar story of basic identity theft - PCs used to access on-line bank accounts were infiltrated by keylogging malware known as Citadel. This allowed security credentials to be stolen and then used to steal money from the victims' bank accounts. The malware had been in operation for up to 18 months and had affected up to 5 million PCs.

As with any malware, until it has been discovered, isolated and understood, anti-virus technology cannot tackle a strain like Citadel. So-called 'zero day' malware can operate undetected until an anti-virus definition has been formulated to recognize the malware files and remove them.

This is why file integrity monitoring software is also an essential defense measure against malware. File integrity monitoring or FIM technology works on a 'zero tolerance' basis, reporting any changes to operating system and program filesystems. FIM ensures that nothing changes on your protected systems without being reported for validation. For example, a Windows Update will result in file changes, but provided you are controlling when and how updates get applied, you can then isolate any unexpected or unplanned changes, which could be evidence of a malware infection. Good FIM systems filter out expected, regular file changes and focus attention on those system and configuration files which, under normal circumstances, do not change.

A victimless crime? Maybe not if you're a business that has been affected

In a situation like this, banks will usually try to unravel the problem between themselves - bank accounts that have been plundered will have had money moved on to another bank account, and another, and so on, and attempts will be made to recover any misappropriated funds. Inevitably some of the cash will have been spent, but there is also a good chance that large sums can be recovered.
Generally speaking, individuals affected by identity theft or credit card fraud will have their funds reimbursed by their bank and the banking system as a whole, so it often feels like a victimless crime has been perpetrated.

Worryingly though, in this case, an American Bankers Association spokesman has been reported as saying that 'banks may require business customers to incur the losses'. It isn't clear as to why the banks may be seeking to place blame on business customers in this case. It is reported that Citadel was present in illegally pirated copies of Windows, so the victims may well be guilty of using counterfeit software, but who is to blame, and how far down the line can the blame be passed? The business customer, their supplier of the pirated software, the wholesaler who supplied the supplier?

Either way, any business user of on-line banking technology (and the consensus of estimates suggests that around half of businesses do at least 50% of their banking on-line, with this increasing year on year) should take seriously the need to protect access to their bank accounts. It could well be that nobody else is looking out for you.

Conclusion

It may still be the case that 'Crime doesn't pay' but it seems that Cybercrime can pay handsomely. But for cybercrime to work, there needs to be a regular supply of victims and in this case, victims not using any kind of file integrity monitoring are leaving themselves exposed to zero-day malware which is currently invisible to anti-virus systems.

Good security is not just about installing AV software or even operating FIM but should be a layered and integrated approach. Leveraging security technology such as AV, FIM, firewalling, IDS and IPS should be done in conjunction with sound operating procedures to harden and patch systems regularly, verified with a separate auditing and governance function.
The biggest security threat is still complacency.


Wednesday, 29 May 2013

File Integrity Monitoring - Is FIM Better Than AV? Is a Gun Better Than a Knife?

Is a gun better than a knife?

I've been trying hard for an analogy, but this one kind of works. Which is better? A gun or a knife?
Both will help defend you against an attacker. A gun may be better than a knife if you are under attack from a big group of attackers running at you, but without ammunition, you are left defenseless. The knife works without ammunition and always provides a consistent deterrent, so in some respects, gives better protection than a gun.

Which is not a bad way to introduce the concept of FIM versus Anti-Virus technology. Anti-Virus technology will automatically eliminate malware from a computer, usually before it has done any damage. Both at the point at which malware is introduced to a computer, through email, download or USB, and at the instant at which a malware file is accessed, the AV will scan for known malware. If the file is identified as a known virus, or even if it exhibits characteristics that are associated with malware, the infected files can be removed from the computer.

However, if the AV system doesn't have a definition for the malware at hand, then like a gun with an empty magazine, it can't do anything to help.

File Integrity Monitoring by contrast may not be quite so 'active' in wiping out known malware, but - like a knife - it never needs ammo to maintain its role as a defense against malware. A FIM system will always report potentially unsafe filesystem activity, albeit with intelligence and rules to ignore certain activities that are always defined safe, regular or normal.

AV and FIM versus the Zero Day Threat

The key points to note from the previous description of AV operation are that the virus must either be 'known', i.e. the virus has been identified and categorized by the AV vendor, or that the malware must 'exhibit characteristics associated with malware', i.e. it looks, feels and acts like a virus. Anti-virus technology works on the principle that it has a regularly updated 'signature' or 'definition' list containing details of known malware. Any time a new file is introduced to the computer, the AV system has a look at the file and if it matches anything on its list, the file gets quarantined.

In other words, if a brand new, never-been-seen-before virus or Trojan is introduced to your computer, it is far from guaranteed that your AV system will do anything to stop it. Ask yourself - if AV technology was perfect, why would anybody still be concerned about malware?

The lifecycle of malware can be anything from one day to two years. The malware must first be seen - usually a victim will notice symptoms of the infection and investigate before reporting it to their AV vendor. At that point the AV vendor will work out how to counteract the malware in future and update their definitions/signature files with details of this new strain. Finally the definition update is made available to the world; individual servers and workstations will update themselves and will thereafter be rendered immune to this virus. Even if this process takes only a day to conclude, that is a pretty good turnaround - after just one day the world is safe from the threat.

However, up until this time the malware is a problem. Hence the term 'Zero Day Threat' - the dangerous time is between 'Day Zero' and whichever day the inoculating definition update is provided.

By contrast, a FIM system will detect the unusual filesystem activity - either at the point at which the malware is introduced or when the malware becomes active, creating files or changing server settings to allow it to report back the stolen data.

Where is FIM better than AV?

As outlined previously, FIM needs no signatures or definitions to try and second guess whether a file is malware or not and it is therefore less fallible than AV.

Where FIM provides a distinct advantage over AV is that it offers far better preventative measures. Anti-Virus systems are based on a reactive model, a 'try and stop the threat once the malware has hit the server' approach to defense.

An Enterprise FIM system will not only keep watch over the core system and program files of the server, watching for malware introductions, but will also audit all the server's built-in defense mechanisms. The process of hardening a server is still the number one means of providing a secure computing environment and prevention, as we all know, is better than cure. Why try and hope your AV software will identify and quarantine threats when you can render your server fundamentally secure via a hardened configuration?
Add to this that Enterprise FIM can be used to harden and protect all components of your IT estate (including Windows, Linux, Solaris, Oracle, SQL Server, firewalls, routers, workstations and POS systems) and you are looking at an absolutely essential IT security defense system.

Conclusion

This article was never going to be about whether you should implement FIM or AV protection for your systems. Of course, you need both, plus some good firewalling, IDS and IPS defenses, all wrapped up with solid best practices in change and configuration management, all scrutinized for compliance via comprehensive audit trails and procedural guidelines.

Unfortunately there is no real 'making do' or cutting corners when it comes to IT Security. Trying to compromise on one component or another is a false economy and every single security standard and best practice guide in the world agrees on this.

FIM, AV, auditing and change management should be mandatory components in your security defenses.

Tuesday, 7 May 2013

File Integrity Monitoring – Database Security Hardening Basics

The Database – The Mother Lode of Sensitive Data
Being the heart of any corporate application means your database technology must be implemented and configured for maximum security. Whilst the desire to ‘get the database as secure as possible’ appears to be a clear objective, what does ‘secure as possible’ mean?

Whether you use Oracle 10g, Oracle 11g, DB2, Microsoft SQL Server, or even MySQL or PostgreSQL, a contemporary database is at least as complex as any modern server operating system. The database system will comprise a whole range of configuration parameters, each with security implications, including
  • User accounts and password settings
  • Roles and assigned privileges
  • File/object permissions
  • Schema structure
  • Auditing functions
  • Networking capabilities
  • Other security defense settings, for example, use of encryption
Hardened Build Standard for Oracle, SQL Server, DB2 and others
Therefore, just as with any Windows or Linux OS, there is a need to derive a hardened build standard for the database. This security policy or hardened build standard will be derived from collected best practices in security configuration and vulnerability mitigation/remediation, and just as with an operating system, the hardening checklist will comprise hundreds of settings to check and set for the database.
Depending on the scale of your organization, you may then need hardening checklists for Oracle 10g, Oracle 11g, SQL Server, DB2, PostgreSQL and MySQL, and maybe other database systems besides.

Automated Compliance Auditing for Database Systems
Potentially, there will be a requirement to verify that all databases are compliant with your hardened build standard, involving hundreds of checks across hundreds of database systems, so automation is essential, not least because the hardening checklists are complex and time-consuming to verify. There is also something of a conflict to manage, inasmuch as the user performing the checklist tests will necessarily require administrator privileges to do so. So in order to verify that the database is secure, you potentially need to loosen security by granting admin rights to the user carrying out the audit. This provides a further driver for moving the audit function to a secure and automated tool.

In fact, given that security settings could be changed at any time by any user with privileges to do so, verifying compliance with the hardened build standard should also become a regular task. Whilst a formal compliance audit might be conducted once a year, guaranteeing security 365 days a year requires automated tracking of security settings, providing continuous reassurance that sensitive data is being protected.
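As a sketch of what such an automated audit might look like, the snippet below (Python) runs a list of checklist items through a standard DB-API cursor. The example queries target MySQL system tables and are assumptions only; checks for Oracle, SQL Server or DB2 would query their own catalog views, and column names vary between versions.

```python
# Sketch of an automated database compliance check: each checklist item is a
# query plus a pass/fail rule, run through a standard DB-API cursor.
# The example MySQL queries and connection details are assumptions.
CHECKLIST = [
    ("No accounts with an empty password",
     "SELECT user, host FROM mysql.user WHERE authentication_string = ''",
     lambda rows: len(rows) == 0),
    ("No anonymous accounts",
     "SELECT user, host FROM mysql.user WHERE user = ''",
     lambda rows: len(rows) == 0),
]

def run_audit(cursor):
    """Run every checklist item and report PASS/FAIL."""
    for description, sql, passes in CHECKLIST:
        cursor.execute(sql)
        rows = cursor.fetchall()
        print(("PASS" if passes(rows) else "FAIL"), "-", description)

# Usage (placeholder connection details, MySQL Connector/Python assumed):
#   import mysql.connector
#   conn = mysql.connector.connect(host="db-host", user="auditor", password="...")
#   run_audit(conn.cursor())
```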

Insider Threat and Malware Protection for Oracle and SQL Server Database Systems
Finally, there is also the threat of malware and insider threats to consider. A trusted developer will naturally have access to system and application files, as well as the database and its filesystem. Governance of the integrity of configuration and system files is essential in order to identify malware or an insider-generated application ‘backdoor’. Part of the answer is to operate tight scrutiny of the change management processes for the organization, but automated file integrity monitoring is also essential if disguised Trojans, zero day malware or modified bespoke application files are to be detected.

File Integrity Monitoring – A Universal Solution to Hardening Database Systems
In summary, the most comprehensive measure for securing a database system is to use automated file integrity monitoring. File integrity monitoring or FIM technology serves to analyze configuration files and settings, both for vulnerabilities and for compliance with a security best practices-based hardened build standard.
The FIM approach is ideal, as it provides a snapshot audit capability for any database, producing an audit report within a few seconds and showing where security can be improved. This not only automates the process, making a wide-scale estate audit simple, but also de-skills the hardening exercise to an extent. Since the best practice knowledge of which files need to be inspected and how to identify vulnerabilities is built into the FIM tool’s reports, the user can get an expert assessment of their database security without needing to fully research and interpret hardening checklist materials.

Finally, file integrity monitoring will also identify Trojans and zero-day malware that may have infected the database system, and also any unauthorized application changes that may introduce security weaknesses.
Of course, any good FIM tool will also provide file integrity monitoring functions to Windows, Linux and Unix servers as well as firewalls and other network devices, performing the same malware detection and hardening audit reporting as described for database systems.

For fundamentally secure IT systems, FIM is still the best technology to use.

Wednesday, 10 April 2013

FIM for PCI DSS - Card Skimmers Still Doing the Business After All These Years

Card Skimming - Hardware or Software?
Simplest is still best - whether they are software-based (as in the so-called 'Dexter' or 'VSkimmer' Trojan - Google it for more information) or classic hardware interception devices, card skimming is still a highly effective means of stealing card data.
The hardware approach can be as basic as inserting an in-line card data capture device between the card reader and the EPOS system or Till. This sounds crude but in more advanced cases, the card skimming hardware is cunningly embedded within the card reader itself, often with a cell phone circuit to relay the data to the awaiting fraudster.
Software skimmers are potentially far more powerful. First of all, they can be distributed globally and clearly are not physically detectable like their hardware equivalents. Secondly, they provide access both to 'card present' (i.e. POS) transactions and to 'card not present' transactions, for example by tapping into payments via an eCommerce website.

EMV or Chip and PIN - Effective up to a Point
Where implemented - which of course excludes the US at present - EMV technology (supporting 'Chip and PIN' authorizations) has resulted in big reductions in 'cardholder-present' fraud. A card skimmer would need not just the card details but also the PIN (Personal Identification Number) to unlock them. Embedded card skimming technology can grab the PIN as it is entered too, hence the emphasis on requiring only approved PIN entry devices with anti-tampering measures built in. Alternatively, just use a video camera to record the user entering the PIN and write it down!
By definition, the EMV chip security and PIN entry requirement is only effective for face-to-face transactions where a PED (PIN Entry Device) is used. As a consequence, 'card not present' fraud is still increasing rapidly all over the world, proving that card skimming remains a potentially lucrative crime.
In a global market, easily accessible via the internet, software card skimming is a numbers game. It is also one that relies on a constantly renewing stream of card numbers since card fraud detection capabilities improve both at the acquiring banks and card brands themselves.

Card Skimming in 2013 - The Solution is Still Here
Recently reported research in SC Magazine suggests that businesses are subject to cyber attacks every 3 minutes. The source of the research is Fire Eye, a sandbox technology provider, and they are keen to stress that these malware events are ones that would bypass what they refer to as legacy defences - firewalls, anti-virus and other security gateways. In other words, zero day threats, typically mutated or modified versions of Trojans or other malware, delivered via phishing attacks.
What is frustrating to the PCI Security Standards Council and the card brands (and no doubt software companies like Tripwire, nCircle and NNT!) is that the 6 year old PCI DSS advocates a range of perfectly adequate measures to prevent any of these newly discovered Trojans (and buying a Fire Eye scanner isn't on the list!). All eCommerce servers and EPOS systems should be hardened and protected using file integrity monitoring. While firewalls and anti-virus are also mandatory, FIM is used to detect malware missed by these devices which, as the Fire Eye report shows, is as common as ever. A Trojan like VSkimmer or Dexter will manifest as file system activity and, on a Windows system, will always generate registry changes.
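To illustrate just how visible that kind of change is, here is a small sketch (Python, Windows-only, using the standard-library winreg module) that snapshots the common auto-run registry keys and reports any new entries; the key list is illustrative rather than exhaustive.

```python
# Sketch: detecting the kind of registry change a skimming Trojan typically
# makes, by snapshotting the Windows Run keys and diffing them.
# Windows-only; the list of keys monitored here is illustrative.
import winreg

RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def snapshot():
    """Return {(hive, key_path, value_name): value_data} for every Run entry."""
    result = {}
    for hive, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue
        value_count = winreg.QueryInfoKey(key)[1]  # number of values under the key
        for i in range(value_count):
            name, data, _type = winreg.EnumValue(key, i)
            result[(hive, path, name)] = data
        winreg.CloseKey(key)
    return result

before = snapshot()
# ... later, or on a schedule ...
after = snapshot()
for entry in after.keys() - before.keys():
    print("NEW AUTO-RUN ENTRY:", entry, "->", after[entry])
```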
Other means of introducing skimming software are also blocked if the PCI DSS is followed correctly. Card data storing systems should be isolated from the internet where possible, USB ports should be disabled as part of the hardening process, and any network access should be reduced to the bare minimum required for operational activities. Even then, access to systems should be recorded and limited to unique usernames only (not generic root or Administrator accounts).
The PCI DSS may be old in internet years, but fundamentally sound and well-managed security best practices have never been as relevant and effective as they are today.

Tuesday, 19 February 2013

Agentless FIM – Why File Integrity Monitoring Without Agents Is The Same, and Better, and Worse than using Agents

Introduction
Agent versus agentless is a perennial debate for any monitoring requirement and is something that has been written about previously.
The summary of the previous assessment was that agent-based FIM is usually better due to the real-time detection of changes, negating the need for repeated full baseline operations, and due to the agent providing file hashing, even though there is an additional management overhead for the installation and maintenance of agent software.
But what about agentless systems that purport to provide hashing, seemingly able to cover all the requirements and deliver the functionality of an agent-based FIM solution, but still without using an agent?

What Is So Scary About Agents Anyway?
The problem with all agents is one of maintenance. First the agent itself needs to be deployed and installed on the endpoint. Usually this will also require other components like Java or Mono to be enabled at the endpoint, and these all have their own maintenance requirements too.
WSUS/Windows Update Services and the Update Manager functions in Ubuntu make it much easier now to maintain packaged programs, but it is accepted that introducing more components to any system will only ever increase the range of ‘things that can go wrong’.
So we’ll make that 1-0 to Agentless for ease of implementation and maintenance, even though both functions can be automated to a greater or lesser degree – good FIM solutions will automatically update their agent components if new versions are released.

System Resources – Which Option Is More Efficient?
No agent means the agentless system must operate on a polled basis, and operating on a polled basis means the monitoring system is blind to any security events or configuration changes that occur until the next poll. This could mean that security threats go undetected for hours, and in the case of rootkit malware, irreparable damage could have been done before anybody knows that there is a problem.
Poll intervals can be reduced, but the nature of an agentless system is that every attribute for every object or file being monitored must be gathered on every poll because, unlike an agent-based FIM solution, there is no means of tracking and recording changes as they happen.
The consequence of this is that agentless polls are as heavy in terms of system resources as the initial baselining operation of an agent-based system. Every single file and attribute must be recorded for every poll, regardless of whether changes have occurred or not.
Worse still, all the data collected must be dragged across the network to be analyzed centrally, and again, this load is repeated for every single poll. This also makes agentless scans slow to operate.
By contrast, an agent-based FIM solution will work through the full baseline process once only, and then use its vantage point on the endpoint host to record changes to the baseline in real-time as they occur. Being host-based also gives the agent access to the OS as changes are made, thereby enabling capture of ‘Who made the Change’ data too.
The agent gives a much more efficient solution in terms of both host resources and network load, operating on a changes-only basis. If there are no changes to record, no host resources are used and no network capacity either. The agentless poll, by contrast, will always consume a full baseline’s worth of resources for every scheduled scan. This also makes running a report significantly slower than using an agent that already holds up-to-date baselines of the information needed in the report.
This easily levels the scores up at 1-1.
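For a feel of what ‘real-time’ means here, the sketch below shows agent-style change detection running on the host itself. It uses the third-party Python ‘watchdog’ package as a stand-in for a FIM agent’s filesystem hooks, and the monitored path is just an example.

```python
# Sketch of agent-style, real-time change detection on the host, using the
# third-party 'watchdog' package (pip install watchdog) as a stand-in for a
# FIM agent's filesystem hooks. The monitored path is an example.
import time
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class ChangeHandler(FileSystemEventHandler):
    def on_created(self, event):
        print("CREATED: ", event.src_path)

    def on_modified(self, event):
        print("MODIFIED:", event.src_path)

    def on_deleted(self, event):
        print("DELETED: ", event.src_path)

observer = Observer()
observer.schedule(ChangeHandler(), "/etc", recursive=True)  # example path
observer.start()  # changes are reported within seconds, not at the next poll
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
```

A full agent would add hashing, baselining and change-approval context on top of this event stream, but the contrast with a once-a-day agentless poll is already clear.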

Security Considerations of Agentless versus Agent-Based FIM Solutions
Finally, there is a further consideration for the agentless solution that doesn’t apply to the agent-based FIM option. By requiring the agentless solution to log in and execute commands on the server to gather baseline information, the agentless solution’s server needs an account with network access to the host. The account provisioned will need sufficiently high privileges to access the folders and files that need to be tracked and, by definition, these are typically the most sensitive objects on the server in terms of security governance. Private keys can help restrict access to a degree, but an agentless solution will always carry an additional inherent security risk over and above that posed by agent-based technology.
I would call that a clear 2-1 to the Agent, being more efficient, faster and more effective in reporting threats in real-time.

File Hashing – What is the Advantage?
The classic approach to file integrity monitoring is to record all the file attributes for a file, then perform a comparison of the same data to see if any have changed.
Where more detail is needed of how the file make-up or contents have changed (mainly relevant to Linux/Unix text-based configuration files or web application configuration files), the contents may be compared side by side to show the changes.
Using a file hash (more accurately a cryptographic file hash) is an elegant and very neat way of summarizing a file’s composition in a single, simple, unique code. This provides several key benefits -
  • Regardless of the size and complexity (text or binary) of the file being tracked, a fixed length but unique code can be created for any file – comparing hash values for files is a simple but highly sensitive way to check whether there have been any changes or not
  • The hash is unique for each file and, due to the algorithms used to generate cryptographic hashes, even tiny changes result in significant variations in the hash values returned, making changes obvious
  • The hash is portable so the same file held on different servers will return the same identical hash value, providing a forensic-level ‘DNA Fingerprint’ for the file and version
Therefore cryptographic hashing is an important dimension to file integrity monitoring; however, the standard Windows OS programs and components do not offer a readily usable mechanism for delivering this function.
So a further big advantage of using an agent-based FIM solution is that cryptographic hashing can be provided on all platforms, unlike with a pure agentless solution.
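The ‘avalanche’ behaviour described in the list above is easy to demonstrate; in the short sketch below (Python), a one-character change to the input produces an entirely unrelated SHA-256 value.

```python
# Quick illustration of the avalanche property: a one-byte change to the
# input produces a completely different SHA-256 digest.
import hashlib

original = b"port = 443\n"
tampered = b"port = 444\n"

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(tampered).hexdigest())
# The two digests share no obvious relationship, so even the smallest
# tampering with a monitored file is unmistakable in the hash comparison.
```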
3-1 to the Agent and it looks like it is going to be hard for the agentless solution to get back in the game!

When is Agentless FIM Really the Same as an Agent-Based FIM Solution?
Most vendors, like Tripwire, provide a clear-cut choice of agent-based or agentless options, with the pros and cons of each understood.
The third option is where the best of the agentless and agent-based approaches supposedly come together to cover all capabilities. This kind of solution is positioned as agentless, and yet delivers the agent-based features. The solution behaves like an agentless solution, inasmuch as it functions on a scheduled full-scan basis, logging in to devices to track file characteristics. However, there is no need to pre-install an agent to run FIM, so the solution feels like it is agentless.
In practice the solution requires an Administrator logon to the servers to be scanned. The system then logs on and executes a whole sequence of scripted command-line checks on file integrity, but will also pipe across a program to help perform file hashing. This program – some would say agent – is then deleted after the scan.
So is this solution agentless? No, although it does remove the major hassle with an agent-based solution in that it automates the initial deployment of the agent.
What are the other benefits of this approach? None really. It is less secure than an installed agent - providing an Admin logon that can be used across the whole network is arguably weakening security before you even start.
It is massively less efficient than a local agent - piping programs across the network, then executing a bunch of scripts, then dragging all the results back across the network is hugely inefficient compared to an agent that runs locally, does its baselines and compares locally and then only if it needs to, sends results back.
It is also fundamentally not a very effective way to keep your estate secure - which rather misses the point of doing it in the first place! The reason is that you only find out that security has been weakened or actually compromised when you next run a scan - always too late! An agent-based FIM solution will detect config drift and FIM changes in real-time - you know you have a potential risk within seconds of it arising, complete with details of who made the change.

Summary
So in summary, agentless is less efficient, less secure and less able to keep your estate secure (and the most effective agentless solutions still use a temporary agent anyway). The ease of deployment of agentless is tempting, but deployment can always be automated using any one of a number of software distribution solutions. Perhaps the best approach is still to reserve the option to use both and choose the right method case by case: firewall appliances, for example, will always need to be handled using scripted, agentless interrogation, while Windows servers can only truly be audited for vulnerabilities using a cryptographic hashing, real-time change detection agent.

Friday, 21 December 2012

So did the OSSEC File Integrity Monitor detect the Java Remote Exploit? For this and much more...Security BSides Delaware


I am sure many who read this blog like to attend security conferences, and that you are familiar with Black Hat, Defcon, and H.O.P.E. These are great conferences with a lot of high quality content, but they are also expensive and very crowded. Last month I had the opportunity to attend Security BSides in Delaware. It was my first BSides and will not be my last. It was held at Wilmington University on November 9th and 10th. The conference was a smaller one, but in my opinion that made it great. The best part: the entire conference was free. Free attendance, parking, breakfast, and lunch. The content was high quality and attracted a number of well-known speakers.

I was only able to attend on Saturday. My first talk of the day was “Social Engineering Basics and Beyond”, given by Valerie Thomas @hacktress09. Valerie is a penetration tester: she audits companies' security policies and is paid to hack them. The focus of the talk was on what could be the weakest link in your organization - people. You can have the best firewalls, anti-virus and advanced persistent threat detection, but all of that can be overcome by an unaware staff member or an inattentive help desk team member. Since everyone broadcasts their entire lives and routines on Twitter, Facebook and 4Square, it is not hard to figure out who works for a company and who their co-workers are. Once you have that information, it is a quick hop to Google to figure out the organization's email format, username format and other key information. With that information in hand, the hacker makes a carefully crafted call to the helpdesk and requests a password reset, or gathers whatever else is needed to launch the attack. The bottom line: train your people, make sure they verify security information and know who they are on the phone with. The person on the other end may be trying to steal your information.

After a quick lunch I decided to visit the lock pick village. The challenges were to pick some simple locks, as well as to learn how to impression a lock and cut a key. I have previous experience with lock picking, so picking was easy. As a side note, the Kwikset lock on your front door can be picked by an experienced picker in less than 2 minutes. Impressioning a key, however, is very difficult. After about 20 minutes I was able to impression and open a one-pin lock. Most locks have 5 pins, so you can see why it is so hard. The good part is that a lock impression can be done in stages, so if you have to abort your attempt you can always come back and finish later. Also, once you have the key you always have it and can get in and out quickly.

The afternoon was punctuated by shorter talks; I attended three others. The first was a talk given by a group of students regarding the CVE-2012-4681 Java remote exploit. The presentation was interesting in that the standard security most people would have on their machines was easily bypassed. The various freeware programs such as OSSEC also did not detect the exploit. It appears the file integrity monitoring (FIM) portion of OSSEC wasn't used, but in this case it would have picked up the changes. They also caught a special-privileges escalation on a user account in the system logs; a properly configured log management tool would have alerted on the problem and warranted further investigation. The write-up is available here: https://cyberoperations.wordpress.com/student-research/cve-2012-4681-by-o-oigiagbe-r-patterson/.

The second talk I attended was on exploiting the Android operating system. In this case the victim would be using a “rooted” Android phone with ADB left on (the default). The attacker could attach his phone or Nexus 7 tablet to the device and, within a few minutes, steal critical data from the victim's phone or tablet. Included in this critical data was the Google authentication token. The token, which can be pasted directly into a web browser, allows access to the victim's entire Google account, bypassing any Google-supplied security enhancements, including two-factor authentication. The speaker even gave everyone in the class a cable to perform the attack with. Bottom line: if you root your phone, turn ADB off!

The last talk I attended was on Pentoo http://www.pentoo.ch/, the Gentoo-based penetration testing live CD. It is an alternative to BackTrack. The developer of the tool was very passionate about it and presented several advantages. The first is a hardened kernel; he pointed out how laughably easy it is to hack BackTrack while it is running, a real problem at cons like Defcon. He also pointed out the advantages of a good stable of WiFi drivers, a built-in update system and the ability to save changes to a USB stick. I have not had an opportunity to test Pentoo myself, but I hope to over the holiday break and I will report back in another blog post.

Finally after a long day at the con I stopped off at Capriotti’s and picked up a Bobbie. Those from Delaware will know what I am talking about, for the rest of the world, think Thanksgiving on a sub roll.

Bart Lewis, NNT

Tuesday, 11 December 2012

Server Hardening Policy - Examples and Tips

Introduction
Data Protection and Information Security best practice guidelines always place server hardening at the top of the list of measures that should be taken.
Every organization should have a hardened Windows build standard, a hardened Linux build standard, a hardened firewall standard and so on. However, determining an appropriate server hardening policy for your environment requires detailed research into hardening checklists, followed by an understanding of how they should be applied to your operating systems and applications.

Server Hardening Policy background
Any server deployed in its default state will naturally be lacking in even basic security defenses. This leaves it vulnerable to compromise.
A standard framework for your server security policy should include:
  • Access Security (physical and logical)
  • Operating System Configuration
  • User Accounts and Passwords
  • Filesystem Permissions
  • Software and Applications image
  • Patching and Updates
  • Auditing and Change Control
The server hardening policy should be documented and reviewed regularly.

Access Security
  • Is the server held under lock and key? Is there a log of all access to the server (visitor book, card swipe/entry code records and video surveillance)?
  • Is server access governed by firewall appliances and/or software?
  • Is network access disabled, or if required, restricted using device/address based access control lists? For example, are the hosts.allow and hosts.deny files configured in line with best practice guidelines? Are accounts provided on a strict ‘must have access’ basis? Are there logs kept of all access and all account privilege assignment?
Operating System Configuration
  • Is the OS service packed/patched to latest levels and is this reviewed at least once a month?
  • Are all services/daemons removed or disabled where not required? For example, obvious candidates like web, FTP and Telnet services should be removed, and SSH used instead of Telnet. Similarly, remote desktop access should be removed if business operations will not be overly compromised. The best tip is to remove everything you know is not required (e.g. the Themes service), then carefully experiment, one service at a time, with others you suspect are unnecessary but are not sure about. Don't feel obliged to take this process too far: if disabling a service compromises server operation too much, leave it enabled. A minimal cross-check of a service disable-list is sketched after this list.
  • For Windows Servers, is the Security and Audit Policy configured in line with best practice guidelines?
  • Is there a documented Secure Server Build Standard?
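As mentioned in the services bullet above, here is a minimal, Linux-only Python sketch of how a service disable-list can be cross-checked against systemd; the list of services is purely an example and would come from your own build standard.

    import subprocess

    # Example disable-list - in practice this comes from your Secure Build Standard
    SHOULD_BE_DISABLED = ["telnet.socket", "vsftpd.service", "cups.service"]

    for svc in SHOULD_BE_DISABLED:
        result = subprocess.run(["systemctl", "is-enabled", svc],
                                capture_output=True, text=True)
        state = result.stdout.strip() or result.stderr.strip()
        # 'disabled', 'masked' or a not-found error are all acceptable outcomes here
        if state == "enabled":
            print(f"FAIL: {svc} is enabled but the build standard says it should not be")
        else:
            print(f"PASS: {svc} ({state})")
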
Filesystem Permissions
  • For example, for Unix and Linux servers, are permissions on key security files such as /etc/passwd or /etc/shadow set in accordance with best practice checklist recommendations? (A minimal permission check is sketched after this list.)
  • Is sudo being used, and are only members of the wheel group allowed to use it?
  • For Windows servers, are the key executables, DLLs and drivers protected in the System32 and SysWOW64 folders?
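Following on from the /etc/passwd and /etc/shadow bullet above, a minimal Python permission check might look like the sketch below; the expected modes are typical checklist values used purely for illustration and should be replaced with those from your own hardening standard.

    import os, stat

    # Illustrative maximum permissions - take the real values from your checklist
    EXPECTED = {
        "/etc/passwd": 0o644,
        "/etc/shadow": 0o640,   # some checklists require 0o600 or even 0o000
    }

    for path, max_mode in EXPECTED.items():
        mode = stat.S_IMODE(os.stat(path).st_mode)
        extra = mode & ~max_mode          # any permission bits beyond the allowed maximum
        status = "PASS" if extra == 0 else "FAIL"
        print(f"{status}: {path} is {oct(mode)} (checklist maximum {oct(max_mode)})")
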
User Accounts and Passwords
  • Are default user accounts, such as the local Administrator and Guest accounts, renamed and, in the case of the Guest account, disabled? While these accounts are protected by a password, a couple of simple steps - disabling the Guest account and renaming both the Guest and Administrator accounts - multiply up the security defenses in this area.
  • Is there a password policy set with ageing, complexity, length, retry, lockout and reuse settings in line with best practice guidelines?
  • Is there a regular review process for removing redundant or leavers’ accounts?
  • Is there an audit trail of all account creation, privilege or rights assignments and a process for approval?
Software and Applications image/ Patching and Updates
  • Which packages and applications are defined within the Secure Build Standard? For example, anti-virus, data leakage protection, firewalling and file integrity monitoring?
  • Is there a process to check that the latest versions and patches have been tested and applied?
  • Are automated updates to packages disabled in favor of scheduled, planned updates deployed in conjunction with a Change Management process?
Auditing and Change Control
  • Are audit trails enabled for all access, use of privilege, configuration changes and object access, creation and deletion? Are audit trails securely backed up and retained for at least 12 months?
  • Is file integrity monitoring used to verify the secure build standard/hardened server policy?
  • Is there a Change Management process, including a change proposal (covering impact analysis and rollback provisions), change approval, QA Testing and Post Implementation Review?
Best Practice Checklist for Server Hardening Policy
In the previous section there were a number of references to hardening the server 'in line with best practice checklists', and there are a number of sources for this information. In fact, you may be reading articles like this in search of a straight answer to 'How do I harden my Windows or Linux server?' It isn't quite as simple as that, unfortunately, but it doesn't have to be over-complicated either.
Getting access to a hardening checklist or server hardening policy is easy enough. For example, the Center for Internet Security provide the CIS hardening checklists, Microsoft and Cisco produce their own checklists for Windows and Cisco ASA and Cisco routers, and the National Vulnerability Database hosted by NIST provides checklists for a wide range of Linux, Unix, Windows and firewall devices. NIST also provide the National Checklist Program Repository, based around the SCAP and OVAL standards.
SCAP is an ambitious project designed as a means of not only delivering standardized hardening checklists, but also automating the testing and reporting for devices. As such it is still being developed and refined, but in the meantime commercial systems like Tripwire Enterprise and NNT Change Tracker provide automated means of auditing a server hardening policy. The hardened server policy checklists can cover host operating systems such as CentOS, RedHat, Debian, Ubuntu, Solaris, AIX and of course Server 2003, Server 2008 and Windows 7/Windows 8.
However, any default checklist must be applied within the context of your server’s operation – what is its role? For example, if it is internet-facing then it will need to be substantially more hardened with respect to access control than if it is an internal database server behind a perimeter and internal firewall.

Server Hardening and File Integrity Monitoring
Once you have established your hardened server policy and have applied the various security best practice checklists to your hardened server build, you will need to audit all servers and devices within your estate for compliance with the build standard. Doing this manually is very time-consuming, so to automate the audit of a server for compliance with the security policy it is necessary to use a FIM (file integrity monitoring) tool like Change Tracker Enterprise or Tripwire Enterprise. These tools can automatically audit even wide-scale server estates within a few minutes, providing a full report of both passes and failures against the policy. Tips for mitigating vulnerabilities are also provided, so the task is greatly simplified and de-skilled.
Best of all, the hardened build standard for your server hardening policy can be monitored continuously. Any drift in configuration settings will be reported, enabling the system administrator to quickly mitigate the vulnerability again.
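As a very rough illustration of what such an automated audit does under the hood, the Python sketch below runs a couple of hypothetical checks and produces a pass/fail report with a compliance score. A commercial tool obviously covers hundreds of settings and adds remediation guidance, but the principle is the same.

    def root_ssh_login_disabled():
        # Hypothetical check: sshd_config sets PermitRootLogin no
        try:
            with open("/etc/ssh/sshd_config") as f:
                for line in f:
                    parts = line.split()
                    if len(parts) >= 2 and parts[0].lower() == "permitrootlogin":
                        return parts[1].lower() == "no"
        except OSError:
            pass
        return False

    def password_max_age_enforced():
        # Hypothetical check: PASS_MAX_DAYS in /etc/login.defs is 90 days or fewer
        try:
            with open("/etc/login.defs") as f:
                for line in f:
                    parts = line.split()
                    if len(parts) >= 2 and parts[0] == "PASS_MAX_DAYS":
                        return int(parts[1]) <= 90
        except (OSError, ValueError):
            pass
        return False

    CHECKS = [root_ssh_login_disabled, password_max_age_enforced]
    results = {check.__name__: check() for check in CHECKS}
    for name, ok in results.items():
        print(f"{'PASS' if ok else 'FAIL'}: {name}")
    passed = sum(results.values())
    print(f"Compliance score: {passed}/{len(CHECKS)} ({100 * passed // len(CHECKS)}%)")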

Summary
Prevention of security breaches is the best approach to data security. By locking out configuration vulnerabilities through hardening measures, servers can be made highly resistant to attack.
Using file integrity monitoring not only provides an initial audit and compliance score for all servers against standardized hardening checklists, but also ensures that Windows, Linux, Ubuntu, Solaris and CentOS servers all remain securely configured at all times.

Wednesday, 10 October 2012

File Integrity Monitoring and SIEM – Combat the Zero Day Threats and Modern Malware that Anti-Virus Systems miss

Introduction
It is well known that Anti-Virus technology is fallible and will continue to be so by design. The landscape (Threatscape?) is always changing and AV systems will typically update their malware signature repositories at least once per day in an attempt to keep up with the new threats that have been isolated since the previous update.

So how secure does your organization need to be? 80%? 90%? Because if you rely on traditional anti-virus defenses this is the best you can hope to achieve unless you implement additional defense layers such as FIM (file integrity monitoring) and SIEM (event log analysis).

Anti-Virus Technology – Complete With Malware Blind spots
Any Anti Virus software has an inherent weakness in that it relies on a library of malware ‘signatures’ to identify the viruses, Trojans and worms it is seeking to remove.

This repository of malware signatures is regularly updated, sometimes several times a day depending on the developer of the software being used. The problem is that the AV developer usually needs to have direct experience of any new strains of malware in order to counteract them. The concept of a 'zero day' threat is one that uses a new variant of malware yet to be identified by the AV system.

By definition, AV systems are blind to ‘zero day’ threats, even to the point whereby new versions of an existing malware strain may be able to evade detection. Modern malware often incorporates the means to mutate, allowing it to change its makeup every time it is propagated and so improve its effectiveness at evading the AV system.

Similarly, other automated security technologies that aim to block or remove malware, such as the sandbox or quarantine approach, all suffer from the same blind spot: if the malware is new – a zero day threat – then by definition there is no signature, because it has not been identified before. The unfortunate reality is that the unseen cyber-enemy also knows that new is best if they want their malware to evade detection. This is evidenced by the fact that in excess of 10 million new malware samples will be identified in any 6 month period.

In other words most organizations typically have very effective defenses against known enemies – any malware that has been previously identified will be stopped dead in its tracks by the IPS, anti-virus system, or any other web/mail filtering with sandbox technology. However, it is also true that the majority of these same organizations have little or no protection against the zero day threat.

File Integrity Monitoring – The 2nd Line Anti-Virus Defense System for When Your Anti-Virus System Fails
File Integrity Monitoring serves to record any changes to the file system i.e. core operating system files or program components. In this way, any malware entering your key server platforms will be detected, no matter how subtle or stealthy the attack.

In addition, FIM technology will also ensure other vulnerabilities are screened out of your systems by verifying that best practices for securely configuring your operating systems have been applied.

For example, any configuration settings such as user accounts, password policy, running services and processes, installed software, management and monitoring functions are all potential vectors for security breaches. In the Windows environment, the Windows Local Security Policy has been gradually extended over time to include greater restrictions to numerous functions that have been exploited in the past but this in itself is a highly complex area to configure correctly. To then maintain systems in this secure configured state is impossible without automated file integrity monitoring technology.
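To give a feel for how this might be checked programmatically, here is a small, Windows-only Python sketch that exports the Local Security Policy with the built-in secedit utility and inspects a few password and lockout settings. It needs to run with administrator rights, and the thresholds are example values rather than a recommendation; a real audit would be driven by your chosen benchmark and a far longer list of settings.

    import configparser, os, subprocess, tempfile

    # Export the effective local security policy to an INF file
    inf_path = os.path.join(tempfile.gettempdir(), "secpol_export.inf")
    subprocess.run(["secedit", "/export", "/cfg", inf_path], check=True)

    policy = configparser.ConfigParser()
    policy.read(inf_path, encoding="utf-16")      # secedit writes UTF-16 encoded output

    system_access = policy["System Access"]
    checks = {
        "MinimumPasswordLength": lambda v: int(v) >= 8,     # example threshold
        "PasswordComplexity":    lambda v: int(v) == 1,     # complexity enforced
        "LockoutBadCount":       lambda v: 0 < int(v) <= 6, # example lockout threshold
    }

    for setting, is_ok in checks.items():
        value = system_access.get(setting)
        status = "PASS" if value is not None and is_ok(value) else "FAIL"
        print(f"{status}: {setting} = {value}")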

Likewise SIEM or Security Information and Event Management systems are designed to gather and analyze all system audit trails/event logs and correlate these with other security information to present a true picture of whether anything unusual and potentially security threatening is happening.

It is telling that widely adopted and practiced security standards such as the PCI DSS place these elements at their core as a means of maintaining system security and verifying that key processes like Change Management are being observed.

Summary
Anti-virus technology is an essential and highly valuable line of defense for any organization. However, it is vital that the limitations and therefore vulnerabilities of this technology are understood and additional layers of security implemented to compensate. File Integrity Monitoring and Event Log Analysis are the ideal counterparts to an Anti-Virus system in order to provide complete security against the modern malware threat.

Friday, 7 September 2012

File Integrity Monitoring - FIM Agent Versus Agentless FIM

Introduction
The incessant escalation in both malware sophistication and proliferation means that fundamental file integrity monitoring is essential to maintaining malware-free systems. Signature-based anti-virus technologies are fallible and easily circumvented by zero-day malware, or by selectively created and targeted advanced persistent threat (APT) viruses, worms and Trojans.

Any good security policy will recommend the use of regular file integrity checks on system and configuration files. Best practice-based security standards such as the PCI DSS (Requirement 11.5), NERC CIP (System Security R15-R19), Department of Defense Information Assurance (IA) Implementation (DODI 8500.2), Sarbanes-Oxley (Section 404) and FISMA, the Federal Information Security Management Act (NIST SP800-53 Rev 4), specifically mandate regular checks for any unauthorized modification of critical system files, configuration files or content files, with critical file comparisons performed at least weekly.

However, file-integrity monitoring needs to be deployed with a little advanced planning and understanding of how the file systems of your servers behave on a routine basis in order to determine what unusual and therefore potentially threatening events look like.

The next question is then whether an Agentless or Agent-based approach is best for your environment. This article looks at the pros and cons of both options.

Agentless FIM for Windows and Linux/Unix Servers
Starting with the most obvious point: the first clear benefit of an agentless approach to file integrity monitoring is that it doesn't need any agent software to be deployed on the monitored host. This means that an agentless FIM solution like Tripwire or nCircle will always be the quickest option to deploy and to get results from. Not only that, but there is no agent software to update or to potentially interfere with server operation.

The typical agentless file integrity monitoring solution for Windows and Linux/Unix will use a scripted, command-line interaction with the host to interrogate the salient files. At the simplest end of the scale, Linux files can be baselined using a cat command, with subsequent samples compared to detect any changes. Alternatively, if a vulnerability audit is being performed in order to harden the server configuration, a series of grep commands using regular expressions will more precisely identify missing or incorrect configuration settings. Similarly, a Windows server can be interrogated using command-line programs; for example, the net.exe program can be used to expose the user accounts on a system, or even to assess the state or other attributes associated with a user account when piped with a find command, e.g. net.exe users guest |find.exe /i "Account active" will return an "Account active Yes" or "Account active No" result and establish whether the Guest account is enabled, a classic vulnerability for any Windows server.
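For a Linux host, a much simplified agentless check of this kind could be scripted in Python using the third-party paramiko SSH library, as sketched below; the host name, account and file list are placeholders, and a real solution would handle credentials and errors far more carefully.

    import paramiko

    HOST, USER, PASSWORD = "linux-server-01", "audituser", "********"   # placeholders
    FILES_TO_CHECK = ["/etc/passwd", "/etc/ssh/sshd_config", "/bin/login"]

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER, password=PASSWORD)

    results = {}
    for path in FILES_TO_CHECK:
        # Hash the file on the remote host rather than pulling it back over the network
        _stdin, stdout, _stderr = client.exec_command(f"sha256sum {path}")
        output = stdout.read().decode().strip()
        if output:
            results[path] = output.split()[0]
            print(f"{path}: {results[path]}")

    client.close()
    # A real agentless FIM tool would now compare 'results' with the hashes recorded
    # by the previous scheduled scan and alert on any differences.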

Agent-Based File Integrity Monitoring
The key advantage of an agent for FIM is that it can monitor file changes in real time. Because the agent is installed on the monitored host, OS activity can be monitored, and any file activity can be observed and changes recorded. Clearly, any agentless approach has to operate on a scheduled poll basis, and inevitably there is a trade-off between polling frequently enough to catch changes as they happen and limiting the extra load placed on the host and network by the monitoring. In practice, polling is typically run once per day on most FIM solutions, for example Tripwire, which means you risk being anything up to 24 hours late in identifying potential security incidents.

The second major advantage of an agent-based file integrity solution is that the host does not need to be 'opened up' to allow monitoring. All critical system and configuration files are protected by the host filesystem security; the Windows System32 folder, for example, is always an 'Administrator access only' folder. In order to monitor the files in this location, any external scripted interaction must be provided with Admin rights over the host, which immediately means the host needs to be made accessible via the network and an additional user or service account needs to be provisioned with Admin privilege, potentially introducing a new security weakness to the system. By contrast, an agent operates within the confines of the host, simply pushing out file integrity changes as they are detected.

Finally, an agent offers a distinct advantage over the agentless approach in that it can provide a 'changes only' update across the network, and even then only when there is a change to report. The agentless solution must run through its complete checklist of queries to make any assessment of whether changes have occurred, and even elaborate WMI or PowerShell scripts still demand considerable resources on the host and the network when dragging results back.

Summary
Nobody likes installing and maintaining agents on their servers, and if this can be avoided it is an attractive option to take. However, as outlined above, the agent-based approach delivers real-time detection, keeps the host locked down and sends only changes across the network, so the convenience of agentless FIM needs to be weighed against those benefits.

Wednesday, 15 August 2012

File Integrity Monitoring and SIEM - Why Layered Security Is Essential to Combat the APT

Every time the headlines are full of the latest cybercrime or malware scare story, such as the Flame virus, the need to review the security standards employed by your organization takes on a new level of urgency.

The 2012 APT (Advanced Persistent Threat)
The Advanced Persistent Threat differs from a regular hack or Trojan attack in that it is, as the name suggests, advanced in technology and technique, and persistent, in that it typically involves a sustained theft of data over many months.

So far the APT has largely been viewed as government-sponsored cyber-espionage, given the resources needed to orchestrate such an attack; the recent Flame malware, for example, appears to have been a US- or Israeli-backed espionage initiative against Iran. However, the leading edge of technology usually becomes the norm a year later, so expect APT-style attacks to reach the mainstream, with competitor-backed industrial espionage and 'hacktivist' groups like Lulzsec and Anonymous adopting similar approaches.

The common vector for these attacks is a targeted spear phishing infiltration of the organization. Facebook, LinkedIn and other social media make it much easier today to identify targets, and to work out what kind of phishing 'bait' is going to be most effective in duping the target into providing the all-important welcoming click on the tasty links or downloads offered.

Phishing is already a well-established tool for organized crime gangs, who will use these same profiled spear phishing techniques to steal data. As an interesting aside regarding organized crime's use of 'cybermuscle', it is reported that prices for botnets are plummeting at the moment due to an oversupply of available robot networks. If you want to coerce an organization with a threat of disabling their web presence, arm yourself with a global botnet and point it at their site - DDoS attacks are easier than ever to orchestrate.

Something Must Be Done...

To be clear on what we are saying here: it isn't that AV or firewalls are no use, far from it. But the APT style of threat will evade both by design, and this is the first fact to acknowledge - as with a recovering alcoholic, the first step is to admit you have a problem!

By definition, this kind of attack is the most dangerous, because any attack smart enough to skip past standard defense measures is certain to be backed by a serious intent to damage your organization. (Note: don't think that APT technology is therefore only an issue for blue chip organizations - that may have been the case, but now that the concepts and architecture of the APT are in the mainstream, the wider hacker and hacktivist communities will already have engineered their own interpretations of the APT.)
So the second fact to take on board is that there is an 'art' to delivering effective security and that requires a continuous effort to follow process and cross-check that security measures are working effectively.
The good news is that it is possible to automate the cross-checks and vigilance we have identified a need for, and in fact there are already two key technologies designed to detect abnormal occurrences within systems and to verify that security best practices are being operated.

FIM and SIEM - Security Measures Underwritten
File Integrity Monitoring, or FIM, serves to record any changes to the file system (i.e. core operating system files or program components) and to the system's configuration settings: user accounts, password policy, services, installed software, management and monitoring functions, registry keys and registry values, running processes, and security policy settings covering audit policy, user rights assignment and security options. FIM is designed to verify both that a device remains hardened and free of vulnerabilities at all times, and that the filesystem remains free of any malware.
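To make the registry side of that concrete, here is a small, Windows-only Python sketch (the key monitored is just an example) that condenses the values under a registry key into a single SHA-256 fingerprint, so that any later addition, removal or alteration of those values shows up as a changed fingerprint on the next check.

    import hashlib
    import winreg   # Windows-only standard library module

    # Example key: programs configured to start automatically at logon
    KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"

    def registry_fingerprint(hive, key_path):
        pairs = []
        with winreg.OpenKey(hive, key_path) as key:
            _subkeys, value_count, _modified = winreg.QueryInfoKey(key)
            for i in range(value_count):
                name, data, _type = winreg.EnumValue(key, i)
                pairs.append(f"{name}={data}")
        digest = hashlib.sha256()
        for entry in sorted(pairs):   # sort so enumeration order cannot alter the hash
            digest.update(entry.encode("utf-8", "replace"))
        return digest.hexdigest()

    fingerprint = registry_fingerprint(winreg.HKEY_LOCAL_MACHINE, KEY_PATH)
    print(f"{KEY_PATH}: {fingerprint}")
    # Store the fingerprint and compare it on the next run - a different value means
    # something has added, removed or altered an auto-run entry.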

Therefore even if some form of APT malware manages to infiltrate a critical server, well implemented FIM will detect file system changes before any rootkit protective measures that may be employed by the malware can kick in.

Likewise SIEM, or Security Information and Event Management, systems are designed to gather and analyze all system audit trails/event logs and correlate these with other security information to present a true picture of whether anything unusual and potentially security threatening is happening.
It is telling that widely adopted and practiced security standards such as the PCI DSS place these elements at their core as a means of maintaining system security and verifying that key processes like Change Management are being observed.

At the core of any comprehensive security standard is the concept of layered security - firewalling, IPS, AV, patching, hardening, DLP, tokenization, secure application development and data encryption, all governed by documented change control procedures and underpinned by audit trail analysis and file integrity monitoring. Even then with standards like the PCI DSS there is a mandated requirement for Pen Testing and Vulnerability Scanning as further checks and balances that security is being maintained.

Summary
In summary, your security policy should be built around the philosophy that technology helps secure your organization's data, but that nothing can be taken for granted. Only by practicing continuous surveillance of system activity can you truly maintain data security - very much the essence of the Art of Layered Security.