
Monday, 4 November 2013

Is Your QSA Making You Less Secure?

Introduction
Most organizations will turn to a QSA when undertaking a PCI compliance project. The Qualified Security Assessor is the person who must be satisfied that the security measures and procedures you implement meet the requirements of the PCI DSS, so it makes sense to ask them what you need to do.
For many, PCI compliance is simply about dealing with the PCI DSS in the same way they would deal with any other deadline-driven project: when does the bank want us to be PCI compliant, and what do we need to do before we get audited in order to pass?

For many, this is where the problems begin, because PCI compliance isn’t simply about passing an audit but about getting your organization sufficiently organized and aware of the need to protect cardholder data at all times. The cliché in PCI circles is ‘don’t take a checkbox approach to compliance’, and it is a cliché because it is true. Passing the audit is a tangible goal, but it should only be a milestone along the way to maturing internal processes and procedures so that you operate a secure environment every day of the year, not just drag your organization through an annual audit.

The QSA Moral Maze
However, for many, the QSA is hired to ‘make PCI go away’, and this can present a dilemma. QSAs are in business and need to compete for work like any other commercial venture. They are typically fiercely independent and take seriously their responsibility to provide expert guidance; however, they also have bills to pay.

Some get caught by the conflict of interest between advising on which measures to implement and offering to supply the goods required. This presents a difficult choice for the customer: go along with what the QSA says and buy whatever they sell you, or go elsewhere for any kit required and risk the valuable relationship needed to get through the audit. Whether it is new firewalls, scanning or pen testing services, or FIM and logging/SIEM products, too many Merchants have been left to make difficult decisions. The simple solution is to separate your QSA from supplying any other service or product for your PCI project, and to make sure this is clarified up front.

The second common conflict of interest is one that affects any kind of consultant. If you are being paid by the day for your services, would you want the engagement to be shorter or longer? If you had the opportunity to influence the duration of the engagement, would you fight for it to be ended sooner, or be happy to let it run longer?

Let’s not be too cynical over this: the majority of Merchants have paid widely differing amounts for their QSA services and have been delighted with the value for money received. But we have seen one recent engagement where the QSA asked for repeated network and system architecture re-designs and recommended that firewalls be replaced with more advanced versions offering better IPS capabilities. In both instances the QSA was giving accurate and proper advice; however, an unfortunate side-effect was that the Merchant delayed implementation of other PCI DSS requirements. The result in this case was that the QSA actually delayed security measures being put in place. In other words, the security expert’s advice was to prolong the organization’s weak security posture!

Conclusion
The QSA community is a rich source of security experience and expertise, and who better to help navigate an organization through a PCI program than those responsible for conducting the audit for compliance with the standard. However, best practice is to separate the QSA from any other aspect of the project. Secondly, self-educate and help yourself by becoming familiar with security best practices; it will save time and money if you can empower yourself instead of paying by the day to be taught the basics. Finally, don’t delay implementing security measures. You know your systems better than anyone else, so don’t pay to prolong your project! Seize responsibility for de-scoping your environment where possible, then apply basic best practices to the remaining systems in scope: harden, implement change controls, measure effectiveness using file integrity monitoring and retain audit trails of all system activity. It’s simpler than your QSA might lead you to believe.

Thursday, 12 September 2013

Cyber Threat Sharing Bill and Cyber Incident Response Scheme – Shouldn’t We Start with System Hardening and FIM?

Background: Defending the Nation’s Critical Infrastructure from Cyber Attacks

In the UK, HM Government’s ‘Cyber Incident Response Scheme’ is closely aligned in intent and purpose to the forthcoming US Cyber Threat Sharing Bill.

The driver for both mandates is that defending against truly targeted, stealthy cyber attacks (APTs, if you like) will require a much greater level of awareness and collaboration. This becomes a government issue when the nation’s critical infrastructure (defense, air traffic control, the health service, power and gas utilities and so on) is concerned. Stuxnet proved that cyber attacks against critical national infrastructure can succeed, and there isn’t a government anywhere in the world that doesn’t worry it could be next.

The issues are clear: a breach could happen despite best efforts to prevent it. In the event that a security breach is discovered, identifying the nature of the threat and then properly communicating it to others at risk is time-critical. The damage may already be done at one facility, but heading the attack off before it affects other locations becomes the new priority: the impact of the breach can be isolated if swift and effective mitigating actions are taken at other organizations subject to the same cyber threat.

As it stands, the US appears to be going further, legislating via the Cyber Threat Sharing Bill. The UK Government has created the Cyber Incident Response Scheme, but without its being a legislated, regulated requirement it may suffer from slow adoption. Why wouldn’t the UK do likewise if it is taking national cyber security just as seriously?

System Hardening and FIM

Prevention Better Than Cure?

One other observation, based on experience with other ‘top down’ mandated security standards such as the PCI DSS, is that there is a temptation for the authorities to prioritize specific security best practices over others. Being able to give an ‘If you only do one thing for security, it’s this…’ message gets things moving in the right direction; however, it can also lead to a false sense of security in the community, the belief that because the mandated steps have been taken, the organization is now ‘secure’.

In the case of the UK initiative, the collaboration with CREST is sound in that it provides a degree of ‘quality control’ over the resources recommended for use. However, the concern is that the emphasis of the CREST scheme may be biased too heavily towards penetration testing. While pen testing is a good, basic security practice, it is either too infrequent or too automated (and therefore breeds complacency). Better than doing nothing? Absolutely, but the program should not stop there.

A truly secure environment is one where all security best practices are understood, embedded within the organization and operated constantly. Likewise, vulnerability monitoring and system integrity checking should be a non-stop process, not a quarterly or ad hoc pen test. Real-time file integrity monitoring, continuously assessing devices for compliance with a hardened build standard and identifying all system changes, is the only way to truly guarantee security.

Monday, 5 November 2012

Server Hardening Checklist - Which Configuration Hardening Checklist Will Make My Server Most Secure?

Introduction
Any information security policy or standard will include a requirement to use a ‘hardened build standard’. The concept of hardening is straightforward enough, but knowing which source of information you should reference for a hardening checklist when there are so many published can be confusing.

Server Hardening Checklist Reference Sources
The most popular ‘brands’ in this area are the Center for Internet Security (CIS) hardening checklists (free for personal use), the National Checklist Program Repository provided by NIST (home of the National Vulnerability Database), and the SANS Institute Reading Room articles on hardening against the Top 20 Most Critical Vulnerabilities.

All of these groups offer configuration hardening checklists for most Windows operating systems, Linux variants (Debian, Ubuntu, CentOS, Red Hat Enterprise Linux aka RHEL, SUSE Linux), Unix variants (such as Solaris, AIX and HP-UX), and firewalls and network appliances (such as Cisco ASA, Checkpoint and Juniper).

These sources offer a convenient, one-stop shop for checklists but you may be better served by seeking out the manufacturer or community-specific checklists for your devices and Operating Systems. For example, Microsoft and Cisco offer very comprehensive hardening best-practice recommendations on their websites, and the various CentOS and Ubuntu communities have numerous secure configuration best practice tutorials across the internet.

So which checklist is best? Which configuration hardening benchmark is the most secure? If you consider that all benchmarks for, say, Windows 2008 R2 are seeking to eliminate the same vulnerabilities from the same operating system, you quickly realize that there is naturally a high degree of commonality between the various sources. In short, they are all saying the same thing, just in slightly different terms. What is important is that you assess the relevant risk levels for your systems against the compromises you can make in terms of reduced functionality in return for greater security.

Configuration Hardening and Vulnerability Management
It is important to distinguish between software-based vulnerabilities, which require patching for remediation, and configuration-based vulnerabilities, which can only ever be mitigated. Achieving a hardened, secure build standard is really what a hardening program is all about, as this provides a constant and fundamental level of security.

Configuration hardening presents a uniquely tough challenge because the level to which you can harden depends on your environment, applications and working practices. For example, removing web and ftp services from a host is a good, basic hardening practice. However, if the host needs to act as a web server, then this is not going to be a sensible hardening measure!

Similarly, if you need remote access to the host via the network, then you will need to open firewall ports and enable terminal server or ssh services on the host; otherwise these should always be removed or disabled to help secure the host.
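
As a concrete illustration, the following minimal Python sketch automates this kind of check on a Linux host: it lists listening TCP services using the standard 'ss' utility and flags anything not on an approved allowlist. The two allowed ports are illustrative assumptions only; substitute whatever your own hardened build standard actually permits.

    #!/usr/bin/env python3
    """Minimal sketch: flag listening TCP services that are not on an
    approved allowlist. Assumes a Linux host with the 'ss' utility."""

    import subprocess

    # Hypothetical allowlist for a host that only needs SSH and HTTPS;
    # adjust to match your own hardened build standard.
    ALLOWED_PORTS = {22, 443}

    def listening_ports():
        """Return the set of local TCP ports currently in LISTEN state."""
        out = subprocess.run(
            ["ss", "-tln"], capture_output=True, text=True, check=True
        ).stdout
        ports = set()
        for line in out.splitlines()[1:]:       # skip the header row
            fields = line.split()
            if len(fields) >= 4:
                # Local address looks like '0.0.0.0:21' or '[::]:80'
                ports.add(int(fields[3].rsplit(":", 1)[1]))
        return ports

    if __name__ == "__main__":
        for port in sorted(listening_ports() - ALLOWED_PORTS):
            print(f"WARNING: unexpected listening service on TCP port {port}")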

Conversely, patching is a much simpler discipline, with a general rule that the latest version is always the most secure (but test it first just to make sure it works!).

Configuration Hardening Procedures
In a similar way that patching should be done at least once a month, configuration hardening must also be practiced regularly – it is not a one-time exercise.

Unlike patching, where new vulnerabilities in software packages are routinely discovered and then fixed via the latest patches, new configuration-based vulnerabilities are discovered very seldom. For example, the CIS Server 2008 Benchmark has seen only three releases despite the operating system having been available for nearly five years: the initial benchmark, Version 1.0.0, was released in March 2010, updated to Version 1.1.0 in July the same year, and then to Version 1.2.0 in September 2011.
 
However, even though new configuration best practices are rarely introduced, it is vital that the hardened configuration of your Windows servers, Linux and Unix hosts and network devices is reviewed regularly, because changes could be made at any time that adversely affect the inherent security of the device.
When you consider that any checklist can typically comprise between 200 and 300 measures, verifying that all hardening measures are being consistently and continuously applied has to be an automated process.
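
As an indication of what such automation involves, here is a minimal Python sketch that audits a handful of common SSH hardening settings against expected values. The three settings shown are illustrative checklist items only, not a complete benchmark, and the file path assumes a standard Linux OpenSSH installation.

    #!/usr/bin/env python3
    """Minimal sketch: verify a few hardening settings in sshd_config.
    The expected values below are illustrative checklist items only."""

    # Typical hardening checklist items (illustrative, not exhaustive).
    EXPECTED = {
        "permitrootlogin": "no",
        "passwordauthentication": "no",
        "x11forwarding": "no",
    }

    def audit_sshd(path="/etc/ssh/sshd_config"):
        actual = {}
        with open(path) as cfg:
            for raw in cfg:
                line = raw.strip()
                if not line or line.startswith("#"):
                    continue                    # skip blanks and comments
                parts = line.split(None, 1)     # 'Keyword value' pairs
                if len(parts) == 2:
                    actual[parts[0].lower()] = parts[1].strip().lower()
        for key, expected in EXPECTED.items():
            found = actual.get(key, "<not set>")
            status = "PASS" if found == expected else "FAIL"
            print(f"{status}: {key} = {found} (expected {expected})")

    if __name__ == "__main__":
        audit_sshd()

Run regularly from a scheduler, a report like this turns a one-time hardening exercise into the continuous review described above.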

This can be provided by vulnerability scanning appliances such as Nessus or Qualys; however, these are limited in the range and depth of checks they can make unless they are given administrator or root access to the host under test. Of course, granting that access actually introduces additional security vulnerabilities: the host is now accessible via the network, and there is at least one more administrator or root account in circulation that could be abused.

Configuration Hardening – File Integrity Monitoring
The other limitation of scanning appliances is that they can only take a snapshot assessment of the device concerned. While this is a good way to check compliance of the device with a configuration hardening best practice checklist, there is no way to verify that the filesystem has not been compromised, for example, by a Trojan or other malware.

Summary
Continuous file integrity monitoring combined with continuous configuration hardening assessment is the only true solution for maintaining secure systems. While branded checklists such as the CIS Benchmarks are a great source of hardening best practices, they are not the only option available. In fact, manufacturer provided checklists are generally a more focused source of vulnerability mitigation practices. Remember that there may be a wide choice of checklists using different terms and language, but that ultimately there is only one way to harden any particular system. What is more important is that you apply the hardening measures appropriate for your environment, balancing risk reduction against operational and functional compromises.


Friday, 10 September 2010

Device Hardening, Vulnerability Scanning and Threat Mitigation for Compliance and Security

All security standards and corporate governance compliance policies, such as PCI DSS, GCSx CoCo, SOX (Sarbanes-Oxley), NERC CIP, HIPAA, HITECH, GLBA, ISO 27000 and FISMA, require devices such as PCs, Windows servers, Unix servers, and network devices such as firewalls, Intrusion Protection Systems (IPS) and routers to be secure so that they keep confidential data safe.

There are a number of buzzwords in use in this area: what are security vulnerabilities and device hardening? 'Hardening' a device requires known security 'vulnerabilities' to be eliminated or mitigated. A vulnerability is any weakness or flaw in the software design, implementation or administration of a system that provides a mechanism for a threat to exploit it. There are two main areas to address in order to eliminate security vulnerabilities: configuration settings, and software flaws in program and operating system files. Eliminating vulnerabilities requires either 'remediation', typically a software upgrade or patch for program or OS files, or 'mitigation', a configuration settings change. Hardening is required equally for servers, workstations and network devices such as firewalls, switches and routers.

How do I identify vulnerabilities? A vulnerability scan or external penetration test will report on all vulnerabilities applicable to your systems and applications. You can buy in 3rd party scanning and pen testing services; pen testing, by its very nature, is conducted externally via the public internet, as this is where any threat would be exploited from. Vulnerability scanning services need to be delivered in situ, on site. This can either be performed by a 3rd party consultant with scanning hardware, or you can purchase a 'black box' solution whereby a scanning appliance is permanently sited within your network and scans are provisioned remotely. Of course, the results of any scan are only accurate at the time of the scan, which is why solutions that continuously track configuration changes are the only real way to guarantee that the security of your IT estate is maintained.

What is the difference between 'remediation' and 'mitigation'? 'Remediation' of a vulnerability results in the flaw being removed or fixed permanently, so the term generally applies to any software update or patch. Patch management is increasingly automated by the operating system and product developer: as long as you implement patches when they are released, in-built vulnerabilities will be remediated. As an example, the recently reported Operation Aurora, classified as an Advanced Persistent Threat or APT, was successful in infiltrating Google and Adobe. A vulnerability within Internet Explorer was used to plant malware on targeted users' PCs that allowed access to sensitive data. The remediation for this vulnerability was to 'fix' Internet Explorer using patches released by Microsoft. Vulnerability 'mitigation' via configuration settings ensures vulnerabilities are disabled. Configuration-based vulnerabilities are no more or less potentially damaging than those needing to be remediated via a patch, although a securely configured device may well mitigate a program- or OS-based threat. The biggest issue with configuration-based vulnerabilities is that they can be re-introduced or enabled at any time: just a few clicks are needed to change most configuration settings.

How often are new vulnerabilities discovered? Unfortunately, all of the time! Worse still, often the only way the global community discovers a vulnerability is after a hacker has discovered and exploited it. It is only when the damage has been done and the hack traced back to its source that a preventative course of action, either a patch or a configuration change, can be formulated. There are various centralized repositories of threats and vulnerabilities on the web, such as the MITRE CCE lists, and many security product vendors compile live threat reports or 'storm center' websites.

So all I need to do is work through the checklist and then I am secure? In theory, yes, but there are literally hundreds of known vulnerabilities for each platform, and even in a small IT estate, verifying the hardened status of each and every device is an almost impossible task to conduct manually.

Even if you automate the vulnerability scanning task, using a scanning tool to identify how hardened your devices are before you start, you will still have work to do to mitigate and remediate vulnerabilities. And this is only the first step. Consider a typical configuration vulnerability: a Windows server should have the Guest account disabled. If you run a scan, identify where this vulnerability exists, and then take steps to mitigate it by disabling the Guest account, you will have hardened those devices. However, if another user with administrator privileges then accesses the same servers and re-enables the Guest account for any reason, you will be left exposed. Of course, you won't know that the server has been rendered vulnerable until you next run a scan, which may not be for another 3 or even 12 months. There is another factor that hasn't yet been covered, namely how to protect systems from an internal threat; more on this later.
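
By way of illustration, this minimal Python sketch re-checks that single measure on demand, so drift could be caught between scans rather than months later. It assumes a Windows host and parses the output of the built-in 'net user' command; it is a sketch of the principle, not a substitute for a full configuration tracking solution.

    #!/usr/bin/env python3
    """Minimal sketch: detect hardening drift on one measure - the
    Windows Guest account must stay disabled. Assumes a Windows host."""

    import subprocess

    def guest_account_active():
        """Parse 'net user guest' output for the 'Account active' field."""
        out = subprocess.run(
            ["net", "user", "guest"], capture_output=True, text=True, check=True
        ).stdout
        for line in out.splitlines():
            if line.lower().startswith("account active"):
                return line.split()[-1].lower() == "yes"
        raise RuntimeError("Could not determine Guest account state")

    if __name__ == "__main__":
        if guest_account_active():
            print("ALERT: Guest account has been re-enabled - hardening drift!")
        else:
            print("OK: Guest account remains disabled")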

So tight change management is essential for ensuring we remain compliant? Indeed. Section 6.4 of the PCI DSS describes the requirements for a formally managed change management process for this very reason. Any change to a server or network device may have an impact on the device's 'hardened' state, and it is therefore imperative that this is considered when making changes. If you are using a continuous configuration change tracking solution, you will have an audit trail available, giving you 'closed loop' change management: the detail of the approved change is documented, along with details of the exact changes that were actually implemented. Furthermore, the devices changed will be re-assessed for vulnerabilities and their compliant state confirmed automatically.

What about internal threats? Cybercrime is joining the organised crime league, which means this is not just about stopping malicious hackers proving their skills as a fun pastime! Firewalling, Intrusion Protection Systems, anti-virus software and fully implemented device hardening measures will still not stop, or even detect, a rogue employee who works as an 'inside man'. This kind of threat could result in malware being introduced to otherwise secure systems by an employee with administrator rights, or even backdoors being programmed into core business applications. Similarly, Advanced Persistent Threats (APTs) such as the publicized 'Aurora' hacks use social engineering to dupe employees into introducing 'zero-day' malware. 'Zero-day' threats exploit previously unknown vulnerabilities: a hacker discovers a new vulnerability and formulates an attack process to exploit it. The job then is to understand how the attack happened and, more importantly, how to remediate or mitigate future re-occurrences of the threat. By their very nature, anti-virus measures are often powerless against 'zero-day' threats. In fact, the only way to detect these types of threat is to use file integrity monitoring technology.

"All the firewalls, Intrusion Protection Systems, Anti-virus and Process Whitelisting technology in the world won't save you from a well-orchestrated internal hack where the perpetrator has admin rights to key servers or legitimate access to application code - file integrity monitoring used in conjunction with tight change control is the only way to properly govern sensitive payment card systems" - Phil Snell, CTO, NNT

See our other whitepaper, 'File-Integrity Monitoring - The Last Line of Defense of the PCI DSS', for more background to this area, but in brief: it is important to verify all adds, changes and deletions of files, as any change may be significant in compromising the security of a host. This can be achieved by monitoring for changes to any file attributes and to the size of the file.

However, since we are looking to prevent one of the most sophisticated types of hack, we need to introduce a completely reliable means of guaranteeing file integrity. This calls for each file to be 'DNA fingerprinted', typically by generating a hash using a secure hash algorithm. A secure hash algorithm, such as SHA1 or MD5, produces a unique hash value based on the contents of the file and ensures that even a single character changing in a file will be detected. This means that even if a program is modified to expose payment card details, and the file is then 'padded' to make it the same size as the original and with all other attributes edited to make the file look and feel the same, the modifications will still be exposed. This is why the PCI DSS makes file integrity monitoring a mandatory requirement and why it is increasingly considered as vital a component of system security as firewalling and anti-virus defences.
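
To illustrate the principle, here is a minimal Python sketch of hash-based file integrity checking: it baselines a SHA-1 'fingerprint' for every file under a directory, then re-scans and reports adds, changes and deletions. The monitored path is purely illustrative, and a real FIM product would store the baseline securely and monitor continuously rather than on demand.

    #!/usr/bin/env python3
    """Minimal sketch: hash-based file integrity checking. Builds a
    baseline of SHA-1 fingerprints, re-scans, and reports differences."""

    import hashlib
    import os

    def fingerprint(path):
        """Return the SHA-1 digest of a file's contents."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def snapshot(root):
        """Map every file under 'root' to its current fingerprint."""
        hashes = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                full = os.path.join(dirpath, name)
                hashes[full] = fingerprint(full)
        return hashes

    def compare(old, new):
        """Report every add, change and deletion between two snapshots."""
        for path in sorted(old.keys() | new.keys()):
            if path not in old:
                print(f"ADDED:    {path}")
            elif path not in new:
                print(f"DELETED:  {path}")
            elif old[path] != new[path]:
                print(f"MODIFIED: {path}")

    if __name__ == "__main__":
        baseline = snapshot("/usr/local/bin")   # illustrative directory
        # ... time passes; a later re-scan is compared to the baseline ...
        compare(baseline, snapshot("/usr/local/bin"))

Because the comparison is made on file contents rather than size or attributes, the 'padded' file described above would still be detected.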

Conclusion
Device hardening is an essential discipline for any organization serious about security. Furthermore, if your organization is subject to any corporate governance or formal security standard, such as PCI DSS, SOX, HIPAA, NERC CIP, ISO 27K or GCSx CoCo, then device hardening will be a mandatory requirement.
- All servers, workstations and network devices need to be hardened via a combination of configuration settings and software patch deployment
- Any change to a device may adversely affect its hardened state and render your organization exposed to security threats
- File integrity monitoring must also be employed to mitigate 'zero-day' threats and the threat from the 'inside man'
- Vulnerability checklists will change regularly as new threats are identified