Monday 24 December 2012

BREAKING NEWS - Really? - PCI Compliance is Mandatory

If you're thinking "That's hardly breaking news", I would tend to agree. However, it is still generating plenty of copy even though the PCI DSS was introduced seven long years ago. At the time it was 'mandatory' and 'urgent', but the problem now is that so many firms have avoided or delayed measures that overcoming the apathy often associated with PCI compliance is getting more difficult.

I read this last week on Bankinfosecurity.com

PCI SSC: Firms Must Perform Rigorous Risk Assessments

I couldn't agree more with one of the points made by Bob Russo, General Manager of the PCI Security Standards Council (PCI SSC). Mr. Russo is quoted as saying "The standard requires an annual risk assessment, because the DSS (data security standard) validation is only a snapshot of your compliance at a particular point in time. Therefore, it is possible that changes that have been made to a system since the previous evaluation could have undermined security protections or opened up new vulnerabilities."

In other words, real-time file integrity monitoring coupled with continuous server hardening checks is essential for PCI compliance - read more about both areas here.

And then two days later, I was sent a link to this article

Even the tiniest firms face fines for failing to protect credit card details

This is more interesting because the Daily Mail is about as mainstream as you can get in the UK - whatever you think about the newspaper's editorial leanings, this was published as contemporary, newsworthy copy for its readers. The angle is that small firms need to adhere to the PCI DSS requirements - again, not really news, as right from day one anyone handling cardholder data has been burdened with a duty of care over it. Most small firms either run transactions directly through their bank or via an online service like Worldpay, so their main concern for PCI compliance is to be aware of the risks and take care of the basics, such as:

1. Don't write down, or store in any other form, cardholder details. If you need to regularly re-use a customer's card details, you'll either need to ask for them again each time, or use your bank's 'vault' facilities (based on tokenized card data).

2. Check your PIN Entry Device regularly and don't let anyone tamper with it. Card skimming is still one of the biggest card theft opportunities - see this video for the basics. In the UK, Chip and PIN has significantly reduced the risk, but in the US and other parts of the world where card handling checks are limited to a superficial signature (one that is rarely even checked against the card), card skimming still pays dividends. And even if Track 1 data is stolen from a card in the UK, the card can still be cloned and used anywhere in the world where Chip and PIN is not enforced.

3. Make sure you are learning from the PCI DSS - work to use as many of the measures as you can. Even if you are using an online service to process a card payment transaction, the PC used to enter the details could be compromised by a key logger or other malware designed to steal data. Hardening your systems in line with best practice checklist guidance, firewalling, anti-virus, file integrity monitoring and logging will all help ensure your systems are secure and give you visibility of potential security threats before they can be used to steal card data.

If you follow these basic steps, you'll go a long way towards ensuring that your company doesn't end up as headline news in the next card data theft story.



Friday 21 December 2012

So did the OSSEC File Integrity Monitor detect the Java Remote Exploit? For this and much more... Security BSides Delaware


I am sure many who read this blog like to attend security conferences, and I am sure you are familiar with Black Hat, Defcon, and H.O.P.E. These are great conferences with a lot of high quality content, but they are also expensive and very crowded. Last month I had the opportunity to attend Security BSides in Delaware. It was my first BSides and will not be my last. It was held at Wilmington University on November 9th and 10th. The conference was a smaller one, but in my opinion that made it great. The best part: the entire conference was free. Free attendance, parking, breakfast, and lunch. The content was high quality and attracted a number of well-known speakers.

I was only able to attend on Saturday. My first talk of the day was “Social Engineering Basics and Beyond” given by Valerie Thomas @hacktress09. Valerie is a penetration tester: she audits companies' security policies and is paid to hack them. The focus of the talk was on what could be the weakest link in your organization - people. You can have the best firewalls, anti-virus, and advanced persistent threat detection, but all of that can be overcome by an unaware staff member or an inattentive help desk team member. Since everyone broadcasts their entire lives and routines on Twitter, Facebook, and 4Square, it is not hard to figure out who works for a company and who their co-workers are. Once you have that information, it is a quick hop to Google to figure out the organization's email format, username format and other key information. With that information in hand, the hacker makes a carefully crafted call to the helpdesk and requests a password reset, or gathers whatever other information they need to launch their attack. The bottom line: train your people, make sure they verify security information and know who they are on the phone with. The person on the other end of the phone may be trying to steal your information.

After a quick lunch I decided to visit the lock pick village. The challenges were to pick some simple locks as well as to learn how to impression a lock and cut a key. I have previous experience with lock picking, so picking was easy. As a side note, the Kwikset lock on your front door can be picked by an experienced picker in less than 2 minutes. The process of impressioning a key, however, is very difficult. After about 20 minutes I was able to impression and open a one-pin lock. Most locks have 5 pins, so you can see why it is so hard. The good part is that a lock impression can be done in stages, so if you have to abort your attempt you can always come back and finish later. Also, once you have the key you always have it and can get in and out quickly.

The afternoon was punctuated by shorter talks; I attended three others. The first was a talk given by a group of students regarding the CVE 2012-4681 Java Remote Exploit. The presentation was interesting in that the standard security most people would have on their machines was easily bypassed. The various freeware programs such as OSSEC also did not detect the exploit. It looks like the file integrity monitoring (FIM) portion of OSSEC wasn't used, but in this case it would have picked up the changes. They also caught a special privilege escalation on a user account in the system logs, which a properly configured log management tool would have alerted on and flagged for further investigation. The write up is available here: https://cyberoperations.wordpress.com/student-research/cve-2012-4681-by-o-oigiagbe-r-patterson/.
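To make that last point concrete, here is a minimal Python sketch of the kind of pattern matching a log management tool performs. It assumes the relevant logs have been exported to a plain text file, and the indicator strings are purely illustrative - a real tool ships with far richer correlation rules.

```python
import re

# Illustrative indicators only - not an exhaustive rule set.
SUSPICIOUS_PATTERNS = [
    re.compile(r"Special privileges assigned to new logon", re.I),          # Windows Event ID 4672
    re.compile(r"A member was added to a security-enabled .* group", re.I), # group membership change
    re.compile(r"account was enabled", re.I),                               # account state change
]

def scan_log(path):
    """Return the log lines that match any suspicious pattern."""
    hits = []
    with open(path, errors="replace") as log:
        for lineno, line in enumerate(log, 1):
            if any(p.search(line) for p in SUSPICIOUS_PATTERNS):
                hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    # 'security-log-export.txt' is a hypothetical exported log file.
    for lineno, entry in scan_log("security-log-export.txt"):
        print(f"ALERT line {lineno}: {entry}")
```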

The second talk I attended was on exploiting Android operating systems. In this case the attack victim would be a “rooted” Android phone on which ADB was left enabled (the default). The attacker could attach his phone or Nexus 7 tablet to the device and within a few minutes steal critical data from the victim's phone or tablet. Included in this critical data was the Google authentication token. The token, which can be pasted directly into a web browser, allows access to the victim's entire Google account, bypassing any Google-supplied security enhancements, including two-factor authentication. The speaker even gave everyone in the class a cable to perform the attack with. Bottom line: if you root your phone, turn ADB off!

The last talk I attended was on Pentoo http://www.pentoo.ch/, the Gentoo-based penetration testing live CD. It is an alternative to BackTrack. The developer of the tool was very passionate about it and presented several advantages. The first is a hardened kernel - he pointed out how laughably easy it is to hack BackTrack while it's running, a real problem at cons like Defcon. He also pointed out the advantage of having a good stable of WiFi drivers, as well as a built-in update system and the ability to save changes to a USB stick. I have not had an opportunity to test Pentoo myself, but I hope to over the holiday break and I will report back in another blog post.

Finally, after a long day at the con I stopped off at Capriotti's and picked up a Bobbie. Those from Delaware will know what I am talking about; for the rest of the world, think Thanksgiving on a sub roll.

Bart Lewis, NNT

Tuesday 11 December 2012

Server Hardening Policy - Examples and Tips

Introduction
Data Protection and Information Security best practice guidelines always place server hardening at the top of the list of measures that should be taken.
Every organization should have a hardened Windows build standard, a hardened Linux build standard, a hardened firewall standard and so on. However, determining an appropriate server hardening policy for your environment requires detailed research of hardening checklists, and then an understanding of how they should be applied to your operating systems and applications.

Server Hardening Policy background
Any server deployed in its default state will naturally be lacking in even basic security defenses. This leaves it vulnerable to compromise.
A standard framework for your server security policy should include:
  • Access Security (physical and logical)
  • Operating System Configuration
  • User Accounts and Passwords
  • Filesystem Permissions
  • Software and Applications image
  • Patching and Updates
  • Auditing and Change Control
The server hardening policy should be documented and reviewed regularly.

Access Security
  • Is the server held under lock and key? Is there a log of all access to the server (visitor book, card swipe/entry code records, and video surveillance)?
  • Is server access governed by firewall appliances and/or software?
  • Is network access disabled, or if required, restricted using device/address based access control lists? For example, are the hosts.allow and hosts.deny files configured in line with best practice guidelines? Are accounts provided on a strict ‘must have access’ basis? Are there logs kept of all access and all account privilege assignment?
Operating System Configuration
  • Is the OS service packed/patched to latest levels and is this reviewed at least once a month?
  • Are all services/daemons removed or disabled where not required? For example, obvious candidates like web, ftp and telnet services should be removed, and SSH used instead of Telnet. Similarly, remote desktop access should be removed if business operations will not be overly compromised. The best tip is to remove everything you know is not required, e.g. the Themes service, and then carefully experiment, one at a time, with other services you suspect are unnecessary but aren't sure about. Don't feel obliged to take this process too far, though - if disabling a service compromises server operation too much for you, then don't feel you need to do so. (A simple automated check along these lines is sketched just after this list.)
  • For Windows Servers, is the Security and Audit Policy configured in line with best practice guidelines?
  • Is there a documented Secure Server Build Standard?
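To illustrate the kind of automated check referred to above, here is a minimal Python sketch (assuming a systemd-based Linux host) that flags services from an illustrative 'should not be running' list - substitute whatever your own build standard prohibits.

```python
import subprocess

# Illustrative blocklist - replace with the services your own build standard prohibits.
PROHIBITED_UNITS = ["telnet.socket", "vsftpd.service", "httpd.service", "rsh.socket"]

def is_active(unit):
    """Ask systemd whether a unit is currently active."""
    result = subprocess.run(
        ["systemctl", "is-active", unit],
        capture_output=True, text=True,
    )
    return result.stdout.strip() == "active"

if __name__ == "__main__":
    for unit in PROHIBITED_UNITS:
        if is_active(unit):
            print(f"FAIL: {unit} is running and should be removed or disabled")
        else:
            print(f"PASS: {unit} is not active")
```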
Filesystem Permissions
  • For example, for Unix and Linux servers, are permissions on key security files such as /etc/passwd or /etc/shadow set in accordance with best practice checklist recommendations? (A minimal automated check is sketched just after this list.)
  • Is sudo being used, and are only members of the wheel group allowed to use it?
  • For Windows servers, are the key executables, DLLs and drivers protected in the System32 and SysWOW64 folders?
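As a rough illustration of the Unix/Linux item above, the following Python sketch compares the mode and ownership of a few key files against expected values. The expected modes shown are examples only - take the authoritative values from the hardening checklist you have adopted.

```python
import os
import stat

# Expected maximum modes are illustrative - your hardening checklist is the authority.
EXPECTED = {
    "/etc/passwd": 0o644,
    "/etc/shadow": 0o640,
    "/etc/group":  0o644,
}

def audit_permissions():
    for path, expected_mode in EXPECTED.items():
        st = os.stat(path)
        mode = stat.S_IMODE(st.st_mode)
        owner_ok = (st.st_uid == 0)             # key files should be owned by root
        mode_ok = (mode & ~expected_mode) == 0  # no permission bits beyond the checklist value
        status = "PASS" if (owner_ok and mode_ok) else "FAIL"
        print(f"{status}: {path} mode={oct(mode)} uid={st.st_uid}")

if __name__ == "__main__":
    audit_permissions()
```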
User Accounts and Passwords
  • Are default user accounts, such as the local Administrator account and the local Guest account, renamed and, in the case of the Guest account, disabled? While these accounts will be protected by a password, the security defenses in this area can be multiplied with a couple of simple steps: disable the Guest account, then rename both the Guest and Administrator accounts.
  • Is there a password policy set with ageing, complexity, length, retry, lockout and reuse settings in line with best practice guidelines?
  • Is there a regular review process for removing redundant or leavers’ accounts?
  • Is there an audit trail of all account creation, privilege or rights assignments and a process for approval?
Software and Applications image / Patching and Updates
  • Which packages and applications are defined within the Secure Build Standard? For example, anti-virus, data leakage protection, firewalling and file integrity monitoring?
  • Is there a process to check that the latest versions and patches have been tested and applied?
  • Are automated updates to packages disabled in favor of scheduled, planned updates deployed in conjunction with a Change Management process?
Auditing and Change Control
  • Are audit trails enabled for all access, use of privilege, configuration changes and object access, creation and deletion? Are audit trails securely backed up and retained for at least 12 months?
  • Is file integrity monitoring used to verify the secure build standard/hardened server policy?
  • Is there a Change Management process, including a change proposal (covering impact analysis and rollback provisions), change approval, QA Testing and Post Implementation Review?
Best Practice Checklist for Server Hardening Policy
In the previous section there were a number of references to hardening the server 'in line with best practice checklists', and there are a number of sources for this information. In fact, you may be reading articles like this in search of a straight answer to 'How do I harden my Windows or Linux server?' Unfortunately it isn't quite as simple as that, but it doesn't have to be over-complicated either.
Getting access to a hardening checklist or server hardening policy is easy enough. For example, the Center for Internet Security provides the CIS hardening checklists, Microsoft and Cisco produce their own checklists for Windows and for Cisco ASA firewalls and routers, and the National Vulnerability Database hosted by NIST provides checklists for a wide range of Linux, Unix, Windows and firewall devices. NIST also provides the National Checklist Program Repository, based around the SCAP and OVAL standards.
SCAP is an ambitious project designed not only to deliver standardized hardening checklists, but also to automate the testing and reporting for devices. As such it is still being developed and refined, but in the meantime commercial systems like Tripwire Enterprise and NNT Change Tracker provide automated means of auditing server hardening policy. The hardened server policy checklists can cover host operating systems such as CentOS, RedHat, Debian, Ubuntu, Solaris, AIX and, of course, Server 2003, Server 2008 and Windows 7/Windows 8.
However, any default checklist must be applied within the context of your server’s operation – what is its role? For example, if it is internet-facing then it will need to be substantially more hardened with respect to access control than if it is an internal database server behind a perimeter and internal firewall.

Server Hardening and File Integrity Monitoring
Once you have established your hardened server policy and have applied the various security best practice checklists to your hardened server build, you will need to audit all servers and devices within your estate for compliance with the build standard. This can be very time-consuming, so in order to automate the audit of a server for compliance with the security policy it is necessary to use a FIM or file integrity monitoring tool like Change Tracker Enterprise or Tripwire Enterprise. These tools can automatically audit even wide-scale server estates within a few minutes, providing a full report of both passes and failures against the policy. Tips for mitigating vulnerabilities are also provided, so the task can be greatly simplified and de-skilled.
Best of all, the hardened build standard for your server hardening policy can be monitored continuously. Any drift in configuration settings will be reported, enabling the system administrator to quickly mitigate the vulnerability again.

Summary
Prevention of security breaches is the best approach to data security. By locking out configuration vulnerabilities through hardening measures, servers can be rendered far more secure and resistant to attack.
Using file integrity monitoring not only provides an initial audit and compliance score for all servers against standardized hardening checklists, but also ensures that Windows, Linux, Ubuntu, Solaris and CentOS servers all remain securely configured at all times.

Monday 5 November 2012

Server Hardening Checklist - Which Configuration Hardening Checklist Will Make My Server Most Secure?

Introduction
Any information security policy or standard will include a requirement to use a ‘hardened build standard’. The concept of hardening is straightforward enough, but knowing which source of information you should reference for a hardening checklist when there are so many published can be confusing.

Server Hardening Checklist Reference Sources
The most popular 'brands' in this area are the Center for Internet Security or CIS hardening checklists (free for personal use), the National Checklist Program Repository provided by NIST (home of the National Vulnerability Database), and the SANS Institute Reading Room articles on hardening against the Top 20 Most Critical Vulnerabilities.

All of these groups offer Configuration Hardening Checklists for most Windows Operating Systems, Linux variants (Debian, Ubuntu, CentOS, RedHat Enterprise Linux aka RHEL, SUSE Linux), Unix variants (such as Solaris, AIX and HPUX), and firewalls and network appliances (such as Cisco ASA, Checkpoint and Juniper).

These sources offer a convenient, one-stop shop for checklists but you may be better served by seeking out the manufacturer or community-specific checklists for your devices and Operating Systems. For example, Microsoft and Cisco offer very comprehensive hardening best-practice recommendations on their websites, and the various CentOS and Ubuntu communities have numerous secure configuration best practice tutorials across the internet.

So which checklist is the best? Which configuration hardening benchmark is the most secure? If you consider that all benchmarks for, say, Windows 2008 R2 are seeking to eliminate the same vulnerabilities from the same operating system, then you quickly realize that there is naturally a high degree of commonality between the various sources. In short, they are all saying the same thing, just in slightly different terms. What is important is that you assess the relevant risk levels for your systems versus what compromises you can make in terms of reduced functionality in return for greater security.

Configuration Hardening and Vulnerability Management
It is important to distinguish between software-based vulnerabilities which require patching for remediation, and configuration based vulnerabilities which can only ever be mitigated. Achieving a hardened, secure build standard is really what a hardening program is all about as this provides a constant and fundamental level of security.

Configuration hardening presents a uniquely tough challenge because the level to which you can harden depends on your environment, applications and working practices. For example, removing web and ftp services from a host is a good, basic hardening practice. However, if the host needs to act as a web server, then this is not going to be a sensible hardening measure!

Similarly, if you need remote access to the host via the network then you will need to open firewall ports and enable terminal server or ssh services on the host, otherwise these should always be removed or disabled to help secure the host.

Conversely, patching is a much simpler discipline, with a general rule that the latest version is always the most secure (but test it first just to make sure it works!).

Configuration Hardening Procedures
In a similar way that patching should be done at least once a month, configuration hardening must also be practiced regularly – it is not a one-time exercise.

Unlike patching, where new vulnerabilities to software packages are routinely discovered then fixed via the latest patches, new configuration-based vulnerabilities are discovered very seldom. For example, the CIS Server 2008 Benchmark has only seen three releases despite the Operating System having been available for nearly 5 years now. The initial benchmark, Version 1.0.0, was released in March 2010 and updated to Version 1.1.0 in July the same year, with a further update to Version 1.2.0 in September 2011.
 
However, even though new configuration best practices are rarely introduced, it is vital that the hardened configuration of your Windows Servers, Linux and Unix hosts and network devices is reviewed regularly, because changes could be made at any time which may adversely affect the inherent security of the device.
When you consider that any checklist can typically comprise between 200 and 300 measures, verifying that all hardening measures are being consistently and continuously applied has to be an automated process.
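As a minimal sketch of what that automation might look like, the following Python fragment runs a couple of named checks and produces a simple compliance score. Real tools evaluate hundreds of measures per benchmark, so treat this purely as an illustration of the reporting pattern; the checks themselves assume a Linux host with systemd.

```python
import os
import stat
import subprocess

def shadow_perms_ok():
    """Check /etc/shadow is root-owned and not world-readable (illustrative threshold)."""
    st = os.stat("/etc/shadow")
    return st.st_uid == 0 and not (st.st_mode & stat.S_IROTH)

def telnet_not_active():
    """Check the telnet service is not running."""
    result = subprocess.run(["systemctl", "is-active", "telnet.socket"],
                            capture_output=True, text=True)
    return result.stdout.strip() != "active"

# Each entry pairs a checklist item description with its automated test.
CHECKS = [
    ("/etc/shadow permissions restricted", shadow_perms_ok),
    ("telnet service disabled", telnet_not_active),
]

if __name__ == "__main__":
    results = [(name, check()) for name, check in CHECKS]
    passed = sum(1 for _, ok in results if ok)
    for name, ok in results:
        print(f"{'PASS' if ok else 'FAIL'}: {name}")
    print(f"Compliance score: {passed}/{len(results)} ({100 * passed // len(results)}%)")
```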

This can be provided by vulnerability scanning appliances such as Nessus or Qualys; however, these are limited in the range and depth of checks they can make unless they are given administrator or root access to the host under test. Of course, doing so actually introduces additional security vulnerabilities, as the host must now be accessible via the network and there is at least one more administrator or root account in circulation which could be abused.

Configuration Hardening – File Integrity Monitoring
The other limitation of scanning appliances is that they can only take a snapshot assessment of the device concerned. While this is a good way to check compliance of the device with a configuration hardening best practice checklist, there is no way to verify that the filesystem has not been compromised, for example, by a Trojan or other malware.

Summary
Continuous file integrity monitoring combined with continuous configuration hardening assessment is the only true solution for maintaining secure systems. While branded checklists such as the CIS Benchmarks are a great source of hardening best practices, they are not the only option available. In fact, manufacturer provided checklists are generally a more focused source of vulnerability mitigation practices. Remember that there may be a wide choice of checklists using different terms and language, but that ultimately there is only one way to harden any particular system. What is more important is that you apply the hardening measures appropriate for your environment, balancing risk reduction against operational and functional compromises.


Wednesday 10 October 2012

File Integrity Monitoring and SIEM – Combat the Zero Day Threats and Modern Malware that Anti-Virus Systems miss

Introduction
It is well known that Anti-Virus technology is fallible and will continue to be so by design. The landscape (Threatscape?) is always changing and AV systems will typically update their malware signature repositories at least once per day in an attempt to keep up with the new threats that have been isolated since the previous update.

So how secure does your organization need to be? 80%? 90%? Because if you rely on traditional anti-virus defenses this is the best you can hope to achieve unless you implement additional defense layers such as FIM (file integrity monitoring) and SIEM (event log analysis).

Anti-Virus Technology – Complete With Malware Blind spots
Any Anti-Virus software has an inherent weakness in that it relies on a library of malware 'signatures' to identify the viruses, Trojans and worms it is seeking to remove.

This repository of malware signatures is regularly updated, sometimes several times a day depending on the developer of the software being used. The problem is that the AV developer usually needs to have direct experience of any new strains of malware in order to counteract them. The concept of a 'zero day' threat is one that uses a new variant of malware yet to be identified by the AV system.

By definition, AV systems are blind to ‘zero day’ threats, even to the point whereby new versions of an existing malware strain may be able to evade detection. Modern malware often incorporates the means to mutate, allowing it to change its makeup every time it is propagated and so improve its effectiveness at evading the AV system.

Similarly, other automated security technologies that aim to block or remove malware, such as the sandbox or quarantine approach, all suffer from the same blind spots. If the malware is new – a zero day threat – then by definition there is no signature, because it has not been identified before. The unfortunate reality is that the unseen cyber-enemy also knows that new is best if they want their malware to evade detection. This is evident from the fact that in excess of 10 million new malware samples will be identified in any 6 month period.

In other words, most organizations typically have very effective defenses against known enemies – any malware that has been previously identified will be stopped dead in its tracks by the IPS, anti-virus system, or any other web/mail filtering with sandbox technology. However, it is also true that the majority of these same organizations have little or no protection against the zero day threat.

File Integrity Monitoring – The 2nd Line Anti-Virus Defense System for When Your Anti-Virus System Fails
File Integrity Monitoring serves to record any changes to the file system i.e. core operating system files or program components. In this way, any malware entering your key server platforms will be detected, no matter how subtle or stealthy the attack.

In addition, FIM technology will also ensure other vulnerabilities are screened out of your systems by verifying that best practices for securely configuring your Operating Systems have been applied.

For example, configuration settings such as user accounts, password policy, running services and processes, installed software, and management and monitoring functions are all potential vectors for security breaches. In the Windows environment, the Windows Local Security Policy has been gradually extended over time to place greater restrictions on numerous functions that have been exploited in the past, but this in itself is a highly complex area to configure correctly. Maintaining systems in this securely configured state is then impossible without automated file integrity monitoring technology.

Likewise SIEM or Security Information and Event Management systems are designed to gather and analyze all system audit trails/event logs and correlate these with other security information to present a true picture of whether anything unusual and potentially security threatening is happening.

It is telling that widely adopted and practiced security standards such as the PCI DSS place these elements at their core as a means of maintaining system security and verifying that key processes like Change Management are being observed.

Summary
Anti-virus technology is an essential and highly valuable line of defense for any organization. However, it is vital that the limitations and therefore vulnerabilities of this technology are understood and additional layers of security implemented to compensate. File Integrity Monitoring and Event Log Analysis are the ideal counterparts to an Anti-Virus system in order to provide complete security against the modern malware threat.

Friday 7 September 2012

File Integrity Monitoring - FIM Agent Versus Agentless FIM

Introduction
The incessant escalation, both in malware sophistication and proliferation, means that fundamental file integrity monitoring is essential to maintain malware-free systems. Signature-based anti-virus technologies are too fallible and too easily circumvented by zero-day malware or by selectively created and targeted advanced persistent threat (APT) viruses, worms or Trojans.

Any good security policy will recommend the use of regular file integrity checks on system and configuration files. Best practice-based security standards such as the PCI DSS (Requirement 11.5), NERC CIP (System Security R15-R19), Department of Defense Information Assurance (IA) Implementation (DODI 8500.2), Sarbanes-Oxley (Section 404) and FISMA - the Federal Information Security Management Act (NIST SP800-53 Rev4) - specifically mandate the need to perform regular checks for any unauthorized modification of critical system files, configuration files or content files, and to configure the software to perform critical file comparisons at least weekly.

However, file integrity monitoring needs to be deployed with a little advance planning and an understanding of how the file systems of your servers behave on a routine basis, in order to determine what unusual - and therefore potentially threatening - events look like.

The next question is then whether an Agentless or Agent-based approach is best for your environment. This article looks at the pros and cons of both options.

Agentless FIM for Windows and Linux/Unix Servers
Starting with the most obvious advantage, the first clear benefit of an agentless approach to file integrity monitoring is that it doesn't need any agent software to be deployed on the monitored host. This means that an agentless FIM solution like Tripwire or nCircle will always be the quickest option to deploy and to get results from. Not only that, but there is no agent software to update or to potentially interfere with server operation.

The typical agentless file integrity monitoring solution for Windows and Linux/Unix will utilize a scripted, command-line interaction with the host to interrogate the salient files. At the simplest end of the scale, Linux files can be baselined using a cat command, with a comparison made against subsequent samples to detect any changes. Alternatively, if a vulnerability audit is being performed in order to harden the server configuration, then a series of grep commands used with regex expressions will more precisely identify missing or incorrect configuration settings. Similarly, a Windows server can be interrogated using command-line programs; for example, the net.exe program can be used to expose the user accounts on a system, or even to assess the state or other attributes associated with a user account if piped to a find command, e.g. net.exe users guest |find.exe /i "Account active" will return an "Account active Yes" or "Account active No" result and establish whether the Guest account is enabled, a classic vulnerability for any Windows server.
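As a rough sketch of the grep-style approach described above, the following Python fragment applies a couple of regex checks to an SSH daemon configuration file. The expected settings are examples only, and an agentless tool would run this kind of interrogation remotely (over SSH or WinRM) rather than locally.

```python
import re

# Expected settings are illustrative - take the real values from your hardening checklist.
EXPECTED_SETTINGS = {
    "PermitRootLogin disabled": re.compile(r"^\s*PermitRootLogin\s+no\b", re.M),
    "SSH protocol 2 only":      re.compile(r"^\s*Protocol\s+2\b", re.M),
}

def audit_sshd_config(path="/etc/ssh/sshd_config"):
    with open(path) as f:
        config = f.read()
    for name, pattern in EXPECTED_SETTINGS.items():
        status = "PASS" if pattern.search(config) else "FAIL"
        print(f"{status}: {name}")

if __name__ == "__main__":
    audit_sshd_config()
```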

Agent-Based File Integrity Monitoring
The key advantage of an agent for FIM is that it can monitor file changes in real time. Because the agent is installed on the monitored host, OS activity can be monitored, and any file activity can be observed and changes recorded. Clearly, any agentless approach will need to operate on a scheduled poll basis, and inevitably there is a trade-off between polling frequently enough to catch changes as they happen and limiting the increased load on the host and network caused by the monitoring. In practice, polling is typically run once per day on most FIM solutions, for example Tripwire, which means that you risk being anything up to 24 hours late in identifying potential security incidents.
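To show what real-time, agent-style detection looks like in practice, here is a minimal Python sketch using the third-party watchdog package (an assumption on my part - it is not part of the standard library and simply stands in for the kernel file-change notifications a commercial agent would hook into).

```python
import time

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

WATCHED_PATH = "/etc"  # illustrative - a real agent watches system binaries and config locations too

class ChangeLogger(FileSystemEventHandler):
    """Report file creations, modifications, moves and deletions as they happen."""
    def on_any_event(self, event):
        if not event.is_directory:
            print(f"CHANGE DETECTED: {event.event_type} {event.src_path}")

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(ChangeLogger(), WATCHED_PATH, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```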

The second major advantage of an agent-based file integrity monitoring solution is that the host does not need to be 'opened up' to allow monitoring. Critical system and configuration files will always be protected by the host filesystem security - for example, the Windows System32 folder is always an 'Administrator access only' folder. In order to monitor the files in this location, any external scripted interaction will need to be provided with Admin rights over the host, which immediately means that the host needs to be made accessible via the network and an additional user or service account needs to be provisioned with Admin privilege, potentially introducing a new security weakness to the system. By contrast, an agent operates within the confines of the host, just pushing out file integrity changes as they are detected.

Finally, an agent offers a distinct advantage over the agentless approach in that it can send a 'changes only' update across the network, and even then only when there is a change to report. The agentless solution will need to run through its complete checklist of queries in order to make any assessment of whether changes have occurred, and even elaborate WMI or PowerShell scripts still require considerable resource usage on the host and the network when dragging results back.

Summary
Nobody likes installing and maintaining agents on their servers and, if this can be avoided, this is an attractive option to take.

Wednesday 15 August 2012

File Integrity Monitoring and SIEM - Why Layered Security Is Essential to Combat the APT

Every time the headlines are full of the latest cyber crime or malware scare story, such as the Flame virus, the need to review the security standards employed by your organization takes on a new level of urgency.

The 2012 APT (Advanced Persistent Threat)
The Advanced Persistent Threat differs from a regular hack or Trojan attack in that it is, as the name suggests, advanced in technology and technique, and persistent, in that it is typically a sustained theft of data over many months.

So far the APT has largely been viewed as government-sponsored cyber-espionage, given the resources needed to orchestrate such an attack - the recent Flame malware, for example, appears to have been a US or Israeli backed espionage initiative against Iran. However, the leading edge of technology always becomes the norm a year later, so expect to see APT attacks reach the mainstream: competitor-backed industrial espionage, and 'hacktivist' groups like Lulzsec and Anonymous adopting similar approaches.

The common vector for these attacks is a targeted spear phishing infiltration of the organization. Using Facebook, LinkedIn or other social media makes identification of targets much easier today, and also makes it easier to work out what kind of phishing 'bait' is going to be most effective in duping the target into providing the all-important welcoming click on the tasty links or downloads offered.

Phishing is already a well-established tool for organized crime gangs, who will utilize these same profiled spear phishing techniques to steal data. As an interesting aside regarding organized crime's usage of 'cybermuscle', it is reported that prices for botnets are plummeting at the moment due to an oversupply of available robot networks. If you want to coerce an organization with the threat of disabling its web presence, arm yourself with a global botnet and point it at their site - DDoS attacks are easier than ever to orchestrate.

Something Must Be Done...

To be clear on what we are saying here: it isn't that AV or firewalls are no use, far from it. But the APT style of threat will evade both by design, and this is the first fact to acknowledge - like a recovering alcoholic, the first step is to admit you have a problem!

By definition, this kind of attack is the most dangerous, because any attack that is smart enough to skip past standard defense measures is definitely going to be one that is backed by a serious intent to damage your organization (note: don't think that APT technology is therefore only an issue for blue chip organizations - that may once have been the case, but now that the concepts and architecture of the APT are in the mainstream, the wider hacker and hacktivist communities will already have engineered their own interpretations of the APT).
So the second fact to take on board is that there is an 'art' to delivering effective security, and that requires a continuous effort to follow process and to cross-check that security measures are working effectively.
The good news is that it is possible to automate the cross-checks and vigilance we have identified a need for; in fact, there are already two key technologies designed to detect abnormal occurrences within systems and to verify that security best practices are being operated.

FIM and SIEM - Security Measures Underwritten
File Integrity Monitoring, or FIM, serves to record any changes to the file system, i.e. core operating system files or program components, and to the system's configuration settings, i.e. user accounts, password policy, services, installed software, management and monitoring functions, registry keys and registry values, running processes, and security policy settings for audit policy, user rights assignment and security options. FIM is designed both to verify that a device remains hardened and free of vulnerabilities at all times, and that the filesystem remains free of any malware.

Therefore, even if some form of APT malware manages to infiltrate a critical server, well-implemented FIM will detect the file system changes before any rootkit protective measures employed by the malware can kick in.

Likewise SIEM, or Security Information and Event Management, systems are designed to gather and analyze all system audit trails/event logs and correlate these with other security information to present a true picture of whether anything unusual and potentially security threatening is happening.
It is telling that widely adopted and practiced security standards such as the PCI DSS place these elements at their core as a means of maintaining system security and verifying that key processes like Change Management are being observed.

At the core of any comprehensive security standard is the concept of layered security - firewalling, IPS, AV, patching, hardening, DLP, tokenization, secure application development and data encryption, all governed by documented change control procedures and underpinned by audit trail analysis and file integrity monitoring. Even then with standards like the PCI DSS there is a mandated requirement for Pen Testing and Vulnerability Scanning as further checks and balances that security is being maintained.

Summary
In summary, your security policy should be built around the philosophy that technology helps secure your organization's data, but that nothing can be taken for granted. Only by practicing continuous surveillance of system activity can you truly maintain data security - very much the essence of the Art of Layered Security.

Tuesday 26 June 2012

Tokenization, the PCI DSS and the Number One Threat to Your Organization's Data

I was recently sent a whitepaper by a colleague of mine which covered the subject of tokenization. It took a belligerent tone regarding the PCI DSS and the PCI Security Standards Council's views on tokenization, which is understandable in context - the vendors involved with the whitepaper are fighting their corner and believe passionately that tokenization is a great solution to the problem of how best to protect cardholder data.

To summarize the message of the whitepaper, the authors were attacking the PCI Security Standards Council because the Council's 'Information Supplement covering PCI DSS Tokenization Guidelines' document was specifically positioned as 'for guidance only' and explicitly stated that it did not 'replace or supersede requirements in the PCI DSS'.

The whitepaper also quoted a PCI Security Standards Council Press Release on the subject of Tokenization where Bob Russo, the General Manager of the PCI SSC had stated that tokenization should be implemented as an additional PCI DSS ‘layer’. The tokenization whitepaper took issue with this, the argument being that tokenization should be sanctioned as an alternative to encryption rather than yet another layer of protection that a Merchant could optionally implement.

The reality is that Bob Russo runs the PCI Security Standards Council, and it is they who define the PCI DSS, not any vendor of specific security point-products. Also, where I would say the whitepaper is completely wrong is where it argues 'it's not about layering', because the PCI DSS - and best practice in security in general - is absolutely all about layering!

The reason why the PCI DSS is often seen as overly prescriptive and overbearing in its demands for so much security process is that card data theft still happens on a daily basis. What's more pertinent is that whilst card data theft can be the result of clever hackers, polymorphous malware, cross-site scripting or even card skimming using fake PEDs, none of these is the biggest risk.

The number one Card data theft threat remains consistent - complacency about security.

In other words, corners are being cut in security - a lack of vigilance and, more often than not, silly, basic mistakes being made in security procedures.

So what is the solution? Tokenization won't help if it gets switched off, if it conflicts with a Windows patch, if it gets targeted by malware, or if it is simply bypassed by a card skimming Trojan - and it won't protect against a malicious or unintentional internal breach. Tokenization also won't help protect cardholder data if the card swipe or PED (PIN Entry Device in Europe) gets hacked, or if a card number gets written down or recorded at a call centre.

In summary - tokenization is undeniably a good security measure for protecting cardholder data, but it doesn't remove the need to implement all PCI DSS measures. There has never been, and there still is, NO SILVER BULLET when it comes to security.

In fact, the only sensible solution to card data theft is layered security, operated with stringent checks and balances at all times. What PCI Merchants need now, and will continue to need in the future, is quality, proven PCI solutions from a specialist with a long track record in practicing the Art of Layered Security - combining multiple security disciplines to protect against external and internal threats, bringing together good change management, file integrity monitoring and SIEM, for example, to provide the vigilance that is essential for tight data protection security.

What do you think? I'd love to hear your thoughts on the subject.

Thursday 15 March 2012

File Integrity Monitoring And The Art of Layered Security

There is an art and a skill to building an effective security framework, which requires a process, a methodology and a set of tools that are right for your environment. The 'art' of good security and compliance requires an integrated and layered approach that can continuously monitor and evaluate all IT system activity in real time to identify potential risks and threats from both internal and external sources.
The process, methodology and tools come together within this layered approach to provide the security needed to effectively and efficiently protect the environment and ensure a secure and compliant state. One of the best known examples of a formal security standard which utilizes a layered security approach is the PCI DSS. PCI compliance requires adoption of all proven best practice measures for data security in order to protect cardholder data.

What is the Art of Layered Security?
The technology should be 'layered' to maximize security - including Perimeter Security, Firewall, Intrusion Detection, Penetration & Vulnerability Testing, Anti-Virus, Patch Management, Device Hardening, Change & Configuration Management, File Integrity Monitoring, and Security Information and Event Log Management.

The project should be delivered in a phased approach - understand the scope and environment, groups and types, priorities and locations, to build up a picture of what 'good' looks like for the environment. Track all aspects of change and movement within this scope and understand how these relate to the change management process. Start small and grow - don't bite off more than you can chew.
Utilize an integrated ecosystem of tools - events and changes happen all the time. Ensure the systems have the intelligence to understand the consequence of these events and what impact they may have had, whether the change was planned or unplanned, and how it has affected the compliant state.

File Integrity Monitoring vs. Anti Virus
File integrity monitoring works on a 'black and white' change comparison for a file system: FIM detects any change to configuration settings or system files. As a result, FIM is a technology prone to false alarms, but it is utterly comprehensive in detecting threats.

For each file, a complete inventory of file attributes must be collected, including a Secure Hash value. This way, even if a Trojan is introduced to the file system, this can be detected.
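As a minimal sketch of that idea, the following Python fragment builds exactly such an inventory - file mode, ownership, size, modification time and a SHA-256 hash per file - for an illustrative directory. A real FIM product stores these records for the whole estate and compares them against the approved baseline on every check.

```python
import hashlib
import os
import stat

def file_record(path):
    """Collect an inventory of attributes for one file, including a secure hash."""
    st = os.stat(path)
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            sha256.update(chunk)
    return {
        "path": path,
        "mode": oct(stat.S_IMODE(st.st_mode)),
        "uid": st.st_uid,
        "gid": st.st_gid,
        "size": st.st_size,
        "mtime": st.st_mtime,
        "sha256": sha256.hexdigest(),
    }

def baseline(directory="/usr/bin"):  # illustrative directory to baseline
    records = {}
    for root, _, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            try:
                records[path] = file_record(path)
            except OSError:
                pass  # skip unreadable or vanished files
    return records

if __name__ == "__main__":
    for record in list(baseline().values())[:5]:  # print a small sample of the inventory
        print(record)
```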

Anti-Virus technology works by comparing new files to a database of known malware 'signatures' and is therefore less prone to false alarms. However, by definition, AV can only detect known, previously identified malware and as a consequence is 'blind' to both 'zero day' threats and 'inside man' threats. Similarly, the Advanced Persistent Threat, or APT, favored for both government-backed espionage and highly orchestrated intellectual property theft initiatives, will always use targeted malware vectors, deployed sparingly to avoid detection for prolonged periods of time. In this way, anti-virus is also an ineffective defense against the APT.

The Art of Layered Security determines that both technologies should be used together to provide the best possible protection against malware. Each technology has advantages and disadvantages when compared to the other, but the conclusion is not that one is better than the other, but that both technologies need to be used together to provide maximum security for data.

The State of the Art in File Integrity Monitoring
The state of the art in FIM for system files now delivers real-time file change detection for Windows and for Linux or Unix. In order to detect potentially significant changes to system files and protect systems from malware, it is essential not simply to run a comparison of the file system once per day, as has traditionally been the approach, but to provide an alert within seconds of a significant file change occurring.
The best file integrity monitoring technology will also now identify who made the change, detailing the account name and process used to make it - crucial for forensically investigating security breaches. It is good to know that a potential breach has occurred, but even better if you can establish who made the change and how.

Thursday 23 February 2012

PCI DSS, File Integrity Monitoring and Logging - Why Not Just Ignore It Like Everyone Else Does?

The Safety Belt Paradox
The Payment Card Industry Data Security Standard (PCI-DSS) has now been around for over 6 years, but every day we speak to organizations that have yet to implement any PCI measures. So what's the real deal with PCI compliance and why should any company spend money on it while others are avoiding it?
Often the pushback is from Board Level, asking for clear-cut justification for PCI investment. Other times it comes from within the IT Department, seeking to avoid the disruption PCI measures will incur.

Regardless of where resistance comes from, the consensus is that adopting the standard is a sensible thing to do from a security perspective. But like so many things in life, the common-sense view is outweighed by the perceived pain of achieving it - this thinking is often referred to as 'The Safety Belt Paradox', more of which later.

This is coupled with the anecdotal feedback that, whilst the Acquiring Banks (payment card transaction processors) promote the need for PCI measures, they seldom have the focus and continual drive to monitor the status of compliance, making it all too easy for Merchants (anyone taking card payments) to carry on just as they are.

Prioritizing PCI Measures
With 12 headline Requirements covering 230 sub-requirements and around 650 detail points, encompassing technology, procedure and process, there is no denying that the PCI-DSS is complex and likely to cause disruption. But the benefits ultimately outweigh the pitfalls, particularly when there are shortcuts to compliance which follow the 'How do you eat a whale?' philosophy (one piece at a time, in case you were wondering).

This 'prioritized approach', advocated by the PCI Security Standards Council, focuses attention on the most important, 'biggest bang for buck' measures first, with the others broken into five levels of priority.
We would also always advise that, in order to control costs and minimize disruption, you understand the context and impact of each aspect to see which other Requirements can be taken care of by implementing the same measure - for instance, file integrity monitoring is specifically mentioned in Requirement 11.5, but actually applies to numerous other Requirements throughout the standard. For example, the Device Hardening measures specified in Requirement 2 all come back to file integrity monitoring, because configuration files and settings need to be assessed for compliance with best practices, and once a device has been hardened, it is vital that monitoring is in place to ensure there is no 'drift' away from the secure configuration policy adopted.
Similarly, log management and the need to securely back up event logs from all in-scope devices may only be detailed in Requirement 10; however, using event log data to track where changes have been made to devices and user accounts is a great way of auditing the effectiveness of your change management processes. Tracking user activity via syslog and event log data is generally seen as a means of providing the forensic audit trail for analysis after a breach has occurred, but used correctly, it can also act as a great deterrent to would-be inside man hackers if they know they are being watched.

As evidence of the value of this approach, implementing firewall and anti-virus measures properly, with checks and balances provided via automated event log processing and file integrity monitoring, gets you around 30-35% compliant before you do anything else.

The Future of PCI-DSS
The PCI Security Standards Council insists that PCI is more about security than compliance. And it really does work - implemented correctly, the PCI-DSS will keep cardholder data protected under any circumstances.

In the future, neglecting PCI compliance measures could mean you are gambling with even higher stakes. With PCI being such a comprehensive framework, big thinkers are arguing that PCI compliance should be leveraged to provide security for ALL company information and to protect against the mainstream issue of identity theft. Losing cardholder data is one thing, but risking your customers' personal information is potentially far more damaging, and your customers won't thank you if you have been irresponsible.
This is certainly the case in Europe where, at the recent PCI Security Standards Council meeting in London, the UK Government's Information Commissioner's Office recommended that organizations should look to implement PCI for general data protection. This is echoed across Europe, where ISO 27001 is taken much more seriously, especially in Germany, where the snappily entitled 'Bundesdatenschutzgesetz' (or BDSG - Federal Data Protection Act) has real teeth.

If a German organization loses the Personal Information of its customers then it is required by law to 'confess' by placing at least two, full-page advertisements in the National press informing the public of the potential Identity Theft they have been exposed to. Even if you don't believe in the power of advertising, you wouldn't want to test what this kind of publicity does for your brand and your sales.

The closest parallel in the US is the Nevada 'Security of Personal Information' law - Nevada Senate Bill 227 specifically states a requirement to comply with the PCI DSS - or how about Washington House Bill 1149 (effective Jul 01, 2010), which "recognizes that data breaches of credit and debit card information contribute to identity theft and fraud and can be costly to consumers".

Which brings us back to the 'Safety Belt Paradox'. 50 years ago, the State of Wisconsin introduced legislation requiring seat belts to be fitted to cars. But very few people used them, because they were uncomfortable and slowed you down when starting a journey, even though most would admit they were a good idea.

So it was only in 1984, when the first US state (New York) made the wearing of a seatbelt compulsory, that the real benefits were realized. Only then did common sense become standard practice. Maybe personal information protection needs the same treatment?