Wednesday 7 December 2011

PCI Compliance In 10 Minutes A Day - Using File Integrity and Log File Monitoring Effectively

PCI Compliance Is Hard for Everyone!
In some respects, it can be argued that the less IT 'stuff' an organization has, the fewer resources are needed to run it all. However, with PCI compliance there are still always 12 Requirements and around 650 detail points in the PCI DSS to cover, regardless of whether you are a trillion-dollar multinational or a local theatre company.

The principles of good security remain the same at both ends of the scale - you can only identify security threats if you know what business-as-usual, regular running looks like.

Establishing this baseline understanding will take time - 8 to 24 weeks in fact, because you are going to need a sufficiently wide perspective of what 'regular' looks like - and so we strongly advocate a baby-steps approach to PCI for all organizations, but especially those with smaller IT teams.

There is a strong argument that doing the basics well first, then expanding the scope of security measures, is much more likely to succeed and be effective than trying to do everything at once and in a hurry. Even if this means PCI compliance takes months to implement, it is a better strategy than deploying an unsupportably broad range of measures. Better to work at a pace you can cope with than to go too fast and end up in overload.

Here is the five-step program we recommend, although it actually has merit for any size of organization.

PCI Compliance in 10 Minutes per Day
1. Classify your 'in scope of PCI' estate
You first need to understand where cardholder data resides. When we talk about cardholder data 'residing', this is deliberately different from the more usual term of cardholder data 'storage'. Card data passing through a PC, even if it is encrypted and immediately transferred elsewhere for processing or storage, has still 'resided' on that PC. You also need to include devices that share the same network as the devices holding card data.
Now classify your device groups. Take the example of Center Theatre Group: they have six core servers that process bookings, around 25 PCs used for Box Office functions, and around 125 other PCs used for admin and general business tasks.
So we would define 'PCI Server', 'Box Office PC' and 'General PC' classes. Firewall devices are also a key class, but other network devices can be grouped together and left to a later phase. Remember - this isn't cutting corners and sweeping dirt under the carpet, but a pragmatic approach to doing the most important basics well first, or in other words, taking the long view on PCI Compliance.

2. Make a Big Assumption
We now apply an assumption to these Device Groups - that is, that devices within each class are so similar in terms of their make-up and behavior, that monitoring one or two sample devices from any class will provide an accurate representation of all other devices in the same class.
We all know what can happen when you assume anything, but this assumption is a good one. This is all about taking baby steps to compliance, and since we have declared up front a strategy that is practical for our organization and available resources, it works well.
The idea is that we get a good idea of what normal operation looks like, but in a controlled and manageable manner. We won't get flooded with file integrity changes or overwhelmed with event log data, but we will see a representative range of behavior patterns to understand what we are going to be dealing with.
Given the device groups outlined, I would target one or two servers - say a web server and a general application server - one or two Box Office PCs and one or two general PCs.

3. Watch...
You'll begin to see file changes and events being generated by your monitored devices, and about ten minutes later you'll be wondering what they all are. Some are self-explanatory, others less so.
Sooner or later, the imperative of tight Change Control becomes apparent.
If changes are being made at random, how can you begin to associate change alerts from your FIM system with intended 'good' changes and consequently, to detect genuinely unexpected changes which could be malicious?
Much easier if you can know in advance when changes are likely to happen - say, schedule the third Thursday in any month for patching. If you then see changes detected on a Monday these are exceptional by default. OK, there will always be a need for emergency fixes and changes but getting in control of the notification and documentation of Changes really starts to make sense when you begin to get serious about security.
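By way of illustration, the 'third Thursday' rule above can be sketched in a few lines of Python. This is a minimal sketch assuming a single monthly patch day; the dates are hypothetical:

```python
import calendar
from datetime import date

def third_thursday(year, month):
    """Date of the third Thursday - our hypothetical monthly patch day."""
    thursdays = [d for d in calendar.Calendar().itermonthdates(year, month)
                 if d.weekday() == calendar.THURSDAY and d.month == month]
    return thursdays[2]

def is_planned(change_date):
    """A detected change is 'planned' only if it lands on the patch day."""
    return change_date == third_thursday(change_date.year, change_date.month)

print(is_planned(date(2011, 12, 15)))  # the third Thursday -> True
print(is_planned(date(2011, 12, 12)))  # a Monday -> False
```

A real implementation would also allow for documented emergency changes, but the principle is the same: anything outside the window is exceptional by default.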
Similarly from a log analysis standpoint - once you begin capturing logs in line with PCI DSS Requirement 10 you quickly see a load of activity that you never knew was happening before. Is it normal, should you be worried by events that don't immediately make sense? There is no alternative but to get intimate with your logs and begin understanding what regular activity looks like - otherwise you will never be able to detect the irregular and potentially harmful.
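A first-pass baseline can be as simple as counting event types over the learning period. This Python sketch uses a few invented syslog-style lines purely for illustration:

```python
from collections import Counter

# A few syslog-style lines standing in for the learning period's capture
log_lines = [
    "Dec 07 02:00:01 srv01 CRON[112]: session opened for user backup",
    "Dec 07 02:05:13 srv01 sshd[214]: Accepted publickey for admin",
    "Dec 08 02:00:01 srv01 CRON[113]: session opened for user backup",
]

def event_key(line):
    """Reduce a raw log line to a coarse event type: host/program."""
    parts = line.split()
    host, program = parts[3], parts[4].split("[")[0]
    return f"{host}/{program}"

# The baseline is simply how often each event type normally occurs
baseline = Counter(event_key(line) for line in log_lines)
print(baseline)  # Counter({'srv01/CRON': 2, 'srv01/sshd': 1})
```

Real log analysis systems use far richer parsing, of course, but even a crude count like this starts to make 'regular activity' visible.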

4. ...and learn
You'll now have a manageable volume of file integrity alerts and event log messages to help you improve your internal processes, mainly with respect to change management, and to 'tune in' your log analysis ruleset so that it has the intelligence to process events automatically and only alert you to the unexpected, for example, either a known set of events but with an unusual frequency, or previously unseen events.
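As a sketch of that 'tuned-in' ruleset, the following Python fragment flags only previously unseen event types, or known events arriving at an unusual frequency. The baseline figures and tolerance are hypothetical:

```python
# Expected daily counts per event type, learned during the watch phase
# (figures are hypothetical)
baseline = {"srv01/CRON": 48, "srv01/sshd": 10}

def triage(todays_counts, baseline, tolerance=3.0):
    """Return only the events worth a human's attention."""
    alerts = []
    for event, count in todays_counts.items():
        expected = baseline.get(event)
        if expected is None:
            alerts.append((event, count, "previously unseen"))
        elif count > expected * tolerance:
            alerts.append((event, count, f"unusual frequency (expected ~{expected})"))
    return alerts

todays = {"srv01/CRON": 47, "srv01/sshd": 200, "srv01/vsftpd": 5}
for alert in triage(todays, baseline):
    print(alert)
```

Here the 200 sshd events and the never-before-seen vsftpd activity surface for review, while the routine CRON activity is processed silently.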
Summary reports collating file changes on a per-server basis are useful. This is the time to hold your nerve and see this learning phase through to a conclusion where you and your monitoring systems are in control - you see what you expect to see on a daily basis, and you get changes when they are planned to happen.

5. Implement
Now you are in control of what 'regular operation' looks like, you can begin expanding the scope of your File Integrity and Logging measures to cover all devices. Logically, although there will be a much higher volume of events being gathered from systems, these will be within the bounds of 'known, expected' events. Similarly, now that your Change Management processes have been matured, file integrity changes and other configuration changes will only be detected during scheduled, planned maintenance periods. Ideally your FIM system will be integrated with your Change Management process so that events can be categorized as Planned Changes and reconciled with RFC (Request for Change) details.
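A minimal sketch of that FIM-to-Change-Management reconciliation might look like this in Python. The RFC records and timestamps are invented for illustration:

```python
from datetime import datetime

# Approved RFCs with their agreed change windows (hypothetical records)
rfcs = [
    {"id": "RFC-1042",
     "start": datetime(2011, 12, 15, 22, 0),
     "end": datetime(2011, 12, 15, 23, 59)},
]

def reconcile(change_time, rfcs):
    """Match a detected file change to an approved RFC window, if any."""
    for rfc in rfcs:
        if rfc["start"] <= change_time <= rfc["end"]:
            return rfc["id"]   # a planned change - categorize and file it
    return None                # unplanned - investigate immediately

print(reconcile(datetime(2011, 12, 15, 22, 30), rfcs))  # RFC-1042
print(reconcile(datetime(2011, 12, 12, 3, 15), rfcs))   # None
```

Anything that reconciles to an RFC can be reported as a Planned Change; anything that returns no match is, by definition, worth investigating.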

Sunday 2 October 2011

The Ever-changing DLL Hunt – Why Do 'lsprst7.dll' And 'sysprs7.dll' Continually Change?

File Integrity Monitoring - What really happens on your server when you're not looking?

Once our customers start using file integrity monitoring technology as part of a PCI Compliance or other security governance initiative, there is often a realization of ‘What the eye doesn't see, the heart doesn't grieve over’.

For instance, who knew there were that many file changes associated with a Windows update?

We have recently dealt with an interesting project for a Passenger Ferry Operator. After we had been running Change Tracker file integrity monitoring for a few days, they noticed repeated, frequent but irregular changes being reported to a couple of DLL files - 'lsprst7.dll' and 'sysprs7.dll' - with two associated files, 'lsprst7.tgz' and 'sysprs7.tgz'. These reside within the Windows\System32 and/or SysWOW64 folders.

Our customer contact did some research via Google but, despite finding other records of searches for the identity of these files and the reason for the frequent changes (with the trail leading to an Adobe forum thread), no explanation could be found.

A process-of-elimination exercise to identify the role of the files was suggested – delete the files and see which application breaks, or progressively remove programs from the server and see which one takes the DLLs in question with it.

It is counterintuitive for DLL files to change and you would be rightly suspicious if you saw this happening on a server. Concerns over mutating malware and polymorphic viruses began to circle.

What's the Solution?

In this instance, thankfully, there is a perfectly logical explanation. The files are License Server components for SafeNet ‘Solve’ software (Solve is supplied by The Logic Group, and it provides cardholder data encryption for the EPoS software used by this customer). The DLLs are persistence files, used to help detect ‘time tampering’, and they change every time the software is accessed and a license check is run.

There are other examples of license key files which regularly change that we are familiar with and although it is initially surprising and of concern to see system files changing, it is ultimately a positive thing.

How can you detect genuinely exceptional file changes if you don’t fully understand how your applications and servers behave under regular operating conditions? Only by employing forensic-level file integrity monitoring and analyzing the results can you begin to get intimate with what ‘good’ looks like, and in turn, what irregular – and potentially damaging - behavior looks like.
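At its core, file integrity monitoring is hashing: compute a cryptographic digest of each monitored file and alert when it changes. The Python sketch below shows the idea, treating the license persistence files above as 'known changers' whose churn is expected - a deliberate simplification of what a full FIM product does:

```python
import hashlib

def file_hash(path):
    """SHA-256 digest of a file's contents - the core of any FIM check."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Files known to change legitimately under normal operation
KNOWN_CHANGERS = {"lsprst7.dll", "sysprs7.dll", "lsprst7.tgz", "sysprs7.tgz"}

def should_alert(filename, old_hash, new_hash):
    """Alert on any change, except expected churn from known changers."""
    if old_hash == new_hash:
        return False                       # no change at all
    return filename not in KNOWN_CHANGERS  # changed, and not expected to
```

The point is not the code but the discipline: only once you know which changes are legitimate can exclusions like this be applied safely rather than as a way of hiding from the data.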

Want to analyze your server system file behaviors or implement PCI file integrity monitoring technology? Request a free trial or demonstration here

Wednesday 7 September 2011

PCI Compliance Server Hardening doesn’t have to be Hard

Harden Server Configuration to remove Vulnerabilities

"PCI DSS Version 2.0 Requirement 2: Do not use vendor-supplied defaults for system passwords and other security parameters"

From the moment a server is powered up it becomes vulnerable to attack. Assuming that leaving your key application servers turned off is not an option, it will be necessary to implement the security measures advocated by the PCI DSS.

PCI Requirement 2 calls for configuration hardening of servers, EPoS PCs and network devices. The headlines of the requirement call for removal of default usernames and passwords, and a need to stop any unnecessary services. However, beyond these initial measures there are a vast number of additional configuration changes recommended by 'best practice' authorities (such as the SANS Institute, the Center for Internet Security (CIS) and NIST), all of which help to mitigate security threats. If you haven't already adopted a hardened configuration standard then any of these organizations can assist, although a good configuration auditing and change tracking system will typically come pre-packaged with a server hardening checklist you can adopt. This type of system will automate not just the initial hardening assessment but will repeat it on a continuous, automatic basis, so you can be alerted when any configuration drift occurs.
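Conceptually, continuous configuration assessment boils down to comparing the live configuration against the hardened baseline and reporting any drift. A toy Python sketch, where the setting names are illustrative rather than actual benchmark identifiers:

```python
# A hardened baseline derived from a checklist such as the CIS benchmarks
# (setting names are illustrative, not the exact benchmark identifiers)
baseline = {
    "MinimumPasswordLength": 12,
    "AccountLockoutThreshold": 5,
    "TelnetServiceEnabled": False,
}

def drift(current, baseline):
    """Report every setting that no longer matches the hardened baseline."""
    return {key: (baseline[key], current.get(key))
            for key in baseline if current.get(key) != baseline[key]}

current = {"MinimumPasswordLength": 12,
           "AccountLockoutThreshold": 0,    # lockout has been disabled
           "TelnetServiceEnabled": True}    # telnet has crept back in
print(drift(current, baseline))
# {'AccountLockoutThreshold': (5, 0), 'TelnetServiceEnabled': (False, True)}
```

A commercial tool wraps this comparison in scheduling, reporting and alerting, but the underlying check is exactly this: expected value versus observed value, setting by setting.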

As with most elements of the PCI DSS Requirements, there are a number of checks and balances to provide evidence that adequate hardening measures have been applied. In common with the overall ethos of the PCI DSS, there is always a high degree of overlap to guarantee comprehensive coverage.

Similarly, event log management and file integrity monitoring measures will serve to provide additional checks to verify security measures have not been changed or compromised at all times.
Active Testing of PCI DSS Security Measures - Pen Testing and Vulnerability Scanning
PCI Requirement 11 covers Penetration Testing and Vulnerability Scanning - we'll discuss these in turn.

Pen Testing / Penetration Testing
Any internet facing devices are exposed to somewhere in excess of 2 billion potential hackers (source: ITU website - 'Key Facts') and while firewalls and intrusion detection technologies help to allow good users in and keep bad traffic out, the fact remains that an 'open' website is always going to be vulnerable to attack. Penetration testing takes the form of an active assessment of whether the internet facing devices and servers can be compromised. Typically a 'blended' approach is used combining automated, scripted scans and tests for common hacking vectors with manually orchestrated hacking techniques.

Vulnerability Scanning / ASV Scans
Whereas a Pen Test actively attempts to compromise externally accessible devices, Requirement 11 also mandates regular vulnerability scans. External scans must be performed by an ASV (a PCI Security Standards Council term for an organization or individual validated as an Approved Scanning Vendor), while internal network-connected devices are assessed using internal vulnerability scans.

This is typically a more intensive assessment of devices than a Pen Test, covering operating system and application patching levels. Again, the vulnerability scan can either be fully automated using an on-site appliance or take a blended, part-automated, part-manually orchestrated approach.

PCI DSS Hardening Methodologies - How do I harden my Server?
For Windows servers, numerous security features and best practice measures are implemented via the server's Local Security Policy. Group Policy can be used as a convenient way to update the Security Policy of multiple devices in bulk, but of course one common way to enhance security is to isolate servers from a domain to allow precise 'hand-picked' access permissions, in which case the Local Security Policy will need to be configured directly.

In addition, unnecessary services should be disabled, built-in accounts should be renamed and passwords changed from defaults, drive and folder permissions should be restricted - the list of actual and potential vulnerabilities is extensive and always growing.
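As an illustration, on Windows the Local Security Policy can be exported with `secedit /export /cfg secpol.inf` and audited offline. The Python sketch below checks a small extract of such an export against a few hardening rules; the thresholds are example values, not an official benchmark:

```python
import configparser

# A small extract of a Local Security Policy export
# (produced on Windows with: secedit /export /cfg secpol.inf)
export = """
[System Access]
MinimumPasswordLength = 8
PasswordComplexity = 0
NewAdministratorName = "Administrator"
"""

cfg = configparser.ConfigParser()
cfg.read_string(export)
access = cfg["System Access"]

findings = []
if int(access["MinimumPasswordLength"]) < 12:   # threshold is an example value
    findings.append("password length below hardened minimum")
if access["PasswordComplexity"] == "0":
    findings.append("password complexity disabled")
if access["NewAdministratorName"].strip('"') == "Administrator":
    findings.append("built-in Administrator account not renamed")

print(findings)
```

Each finding maps back to a familiar hardening measure: minimum password length, password complexity, and renaming the built-in Administrator account.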

It should be mentioned that there is a whole other area of vulnerability management relating to patches and application updates. Whilst these carry the potential to be just as harmful as configuration-based vulnerabilities, patch and application-based vulnerabilities are inherently easier to manage, given that Windows Update and major applications all have automated, self-updating capabilities. Furthermore, operating system and application-based vulnerabilities can be remediated - that is to say, eliminated permanently - whereas configuration-based vulnerabilities can only ever be mitigated: they can be re-introduced at any time just as easily as they were removed in the first place.

Summary
In conclusion, this is yet another area of the PCI DSS that is ideally handled using intelligent, automated technology. The best products available combine comprehensive 'best practice' security policy and hardening checklists with continuous vulnerability assessments. Any configuration drift is identified immediately and alerted, while summary reports can be produced to give an 'at a glance' reassurance that nothing has changed.
The active role of a continuous configuration change tracking technology can also be used as a vantage point from which to implement file integrity monitoring, verifying that system and application files do not change and that malware cannot be introduced onto the server without detection. Likewise, SIM, SIEM (Security Information and Event Management) or plain old Event Log Management technology provides a full audit trail of security events for in-scope devices.
The good news is you don't need to turn off your servers just to keep them secure.

Tuesday 6 September 2011

How Logging And File Integrity Monitoring Technologies Can Augment Process and Procedure

Documentation Of PCI Compliance Processes? No Thanks!
Small Company PCI Compliance
For many Merchants subject to the PCI DSS, September is always a significant deadline for proving that compliance with the security measures of the PCI DSS has been met.

Unless you are a Tier 1 merchant (transacting in excess of 6 million card sales each year) being audited by a QSA (Qualified Security Assessor) accredited by the PCI Security Standards Council, you will be using the Self-Assessment route. SAQ D is the most commonly used Self-Assessment Questionnaire for medium to large-scale merchants.

Regardless of which type of Merchant your organization is classified as, the issues are firstly to put measures in place to meet compliance with the requirements, (so either install some security technology, e.g. a file integrity monitor, or define and document security procedures), and secondly, to prove that the measures are effective.

For smaller merchants, processes are typically not documented because there has previously been no need to do so. It stands to reason that for a small-scale IT department, processes are commensurately simple to explain and operate, and as such won't have needed to be documented. This being the case, however, it could also be argued that documenting those processes, and proving that they work, is also very simple.
For instance, the change management process may be as simple as 'if any of us need to make a change, we discuss it or just send an email to the others for their information, then enter details onto a shared spreadsheet document'.

Clearly there is ample potential for human error in a process like this and for an 'inside man' hack to be perpetrated, even if the risk is low and the subsequent identification of the perpetrator straightforward.
So in this case, documenting the process is easy, but proving that it is infallible is another matter. There are too many scenarios where the process can fail, principally due to human error, and this also makes it inadequate as a means of ensuring changes cannot be made without detection. This is why many small companies lose sleep over PCI Compliance, worrying how far measures need to be taken and just how much security is enough.

Process Checks and Balances - Automated
PCI DSS Requirement 10 mandates the logging of all significant security events from the PCI estate, while PCI DSS Requirement 11.5 mandates the use of File Integrity Monitoring technology. For many organizations taking a 'checkbox' approach to PCI Compliance, the implementation of both technologies is seen as just another hassle to get through for the sake of the PCI DSS.

However, take a step back and look at the PCI DSS as a whole. The emphasis is on good security measures with sound best practices. In other words, for each dimension of security advocated by the PCI DSS there is a need to document and test related processes.

It therefore becomes clear that logging and FIM are not just overlay technologies to plug gaps left by the firewalling, hardening and antivirus measures, but integral means of verifying that your net security stance is effective.

Any file change or configuration change reported should be investigated and verified then acknowledged as an approved change. The process is automated, but simple and robust.
Similarly, a new account or privilege being assigned will be reported via your log management system, prompting an investigation and ultimately a record of the acknowledgment.

As such, implementation of event log management and file integrity checker technologies can actually provide the processes needed for PCI DSS compliance. You could have a whole shelf full of change management processes and procedures, or alternatively, simply refer to your log management and File Integrity Monitoring reporting system.
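As a simple illustration of that account-monitoring check, the sketch below scans collected events for the Windows Security log IDs associated with account creation and group membership changes (4720 and 4732 are the standard Windows identifiers; the collected events themselves are invented):

```python
# Windows Security log event IDs that should always trigger a review
ACCOUNT_EVENTS = {"4720": "user account created",
                  "4732": "member added to security-enabled local group"}

# Events gathered by the log management system (invented for illustration)
collected = [
    {"event_id": "4624", "host": "boxoffice-02"},   # a routine logon
    {"event_id": "4720", "host": "pci-srv-01"},     # a brand new account
]

alerts = [f"INVESTIGATE {e['host']}: {ACCOUNT_EVENTS[e['event_id']]}"
          for e in collected if e["event_id"] in ACCOUNT_EVENTS]
for alert in alerts:
    print(alert)   # prints: INVESTIGATE pci-srv-01: user account created
```

Routine logons pass straight through, while the new account is queued for a human to investigate and acknowledge - the automated check-and-balance described above.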

If you want to short-cut boring documentation of processes for PCI compliance then talk to us about how we can help - support@nntws.com


Monday 18 July 2011

The PCI DSS - Want Some More Advice?

Where to start with PCI Compliance? The PCI DSS is well thought out, utterly comprehensive but man - it's big!
The PCI DSS is also not at all easy to understand, and even less easy to apply to your personal situation. The headlines are as follows:

  • 12 Requirements
  • but 230 sub-requirements
  • and some estimates of 650 detail points

The PCI DSS in 2011 still remains an ongoing challenge for the overwhelming majority of PCI Merchants. The following is based on the feedback we have had from working with a number of casino resorts, theme parks, ferry services and call centers over the past few months and the statistics make interesting reading for any other PCI Merchant wanting advice about PCI compliance.

Typically, one in every two Tier 2 and Tier 3 Merchants admits they do not understand the requirements of the PCI DSS. If you are still working on implementing compliance measures identified in pre-audit surveys, are not compliant and doing nothing about it, or are leaving everything to the last minute, don't be too hard on yourself - nine out of ten Merchants are at the same stage.
In fact, it is fine to have a phased, prioritized approach, and the PCI Security Standards Council fully recommends this strategy, mindful that Rome wasn't built in a day.

Prioritizing PCI Compliance Measures
With so much ground to cover, prioritizing measures is a must, and indeed the recently released 'Prioritized Approach for PCI DSS Version 2.0' from the PCI Security Standards Council website is an essential document for anyone working out where to start.

Although the PCI DSS is sectioned loosely around twelve headline Requirements - covering technologies (firewalling, anti-virus, logging and audit trails, file integrity monitoring, device hardening and card data encryption) as well as procedures and processes (physical security, education of staff, development and testing procedures, change management) - you soon realize that there are threads that run horizontally through all requirements.

In this respect there is potentially a good argument for the creation of other versions of the PCI DSS oriented around procedural dimensions, such as password policies for all disciplines and devices, or change management for all disciplines and devices, and so on. Whilst the Prioritized Approach gives a good framework for planning and measuring progress, it is strongly advised that you also look up at every step and see which other requirements can be taken care of by the same measure being implemented.

For instance, file integrity monitoring is only specifically mentioned in Requirement 11.5; however, good FIM software solutions will also underpin Requirements 1 through 8, 10 and 12.
The general advice is that, even though it is very daunting, if you can get 'intimate' with the PCI DSS, both in spirit and in detail, then, as with everything else in life, the better informed you are, the more in control you will be, and the less money and sweat will be wasted.

If you consider Requirement 1 of the PCI DSS, this is oriented around the need for a firewall and a fundamentally secure network design. However, you quickly end up with a secondary list of questions and queries. Do we need a diagramming tool? Do we need to automate the monitoring of firewall rule changes? (Incidentally, this is a task easily done using a good file integrity monitoring product.) What is our Change Management Process? Is it documented?
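Monitoring firewall rule changes can be as simple as diffing successive exports of the rule base. A Python sketch using only the standard library, with hypothetical rules:

```python
import difflib

# Successive exports of the firewall rule base (rules are hypothetical)
old_rules = ["permit tcp any host 10.0.0.5 eq 443",
             "deny ip any any"]
new_rules = ["permit tcp any host 10.0.0.5 eq 443",
             "permit tcp any host 10.0.0.9 eq 23",   # a telnet rule appeared
             "deny ip any any"]

diff = list(difflib.unified_diff(old_rules, new_rules,
                                 fromfile="rules.prev", tofile="rules.now",
                                 lineterm=""))
for line in diff:
    print(line)   # the '+permit ... eq 23' line is the one to investigate
```

Whether the new telnet rule is a sanctioned change or not is exactly the question your Change Management process should be able to answer.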

Summary
The PCI DSS may well challenge your pre-conceptions about what an Information Security Policy comprises - but there is plenty of help to draw upon.
In summary:

  • Use vendor offers - a free trial of event log server software will allow you to see first-hand how much noise you are likely to be dealing with in your estate, and how straightforward or otherwise an implementation might be, before you spend any money

  • Use the PCI Security Standards Council website - tools like the Prioritized Approach spreadsheet will help break down the full PCI DSS into a more manageable series of steps and priorities

  • Look for quick wins and the best 'bang for buck' measures - implementing File Integrity Monitoring software for PCI compliance can take a big bite out of the overall requirements and may be one of the simpler and more affordable steps you take

Friday 15 July 2011

The PCI DSS - Need A Little More Advice?

How much does PCI Compliance cost?
The first question that any organization will ask about PCI compliance is 'What does it cost?' (The second question typically being 'What happens if we don't get ourselves compliant?' but we can come back to this question later).

The issue of cost is a good question to ask up front but as you may have already discovered, one that is very difficult to get a straight (and reliable!) answer to.
In fact, an article appeared recently in Secure Computing Magazine based on some research a vendor and an independent research organization had carried out. The premise of the article was that non-compliance was typically £4M more expensive than compliance, the average cost of achieving compliance being £2M whilst the average cost of non-compliance was £6M.

You could suggest that, for product vendors within the marketplace, this is great news and that most will have a vested interest in making things seem more complicated and consequently more expensive than they are. Then there is also the issue of the need to use a Qualified Security Assessor or QSA. A QSA is trained and accredited by the PCI Security Standards Council, so their knowledge is excellent but it comes at a price.

Conversely, there is plenty of free advice available from the PCI Security Standards Council website (and from vendors too), so you can get yourself educated and in control of your organization's PCI compliance program before engaging the services of a QSA.

What is the cost of non-compliance with the PCI DSS?
Of course, there is another dimension to the question 'How much does PCI Compliance cost?' You could instead ask 'What happens if we don't get PCI Compliant?'

One approach is to assess how much your brand and reputation are worth. If your business hits the headlines for the wrong reasons due to a breach - and it will be the mainstream press now, not just the IT or retail industry press - then customers will think twice before they hand over payment card details to you.
Therefore it isn't just the fines, the cost and hassle of a forensic investigation of your security measures, or even the risk of increased transaction fees and more demanding audit pressure. A growing number of US states are bringing in legislation - in Nevada, for example, the SB 227 Amendment specifically requires compliance with the PCI DSS. Similarly in the UK, the Information Commissioner's Office will fine any organization found to be in breach of the UK Data Protection Act, which compels organizations to protect customers' personal information.

The bottom line is that if your organization loses customer personal information, this is going to result in exactly the wrong kind of publicity. A customer can easily cancel a credit card and get a new number, but if you lose their address and date of birth, those are impossible to reset - and they will not thank you for it!

What are the benefits of PCI compliance?
Where is the upside? A PCI log management solution will not only provide an advance-warning security system but one that can also alert you to impending hardware problems. How much is it worth to know in advance that you need to replace that till hard drive before it actually fails on the Saturday before Christmas?

The PCI DSS also provides a well-thought-out and comprehensive off-the-shelf security policy, with a ready-made, mature industry and knowledge base to draw upon, and it can double up to govern personal information too. Other industries are trying to adopt ISO27K, but this simply doesn't have the pedigree or maturity of the PCI DSS.

Eduardo Perez is now Chairman of the PCI Security Standards Council. Perez was featured in Secure Computing Magazine making it clear he wanted to dispel the 'wait and see' mindset of many merchants: despite what you will continue to read, there are simply no magic or even silver bullets for the PCI DSS. The message was clear - forget about 'buying' an off-the-shelf solution to the PCI DSS.

Merchants are advised that they will need to work at achieving PCI compliance, and as much as you can automate some aspects and buy products for other requirements, such as Event Log Management and File Integrity Monitoring, you will always be compelled to adopt all dimensions of best practice in security management. This means removing any complacency about being compliant and not cutting corners - the PCI DSS should be a pervasive factor across all functions and departments of any organization handling payment cardholder data.

Expect tokenization and p2p encryption to be embraced by the PCI security council but don't expect any relaxing of other measures - they want more layers of protection, with more double-checks, safety nets and good old fashioned common sense. For instance, there will always be a need for file integrity monitoring software to ensure encryption applications have not been compromised, coupled with log management software to track any access or changes to systems.
Some advice from our customers, QSA colleagues and us

  • Don't let vendors and suppliers or even your QSA tell you what you should do and buy - get educated. There is lots of free advice around, not least from the PCI Security Council themselves.

  • Don't assume you need to spend sacks of money on products and replace everything you have. Re-organize your network to reduce scope; recycle - use your older firewall to partition your network and shrink the card data environment; use your existing processes and procedures, but formalize and document them; reduce your use of card data where possible; and reduce the number of people with access to it

  • Look for quick wins - contemporary log management and intelligent audit trail systems can be implemented quickly, and even file integrity monitoring, always seen in the past as expensive and complex, is now affordable and automated

  • Make your own decisions about the risks and the potential for theft, then confirm them with your QSA - don't ask for guidance unless absolutely necessary

Wednesday 6 July 2011

The PCI DSS - Want Some Advice?

If you are a Payment Card Merchant looking for advice on getting PCI compliant then you are in good company. The following is based on information which a number of retailers and associated payment card service providers have been telling us over the past few months with respect to the PCI DSS.

Whilst we find there is strong understanding within Tier 1 merchants (those processing over 6 million transactions per year), these organizations, in common with smaller merchants, are keen to hold off on major spending. The likely cost of a PCI DSS initiative is covered in a subsequent article.

There is some good common sense in taking a 'wait and see' strategy. The future of the PCI DSS may well see some changes introduced, but this is actually not a good reason to delay implementation of a serious security strategy now. The big talking points of the moment include Tokenization and End to End Encryption (aka Point to Point Encryption) and both will have a role to play in the future, but right now there are plenty of good PCI DSS measures that should be implemented.

Furthermore, the entire premise of the PCI DSS is that a wide and diverse range of security measures are required, employing a combination of technological defenses and sound procedural practice.
For instance, Event Log management and File Integrity Monitoring are both essential requirements of the PCI DSS and can often be implemented quickly and for minimal expense while at the same time taking care of around 30% of PCI DSS requirements. You can calculate your own PCI compliance score by using the PCI Security Council's Prioritized Approach Tool spreadsheet, available to download free from the PCI Security Council website.

The PCI Security Standards Council website provides a wealth of information for understanding and navigating the PCI DSS. User forums such as the LinkedIn PCI DSS Compliance Specialist and vendor blogs and websites are also good sources of free information. Typical estimates suggest as many as 35% of retail, hospitality and entertainment organizations still do not understand compliance requirements.

However, understanding how other organizations have dealt with the challenges you are facing is the best way to approach PCI Compliance with a clear vision of where you are likely to end up in terms of investment and procedural development. There are cautionary tales in the marketplace to heed, such as the Tier 1 Retailer that jumped in feet-first with a logging solution, only to find it needed a team of eight additional personnel to run and manage the system. This says more about the need to implement PCI Compliance measures carefully, with your eyes open, than about the real demands of a good PCI event log management system - but it illustrates how easy it is to get this wrong if you do not get good advice before you begin spending money.
Nearly all vendors will provide a free trial of any PCI compliance software solution, and where your PCI DSS program requires you to make investments and changes to in-house procedures, you would do well to use these trials to see the big picture for day-to-day operation before committing.

Implementation of a PCI log server needn't take very long and the overall process of implementing a syslog server trial will show you what you need to log and how much work will be needed.
For instance, Windows Servers will need some form of Windows syslog agent to be installed so that events can be forwarded from the Windows Server to the central PCI log server for backup. However, you will also need to change audit settings in either the Group Policy or Local Security Policy, and review the Windows event log settings, so that logons, privilege usage, policy changes, object access, and object creation and changes are all being audited and backed up in accordance with the PCI DSS.

You'll then need to implement logging for your Unix and Linux hosts, AS/400 and mainframe, together with configuring syslog logging for firewalls, switches and routers.

The whole process need not take more than a few hours, but as well as showing you how much work is likely to be required to get your estate PCI compliant, you will begin to appreciate the PCI DSS philosophy: not just access controls preventing access to card holder data, but active monitoring of changes, coupled with a full, forensic-detail audit trail.

The PCI DSS is well thought out, utterly comprehensive but man - it’s big!

Where do you start with PCI Compliance?

It is a vast expanse of best practice security measures, not at all easy to understand, and even less easy to apply to your personal situation. The headlines are as follows:

  • 12 Requirements
  • but 230 sub-requirements
  • and some estimates of 650 detail points

The PCI DSS in 2011 remains an ongoing challenge for the overwhelming majority of PCI Merchants. Feedback from working with a number of casino resorts, theme parks, ferry services and call centers over the past few months makes interesting reading for any other PCI Merchant wanting advice about PCI compliance.

Typically, one in every two Tier 2 and Tier 3 Merchants admit they do not understand the requirements of the PCI DSS. If you are either still working on implementing compliance measures identified in pre-audit surveys, or are not compliant and doing nothing about it, or are leaving everything to the last minute, don’t be too hard on yourself - nine out of ten Merchants are at the same stage.

In fact, it is fine to have a phased, prioritized approach and the PCI DSS Council fully recommend this strategy, mindful that Rome wasn’t built in a day.

Prioritizing PCI Compliance Measures

With so much ground to cover, prioritizing measures is a must, and indeed the recently released 'Prioritized Approach for PCI DSS Version 2.0' from the PCI Security Standards Council website is an essential document for anyone working out where to start with assessing their compliance position.

Although the PCI DSS is sectioned loosely around twelve headline Requirements in terms of technologies (Firewalling, Anti-Virus, Logging and Audit Trails, File Integrity Monitoring, Device Hardening and Card Data Encryption) - and procedures and processes (physical security, education of staff, development and testing procedures, change management), you soon realize that there are threads that run horizontally through all requirements.

If you consider Requirement 1 of the PCI DSS, this is oriented around the need for a firewall and a fundamentally secure network design. However, you quickly end up with a secondary list of questions and queries. Do we need a diagramming tool? Do we need to automate the monitoring of firewall rule changes? (Incidentally, this is a task easily done using a good file integrity monitoring product.) What is our Change Management process? Is it documented?

In this respect there is potentially a good argument for the creation of other versions of the PCI DSS oriented around procedural dimensions, such as password policies for all disciplines and devices, or change management for all disciplines and devices, and so on. Whilst the Prioritized Approach gives a good framework for planning and measuring progress, it is strongly advised that at every step you also look up and see which other requirements can be taken care of by the same measure. For example, file integrity monitoring is only specifically mentioned in Requirement 11.5; however, good FIM software solutions will also underpin Requirements 1, 2, 3, 4, 5, 6, 7, 8, 10 and 12.

The general advice is that, even though it is very daunting, if you can get ‘intimate’ with the PCI DSS, both in spirit and in detail, then as with everything else in life, the better informed you are, the more in control you will be, and the less money and sweat will be wasted.

Summary

The PCI DSS may well challenge your pre-conceptions about what an Information Security Policy comprises - but there is plenty of help to draw upon.
In summary

  • Use vendor offers - a free trial of event log server software will allow you to see first-hand how much noise you are likely to be dealing with in your estate, and how straightforward or otherwise an implementation might be, before you spend any money
  • Use the PCI Security Standards Council website - tools like the Prioritized Approach spreadsheet will help break down the full PCI DSS into a more manageable series of steps and priorities
  • Look for quick wins and the best 'bang for buck' measures - implementing File Integrity Monitoring software for PCI compliance can take a big bite out of the overall requirements and may be one of the simpler and more affordable steps you take
What do you think? If you could give one piece of advice based on your own experience of PCI Compliance what would that be?

Thursday 2 June 2011

PCI File Integrity Monitoring - Five More FAQs for PCI DSS Merchants

PCI DSS File integrity monitoring - what are the best options for file integrity monitoring, and what else do you need to know? How do you implement file integrity monitoring for Windows servers and Unix servers? How do you provide file integrity monitoring for firewalls, routers, EPoS devices and servers? How does file integrity monitoring software work, and what are the key features to look for? Should a file integrity monitor be agent-based or agentless?
The following is part two in a two part series listing the Top Ten FAQs for File-Integrity Monitoring that any PCI Merchant should be aware of.

1. For Log Files and Databases
Log files will change constantly on a busy server, but it is important that log files change only in the manner expected. File integrity monitoring must be used in secure environments to protect important audit trails of system access, privilege usage and changes. The key is to allow log files only to increase in size, and to alert on any other change - clearing or editing log files is classic hacker activity, an attempt to remove or alter audit trail information, and should be monitored. Of course, event logs should also be backed up centrally on a secure log server, as mandated by PCI Requirement 10.
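The 'grow-only' rule described above can be sketched in a few lines of Python. This is a minimal illustration, not taken from any particular FIM product; scheduling and alert delivery are left out:

```python
import os

def check_log_growth(path, last_sizes):
    """Return an alert string if the monitored log shrank or vanished, else None.

    last_sizes is a dict mapping each monitored path to its last-known size;
    a size decrease (or disappearance) suggests the log was cleared or truncated.
    """
    if not os.path.exists(path):
        return f"ALERT: log file missing: {path}"
    size = os.path.getsize(path)
    previous = last_sizes.get(path, 0)
    last_sizes[path] = size
    if size < previous:
        return f"ALERT: {path} shrank from {previous} to {size} bytes"
    return None
```

A real product would run this kind of check on a schedule and also watch the file's permissions and ownership, but the principle is the same: growth is normal, shrinkage is an incident.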
Similarly, database files containing card data and personal information must be protected, and an audit trail of all access and changes created. Again, database files will change constantly, so the SHA1 approach will not be suitable. When using file integrity monitoring for SQL Server or Oracle databases, the best option is to log access and changes to specific tables and back up event logs centrally on your secure PCI DSS log server.

2. For System32 Folder
The most critical system files to monitor for file integrity on a Windows server or EPoS till are within the Windows\System32 folder. All critical operating system programs, DLL files and drivers reside in this location, and it is therefore an ideal location for Trojans to hide. The threat is that a Trojan could be implanted onto the EPoS device or Card Data Handling Server, evading anti-virus detection (AV is typically only 70-90% effective). A file integrity monitoring agent will gather a full inventory of all files within the System32 folder and then make regular comparative checks to detect any changes made. Trojans are ordinarily difficult to find because they masquerade as regular System32 program files, so they look, and appear to act, like the genuine program.
Similarly, for Linux and Unix file integrity, all key program file systems, such as /usr/sys and /bin, must be checked for integrity using a Linux or Unix file integrity monitor.
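The inventory-and-compare approach can be illustrated with a short Python sketch. This is a minimal illustration only - real FIM agents also track permissions, modified dates and other attributes alongside the content hash:

```python
import hashlib
import os

def snapshot(folder):
    """Build a baseline: map each file path to the SHA-1 of its contents."""
    baseline = {}
    for root, _dirs, files in os.walk(folder):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                baseline[path] = hashlib.sha1(f.read()).hexdigest()
    return baseline

def compare(old, new):
    """Classify the differences between two snapshots."""
    added = sorted(set(new) - set(old))
    deleted = sorted(set(old) - set(new))
    changed = sorted(p for p in set(old) & set(new) if old[p] != new[p])
    return added, deleted, changed
```

Run snapshot() at a scheduled interval and compare() against the stored baseline; any unexpected entry in the added or changed lists for a folder like System32 warrants investigation.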

3. For Windows Updates
Windows Updates and patches for other applications will almost always involve updating program files, drivers and dll files. It is rarely clear which files will be modified by a patch and therefore any updates may generate numerous file changes across many folders and locations. Therefore it is vital that, while your file integrity monitor may track detailed changes to any one of a wide range of file attributes, you can also get good 'at a glance' summary information regarding whether a file has been added, deleted or changed.

4. Card Data and Card Data Folder File Integrity Monitoring
Where card data or other sensitive financial information is stored on an EPoS device or server the first line of defense is to limit access via folder and file rights and permissions. Even then, any user with Administrator rights will still be able to view the data and potentially copy out card numbers.
Therefore the best line of defense is to implement object access auditing on the file or folder. This will generate a full audit trail logging all access to the folder, including the user account used. Processing this audit trail with an intelligent PCI event log analyzer will then ensure any unexpected access to the card data generates an alert - for example, a rule that automatically distinguishes normal operations (e.g. local system account access) from access by a named account with administrator rights.
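The kind of rule described above amounts to a simple filter over the audit trail. In this sketch, the event fields, account names and file path are all hypothetical examples, not any product's format:

```python
# Accounts expected to touch the card data folder during normal operations.
# These names are invented for illustration.
EXPECTED_ACCOUNTS = {"SYSTEM", "svc-settlement"}

def review_access_events(events):
    """Return only the object-access events worth alerting on:
    access by any account outside the expected set."""
    return [e for e in events if e["account"] not in EXPECTED_ACCOUNTS]

audit_trail = [
    {"time": "02:00", "account": "svc-settlement", "object": r"D:\settle\batch.dat"},
    {"time": "14:37", "account": "jsmith-admin", "object": r"D:\settle\batch.dat"},
]
for event in review_access_events(audit_trail):
    print("ALERT:", event["time"], event["account"], event["object"])
```

Here the overnight settlement service is business as usual, while the named administrator account reading the same file mid-afternoon is exactly the kind of event a daily review should surface.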

5. PCI File Monitoring and Planned Changes/Change Acknowledgment
Of course, changes will need to be made to configuration files and system files every once in a while. It is important to keep security patches up to date and the PCI DSS mandates this should happen every month.
Operating a formal Change Management process is a key element of any IT security policy and therefore it is vital that your file integrity monitoring solution takes account of intended, planned changes. Any file changes detected as part of a planned change should be verified as part of your QA Testing and post implementation review processes to confirm that the right changes happened to the intended files only.
What about unplanned changes that are either emergency changes or those that for some reason bypass the change management process? These will all be detected and alerts raised by the file integrity monitor, but there then needs to be an incident management process to investigate and either approve the changes or remediate them. The PCI DSS is not prescriptive as to how these processes should be managed: some organizations will use a full Service desk application to document and approve changes, whereas smaller organizations may just need a spreadsheet record of changes - use what works best for your company, not what you think a QSA will expect to see!

See part one of this series for other important file-integrity monitoring FAQs that any Merchant needing to be PCI DSS compliant should know.

Wednesday 1 June 2011

PCI File Integrity Monitoring - Five FAQs for PCI DSS Merchants

Requirement 11.5 of the PCI DSS specifies "the use of file-integrity monitoring tools within the cardholder data environment by observing system settings and monitored files, as well as reviewing results from monitoring activities." Additionally, "verify the tools are configured to alert personnel to unauthorized modification of critical files and to perform critical file comparisons at least weekly."
The following is part one in a two part series listing the Top Ten FAQs for File-Integrity Monitoring that any PCI Merchant should be aware of.

1. Agent-based file monitor or Agentless file monitor?
The gut reaction is that an agentless file integrity monitor is preferable - no software deployment required, no agent updates to apply, and one less process running on your server. In theory at least, by enabling Object Access auditing via Group Policy or the Local Security Policy on the server or EPoS device, it is possible to track file changes via Windows Events. You still need to work out how to get the local Windows Events back to a central log server, but you will need to do this to comply with PCI DSS Requirement 10 anyway (and, by the way, this will definitely need an agent to be deployed to any Windows server or till).
However, the agent-based file-integrity monitor does have some distinct advantages over the agentless approach. Firstly, by using an agent, a PCI DSS file integrity monitoring template can be provided, comprising a blueprint for all folders and files that should be monitored to secure card data. In other words, a Windows file monitoring agent is easier to set up and configure.
Secondly, a Windows file integrity monitor can actively inventory the file system. This allows the PCI DSS Merchant to demonstrate compliance with PCI DSS Requirement 11.5b by performing critical file comparisons not just weekly, but on a scheduled daily basis, or even in real time for ultra-secure environments.
Finally, an agent-based file-integrity monitor for Windows can provide a Secure Hash Checksum of a file, which is the most reliable means of guaranteeing the identity and integrity of binary system files. See FAQ 2 for more details.

2. Why use a Secure Hash Checksum for File Integrity Monitoring?
A secure hash checksum is generated by applying a hash algorithm to a file. The algorithm is designed so that the resulting hash is effectively unique to the file's contents: even a one-bit difference in the file will produce a completely different hash. The most common algorithms used are SHA1 and MD5; SHA1 generates a 160-bit hash value for a file, MD5 a 128-bit value. Recording and tracking changes to the secure hash of a file, in conjunction with other file attributes such as permissions, modified date and size, provides a highly reliable means of ensuring file integrity.
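This sensitivity is easy to demonstrate with Python's standard hashlib module; the 'config' strings below are invented sample data:

```python
import hashlib

original = b"config: allow 10.0.0.0/8"
tampered = b"config: allow 10.0.0.1/8"  # a single character differs

h1 = hashlib.sha1(original).hexdigest()  # 40 hex characters = 160 bits
h2 = hashlib.sha1(tampered).hexdigest()
print(h1)
print(h2)
# The two digests bear no resemblance to each other, so even the
# smallest tampering with a monitored file is immediately visible.
```

This is why a stored hash is such a compact baseline: comparing 40 characters is enough to confirm that megabytes of binary are byte-for-byte unchanged.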

3. How to implement File Integrity Monitoring for Firewalls, Switches and Routers
Typically, any Firewall, Switch and Router will have a range of configuration settings which govern the performance, operation and crucially, the security of the device and the network it is protecting.
For instance, tracking changes to the running config and the startup config of a router will reveal whether any significant changes have been made that could affect the security of the network. Similarly, tracking changes to permissions and rules on a firewall will ensure that perimeter security has not been compromised.
Use of file integrity monitoring for firewalls, routers and switches is a key dimension for any change management procedure and essential for a comprehensive IT Security Policy.
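For configuration files, the check usually amounts to comparing the current contents against a trusted baseline, line by line. A minimal sketch using Python's standard difflib module; the firewall rules shown are invented examples:

```python
import difflib

# Trusted baseline config, captured when the device was last approved.
baseline = """hostname edge-fw
permit tcp any host 10.1.1.5 eq 443
deny ip any any
""".splitlines()

# Config as retrieved from the device today.
current = """hostname edge-fw
permit tcp any host 10.1.1.5 eq 443
permit tcp any host 10.1.1.9 eq 23
deny ip any any
""".splitlines()

# unified_diff highlights exactly which rules were added or removed;
# here an unexpected telnet rule would stand out for investigation.
for line in difflib.unified_diff(baseline, current, lineterm=""):
    print(line)
```

In practice a FIM product automates the retrieval and baseline storage, but the output is the same idea: not just "the config changed", but which lines changed.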

4. File Integrity Monitoring for Web Applications
Web site Apps can generate lots of file changes that are not significant with respect to the security of card data. For instance, images, page copy and page layouts may change frequently on an active ecommerce website, but none of these file changes will affect the security of the website. Depending on the web environment in use, there may be a mixture of ASP.NET (ascx, aspx, and asmx asdx files), Java (with js and jsp files), PHP, config or cnf files, plus the more regular system files, such as dll and exe program files. It is essential to monitor file changes to all system files and config files for a card data application, and web applications create more of a challenge due to the highly dynamic nature of the web app file system. A good file integrity monitor for web applications will have built-in intelligence to automatically detect significant file changes only and ignore changes to other files.


See part two in this series for more PCI DSS File Integrity Monitoring FAQs.

Friday 6 May 2011

Retail Systems Forum Approaches - 'Complicated, Expensive and Time-Consuming – but the PCI DSS isn’t going away'

Just 2 weeks now until this year's Retail Systems Forum being held at Microsoft's UK HQ in Reading - see http://www.retailsystemsforum.co.uk/

NNT are presenting one of the sessions -'Complicated, Expensive and Time-Consuming – but the PCI DSS isn’t going away'

  • The PCI DSS in 2011 – Attitudes and Opinions from Multi Channel retailers in the UK
  • Strategies available – what is working and what are others getting away with?
  • Common Sense or Technology?
  • Are the goalposts moving (or going to move)?
I have only just finished the presentation for the deadline so at least it is topical and up to date!

I am talking about some of the feedback we have had from PCI DSS customers over the past few months, such as

- Duck it! “The future is too unclear to make any investment...”
- Paralysis! “We don’t want to make mistakes like xyz...”
- Ignore it! “We don’t need to bother – we’ve been OK so far and we view the risks as low...”
- Go Slow! “We have kept some updated procedural stuff back and if we drip-feed this to the Bank over the next two quarters then we are covered for the next few months...”

How much does it cost to procrastinate, delay and ignore the requirements of the PCI DSS? Wouldn't it be a better use of resources to embrace the PCI DSS, understand its intentions and methods, then apply these to your organization? You need a security policy, so why not take the 'off the shelf' option on offer in the knowledge that this is a well-thought out, widely implemented and tested standard that works?

In all the instances referenced above, we ended up delivering solutions to the various PCI DSS requirements:

- File Integrity Monitoring (PCI Requirement 11.5): essentially, this requires the PCI Merchant to keep tabs on any changes made to the configuration of firewalls, switches and routers in the network, ensure that Windows operating system files and program files on EPoS devices and servers don't change, and track any access to card data files
- Device Hardening (PCI Requirements 2, 6, 8, 10 and 11): a configuration and set-up process for all servers, EPoS devices, PCs and network devices, whereby the 'built-in' weaknesses and vulnerabilities present are removed or minimized
- Centralized Event Log Management (PCI Requirement 10): gives both a pro-active security monitoring capability and a full, 'forensic' audit trail to use in the event of a breach
- Change Management (PCI Requirements 1, 2, 6, 8, 10 and 11): underpins all PCI DSS requirements, inasmuch as once your PCI estate is secure, you need to keep it that way - reducing changes and, for those that are made, ensuring they are planned, documented and approved. Change Tracker reconciles changes that are made with details of the intended change

The RSF format is to not get too technical nor be product-oriented, so the presentation will shy away from even this level of detail.

I hope the event can be recorded and published on www.nntws.com for anyone who can't make the event in person.

Tuesday 29 March 2011

Implement Logging for PCI DSS – A How to Guide

PCI DSS Requirement 10 calls for a full audit trail of all activity for all devices and users, and specifically requires all event and audit logs to be gathered centrally and securely backed up. The thinking here is twofold.

Firstly, as a pro-active security measure, the PCI DSS requires all logs to be reviewed on a daily basis (yes - you did read that correctly - review ALL logs DAILY; we shall return to this potentially overwhelming burden later). This requires the Security Team to become more intimate with the daily 'business as usual' workings of the network, so that when a genuine security threat arises, it will be more easily detected through unusual events and activity patterns.

The second driver for logging all activity is to give a 'black box' recorded audit trail so that if a cyber crime is committed, a forensic analysis of the activity surrounding the security incident can be conducted. At best, the perpetrator and the extent of their wrongdoing can be identified and remediated. At worst - lessons can be learned from the attack so that processes and/or technological security defenses can be improved. Of course, if you are a PCI Merchant reading this, then your main driver is that this is a mandatory PCI DSS requirement - so we should get moving!

Which devices are within scope of PCI Requirement 10? The same answer as for the PCI DSS as a whole: anything involved with handling, or with access to, card data is within scope, and we therefore need to capture an audit trail from each of them. The most critical devices are the firewall, servers holding settlement or transaction files, and any Domain Controller for the PCI Estate, although all 'in scope' devices must be covered without exception.

How do we get Event Logs from 'in scope' PCI devices?
We'll take them in turn -

How do I get PCI Event Logs from Firewalls? - The exact command set varies between manufacturers and firewall versions, but you will need to enable 'logging' via either the firewall web interface or the command line. Taking a typical example - a Cisco ASA - the CLI command sequence is as follows:

logging on
no logging console
no logging monitor
logging a.b.c.d
logging trap informational

where a.b.c.d is the address of your syslog server. This will make sure all 'Informational' level and above messages are forwarded to the syslog server, and guarantee that all logon and logoff events are captured.

How do I get PCI Audit Trails from Windows Servers and EPoS/Tills? - There are a few more steps required for Windows Servers and PCs/EPoS devices. First of all, it is necessary to make sure that logon and logoff events, privilege use, policy change and, depending on your application and how card data is handled, object access are all being audited; use Group Policy or the Local Security Policy to configure this. You may also wish to enable System Event logging if you want to use your SIEM system to help troubleshoot and pre-empt system problems - e.g. a failing disk can be caught before complete failure by spotting disk errors. Typically we will need Success and Failure to be logged for each event:

  • Account Logon Events - Success and Failure
  • Account Management Events - Success and Failure
  • Directory Service Access Events - Failure *
  • Logon Events - Success and Failure
  • Object Access Events - Success and Failure **
  • Policy Change Events - Success and Failure
  • Privilege Use Events - Failure
  • Process Tracking - No Auditing ***
  • System Events - Success and Failure ****

* Directory Service Access Events are available on a Domain Controller only.

** Object Access - used in conjunction with Folder and File Auditing. Auditing Failures reveals attempted access to forbidden secure objects, which may be an attempted security breach. Auditing Success gives an audit trail of all access to secured data, such as card data in a settlement/transaction file or folder.
*** Process Tracking - not recommended, as this will generate a large number of events. Better to use a specialized whitelisting/blacklisting technology.
**** System Events - not required for PCI DSS compliance, but often used to provide extra 'added value' from a PCI DSS initiative, giving early warning signs of hardware problems and so pre-empting system failures.

Once events are being audited, they then need to be relayed back to your central syslog server. A Windows syslog agent program will automatically bind into the Windows Event logs and send all events via syslog. The added benefit of an agent like this is that events can be formatted into standard syslog severity and facility codes and also pre-filtered. It is vital that events are forwarded to the secure syslog server in real time, to ensure they are backed up before there is any opportunity to clear the local server event log.
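As a rough illustration of what such an agent does at the network level, here is a minimal Python sketch that encodes a message with a syslog priority value (PRI = facility x 8 + severity, per the syslog protocol) and sends it as a UDP datagram. The facility/severity defaults and the example event text are assumptions for illustration, not any vendor's format:

```python
import socket

def send_syslog(message, server, port=514, facility=13, severity=5):
    """Forward one event to the central log server as a syslog datagram.

    The PRI field prefixing the message is facility * 8 + severity,
    which is how syslog encodes both codes in a single number.
    """
    pri = facility * 8 + severity
    datagram = f"<{pri}>{message}".encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(datagram, (server, port))

# Example: relay a Windows logon event to the PCI log server at a.b.c.d
# send_syslog("EventID 4624: successful logon, user jsmith", "a.b.c.d")
```

A production agent adds timestamps, hostnames, pre-filtering and guaranteed delivery, but the underlying relay step is no more complicated than this.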
Unix/Linux Servers - Enable logging using the syslogd daemon, which is a standard part of all UNIX and Linux operating systems such as Red Hat Enterprise Linux, CentOS and Ubuntu. Edit the /etc/syslog.conf file and enter details of the syslog server.

For example, append the following line to the /etc/syslog.conf file
*.* @a.b.c.d
Or if using Solaris or other System 5-type UNIX
*.debug @a.b.c.d
*.info @a.b.c.d
*.notice @a.b.c.d
*.warning @a.b.c.d
*.err @a.b.c.d
*.crit @a.b.c.d
*.alert @a.b.c.d
*.emerg @a.b.c.d

Where a.b.c.d is the IP address of the targeted syslog server.
If you need to collect logs from a third-party application, e.g. Oracle, then you may need to use a specialized Unix syslog agent which allows third-party log files to be relayed via syslog.

Other Network Devices - Routers and switches within the scope of the PCI DSS will also need to be configured to send events via syslog. As was detailed for firewalls earlier, syslog is an almost universally supported function for network devices and appliances. However, in the rare case that syslog is not supported, SNMP traps can be used, provided the syslog server being used can receive and interpret SNMP traps.

PCI DSS Requirement 10.6 - "Review logs for all system components at least daily". We have now covered how to get the right logs from all devices within scope of the PCI DSS, but this is often the simpler part of handling Requirement 10. The aspect of Requirement 10 which most concerns PCI Merchants is the extra workload they expect from being responsible for analyzing and understanding a potentially huge volume of logs. There is often an 'out of sight, out of mind' philosophy, or an 'if we can't see the logs, then we can't be responsible for reviewing them' mindset; once logs are made visible and placed on the screen in front of the Merchant, there is no longer any excuse for ignoring them.

Tellingly, although the PCI DSS avoids being prescriptive about how to deliver against the 12 requirements, Requirement 10 specifically notes that "log harvesting, parsing, and alerting tools may be used to meet compliance with Requirement 10.6". In practice it would be an extremely manpower-intensive task to review all event logs in even a small-scale environment, and an automated means of analyzing logs is essential.
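As a toy illustration of the parsing-and-alerting idea, the sketch below reduces a day's logs to the handful of lines a human actually needs to review. The 'suspicious' phrases and sample log lines are invented for illustration, not an exhaustive rule set:

```python
# Phrases that should never appear in business-as-usual logs.
# Real tools use far richer rules (event IDs, thresholds, correlation).
SUSPICIOUS = ("audit log cleared", "account locked", "added to administrators")

def daily_review(log_lines):
    """Return only the lines a human needs to look at, instead of all of them."""
    return [line for line in log_lines
            if any(phrase in line.lower() for phrase in SUSPICIOUS)]

logs = [
    "08:01 user jsmith logon success",
    "08:02 the Audit Log cleared by user temp-admin",
    "08:03 backup job completed",
]
for alert in daily_review(logs):
    print("REVIEW:", alert)
```

Even this crude filter shows how automation changes the daily review from 'read everything' to 'investigate the exceptions', which is the spirit of Requirement 10.6.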

However, when implemented correctly, this will become so much more than simply a tool to help you cope with the inconvenient burden of the PCI DSS. An intelligent Security Information and Event Management system will be hugely beneficial to all troubleshooting and problem investigation tasks. Such a system will allow potential problems to be identified and fixed before they affect business operations. From a security standpoint, by enabling you to become 'intimate' with the normal workings of your systems, you are then well-placed to spot truly unusual and potentially significant security incidents.

Tuesday 1 February 2011

File Integrity Monitoring - PCI DSS Requirements 10, 10.5.5 and 11.5

Although FIM or File Integrity Monitoring is only mentioned specifically in two sub-requirements of the PCI DSS (10.5.5 and 11.5), it is actually one of the more important measures in securing business systems from card data theft.


What is it, and why is it important?

File integrity monitoring systems are designed to protect card data from theft. The primary purpose of FIM is to detect changes to files and their associated attributes. This article provides the background to three different dimensions of file integrity monitoring, namely:


  • secure hash-based FIM, used predominantly for system file integrity monitoring
  • file contents integrity monitoring, useful for configuration files from firewalls, routers and web servers
  • file and/or folder access monitoring, vital for protecting sensitive data

Secure Hash Based FIM

Within a PCI DSS context, the main files of concern include
  • System files, e.g. anything that resides in the Windows/System32 or SysWOW64 folders, program files, or, for Linux/Unix, key kernel files
The objective for any hash-based file integrity monitoring system as a security measure is to ensure that only expected, desirable and planned changes are made to in-scope devices. The reason for doing this is to prevent card data theft via malware or program modifications.
Imagine that a Trojan is installed on a Card Transaction server – the Trojan could be used to transfer card details off the server. Similarly, a packet sniffer program could be placed on an EPoS device to capture card data – if it were disguised as a common Windows or Unix process with the same program and process names, it would be hard to detect. For a more sophisticated hack, what about implanting a 'backdoor' into a key program file to allow access to card data?
These are all examples of security incidents where Windows file integrity monitoring is essential in identifying the threat. Remember that anti-virus defenses are typically only aware of 70% of the world's malware, and an organization can be hit by a zero-day attack (zero-day marks the point in time when a new form of malware is first identified – only then can a remediation or mitigation strategy be formulated, and it can be days or weeks before all devices are updated to protect them).

How far should FIM measures be taken?
As a starting point, when implementing file integrity monitoring for Windows, it is essential to monitor the Windows/System32 or SysWOW64 folders. Similarly, for Unix and Linux systems, the /etc files and other core binary paths must be monitored, plus the main Card Data Processing Application program folders. For these locations, run a daily inventory of all system files within these folders and identify all additions, deletions and changes. Additions and deletions are relatively straightforward to identify and evaluate, but how should changes be treated, and how do you assess the significance of a subtle change, such as a file attribute? The answer is that ANY file integrity change in these critical locations must be treated with equal importance. Most high-profile PCI DSS security breaches have been instigated via an 'inside man' – typically a trusted employee with privileged admin rights. For today's cybercrime there are no rules.
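The daily inventory described above can be sketched in a few lines of Python – an illustrative baseline-and-compare routine, not a full FIM product. SHA-1 is used here because the article discusses it, though stronger hashes are preferable:

```python
import hashlib
import os

def snapshot(folder):
    """Map each file path under a folder to the SHA-1 hash of its contents."""
    result = {}
    for root, _dirs, files in os.walk(folder):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                result[path] = hashlib.sha1(f.read()).hexdigest()
    return result

def compare(baseline, current):
    """Classify every difference as an addition, a deletion or a change."""
    added   = sorted(set(current) - set(baseline))
    deleted = sorted(set(baseline) - set(current))
    changed = sorted(p for p in baseline
                     if p in current and baseline[p] != current[p])
    return added, deleted, changed
```

Running snapshot() once to establish a baseline and then compare() against a fresh snapshot each day yields exactly the additions/deletions/changes report discussed above; a real FIM product would also track file attributes, permissions and ownership alongside the hash.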
The industry-acknowledged approach to FIM is to track all file attributes and to record a secure hash. There is a whitepaper that explains the detail of this technology here - 'File Integrity Monitoring - The Last Line of Defense in the PCI DSS'
Any change to the hash when the file-integrity check is re-run is a red alert situation – using SHA1 or MD5, even a microscopic change to a system file will denote a clear change to the hash value. When using FIM to govern the security of key system files there should never be any unplanned or unexpected changes – if there are, it could be a Trojan or backdoor-enabled version of a system file.
This is why it is also crucial to use FIM in conjunction with a 'closed loop' change management system – planned changes should be scheduled, and the associated file integrity changes logged and appended to the Planned Change record.

File Content/Config File Integrity Monitoring
Whilst a secure hash checksum is an infallible means of identifying any system file changes, it only tells us that a change has been made to the file, not what that change is. Sure, for a binary-format executable this is the only meaningful way of conveying that a change has been made, but a more valuable means of file integrity monitoring for 'readable' files is to keep a record of the file contents. This way, if a change is made to the file, the exact change made to the readable content can be reported.
For instance, a web configuration file (PHP, ASP.NET, JavaScript or XML config) can be captured by the FIM system and recorded as readable text; thereafter changes will be detected and reported directly.
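A sketch of this content-level approach, using Python's standard difflib module to report exactly what changed between the stored copy of a readable config file and the current one (the file name below is hypothetical):

```python
import difflib

def content_changes(stored_text, current_text, name="web.config"):
    """Return a unified diff of a readable config file against its stored copy."""
    return list(difflib.unified_diff(
        stored_text.splitlines(),
        current_text.splitlines(),
        fromfile=name + " (stored)",
        tofile=name + " (current)",
        lineterm="",
    ))
```

Unlike a bare hash comparison, the diff output shows the reviewer the exact lines added or removed, which is what makes content-level FIM so much more actionable for configuration files.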
Similarly, if a firewall access control list were edited to allow access to key servers, or a Cisco router startup config altered, this could allow a hacker all the time needed to break into a card data server.
One final point on file contents integrity monitoring: within the security policy/compliance arena, Windows Registry keys and values are often included under the heading of FIM. These need to be monitored for changes, as many hacks involve modifying registry settings. Similarly, a number of common vulnerabilities can be identified by analysis of registry settings.

File and/or Folder Access Monitoring

The final consideration for file integrity monitoring is how to handle other file types that are not suitable for secure hash or contents tracking. For example, a log file or database file will always be changing, so both its contents and its hash will be constantly changing too. Good file integrity monitoring technology will allow these files to be excluded from any FIM template.
However, card data can still be stolen without detection unless other measures are put in place. As an example scenario, in an EPoS retail system, a card transaction or reconciliation file is created and forwarded to a central payments server on a scheduled basis throughout the trading day. The file will always be changing – maybe a new file is created each time with a time-stamped name, so everything about the file is always changing.
The file would be stored on an EPoS device in a secure folder to prevent user access to the contents. However, an ‘inside man’ with Admin Rights to the folder could view the transaction file and copy the data without necessarily changing the file or its attributes. Therefore the final dimension for File Integrity Monitoring is to generate an alert when any access to these files or folders is detected, and to provide a full audit trail by account name of who has had access to the data. Much of PCI DSS Requirement 10 is concerned with recording audit trails to allow a forensic analysis of any breach after the event and establish the vector and perpetrator of any attack. Much more detail on this requirement can be found here - 'Event Log Monitoring and the PCI DSS'
If you are reading this and want to learn more about the PCI DSS and just what it takes to tackle the FIM requirements, you can view a couple of video overviews here and trial compliance software can be downloaded here
For more information go to www.newnettechnologies.com
All material is copyright New Net Technologies

Wednesday 12 January 2011

PCI DSS Requirement 10: Track and monitor all access to network resources and cardholder data

Here's a new video overview explaining the background to the PCI DSS 2.0 requirements for event log centralization and secure storage. The video also shows how to implement a solution that will make it easy to gather all audit logs from Windows, Unix, Linux, firewalls, routers, switches - even mainframes.

But that's the easy part! The main problem is that the PCI DSS 2.0 (Section 10.6) mandates the requirement for YOU to "Review logs for all system components at least daily". Seriously? Review all my Event Logs - 'At least Daily'?!

This is why you need some Security Information and Event Management technology - automatic analysis of event logs and intelligence to bring your attention to the genuinely serious or unusual events. This approach has a double impact. First of all, the obvious benefit is that you can still continue your current day job and meet the requirements of the PCI DSS! Secondly, it means that the events that are determined as 'significant' can realistically be investigated PROPERLY.

Implemented and used correctly, SIEM technology like NNT's Log Tracker ensures you not only meet your PCI DSS obligations to the letter, but in the spirit of the standard too. You will get intimate with how your network really behaves on a daily basis, which in turn means you will spot a real security threat if you are ever breached.

The PCI Event Log video is here (it is the second video clip on the lower half of the page, although it is worth watching our 6 Steps to PCI Compliance video too if you have time) - contact me if you want a live overview or trial and we can fix it up!