Wednesday, 7 December 2011

PCI Compliance In 10 Minutes A Day - Using File Integrity and Log File Monitoring Effectively

PCI Compliance Is Hard for Everyone!
In some respects, it can be argued that the less IT 'stuff' an organization has, the fewer resources are needed to run it all. With PCI compliance, however, there are still always 12 Requirements and some 650 sub-requirements in the PCI DSS to cover, regardless of whether you are a trillion-dollar multinational or a local theatre company.

The principles of good security remain the same at both ends of the scale - you can only identify security threats if you know what business-as-usual, regular running looks like.

Establishing this baseline understanding takes time - 8 to 24 weeks, in fact - because you need a sufficiently wide perspective on what 'regular' looks like. That is why we strongly advocate a baby-steps approach to PCI for all organizations, but especially those with smaller IT teams.

There is a strong argument that doing the basics well first, then expanding the scope of security measures, is much more likely to succeed than trying to do everything at once and in a hurry. Even if this means PCI compliance takes months to implement, it is a better strategy than rolling out an unsupportably broad range of measures. Better to work at a pace you can cope with than to go too fast and end up overloaded.

Here is the recommended five-step program. It is aimed at smaller IT teams, but it has merit for any size of organization.

PCI Compliance in 10 Minutes per Day
1. Classify your 'in scope of PCI' estate
You first need to understand where cardholder data resides. When we talk about cardholder data 'residing', this is deliberately different to the more usual term of cardholder data 'storage'. Card data passing through a PC, even if it is encrypted and immediately transferred elsewhere for processing or storage, has still 'resided' on that PC. You also need to include devices that share the same network as devices that store card data.
Now classify your device groups. Take the example of Center Theatre Group: they have six core servers that process bookings, around 25 PCs used for Box Office functions, and around 125 other PCs used for admin and general business tasks.
So we would define 'PCI Server', 'Box Office PC' and 'General PC' classes. Firewall devices are also a key class, but other network devices can be grouped together and left to a later phase. Remember - this isn't cutting corners or sweeping dirt under the carpet, but a pragmatic approach of doing the most important basics well first; in other words, taking the long view on PCI compliance.
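To make that classification concrete, here's a minimal sketch in Python. The class names follow the example above, but every hostname and 'in scope' flag is a hypothetical placeholder - substitute your own asset register and your own scoping decisions.

```python
# A minimal sketch of a device classification inventory. All hostnames and
# scope flags are hypothetical placeholders.

DEVICE_CLASSES = {
    "PCI Server":    {"in_scope": True,  "members": ["book-svr-01", "book-svr-02"]},
    "Box Office PC": {"in_scope": True,  "members": ["box-pc-01", "box-pc-02"]},
    "General PC":    {"in_scope": False, "members": ["admin-pc-01"]},  # assumed segmented
    "Firewall":      {"in_scope": True,  "members": ["fw-edge-01"]},
}

def in_scope_devices(classes):
    """List every device that belongs to an in-scope class."""
    return [host
            for cls in classes.values() if cls["in_scope"]
            for host in cls["members"]]

print(in_scope_devices(DEVICE_CLASSES))
```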

2. Make a Big Assumption
We now apply an assumption to these device groups: that devices within each class are so similar in make-up and behavior that monitoring one or two sample devices from any class will accurately represent all the other devices in the same class.
We all know what can happen when you assume anything, but this assumption is a good one. This is all about taking baby steps to compliance, and since we declared up front that our strategy must be practical for our organization and its available resources, it works well.
The aim is to get a good picture of what normal operation looks like, but in a controlled and manageable manner. We won't get flooded with file integrity changes or overwhelmed with event log data, but we will see a representative range of behavior patterns to understand what we are going to be dealing with.
Given the device groups outlined, I would target one or two servers - say a web server and a general application server - one or two Box Office PCs and one or two general PCs.
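Here's a sketch of how that sampling might look, again against a hypothetical inventory. In practice you would hand-pick representative devices, as above; random selection is shown only to keep the sketch short.

```python
import random

# A sketch of the sampling assumption: instrument one or two representative
# devices per class rather than the whole estate. The inventory and
# SAMPLE_SIZE are illustrative assumptions.

DEVICE_CLASSES = {
    "PCI Server":    ["book-svr-01", "book-svr-02", "book-svr-03"],
    "Box Office PC": ["box-pc-01", "box-pc-02", "box-pc-03"],
    "General PC":    ["admin-pc-01", "admin-pc-02"],
}

SAMPLE_SIZE = 2

def pick_samples(classes, k=SAMPLE_SIZE):
    """Choose up to k devices from each class to monitor first."""
    return {name: random.sample(members, min(k, len(members)))
            for name, members in classes.items()}

print(pick_samples(DEVICE_CLASSES))
```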

3. Watch...
You'll begin to see file changes and events being generated by your monitored devices, and about ten minutes later you'll be wondering what they all are. Some are self-explanatory, some not so.
Sooner or later, the imperative of tight Change Control becomes apparent.
If changes are being made at random, how can you begin to associate change alerts from your FIM system with intended 'good' changes and, consequently, detect genuinely unexpected changes that could be malicious?
It is much easier if you know in advance when changes are likely to happen - say, by scheduling the third Thursday of every month for patching. If you then see changes detected on a Monday, these are exceptional by default. OK, there will always be a need for emergency fixes and changes, but getting in control of the notification and documentation of changes really starts to make sense when you get serious about security.
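As a sketch of how such a planned-change window can be checked automatically - assuming the third-Thursday patching schedule used in the example above:

```python
import datetime

# A sketch of a planned-change window check, assuming patching is scheduled
# for the third Thursday of each month.

def third_thursday(year, month):
    """Date of the third Thursday in the given month."""
    first = datetime.date(year, month, 1)
    # weekday(): Monday=0 ... Thursday=3
    offset = (3 - first.weekday()) % 7        # days until the first Thursday
    return first + datetime.timedelta(days=offset + 14)

def is_planned_change(ts):
    """True if a change timestamp falls on the scheduled patch day."""
    return ts.date() == third_thursday(ts.year, ts.month)

# A change detected on a Monday is exceptional by default:
print(is_planned_change(datetime.datetime(2011, 12, 5, 14, 30)))   # False
print(is_planned_change(datetime.datetime(2011, 12, 15, 14, 30)))  # True
```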
Similarly, from a log analysis standpoint: once you begin capturing logs in line with PCI DSS Requirement 10, you quickly see a load of activity that you never knew was happening before. Is it normal? Should you be worried by events that don't immediately make sense? There is no alternative but to get intimate with your logs and begin understanding what regular activity looks like - otherwise you will never be able to detect the irregular and potentially harmful.
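Here's a minimal sketch of what that baselining can look like. The log format - an event identifier as the first token of each line, in the style of Windows Security event IDs - is an assumption; adapt the parser to whatever your devices actually emit.

```python
from collections import Counter

# A sketch of log baselining for Requirement 10: learn the average daily
# count of each event type during the learning period.

def event_id(line):
    """Extract an event identifier from a raw log line (assumed first token)."""
    return line.split()[0]

def build_baseline(log_lines, days):
    """Average daily count of each event type over the learning period."""
    counts = Counter(event_id(line) for line in log_lines)
    return {eid: total / days for eid, total in counts.items()}

sample_logs = [
    "4624 An account was successfully logged on",
    "4624 An account was successfully logged on",
    "4672 Special privileges assigned to new logon",
]
print(build_baseline(sample_logs, days=1))
```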

4. ...and learn
You'll now have a manageable volume of file integrity alerts and event log messages. Use them to improve your internal processes - mainly change management - and to 'tune in' your log analysis ruleset so that it has the intelligence to process events automatically and alert you only to the unexpected: for example, a known set of events at an unusual frequency, or previously unseen events.
Summary reports collating file changes on a per-server basis are useful here. This is the time to hold your nerve and see the learning phase through to a conclusion where you and your monitoring systems are in control - you see what you expect to see on a daily basis, and you get changes only when they are planned to happen.
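A sketch of what such a tuned ruleset might boil down to, building on a per-day baseline like the one above. The threefold frequency threshold is an arbitrary assumption to be tuned against your own data.

```python
# A sketch of a tuned ruleset: stay quiet for known events at normal rates,
# alert only on previously unseen event types or known types at an unusual
# frequency. The x3 multiplier is an assumption.

def unexpected_events(baseline, todays_counts, multiplier=3):
    """Compare today's per-event counts against average daily baseline counts."""
    alerts = []
    for eid, count in todays_counts.items():
        expected = baseline.get(eid)
        if expected is None:
            alerts.append((eid, count, "previously unseen event type"))
        elif count > expected * multiplier:
            alerts.append((eid, count, f"unusual frequency (baseline ~{expected:.1f}/day)"))
    return alerts

baseline = {"4624": 120.0, "4672": 8.0}
today = {"4624": 115, "4672": 40, "4625": 6}   # 4625 (failed logon) never seen before
for alert in unexpected_events(baseline, today):
    print(alert)
```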

5. Implement
Now that you are in control of what 'regular operation' looks like, you can begin expanding the scope of your File Integrity and Logging measures to cover all devices. Although there will be a much higher volume of events gathered from systems, these should fall within the bounds of 'known, expected' events. Similarly, now that your Change Management processes have matured, file integrity and other configuration changes will only be detected during scheduled, planned maintenance periods. Ideally your FIM system will be integrated with your Change Management process so that events can be categorized as Planned Changes and reconciled with RFC (Request for Change) details.
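As a closing sketch, here is one way that FIM-to-Change-Management reconciliation could work. The RFC record structure is an assumption - a real integration would query your change management system rather than a hard-coded list.

```python
import datetime

# A sketch of reconciling FIM change events with Change Management records.
# RFC records here are hypothetical and hard-coded for illustration.

RFCS = [
    {"id": "RFC-1042", "host": "book-svr-01",
     "start": datetime.datetime(2011, 12, 15, 20, 0),
     "end":   datetime.datetime(2011, 12, 15, 23, 0)},
]

def reconcile(change, rfcs=RFCS):
    """Return the matching RFC id for a FIM change, or None if unplanned."""
    for rfc in rfcs:
        if change["host"] == rfc["host"] and rfc["start"] <= change["time"] <= rfc["end"]:
            return rfc["id"]
    return None

change = {"host": "book-svr-01",
          "path": r"C:\Windows\System32\example.dll",
          "time": datetime.datetime(2011, 12, 15, 21, 15)}
print(reconcile(change) or "UNPLANNED - investigate")
```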
