Saturday 13 November 2010

PCI DSS Section 10 - Backup event logs centrally

There are typically two concerns that need to be addressed - first, "what is the best way to gather and centralize event logs?" And second, "what do we need to do with the event logs once we have them stored centrally? (And how will we cope with the volume?)"

To the letter of the PCI DSS, you are obliged to make use of event and audit logs in order to track user activity for any device within scope i.e. all devices which either 'touch' cardholder data or have access to cardholder data processing systems. The full heading of the Log Tracking section of the PCI DSS is as follows -

"PCI DSS Requirement 10: Track and monitor all access to network resources and cardholder data"

Logging mechanisms and the ability to track user activities are critical in preventing, detecting, or minimizing the impact of a data compromise. The presence of logs in all environments allows thorough tracking, alerting, and analysis when something does go wrong. Determining the cause of a compromise is very difficult without system activity logs.

Given that many PCI DSS estates are geographically widespread, it is always a good idea to use some means of centralizing log messages; in any case, you are obliged to take this route by section 10.5.3 of the PCI DSS -

"Promptly back up audit trail files to a centralized log server or media that is difficult to alter"

The first obstacle to overcome is the gathering of event logs. Unix and Linux hosts can utilize their native syslogd capability, but Windows servers will need a third-party Windows Syslog agent to transfer Windows Event Logs via syslog - you can download a free copy of our Log Tracker Agent via this link.
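As an illustration of what such an agent does under the hood, the sketch below forwards a single message to a central collector as an RFC 3164-style syslog datagram. The hostname, facility and severity values are placeholders for illustration, not a description of any particular product:

```python
import socket

def format_syslog(message, facility=13, severity=5):
    """Build an RFC 3164-style payload; PRI = facility * 8 + severity.

    Facility 13 is 'log audit', severity 5 is 'notice'.
    """
    return "<%d>%s" % (facility * 8 + severity, message)

def send_syslog(message, host="logserver.example.com", port=514):
    """Forward one event to the central log server over UDP port 514."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(format_syslog(message).encode("utf-8"), (host, port))
    finally:
        sock.close()
```

Real agents also prepend timestamp and hostname headers and typically offer TCP or TLS transport, since plain UDP delivery is not guaranteed.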

This will ensure all event log messages from Windows servers are backed up centrally in accordance with the PCI DSS standard. Similarly, Oracle and SQL Server based applications will require a Syslog agent to extract log entries for forwarding to the central syslog server, and IBM z/OS mainframe or AS/400 systems will need platform-specific agent technology to ensure their event logs are backed up.

Of course, firewalls and Intrusion Prevention/Detection Systems (IPS/IDS), as well as the majority of switches and routers, all natively generate syslog messages.
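On the receiving side, the central collector only needs to listen on the standard syslog port and file each sender's events separately. A minimal sketch using rsyslog's legacy configuration syntax - the file paths and per-host template are illustrative assumptions, not a prescribed layout:

```
# /etc/rsyslog.conf on the central log server (legacy rsyslog syntax)
$ModLoad imudp            # accept syslog over UDP
$UDPServerRun 514
$ModLoad imtcp            # accept syslog over TCP for reliable delivery
$InputTCPServerRun 514

# Write each sending host's events to its own file
$template PerHost,"/var/log/remote/%HOSTNAME%/syslog.log"
*.* ?PerHost
```

Keeping one file per host makes it straightforward to demonstrate to an assessor that every in-scope device's audit trail is being backed up.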

So in terms of our two initial questions, we have fully covered the first, but what about the next logical question of 'What do we do with - and how do we cope with - the event logs gathered?'

"PCI DSS Section 10.6 Review logs for all system components at least daily"

This is the part of the standard that causes most concern. The volume of event logs generated by even a typical firewall can be significant, and if you are managing a retail estate of 800 stores with 7,500 devices within scope of the PCI DSS, reviewing the logs from every device manually is impossible. This is the point at which to consider automating the process.

The Security Information and Event Management (SIEM) market, as defined by Gartner, covers the latest generation of solutions that harvest audit and event logs and then parse or interpret them - for example, storing events by device, event type and severity, and analyzing the details within event logs as they are stored. In fact, the PCI DSS recognizes the potential value of this kind of technology -

"Log harvesting, parsing, and alerting tools may be used to meet compliance with Requirement 10.6 of the PCI DSS"

SIEM technology allows event logs to be automatically and intelligently managed such that only genuinely serious security events are alerted. The best SIEM technology can distinguish between true hacker activity running a 'brute force' attack and a user who has simply forgotten their password and is repeatedly trying to access their account. Naturally, a degree of customization is required for each environment, as every organization's network, systems, applications and usage patterns are unique, as are the corresponding event log volumes and types.
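As a simplified sketch of the kind of rule a SIEM applies here - the window and threshold values below are invented for illustration and would need tuning per estate:

```python
from collections import defaultdict, deque

# Hypothetical thresholds -- every estate needs its own baseline.
WINDOW_SECONDS = 60      # sliding window for counting failures
ALERT_THRESHOLD = 20     # failures in the window suggesting brute force

class FailedLoginMonitor:
    """Alert only at a failure rate a forgetful human would not produce."""

    def __init__(self, window=WINDOW_SECONDS, threshold=ALERT_THRESHOLD):
        self.window = window
        self.threshold = threshold
        self.failures = defaultdict(deque)   # account -> failure timestamps

    def record_failure(self, account, timestamp):
        """Record one failed login; return True if it trips the alert."""
        q = self.failures[account]
        q.append(timestamp)
        # Discard failures that have fallen out of the sliding window
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold
```

A forgotten password produces a handful of failures spread over minutes and never trips the threshold; a scripted attack produces dozens per minute and does.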

The PCI event log management process can be approached in three stages, ensuring a straightforward progression from achieving compliance with the PCI DSS standard to being fully in control of your PCI estate. The three phases will help you understand how your PCI estate functions normally and, as a result, place all genuine security threats in the spotlight.

1. GATHER - Implement the SIEM system and gather all event logs centrally - the SIEM technology will provide a keyword index of all events, reported by device type and event severity, and even with just the basic, pre-defined rules applied, the volumes of logs by type can be established. You need to become familiar with the types of event log messages being collected and what 'good' looks like for your estate.

2. PROFILE - Refinement of event type identification and thresholds - once an initial baselining period has been completed, we can customize rules and thresholds to match the profile of your estate, with the aim of establishing a profiled, 'steady-state' view of event types and volumes. Even though all logs must be gathered and retained for the PCI DSS, a large proportion of events aren't significant on a day-to-day basis, and the aim is to de-emphasize these in order to promote focus on those events which are significant.

3. FOCUS - Simple thresholding for event types is adequate for some significant security events, such as anti-virus alerts or IPS signature detections, but for other security events it is necessary to correlate and pattern-match combinations and sequences of events. SIEM only becomes valuable when it is notifying you of a manageable number of significant security events.
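A minimal sketch of the correlation idea behind the FOCUS phase - flagging a long run of failed logins followed by a success from the same source. The event tuples and the `min_failures` value are simplified stand-ins for parsed authentication log entries and a tuned threshold:

```python
def correlate_breach_pattern(events, min_failures=10):
    """Flag sources where a run of failed logins ends in a success.

    `events` is an ordered list of (source_ip, outcome) pairs, where
    outcome is "fail" or "success".
    """
    streak = {}   # source_ip -> consecutive failure count
    alerts = []
    for source, outcome in events:
        if outcome == "fail":
            streak[source] = streak.get(source, 0) + 1
        elif outcome == "success":
            if streak.get(source, 0) >= min_failures:
                alerts.append(source)   # likely successful brute force
            streak[source] = 0          # success resets the streak
    return alerts
```

An isolated success, or a success after one or two failures, raises nothing; only the fail-fail-...-success sequence does, which is exactly the pattern simple per-event thresholds miss.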

It is important to note that even when certain events are de-emphasized, they are still retained in line with the PCI DSS guidelines, which require logs to be kept for 12 months: at least three months of event logs must be held in an on-line, searchable format, with the full 12 months retained in archive.

Again, the archived and on-line log repositories must be protected from any editing or tampering so write-once media and file integrity monitoring must be used to preserve log file integrity.

It's much easier to see it in practice than read about it, so please get in touch for a quick overview by WebEx - mail a request to info@newnettechnologies.com or go to http://www.newnettechnologies.com/contact-us.html


Monday 1 November 2010

Event Log Monitoring for the PCI DSS

This article has been produced to assist anyone concerned with ensuring their organization can meet PCI DSS obligations for event log management - "PCI DSS Section 10.2 Implement automated audit trails for all system components..."

There are typically two concerns that need to be addressed - first, "what is the best way to gather and centralize event logs?" And second, "what do we need to do with the event logs once we have them stored centrally? (And how will we cope with the volume?)"

To the letter of the PCI DSS, you are obliged to make use of event and audit logs in order to track user activity for any device within scope i.e. all devices which either 'touch' cardholder data or have access to cardholder data processing systems. The full heading of the Log Tracking section of the PCI DSS is as follows -

"PCI DSS Requirement 10: Track and monitor all access to network resources and cardholder data"
Logging mechanisms and the ability to track user activities are critical in preventing, detecting, or minimizing the impact of a data compromise. The presence of logs in all environments allows thorough tracking, alerting, and analysis when something does go wrong. Determining the cause of a compromise is very difficult without system activity logs.

Given that many PCI DSS estates are geographically widespread, it is always a good idea to use some means of centralizing log messages; in any case, you are obliged to take this route by section 10.5.3 of the PCI DSS -

"Promptly back up audit trail files to a centralized log server or media that is difficult to alter"
The first obstacle to overcome is the gathering of event logs. Unix and Linux hosts can utilize their native syslogd capability, but Windows servers will need a third-party Windows Syslog agent to transfer Windows Event Logs via syslog. This will ensure all event log messages from Windows servers are backed up centrally in accordance with the PCI DSS standard. Similarly, Oracle and SQL Server based applications will require a Syslog agent to extract log entries for forwarding to the central syslog server, and IBM z/OS mainframe or AS/400 systems will need platform-specific agent technology to ensure their event logs are backed up.
Of course, firewalls and Intrusion Prevention/Detection Systems (IPS/IDS), as well as the majority of switches and routers, all natively generate syslog messages.

File-Integrity Monitoring and Vulnerability Scanning
While we are on the subject of deployment of agents to platforms for event log monitoring, it is worth considering the other dimensions of the PCI DSS, namely file-integrity monitoring and vulnerability scanning/assessment.

Both of these functions can be addressed using an agent on board your servers and workstations. File-Integrity monitoring (see section 11.5 of the PCI DSS) is necessary to ensure key program and operating system files are not infiltrated by Trojans or other malware, and that 'backdoor' code is not inserted within applications. File-Integrity Monitoring should be deployed to all PCs and Epos systems, Windows Servers, Unix and Linux hosts.
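At its core, file-integrity monitoring is a comparison of each monitored file against a known-good cryptographic hash. A minimal sketch of that idea - commercial FIM agents additionally watch permissions and registry values and monitor in real time rather than on demand:

```python
import hashlib
import os

def hash_file(path):
    """SHA-256 of a file's contents, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record a known-good hash for each monitored file."""
    return {p: hash_file(p) for p in paths}

def check_integrity(baseline):
    """Return the files whose contents no longer match the baseline."""
    changed = []
    for path, expected in baseline.items():
        if not os.path.exists(path) or hash_file(path) != expected:
            changed.append(path)
    return changed
```

Any Trojan infiltration or inserted 'backdoor' code changes the file's hash, so the comparison catches it even when timestamps and file sizes are forged to look unchanged.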

Vulnerability Scanning is a further element of the PCI DSS and requires all devices to be scanned regularly for the presence of security vulnerabilities. The key benefit of an agent-based approach is that vulnerability scans can be performed continuously, and any configuration changes rendering your PCs/Epos/Servers less secure or less 'hardened' will be identified and alerted to you. The agent will need up-to-date PCI security settings, vulnerability assessment and PCI hardening checklists applied.
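Conceptually, an agent-based assessment is a comparison of a device's current settings against a hardening checklist. A toy sketch with invented checklist entries - real checklists run to hundreds of platform-specific settings:

```python
# Hypothetical checklist entries for illustration only -- real hardening
# checklists cover service configuration, patch levels, account policy, etc.
HARDENING_CHECKLIST = {
    "password_min_length": lambda v: v >= 8,
    "telnet_enabled": lambda v: v is False,
    "audit_logging": lambda v: v is True,
}

def assess(settings, checklist=HARDENING_CHECKLIST):
    """Return the checklist items the current settings fail.

    A missing setting counts as a failure, so a device cannot pass
    simply by not reporting a value.
    """
    failures = []
    for name, passes in checklist.items():
        if name not in settings or not passes(settings[name]):
            failures.append(name)
    return failures
```

Running this check continuously on the device, rather than scanning it over the network once a quarter, is what lets a configuration drift be alerted as soon as it happens.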

Event Log Backup to a Centralized Server
Once assembled, the audit trail history must be backed up in a way that is "difficult to alter". Traditionally, write-once media has been used to ensure event histories cannot be altered, but most centralized log server solutions now employ file-integrity monitoring as a means of detecting any attempt to change or edit the event log backup.

So in terms of our two initial questions, we have fully covered the first, but what about the next logical question of 'What do we do with - and how do we cope with - the event logs gathered?'

"PCI DSS Section 10.6 Review logs for all system components at least daily"

This is the part of the standard that causes most concern. The volume of event logs generated by even a typical firewall can be significant, and if you are managing a retail estate of 800 stores with 7,500 devices within scope of the PCI DSS, reviewing the logs from every device manually is impossible. This is the point at which to consider automating the process.
The Security Information and Event Management (SIEM) market, as defined by Gartner, covers the latest generation of solutions that harvest audit and event logs and then parse or interpret them - for example, storing events by device, event type and severity, and analyzing the details within event logs as they are stored. In fact, the PCI DSS recognizes the potential value of this kind of technology -

"Log harvesting, parsing, and alerting tools may be used to meet compliance with Requirement 10.6 of the PCI DSS"

SIEM technology allows event logs to be automatically and intelligently managed such that only genuinely serious security events are alerted. The best SIEM technology can distinguish between true hacker activity running a 'brute force' attack and a user who has simply forgotten their password and is repeatedly trying to access their account. Naturally, a degree of customization is required for each environment, as every organization's network, systems, applications and usage patterns are unique, as are the corresponding event log volumes and types.

The PCI event log management process can be approached in three stages, ensuring a straightforward progression from achieving compliance with the PCI DSS standard to being fully in control of your PCI estate. The three phases will help you understand how your PCI estate functions normally and, as a result, place all genuine security threats in the spotlight.

1. GATHER - Implement the SIEM system and gather all event logs centrally - the SIEM technology will provide a keyword index of all events, reported by device type and event severity, and even with just the basic, pre-defined rules applied, the volumes of logs by type can be established. You need to become familiar with the types of event log messages being collected and what 'good' looks like for your estate.
2. PROFILE - Refinement of event type identification and thresholds - once an initial baselining period has been completed, we can customize rules and thresholds to match the profile of your estate, with the aim of establishing a profiled, 'steady-state' view of event types and volumes. Even though all logs must be gathered and retained for the PCI DSS, a large proportion of events aren't significant on a day-to-day basis, and the aim is to de-emphasize these in order to promote focus on those events which are significant.
3. FOCUS - Simple thresholding for event types is adequate for some significant security events, such as anti-virus alerts or IPS signature detections, but for other security events it is necessary to correlate and pattern-match combinations and sequences of events. SIEM only becomes valuable when it is notifying you of a manageable number of significant security events.

It is important to note that even when certain events are de-emphasized, they are still retained in line with the PCI DSS guidelines, which require logs to be kept for 12 months: at least three months of event logs must be held in an on-line, searchable format, with the full 12 months retained in archive.
Again, the archived and on-line log repositories must be protected from any editing or tampering so write-once media and file integrity monitoring must be used to preserve log file integrity.