
Tuesday, 26 June 2012

Tokenization, the PCI DSS and the Number One Threat to Your Organization's Data

I was recently sent a whitepaper by a colleague of mine covering the subject of tokenization. It took a belligerent tone towards the PCI DSS and the PCI Security Standards Council's views on tokenization, which is understandable in context – the vendors behind the whitepaper are fighting their corner and believe passionately that tokenization is a great solution to the problem of how best to protect cardholder data.

To summarize the message of the whitepaper, the authors were attacking the PCI Security Standards Council because the Council's Information Supplement, 'PCI DSS Tokenization Guidelines', was specifically positioned as 'for guidance only' and explicitly stated that it did not 'replace or supersede requirements in the PCI DSS'.

The whitepaper also quoted a PCI Security Standards Council press release on the subject of tokenization in which Bob Russo, the General Manager of the PCI SSC, had stated that tokenization should be implemented as an additional PCI DSS 'layer'. The tokenization whitepaper took issue with this, the argument being that tokenization should be sanctioned as an alternative to encryption rather than as yet another layer of protection that a merchant could optionally implement.

The reality is that Bob Russo runs the PCI Security Standards Council, and it is the Council that defines the PCI DSS, not any vendor of specific security point-products. Where I would say the whitepaper is completely wrong is in its claim that 'it's not about layering', because the PCI DSS – and best practice in security in general – is absolutely all about layering!

The reason why the PCI DSS is often seen as overly prescriptive and overbearing in its demands for so much security process is that card data theft still happens on a daily basis. What is more pertinent is that, while card data theft can be the result of clever hackers, polymorphic malware, cross-site scripting or even card skimming using fake PEDs, the biggest threat is far more mundane.

The number one card data theft threat remains consistent: complacency about security.

In other words, corners are being cut in security: a lack of vigilance and, more often than not, silly, basic mistakes in security procedures.

So what is the solution? Tokenization won't help if it gets switched off, conflicts with a Windows patch, gets targeted by malware, or is simply bypassed by a card-skimming Trojan – and it won't protect against a malicious or unintentional internal breach. Nor will tokenization protect cardholder data if the card swipe device or PED (PIN Entry Device, as it is known in Europe) gets hacked, or if a card number gets written down or recorded at a call centre.

In summary, tokenization is undeniably a good security measure for protecting cardholder data, but it doesn't remove the need to implement all PCI DSS measures. There has never been, and there still is, NO SILVER BULLET when it comes to security.

In fact, the only sensible solution to card data theft is layered security, operated with stringent checks and balances at all times. What PCI merchants need now, and will continue to need in the future, is quality, proven PCI solutions from a specialist with a long track record in practicing the art of layered security – combining multiple security disciplines, such as good change management and file integrity monitoring with SIEM, to protect against both external and internal threats and to provide the vigilance essential for tight data protection.

What do you think? I'd love to hear your thoughts on the subject.

Monday, 1 November 2010

Event Log Monitoring for the PCI DSS

This article has been produced to assist anyone concerned with ensuring their organization can meet PCI DSS obligations for event log management - "PCI DSS Section 10.2 Implement automated audit trails for all system components..."

There are typically two concerns that need to be addressed - first, "what is the best way to gather and centralize event logs?" And second, "what do we need to do with the event logs once we have them stored centrally? (And how will we cope with the volume?)"

To the letter of the PCI DSS, you are obliged to make use of event and audit logs in order to track user activity for any device within scope i.e. all devices which either 'touch' cardholder data or have access to cardholder data processing systems. The full heading of the Log Tracking section of the PCI DSS is as follows -

"PCI DSS Requirement 10: Track and monitor all access to network resources and cardholder data"
Logging mechanisms and the ability to track user activities are critical in preventing, detecting, or minimizing the impact of a data compromise. The presence of logs in all environments allows thorough tracking, alerting, and analysis when something does go wrong. Determining the cause of a compromise is very difficult without system activity logs.

Given that many PCI DSS estates will be geographically widespread it is always a good idea to use some means of centralizing log messages, however, you are obliged to take this route anyway if you read section 10.5.3 of the PCI DSS -
"Promptly back up audit trail files to a centralized log server or media that is difficult to alter"
The first obstacle to overcome is the gathering of event logs. Unix and Linux hosts can use their native syslogd capability, but Windows servers will need a third-party Windows syslog agent to transfer Windows Event Logs via syslog. This ensures all event log messages from Windows servers are backed up centrally in accordance with the PCI DSS. Likewise, Oracle and SQL Server based applications will require a syslog agent to extract log entries for forwarding to the central syslog server, and IBM z/OS mainframe or AS/400 systems will need platform-specific agent technology to ensure their event logs are backed up.
Of course, firewalls and Intrusion Protection/Detection Systems (IPS/IDS), as well as the majority of switches and routers, all natively generate syslog messages.
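
As a rough sketch of the transport involved – not of any particular vendor's agent – the Python snippet below forwards events to a hypothetical central syslog server over UDP port 514 using the standard library's SysLogHandler. A real Windows syslog agent would read entries from the Windows Event Log rather than generating them in application code, but the forwarding principle is the same.

```python
import logging
import logging.handlers

# Hypothetical central log server; replace with your own collector's address.
syslog = logging.handlers.SysLogHandler(
    address=("loghost.example.com", 514),
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0,
)
syslog.setFormatter(logging.Formatter("%(asctime)s pos-server-01 audit: %(levelname)s %(message)s"))

logger = logging.getLogger("pci_audit")
logger.setLevel(logging.INFO)
logger.addHandler(syslog)

# Example audit events delivered to the central syslog server over UDP 514.
logger.info("User 'jsmith' logged on to till application")
logger.warning("3 consecutive logon failures for user 'jsmith'")
```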

File-Integrity Monitoring and Vulnerability Scanning
While we are on the subject of deployment of agents to platforms for event log monitoring, it is worth considering the other dimensions of the PCI DSS, namely file-integrity monitoring and vulnerability scanning/assessment.

Both of these functions can be addressed using an agent on board your servers and workstations. File-integrity monitoring (see section 11.5 of the PCI DSS) is necessary to ensure key program and operating system files are not infiltrated by Trojans or other malware, and that 'backdoor' code is not inserted within applications. File-integrity monitoring should be deployed to all PCs and EPoS systems, Windows servers, and Unix and Linux hosts.
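
To illustrate the principle rather than any specific product, here is a minimal file-integrity monitoring sketch in Python: it hashes an invented list of key files with SHA-256 and compares each against a stored baseline, which is essentially what a FIM agent does on every scheduled scan.

```python
import hashlib
import json
from pathlib import Path

# Illustrative watch list; a real FIM deployment covers OS binaries, EPoS
# application files, configuration files and so on.
MONITORED = [Path("/usr/sbin/sshd"), Path("/etc/passwd")]
BASELINE_FILE = Path("fim_baseline.json")

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline() -> None:
    baseline = {str(p): sha256_of(p) for p in MONITORED if p.exists()}
    BASELINE_FILE.write_text(json.dumps(baseline, indent=2))

def check_integrity() -> list:
    baseline = json.loads(BASELINE_FILE.read_text())
    alerts = []
    for path, known_hash in baseline.items():
        p = Path(path)
        if not p.exists():
            alerts.append(f"MISSING: {path}")
        elif sha256_of(p) != known_hash:
            alerts.append(f"MODIFIED: {path}")
    return alerts

if __name__ == "__main__":
    if not BASELINE_FILE.exists():
        build_baseline()
    for alert in check_integrity():
        print(alert)  # in practice these would be forwarded to the SIEM as syslog events
```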

Vulnerability scanning is a further element of the PCI DSS and requires all devices to be scanned regularly for the presence of security vulnerabilities. The key benefit of an agent-based approach is that vulnerability scans can be performed continuously, and any configuration changes rendering your PCs/EPoS systems/servers less secure or less 'hardened' will be identified and alerted to you. The agent will need valid PCI security settings, vulnerability assessment rules and PCI hardening checklists to be applied.
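
As a toy illustration of agent-based configuration assessment – the checks and file paths below are hypothetical, and a real checklist such as a CIS benchmark contains hundreds of items – this sketch tests a couple of SSH hardening settings on a Linux host and reports any that fail:

```python
import re
from pathlib import Path

# Hypothetical subset of a hardening checklist for a Linux host.
CHECKS = [
    ("SSH root login disabled", Path("/etc/ssh/sshd_config"),
     re.compile(r"^\s*PermitRootLogin\s+no\b", re.MULTILINE)),
    ("SSH protocol 2 only", Path("/etc/ssh/sshd_config"),
     re.compile(r"^\s*Protocol\s+2\b", re.MULTILINE)),
]

def run_checks() -> None:
    for name, path, pattern in CHECKS:
        try:
            content = path.read_text()
        except OSError:
            print(f"WARN  {name}: cannot read {path}")
            continue
        status = "PASS" if pattern.search(content) else "FAIL"
        print(f"{status}  {name}")

if __name__ == "__main__":
    run_checks()  # a new FAIL is exactly the kind of 'less hardened' change to alert on
```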

Event Log Backup to a Centralized Server
Once assembled, the Audit trail history must be backed up in a way that is "difficult to alter". Traditionally, write-once media has been used to ensure event histories cannot be altered but most centralized log server solutions now employ file-integrity monitoring as a means of detecting any attempt to change or edit the event log backup.

So in terms of our two initial questions, we have fully covered the first, but what about the next logical question of 'What do we do with - and how do we cope with - the event logs gathered?'
"PCI DSS Section 10.6 Review logs for all system components at least daily"
This is the part of the standard that causes most concern. If you consider the volume of event logs that may be generated by a typical firewall, this alone can be significant, but if you are managing a retail estate of 800 stores with 7,500 devices within scope of the PCI DSS, the task of manually reviewing logs from every device is going to be impossible to achieve. This may be a good time to consider some automation of the process...?
The Security Information and Event Management (SIEM) market, as defined by Gartner, covers the advanced generation of solutions that harvest audit and event logs and then parse or interpret the events – for example, storing events by device, event type and severity, and analyzing the details within event logs as they are stored. In fact, the PCI DSS recognizes the potential value of this kind of technology:

"Log harvesting, parsing, and alerting tools may be used to meet compliance with Requirement 10.6 of the PCI DSS"

SIEM technology allows event logs to be automatically and intelligently managed such that only genuinely serious security events are alerted. The best SIEM technology can distinguish between true hacker activity running a 'brute force' attack and a user who has simply forgotten their password and is repeatedly trying to access their account. Naturally there is an amount of customization required for each environment as every organization's network, systems, applications and usage patterns are unique as are the corresponding event log volumes and types.
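
As a simplified sketch of that kind of correlation rule – the window and threshold values are invented – the snippet below counts logon failures per account within a sliding five-minute window and raises an alert only when the pattern looks like brute force rather than a forgotten password:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# A forgotten password rarely produces more than a handful of failures;
# a brute-force attack generates many within a short window.
WINDOW = timedelta(minutes=5)
THRESHOLD = 20

failures = defaultdict(deque)  # account -> deque of (timestamp, source_ip)

def on_logon_failure(account: str, source_ip: str, when: datetime) -> None:
    events = failures[account]
    events.append((when, source_ip))
    # Discard failures that have dropped out of the correlation window.
    while events and when - events[0][0] > WINDOW:
        events.popleft()
    if len(events) >= THRESHOLD:
        sources = {ip for _, ip in events}
        print(f"ALERT: possible brute-force attack on '{account}': "
              f"{len(events)} failures from {len(sources)} source(s) in {WINDOW}")
```

In a full SIEM this rule would also be cross-referenced against other event sources – IPS signatures, account lockouts, subsequent successful logons – before an incident is raised.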

The PCI event log management process can be approached in three stages, ensuring a straightforward progression towards compliance with the PCI DSS standard and full control of your PCI estate. The three phases will help you understand how your PCI estate functions normally and, as a result, place all genuine security threats into the spotlight.

1. GATHER - Implement the SIEM system and gather all event logs centrally. The SIEM technology will provide a keyword index of all events, reported by device type and event severity, and even with just the basic, pre-defined rules applied, the volumes of logs by type can be established. You need to get familiar with the types of event log messages being collected and what 'good' looks like for your estate.
2. PROFILE - Refinement of event type identification and thresholds - once an initial baselining period has been completed we can then customize rules and thresholds to meet the profile of your estate, with the aim of establishing a profiled, 'steady-state' view of event types and volumes. Even though all logs must be gathered and retained for the PCI DSS, there is a large proportion of events which aren't significant on a day-to-day basis and the aim is to de-emphasize these in order to promote focus on those events which are significant.
3. FOCUS - Simple thresholding for event types is adequate for some significant security events, such as anti-virus alerts or IPS signature detections, but for other security events it is necessary to correlate and pattern-match combinations and sequences of events. SIEM only becomes valuable when it is notifying you of a manageable number of significant security events.
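
To make the PROFILE and FOCUS stages concrete, here is a toy sketch – the event types and volumes are invented – that compares today's per-type event counts against a baselined history and only promotes to the 'focus' list those types that deviate sharply from their steady state:

```python
import statistics

# Invented daily volumes per event type, captured during the GATHER/PROFILE phases.
baseline = {
    "logon_failure": [120, 135, 110, 128, 140],
    "av_alert": [2, 0, 1, 0, 3],
    "firewall_deny": [5400, 5900, 5100, 5600, 5300],
}
today = {"logon_failure": 610, "av_alert": 1, "firewall_deny": 5500}

for event_type, history in baseline.items():
    mean = statistics.mean(history)
    spread = statistics.pstdev(history) or 1.0
    deviation = (today.get(event_type, 0) - mean) / spread
    if deviation > 3:  # well above the profiled steady state
        print(f"FOCUS: {event_type} volume {today[event_type]} vs baseline ~{mean:.0f}")
    else:
        print(f"steady-state: {event_type}")
```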

It is important to note that even when certain events are being de-emphasized, they are still retained in line with the PCI DSS guidelines, which require logs to be kept for 12 months. Event logs must be available in an on-line, searchable format for at least 3 months and archived for 12 months.
Again, the archived and on-line log repositories must be protected from any editing or tampering so write-once media and file integrity monitoring must be used to preserve log file integrity.

Tuesday, 13 July 2010

The Top Ten of Audit and Event Log Monitoring

Event log, audit log and syslog messages have always been a good source of troubleshooting and diagnostic information, but the need to back up audit trail files to a centralized log server is now a mandatory component of many governance standards. Contemporary SIEM solutions need to be
• flexible enough to cater for all devices, operating systems, platforms, databases and applications
• sufficiently scalable to cope with thousands of devices generating millions of events
• intelligent, correlating events and identifying only true security incidents so that resources can focus on genuine threats and attacks.

This is an introductory 'Top Ten' of audit trail and event log monitoring.
1. Security standards and corporate governance compliance policies such as the PCI DSS and GCSx CoCo require logging mechanisms and the ability to track user activities, as these are critical in preventing, detecting, or minimizing the impact of a data compromise. Other policies such as FISMA, Sarbanes-Oxley, NERC CIP, ISO 27000 and HIPAA all benefit from a means of centralizing audit log events to identify security incidents.
2. The state of the art in Audit Log Correlation technology provides automated configuration assessment, proactively testing and assessing a server environment against preconfigured, out-of-the-box policies, helping to enable a minimal deployment window. The best solutions leverage industry standards, specifically benchmarks from the Center for Internet Security (CIS), the National Institute of Standards and Technology (NIST), and the Defense Information Systems Agency (DISA). These benchmarks include thousands of configuration assessments enabling automatic sustainable policy compliance testing for FISMA.
3. Security standards such as the PCI DSS and GCSx CoCo mandate the need to track and monitor all access to network resources and cardholder data; logging mechanisms and the ability to track user activities are central to this. The presence of logs in all environments allows thorough tracking and analysis if something does go wrong. Determining the cause of a compromise is very difficult without system activity logs, so a central event log analyzer is the best option to use.
4. It is vital that your system for centralizing audit log trails is robust and comprehensive. The PCI DSS requires that your audit trail history be retained for at least one year, with at least 3 months' history available for immediate access. The best audit-log tracking software solutions provide real-time indexing of logs with instant keyword search and correlation facilities.
5. While Unix and Linux hosts can forward audit trail and system events using syslog, Windows servers have no built-in mechanism for forwarding Windows Events, so it is necessary to use an agent to convert Windows Event Logs to syslog. The Windows Events can then be collected centrally by your audit log server. Similarly, applications based on Oracle or SQL Server, and bespoke or non-standard applications, do not use syslog to forward events, so an agent is needed to forward events from these applications too. Finally, if you are using an IBM z/OS mainframe or AS/400 system you will need further agent technology to centralize event and audit log messages.
6. Audit trail history must be securely stored in order to prevent retrospective editing or any tampering. The PCI DSS requires that audit trails are promptly backed up to a centralized log server or media that is difficult to alter. The best centralized log server solutions employ file-integrity monitoring for the log backup files so that any modifications can be detected and alerted.
7. Firewalls (Checkpoint, McAfee Sidewinder, Juniper NetScreen, Cisco ASA, Nokia), Intrusion Protection Systems (IPS), Intrusion Detection Systems (IDS), routers, RADIUS accounting and authorization services, vulnerability scanning solutions such as eEye Retina, Nessus and other pen testing tools, wireless routers and switches all natively generate syslog messages to report a range of events, from low-level informational logs through to critical events.
8. Syslog messages are defined in RFC 3164 and the format is officially known as the BSD syslog protocol. Syslog messages are sent using UDP on port 514 by default, although different ports can be used. Each syslog message carries a Facility Code and a Severity Code, both encoded in the priority value at the start of the message (see the short sketch after this top ten for how that value is decoded). The Facility Codes range from 0 to 23 and determine the message type. The Severity Codes range from 0 to 7 as follows:
0 Emergency: system is unusable
1 Alert: action must be taken immediately
2 Critical: critical conditions
3 Error: error conditions
4 Warning: warning conditions
5 Notice: normal but significant condition
6 Informational: informational messages
7 Debug: debug-level messages
9. The Security Information and Event Management or SIEM market as defined by Gartner covers the advanced generation of solutions that not only harvest audit logs and provide centralized log server functions but parse event log messages and analyze event logs as they are stored. This allows event logs to be correlated to identify hacker activity and attack patterns and notify IT security teams. The best SIEM systems employ a range of artificial intelligence capabilities to recognize threat signatures by cross-referencing events from IPS, IDS and RADIUS systems, Anti-Virus, Host Integrity Monitoring systems, File Integrity Monitoring software, Firewalls, Active Directory and watching for classic hacker activity such as deletion of log files and "brute force" hacks where repeated/sequential logon failures or bad password events will be generated.
10. The goal for any SIEM solution is to provide comprehensive log harvesting, automatically filter out all 'information only' or 'normal operation' events, and place a spotlight on a manageable list of genuine, serious attack patterns or security incidents. Even a medium-sized enterprise can have thousands or hundreds of thousands of events generated by the devices in its infrastructure, so a properly implemented SIEM system is invaluable.
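
To make the priority encoding in point 8 concrete, the short sketch below decodes a syslog priority (PRI) value back into its facility and severity codes; the example message is the one given in RFC 3164 itself.

```python
# The "<PRI>" value at the start of a syslog message encodes both codes in a
# single integer: PRI = facility * 8 + severity (RFC 3164).
SEVERITIES = ["Emergency", "Alert", "Critical", "Error",
              "Warning", "Notice", "Informational", "Debug"]

def decode_pri(pri: int):
    facility, severity = divmod(pri, 8)
    return facility, SEVERITIES[severity]

# RFC 3164's example "<34>Oct 11 22:14:15 mymachine su: 'su root' failed ..."
# carries PRI 34, i.e. facility 4 (security/auth) and severity 2 (Critical).
print(decode_pri(34))  # -> (4, 'Critical')
```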