This week I attended the PCI North American Community Meeting. If you are in the payment security space and haven’t been to a community meeting, I would recommend that you put this on your conference schedule. It’s great to connect with like-minded individuals, including card brands, banks, large customers, vendors, and yes, assessors – both internal (ISAs) and external (QSAs).
This year was big for announcements. The PCI Council has been hard at work updating several core standards. Of all the standards covered, the most attention-grabbing announcement was the overview of the new PCI Data Security Standard, version 4.0 (PCI DSS 4.0). Let’s go over some of the more prominent points that were discussed this week.
PCI DSS 4.0
It is probably not an exaggeration to say that this is the most significant change to the PCI DSS since version 1.0, not only in terms of the requirements being added or changed but also in terms of structural changes to the standard itself. Some of the Council members were heard stating that “there will be quite a lot of new requirements” and “every single requirement has been rewritten – every single requirement has been renumbered.” While they mentioned that the 12 main requirements would mostly stay the same, they did state that even those received slight rewording for better alignment.
Here we’ll look at some of the specifics that were talked about during the review of the changes.
NIST SP 800-63B and Passwords
One of the items expected to be updated was the password section within Requirement 8, meant to align with the NIST SP 800-63B updates that were published a couple of years ago. The actual verbiage has yet to be seen, but the general feeling is that with additional security controls in place, you may be able to forgo requiring password resets every 90 days. The Council was clear that this requirement will depend on the nature of the account and its access to sensitive data.
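As a rough illustration of the direction NIST SP 800-63B points in, here is a minimal sketch of a password-acceptance check that favors minimum length and breached-password screening over composition rules and forced periodic rotation. The function name and the tiny breached-password list are my own illustrative assumptions, not anything published by NIST or the Council.

```python
# Illustrative, not a real deployment: a production system would screen
# against a large breached-credential corpus, not a three-entry set.
BREACHED = {"password", "123456", "qwerty"}

def password_acceptable(candidate: str) -> bool:
    """Sketch of an 800-63B-style check: length plus breach screening."""
    if len(candidate) < 8:              # 800-63B minimum length for memorized secrets
        return False
    if candidate.lower() in BREACHED:   # reject known-compromised values
        return False
    return True                         # no composition rules, no expiry clock
```

The key design point is that the check is evaluated at the time a password is chosen, rather than forcing all users to rotate on a 90-day schedule.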
Years ago, they said that they might expand the multi-factor authentication requirement from administrators of cardholder data environment (CDE) devices to any accounts that have access to cardholder data (CHD). I expect that they are hinting at this change within the standard. The systems most impacted by this potential change would be applications (e.g., web portals) and legacy systems (e.g., mainframes).
Another change confirmed for version 4.0 is a requirement to encrypt CHD over any transmission. In previous versions, you were allowed to send plaintext cardholder information over a trusted network (e.g., your corporate network). The problem with this is that it drastically expanded the scope of your CDE, and customers would often encrypt anyway to achieve scope reduction.
There are a few areas that could be significantly impacted by this change, including load balancers in front of web servers, the gap between secure telnet and mainframes, and VoIP phones. In these scenarios, customers designed the environment to be segmented from the network so that the transmission of plaintext cardholder information was minimal and achieved an acceptable scope reduction. Under the new version, this would violate the standard, and alternative ways of complying would have to be considered.
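To make the "encrypt over any transmission" idea concrete, here is a minimal sketch of a hardened TLS client configuration in Python's standard `ssl` module, the kind of setting an internal service-to-service connection might enforce even on a trusted network. The host name in the comment is hypothetical; a real deployment would also load the internal CA bundle used to sign its certificates.

```python
import ssl

# Minimal sketch: a client-side TLS context that refuses plaintext and
# legacy protocol versions, even for connections on a "trusted" LAN.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # no SSLv3 / TLS 1.0 / 1.1
context.check_hostname = True                     # verify server identity
context.verify_mode = ssl.CERT_REQUIRED           # reject unauthenticated peers

# Wrapping a connection would then look like (host name is hypothetical):
# with socket.create_connection(("payments.internal.example", 8443)) as raw:
#     tls = context.wrap_socket(raw, server_hostname="payments.internal.example")
```

The design choice worth noting is that the policy lives in the client configuration, so an accidental plaintext or downgraded connection fails outright instead of silently expanding CDE scope.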
The most significant announcement regarding changes to the standard is that they are moving to more of an objective-based or intent-based control standard. What exactly does this mean? For starters, they are taking a dual approach to this. In the first approach, the standard will exist much as it does today, referred to as the defined approach. However, there will also be an option to perform a customized approach for each of the requirements. A customized approach will allow individuals to design their own controls and implement them based on the intent of the requirements.
You may ask, “Isn’t this just a compensating control?” and you would be right, except that compensating controls are being dropped from the standard. Since the Council is moving to an intent/objective-based model, the compensating control structure no longer makes as much sense. Unlike with compensating controls, where you would need to document a business constraint, there is no such requirement with the customized approach in the new version.
Generally, I think this is a good move for the standard. It will give companies more flexibility to adopt new technologies or security solutions without having to wait for the standard to catch up. That said, I wasn’t the only one with concerns about the potential for abuse; other QSAs and the Council echoed these concerns. Because there is now more flexibility in the control structure, there could be wider variation among QSAs in their interpretations of what constitutes a compliant implementation. The adequacy of a customized approach will be left primarily to the QSA, who must develop a testing procedure for the controls and make a determination on their effectiveness. For example, a merchant may submit that whitelisting is a valid customized control for anti-malware: one QSA company may agree because it blocks malware from running, while another may say that it doesn’t detect malware at all.
The Council mentioned that they looked at the Designated Entity Supplemental Validation (DESV) requirements and are integrating some of those controls into the standard DSS. It is not clear which controls will be integrated, but we can assume that additional controls will be added from there.
Future Date and Expectations
There will be two more request for comments (RFC) periods before PCI DSS 4.0 is released (sometime in late 2020). The Council has confirmed that, due to the number of changes, several controls will have forward-dated implementation deadlines, but the actual timing and length of those grace periods have yet to be determined.
Also, keep in mind that there still could be a number of changes and revisions to the standard before it is set for adoption. While the Council talked about some of the specifics of changes they had already made, further updates may (or may not) be made before the final version is released. The rollout is going to depend on feedback received from the community and the potential issues identified before the release.
Other Changes – P2PE and SSF
While the PCI DSS 4.0 changes took the spotlight on the stage, there were several other notable developments as well. Here are a couple of worthy mentions.
• P2PE (Point-to-Point Encryption) version 3.0 will be released in Q4 of 2019. This will add four new sub-components to the standard in which companies can get certified. For example, if all you do is load keys onto swipe terminals, you can get certified for that function alone and allow other P2PE solution providers to outsource it to you. This will essentially make the standard more modular, which is a good move.
• The Software Security Framework (SSF), the replacement for the PA-DSS, will now include a Secure Software Lifecycle (Secure SLC) component, which will allow companies to opt in to having their Software Development Lifecycle (SDLC) certified in the process. While this is not a requirement, it would alleviate the need to engage an assessor for delta changes and would only require a reassessment of the software every three years.
While there are some significant changes in the upcoming standard, they are most likely positive for the industry and assessed entities. It will be useful to see the actual draft next month, and TrustedSec will be submitting feedback where appropriate. If you have any questions about the proposed changes or are concerned about the impact on your environment, feel free to reach out to us for a conversation. It would be prudent to start reviewing your processes and potential control changes now so that you are well prepared when the new requirements do come into effect.