Michael Yaeger focuses his practice on white collar criminal defense and investigations, securities enforcement, internal investigations, accounting fraud, cybercrime/cybersecurity and data security matters, as well as related civil litigation. Yaeger also leads internal investigation and cybercrime-related representations for financial services companies and provides guidance on drafting written information security plans and incident response plans for investment advisers.
A thought leader in the industry, Yaeger has been featured in numerous articles on cybersecurity, including “Proactive Steps to Prevent Legal Pitfalls in Bug Bounty Programs” in The Cybersecurity Law Report, “Cyber-SARS: Anti-Money Laundering and Cybersecurity Rules” in The Hedge Fund Journal, and “NYDFS Revises Its Proposed Cybersecurity Regulation for Financial Services Companies,” among many others.
Christopher P. Skroupa: Where does cybersecurity fit in the Board’s accountabilities to all company stakeholders?
Michael Yaeger: One basic function of a modern corporate Board is to oversee risk management, and many risks now present themselves as cybersecurity issues. These include risks to customers’ personal information, of course, which come with regulatory risk from the FTC and state attorneys general, as well as private securities suits.
But they also include risks of disclosure of trade secrets and other sensitive business information, as well as reputational risks. For example, the breach at Sony Pictures a few years ago created an enormous headache for the company with its own employees and with the Hollywood stars who had been discussed in emails that were spread across the web. And for companies whose business model centers on the sale or licensing of information – such as Equifax or Facebook – the risks touch on nearly every aspect of the business.
Cybersecurity is also an area where board members themselves may create risks for the company because they are receiving and communicating about some of the company’s most sensitive data. For that reason, Board members – and their personal email accounts – may be targets of attackers.
Skroupa: Which steps should the Board take to meet those accountabilities?
Yaeger: Stated at a high level of generality, the board must ensure that the company has cyber risk management policies and procedures consistent with its strategy and risk appetite, and the board must ensure that these policies and procedures are functioning. Boards should review annual budgets for privacy and security, assign roles and responsibilities, and get regular briefings on cyber issues. Depending on the company, those briefings may need to be quarterly.
For example, a retail company’s board might want quarterly cybersecurity briefings directly presented by the company’s CISO – its chief information security officer. Boards should also raise privacy and security issues when the company expands into new jurisdictions; any time a company is making a significant change to its operations or location, cyber risks should be part of the discussion.
To be sure, managing these risks at the Board level is not the same as managing them at the C-suite level. In other words, the board cannot and should not be involved in managing risks on a day-to-day basis. It’s not possible and it’s not advisable. Also, flooding the board with every available metric would be crazy; the board probably doesn’t need to know how many alerts there were on the intrusion detection system. But cyber risk management is an increasingly large part of risk management.
The Board has to focus on setting up systems, and specifically on setting them up so that they can be recovered quickly in the event of a problem: a natural disaster at a data center, a ransomware attack that encrypts the company’s systems, a breach at a key vendor that holds sensitive company data, etc.
Skroupa: What’s different between how a Board should respond to the aftermath of a breach, and how they should prepare to prevent one?
Yaeger: In the aftermath of a breach, the natural emphasis is on containing the threat, eradicating it, and recovering, and there will be more frequent meetings with management. But all breach response should eventually reach the point where management is seeking to learn from the breach for the future. The company needs to try to reduce the probability of a similar incident happening again, and it should consider ways to improve its incident response procedures. Breach response is a circle, not a line, so there is a certain unity of approach when you look at it from a high level.
Also, while companies must try to prevent breaches, the stark and sober truth is that not all breaches will be prevented. So a significant part of cybersecurity should be seeking to minimize the damage from potential breaches. That means designing systems so that a failure is not catastrophic, and it means preparing for recovery from a breach. One thing this implies is building certain redundancies into critical systems, another is practicing breach response.
Skroupa: Do you have any final thoughts?
Yaeger: One more granular suggestion comes to mind: why not move away from sending board members sensitive materials to their personal email accounts? There are more secure ways to share documents. I don’t mean everything has to be typed into an encrypted messaging app or something like that. But sensitive documents could be shared in a data room of the sort that is used on M&A deals, etc. Boards don’t need perfection, but there is a lot of room for improvement over current practice at some companies.
On a broader level, I’d like to remind people that the baseline standard here is one of reasonableness. Courts and regulators will ask questions like: are the policies and procedures reasonable? Did the company take reasonable security measures? And one thing to remember is that what is considered “reasonable” changes over time as technological protections – and attacks – develop. That’s one reason why the Board has to look into this area regularly. When the facts in this area change, the law necessarily changes as well.
Consider the use of multi-factor authentication to log in remotely to a network. For example, setting up a system so that an employee cannot log in with just a password, but must also use a random number provided on a token or app or in a text.
In 2000, odds were most judges wouldn’t scold most companies for not having multi-factor authentication. A password was enough. But now, when Google offers multi-factor authentication on Gmail accounts, the bar has been raised. What’s reasonable has changed, and just using passwords might not be found reasonable anymore.
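To make the token-code mechanism concrete, here is a minimal sketch of how the rotating number shown on a token or authenticator app is typically computed, following the time-based one-time password scheme of RFC 6238. The secret value and function name below are illustrative assumptions, not any company’s actual implementation:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, timestamp: int = None, step: int = 30,
         digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step                    # current 30-second window
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Illustrative only: a real deployment would provision a random per-user secret.
print(totp(b"12345678901234567890"))
```

The employee’s app and the company’s server each compute this code independently from a shared secret, so a login succeeds only when the submitted code matches the server’s value for the current time window. That is why a stolen password alone is no longer enough to get in.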
Originally published by Christopher Skroupa on Forbes.com