On October 16, 2024, the New York Department of Financial Services ("NYDFS") issued an industry letter (the "Guidance") highlighting the cybersecurity risks arising from the use of artificial intelligence ("AI") and providing strategies to address these risks. While the Guidance "does not impose any new requirements," it clarifies how Covered Entities should address AI-related risks as part of NYDFS's landmark cybersecurity regulation, codified at 23 NYCRR Part 500 ("Cybersecurity Regulation"). The Cybersecurity Regulation, as revised in November 2023, requires Covered Entities to implement certain detailed cybersecurity controls, including governance and board oversight requirements. Covered Entities subject to the Cybersecurity Regulation should pay close attention to the new Guidance not only if they are using or planning to use AI, but also if they could be subject to any of the AI-related risks or attacks described below.
AI-Related Risks: The Guidance notes that threat actors have a "lower barrier to entry" to conduct cyberattacks as a result of AI and identifies four (non-exhaustive) cybersecurity risks related to AI: two arising from threat actors' use of AI against Covered Entities, and two arising from Covered Entities' own use of, or reliance on, AI:
- AI-Enabled Social Engineering - The Guidance highlights that "AI-enabled social engineering presents one of the most significant threats to the financial services sector." For example, the Guidance observes that "threat actors are increasingly using AI to create realistic and interactive audio, video, and text ('deepfakes') that allow them to target specific individuals via email (phishing), telephone (vishing), text (SMiShing), videoconferencing, and online postings." AI-generated audio, video, and text can be used to convince employees to divulge sensitive information about themselves or their employer, wire funds to fraudulent accounts, or circumvent biometric verification technology.
- AI-Enhanced Cybersecurity Attacks - The Guidance also notes that AI can be used by threat actors to amplify the potency, scale, and speed of existing types of cyberattacks by quickly and efficiently identifying and exploiting security vulnerabilities.
- Risks Related to Vast Amounts of Non-public Information - Covered Entities might maintain large quantities of non-public information, including biometric data, in connection with their deployment or use of AI. The Guidance notes that "maintaining non-public information in large quantities poses additional risks for Covered Entities that develop or deploy AI because they need to protect substantially more data, and threat actors have a greater incentive to target these entities in an attempt to extract non-public information for financial gain or other malicious purposes."
- Vulnerabilities due to Third-Party, Vendor, and Other Supply Chain Dependencies - Finally, the Guidance flags that acquiring the data needed to power AI tools might require the use of vendors or other third parties, which expands an entity's supply chain and could introduce security vulnerabilities that threat actors could exploit.
Controls and Measures: The Guidance notes that the "Cybersecurity Regulation requires Covered Entities to assess risks and implement minimum cybersecurity standards designed to mitigate cybersecurity threats relevant to their businesses - including those posed by AI" (emphasis added). In other words, the Guidance takes the position that the assessment and management of AI-related cyber risks are already required by the Cybersecurity Regulation. The Guidance then sets out "examples of controls and measures that, especially when used together, help entities to combat AI-related risks." Specifically, the Guidance recommends how Covered Entities can address AI-related risks when implementing measures to satisfy existing NYDFS requirements under the Cybersecurity Regulation.
- Risk Assessments and Risk-Based Programs, Policies, Procedures, and Plans - Covered Entities should consider the risks posed by AI when developing the risk assessments and risk-based programs, policies, procedures, and plans required by the Cybersecurity Regulation. While the Cybersecurity Regulation already requires annual updates to Risk Assessments, the Guidance notes that these updates must ensure that new risks posed by AI are assessed. In addition, the Guidance specifies that the incident response, business continuity, and disaster recovery plans required by the Cybersecurity Regulation "should be reasonably designed to address all types of Cybersecurity Events and other disruptions, including those relating to AI." Further, the Guidance notes that the "Cybersecurity Regulation requires the Senior Governing Body to have sufficient understanding of cybersecurity risk management, and regularly receive and review management reports about cybersecurity matters," which should include "reports related to AI."
- Third-Party Service Provider and Vendor Management - The Guidance emphasizes that "one of the most important requirements for combatting AI-related risks" is to ensure that all third-party service provider and vendor policies (including those required to comply with the Cybersecurity Regulation) account for the threats posed by the use of AI products and services, require reporting of cybersecurity events related to AI, and consider additional representations and warranties for securing a Covered Entity's non-public information when a third-party service provider uses AI.
- Access Controls - Building on the access control requirements in the Cybersecurity Regulation, the Guidance recommends that "Covered Entities should consider using authentication factors that can withstand AI-manipulated deepfakes and other AI-enhanced attacks, by avoiding authentication via SMS text, voice, or video, and using forms of authentication that AI deepfakes cannot impersonate, such as digital-based certificates and physical security keys," among other steps to defend against AI-related threats (see the first illustrative sketch following this list). The Guidance also advises Covered Entities to "consider using an authentication factor that employs technology with liveness detection or texture analysis to verify that a print or other biometric factor comes from a live person." Notably, the Guidance recommends, but does not require, that Covered Entities employ "zero trust" principles and, where possible, require authentication to verify the identities of authorized users for all access requests.
- Cybersecurity Training - As part of the annual cybersecurity training requirements under the Cybersecurity Regulation, the Guidance suggests that the required training should address AI-related topics, such as the risks posed by AI, the procedures the entity has adopted to mitigate those risks, and how to respond to social engineering attacks that use AI, including deepfakes in phishing attacks. As part of the social engineering training required under the Cybersecurity Regulation, entities should cover procedures for handling unusual requests, such as urgent money transfers, and the need to verify the legitimacy of requests made by telephone, video, or email. Entities that deploy AI directly (or through third-party service providers) should also train relevant personnel on how to design, develop, and deploy AI systems securely, while personnel using AI-powered applications should be trained to draft queries that avoid disclosing non-public information.
- Monitoring - Building on the requirements in the Cybersecurity Regulation to implement certain monitoring processes, the Guidance notes that Covered Entities that use AI-enabled products or services "should also consider monitoring for unusual query behaviors that might indicate an attempt to extract [non-public information] and blocking queries from personnel that might expose [non-public information] to a public AI product or system" (a simple sketch of such screening appears after this list).
- Data Management - The Guidance notes that the Cybersecurity Regulation's data minimization requirements, which require the implementation of procedures to dispose of non-public information that is no longer necessary for business purposes, also apply to non-public information used for AI purposes. Furthermore, while recent amendments to the Cybersecurity Regulation will require Covered Entities to "maintain and update data inventories," the Guidance recommends that Covered Entities using AI implement data inventories immediately (a minimal inventory-and-retention sketch appears after this list). Finally, Covered Entities that use or rely on AI should have controls "in place to prevent threat actors from accessing the vast amounts of data maintained for the accurate functioning of the AI."
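The Guidance does not prescribe any particular implementation of deepfake-resistant authentication. As a purely illustrative sketch of the primitive underlying the certificate- and security-key-based factors the Guidance cites, the following Python example (using the third-party `cryptography` library) shows a challenge-response flow in which a server challenge is signed by a private key that never leaves the authenticator, something an AI-generated voice or video cannot reproduce. All names below are hypothetical and are not drawn from the Guidance.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Enrollment: the authenticator (e.g., a physical security key) generates a
# key pair; only the public key is registered with the server.
authenticator_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = authenticator_key.public_key()

def issue_challenge() -> bytes:
    """Server side: issue a fresh random challenge for each login attempt."""
    return os.urandom(32)

def sign_challenge(challenge: bytes) -> bytes:
    """Authenticator side: prove possession of the private key."""
    return authenticator_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

def verify_login(challenge: bytes, signature: bytes) -> bool:
    """Server side: accept the login only if the signature verifies.
    A deepfaked voice or video carries no private key, so it cannot pass."""
    try:
        registered_public_key.verify(signature, challenge,
                                     ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

challenge = issue_challenge()
assert verify_login(challenge, sign_challenge(challenge))
```

In practice, this exchange is typically handled by established standards such as FIDO2/WebAuthn rather than hand-rolled code; the sketch only illustrates why such factors resist impersonation.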
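The Guidance likewise does not specify how to monitor or block risky queries. As one hypothetical approach, consistent with the monitoring bullet above, a Covered Entity could screen queries bound for a public AI tool for patterns resembling non-public information and flag unusual query volume. The patterns and threshold below are assumptions for illustration; real deployments would rely on dedicated data-loss-prevention tooling tuned to the entity's own baseline.

```python
import re
from collections import defaultdict

# Hypothetical patterns for non-public information ("NPI").
NPI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US Social Security number
    re.compile(r"\b\d{13,19}\b"),          # possible payment card number
]
QUERIES_PER_USER_LIMIT = 100  # assumed per-day threshold, not from the Guidance

query_counts: dict[str, int] = defaultdict(int)

def screen_query(user: str, query: str) -> str:
    """Return 'block', 'flag', or 'allow' for a query to a public AI tool."""
    if any(p.search(query) for p in NPI_PATTERNS):
        return "block"   # the query might expose NPI to a public AI product
    query_counts[user] += 1
    if query_counts[user] > QUERIES_PER_USER_LIMIT:
        return "flag"    # unusual volume may indicate an extraction attempt
    return "allow"

print(screen_query("analyst1", "Summarize Q3 loan-loss trends"))           # allow
print(screen_query("analyst1", "Customer SSN 123-45-6789, draft a letter")) # block
```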
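Finally, as a minimal sketch of the data inventory and minimization practices described in the data management bullet, the example below tracks AI-related datasets and flags non-public information that is no longer necessary for business purposes. The record fields and 90-day grace period are illustrative assumptions, not requirements from the Guidance or the Cybersecurity Regulation.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class InventoryRecord:
    """One entry in a hypothetical data inventory for AI-related datasets."""
    dataset: str
    contains_npi: bool
    business_purpose: str
    last_needed: date  # last date the data was necessary for business purposes

RETENTION_GRACE = timedelta(days=90)  # assumed disposal window

def due_for_disposal(inventory: list[InventoryRecord],
                     today: date) -> list[InventoryRecord]:
    """Flag NPI datasets that are no longer necessary for business purposes."""
    return [r for r in inventory
            if r.contains_npi and today - r.last_needed > RETENTION_GRACE]

inventory = [
    InventoryRecord("model_training_v1", True, "credit model training",
                    date(2024, 1, 15)),
    InventoryRecord("marketing_copy", False, "chatbot prompts",
                    date(2024, 9, 1)),
]
for record in due_for_disposal(inventory, date(2024, 10, 16)):
    print(f"Dispose of, or justify retaining: {record.dataset}")
```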
Although AI presents some cybersecurity risks, the Guidance notes that there are also substantial benefits "that can be gained by integrating AI into cybersecurity tools, controls, and strategies." The Guidance concludes by noting that "it is vital for Covered Entities to review and reevaluate their cybersecurity programs and controls at regular intervals, as required by Part 500."