A. Introduction
The Securities and Futures Commission (“SFC”) has released a circular (the “Circular”) detailing expectations for licensed corporations (“LCs”) adopting generative artificial intelligence language models (“AI LMs”). While supportive of the use of AI and AI LMs by LCs, the SFC also acknowledges that AI LMs are susceptible to risks and require necessary safeguards. In particular, the use of AI LMs to provide investment recommendations, investment advice or investment research to investors is considered a high-risk use case.
B. Scope of the Circular
The Circular is intended to cover LCs offering services or functionality provided by AI LMs, or by AI LM-based third-party products, in relation to their regulated activities, irrespective of whether the AI LM is developed by the LC itself, its group company or an external service provider, or is sourced from open platforms.
C. Core Principles for Managing AI LMs
The SFC emphasises four core principles to guide LCs in the responsible adoption of AI LMs:
1. Senior Management Oversight
Senior management is responsible for ensuring proper governance throughout the AI LM lifecycle, from development and deployment to decommissioning. They should establish effective policies, procedures, and internal controls to manage risks and oversee the implementation of AI systems.
Senior management should also ensure qualified staff from business, risk, compliance, and technology functions are involved in overseeing AI LM adoption. Staff should possess competence in AI, data science, and regulatory compliance to address risks effectively. For high-risk use cases, such as investment recommendations or financial advice, heightened governance and additional risk controls are required to protect clients and investors.
While LCs may delegate certain functions, such as model validation, to their group companies, ultimate responsibility for compliance with legal and regulatory requirements remains with the LC.
2. AI Model Risk Management
An LC should implement a robust AI model risk management framework to ensure AI LMs remain fit for purpose. Key measures include: conducting thorough validation before deployment and whenever significant changes are made to the model’s design or inputs; testing the model’s performance across all processes, including input, output and any related systems; and regularly monitoring and reviewing AI LM performance to address potential drift or degradation over time.
For high-risk applications, LCs should adopt additional safeguards, such as human oversight of AI outputs and testing for consistency across variations in input prompts. Comprehensive documentation of all testing, validation, and monitoring activities is required.
The SFC distinguishes between off-the-shelf AI LM products and models developed or customised by LCs. While off-the-shelf products also require proper model management, customised models demand more rigorous oversight.
3. Cybersecurity and Data Risk Management
AI LMs are susceptible to adversarial attacks, data breaches, and other cybersecurity threats. LCs should implement robust controls, such as periodic adversarial testing, encryption of sensitive data, and measures to prevent data leakage through browser extensions or user inputs.
To ensure data integrity, LCs should mitigate biases in training data and comply with data protection laws. Particular care should be taken to protect sensitive information, such as client data, from being inadvertently exposed or exploited through AI LM training or use.
4. Managing Risks of Third-Party Providers
The SFC advises LCs to exercise due skill, care and diligence in assessing third-party providers’ expertise, controls and risk management frameworks.
An LC should evaluate whether the third-party provider itself has an effective model risk management framework in place and whether the performance of the AI LM is appropriate for the LC’s specific use. The LC should also assess the third-party provider’s data management, and consider whether a breach by the third-party provider of applicable personal data privacy or intellectual property laws could have a material adverse impact on the LC.
LCs should also prepare contingency plans to address service disruptions or operational failures stemming from third-party dependencies. Supply chain vulnerabilities and data leakage risks should be carefully monitored.
D. Notification and Compliance Requirements
LCs intending to use AI LMs for high-risk applications are reminded to comply with the notification requirements under the Securities and Futures (Licensing and Registration) (Information) Rules. Notifications are required for significant changes in the LC’s nature of business and the types of services provided. Early engagement with the SFC during the planning and development stages is recommended to address potential regulatory implications.
The SFC expects LCs to review and update their existing policies to comply with the Circular’s requirements. Although immediate compliance is required, the SFC acknowledges that some LCs may need time to fully implement the necessary measures.
E. Conclusion
The SFC’s guidance underscores the importance of balancing innovation with responsibility in adopting AI LMs. By implementing robust governance, risk management, and cybersecurity measures, LCs can harness the benefits of AI while safeguarding against potential legal, operational, and reputational risks.
LCs are encouraged to engage proactively with the SFC to ensure alignment with regulatory expectations.
Please contact our Partner Mr. Rodney Teoh for any enquiries or further information.
This news update is for information purposes only. Its content does not constitute legal advice and should not be treated as such. Stevenson, Wong & Co. will not be liable to you in respect of any special, indirect or consequential loss or damage arising from or in connection with any decision made, action or inaction taken in reliance on the information set out herein.