Hong Kong currently has no dedicated AI regulation, relying instead on sectoral guidance issued by various regulators and government bodies, which applies depending on the context. These include the Digital Policy Office, the Office of the Privacy Commissioner for Personal Data, the Commerce and Economic Development Bureau, the Intellectual Property Department, the Financial Services and the Treasury Bureau, the Securities and Futures Commission, the Hong Kong Monetary Authority, the Insurance Authority, the Mandatory Provident Fund Schemes Authority and the Accounting and Financial Reporting Council.
Hong Kong aims to develop artificial intelligence into one of its core industries. In February 2025, Hong Kong committed HK$1 billion (approximately US$128 million) to establish the Hong Kong AI Research and Development Institute. In July 2024, the Digital Policy Office issued the Ethical Artificial Intelligence Framework (Version 1.4). The framework was originally developed to assist government bureaux/departments (B/Ds) in planning, designing and implementing AI and big data analytics in their IT projects and services. It consists of the following key components:
The AI assessment template is to be used by B/Ds at the application level to support their management of the benefits, impacts and risks of using AI. However, the Ethical AI Framework is also available for use by other organisations on a voluntary basis and can be customised for general reference when adopting AI and big data analytics in IT projects.
The framework expressly prohibits certain significantly harmful AI practices on the basis that they contravene prevailing laws and regulations relating, in particular, to personal data protection, privacy, intellectual property rights, discrimination and national security. In addition to the Ethical AI Framework, various policy statements have also been issued in relation to specific industries, including banking and finance, insurance, and the sale of medical devices.
Currently, there is no AI-specific regulation or legislation in place in Hong Kong.
From consumer protection law to online safety, AI continues to stretch existing legal frameworks. See the latest updates below.
There is no central framework for regulating the use of AI in the financial services sector in Hong Kong. Instead, the Securities and Futures Commission (SFC) and Hong Kong Monetary Authority (HKMA) have published circulars and guidelines to outline their regulatory expectations in this area.
The Financial Services and the Treasury Bureau issued a Policy Statement on the Responsible Application of Artificial Intelligence in the Financial Market on 28 October 2024, setting out the government's approach to the responsible application of AI in the financial market. The statement outlines the government's 'dual-track' approach of promoting the development of AI while addressing its potential challenges, highlights current opportunities and use cases for AI, and identifies the types of risks associated with the use of AI together with corresponding mitigation measures. It provides that financial institutions should formulate an AI governance strategy and adopt a risk-based approach to the procurement, use and management of AI systems, with human oversight to mitigate the potential risks.
The SFC has not yet published extensive guidance on the role of AI in the financial services sector. However, on 12 November 2024, it issued a circular on the use of generative AI language models, setting out the SFC's expectations of licensed corporations that offer services or functionality provided by generative AI language models (including as part of third-party products) in respect of their regulated activities. The circular identifies four core principles that licensed corporations may implement in a risk-based manner: senior management responsibilities, AI model risk management, cybersecurity and data risk management, and third-party provider risk management. The Appendix to the circular further lists non-exhaustive risk factors tied to each core principle. Aside from the circular, the SFC is participating in the International Organisation of Securities Commissions' Fintech Task Force AI Working Group, and will keep any findings or recommendations from that group under review to consider whether further regulatory guidance for SFC-licensed firms is necessary.
The HKMA has taken a more proactive role, issuing guidelines and circulars on the use of AI in the financial services industry for all authorised institutions. On 1 November 2019, it issued a circular setting out high-level principles for the banking industry on the use of AI applications, covering their governance, application design and development, and monitoring and maintenance. These principles are supplemented by publications including:
From a more practical perspective, the HKMA has invited authorised institutions to participate in the GenAI Sandbox initiative, launched in August 2024 in collaboration with the Hong Kong Cyberport Management Company Limited. The GenAI Sandbox aims to promote responsible innovation in generative AI and empowers banks to pilot GenAI use cases within a risk-managed framework.
In addition:
On 3 January 2024, the Medical Device Division (MDD) of Hong Kong's Department of Health issued Technical Reference document TR-008: Artificial Intelligence Medical Devices, which provides clarity on devices using AI and machine learning (including those with continuous learning capability) and the technical requirements expected for listing such medical devices on the Medical Device Administrative Control System. The document applies to all AI medical devices that fall within the scope of the Medical Device Administrative Control System.
Explore the latest landmark rulings as AI-related disputes make their way through the courts.
Dr Yeung Sau Sing Albert v Google Inc [2014] 4 HKLRD 493. Dr Yeung Sau Sing Albert brought an action for defamation on the basis that entering his name into Google Search generated suggested words that were defamatory of him by implying his involvement in criminal activities. The Court of First Instance allowed Dr Yeung's claim to proceed past a summary dismissal application on the basis that Google had designed and set up a search engine whose autocomplete and related search features used AI, and there was an arguable case that Google was a "publisher" of the defamatory material (rather than a mere conduit or passive facilitator). This decision is currently being appealed by Google.