If proposed changes to the Privacy Act are adopted, operators and users of AI could face far-reaching implications.
With the surge of business interest in Artificial Intelligence (AI) following the advent of generative AI language models such as ChatGPT, businesses are starting to explore other uses for AI. Although there is currently no AI-specific legislation in place in Australia, the general law will still apply. But how does that law affect AI? This article looks at some of the future regulatory requirements applicable to the use of AI in Australia if the Government adopts the Attorney-General’s proposals under the Privacy Act Review Report (Report).
Several features of AI are potentially relevant to the proposed changes. In particular, AI systems (such as machine learning and deep learning models) can be self-trained by processing large datasets through a feedback loop, often without human supervision. As a result of that training, the AI can change the algorithms it uses to make decisions or to generate output. The logic applied by an AI system is therefore often unknown, the so-called ‘black box effect’.
This lack of transparency will mean that operators and users of AI-systems may find it difficult to comply with some of the changes proposed in the Report. Other changes proposed in the Report will impact closely related areas that are likely to use AI processing, such as data analytics and the use of biometric data.
Several of the recommendations in the Report require the operator to give the user (or customer) information relating to the type of data that is processed and the manner of processing.
Direct marketing - In the area of direct marketing, it will be necessary to provide information about targeting, including clear information about the use of algorithms and profiling to recommend content to individuals. The operator will therefore need to disclose the principles that apply to the targeting algorithm. However, this recommendation does not appear to require disclosure of the logic applied in the algorithm.
Automated decisions - A more stringent approach is set out in relation to automated decisions, although it does not go as far as providing an ‘opt-out’ right for individuals, as suggested earlier in the consultation process and as set out in the EU’s GDPR. The information that must be provided includes:
This requires a higher threshold of detail to be provided in respect of decisions and, where decisions are made using black box AI, the operators of that system may not be able to provide meaningful information on how the decisions are made.
The Report also envisages a more general requirement that any collection, use or disclosure of personal information must be fair and reasonable in the circumstances. This fair and reasonable requirement will most likely create some hurdles in respect of personal information processed by an AI application.
Considerations in determining whether the processing is fair and reasonable include:
Businesses using AI may find it difficult to identify the information that is collected or used, or even to know how much data is retained. In addition, if the AI is self-trained, it may be difficult to determine whether the AI-generated information is necessary for, or directly related to, the business’s functions or activities.
Applying the fair and reasonable requirement to the targeting of individuals, the Report recommends that businesses should provide clear information about the use of algorithms and profiling. As noted above in relation to automated decisions, the use of black box AI will make it difficult for a business to provide the relevant information.
The Report also recommends that the Act should be amended to clarify that ‘collection’ of data includes obtaining information from any source, including inferred or generated information. The Report expressly confirms that this change includes data generated using data analytics and machine learning. It seeks to clarify that any ‘collection’ will be from the point at which the data is generated.
Data analytics is frequently performed on existing sets of data to identify patterns. Consequently, this use does not occur at a point where the customer can review a privacy notice. The Report acknowledges that this can lead to practical challenges, especially given that Australian Privacy Principle 5 requires notice (and possibly consent) at the time of collection. Businesses will need to assess the impact of this clarification on future data analytics and plan the required consents in advance.
There are other recommendations that will have a potential impact in areas that are likely to involve the use of AI, such as:
For those businesses that are looking to harness the benefits of AI, it will be important to understand the impacts of these upcoming changes. HSF can help guide you through the process.
The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action based on this publication.
© Herbert Smith Freehills 2024