
As AI tools become increasingly prevalent, Australia’s tech legal leaders are actively considering how to promote responsible AI use. They're exploring risk assessment practices, implementing guardrails, and launching internal education initiatives.

Artificial intelligence (AI) tools should be used as a guide, not as a replacement for skills or a means of delegating responsibility.

Last month, Legal Leaders (LLs) from across the Australian tech sector came together via our Tech Collective network for a candid and thoughtful conversation covering the most important business considerations for managing the responsible use of AI tools.

Themes that emerged from a leadership lens

Building systems to manage risk

  • Protecting data privacy and IP is everyone’s responsibility. LLs are considering ways to ensure all levels of the business understand the risks and impacts of using AI tools and are engaged in building effective and holistic risk management systems.
  • Before using AI tools as part of business-as-usual (BAU) practices, it is worth scrutinising the underlying datasets to identify quality concerns and potential sources of bias. Businesses may benefit from implementing impact assessment protocols to ensure risks are appropriately assessed.
  • LLs across industries are developing internal policies for responsible AI use. While there is no one-size-fits-all approach, some companies have found it valuable to engage external consultants to help ensure their policies are consistent with business values and human rights commitments.

Implementing guardrails

  • Clear standards and guardrails for acceptable use can help prevent security breaches and promote risk-appropriate use of AI tools (including in relation to IP, confidentiality, privacy, bias, and misinformation risks).
  • Employers must ensure their employees understand that AI tools are a source of inspiration, not a means of delegating responsibility. A human must always remain in the loop and accountable for any content generated by AI.
  • Legal and IT teams should work together to track and disable the use of unauthorised data and software.

Educating the business and facilitating upskilling

  • Open discussion and education on the risks of AI (including bias, data privacy and security) are critical to supporting employees in exercising appropriate judgement when using AI.
  • Debriefing after an incident or working through hypotheticals can make risks more tangible for employees and help them understand what is at stake when using AI.
  • While Australia currently has no AI-specific regulation, businesses should be mindful of existing legal frameworks that operate in this space (e.g. consumer protection, privacy, anti-discrimination and tort laws), as well as international regulations that may be relevant for businesses with a global footprint.


Key contacts

Christine Wong
Partner, Sydney

Patrick Clark
Partner, Melbourne

Kwok Tang
Partner, Sydney

Julian Lincoln
Partner, Head of TMT & Digital Australia, Melbourne
