
We held a panel discussion, 'Exploring the Promise and Perils of Generative AI in the Enterprise', as part of our Artificial Intelligence Series, providing insight into the opportunities and risks associated with the use of generative AI.

Our panel brought together leading experts in the AI space: Susannah Wilkinson, Digital Law Lead APAC at Herbert Smith Freehills; Scott Thomson, Head of Innovation for Customer Engineering at Google; and Bronwyn Ross, AI Governance Product Lead at Red Marble AI. The discussion was hosted by Julian Lincoln, TMT Partner in Melbourne.

During the event, we explored the potential use cases for generative AI as well as the risks associated with bringing this technology within an enterprise’s business model.

  1. The Use of Generative AI

    Broadly, generative AI can increase productivity and support more accurate decision-making at work through efficient content creation, but with this new capability comes a vast array of new challenges. For example, the data sets used to train large language models may give rise to a range of concerns regarding intellectual property rights, privacy and data protection, as well as bias and discrimination. The output generated by AI, in turn, raises a further range of issues.
  2. Ethical Considerations and Responsible AI

    The panellists noted that it is important for enterprises to adopt an ethical and responsible AI framework, which must involve a measured risk assessment for each use case. This might involve additional transparency to consumers about the way generative AI is used and a level of explainability so that decisions made by an AI system can be understood and checked for accuracy. The panel also noted that it is essential for a person affected by a decision made by AI to be able to have that outcome reviewed, because these models, like human decision-makers, require checks and balances. For this reason, enterprises must articulate rules around the use of AI within their business models and embed those rules in their processes, as the unfettered use of AI exposes the business to significant risk.
  3. Mitigation of Risks

    We also touched on how enterprises can mitigate the risks of generative AI while incorporating large language models into their business. Prompt tuning was noted as an effective way to hone a large language model for a specific use case, although it demands a high level of machine learning governance and safety. For employees concerned about AI replacing human jobs, the panel noted that AI is not itself the solution to enterprise or industry problems, but rather a vehicle to help achieve a solution. Overall, enterprises must ensure they use generative AI tools safely and responsibly.

Watch

 

The panel discussion runs for approximately one hour, followed by a 30-minute Q&A session.

Need CPD Points?

Access our on-demand platform to watch this video and gain a CPD point in Practice Management. 

Watch it here

Visit our artificial intelligence hub for the latest legal and industry analysis.

Key contacts

Susannah Wilkinson

Director, Generative AI (Digital Change), Brisbane

Julian Lincoln

Partner, Head of TMT & Digital Australia, Melbourne

