
Navigating the AI Landscape: Mitigating Professional Negligence Risks

Jan 17

6 min read

Authors: Muhammad Azly Haziq Sazali & Mohd Ashraf Ramli


Although the preceding article, as well as the general brouhaha surrounding AI, may seem to suggest that AI is a revolutionary panacea for all legal ills or an innovative silver bullet for troubleshooting every legal issue, it must be borne in mind that AI is only a tool, one of many in the repertoire of a legal professional, and it is not without its limitations.

 

Practical Leveraging of AI in the Legal Sector

 

Ideally, AI should be used for simpler, repetitive tasks, freeing up the valuable time of legal professionals so that they can focus on complex, higher-level strategic work. Even then, tasks delegated to AI still require legal professionals to apply their legal acumen to the output generated.

 

Key Risks of AI-Related Negligence

 

  1. Overreliance on AI: One of the primary risks is excessive reliance on AI without adequate human oversight. While AI tools can significantly enhance productivity and accuracy, they are not infallible. Overreliance can lead to errors, omissions, or incorrect decisions, potentially exposing professionals to liability.

  2. AI-Generated Errors and Misinformation: Generative AI models are trained on vast datasets, but they can still produce inaccurate or misleading information. If professionals fail to critically evaluate AI-generated content, they may inadvertently disseminate false or harmful information, leading to legal and reputational consequences.

  3. Data Privacy and Security Breaches: AI systems often require access to sensitive data to function effectively. If this data is not adequately protected, it could lead to data breaches, privacy violations, and significant financial losses.

  4. Algorithmic Bias and Discrimination: AI algorithms are trained on data that may contain biases, leading to discriminatory outcomes. If professionals use biased AI tools, they could perpetuate unfair practices and face legal challenges.

 

Mitigating AI-Related Negligence Risks

 

To minimize the risk of professional negligence associated with AI, businesses and individuals should adopt the following strategies:

  1. Human Oversight and Verification: Always maintain human oversight in AI-driven processes. Regularly review and verify AI-generated outputs to ensure accuracy and completeness.

  2. Continuous Training and Education: Invest in ongoing training and education for employees to enhance their understanding of AI capabilities, limitations, and ethical considerations. 

  3. Transparent AI Systems: Develop transparent AI systems that explain their decision-making processes. This transparency can help identify and address potential biases and errors.

  4. Robust Data Governance: Implement strong data governance practices to protect sensitive information and ensure data quality. Regularly assess and update data privacy and security protocols.

  5. Ethical AI Frameworks: Adhere to ethical AI frameworks and guidelines to promote responsible AI development and use. Consider the potential societal impact of AI applications.

  6. Regular Audits and Risk Assessments: Conduct regular audits and risk assessments to identify and mitigate potential AI-related risks. Stay updated on emerging AI technologies and their potential implications.

  7. Professional Liability Insurance: Consider obtaining professional liability insurance to protect against potential claims arising from AI-related negligence.

 

Case Example of AI Use in Court Filings Without Proper Oversight

 

In May 2023, a lawyer on the legal team representing a client suing the airline Avianca decided to use ChatGPT to find case law supporting his client’s claim. ChatGPT provided the lawyer with several case citations, which he included in his submissions to court. However, when neither the airline’s counsel nor the judge was able to find any of the cases, the judge ordered the lawyer to produce them. Unfortunately, the lawyer was unable to do so, as ChatGPT had fabricated the case law.

 

The lawyer had to throw himself at the mercy of the court, saying that he had no intention of deceiving the court and that he himself had not been aware that ChatGPT could produce unreliable information. The judge subsequently found that the lawyer had acted in bad faith and made false and misleading statements to the court, and imposed a US$5,000 fine on the lawyer, his co-counsel, and their law firm.

 

The above case example illustrates the need for human oversight and verification of AI output. Without such oversight, AI is prone to “hallucinate”, that is, to produce false output delivered with such confidence that the average AI user, lacking the knowledge to gainsay it, believes it to be true. Although not a case of professional negligence per se, had the client’s case been harmed by such reliance on AI without the necessary human oversight, it may well have amounted to professional negligence.

 

Other Instances of AI Use in Court Filings Without Proper Oversight

 

In November 2023, a Colorado lawyer was suspended for at least 90 days, with the remainder of his suspension stayed pending completion of a two-year probation period, for violating the Colorado Rules of Professional Conduct by filing AI-generated documents that included incorrect and fictitious cases.

 

In January 2024, a New York lawyer was referred to the Grievance Panel of the United States Court of Appeals for the Second Circuit for failing to confirm that the cases cited by generative AI in relation to a medical malpractice suit were valid.

 

In July 2024, a Virginia federal judge presiding over a whistleblower suit ordered the plaintiff’s lawyers to show cause why they should not be sanctioned for including AI-generated cases with inaccurate citations and made-up quotations in their court filings. In a declaration, the lawyers stated that the case names were indeed correct and had been checked, but that they had not realised the citations were incorrect. As for the quotations, the lawyers submitted that although these were not verbatim, they nonetheless accurately reflected the principles of those cases. The lead counsel and author of the filing stated in his declaration that he was solely to blame for the errors.

 

In November 2024, the lawyer acting for the plaintiff in a wrongful termination lawsuit against Goodyear Tire & Rubber was sanctioned with a US$2,000 penalty and required to attend a course on generative AI in the legal field for having submitted a court filing containing nonexistent cases and quotations generated by artificial intelligence.

  

A Malaysian Perspective on AI Use

 

Although there is no binding legal framework governing the use of generative AI by the legal profession in Malaysia, the Malaysian Ministry of Science, Technology and Innovation has published the National Guidelines on AI Governance and Ethics, which set out seven AI principles. Malaysian legal professionals, as end users, can relate to these principles in the following ways:-

 

i. Fairness

As there is no absolute one-size-fits-all approach in legal practice, legal professionals must take care that their use of AI in the provision of legal services does not cause discrimination or introduce unintentional bias, and should strive for the equitable distribution of the benefits of AI across all groups.

 

ii. Reliability, safety, and control

Legal professionals need to ensure that their use of AI produces safe, reliable, and controlled results for their clients.

 

iii. Privacy & security

Legal professionals should be wary of the information they input into AI tools, especially confidential, sensitive, and personal data. Due to the nature of AI machine learning, such data may be disclosed to other parties, posing a data security risk and a possible breach of confidentiality.

 

iv. Inclusiveness

There should be equal access to AI for all stakeholders so that all may benefit from it.

 

v. Transparency

In addition to being upfront about how data is used in relation to AI tools, transparency for legal professionals also means disclosing the use of AI in legal work, especially when prompted or required to do so.

 

vi. Accountability

AI as a tool does have its limitations. Thus, legal professionals need to exercise due care, diligence, and caution in the use of AI. A human legal professional must always be in the loop to verify AI-generated output, as it is the human legal professional who will be held accountable for the output generated.

 

vii. Pursuit of human benefit and happiness

This is the central principle of AI use. In today’s fast-paced world, the objective of AI use is to lessen the burden on human beings and thereby promote human well-being and happiness. AI use should therefore live up to this purpose instead of delivering the opposite effect.

 

Although these guidelines are not binding per se, they do help orient users of AI tools towards ethical and responsible use and, by extension, help mitigate the risks of professional negligence.

 

Conclusion

 

While AI offers immense potential, it is essential to approach its integration with caution and foresight. By understanding the risks and implementing appropriate safeguards, businesses and individuals can harness the power of AI while minimizing the potential for professional negligence. As AI continues to evolve, staying informed and adapting to emerging best practices will be crucial for navigating the complex legal and ethical landscape. Although AI will not completely replace legal professionals, lawyers who do not leverage AI may find themselves at a disadvantage compared to those who do.

