Artificial intelligence has moved from a niche innovation to a core consideration in how insurers, regulators and clients assess a law firm’s risk profile. With professional indemnity insurance (PII) renewal forms now asking directly about AI use, firms can no longer treat the technology as an informal experiment.
Recent cases involving fabricated citations and the mishandling of client information highlight the central point: AI is not the problem—ungoverned AI use is. Regulators remain clear that confidentiality, informed consent and professional judgement apply regardless of the tools used.
Insurers broadly support AI adoption but want assurance that firms have structure and oversight in place. They expect clarity over how AI tools are selected, used and supervised across the firm.
A blanket claim of “we don’t use AI” no longer reassures insurers or regulators. More often, it signals a lack of visibility over staff behaviour and unmonitored tool use within the firm.
Regulatory guidance continues to emphasise that AI does not alter a solicitor’s professional obligations. Responsibility for accuracy, confidentiality and client protection remains firmly with the practitioner.
To meet these expectations, firms must embed AI into their broader governance and risk structures – not just their technology strategy.
AI is now firmly embedded in risk assessments across the profession. The firms that proactively manage it – through governance, training and transparent controls – will earn trust, reduce exposure and remain aligned with insurer and regulatory expectations.
If you would like to discuss any of the themes raised in this blog please contact me at ih@hopkinslegalconsulting.co.uk or 07916669095.