decoded.legal's use of "AI"
This is our current thinking on decoded.legal's own use of "AI".
Last updated: 2026-03-30
A computer can never be held accountable. Therefore a computer must never make a management decision.
IBM, 1979. Perhaps.
More and more law firms are using "AI". We are not
So say surveys, often run by companies selling "AI" tools.
decoded.legal is not among them, for the reasons set out below.
What I mean here by "AI"
"AI" is a buzzword, covering all sorts of things, including machine learning, which traditionally would not have been called "AI".
"AI" powering a local email search tool, or "AI" for local on-machine spam filtering, are not really what I have in mind.
I am thinking more about:
- "AI" in the "big data" sort of sense. Grab a load of data, process it in a black box, and see what comes out, kind of thing.
- generative AI: using huge reams of data, often obtained in a manner hard to square with copyright law, and a lot of power, to generate something.
(That said, I've seen some people describe what is essentially "vim and a bash script" as "AI" in marketing materials, so who knows...)
We do not currently use, and have no plans to use, "AI"
We do not currently use "AI", in the sense described above, in our work.
How do we know this? Because decoded.legal is Neil and Sandra, and neither of us uses AI. We don't have staff to supervise, or shadow IT to avoid.
Our work and communications are all thought through, written, and checked by a human.
That might change - see below - but this is our position right now.
We use "AI" if that includes things like heuristic models for spam filtering, but I'm really not considering that "AI".
Supporting, not replacing, humans
(And, yes, for the purposes of this, "humans" includes lawyers...)
Perhaps there are ways in which I could use "AI" to improve what I do for my clients.
If so, I am open to exploring that, within limits. But before I did, I'd want to assess it thoroughly.
I can see a benefit to using "AI" to support humans and their own decision making.
I don't know exactly what this looks like right now, but the kinds of things I envisage are tools which help me check or stress test my advice.
For example, using "AI" to:
- check citations or references, or identify whether I have misapplied case law, or whether there are other options I have not considered.
- review a data set which I have already reviewed, to see if I have missed anything. For example, in the context of helping with a subject access request, doing a "second pass" for finding, or redacting, personal data (there is a toy sketch of this workflow below).
- help identify spam messages, or monitor network security.
In each case, I would do the work first, and only then use "AI" to assist me.
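To make the "second pass" idea concrete, here is a deliberately simplistic, hypothetical sketch. It is not "AI" at all: it just pattern-matches for things which look like email addresses or phone numbers, and flags documents which the first, human, pass did not mark. A real tool might use a locally-run model in place of these crude regexes; the point is the workflow: the human reviews first, and the machine only prompts a re-check.

```python
import re

# Hypothetical illustration only: crude patterns for things which
# *might* be personal data. A real tool would need far more care.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"(?:\+44|0)\d[\d \-]{8,12}\d"),
}

def second_pass(documents, human_flagged):
    """Flag documents containing possible personal data which the
    human reviewer's first pass did not already mark.

    documents: dict of {doc_id: text}
    human_flagged: set of doc_ids the human has already marked
    """
    missed = {}
    for doc_id, text in documents.items():
        if doc_id in human_flagged:
            continue  # the human already caught this one
        hits = [name for name, pat in PATTERNS.items() if pat.search(text)]
        if hits:
            missed[doc_id] = hits
    # The output is a prompt for human re-review, not a decision:
    # a person still checks every flagged document.
    return missed

if __name__ == "__main__":
    docs = {
        "a.txt": "Please call me on 07700 900123 about the invoice.",
        "b.txt": "Minutes of the meeting, no names mentioned.",
    }
    print(second_pass(docs, human_flagged=set()))
    # {'a.txt': ['phone']}
```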
We will not use non-consensually trained generative "AI"
We will not use generative "AI", created from scraping/using other people's personal creations without their consent, in our work for you.
Remote, "as-a-service", "AI" tools
We are highly unlikely to use remote, "as-a-service", "AI" tools.
I am sceptical of most SaaS services, and have a strong preference for self-hosted Free software.
"AI" with unclear, or non-open, training data
I struggle to see how it is humane or ethical to use "AI" tools where the make-up and provenance of the training data is unclear, or clear but problematic.
This is particularly pertinent where the output might be used in a way which adversely affects humans, since inaccuracies, bias, or unfairness in the training data could be carried through to the tool, without rectification or checks and balances.
Local "AI", trained on data we already store, public data, or consensual/licensed data
Perhaps, at some unspecified point in the future, we might use locally-run-and-trained generative "AI", trained on data sets such as:
- data we already store, such as advice we have given before
- public data, such as case law or legislation
- material licensed for use for "AI" training, such as precedents, templates, or other tooling, where "AI" use is permitted
This is not on our list of priorities at the moment.