Our last several blog articles have dealt with artificial intelligence (AI) in online content marketing. We predicted that search engines with embedded AI would change online marketing, looked at the viability of using AI-generated images for articles, checked out ChatGPT and tools that claimed they could detect AI-generated content, and offered 5 predictions for AI in online marketing.
Today, though, we’re going to look at a very real problem: How can law firms and their freelance writers dissolve the tension and suspicion that AI content writing programs have created?
Our solution: Through a contractual provision that lays out which AI-detector is to be used on submissions, and what standards the content has to meet.
The Problem: Is This Content Written By a Human or a Machine?
Law firms contract freelancers to ghostwrite the content on their websites all the time. Myers Freelance LLC should know. It’s what we do all day.
But now there are AI programs, like ChatGPT, that can write online marketing content in ways that, at least at first, appear to pass muster.
This raises 2 key points of conflict:
- The firm is paying the human writer to create the content, but it’s hard to tell if the content is written by a human or by AI, and
- AI-generated content is notorious for confidently making things up, and inaccurate or outright false information on a law firm webpage can lead to serious problems.
These problems apply to both old and new freelancers. Law firms who have been working with the same freelancer for years may suddenly wonder if the content they’re getting is still being written by that trusted freelancer, while hiring a new writer is even more complicated than it was before.
There’s no question that many online content writers use ChatGPT or their preferred AI content creator. When AI writing programs became mainstream, there was an explosion in the number of “content writers” seeking work.
Coincidence? Of course not. Loads of people now see content writing as a get-rich-quick scheme. They get an assignment, tell AI to write it, maybe look it over, and submit it. They can charge a quarter of what old-school writers would charge and still make out better because it takes seconds for AI to make an article.
But for freelance writers who do not use AI tools, the suspicions they face can be frustrating. Worse, AI-detecting tools are notorious for falsely flagging human-written content as AI-generated. Clients in all industries have pointed to the results of an AI-detector to justify terminating the writer under the false assumption that these detectors are foolproof. In many cases, clients have used the failed AI test to justify not paying the writers for their work.
It Is a Problem, Both in Terms of Quality and SEO Potency
It matters whether the content is human-written or AI-generated. If it’s generated by AI, there are several risks:
- It can have factual inaccuracies or “hallucinations” that confidently present falsehoods as the truth in ways that can be difficult to detect,
- Search engines have already waffled from prohibiting AI-generated content to saying that they would reward “high-quality content, however it is produced,” so there’s nothing saying that they won’t evolve their stance yet again, and
- Because AI content relies heavily on text prediction – meaning each successive word is the one most likely to occur next, given what has come before it – it isn’t as engaging to read as human-written content, so it is likely to struggle to rank well given the poor reader metrics it will produce.
Therefore, law firms who want to take online legal content marketing seriously have a strong interest in knowing what they’re getting. Even if writers fact-check AI-generated content perfectly and even if the content ends up, somehow, being engaging to read, if search engines backtrack and prohibit or demote AI content again (as we predict they will), all of that content on the firm’s website becomes a liability.
However, legitimate writers need some protections, too. Losing business because an AI-detector tool falsely flagged human-written content as AI-generated is devastating and embarrassing, can tarnish the writer’s reputation and, worst of all, is very difficult to challenge.
In the months since AI-generated content went mainstream, Myers Freelance LLC has engaged with it on a daily basis. We think that we have the solution for both law firms and freelance legal writers.
The contractual provision that we think is the ideal compromise between clients and freelance writers requires the submitted content to have at least a 51% probability of being written by a human, according to Copyleaks’ AI-detection tool. As we will explain, there should also be a provision defining “submitted content” as being the entire article.
Copyleaks’ AI-Detector Tool
Out of all of the AI-detecting tools that we’ve played with, we’ve found that Copyleaks’ tool is adequately reliable.
It’s also free to use, though you’re limited in the number of articles that you can scan unless you create an account. The account is free, though, and setting it up is quick and easy.
It’s also one of the most rigorous tests for writers: Content that we know is written by humans (us) still doesn’t score 100%, while the tool is happy to point out copy/pasted AI-generated content in bright red highlights. This should make it the preferred detector tool for clients receiving content submissions.
The detector tool scores content on a sliding scale, from a 100% probability that it’s written by a human at one end to a 100% probability that it’s AI-generated at the other, with 0 at the midpoint.
51% Probability for Human Written Content
Generally, the content that the human beings at Myers Freelance LLC write scores between a 75% and an 85% probability that it is written by humans.
We said this tool is reliable, not necessarily accurate.
We have also found that content that we read and start to wonder whether it’s AI or not tends to score in the high 50s and low 60s on the scale.
Content that Copyleaks says has less than a 50% probability of being written by a human is generally so unengaging and boring that we struggle to get through a thousand words of it.
Therefore, we think that it is reasonable for clients to demand that the content passes Copyleaks’ tool with at least a 51% probability that it’s written by humans.
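The contractual standard itself is simple enough to express in code. The sketch below is a hypothetical illustration only: the function name and the score input are our inventions, and the score is assumed to be the detector’s reported probability (0–100) that the content is human-written, not a call to any real Copyleaks API.

```python
def passes_human_threshold(human_probability: float, threshold: float = 51.0) -> bool:
    """Return True if the detector's human-written probability meets the contract's bar.

    `human_probability` is assumed to be the tool's reported probability (0-100)
    that the content is human-written; 51.0 is the contractual minimum proposed
    in this article.
    """
    return human_probability >= threshold


# Scores in the 75-85 range (typical for human-written content) pass comfortably;
# scores under 51 constitute a failed submission under the proposed provision.
print(passes_human_threshold(80.0))  # True
print(passes_human_threshold(45.0))  # False
```

Note that the threshold is a parameter: a firm that wants a stricter or looser standard can negotiate a different number into the contract without changing the mechanics.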
Define “Submitted Content”
One thing that the Copyleaks tool sometimes does is flag sections of an article as likely AI-generated while leaving the rest as likely human-written. Most of the time in these situations, the nominally human-written content scores under 51% as well. When that’s the case, the whole article has failed; the flagged sections of likely AI-generated text just fail the test more spectacularly.
Occasionally, though, there may be a paragraph or even just a sentence in an otherwise passing article that gets flagged as coming from AI. It’s rare, but we’ve seen it a couple of times. We think that treating this as an unpunishable anomaly is the better course of action. Sometimes, the words a human writer chooses are simply the same ones an AI would have used, and AI-detector tools cannot see through that coincidence.
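The “entire article” definition can be made concrete, too. This is a hypothetical sketch (the section names and scores are invented, and no real detector API is used): the contract’s pass/fail question is asked of the article-level score only, so an isolated flagged section in an otherwise passing article is noted but does not trigger a breach.

```python
def article_passes(article_score: float, section_scores: dict[str, float],
                   threshold: float = 51.0) -> bool:
    """Judge the submission on the whole-article score alone.

    Per-section flags are surfaced for information, but under the proposed
    definition of "submitted content," only the article-level score decides
    pass/fail.
    """
    flagged = [name for name, score in section_scores.items() if score < threshold]
    if flagged:
        print(f"Informational only - sections flagged as likely AI: {flagged}")
    return article_score >= threshold


# Hypothetical example: one flagged paragraph in an otherwise passing article.
ok = article_passes(72.0, {"intro": 78.0, "body": 74.0, "one_paragraph": 40.0})
print(ok)  # True: the article as a whole clears the 51% bar
```

The design choice here mirrors the contractual one: defining “submitted content” as the whole article keeps a single stray sentence from becoming grounds for nonpayment.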
Why We Think This is a Fair and Reasonable Resolution
First, this metric is something that even bad human writers should really be able to pass. Professional copywriters should be comfortable enough with it to agree to it promptly. We are confident enough that we probably wouldn’t even run our submissions through the tool before sending them in. A 51% probability for human-written is not onerous and (we won’t mince words) if a writer thinks that it is, they shouldn’t be in the business.
Second, it creates transparency and sets the standard. AI-detectors are all over the place. We gave 2 of them identical content to score. The first said it was 68% sure it was human-written, while the second said it was 100% sure it was AI-generated. We’ve read about writers getting terminated when their client – unbeknownst to the writer – ran an article through a never-identified AI-detector and got a red flag. No questions were asked. No defenses were allowed. It was a no-due-process termination. Having transparency and a standard to meet protects writers and is just plain fair dealing.
Third, law firms and other clients who deal with freelance writers no longer face a difficult judgment call when content appears to be AI-generated. Just run it through the detector tool. If it doesn’t score well enough, you have a breach of contract.
Fourth, there is the criticism that transparency and a set standard allow writers to game the system by using AI tools and then tweaking the content until it passes. This doesn’t hold water. If writers alter AI-generated content until it no longer reads like AI-generated content, what more do you want? Remember, your freelance writer is probably an independent contractor. As a client, you cannot tell them how to produce the result.