
I don’t know whether to feel offended or flattered, but I’ve been scraped, by AI. And I can prove it. The first intimation I had of this signal occurrence was a notice from WordPress asking me to approve a comment on my recent blog post, “Copyright, AI and the Legal Profession: Who Blinked?”. I logged in to find it wasn’t a comment but rather a link to this website.
A quick click took me to the article “CanLII settling with Caseway signals shift in legal-tech power dynamics”, dated April 20, the same day I had posted my blog. It was under the byline “London News”, which I initially and naively assumed referred to London, Ont. (shows how parochial I can be), but I quickly realized that this was some kind of online journal for commuters heading toward Piccadilly Circus. London News appears to be written by a bot called Noah News Service, managed by the company HBM Advisory, based in London (England). There was no direct reference or link to my blog post in the article, but when I read it, it seemed eerily similar. The words were all different, but the thread (with one exception that I will come to later) was the same. When I searched further, I found a footnote indicating the London News story was “inspired by” my blog post. What does this mean in reality?
My original post is protected by copyright, but anyone (even a bot, I suppose) can take “inspiration” from a copyrighted work and produce something new. However, the “inspiration” I provided the bot is substantially different, in my view, from the sort of inspiration I would get from reading, say, an Agatha Christie mystery and then deciding to write my own mystery novel. In the case of my blog post, the bot did not really take “inspiration” from the content to create a new original work but rather engaged in rewriting the story, using AI analysis of its key points to recreate what I had said in different words. That’s not true inspiration; it’s paraphrasing. Moreover, I’ll wager that an unauthorized copy of my work was made in order to feed the content to the bot to undertake its rewrite. While facts cannot be copyrighted (only someone’s expression of the facts), this rewrite was not based on the facts of the case. It was based on my blog post. Although the bot has not hijacked my precise words (i.e. my expression), it has nevertheless replicated the structure of my work, its flow and its arguments. It’s sailing very close to the wind, but probably still legal. This is not dissimilar to the challenge faced by news organizations that find their expensively created content being scraped and repackaged by online platforms such as Google, Meta, and others. According to the National Post, in a recent survey commissioned by News Media Canada, more than seven in 10 Canadians surveyed think the federal government should prevent artificial intelligence companies from taking and repackaging news content without permission or compensation.
But back to the London News article. Scrolling down to the end, I found an analysis of my blog post, produced by Noah. The post was rated according to various categories. It earned a “Freshness Check” score of 8/10 (i.e. the story was relevant), a “Quotes” check of 7/10, a “Source Reliability” score of just 6/10, and a “Plausibility” rating of 8/10, but, sadly, an Overall Assessment for credibility of “Fail”, based on a “Medium” degree of confidence in this assessment. OMG, where did I fail to make the bot happy? How did I not meet its standards?
The “Source Reliability” score would have been higher, according to the bot, if the post had been published by an “established news organisation” rather than on a personal blog:
“While the author, Hugh Stephens, has expertise in international copyright issues (thanks, bot), the blog’s content is not subject to the same scrutiny as mainstream media.”
Well, I can live with that. The whole point of a personal blog is to offer a different perspective from Fox News, the BBC or the Globe and Mail.
The bot’s analysis continued:
“The article references reputable sources, but the lack of direct links to these sources raises concerns about transparency and verifiability.”
In other words, stuff your blog post with direct links to “mainstream media” and you might improve your report card. I could do that, but it might not be appreciated by my readers. The need for more direct links is repeated in the Quotes section (Score: 7/10) as well.
As for my failing grade, the bot’s summary says:
“The article provides a speculative analysis of the CanLII-Caseway AI settlement, referencing reputable sources but lacking direct links for independent verification. Its opinion-based nature and the author’s personal blog platform contribute to concerns about reliability and independence. Given these factors, the content does not meet the standards for publication under our editorial indemnity.”
But they published it anyway, as they do all kinds of content scraped from the web. I am not sure what the editorial indemnity policy is, but I suppose it is some sort of guaranteed reliability indicator, designed to separate the loony conspiracy theories (alternate facts?) from “real news”.
I wondered who would pass the bot’s scrutiny. Of the ten AI-related stories posted on the front page of London News on the day I selected, five passed, four failed, and one was rated Conditional. The sources were all specialized, non-mainstream tech publications or informed blogs, but certainly not conspiracy-theory outlets. Yet about half failed to gain Noah’s approval. I started to feel a bit better. Perhaps I’m not such an outlier.
I wonder if I could write a blog post that would get an “A” from the bot. First, I would have to catch its attention, which I guess I could do by making sure there were lots of references to “AI” in the text, and then I would have to suppress my instinct to offer views on the topic. I would also have to stuff in lots of links to mainstream sources, like the Guardian and its ilk. But what is the fun in that? And what is the point? If people want to read “just the facts”, they can turn over the screening of content (and thinking) to their mainstream media subscriptions. However, I will say that the idea of assessing the reliability of a story on any topic, whether it’s on AI or the war in the Middle East, is not a bad thing. In the case of HBM, the assessment is used as a teaser to convince users (individuals, but more likely businesses) to sign up for more comprehensive, paid analysis. Part of the problem is that the assessment is done by an AI bot, and we know that AI is far from perfect.
HBM claims it uses AI and statistical modelling blended with human expertise and oversight to do its assessments. There is a thin, cursory layer of human involvement: fact-checking, source verification, style refinement, etc. I think this is borne out by one key paragraph missing from HBM’s rewrite of my blog post. I had taken aim at Deloitte as an example of a large multinational company that should know better, caught red-handed using unattributed AI that produced inaccurate, “hallucinated” results in a consulting report it prepared for the Newfoundland government. (“Deloitte’s AI Nightmare: Top Global Firm Caught Using AI-Fabricated Sources to Support its Policy Recommendations”). While HBM’s rewrite included almost all the key points in my post, there was zero reference to Deloitte. I am sure that “human expertise” decided there was no point in gratuitously antagonizing an actual or potential client. Can I prove it? No, I guess it’s just another conspiracy theory.
I wonder if this blog post will be picked up and analyzed by Noah and, if so, whether I would get a “Pass” this time. After all, it is “Fresh” and I have used lots of quotes from Noah. Having referred to the London News, I should get a 10/10 for Source Reliability (although I am not mainstream media, but neither is Noah). As for Plausibility, what could be more plausible than an AI bot ripping off an author’s work through an unauthorized rewrite? Would all that land me a “Pass” from Noah? I will probably never know.
© Hugh Stephens, 2026. All Rights Reserved.
Update: Noah picked up and summarized (using much more direct language this time) the blog post above and then (drumroll) gave me a “Pass”.









