
Thursday, 20 November 2025

AI Man -- the latest strange phenomenon


One of AI Man's little helpers. If you want him to, he will quite happily write your articles for you, without you having to do any thinking at all. 



One of our contributors used this old quote the other day: "A Man sees what he wants to see......and disregards the rest".    By adding the letter "I" it becomes even more interesting:

"AI Man sees what he wants to see......and disregards the rest"

AI Man is making more and more appearances nowadays, and not just in the field of Stonehenge argumentation.  How do we recognise this strange and rather pathetic figure?  Well, his main characteristic is that he voluntarily gives up his capacity for rational thought and critical scrutiny, and asks AI to do the thinking for him instead.  It is essentially a cowardly and rather lazy act.  And AI, being what it is, is only too happy to oblige, even to the extent of writing detailed commentaries, reviews or articles which look for all the world as if they have been written by genuine human beings.

In theory, content generated entirely by AI should be prohibited or rejected outright by academic publishers.  Academic platforms like Researchgate do not actually ban AI-generated materials, but depend on the publishers of academic journals to do the scrutiny job for them.  That reliance falls flat on its face, however, with the online publication of "pre-publication" or pre-print versions of articles that may or may not be destined for journal submission.  I have used this "pre-publication" route myself, either for articles that are in the journal publication pipeline or for articles intended to place new material on the record or to stimulate discussion.  So the Researchgate platform provides a useful academic service in this regard.

But Researchgate has no editorial role, and refers to itself as a social networking site.  It does not screen out "pre-publication" articles that are written by AI, and that is a major failing.  So our friendly or unfriendly AI Man can publish material on the platform under false pretences, using his own name, or he can attribute the authorship to some weird AI bot either known or unknown to the readers.  There are plenty of them out there, including the one called Grok.

We are in very dangerous territory here..........
 


7 comments:

Tony Hinchliffe said...

Our old friend the Wiltshire farmer uses AI, for example in his Researchgate submissions.

BRIAN JOHN said...

Yes, I had noticed..........

BRIAN JOHN said...

Grok, the AI bot favoured by our farming friend, is Elon Musk's baby. Its big selling point is that it has a "personality" -- which you can choose via assorted prompts. But it is designed to be irreverent, humorous, controversial and provocative -- and "anti-woke". So it tends to be intolerant of maverick opinions, and to belittle or intimidate those who hold views which are different from those of the establishment. It is already causing considerable concern in the specialist literature because it does not seem to have much regard for the truth or for evidence-based conclusions. Here is one assessment:
https://www.arsturn.com/blog/groks-personality-explained-why-it-gives-controversial-answers

Dave Maynard said...

Sometimes when writing my pieces and using these techniques (although Grammarly is my limit), I suspect that the only reader is going to be AI.

BRIAN JOHN said...

Yes indeed Dave -- it has all become very weird. For example, I am rather convinced now that the majority of readers of my blog, measured as page views, are not human beings at all -- but are AI bots, just trawling for information. So in the future (already with us?) AI bots will be writing stuff for the entertainment and enlightenment of other AI bots -- with human beings becoming entirely redundant.........

Dave said...

Would it be enlightening or not, if AI bots made comments?

BRIAN JOHN said...

I suppose we might be enlightened to discover that AI bots do not have opinions -- and are therefore incapable of making comments that are of any use. All they can do is trawl a wide range of web-based sources and come up with some sort of report on what is deemed to be the "consensus." AI bots are designed to please, and are heavily influenced by the prompts given by those seeking the opinion........