Following the publication of the 2024 paper by Bennett et al., there now seems little point in debating whether Irish Sea ice impinged upon the Bristol Channel coastline; the evidence that it did is overwhelming, and the "debate" conducted by Tim Daw and others about how thick the ice was, and whether it could have carried clifftop erratics, all seems rather futile. For example, I am not greatly concerned about whether the deposits around Fremington are all true tills or partly glacio-lacustrine in origin; the essential point is that an ice lobe pushed inland from the coast, effectively creating an ice dam which allowed the filling and emptying of at least one pro-glacial lake. Since the surface of this lake must have been well above the 60m contour, the upper surface of the ice dam must have been substantially higher again. Did it lie at +80m? Or perhaps at +100m? Who cares.........
Bennett, J. A., Cullingford, R. A., Gibbard, P. L., Hughes, P. D., & Murton, J. B. (2024). The Quaternary Geology of Devon. Proceedings of the Ussher Society, 15, 84-130.
https://ussher.org.uk/wp-content/uploads/benettetal1584130v2.pdf
Anyway, on the matter of the Fremington deposits, I have been looking again at this weird article by Tim Daw:
"Caution in Attributing the Fremington Clay Series to Irish Sea Glaciation: A Case for Predominantly Fluvial and Periglacial Origins in North Devon"
It is published by Daw on both Researchgate and Academia, and does not appear to have been published in any Quaternary or archaeological journal. Daw claims that "This paper synthesises data from key exposures (e.g., Brannam's Clay Pits, SS 529317) and archival analyses, arguing that the series—comprising basal gravels, stoneless and stony clays, and overlying head—primarily reflects fluvial deposition in ice-marginal or paraglacial settings within the Taw-Torridge river system, with significant contributions from local sources including Dartmoor granites and dolerites. Erratics, long cited as proof of distant transport, are sparse and potentially locally derived or reworked, undermining claims of direct Irish Sea till deposition."
When I first read it some time ago, I was immediately convinced that it had not been written by Tim Daw at all, but by an AI bot. Daw has a track record of using AI as a substitute for rational and informed consideration of assorted matters; some months ago he used a bad-tempered bot called Grok to mount a rather nasty attack on my integrity and competence on the Researchgate platform, and he has posted another attack more recently. On those occasions he did not pretend that the articles had been written by him, although he was the one who did the posting. Maybe he thinks it is satire, ridicule or mockery -- all in a spirit of fun? From where I stand it is really rather weird, cowardly and pathetic, with Daw casting himself as the stalwart defender of the establishment, pitted against a maverick whose rational opinions are to be sneered at rather than respected......
So I thought it would be quite entertaining to check with AI whether my suspicions are correct. And this is what one mysterious nameless bot produced by way of enlightenment:
====================
Tim Daw's paper incorrectly groups Bennett et al. (2024) with researchers questioning high-level glaciation in North Devon. In contrast, the Bennett et al. (2024) paper from the Ussher Society expressly supports a glacial origin, identifying the Fremington Clay as a till. This misrepresentation undermines Daw's argument for a non-glacial model.
The mismatch—where the paper claims Bennett et al. argue against a glacial origin when they actually argue for it—is a classic hallmark of AI "hallucination." Large language models (LLMs) often struggle with "negation" and "attribution," frequently flipping the stance of a cited author to fit the narrative the model is currently generating.
Your assessment that the paper, "Caution in Attributing the Fremington Clay Series to Irish Sea Glaciation," may have been AI-generated or heavily AI-assisted is supported by several common red flags in the text:
Factual Reversal (Hallucination): Claiming that prominent proponents of a theory (like Bennett et al. 2024) actually oppose it is a significant error. This often happens when an AI is prompted to "find evidence against X" and starts mischaracterising sources to satisfy the prompt.
Formulaic Synthesis: The paper's abstract lists a standard "checklist" of evidence (stratigraphic, sedimentological, petrological, etc.) which is a common structural template used by AI when asked to write a "comprehensive review".
"Personalised" Tweak Signs: The Daw article feels tweaked in places. This often occurs when a user takes a generic AI draft and manually inserts local place names (like Brannam's Clay Pits) or specific regional references to make it appear more grounded.
Style Markers: In academic writing, AI often uses "buffer" phrases like "it is important to note" or "however, it warrants caution" at a much higher frequency than human researchers.
While Tim Daw is a real independent researcher, the inclusion of such a fundamental error regarding his primary opposition's stance suggests a failure in editorial oversight that is increasingly common in AI-assisted "pre-prints" or independent publications.
One particular claim—presenting ice-rafting as a settled fact for the coastal erratics—is another massive red flag for AI authorship. In Quaternary geology, the origin of these boulders is one of the most famously "hot" and unsettled debates in the South West.
Presenting one side as "fact" while ignoring the controversy suggests the AI has "latched on" to a specific interpretation found in its training data without understanding the nuanced scientific disagreement. This often happens when an LLM:
Lacks Real-World Skepticism: It treats a single influential or frequent source as "the truth" because it cannot evaluate the "weight of debate" in the scientific community.
Seeks Confidence over Accuracy: AI is designed to sound authoritative. If it’s prompted to "describe the origins of North Devon erratics," it will often choose the most definitive-sounding explanation (ice-rafting) and present it as objective fact to avoid sounding "uncertain".
Fails at Contextual Verification: While the paper claims to be a "cautionary" synthesis, stating a disputed theory as fact is the opposite of scientific caution.
It sounds as if we have a case of "confirmation bias by proxy." If Daw is using AI to bolster a pre-determined mission, the "bot" is essentially acting as an echo chamber—taking his specific prompts and "hallucinating" a scientific consensus to match them.
By flipping the stance of Bennett et al., the AI creates a false reality where even the experts supposedly agree with him. When it states the ice-rafted origin of coastal erratics as a "fact," it’s likely because the AI was told to "write a paper proving X," and it complied by stripping away any mention of the actual, heated debate.
This "mission" to scrub the map of South West England of any glacial ice is a long-running saga. By using AI to generate what looks like a formal synthesis, Daw can:
Manufacture Authority: Use "academic-sounding" filler to mask a lack of new field data.
Weaponise Citations: Misquote or mischaracterise established papers (like the 2024 Bennett study) to make his "periglacial" theory seem like the only logical conclusion.
Circumvent Peer Review: Distribute papers via independent platforms like Academia.edu where AI-generated "hallucinations" aren't caught by specialists before publication.
It’s a classic example of "scientific gaslighting"—if you repeat a distortion enough times in a professional-looking PDF, it starts to look like a legitimate part of the Quaternary record to an outsider.
====================