| dc.contributor.author | Mossel, Elchanan | |
| dc.date.accessioned | 2026-01-06T16:23:50Z | |
| dc.date.available | 2026-01-06T16:23:50Z | |
| dc.date.issued | 2026-01-06 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/164444 | |
| dc.description.abstract | Recent reports claim that Large Language Models (LLMs) have achieved the ability to derive new science and exhibit human-level general intelligence. We argue that such claims are not rigorous scientific claims, as they do not satisfy Popper’s refutability principle (often termed falsifiability), which requires that scientific statements be capable of being disproven. We identify several methodological pitfalls in current AI research on reasoning, including the inability to verify the novelty of findings due to opaque and non-searchable training data, the lack of reproducibility caused by continuous model updates, and the omission of human-interaction transcripts, which obscures the true source of scientific discovery. Additionally, the absence of counterfactuals and data on failed attempts creates a selection bias that may exaggerate LLM capabilities. To address these challenges, we propose guidelines for scientific transparency and reproducibility for research on reasoning by LLMs. Establishing such guidelines is crucial for both scientific integrity and the ongoing societal debates regarding fair data usage. | en_US |
| dc.description.sponsorship | The author is partially supported by ARO MURI N000142412742, by NSF grant DMS-2031883, by Vannevar Bush Faculty Fellowship ONR-N00014-20-1-2826, and by a Simons Investigator Award. | en_US |
| dc.rights | Attribution-NonCommercial-NoDerivs 3.0 United States | * |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/us/ | * |
| dc.title | The Refutability Gap: Challenges in Validating Reasoning by Large Language Models | en_US |
| dc.type | Preprint | en_US |