August 11, 2025

The AI Knowledge Crisis: Why Artificial Intelligence is Struggling to Learn from Science


Artificial Intelligence (AI) is rapidly transforming the way we work, research, and interact with the world. From coding applications to searching the web for information, AI promises to revolutionise how we process knowledge. Yet there is a glaring flaw in that promise: AI is often surprisingly ill-informed.

The issue? It is struggling to access the most credible and valuable sources of knowledge: peer-reviewed academic journals.

A Real-World Experiment in AI’s Limitations

Recently, staff at HERA (the Heavy Engineering Research Association) underwent AI training, exploring the capabilities of AI tools for a variety of tasks. As part of this training, we each developed our own AI agents and apps. I developed two AI-powered tools:

  1. a tool to search the internet for copyright breaches of HERA’s publications and content; and
  2. a tool to identify relevant peer-reviewed statements—going beyond a simple literature review—to provide high-quality, verified academic insights.

The results were both surprising and concerning. The first tool failed to find any copyright breaches, despite known instances existing. More alarmingly, the second tool—designed to find peer-reviewed sources—was struggling to do so. Instead, it returned a mix of cited articles and general website content. This was despite clear and refined instructions specifying that it should focus only on peer-reviewed journal articles, academic books, and conference papers.

At first, this seemed like a technical glitch. However, the real issue became apparent: AI could not access most peer-reviewed research because much of it is locked behind paywalls. This limitation has massive implications for the future reliability of AI.
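
To make the access problem concrete, the sketch below checks whether a given paper has a legally free full-text copy. It is a minimal illustration, assuming the free Unpaywall REST API (api.unpaywall.org/v2), which reports open-access status for a DOI; the contact email is a placeholder and this is not part of HERA's actual tooling.

```python
import json
import sys
import urllib.request

# Unpaywall's public v2 API reports whether a DOI has a legal open-access copy.
UNPAYWALL_API = "https://api.unpaywall.org/v2"
CONTACT_EMAIL = "you@example.org"  # placeholder: Unpaywall asks callers to identify themselves


def check_open_access(doi: str) -> dict:
    """Look up a DOI and report whether a free full-text copy exists."""
    url = f"{UNPAYWALL_API}/{doi}?email={CONTACT_EMAIL}"
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)
    best = record.get("best_oa_location") or {}
    return {
        "title": record.get("title"),
        "is_open_access": record.get("is_oa", False),
        "full_text_url": best.get("url_for_pdf") or best.get("url"),
    }


if __name__ == "__main__":
    # Pass any DOI on the command line, e.g. one returned by a search agent.
    # A paywalled article comes back with is_open_access = False, which is
    # exactly the point at which an AI tool loses access to the primary source.
    print(check_open_access(sys.argv[1]))
```

A search agent could apply a check like this to every candidate source and discard or flag anything it cannot actually read in full, rather than silently substituting a blog summary or a publisher landing page for the research itself.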

The Paywall Problem: A Knowledge Crisis in the Making

Academic journals have long served as the gold standard for knowledge verification. Rigorous peer review ensures that published research meets high standards of credibility and accuracy. However, access to this knowledge is often restricted. Publishers charge high fees for journal subscriptions, limiting access to institutions, researchers, and those who can afford it.

AI models, which rely on vast amounts of publicly available data, are effectively shut out from this critical information. Instead, they are left to learn from less credible, freely available sources such as blogs, news articles, and Wikipedia. This creates a major data bias, skewing AI’s understanding of the world.

Consider some practical examples:

  • AI-powered medical assistants may struggle to provide accurate recommendations because they cannot access the latest peer-reviewed research on drug interactions or emerging treatments.
  • AI used in climate science might rely on second-hand interpretations of data rather than original research, potentially distorting findings.
  • AI in legal applications may generate misleading legal arguments if it cannot access authoritative legal journals.

The Domino Effect: When AI Learns from the Wrong Sources

The consequences of this data bias extend beyond minor inaccuracies. AI is already being used to inform decisions in many different fields. If it is relying on incomplete or less credible sources, its recommendations become questionable—at best, incomplete, and at worst, outright wrong.

A stark example is the replication crisis in science, where many published studies cannot be reproduced. If AI models are learning from secondary sources that fail to critically assess these studies, they may propagate false or misleading conclusions. Without direct access to peer-reviewed research, AI is forced to rely on incomplete summaries, biased interpretations, or even misinformation.

A Social Dilemma: Who Controls Knowledge Access?

The issue raises ethical questions about access to knowledge. In theory, AI has the potential to democratise information, making high-quality knowledge more accessible. However, the current system reinforces a two-tiered structure:

  • those with institutional access to journals (such as universities and research bodies) have the advantage of reliable information; and
  • those without access—including many AI systems—must rely on open-source but often lower-quality alternatives.

I would suggest that the vast majority of users are in the latter category, with no ability to access primary source documents and make an informed assessment of the reliability (or ludicrousness) of AI-generated knowledge.

This problem is not new. Researchers in developing countries have long struggled with paywalls restricting their access to critical research. The advent of AI now amplifies this issue on a global scale, affecting not just individuals but entire industries and governments that depend on AI-driven insights.

The Need for Open Access and AI-Friendly Research

If AI is to truly become a tool for good, we need to address the accessibility of peer-reviewed research. There are a few possible solutions.

  1. Expanding Open Access Initiatives – More research needs to be freely available through platforms like arXiv, PubMed Central, and institutional repositories (see the sketch after this list).
  2. AI-Friendly Licensing – Journals could introduce AI-friendly access models, allowing vetted AI systems to access and process peer-reviewed content.
  3. Public-Private Partnerships – Governments and academic institutions could collaborate with AI developers to create knowledge-sharing agreements.
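
As a small illustration of the first option, open repositories are already straightforward for software to query. The sketch below uses arXiv's public Atom API, which requires no subscription or API key; the search terms are only placeholders.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ARXIV_API = "http://export.arxiv.org/api/query"  # public Atom feed, no key or paywall
ATOM_NS = {"atom": "http://www.w3.org/2005/Atom"}


def search_arxiv(query: str, max_results: int = 5) -> list[tuple[str, str]]:
    """Return (title, link) pairs for preprints matching the query."""
    params = urllib.parse.urlencode({
        "search_query": f"all:{query}",
        "start": 0,
        "max_results": max_results,
    })
    with urllib.request.urlopen(f"{ARXIV_API}?{params}") as resp:
        root = ET.fromstring(resp.read())
    return [
        (entry.findtext("atom:title", default="", namespaces=ATOM_NS).strip(),
         entry.findtext("atom:id", default="", namespaces=ATOM_NS))
        for entry in root.findall("atom:entry", ATOM_NS)
    ]


if __name__ == "__main__":
    # Placeholder query; swap in whatever topic the tool is researching.
    for title, link in search_arxiv("fatigue design of welded steel connections"):
        print(f"{title}\n  {link}")
```

Contrast this with a typical journal platform, where the same request would hit a login page. The more research that lives in open repositories like these, the more of it AI tools can actually learn from.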

Some initiatives offer hope. Plan S, for example, is an initiative by cOAlition S, a group of research funders, to ensure that research funded by public grants is published with immediate open access, with the aim of shifting the publishing industry towards open-access business models. However, widespread change is needed to ensure AI can learn from the best sources, not just the most available ones.

The Future of AI and Knowledge: A Call to Action

The AI knowledge crisis is not just a technical issue—it’s a societal one. As AI becomes more deeply embedded in decision-making, from business to healthcare, its knowledge gaps could have real-world consequences.

Researchers, policymakers, and publishers need to rethink how knowledge is shared. If AI is to fulfil its promise as an intelligent assistant, it must be trained on the best data available—not just what is freely accessible. Without intervention, we risk an AI future built on biased, incomplete, or even inaccurate knowledge.

It’s time to ask: In the age of AI, who gets to control knowledge? And are we willing to let artificial intelligence be educated by convenience rather than credibility?

It also plays into discussions about research excellence versus research impact. Peer-reviewed journals only accept high-quality research papers; however, if they restrict access to that knowledge and quality research, the potential research impact is certainly diminished. In the age of AI, this is no longer merely diminished potential; it is a direct negative impact, because it means AI tools are learning and drawing insights from data that is not necessarily reliable or grounded in evidence-based, rigorous science.


Author

Troy Coyle, CEO

