Should the BBC allow greater access to its news content for AI systems?

4th February 2026

The question of whether the BBC should allow greater access to its news content for AI systems raises complex issues about public service, editorial control, and the future of journalism.

As artificial intelligence becomes an increasingly common way for people to access information, decisions made by major news organisations like the BBC have significant implications for public understanding and democratic discourse.

On one hand, there is a strong argument that the BBC should allow more access. As a publicly funded broadcaster, the BBC has a clear public service mission to inform, educate, and serve audiences.

If AI tools such as ChatGPT are becoming a primary source of news summaries and explanations, excluding BBC journalism risks reducing the visibility of trusted, impartial reporting. This may lead to a form of source bias, where AI systems rely more heavily on outlets that permit access, potentially weakening the presence of UK-focused and public-interest journalism in AI-generated answers.

Furthermore, engagement rather than exclusion would allow the BBC to influence how its content is used. Through licensing agreements, the BBC could insist on proper attribution, accuracy safeguards, and transparency.

By setting conditions for access, it could help shape ethical standards for how AI systems interact with high-quality journalism, rather than leaving those standards to commercial publishers alone.

However, there are also compelling reasons for caution. The BBC has a responsibility to protect its intellectual property and to ensure its journalism is not reused in misleading or decontextualised ways. Unrestricted access could undermine editorial independence and weaken the BBC's ability to control how its reporting is presented. There are also financial considerations: although publicly funded, the BBC relies on commercial licensing and international sales, which could be threatened if AI-generated summaries substitute for direct engagement with BBC platforms.

Given these competing concerns, the most persuasive position lies between full access and total restriction. Rather than blocking AI companies outright or allowing unrestricted use, the BBC could pursue limited, licensed access designed specifically for public-interest purposes. Such agreements could include clear limits on reproduction, mandatory attribution, and auditing mechanisms to ensure responsible use.

In conclusion, the BBC should not simply open its archives without conditions, but nor should it isolate itself from emerging technologies. Allowing carefully controlled access would best balance its public service mission with the need to protect journalistic integrity, ensuring that trusted reporting continues to play a central role in the AI-driven information landscape.

30 January 2026
Revealed: ChatGPT draws more on GB News, Al Jazeera, and Marie Claire than the BBC, IPPR analysis shows

The Guardian, Reuters, and the Independent are the top three sources for news on ChatGPT
Murky rules mean AI tools pay for and prioritise some outlets, exploit other content for free, or exclude sources that block access
IPPR urges government to help establish ‘collective licensing agreements’ between news organisations and AI companies
Popular AI tools used by millions to access news are drawing on a narrow and inconsistent range of sources, often sidelining the UK's most trusted journalism and reshaping which voices are heard, according to new analysis from the Institute for Public Policy Research (IPPR).

The think tank analysed how four leading AI tools - ChatGPT, Google Gemini, Perplexity, and Google AI Overviews - respond to news queries, and found that the BBC, the UK's most popular and trusted news outlet, was missing entirely from ChatGPT and Gemini, partly due to the murky rules governing this interaction.

Other major news outlets also received limited exposure on ChatGPT: The Telegraph was cited in just 4 per cent of answers, GB News in 3 per cent, the Sun in 1 per cent, and the Daily Mail in 0 per cent.

ChatGPT's top source was the Guardian, which was used as a source in 58 per cent of responses and linked to far more often than any other outlet, followed by Reuters, the Independent, and the Financial Times.

However, Google AI Overviews used the BBC as a source in 52.5 per cent of news queries, and Perplexity used the public broadcaster in 36 per cent. The Guardian was the most common source used by Gemini (appearing in 53 per cent of answers).

The think tank says there are several reasons why AI companies source news inconsistently. Some publishers, such as the Guardian, have licensing agreements with firms that own AI products like ChatGPT, while others — including the BBC — have sought to block AI companies from accessing their content.

Last year, the BBC threatened legal action against Perplexity for using its content without permission. While ChatGPT appears to be respecting the BBC's wishes, this comes at a cost to the public: the UK’s most popular and trusted news outlet is absent from the country’s most widely used AI tool.

IPPR says this editorialisation by AI companies is creating a new generation of winners and losers. The disproportionate use of some outlets over others risks narrowing the range of perspectives users are exposed to, potentially amplifying particular viewpoints or agendas without users’ knowledge.

Additionally, the rise of AI could have serious consequences for the financial sustainability of quality journalism. For example, when a Google AI Overview is present, Google users are almost half as likely to click on news links. News publishers have themselves predicted a 43 per cent reduction in traffic from search engines over the next three years, threatening advertising and subscription revenues, particularly where AI companies are reproducing content without payment in return.

The authors of the report say that AI is rapidly transforming the news ecosystem, and AI companies are quickly emerging as the new gatekeepers of the internet, dominating the way the public now consumes news and information.

Given the importance of this to democracy, the government should seek to foster a healthy AI news environment now, before it is too late. To do so, the think tank recommends:

Making AI companies pay for the news they use, by requiring fair payment and collective licensing deals that ensure a wide range of publishers are included
Introducing clear, standardised "nutrition labels" for AI news so the public can see where AI answers come from and how they’re shaped
Using public funding to protect independent news in the age of AI, by backing a BBC-led public interest AI news service
Roa Powell, senior research fellow at IPPR, said:

"AI tools are rapidly becoming the front door to news, but right now that door is being controlled by a handful of tech companies with little transparency or accountability.

"When the UK’s most trusted news source can disappear entirely from AI answers, it’s a warning sign. If AI companies are going to profit from journalism and shape what the public sees, they must be required to pay fairly for the news they use and operate under clear rules that protect plurality, trust and the long-term future of independent journalism."

Carsten Jung, associate director for economic policy and AI at IPPR, said:

"So far, much of AI policy has sought to accelerate AI development. But we are coming to a stage where we need to more deliberately steer AI policy towards socially beneficial outcomes.

"In the news space, we have the tools to ensure that AI does not damage the public sphere, and in fact improve the quality and diversity of information people access. But this won’t happen by itself - the government needs to shape it. We should learn the lessons from the past and shape emerging technologies before it is too late."

Owen Meredith, CEO of the News Media Association, said:

"As the report demonstrates, weakening UK copyright law would deprive publishers of reward and payment for the trusted journalism that enables AI to be accurate and up to date. The Government must end the uncertainty it has created, by ruling out any new text and data mining exception in their March reports.

“The CMA must also intervene swiftly to stop Google using its dominant position to force publishers to fuel its AI chatbots for free. Fair payment from the market leader is critical to a functioning licensing market and to preventing big tech incumbents from monopolising AI."

Source
https://www.ippr.org/media-office/revealed-chatgpt-draws-more-on-gb-news-al-jazeera-and-marie-claire-than-the-bbc-ippr-analysis-shows