The Devil's Corner

Welcome to The Devil's Corner! Open Mind has retained the services of the original Devil's Advocate. (His hourly rates are hell, but worth it.) His burning questions about AI in journalism mirror those we hear from the public, just with more brimstone. Think of this as our supernatural FAQ, where the Open Mind team faces its demons head-on, and where no question is too hot to handle:

Let's cut to the chase – isn't AI in journalism just a fancy way to help media companies fire their journalists?

Tor (CEO): The news industry has cut its workforce by more than half in the past 25 years, so I understand the concern. AI might look like an opportunity to continue that trend. But that's not what we're seeing, and here's why: those massive editorial layoffs weren't driven by productivity improvements; they happened because the industry lost two-thirds of its revenue over the same period.

The rise of the internet disrupted traditional business models, slashing both subscription and ad revenues. With people’s attention increasingly divided among various media and entertainment options, news organizations simply didn’t have the money to maintain their previous staffing levels.

The journalists and editors who remain in the industry today are struggling under the weight of ever-increasing productivity demands – a response to reduced revenue and constant budget pressure. Many are disillusioned by the reality of modern newsrooms, which often feels far from what drew them to journalism in the first place.

The world needs keen-eyed journalists more than ever, and to ensure this we need tools that enhance the work journalists do, not replace them. We need meaningful productivity improvements that allow quality to flourish rather than forcing compromises. Tools that give journalists more time to interview sources, investigate stories, and hold power to account.

Increased productivity is our best defense against the rising tide of content that entertains but doesn’t inform – or worse. It’s how we keep editorial institutions strong and relevant.

While each newsroom faces unique challenges and must make their own strategic decisions, we can share what we’ve observed: So far, none of our customers have laid off journalists as a consequence of using StoryGo. On the contrary: These newsrooms are setting ambitious growth targets, and over time, we hope it will allow them to increase their investment in the kind of deep, investigative journalism that drew them to this profession in the first place.

How can you claim this is a journalistic tool when it lacks the most basic ability – talking to people?

Nicolai (DBD): You’re right – our AI can’t pick up a phone. It also can’t attend press conferences, build trusted relationships with sources, or read body language during interviews. And that’s exactly the point.

Our AI handles the time-consuming tasks like data processing, background research, and initial drafts – freeing up journalists to do what only humans can do: conduct meaningful interviews, build relationships, and uncover stories that matter. Technology should amplify human journalism, not replace it.

Your tool helps newsrooms rewrite stories that other journalists have already reported. Isn't that just copyright infringement dressed up as journalism?

Nicolai (DBD): This question touches on both legal and ethical considerations. Copyright law protects specific expressions of ideas, not the underlying facts or quotes themselves. So while directly copying someone's article would be infringement, reporting the same facts is perfectly legal – and has been standard journalistic practice throughout history.

However, the more important question is about journalistic integrity and value creation. Our system helps journalists combine multiple sources, add their own analysis, and adapt stories for their specific audiences.

We work closely with newsrooms to ensure our tools support original journalism rather than mere aggregation. Our system is designed to facilitate proper source attribution while empowering journalists to develop their own angles and trust their instincts in story development.

You build on AI models that were trained on publishers' content without permission. Aren't you profiting from that?

Nicolai (DBD): This question highlights important distinctions in how AI technology is used in journalism. Our approach differs fundamentally from foundation model development in two key ways:

First, we’re not training AI on copyrighted content or reselling that content. Instead, we provide tools that help journalists discover, break down and analyze publicly available information – similar to how search engines index online content. Our customers access and reference this content following traditional journalistic practices for attribution and fair use.

Second, while we do use AI services from companies like OpenAI in our technology stack – as do most modern digital companies – we recognize the ongoing industry discussions about AI training data and copyright. Services like Tollbit have emerged to facilitate licensing agreements between AI companies and publishers.

While this represents progress, we don't believe Tollbit's particular approach serves the industry's long-term interests, as it enables AI aggregators and personal assistants to create individual echo chambers, undermining the core purpose of editorial curation.

Instead, we believe the future lies in publisher-to-publisher markets that strengthen original reporting and editorial judgment, allowing news organizations to maintain direct relationships with their audiences rather than seeing their work diluted through AI-powered aggregators. We will welcome and collaborate with any such initiatives.

AI legislation is still in its infancy, and there is an urgent need for a unified set of rules. We hope such rules will ensure predictability and fair compensation for everyone with a stake in the game.

How can we trust AI systems aren't introducing hidden political bias? Aren't these LLMs essentially black boxes?

Tor (CEO): They certainly are. This is a crucial concern when introducing AI in journalism, where transparency and editorial control are fundamental. We address this challenge in several ways:

First, we give newsrooms complete control over which AI models power our tools. They can choose from all the leading commercial models and several open-source alternatives, allowing them to select providers whose values and transparency standards align with their editorial principles.

Second, we’ve developed rigorous prompting frameworks that prioritize objectivity and factual accuracy. While some news organizations have explicit editorial positions – which is a legitimate journalistic tradition – any such perspective should come from conscious editorial decisions, not from AI systems.
We also have an instant feedback system to identify and correct any unwanted political biases.

Most importantly, our tools are designed to assist journalists in gathering, analyzing, and drafting initial content. We firmly oppose AI autonomy in journalism, as it contradicts fundamental editorial principles.

All editorial decisions remain firmly in the hands of human journalists who understand their publication’s standards and values.

This commitment to editorial integrity is reflected in the SaaS contract we sign with every customer: all AI-assisted content must carry a human journalist's byline and be clearly marked as AI-assisted.

We only work with publications that have a chief editor, and we deliberately choose not to serve bloggers, SEO consultants, ad agencies, or any other non-editorial content developers. For now, we also don’t serve freelance journalists, as we’re not able to exercise proper control.