OpenAI Explores Applications in Journalism
SAN FRANCISCO – OpenAI, the leading artificial intelligence (AI) research laboratory, announced a new initiative this week to explore applications of AI technology in journalism and news media.
The company, founded as a nonprofit in 2015 by a group that included tech entrepreneurs Sam Altman and Elon Musk, has gained worldwide recognition for its groundbreaking work developing systems like GPT-3, a language model that can generate remarkably human-like text. OpenAI now aims to leverage these advanced natural language capabilities to assist newsrooms and augment human journalists.
“We believe AI can play a transformative role in the news industry to increase productivity, enhance coverage, and raise the quality standard of reporting,” said Greg Brockman, OpenAI’s president and co-founder.
The company convened a group of media veterans, technologists, and academic experts at its headquarters last month to discuss opportunities and risks around using cutting-edge language models in journalism. The two-day workshop probed ethical dilemmas and outlined principles to guide development of AI writing tools.
“This technology holds tremendous promise but also poses serious perils if deployed without sufficient forethought,” Brockman said. “We want these systems to uplift journalists and strengthen public discourse.”
A core focus is mitigating harms from AI-generated misinformation and ensuring transparency about automation in news production. OpenAI emphasized close collaboration with publishers and commitment to accountability as keys to fostering broad acceptance.
Tailoring AI Technology to News Workflows
A critical question is exactly how to integrate algorithmic capabilities into existing editorial operations while leveraging the strengths of both human and artificial intelligence.
“The goal shouldn’t be full automation that replaces journalists, but rather augmenting human strengths with machine learning,” said Sarah Shang, a Stanford computer science professor who attended the workshop.
Applications include using language models to synthesize draft news articles from data sets and background materials, allowing reporters to produce more stories faster. AI could also take on the time-intensive transcription of interviews and videos.
Other opportunities include personalizing news delivery for readers, powering predictive analytics of readership patterns, and even generating interactive visual stories. OpenAI plans to partner with select news companies to develop custom solutions targeting pain points within their workflows.
Guarding Against Misuse
However, experts raised alarms about risks such as training biases that skew algorithmic outputs. Last year, Meta’s Galactica language model began producing authoritative-sounding misinformation within hours of launch and was taken offline within days.
OpenAI acknowledged similar vulnerabilities in its own systems and emphasized oversight methods such as monitoring for statistical drift that can signal problems. Still, some critics argue that reliance on AI in sensitive domains like news should face greater regulation.
“We’re seeing the very real dangers today of language models weaponized to spread disinformation at viral speed,” said Dr. Tim Hwang, a Brookings Institution researcher who co-authored a November 2022 report on systemic risks from generative AI. “News organizations leaning on this technology must account for harms amplified at scale.”
Industry Leaders Welcome Exploration
Nonetheless, several leading news executives welcomed OpenAI’s foray as an opportunity to responsibly drive innovation.
“Newsrooms have always adopted new technologies, from the printing press to photography to the internet,” said A.G. Sulzberger, publisher of The New York Times. “Exploring modern AI’s potential while safeguarding our standards and values promises to open new frontiers for journalism.”
The Times and other publishers have already begun limited tests of using language models to create first drafts of articles from raw source materials. Scaling up such pilots with enhanced oversight and auditing procedures, proponents argue, could unlock significant productivity benefits.
Nicholas Thompson, formerly Wired magazine’s editor-in-chief and now chief executive of The Atlantic, views collaborating with companies like OpenAI as imperative to reinventing publications for the algorithmic age.
“This technology is arriving whether we like it or not,” said Thompson, who also participated in last month’s workshop. “I believe the best way to steer through coming disruption is for journalists to have a seat at the table shaping it responsibly.”
Brockman shares Thompson’s view that pervasive language models are inevitable and that the industry must shape responsible applications through self-regulation.
“This train is moving extremely fast,” Brockman said. “It’s incumbent on organizations like ours to show how society can prosper from these deployments. We take that responsibility extremely seriously.”
With initiatives like its journalism project, OpenAI aims to set standards for responsible innovation in transformative technologies. Careful development today could pave the way for AI to strengthen news media’s democratic functions tomorrow. Yet risks remain pervasive, requiring vigilant governance to ensure the public good prevails over harm.


