AI & Local News newsletter, issue 12
Welcome to Issue 12 of the AI & Local News newsletter.
We’re wrapping up the first full year of the AI & Local News initiative. I’m thrilled by the work of our initiative organizations: the Associated Press (see Aimee Rinehart’s year-end wrap-up), the Brown Institute’s Local News Lab, and the Partnership on AI.
Looking ahead, it was great to see many NiemanLab journalism 2023 predictions that engage with artificial intelligence and automation (I counted at least six). Here’s to more of this work in 2023.
Finally, I want to call out a point from another NiemanLab piece, Ken Doctor’s update on two years of local journalism at Lookout Santa Cruz. First, it’s always reassuring to see that, at least under some conditions, there are models that allow local news outlets to grow and have impact. But what caught my attention was this point in the list of takeaways:
“The tech stack is still unnecessarily hard. … From CMSes to access controls to analytics to search optimization, we see a motley assortment of tech with too little integration among the moving parts. There’s real movement toward higher-performance standardization, optimization, and networking, but dealing with partial solutions continues to retard the fundamental work of remaking local news.”
As we explore the potential of AI for local news, it seems to me that part of making this work is getting the local news tech stack ready to keep up with the evolving digital world and take advantage of new tools.
Thanks for a great year and happy holidays!
Community & Project Lead, AI & Local News
NYC Media Lab
AI & Local News Initiative
NYC Media Lab: “On December 13, 2022, we hosted a community talk with Francesco Marconi from AppliedXL. AppliedXL is an event detection company that combines the horsepower of machine learning with the principles of investigative journalism to anticipate the news before it happens. Catch up with the startup by watching the session recording.
Also, we were proud to have New York Times R&D x NYC Media Lab fellow Tuhin Chakrabarty present the paper “CONSISTENT: Open-Ended Question Generation From News Articles” at EMNLP in Abu Dhabi earlier this month. Read this blog post about the project from New York Times R&D to learn more.”
Partnership on AI: Learn more about PAI’s Local News workstream. If you’d like to get in contact with the PAI team directly and join their quarterly meeting, please reach out to Dalia Hashim: email@example.com
ChatGPT is one of the most buzzed-about (and scary) chatbots in history. Creators are experimenting with it, researchers are critiquing it, and, notably, people outside the AI community are trying it. ChatGPT is built by OpenAI on their GPT-3.5 model, which generates text responses to people’s queries.
As Samantha Lock writes in an explainer for the Guardian, “there has been speculation that professions dependent upon content production could be rendered obsolete, including everything from playwrights and professors to programmers and journalists.”
- ChatGPT queries could become alternatives to some Google searches (although people will eventually have to pay to use ChatGPT). It can provide answers, descriptions, and solutions to complex questions, including writing code. Its potential disruptive impact on society has prompted comparisons to the iPhone.
- ChatGPT can also be entirely wrong, presenting false narratives and misinformation and writing “plausible-sounding but incorrect or nonsensical answers,” as OpenAI underlined in a press release.
The New York Times, The Guardian, Fast Company, OpenAI
Over the past few years, it’s been said that AI and machine learning are ready to write stories, create stunning images, and take on a whole range of human tasks. But, according to a recent Axios article about the work of Stanford researcher Percy Liang, there is a flip side: “While machine learning-trained systems do many things well, they can also be confidently wrong — a dangerous combination.”
The article clearly highlights one of the pitfalls in the current iteration of large-language models:
- “Many of today's most powerful AI systems aim to offer a convincing response to any question, regardless of accuracy.
- ‘If you don’t know, you should just say you don’t know rather than make something up,’ says Stanford researcher Percy Liang”
Liang aims to create a resource for AI “where people can go to understand the strengths and weaknesses of foundational AI models, such as those from Meta, Google and OpenAI.” Stanford researchers have also just presented a tool that puts algorithmic auditing in the hands of impacted communities. The tool’s aim is to empower small communities and help them hold accountable those who deploy harmful algorithms.
Lydia Polgreen’s op-ed in the New York Times offers a simple, but sometimes overlooked, suggestion: support your local news organization. In a post-ideological world fueled by conspiracy theories and election deniers, local news outlets are the backbone of our democracy. The bad news is that, according to data from Northwestern University’s Local News Initiative, more than a quarter of the country’s newspapers have closed since 2005. But Polgreen’s article ends on a hopeful note: “There has been a tremendous flowering of innovation in local news nonprofits. New outlets are opening all the time. They rely on their communities to support them. The future of our democracy and the long-term health of our citizenry may well depend on it."
The New York Times
Register by: January 8, 2023
JournalismAI launched a new training course that will start in January 2023 and cover the key principles of AI in journalism.
International News Media Association
The new report “How Automated Journalism Is Shaping the Future of News Media” by the International News Media Association (INMA) focuses on how AI can accelerate and improve reporting and reshape journalism. According to the report, automation benefits journalists, freeing up their time rather than replacing them, and benefits media companies by shaping new business models.
More than 50 speakers and 20 panels from this Fortune conference about AI are now available online. The sessions cover topics ranging from ethical and philosophical questions about machine learning and the future of our planet, to how AI is changing art, journalism, creativity, and defense.
All the videos and panels from the AI Journalism Festival are now available, including “How to identify potential false claims using AI,” “Tips from small newsrooms on starting your AI journey,” and “Getting started with AI: Where small newsrooms can start” with Aimee Rinehart and Ernest Kung from AP’s Local News AI Initiative.
Protesters in China are working hard to trip up algorithms designed to flag their content. In this episode of Hard Fork, Kevin Roose and Casey Newton talk with Paul Mozur about Beijing’s AI-powered censorship and the filters and other tricks protesters use to publish content critical of the regime online.
Food for thought
A bot that watched 70,000 hours of Minecraft could unlock AI’s next big thing
OpenAI has created a skilled Minecraft bot by training it on 70,000 hours of video of people playing the popular computer game. This approach, called imitation learning, could be the next big thing in AI. The technique “could be used to train machines to carry out a wide range of tasks by binging on sites like YouTube, a vast and untapped source of training data.”
MSN Fired Its Human Journalists and Replaced Them With AI That Publishes Fake News About Mermaids and Bigfoot
We’ve said repeatedly that AI will not (and, as you’ll see, should not) replace journalists. However, according to an article published by Futurism, MSN News fired its journalists (the cuts began in 2020) and replaced them with AI. The automated content curation has created problems, publishing stories about mermaids and Bigfoot. According to Futurism, “After we published this story, MSN deleted all the hoax articles — but without a retraction note or anything else documenting the removal.” More false articles appeared on the site soon after.