This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

In the last week there has been a lot of talk about whether journalists or copywriters could or should be replaced by AI. Personally, I’m not worried. Here’s why.

So far, newsrooms have pursued two very different approaches to integrating the buzziest new AI tool, ChatGPT, into their work. Tech news site CNET secretly started using ChatGPT to write entire articles, only for the experiment to go up in flames. It ultimately had to issue corrections amid accusations of plagiarism. BuzzFeed, on the other hand, has taken a more careful, measured approach. Its leaders want to use ChatGPT to generate quiz answers, guided by journalists who create the topics and questions.

You can boil these stories down to a fundamental question many industries now face: How much control should we give to an AI system? CNET gave too much and ended up in an embarrassing mess, whereas BuzzFeed’s more cautious (and transparent) approach of using ChatGPT as a productivity tool has been generally well received and even sent its stock price surging.

But here’s the dirty secret of journalism: a surprisingly large amount of it could be automated, says Charlie Beckett, a professor at the London School of Economics who runs a program called JournalismAI. Journalists routinely reuse text from news agencies and steal ideas for stories and sources from competitors. It makes perfect sense for newsrooms to explore how new technologies could help them make these processes more efficient. 

“The idea that journalism is this blossoming flower bed of originality and creativity is absolute rubbish,” Beckett says. (Ouch!) 

It’s not necessarily a bad thing if we can outsource some of the boring and repetitive parts of journalism to AI. In fact, it could free journalists up to do more creative and important work. 

One good example I’ve seen of this is using ChatGPT to repackage newswire text into the “smart brevity” format used by Axios. The chatbot seems to do a good enough job of it, and I can imagine that any journalist in charge of imposing that format will be happy to have time to do something more fun. 

That’s just one example of how newsrooms might successfully use AI. AI can also help journalists summarize long pieces of text, comb through data sets, or come up with ideas for headlines. In the process of writing this newsletter, I’ve used several AI tools myself, such as autocomplete in my word processor and software that transcribes audio interviews.

But there are some big concerns about using AI in newsrooms. A major one is privacy, especially for sensitive stories where it’s vital to protect your source’s identity. This is a problem journalists at MIT Technology Review have bumped into with audio transcription services, and sadly the only way around it is to transcribe sensitive interviews manually.

Journalists should also exercise caution around inputting sensitive material into ChatGPT. We have no idea how its creator, OpenAI, handles data fed to the bot, and it is likely our inputs are being plowed right back into training the model, which means they could potentially be regurgitated to people using it in the future. Companies are already wising up to this: a lawyer for Amazon has reportedly warned employees against using ChatGPT on internal company documents. 

ChatGPT is also a notorious bullshitter, as CNET found out the hard way. AI language models work by predicting the likely next word in a sequence; they have no understanding of meaning or context, and they spew falsehoods all the time. That means everything they generate has to be carefully fact-checked, and after a while it starts to feel quicker to just write the article yourself.
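
If you want an intuition for why that is, here’s a deliberately crude sketch in Python. Real systems like ChatGPT use giant neural networks trained on vast amounts of text, but the core loop is the same: predict a plausible next word, append it, repeat. This toy bigram model (the corpus and every name in it are invented for illustration) shows how that loop can produce fluent-sounding text with no regard for truth.

```python
# Toy illustration only -- nothing like ChatGPT's real internals.
# A language model's core loop: given the words so far, predict a
# plausible next word, append it, and repeat.
from collections import Counter, defaultdict

# A tiny made-up "training corpus."
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    # Greedily pick the most frequent follower of `prev`.
    return following[prev].most_common(1)[0][0]

# Generate text by repeatedly predicting the next word.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # -> "the cat sat on the cat sat"
# Locally fluent, globally meaningless: the model never checks
# whether anything it says is true, only what tends to come next.
```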

New report: Generative AI in industrial design and engineering
Generative AI—the hottest technology this year—is transforming entire sectors, from journalism and drug design to industrial design and engineering. It’ll be more important than ever for leaders in those industries to stay ahead. We’ve got you covered. A new research report from MIT Technology Review highlights the opportunities—and potential pitfalls—of this new technology for industrial design and engineering.

The report includes two case studies from leading industrial and engineering companies that are already applying generative AI to their work—and a ton of takeaways and best practices from industry leaders. It is available now for $195.

Deeper Learning

People are already using ChatGPT to create workout plans

Some exercise nuts have started using ChatGPT as a proxy personal trainer. My colleague Rhiannon Williams asked the chatbot to come up with a marathon training program for her as part of a piece delving into whether AI might change the way we work out. You can read how it went for her here.

Sweat it out: This story is not only a fun read, but a reminder that we trust AI models at our peril. As Rhiannon points out, the AI has no idea what it is like to actually exercise, and it often offers up routines that are efficient but boring. She concluded that ChatGPT might best be treated as a fun way of spicing up a workout regime that’s started to feel a bit stale, or as a way to find exercises you might not have thought of yourself.

Bits and Bytes

A watermark for chatbots can expose text written by an AI
Hidden patterns deliberately buried in AI-generated text could help us tell when the words we’re reading were written by an AI rather than a human. Among other things, this could help teachers spot students who’ve outsourced their essay writing to AI. (MIT Technology Review)
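
For the curious, here’s a rough sketch of how one such watermark could work (a hedged illustration loosely inspired by the “green list” idea in recent research, not necessarily the exact method from the story): the generator secretly favors a pseudorandom subset of words at each step, and a detector later checks whether a text overuses those words.

```python
# Hedged sketch of a statistical text watermark; the key, vocabulary,
# and function names are all invented for illustration.
import hashlib
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]
SECRET = "publisher-secret-key"  # hypothetical key shared with the detector

def green_list(prev_word):
    # Seed a RNG from the secret key plus the previous word, then mark
    # half the vocabulary as "green" (words the generator would favor).
    seed = hashlib.sha256((SECRET + prev_word).encode()).hexdigest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def green_fraction(text):
    # Detector: what share of words fall in their step's green list?
    # Unwatermarked text should hover near 0.5; watermarked text runs higher.
    words = text.split()
    pairs = list(zip(words, words[1:]))
    hits = sum(w in green_list(p) for p, w in pairs)
    return hits / max(len(pairs), 1)

print(green_fraction("the cat sat on a mat"))
```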

OpenAI is dependent on Microsoft to keep ChatGPT running
The creator of ChatGPT needs Microsoft’s billions, and its cloud computing power, to keep it running. That’s the problem with these huge models—this kind of computing power is accessible only to companies with the deepest pockets. (Bloomberg)

Meta is embracing AI to help drive advertising engagement 
Meta is betting on integrating AI technology deeper into its products to drive advertising revenue and engagement. The company has one of the AI industry’s biggest labs, and news like this makes me wonder what the shift toward moneymaking applications will do to AI development. Is AI research really destined to be just a vehicle to bring in advertising money? (The Wall Street Journal)

How will Google solve its AI conundrum? 
Google has cutting-edge AI language models but is reluctant to use them because of the massive reputational risk that comes with integrating the tech into online search. Amid growing pressure from OpenAI and Microsoft, it faces a conundrum: Does it release a competing product and risk a backlash over harmful search results, or does it risk losing out on the latest wave of development? (The Financial Times)
