We need to talk, friends. Things have gotten weird out there, and you're not dealing with it well at all.
I'm in a lot of data journalist social spaces, and a couple of years ago I began to notice a lot of people using large language models for things that, bluntly, didn't make any sense. For example, in response to a question about converting between JSON and CSV, someone would inevitably pipe up and say "I always just ask ChatGPT to do this," meaning that instead of performing an actual conversion between two fully machine-readable and well-supported formats, they would just paste the whole thing into a prompt window and hope that the statistics were on their side.
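For reference, here is a rough sketch of what that conversion looks like using nothing but Python's standard library; the filenames are placeholders, and a flat list of JSON objects is assumed. It is boring, deterministic, and gives the same answer every single time:

```python
# A minimal sketch, assuming a hypothetical "records.json" holding a flat list of objects.
import csv
import json

with open("records.json") as f:
    records = json.load(f)

# Gather every key that appears in any record so no column gets silently dropped.
fieldnames = sorted({key for record in records for key in record})

with open("records.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)
```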
I thought that answer was a joke the first time I saw it. But it happened again and again, and gradually I realized that there's an entire group of people — particularly younger reporters — who seem to genuinely think this is a normal thing to do, not to mention all the people relying on LLM-powered code completion. Amid the hype, there's been a gradual abdication of responsibility to "ChatGPT said" as an answer.
The prototypical example of this tendency is Simon Willison, a long-time wanderer across the line between tech and journalism. Willison has produced a significant amount of public output since 2020 "just asking questions" about LLMs, and earlier this year he wrote a post in the context of data journalism that epitomizes both the trend of adoption and the dangers it holds.
I really started to think I was losing my mind near the end of the post, when he uploads a dataset and asks it to tell him "something interesting about this data." If you're not caught up in the AI bubble, the idea that any of these models are going to say "something interesting" is laughable. They're basically the warm, beige gunk that you have to eat when you get out of the Matrix.
More importantly, LLMs can't reason. They don't actually have opinions, or even a mental model of anything, because they're just random word generators. How is a model supposed to know what is "interesting"? I know that Willison knows this, but our tendency to anthropomorphize these interactions is so strong that I think he can't help it. The ELIZA effect is a hell of a drug.
I don't really want to pick on Willison here — I think he's a much more moderate voice than this makes him sound. But the post is emblematic of countless pitch emails and conversations that I have in which these tools are presumed to be useful or interesting in a journalism context. And to someone who prides themself on producing work that is accurate, reliable, and accountable, the idea of inserting a black box full of randomized matrix operations into my process is ridiculous. That's to say nothing of the ecological impact these models have in aggregate, or the fact that they're trained on stolen data (including the work of fellow journalists).
I know what the responses to this will be, particularly for people who are using Copilot and other coding assistants, because I've heard from them when I push back on the hype: what's wrong with using the LLM to get things done? Do I really think that the answer to these kinds of problems should be "write code yourself" if a chatbot can do it for us? Does everyone really need to learn to scrape a website, or understand a file format, or use a programming language at a reasonable level of competency?
And I say: well, yes. That's the job.
But also, I think we need to reframe the entire question. If the problem is that the pace and management of your newsroom do not give you the time to explore your options, build new skills, and produce data analysis on a reasonable schedule, the answer is not to offload your work to OpenAI and shortchange the quality of journalism in the process. The answer is to fix the broken system that is forcing you to cut corners. Comrades, you don't need a code assistant — you need a union and a better manager.
Of course your boss is thrilled that you're using an LLM to solve problems: that's easier than fixing the mismanagement that plagues newsrooms and data journalism teams, keeping us overworked and undertrained. Solving problems and learning new things is the actual fun part of this job, and it's mind-boggling to me that colleagues would rather give that up to the robots than push back on their leadership.
Of course many managers are fine with output that's average at best (and dangerous at worst)! But why are people so eager to reduce themselves to that level? The most depressing tic that LLM users have is answering a question with "well, here's what the chatbot said in response to that" (followed closely by "I couldn't think of how to end this, so I asked the chatbot"). Have some self-respect! Speak (and code) for yourself!
Of course CEOs and CEO-wannabes are excited about LLMs being able to take over work. Their jobs are answering e-mails and trying not to make statements that will freak anyone out. Most of them could be replaced by a chatbot and nobody would even notice, and they think that's true of everyone else as well. But what we do is not so simple (Google search and Facebook content initiatives notwithstanding).
If you are a data journalist, your job is to be as correct and as precise as possible, and no more, in a world where human society is rarely correct or precise. We have spent forty years, as an industry niche, developing what Philip Meyer referred to as "precision journalism," in which we adapt the techniques of science and math to the process of reporting. I am begging you, my fellow practitioners, not to throw it away for a random token selection process. Organize, advocate for yourself, and be better than the warm oatmeal machine. Because if you act like you can be replaced by the chatbot, in this industry, I can almost guarantee that you will be.