This isn’t going to be so much a column about the outdoors as it is about journalism, using an outdoor example or two. It’s also about technology and how it might someday replace human creation, if we become desensitized enough to allow it.
Journalism geeks have surely heard about the latest technological assault and shameful black eye on our craft, as well as humanity’s struggle to control the forces we’ve unleashed. We’ve been training for this assault since maybe 1968, when artificial intelligence, in the form of a computer, spoiled an expedition to Jupiter in the film “2001: A Space Odyssey.”
This latest attack was closer to home, in Cody, Wyoming, where a reporter who used generative artificial intelligence to write his stories was exposed by a reporter at the rival newspaper in the nearby town of Powell.
Full disclosure. I used to teach in Powell and later worked at the Cody Enterprise, the newspaper where the AI scandal occurred. The Enterprise’s local owner has since sold the newspaper and I’m not sure any of my old colleagues remain.
One constant is CJ Baker. He’s the reporter from the Powell Tribune who detected the telltale signs that reporter Aaron Pelczar was using AI to write his stories.
Baker is one of those gifts journalism used to routinely provide the communities it served. He was at the Tribune when I moved to Wyoming, and he’s still there, with the insight and understanding a reporter only develops with longevity. Baker also retains a relentless desire to square the inconsistencies of the world, or at least those of the local school board.
I’ve never read a Baker byline where I didn’t think to myself, “That’s nice work.”
Baker has Wyoming roots, but if he ever gets antsy for the bright lights, he’d be an asset in any newsroom, including The New York Times, which published a story about the controversy last week.
I’m no expert on AI, but I’ve played around with the technology, and I think it has a place in the modern newsroom. It can help with research, editing or proofing copy. If you’ve ever used grammarly.com or the spell/grammar check function of Microsoft Word, or allowed predictive text to finish a sentence for you, you’ve also used AI.
We don’t need to stop using spell check to be ethical journalists, but we ought to at least understand that allowing the technology to write our stories for us — while making up quotes from people we’ve never interviewed — doesn’t clear the ethical bar.
Out of curiosity, and in acknowledgment that my students are using AI in their lives, and sometimes in my classroom, I decided to launch my own AI experiment last spring. Using ChatGPT, I prompted the program to write a few stories for me. For my first test I typed in “List the best rivers for fishing the skwala hatch,” and in a couple of seconds, the program kicked out 313 serviceable words on the topic, including correctly identifying the Bitterroot as the No. 1 skwala river around.
I say “serviceable” words, because while the program didn’t get anything wrong, it was as superficial as veneer. ChatGPT didn’t offer the name of a single fly pattern or a good stretch of river or even the name of a fly shop in Hamilton. It lacked any detail of real substance, such as March 7, the earliest date I ever hit the skwala hatch just right, when even big browns were sipping dries in the runs.
I can’t write 313 words in a couple of seconds, but give me a half hour and I’ll still beat AI.
There aren’t any shortcuts to good journalism. And even if ChatGPT gets clever enough to pump out unique content, we still need editors to keep an eye out for mistakes.
Ask Dave Bowman. Even HAL 9000 sometimes blows a circuit.