We’re living in a strange time for news. The line between fact and opinion feels like it’s fading, and artificial intelligence is speeding up the blur. Don’t get me wrong: AI can be incredible. It can pull in mountains of data, condense it into a neat package, and spit out a polished article in seconds. But here’s the thing: AI doesn’t know truth. It just predicts what words “should” come next based on patterns. If the data it’s trained on is biased or wrong, it’ll confidently hand you a well-written mistake.
I learned this the hard way recently. I was drafting a post about the information domain for an assignment, and part of the task was to let AI help me save some time.
Here’s the AI-assisted draft: https://docs.google.com/document/d/1zwKIqQ_nyQDQtN_dt7l524s6ne_Ycl5bfipDvQWmeVQ/edit?usp=sharing
On the surface, the draft looked great—clean sentences, professional tone, all the right buzzwords. But when I dug deeper, I found subtle inaccuracies that could easily pass as fact. In this case, the topic wasn’t life-or-death. But if it had been about national security, public health, or a breaking news event, those little errors could’ve had big consequences.
That’s the danger. When news is generated, or even just shaped, by AI, it’s easy for truth to get bent into something that feels right but isn’t. And when people can’t tell the difference, trust erodes. Worse, media outlets face a growing temptation to give audiences what they want to hear rather than what they need.
I’m not saying we should ditch AI. I’m saying we have to keep it on a short leash. That means fact-checking, transparency, and remembering that the goal of news is to inform, not to entertain.
Because once feelings start driving the headlines instead of facts, we’re in trouble. And once we’ve given the mouse a byte of that cookie, good luck getting it back.
