My new IEEE Spectrum piece on machine learning and the Colorado River is out. It’s my first byline there and I’m pretty delighted about it.

The river is having its worst year on record. Negotiations between the seven states over how to share what’s left have collapsed twice. And in the middle of all of that there’s this genuinely impressive ecosystem of tools being built to model what comes next — reservoir optimization algorithms running millions of simulations, graph neural networks mapping how conditions at one point in the river ripple downstream weeks later, hybrid deep learning models issuing drought warnings months out.

The piece doesn’t get into something that comes up constantly when you put AI and the Colorado River in the same sentence: data centers. It’s basically the first thing people mention. Data centers use a lot of water for cooling. AI is driving data center growth. The Colorado River is drying up. The story writes itself.

Except the numbers don’t really support it. In 2023, all U.S. data centers combined used roughly 50,000 acre-feet of water. The Colorado River’s total allocation is around 16 to 17 million acre-feet annually. That’s about 0.3 percent. Tribal water rights alone account for around 3 million acre-feet. Agriculture takes 70 percent of everything.

The data center story isn’t wrong exactly. But at the basin scale it’s a distraction from the harder conversation — the one about a 1922 legal framework (another topic I didn’t have room to really dive into!) that allocated more water than the river has ever reliably produced, and the human decisions that are going to have to be made about who gives up what. That story doesn’t have a villain as satisfying as a data center. But it’s the actual story.

Link: AI Models Map the Colorado River’s Hard Choices

A nine-pound Cavapoo to be precise. Momo (the coder) made games, not important security patches or anything. To get to the vibe coding, the owner had to solve some delightful engineering problems first, like finding a keyboard that could withstand paws and building an automated treat dispenser to reward Momo for smashing keys. Getting Claude to turn the random keystrokes into a game almost felt incidental to everything else.

Above is a screenshot from the game, which I downloaded and played. I did not beat the final boss!

Source: I Taught My Dog to Vibe Code Games

I just found this Verite News story from April that combines two of my interests: prison policy and AI. And as you can imagine, the combination is not good! In Louisiana:

A computerized scoring system adopted by the state Department of Public Safety and Corrections had deemed the nearly blind 70-year-old, who uses a wheelchair, a moderate risk of reoffending, should he be released. And under a new law, that meant he and thousands of other prisoners with moderate or high risk ratings cannot plead their cases before the board. According to the department of corrections, about 13,000 people — nearly half the state’s prison population — have such risk ratings, although not all of them are eligible for parole.

Critics of the law describe it as a Trojan horse for ending parole, and the algorithm makes that strategy even cleaner. Lawmakers don’t have to argue that someone doesn’t deserve release; they don’t even have to vote on it. They can point to a score and shrug. It’s not us, the system says. It’s the data.

Another way to think about it: why get your hands dirty with the trolley problem when you can make a machine pull the lever for you?

<3 — <3

AI loves the em-dash, and Sean Goedecke thinks he’s found out why after exploring a lot of fun theories I had not heard of (including AI trainers in East Africa being fond of the punctuation): modern models are trained on a ton of freshly digitized 19th- and early-20th-century books, back when writers used dashes like it was their job. GPT-3.5 didn’t do this; GPT-4o does. Blame the OCR.

Kelsey Piper tried a fun experiment: ask chatbots the same morally loaded questions in six languages and see if the answers diverge. You’d expect the “AI Sapir-Whorf hypothesis” (language influences thought and perception) to hold. Instead, the models mostly converge on one worldview: secular, liberal, modern-internet cosmopolitanism. Even DeepSeek, China’s flagship model, gives Western-ish answers… unless you ask it in Chinese, in which case it gets a bit more cautious.

Today’s AIs don’t “think” in multiple languages. They seem to think in English and translate outward. And that makes everyone, from Cairo to Kansas to Chongqing, more likely to get the same advice about protests, domestic violence, or how to respond when your kid comes out.

It’s a weird twist: humans don’t have a universal culture, but AIs might.

If a book about the history of chaptering literary works wasn’t enough to draw me in, this write-up in the Sydney Review of Books sold me:

One of the basic structures of the book, the chapter is a ‘box of time’ that shapes the reader’s experience of temporality. As such, changes in chaptering present one way of exploring changes in the experience of time in literary history. How did time feel in late antiquity, or in fifteenth-century Burgundy, or to a former slave at the end of the eighteenth century? Studying the chapter might also tell us something about our experience of time now, in ‘the present’ – whatever that is – and the historical distance between our time and that of times past.

Borrowing from the library now.


I don’t write a lot of essays, but I was so, so pleased to write this one. For more than five years, I volunteered multiple times a week with a nonprofit that sends books to incarcerated readers across the country. It changed who I was as a reader, writer, and person. Reading hundreds and then thousands of letters from people on the inside asking for the books they hoped would entertain, distract, or educate them expanded my world in ways I am still discovering.

Plus, writing this made me realize I have stared at Colin Powell’s butt more than all but maybe three or four people. I promise there is a reasonable explanation for this, but you will have to read on for it: https://lareviewofbooks.org/article/reading-behind-bars-and-beyond-barriers/

Anthropic, which built Claude, the LLM I find most useful, tests each of its models on Pokemon Red (I was a Blue player myself). Earlier models weren’t able to do much, but the latest version, using “extended thinking” (aka reasoning, the trend every AI provider is chasing), is on a roll.

Anthropic Pokemon Red

This is more meaningful to me than most benchmarks, and I’m only half-joking. I remember Misty’s badge being hard to get!

You can watch the AI play here: https://www.twitch.tv/claudeplayspokemon.