Posts tagged AI


My new IEEE Spectrum piece on machine learning and the Colorado River is out. It’s my first byline there and I’m pretty delighted about it.

The river is having its worst year on record. Negotiations between the seven states over how to share what’s left have collapsed twice. And in the middle of all of that there’s this genuinely impressive ecosystem of tools being built to model what comes next — reservoir optimization algorithms running millions of simulations, graph neural networks mapping how conditions at one point in the river ripple downstream weeks later, hybrid deep learning models issuing drought warnings months out.

The piece doesn’t get into something that comes up constantly when you put AI and the Colorado River in the same sentence: data centers. It’s basically the first thing people mention. Data centers use a lot of water for cooling. AI is driving data center growth. The Colorado River is drying up. The story writes itself.

Except the numbers don’t really support it. In 2023, all U.S. data centers combined used roughly 50,000 acre-feet of water. The Colorado River’s total allocation is around 16 to 17 million acre-feet annually. That’s about 0.3 percent. Tribal water rights alone account for around 3 million acre-feet. Agriculture takes 70 percent of everything.
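The shares quoted above are easy to sanity-check. Here's a quick back-of-envelope sketch using the approximate figures from this post (16.5 million acre-feet is my mid-point of the 16 to 17 million range, not an official number):

```python
# Rough figures from the post, in acre-feet per year.
DATA_CENTER_AF = 50_000           # all U.S. data centers combined, 2023
RIVER_ALLOCATION_AF = 16_500_000  # mid-point of the 16-17 million range
TRIBAL_RIGHTS_AF = 3_000_000      # tribal water rights alone

data_center_share = DATA_CENTER_AF / RIVER_ALLOCATION_AF
tribal_share = TRIBAL_RIGHTS_AF / RIVER_ALLOCATION_AF

print(f"Data centers:  {data_center_share:.1%} of the river's allocation")
print(f"Tribal rights: {tribal_share:.0%}")
```

Run it and data centers come out at about 0.3 percent, while tribal rights alone are roughly 18 percent, which is the gap the post is pointing at.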

The data center story isn’t wrong exactly. But at the basin scale it’s a distraction from the harder conversation — the one about a 1922 legal framework (another topic I didn’t have room to really dive into!) that allocated more water than the river has ever reliably produced, and the human decisions that are going to have to be made about who gives up what. That story doesn’t have a villain as satisfying as a data center. But it’s the actual story.

Link: AI Models Map the Colorado River’s Hard Choices

A nine-pound Cavapoo, to be precise. Momo (the coder) made games, not important security patches or anything. To get to the vibe coding, the owner had to solve some delightful engineering problems first, like finding a keyboard that could withstand paws and building an automated treat-reward dispenser to get Momo to smash some keys. Getting Claude to turn the random keystrokes into a game almost felt incidental to everything else.

Above is a screenshot from the game, which I downloaded and played. I did not beat the final boss!

Source: I Taught My Dog to Vibe Code Games

I just found this Verite News story from April that combines two of my interests: prison policy and AI. And as you can imagine, the combination is not good! In Louisiana:

A computerized scoring system adopted by the state Department of Public Safety and Corrections had deemed the nearly blind 70-year-old, who uses a wheelchair, a moderate risk of reoffending, should he be released. And under a new law, that meant he and thousands of other prisoners with moderate or high risk ratings cannot plead their cases before the board. According to the department of corrections, about 13,000 people — nearly half the state’s prison population — have such risk ratings, although not all of them are eligible for parole.

Critics of the law describe it as a Trojan horse for ending parole, and the algorithm makes that strategy even cleaner. Lawmakers don’t have to argue that someone doesn’t deserve release; they don’t even have to vote on it. They can point to a score and shrug. It’s not us, the system says. It’s the data.

Another way to think about it: why get your hands dirty with the trolley problem when you can make a machine pull the lever for you?

<3 — <3

AI loves the em-dash, and Sean Goedecke thinks he’s found out why after exploring a lot of fun theories I had not heard of (including AI trainers in East Africa being fond of the punctuation): modern models are trained on a ton of freshly digitized 19th- and early-20th-century books, back when writers used dashes like it was their job. GPT-3.5 didn’t do this; GPT-4o does. Blame the OCR.

Kelsey Piper tried a fun experiment: ask chatbots the same morally loaded questions in six languages and see if the answers diverge. You’d expect the “AI Sapir-Whorf hypothesis” (language influences thought and perception) to hold. Instead, the models mostly converge on one worldview: secular, liberal, modern-internet cosmopolitanism. Even DeepSeek, China’s flagship model, gives Western-ish answers… unless you ask it in Chinese, in which case it gets a bit more cautious.

Today’s AIs don’t “think” in multiple languages. They seem to think in English and translate outward. And that makes everyone, from Cairo to Kansas to Chongqing, more likely to get the same advice about protests, domestic violence, or how to respond when your kid comes out.

It’s a weird twist: humans don’t have a universal culture, but AIs might.

While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate ‘Yes’ if you have read and agree.

Why do you want to work at Anthropic? (We value this response highly – great answers are often 200-400 words.)

Anthropic job app via Simon Willison’s blog