I just found this Verite News story from April that combines two of my interests: prison policy and AI. And as you can imagine, the combination is not good! In Louisiana:

A computerized scoring system adopted by the state Department of Public Safety and Corrections had deemed the nearly blind 70-year-old, who uses a wheelchair, a moderate risk of reoffending, should he be released. And under a new law, that meant he and thousands of other prisoners with moderate or high risk ratings cannot plead their cases before the board. According to the department of corrections, about 13,000 people — nearly half the state’s prison population — have such risk ratings, although not all of them are eligible for parole.

Critics of the law describe it as a Trojan horse for ending parole, and the algorithm makes that strategy even cleaner. Lawmakers don’t have to argue that someone doesn’t deserve release; they don’t even have to vote on it. They can point to a score and shrug. It’s not us, the system says. It’s the data.

Another way to think about it: why get your hands dirty with the trolley problem when you can make a machine pull the lever for you?

<3 — <3

AI loves the em-dash, and Sean Goedecke thinks he’s found out why after exploring a lot of fun theories I had not heard of (including AI trainers in East Africa being fond of the punctuation): modern models are trained on a ton of freshly digitized 19th- and early-20th-century books, back when writers used dashes like it was their job. GPT-3.5 didn’t do this; GPT-4o does. Blame the OCR.

Kelsey Piper tried a fun experiment: ask chatbots the same morally loaded questions in six languages and see if the answers diverge. You’d expect the “AI Sapir-Whorf hypothesis” (language influences thought and perception) to hold. Instead, the models mostly converge on one worldview: secular, liberal, modern-internet cosmopolitanism. Even DeepSeek, China’s flagship model, gives Western-ish answers… unless you ask it in Chinese, in which case it gets a bit more cautious.

Today’s AIs don’t “think” in multiple languages. They seem to think in English and translate outward. And that makes everyone, from Cairo to Kansas to Chongqing, more likely to get the same advice about protests, domestic violence, or how to respond when your kid comes out.

It’s a weird twist: humans don’t have a universal culture, but AIs might.

If a book about the history of chaptering literary works wasn’t enough to draw me in, this write-up in the Sydney Review of Books sold me:

One of the basic structures of the book, the chapter is a ‘box of time’ that shapes the reader’s experience of temporality. As such, changes in chaptering present one way of exploring changes in the experience of time in literary history. How did time feel in late antiquity, or in fifteenth-century Burgundy, or to a former slave at the end of the eighteenth century? Studying the chapter might also tell us something about our experience of time now, in ‘the present’ – whatever that is – and the historical distance between our time and that of times past.

Borrowing from the library now.


I don’t write a lot of essays, but I was so so pleased to write this one. For more than five years, I volunteered multiple times a week with a nonprofit that sends books to incarcerated readers across the country. It changed who I was as a reader, writer, and person. Reading hundreds and then thousands of letters from people on the inside asking for the books they hoped would entertain, distract, or educate them expanded my world in ways that I am still discovering.

Plus, writing this made me realize I have stared at Colin Powell’s butt more than all but maybe three or four people. I promise there is a reasonable explanation for this, but you will have to read on for it: https://lareviewofbooks.org/article/reading-behind-bars-and-beyond-barriers/

Anthropic, which built Claude, the LLM I find most useful, tests each of its models on Pokemon Red (I was a Blue player myself). Earlier models weren’t able to do much, but the latest version, using “extended thinking” (aka reasoning, the trend all the AI providers are after), is on a roll.

[Image: Anthropic’s Claude playing Pokemon Red]

This is more meaningful to me than most benchmarks, and I’m only half-joking. I remember Misty’s badge being hard to get!

You can watch the AI play here: https://www.twitch.tv/claudeplayspokemon.

Just about had to pull my car over listening to the Tim Ferriss podcast when Brandon Sanderson said this in response to a question about when the story isn’t working:

One important mindset that is kind of a ground rule is remembering, as a writer, that the piece of art is not necessarily just the story you’re creating, that you are the piece of art. The time you spend writing is improving you as a writer and that is the most important thing. The book is almost a side product, not really, but it almost is to the fact that you are the art and if you know that, it helps a lot.

::Mind blown emoji:: The whole 2+ hours is worth a listen.


My other story for WSJ was on how foundation models are being fed satellite data, which means a lot more of us could have access to cutting-edge tools that let us explore Earth and track things we care about.

Foundation models are the tech under tools like ChatGPT, but they can be put to use doing all sorts of things, like spotting illegal airstrips with satellite data in my story, or studying DNA sequencing, as covered in this Quanta story. Let’s try out AI to save the planet or understand new things about ourselves, and not just use it to create ad copy, alright?

Here’s a gift link to my Earth foundation model story.