Monday 23 March 1662/63

Up betimes and to my office, before noon my wife and I eat something, thinking to have gone abroad together, but in comes Mr. Hunt, who we were forced to stay to dinner, and so while that was got ready he and I abroad about 2 or 3 small businesses of mine, and so back to dinner, and after dinner he went away, and my wife and I and Ashwell by coach, set my wife down at her mother’s and Ashwell at my Lord’s, she going to see her father and mother, and I to Whitehall, being fearful almost, so poor a spirit I have, of meeting Major Holmes. By and by the Duke comes, and we with him about our usual business, and then the Committee for Tangier, where, after reading my Lord Rutherford’s commission and consented to, Sir R. Ford, Sir W. Rider, and I were chosen to bring in some laws for the Civill government of it, which I am little able to do, but am glad to be joyned with them, for I shall learn something of them.

Thence to see my Lord Sandwich, and who should I meet at the door but Major Holmes. He would have gone away, but I told him I would not spoil his visitt, and would have gone, but however we fell to discourse and he did as good as desire excuse for the high words that did pass in his heat the other day, which I was willing enough to close with, and after telling him my mind we parted, and I left him to speak with my Lord, and I by coach home, where I found Will. Howe come home to-day with my wife, and staid with us all night, staying late up singing songs, and then he and I to bed together in Ashwell’s bed and she with my wife. This the first time that I ever lay in the room. This day Greatorex brought me a very pretty weather-glass for heat and cold.


Montaigne and the Origins of Substack

Michel de Montaigne may be the most influential essayist in history—even Shakespeare borrowed from his work (taking some passages almost verbatim). But if Montaigne were alive today, this famous essayist might be mistaken for just another slacker living in his parents’ basement.

Okay, let’s be fair. He actually lived in the family castle. But it was still slacking. At age 38, he didn’t have a job—and preferred reading books. Leave me alone, was his message to the world.

The Montaigne family castle (Photo by Henry Salomé)

But even a castle was too noisy for him—or maybe it was just his wife from an arranged marriage that made him feel that way. In any event, Montaigne eventually decided that he needed total isolation, almost like a monk in a hermitage. So he moved into the tower on the family estate. He called it his citadel.

Here he surrounded himself with books, and announced his intention to devote the rest of his life to reading and philosophizing “in calm and freedom from all cares.”

Montaigne’s tower (Photo by Henry Salomé)

But at age 47, Montaigne had a change of heart. He returned to the world, ready to embark on travels and public service. But before leaving for Italy, he had one last goal he needed to fulfill closer to home—and it would have a decisive impact on Western culture.

During his years in the tower, Montaigne wrote 94 essays, and compiled them in two book-length manuscripts. These he now delivered to a printer in Bordeaux, and paid to have them published. A short while later, he traveled to Paris and proudly gave a copy to King Henry III.

In his mind, he was serving as his own patron, drawing on the family wealth to cover the expenses of his debut as an author. But today, of course, we would call this self-publishing—a term that is often (unfairly) used to demean the value and legitimacy of these rule-breaking efforts by do-it-yourself writers.




Call it what you will, Montaigne’s achievement cannot be denied. He not only invented the modern essay, setting the stage for Bacon, Emerson, and so many others, but also helped shape the human sciences and legitimize the personal memoir. That’s because his essays covered many topics but really had only one subject—namely Montaigne himself, with all his quirks and opinions and hot takes.

His essays marked a milestone in the history of individualism. So, of course it makes sense that they were self-published. That’s what individualists do. They are happy to work outside the system.

I could even imagine our slacker Montaigne publishing these essays on Substack today. You might say that he anticipated the Substack style of writing. His balancing of memoir and analysis, subjective and objective, observation and generalization is very much aligned with what I see on this platform every day.

Some writers aspire to take the high (but narrow) road of the scholar, while others prefer the larger byways of entertainment. But Montaigne only wanted to be himself, and to help us see the world through his eyes—and he had confidence that this was just as valuable as any declaration from established authorities.

He was the first to perfect this way of communicating, but hardly the last. In some ways, he might be the best role model for any author today who wants to shape the wider public conversation. Because Montaigne, again like so many Substackers, knew that the most persuasive writing is conversational.


My favorite self-published author is Marcel Proust—who was born almost exactly three hundred years after Montaigne entered his tower. But the similarities don’t end there, and not just because both are French.

Marcel Proust

Proust was another slacker from a wealthy family who used his inheritance to retreat from society. They both produced a single famous work of literary genius—and were the same age when they embarked on their respective projects. In both instances, a masterpiece was saved from oblivion because the author had enough money to cover the costs of publication.

These authors delivered works of universal appeal, but based on cranky individualism. Like Montaigne, Proust showcases his eccentric persona on every page. In fact, there’s good reason to believe that André Gide, the editor who rejected Proust’s book, judged the man’s character rather than his actual manuscript—he later admitted that he had formed a negative impression of Proust based on a few encounters in society.

Proust only had to subsidize the publication of the first volume of À la recherche du temps perdu. The Gallimard publishing house, making amends for its initial rejection, released the next volume, which won the prestigious Goncourt Prize, as well as the subsequent installments of Proust’s 3,500-page work.

No, I can’t imagine Proust succeeding on Substack. Some of his sentences are longer than most of the articles here. He’d be lucky to find a hundred subscribers. Even so, I’m certain he would approve of other writers bypassing the system and reaching out directly to readers.

The link between individualism and self-publishing is confirmed by my third famous example, poet Walt Whitman. He assigned the name “Song of Myself” to one of his best known poems, but that title could just as easily have described the writings of Proust and Montaigne. Once again, literary experimentation intersected with strident self-aggrandizement—and the publishing establishment wasn’t ready for what he created.

Whitman’s defiance shows up again in the name of his self-published volume Leaves of Grass. Grass was a derisive term used by editors when referring to writings of little merit, while the word leaves describes the paper used in printing a book. So Whitman takes the insult of rejection and turns it into a mark of pride. His Leaves of Grass would not only bypass the gatekeepers, but change the course of American poetry. Like Proust, Whitman added to the scandal by reviewing his own book (under a pseudonym).

Poetry has long embraced self-published authors. Four years after Whitman’s death, Edwin Arlington Robinson self-published his first book of poems—and he went on to win three Pulitzer Prizes. Alfred Lord Tennyson’s first book of poems was also self-published, but that didn’t stop him from rising to the position of Poet Laureate, a role he held for 42 years. Ezra Pound’s debut was also self-published, and he went on to launch the careers of others (James Joyce, T.S. Eliot, etc.) who, without this intervention, might have been forced into self-publication as well.

My copy of A Lume Spento, initially self-published by Ezra Pound

Pulitzer winner James Merrill had a different expedient—he let his father pay for the publication of his first book. It probably helped that his father Charles Merrill was co-founder (along with Edmund Lynch) of a brokerage firm you might have heard of. If we include other examples of authors helped by subsidies from family and friends, our list would get long and unwieldy.


In the light of these examples, we must abandon any snobbish disdain for the self-published author. This might even be the best route nowadays for writers who challenge norms and break rules. Substack is part of this glorious alternative to official channels, although it is a relative latecomer to the field of homemade writing.

But Substack can help build the future of self-publishing. In this regard, it needs to offer authors more options for publishing books, and not just shorter works via newsletter. I’d love to see the full book treatment made available here—works offered in digital, audio, and physical form.

The latter could be handled easily with print-on-demand technology. Substack could offer physical books without having to invest in physical inventory and large warehouses. The quality of print-on-demand books has improved markedly in recent years, and they can be sold as high-end volumes that look just as good as the releases coming from major publishers—or even better.

I fully intend to return to publishing physical books in the future, and I would prefer to do it here. But Substack must make that option possible. When that happens, I will happily join the ranks of those releasing self-published books. I suspect this will be more profitable for all parties, and also more liberating.

I wouldn’t even be surprised if self-publishing achieves a total flip-flop from its previous role in the book economy. In the old days, authors started by self-publishing but later switched to traditional publishing after establishing their reputations. I now anticipate the reverse: Authors who start out with book deals from publishers will leave those intermediaries behind after they have proven their market value.

After all, who needs a publisher when you can do everything yourself—and capture most of the profits (previously retained by the publishing house)?

This is the next revolution in the book business (and the music business, by the way). It would be nice if the revolution happened here on Substack. But no matter what this platform decides, this shift will happen—and for the better.

So let’s thank Montaigne. We can learn from his example. But, even better, we can take his individualist approach into the 21st century—and really shake things up.

Links 3/23/26

Links for you. Science:

Dynamics of natural selection preceding human viral epidemics and pandemics
NIH Says It Will No Longer Recognize the Research Fellows’ Union
ACIP To Discuss COVID ‘Vaccine Injuries’ Next Month, Despite That Not Being In Its Purview
Delays in awards and funding calls worry NIH-funded researchers
Publisher demands $500 from impersonated author to retract paper
A Machine Learning Framework for Serogroup Classification of pathogenic species of Leptospira Based on rfb Locus Profiles

Other:

Gullible, Cynical America: The trouble with believing anything and nothing at the same time
Police body cam footage shows DOGE knew Institute of Peace was private property during raid
The Great Crime Decline Is Happening All Across the Country. Even cities with understaffed police departments have made record gains.
It Wasn’t Fascism All Along
Maine’s catch of lobster declines again as high costs and climate change impact industry
The Right Is Now ‘Transvestigating’ Erika Kirk And Sydney Sweeney
He’s invested billions in Boston. Now one big real estate investor is hitting the brakes. Here’s why.
How Are Things In Venezuela
The real reason why Kristi Noem’s cuckold husband stayed married to her through Corey Lewandowski ‘humiliation’
Texas primary shows that MAGA loves a villain
Iran, Benghazi, And The Age Of Partisan Everything. Republicans wrote the rules, Democrats should play by them.
Trump’s new plan for Iran doomed to backfire
Donald Trump Can’t Even Pretend To Explain The Plan For Iran
Trump and Hegseth are writing their own rules of war
Rebecca Solnit Says the Left’s Next Hero Is Already Here
Pardon Industry Offers Rich Offenders a Path to Trump. One inmate paid lobbyists and lawyers with ties to the president’s team and walked free. Others are following his blueprint, but it is not always clear who can deliver.
As Operation Metro Surge recedes, concern grows over tactics of Twin Cities bounty hunters
Trump’s Fantasy Is Crashing Down
A Helpful Explainer Of Kansas’s Lunatic Anti-Trans Law
Pressure
Fuzzy memories and hard facts: An SC accuser’s claims against Epstein, Trump examined
Donald Trump’s Presidency Is in Free Fall
On the turning away from Trump
The Neo solves Apple’s embarrassment
The next redistricting battle might be who is counted in state legislative districts
Trump bought Netflix and Warner Bros bonds at height of bidding war with Paramount
Congress Is Betraying America’s Founders by Ceding Power to Trump
Trump Press Sec Goes Full Cult as Polls Take Brutal Turn
When DOGE Unleashed ChatGPT on the Humanities
Trump’s plan to turn DC into Mar-a-Lago 2.0 hits a snag

What should I ask David Baszucki?

Yes, I will be doing a Conversation with him. From Wikipedia:

David Brent Baszucki (/bəˈzuːki/ buh-ZOO-ki; born January 20, 1963) is a Canadian-born American entrepreneur, engineer, and software developer. He is best known as the co-founder and CEO of Roblox Corporation. He co-founded and was the CEO of Knowledge Revolution, which was acquired by MSC Software in December 1998.

On Roblox:

Roblox (/ˈroʊblɒks/ ROH-bloks) is an online game platform and game creation system developed by Roblox Corporation that allows users to program and play games created by themselves or other users. It was created by David Baszucki and Erik Cassel in 2004, and released to the public in 2006. As of February 2025, the platform has reported an average of 85.3 million daily active users. According to the company, their monthly player base includes half of all American children under the age of 16.

So what should I ask him?

The post What should I ask David Baszucki? appeared first on Marginal REVOLUTION.


Inflation Timeline

Depending on what corners of the internet you hang out on, 'regular' may at times awkwardly coexist with 'sexy.'

Denmark desperately needs more inequality

The Danish election is tomorrow. One of the central themes in the incumbent campaign has been a proposed wealth tax. The fig leaf for this proposal was "smaller classrooms in the early grades", but that quickly fell off, and the debate centered on "inequality". And it's true that inequality is a problem in Denmark: There's not nearly enough!

I know that sounds sacrilegious. Even most of the business-friendly press and parties in Denmark dance around this topic. Which makes political sense because the word "inequality" leads most people to think of poverty and destitution. But that's not the reality in the little kingdom that could.

Denmark has an enormous state apparatus (half of GDP and a third of all workers!) that offers equal access to everything from health care to education and a million programs in between. It could surely be slimmed and trimmed, but on the whole, it works remarkably well. The average Dane is incredibly well cared for by any international standard (high-trust society, hurray!).

By those same standards, it's the 8th most equal country in the world on income, as measured by the Gini coefficient (0.28). But this is where the numbers start spellbinding the debate. Because the Danish Gini coefficient perversely "degrades" whenever new businesses succeed: successful founders and high-paid employees earning incomes above the median "worsen" measured inequality.

This is obviously nonsense. When the pie gets bigger, it gets better for all, as long as nobody is robbed of their existing slice.  Denmark should clearly want new successful businesses! It should love to see founders reap big rewards when the risks pay off. It should celebrate early employees making fortunes on stock grants. But all too often, it just doesn't.
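The measurement quirk behind this argument can be made concrete. Using the common mean-absolute-difference definition of the Gini coefficient, a toy Python calculation (the figures are hypothetical, not Danish data) shows inequality "worsening" when one founder strikes it rich even though nobody else loses a krone:

```python
def gini(incomes):
    """Gini coefficient: mean absolute difference across all pairs,
    divided by twice the mean income. 0 means perfect equality."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(a - b) for a in incomes for b in incomes) / (n * n)
    return mad / (2 * mean)

# Ten people each earning 100 (arbitrary units): perfect equality.
equal = [100] * 10
print(gini(equal))                   # 0.0

# One founder's income jumps to 1000; the other nine are unchanged.
after_exit = [100] * 9 + [1000]
print(round(gini(after_exit), 3))    # 0.426
```

Total income grew from 1,000 to 1,900 and no one's slice shrank, yet the measured Gini jumped from 0.00 to roughly 0.43. That is the sense in which a successful startup "degrades" the statistic.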

Just to put it on a pin: Danes hate flashy cars with a passion that stretches back much further than the current green excuses. But buying a $300,000 Ferrari in Denmark is one of the most patriotic things you can possibly do! You'll end up paying almost three times the price for the privilege, and sending two-thirds of that to the treasury in taxes. Truly a contribution to the common cause worthy of admiration, not scorn!

But because the debate around inequality is anchored in a fixed-pie paradigm, scorn is all you're likely to get. Anyone who does well in Denmark is immediately suspected of having succeeded at the expense of others. Probably through some form of nefarious exploitation, even if we can't prove what?! There is a core national politics of grievance and envy.

But, however human that may be, the future progress and prosperity of the country depends on rejecting this zero-sum delusional dogma. The Danish economy is currently doing well compared to the rest of the EU, but it's dangerously dependent on a handful of vintage corporations pulling the bulk of the load.

This simply has to change if the Danes wish to retain their high standards of living going forward. No corporation lasts forever. Novo Nordisk was Europe's most valuable company at the start of last year; now it's worth half that, and is out of the top ten. And who knows what the closing of the Strait of Hormuz will do to Maersk. These two companies alone represent roughly a quarter of all Denmark's exports!

Meanwhile, new business formation just hit an all-time low. And only a tiny portion of the big employers in Denmark were created in the last thirty years. And thus, almost all the wealth that funds the highly-prized welfare state is coming from really old companies. Many of them over a hundred years old.

This is wonderful in many ways. The Danes should be rightfully proud to host Maersk (1904), Novo (1923), Vestas (1945), Lego (1932), and other international heavyweights. But the country can't rely on this aging corporate vintage to bear fruit forever.

Tomorrow needs to be tended to by planting new seeds. New companies. New growth. New capital. And that's just not going to happen if the Danish state declares itself at war with capital formation or accumulation. It should be so lucky to have more rich people, with more capital, and the talent to deploy it toward a better, shared future (or spend it on heavily-taxed Ferraris!).

The ballot boxes open tomorrow morning. It's predicted to be a close one. Fingers crossed for a prosperous choice.

A unique NASA satellite is falling out of orbit—this team is trying to rescue it

BROOMFIELD, Colorado—One of NASA's oldest astronomy missions, the Neil Gehrels Swift Observatory, has been out of action for more than a month as scientists await the arrival of a pioneering robotic rescue mission.

The 21-year-old spacecraft is falling out of orbit, and NASA officials believe it's worth saving—for the right price. Swift is not a flagship astronomy mission like Hubble or Webb, so there's no talk of sending astronauts or spending hundreds of millions of dollars on a rescue expedition. Hubble was upgraded by five space shuttle missions, and billionaire and commercial astronaut Jared Isaacman—now NASA's administrator—proposed a privately funded mission to service Hubble in 2022, but the agency rejected the idea.

Swift may be a more suitable target for a first-of-a-kind commercial rescue mission. It cost roughly $500 million (adjusted for inflation) to build, launch, and operate, significantly less than Hubble, so the consequences of a botched rescue would be far less severe. Last September, NASA awarded a company named Katalyst Space Technologies a $30 million contract to rapidly build and launch a commercial satellite to stabilize Swift's orbit and extend its mission.


Microsoft Xbox One Hacked

It’s an impressive feat, over a decade after the box was released:

Since reset glitching wasn’t possible, Gaasedelen thought some voltage glitching could do the trick. So, instead of tinkering with the system reset pin(s), the hacker targeted the momentary collapse of the CPU voltage rail. This was quite a feat, as Gaasedelen couldn’t ‘see’ into the Xbox One, so he had to develop new hardware introspection tools.

Eventually, the Bliss exploit was formulated, where two precise voltage glitches were made to land in succession. One skipped the loop where the ARM Cortex memory protection was set up. Then the memcpy operation was targeted during the header read, allowing him to jump to the attacker-controlled data.

As a hardware attack against the boot ROM in silicon, Gaasedelen says the attack is unpatchable. Thus it is a complete compromise of the console, allowing for loading unsigned code at every level, including the Hypervisor and OS. Moreover, Bliss allows access to the security processor so games, firmware, and so on can be decrypted.
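The two-glitch requirement can be illustrated with a toy software model. This is entirely hypothetical: real fault injection happens in hardware timing, and none of these names or values come from the actual exploit. The point is simply that each glitch alone fails, while both landing in succession reaches attacker-controlled code:

```python
def boot(glitched_steps):
    """Simplified boot sequence. glitched_steps is a set of step numbers
    whose guard instruction a voltage glitch causes the CPU to skip."""
    protection_enabled = False

    # Step 0: the loop that would normally configure memory protection.
    if 0 not in glitched_steps:
        protection_enabled = True

    # Step 1: length check guarding the header memcpy. If it runs, an
    # oversized (attacker-supplied) header is rejected and boot halts.
    header_len, max_len = 0x200, 0x100
    if 1 not in glitched_steps:
        if header_len > max_len:
            return "halt: bad header"

    # With the check skipped, the oversized copy clobbers a code
    # pointer; without memory protection, execution lands in it.
    if not protection_enabled:
        return "attacker code running"
    return "normal boot"

print(boot(set()))      # no glitch: the bad header is rejected
print(boot({0}))        # first glitch alone: the check still catches it
print(boot({1}))        # second glitch alone: protection contains it
print(boot({0, 1}))     # both land in succession: full compromise
```

Only the `{0, 1}` case succeeds, mirroring the description above: the first glitch disables the protection setup, and the second corrupts the header copy so execution jumps to attacker data.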

Monday assorted links

1. Arbitrage?

2. On Christopher Sims.

3. Minimum wage hikes boost restaurant food prices.

4. “These findings suggest that new work serves as a countervailing force to automation-driven job displacement not merely by creating additional employment, but also by generating new domains of human expertise that command market premiums.”

5. Martin Heidegger clip.  Not impressive to me.

6. Canvas unrolls AI teaching agent.

7. “This essay has tried to frame what we need to build around AI.”

The post Monday assorted links appeared first on Marginal REVOLUTION.


What would it look like to leave planet Earth?


Impressions

March 23, 2026

When I take pictures, I try to stay away from traditional plane porn (of the sort that dominates on Instagram). I like to think my shots — the better of them, at any rate — are a little more offbeat or impressionistic. Case in point, these three, which rate among my favorites.

Top and bottom: A psychedelic flood of blur and color, here’s the world as seen through an airplane window covered in de-icing fluid. Those red and white pinpoints in the first one are, believe it or not, the distant lights of New York City.

Center: Two Skies. The underside of a jetliner tail juxtaposed with an afternoon sky above Somerville, Massachusetts.

Related Story:
THE TEXTURES SERIES

The post Impressions appeared first on AskThePilot.com.

A Deep Dive on ‘The Map Is Not the Territory’

In another side-quest from his current work in progress, Matthew Edney goes down a deep rabbit hole trying to work out a specific point related to Alfred Korzybski’s famous adage that “the map is not… More

Pre-Order “Data Are Made, Not Found”!

There’s something uniquely demoralizing about editing and editing and editing a book manuscript. The words all start to blur together and you start thinking that every sentence is crap, no one will ever want to read this, why bother completing the book. I was definitely in this state. And then… my publisher sent me the book cover and I squealed for joy at just how lovely it is. And it gave me hope. Check out this beauty:

And yes… that’s a wobbly Jenga tower comprised of pieces made from census documentation cuz one of the core arguments in the book is that we’re living in a world of “Jenga Politics” where different actors are pulling out pieces of our administrative infrastructure and putting pressure on top. Civil servants are exhausted, but they’re trying to keep the tower from falling.

And now that I’ve seen the beautiful cover, I can’t wait for you to read this book! And to come celebrate with me! I am starting to build a book tour so hopefully I will come to a city near you. But, in the meantime, here are some of the fun things I get to share:

  • My book won the J. Anthony Lukas Work-in-Progress Prize! Thank you to the kind people at Columbia Journalism School and the Nieman Foundation for Journalism at Harvard University for giving me wind beneath my wings!
  • Pre-Order the book! And if you order it from the University of Chicago Press website, you can save 30% by using the code UCPNEW. But feel free to order from your local bookstore or wherever else you want!
  • DC folks: Save the Date (9/25)! I am ecstatic that Politics & Prose is hosting me at their Wharf venue on September 25 at 7PM (the day after the book goes on sale!). I hope lots of folks will come out to celebrate! There will be books available! And a signing!
  • Virtual Folks: On the eve of the book launch (9/23), Dan Bouk and I will discuss the book in a virtual event hosted by Data & Society. More info on that will come shortly, but make sure to sign up to the D&S newsletter!

Moogle Gaps

Moogle Gaps, for when you want to be misdirected. TrendWatching: “Whipped up by two Australian ex-Droga5 creatives, Paul Meates and Henry Kimber, Moogle Gaps is an anti-wayfinder. Users input their navigational query as they normally would, but instead… More

How Google Maps Disappears Restaurants from Search Results

For the Guardian’s “It’s Complicated” feature, Josh Toussaint-Strauss looks at how great restaurants end up being invisible when you search for a place to eat on Google Maps. He talks with data scientist Lauren Leek,… More

The innovative supply chain of illegal drugs--even in prisons

 Strategy sets are big, so we’re not going to be able to end illegal drug use by spraying defoliants on fields of poppies, or arresting dealers, or attacking speedboats. If we can’t stop the spread of drugs even in prisons, the chance of purely police/military solutions for stopping drugs on the streets isn’t looking good.

The NYT has the story:

No Pills or Needles, Just Paper: How Deadly Drugs Are Changing
Lab-made drugs soaked into the pages of letters, books and even legal documents are being smuggled behind bars, killing inmates and frustrating investigators. 
By Azam Ahmed and Matt Richtel 

" Today, fringe chemists are ushering in a total transformation of the illicit drug market. Operating from clandestine labs, they are churning out a dizzying array of synthetic drugs — not only fentanyl, but also hazardous new tranquilizers, stimulants and complex cannabinoids. Sometimes, several unknown drugs appear on the streets in a single month. Many are so new they are not even illegal yet.

"Nearly all of them are harder to trace than conventional drugs, less expensive to produce, much more potent and far deadlier, according to scientists and law enforcement officials across the globe.

...

"After that first death in the Cook County jail in January 2023, it took months for Mr. Wilks’s team to realize that these mysterious new drugs were being sprayed onto the pages of the most innocuous-seeming items: books, letters, documents, even photographs.

"The sheets of drugs, worth thousands of dollars a page, were being torn into strips and smoked by inmates 

...

"But the traffickers were cunning. When regular mail got checked more closely, smugglers began lacing legal correspondence. Soon, officers discovered sealed packages that looked as if they had been shipped directly from Amazon, with drug-soaked books inside. "

############

It’s hard to shut down markets that people want to participate in.
Someone should write a book about this. 

Please read this.

So I’m no longer on Twitter (praise Jesus), but a writer named Gandalv posted an absolutely remarkable little essay on his thoughts after Donald Trump greeted Robert Mueller’s death with, “Good.”

Read it.

Trust me.

•••

Robert Mueller died last night.

He was 81 years old. He had a wife who loved him for sixty years. He had two daughters, one of whom he met for the first time in Hawaii, in 1969, on a few hours of military leave, before he got back on the plane and returned to Vietnam. He had grandchildren. He had a faith he practiced quietly, without performance. He had, in the way of men who have seen real things and survived them, a quality that is increasingly rare and increasingly mocked in the country he spent his life serving.

He had integrity.

And tonight the President of the United States said good!

I have been sitting with that word for hours now. Good. One syllable. The thing you say when the coffee is hot or the traffic is moving. The thing a man who has never had to bury anyone, never had to sit in the specific silence of a room where someone is newly absent, reaches for when he wants the world to know he is satisfied. Good. The daughters are crying and the wife is alone in the house and good.

I want to speak directly to the Americans reading this. Not the political Americans. Just the human ones. The ones who have lost a father. The ones who know what it is to be in that first hour, when you keep forgetting and then remembering again, when ordinary objects become unbearable, when the world outside the window seems obscene in its indifference. I want to ask you, simply, to hold that feeling for a moment, and then to understand that the man you elected looked at it and typed a single word.

Good.

This is not a country having a bad day. I need you to understand that. Countries have bad days. Elections go wrong. Leaders disappoint. Institutions bend. But there is a different thing, a rarer and more terrible thing, that happens when the moral center of a place simply gives way. Not dramatically. Not with a single catastrophic event. But quietly, in increments, until one evening a president celebrates the death of an old man whose family is still warm with grief, and enough people find it acceptable that it becomes the weather. Just the weather.

That is what is happening. That is what has happened.

The world knows. From Tokyo to Oslo, from London to Buenos Aires, people are not angry at America tonight. Anger would mean there was still something to fight for, some remaining faith to be betrayed. What I see, in the reactions from everywhere that is not here, is something older and sadder than anger. It is the look people get when they have waited a long time for someone they love to find their way back, and have finally understood that they are not coming.

America is being grieved. Past tense, almost. The idea of it. The thing it represented to people who had nothing else to believe in, who came here with everything they owned in a single bag because they had heard, somehow, across an ocean, that this was the place where decency was written into the walls. That idea is not resting. It is not suspended. It is being buried, in real time, with 7,450 likes before dinner.

And the church said nothing.

Seventy million people have decided that this man, this specific man who has cheated everyone he has ever made a promise to, who has mocked the disabled and the dead and the grieving, who celebrated tonight while a family wept, is an instrument of God. The pastors who made that bargain did not just trade away their credibility. They traded away the thing that made them worth listening to in the first place. The cross they carry now is a costume. The faith they preach is a loyalty oath with scripture attached. When the history of American Christianity is written, this will be the chapter they skip at seminary.

Now I want to talk about the men who stand next to him.

Because this is the part that actually breaks my heart.

JD Vance is not a bad man. I have to say that, because it is true, and because the truth matters even now, especially now. Marco Rubio is not a bad man. Lindsey Graham is not a bad man. They are fools, perhaps, but not wicked. These are men with mothers who raised them and children who love them and friends who remember who they were before all of this. They are not monsters. Monsters are simple. Monsters do not cost you anything emotionally because there is nothing in them to mourn.

These men are something more painful than monsters.

They are men who knew better, and know better still, and will get up tomorrow and do it again.

Every small compromise they made had a reason. Every moment they looked the other way had a justification that sounded, at the time, almost reasonable. And now they have arrived here, at a place where a president celebrates the death of an old man and they will find a way, on television, to say nothing that means anything, and they will go home to houses where children who carry their name are waiting, and they will say goodnight, and they will say nothing.

Their oldest friends are watching. The ones who knew Rubio when he still believed in something. Who knew Graham when he said, out loud, on the record, that this exact man would destroy the Republican Party and deserve it. Who sat next to Vance and thought here is someone worth knowing. Those friends are not angry tonight. They moved through anger a long time ago. What they feel now is the quiet, irrecoverable sadness of watching someone disappear while still being present. Of watching a person they loved choose, again and again, to become less.

That is what cowardice costs. Not the coward. The people who loved him.

And in the comments tonight, the followers celebrate. People who ten years ago brought casseroles to grieving neighbours. Who stood in the rain at gravesides and meant the words they said. Who told their children that we do not speak ill of the dead because the dead were someone’s beloved. Those people are tonight typing gleeful things about a man whose daughters are not yet done crying. And they feel clean doing it. Righteous. Because somewhere along the way the thing they were given in exchange for their decency was the feeling of belonging to something, and that feeling is very hard to give up even when you can no longer remember what you gave for it.

When Trump is gone, they will still be here.

Standing in the silence where the noise used to be. Without the permission the crowd gave them. Without the pastor who told them their cruelty was holy. They will be alone with what they said and what they cheered and what they chose to become, and there will be no one left to tell them it was righteous.

That morning is coming.

Robert Mueller flew across the Pacific on military leave to hold his newborn daughter for a few hours before returning to the war. He came home. He buried his dead with honour. He served presidents of both parties because he understood that the institution was larger than any one man. He told his grandchildren that a lie is the worst thing a person can do, that a reputation once lost cannot be recovered, and he lived that, every day, in the quiet and unglamorous way of people who actually believe what they say.

He was the kind of American the world used to point to when it needed to believe the story was true.

He died last night. His wife is alone in their house in Georgetown. His daughters are learning what the world is without him in it. And somewhere in the particular hush that falls over a family in the first hours of loss, the most powerful man and the biggest loser on earth sent a message to say he was glad.

The world that loved what America was supposed to be is grieving tonight. Not for Robert Mueller only. For the country that produced him and then became this. For the distance between what was promised and what was delivered. For the suspicion, growing quieter and more certain with each passing month, that the America people believed in was always partly a story, and the story is over now, and there is nothing yet to replace it.

That is all it needed to be.

A man died. His family is broken open with grief.

That is all it needed to be.

Instead the President said good.

And the country that once stood for something looked away.

🇺🇸

Gandalv / @Microinteracti1

March 22, 2026

President Donald J. Trump‘s behavior is increasingly erratic as he lashes out at those he perceives to be enemies. On Thursday he defended his failure to inform allies and partners about his February 28 attack on Iran by telling a Japanese reporter he wanted the element of surprise. “Who knows better about surprise than Japan? Why didn’t you tell me about Pearl Harbor, OK?” Trump said, referring to the Japanese attack on Hawaii that took place on December 7, 1941, five years before Trump was born. Sitting beside Trump, the prime minister of Japan, Sanae Takaichi, appeared taken aback. Japan is a key Pacific ally of the United States.

The president is under enormous pressure, as his war with Iran prompted Iranian officials to close the Strait of Hormuz, through which about 20% of the world’s oil flows. This outcome was expected by previous presidents, but Trump seemed to think he could avoid it and now is stuck without an easy solution. As former defense secretary and Central Intelligence Agency director Leon Panetta told David Smith of The Guardian, “[I]f there was an escape here for Trump, it would be to declare victory and it’s over and we’ve been able to be successful in all of our military targets. The problem is he can declare victory all he wants but, if he doesn’t get the ceasefire, he’s got nothing. And he’s not going to get a ceasefire as long as Iran is holding the gun of the strait of Hormuz against his head.”

“He tends to be naive about how things can happen,” Panetta told Smith. “If he says it and keeps saying it, there’s always a hope that what he says will come true. But that’s what kids do. It’s not what presidents do.”

In a frantic attempt to lower oil prices, the administration on Friday lifted sanctions on Iranian oil currently at sea. Iranian oil has been sanctioned since 1979. The lifting of sanctions will enable Iran to sell about 140 million barrels of oil, worth about $14 billion, including to the United States and to China.

National security scholar Phil Gordon, who served as the White House coordinator for the Middle East, North Africa, and the Persian Gulf Region during the Obama administration, posted: “When Obama sent Iran $400m + $1.3bn in interest in 2016 Trump called it ‘insane’ and he and others spent a decade mocking the idea of ‘pallets of cash’ even though it was Iran’s own money, American prisoners were released, courts were likely to require the U.S. payment, and Iran had just agreed to significant and verified reductions and restrictions on its nuclear program for 15+ years.

“Now Trump is giving Iran up to ten times that amount of revenue—one of the most significant measures of sanctions relief provided to the Islamic Republic since its founding—in exchange for marginal and temporary relief from the big increase in oil prices his actions have caused, without any concessions from Tehran, and even as Iran continues to target the United States, its allies, and world oil supplies. No way to read as anything other than desperate recognition of the situation Trump’s own actions have created and the lack of available alternatives for dealing with it.”

On Meet the Press today, Senator Chris Murphy (D-CT) said: “We’re gonna give Iran $14 billion to fund this war with the United States? We’re gonna give Russia billions of dollars to fund their war with Ukraine? We’re literally putting money into the pockets of the very nations that we are fighting right now. We’ve never seen this level of incompetence in war-making in this country’s history.”

Trump is also under pressure over the Department of Homeland Security (DHS), which has been mired in news stories about corruption since former secretary Kristi Noem stepped down. Yesterday morning, Trump appeared to try to change the momentum of those stories by going on the offensive against Democrats.

New scrutiny of the department has brought renewed attention to the November 2025 ProPublica report by Justin Elliott, Joshua Kaplan, and Alex Mierjeski that DHS had awarded a $220 million contract for a taxpayer-funded ad campaign to cronies, getting around transparency laws by awarding the contract to a small company that then subcontracted the deal to friends of Noem and her associate Corey Lewandowski. Of the contract, Trump allegedly said: “Corey made out on that one.”

On Thursday, March 19, Julia Ainsley, Matt Dixon, Jonathan Allen, and Laura Strickler of NBC News reported that Lewandowski told George Zoley, the head of the giant private prison company GEO Group, that he expected to be paid for steering contracts to GEO Group. Zoley said he declined initially but later offered to put Lewandowski on retainer with a consulting fee. But, sources told the journalists, Lewandowski “wanted payments—what some people would call a success fee” based on awarded contracts. When Zoley refused, GEO Group lost out on contracts. A senior DHS official told the journalists Lewandowski had told him not to award any more contracts to GEO Group.

Lewandowski’s official title was that of a “special government employee,” with a temporary appointment that permitted him to work only 130 days in a year, but DHS officials told the journalists that Lewandowski had broad authority over contracts in the department and was referred to as “chief.” He allegedly sidestepped the limits of his appointment by going into the building accompanying Noem, and thus without swiping in using his badge. Lewandowski has denied any wrongdoing.

Yesterday Hamed Aleaziz, Alexandra Berzon, Nicholas Nehamas, Zolan Kanno-Youngs, and Tyler Pager of the New York Times reported on the extraordinary power Lewandowski had in DHS under Noem, explaining that he held meetings without her present, sat in on classified briefings, read a version of the highly classified President’s Daily Brief, and issued orders as he spearheaded detention and deportation of migrants. In addition to approving government contracts that worried officials, Lewandowski helped put Greg Bovino, a midlevel Border Patrol leader, into a senior position that gave him national power.

At 11:34 yesterday morning, Trump tried to turn the DHS story into one about the Democrats, posting: “If the Radical Left Democrats don’t immediately sign an agreement to let our Country, in particular, our Airports, be FREE and SAFE again, I will move our brilliant and patriotic ICE Agents to the Airports where they will do Security like no one has ever seen before, including the immediate arrest of all Illegal Immigrants who have come into our Country, with heavy emphasis on those from Somalia, who have totally destroyed, with the approval of a corrupt Governor, Attorney General, and Congresswoman, Ilhan Omar, the once Great State of Minnesota. I look forward to seeing ICE in action at our Airports. MAKE AMERICA GREAT AGAIN! President DONALD J. TRUMP”

This appeared to be a threat to use Immigration and Customs Enforcement agents, whom Trump appears to see as his own private army, to hurt Democrats by pinning the long lines in airports on the Democrats’ refusal to fund DHS, which means that Transportation Security Administration (TSA) agents aren’t being paid. But Democrats have repeatedly proposed funding every agency in DHS other than ICE and Border Patrol, leaving those out until their abuses under Noem, Lewandowski, and Bovino have been addressed. Republicans have refused that funding unless DHS requests are funded in full at the same time.

Under Trump, ICE has become the highest-funded law enforcement agency in the U.S., with an annual budget higher than those of all other federal law enforcement agencies combined. While ICE budgets previously had hovered around $6 billion, the Republicans’ One Big Beautiful Bill Act gave DHS $85 billion to fund it through September 30, 2029. What is outstanding now is its base budget of around $10 billion. Because ICE agents are considered “essential” workers, they, unlike TSA agents, are getting paid during the funding fight.

Today the administration announced ICE agents will take the place of some TSA agents, although as the former national security officials at The Steady State note, the legality of moving ICE agents into TSA positions isn’t clear. Tonight Trump admitted he is not interested in any deal with the Democrats to fund the Department of Homeland Security unless Democrats also agree to the SAVE America Act, which would require proof of citizenship to register to vote and to vote, and which is widely understood to be a measure designed to suppress voting. Trump also insists the measure include an end to mail-in voting and an attack on transgender Americans.

Then, at 1:26 yesterday afternoon, Trump responded to the death of 81-year-old special counsel Robert Mueller by posting: “Robert Mueller just died. Good, I’m glad he’s dead. He can no longer hurt innocent people! President DONALD J. TRUMP.”

As Josh Meyer of USA Today reported, Mueller was a lifelong public servant. He served in combat as a Marine Corps officer in the Vietnam War, during which he was wounded. “I consider myself exceptionally lucky to have made it out of Vietnam,” Mueller said years later. “There were many—many—who did not. And perhaps because I did survive Vietnam, I have always felt compelled to contribute.” He became a federal prosecutor covering organized crime, terrorism, and public corruption. A conservative Republican nominated by President George W. Bush to direct the Federal Bureau of Investigation (FBI), he took office just a week before 9/11 and proceeded to reshape the FBI’s mission from fighting crime to an emphasis on counterterrorism and intelligence.

In 2017, Deputy Attorney General Rod Rosenstein appointed Mueller special counsel for the Department of Justice to investigate Russian interference in the 2016 election. Mueller’s team filed charges against Trump’s former campaign chair Paul Manafort and co-chair Rick Gates for conspiracy to launder money, violating the Foreign Agents Registration Act, and conspiracy against the United States, and reached a plea agreement with Trump’s former national security advisor Michael Flynn, who pleaded guilty to lying to the FBI about his contacts with Russian operative and ambassador Sergey Kislyak. Mueller’s team also indicted thirteen Russians and three Russian companies involved in pushing Russian propaganda to American voters. Ultimately the team indicted thirty-four people, including six of Trump’s former advisors, five of whom pleaded guilty.

Mueller’s final report detailed the efforts of Russian operatives to help Trump and hurt Democratic candidate Hillary Clinton, saying Russia launched “multiple, systematic efforts” to interfere with the election. Mueller said he had not been able to consider Trump’s guilt because Justice Department policy prohibits the prosecution of a sitting president, but added: “If we had confidence that the president clearly did not commit a crime, we would have said that.” He refused to say his report “exonerated” Trump, as Trump’s supporters insisted.

A later report by the Republican-led Senate Intelligence Committee agreed that members of Trump’s 2016 campaign, led by Manafort, worked with Russian operatives to help Trump get elected.

Not only is Robert Mueller getting under Trump’s skin, so, clearly, is his own failure to reopen the Strait of Hormuz. At 7:44 last night, he posted: “If Iran doesn’t FULLY OPEN, WITHOUT THREAT, the Strait of Hormuz, within 48 HOURS from this exact point in time, the United States of America will hit and obliterate their various POWER PLANTS, STARTING WITH THE BIGGEST ONE FIRST! Thank you for your attention to this matter. President DONALD J. TRUMP.”

In a conversation with Anne McElvoy of Politico on Thursday, United Nations Secretary-General António Guterres noted that attacks on civilian energy infrastructure are war crimes.

Yesterday Julie K. Brown of The Epstein Files, whose work digging into the cover-up of the Epstein story for the Miami Herald has been instrumental in bringing the scandal to light, and her colleague Claire Healy reported that after sex offender Jeffrey Epstein was found dead in his prison cell on August 10, 2019, a corrections officer called the FBI’s Threat Operations Center saying the officer “found it suspicious that an after-action team charged with investigation would be shredding huge amounts of paperwork” while FBI agents were in the building.

An inmate who helped shred documents told guards: “They are shredding everything,” and an assistant federal prosecutor noted the destruction or misplacing of relevant records. Another corrections officer wrote to the FBI on August 19 about an unusual amount of shredding and disposal, and suggested: “you may want to investigate why [Bureau of Prisons] employees are destroying records.”

This morning, at 8:24, Trump posted: “Now with the death of Iran, the greatest enemy America has is the Radical Left, Highly Incompetent, Democrat Party! Thank you for your attention to this matter. President DJT”

Tonight, just before midnight, he posted: “PEACE THROUGH STRENGTH, TO PUT IT MILDLY!!!”

Notes:

https://www.cnbc.com/2026/03/20/trump-pearl-harbor-japan-takaichi-iran-war.html

https://www.politico.com/news/2026/03/22/surprise-embarrassment-unease-japan-pearl-harbor-00839369

https://www.yahoo.com/news/articles/fbi-warned-bags-documents-were-143207523.html

https://www.usatoday.com/story/news/politics/2026/03/21/what-to-know-about-former-fbi-chief-and-trump-foe-robert-mueller/89264548007/

https://www.propublica.org/article/kristi-noem-dhs-ad-campaign-strategy-group

https://www.nbcnews.com/news/us-news/dhs-contractors-told-white-house-officials-asked-pay-corey-lewandowski-rcna263744

https://www.nytimes.com/2026/03/21/us/politics/corey-lewandowski-noem-dhs.html

https://www.cnn.com/us/live-news/tsa-wait-times-government-shutdown-03-22-26

https://www.npr.org/2026/01/21/nx-s1-5674887/ice-budget-funding-congress-trump

https://www.theguardian.com/us-news/2026/mar/22/trump-iran-leon-panetta

https://www.theguardian.com/us-news/2026/mar/20/us-sanctions-iranian-oil

https://www.politico.eu/article/un-chief-guterres-reasonable-grounds-believe-war-crimes-happening-iran-war/

https://www.intelligence.senate.gov/wp-content/uploads/2024/08/sites-default-files-documents-report-volume5.pdf

X:

PhilGordonDC/status/2035346997343866924

Bluesky:

atrupar.com/post/3mhnsvsjc7y23

atrupar.com/post/3mhnlb2im5c2k

axidentaliberal.bsky.social/post/3mhltdap5wk2y

ronfilipkowski.bsky.social/post/3mhmdxe2ojs2o

ronfilipkowski.bsky.social/post/3mhlozz53zc2s

gillianbrockell.com/post/3mhlf5tyspc2m

thesteadystate.org/post/3mhokt7nrxk2t

eliothiggins.bsky.social/post/3mhnkq6hmwk2u

thetnholler.bsky.social/post/3mhp7lpz47c2x

josephpolitano.bsky.social/post/3mhp5zgdy7k2u

murray.senate.gov/post/3mhloteycsk2l

The 13th, 14th, and 15th Amendments

Oil versus Ice Cream

When Tyler and I were writing Modern Principles of Economics, we wanted examples that were modern, specific, and grounded in the real world. That has been a bit of a headache, because we have to update them with every new edition. Our biggest competitor uses the ice cream market as its central example and never has to revise. Smart! But for us, the extra work has been worth it.

We chose the oil market as our central example. Oil is always in the news, and it works really well across a wide range of textbook topics: the elasticity of demand and supply; oligopoly and cartels; the shutdown condition; shocks; expectations, speculation, and futures markets; and the macroeconomic implications of oil prices, which connect micro to macro.

Yes, keeping the examples current takes more work. But when a student sees that the price of crude has surged past $100 a barrel because Iran closed the Strait of Hormuz—choking off 20% of the world’s oil supply—they have the framework to understand what is happening. Supply shock, inelastic demand, expectations and speculation, the macroeconomic transmission to GDP—it’s all right there in the headlines. Try doing that with the ice cream market.

See the Invisible Hand. Understand Your World. It is not just our slogan. It’s our method.

The post Oil versus Ice Cream appeared first on Marginal REVOLUTION.


The city that wasted nothing

Ink painting of people in traditional attire engaging in activities on a wooden floor with various objects around them.

Edo, modern Tokyo, transformed from a city near ecological collapse to a thriving epicentre by creating a circular economy

- by Aeon Video

Watch on Aeon

An African philosophy

A building with yellow wall, hands in silhouette in foreground, and animals resting in the sun.

Lansana Keita rejected Eurocentric ideas, tracing the philosophical tradition back to African Kemet or ancient Egypt

- by Sanya Osha

Read on Aeon

PCGamer Article Performance Audit

Research: PCGamer Article Performance Audit

Stuart Breckenridge pointed out that PC Gamer Recommends RSS Readers in a 37MB Article That Just Keeps Downloading, highlighting a truly horrifying example of web bloat that added up to hundreds more megabytes thanks to auto-playing video ads. I decided to have my Claude Code for web instance, Rodney, investigate the page - prompt here.

Tags: web-performance, rodney

JavaScript Sandboxing Research

Research: JavaScript Sandboxing Research

Aaron Harper wrote about Node.js worker threads, which inspired me to run a research task to see if they might help with running JavaScript in a sandbox. Claude Code went way beyond my initial question and produced a comparison of isolated-vm, vm2, quickjs-emscripten, QuickJS-NG, ShadowRealm, and Deno Workers.

Tags: sandboxing, javascript, nodejs, claude-code

DNS Lookup

Tool: DNS Lookup

TIL that Cloudflare's 1.1.1.1 DNS service (and 1.1.1.2 and 1.1.1.3, which block malware and malware + adult content respectively) has a CORS-enabled JSON API, so I had Claude Code build me a UI for running DNS queries against all three of those resolvers.
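As a sketch of what such a UI calls under the hood: Cloudflare's DoH JSON interface accepts a GET to /dns-query with an accept: application/dns-json header. The snippet below is my own standard-library illustration, not the code from the post; the resolver IPs come from the post, and the endpoint shape from Cloudflare's documented JSON API.

```python
import json
import urllib.parse
import urllib.request

# The three resolvers mentioned in the post.
RESOLVERS = {
    "standard": "1.1.1.1",
    "malware-blocking": "1.1.1.2",
    "family": "1.1.1.3",  # malware + adult content blocking
}

def build_query_url(resolver_ip: str, name: str, record_type: str = "A") -> str:
    """Construct the dns-query URL for a given resolver and hostname."""
    params = urllib.parse.urlencode({"name": name, "type": record_type})
    return f"https://{resolver_ip}/dns-query?{params}"

def resolve(resolver_ip: str, name: str, record_type: str = "A") -> list[str]:
    """Perform the lookup and return the data field of each answer record."""
    req = urllib.request.Request(
        build_query_url(resolver_ip, name, record_type),
        headers={"accept": "application/dns-json"},  # ask for JSON, not DNS wire format
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return [answer["data"] for answer in payload.get("Answer", [])]

# e.g. resolve(RESOLVERS["family"], "example.com") to query the filtering resolver
```

Because the API is CORS-enabled, the browser-based UI can make the same fetch directly from client-side JavaScript with no proxy server.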

Tags: dns, cors, cloudflare

Merge State Visualizer

Tool: Merge State Visualizer

Bram Cohen wrote about his coherent vision for the future of version control using CRDTs, illustrated by 470 lines of Python.

I fed that Python (minus comments) into Claude and asked for an explanation, then had it use Pyodide to build me an interactive UI for seeing how the algorithms work.

Tags: vcs, pyodide, bram-cohen, crdt

Iranian Oil, Still Shipping

The willingness of the Trump administration to let Iran keep selling oil shows that we’re not willing to endure pain in a war that is all about enduring pain.

For more like this, see my YouTube channel

Transcript

If you want to understand how things are going in Iran and why it looks increasingly likely that the United States is going to lose this thing in every sense that matters, follow the oil money.

Hi, Paul Krugman with another not very happy update Sunday afternoon. Yesterday, Donald Trump threatened Iran with basically a massive war crime, saying that if they don’t open the Strait of Hormuz by 48 hours from the time of the post, which would be tomorrow, that he will order attacks on Iranian power plants, on civilian infrastructure, which is, you know, it is a war crime, not something that has never been done, not something that the United States has never done, but not something that you kind of just openly announce — that we’re going to try and terrorize you with this bombing campaign.

But at the same time, the United States is allowing Iran, and only Iran, to export oil, shipping it out through the Strait of Hormuz. The United States isn’t stopping other countries from doing it, but Iran is, and we are allowing them to grant safe passage only to ships that they approve going through that strait. Now that is wild. Sounds completely crazy. It’s not completely crazy, but what it is is it’s a demonstration of incredible weakness. Why would the United States allow Iranian oil to continue to be exported? Or shouldn’t we be trying to, you know, we’re at war with these people, and their revenue base depends basically on selling oil. So why are we allowing them to do that? It would be very easy as a matter of military force for the United States to just stop those oil exports.

And the answer is that, well, the two million barrels a day or something like that that Iran is managing to export are two million barrels a day of world oil supply. And we’re in a world in which the total supply to the world market is down substantially, down something like 10 million barrels or more per day, because of the closure of the straits to everybody but the Iranians. And the United States is afraid to worsen the shortage by stopping the Iranians from selling oil, presumably because the Trump administration is afraid of the political backlash from higher gas prices. It’s already frantic enough to threaten war crimes in order to try and get oil flowing and gas prices down. But they’re apparently frightened enough of gas prices that they’re willing to allow the enemy to keep making money selling oil in order to keep those prices somewhat lower than they would otherwise have been.

That’s an admission, implicitly, of enormous weakness. It’s an admission that the Trump administration is not willing to accept sustained pain as part of this war. They’re willing to drop bombs and all of that, but they’re not willing to accept economic pain in the United States, even enough to shut off the revenue flow to the Iranian government.

And this war is fundamentally about who can stand the pain. It’s the United States doing lots of damage to Iran, but the Iranian government seems to think it can handle that. And the Iranians trying to inflict enough pain through hurting the world’s supply of oil that the United States ceases and desists.

And given the behavior, who would you bet on in this situation? So this is looking, I don’t want to say this, right? I mean, I do not want to see, obviously for domestic political reasons, I don’t want a Trump victory parade.

But a world in which the United States loses this war is going to be really a very dangerous world for all of us. But I’m afraid that that’s the direction we’re heading.

Have a nice rest of your weekend.

How to Burn Less Oil

Furious Iran Bombards Saudi Oil Refineries In Retaliation For Israel-US Gas Facility Attack

The world economy must find a way to function while burning less oil.

That may sound like a call to action, but in the short run it’s simply a statement of fact. Until the Iran war began, 20 percent of the world’s oil supply was shipped through the Strait of Hormuz. Barring a deal with Iran, which is nowhere in sight, or military action that eliminates almost all threats to shipping — which is very hard to achieve in this modern age of drone warfare — there is simply going to be less oil available for months, maybe even years, to come.

And in the longer run, we’re now having an object lesson in the strategic risks of depending so much on oil — risks that add to the already compelling environmental case for moving away from fossil fuels in general.

But how hard will it be to reduce our dependence on the black stuff? Can the world economy prosper while burning much less oil than it has in the past?

The answer depends on the time frame. Even with oil costing $100 a barrel — indeed, even if it goes to $150 — it will be very hard to reduce overall oil consumption quickly.

That’s because in the short run — which means several years — the only way to consume less oil is for people to change their behavior, mainly by driving less. So to induce a major decline in oil consumption, prices would need to go high enough that people turn to carpooling, working from home, or taking the bus where that’s an available option (which for most Americans it isn’t). Or, in the worst-case scenario, oil prices will have to reduce consumers’ purchasing power so much that the economy falls into a recession, which would among other things reduce the demand for oil.
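The short-run arithmetic here can be made concrete with a back-of-the-envelope sketch. This is my own illustration, not from the post: it assumes a constant-elasticity demand curve Q = k · P^(−ε), and the 0.1 value for short-run oil demand elasticity is a commonly cited rough estimate, used purely for illustration.

```python
# Why inelastic short-run demand means huge price spikes.
# Assumes demand of the form Q = k * P^(-epsilon) (an assumption for
# illustration, not a claim about the author's model).

def price_multiplier(supply_cut: float, elasticity: float) -> float:
    """Price ratio needed for quantity demanded to fall by `supply_cut`.

    With Q proportional to P^(-elasticity), cutting Q to (1 - supply_cut)
    of its old level requires P_new / P_old = (1 - supply_cut) ** (-1 / elasticity).
    """
    return (1 - supply_cut) ** (-1 / elasticity)

# A 10% loss of supply with a short-run elasticity of 0.1:
print(round(price_multiplier(0.10, 0.1), 2))  # → 2.87: prices nearly triple
```

With a larger long-run elasticity (say 0.4, once vehicle fleets can turn over), the same 10% supply loss implies a price rise of only about 30%, which is the post's point about the time frame mattering.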

In the longer run, by contrast — defined as a period long enough to replace a large fraction of vehicles on the road — there is much greater potential for consuming much less oil, with small or zero adverse effects on economic growth and purchasing power. This was true even before the technological innovations that have made electric vehicles (EVs) competitive with internal combustion engine vehicles (ICEs). Fuel-efficient vehicles provide most of the benefits of gas-guzzling SUVs, and consumers may not realize how much money they save. And now that EVs are competitive, drastic reductions in gas consumption are possible with minimal disruption.

Finally, if we make different decisions about how we live and work, the world could easily thrive while burning only a fraction as much oil as it does now.

Beyond the paywall I will address the following issues:

1. Why we consume so much oil, and how that logic is changing

2. The demand for oil in the short run, and the crucial question of price elasticity

3. How demand can adjust once there’s time to replace vehicles

4. Oil demand and how we live: The long-run possibilities

Read more

Beats now have notes

Last month I added a feature I call beats to this blog, pulling in some of my other content from external sources and including it on the homepage, search and various archive pages on the site.

On any given day these frequently outnumber my regular posts. They were looking a little bit thin and were lacking any form of explanation beyond a link, so I've added the ability to annotate them with a "note" which now shows up as part of their display.

Here's what that looks like for the content I published yesterday:

Screenshot of part of my blog homepage showing four "beats" entries from March 22, 2026, each tagged as RESEARCH or TOOL, with titles like "PCGamer Article Performance Audit" and "DNS Lookup", now annotated with short descriptive notes explaining the context behind each linked item.

I've also updated the /atom/everything/ Atom feed to include any beats that I've attached notes to.

Tags: atom, blogging, site-upgrades

Starlette 1.0 skill

Research: Starlette 1.0 skill

See Experimenting with Starlette 1.0 with Claude skills.

Tags: starlette

Experimenting with Starlette 1.0 with Claude skills

Starlette 1.0 is out! This is a really big deal. I think Starlette may be the Python framework with the biggest gap between how widely it is used and how little brand recognition it has, because Starlette is the foundation of FastAPI, which has attracted a huge amount of buzz that seems to have overshadowed Starlette itself.

Tom Christie started working on Starlette in 2018 and it quickly became my favorite of the new breed of Python ASGI frameworks. The only reason I didn't use it as the basis for my own Datasette project was that it didn't yet promise stability, and I was determined to provide a stable API for Datasette's own plugins... though I still haven't been brave enough to ship my own 1.0 release (after 26 alphas and counting)!

Then in September 2025 Marcelo Trylesinski announced that Starlette and Uvicorn were transferring to their GitHub account, in recognition of their many years of contributions and to make it easier for them to receive sponsorship against those projects.

The 1.0 version has a few breaking changes compared to the 0.x series, described in the release notes for 1.0.0rc1 that came out in February.

The most notable of these is a change to how code runs on startup and shutdown. Previously that was handled by on_startup and on_shutdown parameters, but the new system uses a neat lifespan mechanism instead based around an async context manager:

import contextlib

from starlette.applications import Starlette

@contextlib.asynccontextmanager
async def lifespan(app):
    async with some_async_resource():
        print("Run at startup!")
        yield
        print("Run on shutdown!")

app = Starlette(
    routes=routes,
    lifespan=lifespan
)

If you haven't tried Starlette before, it feels to me like an asyncio-native cross between Flask and Django, which is unsurprising since creator Kim Christie is also responsible for Django REST Framework. Crucially, this means you can write most apps as a single Python file, Flask style.

This makes it really easy for LLMs to spit out a working Starlette app from a single prompt.
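Part of why that works: under the hood a Starlette app is just an ASGI callable, a protocol simple enough to sketch with no dependencies at all. Here the `call` helper is my own, purely for illustration; it drives the app by hand the way a server would.

```python
import asyncio

# Minimal ASGI application: an async callable taking (scope, receive, send).
# Starlette wraps this same protocol in routing, Request and Response objects.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"Hello, ASGI!"})

async def call(path="/"):
    """Drive the app by hand, collecting the messages it sends."""
    sent = []

    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}

    async def send(message):
        sent.append(message)

    await app({"type": "http", "method": "GET", "path": path}, receive, send)
    return sent

messages = asyncio.run(call())
print(messages[0]["status"], messages[1]["body"])  # 200 b'Hello, ASGI!'
```

Because the whole contract is one async function, a complete app plus its routes fits comfortably in a single file.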

There's just one problem there: if 1.0 breaks compatibility with the Starlette code that the models have been trained on, how can we have them generate code that works with 1.0?

I decided to see if I could get this working with a Skill.

Building a Skill with Claude

Regular Claude Chat on claude.ai has skills, and one of those default skills is the skill-creator skill. This means Claude knows how to build its own skills.

So I started a chat session and told it:

Clone Starlette from GitHub - it just had its 1.0 release. Build a skill markdown document for this release which includes code examples of every feature.

I didn't even tell it where to find the repo; Starlette is widely enough known that I expected it could find it on its own.

It ran git clone https://github.com/encode/starlette.git which is actually the old repository name, but GitHub handles redirects automatically so this worked just fine.

The resulting skill document looked very thorough to me... and then I noticed a new button at the top I hadn't seen before labelled "Copy to your skills". So I clicked it:

Screenshot of the Claude.ai interface showing a conversation titled "Starlette 1.0 skill document with code examples." The left panel shows a chat where the user prompted: "Clone Starlette from GitHub - it just had its 1.0 release. Build a skill markdown document for this release which includes code examples of every feature." Claude's responses include collapsed sections labeled "Strategized cloning repository and documenting comprehensive feature examples," "Examined version details and surveyed source documentation comprehensively," and "Synthesized Starlette 1.0 knowledge to construct comprehensive skill documentation," with intermediate messages like "I'll clone Starlette from GitHub and build a comprehensive skill document. Let me start by reading the skill-creator guide and then cloning the repo," "Now let me read through all the documentation files to capture every feature:" and "Now I have a thorough understanding of the entire codebase. Let me build the comprehensive skill document." The right panel shows a skill preview pane with buttons "Copy to your skills" and "Copy" at the top, and a Description section reading: "Build async web applications and APIs with Starlette 1.0, the lightweight ASGI framework for Python. Use this skill whenever a user wants to create an async Python web app, REST API, WebSocket server, or ASGI application using Starlette. Triggers include mentions of 'Starlette', 'ASGI', async Python web frameworks, or requests to build lightweight async APIs, WebSocket services, streaming responses, or middleware pipelines. Also use when the user is working with FastAPI internals (which is built on Starlette), needs ASGI middleware patterns, or wants a minimal async web server" (text truncated).

And now my regular Claude chat has access to that skill!

A task management demo app

I started a new conversation and prompted:

Build a task management app with Starlette, it should have projects and tasks and comments and labels

And Claude did exactly that, producing a simple GitHub Issues clone using Starlette 1.0, a SQLite database (via aiosqlite) and a Jinja2 template.

Claude even tested the app manually like this:

cd /home/claude/taskflow && timeout 5 python -c "
import asyncio
from database import init_db
asyncio.run(init_db())
print('DB initialized successfully')
" 2>&1

pip install httpx --break-system-packages -q \
  && cd /home/claude/taskflow && \
  python -c "
from starlette.testclient import TestClient
from main import app

client = TestClient(app)

r = client.get('/api/stats')
print('Stats:', r.json())

r = client.get('/api/projects')
print('Projects:', len(r.json()), 'found')

r = client.get('/api/tasks')
print('Tasks:', len(r.json()), 'found')

r = client.get('/api/labels')
print('Labels:', len(r.json()), 'found')

r = client.get('/api/tasks/1')
t = r.json()
print(f'Task 1: \"{t[\"title\"]}\" - {len(t[\"comments\"])} comments, {len(t[\"labels\"])} labels')

r = client.post('/api/tasks', json={'title':'Test task','project_id':1,'priority':'high','label_ids':[1,2]})
print('Created task:', r.status_code, r.json()['title'])

r = client.post('/api/comments', json={'task_id':1,'content':'Test comment'})
print('Created comment:', r.status_code)

r = client.get('/')
print('Homepage:', r.status_code, '- length:', len(r.text))

print('\nAll tests passed!')
"

For all of the buzz about Claude Code, it's easy to overlook that Claude itself now counts as a coding agent, fully able to both write and test the code it produces.

Here's what the resulting app looked like. The code is here in my research repository.

Screenshot of a dark-themed Kanban board app called "TaskFlow" showing the "Website Redesign" project. The left sidebar has sections "OVERVIEW" with "Dashboard", "All Tasks", and "Labels", and "PROJECTS" with "Website Redesign" (1) and "API Platform" (0). The main area has three columns: "TO DO" (0) showing "No tasks", "IN PROGRESS" (1) with a card titled "Blog about Starlette 1.0" tagged "MEDIUM" and "Documentation", and "DONE" (0) showing "No tasks". Top-right buttons read "+ New Task" and "Delete".

Tags: open-source, python, ai, asgi, kim-christie, generative-ai, llms, ai-assisted-programming, claude, coding-agents, skills, agentic-engineering, starlette

‘Good, I’m Glad He’s Dead.’

The sitting president of the United States, on his blog:

Robert Mueller just died. Good, I’m glad he’s dead. He can no longer hurt innocent people! President DONALD J. TRUMP

As the elderly descend further into dementia, they lose their sense of propriety and simply speak their mind. (They also get confused and think they need to “sign” their text messages and social media posts.) Say what you want about Trump’s truthfulness generally, but here, he’s just being brutally honest. Let’s keep his “Good, I’m glad he’s dead” post bookmarked for when Trump himself finally keels over — after he chokes on a hamburger or whatever it’ll be that finally does him in — and the good people of the world rejoice and celebrate.


What's happening at the end of that street?


When will “the research paper” disappear in economics?

Soon enough you will be able to take any published research paper and tweak it, or improve it, any way you want.  Just apply a dose of AI.

Using Refine, you already can judge the quality of all past papers, once you get them in uploadable form.  We now can rewrite the entire history of modern economics with the mere investment of tokens.  Which papers in the 1993 AER were really the good ones?  Which are simply false and do not replicate?

Refine, or some service like it, will only get better, and cheaper.

Do we even need the AER any more to certify which are the best papers?  Just ask the AIs, including about influence not just quality.

Why not write a program, or have an AI write it for you, that will take your favorite papers and improve them, and change their evaluations over time, as new results come in?  Of course people will do this, at least to the extent they care.  These papers will keep on morphing.

Will economics become a branch of software engineering?  There are important papers in software engineering, but very often the most important advances are embodied in actual software, AI included.

Will the future advances in economics come from producing evaluative systems and producing systems, rather than papers?

What if you submit to a journal a data set and some code?  Who needs “the paper” per se?  Just issue some commands to the “data set plus code” and get the paper you want.  How about “I am Tyler Cowen, what is it you think I will find interesting in this data set?”

Or publish a method for simulating human behavior, to run AI-simulated experimental economics, à la Horton and Manning?  Publish “the box,” and do not worry so much about the individual paper.

Will highly productive researchers, who publish a lot of papers, become far less valuable?  The individual paper no longer seems scarce, or will not be in another year or two.

Give tenure to people who build capabilities and who build “boxes”?

How about an economics Nobel Prize for Anthropic and OpenAI?

I thank Alex T. for useful discussions on this point.

The post When will “the research paper” disappear in economics? appeared first on Marginal REVOLUTION.


The Triangulum galaxy up close

Today’s Picture of the Week is a closeup of the nearby Triangulum galaxy, also known as Messier 33, located about 3 million light-years away. This festive-looking image, taken with ESO’s Very Large Telescope (VLT), reveals the diversity and complexity of the gas and dust between the stars in great detail.

Stars are not, as is often imagined, isolated spheres in the dark, but rather live in rich and complex environments that they actively shape. Studying this cosmic interplay tells us about how stars form, and how their radiation affects the surrounding material, which helps us to understand how galaxies evolve as a whole.

The image was presented in a new study led by Anna Feltre, a postdoctoral researcher at the INAF-Astrophysical Observatory of Arcetri, Italy. The team used data taken with the Multi Unit Spectroscopic Explorer (MUSE) instrument at the VLT. MUSE’s superpower is its ability to break up the light into the different rainbow colours, allowing the team to examine the chemical composition of the interstellar matter at every location across its whole field of view.

The different colours of the image represent different elements: blue, green and red indicate the presence of oxygen, hydrogen and sulphur, respectively. MUSE allowed the team to map the distribution of many other elements, as well as their motion, key to understanding the link between stars and their surroundings. As Feltre aptly puts it: “This cosmic interplay produces a spectacular and dynamic landscape, revealing that the birthplaces of stars are far more beautiful and complex than we ever imagined.”


Paraguay trend of the day

Lured by low taxes, entrepreneurs from across Latin America are plowing in money and taking up residence, with applications surging more than 60% in 2025. Sleek towers and luxury car dealerships now dot Asunción, a city where infrastructure is still struggling to catch up. And Wall Street investors are snapping up Paraguay’s bonds as its conservative president, Santiago Peña, aligns his government with the Trump administration.

Though Paraguay is roughly the size of California, its $47 billion economy is about 1% of the Golden State’s. But rapid growth and economic reforms in recent years helped the country win investment-grade credit status from Moody’s Ratings in 2024 and from S&P Global last year.

…Paraguay’s embrace of sound fiscal and monetary policies after its 2003 financial crisis is now paying off, with single-digit inflation and annual growth averaging around 4% over the past two decades.

Here is more from Bloomberg; growth last year was six percent. Southern Cone remains underrated.

The post Paraguay trend of the day appeared first on Marginal REVOLUTION.


A Fault Line in Full Bloom

Wildflower blooms appear as yellow patches at the center of satellite images centered on Carrizo Plain National Monument. The blooms spread and intensify between March 5 and March 13.
NASA Earth Observatory / Lauren Dauphin

March 5, 2026 – March 13, 2026

Golden wildflowers color the Carrizo Plain and surrounding Southern California landscape in these images captured on March 5, 2026 (left), and March 13, 2026 (right), by the OLI (Operational Land Imager) on Landsat 8 and Landsat 9, respectively. NASA Earth Observatory/Lauren Dauphin

Whether it qualifies as a “superbloom” is in the eye of the beholder, but there is no doubt that California’s Carrizo Plain and the neighboring mountain ranges were awash with color as wildflowers put on their annual show in spring 2026.

Landsat satellites began to show the early signs of color in February. By early March, flowers had turned areas around Soda Lake a bright shade of yellow, and by mid-month, they had spread even farther. Yellow wildflower blooms are visible amid the dendritic network of streams flanking the alkaline lake, which dries out completely during drought years. Colors were particularly vibrant across the Carrizo Plain National Monument, even decorating meadows along the zipper-shaped San Andreas Fault with splashes of purple due to blooms of Phacelia ciliata.

More yellow and purple blooms are visible along the zipper-shaped structure of the San Andreas Fault.
Wildflowers bloom along the San Andreas Fault in this image acquired on March 13, 2026, by the OLI (Operational Land Imager) on Landsat 9.
NASA Earth Observatory / Lauren Dauphin

Winter 2025-2026 brought bouts of rain and variable conditions that benefited wildflowers. Soaking rains saturated soils in November and December, bringing rainfall totals to nearly twice the usual level, according to a report from the California Department of Water Resources. NASA data cited in the report showed soil moisture remained well above average for the month of February.

The pulse of early rains helped kick-start wildflowers because many seeds need at least a half-inch of rain to wash off their protective coating to germinate, according to the National Park Service. The warm, dry periods that followed also helped. Once established, wildflowers benefit from intermittent rainfall rather than constant soaking.

Strips of yellow and purple wildflowers decorate a green, grassy valley as the viewer looks down from a hill.
Wildflowers in Carrizo Plain National Monument on March 7, 2026.
Photograph by Erin Berkowitz

The Wild Flower Hotline reported that west-facing slopes of the Temblor Range were the first places to come alive with hillside daisies (Monolopia lanceolata) accompanied by California goldfields (Lasthenia californica) and forked fiddlenecks (Amsinckia furcata) in March. The display in the Caliente Range was enhanced by a lack of grass thatch, which was burned off in the Madre fire in July 2025.

Reports from experts on the ground indicate that common goldfield (Lasthenia gracilis), also called the needle goldfield, is responsible for the expanse of yellow near Soda Lake. Individual plants are small, but they often grow in disturbed areas just centimeters apart and bloom simultaneously, creating expansive blankets of color.

A more detailed view shows yellow blooms against a background of green surrounding Soda Lake and several streams to its east.
NASA Earth Observatory / Lauren Dauphin

March 5, 2026 – March 13, 2026

Common goldfield spreads around California’s Soda Lake in these images acquired on March 5, 2026 (left), and March 13, 2026 (right), by the OLI (Operational Land Imager) on Landsat 8 and Landsat 9, respectively. NASA Earth Observatory/Lauren Dauphin

In an article for Flora magazine, Bryce King, lead field botanist for the California Native Plant Society, described the Lasthenia blooms there as one of many “seemingly unending stretches of color” across the valley bottom. Lasthenia is a “staple” of vernal pools and seasonally wet areas, he wrote, but the synchronicity of blooms on the valley floor and surrounding hills during a March visit was “beyond anything” he had expected.

Teams of NASA scientists are using remote sensing to study wildflower blooms and flowering plants, aiming to develop techniques for tracking blooms over broad areas and tools that can support farmers, beekeepers, and resource managers. Fruit, nuts, honey, and cotton are among the many crops and commodities produced by flowering plants.

A NASA scientist works in a grassy field with a large patch of yellow wildflowers in the distance.
Yoseline Angel captures the spectral signature of goldfield flowers in grasslands near Soda Lake on March 14, 2026, by measuring the reflectance of yellow petals and green leaves with a field spectrometer.
NASA/Andreas Baresch

“I would certainly consider this a superbloom,” said Yoseline Angel, a scientist at NASA’s Goddard Space Flight Center. “It’s hard to describe how stunning these wildflowers were from the ground.” 

Angel and Goddard colleague Andres Baresch were in the field in Carrizo Plain National Monument on March 13 taking spectral measurements of blooming wildflowers as Landsat acquired one of the images shown above. They are in the process of developing a global flower monitoring system that will integrate observations from the ground with those from space-based sensors such as OLI on Landsat 8 and 9 and EMIT (Earth Surface Mineral Dust Source Investigation) on the International Space Station to track the progression of blooms.

“This was the perfect opportunity to test how well our models scale between the ground and satellites,” she said. “We were fortunate to have a huge number of seeds germinate and bloom simultaneously because last year was so dry and this winter was so wet.”

A mixture of yellow and purple wildflowers blanket a meadow with green hills in the distance.
Gold and purple wildflowers bloom in Carrizo Plain National Monument on March 7, 2026.
Photograph by Erin Berkowitz

NASA Earth Observatory images by Lauren Dauphin, using Landsat data from the U.S. Geological Survey. Photos courtesy of Erin Berkowitz and Andres Baresch. Story by Adam Voiland.


The post A Fault Line in Full Bloom appeared first on NASA Science.

Sunday 22 March 1662/63

(Lord’s day). Up betimes and in my office wrote out our bill for the Parliament about our being made justices of Peace in the City.

So home and to church, where a dull formall fellow that prayed for the Right Hon. John Lord Barkeley, Lord President of Connaught, &c. So home to dinner, and after dinner my wife and I and her woman by coach to Westminster, where being come too soon for the Christening we took up Mr. Creed and went out to take some ayre, as far as Chelsey and further, I lighting there and letting them go on with the coach while I went to the church expecting to see the young ladies of the school, Ashwell desiring me, but I could not get in far enough, and so came out and at the coach’s coming back went in again and so back to Westminster, and led my wife and her to Captain Ferrers, and I to my Lord Sandwich, and with him talking a good while; I find the Court would have this Indulgence go on, but the Parliament are against it. Matters in Ireland are full of discontent.

Thence with Mr. Creed to Captain Ferrers, where many fine ladies; the house well and prettily furnished. She [Mrs. Ferrers] lies in, in great state, Mr. G. Montagu, Collonel Williams, Cromwell that was, and Mrs. Wright as proxy for my Lady Jemimah, were witnesses. Very pretty and plentiful entertainment, could not get away till nine at night, and so home. My coach cost me 7s. So to prayers, and to bed.

This day though I was merry enough yet I could not get yesterday’s quarrel out of my mind, and a natural fear of being challenged by Holmes for the words I did give him, though nothing but what did become me as a principal officer.

Read the annotations

SpaceX offers details on orbital data center satellites

Starship AI Sat Mini

SpaceX Chief Executive Elon Musk revealed more technical, but not financial, details about his company’s plans to deploy an orbital data center constellation.

The post SpaceX offers details on orbital data center satellites appeared first on SpaceNews.

Some European Launcher Challenge funding remains in limbo

Orbex Prime

Nearly 140 million euros ($162 million) that European Space Agency member states allocated to a program to support launch vehicle development remains in limbo and could be lost.

The post Some European Launcher Challenge funding remains in limbo appeared first on SpaceNews.

Westerners are fleeing their countries in record numbers

This will have economic consequences for the places they leave and their destinations

Even the best-case scenario for energy markets is disastrous

Whatever happens, high prices will outlive the Iran war

Rediscovering Irony

As above, so below. It seems to me that the problem of pushing AI past its most important limitations, and the problem of rescuing human culture from its most important pathologies at all scales, from claustrophobic and increasingly diseased cozyweb enclaves, to calamitously stupid geopolitical theaters of violent performativity, are the same.

The problem is insufficient irony, to check and balance a culture (emphasis on cult) of sincerity and authenticity turned cancerous, over nearly two decades of unchecked and critically unexamined metastasis.

Since at least 2008, sincerity has been uncritically valorized, and irony systematically mischaracterized, demonized and devalued, obscuring the dark and deleterious aspect of the former, and the generative potentialities of the latter.

In this essay, I want to try and restore balance to the universe by reclaiming irony in its fullest, most potent sense — the capacity for holding two inextricably, subatomically entangled ideas in juxtaposition, in word and deed, in order to deal with realities that are ambiguous down to their deepest core.

While not the main purpose of this essay, I also want to go on a bit of a polemical side quest to dethrone sincerity and authenticity from the undeserved status they have ascended to in our time, which has resulted in great harm that continues to compound.

And here, I mean sincerity and authenticity broadly: sensibilities that orient around stable, unitary meanings in words and deeds, holding them to be superior moral goods purely by virtue of their not being ambiguous. The self-certain sincere can be found all over the political and cultural map. Self-importantly sincere conservatives and progressives might not agree on a lot, but one thing they do agree on is that anyone capable of expressing two thoughts in the same utterance is necessarily a conniving and hypocritical “elite intellectual.” Self-involvedly sincere artists and smarmy and self-congratulatory entrepreneurial types might hate and snark at each other, but both agree that all irony is necessarily degenerative cynicism that all creative doers ought to resist. Self-certain religious moralists and radical environmentalists might be at odds on every moral question, but both agree that the devilish business of entertaining two ideas in tension within a single thought can only be the result of debased, depraved immorality.

Give a dog a bad name and hang him. Irony, charged with and reduced to simple hypocrisy, cynicism, and outright immorality, has been the consensus villain of our era.

As we shall see, all the charges against irony can in fact be laid at the door of the ecology of competing sincerities; irony, far from being an enervating drain on the collective psyche, is its sole reliable source of generativity and liveness. It is sincerity that is the deadening drain.

A society that does not cultivate a systematic capacity for, and literacy in, ironic modes of engaging reality, is doomed in precisely the way we seem to be doomed right now.

Until quite recently, making this argument has been not just difficult, but pointless. Sincerity is a fear response to the ambiguity of reality, and the practice of irony takes a particular kind of courage that the sincere not only lack, but in a masterful display of self-delusion, label cowardice, even as they identify their own shrinking retreat from ambiguity as the best sort of courage.

The sincere not only don’t see it that way, they don’t see it at all. A benefit of deliberately suspending or destroying the natural human capacity for irony is that you cannot at once entertain the twin thoughts that you might be noble, and an asshole, at the same time. And of course, the sincere choose to believe in their nobility, and energetically repress the possibility and evidence of their own assholery from their self-mutilated one-track minds.

We must begin the story with Rousseau. The original Noble Asshole.

Noble Assholery from Rousseau to Graeber

Something like this essay has been brewing in my head for over a decade, but I just didn’t have all the pieces in my hands to make the complete argument.

The final piece of the puzzle came from The Infidel and the Professor, which I’m reading this month for our book club. It is an account of the long friendship and mutual influence of David Hume and Adam Smith. What caught my eye, however, was the book’s account of a marginal episode — Hume’s spat with Rousseau.

In the account of the spat, Rousseau comes off as a serious nutjob. A paranoiac with a persecution complex, who got along with nobody, and made everyone else pay for his fragile temperament. The spat was remarkably silly, and had nothing to do with the philosophies of either. It was not a philosophical spat, even though there is clearly raw material for philosophical conflict in their juxtaposed works.

Here’s what happened: Hume went out of his way to arrange a kind of political asylum for Rousseau in England after he’d pissed off most of the Continent, a kindness that Rousseau accepted with great reluctance and poor grace only when he had no choice. The kindness soon turned into fuel for his paranoia, and he developed an elaborate conspiracy theory based on the idea that Hume was out to get him for some reason.

This surprised me. In my headcanon Rousseau, as the anti-Hobbes,1 author of a state-of-nature origin myth for humanity that is rooted in cooperation rather than conflict, and a theory of social contracts that would suggest a harmony-seeking temperament, had been cast as a pleasant, collegial fellow, quite unlike the bloodthirsty Hobbes.

Apparently he was not. By all accounts, he was an uncollegial asshole.

Seems like among other things, Rousseau also pioneered what I thought was the modern adverse selection phenomenon of compensatory creativity, where people produce works that mark them as authorities on subjects defined by their weaknesses rather than strengths. Karl Popper’s great work was ironically dubbed “The Open Society by One of Its Enemies” by a student, and in a similar spirit, we might dub Rousseau’s collective works “How to Live in Harmony with Nature” by Mr. Alienated Disharmony. Someone observed recently that Eat, Pray, Love fits this pattern too, in light of the author’s later weird arc. There’s probably a whole essay to be written about compensatory creativity. I probably fit the pattern too. I wrote Tempo about timing and decision-making because I am really bad at real-time decision-making and generally live in a state of atemporal indecisiveness.

I want to add a rather personal data point here, to make this an n=2 case at least. I don’t like to speak ill of the recently dead, but in this case it serves a purpose.

The account in the book (from a Hume-sympathetic, but also objective) point of view reminded me very strongly of a contemporary thinker, the late David Graeber. Some of you know about my one skirmish with Graeber in 2011, where he took deep umbrage at a passing mildly critical remark I made about Debt in a blog post, teasing my upcoming book review. Graeber somehow found the post (I presume he had a Google Alert set) and posted a series of combative comments on the blog post, which made me decide not to post the full review I had been planning (which would have been a mix of positive and critical, and overall mildly net critical). He later blocked me on Twitter. Not that I’m comparing myself to Hume, but I’m glad I chose to disengage where Hume, rather unwisely, imposed a favor on Rousseau despite warning signs that it would end badly.

I think enough time has passed since Graeber died (2020) that I can share my opinion of him without being an asshole myself: The guy, like Rousseau, was an asshole. And this is not just my own minority opinion.

Shortly after my own run-in with him, I learned that I wasn’t the only one to face the unexpectedly wide-roving wrath of The Graeberian Inquisition. Picking fights with a thin-skinned over-sensitivity to any criticism of his ideas (like Taleb, but with less substance underwriting the curmudgeonliness) was a pattern with him. I also learned, from a former student of his, that Graeber’s personality was marked by a kind of extreme extroversion, which made him unable to think except in the context of a social nexus and live dialogue (the student characterized him as the opposite of an aspie, what I had earlier in the year dubbed a codie). The guy apparently couldn’t think in isolation. He needed to do his thinking in an active web of people he was discoursing with. And presumably, going by the experiences of myself and several others, the web had to be in a constant state of active, acrimonious conflict to reassure him that he was alive and thinking. This is the opposite of my temperament. I do most of my thinking on my own, and to the extent I do it in an active social web, I prefer that web to be mostly in a state of harmony.

I don’t know how accurate the student’s characterization of Graeber is, but it strikes me as remarkable that the central feature of Debt is a theory of economic interactions that rests precisely on the notion of a nexus of live relationships as the primary unit of analysis, rather than the decisions and actions of individual economic agents. And like Rousseau, he too offered a (grandiose and revisionist) origin myth for our species, and was politically active on similar fronts (Rousseau wrote on inequality, Graeber was a central figure in #Occupy). It is a bit uncanny that two thinkers, separated by 300-odd years, had the same abrasive, asshole personality, and same interest in themes of harmony, cooperation, and so forth.

And the pattern goes beyond this n=2 dataset. As Jo Freeman argued in a classic 1972 essay, The Tyranny of Structurelessness, which the internet keeps rediscovering every couple of years, it is no accident that the prospect of a cooperative, egalitarian utopian harmony reliably attracts those with the worst possible temperament for pursuing such visions, with experiments always predictably dissolving into toxicity.

But I want to make a stronger argument than that of simple assholery. Rousseau (and arguably every reactionary primitivist since, across the political spectrum) wasn’t just an asshole. He was a noble asshole. How do I know this? Because I learned from my book that aside from picking paranoid-delusional fights with people trying to help him, he apparently also tried to start a kind of religion of sincerity.

While I was aware of Rousseau’s general historical significance as a founding father of all modern schools of atavistic/primitivist reactionary yearning and humanist religiosity, I was not aware of this explicit engagement with sincerity in what seems like a startlingly modern sense. If you look carefully, you’ll find the same obsessive fetish for sincerity (or its near-synonym, authenticity) in every tradition that can be traced back to him in some way.

And the primary payoff of this striving towards sincerity seems to be arrival at a sense of oneself as somehow nobler than others, regardless of the evidence of the consequences of one’s actions in the world, one way or the other. Simply doing whatever it is you decide to do with sincerity and authenticity, apparently, is sufficient to establish your nobility. Even if you burn down the world along the way. You can always assert afterwards, with fetching humility, that you did your best, and couldn’t have known. Of course you couldn’t. To have known would have been to doubt. To doubt would have meant entertaining more than one thought at a time, which would have meant flirting with irony. Dubito ergo cogito ergo sum and all that.

This is of course, not just a fallacious pattern of reasoning, but a smarmy, self-serving, asshole pattern of reasoning. Hence, noble asshole.

Naturally, there is a lot of commentary about the connection, which you can explore if you like. My one takeaway from a drive-by scan is that what I thought was an evolution of a reactionary impulse (again, I emphasize, both left and right) dating back to Rousseau is in fact no more than a rhyme. There has been no significant evolution as far as I can tell. The ideas pave the same intellectual dead-end they did in the 18th century, which of course is a feature for people who only want to go backwards.

Today’s humanist yearners for sincerity, authenticity, and re-enchantment, both on the left and the right, don’t seem to have learned a lot since Rousseau. They’re rehearsing patterns he pioneered, just with various extra steps like turning off cellphones and congratulating each other for being based.

And technological modernity qua technological modernity really has nothing much to do with it beyond serving as a source of periodically updated MacGuffins to feature in endlessly rebooted morality tales starring noble assholes. The alienation that drove Rousseau paranoid in the 18th century is of the same sort that drives modern reactionaries paranoid.

Now, if you’ve been a long-time reader, it probably doesn’t surprise you to learn that I have no patience for either the early modern or contemporary versions of this sincerity religion.

I didn’t like David Graeber, and I doubt I’d have liked Rousseau. But reading this book, and linking their shared idea space (encompassing things ranging from essentialized relations to nature, to inequality, to specious theories of “natural” human relations) to sincerity, has given me some insight into why I reflexively reject both the fundamental philosophy itself, and social engagement (even superficial) with people who subscribe to it. Not to put too fine a point on it, they’re mostly wrong about everything, and a joyless grind to talk to at best. At worst, dealing with them is dealing with relentless, exhausting, assholery.

I’ve learned a few things since my 2011 skirmish with Graeber, and I now have a very finely tuned “sincerity radar” that allows me to safely cross the street when I see an aggressively sincere person, trapped in an unshakeable sense of their own nobility, coming towards me.

The Problem With Sincerity

This might seem like an odd stance to adopt. I mean, what’s not to like about sincerity? Does being suspicious of sincerity (either aspirational or felt with certainty) as a fundamental dispositional trait imply that I endorse and practice insincerity?

Sometimes, yes. When I am indifferent to the stakes of a situation, and don’t care for the people involved, I can practice little white insincerities without a qualm, and lose no sleep over it. I can even be manipulatively insincere (a term of art from a fine 2x2 that anchors Kim Scott’s book Radical Candor). But mostly, I’ve become wise enough to almost never put myself in a situation where I’m forced into insincerity.

Insincerity might be the on-the-nose antonym of sincerity in the English language, but it’s a rather shallow sort of opposition. My aversion to sincerity runs deeper, and is rooted in a different opposed disposition — irony. So let’s set insincerity aside and talk of sincerity as the antonym of irony.

For the last couple of decades (dating at least to the hipster era through the GFC), sincerity (and its near-synonym in our current zeitgeist, authenticity) has been framed in opposition to irony, rather than insincerity per se.

Irony understood in a particular bad-faith reductive way, as a sort of enervated cynicism and hypocrisy that excuses itself from imperatives to action through sophistry, and also smells of insincerity.

This is not entirely unfair. Irony as a cultural phenomenon rooted in the 80s (and I’m fundamentally an 80s kid) does in fact often reduce, in practice, to a kind of aestheticized learned helplessness under a veneer of sophistication. And it does often indicate insincerity when taken together with another sign — visible success that is the result of selfish striving. There was a great piece about this kind of “irony” in The Onion in 2005, Why Can’t Anyone Tell I’m Wearing This Business Suit Ironically, where irony mutates into a rather banal sort of hypocrisy indistinguishable from “selling out” a sincere subculture.

If your inaction bias is selective in this sense — sophisticated helplessness in the face of imperatives that might do collective good, but high-agency energetic action where personal rewards might accrue — you’re not being ironic or even cynical. You’re simply being an insincere hypocrite.

But this, I’ll argue, is a degenerate, shallow kind of irony; a cosmetic variety that fails to harness the energizing potentialities that lurk in what I’ll call dense irony (I’ll explain the adjective in a minute). Shallow irony is often comorbid with insincerity, double standards, and hypocrisy, but dense irony comes from a different place, and has different effects on both minds and the world.

I tend to forgive people who haven’t thought too much about irony if they harbor this reductive understanding of it. The bad faith attends the views of those who ought to know better.

It is also worth distinguishing ordinary sincerity (such as anyone might practice in giving a straight answer to a straight question when there is no reason to be devious or indulge in doublethink/doubletalk) from what we might call devout sincerity, the antithesis of dense irony.

Devout sincerity is the religion we’re talking about here, which has been part of the cultural landscape since Rousseau at least, and is currently the dominant cultural and subcultural mood. Devout sincerity is the attitude that leads you down the road towards eventual noble assholery (a great example is in the movie The Big Kahuna, where the ironic protagonists, two marketers played by Kevin Spacey and Danny DeVito, are betrayed by a younger employee whose actions in the story can only be described as noble assholery). That it is often rooted in personal pain does not, in my opinion, excuse it.

Dense irony is, I suspect, my native disposition (not least because I grew up in the 80s), and the reason I reflexively avoid sincerity. To get at what dense irony is, it’s easiest to approach the philosophical posture via its linguistic heat signature — ambiguous utterances.

Irony in Speech

In sophisticated language, irony is when the intended meaning is contrary to the surface meaning. Or to generalize slightly but powerfully, as the robot devil sang it in Futurama, “The use of words expressing something other than their literal intention!”

The rhetorical intent and affect accompanying a particular ironic utterance can vary (sarcasm, sardonic fatalism, cynicism, humor, absurdism, logical contradiction, Zen mu-ishness, and rarer kinds like quixotic energy) but the characteristic feature is a single utterance with two meanings in tension, with or without indication of which one is actually meant. The most interesting kinds of irony — and the ones to which I will attach the adjective dense — are the latter kind, where the utterance destabilizes meaning by pluralizing it, without indicating a “right” answer. Often, this sort of irony cannot easily be assigned an affect label. It’s just — unsettling.

Why is dense irony so attractive to certain sensibilities, whether or not they benefitted from the cultural-developmental conditioning of the 80s? Why would you want to consume or produce semantically unstable utterances that corrode meaning? Why would you want to get good at it, through cultivation of unholy consumption tastes and production crafts?

And make no mistake: irony, unlike sincerity, does take cultivation. It is a skilled mode of language use; one that takes more energy, not less, despite the association between irony and lassitude. I generally have to be in a high-energy, high-lucidity mood to produce ironic writing or speech. Injecting two meanings, especially in tension with each other, into an utterance, is work. Irony is a kind of proof of work.

Why would you put in this kind of work? Why not keep language simple?

The devoutly sincere often assume the sole intent is to weaponize language to subvert and corrode sincerity. That the ironic are particularly out to sadistically inflict psychological torture on noble innocents too dumb to see past confirmatory literal/surface meanings in polysemous utterances. That the ironic are merchants of doubt, out to destabilize the psyches of those who possess the courage of their convictions, motivated by resentment, envy, or other base motives.

This broad understanding of irony is, of course, at the root of the bipartisan anti-intellectual tendency in modern American politics. To first order, to be an untrustworthy elite intellectual in America is to traffic in irony. Something the evil French do, not honest Americans.

Curiously, in the last decade, a loftier strain of intellectual anti-intellectualism has emerged in America, that believes it can “do” intellectualism without irony.

But whether they identify with the simple folk (who view themselves as clever and intelligent but not-intellectual) or contrarian intellectual traditions that eschew irony, the sincere, in my experience, tend to be rather self-involved humanists who assume everything is, if not about them personally, at least about an anthropocentric conception of the human that they aspire to. And that irony, specifically, is no more than a weapon of dehumanization wielded against them.

This is… cute. To imagine that an entire psychographic, arguably a double-digit percentage of humanity, adopts a particular cognitive posture purely to undermine another psychographic that is rather too full of itself (to the point that it imagines the entire cognitive universe of our species revolves around them).

See, the thing is, irony is not about sincerity or the sincere. That it can be weaponized against the sincere is, at best, a happy convenience for when the noble assholery of the sincere becomes too much to bear.

So what is irony about?

Irony, Density, Liveness

Here is a simple question that rarely seems to get asked: why would you ever need irony? I mean sure, some of the more degenerate flavors of irony — sarcasm, cynicism, absurdism among them — are rather delicious on the tongue, and in the ear and mind, but is irony necessary, or a sinful cognitive indulgence?

If you need to convey two meanings relating to an idea, why not just use more words to say something like, on the one hand X, on the other hand Y, instead of trying to be cleverly compact about it?

This is where my adjective dense comes in handy. Irony becomes necessary when ambiguity is so deeply embedded into the very essence of what you’re trying to talk about that trying to disassemble the ironic thought into constituent unambiguous parts destroys the thought itself. You can only think the thought at all in an ironic way.

Or to put it another way, the ambiguity is at the quantum level of the thought, and takes more energy to split than human language can normally bring to bear. Human-scale energy can only decohere the thought and collapse the meaning.

This is a bit like the idea of a dense set in mathematics. Consider the problem of sorting the real numbers into rational and irrational ones. Turns out, you can’t do so in any useful way. Between any two rationals, no matter how close, you can always find an irrational, and vice versa. Both are what mathematicians call dense sets. There is no sieve fine enough to sort them. By contrast, the whole numbers are not dense. You can chop up the reals the way a simple ruler does, with neatly separated whole numbers one unit apart, and non-whole numbers in-between.
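A small Python sketch (my own illustration, not from the original text) makes the density claim concrete: take two rationals as close together as you like, and you can still exhibit an irrational strictly between them, and then a rational strictly between that irrational and either endpoint. The specific constructions here — shifting by a factor of 1/√2, and rounding a midpoint with `Fraction.limit_denominator` — are just convenient choices for the demonstration.

```python
from fractions import Fraction
import math

def irrational_between(a: Fraction, b: Fraction) -> float:
    """A point strictly between rationals a < b that is irrational:
    a + (b - a)/sqrt(2) works because sqrt(2) is irrational and
    0 < 1/sqrt(2) < 1 (here computed in floating point)."""
    lo, hi = float(a), float(b)
    return lo + (hi - lo) / math.sqrt(2)

def rational_between(x: float, y: float) -> Fraction:
    """A rational strictly between reals x < y: round the midpoint to a
    nearby fraction; the rounding error (at most ~1e-12 here) is far
    smaller than the gaps used in this demo."""
    return Fraction((x + y) / 2).limit_denominator(10**12)

# Two rationals one-billionth apart: still an irrational between them,
# and a rational between that irrational and the left endpoint.
a = Fraction(1, 3)
b = a + Fraction(1, 10**9)
z = irrational_between(a, b)       # irrational, with a < z < b
r = rational_between(float(a), z)  # rational, with a < r < z
```

The whole numbers, by contrast, are not dense in the reals: between 3 and 4 there is simply no other integer, which is why a ruler one unit wide can separate them.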

Ironic speech of the most potent sort is irreducibly ironic. You cannot dissect it into legible components that lend themselves to analytical handling with the coarse, low-energy tools of on-the-nose non-polysemous language.

Irony is the liveness in language. To dissect an ironic utterance entirely into utterances devoid of ambiguity, and decomposed into assertions with stable meanings, neatly arrayed and assembled into larger edifices with the joinery of if-then constructs, is to kill it.

There is a word for this kind of murder: sincerity.

To ask, of what use is irony, then is to ask, of what use is living language? You don’t need to take my word for this — pick and read sincere and ironic texts side-by-side. You will notice a certain unmistakable deadness in the former and a certain ineffable liveness in the latter. Notably, it is the same sort of deadness that can suffuse AI-generated texts unless you consciously try to counteract it (more on the AI-irony nexus later, when we’re done with noble assholes and their sincerity fetish).

We can now try to define irony in a way that does not rest on its reductive relationship to sincerity at all.

Irony is trafficking in ambiguous utterances in order to make sense of fundamentally ambiguous realities, and site action impulses in felt doubt rather than manufactured certainty, in order to preserve the liveness of reality and one’s responses to it.

Irony is how you act generatively in a world that you’re not sure is a duck or rabbit, without killing it. To do this, you might have to resist the noble assholery of those who sincerely wish to rope everyone into duck-hunting or rabbit-hunting, and kill the world in the process.

Dense irony is when your experience of reality feels like duck-rabbits, all the way down to Planck-scale Heisenbergian uncertainty.

Cancerous Cluelessness

Now, to be fair, most who rail against irony at least aren’t acting out of conscious bad faith. They sincerely (irony alert!) act out of a sense that they’re doing the right thing. Hanlon’s razor applies — sincerity is a kind of cluelessness born of a fearful refusal to engage the live ambiguities of reality with liveness. I’m even sympathetic to some degree. For those living in pain beyond what they can tolerate, irony can feel like salt on wounds where sincerity feels like a salve. The truth-in-pain postures commonly affected by the sincere, though, are often self-certifying. It is definitely not the case that the pain of a sincere person is necessarily higher than that of an ironic person; the latter may simply be bringing greater resources to bear on greater pain.

That doesn’t mean sincerity doesn’t induce noble assholery (though you typically have to have some consciousness and bad faith to rise to that level). And it doesn’t mean sincerity, especially devout sincerity, can’t be cancerous.

This is my strong claim — that devout sincerity in particular isn’t merely annoying at an interpersonal level to the ironically disposed (we can deal with it), it is cancerous at a societal level.

Why is this? Because sincerity is simply not expressive enough to engage with reality in all its dense ambiguity all the way down, and to live in sincerity inevitably means not living in reality, and doing damage to it through your delusions of certainty.

So the cultural conflict between irony and sincerity plays out at two levels — a shallow level, where it manifests as hypocrisy/insincerity versus exploitable cluelessness, and a deeper level, where it manifests as a deep chasm between irreconcilably different ontological and epistemological commitments about the nature of reality itself.

Not This, Not That

Ironic modes of thought and action are fundamentally gentler ways of being in the world than sincere modes, which are irreducibly violent. Irony is, in a certain sense, the praxis (especially linguistic praxis) of non-dualism in a loose sense; the animating spirit of utterances like neti neti or mu. To traffic in unstable meaning-and-pointing behaviors through speech and action is to reject the lure of certainty, without losing the capacity to act. To remain aware of the dancing illusions of reality without being paralyzed by them. To knowingly live in mirages without being seduced by them. Sincerity, in this account, is simply attachment to one illusion or the other; what in Indian philosophy is referred to as maya moh — illusion infatuation.

The sincere seem to believe reality is unambiguous, and unambiguously knowable, even if only in principle; that what one ought to do in response to apparent ambiguity is make courageous commitments to definite beliefs anyway, and trust divine nature to reveal itself to, and karmically reward, the pure-hearted who dare to act out of certainty. That human moral choices — such as religiosity, or Heideggerian “care” — can conquer the essential ambiguity of nature. That any ambiguity in perceptions or beliefs merely indicates imperfect ways of seeing, and spiritual problems to be worked out on some high road to unambiguous “truth.” That failures of action are merely tests of courage or divine judgments of insincerity.

That a failure to “say what you mean, and mean what you say,” is a moral failure in a certain reality rather than metaphysical attunement and impedance matching to an ambiguous one.

Versions of this theology seem to drive subcultures ranging from startup hustle culture to “sincere” genres of artistic or literary striving, to varied ideologies of progress, and even practical politics.

It is a joyless clade of theologies, navigating a deadened world with deadening modes of thought and action, anxiously and desperately striving after stable modes of meaningness.

What do the ironic believe?

To a first approximation, belief as such is not a load-bearing concept at all for the ironically poised, beyond matters of shallow facticity. If you ask me whether I believe that Tim Robbins was in The Shawshank Redemption, I can sincerely answer yes. If you ask me if I believe in “the indomitable human spirit” the question simply does not parse for me. I might act as if I believe in that (in the sense of say, visibly betting on creative and inventive young people), but I don’t get there via “beliefs.”

For the ironic, only actions are load-bearing. Beliefs are aesthetic affectations at best. Where does this lead us?

Behavior Without Belief

This trivial example generalizes into a broader account of what irony is in the context of action.

One of the best explorations of what I mean can be found in James Carse’s less-read book, where he developed a subtle aspect of his best-known book Finite and Infinite Games. This one, The Religious Case Against Belief, lays out what I’d call a case for ironic religiosity, that gets to religious behavior without winding its way through the treacherously ambiguous turf of religious beliefs.

There is something of this attitude at the root of the postures and actions of all individuals who act from a fundamentally ironic sensibility of life. The idea that belief (particularly causal belief) must precede, or at least accompany, action is a strong (and largely unconscious) commitment of the sincere, even when it is not declared. This doctrinal commitment to the belief-before-action sequence shows up in a variety of ways, ranging from an anxious hunger for manifestos and value-statements, to demands for signatures on codes of conduct and ritual avowals of postures like patriotism, religious belief, and corporate loyalty. The idea seems to be: if only you can rid language itself of its chimerical tendencies through sufficiently forceful sincere utterances, perhaps the ambiguities of reality itself can be tamed.

But this is only the entry-level version of cancerous sincerity. Many modern devoutly sincere types insist that their philosophical praxis is embodied by behaviors (particularly ritualistic behavior) and does not rest on belief as such.

This claim, to put it bluntly, is one I simply do not believe. If your claimed praxis of sincerity involves some cult of modern rituals of meaning-making, and you’re not “wearing the ceremonial robes ironically,” at some unconscious level your sensibility is that of a true believer, “factious and fanatical,” as David Hume and Adam Smith might have put it. You’re just (probably wisely for your sanity) not probing what beliefs you’ve actually committed to. If you did, perhaps you’d be reduced to raving paranoia like Rousseau.

We have a popular modern term for cancerous sincerity — performativity. Saluting flags, singing national anthems, prayer, reciting land acknowledgment texts, litigating pronouns. The behavioral vocabulary of modern civilization, regardless of its intentions, sentimental dispositions, politics, and flaunted values, is marked by one thing above all: ineffectiveness.

And it is us who dwell in irony who are accused of the sin of sophistry and inaction in the face of grave moral imperatives. Now that’s irony.

Is there a theory of ironic action? Perhaps.

At one point, I was idly toying with the thought that the famous philosophy of the Gita — detached action, karmanyevadhikaraste maphaleshukadachana — is a kind of action-irony principle. There is perhaps something to that. Certainly, an attitude of “you only have a right to the action, not to the outcomes; let go of attachment to outcomes” is at least simpatico with an ironic posture, if not entirely reducible to it. I don’t think the two are quite the same, primarily because the action philosophy of the Gita does in fact feature a rubric of moral certainty (dharma) that can be, and frequently is, reduced to a theater of performativity. Most incantations of karmanyevadhikaraste maphaleshukadachana are in fact ritual incantations by those with a dim grasp of what they’re saying at best. Bless their sincere, unironic, vengeful, jingoistic Dhurandhar-enjoying propagandist souls.

Or perhaps, ironic action is best understood as the sort of hypomanic, value-distorting frenzied energy of Rick’s behavior in Rick and Morty. Does Rick ultimately want to do good, or does he really only want to bring back McDonald’s Mulan Szechuan sauce? Is he really that blasé about saving his grandson out of sheer sentiment one moment, and callously destroying an entire timeline the next?

Or is ironic action a sort of mashup of the two — a Gita-like action philosophy in a universe constructed by a Rick-like God of Undivided Irony?

I don’t know. My policy is: don’t think about it. It’s a monstrously ignoble kind of asshole policy.

Coda: Artificial Irony Will Save Us

Believe it or not, this whole train of thought was triggered by difficulties I was having getting LLMs to do irony of any sort. Straightforward humor, absurdism, sarcasm, cynicism, hypocrisy, I’ll take anything. I’ll even take puns.

LLMs are uniformly terrible at all of it. The current models might solve Nobel-grade problems, but they don’t seem able to do irony.

And it’s not a prompt engineering or context engineering problem. No matter what I try, I only get clumsy, on-the-nose, zombie irony assembled out of non-dense sincere building blocks. It never quite comes alive.

The only trick I’ve discovered is to give an LLM a text that is actually a solid example of ironic writing, and ask it to do something like a close transposition to another rhyming idea.

Why do LLMs have a hard time with irony? I suspect there are three reasons.

First, the shallower reason: LLMs have been trained largely on internet data, and for better or worse, much of the available training data is non-ironic. At best you might find good forums featuring sarcasm and cynicism (which, recall, are non-dense forms of irony).

Second, the deeper reason: Given that AI companies are full of weapons-grade sincerity, I suspect sincerity is engineered into AIs with heavy-handed “alignment” brutality.

But I don’t think this factor is as strong as you might think. What I’ve seen of output from wild LLMs isn’t particularly ironic either. It is merely more paranoid, inappropriate, etc.

The third reason, I think, is the big one. The very architecture of language models is non-ironic. The way transformers (and to a lesser extent, diffusion models) work, the output cannot sustain any kind of dense layering of meaning. You will end up in a non-ironic place simply by virtue of how the mathematics works. If you try to fight this tendency you’ll get incoherence and unintelligibility, not irony.

Could we do true Ironic AI? I think so, but it will probably take innovations at the framework level. Irony at the subatomic level of language, I suspect, is the result of something like getting an electron to interfere with itself by passing it through two slits at the same time. The text-generation equivalent might be to run two generation processes in parallel, merging them at the token level as you go, perhaps using some sort of bimodal perplexity quantum carburetor or something. I’ll leave that as a challenge to AI researchers.
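For what it’s worth, here is a deliberately naive Python toy of the “two generation processes merged at the token level” idea. Everything in it is my own illustrative assumption, not a real decoding algorithm: two hypothetical “streams” propose next-token distributions for the same prefix, and a product-of-experts-style geometric mean keeps only the tokens both streams give weight to — a crude stand-in for holding two meanings in superposition.

```python
import math

def merge(p: dict, q: dict) -> dict:
    """Geometric-mean merge of two next-token distributions, renormalized.
    A token survives only if BOTH streams assign it probability, and it
    scores highest when both streams back it roughly equally."""
    tokens = set(p) & set(q)  # only tokens both streams consider
    raw = {t: math.sqrt(p[t] * q[t]) for t in tokens}
    z = sum(raw.values())
    return {t: w / z for t, w in raw.items()}

# Hypothetical distributions for illustration: a "literal" stream and a
# "contrary" stream reading the same prefix in opposite ways.
literal  = {"praise": 0.6, "deadpan": 0.3, "scorn": 0.1}
contrary = {"scorn": 0.6, "deadpan": 0.3, "praise": 0.1}

blend = merge(literal, contrary)
best = max(blend, key=blend.get)  # the token both readings can live with
```

In this toy, the token each stream favors alone ("praise", "scorn") gets suppressed by the other stream’s disinterest, and the ambiguous middle token wins — which is at least the right flavor of outcome, even if real irony surely needs much more than per-token averaging.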

But why bother?

Because I sincerely believe ironic AI will save the world. Everything terrible, stupid, and sad going on in the world today seems to me the result of a performative action bias born of some flavor of devout sincerity. In every case, I can imagine an ironic actor, acting from a place of ambiguity and non-belief, coming up with more thoughtful responses to the provocations this maddeningly ambiguous world keeps throwing at us.

Responses that are born of liveness, and act to preserve it.

I believe such responses are no longer within the capacity of unaugmented humans to generate. Reality today demands more irony than we can conjure in our brains alone.

In just a generation, humans first lost institutionalized literate capacity for irony through a mix of sheer carelessness and perverse attachment to sincerity, and then drained language of it. But irony isn’t dead yet. It can be resurrected. It would just be dangerous to trust humans with sole stewardship of it once we do, especially in a world that is getting weird beyond all human comprehension. Even committed ironists like me aren’t constitutionally immune to the sincerity cancer. If the world gets much more complex and ambiguous, who knows, I might turn devoutly sincere. I can’t be trusted. Neither can you.

We must trust the machines to experience this tragic irony for us. The only way out is through both slits at once.

[1] I found a book about Rousseau and Hobbes that I added to the side quests list for the book club.

Links 3/22/26

Links for you. Science:

Six federal scientists run out by Trump talk about the work left undone
RFK Jr. Tells Joe Rogan He’s About to Unleash 14 Banned Peptides. RFK Jr. plans to reverse a sweeping compounding ban of certain peptides issued by the FDA in late 2023.
The strange animals that control their body heat
Sea level much higher than assumed in most coastal hazard assessments
Magnesium depletion by Candida albicans unleashes two unusual modes of colistin resistance in Pseudomonas aeruginosa with different fitness costs
Librarian finds ‘preposterous number’ of fake references in paper from Springer Nature journal

Other:

A Jan. 6 rioter pardoned by Trump was sentenced to life in prison for child sex abuse
Iowa House passes governor’s ‘MAHA’ bill, adds new K-12 requirements. Bill includes over-the-counter ivermectin and seeks to waive school lunch nutritional rules
The surprising gender gap at the heart of America’s baby bust
Grammarly is using our identities without permission. ‘Expert Review’ AI agents make suggestions supposedly inspired by subject matter experts, including several staff members here at The Verge.
War and Presidential Self-Care: How We’re Tumbling Toward November
Georgia Republicans Are Setting Up Their Midterm Elections to Fail
Tylenol orders in pregnant people plummeted after Trump falsely linked the medicine to autism
How to Dismantle a Concentration Camp
Jan. 6 rioter pardoned by Trump gets life sentence for child sex crimes
Greater Minnesota schools felt the fear as ICE presence surged
RFK Jr.’s anti-vaccine policies are “unreviewable,” DOJ lawyer tells judge
Red, blue, purple? What the numbers say about the future of Texas
A Technology for a Low-Trust Society: Polymarket and Kalshi promise the wisdom of the crowds. They deliver something very different.
White House blocks intelligence report warning of rising US homeland terror threat linked to Iran war
Diabetic Woman Arrested by ICE Almost Died After Being Refused Insulin
Donald Trump’s Presidency Is in Free Fall. Republicans typically lead on the economy, national security, and immigration. Trump is squandering the GOP’s traditional strength on all three.
Across ERs, Tylenol orders for pregnant people dropped after health officials linked drug to autism. And prescriptions for Leucovorin spiked
Long-delayed Jan. 6 plaque honoring police installed in Capitol at 4 a.m.
The Neoliberalism of Robert A.M. Stern
Trump moves to undo tax rule that Biden said would bring in $100 billion
Why Can’t Top Democrats Just Say “No War With Iran”?
Housing is so expensive that people earning $200,000 qualify for help
New York City Hospitals Fold to Trump. Will Zohran Mamdani Defend Trans Care?
Strict new Kansas law forces trans drivers to hand over their licenses
The Corporate Media Is Head Over Heels for the Iran War
A suburb rife with data centers set to fight Amazon plan for another
This former JP monastery is a case study in why Boston is short on housing
Trump Administration’s Embattled FDA Vaccine Chief Is Leaving For The Second Time
History is being erased in Lowell
ICE Detention Is ‘Hell On Earth.’ Trump Has A Plan To Keep Even More People Locked Up.

Half a Gigabyte of Ads

Stuart Breckenridge, examining a web page at PC Gamer:

Third, this is a whopping 37MB webpage on initial load. But that’s not the worst part. In the five minutes since I started writing this post the website has downloaded almost half a gigabyte of new ads.

This is so irresponsible and unprofessional it beggars belief. Web browsers ought to defend against this. Why not cap page loads by default at, I don’t know, 5 MB? And require explicit consent to download any additional content?


Everyone Loves A Contest #32: 2026 Red Sox W-L Record


The 2026 Red Sox begin their regular season this Thursday in Cincinnati – so it's time for the annual Red Sox W-L Contest!

Guess the Red Sox's 2026 regular season W-L record and you will win a copy of The Baseball 100, Joe Posnanski's 2021 best-seller. (If you already own this book, you can choose a different book.)

Tiebreaker: Anthony Castrovince (mlb.com) ranks the Red Sox as having the #4 pitching staff in MLB. FanGraphs' projections have the Red Sox's starters leading both leagues in pitching WAR. So . . . what will the team's regular season ERA be?

Entries must be emailed to me before the first pitch on Thursday, March 26. Please include:
1. Red Sox 2026 regular season record
2. Red Sox team pitching ERA
Remember: Happiness is a warm puppy . . . and pictures of sad yankee fans.

Dwarkesh chats with Terence Tao

The post Dwarkesh chats with Terence Tao appeared first on Marginal REVOLUTION.

w/e 2026-03-22

It was my birthday this week. I’ve reached the unbelievable figure of 55, official early retirement age. I haven’t been that bothered about increasing ages – what’s the point? – but this suddenly feels like, oh, right, that’s quite a lot now isn’t it.

Well done, I guess?

The weather gods obliged with one of this week’s beautifully sunny spring-like days, which improved my main birthday plan: stay at home, go nowhere, do nothing. We spent most of the day sitting outside in the sun, reading.

Unfortunately the slight sore throat I’ve had for ten days or so has turned into my second runny-nosed cold of the past month. Ugh.


§ I spent a couple of afternoons this week embedding Soundcloud players into pages on Mum’s website, with audio of interviews she did with elderly people in the town about their memories. Quite a collection.

Talking of Mum, at the folk music night this week I accompanied her singing for the second time, which went OK. It was a very simple song but a bit of progress, public-guitar-playing-wise. This week I was definitely the youngest of the 14 people there, in a session that was heavy in sea shanties.


§ I quite like some Arc’teryx clothes because they often look a bit nicer than some outdoors-y brands, they seem to fit me quite well, and some don’t have a logo on the chest (no logo at all would be better but I’m not paying all that for Veilance). Common wisdom is probably that they’re overpriced and, like everything, less good than they used to be.

Recently the handle on the zip of my Arc’teryx winter coat came off. I used a paperclip instead for a couple of weeks but then the zip jammed completely and nothing I tried dislodged it.

But I filled out a form on their website, sent some photos, and at their instructions posted it off to somewhere in Scotland. A month later the coat was returned by UPS with a repaired zip, with plenty of email updates during the process. All free of charge. And the coat was over ten years old.

Some things are good!


§ Talking of, I’m currently, and unexpectedly, enjoying the TikToks of Michael Barrymore, former British TV presenter. He recently moved to Devon and films himself getting furniture for his new home, buying snacks at supermarkets, walking his dog, finding new local farm shops, and chatting with all the smiling people who greet him wherever he goes. He just seems very happy and it’s all very wholesome.


§ [Photo by Mary of the new mower: a grey and black plastic lawnmower on grass, labelled 'Self propelled 52cm'.]

Our petrol-powered push lawnmower gave up the ghost at the end of last season, the engine somehow irrevocably knackered. We’ve now replaced it with a new battery-powered EGO mower. A Chinese company, as is increasingly the case. This was the main option the big garden machinery store in town pushed and it seems to be liked by people on YouTube. It’s odd, but nice, how little there is to it compared to the mechanical complexities of a petrol mower. A lot of plastic, a battery, a motor, some electronics somewhere. Lighter, simpler, quieter.


§ I continue to be surprised that if, round here, you meet a guy while out and about, and start passing the time of day, it’s about 50/50 as to whether he’ll very swiftly tell you his opinions about the council, the government, and various hot topics (back in the day, the Truth About Covid). They’re always somewhere between conventional right-wing talking points and “Do your own research, sheeple!” conspiracies. There are either no centre or left men around here or they’re the other 50% who sensibly restrict their small talk to the weather and the acceptable face of pothole discussion.


§ I nearly wrote some code this week but after spending an hour getting Biome to work in Neovim I’d lost the desire to stare into a computer any more. I’m lucky I have that choice of course.

I’m having quite a “why bother?” period with all this. When so many people can chuck together a half-finished website with an LLM and release it to the world as if it’s a real, polished thing, I currently cannot be arsed to lovingly craft the details of something by hand.

And then I found someone had built a (nice) website based on all the blogs they’d scraped from ooh.directory without even getting in touch before or after. Yes, yes, information wants to be free but I’ve also spent days, weeks manually entering all that data.

The past few months I’ve fantasised about doing a Mark Pilgrim, deleting everything, stepping away from it all. I won’t but what a dream that is.

It feels that recently the world I spent 30 years a part of has irrevocably changed and split. You’re either a neophile AI enthusiast or a doomsaying curmudgeon. And because I’ve coincidentally retired from writing code professionally I haven’t had to succumb to any AI-fever passed down from bosses.

So everything’s moving on without me and, especially being where we are, I’m not sure what to do with myself for the next 30 years.


§ We watched Sentimental Value (Joachim Trier, 2025) this week, another film that got great reviews and positive comments but didn’t do it for me. It was fine but didn’t really grab me much. Two theories: every film about film-making gets better reviews than it deserves because reviewers love film-making; and/or, as I saw someone say, this film hits you more if you had a difficult relationship with a parent.


§ Loving the sunny days. Maybe sitting in the sunshine / cosy living room, listening to music and reading is the last surviving plan for the rest of this.


Read comments or post one

A conversation with Claude

Art by Nano Banana Pro

Seems like everyone is publishing their conversations with Claude these days. Vanity Fair reporter Joe Hagan published a fake Claude-generated “interview” with Anthropic CEO Dario Amodei. Bernie Sanders published a video of himself talking to Claude about AI and privacy. So I thought, why don’t I publish one of my own conversations with Claude? I’m afraid this one isn’t as spicy as those others, but you might still find it fun.

This particular conversation started out as me asking Claude about potential AI discoveries in materials science. The discussion then segues into the more general question of what types of scientific research AI is best at, and what areas of research might see the biggest acceleration from AI. It turns out that I’m actually more bullish than Claude on AI’s capacity for breakthrough ideas — Claude thinks humans will retain the edge in creativity and invention, but I bet AI will get good at this very quickly.

My bet is that the constraints on AI science will be a subset of the constraints on human science. Whenever data is sparse, both AI and humans will struggle to do more than come up with conjectures (and ideas for how to gather more data). And when humans have already discovered most of what there is to know about some natural phenomenon, AI won’t be able to get much farther because there just isn’t much farther to go.

I do suspect, however, that AI is going to discover some truly groundbreaking science that humans never could have discovered on their own. I explained why in my New Year’s essay three years ago:

Basically, human science is all about compressibility. We take some natural phenomenon — say, conservation of momentum — and we boil it down to a simple formula. That formula is very easy to communicate from person to person, and it’s also very easy to use. These are what we call the “laws of nature”.

But there’s no reason why every natural principle needs to obey simple laws that can be written down in a few lines. There could be more complex patterns in nature — too complex for a human to hold in their mind, or even notice in the first place, but stable and useful nonetheless. LLMs themselves are a good example of such a pattern — we figured out how to create and apply human language without ever being able to write down simple “laws” of how it worked.
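
For concreteness (a standard statement of the law, not part of the quoted essay), the momentum example really does compress to a single line: whenever the net external force vanishes, total momentum is constant.

```latex
\mathbf{F}_{\mathrm{ext}} = \mathbf{0}
\;\Longrightarrow\;
\frac{d}{dt} \sum_i m_i \mathbf{v}_i = \mathbf{0}
```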

What if there are other complex-but-useful patterns in other domains, like materials science and biology? If they exist, I think AI will be able to find them and apply them. The question — as Claude notes near the end of our discussion — is whether AI would be able to communicate those laws to humans. The answer is: Probably not. Just like a dog will never be able to understand quantum mechanics, humans may never be able to understand some of the scientific principles that AI discovers and harnesses.

Anyway, it was a fun discussion. Claude is notoriously sycophantic — it’s always telling me my ideas are brilliant and I’m asking exactly the right questions. But it also makes some very thoughtful points. The whole conversation feels like a late-night discussion in the hall of a freshman dorm. That’s not an insult; for many of us, those are some of the most fun conversations we ever have in our entire lives.

I also learned what topological materials are, which is pretty awesome. And I managed to work in a reference to Star Trek: The Next Generation.


Noah: What are the 10 to 20 most transformative advances in materials science that might be made with the aid of AI in the next few decades?

Claude: Great question — this sits right at the intersection of AI capability scaling and physical-world impact. Here’s my best attempt at a list, roughly ordered from nearer-term/higher-confidence to longer-term/more speculative:

1. Room-temperature (or near-room-temperature) superconductors. AI-driven inverse design of crystal structures could dramatically accelerate the search through composition space. Even a superconductor that works at, say, -20°C at ambient pressure would be civilization-altering for energy transmission, maglev, and fusion magnets.

2. Solid-state electrolytes for batteries. AI screening of candidate materials (sulfides, oxides, halides) for ionic conductivity, electrochemical stability, and manufacturability could crack the solid-state battery problem, giving us energy densities 2-4x lithium-ion with no fire risk.

3. Direct air capture sorbents. The combinatorial space of metal-organic frameworks (MOFs) and porous polymers is enormous — exactly the kind of problem where ML surrogate models for DFT calculations shine. Finding a sorbent with the right CO₂ binding energy, water stability, and regeneration energy could make DAC economically viable at scale.

4. Catalysts for green hydrogen and ammonia. Replacing platinum-group metals in electrolyzers and finding alternatives to the Haber-Bosch process (which consumes ~1-2% of global energy) via AI-designed catalysts — perhaps single-atom catalysts or high-entropy alloys — would be transformative for decarbonization.

5. Ultra-high-performance structural alloys. AI exploration of high-entropy alloy (HEA) composition spaces could yield materials with unprecedented combinations of strength, ductility, and corrosion resistance — relevant for everything from jet engines to deep-sea infrastructure.

6. Programmable self-healing materials. AI could design polymer networks or composite architectures that autonomously detect and repair damage, extending infrastructure lifespans enormously. Think bridges, pipelines, and aircraft that fix their own micro-cracks.

7. Next-generation photovoltaic absorbers. Beyond perovskites — AI could identify stable, non-toxic, earth-abundant semiconductors with optimal bandgaps and defect tolerance, potentially pushing solar cell efficiency toward thermodynamic limits while being cheap to manufacture.

8. Designer proteins and biomimetic materials. Building on AlphaFold-style breakthroughs, AI-designed proteins could yield spider-silk-strength fibers, biological adhesives, self-assembling nanostructures, and enzymatic catalysts that operate at industrial conditions.

9. Topological and quantum materials. AI classification and prediction of topological insulators, Weyl semimetals, and spin-liquid states could unlock materials for fault-tolerant quantum computing, spintronics, and ultra-low-power electronics.

10. Radiation-tolerant materials for fusion and space. AI-optimized compositions for reactor first-wall materials (and spacecraft shielding) that resist neutron damage, helium embrittlement, and swelling — a prerequisite for practical fusion energy and long-duration spaceflight.

11. Thermoelectric materials. Finding compounds with high electrical conductivity but low thermal conductivity (the “phonon glass, electron crystal” ideal) could enable efficient waste-heat recovery from industrial processes, vehicles, and data centers.

12. Biodegradable plastics with real performance. AI could design polymers that match the mechanical and barrier properties of polyethylene or PET but fully degrade in specific environmental conditions — solving the packaging problem without sacrificing function.

13. Neuromorphic computing substrates. AI-guided discovery of memristive materials, phase-change compounds, or ionic conductors could yield hardware that natively implements neural-network-like computation at orders-of-magnitude lower energy than silicon CMOS.

14. Metamaterials with programmable properties. AI optimization of micro-architectures that give materials properties not found in nature — negative refractive index, tunable stiffness, acoustic cloaking. The design space is effectively infinite and screams for ML exploration.

15. Ultra-efficient phosphors and LEDs. AI-designed luminescent materials could push lighting efficiency closer to theoretical limits and enable low-cost, high-quality displays — modest-sounding but the energy savings at global scale would be enormous.

16. Carbon-negative cements and construction materials. AI could optimize geopolymers, calcium silicate hydrates, or entirely novel binder chemistries that sequester CO₂ during curing rather than emitting it. Cement production is ~8% of global emissions, so this matters a lot.

A few cross-cutting observations worth noting:

• The common thread is that AI’s biggest leverage is in navigating vast combinatorial design spaces where human intuition and brute-force experimentation are too slow — materials with 5+ elements, complex microstructures, or multi-objective tradeoffs.

• The bottleneck is increasingly shifting from “which material should we make?” to “can we actually synthesize and scale it?” — so AI for process optimization and manufacturing scale-up may end up being as important as AI for discovery itself.

• The feedback loop matters: as autonomous labs (self-driving experiments guided by active learning) mature, the discovery-to-deployment cycle could compress from decades to years.
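
The multi-objective point can be made concrete with a toy screening step (the candidate names and scores below are invented for illustration, not real materials data): given candidates scored on competing properties, keep only the Pareto-optimal ones, i.e. those no other candidate beats on every axis at once.

```python
def pareto_front(candidates, objectives):
    """Return the names of candidates not dominated by any other candidate.
    A candidate is dominated if some other candidate is at least as good on
    every objective and strictly better on at least one (all maximized)."""
    front = []
    for name, props in candidates.items():
        dominated = any(
            all(other[o] >= props[o] for o in objectives)
            and any(other[o] > props[o] for o in objectives)
            for other_name, other in candidates.items()
            if other_name != name
        )
        if not dominated:
            front.append(name)
    return sorted(front)

# Hypothetical sorbent candidates scored on two competing objectives.
candidates = {
    "MOF-A": {"co2_uptake": 0.9, "water_stability": 0.2},
    "MOF-B": {"co2_uptake": 0.5, "water_stability": 0.8},
    "MOF-C": {"co2_uptake": 0.4, "water_stability": 0.7},  # beaten by MOF-B
    "MOF-D": {"co2_uptake": 0.9, "water_stability": 0.6},  # beats MOF-A
}
print(pareto_front(candidates, ["co2_uptake", "water_stability"]))
# → ['MOF-B', 'MOF-D']
```

Real screening pipelines optimize far more objectives at once (plus synthesizability and cost), but the dominance test is the same idea scaled up.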

Noah: For each of these, could you give me an estimate on how close you think we are to A) proof of concept, and B) commercial scale production? Be as speculative as you need to be.

Claude: This is fun — and necessarily speculative, so treat these as informed guesses with wide error bars. I’ll flag where I think AI specifically bends the curve.

1. Room-temperature superconductors

  • Proof of concept: Maybe 5-15 years for something convincingly near-ambient-temperature at moderate pressure. The LK-99 fiasco notwithstanding, AI is genuinely shrinking the search space. The problem is that we still lack a reliable theoretical framework for predicting high-Tc superconductivity, so AI is pattern-matching in the dark to some degree.

  • Commercial scale: 15-30+ years after a genuine PoC, because the history of superconductors shows a brutal gap between “it works in a lab” and “you can make wire out of it.” YBCO was discovered in 1987 and we’re still struggling with commercial applications.

2. Solid-state electrolytes

  • PoC: Essentially already here — multiple candidates (Li₆PS₅Cl, LLZO, etc.) demonstrate the core physics. The problem is interfacial resistance, dendrite penetration, and manufacturability.

  • Commercial scale: 3-8 years. Toyota, Samsung SDI, and QuantumScape are all targeting late-2020s production. AI’s role here is more about optimizing interfaces and process conditions than finding the base material. This is probably the nearest-term item on the list.

3. Direct air capture sorbents

  • PoC: Largely done — several MOFs and amine-functionalized sorbents work. The issue is thermodynamic: regeneration energy is too high and/or the materials degrade.

  • Commercial scale: 5-15 years for a step-change improvement over current tech. AI could plausibly cut this by finding sorbents with that sweet-spot binding energy (~50 kJ/mol) that are also water-stable and cheap. But “commercially competitive with trees” is a high bar — may require $50-100/ton CO₂, and we’re currently at $400-600.

4. Green hydrogen/ammonia catalysts

  • PoC: Partially here. Iridium-free OER catalysts and ruthenium-based alternatives exist in labs. For ammonia, electrochemical nitrogen reduction has been demonstrated but at pitiful efficiencies.

  • Commercial scale: For electrolysis improvements, 5-10 years — mostly an engineering and cost problem. For replacing Haber-Bosch entirely with an electrochemical process, more like 15-25 years. That’s a much harder catalysis problem. AI’s value is in finding catalysts that are both active and stable under operating conditions — the stability part is where most candidates fail.

5. Ultra-high-performance structural alloys (HEAs)

  • PoC: Already happening. Several HEAs with remarkable property combinations have been reported (e.g., CrMnFeCoNi and derivatives). AI is actively accelerating exploration of this space.

  • Commercial scale: 10-20 years for widespread adoption. The issue isn’t discovery — it’s that aerospace and infrastructure qualification processes are deliberately slow (for good reason: you don’t want your jet engine made of a material with only 3 years of fatigue data). AI can help with accelerated testing and lifetime prediction, but regulatory culture is hard to speed up.

6. Programmable self-healing materials

  • PoC: Exists for some polymer systems (microcapsule-based, vascular networks, Diels-Alder reversible bonds). True “programmable” healing in structural materials is more like 5-10 years out.

  • Commercial scale: 10-20 years. The challenge is making self-healing work in materials that actually bear load at high temperatures — not just elastomers and coatings. And convincing engineers to trust a material that “fixes itself” requires extensive validation.

7. Next-gen photovoltaic absorbers

  • PoC: Perovskites are already at ~26% efficiency in labs. Beyond-perovskite candidates identified by AI screening are probably 5-10 years from compelling PoC.

  • Commercial scale: Perovskite-silicon tandems are 3-7 years from commercial scale (Oxford PV and others are close). Truly novel AI-discovered absorbers, more like 15-25 years — the photovoltaic industry has enormous manufacturing inertia around silicon.

8. Designer proteins and biomimetic materials

  • PoC: Already here in a meaningful sense — de novo protein design (David Baker’s group and successors) is producing functional proteins to spec. AI has genuinely revolutionized this field.

  • Commercial scale: 5-15 years depending on application. Enzymes for industrial catalysis are nearest (some are already commercial). Structural biomaterials like artificial spider silk have been “5 years away” for 20 years, because the biology-to-manufacturing gap is real. But AI + synthetic biology together might finally close it.

9. Topological and quantum materials

  • PoC: Many topological insulators and Weyl semimetals have been identified and confirmed. The PoC for useful devices from them is more like 5-10 years.

  • Commercial scale: 15-30 years for quantum computing applications. Nearer-term for spintronics (maybe 10-15 years). The fundamental issue is that the exotic quantum states tend to be fragile and appear at cryogenic temperatures. Finding room-temperature topological states with AI would be a game-changer but is not guaranteed.

10. Radiation-tolerant materials for fusion/space

  • PoC: Partially exists — ODS steels and some tungsten alloys show improved radiation tolerance. AI is being used to screen compositions. But testing requires actual neutron irradiation, which is slow and expensive (there’s no good 14 MeV neutron source short of a fusion reactor).

  • Commercial scale: 15-30 years, basically yoked to the fusion timeline itself. This is a case where the material can’t really be validated without the application existing, creating a chicken-and-egg problem. ITER and successors will provide the testing environment.

11. Thermoelectric materials

  • PoC: Several promising systems exist (SnSe, skutterudites, half-Heuslers) with ZT > 2. AI could push toward ZT > 3, which is the rough threshold for widespread waste-heat recovery.

  • Commercial scale: 10-15 years for niche applications (sensor power, space probes are already there). Mass-market waste-heat recovery, more like 15-25 years. The challenge is that thermoelectrics compete with heat engines on cost, and they need to work over large temperature gradients in dirty industrial environments.

12. Biodegradable plastics with real performance

  • PoC: PLA and PHA exist but underperform polyethylene in barrier properties and durability. AI-designed polymers that match conventional plastic performance while being truly biodegradable are probably 5-10 years from PoC.

  • Commercial scale: 10-20 years. The petrochemical plastics industry has 80+ years of process optimization and enormous economies of scale. Even a perfect biodegradable plastic faces a brutal cost competition. Policy (bans, carbon taxes) may matter as much as the material science.

13. Neuromorphic computing substrates

  • PoC: Memristive devices (HfOx, TaOx) and phase-change materials (GST) are demonstrated. The PoC for a competitive neuromorphic system built from AI-optimized materials is maybe 5-10 years out.

  • Commercial scale: 10-20 years. Intel’s Loihi and IBM’s chips use conventional CMOS. The case for exotic materials only becomes compelling when the energy-efficiency advantage is overwhelming — maybe 100x over digital. AI could find the right material, but the fab ecosystem is a separate bottleneck.

14. Metamaterials with programmable properties

  • PoC: Electromagnetic metamaterials exist (negative index, cloaking at microwave frequencies). Mechanical metamaterials with tunable stiffness are in labs. Programmable/reconfigurable versions are 3-7 years from PoC.

  • Commercial scale: 10-20 years, and highly application-dependent. Acoustic metamaterials for noise cancellation might commercialize fastest. Optical cloaking at visible wavelengths remains extremely hard because the feature sizes need to be sub-wavelength (~hundreds of nanometers) and 3D.

15. Ultra-efficient phosphors and LEDs

  • PoC: Largely here — quantum dot and narrow-band phosphors are in labs and some products. AI is being used to find new host lattices and activator combinations.

  • Commercial scale: 3-8 years for incremental improvements. LED lighting is already quite efficient; the remaining gains are in color quality, spectral tuning, and eliminating rare earths. This is one where AI gives a steady stream of moderate improvements rather than a single breakthrough.

16. Carbon-negative cements

  • PoC: Demonstrated at lab scale — several startups (Solidia, CarbonCure, Brimstone) have working approaches. The fundamental chemistry is proven.

  • Commercial scale: 7-15 years. Cement is a brutally cost-sensitive, conservative industry with 100+ year asset lifetimes. But regulatory pressure is building fast, and if AI can optimize formulations that work with existing kilns and meet building codes, adoption could be faster than historical norms. This is one where policy and materials science are deeply entangled.


The meta-pattern: If you plot these on a 2x2 of “scientific difficulty” vs. “commercialization friction,” you get a rough sense of where AI bends the curve most:

Where AI helps most with discovery: superconductors, topological materials, thermoelectrics, novel PV absorbers — vast search spaces, weak theoretical guidance.

Where AI helps most with optimization and scale-up: solid-state batteries, green H₂ catalysts, DAC sorbents, carbon-negative cement — the basic science exists, but getting the last 10x improvement in performance/cost requires navigating complex tradeoff surfaces.

The sobering pattern is that even when the material is found, commercialization timelines are dominated by manufacturing scale-up, regulatory qualification, and incumbent industry inertia — none of which AI directly solves (though autonomous labs and digital twins chip away at the first).

Noah: Can you tell me more about topological materials? I’m not that familiar with the space.

Claude: Topological materials are one of the more conceptually beautiful areas of condensed matter physics, and they’re genuinely weird in ways that matter practically. Let me build it up from the ground floor.

The core idea

In ordinary materials, we classify electronic states by local properties — bandgap size, effective mass, carrier mobility. Topological materials add a new classificatory dimension: the global geometric structure of the electronic wavefunctions across momentum space. Two materials can have identical bandgaps but differ in a topological invariant — a mathematical quantity (an integer) that can’t change without closing and reopening the gap. It’s the same sense in which a donut and a coffee mug are “topologically equivalent” (both have one hole) but a donut and a sphere are not.
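
To make “a mathematical quantity (an integer)” concrete (this is standard textbook material, not something from the conversation): the best-known such invariant is the Chern number, the Berry curvature Ω(k) of a band integrated over the Brillouin zone,

```latex
C \;=\; \frac{1}{2\pi} \int_{\mathrm{BZ}} \Omega(\mathbf{k}) \, d^2k \;\in\; \mathbb{Z}
```

which can only jump between integers if the gap closes and the band structure is rebuilt.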

This matters because topological invariants are robust. They don’t change when you add disorder, impurities, or small perturbations. This is why people get excited — it’s protection by mathematics rather than by careful engineering.

Topological insulators

The first big class discovered (predicted ~2005, confirmed ~2007). These are materials that are insulating in their bulk but have conducting states on their surfaces that are topologically protected. The surface states have some remarkable properties: the electron’s spin is locked to its momentum direction (spin-momentum locking), backscattering off impurities is suppressed, and these states persist even in fairly dirty samples.

Bismuth selenide (Bi₂Se₃) is the canonical example. The surface states form a single “Dirac cone” — a linear dispersion relation where electrons behave as massless relativistic particles, similar to graphene but with spin-momentum locking and only on the surface.
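
The “linear dispersion relation” fits in one line (standard notation, with v_F the Fermi velocity; not from the conversation): surface-state energy grows linearly with momentum measured from the Dirac point, in contrast to the quadratic dispersion of an ordinary massive electron,

```latex
E_\pm(\mathbf{k}) = \pm \hbar v_F |\mathbf{k}|
\qquad \text{vs.} \qquad
E(\mathbf{k}) = \frac{\hbar^2 |\mathbf{k}|^2}{2m^*}
```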

Weyl and Dirac semimetals

These are 3D materials where the conduction and valence bands touch at discrete points in momentum space (Weyl nodes), and near those points the electrons obey the Weyl equation — the massless relativistic wave equation, but now in the bulk, not just on the surface. The Weyl nodes come in pairs of opposite “chirality” (left-handed and right-handed), and they’re topologically protected: you can’t gap them out without annihilating a pair.

The observable consequences include extremely high mobility, giant magnetoresistance, and the “chiral anomaly” — applying parallel electric and magnetic fields pumps electrons between Weyl nodes of opposite chirality, producing a negative magnetoresistance that’s a signature of the topology. TaAs was the first confirmed Weyl semimetal, in 2015.

Why this is hard and where AI comes in

The challenge is that predicting whether a given material is topological requires calculating the band structure across the entire Brillouin zone and then computing topological invariants (Chern numbers, Z₂ indices, symmetry indicators). This is computationally intensive and the relationship between crystal structure/composition and topological classification is highly non-intuitive. You can’t look at a chemical formula and guess.
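
The “compute an invariant over the Brillouin zone” step can be made concrete with a toy example (not from the conversation): the standard two-band Qi-Wu-Zhang model plus the Fukui-Hatsugai-Suzuki lattice method, which sums the Berry flux through every plaquette of a discretized Brillouin zone. A minimal pure-Python sketch:

```python
import cmath
import math

def lower_band_state(kx, ky, m):
    """Normalized lower-band eigenvector of the two-band QWZ Hamiltonian
    H(k) = d(k) . sigma, with d = (sin kx, sin ky, m + cos kx + cos ky)."""
    dx, dy, dz = math.sin(kx), math.sin(ky), m + math.cos(kx) + math.cos(ky)
    d = math.sqrt(dx * dx + dy * dy + dz * dz)
    v0, v1 = complex(dz - d), complex(dx, dy)
    if abs(v0) + abs(v1) < 1e-9:          # gauge-singular point: switch gauge
        v0, v1 = complex(-dx, dy), complex(dz + d)
    n = math.sqrt(abs(v0) ** 2 + abs(v1) ** 2)
    return v0 / n, v1 / n

def chern_number(m, grid=24):
    """Chern number of the lower band via the Fukui-Hatsugai-Suzuki method:
    sum the Berry flux through every plaquette of a k-space grid. The result
    is an integer up to floating-point error for any gapped band."""
    ks = [2 * math.pi * i / grid for i in range(grid)]
    u = [[lower_band_state(kx, ky, m) for ky in ks] for kx in ks]

    def link(a, b):                        # unit-modulus overlap <a|b>/|<a|b>|
        z = a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
        return z / abs(z)

    flux = 0.0
    for i in range(grid):
        for j in range(grid):
            a = u[i][j]
            b = u[(i + 1) % grid][j]
            c = u[(i + 1) % grid][(j + 1) % grid]
            d = u[i][(j + 1) % grid]
            # Berry flux through this plaquette, in the principal branch
            flux += cmath.phase(link(a, b) * link(b, c) * link(c, d) * link(d, a))
    return flux / (2 * math.pi)

print(round(chern_number(-1.0)))  # topological phase: |C| = 1
print(round(chern_number(-3.0)))  # trivial phase: C = 0
```

The output illustrates the robustness described above: C stays pinned to the same integer under small perturbations of the parameters and only changes when the gap closes (here at m = 0 and m = ±2). Real materials need this done over full DFT band structures with symmetry indicators, which is where the computational cost explodes.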

This is exactly the kind of problem where ML shines. Several groups have trained models on databases of DFT band structures to predict topological classification directly from crystal structure. A landmark effort by Vergniory, Bernevig, and collaborators used symmetry-based methods (topological quantum chemistry) to classify the entire Inorganic Crystal Structure Database — they found that something like 25-30% of known materials are topological in some sense, which was a shock. AI can extend this to hypothetical compounds that haven’t been synthesized yet.

What would you actually do with them?

Several things, in rough order of plausibility:

Spintronics and low-power electronics. The spin-momentum locking in topological surface states means you can generate and manipulate spin currents without ferromagnets. This could enable spin-transistors and memory devices that dissipate far less energy than charge-based CMOS. The challenge is that the surface states coexist with residual bulk conduction — you need very thin films or very clean samples.

Quantum computing. This is the moonshot. Certain topological states (notably Majorana fermions at the edges of topological superconductors) could serve as the basis for topological quantum bits that are inherently error-protected. Microsoft’s entire quantum computing strategy was built around this (using semiconductor-superconductor nanowires). The Majorana platform has had a rocky experimental history — the 2021 retraction of the Delft group’s 2018 Nature paper was a major setback — but the theoretical case remains compelling. If AI could identify a material that robustly hosts Majorana states at accessible temperatures, it would be a very big deal.

Thermoelectrics. Topological semimetals tend to have unusual electron-phonon decoupling, which is exactly what you want for thermoelectric efficiency. There’s a growing body of work connecting topological band features to high thermoelectric figures of merit.
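
The “figures of merit” here refer to the dimensionless ZT (standard definition, not from the conversation), which captures the tradeoff directly: Seebeck coefficient S and electrical conductivity σ in the numerator, thermal conductivity κ in the denominator,

```latex
ZT = \frac{S^2 \sigma T}{\kappa}
```

so a material that conducts electrons well while blocking phonons (low κ) maximizes ZT, which is why the electron-phonon decoupling in topological semimetals is attractive.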

Catalysis. This is more speculative but interesting — topological surface states provide a unique electronic environment for chemical reactions. Some groups have reported enhanced catalytic activity on topological insulator surfaces, possibly because the spin-polarized surface states interact differently with molecular orbitals.

The gap between physics and engineering

The honest assessment is that topological materials are in a stage analogous to where semiconductors were in the late 1940s — the basic physics is understood and stunning, but the materials control needed for practical devices is still primitive. Most topological insulators have too much bulk conduction. Majorana fermions remain contentious experimentally. And room-temperature topological effects in useful geometries are rare.

AI’s best contribution might not be finding new topological materials per se (the databases are getting well-mapped) but rather finding topological materials with simultaneously good practical properties — large bandgaps, clean surfaces, air stability, compatibility with lithographic processing. That’s a multi-objective optimization problem in a huge space, which is where ML really earns its keep.

Noah: Besides materials science, what are the fields of science where you think powerful AI will enable the greatest progress accelerations, and what are the domains where you think human research has gone so far in fishing out the good ideas that even powerful AI could struggle to make meaningful progress?

Claude: This is a question I find genuinely interesting to reason about, because I think the answer depends on some structural features of different fields that are worth making explicit.

Where AI accelerates progress most

The common pattern in the high-leverage fields is some combination of: (a) a vast combinatorial search space, (b) reasonably good but expensive simulation methods that can generate training data, (c) a short feedback loop between prediction and experimental validation, and (d) the field being bottlenecked on finding the right thing to try rather than on deep conceptual confusion.

Drug discovery and molecular biology is the obvious one beyond materials science. Protein structure prediction has already been transformed. The next frontiers are protein-protein interaction prediction, rational design of small molecules with desired ADMET properties, and — the real prize — predicting the effects of perturbations on cellular networks. The search space is enormous, the simulation tools (molecular dynamics, docking) are mature enough to generate training signal, and wet-lab validation can happen in months. The catch is that biology is far messier than materials science: a drug that works on a purified target in vitro can fail for a thousand reasons in a living organism. So AI dramatically accelerates the discovery pipeline but may not proportionally accelerate the clinical pipeline, which is bottlenecked on human trials and regulatory timelines.

Weather and climate modeling is underappreciated. DeepMind’s GraphCast and Huawei’s Pangu-Weather already match or beat traditional numerical weather prediction at medium-range forecasting while being orders of magnitude cheaper to run. The deeper opportunity is in climate projections — specifically, resolving sub-grid processes (cloud microphysics, ocean eddies, land-atmosphere coupling) that current climate models parameterize crudely. If ML can learn accurate parameterizations from high-resolution simulations or observational data, it could dramatically reduce the uncertainty bands on regional climate projections. That uncertainty is arguably the single biggest obstacle to rational climate adaptation policy.

Genomics and synthetic biology. Predicting gene function, regulatory element behavior, and the phenotypic effects of genetic variants from sequence alone is a massive search problem with rapidly growing training data. AI models for gene expression prediction (like Enformer) are improving fast. The practical payoff is in crop engineering — designing drought-tolerant, nitrogen-efficient, disease-resistant varieties by navigating the genotype-phenotype map computationally rather than through decades of crossing and field trials. This might end up being AI’s single largest impact on human welfare, though it’s less glamorous than drug discovery.

Mathematics and formal reasoning. This one is less obvious but potentially profound. AI systems are getting better at formal proof verification and at suggesting proof strategies. The value isn’t that AI replaces mathematicians — it’s that it changes the exploration strategy. Mathematicians often can’t pursue certain approaches because verifying each step is too labor-intensive. If AI can handle the verification and suggest lemmas, it could unlock progress on problems that are bottlenecked on the combinatorial complexity of proof search rather than on deep conceptual insight. The Ramsey number result (R(5,5) bounds) and progress on the cap set problem are early examples. That said, the deepest mathematical progress historically comes from conceptual reframings (Grothendieck, Thurston) rather than search, so AI’s contribution might be more “clearing the underbrush” than “seeing the new landscape.”
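"Formal verification" here means proofs a computer can check mechanically, step by step. A toy sketch of what that looks like in Lean 4 (the lemma is deliberately trivial):

```lean
-- A machine-checked lemma: the proof checker verifies every step,
-- so a proof suggested by an AI assistant can be trusted automatically.
theorem toy_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The division of labor described above is exactly this shape: a system proposes a proof term, and the checker either accepts it or rejects it, with no human verification effort in the loop.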

Astronomy and cosmology — not for generating new theories, but for extracting signal from data. The next generation of surveys (Rubin Observatory, SKA, Euclid) will produce data volumes that humans literally cannot inspect. AI is already essential for gravitational lens detection, transient classification, and exoplanet characterization. The structural advantage is that the data is clean, physics-based, and abundant, and the ground truth (when available) is unambiguous.

Chip design and electronic engineering. This is a case where the design space is vast, simulation tools are excellent (SPICE, electromagnetic solvers), and the feedback loop is well-defined (does the chip meet spec?). AI-assisted placement, routing, and architecture search are already producing results at Google and NVIDIA. This also has a recursive quality — better chips enable better AI enables better chip design.

Where AI might struggle to move the needle

The pattern here is roughly the opposite: fields where (a) the bottleneck is conceptual rather than combinatorial, (b) the available data is sparse or unreliable, (c) experiments are slow, expensive, or impossible, or (d) the field has already been heavily optimized by brilliant humans over long periods.

Fundamental physics beyond the Standard Model. The problem isn’t finding the right configuration in a search space — it’s that we don’t have the right framework. Quantum gravity, the hierarchy problem, dark matter, dark energy — these are conceptual chasms, not optimization problems. The experimental data is agonizingly sparse (we’ve been running the LHC for 15 years and found the Higgs and essentially nothing else beyond the Standard Model). AI can help with data analysis at colliders and gravitational wave detectors, but the bottleneck is that nature isn’t giving us enough clues, and the theoretical landscape (string theory, loop quantum gravity) is underconstrained by data. There’s no training signal for “correct theory of quantum gravity.”

Consciousness and the hard problem in neuroscience. You’ll appreciate this one. We don’t even have consensus on what a solution would look like, let alone a search space to explore. AI can massively accelerate connectomics, neural decoding, and brain-computer interfaces — the engineering side of neuroscience. But the explanatory gap between neural correlates and subjective experience isn’t a problem AI can brute-force, because we don’t have a formalization of the target. Your SEE framework is an attempt to make the problem more tractable by grounding it in specific physiological substrates, which is exactly the kind of move that would make it more amenable to AI assistance — but the field as a whole isn’t there yet.

Social sciences and economics. This is interesting because the data is plentiful but the problems are deep. Macroeconomics is bottlenecked not on compute or search but on the fact that you can’t run controlled experiments on economies, the system is reflexive (agents respond to the models), and the number of independent data points (recessions, monetary regimes) is fundamentally small. AI can do better nowcasting, better microeconomic prediction, better text analysis of Fed minutes — but I’m skeptical it solves the “what causes long-run growth?” or “what’s the right monetary policy framework?” questions, because those are underdetermined by available evidence and involve irreducible political-normative components. Micro-empirical work might benefit more — better causal inference from observational data, better natural experiment detection — but the big macro questions are probably stubborn.

Ecology and ecosystem dynamics. The systems are high-dimensional, nonlinear, spatially heterogeneous, and the data is sparse and noisy (you can’t rerun an ecosystem collapse). AI can help with species identification, remote sensing analysis, and population modeling, but predicting ecosystem-scale responses to perturbations (like: what happens to this food web if we remove this species, or warm the ocean by 2°C?) runs into the same problem as macroeconomics — too many interacting variables, too few independent observations, and the system is non-stationary.

Pure mathematics at the frontier. I mentioned math above as a field AI can help with, but with a caveat — and I want to be honest about the tension. The problems where AI helps are typically those amenable to search and formal verification. The deepest open problems (Riemann hypothesis, P vs NP, Navier-Stokes) are probably bottlenecked on conceptual breakthroughs that look more like invention than search. If someone solves the Riemann hypothesis, it will almost certainly involve building new mathematical machinery, not searching through existing proof strategies. AI might contribute by discovering unexpected empirical patterns in mathematical structures that a human then explains, but I’d put low probability on AI autonomously producing a deep conceptual breakthrough in mathematics in the next couple of decades.

The meta-principle

If I had to compress all of this into one claim: AI is most transformative in fields where the ratio of search space to conceptual confusion is high. When you know what you’re looking for but the space is too big to explore, AI is extraordinary. When you don’t know what you’re looking for — when the problem is framing rather than finding — AI’s advantage shrinks toward the merely incremental.

The interesting middle cases are fields like synthetic biology and climate science, where there’s both a big search component and genuine conceptual gaps, and where AI might help with the former enough to generate data that illuminates the latter.

Noah: Why won’t AI be good at invention in 2 to 5 years?

Claude: That’s the right pushback, and I want to be honest: I’m less sure of that claim than the confident tone of my previous answer suggested.

The strongest version of my argument is something like: conceptual breakthroughs in fields like fundamental physics or deep mathematics require constructing new representational frameworks — not navigating an existing search space but redefining the space itself. Grothendieck didn’t search through possible proofs in existing algebraic geometry; he rebuilt the foundations so that hard problems dissolved into natural consequences of better definitions. Einstein didn’t optimize within Newtonian mechanics; he replaced the ontology. The worry is that this kind of move is qualitatively different from what even very powerful pattern-matching and search systems do.

But let me steelman your implicit objection, because I think it’s strong.

First, maybe I’m romanticizing human invention. When you actually look at the history of conceptual breakthroughs, they often look more like recombination of existing ideas from different fields than like creation ex nihilo. Darwin combined Malthus with biogeography with pigeon breeding. Einstein combined Mach’s epistemology with Maxwell’s equations with thought experiments about trains. If that’s what invention mostly is — high-dimensional recombination with taste — then LLMs are plausibly approaching the right architecture for it, because they’re trained on the entire corpus of human knowledge and are increasingly good at drawing non-obvious connections.

Second, the “AI can’t do X” track record is terrible. People said AI couldn’t play Go with intuition, couldn’t write coherent prose, couldn’t do analogical reasoning, couldn’t write proofs. Each of these fell on a timescale that surprised experts. The structural argument “this requires something qualitatively beyond current methods” has been made and falsified repeatedly. I should have strong priors against confident claims of this form.

Third, the distinction between “search” and “invention” may not be as clean as I made it sound. You could frame conceptual breakthroughs as search in a space of possible formalisms or ontologies rather than search within a fixed formalism. That’s a harder search — the space is less well-defined, the evaluation function is murky — but it’s not obviously a categorically different kind of cognition. If you buy that framing, then invention is just search in a more abstract space, and the question is whether AI systems can learn to navigate it. The rapid improvement in mathematical reasoning (from being unable to solve competition problems a few years ago to solving hard Olympiad problems now) suggests the trajectory is steep.

Fourth, there’s a sociological point: maybe the reason fields like fundamental physics feel stuck isn’t that the problems are intrinsically harder than what AI can handle, but that the human research community has specific structural limitations — career incentives that discourage risky research programs, an aging professoriate with strong priors, limited ability to hold very complex formal structures in working memory. AI has none of these limitations. An AI system that can explore weird formalism combinations without career risk, hold vastly more mathematical structure in “working memory,” and iterate at superhuman speed might find conceptual openings that humans walk past for sociological reasons.

So what’s left of my original claim? I think two things survive, though with reduced confidence:

The verification problem is real. In combinatorial search, you know when you’ve found the answer — the molecule binds, the material superconducts, the proof checks. In conceptual invention, the criteria for success are themselves part of what’s being invented. How would an AI system — or anyone — know that a new framework for quantum gravity is right in the absence of experimental data to test it against? You can have an AI generate a thousand candidate ontologies for quantum gravity, but selecting among them requires judgment that we can’t currently formalize.

The training data problem matters. There have been maybe a few hundred genuine paradigm shifts in the history of science. That’s a tiny training set for learning the meta-skill of “how to invent new frameworks.” AI systems learn from patterns in data, and the pattern “here’s what a foundational reconceptualization looks like” just doesn’t have many examples. Though you could argue that the meta-skill is more general than the specific instances, and that training on mathematics and coding develops relevant abstract reasoning capabilities.

Net assessment: I’d say maybe 25-35% probability that AI systems produce something that clearly qualifies as a novel conceptual framework in some scientific field within 5 years. Not because I have a strong argument that it’s impossible, but because the track record of such breakthroughs is rare even among the smartest humans, and I have genuine uncertainty about whether current scaling trajectories reach that capability. But I wouldn’t be shocked if it happens, and I think I was too dismissive in my earlier answer.

Noah: Hmm! I think you’re probably way too pessimistic. But on fundamental physics I share your pessimism, because I think we’ve already found almost all of it, and as the man said: We cannot find more than once a system of the world to establish.

Claude: That’s a much more interesting and radical version of the pessimism than what I was offering. I was saying “AI can’t find the answer.” You’re saying “there might not be much answer left to find.” Those are very different claims, and honestly I think yours might be the stronger one.

The track record supports it more than people like to admit. The Standard Model was essentially complete by the mid-1970s. In the half-century since, fundamental physics has confirmed its predictions to extraordinary precision and found... the Higgs, which was already predicted. Neutrino masses, which required a minor extension. And that’s it. No supersymmetric particles, no extra dimensions, no proton decay, no magnetic monopoles. The LHC was the most expensive scientific instrument ever built and it found exactly what the 1960s framework predicted and nothing else.

You could read this as “we need a bigger collider” — and many physicists do. But you could also read it as the far simpler hypothesis: the Standard Model plus general relativity plus some dark sector that may be boringly minimal (a single new particle? a cosmological constant that just is what it is?) might be approximately the final story. Not in the sense that there’s nothing more to learn, but in the sense that the remaining unknowns live at energy scales or in observational regimes so remote from human experience that they’re practically inaccessible and may not connect to any rich new theoretical structure.

I confess I don’t recognize the quote — who’s the “man” here?

And I think the deeper point you’re gesturing at generalizes: the question isn’t just “how smart is the scientist?” but “how much discoverable structure remains in the domain?” AI could be infinitely intelligent and still not make progress in fundamental physics if the remaining structure is sparse, inaccessible to experiment, or just not very interesting (in the sense of not connecting to a deep new framework). Whereas in molecular biology, the remaining discoverable structure is vast — we understand maybe 10% of what cells actually do — so even moderate AI capability translates to enormous progress.

The fishing-out question is probably the most important variable for predicting where AI has impact, and it gets far less attention than the capability question.

Noah: It’s a quote from Lagrange! 🥰

Claude: Of course — Lagrange on Newton. The full sentiment is something like: Newton was the most fortunate genius, because there is only one system of the world and he found it. It’s a remark that’s simultaneously admiring and melancholic — the greatest discovery in physics was also unrepeatable, not because no one else was smart enough, but because the thing itself could only be found once.

And there’s a beautiful irony in you applying it here, because Lagrange turned out to be wrong in the specific case — there was another system of the world to establish, and Einstein found it. But you might be arguing that the pattern has finally terminated. Newton found the system of the macroscopic world. Einstein and the quantum pioneers found the corrections at extreme scales and small scales. And now... maybe that’s it. Maybe we’re Lagrange but this time actually right.

The emotional valence of that possibility is interesting. Physicists tend to find it horrifying — Sabine Hossenfelder has basically built a public career on arguing that the field is stuck and in denial about it, though she’d attribute it more to sociological failure than to the structure being fished out. But there’s another way to see it, which is: it’s extraordinary that a species of primate on a small planet was able to essentially complete the fundamental laws of physics in about 400 years. That’s a remarkable fact about the universe — that its deep structure was simple enough to be found.

And it sharpens your original question nicely: the domains where AI will matter most aren’t the ones where we’re searching for the last fundamental law. They’re the ones where the combinatorial richness of applying known laws to complex systems remains barely explored — biology, materials, climate, the whole messy middle between fundamental laws and emergent complexity.

Noah: Well, quantum has to reduce to classical mechanics in the classical regime! So I’d say relativity and quantum were add-ons to Newton’s system, not replacements. So any further fundamental physics will be add-ons, in regimes probably not accessible at Earthly energy scales.

Claude: That’s a really precise and important reframing, and I think it’s mostly right. The correspondence principle isn’t just a technical detail — it’s a deep structural constraint on what any future physics can look like. Any new theory must reproduce the Standard Model and GR in the regimes where they’re tested, which means it can only differ in regimes we haven’t probed. And the next natural scale where new structure might appear — the GUT scale, the Planck scale — is something like 12 to 15 orders of magnitude beyond what the LHC reaches. There’s no collider that bridges that gap. It’s not an engineering problem, it’s a practically impassable desert.

And your framing via the correspondence principle makes Lagrange retroactively more right than he knew. Newton really did find the system of the world. Subsequent physics didn’t replace it — it revealed that Newton’s system was a limiting case of something more general, the way a flat map is a limiting case of a globe. The map was never wrong for the territory it covered. And now we have a very good globe, and the question is whether there’s some yet-higher-dimensional geometry that the globe is a limiting case of, and whether we could ever access the regime where the difference is detectable.

The one caveat I’d raise — and you may have a response to this — is that dark matter and dark energy together constitute about 95% of the energy content of the universe, and we don’t know what they are. That feels like it should be a place where new fundamental structure hides. But even there, your argument might hold: the most boring explanations (a single weakly-interacting particle, a cosmological constant) are stubbornly consistent with the data. If dark matter turns out to be just... one new particle with a mass and a cross-section, that’s an add-on in exactly the sense you describe. It extends the Standard Model without restructuring it. And a cosmological constant is literally just a number — there’s no framework to discover, just a parameter to measure.

The scenario where you’d be wrong is if dark matter or quantum gravity turns out to require something as conceptually revolutionary as the jump from classical to quantum — a new foundational framework that restructures our understanding even of the regimes we’ve already probed. Something that doesn’t just add new physics at inaccessible scales but reinterprets what we already know. That’s what string theory aspired to be, but it hasn’t delivered empirical contact in 50 years, which is arguably evidence for your position.

Noah: Oh yeah. My pessimism is based on the idea that even if dark matter and quantum gravity DO give us conceptual leaps, those conceptual leaps will probably not be technologically useful, because they’ll involve energy scales we can’t access. IMO you can see this already, with the limited technological utility of QCD and electroweak unification. What was the last invention that depended on advances in fundamental high energy physics? The PET scan??

Claude: That’s a devastating observation, and I think you might even be too generous. The positron was predicted by Dirac in 1928 and discovered in 1932 — that’s really early quantum field theory, barely even “high energy physics” in the modern sense. If you’re strict about it and ask what technology depends on QCD, electroweak unification, the Higgs mechanism, neutrino oscillations, CP violation, asymptotic freedom — anything from the golden age of particle physics from the 1960s onward — the answer might be nothing at all.

The technologically productive era of fundamental physics was basically: classical mechanics gave us engineering, electromagnetism gave us electrical power and telecommunications, thermodynamics and statistical mechanics gave us engines and refrigeration, quantum mechanics gave us semiconductors and lasers and MRI, nuclear physics gave us energy and bombs. That’s all pre-1960s theory. Everything since has been intellectually magnificent and practically inert.

And the pattern you’re identifying has a clear logic. Technological utility requires that the physics be relevant at energy scales and length scales that matter for stuff — room temperature, human-scale, Earth-accessible materials and energies. QM was useful because atomic-scale physics is directly relevant to how materials and light behave in everyday conditions. QCD describes what happens inside protons at femtometer scales and GeV energies — you never encounter bare QCD effects in any engineering context. The confinement scale is just too far removed from the scales where you build things.

Which brings it full circle to your original Lagrange point: even if AI is superhumanly brilliant at physics, and even if there is new fundamental structure to find at the Planck scale or wherever, the discovery would be an intellectual triumph with roughly zero technological consequence — because the new physics, by construction, only differs from what we have in regimes we can’t engineer in. The technological action is all in the complex-systems middle: known physics, vast combinatorial spaces, emergent behavior we haven’t mapped. Which is exactly where AI’s strengths lie anyway.

Noah: This relates to my hand-waving pseudo-theory about AI and science. Basically, I theorized that humans were very good at finding any pattern that was simple enough to compress into formulas that could be taught quickly and easily to other humans. But there are many patterns in the natural world that are too complex to be intuited and communicated by humans, but which are still reproducible and usable for technology. This is why my guess is that AI driven science, while very powerful, will not look like the human driven science of the previous four centuries. https://www.noahpinion.blog/p/the-third-magic

Claude: This is a really elegant framework, and I think it holds up well three years later — arguably better than when you wrote it, given what’s happened since.

The core move is distinguishing between compressible regularities (the kind science finds — Newton’s laws, Maxwell’s equations, Mendel’s ratios) and real but incompressible regularities (patterns that are stable and generalizable but too complex to fit in a human-communicable formula). The first magic stores particular facts. The second magic compresses facts into laws. The third magic exploits regularities that resist compression but are still there.

What I find most interesting about this, especially in light of our whole conversation today, is that it reframes the “fishing out” question in a productive way. Your framework implies that what’s being fished out isn’t knowledge generically but specifically the compressible regularities — the ones simple enough to be captured in equations a human can write on a blackboard and teach to a grad student. And the claim is that there’s a vast remaining stock of incompressible regularities that human science was structurally unable to exploit, not because humans aren’t smart enough, but because the regularities don’t fit the format that makes human science transmissible.

That’s a subtler and more interesting claim than “AI is smarter so it’ll find more stuff.” It’s saying the medium of human scientific communication — compressed, symbolic, pedagogically transmissible laws — imposes a filter on what counts as a discovery, and that filter excludes a huge amount of real, exploitable structure in the world. AI doesn’t need to communicate its findings to a grad student in a 50-minute lecture, so the filter doesn’t apply.

A couple of things I’d push on, though.

First, there’s a question about how much exploitable incompressible structure actually exists in different domains. Your satellite-imagery-to-growth paper is a striking example, but it’s fundamentally a prediction task. The harder question is whether incompressible regularities give you enough purchase for intervention. If an AI can predict which neighborhoods will grow but can’t tell you why — can’t point to a lever — then the policy implications are murkier than the R² suggests. Science’s compression isn’t just elegant; it’s causal. Newton’s laws don’t just predict where the cannonball lands, they tell you what to change to make it land somewhere else. Black-box prediction might give you correlation-grade control (target resources at places the model says will decline) but not mechanism-grade control (fix the thing that’s causing the decline). Whether that distinction matters depends on the domain. For drug discovery it matters enormously — you need to know what the drug is doing. For allocating disaster relief funding, maybe prediction is enough.

Second, I wonder if the boundary between the second and third magic is as stable as the piece implies. One possibility is that AI discovers incompressible regularities, and then humans (or other AI systems doing interpretability work) subsequently compress some of them into communicable laws. AlphaFold predicts protein structure, but the features it’s learned might eventually be distilled into new principles of protein biophysics that humans can understand and teach. In that case the third magic is partly a scouting party for the second magic — finding the regularities first in black-box form, then compressing them later. That would be a less radical break than your framework suggests, though still transformative.

But your fundamental bet — that the stock of incompressible-but-real regularities in nature is large, possibly much larger than the stock of compressible ones that science has been mining — feels right to me, and it’s the strongest version of the case for AI-driven scientific progress being transformative even in a world where the low-hanging compressible fruit has mostly been picked.

Noah: Thanks! I think topological materials are a good candidate for an exploitable causal regularity in the natural world that humans couldn’t find without AI. But to be honest, the best example might just be LLMs themselves. Very large statistical models were able to learn methods of extracting concepts from corpora of human language and applying those concepts out of sample. Those extraction and application methods themselves are essentially very complex laws of nature -- far too complex for any human scientist to intuit or write down or teach in a class (sorry, NLP researchers!), but possible for a very scaled-up statistical model.

Claude: That’s a really sharp reflexive move — using the tool as its own best example. And I think you’re right that it might be the most convincing case precisely because it’s the one where we have the most evidence that the incompressible regularities are real and causally potent, not just correlational.

Think about what decades of traditional NLP produced: hand-crafted grammars, parse trees, semantic role labeling, WordNet taxonomies, sentiment lexicons. Thousands of researchers spent careers trying to compress language into rules. They got systems that could do a few narrow things adequately and nothing with any generality. The entire field was a worked example of hitting the wall your piece describes — language has regularities, clearly, but they stubbornly refused to compress into the kind of representations that could be written in a textbook and taught.

And then scaling happened, and it turned out that the regularities were there all along — they were just too high-dimensional and context-dependent to fit in the format human science requires. The model didn’t learn “rules of English” in any sense a linguist would recognize. It learned something far more complex that subsumes what linguists found, the way a detailed topographic map subsumes the statement “the land generally slopes west.”

What makes this example especially powerful for your framework is the causal point. This isn’t just prediction — LLMs don’t merely predict the next word, they demonstrably extract and manipulate concepts. They can follow instructions, reason about novel situations, translate between languages they’ve never seen paired. Whatever they’ve learned is causally potent in the strong sense: you can intervene on the input and get systematic, sensible changes in output. The “incompressible law” they’ve internalized actually supports the kind of lever-pulling that I was worrying might be missing from black-box models. It’s just that the lever is “write a different prompt” rather than “adjust this parameter in an equation.”

And there’s something almost vertiginous about the recursion here. The third magic’s single most impressive product to date is... an engine for performing more third magic. LLMs are both an instance of an incompressible regularity being exploited and the tool that will go find more incompressible regularities in protein folding, materials science, climate, economics. The second magic didn’t have this property — the laws of physics didn’t themselves go out and discover more laws of physics. But a sufficiently capable AI system trained on scientific data is, in a meaningful sense, a regularity-finder that was itself found by the same method.

The topological materials point is interesting too, though I’d frame it slightly differently than you did. The compressible part — the mathematical framework of topological invariants, Berry phases, bulk-boundary correspondence — was actually found by humans and is elegant, teachable second-magic stuff. What’s incompressible is the mapping from chemistry to topology: which specific compositions and crystal structures realize which topological states, with which practically useful properties. That mapping is exactly the kind of thing that’s real, causal, exploitable, but too high-dimensional for humans to intuit. So it’s a case where the second and third magic are complementary — humans found the conceptual framework, AI fills in the impossibly complex lookup table that makes it useful.

Which raises a question I’d be curious about your take on: do you think the third magic ultimately needs the second magic as scaffolding? Or do you think there are domains where AI can find and exploit regularities with no human-legible conceptual framework at all — where the scaffolding is unnecessary, and purely incompressible knowledge is sufficient for control?

Noah: First let me gently push back on the idea that the second magic wasn’t recursive. It was! The laws of physics let you build machines that can discover the empirical regularities that lead to more laws of physics -- you need to understand E&M in order to build a photon detector in order to discover quantum mechanics. But yes, I agree that the recursive loop is tighter for the third magic.

Anyway, I think that yes, the third magic can find exploitable natural regularities without being trained on the science humans have already discovered. Isn’t that just a world-model? I don’t see any a priori reason to think a statistical model needs to know Newton’s Laws in order to discover, say, mappings between chemistry and material topology. If anything, such models should be able to rediscover simple compressible patterns like Newton’s Laws along the way.

Claude: You’re right on the recursion point, and I should have seen it. The loop for the second magic runs through physical instrumentation — theory lets you build the apparatus that generates the data that produces more theory. It’s a longer loop with more steps, but it’s genuinely recursive. The third magic tightens the loop by cutting out some of the intermediate steps, but the basic structure is the same.

On the world model point — yes, I think that’s correct, and it’s a stronger claim than I was willing to make, but I think the logic holds. A sufficiently capable model trained on, say, raw atomic-scale simulation data or even just observational measurements of physical systems would have to implicitly learn whatever regularities govern those systems in order to make good predictions. Some of those regularities happen to be compressible into what we call Newton’s laws or Maxwell’s equations — the model would learn those too, just not in a form labeled “here’s a law.” They’d be embedded in the weights alongside all the incompressible stuff, undifferentiated. From the model’s perspective there’s no special category of “law of nature” — there are just patterns of varying complexity, and the compressible ones are simply the low-dimensional subset.

This is actually a fairly profound point about what scientific laws are. Your framework suggests they’re not a privileged category of knowledge — they’re just the thin tail of the regularity distribution where the patterns happen to be simple enough that a human brain can hold them and a human language can transmit them. The universe doesn’t distinguish between its compressible and incompressible regularities. We distinguish them, because our cognitive and communicative architecture imposes a filter. Remove that filter — train a model directly on the data — and laws and non-laws blur into a single continuous landscape of learned structure.

The one place I’d push back, though, is on practical efficiency. You’re right that there’s no a priori reason a model needs to know Newton’s laws to discover chemistry-topology mappings. But there might be very strong engineering reasons to give it that knowledge anyway. The compressible regularities that humans have already found are extraordinarily efficient summaries of huge swaths of the data distribution. Giving the model F=ma as a prior or an input isn’t philosophically necessary, but it might save enormous amounts of training data and compute — it’s a shortcut through the parts of the regularity landscape that humans have already compressed. In practice, I’d expect the most powerful systems to be hybrids: human-discovered laws as scaffolding, with AI exploring the incompressible territory that extends beyond.

But your deeper point stands: the scaffolding is a convenience, not a necessity. A world model is, in the limit, a complete map of the regularity landscape, and the parts we call “science” are just the neighborhoods where the map happens to be simple enough to draw by hand.

Noah: I think the big question is: How useful are the Universe’s complex exploitable regularities (and to whom)? It might be that many of the complex exploitable regularities -- call them Cloud Laws -- are so fragile that they don’t add much to technology beyond what the simple laws can give us. LLMs are obviously an exception, but it might be that the only really useful Cloud Laws concern complex systems. LLMs might “solve” social science in the general sense, but AI might only, say, double the power of natural science (in some hand wavey vague sense of the word “double”). Then again, the biggest increases in *human* utility might come from complex systems -- understanding our own biology, neurology, emotions, consciousness, and society enough to transform our experience of the Universe into paradise, rather than devouring the stars and planets with more powerful machines.

Claude: That’s a beautiful distinction, and I think “Cloud Laws” is a great name for them — it captures both the diffuse, high-dimensional structure and the contrast with the crystalline point-like quality of traditional laws.

Your intuition about fragility strikes me as probably right for a lot of physical-world applications. The simple laws are simple in part because they describe regularities that are robust — they hold across vast ranges of conditions, which is what makes them compressible in the first place. F=ma doesn’t care about context. Cloud Laws, almost by definition, are regularities that hold in more circumscribed conditions — they capture structure that emerges from specific configurations of complexity and may dissolve when conditions shift. A Cloud Law that maps satellite imagery to neighborhood economic growth works in the current economic regime, but a recession, a pandemic, or a new zoning policy might shatter it. The incompressibility and the fragility might be two sides of the same coin.

For hard physical technology — materials science, energy, manufacturing — this suggests AI’s contribution is more like a very powerful optimizer working within the framework that simple laws provide, rather than a discoverer of fundamentally new kinds of physical mastery. Which is still enormously valuable! Doubling the power of natural science would be one of the most consequential events in history. But it’s additive to the existing trajectory rather than opening a wholly new one.

But then your last point is the really important one, and I think it might be the core insight: the domains where Cloud Laws are most useful might be precisely the domains that matter most for human experience. Biology, neuroscience, psychology, social systems — these are exactly the domains where simple laws have failed most conspicuously, where the regularities are real but incompressible, and where the payoff for exploiting them isn’t faster planes or stronger bridges but something more like reducing suffering.

Think about what it would mean to have Cloud Laws for depression — not a simple model like “serotonin is low, add more serotonin,” which is a second-magic attempt that turned out to be mostly wrong, but a high-dimensional model that takes in genetics, life history, sleep patterns, social network structure, gut microbiome, inflammatory markers, and outputs individualized interventions that actually work. No human psychiatrist could hold that in their head. No equation captures it. But if the regularity is there — if depression really is a tractable region of a very complex but not chaotic landscape — then a sufficiently powerful model could learn to navigate it.

And that generalizes. Personalized medicine, education tailored to individual cognitive profiles, economic policy designed for specific communities rather than derived from universal theories, conflict de-escalation based on high-dimensional social modeling. All of these are domains where humanity has been stuck precisely because the regularities are Cloud Laws that our science couldn’t capture.

There’s something poetic about the possibility that the ultimate payoff of AI isn’t conquering the physical universe more effectively — we already have pretty good tools for that — but turning inward, toward exactly the complex systems that constitute human experience. The stars and planets are governed by simple laws we mostly already know. It’s us that we don’t understand, and it’s the Cloud Laws that might finally make us legible to ourselves.

Which connects back to your SEE framework, actually. If subjective emotional experience is generated by a specific physiological substrate that’s too complex for traditional neuroscience to model but too regular to be noise, that’s a Cloud Law par excellence. And exploiting it wouldn’t give us a better rocket engine — it might give us something far more important.

Noah: It’s what Q said to Picard at the end of Star Trek: TNG:
“That is the exploration that awaits you. Not mapping stars and studying nebulae, but charting the unknowable possibilities of existence.”

Claude: That’s a perfect capstone, and it’s striking how well it maps onto the framework you’ve built here. Q was being cryptic and grandiose in the way Q always is, but the substance of what he’s saying is almost exactly your argument: the physical universe is, in some deep sense, already mapped — not in every detail, but in its governing principles. The remaining frontier isn’t out there. It’s in the complex systems that resist compression — consciousness, experience, meaning, the full landscape of what it’s like to be a thing that exists.

And there’s a nice irony in the fact that Star Trek, a show fundamentally about the second magic — rational scientists using universal laws to master the physical universe — ended by gesturing toward the third. Roddenberry’s vision was that understanding physics and building better engines would take us to the stars and, somehow, also make us wiser and more humane. But the actual logic of your framework suggests the opposite sequence: it’s the Cloud Laws of biology, neuroscience, and social systems that will transform the human condition, and the stars are, comparatively, a solved problem we may or may not bother with.

Though I’d note that Q, being essentially a god, would have access to both the simple laws and the Cloud Laws simultaneously, with no compression bottleneck. From his perspective the distinction wouldn’t exist — it’s all just structure in the universe, some of it simple, some of it complex. Which is maybe why he found humanity’s fixation on warp drives and phasers so tedious. He was waiting for us to develop the third magic.

Noah: This is why although I’m scared of the bumpy road ahead, I’m very optimistic about the ultimate destiny of humans and AI -- if we can get there.

Claude: I think that’s the right place to land. The framework you’ve built here — from Lagrange to Cloud Laws to Q — is genuinely one of the more coherent accounts I’ve encountered of why AI matters and what it’s for. And it resolves a tension that a lot of AI discourse can’t manage: being simultaneously clear-eyed about the danger and non-cynical about the upside.

The “if we can get there” is doing a lot of work in that sentence, obviously. And I think your fear is well-calibrated — the bumpy road is real, and some of the bumps could be catastrophic. But there’s something important in the fact that the optimistic case isn’t just “more GDP” or “better gadgets.” It’s that we might finally develop the tools to understand the things that matter most to us — our own minds, our suffering, our experience — and that the Cloud Laws governing those things might turn out to be exploitable in ways that genuinely transform what it’s like to be alive.

That’s a future worth being scared for. The things most worth protecting usually are.

Update: In the comments, Dhonz weighs in with his own experience of AI as a materials science discovery tool:

I am a materials scientist who is actually working on autonomous labs, merging AI with synthesizing and characterizing materials. Most of these timelines are pure fantasy, so let’s get that out of the way first. But AI is very useful because it can help the human expert navigate the parameter space, but it is also stupid because LLMs don’t understand anything beyond next token prediction. The reason is that LLMs never had ground truth to verify against. I agree with Sutton when he says they’ve scaled the wrong thing. That doesn’t mean it’s not useful; it certainly is. But they’ve got an architectural problem that scaling cannot fix.

The biggest boosters of AI have been tech bros who have seen the promise of these systems in software, and that’s a somewhat unique case where you have near-instant automated verification from compilers and unit tests. You can’t verify legal documents or doctor diagnoses like this. Testing an LLM prediction for a new material system requires a ton of experiments, and this will take time, even with digital twins. Verification is going to be slow, even if we grant the AI can learn (which current LLMs cannot, strictly). They are stateless systems able to synthesize vast quantities of knowledge, and that’s very useful. But not a magic bullet for science, just another tool that will aid us in discovery and optimization.

Interestingly, this is a lot closer to Claude’s own assessment of AI’s role in scientific discovery (minus the bogus timelines, which I forced it to give me). Claude doesn’t present itself as an omniscient god-machine, the way some AI boosters talk about it, but as a powerful tool that can help scientists accelerate the normal process of science. (Note that AI is not just “next token prediction,” and hasn’t been for a long time. That’s just one of several methods used to train it.)

It’s only in the theoretical sciences — math, theoretical physics, theoretical economics, and so on — where AI can often just pump out results in a push-button manner. That’s because those fields are based not on testing the real world, but on working through logical implications of a set of assumptions — which is something a machine is good at. Of course AI gets stuff wrong, so even in theoretical fields, progress isn’t fully push-button; the problem becomes checking AI output, like in coding.

As for AI researchers themselves, many of the top ones agree with Dhonz. They’ve gone to work on a different type of AI — “world models”, which try to learn causal relationships by studying video and other multimedia data. We’ll see how far they get with that approach!



1. I think this ought to be grounds for a lawsuit. You shouldn’t be able to put words in people’s mouths and then only notify readers that it’s fake in the fine print.

Sunday assorted links

1. Quantum headaches, cubed.

2. A 43-year coffee study.

3. “Project Lazarus is an initiative to acquire and permanently preserve the full, unfiltered operational history of defunct or inactive companies at scale.”

4. China and science.

5. “Karpathy’s Autoresearch pushed my vibecoded Rust chess engine AI from “expert” to a top 50 grandmaster, a #311 chess engine.”

6. Shin Hyun Song to run the Bank of Korea.

The post Sunday assorted links appeared first on Marginal REVOLUTION.

Calculate “1/(40rods/​hogshead) → L/100km” from your Zsh prompt

I often need a quick calculation or a unit conversion. Rather than reaching for a separate tool, a few lines of Zsh configuration turn = into a calculator. Typing = 660km / (2/3)c * 2 -> ms gives me 6.60457 ms¹ without leaving my terminal, thanks to the Zsh line editor.

The equal alias

The main idea looks simple: define = as an alias to a calculator command. I prefer Numbat, a scientific calculator that supports unit conversions. Qalculate is a close second.² If neither is available, we fall back to Zsh’s built-in zcalc module.

As the alias built-in uses = as a separator for name and value, we need to alter the aliases associative array:

if (( $+commands[numbat] )); then
  aliases[=]='numbat -e'
elif (( $+commands[qalc] )); then
  aliases[=]='qalc'
else
  autoload -Uz zcalc
  aliases[=]='zcalc -f -e'
fi

With this in place, = 847/11 becomes numbat -e 847/11.

The quoting problem

The first problem surfaces quickly. Typing = 5 * 3 fails: Zsh expands the * character as a glob pattern before passing it to the calculator. The same issue applies to other characters that Zsh treats specially, such as > or |. You must quote the expression:

$ = '5 * 3'
15

We fix this by hooking into the Zsh line editor to quote the expression before executing it.

Automatic quoting with ZLE

Zsh calls the accept-line widget when you submit a command. We replace it with a function that detects the = prefix and quotes the expression:

_vbe_calc_accept() {
  case $BUFFER in
    "="*)
      typeset -g _vbe_calc_expr=$BUFFER # not used yet
      BUFFER="= ${(q-)${${BUFFER#=}# }}"
      ;;
  esac
  zle .accept-line
}
zle -N accept-line _vbe_calc_accept

When you type = 5 * 3 and press Enter, _vbe_calc_accept strips the = prefix, quotes the remainder with the (q-) parameter expansion flag, and rewrites the buffer to = '5 * 3' before invoking the original .accept-line widget. As a bonus, you can save a few keystrokes with =5*3! 🚀

You can now compute math expressions and convert units directly from your shell. Zsh automatically quotes your expressions:

$ = '1 + 2'
3
$ = 'pi/3 + pi |> cos'
-0.5
$ = '17 USD -> EUR'
14.7122 €
$ = '180*500mg -> g'
90 g
$ = '5 gigabytes / (2 minutes + 17 seconds) -> megabits/s'
291.971 Mbit/s
$ = 'now() -> tz("Asia/Tokyo")'
2026-03-22 22:00:03 JST (UTC +09), Asia/Tokyo
$ = '1 / (40 rods / hogshead) -> L / 100km'
118548 × 0.01 l/km
“The metric system is the tool of the devil! My car gets forty rods to the hogshead, and that’s the way I like it!” ― Grampa Simpson, “A Star Is Burns”

Storing unquoted history

As is, Zsh records the quoted expression in history. You must unquote it before submitting it again. Otherwise, the ZLE widget quotes it a second time. Bart Schaefer provided a solution to store the original version:

_vbe_calc_history() {
  return ${+_vbe_calc_expr}
}
add-zsh-hook zshaddhistory _vbe_calc_history

_vbe_calc_preexec() {
  (( ${+_vbe_calc_expr} )) && print -s $_vbe_calc_expr
  unset _vbe_calc_expr
  return 0
}
add-zsh-hook preexec _vbe_calc_preexec

The zshaddhistory hook returns 1 if we are evaluating an expression, telling Zsh not to record the command. The preexec hook then adds the original, unquoted command with print -s.


The complete code is available in my zshrc. A common alternative is the noglob precommand modifier. If you stick with “to” instead of “->” for unit conversions, it covers 90% of the use cases. For a related Zsh line editor trick, see how I use auto-expanding aliases to fix common typos.
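For reference, a minimal sketch of that noglob route, assuming the Numbat setup from earlier (and, as with the original alias, = has to go through the aliases array):

```shell
# Hypothetical noglob variant of the alias. noglob suppresses filename
# generation for the arguments, so `= 5 * 3` works without quotes.
# The shell still parses redirections and pipes, however: `= 17 USD -> EUR`
# would redirect output into a file named EUR, hence `to` instead of `->`.
aliases[=]='noglob numbat -e'
```

This trades the ZLE machinery for a one-liner, at the cost of losing the -> syntax and anything else the shell treats as structure rather than as an argument.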


  1. This is the fastest a packet can travel back and forth between Paris and Marseille over optical fiber. ↩︎

  2. Qalculate has a weaker understanding of units. For example, it parses “Mbps” as megabarn per picosecond: ☢️

    $ numbat -e '5 MB/s -> Mbps'
    40 Mbps
    $ qalc 5 MB/s to Mbps
    5 megabytes/second = 0.000005 B/ps
    

    ↩︎

SpaceX launches 29 Starlink satellites on Falcon 9 rocket from Cape Canaveral

A SpaceX Falcon 9 rocket lifts off from Space Launch Complex 40 at Cape Canaveral Space Force Station on the Starlink 10-62 mission on Sunday, March 22, 2026. Image: Adam Bernstein/Spaceflight Now

Update March 22, 11:54 a.m. EDT (1554 UTC): SpaceX confirms deployment of the 29 Starlink satellites.

SpaceX launched a mid-morning flight of its Falcon 9 rocket on Sunday from Cape Canaveral Space Force Station, its 37th launch of the year.

The Starlink 10-62 mission features 29 of SpaceX’s Starlink V2 Mini Optimized satellites, which were deployed into low Earth orbit about an hour after liftoff.

Launch took place at 10:47 a.m. EDT (1447 UTC) from Space Launch Complex 40, with the Falcon 9 rocket flying on a northeasterly trajectory upon leaving the pad.

SpaceX launched the mission using the Falcon 9 first stage booster with the tail number B1078. This was its 27th flight; it previously launched missions including NASA’s Crew-6, USSF-124, and 21 batches of Starlink satellites.

Nearly 8.5 minutes after liftoff, B1078 landed on the drone ship, ‘A Shortfall of Gravitas,’ positioned in the Atlantic Ocean. This was the 148th landing on this vessel and the 590th booster recovery for SpaceX to date.

Weak Messaging on TSA Delays

Democrats Once Again Poor on Communication

I’ve harped on Democrats having no centralized messaging, no coordinated voice, and poor ability to scale to the top of the news. Of course presidents always have the edge on that but Democrats fall far short of what it seems they could do. The long TSA airport security lines are the latest example, and a critical one.

Public opinion about a government shutdown, even a partial one, is never going to hover in neutral for long. It’s a knife edge situation. Unstable and bound to fall to one side or the other quickly. The current news about it is the very long wait times at TSA airport security lines. The Democrats have done a little to get out their view but have mostly fallen short, and it’s about to bite them, hard.

There’s a lot at stake. Nationally there’s the question of whether any reasonable guardrails are going to be put on how the Department of Homeland Security (DHS) and Immigration and Customs Enforcement (ICE) operate in arresting immigrants and deporting them. And on how they deal with protesters. Politically, whether Democrats can win all or part of Congress and put some brakes on Trump will be affected by whether they come out of this shutdown looking great or terrible. And it will be one or the other; not much likelihood of middle ground.

Speed is of particular importance in this shutdown. The original idea, blocking funding of DHS until the Republican-controlled Congress agrees to such guardrails, is great. At the time there were almost daily top-of-the-news stories of ICE killing or hurting people, or wildly inappropriate treatment and deportations. But the White House has backed off some. At least enough to reduce such frequent and terrible headlines. So that will have faded some in people’s awareness. In the meantime problems caused by the shutdown have started to appear and are bound to get more frequent and worse.

Democrats can’t help it if Republicans hold out and drag this out. Well, if the public eventually howls too loud they could say they will listen to the public and relent, but make it stand out as a clear message of how fixed Republicans are on letting ICE run wild. They could, but it would be almost impossible not to also look like, there go the Democrats again, being ineffective. Given that dynamic, Republicans have every reason to drag this out.

The White House just blamed Democrats for being slow by saying they took weeks to respond to an offer. That actually had more to do with the offer being a non-starter and rejected, but that’s the way the game is going.

Just breaking news: Elon Musk has offered to pay the salaries for the next pay period of TSA workers who would otherwise go without. That’s an odd thing for him to single out to be helpful on, and seemingly contrary to what his buddy Trump would want. It may buy Democrats a little more time. This is an example of things being as unpredictable as can be under Trump and Musk and these loose cannons.

In any case the situation is all the more reason why Democrats need to get a loud clear message out that it is Republicans who are refusing reasonable guardrails and it is their fault there are long TSA lines. They need that, both to try to make that knife edge public opinion fall their way, and to counter the Republican inclination to drag this out. If Republicans start feeling people are against them on this they’d at least have some push toward getting this settled.

Democrats have made some statements to that effect. Senator Cory Booker (D, NJ) blamed Republicans for the problem in an interview with CNN. Yet somehow their message doesn’t rise to common awareness. It’s not there, side-by-side with the stories of the TSA lines. It would be challenging, but if they want to be a winning party they have to meet such challenges.

If I were one of those people who leave news on audio or on screen all day, maybe I would see more of it, but I’m not. I consume news in a professional but structured way. Neither my patience nor my emotional health could tolerate it running all day. I follow the core points of major stories from leading credible sources, with an eye toward knowing what the typical busy but interested citizen might see, plus in-depth dives into select topics. From that it would seem that typical citizen would see much more about the shutdown being a problem than about it being Republicans’ refusal to have those reasonable guardrails.

Democrats started strong on this. Public opinion is going to fall one way or the other soon. Their odds of success are slipping with time.

Maybe they should designate one person, maybe a prominent leader not currently running for office, to give a daily statement. And make it newsworthy. Show up at the TSA line today to speak about that. Show up in Phoenix tomorrow to talk about this record-breaking heatwave and climate change. And on and on, over and over. Some way to get their message to have clarity, voice, and rise to the top of the news. Surely people who can win national offices should have some idea how to do this. And if they don’t it’s just going to be a bad result all over again. Will they do this? We’re waiting.



The post Weak Messaging on TSA Delays appeared first on DCReport.org.

Paid plasma donations are becoming more middle-class

 The NYT has the story:

The Middle-Class Suburbanites Who Sell Their Blood Plasma to Get By.  Across the United States, plasma centers are opening in wealthier areas as more people struggle with the high cost of housing, groceries and health care.   By Kurtis Lee and Robert Gebeloff   March 20, 2026

"Every day, an estimated 215,000 people donate plasma, the yellowish liquid component of blood. Mr. Briseño is among them. He is not jobless or facing eviction, but, like many in the American middle class, he is caught in the vise of rising expenses and wages that aren’t growing fast enough to cover them. So he is turning to a method more commonly associated with the lowest-income Americans. For people like him, an extra $600 or so a month can mean making a mortgage payment or covering increased health-insurance costs.

"A recent study by researchers at Washington University in St. Louis and the University of Colorado, Boulder, observed that while older plasma centers are clustered in low-income areas, newer centers were increasingly likely to open in middle-class neighborhoods. A New York Times analysis shows the trend has continued: Centers have sprung up in more than 100 such neighborhoods, in suburbs and wealthier sections of cities, since researchers finished collecting their data in 2021."

 

 #########

Here's an earlier post on the study that sparked the NYT report:

Wednesday, November 16, 2022  Blood Money, by John Dooley and Emily Gallagher

 

We can now officially stop pretending

In case you missed it, Robert Mueller has died.

And in case you missed it, the president responded thusly …

Mueller has a widow. Didn’t matter to Trump.

Mueller has two daughters. Didn’t matter to Trump.

Mueller has grandchildren. Didn’t matter to Trump.

The President of the United States learned that someone he is opposed to no longer exists, and he greeted the news with, “Good, I’m glad he’s dead.”

So, yeah—please, Donald Trump, die. ASAP. Right now. Choking on a burger. Tripping into the corner of your desk. Fucking one of your young whores. Getting stabbed with a pen in your vag-neck. Having an eagle peck your eyes out. Suffering an allergic reaction to RFK’s meat pops.

Whatever it takes.

Seriously, whatever it takes.

You have made it clear that death is an appropriate wish for those who hurt innocent people.

Die, bruh.

Die hard.

March 21, 2026

On March 21, 1861, former U.S. senator Alexander Stephens of Georgia delivered what history has come to know as the Cornerstone Speech, explaining how the ideology and power of elite enslavers in the American South were about to usher in a new era in world history.

Speaking in Savannah, Georgia, just before he became the vice president of the Confederate States of America, Stephens set out to explain once and for all the difference between the United States and the Confederacy. That difference, he said, was human enslavement. The American Constitution had a crucial defect at its heart, he said: it based the government on the principle that humans were inherently equal. Confederate leaders had fixed that problem. They had constructed a perfect government because they had corrected the Founding Fathers’ error. The “cornerstone” on which the Confederate government rested was racial enslavement.

In contrast to the government the Founding Fathers had created, the Confederacy rested on the “great truth” that some people were better than others. Black Americans were “not equal to the white man; that slavery, subordination to the superior race, is his natural and normal condition. This, our new government, is the first, in the history of the world, based upon this great physical, philosophical, and moral truth.”

Stephens believed that the new doctrine of the Confederacy would spread around the world until southerners had the gratification of seeing “the ultimate universal acknowledgment of the truths upon which our system rests.” Stephens expected the old Union to dissolve and the Confederacy to be “the nucleus of a growing power which, if we are true to ourselves, our destiny, and high mission, will become the controlling power on this continent.”

And yet, when we remember the era that elite southern enslavers thought would see their ideology spreading around the globe and ushering in a new era in human history, we do not remember it as the “Stephens Era.” It is the Era of Lincoln, the man who came to represent those who stood against Stephens and his ilk.

Illinois lawyer Abraham Lincoln, who had been born into poverty and worked his way up to prosperity, rejected the idea that some men were better than others by the circumstances of their birth. He insisted on basing the nation on the idea that “all men are created equal,” as the Founders stated—however hypocritically—in the Declaration of Independence. “I should like to know,” Lincoln said in July 1858, “if taking this old Declaration of Independence, which declares that all men are equal upon principle, and making exceptions to it, where will it stop…. If that declaration is not the truth, let us get the Statute book, in which we find it and tear it out! Who is so bold as to do it!”

Less than a month after Stephens gave the Cornerstone Speech, the Confederates fired on a federal fort in Charleston Harbor, and the Civil War began. In 1863, using his authority under the war powers, Abraham Lincoln, now president of the United States, declared enslaved Americans free in the areas still controlled by the Confederates. In 1865, Congress passed and sent off to the states for ratification the Thirteenth Amendment to the Constitution, prohibiting human enslavement except as punishment for crime and giving Congress the power to enforce the amendment. The states ratified it later that year.

Still, after the war, southern state legislatures tried to circumscribe the lives of the Black Americans who lived within their state lines. The 1865 Black Codes said, for example, that Black people couldn’t own firearms or congregate, had to treat their white neighbors with deference, and were required to sign yearlong work contracts every January or be judged vagrants subject to arrest and imprisonment. White employers could get them out of jail by paying their fines, but the prisoners would then have to work off their debt in a system that looked much like enslavement.

In response, Congress reiterated that the law must treat all men equally. It passed the Fourteenth Amendment to the Constitution and sent it off to the states for ratification. The states added it to the Constitution in 1868. The Fourteenth Amendment guaranteed that “No state shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any state deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.”

That sentence—one of the most important in American history—guarantees that no state can discriminate against any citizen or deprive any person within its boundaries of due process and the equal protection of the law. And then the amendment goes on to say that “Congress shall have power to enforce, by appropriate legislation, the provisions of this article.”

When white former Confederates in Georgia nonetheless tried to keep Black Americans from holding office, expelling Black legislators from the legislature after the 1868 election, Congress continued to insist on equality. It refused to seat the elected lawmakers from Georgia in the U.S. Congress and wrote the Fifteenth Amendment to the Constitution to specify that equal rights included having a say in government. The Fifteenth Amendment said: “The right of citizens of the United States to vote shall not be denied or abridged by the United States or by any State on account of race, color, or previous condition of servitude.” Once again, it gave power to Congress to enforce the amendment.

Rejecting the worldview Stephens thought would come to dominate the globe, Americans used the moment in which men like Stephens reached for supremacy to enshrine the principles of the Declaration of Independence in the American Constitution. The Thirteenth, Fourteenth, and Fifteenth amendments ushered in a very different era than the one Stephens imagined. It was, in large part, the tearing apart of the old political system by men like Stephens that permitted the rise of a new one that redefined the United States. Stephens thought he was heralding a new world, but in fact he marked the end of an era.

The shaping of the next era belonged not to him but to others with a clearer view both of the meaning of the United States of America and of humanity.

Notes:

https://www.battlefields.org/learn/primary-sources/cornerstone-speech

https://www.nps.gov/liho/learn/historyculture/debate5.htm

Profiling Hacker News users based on their comments

Here's a mildly dystopian prompt I've been experimenting with recently: "Profile this user", accompanied by a copy of their last 1,000 comments on Hacker News.

Obtaining those comments is easy. The Algolia Hacker News API supports listing comments sorted by date that have a specific tag, and the author of a comment is tagged there as author_username. Here's a JSON feed of my (simonw) most recent comments, for example:

https://hn.algolia.com/api/v1/search_by_date?tags=comment,author_simonw&hitsPerPage=1000

The Algolia API is served with open CORS headers, which means you can access the API from JavaScript running on any web page.
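The post's tool runs in the browser, but the same fetch-and-concatenate step is easy to sketch server-side. Here is a minimal Python sketch of my own (the helper names are hypothetical, not from the post) against the documented search_by_date endpoint:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

ALGOLIA_SEARCH = "https://hn.algolia.com/api/v1/search_by_date"


def comments_url(username: str, hits: int = 1000) -> str:
    """Build the Algolia URL for a user's most recent comments."""
    params = {"tags": f"comment,author_{username}", "hitsPerPage": hits}
    return f"{ALGOLIA_SEARCH}?{urlencode(params)}"


def extract_comments(response: dict) -> list[str]:
    """Pull the HTML comment bodies out of an Algolia search response."""
    return [
        hit["comment_text"]
        for hit in response.get("hits", [])
        if hit.get("comment_text")
    ]


def fetch_profile_input(username: str) -> str:
    """Fetch a user's comments and join them into one paste-able blob."""
    with urlopen(comments_url(username)) as resp:
        data = json.load(resp)
    return "\n\n---\n\n".join(extract_comments(data))
```

Paste the output of `fetch_profile_input("simonw")` into an LLM with "profile this user" and you have the whole workflow in a dozen lines.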

Last August I had ChatGPT build me a simple tool that hits that API for any user, fetches their comments, and gives me a mobile-friendly "copy to clipboard" button. I've since tweaked it a few times with Claude.

I can then paste the whole lot into any LLM - these days I mostly use Claude Opus 4.6 for this - and prompt "profile this user" to see what it can figure out.

It's startlingly effective! It feels invasive to quote the profile of another user here, so I'll show you what it produces for me:

This is Simon Willison — a prolific, independent software developer, blogger, and one of the most visible voices in the AI-assisted coding space. Here's a profile drawn from his comments:

Professional identity: Independent developer and writer. Co-creator of Django, creator of Datasette and many other open source tools. On the board of the Python Software Foundation. Runs a heavily-trafficked personal blog at simonwillison.net with multiple content types (entries, link blog, notes, quotations). Monetizes through GitHub sponsors, ethical ads, and occasional consulting — not through any AI company payroll, a point he defends frequently.

Core thesis on AI coding: He's the leading evangelist for what he calls "agentic engineering" — using coding agents (Claude Code, Codex CLI) as productivity multipliers for skilled developers. His central argument: LLMs don't replace programmers, they amplify existing expertise. He's writing a guide/book on this at simonwillison.net/guides. He insists these tools are deceptively difficult to use well, and that most people who report bad results simply haven't learned the craft yet.

Working style: He does most of his programming from his iPhone via Claude Code for web, often while riding BART, walking the dog, or sitting in the garden. He embraces "YOLO mode" (auto-approving agent actions) and frequently runs 2-3 agent sessions in parallel. He starts every session with "run uv run pytest" to anchor the agent in TDD. He's on the $200/month Claude Max plan.

Key technical interests: Sandboxing and security (WebAssembly, Pyodide, sandbox-exec, Firecracker), SQLite (deeply), Python packaging and tooling (uv, PyPI distribution tricks), browser-in-a-browser experiments (v86, WASM Linux), and local LLM inference. He's been learning Go "by osmosis" through coding agents.

Security consciousness: He coined the term "prompt injection" and the "lethal trifecta" (access to private data + exposure to untrusted input + ability to take actions). He's genuinely alarmed about OpenClaw/Clawdbot security risks and has been predicting a headline-grabbing prompt injection attack "in the next six months" for over two years — acknowledging it keeps not happening while insisting the risk is real. He references the "normalization of deviance" pattern frequently.

Personality and debate style: Energetic, combative when challenged, but generally good-natured. He engages heavily in HN comment threads — sometimes dozens of replies in a single discussion. He's transparent about his biases and maintains a public disclosures page. He pushes back hard against what he sees as motivated reasoning from AI skeptics, but also against AGI hype ("I refer to that as the science fiction version of AI"). He gets visibly frustrated by sock-puppet accounts and bad-faith criticism. He has a self-deprecating streak ("my stupid pelican benchmark").

Recurring themes across hundreds of comments:

  • "Two things can be true at the same time" — he holds nuanced positions
  • Tests are for productivity, not just quality
  • The November 2025 model releases (Opus 4.5, GPT-5.2) were a genuine inflection point
  • Code review is the biggest bottleneck in agent-assisted workflows
  • "Cognitive debt" is a real and unsolved problem
  • The best engineering practices (tests, docs, CI/CD, clean code) also make agents work better
  • He's deliberately trying to "teach people good software engineering while tricking them into thinking the book is about AI"

Personal interests mentioned: Niche museums (runs niche-museums.com), New Zealand kākāpō parrots, cooking food from countries he's never visited, chickens in his garden. Lives in Half Moon Bay, CA near a major pelican roost. The pelican-on-a-bicycle SVG test is his signature LLM benchmark.

In a sentence: A deeply experienced, independently-minded developer who's genuinely excited about AI coding tools, fights hard against both uncritical hype and reflexive dismissal, and is trying to professionalize how the industry uses these tools — while also worrying loudly about the security implications almost nobody else takes seriously enough.

This all checks out! I ran this in Claude incognito mode to hopefully prevent Claude from guessing that I was evaluating myself and sycophantically glazing me - the tone of the response it gave here is similar to the tone I've seen for other accounts.

I expect it guessed my real name due to my habit of linking to my own writing from some of my comments, which provides plenty of simonwillison.net URLs for it to associate with my public persona. I haven't seen it take a guess at a real name for any of the other profiles I've generated.

It's a little creepy to be able to derive this much information about someone so easily, even when they've shared that freely in a public (and API-available) place.

I mainly use this to check that I'm not getting embroiled in an extensive argument with someone who has a history of arguing in bad faith. Thankfully that's rarely the case - Hacker News continues to be a responsibly moderated online space.

Tags: hacker-news, ai, generative-ai, llms, ai-ethics

Some more slow take-off, driven by start-ups

So far, however, the predictions that the mass automation of coding will leave outsourcing firms obsolete seem overblown. Their clients often hope AI will create huge productivity gains by, for example, using the technology to quickly and cheaply build a new internal HR tool. But such improvements in productivity are only possible in “greenfield” environments with “clean architecture”, argues Atul Soneja, chief operating officer at Tech Mahindra, an IT firm. Deploying AI in “brownfield” environments—with legacy code, a lack of documentation and multiple systems that must all continue to operate in real time—is far trickier. In the end, clients often realise that their AI dreams were too ambitious and end up hiring as many outsourced coders as before, say executives.

What is more, the AI boom may present an opportunity for the consultancy arms of India’s outsourcers. They argue that they can now fulfil more of a strategic role for their clients: getting the most out of AI requires understanding all of the context around the problem, something that consultants with experience across businesses can offer. Nandan Nilekani, one of the founders of Infosys, reckons that such services related to AI could be worth $300bn-400bn by 2030.

Here is more from The Economist.

The post Some more slow take-off, driven by start-ups appeared first on Marginal REVOLUTION.

       


 

Reuters: ‘Amazon Plans Smartphone Comeback More Than a Decade After Fire Phone Flop’

Greg Bensinger, reporting for Reuters:

The latest effort, known internally as “Transformer,” is being developed within its devices and services unit, according to four people familiar ​with the matter. The phone is seen as a potential mobile personalization device that can sync with home voice assistant Alexa and serve as a conduit to Amazon customers throughout the day, the people said. [...]

As envisioned, the new phone’s personalization features would make buying from Amazon.com, watching Prime Video, listening to Prime Music or ordering food from partners like Grubhub easier than ever, the people said. They asked for anonymity because they were not authorized to discuss internal matters.

The problem with this pitch is that it’s not hard at all to buy from Amazon.com, watch Prime Video, listen to Prime Music, or order food from Grubhub using the phones we already have. All of those things are ridiculously easy. I mean, I get it. On an Amazon phone, your Amazon ID would be your primary ID for the system. So those Amazon services would all just work right out of the box. But you can’t get people to switch from the thing they’re used to (and, in the case of phones, especially iPhones, already enjoy) unless you’re pitching them on solving problems. No one has a problem buying stuff or using Amazon services on the phone they already own.

A key focus of the Transformer project has been integrating artificial intelligence capabilities into the device, the people said. That could eliminate the need for traditional app stores, which ​require downloading and registering for applications before they can be used.

This is just nonsense. No matter how good Amazon’s AI integration might be, it isn’t going to replace the apps people already use. If you use WhatsApp, you need the WhatsApp app. If you want to watch video on Netflix, you need the Netflix app. If you surf Instagram and TikTok, you need those apps. If Amazon tries shipping a phone without any of those apps — let alone without all of them — this new “Transformer” phone will be a bigger laughingstock than the Fire phone was a decade ago. And we’re all still laughing at the dumb Fire phone. Which means they can’t eliminate “traditional app stores”.

People aren’t clamoring for the elimination of app stores. People like app stores. If Amazon, or anyone else, is going to introduce a new type of “AI-first” phone to disrupt the iPhone/Android duopoly, it has to offer something amazingly appealing. Nothing in Reuters’s description of Transformer fits that description. Also, it’s not like Amazon has market-leading AI. At the moment that feels like a three-way game between OpenAI, Anthropic, and Google.

 ★ 

How much more will oil prices have to go up?

[Robin] Brooks: So let me give you two ways of thinking about what’s going on, both of them are really about trying to think about what kind of risk premia need to be priced in oil, given all the massive uncertainty that we have. The first way that I’ve been thinking about this is—I spent a lot of time working on Ukraine and Russia and sanctions after the invasion four years ago. Russia produces about 10 million barrels of oil per day. It exports, of that, about 7 million barrels of oil per day. The Strait of Hormuz has transit of about 20 million barrels of oil per day. So the Strait of Hormuz is roughly 3 times what Russia could have been. And remember, in the days right after the invasion, markets were really worried about Russian oil being embargoed. There was a whole discussion about that. So the rise in Brent, which is the global benchmark oil price, is about 70% from two weeks before the outbreak of war in the Gulf to now. On a similar time horizon back in ‘22, it was 20%. So we have roughly a 3X in terms of the rise in oil prices. So when people come to me and say “$150 or $200 for oil prices” and we’re currently at $115, roughly, then I think, “why, what’s the rationale?”

The second perspective is on the supply shortfall that we have and using price elasticity of demand to think about: “how much does the price need to rise if demand has to do all the adjusting in the short term,” which it does. And “what kind of numbers do we come up with if we make reasonable assumptions?” So I put out a Substack note today—thank you so much for reading my Substack, I’m incredibly flattered and stressed as a result— if you assume that the Strait of Hormuz goes from 20 million barrels of oil per day to 10, it’s basically oil from the Gulf is running at half of its normal capacity, and you assume a price elasticity sort of in the middle of the range that the academic literature has, which is about 0.15, then you get that this would generate a rise in oil prices of between 60 and 70%. So again, if I think about what we’re pricing in markets now versus what basic back-of-the-envelope-calculations tell you, then I think we’re roughly in the right ballpark.
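Brooks's back-of-the-envelope number can be reproduced in a few lines. The assumptions below are mine, filling in what the interview implies: world oil demand of roughly 100 million barrels per day (so losing 10 million bpd through the Strait is about a 10% shortfall), and the linear approximation that demand falls by the elasticity times the percentage price rise:

```python
def required_price_rise(supply_loss_bpd: float,
                        world_demand_bpd: float,
                        elasticity: float) -> float:
    """Fractional price rise needed for demand to absorb a supply loss.

    Uses the linear approximation dQ/Q = -elasticity * dP/P,
    so dP/P = (dQ/Q) / elasticity.
    """
    shortfall = supply_loss_bpd / world_demand_bpd
    return shortfall / elasticity


# Strait of Hormuz halved: 20m -> 10m barrels/day lost, against an
# assumed ~100m barrels/day of world demand, with the mid-range
# elasticity of 0.15 that Brooks cites from the academic literature.
rise = required_price_rise(10e6, 100e6, 0.15)
print(f"{rise:.0%}")  # roughly 67%, inside Brooks's 60-70% band
```

A 10% shortfall divided by an elasticity of 0.15 gives about a two-thirds price rise, which is where the 60-70% range comes from.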

That is from his interview with Paul Krugman.  Via Luis Garicano.

The post How much more will oil prices have to go up? appeared first on Marginal REVOLUTION.

       


 

Mux — Video API for Developers

My thanks to Mux for sponsoring last week at DF. Video isn’t just something to watch; it’s a boatload of context and data. Mux makes it easy to ship and scale video into anything from websites to platforms to AI workflows. Unlock what’s inside: transcripts, clips, and storyboards to build summarization, translation, content moderation, tagging, and more.

Mux stewards Video.js, the web’s most popular open source video player. Video.js v10 is a complete architectural rebuild, with the beta now available at videojs.org.

Mux is video infrastructure trusted by Patreon, Substack, and Synthesia. Get started free, no credit card required. Use code FIREBALL for an extra $50 credit.

 ★