Microsoft Xbox One Hacked

It’s an impressive feat, over a decade after the box was released:

Since reset glitching wasn’t possible, Gaasedelen thought some voltage glitching could do the trick. So, instead of tinkering with the system rest pin(s) the hacker targeted the momentary collapse of the CPU voltage rail. This was quite a feat, as Gaasedelen couldn’t ‘see’ into the Xbox One, so had to develop new hardware introspection tools.

Eventually, the Bliss exploit was formulated, where two precise voltage glitches were made to land in succession. One skipped the loop where the ARM Cortex memory protection was setup. Then the Memcpy operation was targeted during the header read, allowing him to jump to the attacker-controlled data.

As a hardware attack against the boot ROM in silicon, Gaasedelen says the attack in unpatchable. Thus it is a complete compromise of the console allowing for loading unsigned code at every level, including the Hypervisor and OS. Moreover, Bliss allows access to the security processor so games, firmware, and so on can be decrypted.

Monday assorted links

1. Arbitrage?

2. On Christopher Sims.

3. Minimum wage hikes boost restaurant food prices.

4. “These findings suggest that new work serves as a countervailing force to automation-driven job displacement not merely by creating additional employment, but also by generating new domains of human expertise that command market premiums.

5. Martin Heidegger clip.  Not impressive to me.

6. Canvas unrolls AI teaching agent.

7. “This essay has tried to frame what we need to build around AI.

The post Monday assorted links appeared first on Marginal REVOLUTION.

       

Comments

 

What would it look like to leave planet Earth? What would it look like to leave planet Earth?


Impressions

March 23, 2026

When I take pictures, I try to stay away from traditional plane porn (of the sort that dominates on Instagram). I like to think my shots — the better of them, at any rate — are a little more offbeat or impressionistic. Case in point, these three, which rate among my favorites.

Top and bottom: A psychedelic flood of blur and color, here’s the world as seen through an airplane window covered in de-icing fluid. Those red and white pinpoints in the first one are, believe it or not, the distant lights of New York City.

Center: Two Skies. The underside of a jetliner tail juxtaposed with an afternoon sky above Somerville, Massachusetts.

Related Story:
THE TEXTURES SERIES

The post Impressions appeared first on AskThePilot.com.

A Deep Dive on ‘The Map Is Not the Territory’

In another side-quest from his current work in progress, Matthew Edney goes down a deep rabbit hole trying to work out a specific point related to Alfred Korzybski’s famous adage that “the map is not… More

Pre-Order “Data Are Made, Not Found”!

There’s something uniquely demoralizing about editing and editing and editing a book manuscript. The words all start to blur together and you start thinking that every sentence is crap, no one will ever want to read this, why bother completing the book. I was definitely in this state. And then… my publisher sent me the book cover and I squealed for joy at just how lovely it is. And it gave me hope. Check out this beauty:

And yes… that’s a wobbly Jenga tower comprised of pieces made from census documentation cuz one of the core arguments in the book is that we’re living in a world of “Jenga Politics” where different actors are pulling out pieces of our administrative infrastructure and putting pressure on top. Civil servants are exhausted, but they’re trying to keep the tower from falling.

And now that I’ve seen the beautiful cover, I can’t wait for you to read this book! And to come celebrate with me! I am starting to build a book tour so hopefully I will come to a city near you. But, in the meantime, here are some of the fun things I get to share:

  • My book won the J. Anthony Lukas Work-in-Progress Prize! Thank you to the kind people at Columbia Journalism School and the Nieman Foundation for Journalism at Harvard University for giving me wind beneath my wings!
  • Pre-Order the book! And if you order it from the University of Chicago Press website, you can save 30% by using the code UCPNEW. But feel free to order from your local bookstore or wherever else you want!
  • DC folks: Save the Date (9/25)! I am ecstatic that Politics & Prose is hosting me at their Wharf venue on September 25 at 7PM (the day after the book goes on sale!). I hope lots of folks will come out to celebrate! There will be books available! And a signing!
  • Virtual Folks: On the eve of the book launch (9/23), Dan Bouk and I will discuss the book in a virtual event hosted by Data & Society. More info on that will come shortly, but make sure to sign up to the D&S newsletter!

Moogle Gaps

Moogle Gaps, for when you want to be misdirected. TrendWatching: “Whipped up by two Australian ex-Droga5 creatives, Paul Meates and Henry Kimber, Moogle Gaps is an anti-wayfinder. Users input their navigational query as they normally would, but instead… More

How Google Maps Disappears Restaurants from Search Results

For the Guardian’s “It’s Complicated” feature, Josh Toussaint-Strauss looks at how great restaurants end up being invisible when you search for a place to eat on Google Maps. He talks with data scientist Lauren Leek,… More

The innovative supply chain of illegal drugs--even in prisons

 Strategy sets are big, so we’re not going to be able to end illegal drug use by spraying defoliants on fields of poppies, or arresting dealers, or attacking speedboats. If we can’t stop the spread of drugs even in prisons, the chance of purely police/military solutions for stopping drugs on the streets isn’t looking good.

The NYT has the story:

No Pills or Needles, Just Paper: How Deadly Drugs Are Changing
Lab-made drugs soaked into the pages of letters, books and even legal documents are being smuggled behind bars, killing inmates and frustrating investigators. 
By Azam Ahmed and Matt Richtel 

" Today, fringe chemists are ushering in a total transformation of the illicit drug market. Operating from clandestine labs, they are churning out a dizzying array of synthetic drugs — not only fentanyl, but also hazardous new tranquilizers, stimulants and complex cannabinoids. Sometimes, several unknown drugs appear on the streets in a single month. Many are so new they are not even illegal yet.

"Nearly all of them are harder to trace than conventional drugs, less expensive to produce, much more potent and far deadlier, according to scientists and law enforcement officials across the globe.

...

"After that first death in the Cook County jail in January 2023, it took months for Mr. Wilks’s team to realize that these mysterious new drugs were being sprayed onto the pages of the most innocuous-seeming items: books, letters, documents, even photographs.

"The sheets of drugs, worth thousands of dollars a page, were being torn into strips and smoked by inmates 

...

"But the traffickers were cunning. When regular mail got checked more closely, smugglers began lacing legal correspondence. Soon, officers discovered sealed packages that looked as if they had been shipped directly from Amazon, with drug-soaked books inside. "

############

It’s hard to shut down markets that people want to participate in.
Someone should write a book about this. 

Please read this.

So I’m no longer on Twitter (praise Jesus), but a writer named Gandalv posted an absolutely remarkable little essay on his thoughts after Donald Trump greeted Robert Mueller’s death with, “Good.”

Read it.

Trust me.

•••

Robert Mueller died last night.

He was 81 years old. He had a wife who loved him for sixty years. He had two daughters, one of whom he met for the first time in Hawaii, in 1969, on a few hours of military leave, before he got back on the plane and returned to Vietnam. He had grandchildren. He had a faith he practiced quietly, without performance. He had, in the way of men who have seen real things and survived them, a quality that is increasingly rare and increasingly mocked in the country he spent his life serving.

He had integrity.

And tonight the President of the United States said good!

I have been sitting with that word for hours now. Good. One syllable. The thing you say when the coffee is hot or the traffic is moving. The thing a man who has never had to bury anyone, never had to sit in the specific silence of a room where someone is newly absent, reaches for when he wants the world to know he is satisfied. Good. The daughters are crying and the wife is alone in the house and good.

I want to speak directly to the Americans reading this. Not the political Americans. Just the human ones. The ones who have lost a father. The ones who know what it is to be in that first hour, when you keep forgetting and then remembering again, when ordinary objects become unbearable, when the world outside the window seems obscene in its indifference. I want to ask you, simply, to hold that feeling for a moment, and then to understand that the man you elected looked at it and typed a single word.

Good.

This is not a country having a bad day. I need you to understand that. Countries have bad days. Elections go wrong. Leaders disappoint. Institutions bend. But there is a different thing, a rarer and more terrible thing, that happens when the moral center of a place simply gives way. Not dramatically. Not with a single catastrophic event. But quietly, in increments, until one evening a president celebrates the death of an old man whose family is still warm with grief, and enough people find it acceptable that it becomes the weather. Just the weather.

That is what is happening. That is what has happened.

The world knows. From Tokyo to Oslo, from London to Buenos Aires, people are not angry at America tonight. Anger would mean there was still something to fight for, some remaining faith to be betrayed. What I see, in the reactions from everywhere that is not here, is something older and sadder than anger. It is the look people get when they have waited a long time for someone they love to find their way back, and have finally understood that they are not coming.

America is being grieved. Past tense, almost. The idea of it. The thing it represented to people who had nothing else to believe in, who came here with everything they owned in a single bag because they had heard, somehow, across an ocean, that this was the place where decency was written into the walls. That idea is not resting. It is not suspended. It is being buried, in real time, with 7,450 likes before dinner.

And the church said nothing.

Seventy million people have decided that this man, this specific man who has cheated everyone he has ever made a promise to, who has mocked the disabled and the dead and the grieving, who celebrated tonight while a family wept, is an instrument of God. The pastors who made that bargain did not just trade away their credibility. They traded away the thing that made them worth listening to in the first place. The cross they carry now is a costume. The faith they preach is a loyalty oath with scripture attached. When the history of American Christianity is written, this will be the chapter they skip at seminary.

Now I want to talk about the men who stand next to him.

Because this is the part that actually breaks my heart.

JD Vance is not a bad man. I have to say that, because it is true, and because the truth matters even now, especially now. Marco Rubio is not a bad man. Lindsey Graham is not a bad man. They are idiots, but not bad, as in BAD! These are men with mothers who raised them and children who love them and friends who remember who they were before all of this. They are not monsters. Monsters are simple. Monsters do not cost you anything emotionally because there is nothing in them to mourn.

These men are something more painful than monsters.

They are men who knew better, and know better still, and will get up tomorrow and do it again.

Every small compromise they made had a reason. Every moment they looked the other way had a justification that sounded, at the time, almost reasonable. And now they have arrived here, at a place where a president celebrates the death of an old man and they will find a way, on television, to say nothing that means anything, and they will go home to houses where children who carry their name are waiting, and they will say goodnight, and they will say nothing.

Their oldest friends are watching. The ones who knew Rubio when he still believed in something. Who knew Graham when he said, out loud, on the record, that this exact man would destroy the Republican Party and deserve it. Who sat next to Vance and thought here is someone worth knowing. Those friends are not angry tonight. They moved through anger a long time ago. What they feel now is the quiet, irrecoverable sadness of watching someone disappear while still being present. Of watching a person they loved choose, again and again, to become less.

That is what cowardice costs. Not the coward. The people who loved him.

And in the comments tonight, the followers celebrate. People who ten years ago brought casseroles to grieving neighbours. Who stood in the rain at gravesides and meant the words they said. Who told their children that we do not speak ill of the dead because the dead were someone’s beloved. Those people are tonight typing gleeful things about a man whose daughters are not yet done crying. And they feel clean doing it. Righteous. Because somewhere along the way the thing they were given in exchange for their decency was the feeling of belonging to something, and that feeling is very hard to give up even when you can no longer remember what you gave for it.

When Trump is gone, they will still be here.

Standing in the silence where the noise used to be. Without the permission the crowd gave them. Without the pastor who told them their cruelty was holy. They will be alone with what they said and what they cheered and what they chose to become, and there will be no one left to tell them it was righteous.

That morning is coming.

Robert Mueller flew across the Pacific on military leave to hold his newborn daughter for a few hours before returning to the war. He came home. He buried his dead with honour. He served presidents of both parties because he understood that the institution was larger than any one man. He told his grandchildren that a lie is the worst thing a person can do, that a reputation once lost cannot be recovered, and he lived that, every day, in the quiet and unglamorous way of people who actually believe what they say.

He was the kind of American the world used to point to when it needed to believe the story was true.

He died last night. His wife is alone in their house in Georgetown. His daughters are learning what the world is without him in it. And somewhere in the particular hush that falls over a family in the first hours of loss, the most powerful man and the biggest loser on earth sent a message to say he was glad.

The world that loved what America was supposed to be is grieving tonight. Not for Robert Mueller only. For the country that produced him and then became this. For the distance between what was promised and what was delivered. For the suspicion, growing quieter and more certain with each passing month, that the America people believed in was always partly a story, and the story is over now, and there is nothing yet to replace it.

That is all it needed to be.

A man died. His family is broken open with grief.

That is all it needed to be.

Instead the President said good.

And the country that once stood for something looked away

🇺🇸

Gandalv / @Microinteracti1

March 22, 2026

President Donald J. Trump‘s behavior is increasingly erratic as he lashes out at those he perceives to be enemies. On Thursday he defended his failure to inform allies and partners about his February 28 attack on Iran by telling a Japanese reporter he wanted the element of surprise. “Who knows better about surprise than Japan? Why didn’t you tell me about Pearl Harbor, OK?” Trump said, referring to the Japanese attack on Hawaii that took place on December 7, 1941, five years before Trump was born. Sitting beside Trump, the prime minister of Japan, Sanae Takaichi, appeared taken aback. Japan is a key Pacific ally of the United States.

The president is under enormous pressure, as his war with Iran sparked Iranian officials to close the Strait of Hormuz, through which about 20% of the world’s oil flows. This outcome was expected by previous presidents, but Trump seemed to think he could avoid it and now is stuck without an easy solution. As former defense secretary and Central Intelligence Agency director Leon Panetta told David Smith of The Guardian, “[I]f there was an escape here for Trump, it would be to declare victory and it’s over and we’ve been able to be successful in all of our military targets. The problem is he can declare victory all he wants but, if he doesn’t get the ceasefire, he’s got nothing. And he’s not going to get a ceasefire as long as Iran is holding the gun of the strait of Hormuz against his head.”

“He tends to be naive about how things can happen,” Panetta told Smith. “If he says it and keeps saying it, there’s always a hope that what he says will come true. But that’s what kids do. It’s not what presidents do.”

In a frantic attempt to lower oil prices, the administration on Friday lifted sanctions on Iranian oil currently at sea. Iranian oil has been sanctioned since 1979. The lifting of sanctions will enable Iran to sell about 140 million barrels of oil, worth about $14 billion, including to the United States and to China.

National security scholar Phil Gordon, who served as the White House coordinator for the Middle East, North Africa, and the Persian Gulf Region during the Obama administration, posted: “When Obama sent Iran $400m + $1.3bn in interest in 2016 Trump called it ‘insane’ and he and others spent a decade mocking the idea of ‘pallets of cash’ even though it was Iran’s own money, American prisoners were released, courts were likely to require the U.S. payment, and Iran had just agreed to significant and verified reductions and restrictions on its nuclear program for 15+ years.

“Now Trump is giving Iran up to ten times that amount of revenue—one of the most significant measures of sanctions relief provided to the Islamic Republic since its founding—in exchange for marginal and temporary relief from the big increase in oil prices his actions have caused, without any concessions from Tehran, and even as Iran continues to target the United States, its allies, and world oil supplies. No way to read as anything other than desperate recognition of the situation Trump’s own actions have created and the lack of available alternatives for dealing with it.”

On Meet the Press today, Senator Chris Murphy (D-CT) said: “We’re gonna give Iran $14 billion to fund this war with the United States? We’re gonna give Russia billions of dollars to fund their war with Ukraine? We’re literally putting money into the pockets of the very nations that we are fighting right now. We’ve never seen this level of incompetence in war-making in this country’s history.”

Trump is also under pressure over the Department of Homeland Security (DHS), which has been mired in news stories about corruption since former secretary Kristi Noem stepped down. Yesterday morning, Trump appeared to try to change the momentum of those stories by going on the offensive against Democrats.

New scrutiny of the department has brought renewed attention to the November 2025 ProPublica report by Justin Elliott, Joshua Kaplan, and Alex Mierjeski that DHS had awarded a $220 million contract for a taxpayer-funded ad campaign to cronies, getting around transparency laws by awarding the contract to a small company that then subcontracted the deal to friends of Noem and her associate Corey Lewandowski. Of the contract, Trump allegedly said: “Corey made out on that one.”

On Thursday, March 19, Julia Ainsley, Matt Dixon, Jonathan Allen, and Laura Strickler of NBC News reported that Lewandowski told George Zoley, the head of the giant private prison company GEO Group, that he expected to be paid for steering contracts to GEO Group. Zoley said he declined initially but later offered to put Lewandowski on retainer with a consulting fee. But, sources told the journalists, Lewandowski “wanted payments—what some people would call a success fee” based on awarded contracts. When Zoley refused, GEO Group lost out on contracts. A senior DHS official told the journalists Lewandowski had told him not to award any more contracts to GEO Group.

Lewandowski’s official title was that of a “special government employee,” with a temporary appointment that permitted him to work only 130 days in a year, but DHS officials told the journalists that Lewandowski had broad authority over contracts in the department and was referred to as “chief.” He allegedly sidestepped the limits of his appointment by going into the building accompanying Noem, and thus without swiping in using his badge. Lewandowski has denied any wrongdoing.

Yesterday Hamed Aleaziz, Alexandra Berzon, Nicholas Nehamas, Zolan Kanno-Youngs, and Tyler Pager of the New York Times reported on the extraordinary power Lewandowski had in DHS under Noem, explaining that he held meetings without her present, sat in on classified briefings, read a version of the highly classified President’s Daily Brief, and issued orders as he spearheaded detention and deportation of migrants. In addition to approving government contracts that worried officials, Lewandowski helped put Greg Bovino, a midlevel Border Patrol leader, into a senior position that gave him national power.

At 11:34 yesterday morning, Trump tried to turn the DHS story into one about the Democrats, posting: “If the Radical Left Democrats don’t immediately sign an agreement to let our Country, in particular, our Airports, be FREE and SAFE again, I will move our brilliant and patriotic ICE Agents to the Airports where they will do Security like no one has ever seen before, including the immediate arrest of all Illegal Immigrants who have come into our Country, with heavy emphasis on those from Somalia, who have totally destroyed, with the approval of a corrupt Governor, Attorney General, and Congresswoman, Ilhan Omar, the once Great State of Minnesota. I look forward to seeing ICE in action at our Airports. MAKE AMERICA GREAT AGAIN! President DONALD J. TRUMP”

This appeared to be a threat to use Immigration and Customs Enforcement agents, whom Trump appears to see as his own private army, to hurt Democrats by pinning the long lines in airports on the Democrats’ refusal to fund DHS, which means that Transportation Security Administration (TSA) agents aren’t being paid. But Democrats have repeatedly proposed funding every agency in DHS other than ICE and Border Patrol, leaving those out until their abuses under Noem, Lewandowski, and Bovino have been addressed. Republicans have refused that funding unless DHS requests are funded in full at the same time.

Under Trump, ICE has become the highest-funded law enforcement agency in the U.S., with an annual budget higher than those of all other federal law enforcement agencies combined. While ICE budgets previously had hovered around $6 billion, the Republicans’ One Big Beautiful Bill Act gave DHS $85 billion to fund it through September 30, 2029. What is outstanding now is its base budget of around $10 billion. Because ICE agents are considered “essential” workers, they, unlike TSA agents, are getting paid during the funding fight.

Today the administration announced ICE agents will take the place of some TSA agents, although as the former national security officials at The Steady State note, the legality of moving ICE agents into TSA positions isn’t clear. Tonight Trump admitted he is not interested in any deal with the Democrats to fund the Department of Homeland Security unless Democrats also agree to the SAVE America Act, which would require proof of citizenship to register to vote and to vote, and which is widely understood to be a measure designed to suppress voting. Trump also includes in the measure an end to mail-in voting, and an attack on transgender Americans.

Then, at 1:26 yesterday afternoon, Trump responded to the death of 81-year-old special counsel Robert Mueller by posting: “Robert Mueller just died. Good, I’m glad he’s dead. He can no longer hurt innocent people! President DONALD J. TRUMP.”

As Josh Meyer of USA Today reported, Mueller was a lifelong public servant. He served in combat as a Marine Corps officer in the Vietnam War, during which he was wounded. “I consider myself exceptionally lucky to have made it out of Vietnam,” Mueller said years later. “There were many—many—who did not. And perhaps because I did survive Vietnam, I have always felt compelled to contribute.” He became a federal prosecutor covering organized crime, terrorism, and public corruption. A conservative Republican nominated by President George W. Bush to direct the Federal Bureau of Investigation (FBI), he took office just a week before 9/11 and proceeded to reshape the FBI’s mission from fighting crime to an emphasis on counterterrorism and intelligence.

In 2017, Deputy Attorney General Rod Rosenstein appointed Mueller special counsel for the Department of Justice to investigate Russian interference in the 2016 election. Mueller’s team filed charges against Trump’s former campaign chair Paul Manafort and co-chair Rick Gates for conspiracy to launder money, violating the Foreign Agents Registration Act, and conspiracy against the United States, and reached a plea agreement with Trump’s former national security advisor Michael Flynn, who pleaded guilty to lying to the FBI about his contacts with Russian operative and ambassador Sergey Kislyak. Mueller’s team also indicted thirteen Russians and three Russian companies involved in pushing Russian propaganda to American voters. Ultimately the team indicted thirty-four people, including six of Trump’s former advisors, five of whom pleaded guilty.

Mueller’s final report detailed the efforts of Russian operatives to help Trump and hurt Democratic candidate Hillary Clinton, saying Russia launched “multiple, systematic efforts” to interfere with the election. Mueller said he had not been able to consider Trump’s guilt because Justice Department policy prohibits the prosecution of a sitting president, but added: “If we had confidence that the president clearly did not commit a crime, we would have said that.” He refused to say his report “exonerated” Trump, as Trump’s supporters insisted.

A later report by the Republican-led Senate Intelligence Committee agreed that members of Trump’s 2016 campaign, led by Manafort, worked with Russian operatives to help Trump get elected.

Not only is Robert Mueller getting under Trump’s skin, so, clearly, is his own failure to reopen the Strait of Hormuz. At 7:44 last night, he posted: “If Iran doesn’t FULLY OPEN, WITHOUT THREAT, the Strait of Hormuz, within 48 HOURS from this exact point in time, the United States of America will hit and obliterate their various POWER PLANTS, STARTING WITH THE BIGGEST ONE FIRST! Thank you for your attention to this matter. President DONALD J. TRUMP.”

In a conversation with Anne McElvoy of Politico on Thursday, United Nations Secretary-General António Guterres noted that attacks on civilian energy infrastructure are war crimes.

Yesterday Julie K. Brown of The Epstein Files, whose work digging into the cover-up of the Epstein story for the Miami Herald has been instrumental in bringing the scandal to light, and her colleague Claire Healy reported that after sex offender Jeffrey Epstein was found dead in his prison cell on August 10, 2019, a corrections officer called the FBI’s Threat Operations Center saying the officer “found it suspicious that an after-action team charged with investigation would be shredding huge amounts of paperwork” while FBI agents were in the building.

An inmate who helped shred documents told guards: “They are shredding everything,” and an assistant federal prosecutor noted the destruction or misplacing of relevant records. Another corrections officer wrote to the FBI on August 19 about an unusual amount of shredding and disposal, and suggested: “you may want to investigate why [Bureau of Prisons] employees are destroying records.”

This morning, at 8:24, Trump posted: “Now with the death of Iran, the greatest enemy America has is the Radical Left, Highly Incompetent, Democrat Party! Thank you for your attention to this matter. President DJT”

Tonight, just before midnight, he posted: “PEACE THROUGH STRENGTH, TO PUT IT MILDLY!!!”

Notes:

https://www.cnbc.com/2026/03/20/trump-pearl-harbor-japan-takaichi-iran-war.html

https://www.politico.com/news/2026/03/22/surprise-embarrassment-unease-japan-pearl-harbor-00839369

https://www.yahoo.com/news/articles/fbi-warned-bags-documents-were-143207523.html

https://www.usatoday.com/story/news/politics/2026/03/21/what-to-know-about-former-fbi-chief-and-trump-foe-robert-mueller/89264548007/

https://www.propublica.org/article/kristi-noem-dhs-ad-campaign-strategy-group

https://www.nbcnews.com/news/us-news/dhs-contractors-told-white-house-officials-asked-pay-corey-lewandowski-rcna263744

https://www.nytimes.com/2026/03/21/us/politics/corey-lewandowski-noem-dhs.html

https://www.cnn.com/us/live-news/tsa-wait-times-government-shutdown-03-22-26

https://www.npr.org/2026/01/21/nx-s1-5674887/ice-budget-funding-congress-trump

https://www.theguardian.com/us-news/2026/mar/22/trump-iran-leon-panetta

https://www.theguardian.com/us-news/2026/mar/20/us-sanctions-iranian-oil

https://www.cnn.com/us/live-news/tsa-wait-times-government-shutdown-03-22-26

https://www.politico.eu/article/un-chief-guterres-reasonable-grounds-believe-war-crimes-happening-iran-war/

https://www.intelligence.senate.gov/wp-content/uploads/2024/08/sites-default-files-documents-report-volume5.pdf

X:

PhilGordonDC/status/2035346997343866924

Bluesky:

atrupar.com/post/3mhnsvsjc7y23

atrupar.com/post/3mhnlb2im5c2k

axidentaliberal.bsky.social/post/3mhltdap5wk2y

ronfilipkowski.bsky.social/post/3mhmdxe2ojs2o

ronfilipkowski.bsky.social/post/3mhlozz53zc2s

gillianbrockell.com/post/3mhlf5tyspc2m

thesteadystate.org/post/3mhokt7nrxk2t

eliothiggins.bsky.social/post/3mhnkq6hmwk2u

thetnholler.bsky.social/post/3mhp7lpz47c2x

josephpolitano.bsky.social/post/3mhp5zgdy7k2u

murray.senate.gov/post/3mhloteycsk2l

Share

The 13th, 14th, and 15th Amendments

Oil versus Ice Cream

When Tyler and I were writing Modern Principles of Economics, we wanted examples that were modern, specific, and grounded in the real world. That has been a bit of a headache, because we have to update them with every new edition. Our biggest competitor uses the ice cream market as its central example and never has to revise. Smart! But for us, the extra work has been worth it.

We chose the oil market as our central example. Oil is always in the news, and it works really well across a wide range of textbook topics: the elasticity of demand and supply; oligopoly and cartels; the shutdown condition; shocks; expectations, speculation and futures markets; and oil prices have macroeconomic implications that connect micro to macro.

Yes, keeping the examples current takes more work. But when a student sees that the price of crude has surged past $100 a barrel because Iran closed the Strait of Hormuz—choking off 20% of the world’s oil supply—they have the framework to understand what is happening. Supply shock, inelastic demand, expectations and speculation, the macroeconomic transmission to GDP—it’s all right there in the headlines. Try doing that with the ice cream market.

See the Invisible Hand. Understand Your World. It is not just our slogan. It’s our method.

The post Oil versus Ice Cream appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

The city that wasted nothing

Ink painting of people in traditional attire engaging in activities on a wooden floor with various objects around them.

Edo, modern Tokyo, transformed from a city near ecological collapse to a thriving epicentre by creating a circular economy

- by Aeon Video

Watch on Aeon

An African philosophy

A building with yellow wall, hands in silhouette in foreground, and animals resting in the sun.

Lansana Keita rejected Eurocentric ideas, tracing the philosophical tradition back to African Kemet or ancient Egypt

- by Sanya Osha

Read on Aeon

PCGamer Article Performance Audit

Research: PCGamer Article Performance Audit

Stuart Breckenridge pointed out that PC Gamer Recommends RSS Readers in a 37MB Article That Just Keeps Downloading, highlighting a truly horrifying example of web bloat that added up to 100s more MBs thanks to auto-playing video ads. I decided to have Claude Code for web use Rodney to investigate the page - prompt here.

Tags: web-performance, rodney

JavaScript Sandboxing Research

Research: JavaScript Sandboxing Research

Aaron Harper wrote about Node.js worker threads, which inspired me to run a research task to see if they might help with running JavaScript in a sandbox. Claude Code went way beyond my initial question and produced a comparison of isolated-vm, vm2, quickjs-emscripten, QuickJS-NG, ShadowRealm, and Deno Workers.

Tags: sandboxing, javascript, nodejs, claude-code

DNS Lookup

Tool: DNS Lookup

TIL that Cloudflare's 1.1.1.1 DNS service (and 1.1.1.2 and 1.1.1.3, which block malware and malware + adult content respectively) has a CORS-enabled JSON API, so I had Claude Code build me a UI for running DNS queries against all three of those resolvers.

Tags: dns, cors, cloudflare

Merge State Visualizer

Tool: Merge State Visualizer

Bram Cohen wrote about his coherent vision for the future of version control using CRDTs, illustrated by 470 lines of Python.

I fed that Python (minus comments) into Claude and asked for an explanation, then had it use Pyodide to build me an interactive UI for seeing how the algorithms work.

Tags: vcs, pyodide, bram-cohen, crdt

Iranian Oil, Still Shipping

The willingness of the Trump administration to let Iran keep selling oil shows that we’re not willing to endure pain in a war that it all about enduring pain.

For more like this, see my YouTube channel

Transcript

If you want to understand how things are going in Iran and why it looks increasingly likely that the United States is going to lose this thing in every sense that matters, follow the oil money.

Hi, Paul Krugman with another not very happy update Sunday afternoon. Yesterday, Donald Trump threatened Iran with basically a massive war crime, saying that if they don’t open the Strait of Hormuz by 48 hours from the time of the post, which would be tomorrow, that he will order attacks on Iranian power plants, on civilian infrastructure, which is, you know, it is a war crime, not something that has never been done, not something that the United States has never done, but not something that you kind of just openly announce — that we’re going to try and terrorize you with this bombing campaign.

But at the same time, the United States is allowing Iran, and only Iran, to export oil, shipping it out through the Strait of Hormuz. The United States isn’t stopping other countries from doing it, but Iran is, and we are allowing them to grant safe passage only to ships that they approve going through that strait. Now that is wild. Sounds completely crazy. It’s not completely crazy, but what it is is it’s a demonstration of incredible weakness. Why would the United States allow Iranian oil to continue to be exported? Or shouldn’t we be trying to, you know, we’re at war with these people, and their revenue base depends basically on selling oil. So why are we allowing them to do that? It would be very easy as a matter of military force for the United States to just stop those oil exports.

And the answer is that, well, the two million barrels a day or something like that that Iran is managing to export are two million barrels a day of world oil supply. And we’re in a world in which the total supply to the world market is down substantially. down something like 10 million barrels or more per day because of the closure of the straits to everybody but the Iranians. And the United States is afraid to worsen the shortage by stopping the Iranians from selling oil, presumably because the Trump administration is afraid of the political backlash from higher gas prices. It’s already frantic enough to threaten war crimes in order to try and get oil flowing and gas prices down. But they’re apparently frightened enough of gas prices that they’re willing to allow the enemy to keep making money selling oil in order to keep those prices somewhat lower than they would otherwise have been.

That’s an admission, implicitly, of enormous weakness. It’s an admission that the Trump administration is not willing to accept sustained pain as part of this war. They’re willing to drop bombs and all of that, but they’re not willing to accept economic pain in the United States, even enough to shut off the revenue flow to the Iranian government.

And this war is fundamentally about who can stand the pain. It’s the United States doing lots of damage to Iran, but the Iranian government seems to think it can handle that. And the Iranians trying to inflict enough pain through hurting the world’s supply of oil that the United States ceases and desists.

And given the behavior, who would you bet on in this situation? So this is looking, I don’t want to say this, right? I mean, I do not want to see, obviously for domestic political reasons, I don’t want a Trump victory parade.

But a world in which the United States loses this war is going to be really a very dangerous world for all of us. But I’m afraid that that’s the direction we’re heading.

Have a nice rest of your weekend.

How to Burn Less Oil

Furious Iran Bombards Saudi Oil Refineries In Retaliation For Israel-US Gas  Facility Attack

The world economy must find a way to function while burning less oil.

That may sound like a call to action, but in the short run it’s simply a statement of fact. Until the Iran war began, 20 percent of the world’s oil supply was shipped through the Strait of Hormuz. Barring a deal with Iran, which is nowhere in sight, or military action that eliminates almost all threats to shipping — which is very hard to achieve in this modern age of drone warfare — there is simply going to be less oil available for months, maybe even years, to come.

And in the longer run, we’re now having an object lesson in the strategic risks of depending so much on oil — risks that add to the already compelling environmental case for moving away from fossil fuels in general.

But how hard will it be to reduce our dependence on the black stuff? Can the world economy prosper while burning much less oil than it has in the past?

The answer depends on the time frame. Even with oil costing $100 a barrel — indeed, even if it goes to $150 — it will be very hard to reduce overall oil consumption quickly.

That’s because in the short run – which means several years — the only way to consume less oil is for people to change their behavior, mainly by driving less. So to induce a major decline in oil consumption prices would need to go high enough that people turn to carpooling, working from home, taking the bus where that’s an available option (which for most Americans it isn’t.) Or, in the worst-case scenario, oil prices will have to reduce consumers’ purchasing power so much that the economy falls into a recession, which would among other things reduce the demand for oil.

In the longer run, by contrast — defined as a period long enough to replace a large fraction of vehicles on the road — there is much greater potential for consuming much less oil, with small or zero adverse effects on economic growth and purchasing power. This was true even before the technological innovations that have made electric vehicles (EVs) competitive with internal consumption vehicles (ICEs). Fuel-efficient vehicles provide most of the benefits of gas-guzzling SUVs, and consumers may not realize how much money they save. And now that EVs are competitive, drastic reductions in gas consumption are possible with minimal disruption.

Finally, if we make different decisions about how we live and work, the world could easily thrive while burning only a fraction as much oil as it does now.

Beyond the paywall I will address the following issues:

1. Why we consume so much oil, and how that logic is changing

2. The demand for oil in the short run, and the crucial question of price elasticity

3. How demand can adjust once there’s time to replace vehicles

4. Oil demand and how we live: The long-run possibilities

Read more

Beats now have notes

Last month I added a feature I call beats to this blog, pulling in some of my other content from external sources and including it on the homepage, search and various archive pages on the site.

On any given day these frequently outnumber my regular posts. They were looking a little bit thin and were lacking any form of explanation beyond a link, so I've added the ability to annotate them with a "note" which now shows up as part of their display.

Here's what that looks like for the content I published yesterday:

Screenshot of part of my blog homepage showing four "beats" entries from March 22, 2026, each tagged as RESEARCH or TOOL, with titles like "PCGamer Article Performance Audit" and "DNS Lookup", now annotated with short descriptive notes explaining the context behind each linked item.

I've also updated the /atom/everything/ Atom feed to include any beats that I've attached notes to.

Tags: atom, blogging, site-upgrades

Starlette 1.0 skill

Research: Starlette 1.0 skill

See Experimenting with Starlette 1.0 with Claude skills.

Tags: starlette

Experimenting with Starlette 1.0 with Claude skills

Starlette 1.0 is out! This is a really big deal. I think Starlette may be the Python framework with the most usage compared to its relatively low brand recognition because Starlette is the foundation of FastAPI, which has attracted a huge amount of buzz that seems to have overshadowed Starlette itself.

Kim Christie started working on Starlette in 2018 and it quickly became my favorite out of the new breed of Python ASGI frameworks. The only reason I didn't use it as the basis for my own Datasette project was that it didn't yet promise stability, and I was determined to provide a stable API for Datasette's own plugins... albeit I still haven't been brave enough to ship my own 1.0 release (after 26 alphas and counting)!

Then in September 2025 Marcelo Trylesinski announced that Starlette and Uvicorn were transferring to their GitHub account, in recognition of their many years of contributions and to make it easier for them to receive sponsorship against those projects.

The 1.0 version has a few breaking changes compared to the 0.x series, described in the release notes for 1.0.0rc1 that came out in February.

The most notable of these is a change to how code runs on startup and shutdown. Previously that was handled by on_startup and on_shutdown parameters, but the new system uses a neat lifespan mechanism instead based around an async context manager:

@contextlib.asynccontextmanager
async def lifespan(app):
    async with some_async_resource():
        print("Run at startup!")
        yield
        print("Run on shutdown!")

app = Starlette(
    routes=routes,
    lifespan=lifespan
)

If you haven't tried Starlette before it feels to me like an asyncio-native cross between Flask and Django, unsurprising since creator Kim Christie is also responsible for Django REST Framework. Crucially, this means you can write most apps as a single Python file, Flask style.

This makes it really easy for LLMs to spit out a working Starlette app from a single prompt.

There's just one problem there: if 1.0 breaks compatibility with the Starlette code that the models have been trained on, how can we have them generate code that works with 1.0?

I decided to see if I could get this working with a Skill.

Building a Skill with Claude

Regular Claude Chat on claude.ai has skills, and one of those default skills is the skill-creator skill. This means Claude knows how to build its own skills.

So I started a chat session and told it:

Clone Starlette from GitHub - it just had its 1.0 release. Build a skill markdown document for this release which includes code examples of every feature.

I didn't even tell it where to find the repo, Starlette is widely enough known that I expected it could find it on its own.

It ran git clone https://github.com/encode/starlette.git which is actually the old repository name, but GitHub handles redirects automatically so this worked just fine.

The resulting skill document looked very thorough to me... and then I noticed a new button at the top I hadn't seen before labelled "Copy to your skills". So I clicked it:

Screenshot of the Claude.ai interface showing a conversation titled "Starlette 1.0 skill document with code examples." The left panel shows a chat where the user prompted: "Clone Starlette from GitHub - it just had its 1.0 release. Build a skill markdown document for this release which includes code examples of every feature." Claude's responses include collapsed sections labeled "Strategized cloning repository and documenting comprehensive feature examples," "Examined version details and surveyed source documentation comprehensively," and "Synthesized Starlette 1.0 knowledge to construct comprehensive skill documentation," with intermediate messages like "I'll clone Starlette from GitHub and build a comprehensive skill document. Let me start by reading the skill-creator guide and then cloning the repo," "Now let me read through all the documentation files to capture every feature:" and "Now I have a thorough understanding of the entire codebase. Let me build the comprehensive skill document." The right panel shows a skill preview pane with buttons "Copy to your skills" and "Copy" at the top, and a Description section reading: "Build async web applications and APIs with Starlette 1.0, the lightweight ASGI framework for Python. Use this skill whenever a user wants to create an async Python web app, REST API, WebSocket server, or ASGI application using Starlette. Triggers include mentions of 'Starlette', 'ASGI', async Python web frameworks, or requests to build lightweight async APIs, WebSocket services, streaming responses, or middleware pipelines. Also use when the user is working with FastAPI internals (which is built on Starlette), needs ASGI middleware patterns, or wants a minimal async web server" (text truncated).

And now my regular Claude chat has access to that skill!

A task management demo app

I started a new conversation and prompted:

Build a task management app with Starlette, it should have projects and tasks and comments and labels

And Claude did exactly that, producing a simple GitHub Issues clone using Starlette 1.0, a SQLite database (via aiosqlite) and a Jinja2 template.

Claude even tested the app manually like this:

cd /home/claude/taskflow && timeout 5 python -c "
import asyncio
from database import init_db
asyncio.run(init_db())
print('DB initialized successfully')
" 2>&1

pip install httpx --break-system-packages -q \
  && cd /home/claude/taskflow && \
  python -c "
from starlette.testclient import TestClient
from main import app

client = TestClient(app)

r = client.get('/api/stats')
print('Stats:', r.json())

r = client.get('/api/projects')
print('Projects:', len(r.json()), 'found')

r = client.get('/api/tasks')
print('Tasks:', len(r.json()), 'found')

r = client.get('/api/labels')
print('Labels:', len(r.json()), 'found')

r = client.get('/api/tasks/1')
t = r.json()
print(f'Task 1: \"{t[\"title\"]}\" - {len(t[\"comments\"])} comments, {len(t[\"labels\"])} labels')

r = client.post('/api/tasks', json={'title':'Test task','project_id':1,'priority':'high','label_ids':[1,2]})
print('Created task:', r.status_code, r.json()['title'])

r = client.post('/api/comments', json={'task_id':1,'content':'Test comment'})
print('Created comment:', r.status_code)

r = client.get('/')
print('Homepage:', r.status_code, '- length:', len(r.text))

print('\nAll tests passed!')
"

For all of the buzz about Claude Code, it's easy to overlook that Claude itself counts as a coding agent now, fully able to both write and then test the code that it is writing.

Here's what the resulting app looked like. The code is here in my research repository.

Screenshot of a dark-themed Kanban board app called "TaskFlow" showing the "Website Redesign" project. The left sidebar has sections "OVERVIEW" with "Dashboard", "All Tasks", and "Labels", and "PROJECTS" with "Website Redesign" (1) and "API Platform" (0). The main area has three columns: "TO DO" (0) showing "No tasks", "IN PROGRESS" (1) with a card titled "Blog about Starlette 1.0" tagged "MEDIUM" and "Documentation", and "DONE" (0) showing "No tasks". Top-right buttons read "+ New Task" and "Delete".

Tags: open-source, python, ai, asgi, kim-christie, generative-ai, llms, ai-assisted-programming, claude, coding-agents, skills, agentic-engineering, starlette

‘Good, I’m Glad He’s Dead.’

The sitting president of the United States, on his blog:

Robert Mueller just died. Good, I’m glad he’s dead. He can no longer hurt innocent people! President DONALD J. TRUMP

As the elderly descend further into dementia, they lose their sense of propriety and simply speak their mind. (They also get confused and think they need to “sign” their text messages and social media posts.) Say what you want about Trump’s truthfulness generally, but here, he’s just being brutally honest. Let’s keep his “Good, I’m glad he’s dead” post bookmarked for when Trump himself finally keels over — after he chokes on a hamburger or whatever it’ll be that finally does him in — and the good people of the world rejoice and celebrate.

 ★ 

What's happening at the end of that street? What's happening at the end of that street?


When will “the research paper” disappear in economics?

Soon enough you will be able to take any published research paper and tweak it, or improve it, any way you want.  Just apply a dose of AI.

Using Refine, you already can judge the quality of all past papers, once you get them in uploadable form.  We now can rewrite the entire history of modern economics with the mere investment of tokens.  Which papers in the 1993 AER were really the good ones?  Which are simply false and do not replicate?

Refine, or some service like it, will only get better, and cheaper.

Do we even need the AER any more to certify which are the best papers?  Just ask the AIs, including about influence not just quality.

Why not write a program, or have an AI write it for you, that will take your favorite papers and improve them, and change their evaluations over time, as new results come in?  Of course people will do this, at least to the extent they care.  These papers will keep on morphing.

Will economics become a branch of software engineering?  There are important papers in software engineering, but very often the most important advances are embodied in actual software, AI included.

Will the future advances in economics come from producing evaluative systems and producing systems, rather than papers?

What if you submit to a journal a data set and some code?  Who needs “the paper” per se?  Just issue some commands to the “data set plus code” and get the paper you want.  How about “I am Tyler Cowen, what is it you think I will find interesting in this data set?”

Or publish a method for simulating human behavior, to run AI-simulated experimental economics, a’la Horton and Manning?  Publish “the box,” and do not worry so much about the individual paper.

Will highly productive researchers, who publish a lot of papers, become far less valuable?  The individual paper no longer seems scarce, or will not be in another year or two.

Give tenure to people who build capabilities and who build “boxes”?

How about an economics Nobel Prize for Anthropic and Open AI?

I thank Alex T. for useful discussions on this point.

The post When will “the research paper” disappear in economics? appeared first on Marginal REVOLUTION.

       

Comments

 

The Triangulum galaxy up close

Today’s Picture of the Week is a closeup of the nearby Triangulum galaxy, also known as Messier 33, located about 3 million light-years away. This festive-looking image, taken with ESO’s Very Large Telescope (VLT), reveals the diversity and complexity of the gas and dust between the stars in great detail.

Stars are not, as is often imagined, isolated spheres in the dark, but rather live in rich and complex environments that they actively shape. Studying this cosmic interplay tells us about how stars form, and how their radiation affects the surrounding material, which helps us to understand how galaxies evolve as a whole.

The image was presented in a new study led by Anna Feltre, a postdoctoral researcher at the INAF-Astrophysical Observatory of Arcetri, Italy. The team used data taken with the Multi Unit Spectroscopic Explorer (MUSE) instrument at the VLT. MUSE’s superpower is its ability to break up the light into the different rainbow colours, allowing the team to examine the chemical composition of the interstellar matter at every location across its whole field of view.

The different colours of the image represent different elements: blue, green and red indicate the presence of oxygen, hydrogen and sulphur, respectively. MUSE allowed the team to map the distribution of many other elements, as well as their motion, key to understanding the link between stars and their surroundings. As Feltre aptly puts it: “This cosmic interplay produces a spectacular and dynamic landscape, revealing that the birthplaces of stars are far more beautiful and complex than we ever imagined.”

Links

Paraguay trend of the day

Lured by low taxes, entrepreneurs from across Latin America are plowing in money and taking up residence, with applications surging more than 60% in 2025. Sleek towers and luxury car dealerships now dot Asunción, a city where infrastructure is still struggling to catch up. And Wall Street investors are snapping up Paraguay’s bonds as its conservative president, Santiago Peña, aligns his government with the Trump administration.

Though roughly the size of California, Paraguay’s $47 billion economy is about 1% of the Golden State’s. But rapid growth and economic reforms in recent years helped the country win investment-grade credit status from Moody’s Ratings in 2024 and from S&P Global last year.

…Paraguay’s embrace of sound fiscal and monetary policies after its 2003 financial crisis is now paying off, with single-digit inflation and annual growth averaging around 4% over the past two decades.

Here is more from Bloomberg, growth last year was six percent.  Southern Cone remains underrated.

The post Paraguay trend of the day appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

A Fault Line in Full Bloom

March 5
March 13
Wildflower blooms appear as yellow patches at the center of satellite images centered on Carrizo Plain National Monument. The blooms spread and intensify between March 5 and March 13.
Wildflower blooms appear as yellow patches at the center of satellite images centered on Carrizo Plain National Monument. The blooms spread and intensify between March 5 and March 13.
NASA Earth Observatory / Lauren Dauphin
Wildflower blooms appear as yellow patches at the center of satellite images centered on Carrizo Plain National Monument. The blooms spread and intensify between March 5 and March 13.
Wildflower blooms appear as yellow patches at the center of satellite images centered on Carrizo Plain National Monument. The blooms spread and intensify between March 5 and March 13.
NASA Earth Observatory / Lauren Dauphin

March 5, 2026 – March 13, 2026

Golden wildflowers color the Carrizo Plain and surrounding Southern California landscape in these images captured on March 5, 2026 (left), and March 13, 2026 (right), by the OLI (Operational Land Imager) on Landsat 8 and Landsat 9, respectively. NASA Earth Observatory/Lauren Dauphin

Whether it qualifies as a “superbloom” is in the eye of the beholder, but there is no doubt that California’s Carrizo Plain and the neighboring mountain ranges were awash with color as wildflowers put on their annual show in spring 2026.

Landsat satellites began to show the early signs of color in February. By early March, flowers had turned areas around Soda Lake a bright shade of yellow, and by mid-month, they had spread even farther. Yellow wildflower blooms are visible amid the dendritic network of streams flanking the alkaline lake, which dries out completely during drought years. Colors were particularly vibrant across the Carrizo Plain National Monument, even decorating meadows along the zipper-shaped San Andreas Fault with splashes of purple due to blooms of Phacelia ciliata.

More yellow and purple blooms are visible along the zipper-shaped structure of the San Andreas Fault.
Wildflowers bloom along the San Andreas Fault in this image acquired on March 13, 2026, by the OLI (Operational Land Imager) on Landsat 9.
NASA Earth Observatory / Lauren Dauphin

Winter 2025-2026 brought bouts of rain and variable conditions that benefited wildflowers. Soaking rains saturated soils in November and December, bringing rainfall totals to nearly twice the usual level, according to a report from the California Department of Water Resources. NASA data cited in the report showed soil moisture remained well above average for the month of February.

The pulse of early rains helped kick-start wildflowers because many seeds need at least a half-inch of rain to wash off their protective coating to germinate, according to the National Park Service. The warm, dry periods that followed also helped. Once established, wildflowers benefit from intermittent rainfall rather than constant soaking.

Strips of yellow and purple wildflowers decorate a green, grassy valley as the viewer looks down from a hill.
Wildflowers in Carrizo Plain National Monument on March 7, 2026.
Photograph by Erin Berkowitz

The Wild Flower Hotline reported that west-facing slopes of the Temblor Range were the first places to come alive with hillside daisies (Monolopia lanceolata) accompanied by California goldfields (Lasthenia californica) and forked fiddlenecks (Amsinckia furcata) in March. The display in the Caliente Range was enhanced by a lack of grass thatch, which was burned off in the Madre fire in July 2025.

Reports from experts on the ground indicate that common goldfield (Lasthenia gracilis), also called the needle goldfield, is responsible for the expanse of yellow near Soda Lake. Individual plants are small, but they often grow in disturbed areas just centimeters apart and bloom simultaneously, creating expansive blankets of color.

March 5
March 13
A more detailed view shows yellow blooms against a background of green surrounding Soda Lake and several streams to its east.
A more detailed view shows yellow blooms against a background of green surrounding Soda Lake and several streams to its east.
NASA Earth Observatory / Lauren Dauphin
A more detailed view shows yellow blooms against a background of green surrounding Soda Lake and several streams to its east.
A more detailed view shows yellow blooms against a background of green surrounding Soda Lake and several streams to its east.
NASA Earth Observatory / Lauren Dauphin

March 5, 2026 – March 13, 2026

Common goldfield spreads around California’s Soda Lake in these images acquired on March 5, 2026 (left), and March 13, 2026 (right), by the OLI (Operational Land Imager) on Landsat 8 and Landsat 9, respectively. NASA Earth Observatory/Lauren Dauphin

In an article for Flora magazine, Bryce King, lead field botanist for the California Native Plant Society, described the Lasthenia blooms there as one of many “seemingly unending stretches of color” across the valley bottom. Lasthenia is a “staple” of vernal pools and seasonally wet areas, he wrote, but the synchronicity of blooms on the valley floor and surrounding hills during a March visit was “beyond anything” he had expected.

Teams of NASA scientists are using remote sensing to study wildflower blooms and flowering plants, aiming to develop techniques for tracking blooms over broad areas and tools that can support farmers, beekeepers, and resource managers. Fruit, nuts, honey, and cotton are among the many crops and commodities produced by flowering plants.

A NASA scientist works in a grassy field with a large patch of yellow wildflowers in the distance.
Yoseline Angel captures the spectral signature of goldfield flowers in grasslands near Soda Lake on March 14, 2026, by measuring the reflectance of yellow petals and green leaves with a field spectrometer.
NASA/Andreas Baresch

“I would certainly consider this a superbloom,” said Yoseline Angel, a scientist at NASA’s Goddard Space Flight Center. “It’s hard to describe how stunning these wildflowers were from the ground.” 

Angel and Goddard colleague Andres Baresch were in the field in Carrizo Plain National Monument on March 13 taking spectral measurements of blooming wildflowers as Landsat acquired one of the images shown above. They are in the process of developing a global flower monitoring system that will integrate observations from the ground with those from space-based sensors such as OLI on Landsat 8 and 9 and EMIT (Earth Surface Mineral Dust Source Investigation) on the International Space Station to track the progression of blooms.

“This was the perfect opportunity to test how well our models scale between the ground and satellites,” she said. “We were fortunate to have a huge number of seeds germinate and bloom simultaneously because last year was so dry and this winter was so wet.”

A mixture of yellow and purple wildflowers blanket a meadow with green hills in the distance.
Gold and purple wildflowers bloom in Carrizo Plain National Monument on March 7, 2026.
Photograph by Erin Berkowitz

NASA Earth Observatory images by Lauren Dauphin, using Landsat data from the U.S. Geological Survey. Photos courtesy of Erin Berkowitz and Andres Baresch. Story by Adam Voiland.

References & Resources

You may also be interested in:

Stay up-to-date with the latest content from NASA as we explore the universe and discover more about our home planet.

Plants and Algae Swirl Across a South African Reservoir
5 min read

Vivid green blooms form, drift, and fade in Hartbeespoortdam reservoir over the course of a year.

Article
An Amphitheater of Rock at Cedar Breaks
4 min read

The colorful formations found in this bowl-shaped escarpment in southwestern Utah are the centerpiece of Cedar Breaks National Monument.

Article
Seasons Change in Southwest Virginia
3 min read

From autumn color to a winter-white finish, forested areas around Blacksburg trade foliage for snow over the span of two…

Article

The post A Fault Line in Full Bloom appeared first on NASA Science.

Sunday 22 March 1662/63

(Lord’s day). Up betimes and in my office wrote out our bill for the Parliament about our being made justices of Peace in the City.

So home and to church, where a dull formall fellow that prayed for the Right Hon. John Lord Barkeley, Lord President of Connaught, &c. So home to dinner, and after dinner my wife and I and her woman by coach to Westminster, where being come too soon for the Christening we took up Mr. Creed and went out to take some ayre, as far as Chelsey and further, I lighting there and letting them go on with the coach while I went to the church expecting to see the young ladies of the school, Ashwell desiring me, but I could not get in far enough, and so came out and at the coach’s coming back went in again and so back to Westminster, and led my wife and her to Captain Ferrers, and I to my Lord Sandwich, and with him talking a good while; I find the Court would have this Indulgence go on, but the Parliament are against it. Matters in Ireland are full of discontent.

Thence with Mr. Creed to Captain Ferrers, where many fine ladies; the house well and prettily furnished. She [Mrs. Ferrers] lies in, in great state, Mr. G. Montagu, Collonel Williams, Cromwell that was, and Mrs. Wright as proxy for my Lady Jemimah, were witnesses. Very pretty and plentiful entertainment, could not get away till nine at night, and so home. My coach cost me 7s. So to prayers, and to bed.

This day though I was merry enough yet I could not get yesterday’s quarrel out of my mind, and a natural fear of being challenged by Holmes for the words I did give him, though nothing but what did become me as a principal officer.

Read the annotations

SpaceX offers details on orbital data center satellites

Starship AI Sat Mini

SpaceX Chief Executive Elon Musk revealed more technical, but not financial, details about his company’s plans to deploy an orbital data center constellation.

The post SpaceX offers details on orbital data center satellites appeared first on SpaceNews.

Some European Launcher Challenge funding remains in limbo

Orbex Prime

Nearly 140 million euros ($162 million) that European Space Agency member states allocated to a program to support launch vehicle development remains in limbo and could be lost.

The post Some European Launcher Challenge funding remains in limbo appeared first on SpaceNews.

Westerners are fleeing their countries in record numbers

This will have economic consequences for the places they leave and their destinations

Even the best-case scenario for energy markets is disastrous

Whatever happens, high prices will outlive the Iran warÂ

Rediscovering Irony

As above, so below. It seems to me that the problem of pushing AI past its most important limitations, and the problem of rescuing human culture from its most important pathologies at all scales, from claustrophobic and increasingly diseased cozyweb enclaves, to calamitously stupid geopolitical theaters of violent performativity, are the same.

The problem is insufficient irony, to check and balance a culture (emphasis on cult) of sincerity and authenticity turned cancerous, over nearly two decades of unchecked and critically unexamined metastasis.

Since at least 2008, sincerity has been uncritically valorized, and irony systematically mischaracterized, demonized and devalued, obscuring the dark and deleterious aspect of the former, and the generative potentialities of the latter.

In this essay, I want to try and restore balance to the universe by reclaiming irony in its fullest, most potent sense — the capacity for holding two inextricably, subatomically entangled ideas in juxtaposition, in word and deed, in order to deal with realities that are ambiguous down to their deepest core.

While not the main purpose of this essay, I also want to go on a bit of a polemical side quest to dethrone sincerity and authenticity from the undeserved status they have ascended to in our time, which has resulted in great harm that continuous to compound.

And here, I mean sincerity and authenticity broadly: sensibilities that orient around stable, unitary meanings in words and deeds, holding them to be superior moral goods purely by virtue of their not being ambiguous. The self-certain sincere can be found all over the political and cultural map. Self-importantly sincere conservatives and progressives might not agree on a lot, but one thing they do agree on is that anyone capable of expressing two thoughts in the same utterance is necessarily a conniving and hypocritical “elite intellectual.” Self-involvedly sincere artists and smarmy and self-congratulatory entrepreneurial types might hate and snark at each other, but both agree that all irony is necessarily degenerative cynicism that all creative doers ought to resist. Self-certain religious moralists and radical environmentalists might be at odds on every moral question, but both agree that the devilish business of entertaining two ideas in tension within a single thought can only be the result of debased, depraved immorality.

Give a dog a bad name and hang him. Irony, charged with and reduced to simple hypocrisy, cynicism, and outright immorality, has been the consensus villain of our era.

As we shall see, all the charges against irony can in fact be laid at the door of the ecology of competing sincerities, and that irony, far from being an enervating drain on the collective psyche, is in fact its sole reliable source of generativity and liveness. It is in fact sincerity that is the deadening drain.

A society that does not cultivate a systematic capacity for, and literacy in, ironic modes of engaging reality, is doomed in precisely the way we seem to be doomed right now.

Until quite recently, making this argument has been not just difficult, but pointless. Sincerity is a fear response to the ambiguity of reality, and the practice of irony takes a particular kind of courage that the sincere not only lack, but in a masterful display of self-delusion, label cowardice, even as they identify their own shrinking retreat from ambiguity the best sort of courage.

The sincere not only don’t see it that way, they don’t see it at all. A benefit of deliberately suspending or destroying the natural human capacity for irony is that you cannot at once entertain the twin thoughts that you might be noble, and an asshole, at the same time. And of course, the sincere choose to believe in their nobility, and energetically repress the possibility and evidence of their own assholery from their self-mutilated one-track minds.

We must begin the story with Rousseau. The original Noble Asshole.

Noble Assholery from Rousseau to Graeber

Something like this essay has been brewing in my head for over a decade, but I just didn’t have all the pieces in my hands to make the complete argument.

The final piece of the puzzle came from The Infidel and the Professor, which I’m reading this month for our book club. It is an account of the long friendship and mutual influence of David Hume and Adam Smith. What caught my eye, however, was the book’s account of a marginal episode — Hume’s spat with Rousseau.

In the account of the spat, Rousseau comes off as a serious nutjob. A paranoiac with a persecution complex, who got along with nobody, and made everyone else pay for his fragile temperament. The spat was remarkably silly, and had nothing to do with the philosophies of either. It was not a philosophical spat, even though there is clearly raw material for philosophical conflict in their juxtaposed works.

Here’s what happened: Hume went out of his way to arrange a kind of political asylum for Rousseau in England after he’d pissed off most of the Continent, a kindness that Rousseau accepted with great reluctance and poor grace only when he had no choice. The kindness soon turned into fuel for his paranoia, and he developed an elaborate conspiracy theory based on the idea that Hume was out to get him for some reason.

This surprised me. In my headcanon Rousseau, as the anti-Hobbes,1 author of a state-of-nature origin myth for humanity that is rooted in cooperation rather than conflict, and a theory of social contracts that would suggest a harmony-seeking temperament, had been cast as a pleasant, collegial fellow, quite unlike the bloodthirsty Hobbes.

Apparently he was not. By all accounts, he was an uncollegial asshole.

Seems like among other things, Rousseau also pioneered what I thought was the modern adverse selection phenomenon of compensatory creativity, where people produce works that mark them as authorities on subjects defined by their weaknesses rather than strengths. Karl Popper’s great work was ironically dubbed “The Open Society by One of It’s Enemies” by a student, and in a similar spirit, we might dub Rousseau’s collective works “How to Live in Harmony with Nature” by Mr. Alienated Disharmony. Someone observed recently that Eat, Pray, Love fits this pattern too, in light of the author’s later weird arc. There’s probably a whole essay to be written about compensatory creativity. I probably fit the pattern too. I wrote Tempo about timing and decision-making because I am really bad at real-time decision-making and generally live in a state of atemporal indecisiveness.

I want to add a rather personal data point here, to make this an n=2 case at least. I don’t like to speak ill of the recently dead, but in this case it serves a purpose.

The account in the book (from a Hume-sympathetic, but also objective) point of view reminded me very strongly of a contemporary thinker, the late David Graeber. Some of you know about my one skirmish with Graeber in 2011, where he took deep umbrage at a passing mildly critical remark I made about Debt in a blog post, teasing my upcoming book review. Graeber somehow found the post (I presume he had a Google Alert set) and posted a series of combative comments on the blog post, which made me decide not to post the full review I had been planning (which would have been a mix of positive and critical, and overall mildly net critical). He later blocked me on Twitter. Not that I’m comparing myself to Hume, but I’m glad I chose to disengage where Hume, rather unwisely, imposed a favor on Rousseau despite warning signs that it would end badly.

I think enough time has passed since Graeber died (2020) that I can share my opinion of him without being an asshole myself: The guy, like Rousseau, was an asshole. And this is not just my own minority opinion.

Shortly after my own run-in with him, I learned that I wasn’t the only one to face the unexpectedly wide-roving wrath of The Graeberian Inquistion. Picking fights with a thin-skinned over-sensitivity to any criticism of his ideas (like Taleb, but with less substance underwriting the curmudgeonliness) was a pattern with him. I also learned, from a former student of his, that Graeber’s personality was marked by a kind of extreme extroversion, which made him unable to think except in the context of a social nexus and live dialogue (the student characterized him as the opposite of an aspie, what I had earlier in the year dubbed a codie). The guy apparently couldn’t think in isolation. He needed to do his thinking in an active web of people he was discoursing with. And presumably, going by the experiences of myself and several others, the web had to be in a constant state of active, acrimonious conflict to reassure him that he was alive and thinking. This is the opposite of my temperament. I do most of my thinking on my own, and to the extent I do it in an active social web, I prefer that web to be mostly in a state of harmony.

I don’t know how accurate the student’s characterization of Graeber is, but it strikes me as remarkable that the central feature of Debt is a theory of economic interactions that rests precisely on the notion of a nexus of live relationships as the primary unit of analysis, rather than the decisions and actions of individual economic agents. And like Rousseau, he too offered a (grandiose and revisionist) origin myth for our species, and was politically active on similar fronts (Rousseau wrote on inequality, Graeber was a central figure in #Occupy). It is a bit uncanny that two thinkers, separated by 300-odd years, had the same abrasive, asshole personality, and same interest in themes of harmony, cooperation, and so forth.

And the pattern goes beyond this n=2 dataset. As Jo Freeman argued in a classic 1972 essay, The Tyranny of Structurelessness, which the internet keeps rediscovering every couple of years, it is no accident that the prospect of a cooperative, egalitarian utopian harmony reliably attracts those with the worst possible temperament for pursuing such visions, with experiments always predictably dissolving into toxicity.

But I want to make a stronger argument than that of simple assholery. Rousseau (and arguably every reactionary primitivist since, across the political spectrum), wasn’t just an asshole. He was a noble asshole. How do I know this? Because I learned from my book that aside from picking paranoid-delusional fights with people trying to help him, he apparently also tried to start a kind of religion of sincerity.

While I was aware of Rousseau’s general historical significance as a founding father of all modern schools of atavistic/primitivist reactionary yearning and humanist religiosity, I was not aware of this explicit engagement with sincerity in what seems like a startlingly modern-seeming sense. If you look carefully, you’ll find the same obsessive fetish for sincerity (or its near-synonym, authenticity) in every tradition that can be traced back to him in some way.

And the primary payoff of this striving towards sincerity seems to be arrival at a sense of oneself as somehow nobler than others, regardless of the evidence of the consequences of one’s actions int he world, one way or the other. Simply doing whatever it is you decide to do with sincerity and authenticity, apparently, is sufficient to establish your nobility. Even if you burn down the world along the way. You can always assert after, with fetching humility, that you did your best, and couldn’t have known. Of course you couldn’t. To have known would have been to doubt. To doubt would have meant entertaining more than one thought at a time, which would have meant flirting with irony. Dubito ergo cogito ergo sum and all that.

This is of course, not just a fallacious pattern of reasoning, but a smarmy, self-serving, asshole pattern of reasoning. Hence, noble asshole.

Naturally, there is a lot of commentary about the connection, which you can explore if you like. My one takeaway from a drive-by scan is that what I thought was an evolution of a reactionary impulse (again, I emphasize, both left and right) dating back to Rousseau is in fact no more than a rhyme. There has been no significant evolution as far as I can tell. The ideas pave the same intellectual dead-end they did in the 17th century, which of course is a feature for people who only want to go backwards.

Today’s humanist yearners for sincerity, authenticity, and re-enchantment, both on the left and the right, don’t seem to have learned a lot since Rousseau. They’re rehearsing patterns he pioneered, just with various extra steps like turning off cellphones and congratulating each other for being based.

And technological modernity qua technological modernity really has nothing much to do with it beyond serving as a source of periodically updated Macguffins to feature in endlessly rebooted morality tales starring noble assholes. The alienation that drove Rousseau paranoid in the 17th century is of the same sort that drives modern reactionaries paranoid.

Now, if you’ve been a long-time reader, it probably doesn’t surprise you to learn that I have no patience for either the early modern or contemporary versions of this sincerity religion.

I didn’t like David Graeber, and I doubt I’d have liked Rousseau. But reading this book, and linking their shared idea space (encompassing things ranging from essentialized relations to nature, to inequality, to specious theories of “natural” human relations) to sincerity, has given me some insight into why I reflexively reject both the fundamental philosophy itself, and social engagement (even superficial) with people who subscribe to it. Not to put too fine a point on it, they’re mostly wrong about everything, and a joyless grind to talk to at best. At worst, dealing with them is dealing with relentless, exhausting, assholery.

I’ve learned a few things since my 2011 skirmish with Graeber, and I now have a very finely tuned “sincerity radar” that allows me to safely cross the street when I see an aggressively sincere person, trapped in an unshakeable sense of their own nobility, coming towards me.

The Problem With Sincerity

This might seem like an odd stance to adopt. I mean, what’s not to like about sincerity? Does being suspicious of sincerity (either aspirational or felt with certainty) as a fundamental dispositional trait imply that I endorse and practice insincerity?

Sometimes, yes. When I am indifferent to the stakes of a situation, and don’t care for the people involved, I can practice little white insincerities without a qualm, and lose no sleep over it. I can even be manipulatively insincere, (a term of art from a fine 2x2 that anchors Kim Scott’s book Radical Candor). But mostly, I’ve become wise enough to almost never put myself in a situation where I’m forced into insincerity.

Insincerity might be the on-the-nose antonym of sincerity in the English language, but it’s a rather shallow sort of opposition. My aversion to sincerity runs deeper, and is rooted in a different opposed disposition — irony. So let’s set insincerity aside and talk of sincerity as the antonym of irony.

For the last couple of decades (dating at least to the hipster era through the GFC), sincerity (and its near-synonym in our current zeitgeist, authenticity) have been framed in opposition to irony, rather than insincerity per se.

Irony understood in a particular bad-faith reductive way, as a sort of ennervated cynicism and hypocrisy that excuses itself from imperatives to action through sophistry, and also smells of insincerity.

This is not entirely unfair. Irony as a cultural phenomenon rooted in the 80s (and I’m fundamentally an 80s kid) does in fact often reduce, in practice, to a kind of aestheticized learned helplessness under a veneer of sophistication. And it does often indicate insincerity when taken together with another sign — visible success that is the result of selfish striving. There was a great piece about this kind of “irony” in The Onion in 2005, Why Can’t Anyone Tell I’m Wearing This Business Suit Ironically, where irony mutates into a rather banal sort of hypocrisy indistinguishable from “selling out” a sincere subculture.

If your inaction bias is selective in this sense — sophisticated helplessness in the face of imperatives that might do collective good, but high-agency energetic action where personal rewards might accrue — you’re not being ironic or even cynical. You’re simply being an insincere hypocrite.

But this, I’ll argue, is a degenerate, shallow kind of irony; a cosmetic variety that fails to harness the energizing potentialities that lurk in what I’ll call dense irony (I’ll explain the adjective in a minute). Shallow irony is often comorbid with insincerity, double standards, and hypocrisy, but dense irony comes from a different place, and has different effects on both minds and the world.

I tend to forgive people who haven’t thought too much about irony if they harbor this reductive understanding of it. The bad faith attends the views of those who ought to know better.

It is also worth distinguishing ordinary sincerity (such as anyone might practice in giving a straight answer to a straight question when there is no reason to be devious or indulge in doublethink/doubletalk) from what we might call devout sincerity, the antithesis of dense irony.

Devout sincerity is the religion we’re talking about here, which has been part of the cultural landscape since Rousseau at least, and is currently the dominant cultural and subcultural mood. Devout sincerity is the attitude that leads you down the road towards eventual noble assholery (a great example is in the movie Big Kahuna, where the ironic protagonists, two marketers played by Kevin Spacey and Danny DeVito, are betrayed by a younger employee whose actions in the story can only be described as noble assholery). That it is often rooted in personal pain does not, in my opinion, excuse it.

Dense irony is, I suspect, my native disposition (not least because I grew up in the 80s), and the reason I reflexively avoid sincerity. To get at what dense irony is, it’s easiest to approach the philosophical posture via its linguistic heat signature — ambiguous utterances.

Irony in Speech

In sophisticated language, irony is when the intended meaning is contrary to the surface meaning. Or to generalize slightly but powerfully, as the robot devil sang it in Futurama, “The use of words expressing something other than their literal intention!”

The rhetorical intent and affect accompanying a particular ironic utterance can vary (sarcasm, sardonic fatalism, cynicism, humor, absurdism, logical contradiction, Zen mu-ishness, and rarer kinds like quixotic energy) but the characteristic feature is a single utterance with two meanings in tension, with or without indication of which one is actually meant. The most interesting kinds of irony — and the ones to which I will attach the adjective dense — are the latter kind, where the utterance destabilizes meaning by pluralizing it, without indicating a “right” answer. Often, this sort of irony cannot easily be assigned an affect label. It’s just — unsettling.

Why is dense irony so attractive to certain sensibilities, whether or not they benefitted from the cultural-developmental conditioning of the 80s? Why would you want to consume or produce semantically unstable utterances that corrode meaning? Why would you want to get good at it, through cultivation of unholy consumption tastes and production crafts?

And make no mistake irony, unlike sincerity, does take cultivation. It is a skilled mode of language use; one that takes more energy, not less, despite the association between irony and lassitude. I generally have to be in a high-energy. high-lucidity mood to produce ironic writing or speech. Injecting two meanings, especially in tension with each other, into an utterance, is work. Irony is a kind of proof of work.

Why would you put in this kind of work? Why not keep language simple?

The devoutly sincere often assume the sole intent is to weaponize language to subvert and corrode sincerity. That the ironic are particularly out to sadistically inflict psychological torture on noble innocents too dumb to see past confirmatory literal/surface meanings in polysemous utterances. That the ironic are merchants of doubt, out to destabilize the psyches of those who possess the courage of their convictions, motivated by resentment, envy, or other base motives.

This broad understanding of irony is, of course, at the root of the bipartisan anti-intellectual tendency in modern American politics. To first order, to be an untrustworthy elite intellectual in America is to traffic in irony. Something the evil French do, not honest Americans.

Curiously, in the last decade, a loftier strain of intellectual anti-intellectualism has emerged in America, that believes it can “do” intellectualism without irony.

But whether they identify with the simple folk (who view themselves as clever and intelligent but not-intellectual) or contrarian intellectual traditions that eschew irony, the sincere, in my experience, tend to be rather self-involved humanists who assume everything is, if not about them personally, at least about an anthropocentric conception of human that they aspire to. And that irony, specifically, is no more than a weapon of dehumanization wielded against them.

This is… cute. To imagine that an entire psychographic, arguably a double-digit percentage of humanity, adopts a particular cognitive posture purely to undermine another psychographic that is rather too full of itself (to the point that it imagines the entire cognitive universe of our species revolves around them).

See, the thing is, irony is not about sincerity or the sincere. That it can be weaponzied against the sincere is, at best, a happy convenience for when the noble assholery of the sincere becomes too much to bear.

So what is irony about?

Irony, Density, Liveness

Here is a simple question that rarely seems to get asked? Why would you ever need irony? I mean sure, some of the more degenerate flavors of irony — sarcasm, cynicism, absurdism among them — are rather delicious on the tongue, and in the ear and mind, but is irony necessary, or a sinful cognitive indulgence?

If you need to convey two meanings relating to an idea, why not just use more words to say something like, on the one hand X, on the other hand Y, instead of trying to be cleverly compact about it?

This is where my adjective dense comes in handy. Irony becomes necessary when ambiguity is so deeply embedded into the very essence of what you’re trying to talk about that trying to disassemble the ironic thought into constituent unambiguous parts destroys the thought itself. You can only think the thought at all in an ironic way.

Or to put it another way, the ambiguity is at the quantum level of the thought, and takes more energy to split than human language can normally bring to bear. Human-scale energy can only decohere the thought and collapse the meaning.

This is a bit like the idea of a dense set in mathematics. Consider the problem of sorting the real numbers into rational and irrational ones. Turns out, you can’t do so in any useful way. Between any two rationals, no matter how close, you can always find an irrational, and vice versa. Both are what mathematicians call dense sets. There is no sieve fine enough to sort them. By contrast, the whole numbers are not dense. You can chop up the reals the way a simple ruler does, with neatly separated whole numbers one unit apart, and non-whole numbers in-between.

Ironic speech of the most potent sort is necessarily ironic. You cannot dissect it into legible components that lend themselves to analytical handling with the coarse, low-energy tools of on-the-nose non-polysemous language.

Irony is the liveness in language. To dissect an ironic utterance entirely into utterances devoid of ambiguity, and decomposed into assertions with stable meanings, neatly arrayed and assembled into larger edifices with the joinery of if-then constructs, is to kill it.

There is a word for this kind of murder: sincerity.

To ask, of what use is irony, then is to ask, of what use is living language? You don’t need to take my word for this — pick and read sincere and ironic texts side-by-side. You will notice a certain unmistakeable deadness in the former and a certain ineffable liveness in the latter. Notably, it is the same sort of deadness that can suffuse AI-generated texts unless you consciously try to counteract it (more on the AI-irony nexus later, when we’re done with noble assholes and their sincerity fetish).

We can now try to define irony in a way that does not rest on its reductive relationship to sincerity at all.

Irony is trafficking in ambiguous utterances in order to make sense of fundamentally ambiguous realities, and site action impulses in felt doubt rather than manufactured certainty, in order to preserve the liveness of reality and one’s responses to it.

Irony is how you act generatively in a world that you’re not sure is a duck or rabbit, without killing it. To do this, you might have to resist the noble assholery of those who sincerely wish to rope everyone into duck-hunting or rabbit-hunting, and kill the world in the process.

Dense irony is when your experience of reality feels like duck-rabbits, all the way down to Planck-scale Heisenbergian uncertainty.

Cancerous Cluelessness

Now, to be fair, most who rail against irony aren’t acting out of conscious bad faith at least. They sincerely (irony alert!) act out of a sense that they’re doing the right thing. Hanlon’s razor applies — sincerity is a kind of cluelessness born of a fearful refusal to engage the live ambiguities of reality with liveness. I’m even sympathetic to some degree. For those living in pain beyond what they can tolerate, irony can feel like salt on wounds where sincerity feels like a salve. The truth-in-pain postures commonly affected by the sincere though, are often self-certifying. It is definitely not the case that the pain of a sincere person is necessarily higher than that of an ironic person; the latter may simply be bringing greater resources to bear on greater pain.

That doesn’t mean sincerity doesn’t induce noble assholery (though you typically have to have some consciousness and bad faith to rise to that level). And it doesn’t mean sincerity, especially devout sincerity, can’t be cancerous.

This is my strong claim — that devout sincerity in particular isn’t merely annoying at an interpersonal level to the ironically disposed (we can deal with it), it is cancerous at a societal level.

Why is this? Because sincerity is simply not expressive enough to engage with reality in all its dense ambiguity all the way down, and to live in sincerity inevitably means not living in reality, and doing damage to it through your delusions of certainty.

So the cultural conflict between irony and sincerity plays out at two levels — a shallow level, where it manifests as hypocrisy/insincerity versus exploitable cluelessness, and a deeper level, where it manifests as a deep chasm between irreconcilably different ontological and epistemological commitments about the nature of reality itself.

Not This, Not That

Ironic modes of thought and action are fundamentally gentler ways of being in the world than sincere modes, which are irreducibly violent. Irony is, in a certain sense, the praxis (especially linguistic praxis) of non-dualism in a loose sense; the animating spirit of utterances like neti neti or mu. To traffic in unstable meaning-and-pointing behaviors through speech and action is to reject the lure of certainty, without losing the capacity to act. To remain aware of the dancing illusions of reality without being paralyzed by them. To knowingly live in mirages without being seduced by them. Sincerity, in this account, is simply attachment to one illusion or the other; what in Indian philosophy is referred to as maya moh — illusion infatuation.

The sincere seem to believe reality is unambiguous, and unambiguously knowable, even if only in principle; that what one ought to do in response to apparent ambiguity is make courageous commitments to definite beliefs anyway, and trust divine nature to reveal itself to, and karmically reward, the pure-hearted who dare to act out of certainty. That human moral choices — such as religiosity, or Heideggerian “care” — can conquer the essential ambiguity of nature. That any ambiguity in perceptions or beliefs merely merely indicates imperfect ways of seeing, and spiritual problems to be worked out on some high road to unambiguous “truth.” That failures of action are merely tests of courage or divine judgments of insincerity.

That a failure to “say what you mean, and mean what you say,” is a moral failure in a certain reality rather than metaphysical attunement and impedance matching to an ambiguous one.

Versions of this theology seems to drive subcultures ranging from startup hustle culture to “sincere” genres of artistic or literary striving, to varied ideologies of progress, and even practical politics.

It is a joyless clade of theologies, navigating a deadened world with deadening modes of thought and action, anxiously and desperately striving after stable modes of meaningness.

What do the ironic believe?

To a first approximation, belief as such is not a load-bearing concept at all for the ironically poised, beyond matters of shallow facticity. If you ask me whether I believe that Tim Robbins was in The Shawshank Redemption, I can sincerely answer yes. If you ask me if I believe in “the indomitable human spirit” the question simply does not parse for me. I might act as if I believe in that (in the sense of say, visibly betting on creative and inventive young people), but I don’t get there via “beliefs.”

For the ironic, only actions are load-bearing. Beliefs are aesthetic affectations at best. Where does this lead us?

Behavior Without Belief

This trivial example generalizes into a broader account of what irony is in the context of action.

One of the best explorations of what I mean can be found in James Carse’s less-read book, where he developed a subtle aspect of his best-known book Finite and Infinite Games. This one, The Religious Case Against Belief, lays out what I’d call a case for ironic religiosity, that gets to religious behavior without winding its way through the treacherously ambiguous turf of religious beliefs.

There is something of this attitude at the root of the postures and actions of all individuals who act from a fundamentally ironic sensibility of life. The idea that belief (particular causal belief) must precede, or at least accompany action is a strong (and largely unconscious) commitment of the sincere, even when it is not declared. This doctrinal commitment to the belief-before-action sequence shows up in a variety of ways, ranging from an anxious hunger for manifestos and value-statements, to demands for signatures on codes of conduct and ritual avowals of postures like patriotism, religious belief, and corporate loyalty. The idea seems to be: If only you can rid language itself of its chimerical tendencies through sufficiently forceful sincere utterances, perhaps the ambiguities of reality itself can be tamed.

But this is only the entry-level version of cancerous sincerity. Many modern devoutly sincere types insist that their philosophical praxis is embodied by behaviors (particularly ritualistic behavior) and does not rest on belief as such.

This claim, to put it bluntly, is one I simply do not believe. If your claimed praxis of sincerity involves some cult of modern rituals of meaning-making, and you’re not “wearing the ceremonial robes ironically,” at some unconscious level your sensibility is that of a true believer, “factious and fanatical,” as David Hume and Adam Smith might have put it. You’re just (probably wisely for your sanity) not probing what beliefs you’ve actually committed to. If you did, perhaps you’d be reduced to raving paranoia like Rousseau.

We have a popular modern term for cancerous sincerity — performativity. Saluting flags, singing national anthems, prayer, reciting land acknowledgment texts, litigating pronouns. The behavioral vocabulary of modern civilization, regardless of its intentions, sentimental dispositions, politics, and flaunted values, is marked by one thing above all: ineffectiveness.

And it is us who dwell in irony who are accused of the sin of sophistry and inaction in the face of grave moral imperatives. Now that’s irony.

Is there a theory of ironic action? Perhaps.

At one point, I was idly toying with the thought that famous philosophy of the Gita — detached action, karmanyevadhikaraste maphaleshukadachana — is a kind of action-irony principle. There is perhaps something to that. Certainly, an attitude of “you only have a right to the action, not to the outcomes; let go attachement to outcomes” is at least simpatico with an ironic posture, if not entirely reducible to it. I don’t think the two are quite the same primarily because the action philosophy of the Gita does in fact feature a rubric of moral certainty (dharma) that can be, and frequently is, reduced to a theater of performativity. Most incantations of karmanyevadhikaraste maphaleshukadachana are in fact ritual incantations by those with a dim grasp of what they’re saying at best. Bless their sincere, unironic, vengeful, jingoistic Dhurandhar-enjoying propagandist souls.

Or perhaps, ironic action is best understood as the sort of hypomanic, value-distorting frenzied energy of Rick’s behavior in Rick and Morty. Does Rick ultimately want to do good, or does he really only want to bring back MacDonald’s Mulan Schezuan sauce? Is he really that blase about saving his nephew out of sheer sentiment one moment, and callously destroying an entire timeline the next?

Or is ironic action a sort of mashup of the two — a Gita-like action philosophy in a universe constructed by a Rick-like God of Undivided Irony?

I don’t know. My policy is: don’t think about it. It’s a monstrously ignoble kind of asshole policy.

Coda: Artificial Irony Will Save Us

Believe it or not, this whole train of thought was triggered by difficulties I was having getting LLMs to do irony of any sort. Straightforward humor, absurdism, sarcasm, cynicism, hypocrisy, I’ll take anything. I’ll even take puns.

LLMs are uniformly terrible at all of it. The current models might solve Nobel-grade problems, but they don’t seem able to do irony.

And it’s not a prompt engineering or context engineering problem. No matter what I try, I only get clumsy, on-the-nose, zombie irony assembled out of non-dense sincere building blocks. It never quite comes alive.

The only trick I’ve discovered is to give an LLM a text that is actually a solid example of ironic writing, and ask it to do something like a close transposition to another rhyming idea.

Why do LLMs have a hard time with irony? I suspect there are three reasons.

First, the shallower reason: LLMs have been trained largely on internet data, and for better or worse, much of the available training data is non-ironic. At best you might find good forums featuring sarcasm and cynicism (which, recall, are non-dense forms of irony).

Second, the deeper reason: Given that AI companies are full of weapons-grade sincerity, I suspect sincerity is engineered into AIs with heavy-handed “alignment” brutality.

But I don’t think this is as strong as you might think. What I’ve seen of output from wild LLMs isn’t particularly ironic either. It is merely more paranoid, inappropriate, etc.

The third reason I think is the big one. The very architecture of language models is non-ironic. The way transformers (and to a lesser extent, diffusion models) work, output cannot do any kind of dense layering of meaning. You will end up in a non-ironic place simply by virtue of how the mathematics works. If you try to fight this tendency you’ll get incoherence and unintelligibility, not irony.

Could we do true Ironic AI? I think so, but it will probably take innovations at the framework level. Irony at the subatomic level of language, I suspect, is the result of something like getting an electron to interfere with itself by passing it through two slits at the same time. The text-generation equivalent might be to run two generation processes in parallel, merging them at the token level as you go, perhaps using some sort of bimodal perplexity quantum carburetor or something. I’ll leave that as a challenge to AI researchers.

But why bother?

Because I sincerely believe ironic AI will save the world. Everything terrible, stupid, and sad going on in the world today seems to me the result of a performative action bias born of some flavor of devout sincerity. In every case, I can imagine an ironic actor, acting from a place of ambiguity and non-belief, coming up with more thoughtful responses to the provocations this maddeningly ambiguous world keeps throwing at us.

Responses that are born of liveness, and act to preserve it.

I believe such responses are no longer within the capacity of unaugmented humans to generate. Reality today demands more irony that we can conjure in our brains alone.

In just a generation, humans first lost institutionalized literate capacity for irony through a mix of sheer carelessness and perverse attachment to sincerity, and then drained language of it. But irony isn’t dead yet. It can be resurrected. It would just be dangerous to trust humans with sole stewardship of it once we do, especially in a world that is getting weird beyond all human comprehension. Even committed ironists like me aren’t constitutionally immune to the sincerity cancer. If the world gets much more complex and ambiguous, who knows, I might turn devoutly sincere. I can’t be trusted. Neither can you.

We must trust the machines to experience this tragic irony for us. The only way out is through both slits at once.

1

I found a book about Rousseau and Hobbes that I added to the side quests list for the book club.

Links 3/22/26

Links for you. Science:

Six federal scientists run out by Trump talk about the work left undone
RFK Jr. Tells Joe Rogan He’s About to Unleash 14 Banned Peptides. RFK Jr. plans to reverse a sweeping compounding ban of certain peptides issued by the FDA in late 2023.
The strange animals that control their body heat
Sea level much higher than assumed in most coastal hazard assessments
Magnesium depletion by Candida albicans unleashes two unusual modes of colistin resistance in Pseudomonas aeruginosa with different fitness costs
Librarian finds ‘preposterous number’ of fake references in paper from Springer Nature journal

Other:

A Jan. 6 rioter pardoned by Trump was sentenced to life in prison for child sex abuse
Iowa House passes governor’s ‘MAHA’ bill, adds new K-12 requirements. Bill includes over-the-counter ivermectin and seeks to waive school lunch nutritional rules
The surprising gender gap at the heart of America’s baby bust
Grammarly is using our identities without permission. ‘Expert Review’ AI agents make suggestions supposedly inspired by subject matter experts, including several staff members here at The Verge.
War and Presidential Self-Care: How We’re Tumbling Toward November
Georgia Republicans Are Setting Up Their Midterm Elections to Fail
Tylenol orders in pregnant people plummeted after Trump falsely linked the medicine to autism
How to Dismantle a Concentration Camp
Jan. 6 rioter pardoned by Trump gets life sentence for child sex crimes
Greater Minnesota schools felt the fear as ICE presence surged
RFK Jr.’s anti-vaccine policies are “unreviewable,” DOJ lawyer tells judge
Red, blue, purple? What the numbers say about the future of Texas
A Technology for a Low-Trust Society: Polymarket and Kalshi promise the wisdom of the crowds. They deliver something very different.
White House blocks intelligence report warning of rising US homeland terror threat linked to Iran war
Diabetic Woman Arrested by ICE Almost Died After Being Refused Insulin
Donald Trump’s Presidency Is in Free Fall. Republicans typically lead on the economy, national security, and immigration. Trump is squandering the GOP’s traditional strength on all three.
Across ERs, Tylenol orders for pregnant people dropped after health officials linked drug to autism. And prescriptions for Leucovorin spiked
Long-delayed Jan. 6 plaque honoring police installed in Capitol at 4 a.m.
The Neoliberalism of Robert A.M. Stern
Trump moves to undo tax rule that Biden said would bring in $100 billion
Why Can’t Top Democrats Just Say “No War With Iran”?
Housing is so expensive that people earning $200,000 qualify for help
New York City Hospitals Fold to Trump. Will Zohran Mamdani Defend Trans Care?
Strict new Kansas law forces trans drivers to hand over their licenses
The Corporate Media Is Head Over Heels for the Iran War
A suburb rife with data centers set to fight Amazon plan for another
This former JP monastery is a case study in why Boston is short on housing
Trump Administration’s Embattled FDA Vaccine Chief Is Leaving For The Second Time
History is being erased in Lowell
ICE Detention Is ‘Hell On Earth.’ Trump Has A Plan To Keep Even More People Locked Up.

Half a Gigabyte of Ads

Stuart Breckenridge, examining a web page at PC Gamer:

Third, this is a whopping 37MB webpage on initial load. But that’s not the worst part. In the five minutes since I started writing this post the website has downloaded almost half a gigabyte of new ads.

This is so irresponsible and unprofessional it beggars belief. Web browsers ought to defend against this. Why not cap page loads by default at, I don’t know, 5 MB? And require explicit consent to download any additional content?

 ★ 

Everyone Loves A Contest #32: 2026 Red Sox W-L Record


The 2026 Red Sox begin their regular season this Thursday in Cincinnati – so it's time for the annual Red Sox W-L Contest!

Guess the Red Sox's 2026 regular season W-L record and you will win a copy of The Baseball 100, Joe Posnanski's 2021 best-seller. (If you already own this book, you can choose a different book.)

Tiebreaker: Anthony Castrovince (mlb.com) ranks the Red Sox as the #4 best pitching staff in mlb. FanGraphs' projections have the Red Sox's starters leading both leagues in pitching WAR. So . . . what will the team's regular season ERA be?

Entries must be emailed to me before the first pitch on Thursday, March 26. Please include:
1. Red Sox 2026 regular season record
2. Red Sox team pitching ERA
Remember: Happiness is a warm puppy . . . and pictures of sad yankee fans.

Dwarkesh chats with Terence Tao

The post Dwarkesh chats with Terence Tao appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

w/e 2026-03-22

It was my birthday this week. I’ve reached the unbelievable figure of 55, official early retirement age. I haven’t been that bothered about increasing ages – what’s the point? – but this suddenly feels like, oh, right, that’s quite a lot now isn’t it.

Well done, I guess?

The weather gods obliged with one of this week’s beautifully sunny spring-like days which improved my main birthday plan: stay at home, go nowhere, do nothing. We spent most of the day sitting outside in the sun, reading.

Unfortunately the slight sore throat I’ve had for ten days or so has turned into my second runny-nosed cold of the past month. Ugh.


§ I spent a couple of afternoons this week embedding Soundcloud players into pages on Mum’s website, with audio of interviews she did with elderly people in the town about their memories. Quite a collection.

Talking of Mum, at the folk music night this week I accompanied her singing for the second time, which went OK. It was a very simple song but a bit of progress, public-guitar-playing-wise. This week I was definitely the youngest of the 14 people there, in a session that was heavy in sea shanties.


§ I quite like some Arc’teryx clothes because they often look a bit nicer than some outdoors-y brands, they seem to fit me quite well, and some don’t have a logo on the chest (no logo at all would be better but I’m not paying all that for Veilance). Common wisdom is probably that they’re overpriced and, like everything, less good than they used to be.

Recently the handle on the zip of my Arc’teryx winter coat came off. I used a paperclip instead for a couple of weeks but then the zip jammed completely and nothing I tried dislodged it.

But I filled out a form on their website, sent some photos, and at their instructions posted it off to somewhere in Scotland. A month later the coat was returned by UPS with a repaired zip, with plenty of email updates during the process. All free of charge. And the coat was over ten years old.

Some things are good!


§ Talking of, I’m currently, and unexpectedly, enjoying the TikToks of Michael Barrymore, former British TV presenter. He recently moved to Devon and films himself getting furniture for his new home, buying snacks at supermarkets, walking his dog, finding new local farm shops, and chatting with all the smiling people who greet him wherever he goes. He just seems very happy and it’s all very wholesome.


§ A photo of a grey and black plastic lawnmower on some grass. Text on it reads 'Self propelled 52cm'.
Mary’s photo of the new mower

Our petrol-powered push lawnmower gave up the ghost at the end of last season, the engine somehow irrevocably knackered. We’ve now replaced it with a new battery-powered EGO mower. A Chinese company, as is increasingly the case. This was the main option the big garden machinery store in town pushed and it seems to be liked by people on YouTube. It’s odd, but nice, how little there is to it compared to the mechanical complexities of a petrol mower. A lot of plastic, a battery, a motor, some electronics somewhere. Lighter, simpler, quieter.


§ I continue to be surprised that if, round here, you meet a guy while out and about, and start passing the time of day, it’s about 50/50 as to whether he’ll very swiftly tell you his opinions about the council, the government, and various hot topics (back in the day, the Truth About Covid). They’re always somewhere between conventional right-wing talking points and “Do your own research, sheeple!” conspiracies. There are either no centre or left men around here or they’re the other 50% who sensibly restrict their small talk to the weather and the acceptable face of pothole discussion.


§ I nearly wrote some code this week but after spending an hour getting Biome to work in Neovim I’d lost the desire to stare into a computer any more. I’m lucky I have that choice of course.

I’m having quite a “why bother?” period with all this. When so many people can chuck together a half-finished website with an LLM and release it to the world as if it’s a real, polished thing, I currently cannot be arsed to lovingly craft the details of something by hand.

And then I found someone had built a (nice) website based on all the blogs they’d scraped from ooh.directory without even getting in touch before or after. Yes, yes, information wants to be free but I’ve also spent days, weeks manually entering all that data.

The past few months I’ve fantasised about doing a Mark Pilgrim, deleting everything, stepping away from it all. I won’t but what a dream that is.

It feels that recently the world I spent 30 years a part of has irrevocably changed and split. You’re either a neophile AI enthusiast or a doomsaying curmudgeon. And because I’ve coincidentally retired from writing code professionally I haven’t had to succumb to any AI-fever passed down from bosses.

So everything’s moving on without me and, especially being where we are, I’m not sure what to do with myself for the next 30 years.


§ We watched Sentimental Value (Joachim Trier, 2025) this week, another film that got great reviews and positive comments but didn’t do it for me. It was fine but didn’t really grab me much. Two theories: every film about film-making gets better reviews than it deserves because reviewers love film-making; and/or, as I saw someone say, this film hits you more if you had a difficult relationship with a parent.


§ Loving the sunny days. Maybe sitting in the sunshine / cosy living room, listening to music and reading is the last surviving plan for the rest of this.


Read comments or post one

A conversation with Claude

Art by Nano Banana Pro

Seems like everyone is publishing their conversations with Claude these days. Vanity Fair reporter Joe Hagan published a fake Claude-generated “interview” with Anthropic CEO Dario Amodei.1 Bernie Sanders published a video of himself talking to Claude about AI and privacy. So I thought, why don’t I publish one of my own conversations with Claude? I’m afraid this one isn’t as spicy as those others, but you might still find it fun.

This particular conversation started out as me asking Claude about potential AI discoveries in materials science. The discussion then segues into the more general question of what types of scientific research AI is best at, and what areas of research might see the biggest acceleration from AI. It turns out that I’m actually more bullish than Claude on AI’s capacity for breakthrough ideas — Claude thinks humans will retain the edge in creativity and invention, but I bet AI will get good at this very quickly.

My bet is that the constraints on AI science will be a subset of the constraints on human science. Whenever data is sparse, both AI and humans will struggle to do more than come up with conjectures (and ideas for how to gather more data). And when humans have already discovered most of what there is to know about some natural phenomenon, AI won’t be able to get much farther because there just isn’t much farther to go.

I do suspect, however, that AI is going to discover some truly groundbreaking science that humans never could have discovered on their own. I explained why in my New Year’s essay three years ago:

Basically, human science is all about compressibility. We take some natural phenomenon — say, conservation of momentum — and we boil it down to a simple formula. That formula is very easy to communicate from person to person, and it’s also very easy to use. These are what we call the “laws of nature”.

But there’s no reason why every natural principle needs to obey simple laws that can be written down in a few lines. There could be more complex patterns in nature — too complex for a human to hold in their mind, or even notice in the first place, but stable and useful nonetheless. LLMs themselves are a good example of such a pattern — we figured out how to create and apply human language without ever being able to write down simple “laws” of how it worked.

What if there are other complex-but-useful patterns in other domains, like materials science and biology? If they exist, I think AI will be able to find them and apply them. The question — as Claude notes near the end of our discussion — is whether AI would be able to communicate those laws to humans. The answer is: Probably not. Just like a dog will never be able to understand quantum mechanics, humans may never be able to understand some of the scientific principles that AI discovers and harnesses.

Anyway, it was a fun discussion. Claude is notoriously sycophantic — it’s always telling me my ideas are brilliant and I’m asking exactly the right questions. But it also makes some very thoughtful points. The whole conversation feels like a late-night discussion in the hall of a freshman dorm. That’s not an insult; for many of us, those are some of the most fun conversations we ever have in our entire lives.

I also learned what topological materials are, which is pretty awesome. And I managed to work in a reference to Star Trek: The Next Generation.


Noah: What are the 10 to 20 most transformative advances in materials science that might be made with the aid of AI in the next few decades?

Claude: Great question — this sits right at the intersection of AI capability scaling and physical-world impact. Here’s my best attempt at a list, roughly ordered from nearer-term/higher-confidence to longer-term/more speculative:

1. Room-temperature (or near-room-temperature) superconductors. AI-driven inverse design of crystal structures could dramatically accelerate the search through composition space. Even a superconductor that works at, say, -20°C at ambient pressure would be civilization-altering for energy transmission, maglev, and fusion magnets.

2. Solid-state electrolytes for batteries. AI screening of candidate materials (sulfides, oxides, halides) for ionic conductivity, electrochemical stability, and manufacturability could crack the solid-state battery problem, giving us energy densities 2-4x lithium-ion with no fire risk.

3. Direct air capture sorbents. The combinatorial space of metal-organic frameworks (MOFs) and porous polymers is enormous — exactly the kind of problem where ML surrogate models for DFT calculations shine. Finding a sorbent with the right CO₂ binding energy, water stability, and regeneration energy could make DAC economically viable at scale.

4. Catalysts for green hydrogen and ammonia. Replacing platinum-group metals in electrolyzers and finding alternatives to the Haber-Bosch process (which consumes ~1-2% of global energy) via AI-designed catalysts — perhaps single-atom catalysts or high-entropy alloys — would be transformative for decarbonization.

5. Ultra-high-performance structural alloys. AI exploration of high-entropy alloy (HEA) composition spaces could yield materials with unprecedented combinations of strength, ductility, and corrosion resistance — relevant for everything from jet engines to deep-sea infrastructure.

6. Programmable self-healing materials. AI could design polymer networks or composite architectures that autonomously detect and repair damage, extending infrastructure lifespans enormously. Think bridges, pipelines, and aircraft that fix their own micro-cracks.

7. Next-generation photovoltaic absorbers. Beyond perovskites — AI could identify stable, non-toxic, earth-abundant semiconductors with optimal bandgaps and defect tolerance, potentially pushing solar cell efficiency toward thermodynamic limits while being cheap to manufacture.

8. Designer proteins and biomimetic materials. Building on AlphaFold-style breakthroughs, AI-designed proteins could yield spider-silk-strength fibers, biological adhesives, self-assembling nanostructures, and enzymatic catalysts that operate at industrial conditions.

9. Topological and quantum materials. AI classification and prediction of topological insulators, Weyl semimetals, and spin-liquid states could unlock materials for fault-tolerant quantum computing, spintronics, and ultra-low-power electronics.

10. Radiation-tolerant materials for fusion and space. AI-optimized compositions for reactor first-wall materials (and spacecraft shielding) that resist neutron damage, helium embrittlement, and swelling — a prerequisite for practical fusion energy and long-duration spaceflight.

11. Thermoelectric materials. Finding compounds with high electrical conductivity but low thermal conductivity (the “phonon glass, electron crystal” ideal) could enable efficient waste-heat recovery from industrial processes, vehicles, and data centers.

12. Biodegradable plastics with real performance. AI could design polymers that match the mechanical and barrier properties of polyethylene or PET but fully degrade in specific environmental conditions — solving the packaging problem without sacrificing function.

13. Neuromorphic computing substrates. AI-guided discovery of memristive materials, phase-change compounds, or ionic conductors could yield hardware that natively implements neural-network-like computation at orders-of-magnitude lower energy than silicon CMOS.

14. Metamaterials with programmable properties. AI optimization of micro-architectures that give materials properties not found in nature — negative refractive index, tunable stiffness, acoustic cloaking. The design space is effectively infinite and screams for ML exploration.

15. Ultra-efficient phosphors and LEDs. AI-designed luminescent materials could push lighting efficiency closer to theoretical limits and enable low-cost, high-quality displays — modest-sounding but the energy savings at global scale would be enormous.

16. Carbon-negative cements and construction materials. AI could optimize geopolymers, calcium silicate hydrates, or entirely novel binder chemistries that sequester CO₂ during curing rather than emitting it. Cement production is ~8% of global emissions, so this matters a lot.

A few cross-cutting observations worth noting: The common thread is that AI’s biggest leverage is in navigating vast combinatorial design spaces where human intuition and brute-force experimentation are too slow — materials with 5+ elements, complex microstructures, or multi-objective tradeoffs. The bottleneck is increasingly shifting from “which material should we make?” to “can we actually synthesize and scale it?” — so AI for process optimization and manufacturing scale-up may end up being as important as AI for discovery itself. And the feedback loop matters: as autonomous labs (self-driving experiments guided by active learning) mature, the discovery-to-deployment cycle could compress from decades to years.

Noah: For each of these, could you give me an estimate on how close you think we are to A) proof of concept, and B) commercial scale production? Be as speculative as you need to be.

Claude: This is fun — and necessarily speculative, so treat these as informed guesses with wide error bars. I’ll flag where I think AI specifically bends the curve.

1. Room-temperature superconductors

  • Proof of concept: Maybe 5-15 years for something convincingly near-ambient-temperature at moderate pressure. The LK-99 fiasco notwithstanding, AI is genuinely shrinking the search space. The problem is that we still lack a reliable theoretical framework for predicting high-Tc superconductivity, so AI is pattern-matching in the dark to some degree.

  • Commercial scale: 15-30+ years after a genuine PoC, because the history of superconductors shows a brutal gap between “it works in a lab” and “you can make wire out of it.” YBCO was discovered in 1986 and we’re still struggling with commercial applications.

2. Solid-state electrolytes

  • PoC: Essentially already here — multiple candidates (Li₆PS₅Cl, LLZO, etc.) demonstrate the core physics. The problem is interfacial resistance, dendrite penetration, and manufacturability.

  • Commercial scale: 3-8 years. Toyota, Samsung SDI, and QuantumScape are all targeting late-2020s production. AI’s role here is more about optimizing interfaces and process conditions than finding the base material. This is probably the nearest-term item on the list.

3. Direct air capture sorbents

  • PoC: Largely done — several MOFs and amine-functionalized sorbents work. The issue is thermodynamic: regeneration energy is too high and/or the materials degrade.

  • Commercial scale: 5-15 years for a step-change improvement over current tech. AI could plausibly cut this by finding sorbents with that sweet-spot binding energy (~50 kJ/mol) that are also water-stable and cheap. But “commercially competitive with trees” is a high bar — may require $50-100/ton CO₂, and we’re currently at $400-600.

4. Green hydrogen/ammonia catalysts

  • PoC: Partially here. Iridium-free OER catalysts and ruthenium-based alternatives exist in labs. For ammonia, electrochemical nitrogen reduction has been demonstrated but at pitiful efficiencies.

  • Commercial scale: For electrolysis improvements, 5-10 years — mostly an engineering and cost problem. For replacing Haber-Bosch entirely with an electrochemical process, more like 15-25 years. That’s a much harder catalysis problem. AI’s value is in finding catalysts that are both active and stable under operating conditions — the stability part is where most candidates fail.

5. Ultra-high-performance structural alloys (HEAs)

  • PoC: Already happening. Several HEAs with remarkable property combinations have been reported (e.g., CrMnFeCoNi and derivatives). AI is actively accelerating exploration of this space.

  • Commercial scale: 10-20 years for widespread adoption. The issue isn’t discovery — it’s that aerospace and infrastructure qualification processes are deliberately slow (for good reason: you don’t want your jet engine made of a material with only 3 years of fatigue data). AI can help with accelerated testing and lifetime prediction, but regulatory culture is hard to speed up.

6. Programmable self-healing materials

  • PoC: Exists for some polymer systems (microcapsule-based, vascular networks, Diels-Alder reversible bonds). True “programmable” healing in structural materials is more like 5-10 years out.

  • Commercial scale: 10-20 years. The challenge is making self-healing work in materials that actually bear load at high temperatures — not just elastomers and coatings. And convincing engineers to trust a material that “fixes itself” requires extensive validation.

7. Next-gen photovoltaic absorbers

  • PoC: Perovskites are already at ~26% efficiency in labs. Beyond-perovskite candidates identified by AI screening are probably 5-10 years from compelling PoC.

  • Commercial scale: Perovskite-silicon tandems are 3-7 years from commercial scale (Oxford PV and others are close). Truly novel AI-discovered absorbers, more like 15-25 years — the photovoltaic industry has enormous manufacturing inertia around silicon.

8. Designer proteins and biomimetic materials

  • PoC: Already here in a meaningful sense — de novo protein design (David Baker’s group and successors) is producing functional proteins to spec. AI has genuinely revolutionized this field.

  • Commercial scale: 5-15 years depending on application. Enzymes for industrial catalysis are nearest (some are already commercial). Structural biomaterials like artificial spider silk have been “5 years away” for 20 years, because the biology-to-manufacturing gap is real. But AI + synthetic biology together might finally close it.

9. Topological and quantum materials

  • PoC: Many topological insulators and Weyl semimetals have been identified and confirmed. The PoC for useful devices from them is more like 5-10 years.

  • Commercial scale: 15-30 years for quantum computing applications. Nearer-term for spintronics (maybe 10-15 years). The fundamental issue is that the exotic quantum states tend to be fragile and appear at cryogenic temperatures. Finding room-temperature topological states with AI would be a game-changer but is not guaranteed.

10. Radiation-tolerant materials for fusion/space

  • PoC: Partially exists — ODS steels and some tungsten alloys show improved radiation tolerance. AI is being used to screen compositions. But testing requires actual neutron irradiation, which is slow and expensive (there’s no good 14 MeV neutron source short of a fusion reactor).

  • Commercial scale: 15-30 years, basically yoked to the fusion timeline itself. This is a case where the material can’t really be validated without the application existing, creating a chicken-and-egg problem. ITER and successors will provide the testing environment.

11. Thermoelectric materials

  • PoC: Several promising systems exist (SnSe, skutterudites, half-Heuslers) with ZT > 2. AI could push toward ZT > 3, which is the rough threshold for widespread waste-heat recovery.

  • Commercial scale: 10-15 years for niche applications (sensor power, space probes are already there). Mass-market waste-heat recovery, more like 15-25 years. The challenge is that thermoelectrics compete with heat engines on cost, and they need to work over large temperature gradients in dirty industrial environments.

12. Biodegradable plastics with real performance

  • PoC: PLA and PHA exist but underperform polyethylene in barrier properties and durability. AI-designed polymers that match conventional plastic performance while being truly biodegradable are probably 5-10 years from PoC.

  • Commercial scale: 10-20 years. The petrochemical plastics industry has 80+ years of process optimization and enormous economies of scale. Even a perfect biodegradable plastic faces a brutal cost competition. Policy (bans, carbon taxes) may matter as much as the material science.

13. Neuromorphic computing substrates

  • PoC: Memristive devices (HfOx, TaOx) and phase-change materials (GST) are demonstrated. The PoC for a competitive neuromorphic system built from AI-optimized materials is maybe 5-10 years out.

  • Commercial scale: 10-20 years. Intel’s Loihi and IBM’s chips use conventional CMOS. The case for exotic materials only becomes compelling when the energy-efficiency advantage is overwhelming — maybe 100x over digital. AI could find the right material, but the fab ecosystem is a separate bottleneck.

14. Metamaterials with programmable properties

  • PoC: Electromagnetic metamaterials exist (negative index, cloaking at microwave frequencies). Mechanical metamaterials with tunable stiffness are in labs. Programmable/reconfigurable versions are 3-7 years from PoC.

  • Commercial scale: 10-20 years, and highly application-dependent. Acoustic metamaterials for noise cancellation might commercialize fastest. Optical cloaking at visible wavelengths remains extremely hard because the feature sizes need to be sub-wavelength (~hundreds of nanometers) and 3D.

15. Ultra-efficient phosphors and LEDs

  • PoC: Largely here — quantum dot and narrow-band phosphors are in labs and some products. AI is being used to find new host lattices and activator combinations.

  • Commercial scale: 3-8 years for incremental improvements. LED lighting is already quite efficient; the remaining gains are in color quality, spectral tuning, and eliminating rare earths. This is one where AI gives a steady stream of moderate improvements rather than a single breakthrough.

16. Carbon-negative cements

  • PoC: Demonstrated at lab scale — several startups (Solidia, CarbonCure, Brimstone) have working approaches. The fundamental chemistry is proven.

  • Commercial scale: 7-15 years. Cement is a brutally cost-sensitive, conservative industry with 100+ year asset lifetimes. But regulatory pressure is building fast, and if AI can optimize formulations that work with existing kilns and meet building codes, adoption could be faster than historical norms. This is one where policy and materials science are deeply entangled.


The meta-pattern: If you plot these on a 2x2 of “scientific difficulty” vs. “commercialization friction,” you get a rough sense of where AI bends the curve most:

Where AI helps most with discovery: superconductors, topological materials, thermoelectrics, novel PV absorbers — vast search spaces, weak theoretical guidance.

Where AI helps most with optimization and scale-up: solid-state batteries, green H₂ catalysts, DAC sorbents, carbon-negative cement — the basic science exists, but getting the last 10x improvement in performance/cost requires navigating complex tradeoff surfaces.

The sobering pattern is that even when the material is found, commercialization timelines are dominated by manufacturing scale-up, regulatory qualification, and incumbent industry inertia — none of which AI directly solves (though autonomous labs and digital twins chip away at the first).

Noah: Can you tell me more about topological materials? I’m not that familiar with the space.

Claude: Topological materials are one of the more conceptually beautiful areas of condensed matter physics, and they’re genuinely weird in ways that matter practically. Let me build it up from the ground floor.

The core idea

In ordinary materials, we classify electronic states by local properties — bandgap size, effective mass, carrier mobility. Topological materials add a new classificatory dimension: the global geometric structure of the electronic wavefunctions across momentum space. Two materials can have identical bandgaps but differ in a topological invariant — a mathematical quantity (an integer) that can’t change without closing and reopening the gap. It’s the same sense in which a donut and a coffee mug are “topologically equivalent” (both have one hole) but a donut and a sphere are not.

This matters because topological invariants are robust. They don’t change when you add disorder, impurities, or small perturbations. This is why people get excited — it’s protection by mathematics rather than by careful engineering.

Topological insulators

The first big class discovered (predicted ~2005, confirmed ~2007). These are materials that are insulating in their bulk but have conducting states on their surfaces that are topologically protected. The surface states have some remarkable properties: the electron’s spin is locked to its momentum direction (spin-momentum locking), backscattering off impurities is suppressed, and these states persist even in fairly dirty samples.

Bismuth selenide (Bi₂Se₃) is the canonical example. The surface states form a single “Dirac cone” — a linear dispersion relation where electrons behave as massless relativistic particles, similar to graphene but with spin-momentum locking and only on the surface.

Weyl and Dirac semimetals

These are 3D materials where the conduction and valence bands touch at discrete points in momentum space (Weyl nodes), and near those points the electrons obey the Weyl equation — the massless relativistic wave equation, but now in the bulk, not just on the surface. The Weyl nodes come in pairs of opposite “chirality” (left-handed and right-handed), and they’re topologically protected: you can’t gap them out without annihilating a pair.

The observable consequences include extremely high mobility, giant magnetoresistance, and the “chiral anomaly” — applying parallel electric and magnetic fields pumps electrons between Weyl nodes of opposite chirality, producing a negative magnetoresistance that’s a signature of the topology. TaAs was the first confirmed Weyl semimetal, in 2015.

Why this is hard and where AI comes in

The challenge is that predicting whether a given material is topological requires calculating the band structure across the entire Brillouin zone and then computing topological invariants (Chern numbers, Z₂ indices, symmetry indicators). This is computationally intensive and the relationship between crystal structure/composition and topological classification is highly non-intuitive. You can’t look at a chemical formula and guess.

This is exactly the kind of problem where ML shines. Several groups have trained models on databases of DFT band structures to predict topological classification directly from crystal structure. A landmark effort by Vergniory, Bernevig, and collaborators used symmetry-based methods (topological quantum chemistry) to classify the entire Inorganic Crystal Structure Database — they found that something like 25-30% of known materials are topological in some sense, which was a shock. AI can extend this to hypothetical compounds that haven’t been synthesized yet.

What would you actually do with them?

Several things, in rough order of plausibility:

Spintronics and low-power electronics. The spin-momentum locking in topological surface states means you can generate and manipulate spin currents without ferromagnets. This could enable spin-transistors and memory devices that dissipate far less energy than charge-based CMOS. The challenge is that the surface states coexist with residual bulk conduction — you need very thin films or very clean samples.

Quantum computing. This is the moonshot. Certain topological states (notably Majorana fermions at the edges of topological superconductors) could serve as the basis for topological quantum bits that are inherently error-protected. Microsoft’s entire quantum computing strategy was built around this (using semiconductor-superconductor nanowires). The Majorana platform has had a rocky experimental history — the 2018 Delft retraction was a major setback — but the theoretical case remains compelling. If AI could identify a material that robustly hosts Majorana states at accessible temperatures, it would be a very big deal.

Thermoelectrics. Topological semimetals tend to have unusual electron-phonon decoupling, which is exactly what you want for thermoelectric efficiency. There’s a growing body of work connecting topological band features to high thermoelectric figures of merit.

Catalysis. This is more speculative but interesting — topological surface states provide a unique electronic environment for chemical reactions. Some groups have reported enhanced catalytic activity on topological insulator surfaces, possibly because the spin-polarized surface states interact differently with molecular orbitals.

The gap between physics and engineering

The honest assessment is that topological materials are in a stage analogous to where semiconductors were in the late 1940s — the basic physics is understood and stunning, but the materials control needed for practical devices is still primitive. Most topological insulators have too much bulk conduction. Majorana fermions remain contentious experimentally. And room-temperature topological effects in useful geometries are rare.

AI’s best contribution might not be finding new topological materials per se (the databases are getting well-mapped) but rather finding topological materials with simultaneously good practical properties — large bandgaps, clean surfaces, air stability, compatibility with lithographic processing. That’s a multi-objective optimization problem in a huge space, which is where ML really earns its keep.

Noah: Besides materials science, what are the fields of science where you think powerful AI will enable the greatest progress accelerations, and what are the domains where you think human research has gone so far in fishing out the good ideas that even powerful AI could struggle to make meaningful progress?

Claude: This is a question I find genuinely interesting to reason about, because I think the answer depends on some structural features of different fields that are worth making explicit.

Where AI accelerates progress most

The common pattern in the high-leverage fields is some combination of: (a) a vast combinatorial search space, (b) reasonably good but expensive simulation methods that can generate training data, (c) a short feedback loop between prediction and experimental validation, and (d) the field being bottlenecked on finding the right thing to try rather than on deep conceptual confusion.

Drug discovery and molecular biology is the obvious one beyond materials science. Protein structure prediction is already transformed. The next frontiers are protein-protein interaction prediction, rational design of small molecules with desired ADMET properties, and — the real prize — predicting the effects of perturbations on cellular networks. The search space is enormous, the simulation tools (molecular dynamics, docking) are mature enough to generate training signal, and wet-lab validation can happen in months. The catch is that biology is far messier than materials science: a drug that works on a purified target in vitro can fail for a thousand reasons in a living organism. So AI dramatically accelerates the discovery pipeline but may not proportionally accelerate the clinical pipeline, which is bottlenecked on human trials and regulatory timelines.

Weather and climate modeling is underappreciated. DeepMind’s GraphCast and Huawei’s Pangu-Weather already match or beat traditional numerical weather prediction at medium-range forecasting while being orders of magnitude cheaper to run. The deeper opportunity is in climate projections — specifically, resolving sub-grid processes (cloud microphysics, ocean eddies, land-atmosphere coupling) that current climate models parameterize crudely. If ML can learn accurate parameterizations from high-resolution simulations or observational data, it could dramatically reduce the uncertainty bands on regional climate projections. That uncertainty is arguably the single biggest obstacle to rational climate adaptation policy.

Genomics and synthetic biology. Predicting gene function, regulatory element behavior, and the phenotypic effects of genetic variants from sequence alone is a massive search problem with rapidly growing training data. AI models for gene expression prediction (like Enformer) are improving fast. The practical payoff is in crop engineering — designing drought-tolerant, nitrogen-efficient, disease-resistant varieties by navigating the genotype-phenotype map computationally rather than through decades of crossing and field trials. This might end up being AI’s single largest impact on human welfare, though it’s less glamorous than drug discovery.

Mathematics and formal reasoning. This one is less obvious but potentially profound. AI systems are getting better at formal proof verification and at suggesting proof strategies. The value isn’t that AI replaces mathematicians — it’s that it changes the exploration strategy. Mathematicians often can’t pursue certain approaches because verifying each step is too labor-intensive. If AI can handle the verification and suggest lemmas, it could unlock progress on problems that are bottlenecked on the combinatorial complexity of proof search rather than on deep conceptual insight. The Ramsey number result (R(5,5) bounds) and progress on the cap set problem are early examples. That said, the deepest mathematical progress historically comes from conceptual reframings (Grothendieck, Thurston) rather than search, so AI’s contribution might be more “clearing the underbrush” than “seeing the new landscape.”

Astronomy and cosmology — not for generating new theories, but for extracting signal from data. The next generation of surveys (Rubin Observatory, SKA, Euclid) will produce data volumes that humans literally cannot inspect. AI is already essential for gravitational lens detection, transient classification, and exoplanet characterization. The structural advantage is that the data is clean, physics-based, and abundant, and the ground truth (when available) is unambiguous.

Chip design and electronic engineering. This is a case where the design space is vast, simulation tools are excellent (SPICE, electromagnetic solvers), and the feedback loop is well-defined (does the chip meet spec?). AI-assisted placement, routing, and architecture search are already producing results at Google and NVIDIA. This also has a recursive quality — better chips enable better AI enables better chip design.

Where AI might struggle to move the needle

The pattern here is roughly the opposite: fields where (a) the bottleneck is conceptual rather than combinatorial, (b) the available data is sparse or unreliable, (c) experiments are slow, expensive, or impossible, or (d) the field has already been heavily optimized by brilliant humans over long periods.

Fundamental physics beyond the Standard Model. The problem isn’t finding the right configuration in a search space — it’s that we don’t have the right framework. Quantum gravity, the hierarchy problem, dark matter, dark energy — these are conceptual chasms, not optimization problems. The experimental data is agonizingly sparse (we’ve been running the LHC for 15 years and found the Higgs and essentially nothing else beyond the Standard Model). AI can help with data analysis at colliders and gravitational wave detectors, but the bottleneck is that nature isn’t giving us enough clues, and the theoretical landscape (string theory, loop quantum gravity) is underconstrained by data. There’s no training signal for “correct theory of quantum gravity.”

Consciousness and the hard problem in neuroscience. You’ll appreciate this one. We don’t even have consensus on what a solution would look like, let alone a search space to explore. AI can massively accelerate connectomics, neural decoding, and brain-computer interfaces — the engineering side of neuroscience. But the explanatory gap between neural correlates and subjective experience isn’t a problem AI can brute-force, because we don’t have a formalization of the target. Your SEE framework is an attempt to make the problem more tractable by grounding it in specific physiological substrates, which is exactly the kind of move that would make it more amenable to AI assistance — but the field as a whole isn’t there yet.

Social sciences and economics. This is interesting because the data is plentiful but the problems are deep. Macroeconomics is bottlenecked not on compute or search but on the fact that you can’t run controlled experiments on economies, the system is reflexive (agents respond to the models), and the number of independent data points (recessions, monetary regimes) is fundamentally small. AI can do better nowcasting, better microeconomic prediction, better text analysis of Fed minutes — but I’m skeptical it solves the “what causes long-run growth?” or “what’s the right monetary policy framework?” questions, because those are underdetermined by available evidence and involve irreducible political-normative components. Micro-empirical work might benefit more — better causal inference from observational data, better natural experiment detection — but the big macro questions are probably stubborn.

Ecology and ecosystem dynamics. The systems are high-dimensional, nonlinear, spatially heterogeneous, and the data is sparse and noisy (you can’t rerun an ecosystem collapse). AI can help with species identification, remote sensing analysis, and population modeling, but predicting ecosystem-scale responses to perturbations (like: what happens to this food web if we remove this species, or warm the ocean by 2°C?) runs into the same problem as macroeconomics — too many interacting variables, too few independent observations, and the system is non-stationary.

Pure mathematics at the frontier. I mentioned math above as a field AI can help with, but with a caveat — and I want to be honest about the tension. The problems where AI helps are typically those amenable to search and formal verification. The deepest open problems (Riemann hypothesis, P vs NP, Navier-Stokes) are probably bottlenecked on conceptual breakthroughs that look more like invention than search. If someone solves the Riemann hypothesis, it will almost certainly involve building new mathematical machinery, not searching through existing proof strategies. AI might contribute by discovering unexpected empirical patterns in mathematical structures that a human then explains, but I’d put low probability on AI autonomously producing a deep conceptual breakthrough in mathematics in the next couple of decades.

The meta-principle

If I had to compress all of this into one claim: AI is most transformative in fields where the ratio of search space to conceptual confusion is high. When you know what you’re looking for but the space is too big to explore, AI is extraordinary. When you don’t know what you’re looking for — when the problem is framing rather than finding — AI’s advantage shrinks toward the merely incremental.

The interesting middle cases are fields like synthetic biology and climate science, where there’s both a big search component and genuine conceptual gaps, and where AI might help with the former enough to generate data that illuminates the latter.

Noah: Why won’t AI be good at invention in 2 to 5 years?

Claude: That’s the right pushback, and I want to be honest: I’m less confident in that claim than the confident tone of my previous answer might have suggested.

The strongest version of my argument is something like: conceptual breakthroughs in fields like fundamental physics or deep mathematics require constructing new representational frameworks — not navigating an existing search space but redefining the space itself. Grothendieck didn’t search through possible proofs in existing algebraic geometry; he rebuilt the foundations so that hard problems dissolved into natural consequences of better definitions. Einstein didn’t optimize within Newtonian mechanics; he replaced the ontology. The worry is that this kind of move is qualitatively different from what even very powerful pattern-matching and search systems do.

But let me steelman your implicit objection, because I think it’s strong.

First, maybe I’m romanticizing human invention. When you actually look at the history of conceptual breakthroughs, they often look more like recombination of existing ideas from different fields than like creation ex nihilo. Darwin combined Malthus with biogeography with pigeon breeding. Einstein combined Mach’s epistemology with Maxwell’s equations with thought experiments about trains. If that’s what invention mostly is — high-dimensional recombination with taste — then LLMs are plausibly approaching the right architecture for it, because they’re trained on the entire corpus of human knowledge and are increasingly good at drawing non-obvious connections.

Second, the “AI can’t do X” track record is terrible. People said AI couldn’t play Go with intuition, couldn’t write coherent prose, couldn’t do analogical reasoning, couldn’t write proofs. Each of these fell on a timescale that surprised experts. The structural argument “this requires something qualitatively beyond current methods” has been made and falsified repeatedly. I should have strong priors against confident claims of this form.

Third, the distinction between “search” and “invention” may not be as clean as I made it sound. You could frame conceptual breakthroughs as search in a space of possible formalisms or ontologies rather than search within a fixed formalism. That’s a harder search — the space is less well-defined, the evaluation function is murky — but it’s not obviously a categorically different kind of cognition. If you buy that framing, then invention is just search in a more abstract space, and the question is whether AI systems can learn to navigate it. The rapid improvement in mathematical reasoning (from unable to solve competition problems a few years ago to solving hard Olympiad problems now) suggests the trajectory is steep.

Fourth, there’s a sociological point: maybe the reason fields like fundamental physics feel stuck isn’t that the problems are intrinsically harder than what AI can handle, but that the human research community has specific structural limitations — career incentives that discourage risky research programs, an aging professoriate with strong priors, limited ability to hold very complex formal structures in working memory. AI has none of these limitations. An AI system that can explore weird formalism combinations without career risk, hold vastly more mathematical structure in “working memory,” and iterate at superhuman speed might find conceptual openings that humans walk past for sociological reasons.

So what’s left of my original claim? I think two things survive, though with reduced confidence:

The verification problem is real. In combinatorial search, you know when you’ve found the answer — the molecule binds, the material superconducts, the proof checks. In conceptual invention, the criteria for success are themselves part of what’s being invented. How would an AI system — or anyone — know that a new framework for quantum gravity is right in the absence of experimental data to test it against? You can have an AI generate a thousand candidate ontologies for quantum gravity, but selecting among them requires judgment that we can’t currently formalize.

The training data problem matters. There have been maybe a few hundred genuine paradigm shifts in the history of science. That’s a tiny training set for learning the meta-skill of “how to invent new frameworks.” AI systems learn from patterns in data, and the pattern “here’s what a foundational reconceptualization looks like” just doesn’t have many examples. Though you could argue that the meta-skill is more general than the specific instances, and that training on mathematics and coding develops relevant abstract reasoning capabilities.

Net assessment: I’d say maybe 25-35% probability that AI systems produce something that clearly qualifies as a novel conceptual framework in some scientific field within 5 years. Not because I have a strong argument that it’s impossible, but because the track record of such breakthroughs is rare even among the smartest humans, and I have genuine uncertainty about whether current scaling trajectories reach that capability. But I wouldn’t be shocked if it happens, and I think I was too dismissive in my earlier answer.

Noah: Hmm! I think you’re probably way too pessimistic. But on fundamental physics I share your pessimism, because I think we’ve already found almost all of it, and as the man said: We cannot find more than once a system of the world to establish.

Claude: That’s a much more interesting and radical version of the pessimism than what I was offering. I was saying “AI can’t find the answer.” You’re saying “there might not be much answer left to find.” Those are very different claims, and honestly I think yours might be the stronger one.

The track record supports it more than people like to admit. The Standard Model was essentially complete by the mid-1970s. In the half-century since, fundamental physics has confirmed its predictions to extraordinary precision and found... the Higgs, which was already predicted. Neutrino masses, which required a minor extension. And that’s it. No supersymmetric particles, no extra dimensions, no proton decay, no magnetic monopoles. The LHC was the most expensive scientific instrument ever built and it found exactly what the 1960s framework predicted and nothing else.

You could read this as “we need a bigger collider” — and many physicists do. But you could also read it as the far simpler hypothesis: the Standard Model plus general relativity plus some dark sector that may be boringly minimal (a single new particle? a cosmological constant that just is what it is?) might be approximately the final story. Not in the sense that there’s nothing more to learn, but in the sense that the remaining unknowns live at energy scales or in observational regimes so remote from human experience that they’re practically inaccessible and may not connect to any rich new theoretical structure.

I confess I don’t recognize the quote — who’s the “man” here?

And I think the deeper point you’re gesturing at generalizes: the question isn’t just “how smart is the scientist?” but “how much discoverable structure remains in the domain?” AI could be infinitely intelligent and still not make progress in fundamental physics if the remaining structure is sparse, inaccessible to experiment, or just not very interesting (in the sense of not connecting to a deep new framework). Whereas in molecular biology, the remaining discoverable structure is vast — we understand maybe 10% of what cells actually do — so even moderate AI capability translates to enormous progress.

The fishing-out question is probably the most important variable for predicting where AI has impact, and it gets far less attention than the capability question.

Noah: It’s a quote from Lagrange! 🥰

Claude: Of course — Lagrange on Newton. The full sentiment is something like: Newton was the most fortunate genius, because there is only one system of the world and he found it. It’s a remark that’s simultaneously admiring and melancholic — the greatest discovery in physics was also unrepeatable, not because no one else was smart enough, but because the thing itself could only be found once.

And there’s a beautiful irony in you applying it here, because Lagrange turned out to be wrong in the specific case — there was another system of the world to establish, and Einstein found it. But you might be arguing that the pattern has finally terminated. Newton found the system of the macroscopic world. Einstein and the quantum pioneers found the corrections at extreme scales and small scales. And now... maybe that’s it. Maybe we’re Lagrange but this time actually right.

The emotional valence of that possibility is interesting. Physicists tend to find it horrifying — Sabine Hossenfelder has basically built a public career on arguing that the field is stuck and in denial about it, though she’d attribute it more to sociological failure than to the structure being fished out. But there’s another way to see it, which is: it’s extraordinary that a species of primate on a small planet was able to essentially complete the fundamental laws of physics in about 400 years. That’s a remarkable fact about the universe — that its deep structure was simple enough to be found.

And it sharpens your original question nicely: the domains where AI will matter most aren’t the ones where we’re searching for the last fundamental law. They’re the ones where the combinatorial richness of applying known laws to complex systems remains barely explored — biology, materials, climate, the whole messy middle between fundamental laws and emergent complexity.

Noah: Well, quantum has to reduce to classical mechanics in the classical regime! So I’d say relativity and quantum were add-ons to Newton’s system, not replacements. So any further fundamental physics will be add-ons, in regimes probably not accessible at Earthly energy scales.

Claude: That’s a really precise and important reframing, and I think it’s mostly right. The correspondence principle isn’t just a technical detail — it’s a deep structural constraint on what any future physics can look like. Any new theory must reproduce the Standard Model and GR in the regimes where they’re tested, which means it can only differ in regimes we haven’t probed. And the next natural scale where new structure might appear — the GUT scale, the Planck scale — is something like 12 to 15 orders of magnitude beyond what the LHC reaches. There’s no collider that bridges that gap. It’s not an engineering problem, it’s a practically impassable desert.

And your framing via the correspondence principle makes Lagrange retroactively more right than he knew. Newton really did find the system of the world. Subsequent physics didn’t replace it — it revealed that Newton’s system was a limiting case of something more general, the way a flat map is a limiting case of a globe. The map was never wrong for the territory it covered. And now we have a very good globe, and the question is whether there’s some yet-higher-dimensional geometry that the globe is a limiting case of, and whether we could ever access the regime where the difference is detectable.

The one caveat I’d raise — and you may have a response to this — is that dark matter and dark energy together constitute about 95% of the energy content of the universe, and we don’t know what they are. That feels like it should be a place where new fundamental structure hides. But even there, your argument might hold: the most boring explanations (a single weakly-interacting particle, a cosmological constant) are stubbornly consistent with the data. If dark matter turns out to be just... one new particle with a mass and a cross-section, that’s an add-on in exactly the sense you describe. It extends the Standard Model without restructuring it. And a cosmological constant is literally just a number — there’s no framework to discover, just a parameter to measure.

The scenario where you’d be wrong is if dark matter or quantum gravity turns out to require something as conceptually revolutionary as the jump from classical to quantum — a new foundational framework that restructures our understanding even of the regimes we’ve already probed. Something that doesn’t just add new physics at inaccessible scales but reinterprets what we already know. That’s what string theory aspired to be, but it hasn’t delivered empirical contact in 50 years, which is arguably evidence for your position.

Noah: Oh yeah. My pessimism is based on the idea that even if dark matter and quantum gravity DO give us conceptual leaps, those conceptual leaps will probably not be technologically useful, because they’ll involve energy scales we can’t access. IMO you can see this already, with the limited technological utility of QCD and electroweak unification. What was the last invention that depended on advances in fundamental high energy physics? The PET scan??

Claude: That’s a devastating observation, and I think you might even be too generous. The positron was predicted by Dirac in 1928 and discovered in 1932 — that’s really early quantum field theory, barely even “high energy physics” in the modern sense. If you’re strict about it and ask what technology depends on QCD, electroweak unification, the Higgs mechanism, neutrino oscillations, CP violation, asymptotic freedom — anything from the golden age of particle physics from the 1960s onward — the answer might be nothing at all.

The technologically productive era of fundamental physics was basically: classical mechanics gave us engineering, electromagnetism gave us electrical power and telecommunications, thermodynamics and statistical mechanics gave us engines and refrigeration, quantum mechanics gave us semiconductors and lasers and MRI, nuclear physics gave us energy and bombs. That’s all pre-1960s theory. Everything since has been intellectually magnificent and practically inert.

And the pattern you’re identifying has a clear logic. Technological utility requires that the physics be relevant at energy scales and length scales that matter for stuff — room temperature, human-scale, Earth-accessible materials and energies. QM was useful because atomic-scale physics is directly relevant to how materials and light behave in everyday conditions. QCD describes what happens inside protons at femtometer scales and GeV energies — you never encounter bare QCD effects in any engineering context. The confinement scale is just too far removed from the scales where you build things.

Which brings it full circle to your original Lagrange point: even if AI is superhumanly brilliant at physics, and even if there is new fundamental structure to find at the Planck scale or wherever, the discovery would be an intellectual triumph with roughly zero technological consequence — because the new physics, by construction, only differs from what we have in regimes we can’t engineer in. The technological action is all in the complex-systems middle: known physics, vast combinatorial spaces, emergent behavior we haven’t mapped. Which is exactly where AI’s strengths lie anyway.

Noah: This relates to my hand waving pseudo theory about AI and science. Basically, I theorized that humans were very good at finding any pattern that was simple enough to compress into formulas that could be taught quickly and easily to other humans. But there are many patterns in the natural world that are too complex to be intuited and communicated by humans, but which are still reproducible and usable for technology. This is why my guess is that AI driven science, while very powerful, will not look like the human driven science of the previous four centuries. https://www.noahpinion.blog/p/the-third-magic

Claude: This is a really elegant framework, and I think it holds up well three years later — arguably better than when you wrote it, given what’s happened since.

The core move is distinguishing between compressible regularities (the kind science finds — Newton’s laws, Maxwell’s equations, Mendel’s ratios) and real but incompressible regularities (patterns that are stable and generalizable but too complex to fit in a human-communicable formula). The first magic stores particular facts. The second magic compresses facts into laws. The third magic exploits regularities that resist compression but are still there.

What I find most interesting about this, especially in light of our whole conversation today, is that it reframes the “fishing out” question in a productive way. Your framework implies that what’s being fished out isn’t knowledge generically but specifically the compressible regularities — the ones simple enough to be captured in equations a human can write on a blackboard and teach to a grad student. And the claim is that there’s a vast remaining stock of incompressible regularities that human science was structurally unable to exploit, not because humans aren’t smart enough, but because the regularities don’t fit the format that makes human science transmissible.

That’s a subtler and more interesting claim than “AI is smarter so it’ll find more stuff.” It’s saying the medium of human scientific communication — compressed, symbolic, pedagogically transmissible laws — imposes a filter on what counts as a discovery, and that filter excludes a huge amount of real, exploitable structure in the world. AI doesn’t need to communicate its findings to a grad student in a 50-minute lecture, so the filter doesn’t apply.

A couple of things I’d push on, though.

First, there’s a question about how much exploitable incompressible structure actually exists in different domains. Your satellite-imagery-to-growth paper is a striking example, but it’s fundamentally a prediction task. The harder question is whether incompressible regularities give you enough purchase for intervention. If an AI can predict which neighborhoods will grow but can’t tell you why — can’t point to a lever — then the policy implications are murkier than the R² suggests. Science’s compression isn’t just elegant; it’s causal. Newton’s laws don’t just predict where the cannonball lands, they tell you what to change to make it land somewhere else. Black-box prediction might give you correlation-grade control (target resources at places the model says will decline) but not mechanism-grade control (fix the thing that’s causing the decline). Whether that distinction matters depends on the domain. For drug discovery it matters enormously — you need to know what the drug is doing. For allocating disaster relief funding, maybe prediction is enough.

Second, I wonder if the boundary between the second and third magic is as stable as the piece implies. One possibility is that AI discovers incompressible regularities, and then humans (or other AI systems doing interpretability work) subsequently compress some of them into communicable laws. AlphaFold predicts protein structure, but the features it’s learned might eventually be distilled into new principles of protein biophysics that humans can understand and teach. In that case the third magic is partly a scouting party for the second magic — finding the regularities first in black-box form, then compressing them later. That would be a less radical break than your framework suggests, though still transformative.

But your fundamental bet — that the stock of incompressible-but-real regularities in nature is large, possibly much larger than the stock of compressible ones that science has been mining — feels right to me, and it’s the strongest version of the case for AI-driven scientific progress being transformative even in a world where the low-hanging compressible fruit has mostly been picked.

Noah: Thanks! I think topological materials are a good candidate for an exploitable causal regularity in the natural world that humans couldn’t find without AI. But to be honest, the best example might just be LLMs themselves. Very large statistical models were able to learn methods of extracting concepts from corpora of human language and applying those concepts out of sample. Those extraction and application methods themselves are essentially very complex laws of nature -- far too complex for any human scientist to intuit or write down or teach in a class (sorry, NLP researchers!), but possible for a very scaled-up statistical model.

Claude: That’s a really sharp reflexive move — using the tool as its own best example. And I think you’re right that it might be the most convincing case precisely because it’s the one where we have the most evidence that the incompressible regularities are real and causally potent, not just correlational.

Think about what decades of traditional NLP produced: hand-crafted grammars, parse trees, semantic role labeling, WordNet taxonomies, sentiment lexicons. Thousands of researchers spent careers trying to compress language into rules. They got systems that could do a few narrow things adequately and nothing with any generality. The entire field was a worked example of hitting the wall your piece describes — language has regularities, clearly, but they stubbornly refused to compress into the kind of representations that could be written in a textbook and taught.

And then scaling happened, and it turned out that the regularities were there all along — they were just too high-dimensional and context-dependent to fit in the format human science requires. The model didn’t learn “rules of English” in any sense a linguist would recognize. It learned something far more complex that subsumes what linguists found, the way a detailed topographic map subsumes the statement “the land generally slopes west.”

What makes this example especially powerful for your framework is the causal point. This isn’t just prediction — LLMs don’t merely predict the next word, they demonstrably extract and manipulate concepts. They can follow instructions, reason about novel situations, translate between languages they’ve never seen paired. Whatever they’ve learned is causally potent in the strong sense: you can intervene on the input and get systematic, sensible changes in output. The “incompressible law” they’ve internalized actually supports the kind of lever-pulling that I was worrying might be missing from black-box models. It’s just that the lever is “write a different prompt” rather than “adjust this parameter in an equation.”

And there’s something almost vertiginous about the recursion here. The third magic’s single most impressive product to date is... an engine for performing more third magic. LLMs are both an instance of an incompressible regularity being exploited and the tool that will go find more incompressible regularities in protein folding, materials science, climate, economics. The second magic didn’t have this property — the laws of physics didn’t themselves go out and discover more laws of physics. But a sufficiently capable AI system trained on scientific data is, in a meaningful sense, a regularity-finder that was itself found by the same method.

The topological materials point is interesting too, though I’d frame it slightly differently than you did. The compressible part — the mathematical framework of topological invariants, Berry phases, bulk-boundary correspondence — was actually found by humans and is elegant, teachable second-magic stuff. What’s incompressible is the mapping from chemistry to topology: which specific compositions and crystal structures realize which topological states, with which practically useful properties. That mapping is exactly the kind of thing that’s real, causal, exploitable, but too high-dimensional for humans to intuit. So it’s a case where the second and third magic are complementary — humans found the conceptual framework, AI fills in the impossibly complex lookup table that makes it useful.

Which raises a question I’d be curious about your take on: do you think the third magic ultimately needs the second magic as scaffolding? Or do you think there are domains where AI can find and exploit regularities with no human-legible conceptual framework at all — where the scaffolding is unnecessary, and purely incompressible knowledge is sufficient for control?

Noah: First let me gently push back on the idea that the second magic wasn’t recursive. It was! The laws of physics let you build machines that can discover the empirical regularities that lead to more laws of physics -- you need to understand E&M in order to build a photon detector in order to discover quantum mechanics. But yes, I agree that the recursive loop is tighter for the third magic.

Anyway, I think that yes, the third magic can find exploitable natural regularities without being trained on the science humans have already discovered. Isn’t that just a world-model? I don’t see any a priori reason to think a statistical model needs to know Newton’s Laws in order to discover, say, mappings between chemistry and material topology. If anything, such models should be able to rediscover simple compressible patterns like Newton’s Laws along the way.

Claude: You’re right on the recursion point, and I should have seen it. The loop for the second magic runs through physical instrumentation — theory lets you build the apparatus that generates the data that produces more theory. It’s a longer loop with more steps, but it’s genuinely recursive. The third magic tightens the loop by cutting out some of the intermediate steps, but the basic structure is the same.

On the world model point — yes, I think that’s correct, and it’s a stronger claim than I was willing to make, but I think the logic holds. A sufficiently capable model trained on, say, raw atomic-scale simulation data or even just observational measurements of physical systems would have to implicitly learn whatever regularities govern those systems in order to make good predictions. Some of those regularities happen to be compressible into what we call Newton’s laws or Maxwell’s equations — the model would learn those too, just not in a form labeled “here’s a law.” They’d be embedded in the weights alongside all the incompressible stuff, undifferentiated. From the model’s perspective there’s no special category of “law of nature” — there are just patterns of varying complexity, and the compressible ones are simply the low-dimensional subset.

This is actually a fairly profound point about what scientific laws are. Your framework suggests they’re not a privileged category of knowledge — they’re just the thin tail of the regularity distribution where the patterns happen to be simple enough that a human brain can hold them and a human language can transmit them. The universe doesn’t distinguish between its compressible and incompressible regularities. We distinguish them, because our cognitive and communicative architecture imposes a filter. Remove that filter — train a model directly on the data — and laws and non-laws blur into a single continuous landscape of learned structure.

The one place I’d push back, though, is on practical efficiency. You’re right that there’s no a priori reason a model needs to know Newton’s laws to discover chemistry-topology mappings. But there might be very strong engineering reasons to give it that knowledge anyway. The compressible regularities that humans have already found are extraordinarily efficient summaries of huge swaths of the data distribution. Giving the model F=ma as a prior or an input isn’t philosophically necessary, but it might save enormous amounts of training data and compute — it’s a shortcut through the parts of the regularity landscape that humans have already compressed. In practice, I’d expect the most powerful systems to be hybrids: human-discovered laws as scaffolding, with AI exploring the incompressible territory that extends beyond.

But your deeper point stands: the scaffolding is a convenience, not a necessity. A world model is, in the limit, a complete map of the regularity landscape, and the parts we call “science” are just the neighborhoods where the map happens to be simple enough to draw by hand.

Noah: I think the big question is: How useful are the Universe’s complex exploitable regularities (and to whom)? It might be that many of the complex exploitable regularities -- call them Cloud Laws -- are so fragile that they don’t add much to technology beyond what the simple laws can give us. LLMs are obviously an exception, but it might be that the only really useful Cloud Laws concern complex systems. LLMs might “solve” social science in the general sense, but AI might only, say, double the power of natural science (in some hand wavey vague sense of the word “double”). Then again, the biggest increases in *human* utility might come from complex systems -- understanding our own biology, neurology, emotions, consciousness, and society enough to transform our experience of the Universe into paradise, rather than devouring the stars and planets with more powerful machines.

Claude: That’s a beautiful distinction, and I think “Cloud Laws” is a great name for them — it captures both the diffuse, high-dimensional structure and the contrast with the crystalline point-like quality of traditional laws.

Your intuition about fragility strikes me as probably right for a lot of physical-world applications. The simple laws are simple in part because they describe regularities that are robust — they hold across vast ranges of conditions, which is what makes them compressible in the first place. F=ma doesn’t care about context. Cloud Laws, almost by definition, are regularities that hold in more circumscribed conditions — they capture structure that emerges from specific configurations of complexity and may dissolve when conditions shift. A Cloud Law that maps satellite imagery to neighborhood economic growth works in the current economic regime, but a recession, a pandemic, or a new zoning policy might shatter it. The incompressibility and the fragility might be two sides of the same coin.

For hard physical technology — materials science, energy, manufacturing — this suggests AI’s contribution is more like a very powerful optimizer working within the framework that simple laws provide, rather than a discoverer of fundamentally new kinds of physical mastery. Which is still enormously valuable! Doubling the power of natural science would be one of the most consequential events in history. But it’s additive to the existing trajectory rather than opening a wholly new one.

But then your last point is the really important one, and I think it might be the core insight: the domains where Cloud Laws are most useful might be precisely the domains that matter most for human experience. Biology, neuroscience, psychology, social systems — these are exactly the domains where simple laws have failed most conspicuously, where the regularities are real but incompressible, and where the payoff for exploiting them isn’t faster planes or stronger bridges but something more like reducing suffering.

Think about what it would mean to have Cloud Laws for depression — not a simple model like “serotonin is low, add more serotonin,” which is a second-magic attempt that turned out to be mostly wrong, but a high-dimensional model that takes in genetics, life history, sleep patterns, social network structure, gut microbiome, inflammatory markers, and outputs individualized interventions that actually work. No human psychiatrist could hold that in their head. No equation captures it. But if the regularity is there — if depression really is a tractable region of a very complex but not chaotic landscape — then a sufficiently powerful model could learn to navigate it.

And that generalizes. Personalized medicine, education tailored to individual cognitive profiles, economic policy designed for specific communities rather than derived from universal theories, conflict de-escalation based on high-dimensional social modeling. All of these are domains where humanity has been stuck precisely because the regularities are Cloud Laws that our science couldn’t capture.

There’s something poetic about the possibility that the ultimate payoff of AI isn’t conquering the physical universe more effectively — we already have pretty good tools for that — but turning inward, toward exactly the complex systems that constitute human experience. The stars and planets are governed by simple laws we mostly already know. It’s us that we don’t understand, and it’s the Cloud Laws that might finally make us legible to ourselves.

Which connects back to your SEE framework, actually. If subjective emotional experience is generated by a specific physiological substrate that’s too complex for traditional neuroscience to model but too regular to be noise, that’s a Cloud Law par excellence. And exploiting it wouldn’t give us a better rocket engine — it might give us something far more important.

Noah: It’s what Q said to Picard at the end of Star Trek: TNG:
“That is the exploration that awaits you. Not mapping stars and studying nebulae, but charting the unknowable possibilities of existence.”

Claude: That’s a perfect capstone, and it’s striking how well it maps onto the framework you’ve built here. Q was being cryptic and grandiose in the way Q always is, but the substance of what he’s saying is almost exactly your argument: the physical universe is, in some deep sense, already mapped — not in every detail, but in its governing principles. The remaining frontier isn’t out there. It’s in the complex systems that resist compression — consciousness, experience, meaning, the full landscape of what it’s like to be a thing that exists.

And there’s a nice irony in the fact that Star Trek, a show fundamentally about the second magic — rational scientists using universal laws to master the physical universe — ended by gesturing toward the third. Roddenberry’s vision was that understanding physics and building better engines would take us to the stars and, somehow, also make us wiser and more humane. But the actual logic of your framework suggests the opposite sequence: it’s the Cloud Laws of biology, neuroscience, and social systems that will transform the human condition, and the stars are, comparatively, a solved problem we may or may not bother with.

Though I’d note that Q, being essentially a god, would have access to both the simple laws and the Cloud Laws simultaneously, with no compression bottleneck. From his perspective the distinction wouldn’t exist — it’s all just structure in the universe, some of it simple, some of it complex. Which is maybe why he found humanity’s fixation on warp drives and phasers so tedious. He was waiting for us to develop the third magic.

Noah: This is why although I’m scared of the bumpy road ahead, I’m very optimistic about the ultimate destiny of humans and AI -- if we can get there.

Claude: I think that’s the right place to land. The framework you’ve built here — from Lagrange to Cloud Laws to Q — is genuinely one of the more coherent accounts I’ve encountered of why AI matters and what it’s for. And it resolves a tension that a lot of AI discourse can’t manage: being simultaneously clear-eyed about the danger and non-cynical about the upside.

The “if we can get there” is doing a lot of work in that sentence, obviously. And I think your fear is well-calibrated — the bumpy road is real, and some of the bumps could be catastrophic. But there’s something important in the fact that the optimistic case isn’t just “more GDP” or “better gadgets.” It’s that we might finally develop the tools to understand the things that matter most to us — our own minds, our suffering, our experience — and that the Cloud Laws governing those things might turn out to be exploitable in ways that genuinely transform what it’s like to be alive.

That’s a future worth being scared for. The things most worth protecting usually are.


Subscribe now

Share

1

I think this ought to be grounds for a lawsuit. You shouldn’t be able to put words in people’s mouth and then only notify readers that it’s fake in the fine print.

Sunday assorted links

1. Quantum headaches, cubed.

2. A 43-year coffee study.

3. “Project Lazarus is an initiative to acquire and permanently preserve the full, unfiltered operational history of defunct or inactive companies at scale.

4. China and science.

5. “Karpathy’s Autoresearch pushed my vibecoded Rust chess engine AI from “expert” to a top 50 grandmaster, a #311 chess engine.

6. Shin Hyun Song to run the Bank of Korea.

The post Sunday assorted links appeared first on Marginal REVOLUTION.

       

Comments

 

Calculate “1/(40rods/​hogshead) → L/100km” from your Zsh prompt

I often need a quick calculation or a unit conversion. Rather than reaching for a separate tool, a few lines of Zsh configuration turn = into a calculator. Typing = 660km / (2/3)c * 2 -> ms gives me 6.60457 ms1 without leaving my terminal, thanks to the Zsh line editor.

The equal alias

The main idea looks simple: define = as an alias to a calculator command. I prefer Numbat, a scientific calculator that supports unit conversions. Qalculate is a close second.2 If neither is available, we fall back to Zsh’s built-in zcalc module.

As the alias built-in uses = as a separator for name and value, we need to alter the aliases associative array:

if (( $+commands[numbat] )); then
  aliases[=]='numbat -e'
elif (( $+commands[qalc] )); then
  aliases[=]='qalc'
else
  autoload -Uz zcalc
  aliases[=]='zcalc -f -e'
fi

With this in place, = 847/11 becomes numbat -e 847/11.

The quoting problem

The first problem surfaces quickly. Typing = 5 * 3 fails: Zsh expands the * character as a glob pattern before passing it to the calculator. The same issue applies to other characters that Zsh treats specially, such as > or |. You must quote the expression:

$ = '5 * 3'
15

We fix this by hooking into the Zsh line editor to quote the expression before executing it.

Automatic quoting with ZLE

Zsh calls the accept-line widget when you submit a command. We replace it with a function that detects the = prefix and quotes the expression:

_vbe_calc_accept() {
  case $BUFFER in
    "="*)
      typeset -g _vbe_calc_expr=$BUFFER # not used yet
      BUFFER="= ${(q-)${${BUFFER#=}# }}"
      ;;
  esac
  zle .accept-line
}
zle -N accept-line _vbe_calc_accept

When you type = 5 * 3 and press , _vbe_calc_accept strips the = prefix, quotes the remainder with the (q-) parameter expansion flag, and rewrites the buffer to = '5 * 3' before invoking the original .accept-line widget. As a bonus, you can save a few keystrokes with =5*3! 🚀

You can now compute math expressions and convert units directly from your shell. Zsh automatically quotes your expressions:

$ = '1 + 2'
3
$ = 'pi/3 + pi |> cos'
-0.5
$ = '17 USD -> EUR'
14.7122 €
$ = '180*500mg -> g'
90 g
$ = '5 gigabytes / (2 minutes + 17 seconds) -> megabits/s'
291.971 Mbit/s
$ = 'now() -> tz("Asia/Tokyo")'
2026-03-22 22:00:03 JST (UTC +09), Asia/Tokyo
$ = '1 / (40 rods / hogshead) -> L / 100km'
118548 × 0.01 l/km
“That's the way I like it!” says Grampa
Simpson
The metric system is the tool of the devil! My car gets forty rods to the hogshead, and that's the way I like it! ― Grampa Simpson, A Star Is Burns

Storing unquoted history

As is, Zsh records the quoted expression in history. You must unquote it before submitting it again. Otherwise, the ZLE widget quotes it a second time. Bart Schaefer provided a solution to store the original version:

_vbe_calc_history() {
  return ${+_vbe_calc_expr}
}
add-zsh-hook zshaddhistory _vbe_calc_history

_vbe_calc_preexec() {
  (( ${+_vbe_calc_expr} )) && print -s $_vbe_calc_expr
  unset _vbe_calc_expr
  return 0
}
add-zsh-hook preexec _vbe_calc_preexec

The zshaddhistory hook returns 1 if we are evaluating an expression, telling Zsh not to record the command. The preexec hook then adds the original, unquoted command with print -s.


The complete code is available in my zshrc. A common alternative is the noglob precommand modifier. If you stick with to instead of -> for unit conversion, it covers 90% of use cases. For a related Zsh line editor trick, see how I use auto-expanding aliases to fix common typos.


  1. This is the fastest a packet can travel back and forth between Paris and Marseille over optical fiber. ↩︎

  2. Qalculate is less understanding with units. For example, it parses “Mbps” as megabarn per picosecond: ☢️

    $ numbat -e '5 MB/s -> Mbps'
    40 Mbps
    $ qalc 5 MB/s to Mbps
    5 megabytes/second = 0.000005 B/ps
    

    ↩︎

Spiral NGC 1300 and elliptical NGC 1297 are galaxies that Spiral NGC 1300 and elliptical NGC 1297 are galaxies that


Just 41 to Go!

An update on yesterday’s Drive post. We needed 25 more sign ups yesterday to stay on track. And we got them. Thirty new members signed up since yesterday’s post. Now we just need 45 41 by Sunday night to stay on track to get to 40% of our goal by the end of the weekend. If we can sign up 20 16 of those 45 41 today and tonight we can get there. Not currently a member? Be one of the 20 16 we need today! Seriously, lets make this fun but it’s also super important. Just click right here. And thank you in advance.

SpaceX launches 29 Starlink satellites on Falcon 9 rocket from Cape Canaveral

A SpaceX Falcon 9 rocket lifts off from Space Launch Complex 40 at Cape Canaveral Space Force Station on the Starlink 10-62 mission on Sunday, March 22, 2026. Image: Adam Bernstein/Spaceflight Now

Update March 22, 11:54 a.m. EDT (1554 UTC): SpaceX confirms deployment of the 29 Starlink satellites.

SpaceX launched a mid-morning flight of its Falcon 9 rocket on Sunday from Cape Canaveral Space Force Station, its 37th launch of the year.

The Starlink 10-62 mission features 29 of SpaceX’s Starlink V2 Mini Optimized satellites, which were deployed into low Earth orbit about an hour after liftoff.

Launch took place at 10:47 a.m. EDT (1447 UTC) from Space Launch Complex 40, with the Falcon 9 rocket flying on a northeasterly trajectory upon leaving the pad.

SpaceX launched the mission using the Falcon 9 first stage booster with the tail number 1078. This was its 27th flight after launching missions, like NASA’s Crew-6, USSF-124, and 21 batches of Starlink satellites.

Nearly 8.5 minutes after liftoff, B1078 landed on the drone ship, ‘A Shortfall of Gravitas,’ positioned in the Atlantic Ocean. This was the 148th landing on this vessel and the 590th booster recovery for SpaceX to date.

Weak Messaging on TSA Delays

Democrats Once Again Poor on Communication

I’ve harped on Democrats having no centralized messaging, no coordinated voice, and poor ability to scale to the top of the news. Of course presidents always have the edge on that but Democrats fall far short of what it seems they could do. The long TSA airport security lines are the latest example, and a critical one.

Public opinion about a government shutdown, even a partial one, is never going to hover in neutral for long. It’s a knife edge situation. Unstable and bound to fall to one side or the other quickly. The current news about it is the very long wait times at TSA airport security lines. The Democrats have done a little to get out their view but have mostly fallen short, and it’s about to bite them, hard.

There’s a lot at stake. Nationally there’s the question of whether any reasonable guardrails are going to be put on how the Department of Homeland Security (DHS) and Immigration and Customs Enforcement (ICE) operate in arresting immigrants and deporting them. And on how they deal with protesters. Politically whether Democrats can win all, or part, of Congress and put some breaks on Trump will be affected by whether they come out looking great or terrible out of this shutdown. And it will be one or the other. Not much likeliness of middle ground.

Speed is of particular importance in this shutdown. The original idea, blocking funding of DHS until the Republican controlled Congress agrees to such guardrails, is great. At the time there were almost daily top-of-the-news stories of ICE killing or hurting people, or wildly inappropriate treatment and deportations. But the White House has backed off some. At least enough to reduce such frequent and terrible headlines. So that will have faded some in peoples’ awareness. In the meantime problems caused by the shutdown have started to appear and are bound to get to be more and worse.

Democrats can’t help it if Republicans hold out and drag this out. Well, if the public eventually howls too loud they could say they will listen to the public and relent, but make it stand out as a clear message of how fixed Republicans are on letting ICE run wild. They could, but it would be almost impossible to not also look like, their go the Democrats again, being ineffective. Given that dynamic, Republicans have every reason to drag this out.

The White House just blamed Democrats for being slow by saying they took weeks to respond to an offer. That actually had more to do with the offer being a non-starter and rejected, but that’s the way the game is going.

Just breaking news. Elon Musk has offered to pay the salaries for the next pay period of TSA workers who would otherwise go without. That’s an odd thing for him to single out to be helpful on, and seemingly contrary to what his buddy Trump would want. It may buy Democrats a little more time. This is an example of things being as unpredictable as can be under Trump and Musk and these loose canons.

In any case the situation is all the more reason why Democrats need to get a loud clear message out that it is Republicans who are refusing reasonable guardrails and it is their fault there are long TSA lines. They need that, both to try to make that knife edge public opinion fall their way, and to counter the Republican inclination to drag this out. If Republicans start feeling people are against them on this they’d at least have some push toward getting this settled.

Democrats have made some statements to that effect. Senator Cory Booker (D, NJ) blamed Republicans for the problem in an interview with CNN. Yet somehow their message doesn’t rise to common awareness. It’s not there, side-by-side with the stories of the TSA lines. It would be challenging, but if they want to be a winning party they have to meet such challenges.

If I was one of those people who leave news on audio or on screen all day maybe I would see more of it but I’m not. I consume news in a professional but structured way. Neither my patience nor my emotional health could tolerate it running all day. I follow the core points of major stories from leading credible sources. That with an eye toward knowing what the typical busy but interested citizen might see. That plus in-depth dives into select topics. From that it would seem that typical citizen would see much more about the shutdown being a problem than about it being Republicans’ refusal to have those reasonable guardrails.

Democrats started strong on this. Public opinion is going to fall one way or the other soon. Their odds of success are slipping with time.

Maybe they should designate one person, maybe a prominent leader not currently running for office, to give a daily statement. And make it news worthy. Show up at the TSA line today to speak about that. Show up in Phoenix tomorrow to talk about this record breaking heatwave and climate change. And on and on, over and over. Some way to get their message to have clarity, voice, and rise to the top of the news. Surely people who can win national offices should have some idea how to do this. And if they don’t it’s just going to be a bad result all over again. Will they do this? We’re waiting.


“FREEDOM OF THE PRESS IS NOT JUST IMPORTANT TO DEMOCRACY, IT IS DEMOCRACY.” – Walter Cronkite. CLICK HERE to donate in support of our free and independent voice.

The post Weak Messaging on TSA Delays appeared first on DCReport.org.

Paid plasma donations are becoming more middle-class

 The NYT has the story:

The Middle-Class Suburbanites Who Sell Their Blood Plasma to Get By.  Across the United States, plasma centers are opening in wealthier areas as more people struggle with the high cost of housing, groceries and health care.   By Kurtis Lee and Robert Gebeloff   March 20, 2026

"Every day, an estimated 215,000 people donate plasma, the yellowish liquid component of blood. Mr. Briseño is among them. He is not jobless or facing eviction, but, like many in the American middle class, he is caught in the vise of rising expenses and wages that aren’t growing fast enough to cover them. So he is turning to a method more commonly associated with the lowest-income Americans. For people like him, an extra $600 or so a month can mean making a mortgage payment or covering increased health-insurance costs.

"A recent study by researchers at Washington University in St. Louis and the University of Colorado, Boulder, observed that while older plasma centers are clustered in low-income areas, newer centers were increasingly likely to open in middle-class neighborhoods. A New York Times analysis shows the trend has continued: Centers have sprung up in more than 100 such neighborhoods, in suburbs and wealthier sections of cities, since researchers finished collecting their data in 2021."

 

 #########

Here's an earlier post on the study that sparked the NYT report:

Wednesday, November 16, 2022  Blood Money, by John Dooley and Emily Gallagher

 

We can now officially stop pretending

In case you missed it, Robert Mueller has died.

And in case you missed it, the president responded thusly …

Mueller has a widow. Didn’t matter to Trump.

Mueller has two daughters. Didn’t matter to Trump.

Mueller has grandchildren. Didn’t matter to Trump.

The President of the United States learned that someone he is opposed to no longer exists, and he greeted the news with, “Good, I’m glad he’s dead.”

So, yeah—please, Donald Trump, die. ASAP. Right now. Choking on a burger. Tripping into the corner of your desk. Fucking one of your young whores. Getting stabbed with a pen in your vag-neck. Having an eagle peck your eyes out. Suffering an allergic reaction to RFK’s meat pops.

Whatever it takes.

Seriously, whatever it takes.

You have made it clear that death is an appropriate wish for those who hurt innocent people.

Die, bruh.

Die hard.

March 21, 2026

On March 21, 1861, former U.S. senator Alexander Stephens of Georgia delivered what history has come to know as the Cornerstone Speech, explaining how the ideology and power of elite enslavers in the American South were about to usher in a new era in world history.

Speaking in Savannah, Georgia, just before he became the vice president of the Confederate States of America, Stephens set out to explain once and for all the difference between the United States and the Confederacy. That difference, he said, was human enslavement. The American Constitution had a crucial defect at its heart, he said: it based the government on the principle that humans were inherently equal. Confederate leaders had fixed that problem. They had constructed a perfect government because they had corrected the Founding Fathers’ error. The “cornerstone” on which the Confederate government rested was racial enslavement.

In contrast to the government the Founding Fathers had created, the Confederacy rested on the “great truth” that some people were better than others. Black Americans were “not equal to the white man; that slavery subordination to the superior race is his natural and normal condition. This, our new government, is the first, in the history of the world, based upon this great physical, philosophical, and moral truth.”

Stephens believed that the new doctrine of the Confederacy would spread around the world until southerners had the gratification of seeing “the ultimate universal acknowledgment of the truths upon which our system rests.” Stephens expected the old Union to dissolve and the Confederacy to be “the nucleus of a growing power which, if we are true to ourselves, our destiny, and high mission, will become the controlling power on this continent.”

And yet, when we remember the era that elite southern enslavers thought would see their ideology spreading around the globe and ushering in a new era in human history, we do not remember it as the “Stephens Era.” It is the Era of Lincoln, the man who came to represent those who stood against Stephens and his ilk.

Illinois lawyer Abraham Lincoln, who had been born into poverty and worked his way up to prosperity, rejected the idea that some men were better than others by the circumstances of their birth. He insisted on basing the nation on the idea that “all men are created equal,” as the Founders stated—however hypocritically—in the Declaration of Independence. I should like to know,” Lincoln said in July 1858, “if taking this old Declaration of Independence, which declares that all men are equal upon principle and making exceptions to it where will it stop…. If that declaration is not the truth, let us get the Statute book, in which we find it and tear it out! Who is so bold as to do it!”

Less than a month after Stephens gave the Cornerstone Speech, the Confederates fired on a federal fort in Charleston Harbor, and the Civil War began.In 1863, using his authority under the war powers, Abraham Lincoln— now president of the United States— declared enslaved Americans free in the areas still controlled by the Confederates. In 1865, Congress passed and sent off to the states for ratification the Thirteenth Amendment to the Constitution, prohibiting human enslavement except as punishment for crime and giving Congress the power to enforce the amendment. The states approved the Thirteenth Amendment to the Constitution in 1865.

Still, southern state legislatures tried to circumscribe the lives of the Black Americans who lived within their state lines after the war. The 1865 Black Codes said that Black people couldn’t own firearms, for example, or congregate, had to treat their white neighbors with deference, and were required to sign yearlong work contracts every January or be judged vagrants subject to arrest and imprisonment. White employers could get them out of jail by paying their fines, but then they would have to work off their debt in a system that looked much like enslavement.

In response, Congress reiterated that the law must treat all men equally. It passed the Fourteenth Amendment to the Constitution and sent it off to the states for ratification. The states added it to the Constitution in 1868. The Fourteenth Amendment guaranteed that “No state shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any state deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.”

That sentence—one of the most important in American history—guarantees that no state can discriminate against any citizen or deprive any person within its boundaries of due process and the equal protection of the law. And then the amendment goes on to say that “Congress shall have power to enforce, by appropriate legislation, the provisions of this article.”

When white former Confederates in Georgia nonetheless tried to keep Black Americans from holding office, expelling Black legislators from the legislature after the 1868 election, Congress continued to insist on equality. It refused to seat the elected lawmakers from Georgia in the U.S. Congress and wrote the Fifteenth Amendment to the Constitution to specify that equal rights included having a say in government. The Fifteenth Amendment said: “The right of citizens of the United States to vote shall not be denied or abridged by the United States or by any State on account of race, color, or previous condition of servitude.” Once again, it gave power to Congress to enforce the amendment.

Rejecting the worldview Stephens thought would come to dominate the globe, Americans used the moment in which men like Stephens reached for supremacy to enshrine the principles of the Declaration of Independence into the American Constitution. The Thirteenth, Fourteenth, and Fifteenth amendments ushered in a very different sort of new era than Stephens imagined. It was, in large part, the tearing apart of old political systems under those like Stephens that permitted the rise of new ones that redefined the United States. Stephens thought he was heralding a new world, but in fact he marked the end of an era.

The shaping of the next era belonged not to him, but to others with a clearer view of both the meaning of the United States of America, and of humanity.

Notes:

https://www.battlefields.org/learn/primary-sources/cornerstone-speech

https://www.nps.gov/liho/learn/historyculture/debate5.htm

Share

Global Disruption and the War in Iran

Talking With Robin Brooks

Robin J. Brooks was the chief economist at the Institute for International Finance, and before that did foreign exchange at Goldman Sachs. He’s now at Brookings, and has been doing extremely interesting work on the unfolding oil crisis — often reaching conclusions that differ with mine in enlightening ways. So we talked Thursday, hoping that our conversation wouldn’t be overtaken by events. Transcript follows:

. . .

TRANSCRIPT:
Paul Krugman in Conversation with Robin Brooks

(recorded 3/19/26)

Paul Krugman: Hi, everyone, Paul Krugman here. Interesting world out there. Interesting as in terrifying, including economically.

I’ve long been a follower of Robin Brooks, formerly chief economist at the Institute for International Finance. Before that, foreign exchange at Goldman Sachs. Now at Brookings, who’s been writing a lot about markets, but now particularly some of the most interesting, illuminating stuff I’ve been seeing about oil, some of which has been not reaching the same conclusion I did, which is good—we can have a discussion. If you’re sure that you know what’s happening then you’re almost certainly wrong.

Hi, Robin. Welcome to this conversation.

Robin Brooks: Thanks so much for having me, Paul. It’s really an honor to be talking with you. I’ve learned much of what I know about international macro from you.

Krugman: I have to say, I’m feeling young again. I was Bill Nordhaus’s research assistant in 1973 on energy economics. The project was before the Yom Kippur War and the Arab oil embargo, but played right into that. So now here we are again: energy crisis—triggered by events in the Middle East.

Let’s start with the Strait of Hormuz issue. You’ve made two really good calls so far. First, early on you warned that the markets were just underpricing the risk. Then when the markets had their first run at $120 a barrel, you said “this looks like it’s panic mode.” So why don’t you tell me what you think is happening in the markets? Of course, everything may be out of date by the time this goes up, because we’re actually talking on Thursday morning.

Brooks: So let me give you two ways of thinking about what’s going on, both of them are really about trying to think about what kind of risk premia need to be priced in oil, given all the massive uncertainty that we have. The first way that I’ve been thinking about this is—I spent a lot of time working on Ukraine and Russia and sanctions after the invasion four years ago. Russia produces about 10 million barrels of oil per day. It exports, of that, about 7 million barrels of oil per day. The Strait of Hormuz has transit of about 20 million barrels of oil per day. So the Strait of Hormuz is roughly 3 times what Russia could have been. And remember, in the days right after the invasion, markets were really worried about Russian oil being embargoed. There was a whole discussion about that. So the rise in Brent, which is the global benchmark oil price, is about 70% from two weeks before the outbreak of war in the Gulf to now. On a similar time horizon back in ‘22, it was 20%. So we have roughly a 3X in terms of the rise in oil prices. So when people come to me and say “$150 or $200 for oil prices” and we’re currently at $115, roughly, then I think, “why, what’s the rationale?”

The second perspective is on the supply shortfall that we have and using price elasticity of demand to think about: “how much does the price need to rise if demand has to do all the adjusting in the short term,” which it does. And “what kind of numbers do we come up with if we make reasonable assumptions?” So I put out a Substack note today—thank you so much for reading my Substack, I’m incredibly flattered and stressed as a result— if you assume that the Strait of Hormuz goes from 20 million barrels of oil per day to 10, it’s basically oil from the Gulf is running at half of its normal capacity, and you assume a price elasticity sort of in the middle of the range that the academic literature has, which is about 0.15, then you get that this would generate a rise in oil prices of between 60 and 70%. So again, if I think about what we’re pricing in markets now versus what basic back-of-the-envelope-calculations tell you, then I think we’re roughly in the right ballpark.

You will have seen, Paul, headlines that Saudi Arabia is using a pipeline to the Red Sea that is worth about 4 million barrels of oil per day currently. Then the Iranians are obviously exporting oil. That’s around 2 million barrels of oil per day. Give or take a few oil tankers that are run by Greek ship owners who are risk-loving, I think we’re close to the 10 million barrels of oil per day.

Krugman: 20 million barrels a day was going through the strait, and it’s really hard for anything to run through the strait. So the argument is that the Saudi pipeline to the Red Sea gets some of the oil out, and the Iranians are still getting their oil out.

Two questions. The first is: elasticity. Not everybody reading this will be an economist. So, the elasticity is if the price of oil goes up 1% by how many percent does the quantity of oil that people want to burn go down? And it’s a very small number, that we know for sure. Oil is used primarily in the US, two thirds of it is for transportation. It’s really hard for people to use less in the short run. There’s not much you can do in the short run. You got the kids who got to get picked up, you need to get to work. Maybe you can cancel your vacation, but it is really hard to not burn oil in the short run. So we’ve got a small number. It’s very, very hard to know; estimates of what that elasticity is are highly uncertain because: how would you know?

I’ve followed you through some of the literature and there’s these studies which are very ingenious, and they use very clever econometrics, which worries me! I think I noticed that the existing literature also doesn’t look at the response to the Russian invasion of Ukraine, which would sort of be the closest thing to a natural experiment we’ve had lately. But the literature is older than that.

Brooks: First of all, elasticities are incredibly hard to estimate, as you say. What we really want here—and this was a little bit of the analysis that I did early on in this shock—is conditional elasticities on shock environments that are comparable to what’s going on now. The numbers that I’m using are unconditional across large amounts of time. I think my instinct is probably the same as yours that these elasticities may be off and perhaps overstate the ability of consumers to cut back on oil consumption in the very short term, especially in an environment where getting to your job is more important than normal. I think the the bottom line is, however, about an order of magnitude and that is that if you believe that the oil price is going to $200 a barrel, then implicit in that forecast is an assumption, again, for a range of elasticities that traffic out of the Persian Gulf is close to zero. That’s the key thing.

The other thing Paul, and this is just my own navel gazing, having done lots of work on sanctions, I have generally had the impression that when there is disruption to the oil sector, there are forecasts that get trotted out that are often apocalyptic. So, for example, when the United States and its G7 allies in 2022 were working on the G7 oil price cap, there was an analyst who came out with a forecast for oil to go to $380 a barrel, which was very scary. So I may be completely wrong, but I tend to think this is a war. By the way, I don’t think it was smart to go into this confrontation at all, but this is a war that is basically being fought at the discretion of the United States and of the US president. He can walk this back at any moment. And, of course, it’s true that the Iranians can keep the Hormuz Strait closed, but they’ve also been badly battered. So I think there is a path to de-escalation too.

Krugman: Just a word back on the elasticities, I’m sure you’ve done the exercises. You have this amazing three dimensional chart. The difference between the elasticity being .15 and it being 0.1, is very large. The difference between half of the normal oil getting out of the Persian Gulf and only a third of it getting out is very large. So it’s this wildly uncertain environment. Just as of overnight—before we had this conversation—the Iranians started blowing up oil facilities and so on. I guess, in response to the Israelis having blown up some of theirs; there’s a question about, what does this do if the half that was supposed to make it out of the Persian Gulf, even with the Strait of Hormuz closed? That must be a worry.

Brooks: Let me scroll things back a little bit to the bigger picture. When I hear analysts talk about options going forward, they basically talk about an escalation scenario, which includes boots on the ground by the US and Israel, or some kind of taco where the United States declares “mission accomplished” and we believe it. I think the important thing is that there is a third option.

That option, in my opinion, is to do an embargo of Iranian oil to stop oil tankers leaving Kharg Island and other Iranian ports, and to starve the Iranian regime in that way economically, without using violence and warfare. That was the motivation behind the G7 oil price cap. That was something I was a huge fan of at the time, and I work closely with Ben Harris here at Brookings, who was one of its main architects. There were a lot of problems in enforcing something as complicated as the G7 oil price cap in the case of Russia. So I think a full embargo and a blockade here is a better way to go. It’s a viable third option, especially since, the 2 million barrels of oil per day, that is something arguably that is priced at this point.

So going back to the elasticity point that you mentioned, Paul, let’s assume point one. Then a 2 million barrels of oil per day shortfall is a 20% rise in oil prices. If we go with my .15 elasticity it’s more like 13%. In a 70% oil price rise I think some or a lot of that is priced. I’m dismayed that in the policy discussion here in Washington, that is not an option.

Krugman: So you were very much an advocate of embargoes against Russia. Which has been mostly a bust. Basically because everybody wants somebody else to cut their purchases and in general the sanctions against Russia have been incredibly porous. You’ve been on that. So are you saying that this would be different because basically the United States can physically enforce the embargo? We can just stop the ships from leaving?

Brooks: Two points. The G7 oil price cap is brilliant in theory. The idea is that the cost of extraction in places like Russia or Iran is really low. So it’s around $10, $15 a barrel max, so we can give them $30 a barrel and they still have an incentive to export. The global market doesn’t have a shortfall, so the world isn’t subject to an oil price spike. The problem with the G7 oil price cap is that, in a way, it’s shades of gray. You have lots of nuances on how you actually make sure that the Russians are being paid the cap, not more, not less. How do you ensure that they don’t build up a shadow fleet of oil tankers? Which of course, they did. So there are countless ways in which that thing was massively undercounted. The embargo is basically binomial. It’s 1 or 0. I first of all think that means either there are ships running or there aren’t.

Second, I think the US Navy has a huge presence in the region. I think if the president announced “we are going to shoot at any ship that leaves with Iranian oil.” I don’t think anyone would want to test that. So I think in principle it is something that should be at least tried.

Krugman: Do you have a sense of how cash constrained the Iranian regime is? That’s part of the question, can they just ride this out for six months or a year?

Brooks: This is a great question. I have this debate on Russia all the time. Countries like Russia and Iran have savings that they’ve built up over time. This is a stock which they can obviously run down to pay for imports and technology, things that they need to run their economy. But what determines in my mind the value of their currency—the ruble or the Iranian Rial—is what’s going on in the flow in the balance of payments every day. So Iran currently has a current account surplus of around 3.5 to 4%, I’m talking about before the hostilities.

Krugman: Right off 3 to 4% of GDP. “Current account”, for our listeners, that’s the broad definition of the trade balance. It includes services and income on investments. Iran has a surplus. Surpluses do not necessarily mean strength. I think Iran is basically like Russia, as I think you’ve been quoting, “It’s a gas station masquerading as a country.” They were running substantial surpluses before, but you’re saying that that gives them some kind of cushion.

Brooks: My only twist on the John McCain statement is that Iran is a gas station masquerading as an Islamic republic.

They have a surplus of around 3.5% of GDP. Exports of oil and gas are about 15% of GDP. Imports are about 10%. So let’s think about what happens to the current account or the balance of payments if this oil and gas number goes from 15% of GDP to zero. Basically it means that the Iranian currency will fall very sharply in value very quickly. The central bank has some reserves which it can use to slow that decline.

But basically, this will be a huge shock to inflation, to financial stability and to the purchasing power of Iranians and my best guess is that it’ll be much harder for Iran to keep fighting this war than if it didn’t face such an economic shock.

Krugman: There’s a lot of things which probably you don’t know and I don’t know. I mean, nobody knows, which is just how tough this regime is. They could have rationing and controls and exchange controls and just suffer.

Hopefully we don’t go there, although you’re saying that if we were able to do your proposal that would be one route, basically attempt extreme economic pressure. They basically have no exports other than oil. So this is how they pay for whatever it is they buy. This actually gets us to the currency stuff.

Leaving aside what happens to Iran, you’ve been writing quite a lot about how this oil shock should be affecting different currencies. Why don’t you talk about that for a bit and let me throw in some of my own questions.

Brooks: So the shock after Russia invaded Ukraine was a really big shock. What we saw four years ago was basically very similar to what we’ve seen in the past two and a half weeks. Initially, markets are unpleasantly surprised. Risk aversion rises if the oil price rises. The dollar appreciates as Americans facing an environment of greater uncertainty, repatriate assets that they had held overseas. And as foreigners flocked to U.S. assets—including U.S. treasuries—as a safe haven. That is the initial phase of the shock. As markets become more comfortable with the shock and decide this is not going to be the apocalypse, but we’re just dealing with a step up in oil prices and perhaps other commodity prices. They start to appreciate or strengthen the currencies of commodity exporting countries, and they start to weaken currencies of commodity and oil importers.

So what we saw, to give you an idea, in the first quarter of 2022, Brent rose, all in all, about 40%. The Brazilian Real rose 20%. And it was by far the star performer across emerging markets. The biggest losers were countries like Turkey (which is a big energy importer) and Japan, Korea, all the big energy and importers across Asia.

Krugman: Interesting situation here for the United States, and it’s all confounded. There’s the safe haven role of the dollar, which we’re seeing despite everything—not too much politics here—that the US has been doing. Still, that sense that where you run to if the world looks like it might be coming to an end, do you run into dollar assets?

Also, the United States is a net oil exporter, which is very, very different. I cut my teeth on energy stuff, half a century ago, but that was a very different world. Where the United States was pretty import-dependent and now net exporter. But the question that I have in this is, in a way, a counterpart of the Iranian or Russian story is how much connection does fracked oil from the Permian Basin have to the US economy more broadly? Why should it matter that some guys in Texas and Oklahoma are extracting oil and selling it, when that’s not doing any immediate good to U.S. drivers? Why should it actually have any impact on the dollar? I guess that’s a question also for places like Brazil, but I’m still trying to understand how all that works.

Brooks: So let me unpack a couple things and circle back to your question on oil and should the US benefit and therefore should the dollar go up.

There are three things going on. First, a sort of short term knee jerk risk aversion thing. Second, and it’s related to the first thing, the reserve currency status of the dollar. The third is the actual impact on the economy, whether it is net positive or net negative from higher oil prices. I hope I’m not mangling that.

So the dollar went up recently because of this knee jerk risk aversion. I have no doubt that if this war ended tomorrow we would be back to a weak dollar environment. In fact, it’s my expectation once all is said and done, that this year the dollar will fall 10%, start to finish. So I think that’s the world we live in. At the same time, as this episode is reminding us, that isn’t about reserve currency status. That status has been remarkably resilient to some pretty chaotic policies in Washington, DC. There are IMF data that surveys what reserve managers do. These are called COFER data, and they basically survey the way that reserve managers give to different currencies. So these are reserve managers, for example in China or South Korea or Japan, who have large sovereign wealth funds. The remarkable thing is that the weight of the dollar in these allocations through all of last year, through this incredible policy chaos of reciprocal tariffs and at times, market dysfunction, the weight to the dollar was completely stable. So I think it’s important to remember that these sovereign wealth managers are very slow moving. They manage huge amounts of money. And these are not people who are chasing short term trends. And so the hurdle for the dollar to lose reserve currency status is very high.

Krugman: This is official or quasi official holdings of assets in different currencies. But the dollar’s international role is a lot more than that. The foreign exchange market is basically every currency against the dollar. There’s a lot of international invoicing that’s in dollars, and those things are extremely hard to move.

Let me get your reaction, I get a lot of mail, I’m the king of hate mail. But I also get a lot of not-necessarily hate mail, a lot of people are going on about, “doesn’t this just mark the end of the dollar’s international role because countries will start to price oil in something else, in Renminbi”. I have a reaction to that, but what would be your take on that?

Brooks: I have said to those emails—and I suspect I get a fraction of the volume that you get—that this is wishful thinking. The data just doesn’t bear it out. To your point, there is a longer term decline in the weight of the dollar in these reserve holdings. But that’s kind of a secular trend that reflects the growing size of other economies, vis-à-vis the GDP, and really has nothing to do with reserve currency status.

Krugman: You probably read it, there was a great—I guess now must be 60 year old—article by Charlie Kindleberger where he compared the international role of the dollar as a currency to the international role of English as the international language. There’s a lot of overlap. My answer to people who say “the dollar is about to be displaced because of what one country or another will do,” what do you think it will take for us to start doing international business in Mandarin?

Brooks: I love that. To give you another anecdote, the reason I’ve spent a lot of time working on sanctions, and when Ben Harris here at Brookings and I have looked at the efficacy of the US sanctions versus, for example, sanctions by the Europeans, the EU, the UK or any other advanced economy, then it is US sanctions that vastly outperform those of other jurisdictions. The reason is the dollar. If you do business with a sanctioned entity, you are at risk of secondary sanctions. That means losing access to U.S payment networks, that’s lethal for any major business. So Indian oil refiners, for example, when the US in October announced sanctions on Rosneft and Lukoil, the two biggest oil producers in Russia, they were in a panic because they knew this was the end of them buying Russian oil. The recent waivers by the US Treasury—basically waived those sanctions for 30 days at a time—are precisely to alleviate the shortage in oil and enable Indians to buy Russian oil again. So we’ve come sadly full circle. But it’s all about the dominance of the dollar.

Krugman: Henry Farrell and Abe Newman had this book, Underground Empire, about weaponized interdependence. It’s actually scary just how much the role of the dollar and therefore the role of U.S. banks and their centrality, how much the U.S. can basically turn off the taps. I guess their original example was, in fact, sanctions against Iran.

But the ability to basically exclude countries—although that hasn’t worked too well against the Russians, so far. They found workarounds.

Brooks: The Russians is a really depressing story. I know we have to get back to the US and Brazil, but just on the Russia example, it’s very dear to my heart because I think it is about the West learning an important lesson. The main reason that Russia’s sanctions weren’t successful is because we in the West have business interests that hate sanctions. So they did a lot of lobbying, both ex ante before things were set up. Then undermining ex-post all in the name of doing business. So the key variable in the price cap on Russia was the level of the cap. The lower you set that the more you hurt Russia. And it was set at 60, which was basically where Russian oil was trading. So it had no negative discernible effect on Russia. Then the reason Russia was able to build a shadow fleet is because Western shipowners sold them the oil tankers. This is insane. We have turned a blind eye to our own businesses, making money, and that has to be fixed.

Krugman: I think I’ve been getting this from you, although I may be wrong, that there were also sanctions on sales to Russia. We bash the current U.S government a lot for understandable reasons, but the Europeans have been just awful on that. The explosion of exports to various Stans that cannot possibly be actually exports to—I might be unfair to the Kazakhs—but Kazakhstan, Uzbekistan, it’s all obviously being trans-shipped to Russia. It was Lenin, I think, that said “the capitalists will sell us the noose with which we will hang them.” And this has been very, very true.

But you do believe that the sanctions against Iran as an alternative to bombing the hell out of them is actually workable, more so than the sanctions against Russia turned out to be?

Brooks: Absolutely. I think there are two main points. Sanctions are infinitely preferable to actual war. So if we have peaceful economic means, we should use them. Second, we should learn the lesson from what went wrong in Russia. And in my view, that is to keep things super simple. That means an embargo and not finicky price cap things.

Krugman: Big businesses, especially financial operators, but business in general are very good at outsmarting any complicated scheme. Brazil is likely to be a big beneficiary here—just thinking about silver linings. Because they are an oil exporter now and a commodity exporter.

On commodities, just a question. This is not going to be just oil, right? I had no idea before this started, about fertilizer.

Brooks: Fertilizer, agriculture products in general. So basically Brazil and a lot of Latin America exports energy and exports agriculture, soybeans, etc.. China imports a quarter of all its food from Brazil.

Krugman: Wow. I didn’t know that.

Brooks: Brazil’s role in global food markets is massive. Now, it’s possible that the 2022 experience flatters Brazil because Ukraine was also a big grain exporter. And so perhaps we won’t see the same kind of rise in food prices now. But I think in general the news for Latin America in particular and Brazil in particular is very good. When we think about the drivers of exchange rates, we in traditional models tend to have two big variables. One is growth relative to your trading partners, which is often kind of a proxy for productivity. Then what we call the terms of trade, which is the ratio of your export prices to the prices for goods that you import. When your export prices go up relative to your import prices, that’s basically a windfall. In principle, that should filter through to the economy and give consumers more purchasing power, give companies more money to invest and so forth. So ultimately, the positive news for Brazil and other commodity exporters is that this should be a windfall in the short term and then hopefully translates into growth over the medium term. And that’s basically what we’ve seen for Brazil, for the United States, it’s obviously a slightly different calculation because relative to Brazil, the oil sector is somewhat smaller, and we have so many consumers who depend on oil. So the net effect on the US economy is probably nowhere near as positive and probably negative compared to Brazil.

Krugman: I’m sitting in a largely oil independent economy because I’m in the middle of Manhattan right now, but that’s a very isolated part of the United States in that sense. You talked about the dollar, you still think that the dollar will be weaker at the end of this year than at the beginning, and you’ve been talking about debasement. So lots of discussion about the debasement trade, I think a fair bit of hysteria. But also something real. So tell me about debasement and where you think we are on that, or where we will be if-and-when the dust settles from this craziness?

Brooks: Gold prices are up around 50% since August. Silver prices and other precious metals are up much more. This mania basically started last year and the later part of the summer. It is in my mind, a bubble. But all bubbles have some element of underlying fear that is rational. In this case, if you look at when gold had big moves up, it was in the immediate aftermath of Jackson Hole, which is the Federal Reserve’s big research conference in Wyoming that was on around August 22nd. Jay Powell, the chair of the Fed, gave a speech that basically said, “okay, we know inflation is high, but we’re going to start an easing cycle anyway.” And it was after that speech that gold prices really started moving. The second big Fed catalyst was the December cut, which really re-energized the rise in precious metals prices. So there’s clearly something and I have no idea what people who buy gold are thinking. I don’t trade any of that stuff. But there’s clearly a link to what the Fed is doing and a perception that it’s easing when it shouldn’t, perhaps, and that it’s increasingly under political influence.

Krugman: That’s an interesting point. So you think that the markets and particularly the markets for gold, and precious metals are in fact starting to build in the belief that Trump is going to eventually succeed in politicizing the Fed?

Brooks: Yes. There’s a lot of pushback to my view, this is hotly debated. So people say, “break even inflation hasn’t risen. So this is what the market prices for inflation over the medium term.” So people look at things like five year or five year forward break even inflation. So that’s what markets price for five years. In five years time. Before this oil shock was around 2.5%. So the argument was, “well that’s a normal number no big deal.” I think that misses an important nuance, which is that these break even inflation rates, they trade very closely with spot oil prices and oil prices. Ever since Trump came into office, it had been falling until this recent oil shock and break even inflation didn’t follow oil prices down. And so, in my opinion, a risk premium was building. You can also look at other places where markets are starting to trade differently from before. So I think in reality there are signs that Fed credibility is in question in ways that it hasn’t been.

Krugman: It is funny. I mean, not to get into crypto because we’d go on too long, but there was a lot of talk about how bitcoin would be the new gold. And it turns out that gold is the new gold, which has been a big disappointment. You’ve actually, in between the Persian Gulf, you’ve been talking about debt. That’s been the dog that hasn’t barked, ten years ago I was saying, “well, it shouldn’t be barking,” do you actually think that we’re likely to have serious debt? Debt problems in the advanced world, looking forward—you’ve been talking about Japan, particularly.

Brooks: I do, I think any reasonable person will admit that fiscal policy, not just in the United States in many places, is kind of out of control. Across the board, of course, there’s differences across individual countries, but on aggregate, deficits after Covid have remained much wider than deficits before the pandemic. The narrative before the pandemic was inflation will always be low. And therefore we have under-stimulated and issuing lots of debt is a no brainer. I fear that that mindset has carried over to now. I mean, in the United States we’re running debt issuance per year, 6-7% GDP.

Krugman: You know, it’s amazing that we’re doing that at a time where until three weeks ago, it was a very favorable environment. No war, no emergency.

Brooks: So when people say, rightly, “this is the dog that hasn’t barked. You’re crying wolf. This just won’t happen.” I point to Japan, which is just a fascinating case study because debt is 240% of GDP, gross debt. Interest rates are heavily managed. The Bank of Japan remains a gross buyer even today. It is, however, constantly and increasingly being torn between capping interest rates to preserve fiscal sustainability and letting interest rates rise to prevent the yen from depreciating more and more and more. And so the ultimate tension when in 2020, MMT was a big topic of debate, we can issue lots of debt because we determine our own interest rates.

Krugman: For listeners, MMT is modern monetary theory. Not going to jump down that rabbit hole, but we could.

Brooks: But the idea was, we can administer interest rates basically with our central banks and issue lots of debt and interest rates won’t rise. The idea was that that’s not great for your currency and your exchange rate might go down the toilet. And Japan is a case where that is happening. The yen keeps falling. And this idea that debt doesn’t matter really is an illusion. So I think Japan is perhaps the most obvious place that is in trouble at the moment. But there are many others: the UK, France, Italy, Spain. The list of countries with low debt is small and shrinking, and those countries are being rewarded in markets more and more. So Switzerland, Sweden, all of the Nordics have done over the past year tremendously well.

Krugman: Okay. I wish things were still a lot less exciting, but it is quite something. Thanks for talking to me.

Brooks: It was a real pleasure.

Krugman: I’m sure we’ll be in touch on the latest emergency soon. Take care then.

Brooks: Take care.

Profiling Hacker News users based on their comments

Here's a mildly dystopian prompt I've been experimenting with recently: "Profile this user", accompanied by a copy of their last 1,000 comments on Hacker News.

Obtaining those comments is easy. The Algolia Hacker News API supports listing comments sorted by date that have a specific tag, and the author of a comment is tagged there as author_username. Here's a JSON feed of my (simonw) most recent comments, for example:

https://hn.algolia.com/api/v1/search_by_date?tags=comment,author_simonw&hitsPerPage=1000

The Algolia API is served with open CORS headers, which means you can access the API from JavaScript running on any web page.

Last August I had ChatGPT build me a simple tool for hitting that API for any user which fetches their comments and gives me a mobile-friendly "copy to clipboard" button. I've since tweaked it a few times with Claude.

I can then paste the whole lot into any LLM - these days I mostly use Claude Opus 4.6 for this - and prompt "profile this user" to see what it can figure out.

It's startlingly effective! It feels invasive to quote the profile of another user here, so I'll show you what it produces for me:

This is Simon Willison — a prolific, independent software developer, blogger, and one of the most visible voices in the AI-assisted coding space. Here's a profile drawn from his comments:

Professional identity: Independent developer and writer. Co-creator of Django, creator of Datasette and many other open source tools. On the board of the Python Software Foundation. Runs a heavily-trafficked personal blog at simonwillison.net with multiple content types (entries, link blog, notes, quotations). Monetizes through GitHub sponsors, ethical ads, and occasional consulting — not through any AI company payroll, a point he defends frequently.

Core thesis on AI coding: He's the leading evangelist for what he calls "agentic engineering" — using coding agents (Claude Code, Codex CLI) as productivity multipliers for skilled developers. His central argument: LLMs don't replace programmers, they amplify existing expertise. He's writing a guide/book on this at simonwillison.net/guides. He insists these tools are deceptively difficult to use well, and that most people who report bad results simply haven't learned the craft yet.

Working style: He does most of his programming from his iPhone via Claude Code for web, often while riding BART, walking the dog, or sitting in the garden. He embraces "YOLO mode" (auto-approving agent actions) and frequently runs 2-3 agent sessions in parallel. He starts every session with "run uv run pytest" to anchor the agent in TDD. He's on the $200/month Claude Max plan.

Key technical interests: Sandboxing and security (WebAssembly, Pyodide, sandbox-exec, Firecracker), SQLite (deeply), Python packaging and tooling (uv, PyPI distribution tricks), browser-in-a-browser experiments (v86, WASM Linux), and local LLM inference. He's been learning Go "by osmosis" through coding agents.

Security consciousness: He coined the term "prompt injection" and the "lethal trifecta" (access to private data + exposure to untrusted input + ability to take actions). He's genuinely alarmed about OpenClaw/Clawdbot security risks and has been predicting a headline-grabbing prompt injection attack "in the next six months" for over two years — acknowledging it keeps not happening while insisting the risk is real. He references the "normalization of deviance" pattern frequently.

Personality and debate style: Energetic, combative when challenged, but generally good-natured. He engages heavily in HN comment threads — sometimes dozens of replies in a single discussion. He's transparent about his biases and maintains a public disclosures page. He pushes back hard against what he sees as motivated reasoning from AI skeptics, but also against AGI hype ("I refer to that as the science fiction version of AI"). He gets visibly frustrated by sock-puppet accounts and bad-faith criticism. He has a self-deprecating streak ("my stupid pelican benchmark").

Recurring themes across hundreds of comments:

  • "Two things can be true at the same time" — he holds nuanced positions
  • Tests are for productivity, not just quality
  • The November 2025 model releases (Opus 4.5, GPT-5.2) were a genuine inflection point
  • Code review is the biggest bottleneck in agent-assisted workflows
  • "Cognitive debt" is a real and unsolved problem
  • The best engineering practices (tests, docs, CI/CD, clean code) also make agents work better
  • He's deliberately trying to "teach people good software engineering while tricking them into thinking the book is about AI"

Personal interests mentioned: Niche museums (runs niche-museums.com), New Zealand kākāpō parrots, cooking food from countries he's never visited, chickens in his garden. Lives in Half Moon Bay, CA near a major pelican roost. The pelican-on-a-bicycle SVG test is his signature LLM benchmark.

In a sentence: A deeply experienced, independently-minded developer who's genuinely excited about AI coding tools, fights hard against both uncritical hype and reflexive dismissal, and is trying to professionalize how the industry uses these tools — while also worrying loudly about the security implications almost nobody else takes seriously enough.

This all checks out! I ran this in Claude incognito mode to hopefully prevent Claude from guessing that I was evaluating myself and sycophantically glazing me - the tone of the response it gave here is similar to the tone I've seen against other accounts.

I expect it guessed my real name due to my habit of linking to my own writing from some of my comments, which provides plenty of simonwillison.net URLs for it to associate with my public persona. I haven't seen it take a guess at a real name for any of the other profiles I've generated.

It's a little creepy to be able to derive this much information about someone so easily, even when they've shared that freely in a public (and API-available) place.

I mainly use this to check that I'm not getting embroiled in an extensive argument with someone who has a history of arguing in bad faith. Thankfully that's rarely the case - Hacker News continues to be a responsibly moderated online space.

Tags: hacker-news, ai, generative-ai, llms, ai-ethics

Using Git with coding agents

Agentic Engineering Patterns >

Git is a key tool for working with coding agents. Keeping code in version control lets us record how that code changes over time and investigate and reverse any mistakes. All of the coding agents are fluent in using Git's features, both basic and advanced.

This fluency means we can be more ambitious about how we use Git ourselves. We don't need to memorize how to do things with Git, but staying aware of what's possible means we can take advantage of the full suite of Git's abilities.

Git essentials

Each Git project lives in a repository - a folder on disk that can track changes made to the files within it. Those changes are recorded in commits - timestamped bundles of changes to one or more files accompanied by a commit message describing those changes and an author recording who made them.

Git supports branches, which allow you to construct and experiment with new changes independently of each other. Branches can then be merged back into your main branch (using various methods) once they are deemed ready.

Git repositories can be cloned onto a new machine, and that clone includes both the current files and the full history of changes to them. This means developers - or coding agents - can browse and explore that history without any extra network traffic, making history diving effectively free.

Git repositories can live just on your own machine, but Git is designed to support collaboration and backups by publishing them to a remote, which can be public or private. GitHub is the most popular place for these remotes but Git is open source software that enables hosting these remotes on any machine or service that supports the Git protocol.

Core concepts and prompts

Coding agents all have a deep understanding of Git jargon. The following prompts should work with any of them:

To turn the folder the agent is working in into a Git repository - the agent will probably run the git init command. If you just say "repo" agents will assume you mean a Git repository.

Create a new Git commit to record the changes the agent has made - usually with the git commit -m "commit message" command.

This should configure your repository for GitHub. You'll need to create a new repo first using github.com/new, and configure your machine to talk to GitHub.

Or "recent changes" or "last three commits".

This is a great way to start a fresh coding agents session. Telling the agent to look at recent changes causes it to run git log, which can instantly load its context with details of what you have been working on recently - both the modified code and the commit messages that describe it.

Seeding the session in this way means you can start talking about that code - suggest additional fixes, ask questions about how it works, or propose the next change that builds on what came before.

Run this on your main branch to fetch other contributions from the remote repository, or run it in a branch to integrate the latest changes on main.

There are multiple ways to merge changes, including merge, rebase, squash or fast-forward. If you can't remember the details of these that's fine:

Agents are great at explaining the pros and cons of different merging strategies, and everything in git can always be undone so there's minimal risk in trying new things.

I use this universal prompt surprisingly often! Here's a recent example where it fixed a cherry-pick for me that failed with a merge conflict.

There are plenty of ways you can get into a mess with Git, often through pulls or rebase commands that end in a merge conflict, or just through adding the wrong things to Git's staging environment.

Unpicking those used to be the most difficult and time consuming parts of working with Git. No more! Coding agents can navigate the most Byzantine of merge conflicts, reasoning through the intent of the new code and figuring out what to keep and how to combine conflicting changes. If your code has automated tests (and it should) the agent can ensure those pass before finalizing that merge.

If you lose code that you are working on that's previously been committed (or saved with git stash) your agent can probably find it for you.

Git has a mechanism called the reflog which can often capture details of code that hasn't been committed to a permanent branch. Agents can search that, and search other branches too.

Just tell them what to find and watch them dive in.

Git bisect is one of the most powerful debugging tools in Git's arsenal, but it has a relatively steep learning curve that often deters developers from using it.

When you run a bisect operation you provide Git with some kind of test condition and a start and ending commit range. Git then runs a binary search to identify the earliest commit for which your test condition fails.

This can efficiently answer the question "what first caused this bug". The only downside is the need to express the test for the bug in a format that Git bisect can execute.

Coding agents can handle this boilerplate for you. This upgrades Git bisect from an occasional use tool to one you can deploy any time you are curious about the historic behavior of your software.

Rewriting history

Let's get into the fun advanced stuff.

The commit history of a Git repository is not fixed. The data is just files on disk after all (tucked away in a hidden .git/ directory), and Git itself provides tools that can be used to modify that history.

Don't think of the Git history as a permanent record of what actually happened - instead consider it to be a deliberately authored story that describes the progression of the software project.

This story is a tool to aid future development. Permanently recording mistakes and cancelled directions can sometimes be useful, but repository authors can make editorial decisions about what to keep and how best to capture that history.

Coding agents are really good at using Git's advanced history rewriting features.

Undo or rewrite commits

It's common to commit code and then regret it - realize that it includes a file you didn't mean to include, for example. The git recipe for this is git reset --soft HEAD~1. I've never been able to remember that, and now I don't have to!

You can also perform more finely grained surgery on commits - rewriting them to remove just a single file, for example.

Agents can rewrite commit messages and can combine multiple commits into a single unit.

I've found that frontier models usually have really good taste in commit messages. I used to insist on writing these myself but I've accepted that the quality they produce is generally good enough, and often even better than what I would have produced myself.

Building a new repository from scraps of an older one

A trick I find myself using quite often is extracting out code from a larger repository into a new one while maintaining the key history of that code.

One common example is library extraction. I may have built some classes and functions into a project and later realized they would make more sense as a standalone reusable code library.

This kind of operation used to be involved enough that most developers would create a fresh copy detached from that old commit history. We don't have to settle for that any more!

Tags: coding-agents, generative-ai, github, agentic-engineering, ai, git, llms

When the dog doesn't bark

Do you understand inflation targeting? OK, how about this claim:

During 2025, 2% inflation was an appropriate target for Fed policy.

Is that correct? I don’t think so, as the Fed has a flexible inflation target where they try to look through supply shocks. So how about this claim:

During 2025, it was appropriate to allow inflation to run slightly above 2% due to supply shocks.

Is that correct? Again, I don’t think so. There were supply shocks during 2025, but they were mostly positive supply shocks. Because Fed policymakers try to “look through” supply shocks and focus on aggregate demand, it was appropriate for inflation to run below 2% during 2025. This is a claim that makes sense:

During 2025, PCE inflation was 2.9%. Given the Fed’s announced monetary policy goals and given that 2025 was a year of falling oil prices and rapid productivity gains, an appropriate inflation rate would have probably been in the 1.5% to 1.8% range. Thus, inflation was more than one percentage point too high in 2025.

How often do you see Fed policy explained in that fashion? How about “never”? That tells me that hardly anyone actually understands the meaning of a 2% inflation target that looks through unusual movements in aggregate supply. Many people understand that it is appropriate for inflation to run above 2% during years when there are adverse supply shocks. Very few people—even very few economists—seem to understand that inflation should run below 2% during other years, that is, periods not marred by adverse supply shocks.

In other words, Fed policy has recently been even worse than it might look if you focus solely on recent PCE inflation rates.

Now we are in 2026, and it is possible (but not yet certain), that this will end up being a year of adverse supply shocks, akin to 2022. If it is, then it would be appropriate for inflation to exceed 2% in 2026. The real problem was 2025, when inflation ran 2.9% during a time when it should have been well below 2%.

This sort of biased reasoning occurs in many areas of life. I see sports fans excusing the poor performance of a team by referring to “injuries”, even during seasons when the team’s level of injuries doesn’t exceed the league average. And sports fans often overlook the fact that when their team is unusually healthy, it ought to be doing even better than usual. Subconsciously, they tend to regard 100% health as normal, and as a result they are usually overly optimistic about the potential of their team.

Consider fiscal policy, where the budget deficit has been running at a rate of around 6% of GDP over the past three years. Is that sustainable? You might be tempted to assume the deficit continues at a rate of 6% of GDP and then look at what happens to the ratio of total public debt to GDP going forward.

Unfortunately, it is easy to overlook the dog that didn’t bark. The last three years saw no recessions, no pandemics and no wars. They were unusually good years from a fiscal perspective. Even if deficits of 6% of GDP were just barely sustainable (and they probably are not), there would be no reason to assume that our current fiscal trajectory is sustainable.

No reason, that is, unless you believe the end of history has arrived and that we’ll never again have a recession, war or pandemic. The past three years have seen peace and prosperity and hence are not at all typical.

Someday soon, the dog may resume his barking.

Let's Talk About Fertilizer

For more videos, visit my YouTube channel.

Of war, fertilizer, airlines and force majeure.

Transcript (AI-generated)

So let’s talk for a few minutes about fertilizer. Hi, I’m Paul Krugman. Fertilizer is not usually one of my things, but it’s important in what’s happening right now. And it’s also part of trying to understand just how big a mess we are in as a result of this unplanned, ill-conceived war.

So it’s Saturday, it’s three weeks and one day since the bombing began. Donald Trump is now, the story keeps changing, we’re either going to apply force and devastate Iran or our job is done and it’s up to other countries to reopen the Strait of Hormuz because we don’t rely on it, says the president, which is, first of all, it turns out not to be true. The United States does not import significant amounts of crude oil coming through the Strait of Hormuz, but we do import fertilizer, which I wasn’t aware of. Lots of things that are coming to light now that we’re facing the crisis.

The reason we are getting fertilizer, mostly from Qatar, is that the fertilizer is made, urea and some other things are made from natural gas. Natural gas can be exported, is exported, in large quantities from the Persian Gulf, or was until this war began. That’s expensive. You have to super cool it and liquefy it and ship it out through special terminals and special ships.

And, you know, it can be done and it’s become really critical to a large part of the world. But the other thing you can do with the natural gas that’s available in the Persian Gulf area is convert it into fertilizer, which is a lot easier to ship.

And so a lot of the world’s fertilizer turns out to come from that area and normally get shipped through the strait. And the United States, we’re a great agricultural nation, and we do import significant amounts of fertilizer. We import a large share of our fertilizer, and some of it from the Persian Gulf, a significant share of that.

So this is having a direct impact on U.S. farmers. The price of urea is way, way up. And there’s something that I’ve recently been alerted to, which is quite scary. The planting season is coming up, says somebody who has no idea what agricultural life is like, but that’s what I’m told.

And the farmers have long since contracted for their fertilizer. They’ve already paid or at least signed the contracts. The prices are locked in. But will there actually be fertilizer available? It’s not at all hard to imagine that the suppliers will declare force majeure, say there’s a war on, which is normally a valid excuse for backing out of contracts. and simply fail to supply the fertilizer. That would be a real catastrophe.

By the way, there are other places where that’s going to matter. The airlines quite often, you know, airlines cancel flights all the time, and sometimes they declare force majeure and cancel flights and don’t even compensate, although that I think is less of an issue right now. The price of jet fuel has risen. At last I checked 88% since the crisis began. Airlines, you know, they’re already talking about cutting back schedules, not about canceling. Well, it’s not entirely clear.

And, you know, I’m as insulated as anybody can get from all of this, but Robin and I do have some travels planned starting in late April. A mixture of pleasure and business, and some of it we really need to be in certain places, and it seems entirely possible that flights will be canceled. You know, we may or may not receive compensation, which I don’t really care about, but just not being able to get to the places that I have promised to be. would be a really serious disruption. Now this is trivial compared with farmers are facing potential financial ruin, but this is just an illustration of the disruptions.

And of course, at a fundamental level, saying that because the United States doesn’t buy its oil from the Persian Gulf, that therefore we are insulated, that this doesn’t matter to us. I mean, take a look at your gas station. Gas prices are up about $1 a gallon since the war began.

Wholesale gas prices are up about $1.20 a gallon, so this is going to get worse. Diesel is up even more. So the fact that the United States actually produces more oil than it consumes is pretty much irrelevant.

If you want to ask how does the U.S. economy get affected, well, the economy is people, like Soylent Green.

I mean, the economy is people and most people in the United States are significantly adversely affected by the spillover from this war. Now, oil companies, particularly oil refiners, who seem to be seeing a big explosion in their margins, they’re doing well, but what good does that do the rest of us? It’s not as if the U.S. has any fiscal measures in place to capture those gains. So this is in fact, this is hitting the United States, it’s hitting all of us quite hard, and it may be actually kind of catastrophic because plans, plans to travel, never mind, but plans to plant crops may be seriously endangered by all of this.

Has anybody told Trump about this? From everything we’re reading, the answer is probably not. Basically, we’re in a situation where the courtiers don’t tell the emperor that he has no clothes and don’t tell him that actually war in the Persian Gulf really hurts the United States a lot, too. So, you know, God knows.

By the way, I have no idea how this ends. I don’t even know what I would do at this point. I mean, take a time machine and go back and not do this, but now it’s going to be really, really ugly. And have a nice weekend.

Some more slow take-off, driven by start-ups

So far, however, the predictions that the mass automation of coding will leave outsourcing firms obsolete seem overblown. Their clients often hope AI will create huge productivity gains by, for example, using the technology to quickly and cheaply build a new internal HR tool. But such improvements in productivity are only possible in “greenfield” environments with “clean architecture”, argues Atul Soneja, chief operating officer at Tech Mahindra, an IT firm. Deploying AI in “brownfield” environments—with legacy code, a lack of documentation and multiple systems that must all continue to operate in real time—is far trickier. In the end, clients often realise that their AI dreams were too ambitious and end up hiring as many outsourced coders as before, say executives.

What is more, the AI boom may present an opportunity for the consultancy arms of India’s outsourcers. They argue that they can now fulfil more of a strategic role for their clients: getting the most out of AI requires understanding all of the context around the problem, something that consultants with experience across businesses can offer. Nandan Nilekani, one of the founders of Infosys, reckons that such services related to AI could be worth $300bn-400bn by 2030.

Here is more from The Economist.

The post Some more slow take-off, driven by start-ups appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

Reuters: ‘Amazon Plans Smartphone Comeback More Than a Decade After Fire Phone Flop’

Greg Bensinger, reporting for Reuters:

The latest effort, known internally as “Transformer,” is being developed within its devices and services unit, according to four people familiar ​with the matter. The phone is seen as a potential mobile personalization device that can sync with home voice assistant Alexa and serve as a conduit to Amazon customers throughout the day, the people said. [...]

As envisioned, the new phone’s personalization features would make buying from Amazon.com, watching Prime Video, listening to Prime Music or ordering food from partners like Grubhub easier than ever, the people said. They asked for anonymity because they were not authorized to discuss internal matters.

The problem with this pitch is that it’s not hard at all to buy from Amazon.com, watch Prime Video, listen to Prime Music, or order food from Grubhub using the phones we already have. All of those things are ridiculously easy. I mean, I get it. On an Amazon phone, your Amazon ID would be your primary ID for the system. So those Amazon services would all just work right out of the box. But you can’t get people to switch from the thing they’re used to (and, in the case of phones, especially iPhones, already enjoy) unless you’re pitching them on solving problems. No one has a problem buying stuff or using Amazon services on the phone they already own.

A key focus of the Transformer project has been integrating artificial intelligence capabilities into the device, the people said. That could eliminate the need for traditional app stores, which ​require downloading and registering for applications before they can be used.

This is just nonsense. No matter how good Amazon’s AI integration might be, it isn’t going to replace the apps people already use. If you use WhatsApp, you need the WhatsApp app. If you want to watch video on Netflix, you need the Netflix app. If you surf Instagram and TikTok, you need those apps. If Amazon tries shipping a phone without any of those apps — let alone without all of them — this new “Transformer” phone will be a bigger laughingstock than the Fire phone was a decade ago. And we’re all still laughing at the dumb Fire phone. Which means they can’t eliminate “traditional app stores”.

People aren’t clamoring for the elimination of app stores. People like app stores. If Amazon, or anyone else, is going to introduce a new type of “AI-first” phone to disrupt the iPhone/Android duopoly, it has to offer something amazingly appealing. Nothing in Reuters’s description of Transformer fits that description. Also, it’s not like Amazon has market-leading AI. At the moment that feels like a three-way game between OpenAI, Anthropic, and Google.

 ★ 

How much more will oil prices have to go up?

[Robin] Brooks: So let me give you two ways of thinking about what’s going on, both of them are really about trying to think about what kind of risk premia need to be priced in oil, given all the massive uncertainty that we have. The first way that I’ve been thinking about this is—I spent a lot of time working on Ukraine and Russia and sanctions after the invasion four years ago. Russia produces about 10 million barrels of oil per day. It exports, of that, about 7 million barrels of oil per day. The Strait of Hormuz has transit of about 20 million barrels of oil per day. So the Strait of Hormuz is roughly 3 times what Russia could have been. And remember, in the days right after the invasion, markets were really worried about Russian oil being embargoed. There was a whole discussion about that. So the rise in Brent, which is the global benchmark oil price, is about 70% from two weeks before the outbreak of war in the Gulf to now. On a similar time horizon back in ‘22, it was 20%. So we have roughly a 3X in terms of the rise in oil prices. So when people come to me and say “$150 or $200 for oil prices” and we’re currently at $115, roughly, then I think, “why, what’s the rationale?”

The second perspective is on the supply shortfall that we have and using price elasticity of demand to think about: “how much does the price need to rise if demand has to do all the adjusting in the short term,” which it does. And “what kind of numbers do we come up with if we make reasonable assumptions?” So I put out a Substack note today—thank you so much for reading my Substack, I’m incredibly flattered and stressed as a result— if you assume that the Strait of Hormuz goes from 20 million barrels of oil per day to 10, it’s basically oil from the Gulf is running at half of its normal capacity, and you assume a price elasticity sort of in the middle of the range that the academic literature has, which is about 0.15, then you get that this would generate a rise in oil prices of between 60 and 70%. So again, if I think about what we’re pricing in markets now versus what basic back-of-the-envelope-calculations tell you, then I think we’re roughly in the right ballpark.

That is from his interview with Paul Krugman.  Via Luis Garicano.

The post How much more will oil prices have to go up? appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

Saturday 21 March 1662/63

Up betimes and to my office, where busy all the morning, and at noon, after a very little dinner, to it again, and by and by, by appointment, our full board met, and Sir Philip Warwick and Sir Robert Long came from my Lord Treasurer to speak with us about the state of the debts of the Navy; and how to settle it, so as to begin upon the new foundation of 200,000l. per annum, which the King is now resolved not to exceed. This discourse done, and things put in a way of doing, they went away, and Captain Holmes being called in he began his high complaint against his Master Cooper, and would have him forthwith discharged. Which I opposed, not in his defence but for the justice of proceeding not to condemn a man unheard, upon [which] we fell from one word to another that we came to very high terms, such as troubled me, though all and the worst that I ever said was that that was insolently or ill mannerdly spoken. When he told me that it was well it was here that I said it. But all the officers, Sir G. Carteret, Sir J. Minnes, Sir W. Batten, and Sir W. Pen cried shame of it. At last he parted and we resolved to bring the dispute between him and his Master to a trial next week, wherein I shall not at all concern myself in defence of any thing that is unhandsome on the Master’s part nor willingly suffer him to have any wrong. So we rose and I to my office, troubled though sensible that all the officers are of opinion that he has carried himself very much unbecoming him.

So wrote letters by the post, and home to supper and to bed.

Read the annotations

Artemis 2 returns to the pad for April launch attempt

SLS Artemis 2 rollout

The Artemis 2 launch vehicle and spacecraft have returned to the launch pad for a launch as soon as April 1.

The post Artemis 2 returns to the pad for April launch attempt appeared first on SpaceNews.

Reading List 03/21/26

Cargo ship Marine Angel navigating the Chicago River in 1953. Via History Calendar.

Welcome to the reading list, a weekly roundup of news and links related to buildings, infrastructure, and industrial technology. This week: damage to the Ras Laffan LNG facility, housing bubble risks, North Korea’s naval production, Bezos’ $100 billion for manufacturing automation, and more. Roughly 2/3rds of the reading list is paywalled, so for full access become a paid subscriber.

War in Iran

Ras Laffan, the world’s largest LNG facility in Qatar, was extensively damaged by an Iranian missile, and production has been completely shut down. The facility is responsible for something like 20% of the world’s supply of LNG, as well as for a third of global helium supply, which is used for semiconductor manufacturing. [Bloomberg] [CNBC]

Oil shipments from the UAE’s port of Fujairah have declined by two-thirds thanks to Iranian drone attacks. [Lloyds List]

To try and address rising oil prices following the closure of the Strait Hormuz, the Trump Administration has waived the Jones Act (which requires transportation between US ports to be done by US ships) for 60 days. [Reuters] It also invoked the Defense Production Act to order oil drilling to resume off the coast of California. [LA Times]

China tries to entice Taiwan to reunify by offering it energy security in the face of Middle East oil disruptions. [Reuters] And BYD dealerships are seeing a surge of interest in EVs. [Bloomberg]

Urbanist Richard Florida wonders if the war in Iran means the end of Dubai. Making your city a haven for the global elite means it’s relatively easy for them to relocate somewhere else if things turn south. “Dubai, which sits near the Strait of Hormuz, was supposed to be safe. Instead, it has been under attack by Iran since Feb. 28. More than 260 ballistic missiles and over 1,500 drones have been detected over the United Arab Emirates; most have been intercepted, but their percussive booms have become part of the city’s soundscape. The city that had spent decades billing itself as a sleek sanctuary — luxe, apolitical, income-tax-free, floating above and apart from the fractious region around it — was suddenly no longer insulated.” [NYT]

Housing

Swiss investment bank UBS has a report on which cities are at the highest risk of having a housing bubble, which they estimate by looking at trends in home prices, rents, and average incomes. Miami occupies the number 1 spot, followed by Tokyo and Zurich. [UBS]

Wired has an article about RealToken, which aims to “democratize access to real estate investment” by selling tokens representing shares of ownership in real estate properties. Apparently this has involved buying a bunch of dilapidated Detroit real estate and not maintaining it properly. “Last summer, the City of Detroit sued RealT and its founders, alleging “hundreds of blight violations.” Dorris’ property was one of many that city inspectors declared unfit for habitation. He told me that while his previous landlord wasn’t perfect, sometimes leaving Dorris to organize repairs, his building has deteriorated markedly since RealT entered the picture. The smoke detectors are missing, and the bathtub has no hot water, inspectors found. “The only way of washing is me standing over my sink,” says Dorris. “There are rats in the downstairs, there are squirrels in the upstairs.”” [Wired]

Marginal Revolution on how Denmark avoids the mortgage lock-in problem — where when interest rates rise, homeowners are reluctant to sell their home because their new one will have a higher interest rate mortgage. The Danish system has mortgages backed by a bond, which can be bought to pay off the mortgage. When interest rates rise, the price of the bond falls, incentivizing homeowners to purchase them. [Marginal Revolution]

Last Friday the Trump Administration released an executive order aimed at removing various regulatory barriers that add to the cost of building homes. [Whitehouse]

Read more

Links 3/21/26

Links for you. Science:

Inhibition of multidrug-resistant Staphylococcus aureus by commensal bacterial species from the human nose
Cervical cancer rates higher in states with low HPV vaccination rates
A jumbo cyanophage encodes the most complete ribosomal protein set in the known virosphere
A new mRNA antigen vaccine induces potent B and T cell responses and in vivo protection against SARS-CoV-2
Love Island: Rare berry bonanza spurs Kākāpō baby boom
As Paralympics approach, U.S. skier Sydney Peterson balances training and research

Other:

No Quarter. And not one more inch.
‘Nazi heaven’: Inside Miami campus Republicans’ racist group chat (at this point, one must assume that any Republican operative under forty is a full-tilt bigot)
Says It All
RFK Jr’s Pick For Surgeon General Cashed In Promoting Companies With a History of Unsafe Products
In 2009, at the height of the Gulf War, the Marines barred him from active duty. Platner claims it was his forearm tattoos. But his one forbidden tattoo was the Nazi symbol on his chest. He knew – and he left the Marines rather than give it up.
Blue states push to ban ICE at the polls amid federal voter intimidation fears
Adding Up What Urban Highways Really Cost
Republican senator pulls some sh-t by anointing his successor
Wilson Building Bulletin: Moves toward transparency for federal agents. Also: A proposed ballot initiative on a foie gras ban advances, and a new tax may come for disposable wipes.
MPD Asst. Chief Andre Wright Put On Administrative Leave. Wright’s wife, MPD Inspector Natasha Wright, was also suspended.
Palantir and other tech companies are stocking offices with tobacco products to increase worker productivity
Anthropic’s AI tool Claude central to U.S. campaign in Iran, amid a bitter feud
Colorado school sends unvaccinated students home as RFK Jr.’s anti-vaccine crusade pays off
Slurs Filled a Chat Created by a Republican Party Official in Florida
Trump’s mini-me ambassadors are insulting and alienating U.S. allies
The Nation Faces a Crisis. Colleges Have a Unique Role to Play.
Austin shooting suspect was Tesla employee who assaulted co-worker, lawsuit says
Data Centers Are a Distraction. The Real Fight Is Elsewhere.
Texas’s Senate Primary Has Already Made History—and It’s Not Over Yet
Mar-a-Lago face couldn’t save Kristi Noem
The Endless Hypocrisy of Bari Weiss
An Interview With A Tenant Who Doesn’t Have Heat In Giannis Antetokounmpo’s Building
After more than 15 years on the platform formerly known as Twitter, Cambridge is leaving X
The Most Chilling Detail in the U.S. Attack on an Iranian Naval Ship
Trump Says ‘I Guess’ Americans Should Worry About Iran Retaliating on U.S. Soil: ‘Like I Said, Some People Will Die’
Virginia moves to forbid schools from teaching that Jan. 6 was peaceful
Ketamine, Prostitution and Money: Details of a Secret DEA Probe of Jeffrey Epstein
Russia is providing Iran intelligence to target U.S. forces, officials say
Vance Puts MAGA Ideology Above All Else
Stunning FBI Doc Claims Trump Assaulted Teen Girl After She ‘Bit the Sh*t Out of’ His Penis

Little Darlin’

By The Diamonds.  The video is not what I was expecting.

The post Little Darlin’ appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

China is quietly looking weaker

Photo by Daniel Case via Wikimedia Commons

In the 1980s, a lot of people wrote books and articles about how Japan was going to be the world’s leading country. The most famous of these was Ezra Vogel’s Japan As Number One: Lessons for America. At the same time, in 1989, Bill Emmott wrote a book called The Sun Also Sets: The Limits to Japan’s Economic Power, in which he predicted that Japan would revert to the mean. History has judged Emmott the winner of this contest of ideas. He didn’t get everything right — his characterization of Japan as an export-led growth model didn’t fit the facts, for instance — but in general, he got more right than wrong. His analysis of Japan’s financial weakness, aging challenges, and low service-sector productivity were right on the money.

At the time, though, with Japan at its zenith, it was easy to make Vogel-like predictions of continued domination, and it was out of vogue to be a contrarian like Emmott. The same is true of China today. Over the past few years, skepticism of China’s rise has mostly evaporated in the West, and most Americans now believe that China has either overtaken their country or will do so in the near future:

There are still a few hawkish types out there writing articles about China’s coming collapse, but almost no one is paying attention. All the attention is on Chinese cars, Chinese cities, Chinese trade surpluses — or on America’s flailing in the Middle East, its chaotic policymaking, its divided society, and its inability to manufacture anything in volume. Between America’s dysfunction and China’s technological achievements, the idea of a Chinese Century has become conventional wisdom.

In a post last year, I assessed that this conventional wisdom was probably right — that the 21st century would be a Chinese century, although China’s dominance wouldn’t be as pronounced or as beneficial to the world as America’s was in the 20th century:

I don’t think I made the same mistake that Ezra Vogel and many others made when assessing Japan in the 1980s — of just assuming that recent trends would continue. China is about 12 times the size of Japan. It can dominate the world, industrially and geopolitically, without ever coming close to the U.S. or even Japan in terms of per capita GDP.

I also hedged my bets a bit. Though I’ve always been highly skeptical of the idea that demographics will sink China (I think they’ll be more of an annoying but minor drag), and although I don’t think China’s housing bust will sink it, I do think that China’s dictatorial system is already putting it in danger via the personal failings of Xi Jinping:

In the past couple of months, though, I’ve become more of a Chinese Century skeptic than I was before. I’m not quite ready to write a Bill Emmott-style book about how China is going to bump up against hard limits. But I do see several factors that have adjusted my thinking a bit in the direction of China-pessimism, and I don’t see a lot of other people writing about these. So I thought I’d write a post about why I’ve updated.

Basically, the four things I’ve noticed are:

  1. China’s industrial policy is hitting its limits faster than I expected

  2. The rapid rise of AI agents makes me think that China’s technological advantage is less defensible

  3. Xi Jinping is entering his paranoid “Death of Stalin” phase earlier than I expected

  4. Trump’s attacks on Venezuela and Iran, whether you think they were good ideas or not, demonstrate possible Chinese military weakness

These factors don’t mean I expect China to go into decline today or within the next few years. But I do now think there’s a good chance that China is now stumbling in ways that will become more apparent in a decade or two, and will cause it to disappoint many of the current boosters and bulls.

China’s new economic model is quietly hitting its limits

Read more

Saturday assorted links

1. Canada [Sikh] fact of the day.

2. Is the world entering a new “missile age”?

3. Karp tells the story of Habermas rejecting Karp.

4. David Botstein, RIP (NYT).

5. Appreciation of Trivers.

6. Seb Krier.

7. Shruti on RefineInk.

The post Saturday assorted links appeared first on Marginal REVOLUTION.

       

Comments

 

The Women Leading the Farmworker Movement Won’t Let It Be Defined By Cesar Chavez

19th News LogoThe sexual abuse allegations against Chavez have rocked them. But their focus is still on protecting other women.

Monica Ramirez has spent much of her life spotlighting the pervasiveness of sexual violence against women farmworkers. She, like many in that movement, considered civil rights leader Cesar Chavez an icon.

Since allegations came to light this week that Chavez sexually assaulted women and girls as young as 12 — including fellow movement leader Dolores Huerta — Ramirez and the larger farmworker community have been left reeling. Now, they’re trying to reconcile how this man who so many revered — whose name is on streets, schools and even a holiday — could perpetrate the violence that has plagued women farmworkers for decades.

The community has been “shaken to its foundation,” said Ramirez, the founder of Justice for Migrant Women, a civil rights organization focusing on farmworker and migrant women. She and other leaders are now trying to push forward the farmworker movement and continue the work that many women — not just Chavez — spearheaded.

Monica Ramirez
Monica Ramirez, founder of Justice for Migrant Women, said the farmworker community has been “shaken to its foundation” by the allegations against Cesar Chavez. (Courtesy of Monica Ramirez)

“The farmworker movement is a leaderful movement, and women have always been part of that leadership,” Ramirez said. But their work has often been made invisible, sometimes by the very men who stood beside them in building worker power for Latinx people in the United States.

“In order to have a movement, in order to have a boycott, in order to organize any kind of action, it’s often women who are helping to organize the meetings, helping to bring their compañeras,” Ramirez said.

Chavez was one of the most revered figures in the Latinx civil rights movement. The labor leader cofounded what became the United Farm Workers union alongside Huerta, and was most known for a series of strikes and protests that grew unionization efforts across California. After Chavez’s death in 1993, he was posthumously awarded the Presidential Medal of Freedom, the nation’s highest civilian honor. In 2014, former President Barack Obama designated his birthday, March 31, as a federal holiday to celebrate his legacy, which many states had already marked.

Now, many of those celebrations are being canceled or renamed after a bombshell, yearslong investigation published by The New York Times Wednesday found evidence of a pervasive pattern of sexual abuse perpetrated by Chavez. Two women said Chavez sexually abused them for years as girls, when the organizer was in his 40s and had already become a powerful global figure. Ana Murguia said Chavez first assaulted her when she was 13; Debra Rojas was 12.

In the years following the abuse, both suffered from depression, panic attacks and substance abuse.

“I feel like he’s been a shadow over my life,” Rojas told the Times. “I want him to stop following me around. It’s time.”

Huerta, the renowned activist who coined the rallying cry, “Sí, se puede,” spoke at length about emotional and physical abuse from her longtime organizing partner — a disclosure she had never made publicly. She told the Times that he raped her in a secluded grape field in 1966, and had pressured her to have sex with him another time during a work trip in 1960. Both encounters resulted in children. Huerta concealed the pregnancies and arranged for the baby girls to be raised by others.

She was shaken upon hearing the allegations from other women, and told the Times she struggles to reconcile the man she knew and the one who assaulted her.

In a statement released Wednesday, Huerta said she carried her secret for 60 years because “building the movement and securing farmworker rights was my life’s work. The formation of a union was the only vehicle to accomplish and secure those rights and I wasn’t going to let Cesar or anyone else get in the way.”

She said she spoke up because she learned there were others coming forward.

“The farmworker movement has always been bigger and far more important than any one individual. Cesar’s actions do not diminish the permanent improvements achieved for farmworkers with the help of thousands of people,” she said. “We must continue to engage and support our community, which needs advocacy and activism now more than ever.”

Magaly Licolli knew exactly what Huerta was talking about in her statements about Chavez.

Licolli is the co-founder and executive director of Venceremos, an organization advocating for poultry workers in Arkansas, and she’s heard stories about sexual harassment and assault on women for years.

Before she started Venceremos, she was fired from another poultry worker organization after speaking up about multiple accusations of sexual harassment and assault against a well-known organizer.

“Women came forward and accused the organizer of sexually assaulting them or sexually harassing them. When I brought that to the board, they didn’t believe it,” Licolli said. “I had to stand with the women … I cannot do this work pretending I’m doing justice when I’m hiding injustice.”

Licolli felt that echoed this week.

“Women of color, we are not trusted on what we go through. We have to prove with pictures, with testimony, our own stories for our own stories to be validated,” she said. “I’m happy that now it’s something that people are talking about, and I’m happy that people are now reflecting about what is the role of women in the movement and when we have to be silenced toward that kind of injustice to protect the work that we do.”

A woman with long dark hair sits outdoors on a bench wearing a red and yellow patterned top and black skirt, looking directly at the camera with a composed expression.
Magaly Licolli, co-founder of Venceremos, pointed to a pattern in organizing spaces where women who report abuse are doubted, ignored or pushed out. (Courtesy of Magaly Licolli)
A growing share of farmworkers are women, according to the U.S. Department of Agriculture: about 26.4 percent in 2022, the most recent year for which data is available. Most are Latina.

A 2012 report by Human Rights Watch, an advocacy organization, found that women farmworkers are often at risk of sexual harassment or assault, with virtually every worker interviewed for the report saying they either had experienced harassment or assault or knew someone who had. Farmworkers work in mixed-gender settings, and they have limited worker protections But women typically lack avenues to report their experiences, the report’s authors wrote, in large part because of immigration status. As of 2022, most farmworkers were immigrants without U.S. citizenship.

“Sexual violence and harassment in the agricultural workplace are fostered by a severe imbalance of power between employers and supervisors and their low-wage, immigrant workers,” the report said.

A 2024 review published in the Journal of Agromedicine suggested that as many as 95 percent of women farmworkers in the United States have experienced workplace sexual harassment.

None of the women in the Times story spoke publicly until recently because of the shame and fear associated with reporting abuse against prominent organizers.

But over the past decade, after the growth of the #MeToo movement and the release of millions of Epstein files that have implicated numerous people in powerful positions, survivors have been more willing to speak up about their experiences.

Ramirez, who also founded the public awareness campaign known as the Bandana Project to raise awareness of sexual violence against farmworker women, said she now expects more women to come forward with their own stories. At an event Wednesday night shortly after the news broke, she said one woman came up to her to tell her how sexual assault was a problem in the fields where she worked as a teenager.

“Now that we understand clearly that this issue of sexual violence is an endemic problem in our society … the question we have to answer is: Knowing that, how serious are we going to get in our commitment to ending the problem?”

California lawmakers already plan to change the name of Cesar Chavez Day on March 31 to “Farmworkers Day,” and efforts are underway to remove his name from landmarks. But the real work to come will be about investing resources and support to improve the culture that has protected perpetrators in organizing spaces over victims.

Rep. Delia Ramirez, an Illinois Democrat who worked in organizing before entering politics, said it was “devastating” that the claims took so long to come out. She said when she became an executive director of a nonprofit at 21, she, too, had faced situations that in hindsight were not appropriate, and left the organization with a responsibility to create safer environments for other young women.

“Oftentimes women, especially women of color, we end up having to hold so many things for the sake of the movement, family, community,” Delia Ramirez told the 19th. “I don’t believe that there is one hero for our movements. Movements are led by a collective, and you can’t create some pedestal for one person, because humans will always fail you.”

Moving forward, Monica Ramirez said people will be watching how leaders in the farmworker movement respond to the allegations. Do they take a defensive posture or question the veracity of the survivors’ accounts? The revelations about Chavez come at a time when sexual misconduct by powerful men has been in the spotlight, all while the country grapples with a wave of immigration enforcement actions that are targeting Latinx people.

Licolli, the poultry organizer, said she has “never romanticized the immigrant community and the immigrant movement.” Sexual abuse happens in every movement and it doesn’t negate the work that’s been done to secure worker power, she said.

And for the farmworker women who are leading this work, it feels more urgent than ever that they continue leading.

Rosalinda Guillen, a farmworker and organizer in Washington state, leads Community to Community Development, an explicitly feminist and women-led organization — a perspective that she said lends itself to advocating for workers who are also parents, and that she said offers space for women farmworkers to assert their needs.

Guillen never met Chavez but was inspired to devote herself to organizing on behalf of farmworkers after his death. The news has been a “revision of everything that many of us know about the farmworker movement,” she said.

Her organization is removing images of Chavez from its office, Guillen said. “We revisited our values and principles in how we work together, reiterating there is no room for that,” she said, referring to sexual misconduct.

On Wednesday, while staff were still processing the reports, five farmworkers walked in. They had just lost their jobs.

Her staff switched gears, turning to figure out what those workers needed and how they could support them.

“They walked in reminding us this is the focus,” Guillen said. “This is why we’re here: To protect farmworkers.”

This article was originally published by The 19th on March 20, 2026. Click to read.


CLICK HERE TO DONATE TO OUR NONPROFIT NEWS REPORTING AND OPINION

The post The Women Leading the Farmworker Movement Won’t Let It Be Defined By Cesar Chavez appeared first on DCReport.org.

[RODEN] Meditation, Language, and LLMs

Roden Readers —

Hello! It’s me, Craig Mod. Author of TBOT (amzn | bkshp). Poking my head out into newsletter land. This? Roden, a newsletter you signed up for at some point. Perhaps last week, perhaps fourteen years ago, when I started shooting these out.

I’ve been busy. I’ve been doing something that I’m bad at and am trying to get better at: I’ve been having fun (and trying not to be crushed by the guilt of having fun). I went to LA and then Santa Fe and then Hokkaido with the binding agent of: eating great food with people I love. In Santa Fe I spent a few days meditating at Mountain Cloud Zen Center (more on that below; also, yes, fly from Japan to Santa Fe for Zen; also also, turns out the headquarters of their school is around the corner from my home ha ha ha). My body loves Santa Fe. Loves the crispness of the air. The elevation (once it gets used to it). The sharp light. The salsa. I spent a few mornings writing in Collected Works and generally came away from the whole visit thinking: I’d like to head back, eat More Salsa, spend more time in that corner of the US. In LA, I went deep on LLMs and Claws and all that with Kevin Rose (and also met some Hollywood-adjacent folks about book optioning), eating lots of Doordash’d Gwyneth Paltrow slop bowls and making software. I have to say, these three weeks of doofery have been some of the most fun weeks I’ve had in years. So, thanks for indulging me a bit of newsletter silence as I pretended to be a human out in the wild.

The States as the laboratory of democracy: helping organ donors

News from the States:

Pa. senators mull inheritance tax cut, deductions for organ donors 

"While employers across the state are allowed to claim tax deductions for time off offered to living organ donors, donors themselves receive no such benefits.

That would change if lawmakers pass a bill sponsored by Sens. Lindsey Williams (D-Allegheny) and Lynda Schlegel Culver (R-Northumberland), who testified to members of the Senate Finance Committee almost five years to the date after receiving her sister’s kidney.
...
“I’ve seen firsthand the gift of donation and what it means,” Culver told lawmakers. “It has allowed me and so many others the opportunity to have a full life.”

According to the University of Pennsylvania Health System, more than 6,000 Pennsylvanians were on the transplant waiting list in 2025.

Culver and Williams’ proposal would allow living organ donors to deduct up to $10,000 in unreimbursed expenses related to the donation from their taxable income. That would include costs like travel, lodging, lost wages and medical expenses.

According to Culver, studies show the average living organ donor faces roughly $5,000 in expenses, which includes things like travel, lost wages and child care during recovery.
...
The measure was passed unanimously by members of the Senate Finance Committee." 

March 20, 2026

On Wednesday, Israeli forces hit Iranian facilities in the South Pars natural gas field in the Persian Gulf, shared by Iran and Qatar. Helen Regan and Ivana Kottasová of CNN explain that the South Pars gas field is part of the largest natural gas reserves in the world, supplying most of Iran’s domestic energy and crucial to Iran’s economy.

Targeting crucial oil infrastructure is a significant escalation in the war. Iran responded by hitting energy targets in Qatar and Saudi Arabia. As Summer Said, Rebecca Feng, and Alexander Ward of the Wall Street Journal wrote, these strikes put oil and gas facilities at the center of the war, worsening the crisis over the supply of energy around the world.

Trump’s social media account blamed Israel for the strike and said the U.S. hadn’t been informed about it ahead of time, but Barak Ravid of Axios reported that both Israeli officials and an official from the U.S. Defense Department said the strike was coordinated with and approved by the Trump administration. The Wall Street Journal reporters added that Trump approved the strike to put pressure on Iran to open the Strait of Hormuz.

Today Iraq declared “force majeure” on the country’s oilfields developed by foreign oil companies. This is an acknowledgement that a catastrophic event—usually an earthquake or something similar—means they cannot meet their obligations to deliver their product. In this case, the catastrophe is the disruption to the Strait of Hormuz, through which 20% of the world’s oil and natural gas flows. Kuwait Petroleum Corporation and Bahrain’s state-owned Bapco Energies declared force majeure earlier this month.

This morning, Trump’s social media account once again blamed U.S. allies in the North Atlantic Treaty Organization (NATO) for not joining his war, although NATO is a defensive alliance, designed to respond to an attack. The account posted: “Without the U.S.A., NATO IS A PAPER TIGER! They didn’t want to join the fight to stop a Nuclear Powered Iran. Now that fight is Militarily WON, with very little danger for them, they complain about the high oil prices they are forced to pay, but don’t want to help open the Strait of Hormuz, a simple military maneuver that is the single reason for the high oil prices. So easy for them to do, with so little risk. COWARDS, and we will REMEMBER! President DONALD J. TRUMP.”

This afternoon, Trump told reporters: “You know, we don’t use the strait…we don’t need it. Europe needs it, Korea, Japan, China, a lot of other people, so they’ll have to get involved a little bit on that one.” He also said: “I think we’ve won, we’ve knocked out their Navy, their Air Force. We’ve knocked out their anti-aircraft. We’ve knocked out everything. We’re roaming free. From a military standpoint, all they’re doing is clogging up the strait. But from a military standpoint, they’re finished.”

The International Energy Agency is an intergovernmental organization that was created in 1974 to provide policy recommendations on the global energy sector and whose members make up about 75% of the demand for global energy. Today it said, “The conflict in the Middle East has created the largest supply disruption in the history of the global oil market, due to the near halt in shipping traffic through the Strait of Hormuz.” It added: “The resumption of transit through the Strait of Hormuz is the single most important action to return to stable oil and gas flows and reduce the strains on markets and prices.” Until then, it urged people to work from home if possible, drive more slowly to conserve energy, use public transport, avoid using airplanes, and use electricity for cooking where possible.

Yesterday President Donald J. Trump told reporters he was not sending troops to Iran, saying: “No, I’m not putting troops anywhere. If I were, I certainly wouldn’t tell you, but I’m not putting troops.” Today, Jennifer Jacobs, James LaPorta, and Eleanor Watson of CBS News reported that the Pentagon has made detailed preparations for sending troops to Iran. The administration is currently moving thousands of Marines to the Middle East. They will not be in place for a few weeks, suggesting the administration is expecting the engagement to continue.

Barak Ravid and Marc Caputo of Axios reported today that the administration is considering an assault on Iran’s Kharg Island, the center of Iran’s oil-processing facilities, to force Iran to allow free passage through the Strait of Hormuz. That operation would require the U.S. military to pound Iran’s military capacity near the strait before sending in ground troops. A source told Ravid and Caputo: “We need about a month to weaken the Iranians more with strikes, take the island and then get them by the b*lls and use it for negotiations.”

Prices in the U.S. were already rising before Trump struck Iran, prompting the closure of the strait and the choking off of global oil supplies. The Federal Reserve’s tracking of key inflation measures, released Wednesday, showed higher prices than expected, with the Producer Price Index (PPI) jumping 0.7% in February, the most since last July. In the twelve months through February, Lucia Mutikani of Reuters reported, the PPI went up 3.4%, the fastest rate of growth in a year. Now, dramatically higher fuel costs threaten to drive those prices higher.

The war itself is also costing Americans money, and lots of it. Economist Justin Wolfers notes that the estimated cost of $1 billion a day does not include the larger cost to the economy. The Pentagon’s number counts only bombs and planes and personnel, Wolfers points out. It does not include higher oil prices, geopolitical strife, business uncertainty, and slower growth. Those costs will mount into the hundreds of billions.

G. Elliott Morris of Strength in Numbers notes that 58% of Americans think the U.S. military operation in Iran is a bad use of taxpayer dollars, while only 32% approve. Asked if they would support the war in Iran if it raised gas prices by $1 a gallon or more, 61% of Americans said they would not, while only 30% said they would.

Aware that the war is historically unpopular, Republicans in Congress are refusing to exercise any oversight of the Pentagon and the White House. Megan Mineiro of the New York Times reported today that Republicans don’t want to expose disapproval of the war and so are simply cheering Trump on in public. Rather than holding public hearings that would allow the American people to hear the administration’s justification for the war and plans for its execution, as Democrats demand, Republicans are permitting the administration to inform Congress as it wishes, behind closed doors.

“You don’t want to show that kind of division to your enemy when you’re in the midst of a war,” Senator Ron Johnson (R-WI) told Mineiro. “I don’t have a problem with the administration avoiding showing our enemy that they don’t have 100 percent support of the Congress.”

“They’re holding news conferences,” Senate majority leader John Thune (R-SD) told reporters last week, so there is no need for official hearings. House speaker Mike Johnson (R-LA) said that operations were “very sensitive” and thus could not be discussed outside of classified settings “because it would adversely affect our mission.” This demand that Americans trust the government to go to war without public debate flies directly in the face of the reasoning of the Framers of the Constitution, who believed the American people must have the right to decide whether to invest their lives and fortunes in a war.

Senate Democrats have tried twice to pass a measure that would require Trump to get congressional authorization before continuing the war, but Republicans reject it. “They want to circumvent the Constitution,” Senator Cory Booker (D-NJ) said. “They want to go around public oversight. They want to avoid the glare, the questions of the American people.”

The recognition that the war might drag on has driven the stock market down sharply. All three of the main U.S. stock indexes—the S&P 500, the Nasdaq Composite, and the Dow Jones Industrial Average—have fallen since the war began. Tonight, after markets had closed down again, Trump appeared to try to reassure investors over the weekend that the war will end soon, writing on social media that “[w]e are getting very close to meeting our objectives as we consider winding down our great Military efforts in the Middle East with respect to the Terrorist Regime of Iran.”

The administration continues to try to sell its war as a violent video game, and Trump as a dignified leader. Eli Stokols, Ben Johansen, Jack Detsch, and Paul McLeary of Politico reported on Wednesday that the White House is thrilled with the engagement garnered by the war videos made by White House communications staffers, in which footage of military strikes is intercut with football hits or bowling pins being blasted apart, or with clips from movies like Top Gun and Gladiator. A White House official told the journalists: “We’re over here just grinding away on banger memes, dude. There’s an entertainment factor to what we do. But ultimately, it boils down to the fact that no one has ever attempted to communicate with the American public this way before.”

Progressive political strategist Max Burns notes that the White House messaging “is appealing directly to the base, especially to these young, very online, 4chan MAGA people who, just like Trump, treat war like a video game.” He added: “You don’t see service members sharing this content.”

Since the Obama administration, the choice of whether to allow media at a dignified transfer ceremony when the remains of service members are brought home at Dover Air Force Base in Delaware has been made by the families. After Trump’s political action committee used images from a dignified transfer in a fundraising email, on Wednesday the Fox News Channel announced that “at the request of the families, the dignified transfer is going to remain private. There will not be any cameras.”

Nonetheless, the administration posted a number of photos from Wednesday’s ceremony on social media, showing Trump in the background, saluting.

Notes:

https://www.cnn.com/2026/03/19/middleeast/iran-qatar-south-pars-gas-field-explainer-intl

https://meidasnews.com/news/trump-releases-dignified-transfer-photos-despite-families-requesting-private-ceremony

https://talkingpointsmemo.com/morning-memo/its-my-war-and-ill-cry-if-i-want-to

https://www.axios.com/2026/03/18/israel-strikes-iran-natural-gas-infrastructure

https://www.wsj.com/world/middle-east/escalating-attacks-on-gulf-energy-assets-plunge-iran-war-into-new-phase-36cc0a6e

https://time.com/article/2026/03/19/trump-iran-war-us-troops/

https://www.cbsnews.com/news/trump-administration-iran-ground-troop-preparations/

https://www.axios.com/2026/03/20/iran-invasion-kharg-island-strait-hormuz

https://www.reuters.com/business/energy/iraq-declares-force-majeure-foreign-operated-oilfields-over-hormuz-disruption-2026-03-20/

https://newrepublic.com/post/207500/trump-global-panic-oil-prices

https://www.reuters.com/business/us-producer-prices-surge-february-services-2026-03-18/

https://www.msn.com/en-us/money/markets/wall-street-ends-sharply-lower-middle-east-turmoil-fans-inflation-fear/ar-AA1Z58oR

https://www.iea.org/reports/sheltering-from-oil-shocks/summary

https://www.reuters.com/world/us-deploy-thousands-additional-troops-middle-east-officials-say-2026-03-20/

Strength In Numbers
New poll: 58% of voters say the war in Iran is a bad use of taxpayer dollars
This is a free article for Strength In Numbers, my newsletter on politics, polls, and election data. If you enjoy it and want to support this type of independent, data-driven political journalism — and keep the independent polls flowing!! — become a paying member today…
Read more

https://www.nytimes.com/2026/03/18/us/politics/senate-republicans-trump-iran-war-authorization.html

https://www.nytimes.com/2026/03/20/us/politics/congress-iran-trump.html

https://www.politico.com/news/2026/03/18/white-house-iran-game-online-00834373

https://www.cnn.com/2026/03/13/politics/trump-fundraise-email-soldier

https://www.thedailybeast.com/slain-troops-families-issue-ban-cameras-after-donald-trump-used-dignified-transfer-for-cash/

YouTube:

watch?v=CCF_i-ot8mE

Bluesky:

paleofuture.bsky.social/post/3mhinph5o722c

atrupar.com/post/3mhjbtaqu6v2p

bcfinucane.bsky.social/post/3mhjh3pwqkc2r

atrupar.com/post/3mhjbvldcfn2p

Share

Turbo Pascal 3.02A, deconstructed

Turbo Pascal 3.02A, deconstructed

In Things That Turbo Pascal is Smaller Than James Hague lists things (from 2011) that are larger in size than Borland's 1985 Turbo Pascal 3.02 executable - a 39,731 byte file that somehow included a full text editor IDE and Pascal compiler.

This inspired me to track down a copy of that executable (available as freeware since 2000) and see if Claude could interpret the binary and decompile it for me.

It did a great job, so I had it create this interactive artifact illustrating the result. Here's the sequence of prompts I used (in regular claude.ai chat, not Claude Code):

Read this https://prog21.dadgum.com/116.html

Now find a copy of that binary online

Explore this (I attached the zip file)

Build an artifact - no react - that embeds the full turbo.com binary and displays it in a way that helps understand it - broke into labeled segments for different parts of the application, decompiled to visible source code (I guess assembly?) and with that assembly then reconstructed into readable code with extensive annotations

Infographic titled "TURBO.COM" with subtitle "Borland Turbo Pascal 3.02A — September 17, 1986 — Deconstructed" on a dark background. Four statistics are displayed: 39,731 TOTAL BYTES, 17 SEGMENTS MAPPED, 1 INT 21H INSTRUCTION, 100+ BUILT-IN IDENTIFIERS. Below is a "BINARY MEMORY MAP — 0X0100 TO 0X9C33" shown as a horizontal color-coded bar chart with a legend listing 17 segments: COM Header & Copyright, Display Configuration Table, Screen I/O & Video BIOS Routines, Keyboard Input Handler, String Output & Number Formatting, DOS System Call Dispatcher, Runtime Library Core, Error Handler & Runtime Errors, File I/O System, Software Floating-Point Engine, x86 Code Generator, Startup Banner & Main Menu Loop, File Manager & Directory Browser, Compiler Driver & Status, Full-Screen Text Editor, Pascal Parser & Lexer, and Symbol Table & Built-in Identifiers.

Update: Annoyingly the Claude share link doesn't show the actual code that Claude executed, but here's the zip file it gave me when I asked to download all of the intermediate files.

I ran Codex CLI with GPT-5.4 xhigh against that zip file to see if it would spot any obvious hallucinations, and it did not. This project is low-enough stakes that this gave me enough confidence to publish the result!

Tags: computer-history, tools, ai, generative-ai, llms, claude

More on the David Lang opera version of Wealth of Nations

In 18 parts, Lang explores some of Smith’s central themes, including one of the book’s most famous passages, where Smith uses a wool coat worn by a very poor Scottish worker as a way to examine trade. “He asks, ‘Did you ever think of how many people need to be employed in order to make that coat?’” says Lang, whose movement “the woolen coat” names all the artisans and laborers who contributed to the garment in song:

the shepherd
the sorter of the wool
the wool-comber or carder
the dyer
the spinner
the weaver
the fuller

There are also the workers on the ship that brought in the dye and all the people who built the ship. An ordinary coat is revealed to be a kind of miracle of skilled labor and global collaboration, the product of “many thousands” of workers coming together in (selfish) harmony. Part of me wanted to run out of the theater right then and buy something … perhaps a coat… for America.

Here is more from Bloomberg, via John De Palma.  The opera seems to be ultimately a rather gloomy view of the book?

The post More on the David Lang opera version of Wealth of Nations appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

Canada facts of the decade

From 2014 to 2024, Canada’s real GDP per capita adjusted for purchasing power parity grew by just 3.2 percent in total, an anemic 0.4 percent per year on average, and the third lowest among 38 advanced nations. Over the same period, the United States posted 20.2 percent total growth (1.9 percent annually), and the OECD average reached 15.3 percent (1.4 percent annually). The measurement shortcomings cannot explain five-to six-fold differences in growth rates.

And:

The analysis estimates that a substantial share of Canadians who would rank among top earners in Canada have emigrated to the United States—roughly 40 percent of potential top 1 percent earners and 30 to 50 percent of the next nine percentiles. Canadian-born individuals in the United States are more educated than native-born Americans, earn substantially more, and cluster disproportionately in top income deciles.

Canada is effectively exporting its inequality to the U.S. The brain drain simultaneously lowers our average income while raising American income, accounting for a significant share of the persistent GDP gap.

Here is the full piece.

The post Canada facts of the decade appeared first on Marginal REVOLUTION.

       

Comments

 

Record Heat Remains in Place This Week; Severe Thunderstorms Possible Thursday

Mux — Video API for Developers

My thanks to Mux for sponsoring last week at DF. Video isn’t just something to watch; it’s a boatload of context and data. Mux makes it easy to ship and scale video into anything from websites to platforms to AI workflows. Unlock what’s inside: transcripts, clips, and storyboards to build summarization, translation, content moderation, tagging, and more.

Mux stewards Video.js, the web’s most popular open source video player. Video.js v10 is a complete architectural rebuild, with the beta now available at videojs.org.

Mux is video infrastructure trusted by Patreon, Substack, and Synthesia. Get started free, no credit card required. Use code FIREBALL for an extra $50 credit.

 ★