Links for you. Science:
After court loss, RFK Jr. gives himself more power over CDC vaccine panel
Wildlife trade drives animal-to-human pathogen transmission over 40 years
Protein and genomic language models uncover the unexplored diversity of bacterial immunity
This detox may erase 10 years of social media brain damage, researchers say. Studies show that taking even short breaks could reverse measures of cognitive decline. (paper here)
How rats conquered Earth
Trump administration drops court fight to cap NIH payments for research overhead costs
A natural molecule present in the human body protects against the flu
Other:
The Pure Joy of Joy: You can’t fake the happiness of Artemis II — or Zohran Mamdani
Why Would You Ask AI To Tell The Story Of Your Own Life?
Former ‘Lesbian’ ICE Deputy Director Accused of Secret Affair With 19-Year-Old Staffer, Now Running for Congress in Ohio
Wisconsinites Can Keep Watching Porn After Governor Vetoes Age Verification Bill
It’s Time To Grow Up
Hegseth: Full Metal Manhood
Has the Position of College Graduates Worsened?
They rescued D.C.’s munys. Then Trump took over. Inside the fight between the president and public golf
The New Defense Budget
What Was the Avignon Papacy and Why Is It in the News?
New York Times and WSJ editors continue to trivialize massive pro-democracy demonstrations. Analysis of “No Kings 3” front pages. Local editors continue to shine.
Trump Was Watching a U.F.C. Fight in Miami While Iran Talks Collapsed (uncharacteristically brutal for the NYT)
Having Failed to Win a “Marathon” [sic] without Training, Trump Announces Blockade of Iran’s Blockade
‘Grosse Pointe Blank’ At 25: How A Knock-Off Became A Groundbreaker In The Assassin-In-Existential-Crisis Comedic Subgenre
Your Kindle’s not obsolete, it just needs a jailbreak – and I’ll show you how it’s done
DHS deported a US citizen to Mexico after threatening him with prison time
D.C. school applications fall amid deportation fears and federal layoffs
Trump’s Interference With DC Arts Is a Boon to Baltimore
In Praise Of Cafe du Monde And Tourist Trap Eateries Everywhere
Eric Swalwell and the Death of Accountability. His sexual misconduct was an open secret. So why was he still seen as a rising star in the party?
Despite apocalyptic warnings, California fast food wage hike didn’t kill jobs
America’s booming annoyance economy
The creation of instant coffee
Crypto’s Midterm Open Marriage
DHS Paying Local Police Millions in Quieter Approach to Immigration Enforcement
Dark Money for Soft Power: The Trumpily named Campaign for America First International Assistance hopes to sway conservative voters to support the kinds of programs gutted by Trump’s DOGE.
Trump revealed his objective in Iran — 40 years ago
Purging Trump
‘Disgusting’: Trump’s Top Economic Adviser Brags About Killing 300,000 ‘High-Paying’ American Jobs
Notes of an Economist on Food Stamps
What Is ‘Narcissistic Collapse’? Experts See Familiar Hallmarks In Trump’s Latest Rant

“Shoplifters of the world/ Unite and take over” — The Smiths
“When she wants something, man, she don’t wanna pay for it” — Jane’s Addiction
Seven years ago, when I wanted some toothpaste, I would walk down to my local Walgreens, grab a box of Crest off the shelf, pay for it at the register, and walk home with it. Today, when I want some toothpaste, I open up Amazon.com and buy it in bulk. What changed? Amazon was just as good in 2019 as it is today. But now, when I walk into Walgreens, the toothpaste is locked behind a clear plastic case. In order to buy it, I have to call a store employee over to open the case for me.
That’s just too much of a hassle; the convenience of being able to walk into a store is canceled out by the inconvenience of having to stand there waiting for a human being to help me buy a goddamn tube of toothpaste.
People argue about whether there was really a nationwide epidemic of shoplifting in the U.S. in the early 2020s, and about whether that caused a wave of store closures. Some retailers claimed they were closing stores because of petty theft; some critics argued that this was a flimsy excuse. But no one can argue with those clear plastic cases covering the shelves. Those barriers, and the corporate investment and labor costs required to install and maintain them, are indisputably real. Numerator, a market research company, found the following in 2024:
Numerator, a data and tech company serving the market research space, has issued a new report—Unlocking Shopper Reactions to Secured Products—sourced from verified purchase data and a sentiment survey of over 5,000 consumers on their awareness of and reaction to merchandise being locked up in stores. Three-fifths of shoppers reported seeing locked-up merchandise on a regular basis, and 27% said they would switch retailers or abandon the purchase altogether instead of waiting for assistance for a locked-up product…
61% of shoppers reported seeing an increase in the number of products under lock and key over the past year. 33% have not noticed a change, and 7% say there are fewer items locked up now…35% of Western consumers say they encounter locks on the items they are trying to purchase almost every time they shop and 30% of urban consumers say the same…17% say they will switch retailers (10% online, 7% in-store), and 10% say they will abandon the purchase altogether. [emphasis mine]
When people cite numbers showing that shoplifting is down in San Francisco and many other metros since 2019 (despite almost doubling nationwide), you have to take into account the fact that a lot of merchandise is now being locked up. Unless companies are just stupidly wasting their money on those cases, and on the increased labor costs required to operate them, the existence of those cases is direct evidence that shoplifting has real costs.
If anti-theft barriers drive 5% of a store’s revenue to Amazon, that would mean that either A) theft would have caused the store to lose 5% or more of its revenue, or B) retail companies are being stupid and wasting money on anti-theft barriers. Chain stores like Walgreens and CVS are hyper-efficient optimizers — they really don’t like to make stupid decisions that lose money, and they have a ton of data and very good statisticians. Therefore, it’s extremely likely that theft imposes significant costs on many retailers.1
Who pays those costs? Maybe the shareholders of Walgreens and CVS just take a hit and see their share prices and wealth decline. Maybe their CEOs take a pay cut. Or maybe the stores cut wages and force their employees to work longer hours. Maybe they raise their prices, forcing regular people to pay more for toothpaste and shampoo and Advil. Maybe they close their least profitable stores — i.e., the stores in poor areas. Maybe poor people have one less Walgreens in their neighborhood to give them jobs and sell them their daily necessities.
In general, the cost will get divided up among those various people. But the pain will land much more on the poor and working class. Suppose that people start shoplifting more from Whole Foods, and it costs the company $20 million — 0.1% of its revenue. Now suppose that cost gets evenly divided — $5 million comes out of Jeff Bezos’ pocket, $5 million comes out of the salaries of the company’s executives and top managers, $5 million gets recouped by the company via price hikes, and $5 million gets saved via store closures and job cuts.2
Think about how much pain that would cause to each of the parties involved. If Bezos loses $5 million, he won’t even notice. It’s a rounding error on his wealth. The executives and top managers of Whole Foods will probably be slightly annoyed, but their lifestyles won’t change. Whole Foods’ middle- and upper-class customers will be a little more annoyed when prices go up. But the worst pain by far will land on the people who lose their jobs when stores close and staffing gets cut. $5 million is almost 100 employees.
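The arithmetic in the example above can be sketched out explicitly. The $20 million loss, the 0.1%-of-revenue figure, and the even four-way split are the post's own hypothetical assumptions; the $50,000 annual cost per retail job is a placeholder I've added for illustration (it makes "almost 100 employees" come out to exactly 100).

```python
# Sketch of the post's hypothetical Whole Foods cost-split example.
# All dollar figures are the post's illustrative assumptions, except
# cost_per_job, which is a placeholder chosen for this sketch.

theft_cost = 20_000_000           # hypothetical annual shoplifting loss
revenue = theft_cost / 0.001      # the post calls the loss 0.1% of revenue

# The post's even four-way split of the loss:
share = theft_cost / 4            # $5M per party
owner_hit = share                 # absorbed by the owner's wealth
exec_hit = share                  # absorbed by executive/manager pay
price_hit = share                 # recouped via price hikes on customers
labor_hit = share                 # saved via store closures and job cuts

cost_per_job = 50_000             # assumed annual pay per retail worker
jobs_lost = labor_hit / cost_per_job

print(f"Implied revenue: ${revenue:,.0f}")               # $20,000,000,000
print(f"Each party's share: ${share:,.0f}")              # $5,000,000
print(f"Jobs cut at ${cost_per_job:,}/yr: {jobs_lost:.0f}")  # 100
```

The point of the sketch is the asymmetry the post describes: the same $5 million that is a rounding error on a billionaire's wealth translates, on the labor side, into roughly a hundred lost jobs.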
Obviously some of the pain gets canceled out when shoppers go online instead. But online stores have a much lower labor share than brick-and-mortar retailers — if someone spends $1000 on Amazon instead of at Whole Foods, Bezos actually gets to keep a lot more of the money. And shoplifting does destroy some economic activity completely — regular people end up consuming less and getting paid less.
Every time you shoplift, in other words, you’re stealing from the people who work at grocery stores and drugstores and discount stores. You’re stealing from the communities that those stores serve. You’re contributing to food deserts. You’re raising unemployment. You’re making food less affordable for the most vulnerable. What you’re not doing is hurting rich people in any appreciable way.
If you’re shoplifting because you’re poor and desperate, the pain you’re causing to society might be worth it. But if you’re shoplifting because you’re a bored, arrogant multimillionaire with a chip on his shoulder, you’re just a rich person hurting poor people for fun.
Why am I writing this? Because in a recent roundtable discussion at the New York Times, leftist commentator Hasan Piker and New Yorker staff writer Jia Tolentino defended shoplifting — which interviewer Nadja Spiegelman renamed “microlooting” — arguing that it’s a way to strike out at the rich.
Here are some quotes from Piker, doing his usual shock-jock routine and endorsing theft of various kinds before admitting that he personally doesn’t steal:
Yeah, I’m pro-piracy all the way, like, across the board. Would you pirate a car? Yes. You know, if you could…If I could get away with it, if it was as easy as pirating intellectual property, I would do it…We’ve got to get back to cool crimes like that: bank robberies, stealing priceless artifacts, things of that nature…I’m pro stealing from big corporations, because they steal quite a bit more from their own workers…Yeah, chaos. Full chaos. Let’s go…
I — ironically enough — I don’t personally do it. I never do it. When I was younger, I stole some Pokémon cards from a friend and my father punished me. And it was such a harrowing experience that I literally can’t even steal a candy bar. When we were in college, a lot of my friends used to love doing that…I would never participate in it. And I still can’t, to this day, participate in it.
And here’s Tolentino, recounting when she stole lemons from Whole Foods to help a family friend, and then defending the idea of shoplifting on a more systematic basis:
I will say, I think that stealing from a big box store — I’ll just state my platform — it’s neither very significant as a moral wrong, nor is it significant in any way as protest or direct action. But I did steal from Whole Foods on several occasions…[E]very week I would go get groceries for Miss Nancy, my now family friend who lived nearby…I’d be getting Miss Nancy all of her groceries, and…I forgot four lemons. And on several occasions I was like, I’m just going to go back, grab those four lemons and get the hell out.
Tolentino and Piker then engage in a long discourse about when it’s politically acceptable to steal things. Tolentino says it’s acceptable to steal from the Louvre. Piker says it’s acceptable to steal from big-box stores, but not from restaurants. Tolentino says it’s OK to steal from Ikea and Whole Foods if you give the loot to the homeless. Piker says that IP theft is OK, but stealing from a government-owned store is wrong.
It’s possible to see this as the amoral self-justification of two selfish rich people — petty millionaires resentful of billionaires, taking out their resentment by trashing the society around them. And sure, I wouldn’t be surprised if there were some of that going on. But when you look closely at these people’s actions, they aren’t actually wanton thieves — Piker admits that he doesn’t personally steal anything, while Tolentino only admits stealing lemons for a family friend back when she was much less wealthy. They talk a big game about chaos and piracy and rebellion, but they’re mostly behaving like standard well-behaved highly-educated rule-following progressive coastal elites.
It’s also possible to see pro-theft rhetoric as part of American leftism’s intellectual heritage. The old European left had two basic factions — communists, and anarchists. The communists generally defeated the anarchists in Europe, but the American left is mostly descended from anarchism. Individual rebellion against the rules of society tends to be prized above collective action and hierarchy.
But what I really think we’re seeing is a combination of political posturing with a weird kind of effective altruism. Piker and Tolentino’s judgements of when stealing is OK and when it’s not OK are explicitly based on their judgements of when stealing is good for society, and when it’s bad. They envision a purely situational morality, in which people decide, moment-by-moment, whether to follow the law based on a sophisticated judgement of whether following the law will make the world a better place.
You could easily write down an economic model in which that sort of behavior is both rational and good. The problem is that it envisions every citizen as a sort of superhuman homo economicus, able to accurately make a complex calculation about the social costs and benefits when deciding whether or not to pay for every piece of fruit at Whole Foods.
In reality, that approach is doomed to fail. One big reason is that making decisions about whether to “microloot” usually requires a lot more knowledge about the workings of society than even the smartest human possesses.
Look closely, and you’ll see that Piker and Tolentino’s situational judgements sit on top of a gigantic stack of questionable assumptions about how economics and politics work. Piker says that stealing from a local diner is bad, probably because he implicitly assumes that the theft would come mostly out of the pocket of the restaurant’s independent owner, rather than out of the pocket of the restaurateur’s landlord or its corporate suppliers. But they both agree that stealing from a big-box store is OK because they assume the cost will come out of the pockets of corporate shareholders and executives.
Both of those assumptions are almost certainly wrong. Stealing from an indie restaurant will hurt corporations a bit; stealing from a big-box store will hit working-class employees and customers to some extent. Piker and Tolentino don’t understand much about how economics works, and they seem very confident in their simplistic mental model.
The second reason this kind of homo economicus approach to morality is dangerous is that there are tons of externalities involved. When you steal things, you probably give implicit permission to other people to steal things (and their reasons are likely to be less altruistic). You make stores more likely to install anti-theft barriers, which gives society a more militarized dystopian feel. Shoplifting forces marginally profitable stores to close, leading to vacant storefronts that attract crime, while depriving local governments of tax revenue to fund infrastructure and education. And so on.
It’s very difficult to calculate all of these externalities when you take each action. That’s probably why society has a social contract — a system of rules that we follow instead of calculating the results of each action from first principles. That social contract is often unfair, and we have many mechanisms dedicated to constantly revising it — democracy, the free press, and so on. But individual anarchism — the rejection of any social contract in favor of personal morality based on current assumptions — pretty much instantly runs into the hard limits of individual human knowledge.
Fortunately, if the decision is whether to shoplift five lemons or pirate a movie, the consequences of getting this sort of thing wrong won’t be catastrophic. Yes, shoplifting is wrong, but most people I know have done it at some point in their lives, and society hasn’t collapsed.3 But there are plenty of higher-stakes issues where the kind of fine-grained consequentialism advocated by Tolentino and Piker can have much more serious consequences.
For example, in the NYT roundtable, Tolentino says that blowing up a pipeline should be OK, but getting iced coffee in a plastic cup is morally wrong:
One thing that should be legal that isn’t — it’s interesting, because I have to regularly explain this stuff to a small child, and have so thoroughly explained to her that some things are against the rules, but they’re OK, depending on who you are. And some things are not against the rules, but they’re not OK. There are so many perfectly legal things I do regularly that I find mildly immoral. Like getting iced coffee in a plastic cup. I find that to be a profoundly selfish, immoral, collectively destructive action. I have taken so many planes for so many pleasure reasons; I have acted in so many selfish ways that are not only legal, but they’re sanctioned and they’re unbelievably valorized, culturally. So, maybe things like blowing up a pipeline, let’s say that.
Tolentino is obviously thinking purely about climate change when she says this — getting iced coffee in a plastic cup raises emissions because ice and plastic are carbon-intensive, blowing up a pipeline lowers emissions by curbing fossil fuel use. But even if those assumptions are correct — and that’s a big if! — climate isn’t the only consequence in the world. Blowing up a pipeline can kill or maim innocent people. It can release toxic pollutants into the local environment. It can deprive local poor people of income that the pipeline owner agreed to share, and so on.
Piker, meanwhile, downplayed Luigi Mangione’s murder of health insurance CEO Brian Thompson in 2024, while accusing Thompson of “social murder”:
Brian Thompson, as the United Healthcare C.E.O., was engaging in a tremendous amount of social murder. The systematized forms of violence, the structural violence of poverty, the for-profit, paywalled system of health care in this country — and the consequences of that are tremendous amounts of pain, tremendous amounts of violence, tremendous amounts of deaths…[B]ecause of the pervasive pain that the private health care system had created for the average American, I saw so many people immediately understand why this death had taken place…[T]hat is the reason why, I think, the reaction to Luigi Mangione, especially by younger generations, was not so negative.
This life-and-death judgement also rests on a teetering tower of shaky assumptions. It assumes that health insurance companies — rather than providers — are chiefly responsible for high health care prices in America. In fact, as I wrote after Thompson’s murder, that’s just not true:
In fact, health insurers have consistently terrible profit margins; they are not giant pots of profit that could be used to pay for regular people’s treatment. Insurers are almost entirely a pass-through — it’s overpriced health services themselves that are responsible for the high cost of care in America.
Getting these things wrong can result in a lot of unnecessary violence, death, and conflict. Making excuses for terrorism and murder is a lot more consequential than deciding whether to steal a few lemons, and yet Piker and Tolentino are just as comfortable doing the former as the latter. Their mental models of economics and politics are a dense tangle of undergrad-level misunderstandings, leftist memes, and political talking points — victims of American progressives’ increasing epistemic closure. And yet they are arrogant enough to feel comfortable discarding all of society’s rules on a case-by-case basis in favor of their own personal calculations.
This is a bad direction for the progressive movement, the Democratic Party, and educated coastal elite culture in general. Yes, there are many arenas of human life in which modern society has overly constrained individual judgement with a thicket of rules and procedures. But stealing from stores, blowing up pipelines, and gunning down corporate executives are not good examples of situations where we need fewer rules and more individual judgement.
1. Economists have tried to estimate these costs.
2. For simplicity’s sake I’ve assumed that Bezos owns 100% of Whole Foods, which isn’t true. But this assumption isn’t important to the point I’m making.
3. I have only shoplifted one thing in my entire life: a copy of Abbie Hoffman’s book Steal This Book.
Science news:
Scientists have finally cracked a long-standing mystery about squid and cuttlefish evolution by analyzing newly sequenced genomes alongside global datasets. The research reveals that these bizarre, intelligent creatures likely originated deep in the ocean over 100 million years ago, surviving mass extinction events by retreating into oxygen-rich deep-sea refuges. For millions of years, their evolution barely changed—until a dramatic post-extinction boom sparked rapid diversification as they moved into new shallow-water habitats.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
2. Claude doing stand-up comedy, the video is AI too.
3. Thirty more lines from Empedocles have been found.
4. Latin America’s oil resurgence.
5. 25 dead at the Haitian Citadelle.
6. A paean to the earlier University of Chicago law school.
7. Can AI agents read a social science paper and write the code from scratch to reproduce its results?
8. Papers of Hayek are now accessible online.
The post Saturday assorted links appeared first on Marginal REVOLUTION.
With the circumlunar flight of Artemis II, and the prospect of landing astronauts on the lunar surface within a few years, humanity is forestalling an era in which the experience of visiting the Moon would have been erased from living memory.
There are five men still alive who flew to the Moon on NASA's Apollo missions. All are now in their 90s. Between 1968 and 1972, 24 astronauts visited the Moon, and 12 of them walked on its surface. We'll have to wait a little longer to add to the roster of Moonwalkers, but there are four new names to etch on the list of lunar explorers.
The Artemis II astronauts, all in their 40s or 50s, flew a little more than 4,000 miles from the Moon, higher above the surface than the Apollo lunar missions. The four-person crew on Artemis II set a new record for the farthest humans have ever traveled from Earth: 252,756 miles (406,771 kilometers).
The US Space Force released a list Friday of a dozen companies working on Space-Based Interceptors for the Pentagon's Golden Dome initiative, a multilayer defense system to shield US territory from drones and ballistic, hypersonic, and cruise missile attacks.
The roster of Golden Dome Space-Based Interceptor (SBI) contractors, some of which were previously reported, includes Anduril Industries, Booz Allen Hamilton, General Dynamics Mission Systems, GITAI USA, Lockheed Martin, Northrop Grumman, Quindar, Raytheon, Sci-Tec, SpaceX, True Anomaly, and Turion Space.
The Space Force made 20 individual awards to the 12 companies in late 2025 and early 2026 using an acquisition mechanism known as Other Transaction Authority, or OTA, agreements. OTAs allow the Pentagon to bypass federal acquisition regulations and cast a wide net to attract a larger number of potential contractors, and are especially useful for rapid prototyping. That is exactly what the Space Force wants to see with the first phase of the SBI program.
On April 25, 1945, delegates from fifty nations met in San Francisco to establish a permanent forum for international cooperation: the United Nations.
Even before the U.S. entered World War II, U.S. president Franklin Delano Roosevelt, British prime minister Winston Churchill, and their advisors laid out principles for an international system that could prevent future world wars. In the 1941 Atlantic Charter, they declared that countries should not invade each other and should work toward disarmament, and that international cooperation, trade, and freedom of the seas would help knit the world together with rising prosperity and human rights.
Between 1942 and 1945, forty-seven nations signed the Declaration by United Nations, a treaty formalizing the alliance that stood against the fascist Axis powers. The treaty declared that signatories would not sign separate peace agreements with Germany, Italy, or Japan and would work together to create a world based on the 1941 Atlantic Charter.
In October 1943 the governments of the U.S., the United Kingdom, the Soviet Union, and China declared that they would continue to cooperate with each other after the war ended, and that they recognized the need to establish an international organization, “based on the principle of the sovereign equality of all peace-loving states, and open to membership by all such states, large and small, for the maintenance of international peace and security.”
To create that organization, representatives from those four nations met at the Dumbarton Oaks estate in Washington, D.C., in late summer and fall 1944. They hammered out the Dumbarton Oaks Proposals for an international organization called the United Nations. Its purpose would be to maintain international peace and security by acting collectively to stop aggression and settle international disputes, to strengthen ties between nations, and to work together to solve problems.
The organization was based on “the principle of the sovereign equality of all peace-loving states,” and membership in it would be open to all such states.
In February 1945, President Roosevelt, Prime Minister Churchill, and General Secretary Joseph Stalin met near Yalta in Crimea to discuss the postwar world. The Allies had liberated France and Belgium, and the Germans had lost the Battle of the Bulge in late January, while Soviet forces were within 50 miles of Berlin. It was clear that the end of the war in Europe was coming.
At Yalta the three leaders hashed out the last pieces of the proposed United Nations and agreed that “a United Nations conference on the proposed world organization should be summoned for Wednesday, 25 April, 1945, and should be held in the United States of America.” Those invited to the conference would be “the United Nations as they existed on 8 Feb[ruary], 1945” and any associated nation that had “declared war on the common enemy by 1 March, 1945.”
On March 1 a visibly exhausted Roosevelt addressed the nation. “A conference of all the United Nations of the world will meet in San Francisco on April 25, 1945,” he said. “There, we all hope, and confidently expect, to execute a definite charter of organization under which the peace of the world will be preserved and the forces of aggression permanently outlawed.”
“This time we are not making the mistake of waiting until the end of the war to set up the machinery of peace. This time,” he said, in reference to the failed League of Nations after World War I, “as we fight together to win the war finally, we work together to keep it from happening again.”
Roosevelt explained: “The structure of world peace cannot be the work of one man, or one party, or one Nation. It cannot be just an American peace, or a British peace, or a Russian, a French, or a Chinese peace. It cannot be a peace of large Nations—or of small Nations. It must be a peace which rests on the cooperative effort of the whole world.
“It cannot be a structure of complete perfection at first. But it can be a peace—and it will be a peace—based on the sound and just principles of the Atlantic Charter—on the concept of the dignity of the human being—and on the guarantees of tolerance and freedom of religious worship.”
Roosevelt died on April 12, 1945, and less than two weeks later, on April 25, 3,500 people arrived at the San Francisco conference: 850 delegates with their staff and advisors, along with the staff of the conference itself. More than 2,500 reporters and observers were also there to follow developments.
The conference organizers divided the delegates into committees to figure out how, exactly, to make the United Nations work. Together they wrote, and then adopted unanimously, the United Nations charter.
“We the peoples of the United Nations,” the preamble to that document began, are determined “to save succeeding generations from the scourge of war, which twice in our lifetime has brought untold sorrow to mankind.” The document declares the signers’ “faith in fundamental human rights, in the dignity and worth of the human person, in the equal rights of men and women and of nations large and small.” It calls for the maintenance of international treaties and international law.
The preamble also called for countries to live in peace with each other, uniting their strength to maintain international peace and security and making sure that “armed force shall not be used” unless it is in the common interest. As Roosevelt and Churchill had called for in the 1941 Atlantic Charter, it called for nations to work together “for the promotion of the economic and social advancement of all peoples.”
“To accomplish these aims,” the signatories announced, “[we] have resolved to combine our efforts.”
—
Notes:
https://avalon.law.yale.edu/wwii/moscow.asp
https://history.state.gov/historicaldocuments/frus1943v01/d684
https://digital.library.cornell.edu/catalog/ss:21796682
https://www.presidency.ucsb.edu/documents/address-congress-the-yalta-conference
https://avalon.law.yale.edu/wwii/yalta.asp
https://www.un.org/en/about-us/history-of-the-un/san-francisco-conference
This is a guest post from Jerry Rocha, a stand-up comedian and new local. You can follow him on Instagram here.
Could this be the year? Could these upcoming mid-terms be the elections where America, with an overwhelming number of votes, tells MAGA to eat shit?
So many signs are pointing to it, from polling to the numbers experts on cable news, all of whom say their data shows a blue wave, maybe even a historic one, is on the way.
I want to believe it, but I worry those number-crunching experts on cable news know just about as much as the gambling experts who have taken up all the talking head retail space on every cable sports outlet (By the way, it won’t ever stop there with the gambling bullshit. Sports was just the way in).
I’m sure the action on the mid-terms will be a rabid frenzy of bets, and the smart money should be on the fascists getting their asses kicked. My fear is that should have been the outcome every time a fascist asshole is on a ballot, and yet we’ve seen them win time and time again.
It’s so disheartening. Hillary should’ve beaten his ass (electorally). Biden should’ve beaten his ass by a much bigger margin than he did. Kamala should’ve beaten his ass, and instead there that motherfucker is, exposed as a sexual assaulter, exposed as a pedophile, and all that’s happened is that many of us have watched the dumbest friends and relatives we have make excuses for sexual assault and pedophilia.
How the fuck are they still okay with this shit?
It does seem that, thankfully, a few are peeling off, but what about the people who haven’t? The non-billionaires (not even close in so many cases) who are in no way benefitting from this asshole being in control … why are they still there?
If anyone has seen the brilliant John Sayles film Lone Star, then I’m sure you remember the cameo by Frances McDormand. If not, go watch it now. When I first saw the film, I thought her character, Bunny, was comic relief: a divorced woman who lived alone and whose home decorations, clothing, and conversations all seemed to center on the Dallas Cowboys. Having been born and raised in Dallas, I just saw what I figured to be a funny caricature of a mega fan.
Then I saw the movie again and had a completely different take on Bunny. What I initially took for comic relief, I soon saw as the complete opposite. This was a tragic, tragic individual who had no way to deal with her trauma and instead poured her existence into a football team.
Now, every time I see MAGA, I think of Bunny. It’s sad, knowing that there are people out in the world so lost and so damaged that they fall for that orange pile of shit and then say we’re the “deranged” ones when all the negative things he does are exposed.
It’s sad, but it’s also difficult to have empathy. We all have trauma; most of us, however, don’t deal with that trauma by taking solace in the fact that people different from us are hurting.
And yet, it still matters that some of them have pulled away from MAGA, and that more continue to pull away. So as much as I talk shit, I will also be happy to let any MAGA on the fence I happen to come across know how nice the water is on this side of the pool.
I will also hope the apathetic mostly left-leaning “both sides suck” political hipsters understand that if there was ever a mid-term election to show up to, this is the one.
The experts are saying the mid-terms will favor the Liberals. I’m cautiously optimistic. I suppose it helps that this is easily the worst president and administration we’ve ever had, and while I will sit here and hope that a big blue wave will be the case, I will also make sure I get my ass out there to vote. I will talk to everyone about how important it will be for them to go vote as well.
Although my cancer fight has left me mostly bedridden at the moment, I at least promise that my daily calls to the local GameStops will include some semblance of “please go and vote” talk.
The chance that any MAGA decides to have a face turn and reject the evil is so remote, it would be as if Bunny suddenly became a Washington Commanders fan. But then again, in Rocky IV, the Russians, in the middle of a massive fight in Russia between one of their own and Rocky, started cheering for Rocky.
So… hope?
I do have faith that a change (for the better) is on the way.
I have no idea how many times these MAGA pukes have been on a ballot and won, but if it’s the third time, or seventh time or whatever time, let’s just hurry up and get to the time where it’s a charm.
The trade accounts are among the most pernicious statistics ever collected. It’s long been remarked, for example, that merely calling something a “deficit” makes it seem bad, even though a current account deficit is matched by a financial account surplus. Put that issue aside, however, because the real problems are much deeper. The international accounts make it appear that individuals, in their ordinary buying and selling, bind us all in a collective endeavor. The accounts take millions of voluntary, mutually beneficial transactions between individuals and firms and repackage them as a relationship between nations—as if “America” were buying from “China”. Many, many experts get this wrong—not just non-economists who are misled by terms like “deficits.”
Don Boudreaux at Cafe Hayek gives a truly excellent example in replying to a reader who asks:
The USA ran trade deficits for 50 years. Those were offset by foreigners’ investments in the USA. Foreigners expect returns on these investments. Doesn’t it mean Americans eventually have to pay those returns to foreigners?
Don’s answer:
No.
The only Americans who are obliged to pay anything to foreigners are Americans who borrowed money from foreigners. (This number includes U.S. citizens-taxpayers whose government borrowed money from foreigners.) But no such obligation exists for other investments that foreigners made in the U.S. – those other investments being equity investments in the U.S. (for example, foreigners buying a restaurant in Houston), purchases of real estate in the U.S., and holding U.S. dollars.
If, for example, the foreign-owned restaurant in Houston goes bankrupt, the loss is fully borne by its foreign owners; no American is obliged to pay anything on that account to foreigners.

Of course, foreigners do expect positive returns on all of their U.S. investments, regardless of form. But with the exception of Americans’ repayment of principal and interest on funds that they borrowed from foreigners, no returns that foreigners earn on their investments in America are paid by Americans. If the foreign-owned restaurant in Houston is profitable, those profits are newly created wealth – wealth that’s created by that restaurant’s foreign owners.
In the international commercial accounts, when the restaurant’s foreign owners realize returns on their restaurant – say, by being paid dividends drawn on that restaurant’s profits – it appears that Americans are paying foreigners. This appearance comes from the fact that dollars flow from the U.S. to abroad, and so are recorded as payments from America to a foreign country or countries. But this appearance is misleading. America, as such, doesn’t pay those returns to the restaurant’s foreign owners. Nor do any flesh-and-blood Americans pay those returns. Those returns, again, are new wealth created by the restaurant’s foreign owners; economically, those returns are paid to the restaurant’s foreign owners by the restaurant’s foreign owners.
But the international commercial accounts mask this economic reality. What appears in the commercial accounts as payments by America to foreign countries are no such thing. This accounting mistakes geography for economic reality. Untold confusion is unleashed by supposing that, just because these dollar-denominated returns are created in the U.S. and then sent abroad to foreigners, these dollar-denominated returns are necessarily paid by Americans to foreigners.
As Don says, the trade accounts commit a kind of category error: they categorize geographic location, a where, and treat it as a who, as if “nations” traded. But nations don’t trade, people trade. This confusion wouldn’t matter too much if the statistics stayed in the back pages of government reports. But they don’t. They land on the front page, they shape policy, and they frame negotiations. When a president claims that “we lost $500 billion” to “crazy trade” with China, he is reading the international accounts as a story about nations in competition. The accounting creates the narrative; the narrative creates the policy. Bad accounting leads to bad policy. We would, in fact, all be better off if the trade accounts simply disappeared.
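The opening point, that a current account deficit is matched by a financial account surplus, is an accounting identity, and a toy ledger makes it concrete. This is a deliberately simplified sketch with made-up figures, not real balance-of-payments accounting:

```python
# Toy balance-of-payments ledger (illustrative only).
# Every cross-border transaction books offsetting entries, so the
# current account "deficit" and financial account "surplus" are the
# same underlying flows viewed from two sides.

def record(transactions):
    """Sum current-account and financial-account entries (in $bn)."""
    current = sum(t["current"] for t in transactions)
    financial = sum(t["financial"] for t in transactions)
    return current, financial

# Hypothetical year: Americans import $500bn more goods than they
# export; the dollars foreigners earn come back as equity purchases,
# real estate purchases, and dollar holdings.
transactions = [
    {"desc": "net goods imports",             "current": -500, "financial": 0},
    {"desc": "foreigners buy US equities",    "current": 0,    "financial": 300},
    {"desc": "foreigners buy US real estate", "current": 0,    "financial": 150},
    {"desc": "foreigners hold dollars",       "current": 0,    "financial": 50},
]

current, financial = record(transactions)
print(current, financial)  # -500 500
assert current + financial == 0  # the identity holds by construction
```

The "deficit" label attaches to the first number and sounds alarming; the "surplus" label attaches to the second; but they describe one and the same set of voluntary transactions.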
The post The Pernicious Trade Account appeared first on Marginal REVOLUTION.
In May 2024, Bloomberg ran a feature story by Mark Gurman under the headline, “Tim Cook Can’t Run Apple Forever. Who’s Next?” The subhead: “John Ternus, the head of hardware engineering, is emerging as a potential successor to the CEO.” The nut grafs from that piece:
There’s no reason to assume that a change at the helm is imminent. Cook may be older than the CEOs of the other tech companies at the top of the S&P 500, but he’s hardly the oldest person running a major corporation. “If Trump or Biden can be president at 80, Tim Cook can be CEO of Apple for many more years. It used to be automatic that CEOs are moved out at 65,” says someone who knows him. “The world has changed.”
While Cook hasn’t given any indication how long he’ll remain in charge — other than telling Dua Lipa it would be “a while” — people close to him believe he’ll be CEO at least another three years. After that, they say, he’ll start a charitable foundation to donate the wealth he accumulated at Apple.
If Cook were to stay that long, people within Apple say, the most likely successor would be John Ternus, the hardware engineering chief. In a company whose success has always come from building category-defining gadgets, the ascension of a hardware engineering expert to the CEO job would seem logical. Ternus, who’s not yet 50, would also be more likely than other members of the executive team to stick around for a long time, potentially providing another decade or more of Cook-esque stability.
Ternus is well-liked inside Apple, and he’s earned the respect of Cook, Williams and other leaders. “Tim likes him a lot, because he can give a good presentation, he’s very mild-mannered, never puts anything into an email that is controversial and is a very reticent decision-maker,” says one person close to Apple’s executive team. “He has a lot of managerial characteristics like Tim.” Christopher Stringer, a former top Apple hardware designer, called Ternus a “trustworthy hand” who’s “never failed with any role he’s been elevated to.” Eddy Cue, the Apple executive known as Cook’s closest confidant, has privately told colleagues that Ternus should be the next CEO, according to a person with knowledge of the matter.
Linking to Gurman’s report, I wrote:
I wouldn’t have linked to this if not for the above line about Eddy Cue. If Cue is telling people that, that means a lot. No executive at Apple is more juiced-in company-wide than Cue. Cook’s first action as CEO was to promote Cue, and Cue was arguably just as tight with and trusted by Steve Jobs.
It was two more years, not three, but Gurman was the first to report that Ternus was the guy at the top of the list.
There was no significant additional reporting after Gurman’s May 2024 Bloomberg report until November 15 last year, when the Financial Times published a blockbuster story under the headline “Apple Intensifies Succession Planning for CEO Tim Cook”, with four bylines: “Tim Bradshaw, Stephen Morris and Michael Acton in San Francisco and Daniel Thomas in London”. Bradshaw is the FT’s lead Apple reporter, and it’s no coincidence his name was first among the four. The article gets right to the point at the start:
Apple is stepping up its succession planning efforts, as it prepares for Tim Cook to step down as chief executive as soon as next year. Several people familiar with discussions inside the tech group told the Financial Times that its board and senior executives have recently intensified preparations for Cook to hand over the reins at the $4tn company after more than 14 years.
John Ternus, Apple’s senior vice-president of hardware engineering, is widely seen as Cook’s most likely successor, although no final decisions have been made, these people said.
People close to Apple say the long-planned transition is not related to the company’s current performance, ahead of what is expected to be a blockbuster end-of-year sales period for the iPhone. [...]
The company is unlikely to name a new CEO before its next earnings report in late January, which covers the critical holiday period. An announcement early in the year would give its new leadership team time to settle in ahead of its big annual keynote events, its developer conference in June and its iPhone launch in September, the people said. These people said that although preparations have intensified, the timing of any announcement could change.
So, per the FT in November, Apple’s plan was to name Ternus as the company’s next CEO “early in the year”, after their Q1 results (January 29) but ahead of WWDC (June 8). The halfway point between those dates was April 4; Apple announced Ternus as the company’s next CEO on April 20. Every single word of the FT report, in hindsight, was exactly correct. I can’t think of a way that their November story could have been more prescient. It was a home run. A report for the ages, like when CNet and The Wall Street Journal scooped the Mac’s transition to Intel processors on the eve of WWDC 2005.
My own take, back in November when the FT report dropped, was that it had the distinct aroma of a deliberate expectations-setting leak, and was almost certainly accurate:
That “several people” spoke to the FT about this says to me that those sources (members of the board?) did so with Cook’s blessing, and they want this announcement to be no more than a little surprising. [...]
I would also bet that Cook moves into the role of executive chairman, and will still play a significant, if not leading, role for the company when it comes to domestic and international politics. Especially with regard to Trump.
Cook moving into the position of executive chairman and continuing to play a leading role as the company’s political ambassador was my own speculation, and that proved out. Easy money, making that prediction.
One week after the FT’s report, in his Bloomberg “Power On” newsletter on November 23, Gurman wrote:
In October, I wrote that the internal spotlight on Ternus was “intensifying,” and that barring unforeseen circumstances he would be the leading candidate. But I didn’t put a date on when a change might happen. Then, around midnight two Fridays ago, the Financial Times published a report with three central claims: Apple is “intensifying” succession planning; Ternus is likely the next CEO; and Cook is expected to step down between late January and June.
The first two points are anything but revelations if you’ve read Bloomberg coverage and Power On, or have simply been paying attention to the realities of Cook’s age and tenure. The timing, however, is another matter entirely. It’s a huge deal that the FT did this: A respected publication should only predict the CEO transition date for a company of Apple’s scale with a high level of confidence — based on people legitimately in the know.
This is where I have concerns. Based on everything I’ve learned in recent weeks, I don’t believe a departure by the middle of next year is likely. In fact, I would be shocked if Cook steps down in the time frame outlined by the FT. Some people have speculated that the story was a “test balloon” orchestrated by Apple or someone close to Cook to prepare Wall Street for a change, but that isn’t the case either. I believe the story was simply false.
Gurman must be well and truly “shocked” by this week’s announcements, because as it turns out, Cook is stepping aside exactly “in the time frame outlined by the FT”. The FT’s report was not “simply false”. It was, in fact, completely true. The Financial Times, which truly is a respected publication (with no black marks on its record, like, say, Bloomberg’s to-this-day-still-uncorrected “The Big Hack” fiasco), obviously did have a high level of confidence in Apple’s plans, because they were, in fact, briefed by people “legitimately in the know”. Gurman’s reading comprehension is questionable as well, because the FT did not report that Cook would “step down” between January and June. The FT report spoke only of “naming a new CEO” and making an “announcement” between January and June. That’s exactly what happened. Nor is anyone “departing” — but a change in leadership will occur in the middle of the year.
In January, Gurman reiterated his stance that the FT was wrong:
It’s just a question of timing. The Financial Times reported last year that the change would happen as early as the beginning of 2026. But let me be clear: This seems unlikely.
By pooh-poohing the FT’s completely accurate reporting as “simply false”, Gurman wound up poo-pooing the bed. Calibrate the grains of salt with which you take his other reporting on Apple executive goings-on accordingly. A humble correction and sincere apology to the Financial Times — and Tim Bradshaw personally — are surely forthcoming in this weekend’s edition of Power On.1
And the check, I’m sure, is in the mail. ↩︎
The Norwegian Maritime Authority:
If you were born in 1980 or later and plan to operate a recreational craft of more than 8 metres in length or with an engine power of more than 25 hp, you need a boating licence. The boating licence is a certificate permitting you to operate Norwegian recreational craft of less than 15 meters in length (49.21 feet) in Norwegian territory.
That’s an interesting example of generational law. It kind of sucked, I’m sure, if you were from a family of mariners and were born in 1980 and your sibling was born in 1979. You got stuck having to qualify for a license and your sibling did not. But: this is very different from an outright ban on those born after a certain year. It’s a relatively gentle change, and the cutoff had to apply somewhere. (The state of Missouri has a similar law with a birth cutoff of 1984.)
This whole topic of generational law is fascinating. I’ve gotten more emails from readers — around the world — about my post on the U.K. ban on tobacco sales to those born in 2009 or later than just about anything I’ve written about recently. Lots of amazing feedback — including a note pointing me to the above Norwegian law. I’m replying to a bunch, though I can’t reply to them all, and I’m thankful for every one.
What makes the Norwegian boat licensing cutoff unobjectionable to me is that it’s not binary. It’s not saying those born in 1979 can pilot a boat and those born in 1980 cannot. It’s only saying that there’s an additional restriction on those born in 1980. A generational restriction feels fundamentally different from a generational ban. A bunch of readers who support these generational tobacco bans point to other laws with age cutoffs, like when the age for buying alcohol changed from 18 to 21. I’m sure that sucked if you wanted to drink and were 18, 19, or 20 when the limit was raised to 21 in your state. (Or if you were 17, and went from being one year away to four years away with the swoop of your governor’s pen.) But everyone turns 21 eventually. Adults putting additional restrictions on the young feels to me entirely different than adults banning the young from ever partaking in something that they — the current adults who are imposing the restriction — can continue to do in perpetuity. It’s not just a violation of the idea that all adults are equals; to me it’s also blatantly hypocritical.
If you tell me I’m not permitted to do something, but others are, it makes me want to do that thing. And it really makes me want to give the finger to whoever is imposing the restriction. Fine for you but not for me? Fuck you.
Also, grandfathering devices (old cars don’t need to meet new emission standards) or buildings (new buildings must have elevators for accessibility, but existing buildings aren’t required to add them) feels fundamentally different from grandfathering people.
To be clear, I support the intention of these tobacco laws, but I am highly dubious about their practical effect in addition to my objections to their fairness. Some people have a tendency to focus solely on intent and not on the practical effects of a law, as if good intent alone makes a good law. I think laws are only good when their practical effects are beneficial. A well-intentioned law with no practical benefit is needless bureaucracy; a well-intentioned law with adverse practical effects is a bad law.1 I can’t help but think everyone who supports these generational smoking bans is stuck thinking of those below the age cut-off as the 17-year-olds they currently are. But they’re all going to be 40, 50, 60 years old eventually. It’s absurd to think about a 60-year-old man who needs to ask his 61-year-old friend to buy him smokes.
My spitball idea for a generational law to keep more young people from ever starting a tobacco habit — and thus, nicotine addiction — would be through scaled taxation. Require everyone, no matter what age, to present ID when purchasing tobacco. Base the tax rate on the year they were born, with significantly higher taxes the younger they are, but with no wild fluctuation between people a year or two apart in age. Start with the highest rate for 21-year-olds, and lower those taxes by a point or two for every additional year of age. In this structure, no adult would be forbidden from buying tobacco, but someone who is 21 would pay significantly more for a pack of cigarettes than someone who is 65 — and only slightly more than someone who is, say, 22 or 23. Keep increasing the base rate for everyone, every year, so that everyone, no matter how old, pays slightly higher prices year after year. Thus the starting price, for newly-turned-21-year-olds, would escalate annually. That feels fair, should reduce the demand for a black market, and I think would have the practical effect of decreasing the number of young people who ever start — while also minimizing the punitive costs for older adults with decades-long addictions.
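For concreteness, here is what that schedule might look like as code. Every number below is a made-up placeholder; the structure is the point: the rate is pinned to birth year (the youngest legal buyers pay the most), neighboring cohorts pay nearly the same, and an annual escalator nudges everyone's price upward each year.

```python
# Sketch of the birth-year-indexed tobacco tax described above.
# All constants are hypothetical placeholders, not a real proposal's numbers.

RATE_FOR_2004_COHORT = 80.0  # % tax for someone born in 2004 (turns 21 in 2025)
PER_YEAR_DISCOUNT = 1.5      # older cohorts pay this much less per birth year
ANNUAL_ESCALATOR = 0.5       # every cohort's rate rises this much each year

def tobacco_tax_rate(birth_year, purchase_year):
    if purchase_year - birth_year < 21:
        raise ValueError("under minimum purchase age")
    # Rate is fixed by birth cohort, then escalated for every calendar year.
    cohort_rate = RATE_FOR_2004_COHORT + PER_YEAR_DISCOUNT * (birth_year - 2004)
    rate = cohort_rate + ANNUAL_ESCALATOR * (purchase_year - 2025)
    return max(rate, 0.0)

# A 21-year-old pays far more than a 65-year-old, only slightly more
# than a 23-year-old, and next year's new 21-year-olds start higher still:
print(tobacco_tax_rate(2004, 2025))  # 80.0
print(tobacco_tax_rate(2002, 2025))  # 77.0
print(tobacco_tax_rate(1960, 2025))  # 14.0
print(tobacco_tax_rate(2005, 2026))  # 82.0
```

No adult is ever banned; the curve just makes starting young expensive while sparing older, long-addicted buyers the steepest rates.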
This is my objection to the EU’s DMA in a nutshell. ↩︎
George Lucas got so many nuances right in Star Wars. Little touches that said so much. One of the most overlooked is a moment that I vividly remember from first seeing it, on the big screen, as a kindergartener or thereabouts. It’s during the scene where Luke enters the Mos Eisley cantina. We still haven’t met Han Solo and Chewbacca. And while we’ve seen space ships and droids, stormtroopers and Darth Vader, Jawas and Tusken Raiders, every character we’ve seen in the flesh is a human. And then, boom, 45 minutes into the movie, we enter the cantina, and the joint is absolutely lousy with dozens of wild and wildly different aliens — including the band playing that iconic jaunty song. We suddenly learn just how diverse the galaxy really is. It’s one of the best and most memorable scenes in movie history.
The moment I’m talking about is when Luke enters with C-3PO and R2-D2, and the frighteningly gruff bartender barks at him:
BARTENDER
We don't serve their kind here!
Luke, still recovering from the shock of
seeing so many outlandish creatures, doesn't
quite catch the bartender's drift.
LUKE
What?
BARTENDER
Your droids. They'll have to wait
outside. We don't want them here.
Luke looks at old Ben, who is busy talking
to one of the Galactic pirates. He notices
several of the gruesome creatures along the
bar are giving him a very unfriendly glare.
Luke pats Threepio on the shoulder.
LUKE
Listen, why don't you wait out by
the speeder. We don't want any
trouble.
THREEPIO
I heartily agree with you sir.
As a kid, I didn’t get it. Why would you not want droids? Star Wars made robots seem so real, so fun. Why would you ban them? That scene has stuck with me for my entire life. I didn’t get why, but I understood what it meant about that galaxy: the underclass deeply resented droids. The bartender’s attitude wasn’t “Hey kid, I’m sorry, but rules are rules and they’re not permitted.” His attitude was “Get those fucking things out of here.”
I think about this scene more and more lately.
Andy McMillan and Andy Baio:
Today, over 10 years later, and almost two full years after we retired the festival for good, we’re finally launching that website. Named after what we thought would be a throwaway title for a short-lived GitHub repo, allow us to introduce XOXO Explore. [...]
And, for the first time ever (!) on an XOXO website, we now have an actual About page, where we’ve attempted to explain what XOXO actually was, documenting the 12-year history of the festival and its related projects, and featuring a bunch of our favorite quotes from press and attendees over the years.
You’ll also notice the entire site is littered with floating visual ephemera: design artifacts, illustrations, animations, photos, and videos, all pulled from our vast archive of commissions and collaborations.
Conferences come and conferences go. But when they go, they tend to disappear. What a remarkable library, a record, XOXO Explore is.
Eva Corlett, reporting for The Guardian in 2023:
New Zealand’s new government will scrap the country’s world-leading law to ban smoking for future generations to help pay for tax cuts — a move that public health officials believe will cost thousands of lives and be “catastrophic” for Māori communities.
In 2022 the country passed pioneering legislation which introduced a steadily rising smoking age to stop those born after January 2009 from ever being able to legally buy cigarettes. The law was designed to prevent thousands of smoking-related deaths and save the health system billions of dollars. [...]
The laws were due to be implemented from July 2024. But as part of its coalition agreement with populist New Zealand First, National agreed to repeal the amendments, including “removing requirements for de-nicotisation, removing the reduction in retail outlets and the generation ban”.
It’s interesting that this law passed in the first place, but I would argue that there are no lessons to be learned from it given that it was repealed before it ever went into effect. And it seemingly was repealed solely along left/right political lines after a change in the country’s parliament, not because the public had turned against this specific not-yet-in-effect law.
The people do not yearn for automation
This written and video essay by Nilay Patel explores why AI is unpopular with the general public even as usage numbers for ChatGPT continue to skyrocket. It’s a superb piece of commentary, and something I expect I’ll be thinking about for a long time to come.
Nilay’s core idea is that people afflicted with “software brain” - who see the world as something to be automated as much as possible, and attempt to model everything in terms of information flows and data - are becoming detached from everyone else.
[…] software brain has ruled the business world for a long time. AI has just made it easier than ever for more people to make more software than ever before — for every kind of business to automate big chunks of itself with software. It’s everywhere: the absolute cutting edge of advertising and marketing is automation with AI. It’s not being a creative.
But: not everything is a business. Not everything is a loop! The entire human experience cannot be captured in a database. That’s the limit of software brain. That’s why people hate AI. It flattens them.
Regular people don’t see the opportunity to write code as an opportunity at all. The people do not yearn for automation. I’m a full-on smart home sicko; the lights and shades and climate controls of my house are automated in dozens of ways. But huge companies like Apple, Google and Amazon have struggled for over a decade now to make regular people care about smart home automation at all. And they just don’t.
Via John Gruber
Tags: ai, generative-ai, llms, nilay-patel, ai-ethics
The Financial Times has an article discussing Argentine inflation, which remains stubbornly high in 2026:
Javier Milei’s push to bring down Argentina’s chronic inflation is stalling, with the monthly rate hitting 3.4 per cent in March — its highest level in a year — as economists warn that tackling the final stretch could be far harder than halting the crisis at its peak.
Inflation has fallen sharply from the double-digit monthly rates Milei inherited when he became president in 2023. But the monthly rate bottomed out at 1.5 per cent in May and hit 2.9 per cent in both January and February. . . .
The libertarian president, who wrote a book called The End of Inflation as part of his pitch to voters in 2023, has said inflation could soon “start with a zero”, meaning a monthly rate of less than 1 per cent.

But economists are sceptical, particularly as the energy price surge caused by the Iran war adds fresh pressure to already sticky price dynamics. Its annual rate of nearly 33 per cent is a long way from its peak of nearly 300 per cent, but still among the world’s worst.
Two things struck me about the FT article. There is no mention of the money supply and there is no explanation for why Milei has refused to ask the central bank to control inflation:
The government has resisted committing the central bank to an explicit policy of targeting inflation. In the absence of such a framework as well as the abandoned exchange-rate anchor, the process has lost its engine, argues Gabriel Caamaño, an economist at consultancy Outlier.
“The disinflation process is at an impasse,” he said.
Yes. But why?
Milton Friedman famously said that persistent inflation is always and everywhere a monetary phenomenon, by which he meant a money supply phenomenon.
I’d say that high rates of persistent inflation are always and everywhere a money supply phenomenon.
I say “money supply”, because it is a tautology that inflation is a monetary phenomenon. After all, (by definition) inflation is literally the percentage decrease in the purchasing power of money. But while the value of money can change because of shifts in either the supply or demand for money, when inflation rates are persistently very high the cause is virtually always a rapidly expanding supply of money.
Here’s a graph from Trading Economics showing explosive growth in Argentina’s monetary base. Milei was elected in December 2023:
To be clear, it is not true that the inflation rate is exactly equal to the growth rate of the money supply. When inflation is slowing, as in 2024 and 2025, then real money demand tends to rise and inflation is usually less than the money growth rate. When inflation is accelerating, as in 2026, then real money demand tends to fall and inflation often exceeds the money growth rate. But over any extended period of time, the sort of extremely rapid growth in the monetary base that we see in Argentina will produce high rates of inflation. Notice the strong correlation between base money growth rates (annual averages) and inflation rates for the ten highest inflation countries in the mid- to late 20th century:
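That qualification can be put as simple arithmetic. Real balances are the money supply divided by the price level, so money growth minus inflation equals the growth of real money demand; rearranged, inflation equals money growth minus the growth in real money demand. A toy illustration with made-up rates:

```python
# Inflation as money growth minus growth in real money demand
# (an approximation: percentage growth rates treated as additive).
# All rates below are hypothetical, in percent per year.

def inflation(money_growth, real_money_demand_growth):
    return money_growth - real_money_demand_growth

# Disinflation phase (like 2024-25): real money demand rises,
# so inflation runs below the money growth rate.
print(inflation(money_growth=40, real_money_demand_growth=10))  # 30

# Accelerating phase (like 2026): real money demand falls,
# so inflation runs above the money growth rate.
print(inflation(money_growth=40, real_money_demand_growth=-8))  # 48
```

Swings in real money demand explain the short-run gaps, but over any extended period the money growth term dominates, which is why sustained base growth like Argentina’s shows up in the price level.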
The question is: why does Milei want the Argentine central bank to print so much money?
Not where I’m going
By the time you read this, I should be on the other side of the Atlantic. Robin and I will be tourists somewhere spectacular for a week, and we will not spend that time drafting posts. I won’t promise complete radio silence, but I’ll only weigh in, briefly, if the world is falling apart. (In other words, I probably will say something, but not much.)
Back on duty at the end of next week.
MUSICAL CODA
Here's a neat trick they recommend for applications that might spend considerable time thinking before returning a user-visible response:
Before any tool calls for a multi-step task, send a short user-visible update that acknowledges the request and states the first step. Keep it to one or two sentences.
I've already noticed their Codex app doing this, and it does make longer running tasks feel less like the model has crashed.
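The trick is easy to sketch in code. This is a minimal, hypothetical illustration (the Tool class and run_task function are stand-ins, not any particular SDK): the agent emits one short user-visible acknowledgment naming the first step before any slow tool calls begin.

```python
# Sketch of the "acknowledge before tool calls" pattern (hypothetical API).

class Tool:
    def __init__(self, description, fn):
        self.description = description  # e.g. "searching the docs"
        self.fn = fn

    def run(self, request):
        return self.fn(request)

def run_task(request, tools, send_to_user):
    # 1. One or two sentences, sent immediately, before any slow work,
    #    so the user knows the request registered and what happens first.
    send_to_user(f"Got it. I'll start by {tools[0].description}.")
    # 2. Then the long-running multi-step tool calling proceeds.
    return [tool.run(request) for tool in tools]

messages = []
tools = [
    Tool("searching the docs", lambda r: f"docs for {r}"),
    Tool("summarizing the results", lambda r: f"summary of {r}"),
]
results = run_task("GPT-5.5 migration", tools, messages.append)
print(messages[0])  # "Got it. I'll start by searching the docs."
```

The point is ordering, not content: the acknowledgment reaches the user before the first tool result, so a long wait reads as work in progress rather than a crash.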
OpenAI suggest running the following in Codex to upgrade your existing code using advice embedded in their openai-docs skill:
$openai-docs migrate this project to gpt-5.5
The upgrade guide the coding agent will follow is this one, which even includes light instructions on how to rewrite prompts to better fit the model.
Also relevant is the Using GPT-5.5 guide, which opens with this warning:
To get the most out of GPT-5.5, treat it as a new model family to tune for, not a drop-in replacement for gpt-5.2 or gpt-5.4. Begin migration with a fresh baseline instead of carrying over every instruction from an older prompt stack. Start with the smallest prompt that preserves the product contract, then tune reasoning effort, verbosity, tool descriptions, and output format against representative examples.
Interesting to see OpenAI recommend starting from scratch rather than trusting that existing prompts optimized for previous models will continue to work effectively with GPT-5.5.
Tags: ai, openai, prompt-engineering, generative-ai, llms, gpt
Release: llm 0.31
- New GPT-5.5 OpenAI model: llm -m gpt-5.5. #1418
- New option to set the text verbosity level for GPT-5+ OpenAI models: -o verbosity low. Values are low, medium, high.
- New option for setting the image detail level used for image attachments to OpenAI models: -o image_detail low. Values are low, high and auto; GPT-5.4 and 5.5 also accept original.
- Models listed in extra-openai-models.yaml are now also registered as asynchronous. #1395
…Double-entry book-keeping – John and I believe, with some evidence, that he may well have been the person to bring double-entry book-keeping to the UK from the Low Countries. In turn an Italian invention of the 13th century…
Business exchanges rather than markets – Gresham certainly brought the idea of an exchange or bourse from Antwerp (in turn from Ghent) to England. It really was a radical idea. No phone directory, no advertising, no internet – we used to block off Cornhill with chains so merchants could meet at regular times in the mud and rain to establish ventures, principally voyages, and fund them. The Exchange became more and more populated as the Low Countries fought with Spain. People don’t bring money to a war zone (or a cybersecurity hazard). Thus, it was the vessel into which poured the extensive wealth of the Low Countries and turned London from an outback sheep town of 30,000 at the beginning of the 1500s to a city of over 200,000 by 1600. Markets for cattle, sheep, produce, chickens, all existed – but a market for intangible things?
1st English Shopping Mall? – Gresham also brought the idea of a shopping mall to England. We have many examples of similar complexes and galleria from ancient times, but not in England. At the time it was the upper floor of his Exchange. The concept of shops not adhering to a physical locality – Bread Street, Milk Street, Boot Street, etc. – was more radical than it sounds to modern ears. Amusing then to have England referred to in later centuries as “a nation of shop keepers”.
From Michael Mainelli, here is my original post on Gresham.
The post More on Sir Thomas Gresham (from my email) appeared first on Marginal REVOLUTION.
Around Hormuz, however, the Portuguese always had to be on guard. Many naturally protected sandy coves (khors in Arabic) practically invited “pirates.” The Nakhilu, or Banu Hula, were Sunni Arabic speakers on the Gulf coast of Persia whose descendants still inhabit the Gulf coast of Iran. For decades they set up pocket ports in the many hidden bandars and byways of the mountainous shore and created an underground economy that rivaled Hormuz’s. These “pirates” were a major drain on Portuguese revenue, regularly attacking ships that paid the fee for the cartaz, and docked at Hormuz.
That is from Allen James Fromherz, The Center of the World: A Global History of the Persian Gulf from the Stone Age to the Present. From this same book I learned that Milton refers to the Straits in Paradise Lost, but under the name of Ormus:
High on a Throne of Royal State, which far
Outshone the wealth of Ormus and of Ind[ia],
Or where the gorgeous East with richest hand
Showrs on her Kings barbaric pearl and gold,
Satan exalted sat, by merit rais’d
To that bad eminence
The post That was then, this is now appeared first on Marginal REVOLUTION.

For more than two decades in sourcing and supply-chain architecture, we’ve watched industries scale only when their supply chains become predictable, certifiable and repeatable. Orbital and lunar data centers are now approaching that same inflection point. Hardware gets us into orbit; governance keeps us there. While we celebrate launch cadences, the orbital-grade supply chain is […]
The post The governance gap: Why orbital data centers need certification before they scale appeared first on SpaceNews.

Astrobotic, a developer of lunar landers and suborbital rockets, has successfully tested an advanced rocket engine that could power those vehicles.
The post Astrobotic tests advanced rocket engine appeared first on SpaceNews.

Contracts with 12 companies aim to test competing designs for boost-phase missile intercept from space
The post Space Force awards up to $3.2 billion for Golden Dome interceptor prototypes appeared first on SpaceNews.
This is the second part (II) of our series looking at the structure of the Carthaginian army. As we discussed last time, while Carthage has an unfair reputation for being an ‘un-military’ society, its military system was one of the highest performing in the ancient Mediterranean, able to produce vast and effective armies waging war on multiple fronts for prolonged periods.
Last time we surveyed the components of that military and then took a closer look at the role of Carthaginian citizen soldiers. What we noted was that Carthaginian citizen soldiers formed an important part of Carthage’s armies early in its history, and in its last decade, but at its height were generally not included in ‘expeditionary’ Carthaginian armies. I supposed that this is because Carthaginian citizen soldiers had their service restricted to Carthage’s North African homeland – because almost every time we gain visibility into Carthage’s wars there, we see citizen soldiers – but the evidence for this is extremely limited. What matters for us is that by the third century, Carthaginian citizens no longer make up a significant portion of Carthage’s military force outside of North Africa (though a handful still serve as officers).
That of course leads to the question: if Carthaginians weren’t the bedrock foundation of Carthage’s armies, who was? And this week, we’ll get to that answer, looking at the forces Carthage drew from North Africa. Our sources term them mercenaries, but we have more than enough reason to doubt that.
But first, as always, raising large armies of mercenaries, subject conscripts, vassal warlords and allies is expensive! If you too want to help me invade Italy with a multi-ethnic army of diverse origins in a doomed effort to stop the Roman Republic, you can help by supporting this project over at Patreon. If you want updates whenever a new post appears or want to hear my more bite-sized musings on history, security affairs and current events, you can follow me on Bluesky (@bretdevereaux.bsky.social). I am also active on Threads (bretdevereaux) and maintain a de minimis presence on Twitter (@bretdevereaux).
Returning briefly to our schematic of the Carthaginian army in 215, the second largest single component of Carthage’s roughly 160,000 men under arms in that year were 50,000 African infantry, joined by at least 11,000 African and Numidian cavalry. We’ll discuss the Numidians next week for reasons that will be clear then. But it is clear that the backbone of Carthage’s armies were these African infantrymen.
Our Latin sources (like Livy) term these fellows Afri, ‘Africans,’ while our Greek sources, like Diodorus and Polybius, will generally call them λίβυες, ‘Libyans,’ though we ought to be clear here that most of these men are coming from what today is Tunisia, rather than Libya. At the end of the First Punic War, Polybius notes that these men made up the largest part of Carthage’s army, returning in defeat from Sicily (Polyb. 1.67.7) and as noted above they are present in substantial numbers in Carthage’s armies in the Second Punic War. It is hardly the first time for these fellows, though: North Africans are reported in Carthage’s armies from the Battle of Himera on forward.
I should note, I am going to pretty consistently call these fellows from here on in ‘Africans’ or ‘North Africans.’ First off, it is very clear that when our Greek sources say λίβυες, they mean the same thing as our Latin sources saying Afri (indeed, often in cases where Livy is just straight up translating passages of Polybius with only modest embroidering, the equivalence is clear); these are just two different languages’ terms for the same people. But I think ‘Africans’ may be more helpful here for the modern reader for two reasons: first, most of Carthage’s African infantry does not come from the territory of the modern country of Libya; most of them come from what today is Tunisia, so one doesn’t want to give the incorrect sense that these troops are ‘Libyan’ in the modern sense of the country of Libya (some of them are, but most are not). Second, I think ‘African’ also gives a sense of the wider notion of these fellows as primarily being from Africa – some are indigenous Berbers, some are Phoenician settlers, some are of mixed heritage and – to go by recent DNA studies – some are likely settlers of Aegean extraction, who have substantially adopted Punic (=Phoenician) culture. So they’re all Africans in the sense that they live in Africa (both in the modern sense of the continent and the ancient sense of the region around Carthage), but a relatively diverse group.

Our reception of these troops is, alas, I think quite badly bent by Polybius who – in driving some of his own arguments – allows some critical misconceptions to fester in his writing. Polybius, as a source, is usually relatively trustworthy, but while Polybius will almost never lie to you, he will often allow you to believe things that aren’t strictly speaking true – Polybius is a master of ‘lying with the truth,’ as it were, and this is one such case.
We’ve actually discussed this before, but to recap briefly: Polybius describes Carthage’s African troops as μισθοφόροι, misthophoroi, which has a broad meaning (‘wage-bearing, wage-receiving’) and a narrow meaning (‘mercenary’) and here, as in a few other places, Polybius is happy to be technically correct with the first meaning and then let the reader assume the second meaning (which is wrong). That’s because Polybius seems to be – we don’t have all of his work, but this seems to be a thread of it – arguing for the superiority of citizen soldiers over mercenaries in an effort to get the Greeks of his own day to reform their own militaries to rely more on the former than the latter. Carthage thus provides an opportunity for Polybius to drive his ‘mercenaries are bad’ argument and he does so, fudging the terminology as necessary.
Because Polybius is generally so trusted, that has led generations of scholars to carelessly assume that Carthage’s armies – and their North African components – were mercenary in nature, but that assumption is broadly wrong.1
Instead, Diodorus Siculus gives us a remarkable picture of Carthaginian recruitment in the early 400s, describing Carthaginian musters in 410 and 406. In 410 (Diod. Sic. 13.44.6), the Carthaginian muster has three phases: first there is mercenary recruitment in Spain – signaled by the word ξενολογεῖν, xenologein ‘to recruit foreigners.’ Then Carthaginian citizens are mustered with καταγράφειν, katagraphein, ‘to write down, register, record.’ If that seems an odd way to muster someone, it has the same basic meaning and etymology as our own ‘conscript’ which comes from con+scriptus, ‘to write together.’2 We actually use the same idioms, we’ve just forgotten that we do: someone who is conscripted is written down (in a list of soldiers), someone who ‘enrolls’ or is ‘enrolled’ in the military is being added to the roll (list) of names. So we would say Carthaginian soldiers here are being enrolled. Finally, Carthage’s North African subjects are mustered with ἐπιλέγειν, epilegein, ‘picked out, called by name.’
That last word is striking, because that isn’t a process of taking volunteers: the North African troops are being picked, in this case by Carthage’s generals. In the muster of 406 (Diod. Sic. 13.80.1-4), Diodorus shifts his vocabulary a bit and this time it is the Africans who are katagraphein‘d into the army, this time explicitly by Carthaginian generals who head out into non-Carthaginian North African subject communities to conscript soldiers. In short these soldiers are paid conscripts, serving (as we’ll see) long terms, their recruitment presumably part of the deal Carthage imposed on subject North African communities.
I should note that older scholarship3 often supposed that perhaps this system was later superseded, that Carthage may have stopped conscripting Africans and instead imposed harsher taxes and started hiring mercenaries. This would make Polybius right, but the problem is that no source says this and as noted before, it isn’t necessary either: Polybius is generally slippery with the term misthophoros. As a result, modern scholars tend to reject this argument and instead view Carthage’s African infantry in the third century (that is, during the Punic and Mercenary Wars) as paid conscripts rather than volunteer mercenaries.4 And I think that is probably correct, that these are troops levied from Carthage’s North African dependencies – probably with a mix of incentives and compulsion – who are then paid for their continued service and loyalty.
In terms of the makeup of these communities, they were clearly a mix: some of these are Phoenician colonial foundations, while others were indigenous Libyan towns, whose population would have been broadly Berber. In terms of the incoming settlers, recent genetic work has suggested that Phoenician colonization drew very widely, with Punic settlements often showing a lot of Sicilian and Aegean (read: Greek) population in the mix too and actually very little Punic ancestry. That latter point puts me a bit on guard, because our sources are very clear that they understand a lot of these populations to be Phoenician (=Punic) by culture and descent and to have cultural and familial ties back to the Levant and Syria and the material culture archaeology seems to confirm this. More work is clearly going to be necessary here: the c. 200 individuals analyzed in the above-linked study make for a big sample for this kind of work, but the results could easily be thrown off by something as simple as different burial practices. That said, we know there was mixing between the indigenous Berber and settler-colonial populations and our sources sometimes pick out specific groups as being ‘Liby-Phoenician’ (λιβυφοίνικες in Greek; libyphoenices in Latin), ethnically blended groups mixing Phoenician and Berber heritage.
Naturally, given our sources, we don’t have a great window into what the ‘terms’ of this military service were, but there are a few things we can sketch out. First, it seems like Carthage equips these soldiers out of its own stores. Appian (Pun. 80) gives the startling figure that prior to the Third Punic War (so Carthage has already been stripped of most of its empire by this point!), Carthage turned out 200,000 military panoplies (that is, sets of equipment); the number is surely exaggerated, but even a tenth of that number would imply large state armories in Carthage for maintaining its armies which – given that Carthaginian citizens don’t really serve outside of Africa – must be intended for this African ‘backbone’ force. It may also explain why, when Carthaginian citizens do serve, they seem indistinguishable from Carthage’s African levies (e.g. Plut. Tim. 27.5): they’re being equipped out of the same armories. So if you want to know what these guys carried, you can largely lean on the previous post for our evidence for Carthaginian citizen troops.

Mostly, this means that Carthage’s African troops served as heavy infantry, like Carthaginian citizens did. That’s certainly how Hannibal uses them: they are his heaviest infantry and form the backbone of his army. It also explains why they could loot Roman heavy infantry equipment and eventually reequip along those lines without a serious change in how they fought (Polyb. 3.114.1; Livy 22.46.4). Beyond that, it is almost impossible to give much detail to their equipment. Plutarch describes the Carthaginian battle line in 341 as having leukaspides, ‘white aspides,’ implying their shields were akin to the Greek aspis (round, dished) which fits with some of the very limited representational evidence we have, but perhaps with covers in hide rather than bronze (Plut. Tim. 27.4; 28.1). Later, Appian describes the Carthaginians during the Third Punic War as having thureoi (= the Roman scutum), so they may have switched to the Gallic/Roman oval shield at some point (App. Pun. 93). But in both cases these writers are not anything like eyewitnesses and give few details, so they could also both be wrong.
Soldiers from Libya also had a reputation as highly capable skirmish troops using javelins and we see hints of this too. Hannibal has a group of soldiers, whose origin is never clarified, whom Polybius refers to as lonchophoroi (λογχοφόροι), lonche-bearers. This term has caused no end of problems, because W.R. Paton translates it as ‘pikemen’ (frustratingly un-fixed in the revised Paton, Walbank and Habicht (2010-2012) translation) leading a range of modern writers, especially popular ones, to misunderstand and imagine these fellows as Hellenistic-style sarisa infantry. But the lonche (λόγχη) is not a sarisa; the Greeks use this word very broadly to describe non-Greek spears, but most often to indicate kinds of dual-purpose thrusting-and-throwing weapons used by lighter infantry and cavalry. Arrian uses the word of the spears wielded by the Tyrians – fellow Phoenicians! – fighting Alexander at Tyre (Arr. Anab. 2.23.5) and Appian reports the Carthaginians preparing lonche for the Third Punic War (App. Pun. 93).
So these aren’t pikes – Carthage never utilized a Hellenistic-style pike formation – but rather a lighter dual-use spear. And let me just repeat that because I encounter this misconception all the time, so for the folks in the back: Carthage never utilized a Hellenistic-style pike formation and indeed, Carthage’s own tradition of close-order heavy infantry may also not have been a direct imitation or development from the Greek hoplite tradition either (the Greeks were hardly the only culture to stumble on the idea of ‘close-order infantry with spears and round shields‘). And indeed, if one looks even a little closely, the lonchophoroi are clearly a light infantry formation, generally deployed in a mixed group with Hannibal’s other elite light infantry, his Balearian slingers. We also get a reference to “light armed Balearians and Africans” at the Battle of Baecula with a different Carthaginian army, suggesting this sort of light infantry pairing may have been something of a standard (Livy 27.18.7).
So while most African infantry in Carthaginian service served as armored heavy infantry fighting in close-order, a small subset served as elite light infantry using lighter spears and often deployed alongside slingers. In this sense, the lonchophoroi may have filled a very similar role to Rome’s own velites: an integrated light-infantry javelin force that might scout or screen the main heavy infantry force. Hannibal’s combined force of Balearians and lonchophoroi at Trebia was 8,000, compared to probably something like 12,000 African ‘heavies,’ so there might have been something like 2 or 3 African ‘heavies’ for each light lonchophoros, which is quite similar to the Roman legion’s ratio of 2.5 heavy infantrymen (hastati, principes, triarii) to each veles.
Once recruited and equipped, these fellows evidently stayed in service for some time, perhaps for the duration of the campaign for which they were raised. They were probably gathered in Carthage itself to be marshaled and equipped. Notably, Polybius tells us that the families and possessions of the Carthaginian army returning from Sicily were initially waiting in Carthage itself (Polyb. 1.66), so it seems like these troops might leave their families in Carthage while out on campaign.
It’s also clear these soldiers were paid, though we don’t know the pay rates. What we do know, again from Polybius, is that like other mercenaries, most of their pay – their misthos (wages) as distinct from their sitos/sitonion/sitometria (maintenance pay) – seems to have been due at discharge, at the end of a campaign. That was, indeed, the problem that Carthage slammed into at the end of the First Punic War which led to the Mercenary War: the war being over, the arrears of their army suddenly came due at a moment when Carthage itself was basically bankrupt. That in turn might explain the willingness of African communities to put up with this conscription regime: at the end of each campaign, their men would normally come back with a whole bunch of cash in their pockets, essentially allowing each individual community to ‘recapture’ part of their tribute as it re-entered the community as settled misthos. That in turn, as Dexter Hoyos notes, might well have exacerbated the revolt against Carthage after the First Punic War: not only were the African troops incensed at not getting paid, but their home communities also felt cheated out of this economic bargain.5
What is clear is that African heavy infantry, supported probably in most cases by light infantry lonchophoroi, were the backbone of Carthaginian armies. Even when Carthaginian armies are composed primarily of Iberian or Gallic auxiliaries, allies or mercenaries, they are constructed around an African ‘backbone,’ providing generals a reliable and loyal component at the core of their army.
In battle, the Africans are often deployed in reserve. Hannibal tends (at both Trebia and Cannae) to put his Africans on the flanks, where their heavier formation provided strong structure to his army, but also where they avoided the brunt of the casualties. We’re told that Hannibal’s losses at Trasimene were concentrated among his Gallic troops (Polyb. 3.85.5) and at Cannae he evidently exposes his Gauls and Iberians and most of his losses (70%!) at that battle were taken by his Gallic troops, with the rest of the losses concentrated among his Iberians (Polyb. 3.117.6). At the Metaurus, Hasdrubal aims to win by attacking with his Iberian troops, holding his Africans in reserve and with his Gauls deployed simply to hold a hill on his left, suggesting both a lack of trust in his Gallic troops, but also a desire to avoid losses among his Africans (Livy 27.48, but see Lazenby (1978)). At Zama, Hannibal places his Iberians, Gauls and Ligurians (along with his skirmishers and elephants) in the front line, fresh African and Carthaginian troops in the second line and his own veterans in the final line (Polyb. 15.11; Livy 30.33). There’s a pretty clear pattern here in which Carthaginian generals aim to expend their Gauls first, their Iberians second and their Africans last.6
Carthage’s African troops are also frequently decisive, one way or the other. They are the heaviest infantry component in Carthage’s armies; our sources lead us to understand that they are as heavily equipped as any other kind of heavy infantry (hoplite, legionary, phalangite) in the Mediterranean at the time. Looking at our army figures from last time, we can also see that they are present in significant numbers in basically every Carthaginian field force during the Second Punic War. Polybius likewise reports that Africans made up the largest component of Carthage’s army at the end of the First Punic War, alongside Iberians, Gauls, Ligurians, Balearians and some Greeks (1.66.7).
It is hard to precisely assess the combat performance of these African troops, because they’re always deployed in mixed units. Certainly, as noted before, during Carthage’s Sicilian Wars, they seem to often be defeated by Greek hoplites, but equally – as noted – Carthage in that narrative seems to almost relentlessly ‘fail upward’ suggesting that perhaps Carthaginian (and thus African) military performance may have been somewhat better than our Greek sources let on. During the First Punic War, the Romans win nearly all of the open field engagements, but we never get a really detailed account of any of these battles, so it is hard to know what components of the Carthaginian army broke first.
During the Second Punic War, however, we do get some detailed battle narratives and what we see is that Carthage’s African infantry appear to be able to hold their own against Roman heavy infantry – quite clearly the best available at the time – pretty well. When Carthaginian armies are defeated, the Africans are generally the last to break; when they win, the Africans are often the key elements doing envelopment or holding key positions. On balance, then, I would say Carthage’s North African troops appear to be quite capable heavy infantry.
What Carthage doesn’t seem to have had was enough of them. We noted last time that at Carthage’s peak mobilization in 215, they had about 50,000 African infantry under arms. Michael Taylor in Soldiers & Silver (2020) looks more broadly at reported Carthaginian armies and estimated populations and concludes (and I think this is probably right) that this figure, around 50,000, probably represented the maximum sustainable mobilization from the North African population available to Carthage. That’s not bad – it’s far more than any Greek polis could manage – but hardly enough to rumble with alliances of Greek states (as in Sicily) or the major powers of the Mediterranean (like Pyrrhus or Rome in the Third Century) and so it would have to be supplemented.
And supplemented it was! And we’ll get to how in the next installment when we look at what we might term Carthaginian ‘vassals.’
Up betimes, and with my salt eel went down in the parler and there got my boy and did beat him till I was fain to take breath two or three times, yet for all I am afeard it will make the boy never the better, he is grown so hardened in his tricks, which I am sorry for, he being capable of making a brave man, and is a boy that I and my wife love very well. So made me ready, and to my office, where all the morning, and at noon home, whither came Captain Holland, who is lately come home from sea, and has been much harassed in law about the ship which he has bought, so that it seems in a despair he endeavoured to cut his own throat, but is recovered it; and it seems whether by that or any other persuasion (his wife’s mother being a great zealot) he is turned almost a Quaker, his discourse being nothing but holy, and that impertinent, that I was weary of him. At last pretending to go to the Change we walked thither together, and there I left him and home to dinner, sending my boy by the way to enquire after two dancing masters at our end of the town for my wife to learn, of whose names the boy brought word.
After dinner all the afternoon fiddling upon my viallin (which I have not done many a day) while Ashwell danced above in my upper best chamber, which is a rare room for musique, expecting this afternoon my wife to bring my cozen Scott and Stradwick, but they came not, and so in the evening we by ourselves to Half-way house to walk, but did not go in there, but only a walk and so home again and to supper, my father with us, and had a good lobster intended for part of our entertainment to these people to-day, and so to cards, and then to bed, being the first day that I have spent so much to my pleasure a great while.
Casinos don’t turn off. Not the lights, not the machines, not the air systems. Everything runs all the time, because that’s part of the experience. You walk in and it feels the same whether it’s morning or the middle of the night.
That constant activity comes at a cost though. A big one. In places like Las Vegas, casinos account for around 20% of all electricity consumption. And when you consider that everything runs 24 hours a day, it starts to make sense why both the financial and environmental costs are so high.
It’s not just slot machines and neon signs. Casinos are packed with things that need power. TV screens, sound systems, ventilation, cooling, and heating. All of it running without a break.
Lighting is the biggest piece of it. Everywhere you look, something is glowing or flashing. It’s designed that way on purpose. The environment keeps people focused on the games, not the time. But that alone can make up around 30% of the electricity bill.
Then there are the machines. Thousands of them. In Las Vegas, there are over 200,000 gaming machines, most running 24/7. They can use up to 35% of the total energy in a casino.
Put it all together and casinos end up using far more energy per square foot than places like hospitals.
Running like this isn’t cheap. One large casino can spend hundreds of thousands of dollars a year just on lighting. That doesn’t even include everything else going on behind the scenes.
But it’s not just about money anymore. Energy use impacts the environment, and that’s something operators can’t ignore the way they might have years ago.
Some changes being made are pretty straightforward. Swapping old bulbs for LED lighting is one of them. It doesn’t sound big, but it works. Energy use drops, sometimes by as much as 30%, and the savings add up quickly.
There’s also smarter control over how energy is used. Sensors can detect when areas are empty and adjust lighting or air conditioning. It’s not about shutting things off completely, just scaling back when possible.
Solar power has started to play a role too. Casinos have a lot of roof space, so panels make sense. Some properties are already generating a portion of their electricity this way. Large installations can even cover around 20% of energy needs.
It’s not enough to run an entire resort, though. These places are huge, with hotels, pools, theatres, and packed gaming floors. Solar helps, but it doesn’t replace everything.
Energy isn’t the only thing being looked at. Water use is another issue. Fountains, bathrooms, and cooling systems add up fast.
Low-flow fixtures are becoming more common because they’re easy to install and don’t take long to pay off. Some casinos are also looking into recycling water or using filtration systems to cut waste. There are even systems like geothermal heat pumps that reuse heat from the ground instead of relying on traditional energy.
There’s also been a notable shift away from physical spaces, at least for some players. Online casinos grew quickly when people couldn’t visit in person, and a lot of that stuck.
They still use energy, just in a different way, and the energy needed per player is generally lower. For some players, going digital feels like a more responsible choice, and convenience plays a role too.
Physical casinos can improve, but there’s a limit. They rely on being always on. That won’t change.
What can change is how efficiently they run. Operators are starting to balance the need for 24-hour entertainment with greener energy use. Better lighting, smarter systems, some renewable energy mixed in. It won’t cut usage overnight, but it’s taking things in the right direction.
Photo: Abhishek Navlakha via Pexels
CLICK HERE TO DONATE IN SUPPORT OF DCREPORT’S NONPROFIT MISSION
The post How Will Gaming Centers Buck The 24-Hour Costs appeared first on DCReport.org.

Here’s U.S. Attorney for DC “Judge” Jeanine Pirro announcing that her office is dropping its investigation into the chair of the Federal Reserve:
Her attempt to save face here is accomplished by claiming the Fed Inspector General will take over her work. But, as various reporters have noted, Powell himself had already asked the IG to look into cost overruns. It’s not clear anything new is happening.
The question now, of course, is whether this will satisfy the GOP senators who predicated their vote for Trump’s Fed nominee, Kevin Warsh, on the DOJ dropping its Powell investigation. (Trump’s Fed meddling is of course not limited to the Powell investigation — his attempt to fire Lisa Cook, for instance, is still before the Supreme Court — but the investigation is where senators including, most outspokenly, the retiring Thom Tillis, R-NC, drew the line.)
At a hearing earlier this week, it became clear that Warsh’s nomination wasn’t going anywhere until the Trump administration backed off a little. We’ll see if GOP senators are willing to accept this as enough. (I’d be surprised if they didn’t.)
I mentioned this a bit earlier on Ari Melber’s show tonight when we were talking about the high-profile MAGA defections of Tucker Carlson, Megyn Kelly, Joe Rogan and others. I’ve seen various theories: It’s about the Iran War. It’s about AI Jesus. Yes, it’s about all those things, but as off-ramps more than causes or drivers. Trump looks like the weak horse and no one wants to bet on or be associated with the weak horse.
It’s primal. It’s particularly powerful if your loyalty and the political currency you operate in is power. People can see that Trump is faltering, not just politically and electorally but organically. Even his bloodcurdling threats of genocide and civilizational destruction tell the same story. He makes wild threats and then negotiates against himself, extends deadlines in the face of intransigence. Bracket the wild remarks and you see a man who is weak and flailing. He has visibly lost the instinctive ability he once had to hoodwink or jujitsu bad situations somehow to his advantage. That has emboldened his enemies. Increasingly it has demoralized his supporters. In that climate, new outrages that might have been sloughed off or inspired impassioned defenses start looking like potential off-ramps.
If you’re here from the Beat With Ari Melber, welcome! If you’d like to join our community and support our work, just click here. Thank you in advance.
In a late-night Truth, Trump claims that the Southern Poverty Law Center was part of his grand, imagined conspiracy to steal the 2020 election, and writes that his DOJ’s politicized prosecution of the non-profit is a step toward overturning his electoral defeat.

(Hat tip to law professor and election expert Rick Hasen who, like us, is not really sure what Trump is going for here beyond a kind of bête noire word cloud.)
Links for you. Science:
Does Gender-Affirming Care Make Mental Health Worse? The case of a rather poorly-done paper.
Lake sturgeon restoration in Milwaukee River reaches a milestone
These Chimps Began the Bloodiest ‘War’ on Record. No One Knows Why.
With New Charter, Kennedy Redesigns Vaccine Committee and May Sidestep Court Ruling
From Arsenic in Antifreeze to a Single Pill
The Plight of the Monarch
How RFK’s War on Fluoride Is Taking Over the Dentist’s Office
Other:
A.I. Isn’t People. How many Reddit posts does it take to learn to read?
No One Knows Where US Vaccine Policy Goes Next
America fought to defeat fascism. This ‘triumphal arch’ reeks of it.
AI is the boss at this retail store. What could go wrong?
The DNC expected big money after big Democratic wins. It never came.
Once Targeted By ICE, Minneapolis-Area School District Celebrates Return Of All Students
AI got the blame for the Iran school bombing. The truth is far more worrying. LLMs-gone-rogue dominated coverage, but had nothing to do with the targeting. Instead, it was choices made by human beings, over many years, that gave us this atrocity
The Trump Administration Is Killing The U.S. Forest Service So It Can Also Kill U.S. Forests
College Republicans director made racist and sexist remarks on live streams
Voting for Trump is costing Latinos their wealth
DHS attorney said agents in Los Angeles should have ‘started hitting’ protesters, emails show
ICE deports family, including deaf boy who wasn’t given his assistive devices
Evidence Grows That Google’s AI Overviews Have Eviscerated the Media Industry
How Elon Musk’s Sci-Fi Hyperloop Failed: Before his misadventures in government efficiency, Musk promised to revolutionize commuting with a subway that would speed passengers between DC and Baltimore in a matter of minutes. The project was a farce—and a sign of things to come.
What changed Trump’s mind on Iran? Who the hell knows?
Education secretary used fake photo in post about Black history icon
Just How Big Could Democrats Win In 2026?
EPA scales back oversight on how toxic coal waste is stored
Did Wisconsin Just Offer a Glimpse of a Post-Trump Future?
Descendants renew fight to save historic Black cemeteries in D.C.
She Followed Homeland Security Agents. Then Her Global Entry Was Revoked.
D.C. mayoral candidates want to build more housing — but investors don’t
Mass. must eliminate nonmedical vaccine exemptions
Bowser’s final D.C. budget includes $469M in cuts amid tough fiscal picture
New Republican Food Benefit Cuts Are Taking Effect
Tech Media Propaganda Operation Makes It Official, Goes In-House At OpenAI
Trump’s Capitalism
New York Radio Yutz Has Had It With Mr. Met’s Antisemitism
Why do media elites believe their own propaganda?
Dan Bilzerian Wants to ‘Kill Israelis’ and Thinks Judaism Is ‘Terrible.’ Now He’s Running for Congress. The influencer, once known as the “king of Instagram,” declined to answer when TMZ asked him whether Adolf Hitler was antisemitic
Since we last looked at D.C.’s crime data two weeks ago, we had a very bad five-day stretch where six people were killed, including the daylight shooting of two teenagers (which would not have been prevented by the curfews the D.C. government has been debating this week). This increases the total to 19 homicides*. That said, at the same time last year, D.C. had 45 homicides, so we are still doing better. And D.C. is still on pace for another year of roughly one-third reduction in the number of homicides, as happened in 2024 and 2025.
Across the board, crimes are lower than at the same time last year, other than Assault w/Dangerous Weapon (and that might be a reporting difference), with ‘crimes against cars’ dropping precipitously. We are, however, seeing upticks in most categories this week, which is not entirely unexpected as the weather gets nice.
Here’s hoping we have a quiet week next week.
*Officially, we have had 21 murders this year, but two of the murders occurred in other years, with arrests that were not made until this year.
Relatively flat US output growth versus rising numbers of US researchers is often interpreted as evidence that ideas are getting harder to find. We build a new 45-year panel tracking the universe of US firms’ patenting to investigate the micro underpinnings of this claim, separately examining the relationships between research inputs and ideas (patents) versus ideas and growth. We find that average patents per R&D input are increasing, the elasticity of patents to R&D inputs is flat or rising, and there is no systematic evidence of a secular decline in patenting after controlling for research inputs. We then document a positive, significant, and fairly steady relationship between firms’ growth in ideas (patents) and labor productivity. Average firm growth after controlling for idea growth, however, declines. Together, these results suggest that innovative efforts play a key role in sustaining growth that has not diminished over the last four decades.
Here is the paper by Teresa C. Fort, Nathan Goldschlag, Jack Liang, Peter K. Schott, and Nikolas Zolas.
The post Growth is getting harder to find, not ideas appeared first on Marginal REVOLUTION.
For a decade, NASA promoted the idea of building a space station around the Moon known as the Lunar Gateway. It touted the facility as both a platform for exploring the lunar environment and testing the technology needed for deep-space habitation.
Like many major space projects, it faced delays. Originally, the first component of the space station was due to launch in 2022. Later, it was decided that this module, which would provide power and propulsion, would launch in tandem with a habitable volume known as the Habitation and Logistics Outpost (HALO) in 2024. This core was slated to be joined in 2026 by another pressurized habitation module, I-HAB, contributed by international partners.
These dates, of course, have come and gone. And in March, NASA Administrator Jared Isaacman announced that the Gateway was being "paused" so the space agency could focus on the lunar surface.
Welcome to Edition 8.38 of the Rocket Report! The big news this week concerned the third launch of the New Glenn rocket. The first 15 minutes of the flight were exhilarating for Blue Origin, with a previously flown rocket taking flight and then triumphantly landing on a barge at sea. But then the highest of highs was followed by the company's first loss of an orbital payload, with the AST SpaceMobile satellite stranded in a lower orbit than intended by an upper stage failure. We've heard it was due to a valve problem, but that would be no scoop, as it seems like it's always the valves that fail in this industry.
As always, we welcome reader submissions, and if you don't want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.
Canada's spaceport plans are not without critics. About a month ago, Canada's national defence minister, David McGuinty, announced an “historic investment” of $200 million over 10 years to Maritime Launch Services for the lease of a dedicated “space launch pad” in Nova Scotia. But some local residents, including Marie Lumsden, are pushing back. Writing in the Halifax Examiner, Lumsden shares a photo of a small concrete pad at the end of a gravel road (the entirety of the spaceport). The residents have formed a group, Action Against the Canso Spaceport, because they have "genuine concerns about this project and the people behind it."
The Wind in the Willows, right? Kenneth Grahame, 1908.
We all know the children’s story inside-out. Mole and Ratty and gruff Badger and conceited Mr Toad with his motorcars and all their adventures.
I picked up the book as an adult - I can’t remember why - and it wasn’t what I expected.
This week I have been reading it again and it is again astonishing.
I mean… let me share some of the prose with you.
This is when Mole encounters the river for the first time.
Green turf sloped down to either edge, brown snaky tree-roots gleamed below the surface of the quiet water, while ahead of them the silvery shoulder and foamy tumble of a weir, arm-in-arm with a restless dripping mill-wheel, that held up in its turn a grey-gabled mill-house, filled the air with a soothing murmur of sound, dull and smothery, yet with little clear voices speaking up cheerfully out of it at intervals. It was so very beautiful that the Mole could only hold up both forepaws and gasp, “O my! O my! O my!”
There’s a chapter where they’re looking back on the summer just gone, and a description of the plant-life soars:
The pageant of the river bank had marched steadily along, unfolding itself in scene-pictures that succeeded each other in stately procession. Purple loosestrife arrived early, shaking luxuriant tangled locks along the edge of the mirror whence its own face laughed back at it. Willow-herb, tender and wistful, like a pink sunset cloud, was not slow to follow. Comfrey, the purple hand-in-hand with the white, crept forth to take its place in the line; and at last one morning the diffident and delaying dog-rose stepped delicately on the stage, and one knew, as if string-music had announced it in stately chords that strayed into a gavotte, that June at last was here. One member of the company was still awaited; the shepherd-boy for the nymphs to woo, the knight for whom the ladies waited at the window, the prince that was to kiss the sleeping summer back to life and love. But when meadow-sweet, debonair and odorous in amber jerkin, moved graciously to his place in the group, then the play was ready to begin.
They go out in the boat at night looking for a lost young otter. (A whole other story but they encounter the divine spirit Pan who intercedes with a miracle and then wipes their memories lest they suffer the rest of their lives in the shadow of that awe.)
A description of moonlight:
The line of the horizon was clear and hard against the sky, and in one particular quarter it showed black against a silvery climbing phosphorescence that grew and grew. At last, over the rim of the waiting earth the moon lifted with slow majesty till it swung clear of the horizon and rode off, free of moorings; and once more they began to see surfaces - meadows wide-spread, and quiet gardens, and the river itself from bank to bank, all softly disclosed, all washed clean of mystery and terror, all radiant again as by day, but with a difference that was tremendous. Their old haunts greeted them again in other raiment, as if they had slipped away and put on this pure new apparel and come quietly back, smiling as they shyly waited to see if they would be recognised again under it.
It’s just… it’s…
O my! O my! O my!
I asked ChatGPT to calculate some readability stats for me: the average sentence length is 18.5 words.
Sentence length in literature has been falling over the years (LanguageLog).
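Stats like that are easy to reproduce yourself. Here's a crude sketch in Python; the sentence-splitting regex is my own rough heuristic, not whatever ChatGPT actually did:

```python
import re

def avg_sentence_length(text: str) -> float:
    """Average words per sentence, splitting naively on ., ! and ?."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    word_counts = [len(s.split()) for s in sentences]
    return sum(word_counts) / len(word_counts)

print(avg_sentence_length("O my! O my! O my!"))                   # 2.0
print(avg_sentence_length("Hello there. This is a short test."))  # 3.5
```

Naive splitting mangles abbreviations and dialogue, so treat the output as a ballpark figure rather than a real readability score.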
But it’s not the lengthy sentences that make this prose work for me. It’s the rhythm.
And I don’t really get that from reading it dead on the page. It’s because I’ve been reading The Wind in the Willows out loud.
Some years back I read Ursula Le Guin’s book about writing, Steering the Craft.
The first chapter is all about the sound of your words:
"The basic elements of language are physical: the noise words make and the rhythm of their relationships."
She recommends reading out loud.
So I started reading out loud.
I would take a page of prose from a novel that I really loved, and I would read it out loud, and out loud again, and again, and again, and again, until I could make it sound as wonderful as I felt it was when I was reading in my head.
It’s so hard to do. And you learn so much about words and meaning with this practice.
So I doubt you’re reading this post out loud.
But that passage about moonlight above…
For me, it doesn’t work in my head. It’s okay. But when I read it out loud - to my kid, which is my excuse right now - to make it make sense to her ears and for the words to carry her, I have to read it in a certain way, and when I do Kenneth Grahame’s words loft me into the sky, swinging clear of the horizon and right up there, free of moorings, just like his moon.
And when I read his words about the foliage on the riverbank, out loud, I’m right there too.
Do me a favour. Read that moonlight paragraph out loud. Even if under your breath, but pause right now, take a moment and do that, read it out loud.
Then read all of The Wind in the Willows because it’s free on Project Gutenberg in Kindle format and everything, and if you have an excuse to read it to someone else then do that, it is transporting and majestic and gentle all at once, and it is a joy to have his words in your mouth and in the air and in your ears.
It was used to track a Dutch naval ship:
Dutch journalist Just Vervaart, working for regional media network Omroep Gelderland, followed the directions posted on the Dutch government website and mailed a postcard with a hidden tracker inside. Because of this, they were able to track the ship for about a day, watching it sail from Heraklion, Crete, before it turned towards Cyprus. While it only showed the location of that one vessel, knowing that it was part of a carrier strike group sailing in the Mediterranean could potentially put the entire fleet at risk.
[…]
Navy officials reported that the tracker was discovered within 24 hours of the ship’s arrival, during mail sorting, and was eventually disabled. Because of this incident, the Dutch authorities now ban electronic greeting cards, which, unlike packages, weren’t x-rayed before being brought on the ship.
1. Luis Garicano on the task is not the job. And jobs and the Jevons paradox.
2. The lived experience of aphantasia.
3. Shruti on delimitation in India.
5. Leibniz on symbolic computation, from the unpublished papers.
6. A History of Christian Political Economy, by Ballor and Matson.
7. Michael Tilson Thomas, RIP (NYT).
The post Friday assorted links appeared first on Marginal REVOLUTION.
After several tests of unusual "nesting doll" satellites in low-Earth orbit, Russia is now fielding operational anti-satellite weapons with valuable US government satellites in their crosshairs, the four-star general leading US Space Command said this week.
Gen. Stephen Whiting didn't name the system, but he was almost certainly referring to a Russian military program named Nivelir, which has launched four satellites shadowing US spy satellites owned by the National Reconnaissance Office in low-Earth orbit. After reaching orbit, the Nivelir satellites have released smaller spacecraft to start their own maneuvers, and at least one of those lobbed a mystery object at high velocity during a test in 2020. US analysts concluded this was a projectile that could be fired at another satellite.
US officials have compared the Nivelir architecture to a Matryoshka doll, or a Russian nesting doll, with an outer shell concealing smaller, unknown figures inside.
‘Big dumb objects’ (BDOs) appear to great effect in science fiction. They come in all manner of sizes and shapes and they fulfill a wide range of functions. An early favorite of mine was Cordwainer Smith’s “Golden the Ships Were Oh! Oh! Oh!,” which I snagged on a long ago trip to a Chicago newsstand, where it appeared in an issue of Amazing Stories. It’s probably found most easily these days in The Rediscovery of Man: The Complete Short Science Fiction of Cordwainer Smith (NESFA Press, 1993), a collection that should be on every science fiction fan’s shelf.
Smith (a pseudonym for Paul Myron Anthony Linebarger, whose life was as remarkable as his fiction) goes to work on structures that are millions of miles long. I won’t say more for fear of spoiling the story for newcomers. More recent BDOs are better known: Dyson spheres and Dyson swarms are no strangers to these pages, and have been the subject of intense scrutiny by Jason Wright and his colleagues at Pennsylvania State University. The G-HAT (Glimpsing Heat from Alien Technologies) project scanned data from the Wide-field Infrared Survey Explorer satellite, looking at tens of thousands of galaxies for the waste heat signature of possible Dyson spheres. The idea that megastructures might interest a hugely advanced civilization is reasonable, but we have yet to find evidence that Dyson spheres exist.
Larry Niven’s Ringworld posits a structure that circles an entire star but does not encompass it. A transit signature might give this one away if ever found; imagine the lightcurve. Niven and Gregory Benford later came up with the ‘shipstar’ concept that Greg described some years back on Centauri Dreams. This was an unusual re-thinking of the original ‘Shkadov Thruster,’ a device that could be used to move an entire star. See the Bowl of Heaven trilogy for more.
In Russian physicist Leonid Shkadov’s original 1987 design, the thruster used asymmetric light pressure from a huge mirror to move an entire planetary system to a new destination. The physics works, but the speeds involved are slow, on the order of 20 meters per second after a million years. On the other hand, a truly long-lived species might find it plausible to wait a billion years to reach 20 kilometers per second, good for a whopping 34,000 light-year shift in position. A shipstar would be able to move considerably faster.
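Those figures hang together under back-of-the-envelope arithmetic. A minimal sketch, assuming constant acceleration over the whole period (my simplification; the real thrust would vary with stellar luminosity and mirror geometry):

```python
# Sanity check of the quoted Shkadov-thruster numbers, assuming
# constant acceleration (a deliberate simplification).
SECONDS_PER_YEAR = 3.156e7
LIGHT_YEAR_M = 9.461e15   # meters per light year

# The post quotes ~20 m/s after one million years of thrusting.
t1 = 1e6 * SECONDS_PER_YEAR
a = 20.0 / t1             # implied acceleration, ~6e-13 m/s^2

# Extrapolate the same acceleration over a billion years.
t2 = 1e9 * SECONDS_PER_YEAR
v2 = a * t2               # final speed, m/s
d2 = 0.5 * a * t2**2      # distance covered, m

print(f"speed after 1 Gyr: {v2 / 1000:.0f} km/s")
print(f"distance: {d2 / LIGHT_YEAR_M:.0f} light years")
```

The sketch gives 20 km/s and roughly 33,000 light years, in the same ballpark as the 34,000 quoted above (the small gap comes down to rounding and choice of year length).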

Image: An artist’s conception of the Benford/Niven ‘shipstar’ concept. Think of the ‘bowl’ as half of a Dyson sphere curved around a star whose energies flow into a propulsive plasma jet that moves the entire structure on its journey. Here the notion of living space may remind you of Niven’s Ringworld, that vast structure completely encircling a star, though not enclosing it. The difference is that in the ShipStar scenario, most of the ‘bowl’ is made up of mirrors, with living space just on the rim. Credit: Don Davis.
In conversations with Benford about his shipstar concept a few years ago, I learned that a solid Dyson sphere is unstable, and would need constant adjustment to maintain its position. Concerns over stability plague BDOs. Colin McInnes (University of Glasgow) looks at the problem in a recent paper, noting this about the Shkadov design:
In its simplest form a stellar engine can be considered as a single ideal ultra-large rigid reflective disc in static equilibrium above a central star… As the disc accelerates due to radiation pressure from the star, the centre-of-mass of the gravitationally coupled star-reflector system accelerates, leading to a displacement of the star.

Image: This is Figure 1 from a paper by Duncan Forgan (citation below). Caption: Diagram of a Class A Stellar Engine, or Shkadov thruster. The star is viewed from the pole – the thruster is a spherical arc mirror (solid line), spanning a sector of total angular extent 2ψ. This produces an imbalance in the radiation pressure force produced by the star, resulting in a net thrust in the direction of the arrow. Credit: Duncan Forgan.
That seems straightforward, assuming a civilization so advanced that it could build mirror structures of the needed size. Here too, though, we have stability problems. The McInnes paper is highly interesting, examining megastructure concepts and the possible ways of stabilizing them. While a uniform, rigid reflective disk proves unstable as a star-moving engine, a disk with its mass concentrated at the edges can be stable. Instead of a flat disk, we are looking at something much closer to the shape of a ring. Here passive stability is what we want – i.e., the object does not need continual adjustment by other technologies to maintain its position and function.
In the case of the Shkadov engine, we have this consideration:
…for an ideal reflector subject to gravitational and radiation pressure forces the gradient of these forces across the reflector will induce stresses. While the direction of the radiation pressure force is always normal to the reflector, the direction of the gravitational force will vary across the reflector moving from the centre to the edge. Therefore, while the component of the gravitational force normal to the reflector can in principle be balanced by the radiation pressure force, there will be an in-plane component of the gravitational force which will generate a compressive stress. A thin reflector will clearly be unable to support such compression. However, in principle a zero-stress reflector can be configured for a non-homogeneous, partially reflecting rotating reflector…
The math for a stellar reflector and a stellar ring is laid out in the paper’s appendices.
McInnes thinks that stability is useful as we investigate possible technosignatures in our SETI work, whether they be star-moving thrusters or energy-gathering Dyson objects. The assumption is that passive stability will be preferred because it is efficient and economical, not requiring control systems that must continually adjust position. Remember, too, that in searching for technosignatures, we have the possibility of finding megastructures like these that have survived the demise of their creators. Passive stability is essential for these objects to remain intact and detectable.
What McInnes calls a ‘Dyson bubble’ can likewise be stabilized. Here we’re talking not about a solid Dyson sphere but a constellation of discs, a ‘power swarm’ that allows a civilization to exploit most of the output of its star. The terminology can be confusing but bear with me. The author distinguishes between a cloud of small reflectors in orbit around the central star – huge in number, these form a so-called ‘Dyson swarm’ – and a ‘Dyson bubble,’ by which he means a smaller number of large reflectors in ‘statite’ configuration, so that instead of orbiting, radiation pressure exactly balances gravity. In other words, the ‘bubble’ components stay stationary relative to the star.
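The statite condition has a striking property worth a quick calculation: since gravity and radiation pressure both fall off as 1/r², the balance is independent of distance from the star and depends only on the reflector's mass per unit area. A back-of-the-envelope sketch with solar values (the perfect-reflector assumption and the numbers are mine, not drawn from the McInnes paper):

```python
from math import pi

L_SUN = 3.828e26   # W, solar luminosity
M_SUN = 1.989e30   # kg, solar mass
G     = 6.674e-11  # m^3 kg^-1 s^-2, gravitational constant
C     = 2.998e8    # m/s, speed of light

# For a perfect reflector at normal incidence, the radiation force on
# area A at distance r is 2*L*A / (4*pi*r^2*c); gravity on the same
# patch is G*M*(sigma*A) / r^2. Setting them equal, r cancels:
#   sigma = L / (2*pi*G*M*c)
sigma = L_SUN / (2 * pi * G * M_SUN * C)
print(f"max areal density for a solar statite: {sigma * 1000:.2f} g/m^2")
```

The answer comes out around a gram and a half per square meter (half that for a perfect absorber), far lighter than ordinary paper, which gives a feel for how gossamer any statite 'bubble' component must be.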
Self-stabilizing techniques are challenged not only by gravitational and radiation pressure but also by collisions between the myriad orbiting discs and by outside perturbing forces. Over large timeframes, passing stars can disrupt the gravitational dance, while interstellar comets, whose numbers are likely to be huge, present a similar risk of disruption. Even so, there are ways around this:
…the Dyson bubble can remain stable when its self-gravity and a simple model of a diffuse background of scattered radiation are included in the dynamics defined in Section 6.4. However, there are now regions of the parameter space where instability can occur, primarily at the edge of the Dyson bubble driven by the diffuse background radiation. In addition, it has been shown that the self-gravity of the Dyson bubble is in itself sufficient to ensure passive stability in the absence of the diffuse background radiation, and indeed it enhances the stability of the Dyson bubble when the diffuse background of scattered radiation is included.
A Dyson swarm, if properly implemented, can also ensure passive stability. Reflectors must always be configured ‘normal’ (perpendicular) to the central star “…using slightly conical reflectors with the centre-of-pressure displaced behind the centre-of-mass.”
So there are ways of doing these things as long as we abandon the Shkadov concept of a uniform reflector disc in favor of a ring supporting the reflector, or in the case of the two Dyson options McInnes looks at, a dense cloud of reflectors stabilized through orbital mechanics, or a smaller assembly of reflectors in static equilibrium with radiation pressure from the star exactly balancing gravity. But here I’m more interested in the consequences in terms of hunting for technosignatures:
A Dyson swarm can be expected to generate a different technosignature to a passively stable Dyson bubble discussed above. For example, the motion of the discs in a swarm would imply a flickering of the observed luminosity of the central star, with a larger variation expected from a small number of ultra-large discs relative to a large number of small discs. Finally, while an orbiting swarm of reflectors will be susceptible to collisions (B. C. Laki 2025), collisions within a Dyson swarm could in principle be minimised using families of displaced non-Keplerian orbits, where the orbit planes of the reflectors can be stacked in parallel rather than being inclined relative to each other (C. R. McInnes & J. F. L. Simmons 1992).
And what of Shipstar? A recent conversation with Jim Benford reminded me that his brother Greg had worked out a way to stabilize the induced flare on the central star through intense magnetic fields, but as far as I know, this concept has never been rigorously investigated. From the technosignature standpoint, McInnes’ paper reminds us that stability problems can be overcome should an advanced civilization choose to build Dyson-class structures, or undertake star-moving of the Shkadov variety. How to engineer the stability of BDOs should continue to provide insight into possible technosignatures, even if the lack of any trace of Dyson structures despite intensive work at G-HAT remains puzzling. Next week I want to look at an even more recent stellar engine concept as presented by Illinois State University’s Michael Caplan.
The paper is McInnes, “Stellar engines and Dyson bubbles can be stable,” Monthly Notices of the Royal Astronomical Society 546 (2026), 1-18 (full text). The Shkadov paper is “Possibility of Controlling Solar System Motion in the Galaxy,” presented at the 38th Congress of the International Astronautical Federation (IAF) in Brighton, UK. An English translation of the original paper was published in the Journal of Solar System Research Volume 22, Issue 4, pp 210–214 under the title “Possibility of Control of Galactic Motion of the Solar System.” The Forgan paper mentioned above is “On the Possibility of Detecting Class A Stellar Engines Using Exoplanet Transit Curves,” Journal of the British Interplanetary Society, Vol. 66, no. 5/6, 2013 pp. 144–154. Preprint.

Here are news reports on two kinds of insider trading on prediction markets: predicting what you are (or someone close to you is) going to do, or predicting a measurement you can control.
From the NYT:
Soldier Used Classified Information to Bet on Maduro’s Ouster, U.S. Says
Federal prosecutors say that Sgt. Gannon Ken Van Dyke, who was involved in the operation to oust Nicolás Maduro from power in Venezuela, used the information to place bets on a prediction market. By Benjamin Weiser and Jonah E. Bromwich
"A U.S. Army special forces soldier who helped capture Nicolás Maduro of Venezuela has been charged with using classified information to bet on the mission on Polymarket, a prediction marketplace, federal authorities said on Thursday.
"The soldier, Master Sgt. Gannon Ken Van Dyke, who was stationed at Fort Bragg in North Carolina, made more than $400,000 by betting on different outcomes related to Venezuela after learning of the operation, federal prosecutors and the F.B.I. said. "
##########
And from the WSJ:
Unusual weather bets on Polymarket spur French investigation by Alexander Osipovich, Sam Schechner
"France’s national weather service is investigating irregularities at a monitoring station at Paris Charles de Gaulle Airport after it reported anomalous temperature spikes. The spikes led to lucrative payoffs for some traders on Polymarket, the crypto-based betting platform.
...
"Every day, Polymarket lists contracts that allow its users to bet on the maximum temperature in dozens of cities worldwide. Its Paris contract is based on the reading at Charles de Gaulle airport, as reported by Weather Underground, an online weather data provider.
"On April 15, the temperature in Paris had reached 18 degrees Celsius in the afternoon and was cooling down in the evening when the airport gauge showed a brief, unexplained jump, hitting 22 degrees Celsius at 9:30 p.m. local time, Weather Underground data shows. Other nearby weather stations didn’t show a similar spike.
"Just before the anomaly, xX25Xx placed cheap, long-shot bets on Polymarket that the maximum temperature in Paris that day wouldn’t be 18 degrees Celsius, when other bettors were more than 99% sure that the day’s top temperature would remain at that level.
"The airport weather station also registered a temperature spike around 7 p.m. on April 6. That day, a Polymarket account with username “Hoaqin” made nearly $14,000 in profit by betting that Paris temperatures would peak at 21 degrees Celsius, Polymarket data shows. Temperatures at Charles de Gaulle had been hovering at around 18 degrees in the late afternoon, according to Weather Underground data.
...
"In March, an Israeli journalist said he had received death threats from Polymarket bettors demanding that he revise his article about an Iranian missile strike on March 10. The details of his article were used to settle bets on whether Iran had carried out a missile, drone or airstrike that day.
"Soon after the incident, Polymarket explicitly prohibited insider trading and market manipulation on its international platform for the first time. The amended rules state that Polymarket users can’t trade contracts where they can influence the outcome of the underlying event.
"A similar prohibition is in place in regulated prediction markets such as Polymarket’s main competitor, Kalshi, as well as at Polymarket’s new U.S. platform. "
Yesterday Secretary of the Navy John Phelan spent the day talking to lawmakers about the Navy’s plans for new ships and about the Pentagon’s huge budget request, only to get a call from Defense Secretary Pete Hegseth asking him to resign. Phelan is a billionaire businessman who had no previous military experience but who raised millions of dollars for Trump’s 2024 presidential campaign.
Haley Britzky, Zachary Cohen, Kristen Holmes, Natasha Bertrand, and Kaitlan Collins of CNN report that Phelan’s close relationship with President Donald J. Trump has irked Hegseth, who saw Phelan’s direct communications with the president as an attempt to go around him. And Deputy Defense Secretary Stephen Feinberg, a close ally of Hegseth’s, wanted to take over shipbuilding and Navy acquisitions, jobs that normally fall to the secretary of the Navy.
As the title of an article by Drew FitzGerald, Lara Seligman, and Marcus Weisgerber of the Wall Street Journal noted earlier this month, Feinberg is a billionaire thanks to his career in private equity and now is mounting “his biggest takeover yet: the Pentagon.” Feinberg is pushing Congress to pass the $1.5 trillion military budget Trump wants while at the same time overseeing the newly created Economic Defense Unit (EDU) in the Defense Department. The EDU is directing government investment in private sector defense contractors and has cut deals for the government to start taking equity stakes in those businesses.
Greg Jaffe and Helene Cooper of the New York Times reported that Trump has been frustrated by Phelan’s inability to fulfill his demand for the first of his new battleships by 2028, an inability caused by the fact that the U.S. shipbuilding industry doesn’t have the capacity to do it. At a Wednesday meeting with Trump, Hegseth and Feinberg convinced the president that Phelan had to go.
According to the CNN reporters, Trump told Hegseth to “take care of it,” prompting his phone call to ask for Phelan’s resignation. But Phelan didn’t believe Trump knew of the request, so he called officials at the White House to ask if they had heard he had been asked to resign and whether Trump knew. At about 5:30, Pentagon spokesperson Sean Parnell posted on social media that “Secretary of the Navy John C. Phelan is departing the administration, effective immediately.”
Still unconvinced, Phelan finally went to the White House to meet with Trump, who did not see him but later confirmed in a phone call that Phelan was out.
On social media yesterday, Trump posted two different New York Times pieces about the 2004 ratings for the television reality show The Apprentice, in which he starred as a business executive whose famous line was “You’re fired!” Today, on social media, Trump’s account posted: “John Phelan is a long time friend, and very successful businessman, who did an outstanding job serving as my Secretary Of The Navy for the last year. I very much appreciate the job that he has done, and would certainly like to have him back within the Trump Administration sometime in the future.”
Lara Seligman, Josh Dawsey, Alexander Ward, and Natalie Andrews of the Wall Street Journal noted today that Trump sided with Hegseth over Phelan, who was his friend and neighbor and raised millions of dollars for him. Phelan’s firing shows that Trump still supports Hegseth despite his missteps and high-level firings as Hegseth seeks to remake the Pentagon.
Dan Lamothe, Tara Copp, and Noah Robertson of the Washington Post note that Hegseth has purged the military of its most senior ranks, including “the top generals and admirals of every branch of service except for the Marine Corps and Space Force, several military lawyers and even the head of the Army’s chaplain corps.”
Today the Pentagon cracked down on the independence of Stars and Stripes, the newspaper charged with providing “independent news and information to the U.S. military community.” Stars and Stripes operates out of the Department of Defense. To make sure the paper remains independent of the Pentagon rather than becoming a propaganda outlet, Congress provided for it to be overseen by an ombudsman who regularly reports to Congress. The current ombudsman, Jacqueline Smith, reported today that she has been fired.
Smith has publicly criticized Hegseth’s crackdown on press freedom, and noted in a farewell column today that “[n]o one should be surprised that they’re kicking out the one person charged by Congress with protecting Stars and Stripes’ editorial independence. For nearly a year, Pentagon leadership has placed more and more restrictions on the mainstream media.” She said she “knew there would be perils for speaking out against Pentagon attempts to control the news” and urged Americans not to let Stars and Stripes “be controlled by Pentagon brass.”
While Hegseth is shaping the military to his own specifications and Feinberg is working to tie the government and an expanded military more tightly together, Republicans in Congress are trying to strengthen the power of the president over the American people for the next three years.
As Charles Tiefer of Talking Points Memo reported today, Senate majority leader John Thune (R-SD) has proposed funding Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP), the parent agency for Border Patrol, through budget reconciliation, a process that cannot be filibustered in the Senate. Because Republicans control both the House and the Senate, this means things tucked into a budget reconciliation measure can pass without any Democratic votes.
Senate Democrats refused to fund ICE and CBP for 2026 until Republicans agreed to reform the rules for the agents’ behavior, including requiring them to get a warrant from a judge before breaking into someone’s home—as courts have always required before this administration—and to take off their masks.
But Republicans have refused to agree to those reforms and are turning to funding through budget reconciliation so they don’t have to negotiate. And rather than funding ICE and CBP for the year, as the rest of the appropriations bills do, Thune is proposing to fund them for the next three years, taking away Congress’s power to reform ICE and CBP by withholding funds not just for 2026, but for 2027 and for 2028. Even if Democrats take control of the House or Senate after 2026, they could not reform ICE or CBP, which would remain a growing force under the president’s control.
Today Thune also teed up a vote on a bill to extend Section 702 of the Foreign Intelligence Surveillance Act of 1978 for three years, until April 2029. Both Democrats and Republicans are concerned that the system for collecting information on foreigners who appear to pose a threat to the U.S. can also sweep in U.S. citizens, enabling the government to surveil citizens without a judicial warrant. They want to make sure there are stronger guardrails in place to keep the government within constitutional limits. The House has been trying to hammer out a measure with cosmetic reforms, but if it fails, Thune will try to pass a three-year extension of Section 702 with no reforms, taking away from Congress the ability to limit problematic government surveillance.
But the tide defending democratic values continues to rise.
On Tuesday, more than 100 former NASA astronauts announced they were launching Astronauts for America, a nonpartisan organization to protect American democracy. In an open letter introducing their organization, they noted that as astronauts, they “have sworn to defend the Constitution of the United States” and continued: “We are committed to science, evidence-based decision-making, public service, and the rule of law.” They vowed to speak out for American values and to work with lawmakers to protect those values: “the rule of law, constructive checks and balances, equal opportunity, and the peaceful transfer of power.” They reminded people that “[a] strong democracy makes all else possible: economic growth, national security, and our rights and freedoms.”
“I think we’ve all been getting concerned for quite a number of years about not being comfortable with the way some things are going,” Astronauts for America co-founder and former astronaut Linda Godwin told Adam Kovac of Scientific American. “It was powerful to find out that a lot of us felt the same way, and there’s a stronger voice together.”
—
Notes:
https://www.wsj.com/politics/national-security/us-navy-secretary-john-phelan-what-happened-83bbc61a
https://www.cnn.com/2026/04/22/politics/john-phelan-navy-secretary-leaving
https://www.washingtonpost.com/national-security/2026/04/22/john-phelan-navy-hegseth/
https://www.washingtonpost.com/business/2026/04/23/stars-stripes-ombudsman-fired-pentagon/
https://www.nytimes.com/2026/04/23/us/politics/trump-navy-secretary.html
https://www.congress.gov/bill/119th-congress/senate-bill/4344
https://www.politico.com/live-updates/2026/04/23/congress/mike-johnson-702-surveillance-00889135
https://www.astronautsforamerica.org/
X:
SeanParnellASW/status/2047064432564482188
SenatePress/status/2047220640336187420
Stripped of easy moralising, literature makes us relish the search for truth in an age when many believe truth to be dead
- by Flora Champy
Speaking of Chris Espinosa, this is pretty neat:
On September 1 I’ll join the elite club (members Steve Wozniak, Steve Jobs, Mike Markkula, and Bill Fernandez) who have worked under a number of Apple CEOs ≥ our employee number:
Woz: 1 (Scott)
Jobs: 2 (Scott, Markkula)
Markkula: 3 (Scott, Sculley, Spindler)
Fernandez: 4 (Scott, Markkula, Sculley, Spindler)
Espinosa: 8 (Scott, Markkula, Sculley, Spindler, Amelio, Jobs, Cook, Ternus)
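The club’s admission rule is simple enough to check in a few lines of Python. The roster below is just the list above restated as data; nothing here goes beyond what Espinosa posted.

```python
# Employee number and CEOs worked under, per the list above.
club = {
    "Woz": (1, ["Scott"]),
    "Jobs": (2, ["Scott", "Markkula"]),
    "Markkula": (3, ["Scott", "Sculley", "Spindler"]),
    "Fernandez": (4, ["Scott", "Markkula", "Sculley", "Spindler"]),
    "Espinosa": (8, ["Scott", "Markkula", "Sculley", "Spindler",
                     "Amelio", "Jobs", "Cook", "Ternus"]),
}

# You qualify when the number of Apple CEOs you've worked under
# is >= your employee number.
members = [name for name, (emp_no, ceos) in club.items() if len(ceos) >= emp_no]
print(members)  # → ['Woz', 'Jobs', 'Markkula', 'Fernandez', 'Espinosa']
```

Note that for everyone on the list the two numbers are exactly equal, which is what makes the club so hard to join: each additional CEO only admits people with ever-higher employee numbers.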
Tom Warren, The Verge (gift link):
“Many of these employees have spent years, and in some cases, decades, shaping Microsoft into what it is today,” says Microsoft’s HR chief Amy Coleman in a memo seen by The Verge. “For those who may be considering their next chapter, we’re offering a one‑time Voluntary Retirement Program.” Microsoft says it applies to only a “small percentage of our US employees.”
US employees whose combined years of service added to their age totals 70 or more will be eligible for voluntary retirement, and Coleman says this will include “generous company support.” It’s not clear if this is a precursor to more layoffs at Microsoft, but it certainly looks like a method to avoid a bigger round of layoffs ahead of Microsoft’s new financial year in July.
70 combined years? My god, when did Microsoft get so, well, soft? I just read about a guy at Apple whose age plus years of employment will hit something like 114 later this year. If I weren’t so lazy I’d double check the exact number with a calculator, but whatever it’s up to today, he hit 70 combined years back around the time the first iMac came out.
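As reported, Microsoft’s eligibility rule reduces to a single comparison. A quick sketch, with made-up sample numbers (the 70 threshold is from the memo; the ages are not):

```python
def combined_years(age: int, years_of_service: int) -> int:
    """Microsoft's reported 'rule of 70': age plus tenure."""
    return age + years_of_service

def eligible_for_voluntary_retirement(age: int, years_of_service: int) -> bool:
    """Per the memo, US employees qualify at a combined total of 70 or more."""
    return combined_years(age, years_of_service) >= 70

# A hypothetical 48-year-old with 22 years at the company just clears the bar.
print(eligible_for_voluntary_retirement(48, 22))  # → True
```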
Rachel Metz, reporting for Bloomberg:
A small group of unauthorized users have accessed Anthropic PBC’s new Mythos AI model, a technology that the company says is so powerful it can enable dangerous cyberattacks, according to a person familiar with the matter and documentation viewed by Bloomberg News.
A handful of users in a private online forum gained access to Mythos on the same day that Anthropic first announced a plan to release the model to a limited number of companies for testing purposes, said the person, who asked not to be named for fear of reprisal. The group has been using Mythos regularly since then, though not for cybersecurity purposes, said the person, who corroborated the account with screenshots and a live demonstration of the model.
Jess Weatherbed, at The Verge (gift link):
The model was reportedly accessed illicitly on April 7th, the same day that Anthropic announced it was releasing Mythos to a limited number of companies for testing. The group that gained the unauthorized access has not been publicly identified, though Bloomberg reports that its members are part of a Discord channel that seeks out information about unreleased AI models. [...] Other unreleased Anthropic AI models have also been accessed by the group, according to Bloomberg.
So on the one hand, Anthropic itself is the one describing Mythos as a dangerous national security threat. On the other hand, their own security is so sloppy that rando hooligans on Discord have had access to Mythos since the day it was announced, and regularly access other unreleased Claude models. This, just weeks after Anthropic screwed up and accidentally exposed the entire source code to Claude Code.
If Mythos is as dangerous as Anthropic (including CEO Dario Amodei) claims, this is a colossal screw up. If a Discord group of AI enthusiasts has unauthorized access, why should we not assume that Chinese, Russian, North Korean, and Iranian intelligence agencies do too? And if this is no big deal, then Anthropic (and Amodei) are full of shit about how dangerous Mythos is. One way or the other it looks like a total clown show over there.
Ephrat Livni, reporting for The New York Times (gift link):
Britain aims to raise a “smoke-free generation” by permanently banning the sale or supply of tobacco to anyone born in 2009 or after, with a bill that was approved by Parliament on Tuesday.
The bill applies to people currently 17 years old or younger and aims to keep them from ever picking up the habit in their lifetime. The proposal is expected to soon go into law after the final formality of approval by King Charles III.
Lawmakers say that in practice, the measure means the age of sale for tobacco products will rise over time as the targeted demographic group grows older and could lead to a smoke-free society. The law will apply in England, Scotland, Wales and Northern Ireland.
I’ve never smoked and I’m strongly in favor of most — maybe all? — of the smoking bans and tobacco-related public health measures that have been passed in my lifetime. I can’t imagine going back to when smoking was permitted in restaurants, bars, airplanes, and public spaces. I’m also strongly in favor of stiff taxes on tobacco products to discourage their use.
But this U.K. law seems bonkers to me. To me, something ought to be either legal for adults or not. The idea that if you’re already 18 years old you can buy tobacco products for the rest of your life, but if you were born in 2009 or later, you’ll never be permitted to, is so contrary to my sense of fairness that I’m finding it hard to put my objection into words. All adults should be equals under the law. That’s my take in a nut. If smoking should be illegal, it should be illegal for everyone. I’ve never heard of a law like this anywhere in the world. It’s like they’re enshrining in law that everyone in the U.K. who is today a child is forever a child when it comes to tobacco. If there are examples of similar laws I’m unaware of, I’d love to hear about them. [Update: Brookline Massachusetts passed a town ordinance like this in 2021, and after it was upheld by the state supreme court in 2024, a few other MA towns have too. My cynical guess is that the only effect of this law is to annoy young Brookline smokers by making them drive a few miles to buy smokes, but if the actual effect is that fewer young Brooklinites (sp?) smoke, that’s great. But I also doubt that anyone in Brookline’s municipal government is going to commission a study to see if the law had any practical effect on smoking rates.]
Maybe the British are different, but there’s no way this law would work in America. First, I don’t think such a law would ever gain popular traction. But even if it did, it would just create a black market. At least when we banned booze, we banned it for everyone.
Russ Choma, reporting for Mother Jones:
Devin Nunes was not an obvious choice to run a fledgling social media network, but after $1.1 billion in losses, the former dairy farmer and congressman is out as the head of Truth Social.
Donald Trump Jr., a board member at Trump Media & Technology Group, the parent company of Truth Social, said on Tuesday night that Nunes would be replaced by another executive who formerly worked at Hulu. Nunes confirmed the move in a Truth Social post of his own.
The company, which is majority owned by Donald Trump, has seen its stock plummet 84 percent under Nunes’ leadership, from its debut price of $58 back in 2024. The current share price of around $9.80 is arguably still optimistic for a company that has lost $1.1 billion since it went public, and recorded just over $10.6 million in revenue in the same time.
Like a well-oiled Atlantic City casino.
When Trump Media was first announced as a concept, the Trump family said it would include Truth Social, streaming television services to rival Netflix and Amazon, and web-hosting that would rival Amazon’s AWS business. And all of it would be devoted to fighting the “woke” media and corporate culture that Trump said had blacklisted him following Jan. 6. Truth Social would be a redoubt for freedom of speech, the streaming services would have wholesome non-“woke” content that America craved, and the web-hosting would provide a home for any company that dared to challenge Amazon’s alleged anti-free speech motivations.
I’m sure the rest of that has merely been delayed, temporarily, while Trump Media’s best and brightest minds continue working on the cell phone they started selling last summer but still haven’t shipped.
Nilay Patel, in a terrific essay (and Decoder one-sider) at The Verge:
In fact, the polling on this is so strong, I think it’s fair to say that a lot of people hate AI, and that Gen Z in particular seems to hate AI more and more as they encounter it. There’s that NBC News poll showing AI with worse favorability than ICE and only a little bit above the war in Iran and the Democrats generally. That’s with nearly two thirds of respondents saying they used ChatGPT or Copilot in the last month. Quinnipiac just found that over half of Americans think AI will do more harm than good, while more than 80 percent of people were either very concerned or somewhat concerned about the technology. Only 35 percent of people were excited about it.
Poll after poll shows that Gen Z uses AI the most and has the most negative feelings about it. A recent Gallup poll found that only 18 percent of Gen Z was hopeful about AI, down from an already-bad 27 percent last year. At the same time, anger is growing: 31 percent of those Gen Z respondents said they feel angry about AI, up from 22 percent last year.
A good friend texted me a few weeks ago that “the phrase ‘software is eating the world’ sure hits differently now” than when Marc Andreessen coined the term back in 2011. (Patel, in fact, references Andreessen’s seminal essay.) That same friend texted me a link to this piece by Patel this morning.
Something is profoundly off in the computer industry when it comes to software broadly and AI specifically. It’s up for debate what exactly is off and what should be done about it, but the undeniable proof that something is profoundly off is the deep unpopularity surrounding everything related to AI. You can’t argue that the public always turns against groundbreaking technology. The last two epoch-defining shifts in technology were the smartphone in the 2000s, and the Internet/web in the 1990s. Neither of those moments generated this sort of mainstream popular backlash. I’d say in both of those cases, regular people were optimistically curious. The single most distinctive thing about “AI” today is the vociferous public opposition to it and deeply pessimistic expectations about what it’s going to do.
You can’t advertise people out of reacting to their own experiences. This is a fundamental disconnect between how tech people with software brains see the world and how regular people are living their lives.
So what is software brain? The simplest definition I’ve come up with is that it’s when you see the whole world as a series of databases that can be controlled with the structured language of software code. Like I said, this is a powerful way of seeing things. So much of our lives run through databases, and a bunch of important companies have been built around maintaining those databases and providing access to them.
Zillow is a database of houses. Uber is a database of cars and riders. YouTube is a database of videos. The Verge’s website is a database of stories. You can go on and on and on. Once you start seeing the world as a bunch of databases, it’s a small jump to feeling like you can control everything if you can just control the data.
But that doesn’t always work.
“Software brain” is a good term — a tidy two-word encapsulation of a sprawling worldview that is currently very much in vogue. Take some time to read Patel’s whole piece carefully. It feels important, and it’s really well considered.
An FT poll of 4,000 workers in the US and UK shows adoption is heavily skewed towards the best-paid workers: more than 60 per cent use AI daily, compared with just 16 per cent of the lower earners.
Link here. Note also that the youngest workers are not those who use AI the most, rather it is workers in their 30s. Men in the workplace are using AI more than women are. A very good piece by Madhumita Murgia and John Burn-Murdoch.
The small robot has brushed past me five times in the last hour.
It runs loops around the perimeter of the third floor of this bio lab, serving as a courier. The machine’s job is to visit workstations and keep other robots - arms bolted to lab benches - fed with whatever they need, be it pipette holders, sealed plates, or something in a labeled bag. The little bot is relentless and unconcerned about me or much else beyond its job. Out of the corner of my eye, I spot chairs still rotating slowly on their bases from where it clipped them on the last pass.
About a hundred robotic arms fill this room, each one positioned beside a different scientific tool. The arms must deal with centrifuges, incubators, chambers and tubes. They run simultaneously and continuously. The small robot links them together, ferrying consumables between stations the way a junior scientist carries things between benches. Except the benches are robots. And so is the assistant.
All of this is the brainchild of Michelle Lee, the founder and CEO of Medra. And, at this moment, she’s rather proud that one of her robots has learned to open and close a glass door with ease.
MEDRA TODAY formally announced the opening of its 38,000 square foot warehouse in San Francisco. The company runs what it calls “physical AI scientists”: general-purpose robot arms with cameras mounted near their grippers and nine different sensors - all governed by software that lets the arms operate lab instruments the way a trained human would.
Standard lab automation gear, the kind that has existed for two decades, comes with dated APIs and rigid interfaces. Only about five percent of the instruments sitting on a scientist’s bench fall into the “can be automated” category. The rest — centrifuges you open and balance, pipettes you grip and tilt and time — were designed for hands. Medra thinks it has the technology to automate both the old and the new. Its software uses computer vision and manipulation models to adapt to the instruments that labs already own. Lee says that, if successful, Medra’s physical AI scientists can bump the overall automation number for bio-tech tasks from five percent to seventy-five percent.
THE PLATFORM works in two linked layers.
The first is physical: cameras are mounted on every arm and every lab bench with the nine sensors doing yet more monitoring. When an arm opens a centrifuge, for example, the wrist camera reads the rotor angle to balance the load. When a pipette misses a pick-up, the system catches the mistake and sends a notification. The sensor network logs the exact angle of every pipette tip, the exact depth of its insertion, the timing between reagent additions — all of it automatically. With humans in a lab, this layer of practice is tacit — an experienced scientist builds intuition for what to do over years, and once they leave or retire, their knowledge goes with them. Medra’s sensors would be among the first systems to put this information on the record. “The way science sometimes works is super subtle,” Lee says. “You vortex it thirty seconds more, shake a certain way, suddenly it starts working. How do you capture that? The robots just capture exactly what they do.”
The second layer is the AI scientist: a software agent that reads the results, identifies what’s going wrong, proposes protocol changes, and rewrites the protocol itself. It can run autonomously or hold for human approval. According to Lee, one customer ran an experiment to test whether their antibodies would bind to a target protein. The answer came back zero — meaning the antibodies weren’t sticking to anything. The AI scientist narrowed the problem to two hypotheses, designed a test to distinguish them, proposed adding a vortexing step mid-protocol, and watched binding jump from zero to more than seventy percent.
There was no automation engineer involved - just a chat interface and an arm. The doing and the thinking on one platform.
The arms are general-purpose hardware, sourced from the same manufacturer that supplies Toyota factories. The software is what makes them useful in a lab context.
“We adapt general robots for the reality we live in,” Lee says.
We’re in the midst of an AI-for-bio boom with a bottleneck problem. Companies like Chai Discovery can now design drug candidates at a pace that would have been unthinkable five years ago. But a designed molecule is not a validated one. Every drug candidate still has to be synthesized and tested in a physical lab by physical scientists who can only run so many experiments in a day. The software has sprinted ahead of the hardware.
Whether Medra is the company that closes the gap is another question. Lab automation and versions of “AI scientists” have been overpromised for two decades. But somebody has to build the throughput. A hundred arms running in San Francisco is a worthy attempt.
Medra’s old lab was 4,000 square feet and had a handful of robots in training. This new building has three floors of weight-bearing concrete and 38,000 square feet of space. Back in November, Medra had 15 employees. Now it’s up to 45. Five customers have experiments scheduled to run across the robot army inside the only autonomous lab in the city.
Customization is Medra’s moat. A new customer describes their protocol: instruments, throughput, consumables. An agent asks questions, builds a simulation from a JSON file, optimizes the layout, and runs the protocol virtually before the first arm moves. More than eighty-five percent of customers arrive with a request Medra has never fulfilled before. Because the software and hardware layer is consistent across protocols, reconfiguring from one setup to a hundred doesn’t require massive rebuilding. In the last three months, Medra went from an empty building to a hundred arms running antibody-binding protocols.
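Medra hasn’t published what its JSON protocol descriptions actually look like, so the following is purely an invented sketch of the idea: describe a protocol as data, then “run” it virtually to catch mismatches before any physical arm moves. Every field name here is made up for illustration.

```python
# Hypothetical protocol description -- none of these field names
# come from Medra's real format, which is not public.
protocol = {
    "instruments": ["centrifuge", "incubator", "plate_reader"],
    "consumables": ["pipette_tips", "sealed_plates"],
    "steps": [
        {"action": "load", "target": "centrifuge"},
        {"action": "spin", "minutes": 5},
        {"action": "read", "target": "plate_reader"},
    ],
}

def dry_run(p: dict) -> int:
    """Virtually 'run' the protocol: every step that names a target
    must reference an instrument declared in the layout."""
    known = set(p["instruments"])
    for step in p["steps"]:
        target = step.get("target")
        if target is not None and target not in known:
            raise ValueError(f"unknown instrument: {target}")
    return len(p["steps"])

print(dry_run(protocol))  # → 3 steps validated
```

The point of a layer like this is cheap failure: a protocol that references an instrument the lab doesn’t have dies in simulation, not mid-experiment with reagents already in play.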
Medra’s customers own their experimental data: the sequences, the targets, the candidates. What Medra retains is process knowledge – the pipette angle that produced good results, the vortex duration, the timing between reagent additions. The data edge compounds the more protocols the company runs.
One gap, though, remains. The system can detect a missing plate, catch a dropped tip, and read a centrifuge rotor. It cannot distinguish one colorless liquid from another. Humans still open boxes and load the consumables. For now, there’s no way around it.
LEE GREW up in Taiwan and came to America at fourteen. Her family worked in chemical engineering, and so, as one does, she studied chemical engineering, built a go-kart in undergrad, won a grant for an iPhone, and spent 2015 interning at SpaceX. You can hear traces of her time at SpaceX — and remnants of Elon Musk’s unwavering commitment to speed and infrastructure — in the conviction in her voice. Just ten years ago, everyone she knew at Google was praising Project Loon – Starlink seemed like insanity.
Now, she tells me, “Starlink feels inevitable.”
Lee was supposed to become a professor at NYU. Then, in 2021, AlphaFold 2 was released, and she started thinking through why it worked. Protein folding was solvable because fifty years of structural data existed to train on. Data for problems like drug target validation, antibody design and gene function is still limited, and the only way to get more data is to run more experiments. Labs can run only as many experiments as they have scientists, and scientists, like all humans, have limited working hours and, when they leave, take their technique with them.
From 2022 to 2024, Lee tried to build standardized cell culture boxes – something she could sell to multiple customers. She quickly learned that every lab wanted the work done differently and ended all the pilots in 2024. Then she rebuilt the hardware and software, this time designed to be reconfigured for each customer instead of sold as a fixed product.
The first Medra customer signed a six-figure contract on the basis of a PowerPoint and photographs of a robotic arm (the arm hadn’t even been hers — she had borrowed it from a friend with access to a lab). The team had exactly one employee: Lee.
THE MODEL she uses to explain Medra is TSMC. TSMC manufactures the chips that make it possible for chip designers to exist. Medra wants to be what makes it possible for a drug discovery company to run experiments without building its own lab.
She grew up watching semiconductor manufacturing transform Taiwan into a geopolitical asset, and she realized early on that the infrastructure had to exist domestically. “Science is so critical to the United States’ — any nation’s — prosperity and also national security,” she notes. “If all our antibiotics come from abroad, what happens when there’s a national security crisis?” There’s urgency in her voice. “We need to move fast.”
The Chinese pharmaceutical industry has been moving fast for decades. Novo Nordisk, Eli Lilly, and most major American pharmaceutical companies manufacture extensively in China, where Chinese scientists, technicians, and — you guessed it — robots have been accumulating process knowledge at a volume no American lab has matched. As with more traditional manufacturing, the U.S. has fallen behind, which is not ideal as we head toward a century possibly full of bio-tech breakthroughs.
Medra offers the hope that the U.S. could play off its AI and software strengths and find a way to compete.
The arms are still running when you leave the third floor, and will still be running as you head to bed tonight. The small robot is still on its circuit – tip rack here, plate there – moving through the room on a schedule that doesn’t stop at five or take weekends. The jobs queue and clear. The arms complete their protocols. The chairs spin slowly in the corners.
“If we could cure cancer, Alzheimer’s, infectious disease – we have the ability to do that,” Lee says. “We just don’t have the throughput.”
The bot makes another pass.
In the wake of the voters of Virginia blessing a mid-decade redistricting that could give Democrats four more seats in the House, a few moderate Republicans are ever-so-gently suggesting that maybe it wasn’t such a great idea for President Trump to start this mid-decade redistricting war last summer by ordering Texas Republicans to give him five more seats. “Chess players think three to four moves ahead,” said Rep. Don Bacon of Nebraska. “It doesn’t appear this happened.”
Well no, it didn’t — Donald Trump is not a think-four-moves-ahead kind of guy, and he clearly didn’t anticipate that his opponents would fight back in the way they did. In fairness, that was not a crazy thing to believe, given Democrats’ long opposition to gerrymandering and their tendency to respond to procedural hardball with outraged harrumphing but not much in the way of action.
While it might be too much to say those days are over and Democrats will henceforth be a party of aggressive street fighters, it’s at least possible that this battle will wind up being a turning point for both parties. Should that happen, it will mean that the real defeat Republicans suffered wasn’t just about the number of seats that will or won’t swing their way.
In the short term, it’s looking like the Great Gerrymandering War of 2025-2026 might be a wash. Republicans redistricted their way to five more seats in Texas, two in Ohio, and one each in North Carolina and Missouri, for a total of nine. Democrats will probably get five in California and four in Virginia; also nine. In addition, Texas Republicans might have made some seats more vulnerable, since if you try to squeeze out a few more seats you’ll have to do it by moving your voters around in ways that make some of those seats less of a sure thing.
Nevertheless, the most likely result is that all that gerrymandering and those referenda will just get us back to where we started. But the battle’s significance goes well beyond the seat count.
You don’t have to tell Democrats in Congress that their base is angry; they know it all too well, and you can see it in the fact that every Democrat now says they’re a “fighter” who will “fight” Trump with all the fightin’ fightingness they can muster. They’ve also started swearing a lot, which is meant to further communicate their pugilistic spirit.
This reflects a growing realization that despite the argument made by centrists that Democrats can solve all their problems if they shuffle to the center on policy issues, the real problem the party has with the public isn’t about policy. G. Elliott Morris explains:
…the Democratic brand is not predominantly woke, but weak. Respondents to our survey associated the Democrats with traits like honesty and caring about the working class, but they are seen as weak and not particularly effective. The Republican brand, by contrast, is a strong brand that a majority of the country finds extreme.
Anyone who heard about the Virginia ballot measure — which the news media did contextualize by explaining how Trump started the whole conflict in Texas — just watched Democrats give as good as they got, even to the point of being willing to temporarily cast aside a principle they had committed themselves to in order to fight back.
Playing political hardball can therefore not only get Democrats a practical victory, it can also help change the impression that they’re weak, among both their own voters and the broader electorate. And there’s a third effect, one equally important but less immediately obvious: By changing their own behavior, Democrats can change Republicans’ behavior as well.
While Republicans will bleat about what a bunch of terribly ruthless cheaters Democrats are, they know that in fact, Democrats are much more committed to established rules and procedures, and reluctant to violate longstanding norms. That has given Republicans an enormous freedom of movement, because they can do pretty much whatever they want without worrying that Democrats will respond in kind.
For example, Republicans packed the Supreme Court by refusing for nearly a year to allow Barack Obama to fill an open seat after Antonin Scalia’s death; in effect, they shrank the court, and then expanded it once Donald Trump took office and they had the power to fill the seat. Then when Ruth Bader Ginsburg died, they put Amy Coney Barrett’s nomination on a rocket sled through the Senate, taking her from nomination to confirmation in just one month.
And what did Democrats do in response? Joe Biden, following up on a campaign promise, formed a commission to examine Supreme Court reform and report its findings! Nothing says “You’re going to pay for what you did” like appointing an august panel of experts to carefully produce a 294-page white paper.
Now imagine if, when Republicans sent Merrick Garland’s nomination to purgatory in early 2016, every Democratic leader said publicly and privately, “If you don’t immediately move this nomination through the process, we make you this promise: The second we have control of Congress and the White House, we will add four seats to the Supreme Court, all of which will be filled by the Democratic president.” And imagine if the threat was credible.
They didn’t do that, but it’s not too late. The redistricting battle showed that when Democrats want to play hardball, not only are they capable of it, their constituents will back them up. And the more they do it, the more their opponents will come to realize that they can’t count on Democrats lying down and letting themselves be rolled over. That will then change the strategic calculations Republicans make, by increasing the cost of transgressing the rules. It’s straightforward deterrence.
The next step in establishing that deterrence is to deliver accountability to Republicans for their actions — good and hard. An iron-clad promise to support genuine Supreme Court reform — including adding justices and imposing 18-year term limits — should be a litmus test for any Democrat running for president in 2028, as should sweeping change to the structure of the federal government to undo the damage done under this administration and vigorous corruption prosecutions for all the grifters and scammers currently slithering their way through the executive branch.
Republicans need to live in fear that if they violate norms, rules, and laws in the ways they have been for the last couple of decades, there will be hell to pay and they’ll be the ones paying it. Only then, once they know Democrats are serious, will they be brought to heel.
Thank you for reading The Cross Section. This site has no paywall, so I depend on the generosity of readers to sustain the work I present here. If you find what you read valuable and would like it to continue, consider becoming a paid subscriber.
Over the past couple of years, one of the more ambitious experiments in American manufacturing has been taking place in Central Texas. It is called Proto-Town, and it has been something of a secret. …
GPT-5.5 is out. It's available in OpenAI Codex and is rolling out to paid ChatGPT subscribers. I've had some preview access and found it to be a fast, effective and highly capable model. As is usually the case these days, it's hard to put into words what's good about it - I ask it to build things and it builds exactly what I ask for!
There's one notable omission from today's release - the API:
API deployments require different safeguards and we are working closely with partners and customers on the safety and security requirements for serving it at scale. We'll bring GPT‑5.5 and GPT‑5.5 Pro to the API very soon.
When I run my pelican benchmark I always prefer to use an API, so that hidden system prompts in ChatGPT or other agent harnesses can't impact the results.
One of the ongoing tension points in the AI world over the past few months has concerned how agent harnesses like OpenClaw and Pi interact with the APIs provided by the big providers.
Both OpenAI and Anthropic offer popular monthly subscriptions which provide access to their models at a significant discount to their raw API.
OpenClaw integrated directly with this mechanism, and was then blocked from doing so by Anthropic. This kicked off a whole thing. OpenAI - who recently hired OpenClaw creator Peter Steinberger - saw an opportunity for an easy karma win and announced that OpenClaw was welcome to continue integrating with OpenAI's subscriptions via the same mechanism used by their (open source) Codex CLI tool.
Does this mean anyone can write code that integrates with OpenAI's Codex-specific APIs to hook into those existing subscriptions?
The other day Jeremy Howard asked:
Anyone know whether OpenAI officially supports the use of the
/backend-api/codex/responses endpoint that Pi and Opencode (IIUC) uses?
It turned out that on March 30th OpenAI's Romain Huet had tweeted:
We want people to be able to use Codex, and their ChatGPT subscription, wherever they like! That means in the app, in the terminal, but also in JetBrains, Xcode, OpenCode, Pi, and now Claude Code.
That’s why Codex CLI and Codex app server are open source too! 🙂
And Peter Steinberger replied to Jeremy that:
OpenAI sub is officially supported.
So... I had Claude Code reverse-engineer the openai/codex repo, figure out how authentication tokens were stored and build me llm-openai-via-codex, a new plugin for LLM which picks up your existing Codex subscription and uses it to run prompts!
(With hindsight I wish I'd used GPT-5.4 or the GPT-5.5 preview, it would have been funnier. I genuinely considered rewriting the project from scratch using Codex and GPT-5.5 for the sake of the joke, but decided not to spend any more time on this!)
Here's how to use it:
uv tool install llm
llm install llm-openai-via-codex
llm -m openai-codex/gpt-5.5 'Your prompt goes here'
All existing LLM features should also work - use -a filepath.jpg/URL to attach an image, llm chat -m openai-codex/gpt-5.5 to start an ongoing chat, llm logs to view logged conversations and llm --tool ... to try it out with tool support.
Let's generate a pelican!
llm install llm-openai-via-codex
llm -m openai-codex/gpt-5.5 'Generate an SVG of a pelican riding a bicycle'
Here's what I got back:

I've seen better from GPT-5.4, so I tagged on -o reasoning_effort xhigh and tried again:
That one took almost four minutes to generate, but I think it's a much better effort.

If you compare the SVG code (default, xhigh) the xhigh one took a very different approach, which is much more CSS-heavy - as demonstrated by those gradients. xhigh used 9,322 reasoning tokens where the default used just 39.
One of the most notable things about GPT-5.5 is the pricing. Once it goes live in the API it's going to cost twice as much as GPT-5.4: $5 per 1M input tokens and $30 per 1M output tokens, compared to $2.50 and $15 for 5.4.
GPT-5.5 Pro will be even more: $30 per 1M input tokens and $180 per 1M output tokens.
GPT-5.4 will remain available. At half the price, 5.4 feels like it stands to 5.5 the way Claude Sonnet stands to Claude Opus.
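For a rough sense of what that means per request, here's a small cost calculator based on the prices above (a sketch: it assumes reasoning tokens bill at the output rate and ignores any caching discounts):

```python
# Prices quoted above, in USD per million tokens.
PRICES = {
    "gpt-5.4":     {"input": 2.50,  "output": 15.00},
    "gpt-5.5":     {"input": 5.00,  "output": 30.00},
    "gpt-5.5-pro": {"input": 30.00, "output": 180.00},
}

def cost_usd(model, input_tokens, output_tokens):
    """Cost of a single request at the listed per-million-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# The xhigh pelican run used 9,322 reasoning tokens; billed as output,
# that's roughly 28 cents on GPT-5.5 (and about $1.68 on 5.5 Pro).
print(round(cost_usd("gpt-5.5", 0, 9_322), 2))
```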
Ethan Mollick has a detailed review of GPT-5.5 where he put it (and GPT-5.5 Pro) through an array of interesting challenges. His verdict: the jagged frontier continues to hold, with GPT-5.5 excellent at some things and challenged by others in a way that remains difficult to predict.
Tags: ai, openai, generative-ai, chatgpt, llms, llm, llm-pricing, pelican-riding-a-bicycle, llm-reasoning, llm-release, codex-cli, gpt
Release: llm-openai-via-codex 0.1a0
Hijacks your Codex CLI credentials to make API calls with LLM, as described in my post about GPT-5.5.
[...] if you ever needed another reason to learn in public by digital gardening or podcasting or streaming or whathaveyou, add on that people will assume you’re more competent than you are. This will get you invites to very cool exclusive events filled with high-achieving, interesting people, even though you have no right to be there. A+ side benefit.
— Maggie Appleton, Gathering Structures (via)
Tags: blogging, maggie-appleton
What is Gas City, you ask? It is Gas Town, but torn apart and rewritten from the ground up as an SDK for building your own dark factories. It enables you to deploy teams of collaborating agents in any topology, not just the hardwired original (and complex) Gas Town team shape.
Gas City released version v1.0.0 this week. It went to alpha test a few weeks ago and is ready for use today!

This is a pivotal moment in the Mad Max school of agent orchestration, a.k.a. Gaslandia, Gas Universe, Gas Nation, or the Gasosphere, depending on who you ask. It all began with Beads, which was like discovering oil. It continued with Gas Town, and we soon opened the Wasteland, a public commons board for federated work arbitrage and a budding private army. Gas City is the next step in that progression. I first predicted and described Gas City back in January, and it has finally arrived.
Disclaimer: I did not write Gas City. It was created by Julian Knutsen and Chris Sells, both of whom you can meet on the Discord. I am only lightly affiliated with the code, in the sense that I outlined my vision in a blog post, and they built it. But it’s exactly what I wished for, and it is being run by far more serious and disciplined engineers than me. I am all-in with Gas City, contributing myself, and this is our official new direction!
Gas City has deconstructed the entire Gas Town stack into composable, declarative building blocks called “packs”. You can use these to assemble arbitrary agent topologies, deploy them, sit back, and watch them work from a rich console. (Or tmux, if that floats your boat. That still works in Gas City!) Gas City is the supervisor plane that connects, manages, and coordinates these deployed mini-factories.
As its unboxing flex, Gas City comes with a fully functional “Gas Town” pack, which runs an exact replica of Gas Town. This is the default pack that runs on startup. So Gas City starts off as a drop-in replacement for the original Gas Town, and can import all your rigs and beads.
Both systems are backed by the powerful and peerless MEOW stack. MEOW, the Molecular Expression of Work, is a lightweight Beads-based framework that places Work front and center, as the first-class system primitive, creating a versioned knowledge graph of all your issues and tasks. Work is the currency that drives the Gas Universe ecosystem. It’s Beads all the way down, powered at the base of the stack by a unique git-versioned database called Dolt. Dolt was the magic that made our stack run smoothly.

Gas City solves an enormous number of problems associated with spinning up long-running agentic worker “teams.” It builds atop the innovations and community contributions from Gas Town, giving you out-of-the-box, scalable, convenient access to agent identity, messaging, history, context, state, skills, roles, personas, and much more. And for coding agent maintainers, Gas City exposes a rich Factory Worker API. It’s a way to make your own agent the driver for Gas City.
Gas City is generally an improvement on Gas Town on all fronts, from code quality to the services it offers. For instance, it offers fine-grained model selection and switching at various levels, for cost control.
Gas City doesn’t aim to solve all your problems. You will need to wire it to your own sandboxing, MCP servers, and so on. But it provides you with a rock-solid and easy-to-use foundation to build on: one with a Discord community with thousands of active members.
This combination of tech stack and community makes Gas City, as far as I can tell, the only viable solution for building custom orchestrators backed by Git. You can build and run an entire business with it, tracking every step taken by any agent in a database with git version history. The forensics and auditing capabilities of Gas City are unparalleled, because of MEOW and Dolt.
What about maturity? Unlike Gas Town, which is an experiment run like a Wild West, Gas City is a rapidly maturing, enterprise-focused SDK for building and deploying autonomous agentic workflows. These deployments can be for anything: devops and monitoring jobs, ETLs, data pipelines, ticket queues, incident response, whatever you like.
Gas City is completely open-sourced and MIT-licensed. Built for enterprise, fun for tinkerers.
The rest of this post is about why you might want to try it yourself. This post is not about how to use Gas City; there is a ton of getting started material emerging, but a good start is the gastownhall.ai Discord general announcements.
The Light Factory
Everyone is buzzing about “dark factories,” so let’s start there.
A dark factory is any system in which coding agents are set up to work autonomously without humans watching. The name is frankly a little bit misleading. It just means background work. It’s only dark inside a dark factory because the work is happening in rooms where there are no humans present. But those rooms are allowed to have windows. Observability is a choice in dark factory design.
Gas City, like Gas Town, has chosen to maximize Observability. You can dive in and interact with any worker at any time, and nothing is ever hidden from you, nor from the agents, except for the guardrails you choose to install.
So then is Claude Code a dark factory? The terminology here, with factories and harnesses and so on, is all evolving fast, so there aren’t any reliable definitions yet. But Claude Code did not start life as a dark factory, because you were generally supposed to watch it while it worked. It’s making overtures in that direction with subagents and agent teams, but it keeps the lights off intentionally, presumably because keeping things simple matters for a consumer-facing product.
Gas City takes a different approach: all agent workers are equally visible and addressable. The lights are on. Normally for the ephemeral “polecat” workers, you don’t bother looking at them. But unlike with coding-agent subagents, if you want to talk to your polecats, you absolutely can.
And so far, Gas City is the only dark factory that has been designed with the goal of creating other factories. The lights are 100% on when you’re working with the Mayor and Crew in Gas City, and you can dial them up as needed in the back rooms with the polecats and dogs. For this reason I have begun to think of Gas City as a Light Factory… or at least, a very well-lit dark one!

At first, dark factories were only used for writing code faster. They focused on the software development process, and basically replaced IDEs. And that’s still primarily how they are used today. I do see some shops making good headway into CI/CD pipelines and peripheral business processes. But dark factories are headed towards handling infrastructure, operations, and even core business processes.
Gas Town users soon realized that you could use Gas Town’s stack to create standalone orchestrators that had nothing to do with writing code. Gene Kim and I have talked to companies that are doing this, and I’ve also started myself. Running my online game Wyvern requires a bunch of routine maintenance that goes way beyond CI/CD.
For instance, when players in my game hit 25th level, they earn a unique perk: they can upload custom images for that character. That perk is permanent operational debt that I’m saddled with. I have to monitor the “Hall of Fame” submission queue, visually scan the player-uploaded images for inappropriateness, and then approve and upload them into the game. I’ve automated much of it, but I still have to look at the images and run the scripts to approve or reject them.
A dark factory could totally do that. And it’s got nothing to do with writing code. It’s a business process.
So dark factories are a much broader concept than IDEs. They can automate and orchestrate any arbitrary process. But with what reliability?
Well, that depends on how robust your custom factories are, doesn’t it?
The Shape of Things to Come
In the very near future, devs will become shepherds, tending flocks of agents which do the ground-level work. It’s not really a manager job, because the workers are not humans; a coding agent’s cat doesn’t get sick and need to go to the vet in the middle of a sev-1 outage. But agentic workers do make human-like mistakes, and they respond well to being managed like people, by and large. They don’t need management. But they need guidance. Shepherding. Keeping-on-rails. That’s the new role for human builders and operators.

As soon as I had this realization, the shepherd thing, I knew we’d discovered at least 2–3 new squares in my now-infamous “8 Stages of AI Adoption” diagram from Welcome to Gas Town:

At Level 8, you have mastered using an orchestrator to manage dozens of concurrent agents. As your experience grows, you begin to see the potential to use agent mini-factories everywhere.
After you deploy your first real one, you officially have a garden you’re tending. A tiny crew of agents, acting like employees. Your little factory team runs 24x7, which requires maintenance. You now have to keep it running, manage upgrades and patches, rotate the logs, and of course make sure it’s doing its job. But you don’t have to do the work yourself anymore! So it’s still a huge automation improvement… as long as you can keep it reliable.
You should almost never deploy a single-agent pack for a real business process. The reality is that any agent can go temporarily insane, at any time, and make a bad call. No matter how smart they are. We know now that hallucinations and false memories and forgetting are baked mathematically into all memory systems; there’s no avoiding it. So you should never just have one coding agent managing a piece of infrastructure. Not even for a low-stakes part of your business. You should always have at least two or three working together on a little crew.
This is exactly why dark factories are so attractive. With Gas City you can build any sort of adversarial group structure you like, for a team of collaborating agents. They can watch over each other. By catching each other’s mistakes, the agent group delivers far more reliable outcomes than you can get from a single agent. That’s why we think of deployed orchestration as being fundamentally made of multi-agent teams: factories. Define your pack, deploy it, et voilà — you are officially on the path to being an AI-native shop.
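Under a deliberately simplified independence assumption (real agent errors are correlated, and Gas City crews review each other's work rather than voting blindly), the reliability math behind "at least two or three" agents looks like this:

```python
from math import comb

def majority_reliability(p, n):
    """Probability that a majority of n independent agents, each correct
    with probability p, reach the right verdict together."""
    need = n // 2 + 1  # votes required for a majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

# An agent that is right 90% of the time, alone vs. in small crews:
print(round(majority_reliability(0.9, 1), 3))  # 0.9
print(round(majority_reliability(0.9, 3), 3))  # 0.972
print(round(majority_reliability(0.9, 5), 3))  # 0.991
```

Even this toy model shows why a small crew beats a lone worker: three 90%-reliable agents voting together are wrong less than a third as often as one.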
Orchestration-maxxing
Let’s look at what a specific small custom factory looks like in practice.
For my game’s player custom-image uploader, images are submitted via a website form, so it starts with an agent that wakes up on a hook and does the work. Then a second agent checks the first agent’s work. Perhaps the first agent makes a recommendation, and the second agent takes action.
I could add more agents to that pack, but I think two should be enough for this little workflow. It’s super low volume, low-stakes, not the end of the world if the agent crew messes up and approves a bad image. Adding that second agent, much like a second hash function, dramatically decreases the chance of some sort of collision. So in my pack, I declare the two agents, their identities, prompts, sandboxes, skills, and all the other stuff they need. Gas City can help you do this automatically, of course.
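Here's a toy sketch of that two-agent shape (the names and interfaces are invented for illustration; in Gas City the two roles would be declared in a pack and run as real agent workers):

```python
def review_submission(image, recommender, checker):
    """Two-agent review: agent 1 recommends, agent 2 verifies and acts.
    Any disagreement gets escalated to a human."""
    recommendation = recommender(image)            # agent 1: wake on hook, recommend
    if checker(image, recommendation) == "agree":  # agent 2: independent check
        return recommendation
    return "escalate"                              # disagreement -> a person decides

# Stand-in agents for demonstration:
lenient = lambda image: "approve"
strict = lambda image, rec: "agree" if rec == "approve" else "disagree"
print(review_submission("hall-of-fame.png", lenient, strict))  # approve
```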

A deployed Gas City pack is an AI-native business process automation. Once your pack is running, Gas City’s supervisor agent system will keep it going, even on remote machines.
At this point, with one deployment, you are officially at Level 9 on the AI Adoption Chart.
After you make your first one, you realize you want to use dark factories for everything. I want every damn NPC in my actual game to be an agent, yeah? Not first, obviously! I’ll have to build up to that gradually, just like you should, when you first start with AI factories. Pick low-stakes, easy wins, get them automated, and learn your lessons early on when getting burned isn’t so bad.
Level 10 of the AI Adoption Chart is where you’ve deployed a bunch of agent packs, each with its own little world it’s managing. They have identities, consoles, you can check in on any of them, tweak their standing orders, you name it. They’re starting to be a handful. But you don’t need to build an orchestrator for them, since Gas City is the orchestrator. What exactly are you missing?
The problem is, in stage 10, you, the human, are the control plane. Once you have a few dozen of these deployed packs, you will start to become the bottleneck on curating them. Gas City manages them, and they’re all functioning correctly on their own, but how are they working together as an end-to-end system? They can all send messages to each other, but have you given them clear, practical guidance on when it’s appropriate to do so?
The answer to all your problems at Stage 10 is of course to slather on more Gas City. You build yet another crew, with a declarative Gas City pack, and its job is to manage a subset of your deployed packs. Maybe you spin up a couple of them, one for the cloud services, one for customer service. Area manager teams, basically. You evolve it into the shape of your problem.
Once you start building your way out of needing to hand-manage dozens of packs, you’ve officially graduated to Level 11, which is Factory Builder. You’re building a full custom orchestrator, a full dark factory, and you’re operating at the level of architect, curator, and shepherd.

Your Gas City is the sum of those little factories you’ve created to run your business around the clock. No sick cats, no vets. Just the occasional wild worker hallucination. But your teams, if you build them right, will catch most mistakes. And all of it will be logged and auditable.
Reliability, friends, is a dial. You choose where to set it. More rounds of review, more backstops, more guardrails, more judges, and you can get agentic workers to be as reliable as you need them to be, at least up to some practical ceiling. I wouldn’t use it in situations where you could physically hurt people, e.g. in medical or navigation systems. Not in 2026. But we’ll build our way there, like engineers do, over the next couple of years.
That’s the new world. If you’re convinced already, then you’re done! You can just go play with Gas City. I’ll finish up by talking about how you can use Gas City to start chipping away at your SaaS problem.
Escape From SaaS Mountain
When I was a kid, my parents wouldn’t let me watch Escape From Witch Mountain (1975) for whatever dumb reason. Probably because it had “Witch” in the title and I was six. I’m still miffed about it, though. Who knows how much better-prepared I’d be for life if I’d watched it?
And now, I wonder, are investors and boardrooms watching Escape from SaaS Mountain? Or are their moms not letting them, too? Perhaps we’ll never know.
SaaS is in kind of a funny position right now. If you look at the pyramid from the top, at the Salesforces of the world, it looks like they’re fine; they just need to pivot to make their SaaS sexy to agents. Benioff did exactly that recently, exposing the whole Salesforce platform headless and API-first in a bid to make it sexy to agents. That’s the new game for SaaS.
People look up there and say, oh it’s gonna be just fine, those are systems of record, nobody’s going to reimplement them. Nobody wants to pay the tax of maintaining those systems for security, compliance, performance, scalability, etc. Right? This is just the new Buy vs Build.
But then you look at the bottom of the SaaS pyramid and it’s like, oh crap, this stuff is disintegrating in real time. Just in this past month I was visiting an enterprise customer where the non-technical staff — non-technical! — have been rebuilding a $30k/year SaaS tool in-house on Gas Town. Their VP is now mapping out how to convert millions in annual SaaS spend into headcount, bringing those capabilities in-house as core competencies. The question they asked us was: how do we actually do this at scale?
From that perspective, it’s clear that some SaaS will be eaten, and the rest will have to pivot. The only question is how far up the pyramid people will be able to push, bringing it in-house. It’s going to take a few years to find out.

I’m going to share some stuff with you that will potentially change the way you think about SaaS forever. Much of it is credit to Brendan Hopper, Commonwealth Bank of Australia.
Atlassian, nominally, is an Australian company: a poster child for what the Aussies can accomplish when they put their mind to tech. But according to Brendan, it’s more complex than Australian versus American. Atlassian is staffed across the two continents, registered in Delaware, and traded on the NASDAQ — the ASX does not price tech companies as tech companies. About half their devs are U.S.-based, and they have strong incentives to keep engineers in the U.S., such as tax code 174A. They serve pretty much all traffic out of the U.S.
For all practical purposes, and through no fault of their own, Atlassian is a U.S. company. Much love to Atlassian, but they have to be, because there is no other option. The center of gravity on the U.S. west coast is inescapable.
Every dollar of SaaS spent outside the U.S. has a portion extracted from a local economy and moved into California’s economy, which would now be the fourth largest economy in the world if it were a separate country. SaaS moves money from the rest of the world into the U.S. And there’s no fighting it. If a big non-US tech company like Atlassian can’t, it’s going to be hard for everyone.
Another interesting thing about SaaS is that it grows to become the superset of all the features for all its customers. Most customers gradually settle into using 20% of the SaaS’s features, while subsidizing the remaining 80% for the other customers.
The implication is that you don’t need to reimplement all of Salesforce, just the parts you’re using. If you want to bring SaaS in-house, you only need to build the 20% of the features you need. And you also only need as much security, scaling, and compliance as is appropriate for your company, which is not necessarily what a Fortune 100 company needs.
So SaaS is extractive, and expensive. Almost nobody is getting full value out of it, and all that value is streaming into Silicon Valley.
Yet another funny thing about SaaS is that the Venn diagram overlap with your own needs is always incomplete. Not only are you subsidizing features you won’t use, you’re also failing to get features that you would use. And when SaaS becomes dominant it stops innovating.
In short, SaaS began life as a way for everyone to get savings through specialization and economy of scale. And it has evolved into an extraction machine that’s ideal for almost nobody.
So you shouldn’t feel bad about de-SaaSing your company. Just start at the bottom.
The bottom of the SaaS pyramid is a rowdy place, a rough place. A lot of it isn’t even SaaS; it’s software on disks in a closet that someone is paying an annual license for. Construction, paralegal, medical, farming, biotech, environmental: there are hundreds of domains where ancient SaaS is still holding companies hostage and not meeting people’s needs.
So the people are fighting back! As the models and tools get more powerful, people are beginning to bring their SaaS back in house. I hear VPs of Eng say, “We are spending $X million/year on SaaS. Let’s convert that to salaries and bring it in as a core competency.”
But until this week, there hasn’t been a viable path forward, aside from experimenting with building your own orchestration on raw coding agents. That is a long journey, and most people need help with it. Most of the orchestrator vendors out there are off building agent brains and persona libraries — interesting research projects, but not what you need to replace a piece of SaaS in production. To replace SaaS, you need the unglamorous stuff: declarative deploys, audit trails, version history, identity, and a memory layer that survives the inevitable agent failures. Those are the primitives that make in-housing tractable.
Enter Gas City: The ultimate de-SaaSer. It’s like de-lousing your company. It makes Build and Maintain a pragmatic choice, particularly at the bottom end of the pyramid. You build the replacement software, then set up agent teams to run it for you.
A small team of three to five human engineers running Gas City packs can credibly replace seven-figure SaaS bills, and the capability stays in-house as a compounding asset instead of leaking out as recurring rent. Yes, you own the uptime, security, and compliance now. But you only need the level of each that fits your company — not Salesforce-grade everything for a 200-person business. And Gas City’s audit trail, with every agent action recorded in a git-versioned Dolt database, is frankly better than what most SaaS vendors can produce when you ask for theirs. That’s your SOC2 story, sitting right there in the database, already written.
That’s the problem with OSS SaaS replacements today, and it’s why few have adopted them: they don’t run themselves. Chatwoot is a full-featured Zendesk replacement, but you have to run it, and most companies can’t afford the operational overhead. SaaS has to evolve to be AI-native! It’s not good enough to be OSS. It has to have agents in it, running it, just like Zendesk the SaaS handles everything for you as a service.
This implies that all SaaS worldwide needs to be rewritten/reenvisioned from the ground up to be fully agentic.
But where do you start?
That’s where Gas City gives you a refreshing leg up. You can start building bespoke Gas City orchestrators today. You’re not going to rewrite your business overnight. In fact it’s going to take years. You should approach the whole endeavor with appropriate levels of caution and even trepidation.
But you might as well start eating the elephant a bite at a time.
Next Up For Gaslandia
I had a set of projects in flight that were banking on at least Mythos-class models, which means they’re all on hold. Gas City will be the extent of my ambitions for a while. I plan to become a Gas City power user, not just for coding, but for running my own systems. I want to see that control plane emerge and understand how you can run a business with hundreds of collaborating agents.
Most other orchestrators I’ve seen are extremely high-ambition. Many lean heavily into the idea that agents, when put together, can just figure stuff out. And to a large extent, they can. But they are intentionally low-control systems. And when they fail, they go off the rails with no audit trail.
Gas City is a high-control system. It has high parallelism (Julian has had hundreds of concurrent workers in a city), but it uses structure to keep agent swarms organized. It’s still incredibly flexible and freeform when it needs to be; you can just tell a group of polecats to go solve any problem. But most work is spelled out, tracked, and governed carefully.
A key concept in MEOW is the formula — a reusable template for a unit of work. You write a formula once for a recurring workflow (triage an incident, review a player-uploaded image, rotate the logs, run the nightly ETL) and then pour it whenever you need that work done, instantiating a fresh molecule of beads that an agent crew can pick up and execute.
Over time, your library of formulas becomes a declarative inventory of every business process you’ve ever automated — version-controlled in Dolt, composable, and forkable by anyone on your team. That’s how Gas City turns undocumented workflow knowledge into a durable, shareable codebase.
When you express all your work in the MEOW stack, as Beads and Epics (typically harvested from an upstream system like Obsidian), and your agent actions are all recorded in both a database and version history, you wind up with infinitely more control over the outcomes.
In enterprises, controlling the outcomes is the holy grail. So I personally wouldn’t use any other orchestrator I’ve seen. They don’t have the federatable, versioned, queryable memory system that Dolt provides. They don’t have MEOW.
Everyone is suffering from tool overload. There are too many tools out there, and nobody can keep up with all of them. My life is so simple in comparison. I never look deeply at any new technology until it reaches a pretty loud public signal threshold. And I almost never have to, because the Gas Town ecosystem has solved so many of my problems.
The Gas Universe in general is a pretty reliable bet at this point. It integrates with everything. Beads has been out for six months, Gas Town for four months, and the Wasteland for two. And Dolt has been around for at least eight years, with enterprise-grade maturity. We’ve got over two thousand members on the Discord. The community has spoken: This is The Way. People love working with this system, despite its occasional quirks and frustrations. It’s more fun than anything else we know.
Should you switch from Gas Town to Gas City? Yes! Gas City aims to be better in every way. And Gas City hit 1.0 today, so you should be good to go. Make sure to complain loudly on the Discord if it’s not working for you! Help will be on the way.
Do you have to switch to Gas City? Nope, we have some new maintainers onboarding onto Gas Town this week to help with the load. We’re going to continue maintaining the O.G. Gas Town as long as people still need it.
See you all on the Discord at gastownhall.ai.

This week's edition of my email newsletter (aka content from this blog delivered to your inbox) features 4 pelicans riding bicycles, 1 possum on an e-scooter, up to 5 raccoons with ham radios hiding in crowds, 5 blog posts, 8 links, 3 quotes and a new chapter of my Agentic Engineering Patterns guide.
Tags: newsletter
The design of this looks very solid. It lets you write Python code for queues that looks like this:
import honker

db = honker.open("app.db")
emails = db.queue("emails")
emails.enqueue({"to": "alice@example.com"})

# Consume (in a worker process)
async for job in emails.claim("worker-1"):
    send(job.payload)
    job.ack()
And Kafka-style durable streams like this:
stream = db.stream("user-events")

with db.transaction() as tx:
    tx.execute("UPDATE users SET name=? WHERE id=?", [name, uid])
    stream.publish({"user_id": uid, "change": "name"}, tx=tx)

async for event in stream.subscribe(consumer="dashboard"):
    await push_to_browser(event)
It also adds 20+ custom SQL functions including these two:
SELECT notify('orders', '{"id":42}');
SELECT honker_stream_read_since('orders', 0, 1000);

The extension requires WAL mode, and workers can poll the .db-wal file with a stat call every 1ms to get as close to real-time as possible without the expense of running a full SQL query.
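That stat-polling trick is cheap because a stat call never touches the database itself: a writer appending to the WAL changes the file's size, so readers only run the real SQL query when the file moves. Here's a minimal sketch of the idea - my own illustration, not honker's actual implementation, and the function name is hypothetical:

```python
import os
import time

def wal_changed(path, baseline, poll_interval=0.001, timeout=1.0):
    """Poll a SQLite -wal file with stat() until its size or mtime moves.

    `baseline` is an (st_size, st_mtime_ns) pair captured earlier; returns
    True as soon as the file differs from it, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        st = os.stat(path)
        if (st.st_size, st.st_mtime_ns) != baseline:
            return True  # a writer appended to the WAL; go run the real query
        time.sleep(poll_interval)
    return False
```

A worker would capture a fresh baseline after each drain, then sleep in this loop and only pay for a SQL read once the WAL actually changes.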
honker implements the transactional outbox pattern, which ensures items are only queued if a transaction successfully commits. My favorite explanation of that pattern remains Transactionally Staged Job Drains in Postgres by Brandur Leach. It's great to see a new implementation of that pattern for SQLite.
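The core of the outbox pattern is simple enough to show in a few lines of plain sqlite3 - my own sketch of the general pattern, not honker's API. Because the job row lives in the same database file as the business data, one transaction covers both, and a failed write can never leave an orphan job behind:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)")

# Happy path: the business write and the queued job commit atomically.
with conn:  # sqlite3's context manager commits on success, rolls back on error
    conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")
    conn.execute("INSERT INTO outbox (payload) VALUES (?)",
                 ('{"to": "alice@example.com"}',))

# Failure path: the duplicate primary key aborts the whole transaction,
# so the job enqueued just before it is rolled back too.
try:
    with conn:
        conn.execute("INSERT INTO outbox (payload) VALUES (?)",
                     ('{"to": "bob@example.com"}',))
        conn.execute("INSERT INTO users (id, name) VALUES (1, 'bob')")  # IntegrityError
except sqlite3.IntegrityError:
    pass

jobs = conn.execute("SELECT payload FROM outbox").fetchall()
print(jobs)  # only alice's job survived
```

A separate worker process then drains the outbox table, deleting (or acking) rows as it processes them.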
Via Show HN
Tags: databases, postgresql, sqlite, rust
An update on recent Claude Code quality reports
It turns out the high volume of complaints that Claude Code was providing worse quality results over the past two months was grounded in real problems. The models themselves were not to blame, but three separate issues in the Claude Code harness caused complex but material problems that directly affected users.
Anthropic's postmortem describes these in detail. This one in particular stood out to me:
On March 26, we shipped a change to clear Claude's older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive.
I frequently have Claude Code sessions which I leave for an hour (or often a day or longer) before returning to them. Right now I have 11 of those (according to ps aux | grep 'claude ') and that's after closing down dozens more the other day.
I estimate I spend more time prompting in these "stale" sessions than sessions that I've recently started!
If you're building agentic systems it's worth reading this article in detail - the kinds of bugs that affect harnesses are deeply complicated, even if you put aside the inherent non-deterministic nature of the models themselves.
Via Hacker News
Tags: ai, prompt-engineering, generative-ai, llms, anthropic, coding-agents, claude-code
spacecowboy runs the For You Feed, used by around 72,000 people. This guest post on the AT Protocol blog explains how it works.
The architecture is fascinating. The feed is served by a single Go process using SQLite on a "gaming" PC in spacecowboy's living room - 16 cores, 96GB of RAM and 4TB of attached NVMe storage.
Recommendations are based on likes: what else are the people who like the same things as you liking on the platform?
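That "people who like what you like" query maps naturally onto SQL. Here's a toy illustration of the general approach in plain sqlite3 - my own sketch using a hypothetical likes table, not spacecowboy's actual schema or scoring:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE likes (user_id TEXT, post_id TEXT)")
conn.executemany("INSERT INTO likes VALUES (?, ?)", [
    ("me", "p1"), ("me", "p2"),                          # my likes
    ("alice", "p1"), ("alice", "p2"), ("alice", "p3"),   # similar taste to me
    ("carol", "p2"), ("carol", "p3"),                    # somewhat similar taste
    ("bob", "p9"),                                       # no overlap with me
])

# "Neighbours" are users who liked something I liked; candidate posts are
# their other likes, scored by how many distinct neighbours liked each one.
rows = conn.execute("""
    SELECT candidate.post_id, COUNT(DISTINCT candidate.user_id) AS score
    FROM likes mine
    JOIN likes neighbour ON neighbour.post_id = mine.post_id
                        AND neighbour.user_id != mine.user_id
    JOIN likes candidate ON candidate.user_id = neighbour.user_id
    WHERE mine.user_id = 'me'
      AND candidate.post_id NOT IN
          (SELECT post_id FROM likes WHERE user_id = 'me')
    GROUP BY candidate.post_id
    ORDER BY score DESC
""").fetchall()
print(rows)  # [('p3', 2)] - both of my "neighbours" liked p3, so it gets recommended
```

A production version would cap the neighbour set, decay old likes, and precompute scores rather than run this per request, but the shape of the computation is the same.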
That Go server consumes the Bluesky firehose and stores the relevant details in SQLite, keeping the last 90 days of relevant data, which currently uses around 419GB of SQLite storage.
Public internet traffic is handled by a $7/month VPS on OVH, which talks to the living room server via Tailscale.
Total cost is now $30/month: $20 in electricity, $7 in VPS and $3 for the two domain names. spacecowboy estimates that the existing system could handle all ~1 million daily active Bluesky users if they were to switch to the cheapest algorithm they have found to work.
Tags: go, scaling, sqlite, software-architecture, tailscale, bluesky
LlamaIndex have a most excellent open source project called LiteParse, which provides a Node.js CLI tool for extracting text from PDFs. I got a version of LiteParse working entirely in the browser, using most of the same libraries that LiteParse uses to run in Node.js.
Refreshingly, LiteParse doesn't use AI models to do what it does: it's good old-fashioned PDF parsing, falling back to Tesseract OCR (or other pluggable OCR engines) for PDFs that contain images of text rather than the text itself.
The hard problem that LiteParse solves is extracting text in a sensible order despite the infuriating vagaries of PDF layouts. They describe this as "spatial text parsing" - they use some very clever heuristics to detect things like multi-column layouts and group and return the text in a sensible linear flow.
The LiteParse documentation describes a pattern for implementing Visual Citations with Bounding Boxes. I really like this idea: being able to answer questions from a PDF and accompany those answers with cropped, highlighted images feels like a great way of increasing the credibility of answers from RAG-style Q&A.
LiteParse is provided as a pure CLI tool, designed to be used by agents. You run it like this:
npm i -g @llamaindex/liteparse
lit parse document.pdf
I explored its capabilities with Claude and quickly determined that there was no real reason it had to stay a CLI app: it's built on top of PDF.js and Tesseract.js, two libraries I've used for something similar in a browser in the past.
The only reason LiteParse didn't have a pure browser-based version is that nobody had built one yet...
Visit https://simonw.github.io/liteparse/ to try out LiteParse against any PDF file, running entirely in your browser. Here's what that looks like:

The tool can work with or without running OCR, and can optionally display images for every page in the PDF further down the page.
The process of building this started in the regular Claude app on my iPhone. I wanted to try out LiteParse myself, so I started by uploading a random PDF I happened to have on my phone along with this prompt:
Clone https://github.com/run-llama/liteparse and try it against this file
Regular Claude chat can clone directly from GitHub these days, and while by default it can't access most of the internet from its container it can also install packages from PyPI and npm.
I often use this to try out new pieces of open source software on my phone - it's a quick way to exercise something without having to sit down with my laptop.
You can follow my full conversation in this shared Claude transcript. I asked a few follow-up questions about how it worked, and then asked:
Does this library run in a browser? Could it?
This gave me a thorough enough answer that I was convinced it was worth trying getting that to work for real. I opened up my laptop and switched to Claude Code.
I forked the original repo on GitHub, cloned a local copy, started a new web branch and pasted that last reply from Claude into a new file called notes.md. Then I told Claude Code:
Get this working as a web app. index.html, when loaded, should render an app that lets users open a PDF in their browser and select OCR or non-OCR mode and have this run. Read notes.md for initial research on this problem, then write out plan.md with your detailed implementation plan
I always like to start with a plan for this kind of project. Sometimes I'll use Claude's "planning mode", but in this case I knew I'd want the plan as an artifact in the repository so I told it to write plan.md directly.
This also means I can iterate on the plan with Claude. I noticed that Claude had decided to punt on generating screenshots of images in the PDF, and suggested we defer a "canvas-encode swap" to v2. I fixed that by prompting:
Update the plan to say we WILL do the canvas-encode swap so the screenshots thing works
After a few short follow-up prompts, here's the plan.md I thought was strong enough to implement.
I prompted:
build it.
And then mostly left Claude Code to its own devices, tinkered with some other projects, caught up on Duolingo and occasionally checked in to see how it was doing.
I added a few prompts to the queue as I was working. Those don't yet show up in my exported transcript, but it turns out running rg queue-operation --no-filename | grep enqueue | jq -r '.content' in the relevant ~/.claude/projects/ folder extracts them.
Here are the key follow-up prompts with some notes:
- When you implement this use playwright and red/green TDD, plan that too - I've written more about red/green TDD here.
- let's use PDF.js's own renderer - it was messing around with pdfium.
- The final UI should include both the text and the pretty-printed JSON output, both of those in textareas and both with copy-to-clipboard buttons - it should also be mobile friendly - I had a new idea for how the UI should work.
- small commits along the way - see below.
- Make sure the index.html page includes a link back to https://github.com/run-llama/liteparse near the top of the page - it's important to credit your dependencies in a project like this!
- View on GitHub → is bad copy because that's not the repo with this web app in, it's the web app for the underlying LiteParse library.
- Run OCR should be unchecked by default.
- When I try to parse a PDF in my browser I see 'Parse failed: undefined is not a function (near '...value of readableStream...') - it was testing with Playwright in Chrome; it turned out there was a bug in Safari.
- oh that is in safari but it works in chrome.
- When "Copy" is clicked the text should change to "Copied!" for 1.5s.
- [Image #1] Style the file input so that long filenames don't break things on Firefox like this - in fact add one of those drag-drop zone UIs which you can also click to select a file - dropping in screenshots of small UI glitches works surprisingly well.
- Tweak the drop zone such that the text is vertically centered, right now it is a bit closer to the top.
- it breaks in Safari on macOS, works in both Chrome and Firefox. On Safari I see "Parse failed: undefined is not a function (near '...value of readableStream...')" after I click the Parse button, when OCR is not checked - it still wasn't working in Safari.
- works in safari now - but it fixed it pretty quickly once I pointed that out and it got Playwright working with that browser.

I've started habitually asking for "small commits along the way" because it makes for code that's easier to understand or review later on, and I have an unproven hunch that it helps the agent work more effectively too - it's yet another encouragement towards planning and taking on one problem at a time.
While it was working I decided it would be nice to be able to interact with an in-progress version. I asked a separate Claude Code session against the same directory for tips on how to run it, and it told me to use npx vite. Running that started a development server with live-reloading, which meant I could instantly see the effect of each change it made on disk - and prompt with further requests for tweaks and fixes.
Towards the end I decided it was going to be good enough to publish. I started a fresh Claude Code instance and told it:
Look at the web/ folder - set up GitHub actions for this repo such that any push runs the tests, and if the tests pass it then does a GitHub Pages deploy of the built vite app such that the web/index.html page is the index.html page for the thing that is deployed and it works on GitHub Pages
After a bit more iteration here's the GitHub Actions workflow that builds the app using Vite and deploys the result to https://simonw.github.io/liteparse/.
I love GitHub Pages for this kind of thing because it can be quickly configured (by Claude, in this case) to turn any repository into a deployed web-app, at zero cost and with whatever build step is necessary. It even works against private repos, if you don't mind your only security being a secret URL.
With this kind of project there's always a major risk that the model might "cheat" - mark key features as "TODO" and fake them, or take shortcuts that ignore the initial requirements.
The responsible way to prevent this is to review all of the code... but this wasn't intended as that kind of project, so instead I fired up OpenAI Codex with GPT-5.5 (I had preview access) and told it:
Describe the difference between how the node.js CLI tool runs and how the web/ version runs
The answer I got back was enough to give me confidence that Claude hadn't taken any project-threatening shortcuts.
... and that was about it. Total time in Claude Code for that "build it" step was 59 minutes. I used my claude-code-transcripts tool to export a readable version of the full transcript which you can view here, albeit without those additional queued prompts (here's my issue to fix that).
I'm a pedantic stickler when it comes to the original definition of vibe coding - vibe coding does not mean any time you use AI to help you write code, it's when you use AI without reviewing or caring about the code that's written at all.
By my own definition, this LiteParse for the web project is about as pure vibe coding as you can get! I have not looked at a single line of the HTML and TypeScript written for this project - in fact while writing this sentence I had to go and check if it had used JavaScript or TypeScript.
Yet somehow this one doesn't feel as vibe coded to me as many of my other vibe coded projects:
Most importantly, I'm happy to attach my reputation to this project and recommend that other people try it out. Unlike most of my vibe coded tools I'm not convinced that spending significant additional engineering time on this would have resulted in a meaningfully better initial release. It's fine as it is!
I haven't opened a PR against the origin repository because I've not discussed it with the LiteParse team. I've opened an issue, and if they want my vibe coded implementation as a starting point for something more official they're welcome to take it.
Tags: javascript, ocr, pdf, projects, ai, generative-ai, llms, vibe-coding, coding-agents, claude-code, agentic-engineering
In February 1777 George Washington issued an order requiring that American soldiers be inoculated against smallpox:
Finding the Small pox to be spreading much and fearing that no precaution can prevent it from running through the whole of our Army, I have determined that the troops shall be inoculated. This Expedient may be attended with some inconveniences and some disadvantages, but yet I trust in its consequences will have the most happy effects. Necessity not only authorizes but seems to require the measure, for should the disorder infect the Army in the natural way and rage with its usual virulence we should have more to dread from it than from the Sword of the Enemy.
It was a wise decision. Smallpox was a debilitating, often fatal disease. And Washington’s army, which put many farm boys with little previous exposure to infectious disease into crowded encampments, was especially vulnerable. As Washington said, the situation “seems to require the measure.”
It was, nonetheless, a bold, enlightened move. And why not? Washington, like many of the Founding Fathers, was very much a man of the Enlightenment.
By contrast, Pete Hegseth, the Secretary of Defense who insists on being called the Secretary of War, is a bloodthirsty religious fanatic. He’s more comfortable with fascism than with America’s founding principles. And in another attempt to prove his manhood, he announced on Tuesday that he was ending the sissy requirement that members of the military be vaccinated against the flu.
This was, he said, to “restore freedom” to our armed forces:
If you, an American warrior entrusted to defend this nation, believe that the flu vaccine is in your best interest, then you are free to take it. You shouldn’t. But we will not force you because your body, your faith, and your convictions are not negotiable.
Even before we get into the practical damage Hegseth’s move will inflict, note the bizarre framing. Personal freedom is great and should be granted wherever appropriate. But one place where it isn’t and never has been appropriate is in the military. When Americans sign up to serve the nation under arms, they agree to temporarily forego many of the freedoms of civilian life. They must wear uniforms, not street fashion. They must eat Army or Navy food. They must salute officers and obey orders. They must, in other words, adhere to military discipline.
It won’t surprise you to learn that Hegseth is completely hypocritical on this subject. He says that your body, your faith, and your convictions are not negotiable. But he has banned most beards from the U.S. military and cracked down on religious exemptions. After all, bearded men can’t be effective warriors:
He has also demanded that members of the military lose weight, because he doesn’t like how they look:
Frankly, it’s tiring to look out at combat formations or really any formation and see fat troops. Likewise, it’s completely unacceptable to see fat generals and admirals in the halls of the Pentagon and leading commands around the country and the world. It’s a bad look. It is bad and it’s not who we are.
But requiring that serving troops receive a vaccine that helps maintain their military effectiveness and also helps protect their comrades from infection? Tyranny!
This isn’t simply about vaccines and facial hair. These directives are part of a larger project, another step in Hegseth’s drive to cultify the US military.
What do I mean by cultifying the military? I mean creating an environment in which professional integrity, military discipline, and historical precedent are destroyed in service to the personality cult of Donald Trump and his enforcer, Pete Hegseth.
Think of these directives as loyalty tests. Hegseth can indulge his faux concerns about liberty while aligning himself with the science-hating right. If you are an officer concerned about the welfare of your troops and voice your concerns, you are out. Mention that the directive against beards is nonsensical and disproportionately harms black male soldiers with a common skin condition, and you are a woke weakling, sent packing. If you are a general in possession of critical skills and hard-won experience, but served during the Biden administration, you will be unceremoniously fired.
Simply put, the method in Hegseth’s apparent madness is to destroy the integrity of the professional military corps through destructive and despotic behavior that drives out those – like Admiral Holsey – who hold to their principles.
And this should terrify every American. A powerful military always poses a potential threat to democracy. To keep that threat in check, the military must be presided over by an officer corps that understands that its duty is not to any one person, but to the Constitution and the rule of law. The U.S. military has been largely insulated from political influence since the nation’s founding. But Hegseth is trying to subvert that.
Gratuitously exposing service members to disease isn’t a small issue. But it’s much more important as a symptom of the ongoing effort to corrupt the military and make it a servant of extremist politics and politicians.
MUSICAL CODA
Chinese AI lab DeepSeek's last model release was V3.2 (and V3.2 Speciale) last December. They just dropped the first of their hotly anticipated V4 series in the shape of two preview models, DeepSeek-V4-Pro and DeepSeek-V4-Flash.
Both models are Mixture of Experts with a 1 million token context. Pro is 1.6T total parameters, 49B active. Flash is 284B total, 13B active. They're using the standard MIT license.
I think this makes DeepSeek-V4-Pro the new largest open weights model. It's larger than Kimi K2.6 (1.1T) and GLM-5.1 (754B) and more than twice the size of DeepSeek V3.2 (685B).
Pro is 865GB on Hugging Face, Flash is 160GB. I'm hoping that a lightly quantized Flash will run on my 128GB M5 MacBook Pro. It's possible the Pro model may run on it if I can stream just the necessary active experts from disk.
For the moment I tried the models out via OpenRouter, using llm-openrouter:
llm install llm-openrouter
llm openrouter refresh
llm -m openrouter/deepseek/deepseek-v4-pro 'Generate an SVG of a pelican riding a bicycle'
Here's the pelican for DeepSeek-V4-Flash:

And for DeepSeek-V4-Pro:

For comparison, take a look at the pelicans I got from DeepSeek V3.2 in December, V3.1 in August, and V3-0324 in March 2025.
So the pelicans are pretty good, but what's really notable here is the cost. DeepSeek V4 is a very, very inexpensive model.
This is DeepSeek's pricing page. They're charging $0.14/million tokens input and $0.28/million tokens output for Flash, and $1.74/million input and $3.48/million output for Pro.
Here's a comparison table with the frontier models from Gemini, OpenAI and Anthropic:
| Model | Input ($/M) | Output ($/M) |
|---|---|---|
| DeepSeek V4 Flash | $0.14 | $0.28 |
| GPT-5.4 Nano | $0.20 | $1.25 |
| Gemini 3.1 Flash-Lite | $0.25 | $1.50 |
| Gemini 3 Flash Preview | $0.50 | $3 |
| GPT-5.4 Mini | $0.75 | $4.50 |
| Claude Haiku 4.5 | $1 | $5 |
| DeepSeek V4 Pro | $1.74 | $3.48 |
| Gemini 3.1 Pro | $2 | $12 |
| GPT-5.4 | $2.50 | $15 |
| Claude Sonnet 4.6 | $3 | $15 |
| Claude Opus 4.7 | $5 | $25 |
| GPT-5.5 | $5 | $30 |
DeepSeek-V4-Flash is the cheapest of the small models, beating even OpenAI's GPT-5.4 Nano. DeepSeek-V4-Pro is the cheapest of the larger frontier models.
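To make those per-token prices concrete, here's a quick back-of-envelope comparison. This is my own sketch: the prices come from the table above, but the daily workload of 2M input / 0.5M output tokens is invented purely for illustration:

```python
# Prices ($ per million tokens) copied from the table above; the default
# workload numbers below are made up for the sake of the example.
PRICES = {
    "DeepSeek V4 Flash": (0.14, 0.28),
    "GPT-5.4 Nano": (0.20, 1.25),
    "DeepSeek V4 Pro": (1.74, 3.48),
    "Claude Opus 4.7": (5.00, 25.00),
}

def daily_cost(model, input_mtok=2.0, output_mtok=0.5):
    """Dollar cost for a day's traffic, measured in millions of tokens."""
    input_price, output_price = PRICES[model]
    return input_mtok * input_price + output_mtok * output_price

for model in PRICES:
    print(f"{model}: ${daily_cost(model):.2f}/day")
# DeepSeek V4 Flash works out to $0.42/day on this workload,
# versus $22.50/day for Claude Opus 4.7 - a ~50x difference.
```

The asymmetry in output pricing matters here: for chatty, output-heavy workloads the gap between DeepSeek and the US frontier labs widens even further.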
This note from the DeepSeek paper helps explain why they can price these models so low - they've focused a great deal on efficiency with this release, especially for longer context prompts:
In the scenario of 1M-token context, even DeepSeek-V4-Pro, which has a larger number of activated parameters, attains only 27% of the single-token FLOPs (measured in equivalent FP8 FLOPs) and 10% of the KV cache size relative to DeepSeek-V3.2. Furthermore, DeepSeek-V4-Flash, with its smaller number of activated parameters, pushes efficiency even further: in the 1M-token context setting, it achieves only 10% of the single-token FLOPs and 7% of the KV cache size compared with DeepSeek-V3.2.
DeepSeek's self-reported benchmarks in their paper show their Pro model competitive with those other frontier models, albeit with this note:
Through the expansion of reasoning tokens, DeepSeek-V4-Pro-Max demonstrates superior performance relative to GPT-5.2 and Gemini-3.0-Pro on standard reasoning benchmarks. Nevertheless, its performance falls marginally short of GPT-5.4 and Gemini-3.1-Pro, suggesting a developmental trajectory that trails state-of-the-art frontier models by approximately 3 to 6 months.
I'm keeping an eye on huggingface.co/unsloth/models as I expect the Unsloth team will have a set of quantized versions out pretty soon. It's going to be very interesting to see how well that Flash model runs on my own machine.
Tags: ai, generative-ai, llms, llm, llm-pricing, pelican-riding-a-bicycle, deepseek, llm-release, openrouter, ai-in-china
Tool: Millisecond Converter
LLM reports prompt durations in milliseconds and I got fed up with having to think about how to convert those to seconds and minutes.
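The conversion itself is trivial, which is exactly why it's annoying to keep doing in your head. A sketch of the arithmetic - my own illustration, not the tool's actual code:

```python
def format_ms(ms: int) -> str:
    """Render a millisecond duration as minutes/seconds, e.g. 125000 -> '2m 5s'."""
    seconds, ms = divmod(int(ms), 1000)
    minutes, seconds = divmod(seconds, 60)
    if minutes:
        return f"{minutes}m {seconds}s"
    if seconds:
        return f"{seconds}.{ms // 100}s"  # one decimal place is plenty
    return f"{ms}ms"

print(format_ms(125000))  # 2m 5s
print(format_ms(1500))    # 1.5s
print(format_ms(250))     # 250ms
```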
Tags: tools
Yes, I will be doing a Conversation with him. Excerpted (and edited) from a bio:
He is on the business faculty at Catholic University and has a background both on Wall Street and in the startup world, where he founded several companies. His first book, Wanting (2021), has been translated into 20+ languages and is selling more copies than ever five years in. He is an expert on René Girard. His new book, The One and the Ninety-Nine, is out from St. Martin’s June 16 — a theory of how identity gets formed or deformed under conditions of technological social contagion. He has a third book (on “technology as soulcraft”) in the pipeline with a major publisher. He also lived in Italy and for a while was studying to be a priest. He remains a true Catholic, and is the founder and director of the Cluny Institute.
Here is Luke on Twitter. Here is Luke’s home page. So what should I ask him?
The post What should I ask Luke Burgis? appeared first on Marginal REVOLUTION.
1. Mason Currey, Making Art and Making a Living: Adventures in Funding a Creative Life. The best overall book I know on the different methods top artists have used to keep themselves going financially. It is perhaps more anecdotal and less theoretical than I would prefer, still a nice work.
2. Mangol Bayat, Mysticism and Dissent: Socioreligious Dissent in Qajar Iran. A very good, clear, and useful book on different dissident religious developments in Iran, leading up to the Bahai faith. Recommended, one of the best books I have found for grappling with the history of current Iran.
3. Lena Dunham, Famesick: A Memoir. Not exactly my thing, so I did not finish it. But it is pretty good, so if you are tempted give it a try.
4. Iain Pears, Parallel Lives: A Love Story from a Lost Continent. A delightful story/indirect memoir, telling the tale of the lives and marriage of Francis Haskell, the British art historian, and Larissa Salmina Haskell, a Russian woman who survived the siege of Leningrad as a girl. Pears had the full cooperation of Larissa, at an age where she doesn’t give a damn any more. This story truly comes to life, and that is helped by Pears’s background as a writer of very good fiction.
5. Lázár, by Nelio Biedermann. An excellent novel of ideas, in the style of earlier Continental literature, by a 23-year-old Swiss phenom. It is very good in German, I have not sampled the translation.
The post What I’ve been reading appeared first on Marginal REVOLUTION.
The current debate over Oregon’s “economic crisis” taking place around the state’s Prosperity Council relies on a narrow reading of data while overlooking broader evidence of success. The Brookings Institution’s “Metro Monitor” continues to rank Portland in the top ten large metros for income and wealth growth, and Bend remains the top-performing mid-sized city in the country.
Contrary to claims that Oregon’s numbers are largely, or entirely, the result of pre-pandemic trends, the latest data from the federal government, released this month, show that Oregon’s per capita income is still very near a 25-year high.
The most reliable and very recent indicator of prosperity—per capita personal income—shows that Oregon has steadily gained ground. In 2010, Oregonians earned just 88 percent of the national average; today, newly released federal data shows we are at 96 percent, the highest level in twenty-five years.
Oregon’s official econometric model predicts this trend will continue, forecasting that Oregon’s job growth will outpace the national average over the next ten years. While national risks like tariffs or global instability could certainly trigger a recession, these are external shocks rather than flaws in Oregon’s policy.
Any “prosperity” strategy has to start with an accurate and realistic assessment of where the state stands.
The Oregon Prosperity Council, an advisory group appointed by Governor Tina Kotek, has been subjected to a withering barrage of complaints about the state’s economy, and in our view, some unduly negative claims about the state’s economic performance. As we’ve pointed out, Oregon, and in particular its metropolitan areas (Portland, Salem, Eugene-Springfield and Bend), have been among the nation’s best performing, according to the widely respected “Metro Monitor” released by the Brookings Institution. Portland ranks in the top ten of large US metro areas on key measures of prosperity, while Bend is the number one performing metro area between 250,000 and 500,000 population. Doomsayers don’t dispute the accuracy of the Brookings data, but tell us that they can’t be right because we should ignore ten-year trends and only look at data from after the pandemic (because presumably everything is different now). We’re told that we’ve now reached an “inflection point” and that we’re in crisis.
Oregon Public Broadcasting essentially repeated this “ignore anything before 2020” point in its coverage of the Prosperity Council’s deliberations:
That argument relies in part on ratings by the Brookings Institution, which puts Portland in the middle of the pack of top metro areas when it comes to economic growth from 2014-24. The organization also rates Portland in the top 10 in terms of income and wealth growth. Notably, the decade Brookings uses to make those comparisons includes a period of surging pre-pandemic growth in the city.
As every economist knows, shorter term changes in data series are less meaningful than longer term shifts, so looking at one or two years’ data is seldom indicative of an enduring trend. But if we look at one of the most basic and revealing measures of economic prosperity, it is clear that Oregon’s economy has done well, and is continuing to do well, relative to the rest of the nation. Our focus here is on per capita income: the total income earned by Oregonians, divided by the state’s population. We use this metric to compare Oregon’s per capita personal income to the United States. For those who demand “fresh” data on recent progress, these are the U.S. Bureau of Economic Analysis’s latest, revised estimates, released on April 9, 2026.
Ideally, we’d like to see Oregon have a per capita income at or above the US national average. For a long time, Oregon has lagged well below the US average. If we look at the record of the 21st Century, Oregon started out well below the US average, and through 2010, our per capita incomes lagged behind the US average. By 2010, we were at only 88 percent of the US average. Since then, we’ve reversed the relative decline in Oregon per capita incomes that happened in the first decade of this century. Since 2011, our incomes have headed upward, and we’ve closed most of the gap with the nation. The latest federal data, for calendar year 2025, show that we’re at a bit above 96 percent of the US average, roughly as high as we’ve been at any time in the past quarter century.
Moreover, if a weaker performance was in the cards, you would expect that to be baked into the state’s economic forecast. Instead, the latest state economic forecast is that Oregon will maintain its relative position in terms of per capita personal income.
In addition, the state forecast calls for Oregon job growth to be faster than in the rest of the US for the next decade.
Now, of course, an economic model is just a projection. But it is important to note that the Oregon econometric model is largely driven by recent trends, i.e., data and relationships observed since 2020, not before. If anything, the model represents a conservative projection of the current trajectory of the Oregon economy, based on its performance relative to the nation. It’s also worth noting that, in general, Oregon’s econometric model has consistently under-estimated the future performance of the Oregon economy (which is why we’ve had five “kicker” payments in the past five biennia–the Oregon economy has done better than the economic model predicted it would). That’s not so much an indictment of the econometric model as it is an indication that the Oregon economy has gotten steadily stronger and actually out-performed what any reasonable model calibrated on backward-looking data would have predicted.
In short, if one were to look at the recent evidence on Oregon economic performance and its likely trajectory in the next few years, one would conclude that we’re likely to do well.
None of this is to say that there aren’t risks to the forecast. But, increasingly, these risks have to do with erratic and damaging federal policies, from an illegal and capricious system of tariffs, to an illegal and destabilizing war on Iran, to fundamental damage to immigration, science, education and the rule of law, all of which have underpinned long-term US economic growth. And plainly, Oregon’s economy has always been pro-cyclical–it goes up more in boom times and down more in troubled times. That means if—and likely when—we suffer a national recession, Oregon will likely see bigger job losses, and suffer more painful economic effects than other states. But these are problems of the national economy–not evidence of emergent flaws in Oregon’s economic policies.
U.S. Bureau of Economic Analysis, “SAINC1 State annual personal income summary: personal income, population, per capita personal income” (accessed Wednesday, April 22, 2026).
About 23 million people live in Taiwan, a Pacific island about the size of Maryland. Despite its size, the island produces a tremendous amount of agricultural goods per year—about $18 billion, according to Taiwan’s Ministry of Agriculture.
The average size of a farm in Taiwan (less than 1 hectare) is much smaller than in the United Kingdom (87 hectares) or the United States (187 hectares). Since much of the island is mountainous, only about one-quarter of Taiwan’s land is arable, and it is mostly located on the southwestern side of the island in the Chianan Plain. That amounts to 0.03 hectares of farmland per Taiwanese citizen—about half as much arable farmland as there is per person in the United Kingdom and one-tenth as much as in the United States.
The small plot size is apparent in this satellite image of farmland in Yunlin County in southwestern Taiwan, one of the island’s most productive agricultural areas. The modest scale is partly a result of past policies that limited the size of farms and partly a byproduct of cultural traditions that often lead to the division of farms into smaller parcels as property is passed from one generation to the next.
Located along the floodplains of the Zhoushui and Beigang rivers, Yunlin County is mostly flat, has fertile soils, and has easy access to irrigation water. The county, one of Taiwan’s main agricultural hubs, is known for producing a wide range of crops, including rice, sweet potatoes, peanuts, corn, sugarcane, garlic, scallions, coffee, fruit, and leafy greens. Farms in the county also raise millions of pigs, the most of any county in Taiwan.
Most crops in Yunlin County are grown in small rectangular plots defined by roadways and networks of irrigation canals. The exception is sugarcane, which was grown widely in the county in the early 1900s when Japan controlled Taiwan and established an expansive network of sugarcane plantations in the southwestern part of the country. These plantations were consolidated into the Taiwan Sugar Corporation after the conclusion of World War II, and the large plot sizes in the farmland north of Baozhong in the image above persist as a legacy of this period.
While the amount of sugarcane cultivated in Taiwan has declined in recent decades and many of the fields have transitioned to other crops, Taiwan Sugar Corporation still raises sugarcane around Baozhong. The company operates a railway that transports harvested cane to nearby Huwei, site of one of just a few remaining sugar refineries on the island. Taiwan once had an extensive network of sugar railways, with thousands of kilometers of track serving dozens of refineries, but the line to Huwei is the only one on the island that remains active.
Another area that stands out in the mosaicked agricultural landscape of Yunlin is located around Xiluo (above). Here the fields take on an unusual greenish-blue hue, largely because of the ubiquity of shade nets. Farmers use the nets to protect crops from heat, sun, heavy rains, and pests. They are generally deployed for specialty crops such as vegetables, fruit, and flowers. This area contrasts with the darker green region in the lower right of the first image, where rice is the dominant crop.
NASA Earth Observatory images by Michala Garrison, using Landsat data from the U.S. Geological Survey. Story by Adam Voiland.
The post An Agricultural Mosaic in Taiwan appeared first on NASA Science.
Not long ago we looked at construction productivity trends in the US and around the world. We found that in the US, and in most other large, wealthy countries, construction productivity is stagnant or declining. Unlike manufacturing, agriculture, or the economy overall, which generally show improving productivity over time, construction productivity tends at best to stay constant, and at worst to decline over time.
Understanding trends in productivity — how much output we get for a given amount of input over time — is useful, but it’s also useful to look at other metrics of construction industry progress. One particularly salient measure is construction costs: how much money it takes to build a house or an office or an apartment building, and how those costs have changed over time. Cost is a good improvement metric because it directly tracks what we actually care about: we would like the costs of building housing, buildings, and infrastructure to fall and become more affordable, and we basically care about more abstract measures like productivity to the extent that they’re a proxy for costs.
Unsurprisingly, when we look at construction costs we see similar trends to what we saw with construction productivity; construction rarely gets any cheaper over time, and construction costs tend to rise at or above the level of overall inflation. As with productivity, we see this when we analyze the data at different levels of granularity, and we see it in both the U.S. and in countries around the world.
Changes in construction cost are generally tracked using cost indexes, measures produced by various organizations which collect and analyze data to try and capture large-scale changes in construction cost. At a high level, there are two broad types of index: output indexes, and input indexes. Output indexes try to measure changes in the cost of finished buildings or infrastructure: how much it costs to build a house, or an office building, or a segment of road over time. Input indexes measure changes in the cost of some basket of construction inputs: the price of different construction tasks, or materials, or labor.
It’s not always straightforward to tell whether an index is an output index or an input index, because exactly how indexes are constructed can be somewhat opaque. An index that initially appears as if it’s an output index, because it apparently tracks changes in a particular type of construction (like new apartment buildings), may actually function more like an input index if it is constructed from price changes in inputs specific to that type of construction. All else being equal, I prefer output indexes to input indexes, because they should more closely track what we actually care about (the cost of finished buildings), and should be less subject to distortion. For instance, the invention of some great cost-saving construction method might not be reflected in an input index that simply tallies up the cost of 10 hours of labor, 100 pounds of steel, and 1 ton of cement (which is how many input indexes are constructed). But in practice output and input indexes tend to track each other quite closely.
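To make the basket-of-inputs approach concrete, here is a minimal sketch of how an input index of the kind described above could be computed, using the text’s hypothetical basket of 10 hours of labor, 100 pounds of steel, and 1 ton of cement. All prices are made up for illustration:

```python
# Sketch of a basket-style input cost index with fixed quantities.
# Basket and prices are illustrative, not drawn from any real index.

BASKET = {"labor_hr": 10, "steel_lb": 100, "cement_ton": 1}

def basket_cost(prices):
    """Total cost of the fixed basket at the given unit prices."""
    return sum(qty * prices[item] for item, qty in BASKET.items())

base_prices    = {"labor_hr": 20.0, "steel_lb": 0.50, "cement_ton": 120.0}
current_prices = {"labor_hr": 35.0, "steel_lb": 0.80, "cement_ton": 150.0}

# Index value: current basket cost relative to the base year, scaled to 100.
index = 100 * basket_cost(current_prices) / basket_cost(base_prices)
print(round(index, 1))
```

Note the limitation the text points to: because the quantities are fixed, a cost-saving method that lets builders use fewer labor hours or less steel per building would never show up in an index built this way.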
Cost indexes are resistant to some of the measurement difficulties that dog productivity metrics, because they’re typically constructed to mirror the cost changes of actual buildings. For instance, we’ve previously seen that productivity metrics suffer from problems of “changes in the output mix” — changes in the type of construction that takes place in a given geography or during a particular collection period can mask actual productivity trends. But the producers of cost indexes will often monitor trends in the construction marketplace and adjust how their index is constructed, weighing some items more heavily and others less, to reflect those trends. We should thus expect cost indexes to be more resilient to changing-output-mix problems.
But in some cases cost indexes share the same measurement issues as productivity metrics. In particular, it can be difficult to adjust cost indexes for quality; a modern building might cost more per square foot, but be built to higher standards or otherwise have higher performance than an older building, which looking only at changes in costs won’t capture. Some indexes, such as the Census Bureau’s Constant Quality Index, try to account for quality changes, but most don’t. (This is in contrast to, say, the Bureau of Labor Statistics’s sector-specific inflation measures, which try to take quality changes into account when calculating inflation trends for things like TVs or new cars.) Even indexes that do try to adjust for quality changes likely can’t account for them completely. These issues are somewhat mitigated by the fact that we care about costs as such, and it’s valuable to know how those costs are changing — i.e., even if some proportion of rising costs is due to increased standards and we are getting more bang for our buck, it’s still useful to know how construction costs are changing with respect to other prices. Nonetheless, we should keep in mind that quality changes aren’t always reliably captured when we’re looking at cost trends.
To look at trends in U.S. construction costs, we’ll use the following indexes:
The Turner Building Cost Index — Produced by Turner Construction, one of the largest general contractors in the US, this index tracks the price of non-residential buildings by considering such factors as “labor rates and productivity, material prices, and the competitive condition of the marketplace.” This is one of the oldest continuously produced construction cost indexes, going all the way back to 1915.
The Census Bureau’s Single-Family Constant Quality Index — Produced by the US Census Bureau, this index tracks changes in the price of single-family homes, and goes back to 1964.
Handy-Whitman — Produced by Whitman Requardt and Associates, this index tracks the cost of building reinforced concrete, brick-lined utility buildings (though data for other types of buildings are also available). The index is constructed by looking at the price of various inputs (materials, labor, equipment) for these types of buildings, but the relative proportions are adjusted to ensure that they reflect “current construction practice,” so I’m classifying this as an output index. I was able to get data for this index from 1915 to 2002.
Craftsman single-family home costs — Craftsman’s National Construction Estimator, an estimating guide that has been published since the 1950s, includes an estimated cost per square foot to build a “typical” single-family home in the U.S. I was able to get these values going back to 1966.
The National Highway Construction Cost Index — Produced by the U.S. Federal Highway Administration, this index tracks the cost of building highways over time, and is based on the price of winning bids for highway construction contracts. This public index goes back to 2003.
E.H. Boeckh Index — Produced by E.H. Boeckh and Associates, this index tracks the cost of a variety of different building types in cities around the U.S., based on “115 elements,” including labor costs, material costs, and tax and insurance elements. (I’m including this in the input indexes because I think it’s basically using the basket-of-inputs approach to construct costs, but depending on how they weighed these elements this might make more sense as an output index.) For many years this index was included in the Survey of Current Business produced by the U.S. Bureau of Economic Analysis. I look at this index for residential construction, for the years 1910 through 1991.
ENR Construction Cost Index — Produced by Engineering News-Record, this index tracks a basket of several different construction inputs — unskilled labor, steel, cement, and wood — the relative proportions of which are periodically adjusted. ENR also produces a virtually identical “Building Cost Index” that replaces unskilled labor with skilled labor. This index has been continuously produced since 1908.
RS Means Historical Cost Index — Produced by the RSMeans estimating company, this index tracks a basket of construction labor, materials and equipment costs. I was able to get data for this index going back to 1953.
Riggleman Index — Produced for an unpublished doctoral dissertation (by Dr. John R. Riggleman) in 1934, this index was made using several other indexes, such as the ENR construction cost index and the American Appraisal Company’s cost index for industrial buildings. This index is primarily useful because it goes back all the way to 1868.
Blank Residential Index — This is another composite index, which uses a weighted basket of construction inputs as well as the E.H. Boeckh index, to track the cost of residential construction. This index is useful because it goes back to 1889.
We’ll compare each of these indexes to the Consumer Price Index (CPI), a common measure of overall inflation. Because the Consumer Price Index only goes back to 1913, for earlier values we’ll use inflation conversion factors produced by Robert Sahr of Oregon State.
The graphs below show various cost indexes between 1870 and 1950.
And these graphs show cost indexes from 1950 to 2022.
We can see that regardless of time period, and regardless of whether we’re looking at input indexes or output indexes, construction costs are rising roughly as fast as, or faster than, overall inflation. When we looked at productivity trends, we saw that since roughly the 1960s U.S. construction productivity has been stagnant or declining. Cost data suggests that the problem extends even further back, and that U.S. construction costs have virtually never fallen with respect to overall inflation.
These graphs give us a good large-scale view of cost trends for different indices, but they make it hard to see cost trends over specific time periods. So let’s look at the average annual growth rate for each index over 10-year periods, minus the average growth rate of CPI over the same period. This lets us see how construction costs changed with respect to inflation over specific periods: positive values mean construction costs rose faster than inflation, negative values mean they rose more slowly.
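The calculation just described can be sketched in a few lines. The index and CPI levels below are illustrative placeholders, not values from any of the indexes discussed here:

```python
# Sketch: compound average annual growth of a cost index over a 10-year
# period, minus CPI growth over the same period. Levels are illustrative.

def annualized_growth(start, end, years):
    """Compound average annual growth rate between two index levels."""
    return (end / start) ** (1 / years) - 1

# Hypothetical index and CPI levels at the start and end of a decade:
cost_growth = annualized_growth(100.0, 148.0, 10)   # construction cost index
cpi_growth  = annualized_growth(130.7, 172.2, 10)   # consumer price index

# Difference in percentage points; positive means construction costs
# rose faster than overall inflation over the decade.
excess = (cost_growth - cpi_growth) * 100
print(round(excess, 2))
```

Repeating this for each index and each 10-year bucket produces the decade-by-decade comparison discussed below.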
We see that in almost every period, construction costs rose faster than overall inflation for virtually every cost index. The major exception is the period from 1975 to 1995, where most indexes show lower rates of increase, or even declines, relative to overall inflation. We also see that historical rates of cost increase seem to be as bad as or worse than modern ones. For four of the five 10-year periods between 1915 and 1965, the Turner Cost Index rose more than a percentage point faster than overall inflation, whereas for the periods from 1995 to 2025 it rose less than a percentage point faster.
As with construction productivity, we can also look at more granular construction cost trends, by looking at how the costs of individual construction tasks have changed. We can do this using construction estimating guides, which provide estimates for the costs of various construction materials and tasks. By looking at the costs of the same, or similar tasks across various versions of estimating guides, we can see how the cost of those tasks are changing.
The chart below shows the cost of 40 different construction tasks taken from three different versions of the RSMeans estimating guide published in 1954, 1985, and 2023.
And this chart shows the cost of 20 different construction tasks taken from several different versions of the Craftsman National Construction Estimator published between 1967 and 2016.
We can see that cost changes in individual construction tasks aren’t uniform. Some have risen in cost faster than overall inflation; others more slowly. But on average, the cost of these construction tasks has risen at the level of overall inflation. So not only are buildings not getting any cheaper to produce on average, the cost of individual construction tasks isn’t falling either on average, at least for this collection of construction tasks.
There are issues with looking at changes in individual construction tasks. As we noted when we looked at construction productivity, all else being equal we might expect construction to improve by way of introducing new, improved processes, so looking at changes in older processes might not reveal very much. In the 19th century, nails got cheaper due to the introduction of new nailmaking processes: hand-made nails were replaced by the cut-nail process, and then by the wire-nail process. If we looked only at improvements in hand-made nails, we might conclude that nails on the market hadn’t gotten any cheaper, even though what actually happened was that an older process had simply been replaced by a newer, better one. I’ve tried to avoid this by using construction tasks that I know are still in use, but this isn’t perfect. Unfortunately, this method may run into an adverse selection problem: picking tasks that appear in many versions of the estimating guide may inadvertently select for ones that have been difficult to substitute. Nonetheless, it’s the best method we have for analyzing costs at the granular task level.
We can address this issue the same way we did when we looked at construction productivity: by looking at cost trends in broad categories of tasks. The chart below shows the cost per square foot for 32 categories of tasks required to build a single-family home, from Craftsman’s National Construction Estimator. As we can see, task costs generally rise at roughly the same rate as overall home prices, and their relative shares rarely change. (This is sort of a mechanical outcome of the fact that task category prices are given as a percentage of overall costs, and for most task categories that percentage has changed little over time, but it’s nevertheless notable.)
Thus, at the level of construction tasks, we also see costs tending to rise at or above the level of overall inflation.
To see whether this trend is also observed internationally, we can look at similar construction cost indexes constructed for other countries. The cost indexes we’ll look at are below:
Eurostat Construction Producer Price Index for Residential Buildings — This cost index, produced by Eurostat for 36 different European countries, tracks the cost changes for residential buildings. (It’s not particularly clear to me how this was constructed: the website merely says that producer price indexes track “the average price development of all goods and related services resulting from that activity.”) For most countries this index goes back to 2000, but for some it goes all the way back to the 1950s.
The U.K.’s BIS construction output price index — This index tracks output prices for several different UK construction sectors (I used values from “All Construction”), going back to 1955. Because this index only goes up to 2011, I supplemented this with the similar Construction Output Price Index from the U.K.’s Office for National Statistics, which began in 2014.
Belgium’s ABEX index — This index tracks the price of building residences in Belgium. The Eurostat Producer Price Index includes data for Belgium, but it only goes back to 2000, whereas the ABEX index goes all the way back to 1914(!).
Japan’s Construction Cost Deflator — This index tracks price changes for several different sectors of Japanese construction, going back to 1960. I used the value for residential construction.
South Korea’s Construction Cost Index — This index tracks the change in construction costs for several different Korean construction sectors, going back to 2000. I used the index for housing construction.
Hong Kong’s Building Works Tender Price Index — Tracks the cost of new buildings in Hong Kong, based on contractor bids. Goes back to 1970.
Taiwan’s Construction Price Index — This index tracks the changes in construction costs in Taiwan, and is based on the prices of 115 different construction inputs. It goes back to 1991.
The graphs below show each of these indexes against the consumer price index for 12 major European countries, as well as the U.S. consumer price index.
And this graph shows construction cost trends for Asian countries.
We see the same pattern that we saw with U.S. construction cost indexes: construction costs nearly always rise at, or faster than, the level of overall inflation in the country.
We can also see this if we look at changes in construction cost minus changes in consumer price index in 10-year buckets, as we did for U.S. construction costs. The chart below shows decade-by-decade changes in construction cost minus CPI for 41 different European and Asian countries (including lots of smaller and poorer countries that I didn’t include on the above graphs).
There’s somewhat more blue on this chart than on the U.S. charts, but we can still see that costs are, more often than not, rising faster than overall inflation.
When we looked at trends in construction productivity — how much construction output we get for a given amount of input — we saw that it’s mostly either relatively unchanging, or declining over time. We saw this in the U.S. using a variety of different metrics of varying granularity, and we saw it in most other wealthy countries. With construction costs — how much construction output we get for a given amount of currency — we see something similar. Construction costs tend to rise at, or above, the level of overall inflation, and it rarely (if ever) gets cheaper to build houses, offices, or other buildings. We see this in the U.S. with a variety of different metrics, and we see it in countries around the world. With stagnant construction productivity, we could date the problem as far back as roughly the 1960s. With construction costs, we can push the problem back even further: outside of a few windows of time, construction costs have virtually never fallen with respect to overall inflation.
In today’s digital world, data loss from your PC has become a common and frustrating issue. Whether due to accidental deletion, formatting, system failures, or another cause, losing important files and folders from your Windows computer can be a serious problem. Fortunately, there are several free data recovery tools you can try to recover your lost files in Windows without spending money. Choosing the right recovery software depends on your data loss situation, device type, ease of use, recovery limits, and required features.
In this article, we will explore the top 5 free data recovery software of 2026, their features, pros & cons, and best use cases to help you choose the right tool. Let’s get started!
While there are several DIY methods, like backups, that can help you recover deleted data, third-party recovery software offers a more powerful, reliable, and versatile solution. These free data recovery tools are popular because of the features they offer.
Here are our top 5 free data recovery software tools, which you can use to restore deleted files on your Windows PC:

Stellar Data Recovery Free is a versatile, free data recovery software that supports the recovery of files that are accidentally deleted or lost due to formatting and other reasons. The software supports recovery of deleted files of various types, including documents, photos, videos, and emails from multiple storage devices, including hard drives, SSDs, USB drives, and memory cards. The free version allows you to restore deleted files up to 1 GB without any additional cost. Its preview feature lets you preview the recoverable files before restoring them.
Pros
Cons
Best Use Case
Ideal for recovering recently deleted files, documents, photos, or videos from laptops, desktops, USB drives, or memory cards efficiently.
EaseUS Data Recovery Wizard Free is a powerful data recovery software, mainly known for its simple interface and high recovery success rate. You can use this free data recovery tool to recover deleted, formatted, or lost files from multiple storage devices, including hard drives, SSDs, USB drives, and memory cards. It offers both quick and deep scan modes, allowing you to recover recently deleted files as well as data lost due to complex issues like partition loss or system crashes.

Pros
Cons
Best Use Case
Suitable for beginners needing a simple, guided recovery solution with reliable performance for small-scale data loss situations without technical expertise required.

Disk Drill is a modern and feature-rich free data recovery software that you can use to recover deleted files on both Windows and macOS devices. The software is widely known for its clean interface and powerful scanning algorithms that combine quick scan and deep scan methods. You can use this software to recover deleted photos, videos, documents, and other types of files from various internal or external storage media. It also offers a file preview before recovery, allowing you to verify data before saving it on your device.
Pros
Cons
Best Use Case
Ideal for quick recovery of recently deleted files, photos, videos, or documents from personal computers, USB drives, or memory cards.

PhotoRec, bundled with TestDisk, is a powerful open-source data recovery tool that you can use to recover files lost for a variety of reasons, including formatting and storage device damage. Unlike typical free data recovery tools, it works through a command-line interface, which may seem complex to beginners. It is a strong option for recovering files from severely corrupted, formatted, or damaged drives. Since it is completely free and open-source, there are no recovery limits.
Pros
Cons
Best Use Case
Suitable for advanced users or IT professionals who need a powerful, unlimited recovery tool for deep scanning and complex data recovery scenarios.

Recuva, developed by Piriform, is a lightweight and completely free data recovery software designed primarily for Windows users. The software supports data recovery from hard drives, USB drives, memory cards, and other storage devices. This tool offers a wizard-based interface that guides users through the recovery process step-by-step, making it ideal for beginners. It can also perform a deep scan of your storage device to locate hard-to-find files. Unlike many competitors, Recuva provides unlimited free recovery, making it a great choice for users who want a no-cost solution.
Pros
Cons
Best Use Case
Ideal for recovering recently deleted files, photos, documents, or videos from Windows computers, USB drives, or memory cards quickly.
Data loss on your device can be a distressing experience, especially if the lost data is valuable. Fortunately, the free data recovery software tools above allow you to recover data lost from most internal or external storage media. Whether you are a beginner or an advanced user, there is a suitable option for every need. Carefully compare the features of these tools and select the one that best fits your situation.
While free tools have limitations, they are often sufficient for recovering critical files. For larger recoveries, upgrading to a paid version may be necessary.
The post Top 5 Free Data Recovery Software of 2026 appeared first on DCReport.org.
Genie Lessons from the Genie Sessions — every Friday I work on a real problem with an AI tool, live, for paid subscribers. Every Monday the lesson drops here, free.
Nobody wants agents. Nobody wants agent swarms. I have a system and I want it to change. That’s the whole thing.
This session I was using Intent by Augment Code — multi-agent, coordinator plus implementer plus verifier. Working on an adaptive radix tree in Go, optimizing for human readability. The coordinator delegates, the implementer runs off and builds things, the verifier checks. I called it the Freudian architecture: id, superego, ego. The id rushes ahead. The superego folds its arms. The ego negotiates. It kind of works.
But watching the swarm spin up, I noticed something. I was managing it. Watching which agent was doing what. Wondering when to interrupt. Holding state in my head that the system should have been holding for me. I’d said I wanted readable code and instead I had a coordination problem.
The mismatch is this: when I was working on performance last week, what I actually wanted was — how much faster can we make this? How hard would it be? How much would it cost? Outcomes. I don’t want to prompt-engineer my way toward an answer. I want to describe the result I’m after and have the genie tell me if it’s achievable and what it would take. I’ve never been able to get two agents working on the same codebase at the same time without my head exploding. So I’m not convinced the swarm is the answer.
Multi-agent is a feature. Outcome-orientation is the thing the feature is supposed to deliver. We keep getting those confused.
The other frontier nobody’s working on yet: multiplayer. Right now, five agents can work on this codebase simultaneously. Five people can’t. That’s backwards. The person who figures out real-time collaborative augmented development — where multiple humans actually steer together, not just watch — that person is solving the real problem.
Nobody knows what that looks like. But I’m pretty sure it’s not a coordinator with finger guns.
This Genie Session was sponsored by Augment Code.
Intent is the AI coding tool built for the way software actually gets written now. Describe what you want. Intent handles the rest — planning, implementing, verifying — so you stay focused on outcomes, not orchestration.

Atmos Space Cargo has raised 25.7 million euros ($30.1 million) to fly a series of reentry vehicle missions and work on a larger spacecraft.
The post Atmos Space Cargo raises $30 million for reentry missions appeared first on SpaceNews.

The Pentagon’s 2027 budget proposal includes an estimated $58.5 billion tied to artificial intelligence.
The post Pentagon seeks $2.3 billion for Maven AI battlefield system appeared first on SpaceNews.

Members of the House Science Committee rejected a proposed fiscal year 2027 budget for NASA because of sweeping cuts as the agency’s administrator argued it could do more with less.
The post House Science Committee pans NASA budget request appeared first on SpaceNews.

Deal gives the government a future ownership stake in the Missile Solutions business
The post Pentagon closes $1 billion investment in L3Harris missile unit appeared first on SpaceNews.

A Rocket Lab Electron launched a set of cubesats sponsored by the Japanese space agency JAXA April 22 on the company’s second dedicated mission for the agency.
The post Electron launches Japanese cubesats appeared first on SpaceNews.

French startup Univity has raised around $32 million to deploy a pair of 5G demonstrators into very low Earth orbit next year, ahead of plans for at least 1,600 VLEO satellites to help telecom operators extend 5G coverage from space.
The post Univity funds VLEO 5G demonstrators with $32 million Series A appeared first on SpaceNews.

The Federal Communications Commission has moved to lock down incumbent rights to Mobile Satellite Service spectrum, dismissing bids by SpaceX and others to access frequencies increasingly prized for direct-to-device connectivity.
The post FCC throws out satellite spectrum challenges as D2D dealmaking heats up appeared first on SpaceNews.

The contract is to demonstrate space-based data links, using the Link-182 standard, that will support Golden Dome.
The post SpaceX wins $57 million U.S. military contract for satellite crosslink demo appeared first on SpaceNews.

Jordan is the latest country to sign the Artemis Accords as NASA works to attract more countries to its lunar exploration efforts.
The post Jordan signs the Artemis Accords appeared first on SpaceNews.

In this episode of Space Minds, we head back to Space Symposium where SpaceNews’ Sandra Erwin moderated a panel on how optical communication links can provide warfighters and operators with […]
The post Optical links in contested space appeared first on SpaceNews.
“Never interrupt your enemy when he is making a mistake.”
Napoleon Bonaparte’s maxim may well have been in the minds of policymakers in Moscow and Beijing these past weeks, as the U.S. war in Iran dragged on. And now that a 14-day ceasefire between Tehran and Washington is in effect – with both sides claiming “victory” – Russian and Chinese leaders still have an opportunity to profit from what many see as America’s latest folly in the Middle East.
Throughout the weekslong conflict, China and Russia struck a delicate balance. Both declined to give Iran – seen, to varying degrees, as an ally of both nations – their full-throated support or to sink any real costs into the conflict.
Instead, they opted for limited assistance in the form of small-scale intelligence and diplomatic support.
As a scholar of international security and great power politics, I believe that is for good reason. Beijing and Moscow were fully aware that Iran could not “win” against the combined military might of the United States and Israel. Rather, Iran just needed to survive to serve the interests of Washington’s main geopolitical rivals.
Below are four ways in which the U.S. war in Iran has damaged Washington’s position in the great power rivalries of the 21st century.
As I explore in my book “Defending Frenemies,” the U.S. has long struggled to balance competing objectives in the Middle East. During the Cold War, this meant limiting the Soviet Union’s influence in the region, while contending with the development of nuclear weapons by two troublesome allies, Israel and Pakistan.
By the 2020s, the priorities in Washington were aimed at restricting the influence of the U.S.’s great power rivals – China and to a lesser degree Russia – in the Middle East.
Yet under Presidents Xi Jinping and Vladimir Putin, China and Russia have sought to increase their footprint in the region through a variety of formal alliances and informal measures.
For Russia, this took the form of aligning with Iran, while also partnering with Tehran to prop up the now-ousted regime of President Bashar Assad during the Syrian civil war. Meanwhile, China increased its diplomatic profile in the Middle East, notably by acting as a mediator as Saudi Arabia and Iran restored diplomatic ties in 2023.
The irony of the latest Iran war is that it follows a period in which circumstances were unfavorable to Russian and Chinese aims of increasing their influence in the Middle East.
The fall of Assad in December 2024 deprived Russia of its one reliable ally in the region. And Trump’s May 2025 tour of the Gulf states, in which he secured major technology and economic deals with Saudi Arabia, the United Arab Emirates, Qatar and Bahrain, was aimed at countering China’s growing economic and diplomatic influence in those countries.
With Washington perceived as an increasingly unreliable protector, the Gulf states may seek greater security and economic cooperation elsewhere.
In expanding military, diplomatic and economic ties in the Middle East, Russia and China over the past two decades were exploiting a desire by Washington to move its assets and attention away from the region following two costly wars in Iraq and Afghanistan.
Trump’s decision to wage war against Iran directly contradicts the national security strategy his administration released in November 2025. According to the strategy, the administration would prioritize the Western Hemisphere and the Indo-Pacific, while the Middle East’s importance “will recede.”
In co-launching a war against Tehran with Israel, without any prior consultation with Washington’s other allies, Trump has shown a complete disregard for their strategic and economic concerns. NATO, already riven by Trump’s repeated threats to the alliance and designs on Greenland, has now shown further signs of internal divisions.
That offers benefits for China and Russia, which have long sought to capitalize on cracks between America and its allies.
The irony, again, is that the war in Iran came as Trump’s vision of the U.S. as the hegemonic power in the Western Hemisphere was making advances. International law and legitimacy concerns aside, Washington had ousted a thorn in its side with Nicolás Maduro in Venezuela and replaced him with a more compliant leader.
Iran’s closure of the Strait of Hormuz, through which some 20% of the world’s oil passes, was as predictable as it was destructive for U.S. interests.
But for Russia, this meant higher oil prices that boosted its war economy. It also led to the temporary but ongoing easing of U.S. sanctions, which has provided Moscow an indispensable lifeline after years of economic pressure over the war in Ukraine.
While a prolonged closure and extensive damage to oil and natural gas infrastructure in Iran and the Gulf states no doubt hurts China’s energy security and economy, these were risks Xi appears willing to accept, at least for a time.
And by building up a domestic oil reserve and diversifying energy sources to include solar, electric batteries and coal, China is far better positioned than the U.S. to weather a prolonged global energy crisis. Indeed, Beijing has made strides in recent years to encourage domestic consumption as a source of economic growth rather than relying so heavily on global trade. That may have given China some protection during the global economic shock caused by the Iran war, as well as pushed its economy further down its own track.
The more the U.S. loses control over events in the strait, the more it loses influence in the region – especially as Iran appears to be placing restrictions on ships from unfriendly nations.
Trump’s willingness to abandon talks to go to war, and the contradictory rhetoric he has employed throughout the Iran conflict, have weakened the perception of the U.S. as an honest broker.
That provides a massive soft power boost for Beijing. It was China that pressed Iran to accept the 14-day ceasefire proposal brokered by Pakistan. Indeed, China has slowly chipped away at America’s longtime status as global mediator of first resort.
Beijing has successfully mediated in the past between Iran and Saudi Arabia, and it has attempted to do the same between Russia and Ukraine, and between Israel and the Palestinians.
In general, the Iran war adds weight to Beijing’s worldview that the U.S.-led liberal international order is over. Even if China benefited at some level from the war continuing, its decision to help broker the ceasefire shows that China is increasingly taking on the mantle of global leadership that the U.S. used to own.
And for Russia, the Iran war and the rupture between Trump and America’s NATO allies over their lack of support for it, shift world attention and U.S. involvement from the war in Ukraine.
“FREEDOM OF THE PRESS IS NOT JUST IMPORTANT TO DEMOCRACY, IT IS DEMOCRACY.” – Walter Cronkite. CLICK HERE to donate in support of our free and independent voice.
The post 4 ways the war in Iran has weakened the United States in the great power game appeared first on DCReport.org.
Bipolar disorder does not announce itself with a single, obvious sign. It builds gradually through shifting moods, disrupted sleep, and strained connections that slowly chip away at stability. Many people wait too long before reaching out, often because they are unsure whether what they feel warrants professional attention. The truth is, earlier support tends to produce better results. This post breaks down the specific moments when seeing a therapist becomes less of an option and more of a necessity.
Everyone experiences emotional ups and downs from time to time. That is part of being human. But there is a meaningful difference between a rough week and a pattern of intense mood shifts that cycle with increasing speed. Manic highs that fuel sleepless nights, followed by depressive crashes that make getting out of bed feel impossible, point to something clinical. A mental health professional can evaluate these patterns and determine whether a formal treatment plan is needed.
Often, the people closest to someone notice these shifts before the individual does. A partner might point out erratic spending during a manic stretch, or a friend might flag sudden social withdrawal. That outside perspective matters. Working with a bipolar disorder therapist gives individuals a reliable framework for managing these cycles. Specialists in this area use evidence-based methods to help clients recognize triggers, build coping mechanisms, and develop routines that promote emotional balance over time.
Mood instability ripples outward. During manic periods, impulsive remarks or restless irritability can damage even the strongest bonds. Depressive stretches often bring isolation, leaving loved ones feeling shut out. Over months or years, these cycles create a pattern of rupture and repair that exhausts everyone involved.
A therapist offers a contained space to examine how these emotional extremes affect the people around us. Practical tools, like communication exercises and emotional regulation techniques, make a real difference. In some cases, joint sessions with a partner or family member help rebuild trust and establish healthier ways of supporting one another.
Missed deadlines, ignored bills, and mounting household tasks are clear signs that symptoms have started affecting everyday functioning. Mania can produce intense bursts of activity that burn out quickly, leaving projects half-finished. Depression strips away drive entirely, turning even minor obligations into exhausting ordeals.
A noticeable drop in output at a job or in coursework deserves attention. Difficulty in concentrating, gaps in memory, and unpredictable energy levels all interfere with consistent performance. A clinician experienced with mood disorders can help design strategies for staying on track during difficult stretches, including structured scheduling and energy management techniques.
Reaching for alcohol, substances, or other high-risk outlets to blunt emotional extremes is a serious red flag. These habits offer short-term numbness but deepen instability over time. They also raise the likelihood of developing a co-occurring substance use condition, which complicates treatment significantly.
Therapeutic intervention targets the underlying emotional pain that fuels these choices. Cognitive behavioral techniques, consistent mood monitoring, and coordinated medication management work together to reduce reliance on harmful coping habits. Seeking support before these behaviors become deeply rooted improves recovery outcomes considerably.
It is surprisingly common for people to step away from treatment after a period of stability. Feeling good can create a false sense of resolution, as though the condition has somehow passed. Bipolar disorder, however, is a lifelong condition. Symptoms almost always resurface without ongoing care.
Re-engaging with a mental health provider after time away is a practical and encouraged step. Treatment plans can be adjusted to reflect new life circumstances, updated research, or changes in symptom presentation. Consistent clinical engagement lowers the risk of severe episodes and reduces the chance of hospitalization.
This is the most critical signal, and it calls for immediate action. Depressive episodes tied to bipolar disorder carry a heightened risk of suicidal thinking. No one should sit with those thoughts alone. Crisis hotlines, emergency services, and urgent therapy appointments all exist for exactly this reason.
After a crisis passes, sustained therapeutic support helps build a safety plan and identify warning signs before they escalate again.
Asking for help is one of the most grounded, self-aware decisions a person can make. Whether mood episodes are accelerating, personal connections are fraying, or basic responsibilities have become overwhelming, a qualified therapist provides clarity and direction. Each session offers a chance to learn new tools, process difficult emotions, and build a more stable foundation. That first appointment is not the end of a struggle; it is the beginning of a more informed, supported way of living with bipolar disorder.
The post When To See a Bipolar Disorder Therapist for Support appeared first on DCReport.org.
There is growing interest in medical red light therapy for improving health, but choosing the right device is not easy. Several factors affect a device’s effectiveness and its suitability for home or clinical use, so both individuals and healthcare providers need to evaluate their options before acting.
Red light treatment includes exposure to particular wavelengths of light. This therapy serves as an aid in recovery, minimizes pain, and enhances the appearance of the skin. Devices come in different sizes, power, and functionality. Understanding these distinctions allows users to make informed decisions about medical red light therapy systems.
Identifying the primary purpose for acquiring a red light therapy system is essential. Some units are suitable for home use, while others fit clinical environments. Home devices focus on convenience and ease of operation. Professional tools may offer higher intensity and advanced settings suitable for practitioners.
Wavelength is a critical factor in therapeutic efficacy. The majority of systems function in the 600 to 900 nanometer range, and each band within that range targets different tissues and conditions. Power output, measured in mW/cm², determines how deeply and efficiently the light reaches the target tissue. Some systems have adjustable settings that let users customize sessions to their needs.
When deciding on medical equipment, safety remains a primary concern. Therapy systems receive certification for their adherence to recognized health and electrical safety standards, and certifications from respected organizations can instill confidence in the user. Reading the product documentation confirms the device meets these standards. Also check for components that enable heat regulation, which enhances user safety.
There are different types of therapy devices that providers offer. Smaller units are for targeted treatments and ease of travel. Large panels are good for people looking for a broader treatment area or treating multiple regions at once. Design aspects, including adjustable stands or flexible arms, offer added ease and comfort while using the device.
Many devices offer pre-defined protocols for some common issues. These settings can be used to simplify use and to help achieve reproducible results. Meanwhile, some devices allow you to manually adjust them, providing more control over session length and intensity to more experienced users. Glancing through the available settings allows users to find a system that matches their preferences.
User-friendly design ensures therapy remains accessible to everyone. Clear instructions, intuitive controls, and simple interfaces support effective operation. Devices with minimal setup and maintenance requirements save time and reduce frustration. Evaluating these aspects can improve the therapy experience for all users.
Dependable customer support can help with queries and concerns during the use of the product. Check the warranty coverage and ensure it protects against defects or malfunctions. Manufacturers that offer responsive support and set clear policies show they are committed to customer service. Browsing these services before buying gives users a sense of security in their purchase decisions.
Comparing similarly priced models helps identify what works best for your budget. A lower price may sacrifice basic functions and durability; on the other hand, you can pay more for a more sophisticated device with a longer warranty. Maintain a balance between affordability and functionality to guarantee a value-for-money investment.
Users and medical professionals provide valuable feedback. Expert opinions, evaluations, and endorsements tend to expose either positive or negative characteristics of individual models. Consulting with a medical professional also ensures that the selected system aligns with health needs and safety standards. Getting a variety of viewpoints helps make good choices.
Choosing a medical red light therapy system can be difficult. However, paying attention to safety, performance, usability, and support can help users find the device that suits their individual needs, and the right device optimizes therapy outcomes and satisfaction. All in all, whether for personal or professional use, an informed decision lets people enjoy the benefits of red light therapy hassle-free.
The post Choosing the Right Medical Red Light Therapy System appeared first on DCReport.org.
Tuesday’s Senate confirmation hearing for Kevin Warsh, the recent forced resignations from the Cabinet and the poorly veiled entreaty to two Supreme Court justices to step down all point up anew the power of Donald Trump to name a team that is more committed to him than to doing the job.
Trump has a free hand, of course, in naming those who work in the White House and who offer direct counsel on policy and messaging. But things get messy fast when the job at hand is supposed to involve a certain distant expertise not governed by Trump’s expansive gut, whether health, education, defense or monetary policy.
The key question for senators at all the Cabinet-level confirmation hearings has been the same: Will you act independently of Trump or merely as his tool to dominate yet another area of government?
Despite a messy war with Iran that is goosing oil prices globally, Trump still expects that Warsh will bring about an immediate and significant drop in basic borrowing rates, a move that looks to economists and financial experts like an invitation to sustained inflation or worse. Trump wants the investor class to have access to cheap loans and seems to ignore the predictable effect on consumer markets.
Quite apart from any judgments about Jerome Powell as Fed chair concerning monetary rates or economic forecasts, Powell will be remembered for standing up to Trump – which is why Trump wants him replaced even if any confirmation is delayed.
By contrast, other than for their publicly embarrassing confrontations with Congress, no one really understands why Kristi Noem was ousted as Homeland Security Secretary or why Pam Bondi, a willing accomplice in using the Justice Department to pursue Trump’s perceived political enemies, was forced out. It’s not as if any policies suddenly changed as a result.
Nor does this week’s departure of Labor Secretary Lori Chavez-DeRemer spell any policy change, though it eliminates some bad publicity for her behaviors, a fate that may consume Kash Patel’s time as director of the FBI as well.
From all that is publicly shared about Warsh – a prominent economist, investment banker and former Federal Reserve governor who worked on navigating the 2008 financial crisis – he would be the type of candidate to sail through the Republican-majority Senate confirmation process, despite criticisms from the ranking minority committee member, Sen. Elizabeth Warren, D-Mass.
The sticking point is his explanation of “independence.”
As with Todd Blanche, the former Trump defense lawyer serving as interim Attorney General, Warsh sees little wrong with Trump expressing strong opinions about how he, Warsh, would do the job at the Fed. Blanche says Trump should have opinions about who gets prosecuted by Justice, and that Justice should heed the advice. Warsh was seeking to be careful at his hearing about just how far to show deference to Trump as dictator of national monetary policy – an area set by law for review independent of the partisan concerns of a president.
Decisions by the Fed are supposed to reflect technical economic readings, not the political needs of a president facing an adverse midterm election and seeking a quick-fix rate drop. But Trump wants the “lowest rates in the world” so badly that he has ordered the Justice Department to launch criminal investigations of Powell’s management of construction – and of Fed board member Lisa Cook – in hopes of forcing him out. Courts have now ruled twice that search warrants were unjustified because they reflected political concerns, not crime. At least one senator, Thom Tillis, R-N.C., says he won’t support Warsh while unjustified charges still loom over Powell, himself a Trump appointee to the Fed chair.
No economic measure has improved for an incoming Warsh. Indeed, the war with Iran, tariffs, a budget bloated for deportation and military adventurism, and global pricing and shipping uncertainties will make any legit reduction in Fed rates more difficult to achieve. So, we know now that Warsh will become a focal point for whether Trump gets his way.
The problem for senators was crafting a way to gauge protestations of “independence” by Warsh or even on the questions involving economic direction at a time of so much uncertainty. It was easier to focus on the White House’s vocal anti-Powell campaign.
Indeed, senators questioned Warsh’s undisclosed personal investments and whether he has complied with required ethics declarations, his willingness to back limits over cybercurrency and AI, and the pace of Fed decision-making, but the main question kept coming back to queries that tried repeatedly to get at the independence issue.
Warsh’s attempts to say that he would not routinely follow Trump’s open demands on interest rates drew a healthy amount of skepticism from senators who noted that Trump makes clear his expectation of appointees.
What does not get discussed is how Trump goes out of his way to disdain expertise in making appointments. Using loyalty as the only measurement has consistently created problems for Trump: in health policies that kick people off Medicaid and close hospitals, in deportation campaigns that overrun legal boundaries, in environmental policies that ignore pollution to promote more oil drilling, and on through the Cabinet positions.
There is too little sustained focus on the aims of these agencies, and a lot on the willingness of agencies to reinterpret their own authority to evade accountability.
In monetary policy, as with other areas, Trump starts with the desired conclusion and then skips over the available evidence – much as seemed to have happened in launching this war with Iran. For the Fed, he has decided that lower rates are what he needs, without regard to the voluminous economic review that the Fed undertakes.
The “independence” of the Fed is supposed to guarantee that expertise, not political ends, govern.
The post Gauging ‘Independence’ from Trump appeared first on DCReport.org.
St. George’s day and Coronacion, the King and Court being at Windsor, at the installing of the King of Denmark by proxy and the Duke of Monmouth.
I up betimes, and with my father, having a fire made in my wife’s new closet above, it being a wet and cold day, we sat there all the morning looking over his country accounts ever since his going into the country. I find his spending hitherto has been (without extraordinary charges) at full 100l. per annum, which troubles me, and I did let him apprehend it, so as that the poor man wept, though he did make it well appear to me that he could not have saved a farthing of it. I did tell him how things stand with us, and did shew my distrust of Pall, both for her good nature and housewifery, which he was sorry for, telling me that indeed she carries herself very well and carefully, which I am glad to hear, though I doubt it was but his doting and not being able to find her miscarriages so well nowadays as he could heretofore have done.
We resolve upon sending for Will Stankes up to town to give us a right understanding in all that we have in Brampton, and before my father goes to settle every thing so as to resolve how to find a living for my father and to pay debts and legacies, and also to understand truly how Tom’s condition is in the world, that we may know what we are like to expect of his doing ill or well.
So to dinner, and after dinner to the office, where some of us met and did a little business, and so to Sir W. Batten’s to see a little picture drawing of his by a Dutchman which is very well done.
So to my office and put a few things in order, and so home to spend the evening with my father. At cards till late, and being at supper, my boy being sent for some mustard to a neat’s tongue, the rogue staid half an hour in the streets, it seems at a bonfire, at which I was very angry, and resolve to beat him to-morrow.
In the Netherlands, not only is it legal to receive medical aid in dying (MAID), but a growing number of MAID patients are able to successfully achieve their desire to become deceased organ donors.
From the American Journal of Transplantation:
Wijbenga, N., Gan, C.T., Ruigrok, D., Berg, E.M., Hagenaars, J.A.M., Siregar, S., van der Kaaij, N.P., Mathot, B.J., van Pel, R., Seghers, L. and Manintveld, O.C., 2026. The Increasing Contribution of Organ Donation after Euthanasia to the Lung Transplantation Donor Pool in the Netherlands. American Journal of Transplantation.
"Abstract: The number of organ donation after euthanasia (ODE) procedures in the Netherlands has grown substantially, yet their contribution to the lung-donor pool remains unclear. There is no clinical consensus on how these potential ODE lung-donors should be assessed. We aimed to describe the total contribution of ODE to the lung-donor pool in the Netherlands and describe the assessment of potential ODE lung-donors.
We collected data from all ODE procedures performed between 2012-2024 in the Netherlands. We assessed the number of ODE-lungs offered, rejected, accepted, and transplanted, comparing characteristics of discarded and transplanted lungs.
Of 1166 lung-donors, 664 (60%) were DCD donors, of which 154 (23%) were ODE lung-donors. The total number of donor lungs from ODE lung-donors acceptable to offer for lung transplantation was 117, of which 104 (89%) were transplanted. Evaluation prior to donation was highly variable, with medical history and chest CT most affecting acceptance decisions. Short-term outcomes were excellent, with 1-year survival of 84%.
Our findings indicate that ODE lung donors are increasingly important in the Netherlands, with high acceptance rates, despite highly variable evaluation methods. Standardizing the assessment of potential ODE lung donors could further improve acceptance rates and enhance the contribution of ODE to the lung-donor pool."

We had an illustration Tuesday night of one of the most crucial questions in our current politics, the one that will determine whether civic democracy can have a rebirth in the U.S. Gerrymandering is a bane to civic democracy because it dilutes the expression of the popular will, drawing district lines for partisan advantage or to diminish the power of disempowered minorities. Democrats spent much of the 2010s and 2020s fighting a legal and legislative battle against gerrymandering. But the Roberts Court has chosen to legalize every manner of gerrymandering, making the current moment a destructive race to the bottom.
Democrats had a choice. They could express effete outrage and a meaningless devotion to broken norms and principles and agree to wage elections on a permanently tilted plane. Or they could decide to play by the rules Republicans had forced on everyone. They did just that, and it was unquestionably the right decision by every measure. It really never seemed to occur to Trump Republicans that Democrats would fight using the playbook Republicans created. There’s a special comedy to this because anyone familiar with the facts on the ground knew that Republicans had already used gerrymandering much more aggressively than Democrats. So there was much more juice in the gerrymandering lemon for Democrats if and when they decided to employ tactics Republicans have been using for more than a decade. It’s worth Democrats considering how deeply Republicans had internalized the belief that Democrats would simply never respond in kind.
If you’re worried about where this goes long term, there’s a simple solution: ask any Democrat who supports fighting the redistricting wars to vote for a national redistricting law. This isn’t some notional outcome. It should be, and I believe still is, at the top of any Democratic reform agenda: non-partisan electoral districts. What usually goes unreported is that even the Virginia gerrymandering referendum, which has caused MAGA tears to flood X, mandates that the state go back to non-partisan rules after the 2030 census. (Voters passed the measure on Tuesday, but the state Supreme Court still has to hear several legal challenges against it.) If Republicans now suddenly see the shortcomings of the rules they and their corrupt judicial allies created, then they can join with the new Democratic majority next year and pass a national anti-gerrymandering law. It’s really that simple: you need to acquire power by every legal means in order to enact change.
This is a critical moment because it previews equally critical decision points in the future. There is no possibility of a civic democratic revival in the United States without abolishing the filibuster and reforming the corrupt Supreme Court. These both now exist as either substantial (in the first instance) or total (in the case of Supreme Court corruption) impediments to democratic self-government in the United States. I’ve been giving a lot of thought of late to just what kind of new social contract can be devised to succeed the ones created in the New Deal and early Cold War eras. I have only a very limited insight into what that might be. What I understand much more clearly are the structural changes required to create any rebirth of a civic democratic future in the U.S. We’ve spoken about them plenty before: abolish the filibuster and reform the corrupt Supreme Court. These are the sine qua non reforms without which small-d democratic self-government in the United States is really no longer possible.
The question now is whether Democrats can bring the same fight and clarity to these questions as they did, to the surprise of many, to redistricting. I’ve noted a number of times that it’s critical what kind of majorities Democrats elect, if they elect them. The most important spectrum is not the conventional left-right one but clarity about the need for structural reform and indifference to the kinds of false propriety that stand in the way. The irony is that it may be harder to get Democrats united behind these reforms even though the positive case is much more straightforward. Aggressive partisan gerrymandering is a bad thing which Republicans have forced on Democrats as the only path to maintaining and expanding political power. The filibuster is an affirmatively bad thing which primarily places limits on Democrats. Supreme Court corruption has remade the Court not simply into an effective veto on the kind of expansive government Democrats once championed but, far more straightforwardly, into a veto on any Democratic government at all. The current Supreme Court is nothing more than another legislative filibuster, housed in the judicial branch and crusted over with Harvard and Yale JDs to make it look pretty to the gullible.
What happened Tuesday night, and what has happened more generally in the redistricting wars, comes down to a simple question of aspiration: can the majority of Democrats learn how to acquire and use political power effectively, and be willing to do so? The redistricting battles suggest the answer is yes, more than a lot of people thought. Killing the filibuster and reforming the corrupt Supreme Court are the next critical, sine qua non tests.
We have two noteworthy pieces for you this morning in TPM Cafe, both in different ways speaking to the state of the GOP.
I’ve grown reluctant to use the K word—because it’s so easily misunderstood.
As soon as people hear me say karma, they think I’m spouting off mystical gibberish. I might as well lay out tarot cards or peer into a crystal ball. I’m just a step away from joining a UFO cult.
It’s true—I have a high tolerance for metaphysics. But when I refer to karma, I’m talking about hardheaded science. I’m drawing on empirical studies and historical evidence.
Yes, I believe karma can be demonstrated via psychology and social science, but also mathematically. It’s almost like Newton’s third law—actions in one direction create responses in the opposite direction—only applied to human behavior. Once you understand how it works, you see it everywhere.
For example, I find validation in the game theory competitions Robert Axelrod conducted in the 1980s. He invited experts to participate in contests based on the Prisoner’s Dilemma—but lasting 200 rounds. In each round, players were forced to choose between cooperation and betrayal.
But here’s the twist. In the Prisoner’s Dilemma, cooperation only wins if your adversary also cooperates. You lose if your opponent is willing to betray you. And everybody loses if both parties betray each other.
So what’s the winning strategy?
Participants tested various solutions in these competitions, some of them quite elaborate—drawing on advanced statistical analysis. But the simplest strategy was the winner.
This winning strategy was “tit-for-tat”: cooperate on the first round, then do to your opponent whatever they did to you in the previous round. If you encountered cooperation, you cooperated the next time. But if you got betrayed, you mimicked that betrayal in the following round.
It was amazing that a tactic so simple could defeat more sophisticated strategies. It suggested that there was something inherently powerful in reciprocity. Just as the proverb predicts: As you sow, so shall you reap.
This is what we call karma—the tendency of the universe to give you in return what you have given others. But now it was vindicated by game theory.
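Axelrod’s setup is easy to reproduce. Here is a minimal sketch in Python, assuming the standard payoff values (mutual cooperation 3 each, mutual defection 1 each, a successful betrayal 5, being betrayed 0); the opposing strategies shown are illustrative stand-ins, not Axelrod’s actual entrants.

```python
# Standard Prisoner's Dilemma payoffs: (my score, opponent's score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_moves):
    # Cooperate on the first round, then mirror the opponent's last move.
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):
    return "D"

def always_cooperate(opponent_moves):
    return "C"

def play(a, b, rounds=200):
    """Total scores for strategies a and b over `rounds` rounds."""
    seen_by_a, seen_by_b = [], []  # each side's record of the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(seen_by_a), b(seen_by_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, always_defect))     # (199, 204): betrayed once, then mutual defection
print(play(tit_for_tat, always_cooperate))  # (600, 600): mutual cooperation throughout
```

Notice that tit-for-tat never outscores its opponent in any single head-to-head match; it won Axelrod’s tournaments on aggregate score across the whole field, because it elicits cooperation from every strategy willing to reciprocate.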
I studied the results of Axelrod’s tournaments while I was a student at Stanford’s Graduate School of Business. But at that very same moment, a brilliant thinker was reaching a similar conclusion a few hundred yards away inside Stanford’s Department of French and Italian.
His name was René Girard.
Links for you. Science:
Why this NASA climate scientist wants you to stay angry
Clinical Trials Were Not Always This Complicated
After harsh winter, Ukrainians find joy in releasing bats rescued from war
US Scientists Sequence 1,000 Genomes From Measles, a Disease Long Eliminated With Vaccines
Antibiotic use and gut microbiome composition links from individual-level prescription data of 14,979 individuals
A complete set of canonical nucleobases in the carbonaceous asteroid (162173) Ryugu
CDC delays publishing report showing covid vaccine benefits
Other:
How the Internet Fringe Infiltrated Republican Politics
Not Enough Teen Pregnancy (“…I remember when journalists would happily maintain the fiction that all young Republican staffers and politicians were virgins until marriage…”)
Harvard scientist’s visa was unlawfully canceled, judge finds
Donald Trump Impeachment Backed by Most Americans: Poll
In D.C.’s mayoral race, everyone wants more housing
Sam Altman Is Giving OpenAI a Makeover to Woo Democrats
He was willing to testify against the cartel — but ICE got to him first
What if we’d followed a 1912 plan to build streetcar tunnels around the White House?
Louisiana GOP races to eliminate an elected office won by an exonerated man (Republicans are so petty)
TRUMP RETREATS INTO THE RIGHT-WING BUBBLE
Hegseth’s Pentagon Purge: Under the cover of the Iran war, Pete Hegseth moved to oust Army chief Randy George, a staunch ally of his archnemesis and untouchable Pentagon rival, Dan Driscoll. Was it a well-calculated plot, a sign of his juice, or maybe a signal that J.D. Vance has lost some of his foreign policy sway?
Well-timed bets on Polymarket tied to the Iran war draw calls for investigations from lawmakers
War makes some Tucson Raytheon employees, retirees question work
Donald Trump’s Unfreedom of the Seas: The president is giving up on centuries of wealth and power.
Trump administration plans to attack Biden DOJ as ‘anti-Christian’ in new report
Trump’s SAVE America Act would end voter registration drives nationwide
Chicago Turns All Public School IDs Into Library Cards To Boost Student Access
Surprise inspection finds ICE stuffing migrants ‘like sardines’ into a facility with no bed, showers
Repealing Section 230 as antitrust
Inside Alligator Alcatraz, Wasserman Schultz finds men crammed in cages, smell of urine, inadequate food
Shin Bet Chief Does Not View Jewish Attacks on Palestinians as Terror
Why is Melania Trump suddenly making a big deal about the “fact” that she never had anything at all to do with Jeffrey Epstein?
Netanyahu-ism has achieved nothing for Israelis – and come at a monstrously high price
Why I’m betting on ATProto (and why you should, too)
Kentucky Republicans Are Trying to Impeach a Judge For Acknowledging That Racism Exists
Donald Trump’s Plan To Steal Or Destroy Everything
Getting New York City to Believe in Government
Former staffer says Rep. Eric Swalwell, candidate for California governor, sexually assaulted her
Israel’s War in Lebanon Has Not Stopped
Trump Tirade at MAGA War Critics Accidentally Makes Surprise Admission
While northern professions in 1600 did not require lengthy training in mathematics or science, there was popular interest in these topics. England’s first chair in mathematics was endowed by Thomas Gresham, who had founded London’s Royal Exchange and pledged the rents from that institution to fund seven professorships; their holders would not train students but would instead give two public lectures (in Latin and English) each week. As Gresham also endowed chairs in astronomy and “physik,” this produced a cluster of scientifically minded individuals who would later play an outsized role in the founding of the Royal Society. Robert Hooke was the Gresham Professor of Geometry, William Petty the Gresham Professor of Music, and Christopher Wren the Gresham Professor of Astronomy.
Perhaps because of Gresham’s public lectures, interest in mathematics grew. More professorships followed, including the mid-17th-century Lucasian Chair in Mathematics (named for Henry Lucas, member of Parliament for Cambridge University), of which Isaac Newton would be the second occupant (Clark, 1904). The popular interest in science also meant that teachers at urban universities could fill public lecture halls by teaching about chemistry, and even by performing public chemistry experiments.
That is from a new NBER working paper by David M. Cutler and Edward L. Glaeser, “How Have Universities Survived for Nearly a Millennium?” Has any single individual funded three equally prestigious chairs or anything close to that?
The post Thomas Gresham is underrated appeared first on Marginal REVOLUTION.
The iPad should be radically (though obviously) touch-only. No keyboards. No pointers. No mice. No trackpads. Just your disgusting fingers flopping over the screen and mooshing into icons. It should not have any window’d modes. Each app should fill the whole screen and only the whole screen.
iPad apps should be weird as hell, unlike anything you find on a desktop operating system. PushPopPress began to illuminate this path fifteen years ago, and then they got slurped up — like so many other promising, young, talented designers and companies around that time — by Facebook, only to disappear into the wake of Mark Zuckerberg’s electric hydrofoil surfboard. Using an iPad should feel like a finger ballet. Your hands should be swooping and swiping and the whole OS should feel like skipping across a taut slackline, a bit bouncy and pleasing and physical but also precise and quick and focused taking you where you need to go, across some creative gulf. There should be no “hard edges” anywhere. iPadOS shouldn’t be anything like Windows or macOS or Linux, it shouldn’t be iOS made big, it should be only like iPadOS — a singular thing of finger-poking joy. When you pick up one of those magic slabs (and truly, the amount of engineering and power in those thin-as-heck slabs is something else) you should feel giddy, like you’re about to enter a whole ’nother computer-ing universe, one that is all about elegant multitouch tactility, worlds apart from your phone or your laptop.
1. The rise of Chinese micro-dramas.
2. Niklas Luhmann.
3. Why Rome never industrialized (YouTube video).
4. One account of the genocidal impulse.
5. Organs on demand? We will see.
6. U.S. at the Venice Biennial (NYT).
7. “Argentina’s economy shrank 2.6 per cent in February compared to January, the largest monthly contraction since President Javier Milei took office in late 2023, as his inflation-busting economic programme weighed on major industries.” FT link here.
10. A fragment of Homer’s Iliad inside an Egyptian mummy?
The post Thursday assorted links appeared first on Marginal REVOLUTION.
404 Media reports (alternate site):
The FBI was able to forensically extract copies of incoming Signal messages from a defendant’s iPhone, even after the app was deleted, because copies of the content were saved in the device’s push notification database….
The news shows how forensic extraction—when someone has physical access to a device and is able to run specialized software on it—can yield sensitive data derived from secure messaging apps in unexpected places. Signal already has a setting that blocks message content from displaying in push notifications; the case highlights why such a feature might be important for some users to turn on.
“We learned that specifically on iPhones, if one’s settings in the Signal app allow for message notifications and previews to show up on the lock screen, [then] the iPhone will internally store those notifications/message previews in the internal memory of the device,” a supporter of the defendants who was taking notes during the trial told 404 Media.
EDITED TO ADD (4/24): Apple has patched this vulnerability.
Well, TEH FREEDOMZ didn’t last too long (boldface mine):
A report showing the efficacy of the covid-19 vaccine that was previously delayed by the head of the Centers for Disease Control and Prevention has been blocked from being published in the agency’s flagship scientific journal, according to three people familiar with the decision who spoke on the condition of anonymity for fear of retaliation. The report showed that the vaccine reduced emergency department visits and hospitalizations among healthy adults by about half this past winter.
The move, which has not been previously reported, has raised concerns among current and former officials that information about the vaccine’s benefits is being downplayed because they conflict with the views of Health Secretary Robert F. Kennedy Jr., who has been an outspoken critic of the shots. Kennedy’s vaccine agenda has received pointed questioning from lawmakers during budget hearings that began last week and conclude Wednesday.
The Washington Post reported two weeks ago that Jay Bhattacharya, who is temporarily overseeing the CDC, delayed publication of the report over concerns about methodology. The report had been scheduled for publication March 19 in the Morbidity and Mortality Weekly Report.
In recent days, a decision was made that the report would not be published, according to two of the people who spoke to The Post….
On Tuesday, Nixon described the decision differently: “The MMWR’s editorial assessment identified concerns regarding the methodological approach to estimating vaccine effectiveness and the manuscript was not accepted for publication,” a characterization that differs from accounts by people familiar with the report’s review…
Bhattacharya had concerns about a methodology that has long been used by the CDC to evaluate vaccine effectiveness for respiratory viruses, including influenza. A report about flu vaccine effectiveness this past winter — using the same methodology — was published in the MMWR a week earlier. An HHS official had previously said Bhattacharya was not in a position to review the earlier study and would have raised the same concerns.
A report using this methodology to gauge covid vaccine effectiveness in children was published in MMWR in December.
The methodology was also used in a 2021 study on covid vaccine effectiveness in clinics and hospitals published in the New England Journal of Medicine. Vaccine effectiveness estimates using the same methodology have also been published in other peer-reviewed journals, including JAMA Network Open, the Lancet and Pediatrics.
Freedom for me, but not for thee. And it is the height of arrogance for Bhattacharya to think that he, along with a plucky few iconoclasts, has discovered a fatal flaw in the test-negative study design*. And at the upcoming NAS symposium where a bunch of COVID (and public health) contrarians will be speaking, I hope someone asks them about this.
Of course, this is part of a larger agenda to reduce vaccination by calling into question the efficacy of the vaccines, with the idea being that vaccines are supposedly harmful and only protect high-risk populations. It’s just pseudoscientific bullshit all the way down.
*In a test-negative design study, the efficacy of the vaccine is evaluated by examining a pool of people with symptoms and then determining whether they actually have the disease (e.g., they might just have a bad respiratory infection that is not due to COVID). The vaccination rates of those with COVID and those without are then compared. What vaccine denialists typically argue is that, if healthy people were recruited and followed, as was done initially for the COVID vaccines, there would be little effect, as healthy übermenschen don’t need no stinkin’ vaccine, while the genetic underclass does (even though with the COVID vaccines, healthy people clearly did benefit). What this sort of requirement would do is make most vaccines that need to be updated annually nearly impossible to test in time. Because they are fucking evil people.
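To make the footnote concrete, here is a back-of-the-envelope sketch of the arithmetic behind a test-negative design, using the standard odds-ratio estimator (VE = 1 − OR). The counts are entirely made up for illustration.

```python
def vaccine_effectiveness(vax_pos, unvax_pos, vax_neg, unvax_neg):
    """Estimate vaccine effectiveness from a 2x2 table of symptomatic,
    tested people: vaccination status (rows) by test result (columns).
    VE = 1 - odds ratio of vaccination among positives vs. negatives."""
    odds_ratio = (vax_pos / unvax_pos) / (vax_neg / unvax_neg)
    return 1 - odds_ratio

# Hypothetical counts: among people tested because they had symptoms,
# 100 vaccinated and 200 unvaccinated tested positive;
# 300 vaccinated and 200 unvaccinated tested negative.
ve = vaccine_effectiveness(100, 200, 300, 200)
print(f"Estimated VE: {ve:.0%}")  # Estimated VE: 67%
```

Because everyone in the comparison was sick enough to seek a test, the design controls for health-care-seeking behavior, which is exactly why it has long been the CDC’s workhorse for flu vaccine estimates.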
Under the directives of the President of the UAE, we launch a new government model.
Within two years, 50% of government sectors, services, and operations will run on Agentic AI, making the UAE the first government globally to operate at this scale through autonomous systems.
AI is no longer a tool. It analyses, decides, executes, and improves in real time. It will become our executive partner to enhance services, accelerate decisions, and raise efficiency.
This transformation has a clear timeline. Two years. Performance across government will be measured by speed of adoption, quality of implementation, and mastery of AI in redesigning government work.
We are investing in our people. Every federal employee will be trained to master AI, building one of the world’s strongest capabilities in AI-driven government.
Implementation will be overseen by Sheikh Mansour bin Zayed, with a dedicated taskforce chaired by Mohammad Al Gergawi driving execution.
The world is changing. Technology is accelerating. Our principle remains constant. People come first. Our goal is a government that is faster, more responsive, and more impactful.
Here is the link. While there is typically a certain amount of PR in such pronouncements, I do not think this one is only PR.
The post From the UAE appeared first on Marginal REVOLUTION.
This is the sixth chapter of my SQLAlchemy 2 in Practice book. If you'd like to support my work, I encourage you to buy this book, either directly from my store or on Amazon. Thank you!
The goal of this chapter is to use the concepts you have learned to build a web traffic analytics solution. This will serve as reinforcement of the techniques demonstrated in previous chapters as well as an example of a more complex and realistic database design.
In the run up to the May publication date, I've been interviewed on a variety of podcasts about my book Moral Economics: From Prostitution to Organ Sales, What Controversial Transactions Reveal About How Markets Work. Here's one from the podcast Passion Struck: Nobel Laureate Alvin Roth: How Incentives Shape Your Life | EP 757
Earlier:

In the Atacama Desert, scientists race to find novel cures for antibiotic-resistant infections, as mining interests encroach
- by Aeon Video

Aldo Leopold saw this in the eyes of a dying wolf: when we no longer fear nature, we are on the road to its destruction
- by Shawn Simpson
Virginia voters yesterday agreed to a constitutional amendment that would temporarily redistrict the state if any other state redistricted for partisan reasons: that is, in retaliation for the partisan redistricting President Donald J. Trump launched in Texas in 2025 in an effort to retain control of the House of Representatives.
As Matt Cohen of Democracy Docket noted, Trump supporters immediately insisted the voting was rigged, probably through mail-in ballots. Trump himself took to social media to attack the election, repeating charges of rigging and then adding: “In addition to everything else, the language on the Referendum was purposefully unintelligible and deceptive. As everyone knows, I am an extraordinarily brilliant person, and even I had no idea what the hell they were talking about in the Referendum, and neither do they! Let’s see if the Courts will fix this travesty of ‘Justice.’”
In fact, Trump himself began this mid-decade partisan gerrymander race with his pressure on Texas to rejigger its maps to give Republicans more House seats. That prompted California to retaliate with its own temporary redistricting to offset the new Texas Republican-leaning seats. Other states followed suit. Republicans redistricted Missouri, North Carolina, and Ohio, in addition to Texas, and expect those mid-decade redistricts will net them nine more seats. Democrats think their redistricting of California, along with a court-ordered redistricting of Utah, will get them an additional six seats. They are hoping that the temporary redistricting of Virginia will give them four more seats.
State lawmakers in Florida will convene a special session next week to consider redistricting that state, as well, to benefit the Republicans.
Journalist Brian Tyler Cohen noted that the Republicans have full control of the federal government and could pass a law to ban partisan gerrymandering any time they want to, as Democrats have called for, but they refuse. “Republicans aren’t mad gerrymandering exists,” Cohen notes; “they’re mad that they’re not the only ones using it.”
The Republican National Committee, now controlled by Trump, immediately sued over the Virginia election, and a Virginia judge ruled that both the constitutional amendment and the referendum voters approved were invalid. He said that “any and all votes for or against the proposed constitutional amendment in the April 21, 2026 special election are ineffective,” and prevented officials from certifying the results.
But, as Yunior Rivas of Democracy Docket wrote, Virginia attorney general Jay Jones is challenging the decision, saying: “Virginia voters have spoken, and an activist judge should not have the power over the People’s vote. We look forward to defending the outcome of last night’s election in court.”
Complaints about the Democratic push for a partisan gerrymander in Virginia have exposed a tendency to excuse Republican machinations to control politics while jumping on Democrats for similar behavior.
In August 2025, when Texas Republicans began this fight by redistricting their state after a brutal contest that drove Democratic legislators to leave the state and take refuge in Illinois and Massachusetts to deny Republicans enough legislators to pass a redistricting law, the Washington Post Editorial Board wrote: “What’s happening in the Lone Star State is not a threat to democracy.” “Even if Texas’s move triggers an arms race, the trend will not put American democracy on life support,” it said, dismissing the concerns of those fighting the Republicans’ attempt to game the 2026 elections.
But with last night’s Democratic partisan gerrymander—one that, unlike the Texas gerrymander, went before the people for a vote—the Editorial Board changed its tune. It called this redistricting plan “a power grab by Democrats.” “They’re right that the [Republicans] started this fight by trying to pick up five House seats in Texas through gerrymandering, but they can spare us the false sanctimony about democratic norms going forward,” board members wrote.
Their argument appears to be that the Democrats stand a good chance of winning the midterms even if the Republicans have gamed the system, so the Democrats should not push back. “The news will embolden Republicans in Florida to forge ahead with their own gerrymandering…, continuing the race to the bottom,” they write, seeming to excuse the behavior of Republicans by blaming Democrats for it.
This pattern—expecting Republicans to behave wildly and cheat to grab power while expecting Democrats to behave according to the rules of normal times—has been going on now for years, and it is a dynamic that reflects the political patterns of the years before the Civil War. Then, Americans expected southern Democrats to bully and bluster and rig the system while northerners tried to jolly them into honoring the laws.
In the 1850s, southerners championed their region as the one that had correctly developed the society envisioned by the Founders. In the South a few very wealthy men controlled government and society, enslaving their neighbors. This system, its apologists asserted, was the highest form of human civilization. They opposed any attempt to restrict its spread. The South was superior to the North, enslavers insisted; it alone was patriotic, honored the Constitution, and understood economic growth. In the interests of union, northerners repeatedly ceded ground to enslavers and left their claim to superiority unchallenged.
Then, on May 22, 1856, Representative Preston Brooks of South Carolina beat Senator Charles Sumner of Massachusetts nearly to death on the Senate floor shortly after a speech in which Sumner had called out those who were forcing enslavement on Kansas and insulted a relative of Brooks. Southern lawmakers and newspapermen alike cheered the violence against an elected representative in the Capitol. Lawmakers refused to expel Brooks, and one newspaper editor wrote: “We trust other gentlemen will follow the example of Mr. Brooks…. If need be, let us have a caning or cowhiding every day.”
But the attack on Sumner was a bridge too far for his colleague, Massachusetts representative Anson Burlingame. On June 21, he stood up in Congress to call out as inferior Brooks and the system of enslavement he defended. Burlingame was sick and tired of buying peace by letting southerners abuse the North. Enough, he said, was enough.
Enslavement was not a superior system, he said; it had dragged the nation backward. Slavery kept workers ignorant and godless while the northern system of freedom lifted workers up with schools and churches. Slavery feared innovation; freedom encouraged workers to try new ideas. Slavery kept the South mired in the past; freedom welcomed the modern world and pushed Americans into a new, thriving economy. And finally, when Sumner had spoken up against the tyranny of slavery, a southerner had clubbed him almost to death on the floor of the Senate.
Were ignorance, economic stagnation, and violence the true American system? For his part, Burlingame preferred to throw his lot with the North, which he said was superior to the South in its morality, education, economy, loyalty to the government, and fidelity to the Constitution. Northerners were willing to defend their system, he said, with guns if necessary.
Burlingame’s “Defense of Massachusetts” speech marked the first time a prominent northerner had offered to fight to defend the northern way of life. Previously, southerners had been the ones threatening war and demanding concessions from the North to preserve the peace. Burlingame explained that he was willing to accept a battle because what was at stake was the future of the nation.
Forgotten now, Burlingame’s speech was once widely considered one of the most important speeches in American history. It marked the moment when northerners shocked southern leaders by calling them out for trying to destroy democracy. Northerners rallied to Burlingame’s call, and to the new Republican Party he was helping to build, because he had shown it would stand up for their rights.
Representative Alexandria Ocasio-Cortez (D-NY) echoed Burlingame today when a reporter asked what she thought of complaints about the Virginia vote. “Oh, wah, wah, wah,” she laughed. “Listen. Democrats have attempted and asked Republicans for 10 years to ban partisan gerrymandering. And for 10 years, Republicans have said no. Republicans have fought for partisan gerrymanders across the United States of America, and these are the rules that they have set….
“What they’re just mad at is that they have been accustomed to a Democratic Party that rolls over, doesn’t fight, and takes everything sitting down. And what they’re mad at right now is that we are here in a new day. And we have been asking the Democratic Party to stand up and fight, and now they did, and now the Republican Party doesn’t like the fact that they are fighting against someone who actually will stand up for the American people.
“So if Republicans decide that they would like to revisit a ban on…partisan gerrymandering, I welcome them. We have the bill right here to end this all today. But they don’t want to because they like pursuing and continuing to enact an unfair electoral landscape. And so we have an obligation to defend ourselves.”
—
Notes:
https://thehill.com/homenews/state-watch/5834173-florida-redistricting-session-delay/
https://thehill.com/homenews/campaign/5842969-desantis-florida-republican-redistricting-risk/
https://www.democracydocket.com/wp-content/uploads/2026/02/2026-04-22-Final-judgment.pdf
https://www.cnn.com/2025/08/03/politics/texas-democrats-redistricting
https://www.battlefields.org/learn/articles/caning-charles-sumner
https://archive.org/details/defenceofmassach00burl/page/n7/mode/2up
Bluesky:
briantylercohen.bsky.social/post/3mk2hwddyfc22
We construct a posttax, posttransfer income measure from 1963 to 2023 based on the Current Population Survey Annual Social and Economic Supplement that allows us to consistently compare the economic well-being of five generations of Americans at ages 36–40. We find that Millennials had a real median household income that was 20% higher than that of the previous generation, a slowdown from the growth rate of the Silent Generation (36%) and Baby Boomers (26%), but similar to that of Generation X (16%). The slowdown for younger generations largely resulted from stalled growth in work hours among women. Progress for Millennials younger than 30 has also remained robust, though largely due to greater reliance on their parents. Additionally, lifetime income gains for younger generations far outweigh their higher educational costs.
That is from Kevin Corinth and Jeff Larrimore in Demography. Via the excellent Kevin Lewis.
The post Is each American generation doing better? appeared first on Marginal REVOLUTION.
As AI sweeps into white-collar workplaces, old-timey hands-on jobs are getting a new look—and some of those professions even have shortages.
Consider tailors. Sewing is a vanishing skill, much like lacemaking and watchmaking, putting tailors in short supply when big retailers like Nordstrom and Men’s Wearhouse, as well as fashion designers and local dry cleaners, say they need more of them.
The job, which can take years to master, can be a tough sell to younger generations more accustomed to instant gratification. But apprenticeships that offer pay to learn on the job and new training programs are helping entice more people…
For the first semester of its program, which concluded in December, FIT received more than 190 applications for 15 spots. The nine-week course requires prior sewing experience. Nordstrom hired seven students from the inaugural class.
“It’s increasingly becoming more challenging to find people to fill these alterations jobs,” said Marco Esquivel, the director of alterations and aftercare services at Nordstrom, which employs about 1,500 tailors. Similar to other high-end retailers, Nordstrom offers free basic tailoring for garments purchased at the department-store chain and charges a fee for those bought elsewhere.
Tailored Brands, which employs about 1,300 tailors at its Men’s Wearhouse, Jos. A. Bank and other chains, is updating its apprenticeship program to include more self-guided videos with the goal of moving people through the training faster.
Here is more from Suzanne Kapner at the WSJ. Via LJ Fenkell.
The post Those old factory sector jobs appeared first on Marginal REVOLUTION.
Chiang Mai, Thailand’s second-largest city, lies within a network of narrow valleys in the country’s northern highlands. Though the historic city is known for panoramic views of the surrounding mountains, clear skies have become less common. In recent decades, smoke has increasingly darkened the skies during the dry season, particularly in March and April.
A NASA satellite captured this smoky view of the city and the surrounding region on April 22, 2026, when haze partially obscured valleys and ridges typically visible under clearer conditions. Most of the smoke likely comes from small agricultural and forest fires lit to burn off crop debris or maintain forest ecosystems. In 2026, satellite sensors detected small numbers of fires throughout January, but fire detections became more numerous and widespread in February, March, and April. Fire activity typically peaks in March and fades by May as seasonal rains increase.
Research indicates that smoke from biomass burning is one of the largest contributors to poor air quality in northern Thailand during the dry season. By one estimate, about 70 percent of fine particulate matter (PM2.5) in Chiang Mai in April comes from biomass burning. Smaller contributors to the region’s hazy skies include vehicles, power plants and industry, and charcoal burning for cooking and heating. Geography also plays a key role; the surrounding mountains block air flow and encourage temperature inversions that trap both local pollution and haze from the broader region in the valleys.
On the same day the satellite image was captured, air quality sensors on the ground recorded “unhealthy” and “very unhealthy” levels of PM2.5 air pollution throughout Chiang Mai and the region, according to data from the World Air Quality Index project. Prolonged exposure to high levels of air pollution can contribute to respiratory and cardiovascular diseases and other health problems.
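The "unhealthy" and "very unhealthy" labels come from an air quality index (AQI) mapping of PM2.5 concentrations. As a rough illustration only, here is a minimal sketch using the U.S. EPA's pre-2024 24-hour PM2.5 breakpoints (an assumption: EPA revised these breakpoints in 2024, and index providers vary, so the numbers below are illustrative rather than a statement of the World Air Quality Index project's exact method):

```python
# Sketch: map a 24-hour average PM2.5 concentration (µg/m³) to a U.S.-style
# AQI value and category label, via linear interpolation within the matching
# breakpoint band. Breakpoints are the pre-2024 EPA values (illustrative).

PM25_BREAKPOINTS = [
    # (C_lo, C_hi, I_lo, I_hi, category)
    (0.0,   12.0,    0,  50, "Good"),
    (12.1,  35.4,   51, 100, "Moderate"),
    (35.5,  55.4,  101, 150, "Unhealthy for Sensitive Groups"),
    (55.5,  150.4, 151, 200, "Unhealthy"),
    (150.5, 250.4, 201, 300, "Very Unhealthy"),
    (250.5, 500.4, 301, 500, "Hazardous"),
]

def pm25_to_aqi(conc: float) -> tuple[int, str]:
    """Return (AQI, category) for a PM2.5 concentration in µg/m³."""
    for c_lo, c_hi, i_lo, i_hi, label in PM25_BREAKPOINTS:
        if c_lo <= conc <= c_hi:
            # Standard AQI interpolation formula for the matching band.
            aqi = (i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo
            return round(aqi), label
    raise ValueError("concentration outside the AQI scale")

print(pm25_to_aqi(60.0))   # falls in the "Unhealthy" band
print(pm25_to_aqi(160.0))  # falls in the "Very Unhealthy" band
```

Under this mapping, the levels reported in Chiang Mai would correspond to 24-hour PM2.5 averages above roughly 55 µg/m³ ("unhealthy") and 150 µg/m³ ("very unhealthy").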
News reports suggest that the haze is affecting the tourism industry and has contributed to a decrease in the number of international travelers coming to Chiang Mai. After more than a month of persistent haze, the number of tourists arriving in the town of Pai, a popular destination for backpackers northwest of Chiang Mai, was down 90 percent, according to one local newspaper.
Unusually warm and dry conditions have gripped the region in recent weeks, according to meteorologists with the ASEAN Specialised Meteorological Centre (ASMC). On March 27, the group advised that there was a “high risk” of severe transboundary haze in the region and elevated its alert level to three, the highest on the scale.
In late March, the group noted that dry conditions were forecast to persist over most parts of the Mekong sub-region, with prevailing winds expected to blow mostly from the south or southwest. “Under these conditions,” ASMC noted, “the hotspot and smoke haze situation could escalate further.”
NASA Earth Observatory image by Lauren Dauphin, using MODIS data from NASA EOSDIS LANCE and GIBS/Worldview. Story by Adam Voiland.
The post Smoke Shrouds Northern Thailand appeared first on NASA Science.

Update Apr. 22, 11:52 p.m. EDT (0352 UTC): SpaceX landed its booster on the drone ship.
SpaceX launched its 40th Starlink mission of the year Wednesday night, with a Falcon 9 rocket taking off from Vandenberg Space Force Base.
The Starlink 17-14 mission will add another 24 broadband internet satellites to the company’s low Earth orbit constellation, which consists of more than 10,200 spacecraft.
Liftoff from Space Launch Complex 4 East happened at 8:23:09 p.m. PDT (11:23:09 p.m. EDT / 0323:09 UTC). The rocket flew on a south-southwesterly trajectory upon leaving the pad.
SpaceX launched the mission using the Falcon 9 first stage booster with the tail number 1100. This was its fifth flight, following the launch of NROL-105 and three earlier batches of Starlink satellites.
A little more than eight minutes after liftoff, B1100 landed on the drone ship, ‘Of Course I Still Love You’. This was the 192nd booster landing on this vessel and the 602nd booster landing to date for SpaceX.