
Yesterday, we talked about the global Authoritarian Movement or Authoritarian International (with the convenient acronym “AI”). Today, I wanted to talk about something slightly more specific. It’s part of the same phenomenon, perhaps a subset of it, but it’s distinct.
Back during Trump’s first term, people in the anti-Trump world became intensely, if superficially, engaged with the inner-workings of Russia under Vladimir Putin, particularly the aggressive use of influence and disruption operations in competitor states, as well as the use of “kompromat” to maintain control over Russian oligarchs and key people — allies and enemies — abroad. One of the features of that world is that it’s really not extortion. It can be an oddly stabilizing system because everyone kind of has something on everyone else. In any case, this became a big part of the Trump opposition world during Trump’s first term. What did Putin have on Trump? What did he want? When did it start?
When a lot of highly motivated people suddenly get interested in the pretty opaque functionings of a society and government, a lot of nuance and key facts are going to be missed. But at a minimum, this was a fairly accurate view of how the elite functioned in post-Soviet, post-democratic Russia. How much it related to Trump, specifically, is hard to say. But later in Trump’s presidency it became clear to me that this was by no means limited to Russia. There was a big chunk of what I’ve described as the Authoritarian International that seemed to organize itself and operate in a pretty similar way.
I first started to understand this in deeper reporting on #metoo, especially Ronan Farrow’s book account of his early reporting which broke out key stories that really moved the meta-story to the front pages. This part of Farrow’s story was inevitably murky. But someone, quite likely Harvey Weinstein or those working on his behalf, had sicced Israeli private-sector intel firms on and were surveilling him and perhaps hacking his devices in ways that go far beyond what even a high-end private investigator can do. Every major power has highly effective digital warfare capacities. Israel has some of the best. And it has a private sector intel industry where, for the right money, you can get access to stuff that is pretty close to what the big states use.
In any case, a recurrent pattern came up. People in the Gulf wanting to pressure, harass or control key people abroad. By whatever level of indirection they get the access to the private sector Israeli intel stuff. And they’re using it in countries like the U.S., in Europe, etc. In a way, Trump’s “catch and kill” arrangement with The National Enquirer was just a somewhat more primitive version of this dynamic.
Another strand of the story comes from the fact that Silicon Valley, U.S. hedge funds and lots of other parts of the economy are highly dependent on money from the Gulf states — principally the Saudi sovereign wealth fund, which appears to be increasingly used at the discretion of Mohammad bin Salman but also those of the other Gulf emirates, particularly the UAE. Those relationships are now deepening. And none of these players can write off Gulf money as at least a major part of their investor portfolio. A lot of this predates Trump. In some ways it produced Trump. But in ways I still don’t fully understand, Trump’s first presidency helped to congeal this, make a lot of these people decide they were on the same team and to see how much more easily “business” could be done with someone like Trump in office. There isn’t the same concern with lobbyists and interagency processes and historic U.S. policy or U.S. domestic stakeholders and certainly not Congress. You get a meeting with the president, and if you convince him what you want is awesome, that’s it. You’re good to go. Throw in some lost-money investments in one of his family firms and you’re set.
So what we see here is something like that system out of Russia being brought worldwide, particularly with the people in the particular power groupings I’ve described above — a lot of wholesale use of non-state or quasi-state intel capacities to collect information on friends and enemies, a lot of use of those Israeli private intel firms. In a way, it’s bringing the Putin model of state and state stakeholder management to the global stage. But in a different way it’s taking the oligarch system and taking it worldwide. It’s the oligarchization of the global elite. Because we’re no longer talking just about post-Soviet oligarchs. (For all the regalia, the Saudis run the ultimate oligarch government, running a whole country on the basis of a single, primitive extractive economy.) This is happening because oligarchs and hyper-billionaires across the globe are becoming more united, growing in influence, power and a common perception of their own proper role in a new global order. Meanwhile, the representatives of the old elite — government stakeholders and leadership of more conventional global businesses — are declining in power and losing coherence as a definable group because of the fraying of the post-war world order.
As I said at the beginning of this piece, this isn’t identical to the global Authoritarian Movement. It also doesn’t include the voters who have elected authoritarian parties to power in the U.S., Brazil, Hungary, Poland, arguably India, Israel and so many other states. But it involves many of the same key and central players — what we might call the Global Authoritarian elite, or significant parts of it. We have a fairly clear sense of how the movement operates domestically. This, I would argue, is how it operates internationally, above all on the basis of opacity, private deals between national leaders which mix national interest with individual financial interest, a complex and subterranean world of secrets and compromising information and a general aim of keeping states under the leadership of national governments who play by these rules. Beyond what we in the United States face at home, this is what we face abroad.
Jeff Johnson:
In today’s macOS 26.3 update, Apple implemented a “fix” for an issue I blogged about a month ago, macOS Tahoe broke Finder columns view. (At the behest of John Gruber and the Apple Style Guide, I’m now using the term “column view” rather than “columns view.”) Specifically, the issue was with the system setting to always show scroll bars. [...]
Without the path bar, the columns are now taller, but the vertical scrollers remain the same height as before, leaving vertical gaps, a ridiculous amount of space between the bottom of the scrollers and the bottom of the columns, looking silly and amateurish.
Did nobody inside Apple test this configuration either? Or do they simply not care?
In one sense, this whole issue with column view in the Finder with scroll bars set to always show is a little thing. It was downright broken in earlier versions of MacOS 26 — you literally could not resize the columns. So now it’s not broken. But as Johnson says, it looks silly and amateurish.
This is the sort of detail that Apple used to strive to get pixel-perfect, all the time, for all settings. “Whatever, good enough” instead of “insanely great”.
There is a new paper by Nick Bostrom with that title:
Developing superintelligence is not like playing Russian roulette; it is more like undergoing risky surgery for a condition that will otherwise prove fatal. We examine optimal timing from a person-affecting stance (and set aside simulation hypotheses and other arcane considerations). Models incorporating safety progress, temporal discounting, quality-of-life differentials, and concave QALY utilities suggest that even high catastrophe probabilities are often worth accepting. Prioritarian weighting further shortens timelines. For many parameter settings, the optimal strategy would involve moving quickly to AGI capability, then pausing briefly before full deployment: swift to harbor, slow to berth. But poorly implemented pauses could do more harm than good.
Via Nabeel.
The post Optimal timing for superintelligence appeared first on Marginal REVOLUTION.
Up and find myself pretty well, and so to the office, and there all the morning. Rose at noon and home to dinner in my green chamber, having a good fire. Thither there came my wife’s brother and brought Mary Ashwell with him, whom we find a very likely person to please us, both for person, discourse, and other qualitys. She dined with us, and after dinner went away again, being agreed to come to us about three weeks or a month hence. My wife and I well pleased with our choice, only I pray God I may be able to maintain it.
Then came an old man from Mr. Povy, to give me some advice about his experience in the stone, which I [am] beholden to him for, and was well pleased with it, his chief remedy being Castle soap in a posset.
Then in the evening to the office, late writing letters and my Journall since Saturday, and so home to supper and to bed.
On February 12, 1809, Nancy Hanks Lincoln gave birth to her second child, a son: Abraham.
Abraham Lincoln grew up to become the nation’s sixteenth president, leading the country from March 1861 until his assassination in April 1865, a little over a month into his second term. He piloted the country through the Civil War, preserving the concept of American democracy. It was a system that had never been fully realized but that he still saw as “the last, best hope of earth” to prove that people could govern themselves.
“Four score and seven years ago,” he told an audience at Gettysburg, Pennsylvania, in November 1863, “our fathers brought forth on this continent a new nation, conceived in liberty and dedicated to the proposition that all men are created equal.”
Lincoln dated the founding of the nation from the Declaration of Independence rather than the Constitution, the document enslavers preferred because of that document’s protection of property. In the Declaration, the Founders wrote that they held certain “truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.—That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed….”
But in Lincoln’s day, fabulously wealthy enslavers had gained control over the government and had begun to argue that the Founders had gotten their worldview terribly wrong. They insisted that their system of human enslavement, which had enabled them to amass fortunes previously unimaginable, was the right one. Most men were dull drudges who must be led by their betters for their own good, southern leaders said. As South Carolina senator and enslaver James Henry Hammond put it, “I repudiate, as ridiculously absurd, that much-lauded but nowhere accredited dogma of Mr. Jefferson, that ‘all men are born equal.’”
In 1858, Abraham Lincoln, then a candidate for the Senate, warned that arguments limiting American equality to white men were the same arguments “that kings have made for enslaving the people in all ages of the world…. Turn in whatever way you will—whether it come from the mouth of a King, an excuse for enslaving the people of his country, or from the mouth of men of one race as a reason for enslaving the men of another race, it is all the same old serpent.” Either people—men, in his day—were equal, or they were not. Lincoln went on, “I should like to know if taking this old Declaration of Independence, which declares that all men are equal upon principle and making exceptions to it…where will it stop?”
Lincoln had thought deeply about the logic of equality. In his 1860 campaign biography, he permitted the biographer to identify six books that had influenced him. One was a book published in 1817 and wildly popular in the Midwest in the 1830s: Capt. Riley’s Narrative. The book was written by James Riley, and the full title of the book was “An Authentic Narrative of the Loss of the American Brig Commerce, Wrecked on the Western Coast of Africa, in the Month of August, 1815, With the Sufferings of Her Surviving Officers and Crew, Who Were Enslaved by the Wandering Arabs on the Great African Desart [sic], or Zahahrah.” The story was exactly what the title indicated: the tale of white men enslaved in Africa.
In the 1850s, on a fragment of paper, Lincoln figured out the logic of a world that permitted the law to sort people into different places in a hierarchy, applying the reasoning he heard around him. “If A. can prove, however conclusively, that he may, of right, enslave B.—why may not B. snatch the same argument, and prove equally, that he may enslave A?” Lincoln wrote. “You say A. is white, and B. is black. It is color, then; the lighter, having the right to enslave the darker? Take care. By this rule, you are to be slave to the first man you meet, with a fairer skin than your own. You do not mean color exactly?—You mean the whites are intellectually the superiors of the blacks, and, therefore have the right to enslave them? Take care again. By this rule, you are to be slave to the first man you meet, with an intellect superior to your own. But, say you, it is a question of interest; and, if you can make it your interest, you have the right to enslave another. Very well. And if he can make it his interest, he has the right to enslave you.”
Lincoln saw clearly that if we give up the principle of equality before the law, we have given up the whole game. We have admitted the principle that people are unequal and that some people are better than others. Once we have replaced the principle of equality with the idea that humans are unequal, we have granted approval to the idea of rulers and ruled. At that point, all any of us can do is to hope that no one in power decides that we belong in one of the lesser groups.
In 1863, Lincoln reminded his audience at Gettysburg that the Founders had created a nation “dedicated to the proposition that all men are created equal,” but it was no longer clear whether “any nation so conceived and so dedicated, can long endure.” During the Civil War, the people of the United States were defending that principle against those who were trying to create a new nation based, as the Confederacy’s vice president Alexander Stephens said, “upon the great truth” that men were not, in fact, created equal, that the “great physical, philosophical, and moral truth” was that there was a “superior race.”
In the midst of the Civil War, Lincoln called for Americans to understand what was at stake, and to “highly resolve…that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth.”
It should be SO EASY to share + collaborate on Markdown text files. The AI world runs on .md files. Yet frictionless Google Docs-style collab is so hard… UNTIL NOW, and how about that for a tease.
If you don’t know Markdown, it’s a way to format a simple text file with marks like **bold** and # Headers and - lists… e.g. here’s the Markdown for this blog post.
Pretty much all AI prompts are written in Markdown; engineers coding with AI agents have folders full of .md files and that’s what they primarily work on now. A lot of blog posts too: if you want to collaborate on a blog post ahead of publishing, it’s gonna be Markdown. Keep notes in software like Obsidian? Folders of Markdown.
John Gruber invented the Markdown format in 2004. Here’s the Markdown spec, it hasn’t changed since. Which is its strength. Read Anil Dash’s essay How Markdown Took Over the World (2026) for more.
So it’s a wildly popular format with lots of interop that humans can read+write and machines too.
AND YET… where is Google Docs for Markdown?
I want to be able to share a Markdown doc as easily as sharing a link, and have real-time multiplayer editing, suggested edits, and comments, without a heavyweight app in the background.
Like, the “source of truth” is my blog CMS or the code repo where the prompts are, or whatever, so I don’t need a whole online document library things. But if I want to super quickly run some words by someone else… I can’t.
I needed this tool at the day job, couldn’t find it… built it, done.

Say hi to mist!
I included a couple of opinionated features…
I’m proud of roundtripping suggested edits and comment threads: the point of Markdown is that everything is in the doc, not in a separate database, and you know I love files (2021). I used a format called CriticMark to achieve this – so if you build a tool like this too, let’s interop.
Hit the New Document button on the homepage and it introduces itself.
Also!
For engineers!
Try this from your terminal:
curl https://mist.inanimate.tech -T file.md
Start a new collaborative mist doc from an existing file, and immediately get a shareable link.
EASY PEASY
Anyway –
It’s work in progress. I banged it out over the w/e because I needed it for work, tons of bugs I’m sure so lmk otherwise I’ll fix them while I use it… though do get in touch if you have a strong feature request which would unlock your specific use case because I’m keep for this to be useful.
So I made this with Claude Code obv
Coding with agents is still work: mist is 50 commits.
But this is the first project where I’ve gone end-to-end trying to avoid artisanal, hand-written code.
I started Saturday afternoon: I talked to my watch for 30 minutes while I was walking to pick my kid up from theatre.
Right at the start I said this
So I think job number one before anything else, and this is directed to you Claude, job number one before anything else is to review this entire transcript and sort out its ordering. I’d like you to turn it into a plan. I’ll talk about how in a second.
Then I dropped all 3,289 words of the transcript into an empty repo and let Claude have at it.
Look, although my 30 mins walk-and-talk was nonlinear and all over the place, what I asked Claude to do was highly structured: I asked it to create docs for the technical architecture, design system, goals, and ways of working, and reorganise the rest into a phased plan with specific tasks.
I kept an eye at every step, rewinded its attempt at initial scaffolding and re-prompted that closely when it wasn’t as I wanted, and jumped in to point the way on some refactoring, or nudge it up to a higher abstraction level when an implementation was feeling brittle, etc. I have strong opinions about the technology and the approach.
And the tests – the trick with writing code with agents is use the heck out of code tests. Test everything load bearing (and write tests that test that the test coverage is at a sufficient level). We’re not quite at the point that code is a compiled version of the docs and the test suite… but we’re getting there.
You know it’s very addictive using Claude Code over the weekend. Drop in and write another para as a prompt, hang out with the family, drop in and write a bit more, go do the laundry, tune a design nit that’s thrned up… scratch that old-school Civ itch, "just one more turn." Coding as entertainment.
The main takeaway from my Claude use is that I wanted a collaborative Markdown editor 5 months ago:
app request
- pure markdown editor on the web (like Obsidian, Ulysses, iA Writer)
- with Google Docs collab features (live cursor, comments, track changes)
- collab metadata stored in file
- single doc sharing via URL like a GitHub gistam I… am I going to have to make this?
My need for that tool didn’t go away.
And now I have it.
So tools don’t need huge work and therefore have to be justified by huge audiences now (I’ve spent more time on blog posts). No biggie, it would be useful to us so why not make it and put it out there.
Multiplayer ephemeral Markdown is not what we’re building at Inanimate but it is a tool we need (there are mists on our Slack already) and it is also the very first thing we’ve shipped.
A milestone!
So that’s mist.
Share and Enjoy
xx
More posts tagged: inanimate (2), multiplayer (31).
Auto-detected kinda similar posts:
Links for you. Science:
Bari Weiss’s new CBS hires include ‘germ theory denialist’ doctor
A Secret Panel to Question Climate Science Was Unlawful, Judge Rules
Trump Is Making America Stupider. How MAGA is purging scientists and other skilled workers from both the private and public sectors.
HHS Wasn’t Worried About South Carolina’s Measles Outbreak. It’s Now Enormous.
U.S. government has lost more than 10,000 STEM Ph.D.s since Trump took office
Blood test may identify COVID survivors at risk for ongoing lung disease
Other:
‘Suicide rightism’ and the penguin. Is it based or soy to kill yourself?
Congress Must Allow D.C. to Spend Its Own Local Dollars
Trump announces upcoming IndyCar race through Washington’s streets — including Pennsylvania Avenue
The rise of the slopagandist
ICE’s excuse for wearing masks has never actually manifested
ICE Pretends It’s a Military Force. Its Tactics Would Get Real Soldiers Killed
How ICE Already Knows Who Minneapolis Protesters Are
Protester hit by SUV at Fremont student walkout protesting ICE
ICE agents ‘laugh’ at teen bringing medicine to detained dad who was working at McDonald’s (no balm in Gilead for these sin-sick souls)
Teen defends home after fake ICE agent breaks in to steal PlayStation gaming device: cops
Trump Erupts at GOPers over Noem as Support for Her Slips
DHS Illegally Ended Venezuelan Migrant Status, 9th Cir. Says
Government by AI? Trump Administration Plans to Write Regulations Using Artificial Intelligence
Mamdani Goes From a Winter Storm to a Fiscal One
Space Data Centers
Border Patrol employee found ‘covered in vomit’ in St. Paul, charged with drunk driving
Maine’s “Lobster Lady,” who fished for 97 years, has died at 105
Did D.C. drop the ball on snow-clearing, or were conditions uniquely bad?
Trump’s bogus Board of Peace plots to squat in seized federal building
Queer Eye spotlights D.C.’s LGBTQ history, and those working to preserve it
A Bad Heir Day at the Fed. No, Kevin Warsh isn’t qualified
Musk to Epstein: ‘What Day/Night Will Be the Wildest Party on Your Island?’
Best gas masks: On tear gas, and what it means when the government uses it on civilians.
The Border Patrol’s Legacy of Violence
Jim Pattison won’t sell U.S. warehouse proposed as new ICE facility. B.C. billionaire won’t sell an industrial building in Virginia that was proposed to become an ICE facility
Terror Returns to Springfield
Trump’s Agents Arrested Don Lemon. Then the Story Got Even Darker.
Don Lemon’s Arrest Is a Five-Alarm Fire Moment
What Bari Weiss Doesn’t Get About CBS News
Tesla’s Wile E. Coyote Moment Is Here
1. Using Claude Code for academic work.
2. Younger Firms and CEOs Allow More Work from Home.
3. Extractive taxes were indeed a major force behind the French Revolution.
4. How much will “the human touch” persist?
5. “It was one attempt to do so, by Charles Jones of Stanford University, that entertained the negative top rate of -26%. If high earners produce a lot of ideas that help society, then “subsidising the discovery of new ideas through low tax rates may be as effective as redistribution in raising worker welfare”, he writes.” (The Economist)
6. Moral intuitions about love, romance, and reproduction are not Coasean.
7. Do not exercise options unless you have to!
8. I know Paul, he has very high standards.
9. Claims about Mexico’s security posture.
The post Thursday assorted links appeared first on Marginal REVOLUTION.

At SmallSat Symposium, executives cite enduring missile threat as rationale for continued investment
The post Space companies bet on Golden Dome as questions persist over scope and funding appeared first on SpaceNews.

SEATTLE, Feb. 11, 2026 — Integrate, the developer of the world’s first ultra-secure project management platform for dynamic multi-entity execution, today announced a $17 million Series A raise led by FPV Ventures with participation from Fuse […]
The post Integrate Raises $17M to Commercialize the World’s First Ultra-Secure Project Management Platform for Classified Programs appeared first on SpaceNews.

MOUNTAIN VIEW, Calif. – At the recent World Economic Forum in Switzerland, much of the conversation revolved around the concerns of middle powers, nations with the wherewithal to influence international events that are not among the great powers. At the SmallSat Symposium in Mountain View, representatives of Earth-observation companies said middle powers that previously relied […]
The post Demand for sovereign systems extends to the Earth-observation stack appeared first on SpaceNews.

MILAN — The French-German aerospace company The Exploration Company completed mock splashdown tests for its Nyx space capsule, a modular, reusable spacecraft designed to transport cargo and eventually crew to low Earth orbit and beyond. The company conducted water-impact tests on a mock capsule from Jan. 13 through 28. The testing campaign was not a […]
The post The Exploration Company completes water-impact tests for its Nyx space capsule appeared first on SpaceNews.

Join Leidos and SpaceNews on Thursday, Feb. 19 at 2 p.m. ET to hear how the U.S. Space Force is partnering with industry to accelerate new approaches for collapsing space kill chains through rapid commercial integration and unclassified technology cohorts.
The post Register Now: New Approaches to Collapse Space Kill Chains appeared first on SpaceNews.

Launch companies are divided on how to compete with SpaceX in a market where demand outstrips supply, yet customers remain price sensitive.
The post Launch companies debate how to compete against SpaceX appeared first on SpaceNews.

USSF-87 sends GSSAP payloads and propulsive ESPA ring on fourth Vulcan flight
The post ULA’s Vulcan launches Space Force mission; solid booster anomaly under investigation appeared first on SpaceNews.

MOUNTAIN VIEW, Calif. — The U.S. Federal Communications Commission’s Space Bureau is pursuing an ambitious agenda for regulatory reform. The space plank of the FCC’s Build America Agenda would allocate additional spectrum for space activities, streamline the satellite licensing process and give spacecraft operators more flexibility to modernize operations. “We’re seeking to extend the reach […]
The post FCC Space Bureau chief shares agenda for regulatory reform appeared first on SpaceNews.

Space investors and dealmakers anticipate SpaceX’s planned IPO this year will trigger a surge of capital across the industry, but not without the risk of pulling investor attention away from other companies in the run-up.
The post SpaceX IPO may suck oxygen from market before unleashing broad capital surge appeared first on SpaceNews.

MILAN — United Kingdom-based launch company Orbex announced that its business is folding after multiple attempts to stay solvent fell through. The company announced Feb. 11 that it has filed a notice of intention to appoint administrators — a process in the U.K. that’s similar to declaring bankruptcy — after fundraising, merger and acquisition efforts […]
The post UK launcher Orbex files for administration after failed funding efforts appeared first on SpaceNews.

Astronomy and commercial space are often portrayed as being on a collision course, yet their futures are deeply intertwined. As satellite constellations expand, astronomers raise concerns about trails across images, interference with radio telescopes and the loss of dark skies. At the same time, commercial operators point to the enormous economic, scientific and national security […]
The post It is time to take astronomy off Earth appeared first on SpaceNews.

The company developed a collaborative project management platform designed to operate in classified environments
The post Software startup Integrate makes push into defense market following Space Force award appeared first on SpaceNews.
New York is contemplating a bill that adds surveillance to 3D printers:
New York’s 20262027 executive budget bill (S.9005 / A.10005) includes language that should alarm every maker, educator, and small manufacturer in the state. Buried in Part C is a provision requiring all 3D printers sold or delivered in New York to include “blocking technology.” This is defined as software or firmware that scans every print file through a “firearms blueprint detection algorithm” and refuses to print anything it flags as a potential firearm or firearm component.
I get the policy goals here, but the solution just won’t work. It’s the same problem as DRM: trying to prevent general-purpose computers from doing specific things. Cory Doctorow wrote about it in 2018 and—more generally—spoke about it in 2011.
Put another way, Trump might be able to get away with murdering someone on 5th Avenue in NYC, but he almost certainly could get away with murdering someone on Constitution Avenue in D.C.
Consider the question in the post title as a dig against leaders of sovereign states as context. If our domestic internal security services were to murder someone like they did with Renee Good and Alex Pretti* in D.C., it would be nearly impossible for the local colonial authority to do much about it.
First, Trump can take over the MPD (D.C.’s police force) for thirty days simply by writing a letter to Congress: he does not need Congressional approval, he just has to tell them he is doing this. D.C.’s prosecutors are federal prosecutors that ultimately report to Pam Bondi, not to a state authority**. Finally, if convicted, this is a federal crime, which means Trump would have pardon power, and could–and almost certainly would–pardon the murderers.
Like I said, consider the previous paragraph in the context of local and state politicians who claim they can not do anything to arrest–or even detain temporarily–out of control federal agents. In other words, why would they cede authorities, even limited ones, that D.C. lacks entirely?
Find yourself a Democrat who feels the same way about ICE and CBP as Russell Vought does CDC and NIH.
*One reason these murders are so salient, which has gone mostly unremarked, is they were captured in multiple angles on video unlike other murders.
**D.C. does have an attorney general, but the attorney general typically does not prosecute crimes, and it is unclear what its authority to do so would be in this case.
Links for you. Science:
Gardeners are close observers of nature’s clock and the havoc wreaked by climate change
Spider monkeys pool their knowledge to find the best fruit
RFK Jr. Names 21 New Members to Federal Autism Committee. Many of the members believe autism is linked to vaccination
Tourism takes toll on ancient seagrass
Biofilm removal in hospital sink drains drives unintended surges in antibiotic resistance
Alligators May Boost Carbon Storage in Coastal Wetlands
Other:
This is Literally the Job. Political journalists need to stop pretending they don’t know what Republicans are going to do. (excellent)
How not to talk about ICE’s killing spree
Minnesota Proved MAGA Wrong
Nonpolitical Media Has Been Flooded With Outrage at Alex Pretti’s Murder
Meta Is Blocking Links To ICE List on Facebook, Instagram, and Threads
Alex Pretti, MAGA, And The Public Meaning Of Masculinity
Martha Stewart’s Granddaughter Got Her to Post About ICE
Trump Claims Ilhan Omar Is Worth $44M. Here’s Why That’s Highly Unlikely.
The ICE Resistance That You’re Not Seeing
Accused Omar Attacker’s Crazed Trump-Loving Life Laid Bare
The Battle for Minneapolis
‘We’re not safe right now’: A nurse’s dramatic ICE arrest ignites fear in Maine’s immigrant communities
Brother labels suspect who attacked Ilhan Omar a ‘right-wing extremist’ who has hated Somalis for decades
The People Are Winning the Battle Against ICE
The Trump administration has secretly rewritten nuclear safety rules
Going Wobbly
Feds Knew Who Alex Pretti Was—and Broke His Rib in Earlier Fight
Minneapolis May Be Trump’s Gettysburg
Minneapolis Proved Something MAGA Can’t Accept: Most People Are Actually Virtuous
Data Centers Are Driving a US Gas Boom
Donald Trump Is Frightened
Healthcare Workers Must Continue Alex Pretti’s Fight
Dozens of Jewish graves vandalized at Barcelona cemetery
Minnesota is the Beginning of an American Color Revolution
Libs of TikTok is doxxing teachers and nurses who support Alex Pretti or oppose ICE, trying to get them fired
Data Centers Are Not “Campuses”
App for Quitting Porn Leaked Users’ Masturbation Habits
Alex Pretti ‘had a way of lighting up every room he walked into’
The Means-Testing Industrial Complex
A Video of a Bat-Wielding Tenant Confronting Her Landlord Went Viral. But There’s More to the Story.

NASA is loading liquid hydrogen aboard its Space Launch System moon rocket at the Kennedy Space Center on Thursday for an unpublicized but crucial test of the repairs made to a leaky umbilical that derailed a countdown rehearsal on Feb. 2.
The operation to load liquid hydrogen into the huge fuel tank on the rocket’s core stage was thought to be already underway at launch complex 39B on Thursday morning. The test will determine if new seals installed in the launch pad umbilical are working.
“As part of our work to assess the repair we made in the area where we saw elevated hydrogen gas concentrations during the previous wet dress rehearsal, engineers are testing the new seals by running some liquid hydrogen across the interface and partially filling the core stage liquid hydrogen tank. The data will inform the timeline for our next wet dress rehearsal,” a NASA spokesperson said about the previously unannounced test.
During the Wet Dress Rehearsal or WDR, the launch team managed hydrogen leaks from the umbilical at the base of the rocket that feeds the propellant into the rocket by stopping and starting the process, allowing the umbilical seals to warm and plug the leaks.
Liquid hydrogen is notoriously difficult to handle because its tiny molecules can escape through even the smallest imperfection in the propellant system. It is also extremely explosive when mixed with air.
The launch team was able to fully load the propellant tanks during the Feb. 2 fueling test but called off the countdown because of a large spike in hydrogen leakage when the fuel tank was pressurized during the final minutes of the countdown.
The spokesperson did not immediately provide any additional details, including the amount of hydrogen to be loaded aboard the rocket or if the propellant tank would be pressurized to duplicate the conditions that interrupted the WDR.
Following the Feb. 2 dress rehearsal, technicians disconnected the hydrogen lines which are located on a plate that retracts into a three-story-high structure that rises from the deck of the mobile launcher. There are two tail service masts, one for liquid hydrogen and one for liquid oxygen. Engineers removed and replaced the seals on two hydrogen lines.
If all goes well with the hydrogen testing on Thursday, NASA could schedule a second Wet Dress Rehearsal as soon as next week.
For many families seeking autism services, the biggest obstacle isn’t a lack of compassion or even funding. It’s paperwork. Forms. Deadlines. Assessments that lead to more assessments. Systems that don’t talk to one another, but still expect families to keep everything aligned.
For parents already balancing caregiving, work, and uncertainty, administrative complexity has quietly become one of the most powerful barriers to autism support. To receive a diagnosis, therapy, educational accommodations, or financial assistance, families are often required to manage overlapping bureaucracies, including healthcare providers, school districts, insurance companies, and social service agencies, each with its own rules and definitions of need.
The burden rarely arrives all at once. It builds over time. A parent may finally secure a diagnosis, only to learn it isn’t accepted by a school district. A school evaluation might not satisfy an insurer. Children who qualify for services at one age are often required to re-prove eligibility later, sometimes more than once. Miss a form. Miss a deadline. Start again.
For providers, these same administrative demands unfold behind the scenes. Autism services such as Applied Behavior Analysis, or ABA, are governed by detailed billing codes, documentation requirements, and payer-specific rules. Billing specialists and ABA billing experts frequently note that even minor administrative errors can delay reimbursement or interrupt care. When that happens, families often feel the impact directly, through reduced hours, paused services, or unexpected disruptions. Some providers turn to specialized billing resources, such as Missing Piece ABA Billing, to help navigate this complexity and keep care moving, highlighting just how technical and unforgiving the system has become.
Delays are not neutral. Early and consistent access to autism services is closely linked to better long-term outcomes. When support is postponed, not by denial but by process, families lose time that cannot be recovered.
The stress compounds. Caregiving responsibilities become heavier. Holding onto a job becomes harder. And the longer the services are delayed, the more complex and costly the intervention can become later.
Adults face some of the steepest barriers of all. Many autism systems are built around childhood identification, leaving adults, especially women and people with lower or less visible support needs, without clear pathways to diagnosis or care. Age-based cutoffs, long waitlists, and rigid eligibility rules quietly close doors. Need does not disappear at adulthood, but access often does.
Administrative complexity does not affect everyone equally. Families with higher incomes, flexible jobs, legal literacy, or access to advocates are far more likely to find their way through these systems. Others face steeper odds. Language barriers, inflexible work schedules, limited digital access, or past experiences with institutions can turn paperwork into a wall. Over time, bureaucracy becomes a sorting mechanism, filtering access based on resources rather than need.
Policy makers increasingly acknowledge these non-financial barriers. Streamlined applications, one-stop portals, and digital systems are often promoted as solutions.
Sometimes they help. Sometimes they don’t. Digital tools can simplify access for some families while creating new obstacles for others who need accommodations or personal support. And during major life transitions, especially the shift from childhood to adult services, gaps in care remain common.
If administrative complexity is part of the problem, reform must focus on system design, not just funding levels. In practice, that means rethinking how eligibility, documentation, and continuity of care are handled across agencies.
One starting point is alignment. Families are often required to submit the same information to multiple systems that do not share data or recognize one another’s assessments. Coordinating eligibility standards across healthcare, education, and social services would reduce duplication and limit the need for repeated re-evaluations that add little clinical value.
Continuity also matters. During major life transitions, such as moving from childhood to adult services, support is frequently interrupted while paperwork catches up. Policies that default to continued eligibility during these transitions could prevent gaps in care that are difficult, and sometimes impossible, to repair.
Administrative systems also work best when they include human support. Digital portals and streamlined forms can help, but they cannot replace guidance from caseworkers, advocates, or navigators who understand how systems interact. Without that support, simplification efforts risk helping only those already equipped to manage complexity.
Taken together, these changes would not eliminate bureaucracy. They would make it manageable, and more importantly, humane.
What’s striking is how rarely these failures are treated as design problems. Families are labeled noncompliant. Applications are marked incomplete. The quiet assumption is that if services exist, access will naturally follow. It doesn’t.
When systems are fragmented and unforgiving, complexity becomes a form of quiet rationing, limiting access without ever formally saying no.
Administrative complexity in autism services reflects a broader truth about public policy. Access is shaped not only by funding, but by the paths people must take to reach it. Reducing administrative burden isn’t about convenience. It’s about equity, public health, and dignity. When families spend years fighting systems instead of receiving support, the cost is borne by individuals and, ultimately, by society as a whole.
CLICK HERE TO DONATE IN SUPPORT OF DCREPORT’S NONPROFIT NEWSROOM
The post Autism Services Blocked by Administrative Barriers appeared first on DCReport.org.
(This is a chapter of a longer report I’m working on that summarizes and expands the last several years of my work on construction productivity. I plan on publishing one chapter a month on the newsletter, and aim to have the full report done by the end of the year.)
For decades, American construction has fallen behind almost every other major sector in productivity growth. As far back as 1970 researchers noted that construction productivity improvement significantly lagged productivity improvement in the economy overall, and by 1985 economists were investigating what appeared to be declining construction productivity. Stanford civil engineering professor Paul Teicholz noted in a 2004 article in AECbytes that between 1964 and 2004, construction productivity declined by 0.59% per year on average, which was “particularly alarming when compared to the increasing labor productivity in all non-farm industries, which have experienced an increasing productivity of 1.77%/year over the same time period.” A 2017 article in The Economist noted that “construction holds the dubious honour of having the lowest productivity gains of any industry.” In a 2023 New York Times column,Ezra Klein wrote that “A construction worker in 2020 produced less than a construction worker in 1970, at least according to the official statistics.”
The trend of construction productivity in the United States failing to improve over time is indeed concerning. “Productivity” means some measure of output, divided by some measure of input. When productivity is improving, we get more output for a given amount of input over time; if productivity is falling, we get less output for a given amount of input over time. If productivity doesn’t improve, we can’t expect construction costs to fall and things like houses, roads, and bridges to get any cheaper. Because of this, it’s worth looking deeply at what exactly the trends in US construction productivity are.
Economists and researchers measure construction productivity in a variety of different ways. We can broadly categorize these metrics by their level of granularity:
At the lowest level of granularity, we have metrics that track productivity changes across the entire construction sector.
Slightly more granular are metrics that look at productivity changes in a particular subsector, such as housing construction.
Looking more specifically, we have metrics that look at productivity changes for constructing particular buildings.
And finally we have metrics that track productivity changes for individual construction tasks.
Each category of metric gives a slightly different perspective on productivity trends, and each has its own measurement challenges that we must consider when interpreting the data.
Sector-wide productivity metrics look at productivity trends across the entire construction industry. They answer if, overall, we’re getting more or less construction output for a given amount of input. The graph below, for instance, shows trends in US construction productivity by using total construction spending as a measure of output, and total hours worked in the construction sector as a measure of input. (Spending has been adjusted to 2025 dollars using the Consumer Price Index —we’ll talk more about whether this is a reasonable way to adjust for inflation later.)
We can see that, per this metric, construction labor productivity — the amount of construction output we get for a given amount of labor — is virtually flat between 1964 and 2024, whereas labor productivity in the economy overall rose by a factor of three.
Sector-wide metrics which look at productivity trends across the entire construction industry are very common. Paul Teicholz uses the same data we used above to look at trends in construction productivity in a 2013 article, and his 2004 article uses a very similar metric (rather than total spending, he uses US Department of Commerce construction spending data, a subset, as a measure of output).
In their 2025 paper “The Strange and Awful Path of Construction Productivity in the US”, economists Austin Goolsbee and Chad Syverson use a slightly different sector-wide productivity metric. For output they use real (inflation-adjusted) construction value-add data from the Bureau of Economic Analysis, and for input they use the number of full-time construction employees. (Unlike total construction spending, which just tracks the value of the outputs, value-add measures the value of construction outputs minus the value of the inputs used.) Goolsbee and Syverson also look at trends in construction total factor productivity (TFP), which measures productivity of both labor and capital (equipment, machinery, etc.) by comparing the growth rates of real construction value-add to the growth rates of construction labor and capital inputs. According to Goolsbee and Syverson’s productivity metrics, construction productivity looks even worse. Productivity increased from the 1950s until the mid-1960s, but since the 1950s it has declined by roughly 50%.
Discussions of US construction productivity often reference this Goolsbee and Syverson paper, or the data behind it. An early version of Goolsbee and Syverson’s paper is what Ezra Klein is referring to in his 2023 New York Times column, and it’s referred to in a 2025 Federal Reserve Economic Brief examining productivity. The data is also used in a 2026 report from Goldman Sachs looking at the causes of low US construction productivity. Management consultancy McKinsey likewise uses BEA value-add data in a 2017 report to construct a similar productivity metric, gross value add per hour worked, to show that in the US construction productivity improvement had lagged virtually every other industry:
The Bureau of Labor Statistics also uses BEA data, combined with its own estimates of hours worked, to calculate trends in both labor productivity and total factor productivity for a variety of sectors, including construction. This metric likewise shows construction productivity as stagnant or declining. It’s not uncommon for discussions of productivity to also reference this BLS metric; for instance, it’s used by Federal Reserve economists Daniel Garcia and Raven Molloy in their 2025 paper “Reexamining Lackluster Productivity Growth in Construction”.
Sector-wide measures of US construction productivity thus tell a consistent story of stagnant productivity growth, differing only in how bad the problem appears. By some measures, productivity is merely flat over the last several decades; by others, productivity has declined significantly.
Subsector metrics are also commonly used to get a picture of national construction productivity trends, particularly metrics that look at trends in housing construction. In their 2023 NBER working paper, “Why Has Construction Productivity Stagnated?” Princeton economist Leanardo D’Amico and coauthors looked at productivity trends in US homebuilding by dividing the total number of housing units produced in the US by the total number of residential construction employees. They found that housing productivity had declined significantly since the 1960s — though, as we’ll see, there are issues with their choice of metric. Goolsbee and Syverson also looked at housing units per employee in their 2025 paper, along with another housing productivity metric, square footage of housing per employee. As with D’Amico et al., housing units per employee shows declining productivity over time, while square feet per employee shows slightly more complex trends: productivity appears to decline between the 1970s and the early 1990s, and decline since then for multifamily construction, but single-family construction shows an increase in productivity of close to 50% between 1990 and 2020. In their 2025 paper, Garcia and Molloy also look at productivity trends in single-family home construction using square footage of housing produced per employee, though they also try to include quality adjustments in this metric. (We’ll discuss quality adjustments more later.)
The Bureau of Labor Statistics also produces estimates for construction productivity trends for four sub-sectors: single-family home construction, multifamily home construction (i.e., apartment buildings), industrial building construction, and highway and bridge construction. These are based on individual subsector estimates of construction spending from the US Census, and BLS estimates of hours worked. Per the BLS, while single-family home productivity has been stagnant since 1987 and highway and bridge productivity has declined, productivity is up for both multifamily construction and for industrial building construction.
Construction subsector productivity estimates thus generally show stagnant or declining construction productivity, though with significant variation. Some subsectors show increasing productivity, and some show different trends by different metrics. Single-family home construction shows increasing productivity when measured by square feet of home per employee, but unchanging productivity when measured by subsector spending per labor hour; for multifamily home construction, the reverse is true.
Below the level of construction subsectors, we have productivity metrics that look at trends for individual building types, such as the amount of labor required to build a single-family home. These sorts of metrics are much less common, as it’s rare to get detailed project-level productivity data from builders, but are still seen occasionally. In 1964 and 1972 the Bureau of Labor Statistics conducted studies on the number of hours it took to build a single-family home, finding that the average annual percent change in labor hours per square foot was just -0.6% per year (ie: productivity increased, but slowly). The Construction Industry Institute has a “Benchmarking and Metrics Productivity Database” that tracks project-level productivity metrics for submitted projects. A NIST analysis of this database from 2000 to 2007 noted a decline in project-level productivity, measured in output in dollars per labor-hour.
We can construct our own building-level productivity metric by using data from construction estimating guides. Estimating guides, produced by companies like RS Means and Craftsman, provide information on cost, labor, and material requirements for hundreds of different construction tasks, and are used to generate cost estimates for new construction projects. Some companies have also often been producing their estimating guides for many years, making them a valuable tool for analyzing productivity trends; both RS Means and Craftsman have been producing estimating guides since the 1950s.
Starting in 1993, Craftsman’s National Construction Estimator included an estimate of the total number of hours required to build a “typical” single-family home. If we compare the estimated number of hours per square foot in 1993 and 2026, they’re almost identical. The only task that has changed is insulation installation, which took a single man six days in 1993 and now takes one man 3 days. It’s also worth noting that this hours per square foot figure is also virtually the same as the number of hours per square foot calculated by the BLS in their 1964 and 1972 studies.
Thus, project-level measurements of US construction productivity also tend to show a stagnation or a decline in US construction productivity over time.
Finally, below project-level productivity metrics, we have measures that look at productivity of individual construction tasks: laying bricks, framing walls, installing plumbing, and so on. These metrics are fairly commonly used, thanks to the existence of estimating guides. We can look at changes in task-level construction productivity by seeing how the time and labor required for various specific construction tasks has changed in estimating guides over time.
Allmon et al (2000) looked at productivity changes for 20 different construction tasks from 1974 through 1996 using RS Means estimating guide data, and found that labor productivity increased for seven tasks, decreased for two tasks, and was unchanged for 11 tasks. Goodrum et al (2002) looked at productivity changes between 1976 and 1998 for 200 different construction tasks using data from several different estimating guides. They found that labor productivity declined for 30 tasks, was unchanged for 64 tasks, and improved for 107 tasks, with an average growth rate in labor productivity ranging from 0.8% to 1.8% depending on the estimating guide. A follow up study by Goodrum in 2009 that looked at productivity trends in 100 different construction tasks between 1977 and 2004 found a somewhat lower average productivity increase of just 0.47% per year, with significant variation between task categories.
We can also use different versions of estimating guides to do our own analysis of productivity trends. The chart below shows the relative installation rates for 40 different construction tasks which are listed in the RS Means estimating guides from 1985 and 2023. 10 tasks got more productive over the period, 10 got less productive, and 20 tasks were unchanged.
We can also try to calculate installation rates directly, using the values RS Means lists for task labor cost and hourly wages. The chart below shows the installation rates calculated for 17 construction tasks performed by either carpenters or sheet metal workers that were listed in the 1954, 1985, and 2023 version of the RS Means estimating guide. Effective installation rates for each task were calculated by dividing unit labor costs for the task by the average worker wage for that task type. By this analysis, 12 of 17 tasks improved in productivity between 1954 and 1985, and 15 of 17 tasks between 1985 and 2023 got more productive.
Footnote: RSMeans doesn’t give individual trade hourly rates for 1954, so for the 1954-1985 period we’ll simply use the average construction wage increase over that time.
One challenge with task-level productivity metrics is that we should expect a major mechanism of productivity improvement to be replacing old tasks for new ones. Steel manufacturing became massively more productive with the introduction of the Bessemer process, which took much less time and effort than the previous cementation process, but a task-level analysis — seeing how productivity in the cementation process improved over time — wouldn’t capture this.
One way around this is to look at the categories of tasks necessary for completing a building, rather than specific tasks. We can do this using Craftsman’s National Construction Estimator, which includes a breakdown of what’s necessary to complete a single-family home — excavation, installing doors and windows, running wiring, etc. — and what fraction of the total cost to build a home they make up. By looking at changes in these fractions of total home cost over time, we can see which sorts of tasks have gotten more productive and which have gotten less productive.
The chart below shows the relative fraction of different categories of tasks needed to build a single-family home for 1986 and 2026. Overall, there’s surprisingly little change: most task categories have the same ratio of overall costs in 1986 as they were in 2026, suggesting few types of tasks overall had much change in productivity.
Overall task-level productivity analysis shows significant variation in productivity trends. Looking at published installation rates for several dozen construction tasks between 1985 and 2023 implies that, as with other measures, construction productivity has shown little to no increase. Looking at high-level tasks needed to complete a single-family home also shows few tasks that have improved in productivity. But other analyses yield different results. Calculating implied installation rates using labor costs suggests significant task-level productivity improvement over time. Likewise, various studies of installation rates show construction tasks improving in productivity on average from the 1970s through the 1990s (with rate of improvement perhaps falling off over time).
The above metrics of construction productivity all look at trends in US construction. However, it’s also worth understanding construction productivity trends in other countries. If other countries show substantial construction productivity improvements, that suggests that the US’s productivity challenges are something specific to the US. But if other countries show stagnant or declining construction productivity, that suggests the challenges may be due to broader trends, or to the nature of the process of construction itself.
We can look at international trends in construction productivity at the sector level by using KLEMS databases, which aggregate industry-level productivity data for countries around the world.1 EU KLEMS has productivity data for European countries, as well as the US, UK, Japan, and (for older releases) Korea, Canada, and Australia. Asia KLEMS has productivity data for Japan, Korea, Taiwan, and India. LA KLEMS has productivity data for several Latin American countries, and World KLEMS has links to Russian, Chinese, and Canadian KLEMS data.2
The charts below show changes in construction labor productivity, measured as gross value add per labor hour, for 45 different countries. Productivity has been normalized to equal 100 for the first year in which there’s data.
Per KLEMS data, US construction productivity steadily declined from 1970 to around 1995, after which it leveled off. This is broadly consistent with other measures of US construction-sector productivity, which show either stagnant or declining productivity since roughly the 1960s.
Other countries show a somewhat different historical pattern. For the 20 countries where data goes back to the 1970s (which includes most of Western Europe, the Anglosphere, Japan, and Korea), only one other country, Greece, shows declining construction productivity from 1970 to 1995, and its rate of decline is much lower than the US. Every other country saw rising construction productivity during that period.
Since 1995, however, construction productivity in these 20 countries (minus Canada and Korea, whose time series stopped around 2010) improved much less. Per KLEMS, the US has an average annual rate of improvement of 0.2% per year from 1995 to 2021, which is slightly better than average for this group of 18 countries. Only Belgium and Ireland have maintained a steady, high rate of construction productivity improvement greater than 1% per year.
Starting in the 1980s, there is also KLEMS data for China, Taiwan, and India, and starting in the 1990s there’s data for Eastern Europe, Latin America, and Russia. Taiwan shows improving productivity until around 2000, after which it flattens out/declines. Korea and Russia show similar patterns of improvement followed by stagnation. India’s productivity improvement has remained flat, as has Poland’s, Czechia’s, Malta’s, Cyprus’ and Slovenia’s. Other Eastern European countries have improved in construction productivity since the 1990s, as have Latin American countries (with the exception of Honduras, which has declined significantly over time).
China’s productivity improved from the late 1980s through the 2010s, though its rate of improvement does not appear to be particularly impressive. (It’s roughly similar to the historical rates of improvement seen in Korea, France, Sweden, or Portugal.)
Goldman Sachs also looked at international construction productivity for several large, wealthy countries in a 2026 report. While they also found poor records of construction productivity for most countries since 1990, per their analysis the US had the worst record of construction productivity improvement of any country analyzed. This appears to stem in part from using BEA data for US productivity calculations, which yield a greater productivity decline than other US productivity metrics.

Overall, international construction-sector productivity data suggests that the US is not alone in suffering from stagnant or declining construction sector productivity. Rates of productivity improvement in the US over the last several decades appear broadly similar to improvement rates observed in other large, wealthy countries. Many countries that at one point had substantially improving construction productivity (Western Europe, Korea, Taiwan) have seen it flatten out in recent years. Others (India, Japan) have never seen substantial improvements. The countries that do show sustained, large improvements tend to be either small (Ireland, Denmark, Estonia), poor (Colombia, Peru), or both. Rates of construction productivity improvement are nearly always much lower than improvements seen in manufacturing, or in the economy overall.
Accurately measuring trends in construction productivity means accurately measuring both inputs and outputs over time. There are a number of difficulties in doing this.
For outputs, one major challenge is that outputs might change over time in ways that are difficult to account for. Sector-wide measures of construction productivity, for instance, typically measure construction output in terms of total construction spending, tallying up everything that was spent on construction during the year — housing in Texas, skyscrapers in New York, schools in Washington, and so on. However, if the composition of things that are built in the country changes — if over time there are more homes built in Texas and fewer skyscrapers built in New York — this could distort productivity measures.
For example, assume there are two types of houses, Type A which requires 1000 hours of labor to produce, and Type B which requires 1500 hours of labor to produce. Last year 100 of each type of house were built, yielding 200 total houses built with 250,000 hours of labor. The next year, however, 50 Type A houses and 150 Type B houses were built, yielding 200 total houses built with 275,000 hours of labor. If you simply look at the outputs (200 houses each year) without accounting for the differing difficulty of building them, this looks like a roughly 9% decline in productivity, since it took more hours to build the same number of houses. But what’s actually happened is a shift to building fewer easy to build houses and more hard to build houses. You could in fact get a measured productivity decline even if productivity was improving for each type of house. This is a variation of Simpson’s Paradox, the observation that for groups with differences between them, trends in individual sub-groups can be reversed when looking at the groups collectively.
These effects of a changing output mix aren’t merely theoretical. When Allen (1985) looked at US construction productivity trends from 1968 to 1978, he found that this sort of change in the output mix — specifically, a shift from capital-intensive civil construction to labor intensive home construction — was responsible for the lion’s share of the measured productivity decline.
This sort of shift in the output can also be at work in sub-sector measures of construction productivity. Measures of housing sector productivity, for instance, can be distorted by failing to account for changes in what sort of housing gets built. D’Amico et al. (2023) used “housing units per employee” as a measure of construction productivity, but this measure fails to take into account the fact that on average houses increased in size over time. An average home in 2025 is much larger, and requires more effort to build, than an average home in 1985.
This particular distortion is relatively easily corrected by multiplying the number of homes produced by average home size, to get “square feet of home produced per employee”. As we’ve noted, several studies of construction productivity use this metric. But increasing size isn’t the only way that homes change over time. For one, modern homes are built to stricter building code standards than older homes; they will have greater fire resistance, greater ability to withstand high winds and earthquakes, and greater energy efficiency. For another, modern homes have more amenities and services in them: they’re more likely to have air conditioners, dishwashers, insulation, generally will have more bathrooms, and so on. Thus, a square foot of home built today should be considered as more output than a square foot of home built in 1960.

It can also be challenging to accurately measure inputs to the construction process. Labor is the construction input most often tracked, but this can be subject to its own “input mix” problems — namely, ensuring whether labor hours are all actually being devoted to the outputs being considered. As with changes in the output mix, if there’s a shift in how construction workers are spending their time, this could show up as a change in construction productivity that’s not actually occurring.
For instance, we’ve noted that both D’Amico et al (2023) and Goolsbee and Syverson (2025) include productivity metrics which track the amount of housing produced per residential construction employee. However, workers in residential construction don’t merely build new houses, they also renovate old houses. And there has been a gradual trend upward in spending on residential renovations. Renovations now represent 40-45% of spending on residential construction, up from 20-25% in 1970. Thus some of the measured change in “housing per employee” is likely an artifact of employees increasingly working on renovations. (Goolsbee and Syverson’s housing-per-employee metric likely doesn’t have this problem for post-1990 data, as after that BLS employment data breaks down residential remodeling employment separately).
More generally, if labor isn’t properly accounted for, that will obviously distort any productivity measures. It’s notable that for some of the sub-sector productivity measures produced by the Bureau of Labor Statistics, labor hours worked appears to be much more uniform than changes in output. For industrial buildings, there are several spikes in output (2009, 2015, and 2024) during which labor input stays flat, resulting in productivity spikes. It’s possible these are real (though it seems unlikely that firms suddenly got 50% more productive, then 50% less productive, over just a few years), but it’s also possible these are fictional, at least partially the result of labor inputs not being properly accounted for. Notably, single-family home construction does show labor inputs rising and falling in concert with output, and shows productivity as much flatter over time.
Another problem regarding labor is that many measures of construction specifically measure labor productivity — output per labor hour, or per employee — rather than total factor productivity (output per total amount of inputs). This is a problem because it’s often possible to automate or mechanize construction work — replace labor with capital — in ways that aren’t efficiency-enhancing. Construction automation and mechanization often requires a large amount of equipment to duplicate what’s possible with a relatively small amount of labor. (When I worked at the modular construction startup Katerra, the executives would often complain how hard it was for Katerra, with its expensive factories, to compete with “Bubba and his truck”: low-overhead contractors who used little more than power tools and manual labor.) Thus labor productivity could improve even as overall productivity declined.
A problem related to changes in the output mix is that construction output is often measured in dollars spent. Spending in dollars must be adjusted to account for the fact that the value of a dollar changes over time. This is typically done using what’s known as a deflator, some measure of price changes over time that can be used to convert spending in dollars to a consistent measure of construction output. The Consumer Price Index (CPI), which measures price inflation for a basket of consumer goods over time, is an example of a deflator.
There are several challenges with using deflators. One is simply choosing one that accurately captures price changes relevant to construction. Construction uses certain sorts of inputs whose price changes may not be adequately captured by commonly-used deflators. A spike in the price of building materials (such as was observed during the Covid-19 pandemic) might dramatically raise the costs of construction much more than the CPI rises.
For example, consider a building that requires $100,000 worth of materials, and gets sold for a threefold markup at $300,000. Now an identical building is being planned, but building material prices double, raising the input price to $200,000 and the output price is now $600,000. However, the Consumer Price Index only increases by 10%. Deflating output by the cost of building materials would show identical output for the first and second buildings — the price of the final building doubled, but so did the cost of the input materials. But deflating by the consumer price index, which only rose 10% between the first and second buildings, would now make it appear that the second building was worth much more output than the identical first building.
In practice, this sort of disconnect between construction input prices and other measures of inflation appears rare — in Teicholz’s 2013 article, there was little difference in productivity trends using construction-specific deflators and the more general Consumer Price Index. A more difficult problem with deflators is that a deflator can mask any gains (or losses) in productivity. Ideally we would have a deflator that measured changes in the prices of a finished building — a so-called “output deflator”, which measures the price changes of final goods and services. The Consumer Price Index is an example of an output deflator. However, many construction deflators are “input deflators”, which track changes in the price of various construction inputs — materials, labor, and so on. Using an input deflator can mask changes in productivity, because actual output is not being properly accounted for.
For instance, assume we have a building that requires $100,000 worth of materials, and 1000 hours of labor to build. We sell this building for $300,000. Now suppose that building material prices stay flat, but we figure out a way to build an essentially identical building for $75,000 worth of materials and 750 hours worth of labor, which we sell for $225,000.
In this example, productivity has improved markedly — we’ve gotten an effectively identical building for 75% of the material and 75% of the labor required previously. However, if we used an input deflator, we’d get no measured change in productivity. Output is measured in dollars for both buildings — $300,000 for the first, $225,000 for the second — and the value of the deflator stays the same since input prices haven’t changed. So using an input deflator would make it look like we’re using 75% of the material and labor inputs to get 75% of the output. Conversely, if we got less productive — if it now took $125,000 worth of materials and 1250 hours of labor to build an effectively identical building which we sold for $375,000 — that would also look like productivity being flat, getting 25% more building by using 25% more materials and labor, rather than the decline that it is.
It’s thus important to have some deflator that can capture changes in the actual value of buildings, not merely the prices of their inputs. However, figuring out how much a given amount of construction should be valued is difficult: it’s susceptible both to the output mix problem (a sector-wide deflator needs to account for the fact that the mix of buildings may be changing), and the changing quality problem (your deflator needs to somehow capture the fact that a 1000 square foot house today is “more house” than a 1000 square foot house built in 1975 due to code improvements, more amenities, and so on).
Thus “what deflator to use?” is a perennial problem when analyzing trends in construction productivity. In his 2013 article on construction productivity, Teicholz went so far as to use seven different deflators to analyze construction productivity trends. Goolsbee and Syverson (2025) noted that much of their apparent measured decline of construction productivity was a product of the construction deflator used by the BEA, which showed a much higher rate of increase in construction prices than other deflators: since output is dollars spent divided by the deflator, this would register as a decline in construction output, and thus a decline in productivity, compared to deflators which showed a lower increase in price. Garcia and Molloy (2025) note that the Census Single Family Price Index, a commonly used deflator for single-family home construction, does not fully capture changes in home quality over time: the quality adjustments include things like increases in square footage and changes in HVAC systems, but not things like improved energy efficiency or interior finish quality. Garcia and Molloy estimate that improperly accounted for quality changes result in underestimating single-family construction productivity improvements by up to 0.8% per year.
More generally, the accounting required for accurate sector or sub-sector construction productivity estimates is very difficult. We can see this by looking at changes in KLEMS data over time. New KLEMS releases don’t merely extend the time series for existing data, but they revise and update past data. These revisions can substantially alter productivity trends. Between 2019 and 2024, revisions to the UK KLEMS data resulted in a swing from showing positive construction productivity growth between 1996 and 2016 to those same years showing negative productivity growth. Swedish data revisions showed the opposite, going from negative productivity growth using 2019 data to flat productivity growth using 2024 data.
Task-level metrics of construction productivity are immune to many construction productivity measurement problems. Because we typically have a direct measure of output (materials installed per hour, etc.), we don’t face the problem of converting an output measured in dollars, or of output mix problems that stem from combining different types of outputs into one measure. But task-level metrics have their own challenges. For one, we face the problem of how to go from task-level productivity estimates to estimates of whole building, subsector, or sector productivity. For instance, task-level productivity may be improving on average (as per Goodrum et al. 2002 and 2009), but perhaps the productivity-improving tasks are less commonly used, and the more commonly used ones are showing less growth. It seems notable that for Craftsman’s full-house estimates, only one category of task — insulation installation — improved in productivity from 1993 to 2025.
Another difficulty of task-level measures of productivity is that they’re almost universally based on estimating guide data, rather than data from actual buildings produced. But this is a relatively minor weakness, as there’s good reasons to think that estimating guide data is reasonably accurate: empirically, it’s valuable enough for construction businesses to continue to pay for it over many decades, and estimating guide data seems to largely track data from other sources. Potter and Syverson (2025) noted that RS Means estimates of city-level construction costs largely agreed with construction cost survey data, and Craftman’s task-based estimates for single-family home construction cost aligns with the average price per square foot of new home construction from the US Census.
Overall, it’s hard to be confident of any single metric of construction productivity, due to the numerous, difficult-to-resolve measurement issues at work. Examinations of construction productivity will thus often use multiple metrics. Goolsbee and Syverson (2025) consider several different productivity metrics, and Teicholz (2013) uses several different deflators to try to avoid distortions from any outlier deflators.
Tying this all together, what can we say about trends in US construction productivity?
We can look at trends in productivity — the amount of output we get for a given amount of input — at different levels, from the sector as a whole, to sub-sectors such as housing construction, to individual buildings, all the way down to individual construction tasks. Productivity metrics for the entire construction sector consistently show productivity either staying the same or declining over time, in contrast to other sectors like manufacturing and to the economy overall. We see these trends both in the US and in most large, wealthy countries.
Sub-sector productivity metrics also broadly show stagnant or declining productivity, though not universally. Project and building-level measures of productivity also generally show trends of stagnant or declining productivity, though most of this data is for home construction.
Task-level productivity trend estimates are, like sub-sector trends, somewhat mixed. Some task-level estimates show similar trends of stagnant or declining productivity over time, others show sluggish to modest growth depending on the collection of tasks and the time period considered.
All these estimates must be taken with a grain of salt, as it’s difficult to accurately measure construction inputs and outputs. Productivity estimates can be distorted by a variety of factors, including changes in the output mix, failing to properly account for construction labor inputs, and improperly deflating construction spending.
But these measurement difficulties are tempered by the fact that the estimates almost all point in the same direction. Most measures of construction productivity show at best very low levels of growth, far below what’s observed in the economy overall; many measures show declining productivity. We see some subsectors that may have seen periods of substantially increasing productivity (such as industrial building construction), and evidence that some individual construction tasks have gotten more productive for at least some periods of time, but overall the picture of stagnant productivity growth is fairly consistent.
KLEMS stands for capital (K), labor (L), energy (E), materials (M), and services (S).
To calculate productivity using this data — specifically, labor productivity, or the amount of output we get for a specific amount of labor — we can use the “chain linked gross value add” measure, VA_Q or VA_QI in the database. Gross value-add is the value of the outputs (in this case, the buildings and infrastructure produced) minus the value of “intermediate inputs” — materials, services, energy, and other things purchased from outside the sector in question. In other words, it’s the total value that the industry itself contributes. “Chain linked” is a way of adjusting for inflation, by calculating the growth rate for one year using the previous year’s prices, then “chaining” those growth rates together. To get sector productivity, we just divide chain linked gross value-add by a measure of total labor effort in that sector. For that labor effort variable, we’ll use H_EMP, which is the total number of hours worked by “engaged persons” — employees, business owners, and people who are self-employed. For a few countries, we’ll need to calculate labor productivity slightly differently. India’s KLEMS data doesn’t include H_EMP, so we’ll use the number of employees instead. China’s KLEMS data doesn’t include VA_Q, but it does include the growth rate of labor productivity by industry, which provides the same information.
Getting a book out involves some tedium (e.g. trying to proofread the index) as well as many small excitements: here's the full book cover and jackets for Moral Economics:)
I had high hopes and low expectations that the FDA under the new administration would be less paternalistic and more open to medical freedom. Instead, what we are getting is paternalism with different preferences. In particular, the FDA now appears to have a bizarre anti-vaccine fixation, particularly of the mRNA variety (disappointing but not surprising given the leadership of RFK Jr.).
The latest is that the FDA has issued a Refusal-to-File (RTF) letter to Moderna for their mRNA influenza vaccine, mRNA-1010. An RTF means the FDA has determined that the application is so deficient it doesn’t even warrant a review. RTF letters are not unheard of, but they’re rare—especially given that Moderna spent hundreds of millions of dollars running Phase 3 trials enrolling over 43,000 participants based on FDA guidance, and is now being told the (apparently) agreed-upon design was inadequate.
Moderna compared the efficacy of their vaccine to a standard flu vaccine widely used in the United States. The FDA’s stated rationale is that the control arm did not reflect the “best-available standard of care.” In plain English, that appears to mean the comparator should have been one of the ACIP-preferred “enhanced” flu vaccines for adults 65+ (e.g., high-dose/adjuvanted) rather than a standard-dose product.
Out of context, that’s not crazy but it’s also not necessarily wise. There is nothing wrong with having multiple drugs and vaccines, some of which are less effective on average than others. We want a medical armamentarium: different platforms, different supply chains, different side-effect profiles, and more options when one product isn’t available or isn’t a good fit. The mRNA vaccines, for example, can be updated faster than standard vaccines, so having an mRNA option available may produce superior real-world effectiveness even if it were less efficacious in a head-to-head trial.
In context, this looks like the regulatory rules of the game are being changed retroactively—a textbook example of regulatory uncertainty destroying option value. STAT News reports that Vinay Prasad personally handled the letter and overrode staff who were prepared to proceed with review. Moderna took the unusual step of publicly releasing Prasad’s letter—companies almost never do this, suggesting they’ve calculated the reputational risk of publicly fighting the FDA is lower than the cost of acquiescing.
Moreover, the comparator issue was discussed—and seemingly settled—beforehand. Moderna says the FDA agreed with the trial design in April 2024, and as recently as August 2025 suggested it would file the application and address comparator issues during the review process.
Finally, Moderna also provided immunogenicity and safety data from a separate Phase 3 study in adults 65+ comparing mRNA-1010 against a licensed high-dose flu vaccine, just as FDA had requested—yet the application was still refused.
What is most disturbing is not the specifics of this case but the arbitrariness and capriciousness of the process. The EU, Canada, and Australia have all accepted Moderna’s application for review. We may soon see an mRNA flu vaccine available across the developed world but not in the United States—not because it failed on safety or efficacy, but because FDA political leadership decided, after the fact, that the comparator choice they inherited was now unacceptable.
The irony is staggering. Moderna is an American company. Its mRNA platform was developed at record speed with billions in U.S. taxpayer support through Operation Warp Speed — the signature public health achievement of the first Trump administration. The same government that funded the creation of this technology is now dismantling it. In August, HHS canceled $500 million in BARDA contracts for mRNA vaccine development and terminated a separate $590 million contract with Moderna for an avian flu vaccine. Several states have introduced legislation to ban mRNA vaccines. Insanity.
The consequences are already visible. In January, Moderna’s CEO announced the company will no longer invest in new Phase 3 vaccine trials for infectious diseases: “You cannot make a return on investment if you don’t have access to the U.S. market.” Vaccines for Epstein-Barr virus, herpes, and shingles have been shelved. That’s what regulatory roulette buys you: a shrinking pipeline of medical innovation.
An administration that promised medical freedom is delivering medical nationalism: fewer options, less innovation, and a clear signal to every company considering pharmaceutical investment that the rules can change after the game is played. And this isn’t a one-product story. mRNA is a general-purpose platform with spillovers across infectious disease and vaccines for cancer; if the U.S. turns mRNA into a political third rail, the investment, talent, and manufacturing will migrate elsewhere. America built this capability, and we’re now choosing to export it—along with the health benefits.
The post I Regret to Inform You that the FDA is FDAing Again appeared first on Marginal REVOLUTION.
Hat tip: Logan Dobson.
The post That Was Then/This is Now appeared first on Marginal REVOLUTION.

United Launch Alliance said an issue affected one of the four solid rocket boosters that helped propel its Vulcan rocket into space Thursday on a mission for the United States Space Force. Despite the problem the rocket, making only its fourth flight, continued on its planned trajectory, the company said.
The 202-foot-tall (61.6 m) rocket thundered away from pad 41 at Cape Canaveral Space Force Station at 4:22 a.m. EST (0922 UTC) but less than 30 seconds into the flight, there appeared to be a burn through of one of the nozzles on a Northrop Grumman-built graphite epoxy motor (GEM) 63XL solid rocket boosters (SRBs).
Shortly after, as the rocket performed its pitch over maneuver, the vehicle began to roll in a more pronounced way than is typical for this stage of flight. The Vulcan rocket appeared to counteract the anomaly and the SRBs jettisoned as planned at T+ 1 minute, 37 seconds into the flight.
“We had an observation early during flight on one of the four solid rocket motors, the team is currently reviewing the data,” ULA said in a statement roughly an hour after liftoff. “The booster, upper stage, and spacecraft continued to perform on a nominal trajectory.”
Roughly 20 seconds after the liftoff of ULA’s Vulcan rocket on the USSF-87 mission, there appeared to be a possible burn through of at least one of the solid rocket booster nozzles. We’ve reached out to ULA for comment. Video shot by @ABernNYC for Spaceflight Now
Watch: pic.twitter.com/u2sFmlxSih
— Spaceflight Now (@SpaceflightNow) February 12, 2026
The rocket was carrying the USSF-87 mission. It’s a series of payloads for the U.S. Space Force, highlighted by at least one Geosynchronous Space Situational Awareness Program (GSSAP) satellite, though two may be onboard.
ULA leadership said prior to launch that it would be roughly 10 hours from liftoff until the end of the mission, so it might be Thursday afternoon before an update on the status of the payload is given.

This was ULA’s second national security mission following completion of the Vulcan rocket’s certification in March 2025. There are several more on the company’s launch manifest for 2026, including a GPS satellite and satellites for the Space Force’s Space Development Agency.
ULA’s plan for 2026 was to launch 16 to 18 missions with Vulcan. The latter vehicle would launch from both coasts.
The “observation” noted on one of the SRBs on Thursday morning’s flight marks the second time in just four flights that ULA ran into a similar issue.
A burn through was noted during the second certification launch of Vulcan back on Oct. 4, 2024. ULA and Northrop Grumman went through a series of tests and analysis to address the anomaly, including a hot fire test in Utah.
Ultimately, the U.S. Space Force deemed Vulcan capable to launch national security payloads for it and the National Reconnaissance Office (NRO). The USSF-106 mission on Aug. 12, 2025, went smoothly, giving ULA leadership confidence in their launch vehicle.
“We’ve had a couple of anomalies that we’ve worked through. You all are aware of those. Those are behind us now and so the Vulcan rocket is ready to go,” said John Elbon, the interim CEO of ULA, during a virtual media roundtable on Tuesday.

David’s handcrafted figurines pay tribute to cultural icons. His latest project takes on his greatest hero, his late brother
- by Aeon Video

In their visions of the underworld Dante and Milton were truly subversive, incorporating predecessors into their own repudiation
- by Charlie Ericson
Planets orbiting two stars have been found, but not all that many of them. We’re talking here about a planet that orbits both stars of a close binary system, and thus far, although we’ve confirmed over 6,000 exoplanets, we’ve only found 14 of them in this configuration. Circumbinary planets are odd enough to make us question what it is we don’t know about their formation and evolution that accounts for this. Now a paper from researchers at UC-Berkeley and the American University of Beirut probes a mechanism Einstein would love.
At play here are relativistic effects, having to do with the fact that, as Einstein explained, intense gravitational fields have detectable effects upon the stars’ orbits. This is hardly news, as it was the precession of Mercury in the sky that General Relativity first predicted. The planet’s orbit could be seen to precess (shift) by 43 arcseconds per century more than was expected by Newtonian mechanics. Einstein showed in 1915 that spacetime curvature could account for this, and calculated the exact 43 arcsecond shift astronomers observed.
What we see in close binary systems is that if we diagram the elliptical orbit usually found in such systems, the line connecting the closest approach (periastron) and farthest point in the orbit (apoapsis) gradually rotates. The term for this is ,em>apsidal precession. This precession – rotation of the orbital axis – is coupled with tidal interactions between the two stars, which make their own contribution to the effect. Close binary orbits, then, should be seen as shifting over time, partly as a consequence of General Relativity.
The researchers calculate that as the precession rate of the stars increases, that of a planet orbiting both stars slows. The planet’s perturbation can be accounted for by Newtonian mechanics, and its lessening precession is the result of tidal effects gradually shrinking the orbit of the two binary stars. But note this: When the two precession rates match, or come into resonance, the planet experiences serious consequences. Mohammad Farhat, (UC Berkeley) and first author of the paper, phrases the matter this way:
“Two things can happen: Either the planet gets very, very close to the binary, suffering tidal disruption or being engulfed by one of the stars, or its orbit gets significantly perturbed by the binary to be eventually ejected from the system. In both cases, you get rid of the planet.”

Image: An artist’s depiction of a planet orbiting a binary star. Here, the stars have radically different masses and as they orbit one another, they tug the planet in a way that makes the planet’s orbit slowly rotate or precess. Based on dynamic modeling, general relativistic effects make the orbit of the binary also precess. Over time, the precession rates change and, if they sync, the planet’s orbit becomes wildly eccentric. This causes the planet to either get expelled from the system or engulfed by one of the stars. Credit: NASA GSFC.
Does this mean that circumbinary planets are rare, or does it imply that most of them are probably in outer orbits and hard to find by our current methods? Ejection from the system seems the most likely outcome, but who knows? The researchers make three points about this. Quoting the paper:
(i) Systems that result in tight binaries (period ≤ 7.45 days, that of Kepler-47) via orbital decay are more likely than not deprived of a companion planet: the resonance-driven growth of the planet’s eccentricity typically drives it into the throes of its host’s driven instabilities, leading to ejection or engulfment by that host.
(ii) Planetary survivors of the sweeping resonance mostly reside far from their host and are therefore less likely to have their transits detected. Should eccentric survivors nevertheless be detected, they are expected to bear the signature of resonant capture into apse alignment with the binary.
(iii) The process appears robust to the modeling of the initial binary separation, with three out of four planets around tight binaries experiencing disruption…
What we wind up with here is that circumbinary planets are hard to find, but the greatest scarcity is going to be circumbinaries around binary systems whose orbital period is seven days or less. The researchers note that 12 of the 14 known circumbinary planets are close to but not within what they describe as the ‘instability zone,’ where these effects would be the strongest. Indeed, the combination of general relativistic effects and tidal interactions is calculated here to disrupt planets around tight binaries about 80 percent of the time. Most of the planets thus disrupted would most likely be destroyed in the process.
The paper is Farhat & Touma, “Capture into Apsidal Resonance and the Decimation of Planets around Inspiraling Binaries,” Astrophysical Journal Letters Vol. 995, No. 1 (8 December 2025), L23. Full text.


Welcome to another roundup of interesting news and events from around the econosphere, from my traditional, 100% handcrafted human-written blog.
First, here’s an episode of Econ 102 for you! As regular readers know, Econ 102’s regular run has ended due to my co-host getting extremely busy with his new job. But we will still come out with an episode every now and then. This episode is about how cameras can improve public safety — and whether they should:
Anyway, on to this week’s list of interesting things.
The Bay Area Rapid Transit system (BART) is in a parlous state. Ridership has plummeted in recent years; it did not even come close to fully bouncing back after the pandemic.

If BART doesn’t get bailed out with higher taxes, it will have to close stations, reduce service, and lay off workers.
Why did so many people stop riding BART? It’s possible that the pandemic permanently shifted people’s tastes; maybe people just got used to taking Uber or driving instead of using the train. But it’s also possible that the general increase in public disorder in the Bay Area just made BART unacceptable as a mode of transportation. It seemed like every train had its share of shady characters, drug users, vagrants, and the mentally ill.
For a long time, everyone talked about this, but no one had the hard evidence to prove it. Well, now we do. After BART installed fare gates last year at many of its stations, crime on the trains plummeted by 54% in a single year.1 What’s more, the amount of time that BART employees have to spend on “patron related Corrective Maintenance” — i.e., fixing or cleaning up things that riders break or defile — went from huge amounts to almost nothing:

It turns out that just a few riders were causing most of the disorder on the BART — and those riders were mostly not paying their fares, since the fare gates were effective in stopping them.
This demonstrates a general principle: You only have to restrain a very small number of people in order to maintain public order.
Progressives often argue against measures like fare gates, labeling them “carceral” and “racist”. This demonstrates a principle that I call anarchyfare — the idea that eliminating society’s rules serves as a kind of welfare benefit for marginalized people. But in fact, most poor and marginalized people are just peace-loving people who need to ride the train to get to work. They are the chief victims of the tiny number of chaotic individuals who destroy the commons and make public spaces and public services unusable.
BART’s lesson should be applied throughout much of our society. Restraining a very few uncontrollable and chaotic individuals makes life much better for the poor and working class.
As agentic coding apps wow the world, it’s time for yet another round of “Is AI taking our jobs yet?”. Most of the attention has been focused on young college grads. The story here is that so far, AI primarily automates knowledge work — software engineering, legal services, and so on — and so impacts white-collar entry-level hiring more than other types of hiring.
This was the thesis of Brynjolfsson et al. (2025). And it’s the subject of a new post by Mike Konczal:

Konczal writes:
As you can see, people without college degrees who are 22-27 more or less track exactly what we’d expect given the labor market slowdown. But young college is much higher than our trendline we’d expect from the slowdown.
To be clear: college-educated unemployment is still lower than non-college unemployment, and everyone’s unemployment is up…What’s unusual is the gap between where college unemployment should be historically and where it actually is today…[Y]oung people have higher unemployment than we’d expect at 4.4% overall unemployment. It’s especially higher at its peak and throughout their 20s for people with a college degree. Their recent unemployment rate is historically a surprise. The bad kind of surprise.
He doesn’t claim that we know it’s AI causing the change in the historical pattern, but it’s heavily implied.
But Adam Ozimek points out that this story depends on using unemployment rates. If you look at employment rates instead, the picture looks very different:

You’d think employment rates and unemployment rates would just measure the same thing, right? But they don’t. The way our government calculates things, the employment rate is the percentage of people who have a job. The unemployment rate is the percentage of people who don’t have a job but are looking for a job. In other words, the unemployment rate depends on who says “I’m looking for a job right now”. The employment rate does not depend on this.
In fact, as Ozimek shows, recent college grads have shown pretty constant labor force participation (i.e. they’re all still trying to find jobs), while a number of non-college people of the same age have stopped looking entirely. This shows up as higher unemployment for the college grads, even though the gap in terms of “who has a job” has actually widened since the release of ChatGPT.
This simple observation throws a lot of cold water on the idea that AI is taking jobs from young college graduates. As if that isn’t enough, Zanna Iscenko has a good post that casts even more doubt on the thesis. Iscenko points out that the jobs that are typically reckoned to be more “AI exposed” also tend to be more sensitive to macroeconomic swings:
“AI exposure” and “interest rate sensitivity” are deeply correlated variables…[O]ccupations in the top quintile of AI exposure are overwhelmingly concentrated in…the most AI-exposed quintile…These are precisely the sectors most sensitive to capital costs and broad economic uncertainty. This finding is supported by existing economic literature, such as research by Gregor Zens, Maximilian Böck, and Thomas O. [Zörner] (2020), which found that workers in tasks that are easily automated are also disproportionately affected by conventional monetary policy shocks.
Further supporting the interpretation that AI exposure is correlated with sensitivity to macroeconomic shocks is the fact that we also see more pronounced drops in job postings for “AI-exposed” occupations during the hiring slowdown in early 2020…when Generative AI could not even theoretically be the explanation for the difference.
It still looks to me as if the slowdown in new-grad hiring is not a great example of AI taking jobs. I understand why everyone is worried about this, and I do think it’s plausible that many people will have to find new jobs in the age of AI. On top of that, it seems easily possible that uncertainty about the effects of AI could slow hiring, even if people don’t end up being replaced.
But I just don’t think there’s good evidence that it’s happening yet. Perhaps this year will be the year.
Tariffs aren’t creating a wave of manufacturing jobs for Americans; in fact, manufacturing jobs are decreasing. But it’s also worth asking whether tariffs are even having an effect on America’s overall trade deficit. Recall that Trump thinks trade deficits are bad in and of themselves — a sign of American “defeat” in a global competition, and a way in which America is dependent on foreigners.
The effect of the tariffs on trade deficits doesn’t appear for a little while. At first, everyone tries to front-run the tariffs by importing as much as they can before the tariffs go into effect, leading to a giant temporary spike in the trade deficit. But after that temporary effect abated, it looked like U.S. trade deficits were shrinking:
The November data throws a bit of cold water on that idea. Imports soared and exports fell. November might turn out to be an anomaly, but so far, it doesn’t look like tariffs have done much to trim America’s trade deficit.
But the U.S. bilateral trade deficit with China has come down a lot. The percentage of U.S. imports that come from China has been falling since the pandemic, but it absolutely fell off a cliff after Trump’s big tariffs were announced. America used to get more than a fifth of its imports from China; now it gets less than a thirteenth:

Some people claim that China is just shipping more goods to America indirectly, through third countries like Vietnam. But Gerard DiPippo looked at which goods China has started selling more of to third countries, and found that transshipment to America can’t be very high:
By comparing the decline in China’s exports of specific goods to the United States with the increase in its exports of those same goods to other markets, we can approximate how much of China’s exports are being diverted. By this method, about 82 percent of China’s lost exports to the United States found alternative markets in the second quarter…The top destinations for those diverted exports are Southeast Asia and Europe. Comparing those trade diversion estimates with the increased U.S. imports of those same goods from those regions, we can estimate a maximum for potential transshipment into the U.S. market. By that metric, Southeast Asia is the top potential source of transshipped goods. Overall, potentially transshipped goods during the second quarter equal 23 percent of China’s diverted trade, suggesting that Chinese exporters have, at least so far, mostly found alternative markets.
True transshipment is likely lower than the 23% number that DiPippo cites as an upper bound.
What’s more plausible is that China is shipping intermediate goods — parts, materials, etc. — to countries like Vietnam, who assemble the inputs into consumer goods and sell them to the U.S. But while this means that the U.S. is still dependent on China for some types of technology, the actual manufacturing base is migrating out of China — which is good for the rest of the world, since it’ll help other countries industrialize. Also, having assembly outside China reduces America’s geopolitical vulnerability somewhat.
So while tariffs haven’t clobbered the trade deficit or led to a manufacturing renaissance, they do appear to be working to decouple the world’s two largest economies. Recall that the tariff rate on China is still much bigger than the rate on other countries:

If you view import dependence on China as a geopolitical risk, then this is a positive result for tariffs.
Jon Stewart was my favorite political comedian when I was younger. He didn’t always get everything right, but he could almost always make you laugh, and it was clear that his heart was in the right place. He just wanted to see America succeed and Americans be happy.
But in recent years, this admirable desire has slowly morphed into a kind of lazy centrist populism. One of Stewart’s occasional targets is the economics profession — a favorite punching bag of left-populists everywhere. But because Stewart doesn’t know much about the field of economics, or what economists do, or what economics is about, or what economics research actually says, his critiques often feel uninformed and fall flat.
Jerusalem Demsas saw a recent Stewart interview with behavioral economist Richard Thaler, and decided she had finally had it with the former Daily Show host:
Stewart interjected…“But that’s not economics, economics doesn’t take into account what’s best for society!”…“The goal of economics in a capitalist system is to make the most amount of money for your shareholders. So my point is, since when is economics about improving the human condition and not just making money for the companies that are extracting the fossil fuels from the earth?”
At this point it became clear that Stewart has conflated the entire field of economics with a half-remembered, left-wing caricature of capitalism…Throughout the interview, Stewart seemed to believe that economics is just a sophisticated justification for letting rich people and corporations do whatever they want. And this total lack of basic understanding renders him an inept translator of politics and an ineffective force for the very policies he says he supports.
Demsas noted a hilarious moment in the interview in which Stewart rejects the notion that economists have anything useful to say about climate change, and then immediately endorses a cap-and-trade scheme — something economists invented. It’s as if a talk show host rejected the science of physics by saying we didn’t need physics equations to land on the moon.
Jason Furman, an incredibly mild-mannered and affable guy, was nevertheless willing to vent about his own interview with Stewart:
Meanwhile, in order to get some backup for his newfound crusade against the economics profession, Stewart has recruited Oren Cass, a Trump supporter and big fan of tariffs who spends much of his time yelling about how economists don’t know anything. I have written in the past about the utter vapidity of Cass’ critiques of economics. Every time I see a story about how U.S. manufacturing employment keeps falling and falling, I tweet to him and ask him whether he has revised his belief that tariffs help manufacturing. He never answers.
Anyway, the point here is that although Jon Stewart’s style of comedy is great for making fun of American politics, it’s not an effective or interesting way to address economic policy challenges. Unfortunately, the “econ is fake” meme has given a lot of people permission to treat those challenges as if they’re a simple matter of common sense. They are not.
Trump and the MAGA movement aren’t just going after illegal immigrants; they’re also opposed to high-skilled legal immigration. The administration’s main target on that front has been the H-1B visa, which brings smart people to work in U.S. tech companies. Most H-1B recipients are from nonwhite countries, with India taking by far the biggest share. Trump has implemented a huge fee for hiring H-1Bs, and other GOP politicians are also trying to curb use of the visas.
Proponents of skilled immigration, such as Yours Truly, have long warned that if companies can’t get talent to come to America, they’ll simply set up overseas offices and take advantage of talent there. In fact, Glennon (2023) has evidence that this is exactly what happens:
How do multinational firms respond when artificial constraints, namely policies restricting skilled immigration, are placed on their ability to hire scarce human capital?…[F]irms respond to restrictions on H-1B immigration by increasing foreign affiliate employment…particularly in China, India, and Canada. The most impacted jobs were R&D-intensive ones…[F]or every visa rejection, [multinational companies] hire 0.4 employees abroad.
Well, it’s happening again:
Alphabet Inc. is plotting to dramatically expand its presence in India, with the possibility of taking millions of square feet in new office space in Bangalore, India’s tech hub…
US President Donald Trump’s visa restrictions have made it harder to bring foreign talent to America, prompting some companies to recruit more staff overseas. India has become an increasingly important place for US companies to hire, particularly in the race to dominate artificial intelligence…
Google rivals including OpenAI and Anthropic PBC have recently set up shop in the country…
For US tech giants, India offers a strategic workaround to Washington’s tightening immigration regime. The Trump administration has moved to sharply hike the fees for H-1B work visas — potentially to $100,000 per application — making it harder for companies to bring Indian engineers to the US.
This shift is fueling the growth of so-called global capability centers, or technology hubs operated by multinational corporations across sectors from software and retail to finance. Many of these centers are now focused on building AI products and infrastructure. Nasscom, India’s IT industry trade group, estimates such centers will employ 2.5 million people by 2030, up from 1.9 million today.
If those jobs were in America, the Indians who are working at those jobs would be spending their money on American doctors and dentists, American tax preparers and financial advisers, American restaurants and shops. Now, instead, thanks to Trump, that money is being spent in India.
As I noted in my last post, Japan has a huge national debt. Even once you net out the portions of the debt that are held by other branches of the government, debt is only around 119% of GDP — about the same as in the U.S., which is highly indebted. But a recent post by Toby Nangle shows how Japan’s government has managed to reduce the impact of that debt by basically acting as a giant hedge fund, making huge profits on various macroeconomic “trades”:
[I]f the Japanese government had raised a bazillion yen on the bond market and funnelled it all into, say, a successful forex trading operation and a long-only stock portfolio which has gone to the moon, maybe we should consider these assets too when trying to work out what sort of parlous state Japan is actually in?…
[Japan] has enjoyed spectacular returns on its monster macro punts over the past few years…They’ve scored healthy profits on its FX interventions since 1991, which we reckon could be worth around eight per cent of GDP…The Bank of Japan’s most outlandish version of QQE has involved building a huge position in stocks, and we estimate the unrealised P&L could be worth 11 per cent of GDP…And that’s before we chalk up jumps in the value of GPIF, Japan’s $1.8tn public pension reserve fund that is maintained to help the government pay pensions. This has benefitted bigly from a combination of a slide in the yen and booming stocks…
Beyond these three, there are a host of other such trades. But pretty much all of them come down to one core basket of positions: short yen vs US dollars, and long stocks. And this trade has been wildly successful.
Nangle plays with some numbers from Fed economist YiLi Chien for Japan’s various government trading positions, and finds that they shrink Japan’s debt by around half:

The U.S. has not done anything of the sort. If George Bush had implemented his Social Security scheme in 2005, we’d be reaping many of the same benefits Japan is now enjoying…but we did not. Japan is acting like a giant — and very successful — macro hedge fund, while the U.S. keeps its money in cash under the mattress.
The fare gates are also raising millions of dollars for the BART system.
A little more than two months ago, a Rocket Lab employee called the Stennis Space Center Fire Department from the nearby A3 test stand. There was a grass fire where Archimedes engines undergo testing. Could they please send personnel over?
According to the fire station's November 30 dispatcher log, the employee said, "The fire started during a test when an anomaly caused an electrical box to catch fire."
Satellite imagery from before and after the anomaly appears to show that the roof had been blown off the left test cell, one of two at the test stand at the historic NASA facility in southern Mississippi. One person with knowledge of the anomaly said, "The characterization of this as an electrical fire doesn't reflect what actually occurred. This was a catastrophic engine explosion that resulted in significant infrastructure damage."
The Federal Aviation Administration abruptly halted flights into and out of El Paso International Airport on Tuesday night at 11:30 pm local time (1:30 am EST Wednesday) and said the restrictions would remain in place for 10 days.
In its notice, the FAA also restricted air space extended in a radius of 10 nautical miles from the airport. Violators were subject to being shot down, the agency said.
However, less than 10 hours later and without any additional explanation, the FAA ended the restrictions. "The temporary closure of airspace over El Paso has been lifted," the federal agency said on social media. "There is no threat to commercial aviation. All flights will resume as normal."
On Tuesday night, the Federal Aviation Administration closed airspace up to 18,000 feet above the El Paso International Airport in Texas, saying the restrictions would be in place for 10 days. Then, less than 10 hours later, the federal agency reopened the airspace, allowing planes to land and take off at the busy airport.
About an hour after lifting the restrictions, US Secretary of Transportation Sean Duffy, whose responsibilities include overseeing the FAA, explained the unexpected closure by saying, "The FAA and DOW acted swiftly to address a cartel drone incursion." (The Trump Administration refers to the Department of Defense as the Department of War, or DOW, although its legal name remains the former.)
Not everyone agrees with Duffy's account.
Following the removal of 50% of unauthorized immigrants, in the short run average native real wages rise 0.15% nationally, driven by an increase in the capital-labor ratio. In the long run, however, native real wages fall in every state, and by 0.33% nationally, as capital gets decumulated in response to a lower population. Consumer prices in the sectors intensive in unauthorized workers – such as Farming – rise by about 1% relative to the price of the average consumption basket, while most other sectors experience negligible relative price changes.
That research result is from
The post The economics of mass deportation appeared first on Marginal REVOLUTION.
Mark Gurman, reporting for Bloomberg:
After planning to include the new capabilities in iOS 26.4 — an operating system update slated for March — Apple is now working to spread them out over future versions, according to people familiar with the matter. That would mean possibly postponing some features until at least iOS 26.5, due in May, and iOS 27, which comes out in September. [...]
In recent days, Apple instructed engineers to use the upcoming iOS 26.5 in order to test new Siri features, implying that the functionality may have been moved back by at least one release. Internal versions of that update now include a notice describing the addition of some Siri enhancements. One feature is especially likely to slip: the expanded ability for Siri to tap into personal data. That technology would let users ask the assistant to, say, search old text messages to locate a podcast shared by a friend and immediately play it.
Internal iterations of iOS 26.5 also include a settings toggle allowing employees to enable a “preview” of that functionality. That suggests Apple is weighing the idea of warning users that the initial launch is incomplete or may not work reliably — similar to what it does with beta tests of new operating systems.
When Gurman began reporting about personalized Siri delays a year ago, his reporting turned out to be exactly right. If these features are going to drop in iOS 26.4, they should be in pretty good shape right now internally. If they’re in bad shape right now in internal builds, it’s really hard to see how they could drop in iOS 26.4. And once you start talking about iOS 26.5 (let alone 26.6), we’d be getting really close to WWDC, where Apple’s messaging will turn to the version 27 OSes.
Something still seems rotten.
Launch Complex 39A at NASA's Kennedy Space Center in Florida is accustomed to getting makeovers. It got another one Wednesday with the removal of the Crew Access Arm used by astronauts to board their rides to space.
Construction workers first carved the footprint for the launch pad from the Florida wetlands more than 60 years ago. NASA used the site to launch Saturn V rockets dispatching astronauts to the Moon, then converted the pad for the Space Shuttle program. The last shuttle flight lifted off from Pad 39A in 2011, and the agency leased the site to SpaceX for use as the departure point for the company's Falcon 9 and Falcon Heavy rockets.
SpaceX started launching from Pad 39A in 2017, then installed a new Crew Access Arm on the pad's tower the following year, replacing the aging shuttle-era arm that connected to the hatches of NASA's orbiters. SpaceX added the new arm ahead of the first test flight of the company's human-rated Crew Dragon spacecraft in 2019. Astronauts started using the pathway, suspended more than 200 feet above the pad surface, beginning with the first crew flight on a Dragon spacecraft in 2020.
China's space program, striving to land astronauts on the Moon by 2030, carried out a test flight of a new reusable booster and crew capsule late Tuesday (US time), and the results were spectacular.
The demonstration "marks a significant breakthrough in the development of [China's] manned lunar exploration program," the China Manned Space Agency (CMSA) said in a statement. China and the United States are racing to accomplish the next human landing on the Moon in a competition for national prestige and lunar resources. The Long March 10 rocket and Mengzhou spacecraft, both tested Tuesday, are core elements of China's lunar architecture.
The launch of a subscale version of the Long March 10 rocket, still in development, provided engineers with an opportunity to verify the performance of an important part of the new Mengzhou capsule's safety system. The test began with liftoff of the Long March 10 booster from a new launch pad at Wenchang Space Launch Site on Hainan Island, China's southernmost province, at 10 pm EST Tuesday (03:00 UTC or 11 am Beijing time Wednesday).
The OC Register reports that the gods are angry with Disney:
ORLANDO, Fla. (AP) — A Walt Disney World worker in Florida was injured while attempting to stop a large runaway prop boulder from rolling into seated spectators at the Indiana Jones live show.
The worker at the “Indiana Jones Epic Stunt Spectacular” at the Disney’s Hollywood Studios park was knocked to the ground by the 400-pound prop boulder after it moved off its track on Tuesday and started rolling toward audience members. Another worker stopped the boulder before it reached the spectators.
“But Trump achieved Operation Warp Speed in his first term.” File that under the accuracy of broken clocks:
I’ve often wondered how long it would be before the global political re-alignment that we are seeing begins to impact the economic policy views of the various political parties. The Economist has an interesting piece on “low-tax lefties”:
On the left, tax has turned from a fundamental bargain with the state to a cost-of-living issue. Why should young grumpy professionals who dominate the British left pay more when they receive so little? “Nick, 30 ans”, a French meme about an overtaxed young professional, is beloved by the online right in Britain, who assume that fed up yuppies will flock to the right for lower taxes. Run this demographic through a pollster’s table and it soon becomes clear “Nick” probably voted Labour at the last election. Would he still, if Labour put up his taxes? “Cut bills, tax billionaires,” says Mr Polanski. After all, “Nick” is not a billionaire.
Back in 2008 I wrote a paper entitled The Great Danes. Ever since then, I’ve been obsessed with Denmark (a country I’ve never visited.) This caught my eye:
On December 30th PostNord will take things further: after 400 years, it will end its collection and delivery of letters entirely.
Denmark will be the first European country to do so.
And on a less positive note, this one too:
In Britain, natives and foreign-born people have almost identical employment rates, and migrant employees earn more. In Denmark, by contrast, natives are employed at substantially higher rates than immigrants or their descendants. The PISA education tests carried out by the oecd, a club of mostly rich countries, show that the children of migrants fare poorly in Denmark and well in Britain (see chart 2). Indeed, migrants’ children in Britain score higher in both maths and reading than native Danes.
The two countries have different immigration traditions. Like many European countries, Denmark opened its labour market to “guest workers” in the 1960s, implying that anyone who arrived was temporary. Britain drew from its current and former colonies. Although Commonwealth migrants suffered appalling racism, they clung to the view that they were fully British, and eventually ground almost all white Britons into agreeing.
Much of the world (including the US) is moving away from neoliberalism. The Vietnamese have a better idea:
In May, Vietnam issued Resolution 68, recasting the private sector as “the most important driving force” of the economy and aiming to boost its size. The new law promises easier access to land, capital and regulatory permissions for private firms. It aims to empower smaller businesses, as well as spurring conglomerates to compete abroad. A range of other initiatives are in motion, too, from supercharging Vietnam’s r&d capacity to transforming the port city of Da Nang into a global financial hub.
Perhaps most importantly, Mr Lam has directed Vietnam’s bureaucrats to move with haste. Too often in the past their aversion to risk has stood in the way of dynamism. He has abolished five ministries and eliminated an entire layer of the bureaucracy. He is reducing the number of provinces from 63 to 34. The civil service is set to shrink by 100,000 jobs.
In contrast, American economic policy (under both Biden and Trump) is weakening the hand of China’s neoliberals:
China is confident of its leverage over America. That swagger is hard for trade partners to take. But its intransigence has still deeper roots. China’s rulers like their plan to dominate the commanding heights of global manufacturing, and do not wish to change.
Reform-minded Chinese share foreigners’ fears that this manufacturing drive is unsustainable. But party bosses see Mr Trump’s adoption of Chinese-style industrial policies, including government demands for stakes in leading companies, as an endorsement of their own approach. Equally, they feel vindicated in their obsession with self-reliance. Their distrust of America is now near-total, after Mr Trump’s attempts to choke off China’s access to American technologies, interspersed with campaigns to sell China more of them. America “made a huge mistake”, says the Chinese economist. It “woke up China”, but could not prevent the country from developing world-beating industries.
Mr Trump came to power promising a manufacturing boom for the ages. It would be awkward if he succeeds, but in China.
Iraq still has major problems, but an article in the Economist suggests that things are getting better:
[Mr Sudani] oversees powerful investment committees that can swiftly approve projects. “What we used to do in a year or two, they can now do in one sitting,” says Namir al-Akabi, chairman of Amwaj, one of Iraq’s largest real-estate firms, which is throwing up apartment blocks across Baghdad.
Progress goes beyond the capital. Mr Sudani has digitised many government services. The passport office in Baghdad issues new travel documents within 45 minutes; officials claim they are the fastest in the world. Until 2023, annual customs income had never exceeded 900bn Iraqi dinars ($690m). This year it is expected to exceed 3trn dinars. The days of dodging fees by importing containers of iPhones as bananas are over, thanks to digitisation, says one un official.
Government salaries are no longer paid in cash. Payments for government services, such as those speedy new passports, can be made only with a bank card. Five years ago almost no one in Iraq had one; today they are essential.
Morocco has recently adopted a number of economic reforms, which seem to be paying off:
The results include a high-speed train that runs up the country’s west coast. On the road into Tanger Med, drivers pass endless wind and solar farms, as well as special economic zones ready to welcome investment.
Perhaps the biggest draw for European firms is a free-trade agreement that was struck with the EU in 2000. Preferential deals with 60 other countries have followed. This drew big investments by carmakers such as Renault and later Stellantis . . .
Last year Morocco became the biggest exporter of cars and parts to Europe, surpassing China and Japan.
Morocco is a manufacturing powerhouse? Who knew?
In some cases, Trump is indirectly pushing other countries in a positive direction. Here’s the Economist:
When it comes to the bilateral relationship, Mr Carney acknowledges Mr Trump’s oft-repeated claim that the United States “has the cards”. But he insists that there is “not just one game” and that Canada is “going to play other games with other players”. He has cut taxes and simplified regulation to foster an infrastructure boom at home; he says he will double Canada’s rate of home-building; he is working to eliminate the significant trade barriers between Canada’s provinces. The other players are Europe and Asia, with which Mr Carney wants to expand trade dramatically. “We can give ourselves far more than the United States can take away,” says Mr Carney. . . .
Mr Carney also wants to get Canada “building infrastructure at a pace and a scale that we haven’t done for generations”. That includes oil pipelines, port expansion, electricity transmission lines, critical-mineral mines and, of course, housing. He has cancelled a planned rise in capital-gains tax and is “changing the way we do regulation in this economy so there’s much greater certainty”, in the hope of stimulating investment.
And India:
The European Union and India concluded a free-trade agreement after almost two decades of negotiations, part of an effort to deepen economic ties that has gained momentum due to the Trump administration’s aggressive tariff policies.
Tariffs are making the US a very expensive place to do manufacturing:
LOL:
California is losing population, but it’s not because lots of people are leaving the state. Instead, hardly anyone is moving here (except me in 2017.)
Note that 2.1% of Americans changed states in 2024. That same year, 1.7% of Californians left for other parts of the U.S. That below-average departure rate is also down from California’s 2% departure rate in 2021-23.
California is among the states with the most loyal residents. For 2024, only Michigan (1.3%), Ohio (1.5%), and Texas (1.6%) had smaller departure rates.
Oh, Florida’s exit rate was 2.2% of its population, ranking No. 30.
California is sort of like another country, where gasoline costs $5/gallon and a modest ranch house can cost $1 or $2 million. Just as people rarely move from one country to another, people rarely move in or out of California.
Left wingers: Don’t have whites play black roles.
Right wingers: Don’t have blacks play white roles.
Me: End identity politics. Become a colorblind society.
The Economist recently interviewed a bunch of patriotic MAGA-types who work at some of those innovative military tech companies in El Segundo, California. This caught my eye:
Asked why Neros does this in California, rather than in a more pro-business, pro-Pentagon state like Texas, he smiles mockingly. “The best engineers in the world don’t want to live in Texas,” he snorts.
If you wish to know what he’s talking about, check out this tweet.
Seeing this headline in the Free Press got me thinking about how the alt-right decides to assign different moral worth to different ethnic groups:
The media is obsessed with sex and violence, which is why there’s so much coverage of ICE killings and the Epstein files. Meanwhile, this story has attracted relatively little attention. (The National Review has more detailed information.)
The Economist has a good article explaining how Britain’s Reform party is beginning to adopt the issue positions of the Conservatives:
If the personnel are beginning to look similar, so is the governing philosophy. Britain’s fundamental problem is a lack of economic growth, which Reform has little intention of solving. The simplest policy prescriptions that would make Britain richer—making it easier to build, being open to foreign talent, making trade easier with Europe—are anathema to Reform, just as they were to the Tories. By the end of their tenure the Conservatives relied on elderly voters who had little interest in economic growth. Why should they? They will live through the upheaval yet not feel the benefit. Reform is repeating this. By 2024 the Tories were a right-wing party wedded to policies that will make Britain poorer. Come 2029 Reform will accept that mantle.
In recent weeks, the Economist has published a series of articles suggesting that alcohol may have helped to create the modern world:
Edward Slingerland of the University of British Columbia argues that alcohol was not merely a companion of progress but a precondition. His “drunk hypothesis”, proposed in 2021, is that alcohol’s effects on the human pre-frontal cortex drove the emergence of large-scale, stratified societies by allowing “fiercely tribal primates to co-operate with strangers”. Human societies are so complex, and depend so much on creativity and the cultural transmission of knowledge, that humans could not have built civilisation without first getting drunk enough to intermingle and co-operate to a degree that is unusual for other species.
A few weeks later they discussed another study:
It would be wrong to minimise the real health risks associated with drinking, particularly as researchers have raised serious doubts over earlier findings of a “J-shape curve” in which those who drink moderately were thought to be healthier than both heavy drinkers and those who abstain entirely. Even so, alcohol itself often provides the lubricant around which many people socialise. Researchers at the University of Oxford noted in a paper published in 2017 that regulars at a local pub are “more socially engaged, feel more contented in their lives, and are more likely to trust other members of their community”.
You won’t find this information at MarginalRevolution! Seriously, I’m not a drinker, but that’s for health reasons, not by choice. (I love the taste of wine.)
Unlike in the West, anti-immigrant attitudes in Japan are more common in urban areas:
[A]nti-foreigner turn risks setting the LDP further at odds with the many Japanese voters who still take a moderate position on immigration. Japan’s business leaders tend to favour policies aimed at expanding the number of foreign workers. And the governors of Japan’s 47 prefectures, worried by the tone of debates, recently banded together to issue a statement in support of multiculturalism. “Xenophobia must not be tolerated,” said the leader of their association.
Perhaps surprisingly, Japanese from the countryside tend to be more open to newcomers than urbanites. The reason is that labour shortages have hit rural areas hardest . . . “We’ll be in deep trouble if the foreigners stop coming here,” says Mizuno Daisuke, the boss of a fishing co-operative on the island of Shikoku. Half of his employees are Indonesian. “We should only be saying thanks.”
23. Isn’t this ironic?
Chinese firms are already hearing loud demands from European and other governments to transfer more advanced technologies to foreign partners, and to source more components from local supply chains. The European Union is debating “buy European” local-content rules for public procurement contracts, in a bid to give such demands some bite. Still, many Chinese businesses will try to keep their most valuable operations at home.
What would Michael Scott say?
This Economist story is worth thinking about:
In 2024 Brightline transported 2.8m people in Florida. But 41 people also died in accidents involving its trains, according to the Federal Railroad Administration (FRA). Its data excludes suicides. Since launching in 2018 over 180 people have been killed, including suicide cases, according to data collated by the Miami Herald. . . . By international standards, the death toll is astonishing. In the year to March 2025 Britain’s railways transported 1.7bn people and around a dozen people were killed on train tracks.
On a per trip basis, Brightline is more than 2000 times more dangerous. The per mile difference may be smaller but certainly wouldn’t explain this enormous difference. A small airline like Spirit Airlines carries 44 million passengers per year. Imagine if they had 15 commuter plane crashes every year, with each plane carrying 41 passengers.
To be clear, I’m not suggesting that we pay too little attention to Brightline deaths; I’m suggesting we pay too much attention to airline crashes. But why the difference?
When sober, all civilized people insist that stupid people don’t deserve to die. But throw back a few beers and people start making jokes about “Florida Man” and “Darwin Award winners”:
The FRA data show that all of the accidental fatalities on Brightline tracks last year involved trespassers. “If you’re pointing the finger at the train, you’re looking at the wrong source of the problem,” says Alfred Sanchez, head of the Greater Miami Chamber of Commerce. On his daily commute home he used to cross a Brightline track and would see people trying to manoeuvre past the guardrails—in cars, on foot and on bikes. “I don’t know why people don’t take it more seriously here, but they do not take it more seriously,” he says.
At least subconsciously, this may explain why we tolerate many more deaths in some areas than in others.
From Charles I. Jones and Christopher Tonetti:
How muchof past economic growth is due to automation, and what does this imply about the effects of A.I. and automation in the coming decades? We perform growth accounting using a task-based model for key sectors in the U.S. economy. Historically, TFP growth is largely due to improvements in capital productivity. The annual growth rate of capital productivity is at least 5pp larger than the sum of labor and factor-neutral productivity growth. The main benefit of automation is that we use rapidly-improving machines instead of slowly-improving humans on anincreasing set of tasks. Looking to the future, we develop an endogenous growth model in which the production of both goods and ideas is endogenously automated. We calibrate this model based on our historical evidence. Two key findings emerge. First, automation leads economic growth to accelerate over the next 75 years. Second, the acceleration is remarkably slow. By 2040, output is only 4% higher than it would have been without the growth acceleration, and by 2060 the gain is still only 19%. A key reason for the slow acceleration is the prominence of “weak links” (an elasticity of substitution among tasks less than one). Even when most tasks are automated by rapidly improving capital, output is constrained by the tasks performed by slowly-improving labor.
And an important sentence from the paper itself:
…, the key gain from automation is that it allows production of a task to shift away from slowly-improving human labor to rapidly-improving machines.
The authors stress that those are preliminary results, and the numbers are likely to change. For the pointer I thank the excellent Kurtis Hingl, who is also my research assistant.
The post Past Automation and Future A.I.: How Weak Links Tame the Growth Explosion appeared first on Marginal REVOLUTION.
GLM-5: From Vibe Coding to Agentic Engineering
This is a huge new MIT-licensed model: 754B parameters and 1.51TB on Hugging Face twice the size of GLM-4.7 which was 368B and 717GB (4.5 and 4.6 were around that size too).It's interesting to see Z.ai take a position on what we should call professional software engineers building with LLMs - I've seen "Agentic Engineering" show up in a few other places recently. most notable from Andrej Karpathy and Addy Osmani.
I ran my "Generate an SVG of a pelican riding a bicycle" prompt through GLM-5 via OpenRouter and got back a very good pelican on a disappointing bicycle frame:
Via Hacker News
Tags: definitions, ai, generative-ai, llms, ai-assisted-programming, pelican-riding-a-bicycle, llm-release, vibe-coding, openrouter, ai-in-china, glm
cysqlite - a new sqlite driver
Charles Leifer has been maintaining pysqlite3 - a fork of the Python standard library'ssqlite3 module that makes it much easier to run upgraded SQLite versions - since 2018.
He's been working on a ground-up Cython rewrite called cysqlite for almost as long, but it's finally at a stage where it's ready for people to try out.
The biggest change from the sqlite3 module involves transactions. Charles explains his discomfort with the sqlite3 implementation at length - that library provides two different variants neither of which exactly match the autocommit mechanism in SQLite itself.
I'm particularly excited about the support for custom virtual tables, a feature I'd love to see in sqlite3 itself.
cysqlite provides a Python extension compiled from C, which means it normally wouldn't be available in Pyodide. I set Claude Code on it and it built me cysqlite-0.1.4-cp311-cp311-emscripten_3_1_46_wasm32.whl, a 688KB wheel file with a WASM build of the library that can be loaded into Pyodide like this:
import micropip await micropip.install( "https://simonw.github.io/research/cysqlite-wasm-wheel/cysqlite-0.1.4-cp311-cp311-emscripten_3_1_46_wasm32.whl" ) import cysqlite print(cysqlite.connect(":memory:").execute( "select sqlite_version()" ).fetchone())
(I also learned that wheels like this have to be built for the emscripten version used by that edition of Pyodide - my experimental wheel loads in Pyodide 0.25.1 but fails in 0.27.5 with a Wheel was built with Emscripten v3.1.46 but Pyodide was built with Emscripten v3.1.58 error.)
You can try my wheel in this new Pyodide REPL i had Claude build as a mobile-friendly alternative to Pyodide's own hosted console.
I also had Claude build this demo page that executes the original test suite in the browser and displays the results:
Via lobste.rs
Tags: python, sqlite, charles-leifer, webassembly, pyodide, ai-assisted-programming, claude-code
We do not usually do venture capitalists on the Core Memory podcast. They can be a lot and like to hear themselves talk a bit too much. (Not you! The other ones – Ed.)
But, for Peter Barrett, we will always make an exception. He’s a general partner at Playground Global and is one of those people who knows an awful lot about an awful lot of things. He is one of my favorite people to listen to and gets my mind racing with tons of new ideas every time we speak.
Self-taught, Peter spent the early part of his career as a force of nature in the software industry. He was visited by the Men In Black as a teenager. He helped start Rocket Science Games, which was the hottest video game maker in town before it wasn’t. While there, Peter happened to employ a young intern named Elon Musk. . .
Later, Peter would be part of the team that created WebTV and a longtime distinguished engineer at Microsoft, working alongside Bill Gates and Steve Ballmer.
These days Peter goes deep on deep tech at Playground. As such, we talk quantum computing, the insane world of AI agents, nuclear power, data centers in space (and why they won’t work) and whether or not humans should be in total panic.
The Core Memory podcast is on all major platforms and on our YouTube channel over here. If you enjoy the show, please leave a review and tell your friends.
This podcast is sponsored by Brex, the intelligent finance platform built to help companies spend smarter and move faster.
We run on Brex and so should you. Learn more about Brex right here.
The podcast is also made possible by E1 Ventures, which backs the most ambitious founders and start-ups.
Not Howard Lutnick. Also, a short post today.
There’s a longstanding tradition in American politics of what Richard Hofstadter famously called the paranoid style – a way of thinking that sees conspiracies lurking everywhere. MAGA-world is particularly riddled with conspiracy thinking – from George Soros and Jewish space lasers, QAnon and the Great Replacement Theory, to Italian satellites hacking into voting machines to deliver the 2020 election to Joe Biden.
But these are far-fetched fantasies. The truth is far more banal and shocking.
There are people in positions of great power in the U.S. government engaged in evil conspiracies against everything that is good and decent. Their conspiracies are far more extensive and damaging than almost anyone imagined. But there are no evil masterminds behind this. Only amoral, stupid grifters like Howard Lutnick.
During Trump 47’s first year, Lutnick, the Commerce secretary, was an omnipresent spokesman for Donald Trump’s policies, a constant presence on TV, especially the Sunday talk shows.
He was not impressive in that role. Unlike Scott Bessent, he lacked any hint of gravitas. He doesn’t have Pete Hegseth’s hair. Moreover, Lutnick’s Trump boosterism has been consistently and embarrassingly incompetent.
The only waves he has made are a result of his exceptional combination of stupidity and offensive tone-deafness.
Thus he promised to revive U.S. manufacturing by bringing back “the work of millions and millions of human beings screwing in little, little screws.” Lutnick, a billionaire, dismissed concerns about chaos at the Social Security Administration by saying that his mother-in-law wouldn’t complain about a missed check. He gave a Europe-bashing speech to a private dinner at Davos so offensive that Christine Lagarde, president of the European Central Bank, walked out.
And in Congressional testimony today, Lutnick admitted that he visited Epstein Island, but said that he did so with his wife, nannies and children, and asserted that “We left with all of my children.”
It would be tempting to dismiss Lutnick as a buffoon. Yet despite his intelligence deficit, he sits at the intersection of not one but at least two ugly conspiracies.
Before joining Trump’s cabinet, Lutnick ran the Wall Street firm Cantor Fitzgerald — presenting a huge potential conflict of interest that he claims to have ended by turning the business over to … his sons. Cantor Fitzgerald, in turn, is intimately linked to Tether, a cryptocurrency that is highly profitable because it has become a favorite channel for money-laundering by international criminals.
Nor was money-laundering through cryptocurrency the only criminal conspiracy to which Lutnick was, at the very least, adjacent. Lutnick has in the past vehemently denied having any association with Jeffrey Epstein, insisting that he severed all contact with the pedophile ringleader in 2005. But even the highly limited, extremely redacted release of the Epstein files — everything we’ve seen reeks of a major coverup — shows that he was flat-out lying. Not only did he stay in close contact with Epstein, the two men appear to have gone into business together.
But, at this point, who could possibly be surprised? The more we learn, the more pedophilia and criminal use of cryptocurrency look related, even like different aspects of a single conspiracy. Epstein, it turns out, was a major early investor in the crypto industry. In the backrooms of MAGA-land, passing around under-age girls is a lot like passing around insider crypto deals.
In any previous administration, Lutnick’s naked conflicts of interest and his Epstein lies would have led to his immediate departure. But Trump 47 is using his position to massively enrich himself, and whatever the Justice department is hiding, what we already know about Trump’s personal history is damning — “Grab ‘em by the pussy. You can do anything.”
Lutnick may be under wraps for a while, but don’t expect him to resign. Pushing him out would be a tacit admission that huge conflicts of interest, family business that enables crime and association with sexual predators are bad. Oh, and let’s not forget jaw-dropping stupidity. Not going to happen.
While MAGA-world’s fantasy villains like George Soros are brilliant and subtle, MAGA’s real villains are uncouth and dim-witted. Yet they carry out their sinister schemes in broad daylight. For all they need to flourish is utter shamelessness, along with the backing of a corrupt administration and a corrupt political party.
So it’s worth remembering Hannah Arendt’s observations about the architects of Hitler’s genocide, which led her to coin the phrase “the banality of evil”. As Arendt noted, the horrors of Nazism were not inflicted by brilliant geniuses, but through the normalization of thoughtless, amoral behavior that eventually turned into evil. Thus while Lutnick appears on the surface like a dim-witted backroom grifter, he is a warning of something far more sinister and malign lurking below.
MUSICAL CODA
An AI-generated report, delivered directly to the email inboxes of journalists, was an essential tool in the Times’ coverage. It was also one of the first signals that conservative media was turning against the administration [...]
Built in-house and known internally as the “Manosphere Report,” the tool uses large language models (LLMs) to transcribe and summarize new episodes of dozens of podcasts.
“The Manosphere Report gave us a really fast and clear signal that this was not going over well with that segment of the President’s base,” said Seward. “There was a direct link between seeing that and then diving in to actually cover it.”
— Andrew Deck for Niemen Lab, How The New York Times uses a custom AI tool to track the “manosphere”
Tags: generative-ai, new-york-times, journalism, ai, data-journalism, llms
r = OpenAI().responses.create( model="gpt-5.2", tools=[ { "type": "shell", "environment": { "type": "container_auto", "skills": [ { "type": "inline", "name": "wc", "description": "Count words in a file.", "source": { "type": "base64", "media_type": "application/zip", "data": b64_encoded_zip_file, }, } ], }, } ], input="Use the wc skill to count words in its own SKILL.md file.", ) print(r.output_text)
I built that example script after first having Claude Code for web use Showboat to explore the API for me and create this report. My opening prompt for the research project was:
Run uvx showboat --help - you will use this tool later
Fetch https://developers.openai.com/cookbook/examples/skills_in_api.md to /tmp with curl, then read it
Use the OpenAI API key you have in your environment variables
Use showboat to build up a detailed demo of this, replaying the examples from the documents and then trying some experiments of your own
Tags: ai, openai, generative-ai, llms, ai-assisted-programming, skills, showboat
Nestled among high snowy peaks in northern Italy, Cortina d’Ampezzo is hosting athletes in the 2026 Winter Olympics and Paralympics who are skiing, sliding, and curling toward a spot on the podium. The scenic mountain town is the co-host, along with Milan, of the international sporting extravaganza.
Cortina sits within the Dolomites, a mountain range in the northern Italian Alps known for its sheer cliffs, rock pinnacles, tall peaks, and deep, narrow valleys. In this three-dimensional oblique map, several peaks over 3,000 meters (10,000 feet) tall rise above the town. To create the map, an image acquired with the OLI (Operational Land Imager) on Landsat 8 on January 27, 2026, was overlaid on a digital elevation model.
Tofana di Mezzo, the third-highest peak in the Dolomites at 3,244 meters (10,643 feet), is the site of the Tofane Alpine Skiing Centre, the venue for the Olympic women’s Alpine skiing and all Paralympic skiing events. Competitors on the Olympia delle Tofane course descend 750 meters (2,460 feet), reaching high speeds and catching big air along the way. A highlight is the steep, 33-degree drop through the Tofana Schuss, a chute bounded by tall rock walls near the top of the course.
More adrenaline-filled races are taking place at the Cortina Sliding Centre, the venue for bobsled, luge, and skeleton events. Athletes are competing on a rebuilt version of the track used in the 1956 Olympics, hosted by Cortina. And curlers, trading speed for strategy, are going for gold at the Cortina Curling Olympic Stadium, built for the 1956 Olympic figure skating competition and opening ceremony. (There is indeed a theme: almost all of the 2026 Games are being held in existing or refurbished facilities.)




January 27, 2026
These Landsat images show Cortina and its surrounding alpine terrain in natural color and false color. The band combination (6-5-4) highlights areas of snow (light blue), while steep, mostly snow-free cliffs stand out as areas of light brown, and forests appear green.
Locations across the Italian Alps join Cortina in hosting the snow sports, which also include cross-country skiing, ski jumping, ski mountaineering, and snowboarding. As with many past Olympics, the 2026 Winter Games are manufacturing snow at the various venues to ensure consistent conditions. New high-elevation reservoirs were created to store water for snowmaking, according to reports. Automated systems are being used to limit snow production to the minimum amount required, and most snowmaking operations are being powered by renewable energy, the International Olympic Committee said.
Snowfall in northern Italy was below average at the start of the season, but a storm on February 3—three days before the opening ceremony—eased some of the need for snowmaking. Still, snow coverage and the ability of Winter Olympic venues to maintain consistent conditions are areas of concern as global temperatures rise. Researchers studying the issue have suggested several ways to address this, including holding competitions at higher elevations, choosing regional or multi-country hosts, and shifting the Paralympic Games from early March to January or February when it’s typically colder and snowier.
NASA Earth Observatory images by Michala Garrison, using Landsat data from the U.S. Geological Survey and elevation data from TINITALY. Story by Lindsey Doermann.
Stay up-to-date with the latest content from NASA as we explore the universe and discover more about our home planet.

About 2,900 Olympic athletes have converged on northern Italy to sort out who is the GOAT—or perhaps the stoat.

Very wet—but very warm—weather in the western U.S. has left many mountainous regions looking at substantial snowpack deficits.

Satellites observed a frozen landscape across much of the country after a massive winter storm.
The post Reaching Top Speed in the Dolomites appeared first on NASA Science.
In each successive generation of code creation thus far, we’ve abstracted away the prior generation over time. Usually, only a small percentage of coders still work on the lower layers of the stack that used to be the space where everyone was working. I’ve been coding long enough that people were still creating code in assembly when I started (though I was never any good at it!), though I started with BASIC. Since BASIC was an interpreted language, its interpreter would write the assembly language for me, and I never had to see exactly what assembly language code was being created.
I definitely did know old-school coders who used to, at first, check that assembly code to see if they liked the output. But eventually, over time, they just learned to trust the system and stopped looking at what happened after the system finished compiling. Even people using more “close to the metal” languages like C generally trust that their compilers have been optimized enough that they seldom inspect the output of the compiler to make sure it was perfectly optimized for their particular processor or configuration. The benefits of delegating those concerns to the teams that create compilers, and coding tools in general, yielded so many advantages that that tradeoff was easily worth it, once you got over the slightly uncomfortable feeling.
In the years that followed, though a small cohort of expert coders who would hand-tune assembly code for things like getting the most extreme performance out of a gaming console, most folks stopped writing it, and very few new coders learned assembly at all. The vast majority of working coders treat the output from the compiler layer as a black box, trusting the tools to do the right thing and delegating the concerns below that to the toolmakers.
We may be seeing that pattern repeat itself. Only this time, the abstraction is happening through AI tools abstracting away all the code. Which can feel a little scary.
Just as interpreted languages took away chores like memory management, and high-level languages took away the tedium of writing assembly code, we’re starting to see the first wave of tools that completely abstract away the writing of code. (I described this in more detail in the piece about codeless softwarerecently.
The individual practice of professionalizing the writing of software with LLMs seems to have settled on the term “agentic engineering”, as Simon Willison recently noted.
But the next step beyond that is when teams don’t write any of the code themselves, instead moving to an entirely abstracted way of creating code. In this model, teams (or even individual coders):
With this kind of model deployed, the software that is created can essentially be output from the system in the way that assembly code or bytecode is output from compilers today, with no direct inspection from the people who are directing its creation. Another way of thinking about this is that we’re abstracting away many different specific programming languages and detailed syntaxes to more human-written Markdown files, created much of the time in collaboration with these LLM tools.
Presently, most people and teams who are pursuing this path are doing so with costly commercial LLMs. I would strongly advocate that most organizations, and especially most professional coders, be very fluent in ways of accomplishing these tasks with a fleet of low-cost, locally-hosted, open source/open-weight models contributing to the workload. I don’t think they are performant enough yet to accomplish all of the coding tasks needed for a non-trivial application yet, but there are a significant number of sub-tasks that could reasonably be delegated. More importantly, it will be increasingly vital to ensure that this entire “codeless compilation” stack for agentic engineering works in a vendor-neutral way that can be decoupled from the major LLM vendors, as they get more irresponsible in their business practices and more aggressive towards today’s working coders and creators.
For many, those worries about Big AI are why their reaction to these developments in agentic coding make them want to recoil. But in reality, these issues are exactly why we desperately need to engage.
Many of the smartest coders I know have a lot of legitimate and understandable misgivings about the impact that LLMs are having on the coding world, especially as they’re often being evangelized by companies that plainly have ill intent towards working coders. It is reasonable, and even smart, to be skeptical of their motivations and incentives.
But the response to that skepticism is not to reject the category of technology, but rather to capture it and seize control over its direction, away from the Big AI companies. This shift to a new level of coding abstraction is exactly the kind of platform shift that presents that sort of opportunity. It’s potentially a chance for coders to be in control of some part of their destiny, at a time when a lot of bosses clearly want to get rid of as many coders as they can.
At the very least, this is one area where the people who actually make things are ahead of the big platforms that want to cash in on it.
I think a lot of coders are going to be understandably skeptical. The most common concern is, “I write really great code, how could it possibly be good news that we’re going to abstract away the writing of code?”. Or, “How the hell could a software factory be good news for people who make software?”
For that first question, the answer is going to involve some grieving, at first. It may be the case that writing really clean, elegant, idiomatic Python code is a skill that will be reduced in demand in the same way that writing incredibly performant, highly-tuned assembly code is. There is a market for it, but it’s on the edges, in specific scenarios. People ask for it when they need it, but they don’t usually start by saying they need it.
But for the deeper question, we may have a more hopeful answer. By elevating our focus up from the individual lines of code to the more ambitious focus on the overall problem we’re trying to solve, we may reconnect with the “why” that brought us to creating software and tech in the first place. We can raise our gaze from the steps right in front of us to the horizon a bit further ahead, and think more deeply about the problem we’re trying to solve. Or maybe even about the people who we’re trying to solve that problem for.
I think people who create code today, if they have access to super-efficient code-creation tools, will make better and more thoughtful products than the financiers who are currently carrying out mass layoffs of the best and most thoughtful people in the tech industry.
I also know there’s a history of worker-owned factories being safer and more successful than others in their industries, while often making better, longer-lasting products and being better neighbors in their communities. Maybe it’s possible that there’s an internet where agentic engineering tools could enable smart creators to build their own software factories that could work the same way.
Roden Readers —
Hello from the backside of some Tokyo snow. It was purty. I miss snow. Maybe not as much snow as New York’s been gettin’, but still — I wish we had a good five or six snow days in Tokyo each year. As is, we’re lucky to get one, and then it’s all gone in twenty-four hours. I still remember a Valentine’s Day night some thirteen years or so ago — a mega blizzard hit Tokyo. I popped out of a restaurant with a friend and we just laughed and laughed at all the snow, everywhere.

Day after day we’re seeing more signs of Donald Trump’s slipping grip not only on public opinion, but at the margins of the GOP itself. But I thought it was a good time to remind ourselves that Donald Trump isn’t the only problem. Yes, there’s the GOP, which could easily dispatch him at any point if he didn’t have an iron hold over the party. There’s the 30%-40% of voters who are solidly in the MAGA camp. Without them, Trump’s nothing. I don’t mean either of those. I’m talking about the global authoritarian movement, which includes and is even perhaps led by Trump. But it exists quite apart from him and has roots in some of the wealthiest and most powerful people and governments around the world.
I’m talking about the Authoritarian International which includes a host of authoritarian governments around the world, the princelings of the Gulf monarchies, the sprinkling of European right-ravanchist governments, the rightward portion of Silicon Valley (which accounts for a larger and larger percentage of the top owners if not the larger community), the Israeli private intel sector, various post-Soviet oligarchs and, increasingly, the world’s billionaire class. Trump is their avatar, but they exist and are now joined together in a way that will outlive him personally and electorally.
Early in the Biden administration I talked to a U.S. hedge funder who gets invited to the confabs Mohammed bin Salman has put on for the world’s billionaires since he became the country’s de facto leader. He described that world to me, a bit about its mores, what he saw. As you’d imagine for this 21st Century kind of Kremlinology, who gets to sit next to MBS at the dinners is the subject of close scrutiny and much envy. At the last of these confabs before this conversation, Jared Kushner had been given the seat of honor at something like every dinner. He was MBS’s guy. And remember, this was when Trump was at his nadir. Maybe MBS and the Saudis just had a better view of America’s political future than I did. Certainly possible. But the bigger takeaway was: this wasn’t just a transactional relationship. Kushner and MBS and Trump and MBS were a thing in and out of office.
It is not so much an anti-democratic world — though it is certainly that — as an anti-civic world. It’s a world of private, one-off deals, mutual pledges of secrecy, often enforced by soft, mutual extortion, and above all, a rejection of democratic accountability. We saw this coming into view during the late Biden administration, when Biden was already rapidly losing public support, with Elon Musk’s increasingly brazen efforts to run U.S. foreign policy from Twitter and SpaceX. The Saudis meanwhile were trying to ease Biden out of office through the manipulation of oil prices. It was no accident that Musk was advancing a strongly pro-Russian line in Ukraine, where he was most visibly trying to undermine U.S. policy.
I’ve discussed this concept in the past. So I don’t want to belabor the point of its existence. I want to point out how its forces are arrayed against civic democracy in the U.S. — quite apart from Donald Trump. This wasn’t always the case. There didn’t use to be so many U.S. billionaires. And they characteristically had economic views which aimed to preserve their wealth. But they were not clearly on the right in the way they are now. They have moved an increasingly anti-civic democratic direction as the scale of their wealth and their identity as a class has exploded. They also weren’t so increasingly allied with primitive economy petro-states of the Gulf.
The point is that they will exist no matter what happens to Trump. They command vast economic resources; they run the governments in many countries where the government never changes; they have deep tentacles into the U.S. political system and many of its key players are from the U.S. Trump didn’t create this movement precisely. But his role in global politics over the last decade solidified it as a self-conscious group and congealed it together. Any movement of civic democratic revival in the U.S. will be menaced by its continued existence. Now is the time to think about how a revived and revitalized civic democratic movement in the U.S. could combat it and avoid being destroyed by it.
Andrew Cunningham, writing for Ars Technica at the end of January:
Apple also outlines a number of usage restrictions for the generative AI features that rely on external services. Apple says that, “at a minimum,” users will be able to generate 50 images, 50 presentations of between 8 to 10 slides each, and to generate presenter notes in Keynote for 700 slides. More usage may be possible, but this depends on “the complexity of the queries, server availability, and network availability.”
Steven Troughton-Smith, last week, after creating an entire app with OpenAI’s Codex:
This entire app used 7% of my weekly Codex usage limit. Compare that to a single (awful) slideshow in Keynote using 47% of my monthly Apple Creator Studio usage limit 👀
Something feels off here, by at least an order of magnitude (maybe two?), that creating an entire good app costs way less than creating one shitty slide deck in Keynote. It should be the other way around.
Sometimes in life, you stumble upon things that make you stop and go, “Holy shit!”
Last night, in Orange, two hours before my class at Chapman, I stumbled upon this.
I went, “Holy shit!”
And recorded it …
I asked whether the participants were affiliated with a particular group.
“No,” someone said.
Then they handed me this …
Fight on.
As of yesterday, members of Congress who sit on the House or Senate Judiciary Committees can see unredacted versions of the Epstein files the Department of Justice (DOJ) has already released. As Herb Scribner of Axios explained, the documents are available from 9:00 AM to 6:00 PM on computers in the DOJ building in Washington, D.C. The lawmakers cannot bring electronic devices into the room with them, but they are allowed to take notes. They must give the DOJ 24 hours notice before they access the files.
The Epstein Files Transparency Act required the DOJ to release all the Epstein files by December 19. Only about half of them have been released to date, and many of them are so heavily redacted they convey little information. After members of Congress complained, on Friday, January 30, Deputy Attorney General Todd Blanche said they could see the unredacted documents if they asked.
In a letter dated the next day, Representative Jamie Raskin (D-MD) immediately asked for access on behalf of the Democratic members of the House Judiciary Committee, saying they would be ready to view the files the following day, Sunday, February 1.
After viewing the files briefly yesterday, Raskin told Andrew Solender of Axios that when he searched the files for President Donald Trump’s name, it came up “more than a million times.” Raskin suggested that limiting members’ access to the files is part of a cover-up to hide Trump’s relationship with the convicted sex offender, a cover-up that includes the three million files the DOJ has yet to release despite the requirements of the Epstein Files Transparency Act. One of the files he did see referred to a child of 9. Raskin called it “gruesome and grim.”
Representative Ro Khanna (D-CA) added: “There’s still a lot that’s redacted—even in what we’re seeing, we’re seeing redacted versions. I thought we were supposed to see the unredacted versions.”
Material that has come out has already shown members of the administration and their allies are lying about their connections to Epstein. Commerce Secretary Howard Lutnick, who lived next door to Epstein for more than ten years, said in October that he had cut ties with Epstein in 2005 after visiting his home and being disgusted. The files show that in fact, Lutnick not only maintained ties with Epstein but also was in business with him until at least 2018, long after Epstein was a convicted sex offender. Members of both parties have called for Lutnick to resign.
Testifying today before the Senate Appropriations Committee, where members took the opportunity to ask him about his ties to Epstein. Lutnick acknowledged that he had had more contact with Epstein than he had previously admitted, but maintained: “I did not have any relationship with him. I barely had anything to do with him.” But even Republicans expressed discomfort with Lutnick’s visit with his family to Epstein’s private island.
Khanna called for Lutnick to resign. “In this country, we have to make a decision,” he said. “Are we going to allow rich and powerful people who were friends and had no problem doing business and showing up with a pedophile who is raping underage girls, are we just going to allow them to skate? Or, like other countries, are we going to have…accountability for the people who did that?”
In Europe the revelation that a leader had ties to Epstein has abruptly ended careers. The former British ambassador to Washington, Peter Mandelson, was fired and has created a crisis for Prime Minister Keir Starmer for appointing him. Two senior Norwegian diplomats are under investigation for gross corruption from their ties to Epstein; one of them, Mona Juul, resigned Sunday from her position as ambassador to Jordan and Iraq. Slovakia’s national security advisor Miroslav Lajčák resigned after messages between him and Epstein showed them talking about women while also discussing Lajčák’s meetings with Russian foreign minister Sergey Lavrov.
Poland announced it was launching an investigation into whether Epstein was tied to Russian intelligence. “More and more leads, more and more information, and more and more commentary in the global press all relate to the suspicion that this unprecedented paedophilia scandal was co-organised by Russian intelligence services,” Polish prime minister Donald Tusk said. “I don’t need to tell you how serious the increasingly likely possibility that Russian intelligence services co-organised this operation is for the security of the Polish state. This can only mean that they also possess compromising materials against many leaders still active today.”
Yesterday, Epstein associate Ghislaine Maxwell, who is serving 20 years in prison for sex trafficking, testified by video before the House Oversight Committee. She refused to answer any questions, invoking her Fifth Amendment right against self-incrimination. Her lawyer said she is “prepared to speak fully and honestly” if Trump grants her clemency.
Todd Lyons, the acting head of Immigration and Customs Enforcement; Rodney Scott, the commissioner of Customs and Border Protection; and Joseph Edlow, the director of U.S. Citizenship and Immigration Services, all part of the Department of Homeland Security, testified today before the House Committee on Homeland Security. As Eric Bazail-Eimil of Politico reported, Lyons defended the actions of ICE agents, saying they are properly enforcing immigration laws and that they are the real victims of the encounters that have left protesters dead or injured because the protests put agents in danger. Most Republicans backed them up, saying the Democrats are trying to stop the removal of criminals.
Democrats asked the men about federal arrests of U.S. citizens and the deaths of Renee Good and Alex Pretti and demanded changes at ICE and Border Patrol. Funding for the Department of Homeland Security will run out on February 13, and the administration officials warned members of Congress that a shutdown would disrupt their operations and thus endanger national security. Representative James Walkinshaw (D-VA) later told a reporter: “Look, all of this comes from Stephen Miller’s sick and twisted, deranged Great Replacement theory. Whether these folks here…know it or not, they’re…just pawns in Stephen Miller’s sick and twisted scheme.”
Daniel Klaidman, Michael Kaplan, and Matt Gutman of CBS News reported that the American Civil Liberties Union (ACLU) has filed a federal civil rights lawsuit after a federal raid on a popular horse racing venue in Wilder, Idaho, led to the detention of 105 undocumented immigrants as well as the temporary detention of 375 U.S. citizens or lawful residents. Only five arrests ended in criminal charges, all for unlicensed gambling.
Answering allegations that agents had used zip ties on children, both the Federal Bureau of Investigation (FBI) field office in Boise and Homeland Security spokesperson Trisha McLaughlin flatly denied the allegations. “ICE didn’t zip tie, restrain, or arrest any children,” she said. “ICE does not zip tie or handcuff children. This is the kind of garbage rhetoric contributing to our officers facing a 1,300% increase in assaults against them and an 8,000% increase in death threats.” But after photographic evidence of zip-tie bruises on a 14-year-old female U.S. citizen as well as personal testimony, the FBI changed their assertion to say no “young” children were zip-tied.
Court documents unsealed today show that the FBI raid on the warehouse in Fulton County, Georgia, that led to the seizure of 700 boxes of ballots and other election related items was based on debunked claims of fraud from 2020 election deniers. As Ashley Cleaves and Matt Cohen of Democracy Docket explained, the affidavit that informed the search warrant came from Kurt Olsen, one of the lawyers who worked with Trump to overturn the 2020 election and whom Trump has recently appointed director of election security and integrity. In the affidavit, Olsen recycled a number of debunked theories.
Legal analyst Joyce White Vance notes that, aside from the merits of the case, it appears that the statute of limitations has run out on any potential election crimes stemming from 2020. She goes on to expose the weakness of the case itself and, finally, to point out that both the General Assembly and the Georgia State Election Board that said there was no intentional fraud or misconduct in the counting of the Fulton County ballots in 2020 were Republican led. White suggests the raid was “less about bringing a meritorious criminal prosecution against specific individuals and more about casting suspicion over Fulton County’s voting system and ability to conduct a fair election.”
Today the National Governors Association cancelled its annual bipartisan meeting with the president that usually involves a business meeting and a dinner. Trump had disinvited two Democratic governors, Jared Polis of Colorado and Wes Moore of Maryland, prompting the rest of the Democratic governors to refuse to attend. “Democratic governors have a long record of working across the aisle to deliver results and we remain committed to this effort. But it’s disappointing this administration doesn’t seem to share the same goal. At every turn, President Trump is creating chaos and division, and it is the American people who are hurting as a result,” the Democratic governors wrote. “If the reports are true that not all governors are invited to these events, which have historically been productive and bipartisan opportunities for collaboration, we will not be attending the White House dinner this year. Democratic governors remain united and will never stop fighting to protect and make life better for people in our states.”
Moore is the vice-chair of the NGA. Yesterday its chair, Oklahoma’s Republican governor Kevin Stitt, wrote: “Because NGA’s mission is to represent all 55 governors, the Association is no longer serving as the facilitator for that event, and it is no longer included in our official program.”
White House press secretary Karoline Leavitt told reporters: “I just spoke with the president about this. It is a dinner at the White House. It’s the ‘People’s House.’ It’s also the president’s home, and he can invite whomever he wants to dinners and events here at the White House.”
In Washington today, a grand jury refused to indict six Democratic members of Congress for breaking a law that makes it a crime to “interfere with, impair, or influence the loyalty, morale, or discipline of the military or naval forces of the United States.” Senators Mark Kelly of Arizona, a retired Navy captain and astronaut; Elissa Slotkin of Michigan, a former CIA analyst; and Representatives Jason Crow of Colorado, a former Army Ranger; Chris Deluzio of Pennsylvania, a former Navy officer; Maggie Goodlander of New Hampshire, a Navy veteran; and Chrissy Houlahan of Pennsylvania, a former Air Force officer, recorded a video last November reminding service members that they must refuse illegal orders.
Trump called it “SEDITIOUS BEHAVIOR, punishable by DEATH!”
Although the bar for an indictment is so low that grand juries almost always return one, the Trump administration’s attempts to harass those he perceives as opponents have been so outrageous that grand juries have repeatedly refused to go along. The New York Times called today’s refusal “a remarkable rebuke.”
—
Edited at noon on February 11 to remove the information that Brad Karp had resigned from the law firm Paul Weiss over Epstein revelations. That appears to be incorrect.
—
Notes:
https://www.justice.gov/opa/media/1426091/dl
https://www.axios.com/2026/02/09/epstein-files-unredacted-congress-doj-review
https://www.axios.com/2026/02/10/trump-epstein-files-jamie-raskin-unredacted
https://www.ms.now/news/lawmakers-say-some-epstein-files-remain-redacted-despite-dojs-pledge
https://apnews.com/article/jeffrey-epstein-files-howard-lutnick-2ead9f281ba2491e0581aced50a0533d
https://www.thetimes.com/world/europe/article/woman-rapist-epstein-files-france-vpfkfvcr8
https://www.politico.eu/article/slovak-adviser-resigns-jeffrey-epstein-revelations-disclosures-fico/
https://www.theguardian.com/us-news/2026/feb/10/jeffrey-epstein-brad-karp-woman-deported
https://www.cbsnews.com/live-updates/ice-hearing-cbp-uscis-congress-immigration/
https://www.politico.com/news/2026/02/10/ice-todd-lyons-dhs-funding-hearing-00774309
https://www.cbsnews.com/news/feds-zip-tied-14-year-old-girl-idaho-raid-ice-tactics/
https://thehill.com/homenews/administration/5731782-governors-association-skips-trump-dinner/
https://www.washingtonpost.com/national-security/2026/02/10/dc-grand-jury-kelly-slotkin-pirro/
https://www.nytimes.com/2026/02/10/us/politics/trump-democrats-illegal-orders-pirro.html
https://www.documentcloud.org/documents/26927576-fulton-county-affidavit/
Bluesky:
thebulwark.com/post/3mehjzvw45a2o
Took a clyster in the morning and rose in the afternoon. My wife and I dined on a pullet and I eat heartily, having eat nothing since Sunday but water gruel and posset drink, but must needs say that our new maid Mary has played her part very well in her readiness and discretion in attending me, of which I am very glad.
In the afternoon several people came to see me, my uncle Thomas, Mr. Creed, Sir J. Minnes (who has been, God knows to what end, mighty kind to me and careful of me in my sickness). At night my wife read Sir H. Vane’s tryall to me, which she began last night, and I find it a very excellent thing, worth reading, and him to have been a very wise man.
So to supper and to bed.
Florida’s strict liability rule can sound simple: the owner pays when a dog bites. In real life, it is easy for victims to get pushed into delays, low offers, or unfair blame. Knowing what ‘strict’ covers helps you protect your health and your claim.
It also helps you speak clearly to doctors, insurers, and witnesses, and keeps you from accepting blame in a conversation that feels casual but gets recorded. Here are five ways Florida’s strict liability rule works.
Under Florida dog bite laws, the owner can be responsible even if the dog has never bitten anyone before. You usually do not have to prove the owner knew the dog was dangerous. The case often turns on whether a bite happened and whether you were lawfully on the property. Be sure to take photos of the wound, the scene, and any torn clothing.
Strict liability hinges on where you were and why you were there. If you were in a public place, the rule is straightforward. If you were on private property, your right to be there matters.
Guests, delivery workers, and people doing business usually qualify. Trespassing arguments can derail claims, so write down why you were there, who invited you, and the time.
Florida’s dog bite statute has a “Bad Dog” sign carveout for the owner’s property, with exceptions for children under six years and for cases tied to the owner’s negligence. Even when strict liability applies, your actions can still reduce damages through comparative fault.
Florida dog bite laws can bar recovery in many negligence actions if you are found more than 50% at fault. Do not aim to guess fault in the moment. Be sure to stick to facts, and avoid lines like ‘I shouldn’t have touched the dog.’
A bite can mean stitches, infection care, scar treatment, and follow-ups. It can also mean missed work, disrupted sleep, and anxiety. Strict liability is about damages suffered, so documentation drives value. Be sure to keep every receipt, and track pain days and limits, like trouble driving or lifting. Ask your doctor to note restrictions in writing.
You have two years to sue for negligence in Florida. This makes early evidence even more important because memories fade and records get harder to pull. Report the bite to animal control, get contact information for witnesses, and request the incident report. If an insurer calls, you can listen, but you do not need to rush into a recorded statement on day one.
Strict liability can make Florida dog bite cases clearer, but it does not make them effortless. Evidence and timing still drive results. Focus on care first, then preserve proof while details are fresh. If you need advice, a Florida attorney can review your facts and deadlines.
Photo: Ivan Babydov via Pexels.
CLICK HERE TO DONATE IN SUPPORT OF DCREPORT’S NONPROFIT NEWSROOM
The post Florida’s Strict Liability Rule for Dog Bites: What it Means for Victims appeared first on DCReport.org.

United Launch Alliance is gearing up for a predawn launch of its Vulcan rocket on Thursday morning, the companies first flight of the year.
The United States Space Force (USSF)-87 mission consists of multiple satellites, though the exact number was not publicly disclosed prior to launch. This is ULA’s second national security mission using its Vulcan rockets and will also be the company’s longest mission to date.
“This mission will last, total duration from launch to end of mission, 10 hours. As has been stated before, Vulcan was purpose built and this is the type of mission that the team actually designed this launch vehicle to support,” said Gary Wentz, ULA’s vice president of Atlas and Vulcan Programs.
“It’s significant payloads to very complex orbits, multi-manifested national security space, direct to GEO (geosynchronous Earth orbit). So this is tailor fit for that mission. This is why we put the Vulcan in place and designed it this way.”
Liftoff from Space Launch Complex 41 at Cape Canaveral Space Force Station is scheduled for 3:30 a.m. EST (0830 UTC), at the opening of a two-hour window. The rocket will fly on an easterly trajectory upon leaving the launch pad.
Spaceflight Now will have live coverage beginning about an hour prior to liftoff.
The 45th Weather Squadron forecast idyllic weather during the launch window, showing a 95 percent chance for good conditions at liftoff. However, they are keeping an eye on solar activity, anticipating “an increased probability of X-ray flares” during the primary and backup launch dates.
“With light winds overnight, there’s a chance shallow mist may decrease visibility in the early morning hours on Thursday morning,” launch weather officers wrote. “The surface high begins to break down on Thursday as a weak frontal boundary approaches in the afternoon. No significant precipitation is expected with this boundary.”

The rocket is flying in a VC4S configuration, meaning it has four, side-mounted Graphite Epoxy Motor (GEM) 63XL solid rocket boosters (SRBs) and a ‘standard’ 51-foot-long (15.5 m), 17.7-foot-diameter (5.4 m) payload fairing.
The SRBs will jettison less than two minutes after liftoff, which will be followed by the separation of the Vulcan booster from the Centaur 5 upper stage about five minutes into the mission.
In typical fashion for a mission designated as supporting U.S. national security interests, the exact timing of payload deployment is not disclosed.
The Vulcan rocket, designated as V-005, has the Geosynchronous Space Situational Awareness Program (GSSAP) system as its primary payload. In a statement, the U.S. Space Force’s Space Systems Command (SSC) described it as “a high-performance, dedicated Space Surveillance Network sensor” designed to augment the U.S. Space Command’s awareness of activities in geostationary Earth orbit, roughly 22,000 miles (35,000 km) above the Earth.
Neither the Space Force nor ULA would confirm in the week leading up to launch how many GSSAP satellites were manifested on the mission. During a prelaunch briefing, Wentz said, “I can’t say that it’s two or one or three or any other number.”

That said, historically, these satellites flew in pairs. The first two pairs launched on Delta 4 Medium-plus rockets, first in July 2014, followed by the next set in August 2016. The third and most recent pair, GSSAP-5 and 6 launched on an Atlas 5 rocket in January 2022.
One of the first two satellites, GSSAP space vehicle 2, was taken out of service and put in a graveyard orbit, according to a statement from Lt. Col. Greg Fertig, the then deputy program manager of the SSC’s GSSAP Program Office, to Space News in August 2023.
Fertig also told Space News at the time that the Space Force had ordered two more of these GSSAP satellites to be built by Northrop Grumman.
“In addition to the GSSAP payload, USSF-87 will include additional research, development and training systems. Guardians will use these systems to refine tactics, techniques and procedures for precision on-orbit maneuvers,” SSC said in a statement. “These systems will also enhance and validate resiliency and protection in geosynchronous orbit.”
These additional payloads will be mounted to Northrop Grumman’s ESPAStar platform, a maneuverable spacecraft capable of housing up to six hosted payloads and up to 12 deployable payloads.

The author is Brendan Greeley, and the subtitle is 500 Years of the World’s Most Powerful Money. A very well-timed book, excellent on the history of the dollar as it spans the centuries, I was happy to write a blurb for it.
The post *The Almighty Dollar* appeared first on Marginal REVOLUTION.
1. The rise of the Saudi-UAE split.
2. How Peru came the leading exporter of blueberries.
3. One view on how to respond to AI potentially taking your job.
4. Austin Steady rates books mentioned on MR.
5. 36 hours in Lagos (NYT).
6. Why economists linearize everything.
7. New Malcolm Gladwell book coming on guns and violence.
8. Apply to be a Mercatus Emerging Scholar.
The post Wednesday assorted links appeared first on Marginal REVOLUTION.

Amazon received approval Feb. 10 to deploy thousands more broadband satellites, weeks after seeking relief from a July milestone for its first-generation network after reaching only about 11% of the required deployment.
The post FCC approves thousands more Amazon Leo satellites as Gen 1 deadline looms appeared first on SpaceNews.

“When you have a fully reusable vehicle … you can send it anywhere … to the moon … past Mars” —Jared Isaacman, NASA Administrator. Imagine a future where thousands of people travel to space every year. Some stay a week. Some a month. Some never come back — they stay, build and live. Space is […]
The post Reusable launch vehicles will change everything in space, and on Earth appeared first on SpaceNews.

Eutelsat has signed a 975 million euro ($1.2 billion) France-backed export credit agency financing package to help fund 440 replacement satellites for its OneWeb LEO broadband constellation.
The post Eutelsat gets nearly 1 billion euros in French-backed ECA financing appeared first on SpaceNews.

Stoke Space has raised an additional $350 million to advance work on its reusable launch vehicle and future projects.
The post Stoke Space adds $350 million to Series D round appeared first on SpaceNews.

China took a major step forward in its lunar and human spaceflight programs late Tuesday with successful in-flight abort and rocket recovery tests.
The post China tests crewed spacecraft abort and rocket recovery in major lunar milestone appeared first on SpaceNews.
I just noticed that the ebook version of Rewriring Democracy is on sale for $5 on Amazon, Apple Books, Barnes & Noble, Books A Million, Google Play, Kobo, and presumably everywhere else in the US. I have no idea how long this will last.
Also, Amazon has a coupon that brings the hardcover price down to $20. You’ll see the discount at checkout.
Interesting research: “CHAI: Command Hijacking Against Embodied AI.”
Abstract: Embodied Artificial Intelligence (AI) promises to handle edge cases in robotic vehicle systems where data is scarce by using common-sense reasoning grounded in perception and action to generalize beyond training distributions and adapt to novel real-world situations. These capabilities, however, also create new security risks. In this paper, we introduce CHAI (Command Hijacking against embodied AI), a new class of prompt-based attacks that exploit the multimodal language interpretation abilities of Large Visual-Language Models (LVLMs). CHAI embeds deceptive natural language instructions, such as misleading signs, in visual input, systematically searches the token space, builds a dictionary of prompts, and guides an attacker model to generate Visual Attack Prompts. We evaluate CHAI on four LVLM agents; drone emergency landing, autonomous driving, and aerial object tracking, and on a real robotic vehicle. Our experiments show that CHAI consistently outperforms state-of-the-art attacks. By exploiting the semantic and multimodal reasoning strengths of next-generation embodied AI systems, CHAI underscores the urgent need for defenses that extend beyond traditional adversarial robustness.
News article.

As SpaceX and other vertically integrated space giants expand their reach, questions are growing over just how much room other small satellite companies have to build scalable businesses.
The post How much is vertical integration squeezing the smallsat opportunity? appeared first on SpaceNews.

Boulder, CO and Pasadena, CA — February 11, 2026 — Motiv Space Systems announced a contractual agreement with PickNik Robotics to support software development for NASA’s Fly Foundational Robotics (FFR) […]
The post Motiv Space Systems and PickNik Robotics Collaborate on Software for NASA’s Fly Foundational Robotics (FFR) Mission appeared first on SpaceNews.

Catalyst Campus is proud to welcome the fourth cohort of small businesses to the SDA TAP Lab – Catalyst Campus Mini Accelerator, a dynamic two-month program designed to prepare innovative […]
The post Nine innovative companies selected for fourth cohort of SDA TAP Lab – Catalyst Campus Mini Accelerator appeared first on SpaceNews.
Exit Interviews is a new podcast run by David Piegaro. I am honored to be one of the first few guests, along with Chris Christie. Think of this session as “Tyler Cowen as regional thinker.” Almost 100% fresh material, not to mention some trolling directed at Central and South Jersey, Philly too. Here is my episode.
Definitely recommended, and let us hope that David Remnick gets on soon to defend the honor of River Vale vs. Hillsdale in Bergen County…
The post My New Jersey history podcast with “Exit Interviews” appeared first on Marginal REVOLUTION.
In case you missed it, last week, in one of adjudicated rapist Donald Trump’s posting binges, he posted a video depicting Barack and Michelle Obama as apes. Republican Senator Tim Scott, who is the only Black member of the Republican Senate caucus, posted this on Xitter:

While Scott is stating the obvious (though White House spokesminion Karolina Leavitt claimed otherwise), there’s an interesting phrase that has not received much attention: the most racist. This implies that Trump has posted other things, which, while not as racist, are still racist.
Maybe someone could commit journalism and ask Senator Scott what those other not-quite-as-racist things are. Would be fascinating to find out.
Update: After I wrote this, Politico reported that the offensive post was removed, and that the Trump administration blamed it on a staffer. So who was the staffer, why did they post this, and why do they still have a job? That said, they lie all the time, so there’s no reason to believe this explanation is true. Regardless, Trump did not hold a press conference in which he denounced the video and decried the racism–because he likes the racism.
Update of the update: After the political press corps took Trump’s minions at their word when they claimed it was a staffer, Trump now admits he sent the post.
Trucking safety standards are crucial for ensuring public safety and maintaining the efficiency of the trucking industry. In Washington DC, these regulations are pivotal in reducing truck accidents and protecting the community. Compliance with these safety measures is essential for both legal and operational reasons.
In Washington DC, trucking safety standards serve as a backbone for both public safety and the commercial trucks sector. These regulations not only aim to minimize the causes of truck accidents but also play a critical role in safeguarding the lives of truck drivers and other road users. Additionally, strict safety regulations further reinforce accountability across all stakeholders. By implementing strict safety protocols, the regulatory bodies intend to create a safer environment on the roads. As you delve into these safety requirements, it becomes evident how they shape the trucking operations within the city. Washington DC truck accident lawyer is a crucial resource for navigating these complex regulations, especially when incidents occur.
Liability in a Washington DC truck accident often involves multiple parties, including the truck driver, the trucking company, and sometimes even the manufacturer of the truck or its parts. Determining who is at fault requires a thorough investigation of the accident scene, vehicle conditions, and driver actions. Legal frameworks in Washington D.C. are designed to assess these factors comprehensively, ensuring that the responsible parties are held accountable.
Insurance companies play a significant role in the liability process, as they evaluate claims and determine compensation based on the extent of damages and injuries. Trucking companies are required to carry substantial insurance policies to cover potential liabilities. This ensures that victims of truck accidents receive adequate compensation for their losses, including medical expenses, lost wages, and other damages.
The specific safety regulations governing commercial trucks in Washington D.C. are comprehensive and multifaceted. These rules encompass various aspects of trucking operations, including vehicle maintenance, driver qualifications, and operational limits. Regular inspections ensure that trucks meet safety standards, thereby reducing the risk of mechanical failures that can lead to truck accidents. Moreover, strict guidelines on driving hours are implemented to prevent fatigue-related mishaps among truck drivers. The focus is on mitigating potential hazards before they culminate in accidents.
Additionally, mandatory training programs for truck drivers are an integral part of these regulations. Each truck driver has a responsibility to adhere to strict guidelines designed to reduce errors and boost overall road safety. They cover essential skills such as defensive driving techniques and emergency response protocols. By emphasizing continuous education and skill enhancement, these measures ensure that drivers remain competent and prepared for unforeseen circumstances on the road. Consequently, compliance with these safety regulations not only enhances road safety but also contributes to smoother traffic flow and reduced congestion.
For trucking companies operating in Washington D.C., adherence to these safety regulations is non-negotiable. Companies must ensure their fleet is routinely serviced and compliant with emission standards to avoid contributing to environmental pollution. Regular audits by regulatory authorities assess compliance with these standards, ensuring that only safe vehicles are permitted on the roads. Furthermore, maintaining accurate records of vehicle maintenance and driver hours is critical for demonstrating compliance during inspections.
Hiring qualified truck drivers is another compliance requirement that companies must prioritize. It involves thorough background checks and ensuring that all drivers possess valid commercial driving licenses. Implementing internal policies that align with federal and local regulations helps companies navigate the complexities of compliance effectively. Ultimately, adhering to these stringent requirements not only averts legal consequences but also enhances the company’s reputation within the industry.
Navigating through compliance can be challenging due to several factors that companies face regularly. One significant challenge is keeping up with evolving rules that require constant updates to company policies and procedures. As new technologies emerge, integrating them into existing systems can be both costly and time-consuming. The high turnover rate among truck drivers also poses a challenge, as it necessitates ongoing recruitment and training efforts.
Moreover, smaller companies may struggle with limited resources to implement comprehensive compliance programs compared to larger counterparts. Balancing operational demands with strict adherence to regulations requires strategic planning and resource allocation. Identifying the causes of truck accidents remains a vital component of reducing collision rates and ensuring public safety.
IF YOU VALUE YOUR RIGHTS, PLEASE CONSIDER A DONATION TO OUR NONPROFIT NEWSROOM TODAY
The post Liability in a Washington DC truck accident appeared first on DCReport.org.
On March 1, I will have been a free agent for 15 years. In February 28, 2011, I said goodbye to my colleagues at Xerox, where I’d been a researcher for the previous five years. It was my first and last real job. I’d actually gone partly feral a couple of years earlier, by going full remote, but going truly untethered was still quite a shock to my system, accustomed as I had become to benevolent institutional environments for a decade at that point (I was 36).
We’re in a very different world today. One that makes me deeply tired in some ways. The spiritually nourishing rewilding environment I jumped into in 2011 has become domesticated and gentrified in ways that have quietly and insidiously reversed some of my hard-won ferality. I’ve become redomesticated to some extent, and I don’t like it.
It is becoming increasingly hard to tell the free-agent economy and the paycheck economy apart now, on both the indie consulting side and the “creator economy” side.

It is interesting to reread my going-indie post from March 1, 2011, Where the Wild Thoughts Are. This bit in particular:
…let me tell you about the one thing I have sort of worked out: a business philosophy. I call it my “Wild Thoughts” business philosophy, and it was put to the test the very week I sketched it out on the proverbial paper-napkin: two friends independently sent me the same provocative article that’s been doing the rounds, Julien Smith’s The Future of Blogs is Paid Access [this link appears to have bit-rotted now]. Reading it, I immediately realized that this was one decision about the future of Ribbonfarm that I could not postpone. For a variety of reasons, if I was going to consider paid access, I’d have to decide now.
I won’t keep you guessing: I decided against paid access or walled gardens of any sort. Ribbonfarm and the Be Slightly Evil email list [retired] are going to remain free. There will be no paywalls, no premium content and no paid members-only communities.
This was written when people were talking about paywalls in the context of pre-Substack solutions, and bloggers were experimenting with various bespoke business models like running paid member communities, events, boutique print publishing operations, schools/courses bolted on with Teachable, and of course, sketchy vitamins. Those of you who have been with me long enough might remember my experiments in some of these departments.
Though I technically stuck to my commitment to never paywall Ribbonfarm, I guess gradually moving a growing fraction of my writing energy to Substack after 2019, and eventually retiring Ribbonfarm in 2023, counts as a violation of the spirit of that commitment.
I’ve actually stopped using the paywall here now, though paid subscriptions are still on. But I haven’t made any new principled commitments about it.
Rather surprisingly, I find that my reasoning for this move is basically the same as in 2011. The philosophy is still about looking for Wild Thoughts. Ferality remains the True North. At the moment, I can’t think of a way to use the paywall feature that respects that principle. Going forward, posts will be un-paywalled by default, and I’m slowly un-paywalling my archives too (there is no obvious way to do it in bulk). I won’t be using the paywall unless there’s an exceptional reason to lock up something, or I can figure out a way that doesn’t mess with wildness.
The tactical problem of how to use the Substack paywall feature well in service of Wild Thoughts is symptomatic of a larger problem in the zeitgeist — the slow disappearance of open, wild public spaces.
This is true of the consulting side of my life too. There was a certain wildness to the ZIRPy gig economy I entered in 2011 that is gone now.
It feels like I have to figure out how to go feral all over again. Fortunately, there is a New Nature emerging that promises whole new kinds of wildness.
***
In important ways, I’ve learned absolutely nothing in the last 15 years. I mean, sure, I wrote a whole 2-volume book called the Art of Gig, but that was mostly things I thought others could learn from me. Not the sort of transformative learning people seem to call Personal Growth,™ featuring a good deal of Overcoming Adversity.™
Or to put it another way, I’ve grown a lot older, but not significantly wiser. Looking back at some of the impressively wise stuff I wrote in 2011, I might even have grown unwiser. This is why I don’t do the personal-journey/overcoming adversity type of reflection many people seem to, on reaching significant milestones. Personal Degrowth ™ mostly featuring ZIRPy Dumb Luck™ does not make for an inspiring story. It barely even makes a story at all.
Humans I think age on what ought to be considered depreciation curves, even if we sometimes pretend to age like fine wine rather than rusty equipment. And the depreciation rate is a function of your environment. I’ve been on the feral depreciation curve, which is about 3-5 % steeper than the domesticated depreciation curve after adjusting for inflation and interest rates. After all, feral cats and dogs don’t live as long as domestic pets, tend to be more diseased and malnourished, more cowed-down and fearful, and would probably get beaten up by their healthier domesticated cousins in a real fight (though they’d bring a certain murderous viciousness to the party). So why should humans be any different? I mean, sure there’s a lot of posturing about being more street-smart, and knowing where all the best dumpsters are, but come on.
Speaking of inflation and interest rates, someone reminded me that I’m apparently on record at some point having said that indie consulting was a ZIRP phenomenon.
It’s sort of true. The whole free-agency model I benefitted from, 2011-2019 or so, was powered in part by free distribution at scale, which led to things like viral hits and wildcard lead-gen for gigs. That era is toast. Some of the advice I offer in Art of Gig probably needs qualification now, given that “going viral” is no longer as sound a strategy for finding lucrative and interesting leads.
Speaking of leads, I don’t think I’ve received a single consulting lead from my 7 years of Substack writing.
Whatever leads I still get these days, not counting the spammy ones, originate from my old blog, Ribbonfarm, and networks spawned by that. You could say I captured the network effects of the old blog in a way that is neither possible, nor worthwhile, on Substack. The outlier wildness of the blogosphere has become farmland expanses on Substack. It makes sense for Substack the company to run that old aggregation theory playbook and capture the aggregate network effect to harvest what’s left of the old media landscape, but individuals can only really climb leaderboards here, not trees. Going “viral” in the old sense, of not just enjoying a spike of high reach, but reach into unusual places, triggering weird outlier opportunities and serendipity, is no longer really a thing. It’s not about Twitter getting Muskened, or Substack gentrifying blogs. It’s not about any one specific thing. It’s about the whole ecology being transformed.
Whale hunting has given way to a sort of creative yield-farming.
Not that I’m complaining. Fortunately, a couple of steady, meaty gigs for the last few years (Ethereum Foundation and TensTorrent), both the result of old Ribbonfarm equity, have kept me happy and lazy. I’m sure I’ll pay the price eventually.
***
Doing some vibe-multiple-regression eyeballing my archives, I think only about 52% of the consulting lead-gen failure from Substack can be attributed to my visibly growing decrepitude and lack of “I need to hire this guy” insight density in my writing. I’m sure it doesn’t help that I now mostly write weird shit about monsters and ooze instead of useful, actionable things like management insights distilled from TV shows like I used to.
But the other 48% is Substack’s fault. To borrow the term from that old VC debate, it has replaced a Black Swan farming game with a Moneyball game.
Well, not exactly Substack’s fault, but the fault of the zeitgeist Substack is part of, and in some ways leads — a glorious retreat to culturally conservative grinder modes of being and doing online.
These are modes that make readers cast writers into different cultural roles in their mental models. The blogosphere was where the most eclectic readers went to find not just alpha, but liveness, before 2019 or so, while the normies read The World is Flat and Sapiens. Bloggers were emissaries from wild cultural margins. Substack is LinkedIn for domesticated free agents.
I mean, shit, people work on their substacks like it’s a job. If you quit your job today to go “free agent” today and by that you meant starting a substack leveraging your network from your old job, I’m not entirely sure you’d be able to tell the difference. In 2011, we all aspired to the 4-hour work-week selling sketchy vitamins, not the 168-hour work-week producing monumental thudposts. We aimed to 0.1x the effort required to survive in the paycheck economy, not 10x it.
Sure, I never quite hit the 4-hour mark, (and always thought Tim Ferriss was full of shit and likely worked way harder than he let on, tbf), but I mean he was oriented right. He pretended to do/aspire to the right thing. Today, people are more likely to brag about how they worked 1000 hours on a big “drop” than how they cunningly arbitraged a vitamin supply chain to generate passive income while they relax on the beach.
See, the thing is, free agency is about risk-adjusted return for time-rich people, and in 2011, the emphasis was almost entirely on taking weird risks that were too small for big risk-capitalists like bankers to care about, and too marginal and subcultural for normies to even spot. This called for a certain ferality of disposition, and a certain picaresque attitude towards personal narratives. It drove divergence and variety rather than convergence and competition.
In 2026, free agency is about visibly virtuous and competitively benchmarkable hard work, featuring a kind of retail-grade New Sincerity. The emphasis is on using the time much more intensely than idly trawling for weird risks to take. Picaresque attitudes are rounded down to pure grift, reputationally. Affection for charming rogues is at all time low. Esteem is reserved for effortmaxxing agentic juggernauts going hypomanic with Claude Code.
In other words, in 2011, going free agent felt like trying to engineer weird luck for yourself (and I certainly managed to engineer several lightning bolts of weird luck for myself). In 2026, the goal seems to be to figure out a “system” that gets you self-employed in a grinder job you can’t be fired from, and where bureaucrats and middle managers can’t stop you from putting in 168-hour work-weeks.
It’s weird. Humans should aspire to a certain degree of laziness befitting our position as the apex villain species of the Anthropocene.
Whether you get a W2, 1099, or 1099K at tax time is irrelevant. If you solve for steady income and freedom to grind to the limit of your capacity as your main thing, then you have a job. The only question (in the US) is whether it comes with good health insurance.
***
It’s not just Substack. We live in grinder times. Some people on here write heavy lift thudposts that probably represent more effort that I would have put in over an entire year a decade ago on Ribbonfarm, during my peak effort years (the peak being a smallish hill, well short of “mountain”). And even that’s apparently not enough for some. I see people out there registering domain names and putting up fancy sites for single essays formatted with the care of illuminated manuscripts, and representing epic research journeys.
I expect to see Essay Unboxing videos on YouTube soon.
Don’t get me wrong. I appreciate this effort, especially when it’s shared for free with high-minded generosity. I even sometimes read such things without LLM help.
But damn.
So. Much. Grinding.
The magnitude aside, there is also a difference in the nature of the effort. All the effort is much more narrowly focused. Not wild, scattershot effort. Much less gambling, much more AI-in-the-loop Protestant Ethic-ing.
And many people preach this ethos. In 2011, people would have been apologetic about it, and somewhat embarrassed at not finding their 4-hour-work-week hack. In 2011, people bragged about passive income rather than being agentic.
Many actually refuse to believe low-effort happy-go-lucky wing-and-prayer trajectories are even possible. They think people who claim low-effort results are just lying.
I don’t blame them. The era when that was the norm is already a fading memory.
This is a much harder world to survive in than 2011, and to the extent people like me can get away with not doing effortmaxxing grinding, it’s because we’re living off accumulated fossil fuel from happier, lazier, more rascally times.
The loss of variety, vitality, and sheer fun, due to this shift from risk-orientation to effort-orientation is very real, and costly both for individuals, and for the economy as a whole. It is not a good thing for the world when the supposedly “free agent” economy becomes indistinguishable from the paycheck economy, in terms of risk profiles and effort-allocation patterns.
And speaking of grindsets, man, the quality control of this era deserves an ISO 9000 certification. It’s all over the free-agent economy, but is particularly evident in the corner of the writing economy visible on Substack. This is war-mode six-sigma hand-crafted writing in a John Henry existential death-struggle with the slop tsunami. Shitposting now seems like transgression rather than the low-effort default.
That’s what makes me tired by the way. Not me working hard, but watching everybody else work so radically hard I get tired just watching it.
Me, I just publish literal slop instead, half the time. Substack has already introduced a “report slop” button for the Notes feed, and the environment here is only going to get more hostile I think. When that button is added to the essays themselves, it will be game over for me.
The other half of the the time, I only write hand-crafted stuff when it’s easier than forcing ChatGPT to be sloppy enough to sink to my low standards. About 99% of my thoughts simply cannot rise to the level of gravitas ChatGPT brings to every topic. If I’d tried to prompt this essay out of ChatGPT for instance, it would have taken 100x the effort.
***
Anyway, 15 years, huh.
What have I been doing if I haven’t been grinding away or experiencing Personal Growth™?
I suppose I was busy trying to get lucky. And succeeding to the extent the environment was wild, and I was sufficiently feral in inhabiting it.
This wasn’t hard under ZIRP conditions. Freebie luck was available to anyone who paid attention to things in peripheral vision in the late aughts and tens, and I did nothing special to snag my share of that luck.
Under non-ZIRPy conditions, I suppose tunnel vision pays off more.
There was also luck as in being in the right place at the right time at the right age. I suppose I can claim some credit for that. Far too many people stayed put in the wrong places during ZIRP.
I’ve been on the Tech Coast during the right age for me. You see, 35-50 is an age when people in Tech listen to you as the voice of experience, without expecting you to do stuff, but haven’t yet written you off as a has-been. And they’re hungry for this. I was perfect for filling this role, at least while free, wild distribution was a thing.
Through these years, I’ve been part of two major intersecting milieus: The corporate tech economy and the popular discourse blogosphere loosely associated with it. Both have been the right place at the right time for me.
Neither is anything like it was when I started, and I’m not sure either is right for me anymore. Which makes me wonder where I could go next, socially speaking.
I have some stuff brewing (all good, to be shared soon) that’s going to trigger some significant lifestyle changes for me this year, but one of the things I’m thinking hard about is how to discover a New Ferality, and engineer it into at least my personal circumstances in ways that give it the inviolable force of New Nature, with no unwary redomestication possible.
The last time around, I backslid from ferality towards unwitting re-domestication through the gentrification of the environment around me.
This time around, I’m going to be looking for ways to shift gears from don’t become domesticated to can’t become domesticated.
Like everything else in life, blurbs have editors, so not everything you write gets published.
Some of the books I've read and blurbed might surprise you, such as this one (on a familiar subject, but a surprising one for a book):
You've Been Pooping All Wrong by Dr. Trisha Pasricha (who I first encountered some years ago).
Here's my blurb as it appears on the book's web page:
“An entertaining and instructive book.”
Here's the full blurb that I wrote
"Dr. Trisha Pasricha has written an entertaining and instructive book, in very plain language, about how our bodies turn inputs into outputs, along with tips on managing that. Along the way she writes equally clearly about the emerging, polysyllabic field of neurogastroenterology, which studies the lifelong, two-way conversation between brains and guts."
##########
"My needs in life are simple, I want three things maybe four...a little love, just enough to eat, a warm place to sleep, and everything I write should be published"
Two days before money for Homeland Security is set to expire, Congress is seemingly helplessly snared in debate over often brutal tactics of masked federal agents in the national deportation campaign, including two shooting deaths of citizens by agents.
Top immigration officials faced sharp questions yesterday at a congressional hearing about aggressive immigration policies, but little seemed to point to compromise about tactics. Heads of the three immigration enforcement efforts did not even acknowledge problems, whether in anonymity, lack of warrants or training.
Even the prospect of withholding money from the department does not seem to be stalling enforcement efforts or prompting much activity that looks like trying to resolve outstanding budget issues. Indeed, from the silence of administration officials, the plan for partial government shutdown seems to be that ICE agents continue full bore while other functions slow or shut in the 260,000-employee agency.
Skepticism about the efficacy of Congress aside, it seems absurd that our government cannot handle a conversation better about holding ICE to the same standards as any local police department or even lift this debate an inch over partisanship.
Even with the “drawdown” of 700 agents from the 3,000 deployed to Minneapolis, the roundups are continuing. Reports of federal officers near schools and homes continue to circulate on social media and many immigrant families remain hidden at home. So, too, are the protests continuing, including one this weekend in which, incongruously, sex toys were being thrown around outside the federal building and 40 were arrested.
Residents still brave the cold to observe federal officers, honking horns and blowing whistles to alert neighbors, volunteers are shuttling immigrants to jobs and delivering food and diapers.
ICE agents were active in a small town in Idaho, where local authorities were not consulted, while being cut back in Maine after Republican Sen. Susan Collins complained to Homeland Security Secretary Kristi Noem.
A national boycott of big companies that have remained silent about immigration enforcement excesses is gaining a toehold, even as Donald Trump and his administration holler about disloyal Americans who question his policies on the street, among Olympic athletes or public events. We also saw street protests in Milan from Italians upset about ICE agents being deployed to the Olympics as security consultants.
Time is running out for any bipartisan deal on passing the remaining Homeland Security budget bill. Bills to fund the rest of government for the year have been passed. Even proposals for stopgap efforts for more time seemed doomed by the Senate math needed to secure sufficient votes.
Despite some exchange of proposals on Monday, the basics have not changed: Democrats, appalled by scenes from Minneapolis, want legislation to require ICE drop masks and eliminate warrantless searches in homes and businesses, all included in a list of proposed changes to ICE. Republicans, including Trump, have issued an outright dismissal, and, indeed, want legislation to ban “sanctuary” cities that limit cooperation by local police with immigration enforcement. A unreleased White House counterproposal to Democratic demands “included neither details nor legislative text” and does not address “the concerns Americans have about ICE’s lawless conduct,” said Democrats.
At this stage, a Homeland Security shutdown can only be averted if all 100 senators agree to hold a vote before Thursday night when several senators are scheduled to depart for the Munich Security Conference. Republicans note that ICE already has funds by the One Big Beautiful Bill Act, so a shutdown will affect critical agencies like FEMA, TSA and the U.S. Coast Guard.
Instead, Republicans are pushing the SAVE America Act which requires voters to show proof of U.S. citizenship in federal elections despite laws that already bar non-citizens from federal voting. It also requires states to remove undocumented immigrants from voter rolls, though non-citizens now can vote legally on local elections in several states. Despite all the noise, there is little evidence that voting by undocumented immigrants has had any impact on federal elections, and the proposal will not pass in the Senate.
While voting is related to immigration enforcement, Democrats see Trump’s efforts to “nationalize elections” among a series of actions to tilt results of the November elections.
The daily drumbeat of legal and political cases arising from deportations continues.
–The Minnesota Bureau of Criminal Apprehension says it “remains committed” to working with the FBI and the Department of Justice, but there are no plans for how a joint homicide investigation will proceed in the fatal shooting of Alex Pretti. The Pretti family has gone to court to get access to the names of the border patrol agents who shot him, though journalists have identified them.
–A review by Politico of hundreds of cases brought by ICE detainees across the country shows judges increasingly furious and exhausted by the Trump administration’s tactics of delaying, avoiding or outright ignoring their orders. Homeland Security responded by criticizing “activist judges” who “thwart President Trump from fulfilling the American people’s mandate for mass deportations.” The statement didn’t directly address judges’ complaints about their orders being violated.
–A federal appeals court in California ruled that the Trump administration legally may lift Temporary Protective Status it had awarded for 60,000 refugees from Nepal, Honduras and Nicaragua after a similar Supreme Court ruling on Venezuelans. A separate ruling has temporarily blocked lifting the status for Haitians.
–A district court disallowed a California law requiring federal agents to unmask, but said it could require agents to carry identification. The case turned on whether state law enforcement officers also faced the requirement.
–An immigration judge ended removal proceedings against Rumeysa Ozturk, a Tufts graduate student from Turkey was had been detained by border police for her wriitings in graduate school.
–Democratic U.S. Representatives Angie Craig and Betty McCollum of Minnesota said they were denied entry to ICE holding facilities in Minneapolis in violation of law,
–White House officials say at least 4,000 arrests have been made in Minnesota since Dec. 1, and federal authorities were moving to reinstate deportation proceedings against the Ramos family, whose five-year-old boy in a bunny-hat has become a national symbol, after a court ordered their release.
–Even as Trump and Noem continued to thunder about criminals taken off the street, a CBS reported obtaining an internal Homeland Security memo detailing that fewer than 14% of those arrested nationwide in 2025 had charges or convictions for violent criminal offenses.
Your government at work.
The post Frozen Talks on ICE Tactics appeared first on DCReport.org.

A jaunty song calls for greater appreciation of Indian wool, as imports undermine the livelihoods of local herders
- by Aeon Video
We measure the impact of increased immigration on mortality among elderly Americans, who rely on the immigrant-intensive health and long-term care sectors. Using a shift-share approach we find a strong impact of immigration on the size of the immigrant care workforce: admitting 1,000 new immigrants would lead to 142 new foreign healthcare workers, without evidence of crowd out of native health care workers. We also find striking effects on mortality: a 25% increase in the steady state flow of immigrants to the US would result in 5,000 fewer deaths nationwide. We identify reduced use of nursing homes as a key mechanism driving this result.
That is from a new NBER working paper by
The post Immigration and health for elderly Americans appeared first on Marginal REVOLUTION.
The upgraded Super Heavy booster slated to launch SpaceX's next Starship flight has completed cryogenic proof testing, clearing a hurdle that resulted in the destruction of the company's previous booster.
SpaceX announced the milestone in a social media post Tuesday: "Cryoproof operations complete for the first time with a Super Heavy V3 booster. This multi-day campaign tested the booster's redesigned propellant systems and its structural strength."
Ground teams at Starbase, Texas, rolled the 237-foot-tall (72.3-meter) stainless-steel booster out of its factory and transported it a few miles away to Massey's Test Site last week. The test crew first performed a pressure test on the rocket at ambient temperatures, then loaded super-cold liquid nitrogen into the rocket four times over six days, putting the booster through repeated thermal and pressurization cycles. The nitrogen is a stand-in for the cryogenic methane and liquid oxygen that will fill the booster's propellant tanks on launch day.
This was an unusually hard post to write, because it flies in the face of everything else going on.
I first started noticing a concerning new phenomenon a month ago, just after the new year, where people were overworking due to AI.
This week I’m suddenly seeing a bunch of articles about it.
I’ve collected a number of data points, and I have a theory. My belief is that this all has a very simple explanation: AI is starting to kill us all, Colin Robinson style.
If you’ll recall from What We Do In The Shadows (worth a watch, yo), Colin Robinson was an Energy Vampire. Being in the same room with him would drain people.
That’s…pretty much what’s happening. Being in the same room with AI is draining people.

10x Productivity is Real
Let’s start with the root cause, which is that AI does actually make you more 10x productive, once you learn how.
I know some of you are holding on to old September/October-ish beliefs about this, from last time you tried it, and you respectfully disagree.
But if you haven’t used specifically Opus 4.5/4.6 with specifically Claude Code for at least an hour, then you’re in for a real shock. Because all your complaining about AI not being useful for real-world tasks is obsolete. AI coding hit an event horizon on November 24th, 2025. It’s the real deal. And unfortunately, all your other tools and models are pretty terrible in comparison.
But hey, don’t take it from me. Take it from… the Copilot people. According to The Verge and a bunch of other reputable news sources, Microsoft is openly encouraging their employees to use multiple tools, and as a result, Claude Code has rapidly become dominant across engineering at Microsoft. If you give your worker bees open season, they will quickly find the path of least resistance, and that path goes through Claude Code.
Let’s not quibble about the exact productivity boost from AI. The boost amount isn’t what this post is about. It just needs to be higher than about 2x for the vampire effect to kick in. We’ll use 10x because it’s a number people throw around. Let’s use it for the sake of argument.
With a 10x boost, if you give an engineer Claude Code, then once they’re fluent, their work stream will produce nine additional engineers’ worth of value.
For someone.
But who actually gets to keep that value?
Value Capture
Let’s pretend you’re the only person at your company using AI.
In Scenario A, you decide you’re going to impress your employer, and work for 8 hours a day at 10x productivity. You knock it out of the park and make everyone else look terrible by comparison.
In that scenario, your employer captures 100% of the value from you adopting AI. You get nothing, or at any rate, it ain’t gonna be 9x your salary. And everyone hates you now.
And you’re exhausted. You’re tired, Boss. You got nothing for it.
Congrats, you were just drained by a company. I’ve been drained to the point of burnout several times in my career, even at Google once or twice. But now with AI, it’s oh, so much easier.
Now let’s look at Scenario B. You decide instead that you will only work for an hour a day, and aim to keep up with your peers using AI. On that heavily reduced workload, you manage to scrape by, and nobody notices.
In this scenario, you capture 100% of the value from your adopting AI.
In this scenario, your company goes out of business. I’m sorry, but your victory over The Man will be pyrrhic, because The Man is about to be kicked in The Balls, since with everyone slacking off, a competitor will take them out pretty fast.
But in Scenario A your company is honestly pretty precarious too, since you’re running all your employees on the ragged edge of burnout.
The answer to “who captures the value” must lie somewhere in the middle, or we’re all pretty screwed.

Inherent Acceleration
The world is accelerating, against its will. I can feel it; I grew up in the 1980s, when time really did move more slowly, in the sense that news and events were spaced way out, and society had time to reflect on them. Now it changes so fast we can’t even keep up, let alone reflect.
I’ve been watching the effect the AI Vampire is having on people around me and I’m growing concerned. We’re all excited, but it’s also… weird.
I already posted about the Nap Attacks, how I fall asleep suddenly at all hours of the day after long vibe-coding sessions, and how my colleagues at SageOx are seriously considering installing nap pods at the “office.” I’m still sleeping a crazy amount.
It would seem that we are addicted to a new drug, and we don’t understand all of its effects yet. But one of them is massive fatigue, every day.
I don’t think that’s… good. And if anything, it seems to be getting more widespread. The developing situation is a multi-whammy coming at developers from all sides:
So you’re damned if you do (you’ll be drained) and you’re damned if you don’t (you’ll be left behind.)
Before we get into how to fight back, I need to take some accountability myself.
Unrealistic Beauty Standards
Agentic software building is genuinely addictive. The better you get at it, the more you want to use it. It’s simultaneously satisfying, frustrating, and exhilarating. It doles out dopamine and adrenaline shots like they’re on a fire sale.
Many have likened it to a slot machine. You pull a lever with each prompt, and get random rewards and sometimes amazing “payouts.” No wonder it’s addictive.
People are discovering this fun and they’re shouting from the rooftops about the crazy stuff they built during a 40-hour nonstop sprint with Claude Code.
And that’s where the problem gets into full swing. Because other people are listening!
People like me, and folks on LinkedIn saying their whole company has 3+ Claude Pro/Max accounts for Gas Town, and Jeffrey Emanuel and his 22 accounts at $4400/month, not to mention all the other crazy early adopters–we’re all part of the problem.
We’re all setting unrealistic standards for everyone else.
Maybe me worst of all. I have 40 years of experience, I’ve led large teams, I read fast, and I have essentially unlimited time, energy, and now tokens for experimenting. I am completely unrepresentative of the average developer.
But I’m still standing up and telling everyone “do it this way!” I even co-wrote a book about it.
Employers are very likely starting to look at me, and the rest of us far outliers, and saying, “Hey, all my employees could be like that!”
And dollar-signs appear in their eyeballs, like cartoon bosses.
I know that look. There’s no reasoning with the dollar-eyeball stare.

I don’t think there’s a damn thing we can do to stop the train. But we can certainly control the culture, since the culture is us. I plan to practice what I preach, and dial my hours back. That’s going to mean saying No to a lot of people who want to chat with me (sorry!), and also dialing back some of my ambitions, even if it means losing some footraces. I don’t care. I will fight the vampire.
The next group that needs to arm up with garlic and wooden stakes is AI-native startups, where I’m concerned that the frenzy is getting out of hand.
Startups Are Poisoning The Well
Startups are an especially big contributor to the AI Vampire problem.
If you have joined an AI-native startup, the founders and investors are using the VC system to extract value from you, today, with the glimmer of hope for big returns for you all later.
Most of these ideas will fail.
I know this because they are literally telling me their plans like villains at the end of an old movie, since with Gas Town I have mastered the illusion of knowing what I’m doing. Truth is, nobody, least of all me, knows what they’re doing right now. But I look like I do, so everyone is coming to show me their almost uniformly terrible ideas.
Startup founders are out there draining people at a faster rate than at any time in history, in pursuit of instantly banal ideas like “oh hey, I bet nobody thought of making a sandbox system for agents.” Cue nine thousand sandbox startups, all of which will eventually be killed off by a single OSS winner wrapped by home-grown internal vibe-coded SaaS.
I could list out a bunch of others. It’s pretty bad. There’s a massive amount of talent being thrown at an incredible dearth of real ideas, basically the same six tired pitches. (“AI personas!” “Agent memory!” “Gas Town, but safe!” “Better RAG!”)
The overwhelming majority of these startups won’t sell a flea-bitten dollar of ARR. Even though enterprises aren’t too bright about SaaS, collectively (evidence: many are still on Copilot), they are quickly growing savvy enough to know that Build is the New Buy. Finance departments are about to make your head spin refusing to re-up SaaS contracts this year.
But the SaaS founders are throwing themselves and their entire companies into it like it’s a classic gold rush, where everyone’s going to get a stake if they just work to exhaustion. I don’t think it works that way this time, but that’s how they’re treating it. A footrace to stake claims in the AI space.
That’s a race that ends, in my opinion, with everyone collapsing in exhaustion without actually winning the race.
And while they run, they are setting the tone for the rest of us. I see these frenzied AI-native startups as an army of a million hopeful prolecats, each with an invisible vampiric imp perched on their shoulder, drinking, draining. And the bosses have them too.

Enterprises see the oncoming horde and think, oh jeez, we need to hustle. And they’re not exactly wrong. Which means this lovely dystopian picture is making its way slowly but surely into enterprise, at the big company where you work.
Executives everywhere are excited about AI. Many of them are addicted as well, vibe coding at home, somewhat dangerously. And they’re thinking, gosh, if I just had a few engineers who worked this hard all the time then I wouldn’t need a bunch of the others! This is really just a recruiting problem!
They’re reframing the problem in terms of finding people ripest for extraction.
The Anti-Amazon-Extraction Formula
Back when I was at Amazon in the shiny new US1 building in the International District in downtown Seattle, 2001–2003-ish, people began to tire of the ridiculous pace. The company was post-IPO; the market had been up and down, and the company was starting to mature. But everyone was still working like sled dogs.
Most of my colleagues who put up with that environment are billionaires now, so it’s easy to point back at that time and say, “Oh, it was worth it.”
But what if it hadn’t been a success? How many CEOs have bet everything, including their company’s wellbeing and mental health, on a big launch, only for it to go nowhere?
I’ve been there. Plenty of times I’ve allowed myself to be extracted from (drained) for the promise of some big potential future payout. One that often never came.
Companies are straight-up designed for extraction, and so you need to be the counter-force.
My friends who were grumbling back in 2001 needed some help with this, and I gave it to them. One day I walked up to the whiteboard during a particularly heated grumble-session, and I wrote a ratio on the board: $/hr (dollars divided by hours).

I said to everyone, Amazon pays you a flat salary, no bonuses, and you work a certain number of hours per week. From that, you can calculate that you make a certain number of dollars per hour.
I told the grumbler group, you can’t control the numerator of this ratio. But you have significant control over the denominator. I pointed at the /hr for dramatic effect.
They all looked at me, wide-eyed, never having EVER thought of it from this perspective before.
I don’t think they fully believed me, but at least I got them thinking about it.
As for my part, I went ahead and dialed that denominator down, and lived life a bit while I was at Amazon, because fuck extraction.
Funny thing, a couple of times over the next few months I’d be walking by some office full of people and they’d all be studying a formula on the board that said, in big letters: $/hr.
$/hr To The Rescue
That old formula is also my proposed solution for the AI Vampire, a quarter century later.
Someone else might control the numerator. But you control the denominator.
You might think you don’t. And indeed, individually you may not have much sway over it. But collectively, the employees of your company have literally all the power. Now that I’ve been up at the top, I’ve learned that CEOs have surprisingly little power.
You need to push back. You need to tell your CEO, your boss, your HR, your leadership, about the AI vampire. Point them at this post. Send them to me. I’m their age and can look them in the eye and be like yo. Don’t be a fool.
You need to educate them about sharing the AI value capture between the company and the employees, and how to strike a good balance of sustainability and competitiveness.
When I was visiting Combank in Sydney in December, in their historic train station tech campus, I was awestruck by how it seemed like the ideal balance of happiness and productivity. It was open-plan, high ceilings, fancy, natural light, fully green with plants everywhere, with a huge coffee and snack stand in the middle of all the offices. People were sprawled out through the building, working, meeting, socializing, walking around outside, eating, enjoying the sun. It was, like, Tuesday for them.
They found a great setting for the dial, at least for this time and place. I don’t know how it changes with AI. But I feel like their current setting is where we need to aim as the future changes us.
It’s not even remotely sustainable for companies to capture 100% of the value from AI. And when employees capture 100% of the value, it will be temporary at best: that company gets beat by someone who’s got the dial turned higher.
I don’t even know what the right setting for the dial is. Hell, I’m the worst person to ask, because I’ve got the dial set to 11 and I’m putting all my weight on it, trying to make it go to 12.
But the right setting is in the middle somewhere. Companies will try to drag it higher. You need to fight to drag it lower.
I would argue you need to consciously fight the AI Vampire even if you’re at a 30-person startup, where everyone agreed when they signed up that this was a sprint to try to get rich.
You need to fight it if you’re an investor. You will kill your Golden Geese.

You need to fight the AI vampire most of all if you’re a CEO or founder. People will be caught up in your enthusiasm. And they won’t understand why they’re being drained until they hit a wall and maybe can’t recover. Burnout’s a serious deal and can take someone down for a year or more. So take it seriously.
As company leadership, what, realistically, can you do? I mean, nap pods is an option, probably a good one if people come into the office. But what if people just didn’t have to work so many hours? That is by far the most concrete way to fight the vampire. Change your expectations about how many hours there are in a human workday.
I’ve argued that AI has turned us all into Jeff Bezos, by automating the easy work, and leaving us with all the difficult decisions, summaries, and problem-solving. I find that I am only really comfortable working at that pace for short bursts of a few hours once or occasionally twice a day, even with lots of practice.
So I guess what I’m trying to say is, the new workday should be three to four hours. For everyone. It may involve 8 hours of hanging out with people. But not doing this crazy vampire thing the whole time. That will kill people.
As an individual developer, you need to fight the vampire yourself, when you’re all alone, with nobody pushing you but the AI itself. I think every single one of us needs to go touch grass, every day. Do something without AI. Close the computer. Go be a human.
I regret the unrealistic standards that I’m contributing to setting. I don’t believe most people can work like I’ve been working. I’m not sure how long I can work how I’ve been working.
I’m convinced that 3 to 4 hours is going to be the sweet spot for the new workday. Give people unlimited tokens, but only let people stare at reports and make decisions for short stretches. Assume that exhaustion is the norm. Building things with AI takes a lot of human energy.
I’m going to continue to launch stuff, post blogs, all that. But be aware that I’m pushing back hard behind the scenes. I’m saying No to a bunch of people asking for meetings, and resisting the incessant demand for podcasts and appearances.
I’m making sure that if this all comes crashing down, I won’t have Regret Years to look back on. I’m even typing this post out at the mall, with Linh and Mozart, because when I close the computer, we’re going to go for a walk.
I’ll see you next time. I hope we’re both more refreshed.

My core model is both simple and depressing. Fertility rates have declined around the world because birth control technologies became much better and easier to use. And people — women in particular — just do not want that many kids.
I do understand that better birth control happened a long time ago, for instance birth control pills become widely available in the wealthier countries in the 1960s, or sometimes the 1970s. Nonetheless the diffusion of new technologies can be very slow, and for norms to shift it can take generational turnover or even a bit more. Plus “fertility contagion effects” take a long time to work their way fully through the system.
Those long lags may be difficult to swallow, but social science has numerous examples of very long operative mechanisms. (Just think of how long it took potential migrants to exploit open borders, for instance pre-WWI.) Furthermore, fertility rates have indeed been falling for a long time in the wealthier countries.
So a lot of women, once they face the realities of the stress and trying to make ends meet, want only one kid. You end up with a large number of one kid families, some people who never marry/procreate at all, and a modest percentage of families with 2-4 kids. There are also plenty of cases cases where the guy leaves, self-destructs, or never marries, after siring a single child with a woman. That gives you the fertility rates we are seeing, albeit with cultural and economic variation.
Richard Hanania considers why income is not the driving force behind the decline, and why the decline is continuing.
Part of this model is that many women just love having a child. They love “children” so much that a single child fills up their needs and desires.
I see a similar mechanism in my own life. I very much enjoy having Spinoza around the house, but I have zero desire to take in another canine. Whenever I want more “dog attention,” I can assure you that the supply is highly elastic. Similarly, a single kid can take up a lot of your time and affection, again supply is elastic from the side of the kid. Maybe parents learning how much they can enjoy a single kid has been another cultural lag?
Under my preference-driven model, fertility declines are very difficult to reverse. I believe that is also consistent with the evidence to date.
So this is a problem we need to worry about. The asymptote is rather unpleasant, and the path along the way involve less human well-being, possibly less innovation, and maybe some major fiscal crises as well.
As Arnold Kling would say, “Have a nice day.”
The post My simple model of fertility decline appeared first on Marginal REVOLUTION.
While a part of the United States braved extreme winter cold, January 2026 brought sweltering summer conditions to many parts of Australia.
Australia’s area-averaged mean temperature was 1.90 degrees Celsius (3.42 degrees Fahrenheit) above the 1961–1990 average, making it the fourth-warmest January since the start of observations in 1910, according to the Bureau of Meteorology (BoM). Contributing to this was a late-month heatwave in the country’s southeast that was especially intense between January 26 and January 30. During that period, numerous weather stations in South Australia, New South Wales, and Victoria recorded record-high daily temperatures.
The heatwave’s intensity and extent are evident in this map, which shows air temperatures at 03:00 Universal Time (2 p.m. local time in Victoria) on January 29, modeled at 2 meters (6.5 feet) above the ground. It was produced with a version of the GEOS (Goddard Earth Observing System) model, which integrates meteorological observations with mathematical equations that represent physical processes in the atmosphere. The darkest reds are where the model indicates temperatures reaching or exceeding 45°C (113°F).
According to BoM, the hottest temperatures of January 2026 were measured in two places in South Australia: in the town of Andamooka on the 29th and at the Port Augusta airport on the 30th, where temperatures reached 50.0°C (122.0°F). In both New South Wales and Victoria, the month’s hottest day was on the 27th, when temperatures reached 49.7°C (121.5°F) at a station in Pooncarie and 48.9°C (120.0°F) at stations in Walpeup and Hopetoun.
The heatwave brought significant human and public-health effects, including the increased risk of heat-related illness. Organizers of the Australian Open tennis tournament in Melbourne, Victoria, suspended play on some courts and closed roofs to provide shade as part of an “extreme heat policy” to protect players and spectators, according to news reports.
The recent warmth followed another bout of heat earlier in the month that, combined with strong winds and dry conditions, created dangerous fire conditions. Numerous bushfires were burning across Victoria on January 9 as officials urged people to evacuate. By mid-month, news reports indicated that the fires had destroyed hundreds of structures and killed tens of thousands of livestock.
NASA Earth Observatory image by Lauren Dauphin, using GEOS data from the Global Modeling and Assimilation Office at NASA GSFC. Story by Kathryn Hansen.
Stay up-to-date with the latest content from NASA as we explore the universe and discover more about our home planet.

Following a significant winter storm, frigid temperatures lingered in late January 2026 across a vast swath of the U.S.

A prolonged high-pressure weather system brought unusually warm September temperatures to British Columbia and the Pacific Northwest.

Tens of thousands of people fled to safety as blazes spread throughout the country’s Biobío and Ñuble regions.
The post Summer Heat Hits Southeastern Australia appeared first on NASA Science.

Update Feb. 11 1:33 p.m. EST (1833 UTC): SpaceX confirmed deployment of the 24 Starlink satellites.
SpaceX completed its 12th Starlink mission of the year so far with a Falcon 9 rocket launch on Wednesday morning from Vandenberg Space Force Base.
The Starlink 17-34 mission added another 24 broadband internet satellites to the growing low Earth orbit constellation.
Liftoff from Space Launch Complex 4 East happened at 9:11:29 a.m. PST (12:11:29 p.m. EST / 1711:29 UTC). The rocket flew on a southerly trajectory upon leaving the pad.
SpaceX launched the mission using the Falcon 9 booster with the tail number 1100. This was its third flight after previously launching the Starlink 11-30 and NROL-105 missions.
Nearly 8.5 minutes after liftoff B1100 landed on the drone ship, ‘Of Course I Still Love You,’ positioned in the Pacific Ocean. This was the 177th landing on this vessel and the 569th booster landing for SpaceX.

On the cusp of launching its first Vulcan rocket of the year on Thursday, United Launch Alliance leadership announced its goal for 2026 to launch between 18 and 22 times.
Speaking during a virtual media roundtable on Feb. 10, Gary Wentz, ULA’s vice president of Atlas and Vulcan Programs, said the company aims to launch two to four Atlas 5 missions and 16 to 18 Vulcan missions. He said the Vulcan rockets will be split between pad 41 at Cape Canaveral Space Force Station and pad 3 at Vandenberg Space Force Base.
“It’s a balance. We’re working with our customers to determine specific priorities and order of missions and in the case of Space Force and NRO (National Reconnaissance Office), to determine which missions they wan to get off with higher priority,” Wentz said. “And as we finalize that over the next about six to eight months out of the mission, then we’ll assign whether or not its going to be an Atlas mission or a Vulcan mission.”
John Elbon, the interim CEO following the departure of Tory Bruno in December, said that the company has a “strong commitment” from their commercial and government customers, citing a backlog of more than 80 missions.
“Mark (Peller) and I will be laser focused during the next period on continuing to meet our customers’ needs and importantly, getting us set for a reliable and sustainable increased launch rate,” Elbon said.
A large chunk of those missions come from a massive purchase of 47 launches by Amazon to fly its broadband internet satellites, called Amazon Leo, into low Earth orbit. ULA still has all 38 of its Vulcan launches ahead of it as well as four more flights on its Atlas 5 rockets.
ULA also has dozens of missions lined up for the U.S. Space Force and the NRO via the National Security Space Launch (NSSL) Phase 2 and Phase 3 contracts. Wentz offered a glimpse at part of the lineup for the year when it comes to these NSSL missions and said it will go as follows:
Regarding the upcoming cargo launch of the Boeing Starliner-1 mission to the International Space Station, Wentz reaffirmed what NASA leaders said during a separate briefing on Monday, that a spot on the manifest is being saved for that mission in April.
“That would go in after, as you described, after GPS and before SF-57 and that is our current plan,” Wentz said. “And then, if they were approved to fly crew, we have a slot in the October/November timeframe, where we would work between Space Force and NRO on priority to put a crew mission out there in October.
In the past couple of years, ULA began the year with goals of launching in the double digits, but ended up flying five times in 2024 and six times in 2025. Elbon pointed to some anomalies that constrained their launch rate with Vulcan.
One notable instance was the solid rocket booster anomaly seen during the second certification flight of Vulcan in Fall 2024. That contributed to the delay of certification for Vulcan to fly national security payloads until March 2025.

“Those are behind us now and so the Vulcan rocket is ready to go. We talked about getting what we call ‘Track A’ online, the new Vertical Integration Facility, the new Vulcan launch platform that allows us to double the rate, as Mark described, at KSC,” Elbon said. “We also will be bringing online later this year SLC-3 out at Vandenberg. And so, we’ll have launches out there.
“And we have in inventory, already built and finished goods, the rockets that will allow us to get up to that rate through this year. And the payloads are ready to go. So what we need to do is execute our launch activities at the Cape and at Vandenberg. And so, it’s very achievable for us to get up to the rate that we need to get up to through this year.”
The aforementioned Track A and additional assets refers to a second VIF built to complement the original VIF, called VIF-G for “government.” Wentz noted that VIF-G is intended solely for government missions, like those awarded under the NSSL contract and the six Starliner missions on Atlas 5.
VIF-A, with the A standing for Amazon, will be dedicated to Vulcan launches.
Progress toward another inaugural heavy-lift launch, as @ulalaunch preps an all-new launch platform (VLP-A) and integration facility (VIF-A) dedicated to Leo missions on Vulcan.
Payload for LV-01 is fully stacked ahead of launch, and processing is already underway for LV-02. pic.twitter.com/9qD2fGLJk0
— Amazon Leo (@Amazonleo) January 20, 2026
The media roundtable on Tuesday was ULA’s first such virtual gathering since the departure of former ULA President and CEO Tory Bruno. In his remarks, Elbon thanked Bruno for his 12 years of work at the company.
Elbon will be holding the reins during a search for the next permanent CEO. He had previously planned on retiring in April prior to Bruno’s departure to Blue Origin.
“[Tory] led us through a transformation, through the development of Vulcan as it was certified. I think in his new role, Tory has an opportunity to focus on defense of our nation, which he has a real passion around. So those are the kind of things I believe factored into his decision,” Elbon said.
“Tory to some degree was the face of ULA, but our strength is really in the engineering expertise and the production expertise and the launch expertise, the 3,000 people that do that work,” Elbon added. “I remain just incredibly proud of that team and we’re going to do great things going forward.”
