Starving Genies

Since my genies seem to have all gone to rehab at the same time, I have leisure (!?) to write this.

In the Expand phase, growth isn’t a curve; it’s a staircase. You grow until you approach a ceiling: some rate-limiting resource.

Either increase the supply of that resource or reduce the consumption until you get back to growth. Disaster averted!

Then the next rate-limiting resource looms and you repeat it. Then the next one. Then the next.

Eventually you know (from bumping into them) what the rate-limiting resources are and how to keep the supply curve above the demand curve. Then you can shift into Extract mode.
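That staircase dynamic is easy to see in a toy simulation. This is a hypothetical sketch, with every number invented for illustration: demand compounds geometrically, stalls when it pins against the current rate-limiting resource, and resumes once supply is raised.

```python
# Toy "staircase" growth: demand compounds until it hits the current
# ceiling (the rate-limiting resource), stalls there, then resumes
# after the ceiling is raised. Every number here is invented.

def simulate(periods, growth=1.5, ceiling=100.0, bump=4.0, lag=2):
    """Demand per period; the ceiling jumps after `lag` stalled periods."""
    demand, history, stalled = 10.0, [], 0
    for _ in range(periods):
        demand = min(demand * growth, ceiling)  # growth, capped by the bottleneck
        history.append(demand)
        if demand >= ceiling:
            stalled += 1
            if stalled >= lag:                  # supply finally catches up
                ceiling *= bump
                stalled = 0
        else:
            stalled = 0
    return history

print(simulate(10))  # a flat stretch appears where demand pins the ceiling
```

The flat stretches are the near-death moments: growth stops dead until someone bends the supply curve up.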

(Successful companies often retcon all these near-death Expand experiences to make their success seem inevitable.)

Genies

Everybody cuts limits at once. Not a coincidence, a signal.

The genie is in Expand. Hard. Usage is growing faster than almost any product in history.

When you’re about to hit the ceiling, you have two choices. To bend the supply curve up—build more data centers, get more chips, make inference cheaper. To bend the demand curve down—slow the growth, ration the usage, make free/cheap tiers less attractive.

The model providers are doing both, but bending demand down is the surprise move, at least to me. Who shuts the rocket motor off mid-air?

The twist here is competitive dynamics. Normally, bending demand down in the face of competition is suicide. Users leave. They go to the competitor. You lose. Expand doesn’t forgive.

The Bottleneck?

In Expand, the first question is always: what’s the next rate-limiting resource?

Chips? Nvidia supply is constrained, H100s are scarce, everyone’s fighting for allocation. Except Google makes their own. Amazon makes their own. Anthropic has a preferential supply agreement. And all three still cut limits at the same time. Not chips.

Raw compute capacity? Data centers, cooling, power delivery. Real constraints, multi-year buildouts. But physical constraints hit different companies at different times — different footprints, different geographies. You’d see variation. You see synchrony. Not physical capacity.

The economics of inference? The ratio of what it costs to serve a query to what users pay. Broken at scale, especially for free users. This one feels right until you remember that these companies have basically unlimited capital. The compute bill is large but fundable. They’re not cutting limits because they literally can’t afford to serve you.
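A back-of-the-envelope version of that cost-to-revenue ratio, with numbers that are entirely invented for illustration:

```python
# Hypothetical inference unit economics. Real per-query costs and
# subscription prices are not public; these figures are made up.

def monthly_margin(queries_per_day, cost_per_query, revenue_per_month):
    """Revenue minus serving cost for one user over a 30-day month."""
    return revenue_per_month - queries_per_day * 30 * cost_per_query

free_user = monthly_margin(queries_per_day=40, cost_per_query=0.01, revenue_per_month=0.0)
heavy_sub = monthly_margin(queries_per_day=200, cost_per_query=0.01, revenue_per_month=20.0)

print(f"free user: ${free_user:+.2f}/month")
print(f"heavy sub: ${heavy_sub:+.2f}/month")
```

Under these made-up numbers even a flat-rate subscriber loses money when they use the product heavily, which is exactly the usage pattern that caps target.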

So what is it?

The story. Specifically, how long investors will fund giving away expensive capability while waiting for profits to catch up. That story has a shelf life. At some point you have to demonstrate a path to profitability, not just assert one. Usage limits are evidence you’re managing toward that—it’s a signal to investors, not a “we’re running out of money” decision.

That’s why all three moved together. The same investor class, the same stage, the same moment when “trust us, it’ll work out” stops being enough.

The bottleneck isn’t engineering. It’s narrative.

It’s a Race

What breaks the model cartel? Someone bends the supply curve up. Fixes the narrative. Inference gets cheaper through distillation, caching, smarter routing to smaller models. Custom silicon matures. New data center capacity comes online. One company gets meaningfully ahead on unit economics and can afford to open the throttle while competitors can’t.
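One of those supply-side levers, routing easier queries to a smaller model, is worth a sketch. Everything here is hypothetical: the model names, the per-query costs, and the crude length-based difficulty proxy.

```python
# Cost-based routing sketch: cheap model for easy prompts, expensive
# model for the rest. Names, prices, and the threshold are invented.

COST = {"small": 0.001, "large": 0.010}  # dollars per query (hypothetical)

def route(prompt: str) -> str:
    """Crude difficulty proxy: long prompts go to the large model."""
    return "large" if len(prompt.split()) > 20 else "small"

def blended_cost(prompts):
    return sum(COST[route(p)] for p in prompts)

# 80% easy traffic, 20% hard traffic:
prompts = ["what is the capital of France"] * 80 + ["please refactor this module " * 10] * 20
print(f"routed: ${blended_cost(prompts):.2f} vs all-large: ${len(prompts) * COST['large']:.2f}")
```

Real routers use trained classifiers or draft-model confidence rather than prompt length, but the economics are the same: most traffic is cheap to serve if you can tell which traffic that is.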

That company wins the next wave.

What’s Next?

I write about augmented coding—developers working with genies all day. Usage limits bite differently for us. A power user hitting a daily cap mid-flow isn’t mildly inconvenienced. Their work stops. They have to, I don’t know, write another blog post.

For developers, caps pressure them toward the API—explicit pricing, higher ceilings, no daily cliff. For everyone else, it’s just a wall.

Limits split the user base: casual users get free but capped, developers get metered but uncapped, and the middle—technical-but-not-API-savvy power users—gets squeezed into paid consumer tiers. That’s the conversion pressure the companies actually want, even if a bit more revenue isn’t going to change the global constraint.

Uncomfortable question: are the limits temporary—a bridge while supply catches up—or are they the beginning of a new equilibrium where heavy AI usage is a premium product, not a default?

In 3X terms: is this a brief pause on the way to the next Expand staircase, or the beginning of Extract—where growth slows, margins matter, and rationing becomes a feature?

I don’t know. But I’m watching which company bends supply up first. That tells you everything about where Expand goes next.

SpaceX and Amazon spar over satellite deployments

Amazon says it will revise deployment plans for its broadband satellite constellation while denying claims from SpaceX that its current approach represents a space safety risk.

The post SpaceX and Amazon spar over satellite deployments appeared first on SpaceNews.

Europe’s strategic autonomy in space will define its role in the ‘second space age’

Ariane 64 launch

Europe’s future in space really boils down to one question: can it stay ahead without relying on technology made somewhere else? As we step into what experts call the “second space age,” strategic autonomy is suddenly front and center for the European Union. And it is increasingly essential that Europe can get to, control and […]

Artemis 2 in good shape cruising towards the moon

Artemis 2 image of Earth

A day after lighting its engine to head to the moon, the Artemis 2 Orion spacecraft is performing well with only minor issues.

Italy’s Argotec plans to scale Florida satellite facility to meet rising US demand

Italy’s Argotec has officially opened its first U.S. satellite production facility, cementing a foothold near Kennedy Space Center in Florida to join other foreign space firms pursuing growing demand from American programs.

White House again proposes steep NASA budget cuts

For the second consecutive year, the White House is proposing a major budget cut for NASA that would significantly impact the agency’s science programs and the International Space Station.

Nonfiction Publishing, Under Threat, Is More Important Than Ever (New Republic)

As an author with a forthcoming nonfiction book, I find it depressing to read that nonfiction book sales are down, but inspiring to read of the continuing importance of books.

The New Republic considers the (diminishing) prospects and (continuing) importance of non-fiction books.

Nonfiction Publishing, Under Threat, Is More Important Than Ever
Cuts in publishing and book reviewing imperil the future of narrative nonfiction, and our understanding of the world around us. 
 by Paul Elie

 “The decline in sales of new nonfiction might reflect a changing information ecosystem,” Elizabeth Harris observed. “People looking for information can now easily turn to chatbots, YouTube, podcasts and other free online sources.” Last December, The Guardian cited NielsenIQ figures indicating a one-year drop of 8.4 percent in nonfiction book sales (twice that of fiction) and quoted a writer who had “heard publishers have soured on any nonfiction that isn’t ‘Hollywood friendly.’”

... 

"Fretful narratives about the demise of books and the rise of devices have been in play for half a century or longer. “Our world of books, like most other worlds now, is the arena of an increasingly bitter struggle for space, and for the limited reading time that a busy citizen in this electronic age can afford,” John Updike lamented when accepting the American Book Award in 1982. Narrative nonfiction in particular has faced headwinds in mass culture before. And in many respects, the challenges it faces are built in. Long fact is hard to publish and always has been. Reportage and research take time, resources, attention, and fortitude. A book can require several years to write and another year and a half to be edited, checked, printed, and publicized—only to wind up coming out during a news cycle dominated by a sex scandal, school shooting, pandemic, or war. It was as true half a century ago as it is today that readers expect to pay for fiction but are used to getting nonfiction passively through the media. 

...

"In societies where freedom is under threat, an informed citizen is countercultural and deep reading is an act of resistance. Just as protest and vigilance are essential, so is the ability to read and think. In a would-be autocracy, the autocrat aims to subsume our society’s particular narratives into his master narrative—in which his name fills the headlines, his voice and image dominate the broadcasts, and his airbrushed visage appears on the facades of government. To read a book, however, is to enter a narrative that stands outside the politics-and-media maelstrom. In a would-be autocracy, even a small bookstore—with hundreds of books, classic, recent, and current—is a space of contrary narratives, where truth is recognized as both essential and complicated." 

Advice for economics graduate students (and faculty?) vis-a-vis AI

From Isiah Andrews, via Emily Oster and the excellent Samir Varma.  A good piece, though I think it needs to more explicitly consider the most likely case, namely that the models are better at all intellectual tasks, including “taste,” or whatever else might be knockin’ around in your noggin…I am still seeing massive copium.  But the models still are not able to “operate in the actual world as a being.”  Those are the complementarities you need to be looking for, namely how you as a physical entity can enhance the superpowers of your model, or should I express that the other way around?  That might include gathering data in the field, persuading a politician, or raising money.  I am sure you can think of examples on your own.

The post Advice for economics graduate students (and faculty?) vis-a-vis AI appeared first on Marginal REVOLUTION.

How should you change your life decisions if we are being watched by alien drone probes?

I’ve asked a few people that question lately, and I get either no answer or very exaggerated answers.

Rep. Burchett recently raised the possibility of being terrified and not sleeping at night if UAPs are aliens.  But even if that is your immediate response, you need a more constructive medium-term adjustment to the new situation.

One option would be to pray to the aliens as gods, but I do not recommend that.

Another option is to not change anything, on the grounds that the aliens (probably?) have not been interfering in earthly affairs.  Or if they have been interfering, they might be interfering in steady ways which are compatible with you continuing your previous life course.

That is mostly a defensible stance, but it hardly seems a true marginalist should make zero adjustments in light of the new and very radical piece of information.  If nothing else, you need to consider that other people will in time respond, and you will in turn want a response to their choices.

A third option is to write more about the aliens, so that when their presence is (partially?) revealed, you will rise in status and influence.

Should you buy more insurance?  But against what exactly?

Hold more defense stocks in your portfolio, if you anticipate more defense spending as the pending human reaction to the revelations?

Consume more?  Maybe.

The most plausible decision however is to slightly lower your level of ambition.  Consider a few of the core scenarios.

If the aliens go rogue on us and end it all, the efforts you might be making now will have been for naught.

If the aliens are here to cap the level of human achievement, for instance to keep us on Earth and prevent us from exploring the galaxy, yet without harm, you also can scale back your ambition a bit.  You do not need to invest so much capital in supporting the space program.  Most of your more local ambitions however should remain untouched.  You might even become more ambitious in keeping the Earth a safe place, since escape hatches are now less likely.  Alternatively, you might think the aliens are our “saviors of last resort,” but that too probably makes you less ambitious.

A more general Bayesian update is simply that human efforts, in the broader scheme of things, have lower relative marginal products than you might have thought.  The aliens apparently have lots of powers, at least if they managed to get here.  That too militates in favor of lowering your ambitions.  Conversely, if you start believing we are the only intelligent, agentic beings in the galaxy, arguably you should increase your ambitions.  There will be fewer outside forces to stop, limit, or reverse your efforts.

To be clear, in this Bayesian update large numbers of people still should increase their ambitions, since they were not optimizing in the first place.  But they should increase those ambitions slightly less than one used to think.  And in some areas, perhaps they should not increase their ambitions at all.

Finally, you should not decrease your ambitions a lot.  For one thing, you may need an ongoing high level of energy and ambition to deal with the changes that aliens — or even the perceptions of alien presence — will bring to earthly civilization.  Furthermore, since any alien-induced uncertainty about the future is very hard to model, most people will do best by simply continuing on their current tracks.  It makes no sense to start waving around a sword to scare off the alien drone probes.

Nonetheless, some of your more extreme ambitions should be carved back just a wee bit.  Sorry about that.

I guess it is a good thing nobody is watching then.

Addendum: For this post I am indebted to a useful lunch conversation with Robin Hanson, Bryan Caplan, and Alex T.

As Artemis II zooms to the Moon, everything seems to be going swimmingly

As the Artemis II lunar mission moved into its third day on Friday, and with the spacecraft's big engine firing behind it, the four astronauts on board had a little more downtime.

So the four crew members—Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen—had their first opportunities to speak with their families at length, and also did a couple of media events. They held medical conferences with physicians back in Houston, although these were apparently routine since none of the crew members were experiencing space adaptation sickness.

And they had some time to take pictures. Wiseman, the mission's commander, sent a particularly spectacular image on Friday morning that showed our planet's night side (with a relatively long exposure). Among the beautiful details in this image were not one but two auroras, as well as zodiacal light in the bottom right of the image. The Sun is visible in the distance, lighting the far side of the Earth.

Salarymen, specialists, and small businesses

Photo by Joe Mabel via Wikimedia Commons

In the medium to long term, AI may replace all human jobs (or maybe not). But in the short term, AI doesn’t seem to be doing this yet. Employment rates for prime-age workers in the U.S. are hovering near all-time highs.

A recent survey of corporate CFOs found “little evidence of near-term aggregate employment declines due to AI.” A survey of European firms found no evidence of job reductions so far, despite rising productivity due to AI. Geoffrey Hinton, one of the pioneers of modern AI, famously predicted the imminent displacement of all radiologists by AI algorithms; in fact, radiologists are in greater demand than ever.

So even though AI may displace human beings en masse in the future, it’s not doing that today. But it is likely to change the nature of work. Software engineers, for whom “writing code” was a big part of the job description just a few months ago, are now mainly checkers and maintainers of code written by AIs. But this hasn’t eliminated the need for software engineers — at least, not yet. It has just shifted their job descriptions.

Humlum and Vestergaard (2026) find that so far, this pattern — workers shifting to new tasks without losing their jobs — is the norm, at least in Denmark:

[M]ost employers in [AI] exposed occupations have adopted chatbot initiatives, workers report productivity benefits, and new AI-related tasks are widespread. Yet…we estimate precise null effects on earnings and recorded hours at both the worker and workplace levels, ruling out effects larger than 2% two years after the launch of ChatGPT. What moves is the structure of work: employers absorb AI through task reorganization—including new tasks in content generation, AI oversight, and AI integration—and adopters transition into higher-paying occupations where AI chatbots are more relevant, though still too few to move average earnings. [emphasis mine]

In other words, so far, AI is replacing tasks, not jobs. Alex Imas and Soumitra Shukla have written that as long as there are a few things that only humans can do, this pattern can be expected to hold. Observers of AI consistently find that its capabilities are “jagged” — it’s much better at some tasks than others.

That’s good news for people who are worried about losing their jobs (at least in the next decade). But it’s still very troubling for people trying to decide what to study. A decade ago, it made sense — or at least, it seemed to make sense — to tell young people to “learn to code”. Nowadays, what do you tell them to learn? What tasks will be the ones that humans still need to do, and which will be subsumed by AI? With AI getting steadily better at a very wide variety of tasks, it’s hard to predict exactly what humans will still be doing in five years, even if you’re pretty sure they’ll be doing something.

I have some friends who have spent the last decade or more thinking carefully about what the future of work will look like in the age of AI. No one has ever found a satisfactory answer. As AI technology has developed and changed, even the most plausible predictions for the future of human labor tend to get falsified almost as quickly as they’re made.

But I’ve been thinking about this question too, and I think I’m beginning to see the shape of an answer. I think the near future of work will mostly be divided into three types of jobs — salarymen, specialists, and small businesspeople.

Let’s talk about specialists first, because they’re the easiest to understand. A new theory by Luis Garicano, Jin Li, and Yanhui Wu describes why some workers will keep their jobs largely as they exist today.

Like many economists, Garicano et al. envision a job as a bundle of various tasks. But they also theorize that in some jobs, these tasks are only “weakly bundled” — you don’t really need the same person to do all of those tasks. For these jobs, it would be easy to divide up the tasks between different workers — or between a human and an AI. But in other jobs, the authors assume that the tasks are “strongly bundled” — the same person who does one part of the job has to do the other parts, or the job can’t be done.

The paper’s basic conclusion is that AI tends to replace weakly bundled jobs a lot more quickly than it replaces strongly bundled ones. For example, they theorize that radiologists still have jobs because even though AI can do most of the task of basic scan-reading, there are a lot of other pieces of the job that radiologists still need to do in order to deliver patients the kind of care and expertise they demand. They foresee employment in strongly bundled industries resisting automation until AI capabilities get extremely good.

The people in those strongly bundled jobs are specialists. An example of a specialist might be a blogger. AI, so far, is very good at doing background research, proofreading, and a number of other tasks that are useful for the writing process. But even though it can generate infinite amounts of text, AI is not yet good at writing. Writing communicates a unique human perspective; simply pressing a button to generate text doesn’t say what you want to say. So the tasks that make up my own job are — so far, at least — strongly bundled. AI is making me more productive, but so far it isn’t putting me in danger of unemployment.

But what about those weakly bundled jobs? Garicano et al. predict that these will begin to decline only after demand becomes sufficiently inelastic — in other words, once AI becomes so productive that its output hits diminishing returns for the consumer. After that point, automation tends to replace human labor — it becomes a way to make the same amount of stuff with fewer workers, instead of a way to make more stuff with the same amount of workers.
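The elasticity threshold in that argument can be made concrete with a toy constant-elasticity model. This is purely illustrative and not the formulation in Garicano et al.: a productivity gain lowers the effective price of output, demand responds with some elasticity, and employment is whatever output is demanded divided by the new per-worker productivity.

```python
# Toy model of automation vs. employment. If productivity rises by a
# factor g and demand has constant elasticity e, output scales by g**e
# and workers needed scale by g**e / g = g**(e - 1). Employment grows
# when e > 1 (elastic demand) and shrinks when e < 1 (inelastic).
# Purely illustrative; not the formulation in Garicano et al.

def workers_needed(base_workers, productivity_gain, demand_elasticity):
    new_output = base_workers * productivity_gain ** demand_elasticity
    return new_output / productivity_gain

print(workers_needed(100, 2.0, 1.5))  # elastic demand: employment rises
print(workers_needed(100, 2.0, 0.5))  # inelastic demand: employment falls
```

The crossover at elasticity 1 is the "sufficiently inelastic" point in the paragraph above: past it, productivity gains stop expanding output and start shedding workers.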

Until that point, there will be quite a lot of work for people in weakly bundled jobs to do, because of expanded demand. And yet at the same time, companies won’t know which tasks to hire workers for, because AI’s “jagged” strengths and weaknesses will be constantly changing.

The rapidity with which Claude Code replaced the task of code-writing demonstrates this problem. In 2025, companies hiring software engineers could judge their merit based on how good they were at writing code. In 2026, companies have to judge the merit of software engineers based on how good they are at checking and maintaining code. Those skills don’t always go together.

The solution, I think, is to hire more generalists. Instead of picking people to do specific tasks, companies will pick people whose job is to constantly learn what AI is good and bad at, and to fill in the gaps. Cedric Savarese sums up this idea:

The first stage of ‘vibe freedom’ is…[t]he dreaded report that would have taken all night looks better than anything you could have done yourself and only took a few minutes…The next stage comes almost by surprise — there’s something that’s not quite right. You start doubting the accuracy of the work — you review and then wonder if it wouldn’t have been quicker to just do it yourself in the first place…You argue with the AI, you’re led down confusing paths, but slowly you start developing an understanding — a mental model of the AI mind. You learn to recognize the confidently incorrect, you learn to push back and cross-check, you learn to trust and verify…

Curiosity becomes essential. So does the willingness to learn quickly, think critically, spot inconsistencies, and to rely on judgment rather than treating AI as infallible…That’s the new job of the generalist: Not to be an expert in everything, but to understand the AI mind enough to catch when something is off, and to defer to a true specialist when the stakes are high[.]

Essentially, AI is going to be unreliable, but not in a predictable way. Its mistakes and shortcomings will require constant human exploration and patching. This is the job of a generalist. Instead of people who do “payroll” or “back-end engineering” or “accounting”, companies will need to hire people who can do a little bit of everything, if and when the AI messes something up.

In fact, we have an example of a corporate system that relied very heavily on this type of generalist: Japan. Until very recently, Japanese companies treated their “salarymen” as almost interchangeable labor, rotating them between different divisions and requiring them to learn a wide array of tasks. You might start your career in HR, then move to accounting, then do some product design, and so on.

This system might not have been very efficient, and the lack of specialization may have contributed to Japan’s notoriously low white-collar productivity. And it may be why salaryman jobs have been in decline for many years. But in the age of AI, it may finally make sense. When human expertise is replaced by AI expertise, humans’ role may be to flit from task to task, doing whatever the AI is bad at, and supervising AI at whatever it’s good at.

Instead of hiring people who are good accountants or good HR specialists or whatever, companies might start hiring people who are just good AI wranglers, and who have the agency, mental flexibility, and energy levels to keep plugging the ever-shifting holes in what AI can do. In other words, salarymen.

The salaryman system also naturally lends itself to long job tenure. If I’m a highly specialized engineer, I can take my talents and move to a different company with my human capital intact. But if I’m a generalist who does a little bit of everything, what becomes more important to my value as a worker are my human networks within a company, and my understanding of the company’s system. This makes me a much less portable worker; I’m inclined to stay at the company where my long job tenure makes me more valuable than newcomers.

You can already see hints of this happening in American companies. We’re in a “no-hire, no fire” economy — workers are hunkering down in their jobs and refusing to switch, and companies are keeping them there instead of hiring new workers:

Source: a16z

This is exactly what you’d expect from a model of firm-specific human capital — in other words, from an economy where everyone increasingly realizes that modern employees need to act like Japanese salarymen. The hypothesis here is that people don’t want to leave their jobs (and companies are happy to keep them in their jobs) because their technical skills might be devalued due to rapid AI progress; instead, they’re staying in their companies, where knowing people and knowing how things work are still important.

So America may yet come to embrace the way of the salaryman. But the third category of future employment will also be very Japanese: self-employment and small business.

Japan has long had a very high prevalence of small business ownership. It has one of the world’s largest proportions of small and medium-sized enterprises. In manufacturing as well as in retail, Japan has traditionally had a lot more small business than other OECD countries. This is now decreasing, as the population ages and business owners retire without heirs or proteges. But it still might point the way to the AI-enabled future.

AI creates leverage; it allows you to do more with a smaller team. For many businesses, the optimal size of this team will fall to only one person or a few people. Thus, I expect to see a lot of small companies sprout up, as people use AI agents to increase their productivity to the point where they only need a few employees (or even zero).

In other words, I expect AI to make the American labor system look a bit more like the Japanese labor system of the 1960s-2000s. There will be a bunch of generalists running around looking for things to do within their companies, a bunch of small businesspeople striking out on their own, and a few specialists with specific skills that still make them valuable. If you’re not one of the lucky few in the latter category, your choices will be to become a cog in an ever-changing corporate machine, or to strike out on your own and manage an AI “team” to sell some good or service directly to the consumer.

This might not be the most optimistic or enticing view of the future of work, especially to people who have lived their whole life thinking that their specific job skills are what made them valuable to society. But it’s probably better than humans becoming economically obsolete.


Friday Squid Blogging: Jurassic Fish Chokes on Squid

Here’s a fossil of a 150-million-year-old fish that choked to death on a belemnite rostrum: the hard, internal shell of an extinct, squid-like animal.

Original paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Company that Secretly Records and Publishes Zoom Meetings

WebinarTV searches the internet for public Zoom invites, joins the meetings, secretly records them, and publishes the recordings. It doesn’t use the Zoom record feature, so Zoom can’t do anything about it.

NSF update

The White House seeks to slash the NSF budget by nearly 55%, to $4 billion. The proposal also cuts all funding for the NSF division that funds research on the social sciences and economics. At an internal all-hands meeting on Friday, NSF leaders announced that they would dissolve the agency’s Social, Behavioral and Economic Sciences directorate based on the budget request, according to two NSF staff members who shared information anonymously in order to speak freely.

Here is the full story.

Come See Us in Austin, TX

I just had two emailers in a row with whom I’d had back-and-forths about the comparison between the Iran War and the Suez Crisis of 1956. And at the end of each exchange they said, hey, looking forward to the live podcast in Austin next week! (Who knows? Maybe Austin is a big Suez Crisis town.) More important, it reminded me that we’ve secured additional space and now have a small additional number of tickets for next Wednesday. So if you’re in Austin or near enough that it’s convenient to get there, come see us in Austin next Wednesday night, April 8. Click here for tickets.

Watch This: Trump’s Word Is Not His Bondi

Kate and Josh talk Pam Bondi’s ouster, Trump’s Iran stemwinder and the birthright citizenship oral arguments.

Watch and subscribe to see all of our video content on our YouTube page.

You can listen to the new episode of The Josh Marshall Podcast here.

Refunds for Some

Many of the smaller businesses that took a hit from Trump’s tariffs are not, court filings suggest, set up to collect a refund, and they may never be, Layla A. Jones reports.

Congressional Pratfalls Unpacked

We discussed a wild few weeks on Capitol Hill yesterday, including a comical series of maneuvers by Senate and House Republicans, each of whom are now swallowing legislation they pledged to oppose, and a seeming attempt by Republican leadership to get Trump off their backs when it comes to the SAVE Act. Watch here.

Who’s the Next Lady on Trump’s Chopping Block

In the before times, when a president wanted to make a change at the top of a department, he had a talk with that person, or had an intermediary do so, and explained that it was time for a change. The secretary was allowed to make the decision on their own, even if it was usually known that it wasn’t really their choice. I was thinking about that this week as Pam Bondi’s ouster speedran from hint to certainty in … what? 24 hours? Why doesn’t she just step down on her own, I thought? But I quickly realized why, just on the basis of thinking about the pattern and about Trump. If Trump is getting ready to fire you and you quit, I strongly suspect this would enrage him. He’d see it as a major and perhaps unforgivable act of defiance. Trump gets to fire you. Period. I think he would see anything else the way others might see a subordinate announcing and claiming credit for a project the executive felt he owned.

Trump gets to fire you. Period. It’s a privilege of his power. It’s a benefit of the job. And it’s a reminder, at least this is how I interpret all this, that the firing itself is as much as anything an act of presidential self-soothing, just as launching the war against Iran was. With Kristi Noem and now Bondi canned, I and many others have been waiting for the axe to fall next on Tulsi Gabbard. She is, after all, a woman. Unlike Noem and Bondi she really had very, very little pre-existing relationship with Trump. That’s actually a reason her nomination surprised me from the start. She’s bad. But she’s not really one of his people. Like Bobby Kennedy Jr., he really doesn’t have reason to trust her. The fact that she is wildly unqualified (being in the pocket of a major foreign adversary is usually a deal killer for an intelligence chief) is kind of beside the point. But this article in the Times says that it may be Labor Secretary Lori Chavez-DeRemer.

As is always the case with Trump, you have to balance the fact that Chavez-DeRemer almost certainly should be fired (she and her husband face, respectively, an inappropriate workplace relationship and accusations of sexual assault at the department) with the fact that she’s almost certainly about to be fired because she’s a woman. I mean, are we really saying that Pete Hegseth or Scott Bessent are doing great work? Howard Lutnick? Incompetence and manifest corruption are just the baseline. They can’t be a reason for anyone’s ouster.

The Inflation Surge Is Just Getting Started

Fossil fuel stocks haven’t kept up with the market in recent years. (Anton Petrus/Getty Images)

These days we often think of memes that capture a particular moment or idea. In the old days it was cartoons. There’s a classic that captures a big part of what is happening now with the stoppage of tankers (not all of them carry oil or even other hydrocarbons) in the Strait of Hormuz. I think the cartoon in question is from The New Yorker. If anyone has a copy, do send it. In the cartoon a guy has jumped off a skyscraper. As he flies by the 50th floor a guy in the building asks him, “How’s it going?” The guy flying by says, “So far, so good!”

That about captures the current moment. I noted yesterday that the oil futures markets currently show the price of oil getting back close to where it was in mid-February by December 2028. Yes, 2028. So if you’re thinking in U.S. electoral terms, this isn’t just something for the 2026 midterms. It’s an issue for the 2028 general election as well. The current spot price for oil out of the Gulf (Brent Crude) is at $141, a price runup based on immediate scarcity issues which the futures markets assume will level out fairly quickly. The point is that a big runup in prices is basically already locked in. Obviously, markets could be pricing in more price increases than will actually happen. But those prices seem to assume a quicker end to the conflict than will actually happen. And it’s not just oil. It’s all the things that run on oil. It will show up in the price of foodstuffs that get shipped around the United States in trucks which run on diesel fuel. It’s fertilizer that comes out of the Gulf. Donald Trump claims and maybe believes that this isn’t really an issue for the United States since we now produce more oil than we use domestically. But obviously that’s not how a global market works. If prices are high for Gulf oil going to Asia, they’ll start pulling oil in from other parts of the world including the U.S. More or less, it will even out.

I’m already seeing clear signs that major shippers of various products are factoring two or three points of higher inflation over the next couple of years into their planning. They could be wrong too. But one of the things about inflation surges is that people raise prices on expectations. So some of the predictions create their own price reality.

I’m not saying anything that people paying attention and with money on the line don’t know. It hasn’t crept into the news and political conversation yet. But it’s coming.

Liftoff! Returning to the Moon



Trump proposes steep cut to NASA budget as astronauts head for the Moon

President Donald Trump released a budget blueprint on Friday calling for a 23 percent cut to NASA's budget, two days after the agency launched four astronauts on the first crewed lunar mission in more than 50 years.

The spending proposal for fiscal year 2027 is the opening salvo in a multi-month budget process. Both houses of Congress must pass their own appropriations bills, reconcile any differences between the two, and then send the final budget to the White House for President Trump's signature. Fiscal year 2027 begins on October 1.

The White House requested a similar cut to NASA last year. The Republican-led Congress resoundingly rejected the proposal and kept NASA's budget close to its level in the final year of the Biden administration. Like last year's budget, the proposal from the Trump administration will undergo major changes as Congress weighs in over the coming months.


Friday 3 April 1663

Waked betimes and talked half an hour with my father, and so I rose and to my office, and about 9 o’clock by water from the Old Swan to White Hall and to chappell, which being most monstrous full, I could not go into my pew, but sat among the quire. Dr. Creeton, the Scotchman, preached a most admirable, good, learned, honest and most severe sermon, yet comicall, upon the words of the woman concerning the Virgin, “Blessed is the womb that bare thee (meaning Christ) and the paps that gave thee suck; and he answered, Nay; rather is he blessed that heareth the word of God, and keepeth it.”

He railed bitterly ever and anon against John Calvin, and his brood, the Presbyterians, and against the present term, now in use, of “tender consciences.” He ripped up Hugh Peters (calling him the execrable skellum), his preaching and stirring up the maids of the city to bring in their bodkins and thimbles.

Thence going out of White Hall, I met Captain Grove, who did give me a letter directed to myself from himself. I discerned money to be in it, and took it, knowing, as I found it to be, the proceed of the place I have got him to be, the taking up of vessels for Tangier. But I did not open it till I came home to my office, and there I broke it open, not looking into it till all the money was out, that I might say I saw no money in the paper, if ever I should be questioned about it. There was a piece in gold and 4l. in silver.

So home to dinner with my father and wife, and after dinner up to my tryangle, where I found that above my expectation Ashwell has very good principles of musique and can take out a lesson herself with very little pains, at which I am very glad. Thence away back again by water to Whitehall, and there to the Tangier Committee, where we find ourselves at a great stand; the establishment being but 70,000l. per annum, and the forces to be kept in the town at the least estimate that my Lord Rutherford can be got to bring it is 53,000l.. The charge of this year’s work of the Mole will be 13,000l.; besides 1000l. a-year to my Lord Peterborough as a pension, and the fortifications and contingencys, which puts us to a great stand, and so unsettled what to do therein we rose, and I to see my Lord Sandwich, whom I found merry at cards, and so by coach home, and after supper a little to my office and so home and to bed.

I find at Court that there is some bad news from Ireland of an insurrection of the Catholiques there, which puts them into an alarm.

I hear also in the City that for certain there is an embargo upon all our ships in Spayne, upon this action of my Lord Windsor’s at Cuba, which signifies little or nothing, but only he hath a mind to say that he hath done something before he comes back again.

Late tonight I sent to invite my uncle Wight and aunt with Mrs. Turner to-morrow.


April 2, 2026

This afternoon, President Donald J. Trump posted on social media a video of the theme song of the Davy Crockett TV series from 1954–1955 starring Fess Parker. Over the clip, he wrote: “Davy Crockett, obviously a distant relative of Jasmine Crockett, and a very High IQ Frontiersman, would be proud of the legacy that he began long ago, and especially Jasmine’s Great Success as a Politician from the Great State of Texas! President DONALD J. TRUMP”

The Walt Disney Studio designed the Davy Crockett western series for children when Trump was about nine, an age that put him in the right demographic to have been part of the Davy Crockett craze that put “The Ballad of Davy Crockett” at the top of the Hit Parade and spurred the sale of $300 million of Davy Crockett merchandise as little boys begged their parents for raccoon caps that would make them look like a western hero.

Jasmine Crockett is a current Democratic U.S. representative from Texas. There is no evidence she is related to David Crockett, who served as a U.S. representative from Tennessee from 1827 to 1835 and who died at the Battle of the Alamo in 1836. Trump mused about their possible relationship before, in 2025.

It feels frighteningly appropriate for a 1950s television western to seem more important to Trump right now than the real world of April 2026 does. Davy Crockett was only one of the many westerns on television in the 1950s and 1960s as those eager to dismantle the New Deal government championed the idea of the western hero as the true American. Trump is trying to bring to life a right-wing political fantasy of the 1950s, and Americans in the present are making clear they reject it.

After World War II, Republican businessmen, southern racists, and religious traditionalists hated the government that both Democrats and Republicans had embraced since 1933, one that leveled the American social and economic playing field by regulating business, providing a basic social safety net, promoting infrastructure, and protecting civil rights. They insisted that such a system of government action was socialism or even communism, and contrasted it with their fantasy of an independent white man on the frontier who wanted nothing of the government but to be left alone.

In 1960 a ghost-written book released under the name of Arizona senator Barry Goldwater, who wore a cowboy hat and boasted of his family’s ties to the Old West although he himself grew up with a live-in maid and a chauffeur, articulated this right-wing vision.

The Conscience of a Conservative maintained that even if Americans liked the new government that had stabilized the country since the Great Depression and World War II, the Constitution’s framers had deliberately written a document that would prevent “the tyranny of the masses.”

In place of a strong federal government, the book said, power should go back to the states to restore true freedom to Black Americans, farmers, and workers. Federal action had given those groups too much power, and they were using it to destroy liberty and lower the American standard of living. In their hands, the book said, the U.S. was on its way to becoming a totalitarian state. At the same time, the government must protect the country with an increasingly strong military.

At an Easter lunch reception yesterday, Trump echoed this argument precisely. “I said to [Office of Management and Budget director] Russell [Vought], ‘Don’t send any money for daycare because the United States can’t take care of daycare,’” he said. “That has to be up to a state. We can’t take care of daycare. We’re a big country. We have fifty states, we have all these other people. We’re fighting wars. We can’t take care of daycare. You gotta let a state take care of daycare, and they should pay for it, too. They should pay. They’ll have to raise their taxes, but they should pay for it. And we could lower our taxes a little bit to them to make up, but we, it’s not possible for us to take care of daycare. Medicaid, Medicare, all these individual things, they can do it on a state basis. You can’t do it on a federal. We have to take care of one thing, military protection.”

Trump is expected to release his 2027 budget plan tomorrow, in time to use it to shape Republicans’ argument for the midterm elections in November. Like Trump’s budget requests for 2026, it calls for an enormous boost to the nation’s military spending, $1.5 trillion, to be paid for with cuts to domestic programs. But members of Congress recognized that domestic spending is popular, and their 2026 appropriations bills kept domestic spending relatively flat.

The popular pressure to fund domestic programs showed today when House speaker Mike Johnson (R-LA) backpedaled on the Senate’s plan to fund the Department of Homeland Security (DHS) without funding Immigration and Customs Enforcement and the parent agency for Border Patrol, Customs and Border Protection. Far-right House Republicans opposed the Senate’s bill, and bowing to them, Johnson called the Senate’s bill “a joke” and sent House members home until April 13 without voting on it. Today Johnson said he would bring the bill forward to pass it with Democratic support and that Republicans would then try to fund ICE and Customs and Border Protection through a budget reconciliation measure that does not need Democratic votes.

Racism was central to the rhetoric of cowboy individualism, and the institutionalization of that racism in the mass deportations and incarcerations of the Department of Homeland Security under Trump has created a backlash. A poll last week by the Public Religion Research Institute (PRRI) shows that only 35% of Americans approve of Trump’s handling of immigration while 61% disapprove.

An analysis of DHS records by Ali Winston and Maddy Varner of Wired revealed today that DHS has used agents from special units accustomed to dealing with high-risk warrants, armed drug cartels, and manhunts for civilian immigration sweeps. Agents from Border Patrol Tactical Unit (BORTAC) and its sister unit, Border Patrol Search, Trauma, and Rescue (BORSTAR), are part of what the journalists call “a secretive, tightly knit world.”

The journalists’ analysis shows that these agents are “as a group, the most violent of the hundreds of federal agents deployed to Chicago.” Following the use-of-force guidelines rewritten by former leader Gregory Bovino—himself a member of BORTAC—their use of force there “included punching and kicking protesters, throwing tear gas, macing civilians, firing pepperballs and 40-mm foam rounds into crowds, shocking people with tasers, unleashing dogs on deportation targets, and shooting unarmed civilians, killing at least one of them [Silverio Villegas González, shot at “close range” as he fled from officers after a traffic stop].”

The county medical examiner yesterday declared the death of Nurul Amin Shah Alam, a visually impaired Rohingya refugee from Myanmar whom Border Patrol agents dropped off in the parking lot of a coffee shop on a frigid February night in Buffalo, New York, a homicide. Rather than releasing him to his family or lawyer, CBP officers offered Shah Alam what they called a “courtesy ride.” He was found dead five days after agents left him at the closed shop.

A DHS spokesperson told Sydney Carruth of MS NOW that the homicide ruling was “another hoax being peddled by the media and sanctuary politicians to demonize our law enforcement. This death had NOTHING to do with Border Patrol.”

Those who oppose government social welfare programs, regulation of business, and so on, have worked to concentrate power in the president, knowing that Congress will hesitate to slash programs their voters like. Yesterday Assistant Attorney General T. Elliot Gaiser, of the Office of Legal Counsel, published an opinion for the White House that claims the Presidential Records Act, which requires that presidents keep records of their official business and turn them over at the end of their term, is unconstitutional. Gaiser clerked for Supreme Court Justice Samuel Alito.

“The PRA is not a valid exercise of Congress’s Article I authority and unconstitutionally intrudes on the independence and autonomy of the President guaranteed by Article II. The Act establishes a permanent and burdensome regime of congressional regulation of the Presidency untethered from any valid and identifiable legislative purpose,” the memo reads. “For these reasons, the PRA is unconstitutional, and the President need not further comply with its dictates.”

The fallout from that concentration of power is showing now in Trump’s disastrous adventure in Iran, an attack on the country undertaken without consultation either with Congress or with allies.

Yesterday evening, Trump commandeered time from television networks to deliver what officials billed as a major announcement on the Iran war. But rather than announce anything new in his first address to the nation about a war that has gone on now for more than a month, Trump rambled for 19 minutes, reiterating what he has put in social media posts. He said the war was almost over but also that military operations were going to intensify, said its purpose was to destroy Iran’s nuclear capabilities—despite his claim in June 2025 to have obliterated those capabilities—and said the rise in oil and gas prices would be only a “short-term increase.”

Sounding tired and speaking in a monotone, Trump reiterated his claim that the U.S. doesn’t need the oil that travels through the Strait of Hormuz and demanded that other nations who need the oil more force Iran to reopen it. In reality, the U.S. is tied into international oil markets, and prices not only of oil, but also of products that use oil to get to market, are already rising.

One Republican strategist from a battleground state texted Lisa Kashinsky and Alec Hernandez of Politico: “What the hell did he just say?” The strategist called the speech “nonsense.”

As Trump spoke, U.S. stock futures plummeted, erasing about $550 billion in 25 minutes.

Today forty nations, led by Britain and France, discussed ways in which they could work to reopen the Strait of Hormuz. The United States was not invited to participate.

In the midst of this crisis, the tension between the Army’s leadership and Defense Secretary Pete Hegseth blew up today when Hegseth fired Army Chief of Staff General Randy George. The Army chief of staff is the highest-ranking officer in the U.S. Army, the top military advisor for the Secretary of the Army, overseeing planning, training, and policy. George was appointed to his position in 2023 and worked closely with former defense secretary Lloyd J. Austin III, the four-star general who preceded Hegseth. Recently, George refused to remove four officers—two women and two Black men—from a promotion list at Hegseth’s insistence.

A source who spoke to Jennifer Jacobs, Eleanor Watson, and James LaPorta of CBS News said that Hegseth “wants someone in the role who will implement President Trump and Hegseth’s vision for the Army.” Two other Army leaders were also removed: General David Hodne, leader of the Army’s Transformation and Training Command, and Major General William Green, head of the Army’s Chaplain Corps. Hegseth has reworked the Chaplain Corps recently to limit the range of religious instruction available to military personnel.

And finally, Trump today fired Attorney General Pam Bondi by posting her dismissal on social media. He was apparently angry that she has not adequately punished his enemies and that her botched handling of the Epstein files has stoked rather than calmed the story. For the present, her replacement will be Deputy Attorney General Todd Blanche, who was Trump’s personal lawyer before joining the Department of Justice.

It was Blanche who met privately with Jeffrey Epstein’s associate, convicted sex trafficker Ghislaine Maxwell, last July, as the outcry over the Department of Justice’s apparent cover-up of the Epstein files grew. After their meeting, Maxwell was moved from the prison where she was being held in Florida, to a less restrictive, minimum-security federal prison camp in Texas.

Notes:

https://tnmuseum.org/junior-curators/posts/the-davy-crockett-craze

https://d23.com/a-to-z/davy-crockett-television/

Rick Perlstein, Before the Storm: Barry Goldwater and the Unmaking of the American Consensus (New York: Hill and Wang, 2001), pp. 19-21.

Barry Goldwater [L. Brent Bozell], The Conscience of a Conservative (1960; rpt. Princeton: Princeton University Press, 2007).

https://newrepublic.com/post/208523/trump-no-money-daycare-medicare-fight-wars-military

https://prri.org/research/americans-views-on-immigration-enforcement-ice-and-civil-liberties-in-the-second-trump-administration/

https://thehill.com/homenews/5812743-house-gop-split-over-dhs-funding/

https://www.wired.com/story/border-patrol-bortac-borstar-use-of-force-midway-blitz/

https://chicago.suntimes.com/the-watchdogs/2025/11/17/silverio-villegas-gonzalez-ice-dhs-trump-midway-blitz-shooting-homicide-franklin-park-chicago

https://www.ms.now/news/rohingya-refugees-death-in-new-york-ruled-a-homicide

https://www.politico.com/news/2026/04/02/what-the-hell-did-he-just-say-gop-iran-worries-build-after-trump-speech-00855321

https://www.nytimes.com/2026/04/01/us/politics/trump-iran-war-address-takeaways.html

https://www.reuters.com/business/energy/dozens-countries-discuss-coalition-secure-passage-through-strait-hormuz-2026-04-02/

https://www.nytimes.com/2026/04/02/us/politics/hegseth-fires-general-randy-george.html

https://www.cbsnews.com/news/hegseth-ousts-army-chief-of-staff-gen-randy-george/

https://www.stripes.com/theaters/us/2026-03-25/chaplain-corps-rank-insignia-hegseth-21176194.html

https://www.nbcnews.com/politics/justice-department/ghislaine-maxwell-justice-department-meetings-rcna221240

https://www.pbs.org/newshour/nation/ghislaine-maxwell-transferred-to-minimum-security-prison-camp-in-texas

https://www.justice.gov/olc/media/1434131/dl

https://www.cbsnews.com/news/justice-department-presidential-records-act-unconstitutional/

https://www.independent.co.uk/news/world/americas/us-politics/trump-jasmine-crockett-cnbc-b2802261.html

X:

KobeissiLetter/status/2039515620492857474

Bluesky:

atrupar.com/post/3mijr3x4pvs2d

lincolnproject.us/post/3mijkfjuhdg2l

drjackbrown.bsky.social/post/3mikad7mpn22s

premthakker.bsky.social/post/3mijpwxnyms2k



ULA’s Atlas 5 rocket launches its heaviest payload ever with fifth Amazon Leo mission

The United Launch Alliance (ULA) Atlas 5 rocket sits on Space Launch Complex 41 (SLC-41) at Cape Canaveral at sunset. This will be ULA’s fifth launch for the Amazon Leo broadband satellite constellation.

United Launch Alliance launched its latest Atlas 5 rocket, carrying a batch of 29 Amazon Leo satellites to low Earth orbit. The mission carried the largest and heaviest payload lofted to orbit by an Atlas 5 rocket to date, according to ULA.

The mission was called Amazon Leo 5 by ULA and Leo Atlas 5 (LA-05) by Amazon. This was the fifth launch of operational satellites by ULA and the ninth overall for the constellation, a tally that includes one flight on Arianespace’s Ariane 6 rocket and three on SpaceX Falcon 9 rockets.

Liftoff of LA-05 happened Saturday, April 4, at 1:46 a.m. EDT (0546 UTC). The rocket headed out on a north-easterly trajectory upon leaving the launch pad. U.S. Space Force meteorologists predicted a 90 percent chance of acceptable weather for the launch.

After completing its launch readiness review on March 26, ULA began rolling its 62.5-meter-tall (205 ft) rocket out from its Vertical Integration Facility to the pad at Space Launch Complex 41 the following morning. The move began around 10 a.m. EDT (1400 UTC) and ULA reported a “hard down” at the pad at 11:16 a.m. EDT (1516 UTC).

However, with high winds forecast for the rocket’s original launch date of March 29, ULA was forced to push back the launch until the next available launch date at Cape Canaveral after NASA’s Artemis 2 launch.

The Atlas 5 rolled back to its hangar on Tuesday and returned to the pad Thursday.

The 29 Amazon Leo satellites were released starting about 21 minutes after liftoff. There were 10 deployment sequences, which ended about 17 minutes later. The RL10C-1-1 engine on the Centaur 3 upper stage then reignited about 55 minutes after liftoff for a disposal burn, which will end the mission.

The previous four missions for Amazon Leo that launched on Atlas 5 rockets carried 27 satellites each. ULA and Amazon Leo were able to increase the payload stack to 29 as “a result of detailed engineering work between ULA and Amazon,” according to ULA.

Amazon pointed to ULA’s use of the RL10C-1-1 engine on the rocket’s upper stage as a key reason it was able to add two more satellites to the mission.

“While the engine has flown on previous missions, LA-05 marks the first time the program has completed the extensive engineering and safety analysis required to use it with our larger payload,” Amazon said in a blog post. “Our engineering teams capitalized on the additional performance margin, adding a fourth level to the previous three-tier dispenser configuration for Atlas 5.”

A sleep aid

Mostly I go to sleep very easily. Like 3 minutes from lights out, max.

I’m content, I exercise, I burn my tokens each day, I think all that helps.

Often I wake early and think. I’m protective of what goes into my 4am thinking time; I enjoy it. You don’t get to choose what you think about at 4am. It’s inevitably going to be work. So I optimise for having interesting work and I’m very lucky there. Mostly I go back to sleep after a bit.

Sometimes I don’t get to sleep easily, for example in 2021.

In that case I close my eyes and visualise a device:

The device has 6 buttons arranged in two rows. It changes in appearance but the most common form in my imagination is a Dieter Rams-style enclosure in beige about 4 or 5 inches across with its buttons on the top, and the buttons are flush against each other with a circular depression on the top to push down with your finger.

The buttons are really satisfying to push. Good resistance, good slip-clunk into place when engaged.

Sometimes it’s different. Sometimes the buttons click in like a pen top when they’re down; sometimes they rise up as soon as I’m not pressing. Sometimes they light up when activated, sometimes not.

The game, in my imagination, is this:

There is some combination of buttons that I can push which causes me to instantaneously fall asleep. But I don’t know the code.

So what I imagine (it’s a visual and tactile experience) is trying every single combination of buttons until I push the right ones together.

There aren’t many combinations to try, only 63. I usually try a few then run through methodically by counting up in binary.
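The arithmetic checks out: six buttons give 2^6 − 1 = 63 non-empty combinations. A toy sketch (my own illustration, not from the post) of running through them methodically by counting up in binary, where each bit of the counter says whether a button is pressed:

```python
# Enumerate every non-empty combination of 6 buttons by counting 1..63
# in binary. Bit i of the counter means "button i+1 is pressed".
NUM_BUTTONS = 6

def button_combinations(n=NUM_BUTTONS):
    """Yield each combination as a tuple of pressed button numbers (1-based)."""
    for code in range(1, 2 ** n):  # 1..63: skips the empty press
        yield tuple(i + 1 for i in range(n) if code & (1 << i))

combos = list(button_combinations())
print(len(combos))   # 63 combinations in total
print(combos[0])     # (1,) -- just the first button
print(combos[-1])    # (1, 2, 3, 4, 5, 6) -- all six at once
```

One combination per count, none repeated, none missed; which is presumably why counting in binary beats trying combinations at random.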

Perhaps the code is different each night, perhaps it is the same – I wouldn’t know because I discover it successfully every time and go to sleep and forget what happened.

I was gently advised against posting this because it makes me sound like a weirdo but you already know that about me. And now you know about the six buttons too.

The Collision at La Guardia

April 1, 2026

So far I haven’t had much to say about the deadly collision at La Guardia airport on March 22nd, when a Jazz Aviation (operating as Air Canada Express) regional jet collided with a fire truck seconds after touching down. The truck had been cleared by air traffic control to cross the active runway.

The most obvious question is why the controller permitted the truck to cross. How did he forget, or not notice, that the RJ was, at that moment, barreling down the same runway? The airport was busy and the tower had been dealing with a different flight declaring an emergency. Maybe that explains a few things, but how is it, in a time of high workload and high distraction, that a single controller is empowered to make a life-or-death decision without a second controller’s scrutiny, especially at night?

ATC understaffing, I’m sure, has a role here. Otherwise, as a pilot-pundit I’m supposed to have answers. I’m afraid I don’t.

What I can tell you, though, is only a few days before the accident I’d remarked to a friend about how the proliferation of vehicles at busy airports felt unsafe to me. Not so much the myriad cars, trucks, and tugs that work the inner ramps, shuttling around luggage and whatnot, but the ones with authorization to operate on active runways and taxiways. These include airport maintenance vehicles, plows, emergency vehicles, and so on.

What training do these drivers receive? Listening over the radio, I sometimes shake my head. Their clearance read-backs, for instance, often sound tentative or uncertain. What sort of situational awareness do they have? In the cockpit, pilots listen out not only for their own instructions, but for those of other aircraft as well, allowing us to paint a mental picture of the movement around us. The importance of this would seem self-evident, but does the man or woman steering a fire truck think this way too?

And couldn’t the driver have seen the regional jet? A pilot will never cross a runway without double-checking, visually, for oncoming traffic. This isn’t possible in low visibility, but most of the time it is. The weather at LGA wasn’t great, but it wasn’t terrible either. As any motorist who’s been broadsided at an intersection knows, putting your trust in a stoplight isn’t enough. You don’t cruise through a green without making sure that someone isn’t running the red. The truck, responding to an emergency, approached the runway at an angle. It may have been hard for the driver to see. Was his view obstructed, or did he merely take the controller’s word that the runway was safe?

And what of the Jazz pilots? It’s possible they heard the controller issuing that ill-fated crossing clearance. But they were already on the ground with only a few seconds to react.

The plane hit the truck straight on, nose-first, and both pilots were killed. Everyone else survived. It’s interesting to wonder what the outcome might’ve been had the pilots swerved to avoid the collision. There wasn’t enough time to turn clear; either way they were going to hit. And had they swerved, the point of impact would have been closer to the plane’s midsection, or even at the wing root, resulting in an explosion and many more deaths. The lack of a fire saved the passengers.

Most likely, once the investigation is complete, the La Guardia controller will receive the brunt of the blame. This won’t tell the whole story. Understaffing, darkness, urgency, distraction, and ATC protocols all had roles to play, creating a situation where one small mistake proved fatal.


Photo by Jordi Moncasi, courtesy of Unsplash.

The post The Collision at La Guardia appeared first on AskThePilot.com.

Is Jason Sams real, AI or hologram?

There is a man named Jason Sams.

I think.

I’m pretty sure.

Perhaps.

He is running for a position on the Orange County Board of Education, Area Five.

I think.

I’m pretty sure.

Perhaps.

Last Wednesday, on a street corner in Mission Viejo, I watched him address the crowd at yet another wonderful South OC For Democracy event. He spoke for, oh, seven minutes, and offered nary a word of substance or detail or interest. To be blunt, it was the worst presentation I’ve witnessed in my year throwing down Truth OC jewels.

If you don’t believe me …

Because I was but a few feet away, I can confirm—with 96.5 percent certainty—that Jason Sams is a real Homo sapiens; a 50-year-old man with shoes, pants, a belt, a shirt, sunglasses, a shaved head, a warm smile. That, however, is as far as I can go.

He is the most befuddling Democratic candidate I’ve seen thus far. And his, “Hey, why not?” approach to this profoundly annoys me.

So let’s dig in …

To start with, Jason Sams has a website. Which, obviously, is required of 2026 political candidates. Here is the link. Nothing (literally nothing) about the page makes sense. It is a base-level GoDaddy template, and whoever filled in the blanks has clearly never set foot near a functioning computer. The fonts are buffoonery squared. The primary photograph is crudely placed. The capitalization decisions wound my journalistic soul.

This is what is listed below MY STORY …

Just for kicks, I asked ChatGPT to write a one-paragraph summation of why I (Jeff) am running for Orange County Board of Education (which I’m not), and it produced this …

I mean … y’all see it, don’t you?

You see it! Right? Right!?

I digress.

There is a DONATE link that takes you to a … blank-ish PayPal page. There is a listing of CAMPAIGN PRIORITIES that (again) could be straight from a ChatGPT quickie. Inexplicably, there’s a HELP OUR CAUSE plea alongside this photograph …

Is the cause forestry? Nature photography? Chopper bungee jumping? The return of Alf to NBC?

I do not know.

And the site irks me. First, because it sucks. Second, can we at least take this shit somewhat seriously? Please. I’ve been hard of late on Esther Kim Varet—but at least she’s in it to win it. Say what you want, the woman is trying. Jason Sams’ website screams, I’M IN IT BECAUSE MOM TOOK AWAY MY NINTENDO SWITCH! It’s vague and ugly and ridiculous and useless, and nary a single voter would leave thinking, “This is my guy.”

Oh, and it gets worse: In the lord’s year of 2026, Jason Sams has no social media presence. Literally zero. He links to nothing from his site, and a scan of the ol’ IG comes up empty. Same with TikTok. And Facebook.

Again—can we at least take this shit seriously?

Jason Sams’ LinkedIn page is little better. It’s just … vague.

Here’s the ABOUT section …

His first two EXPERIENCE listings involve recent advisory roles …

And his longest position—16 years and running—is as the founder and chairman/sustainability of the Stone Water Group, a company that hasn’t shown a pulse since 2018, boasts a dead website and, I guess, sorta kinda maybe existed/exists.

If that’s not kooky enough, Jason Sams spent two whole months as the executive director of something called the Markarian Law Group and nine months as the board of director at the Ryan Banks Academy.

Sigh.

•••

And here’s the thing. Dogging Jason Sams is not fun for me. At all. But we—Orange County’s Democrats and liberals—have to stop with this shit. Candidates are our business cards, and if we roll out ridiculousness, we’re doomed to be branded ridiculous. Either the people we push forward to run need to be engaged and qualified and inspiring (See Galvez, J.J.), or we need to find other place fillers. But the worst thing we can do (the absolute worst thing we can do) is promote duds, then watch them drown and have independents think, “Same ol’ OC Dems ...”

I have no reason to believe Jason Sams doesn’t have good intentions, but I also have no reason to believe he knows what he’s doing. He probably saw an opening, saw an election, saw some free time and thought, “Hey, LFG.”

But if you can’t express a real, non-ChatGPT reason for your campaign, and if you can’t offer up base-level facts on the position you aspire to hold, and if your LinkedIn page includes a two-month job, you’re not the right dude.

There’s no shame in that.

Links 4/3/26

Links for you. Science:

Antibiotic used in COVID patients tied to increased signs of antibiotic resistance
Somebody Finally Stood Up to RFK Jr. A federal judge’s ruling highlights the ways Kennedy’s anti-vax agenda is putting public health at risk.
More than 150,000 uncounted COVID-19 deaths occurred early in the pandemic, a study finds (paper here; I have some doubts about the assumptions of the model, but still interesting)
Lineage dynamics of invasive Escherichia coli isolates in the Netherlands from 1975 to 2021: a retrospective longitudinal genomic analysis
Rising Death Rate for Gen X, Elder Millennials Is ‘Genuinely Alarming’
What does the appendix do? Biologists explain the complicated evolution of this inconvenient organ

Other:

Despair
Iran, Slopulism, and the Eternal Innocence of the American People (excellent)
Ol’ Donny Trump Has Really Stepped in It This Time (excellent)
Trump’s War With Iran Is a Product of His Deep Stupidity
F—k Kash Patel and his $tupid shoes
What We Forget About Covid Will Shape the Next Pandemic
Afroman’s Defamation Trial Is Going About As Well For The Deputies As Their Original Raid Did
The fight over transgender rights in America has entered a new phase
The Men Obsessed With ‘High T’: Fueled by the manosphere, men are boosting their testosterone levels through natural and synthetic means, with some competitively swapping test results on a regular basis.
Trump Case for War Undermined by Bombshell as MAGA Breaks
Feds Drop Charges Against Disabled Woman Arrested For Standing Up At State Of The Union
The Algorithm Is Your Asshole Boyfriend
The AI boom is dangerously dependent on helium
Aurora ICE detainees are malnourished and forced to work, advocates report (BuT ThEy’rE NoT CoNcEnTrAtIoN CaMpS)
Straight women are rewriting the rules of heterosexual life
Why Knight Foundation Invested in Bluesky
The Supreme Court Just Heeded One of Ketanji Brown Jackson’s Sharpest Dissents
College Republicans Chapter Sues School for Right to Make Nazi Salute
President Trump isn’t making sense
Democratic Turnout Surges in Mississippi’s US Senate Primaries, Nearly Matching GOP Vote Total
The 49MB Web Page
My dogs helped when I was sad
Pentagon plans to keep National Guard in DC into 2029, 2 US officials say (this ends if Republican governors refuse to send troops)
For Gen Z Republican men, sex is solitary. Young conservatives’ anger at women is taking a nihilistic turn
Gamblers trying to win a bet on Polymarket are vowing to kill me if I don’t rewrite an Iran missile story
I’m Sorry to Burst Your Bubble: You Are Being Fooled About AI, and You Will Soon Feel Really Stupid
House Republicans move to kill D.C. traffic cameras — despite some using them back home
FEMA disaster chief claims he is able to teleport: ‘I landed in a ditch by a baptist church’
With sharp attacks and high stakes, the mayoral race kicks into gear
Chicago hires D.C.’s Housing Authority head, surprising leaders in both cities

Collections: Reconstructing the Roman Pectoral

This week we’re going to look at a specific piece of early Roman military equipment, the humble bronze pectoral, which it turns out is surprisingly tricky for us to confidently reconstruct, in part because the period of its use that most interests us (the run from c. 264 to c. 146 where Rome is winning its first big overseas wars) is a relative gap – fancy word, ‘lacuna’ – in our evidence, making it really difficult to correlate what our literary source (Polybius) is telling us to the physical evidence we have (both preserved examples and artwork). This was, we are told (by Polybius), the armor of the common Roman soldier in the period of their greatest wars, yet on some level we do not really know what it looked like. Not with certainty, in any case.

In particular I am going to argue that the most common reconstruction of this armor, as a single bronze plate suspended usually by leather straps over the chest, is probably wrong and that the armor more likely existed as a complex harness, simplified in brief literary description down to just its core element. But as we’ll see, this is going to be a zone of what I term ‘real uncertainty’ – a situation where without new evidence coming out of the ground, we simply cannot know for sure.

So this is not just an exercise in working through how to reconstruct one specific kind of equipment, but also how historians engage in questions that exist in a zone of really low confidence.

But first, as always, affording a full panoply of heavy infantry equipment as is the duty of any propertied Roman citizen is expensive! If you want to help me waste spend my money on reproduction ancient military equipment, you can support this project over at Patreon. If you want updates whenever a new post appears or want to hear my more bite-sized musings on history, security affairs and current events, you can follow me on Bluesky (@bretdevereaux.bsky.social). I am also active on Threads (bretdevereaux) and maintain a de minimis presence on Twitter (@bretdevereaux).

Via Wikipedia, a rough map of cultural groups in pre-Roman Italy. Key:
Dark Blue: Ligures
Brown: Veneti
Pink: Etruscans
Light Blue: Piceni
Light Green: Umbrians
Dark Green: Oscans (including the Samnites, discussed below)
Orange: Messapii
Yellow: Greeks
Gold: Latins (including the Romans)

Polybius

Our first stop is Polybius. Polybius wrote in the mid-second century (that is, the 140s), but his history covers the period from 264 to 146 and his description of the pectoral is placed relatively early in the narrative, in 216, as part of a larger explanation of the Roman military system. There is thus immediately a question as to whether the details Polybius is giving are correct for 216 or for the 140s when he wrote. In practice, the answer must be something of a mix: Polybius has sources that reach back and might give him details appropriate to the period (he seems to have the writings of a military tribune to use for this description of the dilectus), but it seems likely that his description of the pectoral comes from observing it. Consequently, while I suspect that Polybius’ description of who is required to wear what may be accurate for 216, he has clearly seen the pectoral and understands it to still be in use in his own day (indeed, at other points in this extended passage, he explicitly notes things that used to be one way but had changed by his own day).1

That’s handy, because Polybius is the only source that describes this armor. Later historians – Livy, Plutarch, etc. – seem broadly unaware of it and it really does seem like the pectoral was in the process of going extinct when Polybius was writing (for reasons below). So we have one description of the armor, but at least it is by an eyewitness. Here it is (Polyb. 6.23.14-15, trans. mine):

The many [hoi polloi, “the common folk”] taking a bronze plate a span [c. 23cm] on all sides, which they place over their chests and call ‘heart protectors’ [καρδιοφύλαξ, very literally ‘heart protector’], finish their armaments. However those worth more than ten thousand drachmas [= the first class of Roman infantry], instead of the heart-protector wear mail coats [αλυσιδωτοί θώρακες, “hooked [or chain] cuirasses” which we know is the Greek way to say ‘mail coats.’]

And…that’s it. From later authors (Varro, De Ling. Lat. 5.116; Plin HN 34.18) we get the Latin name for this armor, pectorale (pectorale, pectoralis (n) for the Latin nerds), thus the English term ‘pectoral’ but no more details of its construction.2

Crucially, this armor doesn’t show up on any highly visible Roman military monuments. The reason is fairly simple: the earliest really visible Roman military monuments are the Pydna Monument (168) and the so-called Altar of Domitius Ahenobarbus (late second century), by which point the pectoral was already on the way out. Ancient artists tend to prefer high status equipment and so, with the pectoral on the way out (though likely still very much in use in 168) and being the poorer, lower-status armor, they didn’t depict it, instead preferring to use mail armor to signal Roman soldiers (specifically, mail is used on the Pydna Monument to signal ‘these are Romans’ in contrast to Macedonians or Gauls).3

As a result, scholars initially didn’t have a lot to go on except Polybius’ description – the archaeology, as we’ll see, doesn’t really get sorted out until the last 40 years or so. So they reconstructed on that basis. A ‘span’ (σπιθαμή) is a ‘natural’ unit, the distance between the thumb and the little finger at full extension, which is conveniently more or less half of a cubit (the length of a forearm out to the end of the middle finger), which eventually becomes formalized in Attic measurements (which Polybius tends to use; other places might have slightly different measures for the same terms) as 23.1cm and 46.2cm respectively.

That leads to the most common thing we see in artistic reconstructions and reenactor kit: the pectoral is reconstructed as a brass or bronze plate, usually about 1-2mm thick (the normal thickness for breastplates), 23cm by 23cm square. Since obviously it needs to be attached to something it is often shown backed in leather, with leather straps around the waist and over the shoulders holding it in place. I am going to call this reconstruction – a single plate, 23cm square, on a leather harness – the ‘traditional’ reconstruction.

That size lets the pectoral cover most of the chest, but it does nothing for the belly, sides or shoulders. On that basis, I have very often heard scholars regard it as a very minimal, almost token defense, unlikely to do much at all to protect the men wearing it. And again, before there was much archaeology to work with (or before finds had been analyzed, arranged chronologically and had their development worked through), you can see how this is the most logical extrapolation of what Polybius is saying.

But I do want to note some things here. Polybius’ description of this armor is extremely brief. He does not even bother to explain what Roman mail armor is like at all – no description, for instance, of its length (to the knees) or shoulder-doubling or the front-closure mechanism. If it weren’t for period depictions of mail, we would probably reconstruct it without these elements. As for the pectoral, all he says is that it is a span square and the Romans have a funny name for it. Which is to say it is entirely possible that Polybius is leaving out some details here. Which brings us to:

The Development of the Italic Pectoral

This, of course, is the point at which we naturally turn to archaeology to provide us both physical examples of this kind of armor and also visual representations of it. And here we run into an immediate problem: the third and second centuries feature a near total lacuna of Italic armor, in both artwork and preserved examples. The problem is frustrating in its elegant simplicity: the Roman military system – terribly efficient and in its way, anti-aristocratic – coincides as it expands with the end of aristocratic ‘warrior burials’ wherever it goes. Thus as Rome during the fourth and early third century goes about consolidating control of Italy, the amount of nice tomb paintings with aristocratic warriors in procession or burials with arms and armor drops to basically nothing. The Roman army is removing the evidence we might have for the Roman army. Astoundingly frustrating.

The evidentiary record begins to pick up a bit in the second century with more artistic depictions of Roman soldiers as the Roman state engages in more monumental depictions of its soldiers (noted above), but by that point mail rather than the pectoral is the ‘national armor’ of Rome’s armies (even though the pectoral is likely in use) and pectorals never appear. The really strong archaeological record for armor will have to wait until the imperial period, when the permanent stationing of Rome’s armies on the frontier of the empire means they sit in one place long enough for us to recover bits of armor.4 Weapons show up more often than armor (pila more often than any other type of weapon, a testament to their disposability) and we get a lot of helmets (for reasons not entirely clear to me), but functionally no body armor from this period. The best we can do are tiny fragments of metal rings for mail and even those are rare.

Worse yet, as mentioned before, the pectoral was going extinct in this period. Notably, when our evidence improves massively in the first century BC and AD, the pectoral is nowhere to be found. No source mentions it as still in use in that period, no artist depicts it, no finds of it are recovered. Polybius is thus our last source for this armor, suggesting that by the start of the first century, it had been wholly replaced by mail. No shock, mail is awesome (if expensive). But that means we cannot look for later examples to help us understand what Polybius is saying.

But we can look at earlier ones.5

The Italic pectoral seems to have arrived from the Middle East in the 8th or perhaps 7th centuries (sometime between c. 750 and c. 680). This form of armor, a more or less flat metal plate (as opposed to an enclosing breastplate of the sort we see in Greece around this time) has Middle Eastern precedents (we see Assyrian soldiers in artwork wearing similar armor), though how exactly it made it to Italy is unclear – Phoenicians seem most probable, but uncertain. In either case, by the seventh century, these pectoral armors are quite common over all of Italy, including Latium (where Rome is) and Etruria. The armor at this point generally consists of two bronze plates (a front plate and a back plate), which might be rectangular or circular, about 20-25cm wide (or tall; sometimes these are even smaller than this) and which were connected by leather straps. We generally call these ‘rectangular’ and ‘single disc’ pectorals. When decorated (and they very frequently are), they usually feature either geometric designs (often rectangles within a rectangular cuirass) or animal designs, either punched into the plate or embossed.

Via the British Museum (1872,1008.1) an Italic kardiophylax (c. 700-600BC), 25.4cm wide.

And you can see how an archaeologist looking at these pectorals from the seventh century might be thinking, “ah, I see exactly what Polybius was talking about: a bronze plate a span square!” Except, of course, the seventh century is not the second century and these pectorals keep evolving.

Now, in significant parts of Italy, especially Etruria, these pectoral armors begin to be replaced in the late sixth century by Greek-style armor, especially for elite, high-status warriors. In particular, the Etruscans love the tube-and-yoke (linothorax) armor when it shows up and it swiftly becomes a marker of elite status, though pectorals do occasionally show up in Etruscan art, albeit less frequently, but they are certainly petering out. Annoyingly, at roughly this point the archaeological record for Rome specifically also dries up, so it isn’t clear exactly what armors are popular in Rome in the very early Republic (our literary sources assume Greek-style armors, which may be right, but they are guessing and deeply anachronistic in their assumptions).

However in central Italy, in the Apennines Mountains, the pectoral persists and undergoes some significant design changes. Around 600, we start to see changes to the strap mechanisms holding the armor together: one shoulder strap is replaced with a pair of bronze plates connected by a hinge. The resulting harness gets pretty complex, as you can see in the figure of the Capestrano Warrior (c. 550), where the harness that holds the pectoral also supplies a scabbard (suspended at the chest) for the sword and there is a clear contrast between the metal hinged plate (over the right shoulder) and the more reddish-colored leather straps (of which there are three, two wide and one narrow) holding the harness and scabbard together.

Via Wikipedia, the Capestrano Warrior (c. 550), found at Capestrano in Abruzzo (in Italy), depicting a warrior of the Piceni, a central Italic peoples on the Adriatic coast.

In the early fifth century, this design is both enhanced and greatly simplified with the emergence of the first ‘triple disc’ pectorals. These are so named because the front plate (and back plate) take the form of three discs in a triangular arrangement, though I must stress this is a single plate with three circular designs on it in a roughly triangular shape, not three individual circular plates. Indeed, earlier archaeologists supposed that the ‘triple disc’ cuirass must have evolved in two stages from the disc pectorals discussed above and posited a ‘double disc’ cuirass, which turns out not to have existed.

These triple-disc breast- and back-plates were joined together not by leather straps but by a simplified version of the hinged plate system used in the sixth century disc pectorals, except now there are four connecting plates: one over each shoulder (each of them hinged) and one at each side (without hinges). These plates also get a bit wider, providing relatively fuller coverage over the upper body and the armor is supplemented by a wide bronze belt worn around the waist which protects the lower abdomen. You can see the full armor clearly in artwork:

Via the British Museum, a fourth century squat lekythos showing a pectoral cuirass (in this case a ‘triple disk’ type) worn by a Campanian warrior

In the second half of the fourth century (so 350 onwards), we see these triple-disc cuirasses joined by another type, particularly on the western coast of southern Italy (so the area south of Latium), the ‘rectangular anatomical cuirass.’ This takes the existing triple-disc harness structure, with its bronze belt and connecting side and shoulder plates, but instead of the triangular triple-disc cuirass, it substitutes rectangular breast- and back-plates, with the designs on these invariably mimicking the musculature on Greek muscle cuirasses, although – because these plates are smaller than Greek breastplates (which wrap around the body) – the muscles depicted are visibly smaller-than-lifelike. In short, the artistic form of the muscle cuirass is being copied, but this is not an effort to mimic the actual muscles of the man wearing the armor.

To give a sense of size, recovered triple disc cuirasses range from 27-32.5cm tall and 26-28cm at the widest, while the rectangular anatomical cuirasses range from 29.5-37cm tall and 25-30cm wide for the front plates.6 Combined with side and shoulder plates that tend to be 5-8cm wide and a wide bronze belt (7-12cm wide, 70-110cm long, ~1mm thick), these really do cover most of the upper body, albeit with gaps, and are something closer to an articulated breastplate than they are to the small ‘heart protector’ of the Capestrano Warrior.

And you may note that a rectangular plate over the chest of c. 30cm by c. 28cm is not very far from Polybius’ description of “a bronze plate a span on all sides” and better yet is far more likely to have actually been in use in the third and second centuries for Polybius to see.

Via Wikipedia, a triple-disc cuirass with its shoulder and side plates (but no bronze belt), in the Museo Archeologico Nazionale di Paestum.
This is, as an aside, a good example – particularly the triple-disc component – of how simple the decoration of these armors could get. The cuirass is cut out of sheet metal, has three simple discs hammered into its shape and is otherwise mostly unadorned. Assuming sufficient bronze, such cuirasses could likely be made relatively quickly and cheaply, compared to something like a muscle cuirass (or certainly compared to later mail armor).

Notably – and this is going to matter in a moment – these fifth and fourth century pectoral harnesses do not appear without bronze belts or connecting plates. You will find these pectorals in museums without those added elements, in many cases because when the first of these armors were excavated (and/or looted) it was done carelessly and so the smaller plates were missed. However, whenever we get these armors with secure provenance or see them depicted in artwork, as Michael Burns notes, without exception, we get the full harness with all seven elements (frontplate, backplate, 2 shoulder plates, 2 side plates, bronze belt). We never, to my knowledge, see them suspended in simple leather harnesses; it surely was possible to do so, but it is unclear that anyone ever did after the introduction of the four-plate harness.

What Michael Burns thinks is happening (revising earlier work by the late, great Peter Connolly), and I think he is right, is that Southern Italic peoples are responding to the increasing presence of Greek muscle cuirasses coming in through Greek colonies in Southern Italy. But rather than just copying the muscle cuirass, they seem to have innovated from their own single-disc pectorals (which didn’t always cover a whole lot of the chest) to the triple-disc to create a kind of ‘exploded’ muscle cuirass. Initially, they do this by taking their own armor form, the single-disc cuirass, and expanding it out into a full ‘exploded’ breastplate, but eventually, in the fourth century, there’s enough artistic crossover that designs that use a rectangular plate and intentionally mirror Greek artistic tropes appear alongside triple-disc styles (which do not go away). It is worth noting that some of these triple-disc and rectangular anatomical armors are wonderfully decorated with complex designs, but many of them are very minimally decorated, especially as we get into the fourth century, suggesting a demand for a cheaper, no-frills version of this protection.

And then in 290 the Romans win the Third Samnite War and take control of the non-Greek parts of Southern Italy. And as noted above, when the Romans incorporate a given part of Italy into their ‘alliance’ system, for reasons that are not entirely clear to us (but the pattern is very strong), warrior burials, ritual weapon depositions and aristocratic artwork of warriors stop. Which means right around the year 300, our evidence for the Italic pectoral tradition simply vanishes. Really, we basically have an expanding bubble of darkness, radiating out from Rome (which is also probably how the Roman conquest felt to the Samnites), blinding our ability to track the development of armor in Italy.

Via Wikipedia, the Ksour Essef Cuirass, a triple-disc cuirass found in a Punic tomb in Ksour Essef, Tunisia. This cuirass is now generally dated to the late fourth or early third century, before the First Punic War, so its presence suggests significant trade contacts between Carthage and Italy, such that a local Punic elite might acquire a beautifully decorated piece of Italian armor.

So by the third century, we do not see any pectorals, because we don’t see much of anything (except helmets; we continue to see those) for quite some time.

Except…

The Weird Exception We Need To Dismiss

The one odd exception to this is a pectoral disc found in the siege camps at Numantia.7 It is 17cm wide and circular, with a pattern of concentric circles and a large central knob and for quite some time if you went looking for an actual Roman pectoral this is what you would find.

The problem is that it isn’t Roman, it is very obviously Spanish. This spent a century not getting noticed because archaeologists working on ancient arms and armor tend to be very geographically specialized, so folks working on Roman and Italic arms and armor are not likely to be very familiar with the arms and armor of the fifth century Celtiberian Meseta. But if you are familiar with that, it is very clear that this is not a Roman pectoral at all, but a Spanish one, despite it turning up in a Roman camp.

Left: The Numantia pectoral, as illustrated by M.C. Bishop in Bishop and Coulston, Roman Military Equipment (2006), image © M. C. Bishop
Right: Via the Museo Arqueologico Nacional, Madrid, a Celtiberian pectoral harness (MAN 1940/27/AA/314, main disc 18cm in diameter, late fifth to early fourth century), showing similar concentric circle motifs and punch-holes around the outer edge.

First, while Italy had single-disc circular pectorals, these had been replaced in the archaeological and artistic record in the fifth century by the larger triple-disc pectorals discussed above. Moreover, those earlier single-disc Italic pectorals don’t feature raised concentric circles as part of their normal artistic motifs. They more often have animals on them, or punch-holed simple geometric designs. They were also flat and did not feature central knobs.

But you know who did have pectoral harnesses with circular central plates featuring raised concentric circle designs and prominent central knobs? The Celtiberians, who are the people who lived at Numantia, where these camps were. Now the tricky bit here is that these pectorals are also – as far as we can tell – long out of use in the Iberian Peninsula as well: they persist through the fifth century, but fade out at the beginning of the fourth.

But whereas it is a little difficult to imagine a second-century Roman soldier deciding to bring a piece of armor with him to Spain that had been out of use in Italy for something like four centuries, it is a lot easier to imagine the same Roman soldier in Spain might have looted a temple or a tomb (or simply struck a burial while entrenching his camp) that contained a fifth or very early fourth century Celtiberian disc-harness and that this soldier then looted the shiny bronze plate, later to be (for whatever reason) discarded in the camp.

Reconstructing the Roman Pectoral

So that is the shape of our evidence: with the Numantia pectoral removed (because it is not Roman at all, but Celtiberian), we have no examples of this armor from the third or second centuries B.C. What we do have is a tradition of pectoral armors which led to the emergence of the triple-disc and rectangular anatomical pectoral harnesses in the fourth century, which we lose sight of in the general lacuna for most non-helmet military equipment in the third and second century. When our evidence returns, they are gone, but we have this report by Polybius that poorer-but-still-propertied Romans in the heavy infantry (so not the poorest Romans fighting; those are the velites, or do not serve at all) wear a bronze pectoral plate about a span square over their chest.

That admittedly quite poor evidence base leaves us with really just two options, both of them somewhat unsatisfactory.

The first option, the one taken – so far as I can tell – by the great majority of modern artistic reconstructions, is to simply read Polybius and reconstruct exactly what he says. That gives these Roman soldiers a single metal plate, typically shown mounted on a leather backing with leather straps, about 23cm square. This is, in a sense, the philologically elegant solution: it assumes nothing not in our text. The problem, from an archaeological perspective, is that this effectively requires arguing one of two cases: either that the sixth-century pectoral – with its simple leather suspension – somehow survived in Italy for four centuries to be observed in action on the battlefield by Polybius in the mid-second century without leaving any other evidence at all. Not one piece of artwork, not one surviving example in the intervening period, despite the fact that we have sixty-seven fifth and fourth century examples of the later pectoral types (45 triple-disc and 22 rectangular anatomical cuirass types). That could be right. But it is a heroic assumption.

Alternately, the argument would be that the Romans at some point developed their own version of the pectoral, probably based off of the rectangular anatomical type, which dispensed with the wide bronze belt, the shoulder plates and the side plates and so consisted only of a breastplate and a backplate. The problem here is simple: as Michael Burns notes in his survey of Italic pectorals, that configuration never occurs in artwork or in archaeology where site and provenance are secure. We do not have a single example of those later Southern Italic pectorals – the types that emerge after the more complex harness structure discussed above – dispensing with those pieces. Could they have done? Of course. But as of 2005 (and so far as I know, to the present), we have no evidence that anyone ever did. This solution thus requires conjuring into existence an effectively unknown armor-type. That could be right, particularly given how bad our evidence for Roman arms and armor in the Early Republic is. You can even imagine, if we had evidence of it, how we’d explain it: the broadening participation in the Roman army leads poorer Romans to take up the Samnite cuirasses (that is, triple-disc and rectangular anatomical cuirasses) they have seen, but to jettison the ‘extra bits’ to make it cheaper and more affordable, effectively reversing a few centuries of armor development to create a stripped down breast- and back-plate only version. That’s what we’d posit, if we had some evidence, but we don’t, and I would argue that it runs against the rules of evidence as practiced in archaeology to conjure into existence an unattested variant of an object-class (which does not developmentally link to anything else you can see) simply because it would be convenient. That is not how we assess coins or pots; I do not see why we would do it with armor.

That leaves another option: Polybius is describing the Southern Italian pectoral harness we can see, but doing so incompletely. It is not hard to imagine how the Romans will have picked up this armor: they spent the period from 343 to 290 fighting the Samnites in Campania; the Samnites are the major users of the triple-disc cuirass, and Campania is where we most often see them in artwork. If the Romans weren’t already using this armor (and remember, we have no evidence at all of what armor the Romans are using in c. 300), they could certainly pick it up.

Then Polybius comes along in the mid-second century, by which point this armor is already dying out, largely replaced by mail, but still hanging on here or there – perhaps as hand-me-downs used by poorer Romans. One advantage of the pectoral harness’ seven-part structure is that it is a sort of ‘one-size-fits-no-one’ set that would be reasonably easy to modify or pass down to new users (unlike a Greek-style muscle cuirass, which really needs to be fitted to the wearer). Polybius then, writing about the Roman army as it existed in the Second Punic War (218-201) and, as per Rawson, using perhaps the accounts of some military tribunes, is aware of this armor’s place in the military regulations of that time and so includes it but with only minimal description. As a Greek, Polybius is used to thinking about body armor as a single piece – a breastplate, a tube-and-yoke cuirass, a mail coat – rather than a harness, so looking at a rectangular anatomical cuirass that is, perhaps, 30cm by 28cm for its front plate, he describes it simply as a “bronze plate a span on all sides.” Just as he doesn’t include the details of Roman mail armor’s shoulder doubling, he feels no real need to include the shoulder and side plates of the harness and he may not even be aware that the wide bronze belt has any real armor value at all (early archaeologists made the same error, assessing it as purely decorative, but it would offer some protection).

I think these are the three options we are left with for the pectoral: surprising sixth-century survival into the mid-second century; otherwise un-evidenced recreation of an older form out of the fourth-century rectangular anatomical cuirass; or simply that it is the rectangular anatomical cuirass, harness and all, that Polybius has described incompletely. My own instinct is that the last is probably correct. One interesting thing is that compared to, say, muscle cuirasses, these pectoral cuirasses of both the triple-disc and rectangular anatomical types were probably produced from sheet metal (sheet bronze, in particular), rather than forged from an ingot, which would have made it relatively easier to produce larger numbers of armors – especially if one opted for a style with simple decoration, amply in evidence in the archaeological record. Meanwhile, as noted, the design is fairly easy to adjust for size. Jeremy Armstrong and Nicholas Harrison suggest that this in part allowed for “the expansion of warfare in Italy seen in the fourth century and marked by Rome’s wars of conquest” and I think that is right.8

Now in the fourth century, that armor might still be restricted to the fairly well-off. But in the late third or early second century, it is not hard to see how the introduction of an even better but also substantially more expensive armor – mail – might ‘push’ existing pectoral cuirasses (again, of both types) down the socioeconomic ladder as the Roman first census class was required – as Polybius tells us – to acquire mail. The spare armor might ‘flow downwards’ as it were, making the affluent man’s undecorated but still shiny bronze armor of c. 350 the poor man’s pectoral of c. 150. Indeed, there is no reason it couldn’t be the very same piece of armor.

I do not think the evidence allows us to answer this question with confidence, but I do think that simple inertia has led scholars to continue reproducing the ‘traditional’ pectoral reconstruction long after it stopped being the most likely one. Instead, the most likely solution is that the Romans had continued to use, in some form, the full triple-disc or rectangular anatomical cuirass, including metal connecting plates (and perhaps the wide bronze belt) and that what Polybius was seeing was not, in fact, the small decorative chest-plates of the sixth century but rather this armor.

Too good to be true

The Committee for a Responsible Federal Budget (CRFB) has put forth an excellent plan to save Social Security, featuring a $100,000 benefit cap. The plan is so good that I see almost no prospect for it ever being enacted by our Congress, an institution that has fallen to a sadly dysfunctional state. In this post, I’ll discuss why I like the plan and then describe the type of far inferior plan likely to eventually be adopted.

The CRFB’s proposal is essentially a progressive consumption tax, although it won’t look like that to the average person. I cannot teach an entire course in public finance theory in a blog post, but the essence of a consumption tax is as follows:

The Pursuit of Happiness is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

Imagine a world where people can either spend $6,000/month on consumption today, or $12,000/month on consumption in 20 years, by saving their incomes. Now assume you impose a 33.3% tax in that world, which takes away a third of the public’s resources for consumption. With a pure consumption tax, your choice is now $4,000/month consumption today or $8,000/month consumption in 20 years.

Notice that the “terms of trade” have not changed: in both cases, the opportunity cost of a dollar spent on consumption today is foregoing two dollars’ consumption in 20 years. A consumption tax is a tax that does not change the relative price of current and future consumption. In a sense, all taxes are consumption taxes, as the burden of any tax is its impact on a person’s lifetime consumption. However, economists use the term “consumption tax” to refer specifically to taxes that treat current and future consumption equally. An income tax punishes savers and hence is not a consumption tax.

I’m pretty sure that most people don’t understand this concept, as I often see commenters say things like “we should tax consumption, not labor.” Actually, a labor tax is a consumption tax. Indeed, these three taxes are all equivalent consumption taxes, in the long run:

1. A 20% VAT

2. A 20% payroll tax on wages

3. A 20% income tax with unlimited ability to put savings into a 401k plan, and no mandatory date of withdrawal from the 401k. (Funds borrowed for consumption are also taxed.)
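The equivalence turns on the “terms of trade” point above, which a few lines of Python can make concrete. This is a minimal sketch using the post’s numbers; the 20-year doubling via saving is taken from the thought experiment, not from any real interest rate:

```python
# Sketch of the terms-of-trade argument above: a flat consumption tax
# scales both current and future consumption by the same factor, so
# the relative price of "now" versus "later" is unchanged. Numbers
# follow the post: $6,000/month now vs. $12,000/month in 20 years,
# then a 33.3% tax.

def after_tax(consumption, tax_rate):
    """Consumption left over after a flat consumption tax."""
    return consumption * (1 - tax_rate)

tax = 1 / 3  # the post's 33.3% tax

now_post = after_tax(6000, tax)      # ~$4,000/month
later_post = after_tax(12000, tax)   # ~$8,000/month

# The opportunity cost of a dollar consumed today is the same before
# and after the tax: two dollars of consumption in 20 years.
ratio_pre = 12000 / 6000
ratio_post = later_post / now_post
assert abs(ratio_pre - ratio_post) < 1e-9

print(round(now_post), round(later_post))  # 4000 8000
```

Any tax that scales both columns by the same factor leaves the now-versus-later ratio untouched, which is exactly why the three taxes listed above are long-run equivalents.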

Another misconception is that there is a big difference between benefit cuts and tax increases. Not so. There’s essentially no difference between cutting benefits of a wealthy Social Security recipient by $1000 and not cutting their benefits at all but instead taxing that person an extra $1000. The CRFB plan that I will discuss is generally framed as a “spending cut”, but it’s essentially a progressive consumption tax on high end Social Security recipients.

The CRFB plan is described as a $100,000 cap on the total amount of annual Social Security benefits that a household can receive, but the details are somewhat more complicated. That headline $100,000 cap applies to the official retirement age of 67, but as you probably know a retiree’s benefit level depends on when they retire. In order to prevent the cap from encouraging early retirement, they make it depend on age:

The $100,000 SFL would be adjusted based on marital status and collection age. A single person collecting at the NRA would face a $50,000 limit. A couple in which both spouses began collecting benefits at age 70 would face a $124,000 limit, reflecting the 24% delayed retirement credit. A couple with both spouses collecting at age 62 would face a $70,000 limit, reflecting the 30% early retirement actuarial reduction. Different claiming ages would result in a blended limit.
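Read literally, the quoted rules reduce to a cap times a marital share times an actuarial factor. Here is a sketch: the factors at ages 62, 67 (the NRA), and 70 – 0.70, 1.00, and 1.24 – come from the quote, but the linear interpolation between those ages is my own assumption about how a “blended limit” might work, not CRFB’s stated formula:

```python
# Illustrative sketch of the SFL adjustment quoted above. The anchor
# factors (0.70 at 62, 1.00 at 67, 1.24 at 70) come from the quoted
# text; linear interpolation between them is an assumption.

COUPLE_CAP = 100_000  # headline cap for a couple claiming at the NRA

def actuarial_factor(age):
    """Early-retirement reduction / delayed-retirement credit."""
    if age <= 62:
        return 0.70
    if age <= 67:
        return 0.70 + 0.06 * (age - 62)       # assumed linear, 0.70 -> 1.00
    return 1.00 + 0.08 * (min(age, 70) - 67)  # assumed linear, 1.00 -> 1.24

def benefit_limit(age, married):
    # Per the quote, a single filer faces half the couple's cap.
    share = 1.0 if married else 0.5
    return round(COUPLE_CAP * share * actuarial_factor(age))

print(benefit_limit(67, married=False))  # 50000
print(benefit_limit(70, married=True))   # 124000
print(benefit_limit(62, married=True))   # 70000
```

The three printed cases reproduce the three limits given in the quote ($50,000, $124,000, $70,000).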

They present three ways in which the cap could be adjusted over time:

The SFL could be indexed over time in a variety of ways. For this analysis, Jason DeBacker of the Open Research Group modeled three options – a $100,000 limit indexed to inflation, a limit frozen in nominal terms at $100,000 for 20 years and then indexed to average wage growth, and a limit frozen at $100,000 for 30 years before being indexed to wage growth. The Inflation-Indexed SFL could also switch to wage indexing after a specified number of years.

Because wage inflation runs slightly over 1% higher than price inflation, these approaches have different long run implications for how the cap evolves over time:

I’ll probably be dead by 2046, so I’d be better off with the inflation adjusted cap (blue line). But the basic idea is so good I’d be thrilled with any of the three versions.
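To see why the choice among the three matters, here is a rough compounding sketch. The 2.0% inflation and 3.1% wage-growth rates are my own illustrative assumptions, chosen only to match the post’s “slightly over 1%” gap:

```python
# Rough sketch of how the indexing options diverge. Assumed rates
# (mine, for illustration): 2.0% price inflation, 3.1% nominal wage
# growth, i.e. wages outpace prices by ~1.1%/year. A CPI-indexed cap
# holds its real value at $100,000 indefinitely.

INFLATION, WAGE_GROWTH = 0.020, 0.031

def real_cap(years, freeze_years):
    """Real (inflation-adjusted) value of a $100k cap that is frozen
    in nominal terms for `freeze_years`, then wage-indexed."""
    nominal = 100_000.0
    for y in range(years):
        if y >= freeze_years:
            nominal *= 1 + WAGE_GROWTH
    return nominal / (1 + INFLATION) ** years

for freeze_years in (20, 30):
    print(freeze_years, round(real_cap(40, freeze_years)))
```

Under these assumed rates, the 20-year freeze leaves the cap worth roughly $83,000 in today’s dollars after 40 years and the 30-year freeze roughly $61,000, versus a constant $100,000 under pure inflation indexing — the freeze period does most of the cutting.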

These proposals are not enough to fully save Social Security. Thus they also propose a tax increase, and again the proposal is almost too good to be true:

This Trust Fund Solutions Initiative white paper suggests a new alternative – replacing the employer side of the payroll tax with a flat Employer Compensation Tax (ECT) on all employer compensation costs. While workers would continue to pay payroll taxes, employers would instead pay an ECT on all wages (with no tax cap) and all fringe benefits such as employer-sponsored insurance and stock options.

Not only is this a progressive consumption tax (relative to the current tax), it also reduces the distortion of current wage taxes that exempt health insurance benefits. That tax break has been an important factor driving up health care costs. (The employee side FICA should also tax health care benefits, as should the personal income tax.)

Part 2: The ant and the grasshopper

Unfortunately, the CRFB plan seems too good to be true, and I expect Congress to implement something far worse. In order to understand why, consider two neighbors that both spent their careers in upper-middle class jobs making close to the Social Security taxable maximum (currently $184,500). Both retire as single people entitled to roughly $50,000/year in benefits. Both would see their benefits capped in nominal terms, which means their real benefit levels would decline over time.

But these two neighbors differ in one very important way. Smith was a high spender who would buy the latest BMW, while Jones was a high saver who always bought used cars. Smith saved very little while Jones maxed out his 401k plan.

Now Smith starts whining to his congressman that the proposed cap is unfair. It should only apply to “the wealthy”. His neighbor Jones is now pulling $100,000/year out of his 401k and doesn’t “need” his Social Security benefit to rise with inflation. “Please make the cuts depend on income levels, not benefit levels.” Because America has far more grasshoppers than ants, Congress listens to the whiners and applies benefit cuts only to those with high current incomes, not those with high lifetime wage incomes. They punish savers and reward spendthrifts.

Why do I believe this will occur? Why am I so cynical? Because this is what Congress has been doing for the past 113 years. As a result of numerous policies, savers are effectively taxed at a higher rate than those who don’t save, which means that future consumption has become more expensive relative to current consumption. But it doesn’t look that way to the average person.

People focus on the fact that those who are currently wealthy have more resources than the less wealthy, even when the gap is 100% due to the less thrifty person choosing to spend at an earlier stage of their lives. In my thought experiment, the two neighbors were equally wealthy in the only way that matters—they had equal lifetime resources to allocate to consumption and simply chose to do so at different points in time. Smith consumed when he was young enough to enjoy it, and Jones foolishly waited until he was old, wrongly imagining that he could still get a thrill out of life at age 70.

At one time you might have expected the GOP to champion the interests of high savers, but those days are probably gone. The GOP coalition is trending toward low savers and the party is becoming increasingly “populist” on economic matters.

[Full disclosure: I’ve never bought a new car in my entire life and I always maxed out my 401k contributions. I’m the foolish miser.]

Part 3. Charity isn’t what you think it is

There’s an ongoing debate over how much money billionaires ought to donate to charity. Unfortunately, most people miss the point. The issue isn’t charity vs. investment; it is consumption vs. non-consumption. A charitable person is an individual that doesn’t consume much relative to his or her wealth. If you wish to consider heirs, you might say a charitable person is someone who ensures that he or she and all their future heirs consume only a modest portion of their current wealth.

But you can also argue that a charitable person is someone that maximizes their wealth, given the share of that wealth that they intend to use for consumption (both they and their heirs.) A wealthy person can become more charitable by reducing long run family consumption of wealth from 40% to 30%, but also by increasing their total stock of wealth and keeping the consumption share at 40%.
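That argument reduces to one line of arithmetic: what matters is the absolute amount of wealth the family never consumes. A toy illustration, with all numbers invented:

```python
# Toy illustration of the two routes to being 'more charitable'
# described above: what matters is the absolute wealth never consumed
# by the person or their heirs. All figures are made up.

def wealth_left_for_others(total_wealth, family_consumption_share):
    """Wealth ultimately not consumed by the family."""
    return total_wealth * (1 - family_consumption_share)

baseline = wealth_left_for_others(1_000_000_000, 0.40)  # consume 40%
route_a = wealth_left_for_others(1_000_000_000, 0.30)   # cut the share
route_b = wealth_left_for_others(1_500_000_000, 0.40)   # grow the pile

# Both routes leave more for others than the baseline does.
assert route_a > baseline and route_b > baseline
print(round(baseline), round(route_a), round(route_b))
```

Either lowering the consumption share or growing total wealth at a fixed share raises the non-consumed remainder, which is the post’s point.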

I have no idea what Elon Musk intends to do with his wealth, but you can make an argument that the most charitable use of his current resources is to build up even more wealth, and then eventually donate most of it to a good cause. But you can also make a good argument that Bill Gates is doing the right thing by donating a very high share of his wealth to various causes that he considers to be effective forms of altruism. Either approach is defensible, and the best path partly depends on subjective estimates of how much more wealth can be generated via productive investments.

As Matt Yglesias recently pointed out, those billionaires that consume a large share of their wealth are the actual problem. I don’t wish to sound like a scold, as I undoubtedly consume more than optimal from the perspective of a utilitarian like Peter Singer. If I were a billionaire I’d probably live in a very expensive mansion in coastal California. Like most people, I’m at least somewhat selfish. If I’m pointing fingers, then I’m including myself. But as a purely factual matter, more personal consumption comes at an opportunity cost: what else could have been done with those resources? Let’s just admit that most people are a mix of selfishness and altruism.

As an aside, in my view the hardest problem is figuring out how to be effectively altruistic, if you have decided that this is the path you’d like to take. As Tyler Cowen recently suggested, billionaires often end up funding causes of questionable value. I’m not trying to use this post to provide any sort of grand theory of altruism. Rather my point is much more basic:

To be altruistic is to forego consumption for you and your heirs. That’s it.

PS. This post has only examined one aspect of the Social Security problem and there is much more that could be said on the issue. For instance, it would have been better if the Social Security system had been fully funded from the beginning. And I’d prefer the wage tax (FICA) fall 100% on employees, as this would make for more informed voters. But I understand that these ideas are politically infeasible in the US, at least at the moment, and hence view the CRFB plan as a pragmatic compromise.

PPS. Steven Landsburg has a nice defense of Scrooge:

In this whole world, there is nobody more generous than the miser—the man who could deplete the world’s resources but chooses not to. The only difference between miserliness and philanthropy is that the philanthropist serves a favored few while the miser spreads his largess far and wide.

If you build a house and refuse to buy a house, the rest of the world is one house richer. If you earn a dollar and refuse to spend a dollar, the rest of the world is one dollar richer—because you produced a dollar’s worth of goods and didn’t consume them.

Who exactly gets those goods? That depends on how you save. Put a dollar in the bank and you’ll bid down the interest rate by just enough so someone somewhere can afford an extra dollar’s worth of vacation or home improvement. Put a dollar in your mattress and (by effectively reducing the money supply) you’ll drive down prices by just enough so someone somewhere can have an extra dollar’s worth of coffee with his dinner.

What did Bastiat say? “That which is seen, and that which is not seen.”


My email on NBA anti-tanking rules

I fear that bad management is a recurring problem with those teams. So perhaps no system of incentives can fix that.

I am not sure that bidding and superstar teams are so unpopular with the fans, especially as the NBA has become more international. Maybe ten superstars sell the league in any case, and you want them to be on very good teams.

…The incentives system also has to be palatable and explicable to the very casual fan, which I think rules out some of the more complex options. If the fans are asking “is my team trying to win or to lose now?” the system is maybe already broken.

The post My email on NBA anti-tanking rules appeared first on Marginal REVOLUTION.

       


 

OpenAI, Supposedly Tightening Its Focus on Its Core Products, Buys Tech-Industry Talk Show TBPN

Katie Deighton, reporting for The Wall Street Journal (main link is a gift link; also on News+):

OpenAI bought TBPN to encourage constructive conversation around the changes AI creates by helping the show grow, according to a memo sent by Fidji Simo, OpenAI’s CEO of applications. TBPN will report to Chris Lehane, OpenAI’s chief global affairs officer, and will help with company communications and marketing outside of the show.

“They’ve helped many brands market online and because they have a strong pulse on where the industry is going, their comms and marketing ideas have really impressed me,” Simo wrote in the memo.

But TBPN will remain editorially independent, retaining control over its programming, editorial decisions, guest selection and production schedule, OpenAI said.

Yes, I’m sure they’ll remain totally independent. You know, like The Washington Post under Jeff Bezos, and CBS News under David Ellison. Many news and commentary publications have remained steadfastly independent while reporting to the head of PR for a company they ostensibly cover.


‘No, We’re Not Stupid. Our Dads Just Got Us Crummy Computers.’

Back in March 1991, Saturday Night Live ran what I consider the best Apple parody ad ever made: “McIntosh Jr.” Siracusa and I talked about it on The Talk Show this week, celebrating Apple’s 50th anniversary, so I looked it up for the show notes. Alas, this appallingly low-resolution copy hosted on Reddit is seemingly the only free-to-watch copy of it available. (If you can find — or make — a better version, let me know.) If you have a Peacock account, you can watch it in much higher quality in their SNL archive: Season 16, Episode 16, starting at 7:30, just after host Jeremy Irons’s monologue. (It rolls right into a good “Deep Thoughts by Jack Handey”.)

We just recorded tomorrow’s episode of Dithering, and Ben asked me my favorite Apple commercial of all time. I was tempted to say this one, despite the fact that it isn’t real. The best parodies are the ones that hew the closest to the truth of their subject, that exaggerate the least. And the message of “McIntosh Jr.” is, at its heart, the actual purpose of the Macintosh, and of Apple writ large. Computers that enable you to do your best work. Bicycles for the mind. And, yes, the power to crush the other kids. That’s what drew me and Siracusa to Apple computers, and keeps us drawn to them today.

Update: Here’s a high-quality free-to-watch version on Rumble. Nice!


Trump’s White House Ballroom Design Is Shit

The New York Times (gift link):

Critics warn it still has many issues — its portico is too big, its stairs lead nowhere, its columns will block views from inside the ballroom.

And that’s just the portico.

This is a really good piece, with animated-as-you-scroll illustrations pointing out specific problems with the design.

Such details affect how people passing by experience these iconic places, and how each structure fits into a capital city that has been planned around civic symbols and sightlines since the 1790s. The deliberation is also an expression of democracy, said Carol Quillen, the president and chief executive of the National Trust for Historic Preservation, which has sued the administration over the ballroom.

“Even if we are slow and we make mistakes and we fight, that process has meaning to us,” Ms. Quillen said. No project belonging to the public should be the vision of just one man, she said.

That is, however, how the ballroom has often been described.

“President Trump is the best builder and developer in the entire world, and the American people can rest well knowing that this project is in his hands,” Davis Ingle, a White House spokesman, said in a statement. Past administrations and presidents have wanted a ballroom for more than 150 years, he said, and Mr. Trump will accomplish it.

The way that these lickspittles talk about Trump, exactly the way North Koreans speak of Little Kim, or the way anyone in any other cult speaks of the cult leader, is just revolting. Even the Chinese don’t speak of Xi “The Pooh” Jinping like this. No one in China pretends Xi is a genius architect.


Apple Still Has Jessica Chastain’s ‘The Savant’ on Ice, Seven Months After It Was Set to Debut

John Voorhees, at MacStories:

It’s a new month and you know what that means: time for a roundup of everything coming to Apple TV and Apple Arcade for April 2026.

What’s still not coming: Jessica Chastain’s political thriller The Savant, originally set for September, but rescheduled for “at a later date” out of cowardice.

Apple’s “at a later date” is looking more and more like Trump’s “in two weeks”.


John Buck on the Invention of QuickTime

John Buck at The Verge (gift link), excerpted from his great book, Inventing the Future:

Steve Perlman: Almost everyone at Apple, and definitely everywhere else, assumed that multimedia would always require specialized hardware — and be expensive. A few of us thought otherwise.

One of the few was Gavin Miller, a research scientist in Apple’s Graphics Group, who worked with Hoffert to crack the problem of software compression and decompression, otherwise known as a codec.

Gavin Miller, research scientist: We went for a lunchtime walk, and by the end of it, we had generalized the model to include constant color blocks and 2-bit per-pixel interpolating blocks. This allowed us to trade off quantization artifacts in large flat areas for more detail in textured areas. The result was an increase in quality and performance that helped to make the codec practical for really small video sizes.

Just a typical lunchtime walk-and-talk.

Fun anecdote from 1990:

He asked Peppel to create a product plan that he could announce at Apple’s Worldwide Developers Conference on May 7th. That day, Casey took to the stage and announced QuickTime to a stunned audience, saying, “Apple intends to develop real-time software compression/decompression technology that will run on today’s modular Macintosh systems. A system-wide time coding to allow synchronization of sound, animation, and other time-critical processes.”

Casey explained that Apple’s new multimedia architecture would be delivered by the end of the year. He did not say that QuickTime had no budget, staff, or offices.

Worthington: We were dumbfounded.

Konstantin Othmer, QuickDraw engineer: I was standing next to Bruce Leak, and asked him, “What the heck was that?” He said he had no idea.

QuickTime actually shipped by WWDC 1991, teaching Apple the important lesson that anything they announce at WWDC, no matter how premature, will ship as promised.


Artemis II Crew on Way to Moon

Great roundup of links from Stephen Hackett:

The crew is made up of Reid Wiseman, Victor Glover, Christina Koch, and CSA (Canadian Space Agency) astronaut Jeremy Hansen. They are now on their way to the moon, set to return in 10 days. Their rocket may be the product of a hugely-flawed program, but right now, that doesn’t matter. They are getting us closer to returning to the lunar surface than we’ve been in 50 years. That’s worth celebrating.


llm-gemini 0.30

Release: llm-gemini 0.30

New models gemini-3.1-flash-lite-preview, gemma-4-26b-a4b-it and gemma-4-31b-it. See my notes on Gemma 4.

Tags: gemini, llm, gemma

Ask Almost A Doctor: Papal Blessing Edition

Second edition vibes generated by Gemini.

If you have questions, you can email me at eryneym@gmail.com, DM me on Twitter or Substack. Or put them in the comments below!

Also, none of the below constitutes medical advice. (Seriously. This is not medical advice - Ed.)

Enjoy.


Algobaker @algobaker

How will the economics of capsid designs end up working out? Will there be libraries of patented designs, and any new group designing a new payload will have to choose between paying the tax to use an existing capsid they see can already do the job, vs paying to do the experiments necessary to patent-bust it?

Great question. Obligatory mention that I’m a former employee and current shareholder of Dyno Therapeutics, which (I personally think) is setting the frontier of capsid design.

Now that I’m done talking my book, let’s turn to your question. Engineering genetic delivery vectors today is mostly about picking your priorities. Generally what people try to select for is a virus that can go to specific organs / regions of the body, but there are also considerations like avoiding the immune system, avoiding the liver (we call this detargeting) or, to a lesser extent, production efficiency. An entire field exists around trying to manipulate these viruses using AI models and fancy protein engineering, but it’s not really something that can be done without intense experimentation and very long feedback loops.

How can we shave time? Well, AGI of course! This is because it’s probably the only real avenue towards avoiding the requirement of testing hypotheses in the lab. If you can avoid the experiments, you save both time and money, and thus open up a ton of options for yourself.

Until we have superhuman intelligence, my sense is that whether you need to pay the tax to the current incumbents for existing capsid IP comes down to how much you’re feeling the AGI and, secondarily, how organ-specific you need a capsid to be. Though the ideal world is one where you have a 1000-fold better brain-delivery capsid than, say, AAV9 (the best “free” natural variant for the brain), you need to ask yourself whether you can get away with 5x. What about 2x? In that case it’s possible you can design your own capsid with a combination of off-the-shelf ML models, a good cloning and production pipeline and some non-human primates. I don’t know what it would cost you (at least $250k if I had to guess) but it would definitely take you time.

It’s worth mentioning here that when I say AGI I mean legitimately AGI, not models that are 10% better at writing code. I personally think the only real way it becomes cost efficient to engineer your own capsids instead of paying for it is if the field gets access to protein ML models that can tell you with very high confidence how a protein will behave in a zero-shot manner. Despite what you read online, we are not there yet, and we won’t be for many years.

The downstream consequences of this timeline is an exercise I will leave to the reader.

David Dales @d2dev_

Now AI is out and public figures are telling me more hospitals are hiring more doctors to use the AI - can you confirm or deny with data? I heard x-rays and MRI scans largely use AI to detect issues these days. I think it was in the last 5 minutes of the recent Lex Fridman/Jensen Huang podcast.


I’m going to take your question to refer to radiology, as that is what people generally mean by AI replacing doctors. Radiologists are actually in such high demand that they could easily out-earn neurosurgeons if they decided to work more than a few months per year. But how can this be true? Well, let’s start by first addressing the misconception that AI is replacing radiologists.

NVIDIA CEO Jensen Huang, and also Anthropic CEO Dario Amodei, are both incredibly smart guys who have gotten this one completely wrong. I won’t comment on how that happens — keep in mind they both have reason to push the narrative that AI can do complex jobs easily today — but I will say that this is a pervasive myth in the tech community. I will put it very plainly here: no hospital is replacing radiologists with AI today. While the range of what AI models can do is growing (see this study on a new neuroradiology model from Michigan), the skillset is still incomplete, and thus hasn’t changed the job landscape at all.


The hiring effects you see with radiologists are actually the result of something entirely unrelated to AI, and that is the rise of advanced imaging in medicine. Previously (and by previously, I mean like 50 years ago), doctors put a lot of weight on their clinical intuition and the art of the physical exam. Sadly, that is a fading skillset, and we can directly tie its decline to the rise of on-demand CT and MRI in healthcare. Have an ear ache? Head CT. Strange lump? Ultrasound. Think you tweaked your knee? Let’s get you an MRI. All of these things were at some point diagnosed from physical exam findings, but now get imaging. The result is that we get imaging on way more patients than we used to, and that amount is further increasing as new modalities get added.


I can’t read the future, so I won’t make a prediction here about eventual AI capabilities in radiology. The point, though, is that the hiring happening in healthcare is very much real today.

Claire Goldsmith @c_goldsmith

What is going to happen with monoclonal antibodies over the next two decades? Everyone talks about costs coming down and this not being a particularly attractive part of the market long-term, but it’s doing very well for big players today (J&J etc). Other than patent cliffs, what do you think actually manifests that cost curve compression? Seems much more like a manufacturing problem than a design problem. Also, which comes first, broad use of mabs for more disease areas outside indication or major decrease in cost of manufacturing?

Are you feeling the AGI, Claire? Monoclonal design is getting better thanks to advances in AI-enabled protein design — faster than for most other protein classes right now, I’d say — but honestly, target selection seems like more of the rate limiter here. Everyone seems to be tackling the same exact ideas. Structure models like AlphaFold or RosettaFold or Chai seem poised to help the design of drugs that act on established targets with improved efficacy / potency (referred to as me-betters) but they don’t really help you pick out novel idea space. In theory, that’s where AGI helps. I am not convinced we have it yet, though.

There’s another element to the economics here, which is that the arrival of biosimilars (“generics” for antibodies) drives the price of monoclonal antibodies down significantly, by upwards of 80%. When this happens, doctors become more willing to prescribe a particular biologic for off-label use. I expect that to happen more and more, especially now that the FDA has said their goal is to make biosimilars easier to get through the pipeline.

If your question on manufacturing is one of cost, I would argue that’s not really the problem. Right now most monoclonals can be made for around $100/g. If we look at the cancer buster Keytruda, which is dosed at ~2mg/kg every few weeks, you’re looking at a maximum of around $500 in production costs. The gap between that number and the $150,000 price tag is owed to amortization of R&D and clinical trials.
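Back-of-envelope, those numbers work out roughly like this (the patient weight and dosing interval below are my illustrative assumptions, since "every few weeks" doesn't pin them down):

```javascript
// Rough production-cost estimate for a Keytruda-style antibody.
// Assumptions (mine, for illustration): 100 kg patient, one dose every 2 weeks.
const costPerGram = 100;          // ~$100/g manufacturing cost
const doseGrams = 2 * 100 / 1000; // 2 mg/kg x 100 kg = 0.2 g per dose
const dosesPerYear = 52 / 2;      // one dose every two weeks -> 26 doses/year
const annualCost = costPerGram * doseGrams * dosesPerYear;
console.log(Math.round(annualCost)); // 520, in line with the "around $500" figure
```

A lighter patient or a longer dosing interval drops this well below $500, which is why it reads as a ceiling rather than a typical cost.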

Claire Goldsmith @c_goldsmith

Growing organs…. Is this working? Will we be able to do it? What problems does it actually solve? Transplant success rates after the 1-year mark are not improving, and I don’t think organ supply is the problem.

If you’re referring to growing whole organs, we’re quite far off, so I think it would be unwise to be all-in on this. There are other options though, like xenotransplantation – taking organs from other organisms (namely pigs) – which are kind of getting there. NYU has a trial for kidney transplants from pigs, and as of today the longest survival time is 9 months. Not bad, but as you point out, not enough. This is a little above my pay grade, but my understanding is that the main way we humanize pig organs is to eliminate endogenous retroviruses within the animal that would immediately activate our immune systems after transplant. Unfortunately, there seems to be some antigen that has yet to reveal itself, which ends up the same way as many human-to-human transplants — rejection.

Whole organs are a difficult business, but patches seem viable. Lots of companies are working on this, new and old. I wrote about one last year, Polyphron, but there are plenty of others. The goal of these approaches is to swap out broken bits of an organ. Also not there yet.

You didn’t ask, but I think it’s kind of cool that since our last edition, the Pope issued an official decree that Catholics are able to accept pig organs according to the Written Word. America’s Pope supports American biotech. Annuit cœptis.

waitingonyou @Imyouropnow

Why are there more investments in AI innovation at the bench rather than the bedside? Do you think it might be possible to infer immunotherapeutic effects (w/o drug perturbation experiments) through, for example, cytokine-symptom effects at the bedside?

Your question reminds me of a fantastic book I read a few months before the COVID-19 pandemic, The Great Influenza. It’s about the Spanish Flu (or Kansas Flu, IYKYK). The flu accelerated medical science, but it split the process of discovery between the bench and the bedside. A specialist class of researchers emerged whose whole thing was studying biological phenomena independent of the treatment of patients. This is great, but it did have cultural consequences. Doctors don’t really do the physician-scientist thing like they used to. They rely on the biological sciences to be the engine of discovery, and while some patient-facing physicians try things in the clinic, our healthcare system mostly works by having doctors implement things once trials have established them as safe and effective in small numbers.

So, why is there no AI innovation at the bedside? Well, it’s not immediately useful. AI-powered tools like OpenEvidence are great for distilling dense medical literature to a specific question, but that’s not really the same thing as innovating, is it?

Now, I do think there is a lot of useful biological data to be gathered from the clinic for those with the stomach to figure out how to get it. A large part of the limitation is that we have very poor measurement tools, which is why I spent some time working on this at Caltech. But the tools largely still need to be built. There probably are insights that can still be gathered, though, so if you’re an engineer or scientist who wants clinical data, try connecting with a clinician.

As for your specific comment about cytokine-symptom effects, I expect doctors are mostly just waiting for the science to clarify what they should do. AI is better suited to that clarifying work, because as things stand today, that’s not really the job of doctors. An interesting question is whether AI will enable that to change, though. That’s one whose answer has yet to reveal itself to me.

Ashlee Vance @ashleevance

If I’ve already had shingles, should I take the vaccine anyway, too?

The simple answer is yes. The longer answer is definitely yes.

Shingles is the result of varicella zoster — the virus that causes chickenpox — staying dormant in your nerves after you clear the initial infection. Because our immune systems wane in efficacy over our lives, the virus can get reactivated, resulting in what I’ve heard described as the worst pain imaginable. If you’re unlucky, you can get it again at some later point — having shingles once doesn’t mean you’re set for life. It’s recommended that anyone over 50 gets the vaccine. As a Sensitive Young Man of 29 Years of Age, I naturally haven’t gotten the shingles shot, but I’ve heard it’s quite painful. Still, less painful than shingles!

I’ll add that a study in December 2025 in Wales shows that, while the shingles vaccine doesn’t stop dementia once it gets going, it can slow its progression down and even prevent new cases in the vaccinated. They use two different cohorts to demonstrate that the effect is real and is not the result of weird selection effects. Interestingly, they show that this effect is stronger in women than in men.

So, Miss Ashlee, I’d get your vaccine.

Happy Friday.


In Batteries We Trust

MRSC - Battery Energy Storage Systems – Coming Soon to Your Community?

The war goes on, and so does the global energy crisis. In fact, I believe that prices of oil futures remain too low, given how much spot prices will need to rise to resolve the shortages that will hit once the oil shipped before the Strait of Hormuz was closed is exhausted.

But a better future is coming, despite Donald Trump’s assault on renewable energy as he tries to drag us back into the fossil fuel past. Regardless of Trump’s chest-thumping, America is not the world. We account for only 15 percent of global energy consumption, compared with China’s 28 percent. And the rest of the world is moving rapidly to renewables, thanks to a technological revolution in solar power, wind power, and, less visibly, batteries.

So let me take an optimism break and talk about why batteries may save the world.

The decline in battery prices has been incredible. It’s like nothing anyone has ever seen before. Big, strong men with tears in their eyes come up to me and say, “Sir, have you seen the progress in batteries?”:

[Figure: a graph with a line going up]

Why does this matter?

First, cheap battery storage of electricity greatly mitigates the problem of intermittency — the sun doesn’t always shine, the wind doesn’t always blow. This was a major concern early in the renewable revolution. Some energy economists scolded me for my naïve optimism when I first wrote about solar technology way back in 2011. But solar + batteries provides round-the-clock power.

Here’s a graph of California’s electricity supply generated by renewables and batteries over the course of 24 hours on April 1 that illustrates my point:

[Figure: California’s electricity supply from renewables and batteries over 24 hours]

During the middle of the day, California generates lots of electricity from solar. Much of it is poured into batteries, which provide electricity when the sun sets. Californians don’t even notice the switch.

Second, battery performance has soared as prices have plunged. Crucially, there has been a huge increase in batteries’ volumetric energy density: the amount of energy that can be stored in a given space. Until a few years ago the energy density of gasoline gave internal combustion a huge advantage over electric vehicles. But no longer. Outside the U.S., electrification — the transition away from petroleum and towards electricity, particularly green-sourced electricity — is well underway:

[Figure: a graph of sales]

Third, we should expect continuing rapid improvement in renewable energy. That’s because the progress in batteries has come from cumulative learning rather than scientific breakthroughs. Lithium-ion batteries are, in fact, a decades-old technology. Yet costs have fallen drastically and energy density risen thanks to an ongoing process of learning, which shows no sign of coming to an end.
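That kind of cumulative learning is often modeled with Wright's law: costs fall by a roughly constant fraction with every doubling of cumulative production. A minimal sketch with illustrative numbers (the 20% learning rate here is an assumption for demonstration, not measured battery data):

```javascript
// Wright's law sketch: cost falls by a fixed "learning rate" with every
// doubling of cumulative production. All numbers are illustrative.
function wrightCost(initialCost, cumulativeUnits, initialUnits, learningRate = 0.2) {
  const doublings = Math.log2(cumulativeUnits / initialUnits);
  return initialCost * (1 - learningRate) ** doublings;
}

// Ten doublings (1024x the cumulative output) at a 20% learning rate
// cuts cost by roughly 89%:
console.log(wrightCost(1000, 1024, 1).toFixed(1)); // "107.4"
```

The key property is that the curve has no natural endpoint: as long as deployment keeps doubling, costs keep falling, which is why ongoing learning matters more than any single breakthrough.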

Furthermore, we’ve seen rapid progress in all components of the green energy transformation, even though their underlying technologies have little in common. Solar panels, wind turbines, and batteries are very different, yet all have seen revolutionary improvements. This strongly suggests that the whole renewable energy complex is experiencing a virtuous circle: ever-growing use leads to falling costs and falling costs lead to ever-growing use.

If we ask where this virtuous circle is taking place, the answer is, largely in China with an assist from Europe. And the corollary is “not in America.” The United States has allowed itself to be far surpassed by China and is now only a peripheral player in the renewable revolution. Fortunately for the rest of the world, this means that the Trump administration’s hostility to renewable energy, its attempts to sabotage progress, won’t stop that revolution or even noticeably slow its momentum. True, Trump’s anti-green, pro-pollution tilt will serve to leave America further behind, but progress in fighting climate change and reducing the risks of global dependence on oil will continue.

So although we are now in the midst of a severe energy crisis that could easily go on for many months, this too shall pass. A better, cheaper, cleaner energy future is on the way, and not even Trump can stop it.

MUSICAL CODA

Can JavaScript Escape a CSP Meta Tag Inside an Iframe?


In trying to build my own version of Claude Artifacts, I got curious about options for applying CSP headers to content in sandboxed iframes without using a separate domain to host the files. It turns out you can inject <meta http-equiv="Content-Security-Policy"...> tags at the top of the iframe content and they'll be obeyed even if subsequent untrusted JavaScript tries to manipulate them.
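A minimal sketch of the pattern (the helper name and the example policy are mine, not from the research; consult the linked research for which directives actually hold up inside a sandboxed iframe):

```javascript
// Prepend a CSP <meta> tag to untrusted HTML before loading it into a
// sandboxed iframe via srcdoc. Per the research above, the injected
// policy is obeyed even if later untrusted JavaScript tampers with it.
function wrapWithCsp(untrustedHtml, policy) {
  const meta = `<meta http-equiv="Content-Security-Policy" content="${policy}">`;
  return `<!DOCTYPE html><html><head>${meta}</head><body>${untrustedHtml}</body></html>`;
}

// In a browser you would then do something like:
//   const iframe = document.createElement("iframe");
//   iframe.sandbox = "allow-scripts";  // note: no allow-same-origin
//   iframe.srcdoc = wrapWithCsp(userHtml, "default-src 'none'; script-src 'unsafe-inline'");
//   document.body.appendChild(iframe);

const html = wrapWithCsp("<p>hi</p>", "default-src 'none'");
console.log(html.startsWith("<!DOCTYPE html><html><head><meta")); // true
```

The point of placing the meta tag first in `<head>` is that the policy is parsed and enforced before any of the untrusted markup or script runs.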

Tags: iframes, security, javascript, content-security-policy, sandboxing

The Axios supply chain attack used individually targeted social engineering

The Axios team have published a full postmortem on the supply chain attack that resulted in a malware dependency going out in a release the other day. It involved a sophisticated social engineering campaign targeting one of their maintainers directly. Here's Jason Saayman's description of how that worked:

so the attack vector mimics what google has documented here: https://cloud.google.com/blog/topics/threat-intelligence/unc1069-targets-cryptocurrency-ai-social-engineering

they tailored this process specifically to me by doing the following:

  • they reached out masquerading as the founder of a company they had cloned the companys founders likeness as well as the company itself.
  • they then invited me to a real slack workspace. this workspace was branded to the companies ci and named in a plausible manner. the slack was thought out very well, they had channels where they were sharing linked-in posts, the linked in posts i presume just went to the real companys account but it was super convincing etc. they even had what i presume were fake profiles of the team of the company but also number of other oss maintainers.
  • they scheduled a meeting with me to connect. the meeting was on ms teams. the meeting had what seemed to be a group of people that were involved.
  • the meeting said something on my system was out of date. i installed the missing item as i presumed it was something to do with teams, and this was the RAT.
  • everything was extremely well co-ordinated looked legit and was done in a professional manner.

A RAT is a Remote Access Trojan - this was the software which stole the developer's credentials which could then be used to publish the malicious package.

That's a very effective scam. I join a lot of meetings where I find myself needing to install Webex or Microsoft Teams or similar at the last moment and the time constraint means I always click "yes" to things as quickly as possible to make sure I don't join late.

Every maintainer of open source software used by enough people to be worth targeting in this way needs to be familiar with this attack strategy.

Tags: open-source, packaging, security, social-engineering, supply-chain

Highlights from my conversation about agentic engineering on Lenny's Podcast

I was a guest on Lenny Rachitsky's podcast, in a new episode titled An AI state of the union: We've passed the inflection point, dark factories are coming, and automation timelines. It's available on YouTube, Spotify, and Apple Podcasts. Here are my highlights from our conversation, with relevant links.

The November inflection point

4:19 - The end result of these two labs throwing everything they had at making their models better at code is that in November we had what I call the inflection point where GPT 5.1 and Claude Opus 4.5 came along.

They were both incrementally better than the previous models, but in a way that crossed a threshold where previously the code would mostly work, but you had to pay very close attention to it. And suddenly we went from that to... almost all of the time it does what you told it to do, which makes all of the difference in the world.

Now you can spin up a coding agent and say, build me a Mac application that does this thing, and you'll get something back which won't just be a buggy pile of rubbish that doesn't do anything.

Software engineers as bellwethers for other information workers

5:49 - I can churn out 10,000 lines of code in a day. And most of it works. Is that good? Like, how do we get from most of it works to all of it works? There are so many new questions that we're facing, which I think makes us a bellwether for other information workers.

Code is easier than almost every other problem that you pose these agents because code is obviously right or wrong - either it works or it doesn't work. There might be a few subtle hidden bugs, but generally you can tell if the thing actually works.

If it writes you an essay, if it prepares a lawsuit for you, it's so much harder to determine whether it's actually done a good job, and to figure out if it got things right or wrong. But it's happening to us as software engineers. It came for us first.

And we're figuring out, OK, what do our careers look like? How do we work as teams when part of what we did that used to take most of the time doesn't take most of the time anymore? What does that look like? And it's going to be very interesting seeing how this rolls out to other information work in the future.

Lawyers are falling for this really badly. The AI hallucination cases database is up to 1,228 cases now!

Plus this bit from the cold open at the start:

It used to be you'd ask ChatGPT for some code, and it would spit out some code, and you'd have to run it and test it. The coding agents take that step for you now. And an open question for me is how many other knowledge work fields are actually prone to these agent loops?

Writing code on my phone

8:19 - I write so much of my code on my phone. It's wild. I can get good work done walking the dog along the beach, which is delightful.

I mainly use the Claude iPhone app for this, both with a regular Claude chat session (which can execute code now) or using it to control Claude Code for web.

Responsible vibe coding

9:55 If you're vibe coding something for yourself, where the only person who gets hurt if it has bugs is you, go wild. That's completely fine. The moment you ship your vibe-coded software for other people to use, where your bugs might actually harm somebody else, that's when you need to take a step back.

See also When is it OK to vibe code?

Dark Factories and StrongDM

12:49 The reason it's called the dark factory is there's this idea in factory automation that if your factory is so automated that you don't need any people there, you can turn the lights off. Like the machines can operate in complete darkness if you don't need people on the factory floor. What does that look like for software? [...]

So there's this policy that nobody writes any code: you cannot type code into a computer. And honestly, six months ago, I thought that was crazy. And today, probably 95% of the code that I produce, I didn't type myself. That world is practical already because the latest models are good enough that you can tell them to rename that variable and refactor and add this line there... and they'll just do it - it's faster than you typing on the keyboard yourself.

The next rule though, is nobody reads the code. And this is the thing which StrongDM started doing last year.

I wrote a lot more about StrongDM's dark factory explorations back in February.

The bottleneck has moved to testing

21:27 - It used to be, you'd come up with a spec and you hand it to your engineering team. And three weeks later, if you're lucky, they'd come back with an implementation. And now that maybe takes three hours, depending on how well the coding agents are established for that kind of thing. So now what, right? Now, where else are the bottlenecks?

Anyone who's done any product work knows that your initial ideas are always wrong. What matters is proving them, and testing them.

We can test things so much faster now because we can build workable prototypes so much quicker. So there's an interesting thing I've been doing in my own work where any feature that I want to design, I'll often prototype three different ways it could work because that takes very little time.

I've always loved prototyping things, and prototyping is even more valuable now.

22:40 - A UI prototype is free now. ChatGPT and Claude will just build you a very convincing UI for anything that you describe. And that's how you should be working. I think anyone who's doing product design and isn't vibe coding little prototypes is missing out on the most powerful boost that we get in that step.

But then what do you do? Given three options instead of one, how do you prove to yourself which of them is best? I don't have a confident answer to that. I expect this is where good old-fashioned usability testing comes in.

More on prototyping later on:

46:35 - Throughout my entire career, my superpower has been prototyping. I've been very quick at knocking out working prototypes of things. I'm the person who can show up at a meeting and say, look, here's how it could work. And that was kind of my unique selling point. And that's gone. Anyone can do what I could do.

This stuff is exhausting

26:25 - I'm finding that using coding agents well is taking every inch of my 25 years of experience as a software engineer, and it is mentally exhausting. I can fire up four agents in parallel and have them work on four different problems. And by like 11 AM, I am wiped out for the day. [...]

There's a personal skill we have to learn in finding our new limits - what's a responsible way for us not to burn out.

I've talked to a lot of people who are losing sleep because they're like, my coding agents could be doing work for me. I'm just going to stay up an extra half hour and set off a bunch of extra things... and then waking up at four in the morning. That's obviously unsustainable. [...]

There's an element of sort of gambling and addiction to how we're using some of these tools.

Interruptions cost a lot less now

45:16 - People talk about how important it is not to interrupt your coders. Your coders need to have solid two to four hour blocks of uninterrupted work so they can spin up their mental model and churn out the code. That's changed completely. My programming work, I need two minutes every now and then to prompt my agent about what to do next. And then I can do the other stuff and I can go back. I'm much more interruptible than I used to be.

My ability to estimate software is broken

28:19 - I've got 25 years of experience in how long it takes to build something. And that's all completely gone - it doesn't work anymore because I can look at a problem and say that this is going to take two weeks, so it's not worth it. And now it's like... maybe it's going to take 20 minutes because the reason it would have taken two weeks was all of the sort of crufty coding things that the AI is now covering for us.

I constantly throw tasks at AI that I don't think it'll be able to do because every now and then it does it. And when it doesn't do it, you learn, right? But when it does do something, especially something that the previous models couldn't do, that's actually cutting edge AI research.

And a related anecdote:

36:56 - A lot of my friends have been talking about how they have this backlog of side projects, right? For the last 10, 15 years, they've got projects they never quite finished. And some of them are like, well, I've done them all now. Last couple of months, I just went through and every evening I'm like, let's take that project and finish it. And they almost feel a sort of sense of loss at the end where they're like, well, okay, my backlog's gone. Now what am I going to build?

It's tough for people in the middle

29:29 - So ThoughtWorks, the big IT consultancy, did an offsite about a month ago, and they got a whole bunch of engineering VPs in from different companies to talk about this stuff. And one of the interesting theories they came up with is they think this stuff is really good for experienced engineers, like it amplifies their skills. It's really good for new engineers because it solves so many of those onboarding problems. The problem is the people in the middle. If you're mid-career, if you haven't made it to sort of super senior engineer yet, but you're not sort of new either, that's the group which is probably in the most trouble right now.

I mentioned Cloudflare hiring 1,000 interns, and Shopify too.

Lenny asked for my advice for people stuck in that middle:

31:21 - That's a big responsibility you're putting on me there! I think the way forward is to lean into this stuff and figure out how do I help this make me better?

A lot of people worry about skill atrophy: if the AI is doing it for you, you're not learning anything. I think if you're worried about that, you push back at it. You have to be mindful about how you're applying the technology and think, okay, I've been given this thing that can answer any question and often gets it right. How can I use this to amplify my own skills, to learn new things, to take on much more ambitious projects? [...]

33:05 - Everything is changing so fast right now. The only universal skill is being able to roll with the changes. That's the thing that we all need.

The term that comes up most in these conversations about how you can be great with AI is agency. I think agents have no agency at all. I would argue that the one thing AI can never have is agency because it doesn't have human motivations.

So I'd say the thing to do is to invest in your own agency, and invest in learning how to use this technology to get better at what you do and to do new things.

It's harder to evaluate software

The fact that it's so easy to create software with detailed documentation and robust tests means it's harder to figure out what's a credible project.

37:47 Sometimes I'll have an idea for a piece of software, Python library or whatever, and I can knock it out in like an hour and get to a point where it's got documentation and tests and all of those things, and it looks like the kind of software that previously I'd have spent several weeks on - and I can stick it up on GitHub

And yet... I don't believe in it. And the reason I don't believe in it is that I got to rush through all of those things... I think the quality is probably good, but I haven't spent enough time with it to feel confident in that quality. Most importantly, I haven't used it yet.

It turns out when I'm using somebody else's software, the thing I care most about is I want them to have used it for months.

I've got some very cool software that I built that I've never used. It was quicker to build it than to actually try and use it!

The misconception that AI tools are easy

41:31 - Everyone's like, oh, it must be easy. It's just a chat bot. It's not easy. That's one of the great misconceptions in AI is that using these tools effectively is easy. It takes a lot of practice and it takes a lot of trying things that didn't work and trying things that did work.

Coding agents are useful for security research now

19:04 - In the past sort of three to six months, they've started being credible as security researchers, which is sending shockwaves through the security research industry.

See Thomas Ptacek: Vulnerability Research Is Cooked.

At the same time, open source projects are being bombarded with junk security reports:

20:05 - There are these people who don't know what they're doing, who are asking ChatGPT to find a security hole and then reporting it to the maintainer. And the report looks good. ChatGPT can produce a very well formatted report of a vulnerability. It's a total waste of time. It's not actually verified as being a real problem.

A good example of the right way to do this is Anthropic's collaboration with Firefox, where Anthropic's security team verified every security problem before passing them to Mozilla.

OpenClaw

Of course we had to talk about OpenClaw! Lenny had his running on a Mac Mini.

1:29:23 - OpenClaw demonstrates that people want a personal digital assistant so much that they are willing not just to overlook the security side of things, but also to fight through a setup that is not easy. You've got to create API keys and tokens and install stuff. It's not trivial to get set up, and hundreds of thousands of people got it set up. [...]

The first line of code for OpenClaw was written on November the 25th. And then in the Super Bowl, there was an ad for AI.com, which was effectively a vaporware white labeled OpenClaw hosting provider. So we went from first line of code in November to Super Bowl ad in what? Three and a half months.

I continue to love Drew Breunig's description of OpenClaw as a digital pet:

A friend of mine said that OpenClaw is basically a Tamagotchi. It's a digital pet and you buy the Mac Mini as an aquarium.

Journalists are good at dealing with unreliable sources

In talking about my explorations of AI for data journalism through Datasette:

1:34:58 - You would have thought that AI is a very bad fit for journalism where the whole idea is to find the truth. But the flip side is journalists deal with untrustworthy sources all the time. The art of journalism is you talk to a bunch of people and some of them lie to you and you figure out what's true. So as long as the journalist treats the AI as yet another unreliable source, they're actually better equipped to work with AI than most other professions are.

The pelican benchmark

Obviously we talked about pelicans riding bicycles:

56:10 - There appears to be a very strong correlation between how good their drawing of a pelican riding a bicycle is and how good they are at everything else. And nobody can explain to me why that is. [...]

People kept on asking me, what if labs cheat on the benchmark? And my answer has always been, really, all I want from life is a really good picture of a pelican riding a bicycle. And if I can trick every AI lab in the world into cheating on benchmarks to get it, then that just achieves my goal.

59:56 - I think something people often miss is that this space is inherently funny. The fact that we have these incredibly expensive, power hungry, supposedly the most advanced computers of all time. And if you ask them to draw a pelican on a bicycle, it looks like a five-year-old drew it. That's really funny to me.

And finally, some good news about parrots

Lenny asked if I had anything else I wanted to leave listeners with to wrap up the show, so I went with the best piece of news in the world right now.

1:38:10 - There is a rare parrot in New Zealand called the Kākāpō. There are only 250 of these parrots left in the world. They are flightless nocturnal parrots - beautiful green dumpy looking things. And the good news is they're having a fantastic breeding season in 2026.

They only breed when the Rimu trees in New Zealand have a mass fruiting season, and the Rimu trees haven't done that since 2022 - so there has not been a single baby kākāpō born in four years.

This year, the Rimu trees are in fruit. The kākāpō are breeding. There have been dozens of new chicks born. It's a really, really good time. It's great news for rare New Zealand parrots and you should look them up because they're delightful.

Everyone should watch the live stream of Rakiura on her nest with two chicks!

YouTube chapters

Here's the full list of chapters Lenny's team defined for the YouTube video:

  • 00:00: Introduction to Simon Willison
  • 02:40: The November 2025 inflection point
  • 08:01: What's possible now with AI coding
  • 10:42: Vibe coding vs. agentic engineering
  • 13:57: The dark-factory pattern
  • 20:41: Where bottlenecks have shifted
  • 23:36: Where human brains will continue to be valuable
  • 25:32: Defending software engineers
  • 29:12: Why experienced engineers get better results
  • 30:48: Advice for avoiding the permanent underclass
  • 33:52: Leaning into AI to amplify your skills
  • 35:12: Why Simon says he's working harder than ever
  • 37:23: The market for pre-2022 human-written code
  • 40:01: Prediction: 50% of engineers writing 95% AI code by the end of 2026
  • 44:34: The impact of cheap code
  • 48:27: Simon's AI stack
  • 54:08: Using AI for research
  • 55:12: The pelican-riding-a-bicycle benchmark
  • 59:01: The inherent ridiculousness of AI
  • 1:00:52: Hoarding things you know how to do
  • 1:08:21: Red/green TDD pattern for better AI code
  • 1:14:43: Starting projects with good templates
  • 1:16:31: The lethal trifecta and prompt injection
  • 1:21:53: Why 97% effectiveness is a failing grade
  • 1:25:19: The normalization of deviance
  • 1:28:32: OpenClaw: the security nightmare everyone is looking past
  • 1:34:22: What's next for Simon
  • 1:36:47: Zero-deliverable consulting
  • 1:38:05: Good news about Kakapo parrots

Tags: ai, kakapo, generative-ai, llms, podcast-appearances, coding-agents, agentic-engineering

Gemma 4: Byte for byte, the most capable open models


Four new vision-capable Apache 2.0 licensed reasoning LLMs from Google DeepMind, sized at 2B, 4B, 31B, plus a 26B-A4B Mixture-of-Experts.

Google emphasize "unprecedented level of intelligence-per-parameter", providing yet more evidence that creating small useful models is one of the hottest areas of research right now.

They actually label the two smaller models as E2B and E4B for "Effective" parameter size. The system card explains:

The smaller models incorporate Per-Layer Embeddings (PLE) to maximize parameter efficiency in on-device deployments. Rather than adding more layers or parameters to the model, PLE gives each decoder layer its own small embedding for every token. These embedding tables are large but are only used for quick lookups, which is why the effective parameter count is much smaller than the total.

I don't entirely understand that, but apparently that's what the "E" in E2B means!
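My loose reading of it, as a toy sketch (this is not Gemma's actual code, and every name and size here is made up): each decoder layer owns a large token-indexed table that is only ever read by index lookup, so those parameters can sit in cheap storage rather than in the active compute budget.

```javascript
// Toy illustration of the per-layer embedding (PLE) idea described above.
const VOCAB = 50, N_LAYERS = 3, D_MODEL = 4;

// ple[layer][token] -> a small vector; the tables are large overall,
// but using them is just an index lookup, not a matrix multiply.
const ple = Array.from({ length: N_LAYERS }, () =>
  Array.from({ length: VOCAB }, () =>
    Array.from({ length: D_MODEL }, () => Math.random() * 2 - 1)));

// Add each token's per-layer embedding to its hidden state: a pure lookup.
function addPle(hidden, tokenIds, layer) {
  return hidden.map((vec, i) =>
    vec.map((h, d) => h + ple[layer][tokenIds[i]][d]));
}

const tokens = [3, 17, 42];
let hidden = tokens.map(() => new Array(D_MODEL).fill(0));
for (let layer = 0; layer < N_LAYERS; layer++) {
  hidden = addPle(hidden, tokens, layer);
}
console.log(hidden.length, hidden[0].length); // 3 4
```

On this reading, the "effective" count excludes the lookup tables because they never participate in dense computation, which would explain why E2B can behave like a 2B model while storing more parameters than that.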

One particularly exciting feature of these models is that they are multi-modal beyond just images:

Vision and audio: All models natively process video and images, supporting variable resolutions, and excelling at visual tasks like OCR and chart understanding. Additionally, the E2B and E4B models feature native audio input for speech recognition and understanding.

I've not figured out a way to run audio input locally - I don't think that feature is in LM Studio or Ollama yet.

I tried them out using the GGUFs for LM Studio. The 2B (4.41GB), 4B (6.33GB) and 26B-A4B (17.99GB) models all worked perfectly, but the 31B (19.89GB) model was broken and spat out "---\n" in a loop for every prompt I tried.

The succession of pelican quality from 2B to 4B to 26B-A4B is notable:

E2B:

Two blue circles on a brown rectangle and a weird mess of orange blob and yellow triangle for the pelican

E4B:

Two black wheels joined by a sort of grey surfboard, the pelican is semicircles and a blue blob floating above it

26B-A4B:

Bicycle has the right pieces although the frame is wonky. Pelican is genuinely good, has a big triangle beak and a nice curved neck and is clearly a bird that is sitting on the bicycle

(This one actually had an SVG error - "error on line 18 at column 88: Attribute x1 redefined" - but after fixing that I got probably the best pelican I've seen yet from a model that runs on my laptop.)

Google are providing API access to the two larger Gemma models via their AI Studio. I added support to llm-gemini and then ran a pelican through the 31B model using that:

llm -m gemini/gemma-4-31b-it 'Generate an SVG of a pelican riding a bicycle'

Pretty good, though it is missing the front part of the bicycle frame:

Motion blur lines, a mostly great bicycle albeit missing the front part of the frame. Pelican is decent.

Tags: google, ai, generative-ai, local-llms, llms, llm, vision-llms, pelican-riding-a-bicycle, llm-reasoning, gemma, llm-release, lm-studio

Friday assorted links

1. Ben Yeoh on Measure for Measure.

2. How much is a badly damaged Gentileschi worth?

3. Sabine Hossenfelder on UAP evidence.  And a bit more.

4. New record as Indian painting auctions for $17.9 million.

5. On African urbanization.

6. South Africa banned TV until 1976.

7. Ping Pong Park, in France.

8. How do AI models respond to direct authoritarian requests?

9. Lynne Kiesling on which parts of economics will be repriced, as a result of AI.

10. How replaceable am I?  An agent takes on that question.  And another Karpathy idea.

The post Friday assorted links appeared first on Marginal REVOLUTION.

       


The Week Observed: April 3, 2026

What City Observatory Did This Week

A freeway doesn’t run through it.  The New York Times had a feature on Portland, calling the city “weird” and adding that life there is good.  It features a photo of the blossoming cherry trees along the Willamette River, but fails to note that at exactly this site, a Robert Moses-inspired freeway once cut the center of the city off from its riverfront.  That all changed half a century ago, when, in response to citizen activism, the city chose to tear out the freeway, build a park, and ultimately, plant cherry trees.

Left:  Tom McCall Waterfront Park, 2026.  –   Right:  Same Place, 1950:  Harbor Drive Freeway

It’s a reminder that when we build cities for people, and not for cars, we get startlingly better results.  Portland flourished economically in the decades after tearing out the old Harbor Drive Expressway.  The city gained more economically from the freeways it demolished, and from the ones it never built, than from the ones it did.

Must Read

 

Paris voters endorse a bike-friendly city.  Over the past twelve years, Mayor Anne Hidalgo has transformed Paris, investing in new bike infrastructure and pedestrian streets, and boldly taking street space back from cars and giving it to people.  To someone who has visited the city for decades, the last few years have brought the most rapid and remarkable transformation.  Principal streets like the Rue de Rivoli are a steady stream of people on bikes.  The city is both more alive and busier, and in many respects quieter, as the walking and pedaling traffic is far less noisy than les voitures.  But as in the United States, none of this shift happened without a loud outcry from opponents.  There were some signs that when Mayor Hidalgo retired, as she chose to do this year, she would be replaced by a car advocate.

Instead, her successor, fellow socialist Emmanuel Grégoire, cruised to victory, taking a victory lap on a Vélib’ bicycle on election night as he beat back a centrist right-wing coalition candidate by more than 10 points.  As Ron Johnson notes at Momentum Magazine, this signals that a largely quiet, if not silent, majority actually endorsed these transportation changes, and vastly outnumbered the noisy opponents.

What Paris may have just revealed is a classic case of pluralistic ignorance — when large numbers of people quietly support a policy but assume they’re in the minority. In that vacuum, critics dominate the conversation, creating the illusion of widespread opposition. Bike lanes become “controversial.” Traffic calming becomes “divisive.” And politicians elsewhere take note — often the wrong lesson.

Mission accomplished:  The Trump Administration squelches immigration across the country.  New Census data, reported by the New York Times, shows that in 2025, immigration to every metropolitan area in the United States declined.  For a nation founded upon, and thriving because of, immigration, this is a body blow.  It is particularly devastating to cities, which have traditionally been the point of entry and assimilation into the United States.  As the Times reports:

Every metro area in the United States, in fact, experienced lower immigration rates during the year leading up to July 2025 compared with the previous year, according to new estimates released on Thursday by the Census Bureau. In about 75 percent of all counties, overall population growth — including immigration, domestic migration, births and deaths — either slowed or turned negative. Only 25 percent grew faster. And large urban counties and border counties, which had experienced a surge in new arrivals in recent years, were among the hardest-hit parts of the country.

And naive xenophobic claims that immigration hurts the economic condition of current residents are not just flatly wrong, they’re exactly backwards.  The American economy benefits from immigration.  A whole range of industries, from agriculture to food service to construction and, increasingly, health care, depends on migrants to address a growing shortage of labor.  Careful economic studies show immigrants complement native-born workers and tend to drive up compensation.  And, as Paul Krugman explains, cracking down on immigration and scaring away potential immigrants is making the economy and our fiscal situation worse, not better:

. . . waging war against immigrants is not resulting in higher employment of the native-born. In fact, it’s contributing to a stalling of the economy in construction and in the service industries.  . . . Immigration expands the base of taxpayers, which means more people to share the burden of paying taxes to pay for defense. This includes undocumented immigrants, because their employers collect payroll taxes out of their wages, with the added fiscal payoff that they will never collect benefits. And because immigrants are relatively young and healthy, they increase the amount going into government coffers while having a delayed impact on outlays. The Social Security Administration does sensitivity analysis of factors affecting its projections, and consistently finds that higher immigration improves the system’s financial health, while lower immigration worsens it.

The Trump Administration’s assault on immigration and immigrants is profoundly immoral and un-American.  And it plainly threatens one of the key foundations of long-term U.S. prosperity.

Can your mayoral candidate do this? Nithya Raman, who is running for Mayor of Los Angeles has a minute-long video that neatly explains the reasons the city has a housing affordability problem and what she would do about it.  She even includes a scatter plot of metro area rent and housing data, with a regression line and an R-squared value.

So, Nithya.  Why is the rent so damn high in Los Angeles?

Angelenos, and residents of just about any city could greatly benefit from leaders like this who can speak with such clarity, and forcefully marshal data to make their case.

 

 

 

Harbor Drive

Greetings from war-ravaged Portland!  Here, according to the March 25, 2026 New York Times, is what the city looks like now, at pretty much the epicenter of a so-called doom-loop.  Clouds of sakura (cherry blossoms) float over the park that runs along the Willamette River in the center of the city.
If anything, the Times photos understate just how many people are drawn to the spot by the simple reverie of the blossoms.  Here’s what the same spot looked like a couple of days before the Times article:

Sure, now it’s a picture-postcard tourist attraction.  It didn’t always look this way.

This is exactly the place where, half a century ago, Portland tore out the Harbor Drive freeway that cut off the city from its riverfront, and instead built a park.  In the 1950s, this same area looked like this.

If highway engineers and the chamber of commerce had had their way, Portland’s waterfront would still look like this.

The freeway was built in the 1940s, pretty much following the advice of Robert Moses.  Engineers leveled Portland’s waterfront and built an expressway along the Willamette River.  In the 1960s, many citizens started lobbying to turn the area into a park.  At least initially, they were blocked by the powers-that-be, who equated roadways with economic activity.  A nine-man task force (all its members were men) was appointed by the Mayor and Governor, and chaired by Glenn Jackson, chair of the city’s largest utility and of the state transportation commission, to re-evaluate possible uses of the waterfront.  The business community objected to calls to remove Harbor Drive, and the highway engineers said the roadway was needed to meet growing traffic.  The state highway department actually recommended expanding Harbor Drive from four lanes to six.

. . . in spite of this public outcry the taskforce reinforced the decision to retain the 6 lane freeway in the relocated location stating that “the state highway engineers projected there would be 90,000 trips per day in the corridor by 1990” and there was no way one could get rid of Harbor Drive.

Ultimately, in the face of public protests, Governor Tom McCall prevailed on Jackson and the state highway department to agree to remove Harbor Drive and allow the city of Portland to turn the area into a park.  The park was a centerpiece of the city’s new downtown plan, which helped trigger a renaissance in a city that had been in decline.  The quirkiness and high quality of life praised in the New York Times radiates from the city’s revived center and waterfront.

The lesson here is simple:  freeways are toxic to urban space and city economies.  Jeffrey Brinkman and Jeffrey Lin of the Philadelphia Federal Reserve studied the effect of freeway construction on urban neighborhoods and found that the more freeways a city built, the more its population declined, and the closer a city neighborhood was to a freeway, the more population it lost.  Freeway construction directly destroys housing and businesses, but that’s just the beginning.  Car dependence, and the traffic and pollution from vehicles, lowers the quality of life and drives away residents.  Portland’s economy has always benefited more from the freeways it demolished, and the ones it never built, than the ones that sliced through city neighborhoods.

And the park that now stands where the freeway once ran is not merely a bit of urban greenery; it’s an important civic space in its own right.  For example, No Kings demonstrations in Portland, like this one in 2025, took place on symbolically important ground in the heart of the city.  Tens of thousands marched along Naito Parkway and Governor Tom McCall Waterfront Park, and through the Japanese-American Memorial Plaza.
The Naito Parkway honors civic leader Bill Naito, who was among those pushed out of Portland in the early days of World War II by the federal government’s illegal “internment” of Japanese Americans.  Cities are the places where we come together to seek redress of grievances and exercise these fundamental rights.

The President(s) Fought the Law and the Law Won

In our textbook, Modern Principles, Tyler and I emphasize that Congress and the President are subject to a higher law, the law of supply and demand. In an excellent column, Jason Furman gives a clear example of how difficult it is to fight the law of inelastic demand:

…Today a given number of autoworkers can make, according to my calculations, three times as many cars in a year as they could 50 years ago.

The problem is that consumers do not want three times as many cars. Even as people get richer, they increase their spending on manufactured goods only modestly, preferring instead to spend more on services like travel, health care and dining out. There are only so many cars a family can own, but that’s not the case for expensive vacations or fancy meals. As a result we have fewer people working in auto factories and more people working in luxury resorts and the like.

These forces — rising productivity but steady demand — explain why the United States was losing manufacturing job share as far back as the 1950s and 1960s, long before trade became a major factor.

The post The President(s) Fought the Law and the Law Won appeared first on Marginal REVOLUTION.

       


Day Counter

It has been −2,147,483,648 days since our last integer overflow.

Information and Technological Evolution

I spend a lot of time reading about the nature of technological progress, and I’ve found that the literature on technology is somewhat uneven. If you want to learn about how some particular technology came into existence, there’s often very good resources available. Most major inventions, and many not-so-major ones, have a decent book written about them. Some of my favorites are Crystal Fire (about the invention of the transistor), Copies in Seconds (about the early history of Xerox), and High-Speed Dreams (about early efforts to build a supersonic airliner).

But if you’re trying to understand the nature of technological progress more generally, the range of good options narrows significantly. There’s probably not more than ten or twenty folks who have studied the nature of technological progress itself and whose work I think is worth reading.

One such researcher is Brian Arthur, an economist at the Santa Fe Institute.1 Arthur is the author of an extremely good book about the nature of technology (called, appropriately, “The Nature of Technology”), which I often return to. He’s also the co-author, along with Wolfgang Polak, of an interesting 2006 paper, “The Evolution of Technology within a Simple Computer Model,” that I think is worth highlighting. In this paper, Arthur evolves various boolean logic circuits (circuits that take ones and zeroes as inputs and give ones and zeroes as outputs) by starting with simple building blocks and gradually building up more and more complex functions (such as a circuit that can add two eight-bit numbers together).

Logic circuits invented by Arthur’s simulation.

I wanted to highlight this paper because I think it sheds some light on the nature of technological progress, but also because the paper does a somewhat poor job of articulating the most important takeaways. Some of what the paper focuses on — like the mechanics of how one technology gets replaced by a superior technology — I don’t actually think are particularly illuminating. By contrast, what I think is the most important aspect of the paper — how creating some new technology requires successfully navigating enormous search spaces — is only touched on vaguely and obliquely. But with a little additional work, we can flesh out and strengthen some of these ideas. And when we look a little closer, we find what the paper is really showing us is that finding some new technology is a question of efficiently acquiring information.

Outline of the paper

The basic design of the experiment is simple: run a simulation that randomly generates various boolean logic circuits and analyze the sort of circuits that the simulation generates. Boolean logic circuits are collections of various functions (such as AND, OR, NOT, EQUAL) that perform some particular operation on binary numbers. The logic circuit below, for instance, determines whether two 4-bit numbers are equal using four exclusive nor (XNOR) gates, which output a 1 if both inputs are identical, and a 4-way AND gate, which outputs a 1 if all inputs are 1. Boolean logic circuits are important because they’re how computers are built: a modern computer does its computation by way of billions and billions of transistors arranged in various logic circuits.

The simulation works by starting with three basic circuit elements that can be included in the randomly generated circuits: the Not And (NAND) gate (which outputs 0 if both inputs are 1, and 1 otherwise), and two CONST elements which always output either 1 or 0. The NAND gate is particularly important because NAND is functionally complete; any boolean logic circuit can be built through the proper arrangement of NAND gates.
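Functional completeness is concrete enough to check in a few lines. A quick sketch (the helper names are mine; the constructions are the standard NAND identities):

```python
def nand(a, b):
    """2-input NAND: outputs 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# Every other gate, built purely from NAND:
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    # the classic four-NAND construction of XOR
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))
```

Walking each function through all input combinations reproduces the familiar truth tables, which is exactly what it means for NAND to be functionally complete.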

Using these starting elements, the simulation tries to build up towards higher-level logical functions. Some of these goals, such as creating the OR, AND, and exclusive-or (XOR) functions, are simple, and can be completed with just a few starting elements. Others are extremely complex, and require dozens of starting elements to implement: an 8-bit adder, for instance, requires 68 properly arranged NAND gates.

To achieve these goals, during each iteration the simulation randomly combines several circuit elements — which at the beginning are just NAND, one, and zero. It randomly selects between two and 12 components, wires them together randomly, and looks to see if the outputs of the resulting circuit achieve any of its goals. If they do — if, by chance, the random combination of elements has created an AND function, or an XOR function, or any of its other goals — that goal is marked as fulfilled, and the circuit that fulfills it gets “encapsulated”: added to the pool of possible circuit elements. Once the simulation finds an arrangement of NAND components that produces AND and OR, for instance, those AND and OR arrangements join NAND and the two CONST elements in the pool. Future iterations thus might accidentally stumble across XOR by combining AND, OR, and NAND.

An XOR gate made from a NAND, an OR, and an AND gate.
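The generate-and-encapsulate loop can be sketched in miniature. Everything below is my own simplification (a handful of random wiring patterns over two inputs, four goals); the real simulation wires multi-output circuits, tracks component costs, and handles partial fulfillment:

```python
import itertools
import random

random.seed(0)

def nand(a, b):
    return 1 - (a & b)

# goal name -> target truth table over inputs (a, b), in order 00, 01, 10, 11
goals = {
    "NOT_A": [1, 1, 0, 0],
    "AND":   [0, 0, 0, 1],
    "OR":    [0, 1, 1, 1],
    "XOR":   [0, 1, 1, 0],
}
pool = [nand]    # encapsulated building blocks; starts as just NAND
found = {}       # goal name -> iteration at which it was discovered

def truth_table(f):
    return [f(a, b) for a, b in itertools.product((0, 1), repeat=2)]

def random_compose():
    """Pick a few pool gates and wire them together in a random pattern."""
    g1, g2, g3 = (random.choice(pool) for _ in range(3))
    forms = [
        lambda a, b: g1(a, a),                # unary use of a gate
        lambda a, b: g2(g1(a, b), b),         # chain two gates
        lambda a, b: g3(g1(a, b), g2(a, b)),  # combine two sub-circuits
        lambda a, b: g3(g1(a, a), g2(b, b)),  # e.g. NAND(NOT a, NOT b) = OR
    ]
    return random.choice(forms)

for step in range(50_000):
    candidate = random_compose()
    tt = truth_table(candidate)
    for name, target in goals.items():
        if name not in found and tt == target:
            found[name] = step
            pool.append(candidate)  # encapsulate: now a reusable component
print(sorted(found))
```

With this seed, the simple goals fall out almost immediately, and XOR follows once AND and OR are in the pool to be recombined, mirroring the stepping-stone behavior described above.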

Because finding an exact match for a given goal might be hard, especially as goals get more complex, the simulation will also add a given circuit to the pool of usable components if it partially fulfills a goal, as long as it does a better job of meeting that goal than any existing circuit. Circuits that partially meet some goal (such as a 4-bit adder that gets just the last digit wrong) are similarly used as components that can be recombined with other elements. So the simulation might try wiring up our partly-correct 4-bit adder with other elements (NAND, OR, etc.) to see what it gets; maybe it finds another mini-circuit that can correct that last digit.

Over time, the pool of circuit elements that the simulation randomly draws from grows larger and larger, filled both with circuits that completely satisfy various goals and with partly-working circuits. A circuit can also get added to the pool if it’s less expensive — uses fewer components — than existing circuits for that goal. So if the simulation has a 2-bit adder made from 10 components, but stumbles across a 2-bit adder made from 8 components, the 8-component adder will replace the 10-component one.

When the simulation is run, it begins randomly combining components, which at the beginning are just NAND, one, and zero. At first only simple goals are fulfilled: OR, AND, NOT, etc. The circuits that meet these goals then become building blocks for more complex goals. Once a 4-way AND gate is found (which outputs 1 only if all its inputs are 1), that can be used to build a 5-way AND gate, which in turn can be used to build a 6-way AND gate. Over several hundred thousand iterations, surprisingly complex circuits can be generated: circuits which compare whether two 4-bit numbers are equal, circuits which add two 8-bit numbers together, and so on.
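That staircase of wider AND gates can be made concrete: each wider gate is just the previous gate plus one more 2-input AND. A sketch (function names are mine):

```python
def nand(a, b):
    return 1 - (a & b)

def and2(a, b):
    # AND = NOT(NAND), i.e. two NAND gates
    return nand(nand(a, b), nand(a, b))

def widen(and_k):
    """Turn a k-way AND into a (k+1)-way AND with one more 2-way AND."""
    return lambda *bits: and2(and_k(*bits[:-1]), bits[-1])

and_n = and2
for _ in range(4):   # 2-way -> 3-way -> 4-way -> 5-way -> 6-way
    and_n = widen(and_n)

print(and_n(1, 1, 1, 1, 1, 1))   # 1
print(and_n(1, 1, 1, 0, 1, 1))   # 0
```

Each step is a tiny, easy-to-find extension of an already-working component, which is exactly why the staircase works.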

However, if the simpler goals aren’t met first, the simulation won’t find solutions to the more complex goals. If you remove a full-adder from the list of goals, the simulation will never find the more complex 2-bit adder. Per Arthur, this demonstrates the importance of using simpler technologies as “stepping stones” to more complex ones, and how technologies consist of hierarchical arrangements of sub-technologies (which is a major focus of his book).

We find that our artificial system can create complicated technologies (circuits), but only by first creating simpler ones as building blocks. Our results mirror Lenski et al.’s: that complex features can be created in biological evolution only if simpler functions are first favored and act as stepping stones.

Analyzing this paper

I don’t have access to the original simulation that Arthur ran, but thanks to modern AI tools it was relatively easy for me to recreate it and replicate many of these results. Running it for a million iterations, I was able to build up to several complex goals: 6-bit equal, a full-adder (which adds 3 1-bit inputs together), 7-bitwise-XOR, and even a 15-way AND circuit.

Screenshot of my simulation running.

But I also found that not all of the simulation design elements from the original paper are load-bearing, at least in my recreated version. In particular, much of the simulation is devoted to the complex “partial fulfillment” mechanic, which adds circuits that only partially meet goals, and gradually replaces them as circuits that better meet those goals are found. The intent of this mechanic, I think, is to make it possible to gradually converge on a goal by building off of partly-working technologies, which is how real-world technologies come about. However, when I turn this mechanic off, forcing the simulation to discard any circuit that doesn’t 100% fulfill some goal, I get no real difference in how many goals get found: the partial fulfillment mechanic basically adds nothing (though this could be due to differences in how the simulations were implemented).

To me the most interesting aspect of this paper isn’t showing how new, better technologies supersede earlier ones, but how the search for a new technology requires navigating enormous search spaces. Finding complex functions like an 8-bit adder or a 6-bit equal requires successfully finding working functions amidst a vast ocean of non-working ones. Let me show you what I mean.

We can define a particular boolean logic function with a truth table – an enumeration of every possible combination of inputs and outputs. The truth table for an AND function, for instance, which outputs a 1 if both inputs are 1 and 0 otherwise, looks like this:

a | b | AND(a, b)
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1

Every logic function has a unique truth table, and for a given number of inputs and outputs there are only so many possible logic functions, and thus only so many possible truth tables. For instance, there are only four possible 1-input, 1-output functions: identity, NOT, constant 0, and constant 1.

However, the space of possible logic functions gets very very large, very very quickly. For a function with n inputs and m outputs, the number of possible truth tables is (2^m)^(2^n). So if you have 2 inputs and 1 output, there are 2^4 = 16 possible functions (AND, NAND, OR, NOR, XOR, XNOR, and 10 others). If you have 3 inputs and 2 outputs, that rises to 4^8 = 65,536 possible logic functions. If you have 16 inputs and 9 outputs, like an 8-bit adder does, you have a mind-boggling 10^177554 possible logic functions. By comparison, the number of atoms in the universe is estimated to be on the order of 10^80, and the number of milliseconds since the big bang is on the order of 4x10^20. Fulfilling some goal from circuit space means finding one particular function in a gargantuan sea of possibilities.
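The counting argument is easy to verify directly:

```python
import math

# Number of distinct boolean functions (truth tables) with n inputs
# and m outputs: (2**m) ** (2**n).
def num_functions(n_inputs, m_outputs):
    return (2 ** m_outputs) ** (2 ** n_inputs)

print(num_functions(2, 1))   # 16 two-input, one-output functions
print(num_functions(3, 2))   # 65536

# An 8-bit adder has 16 inputs and 9 outputs (8 sum bits plus carry out).
# The count is far too large to print, so take its base-10 exponent instead:
log10_count = (2 ** 16) * 9 * math.log10(2)
print(int(log10_count))      # 177554 -- i.e. over 10**177,554 functions
```

The exponent comes from log10((2**9)**(2**16)) = 2**16 * 9 * log10(2), avoiding the need to materialize a number with roughly 177,555 decimal digits.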

The question is, how is the simulation able to navigate this enormous search space? Arthur touches on the answer — proceeding to complex goals by way of simpler goals — but he doesn’t really look deeply at the combinatorics in the paper, or how this navigation happens specifically.2

The emergence of circuits such as 8-bit adders seems not difficult. But consider the combinatorics. If a component has n inputs and m outputs there are (2^m)^(2^n) possible phenotypes, each of which could be realized in a practical way by a large number of different circuits. For example, an 8-bit adder is one of over 10^177,554 phenotypes with 16 inputs and 9 outputs. The likelihood of such a circuit being discovered by random combinations in 250,000 steps is negligible. Our experiment— or algorithm—arrives at complicated circuits by first satisfying simpler needs and using the results as building blocks to bootstrap its way to satisfy more complex ones.

Navigating large search spaces

In his 1962 paper “The Architecture of Complexity,” Nobel Prize-winning economist Herbert Simon describes two hypothetical watchmakers, Hora and Tempus. Each makes watches with 1000 parts in them, and assembles them one part at a time. Tempus’ watches are built in such a way that if the watchmaker gets interrupted — if he has to put down the watch to, say, answer the phone — the assembly falls apart, and he has to start all over. Hora’s watches, on the other hand, are made from stable subassemblies. Ten parts get put together to form a level 1 assembly, ten level 1 assemblies get put together to form a level 2 assembly, and 10 level 2 assemblies get put together to form the final watch. If Hora is interrupted in the middle of a subassembly, it falls to pieces just like Tempus’ watches, but once a subassembly is complete it’s stable; he can put it down and move on to the next assembly.

It’s easy to see that Tempus will make far fewer watches than Hora. If both have a 1% chance of getting interrupted each time they put in a part, Tempus only has a 0.99 ^ 1000 = 0.0043% chance of assembling a completed watch; the vast majority of the time, the entire watch falls to pieces before he can finish. But when Hora gets interrupted, he doesn’t have to start completely over, just from the last stable subassembly. The result is that Hora makes completed watches about 4,000 times faster than Tempus.
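The watchmaker arithmetic, checked directly with the numbers from the text:

```python
# Probability arithmetic for Simon's two watchmakers.
p_ok = 0.99            # chance each added part goes in without interruption

# Tempus: all 1000 additions must succeed in one unbroken run.
p_tempus = p_ok ** 1000
print(f"{p_tempus:.4%}")       # 0.0043%

# Hora: a stable subassembly needs only 10 uninterrupted additions,
# so most of his work survives an interruption.
p_subassembly = p_ok ** 10
print(f"{p_subassembly:.1%}")  # 90.4%
```

The contrast between a 0.0043% chance of finishing in one run and a 90.4% chance of finishing each small subassembly is the whole story.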

Simon uses this model to illustrate how complex biological systems might have evolved; if a biological system is some assemblage of chemicals, it’s much more likely for those chemicals to come together by chance if some small subset of them can form a stable subassembly. But we can also use the Tempus/Hora model to describe the technological evolution being simulated in Arthur’s paper.

Consider a technology as some particular arrangement of 1,000 different parts, such as the NAND gates that are the basic building blocks of Arthur’s logic circuits. If you can find the proper arrangement of parts, you can build a working technology. Assume we try to build a technology by adding one part at a time, like Tempus and Hora build their watches, until all 1000 parts have been added. In this version, instead of having some small probability of being interrupted and needing to start over, we have a small probability (say 1%) of correctly guessing the next component. This mirrors Arthur’s simulation, where we had a small probability of randomly connecting a component correctly to fulfill some goal. Only by properly guessing the arrangement of each part, in order, can we create a working technology.

In Simon’s original model, assembling a watch was like flipping 1000 biased coins in a row. Each coin had a 99% chance of coming up heads, and only when 1000 heads were flipped was a watch successfully assembled. Our modified model is like flipping 1000 biased coins which have only a 1% chance of coming up heads. Creating a technology via the “Tempus” method is like flipping 1000 coins in a row and hoping for heads each time. The probability of producing a working technology is 0.01^1000, essentially zero. But if we create a technology via the “Hora” method of building it out of stable subassemblies, the combinatorics become much less punishing. Now instead of needing to flip 1000 heads in a row, we only need to flip 10 in a row. 10 successful coinflips — 10 parts successfully added — gives us a stable subassembly, letting us essentially “save our place.” Flipping a tails doesn’t send us all the way back to zero, just to the last stable subassembly. The odds are still low — for each subassembly, you only have a 0.01^10 chance of getting it right — but it’s enormously higher than the Tempus design. You’re much more likely to stumble across a working technology if that technology is composed of simpler stable components, and you can determine whether the individual components are correct.
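The modified coin model can be worked exactly (a sketch of the analogy, using the text's 1% guessing probability):

```python
from fractions import Fraction

p = Fraction(1, 100)   # 1% chance of guessing the next part correctly

# Tempus-style: one run of 1000 correct guesses in a row.
p_all_at_once = p ** 1000        # 10**-2000: effectively zero

# Hora-style: success is locked in 10 parts at a time, so a failure only
# sends you back to the last stable subassembly, not to zero.
p_per_subassembly = p ** 10      # 10**-20 per subassembly -- still tiny,
                                 # but each success is kept permanently

# 100 subassemblies of 10 parts cover all 1000 parts:
print(p_all_at_once == p_per_subassembly ** 100)   # True
```

Exact rationals make the relationship explicit: the all-at-once probability is the subassembly probability raised to the 100th power, which is what "saving your place" buys you.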

Arthur’s circuit simulation is able to find complex technologies because it works like Hora, not Tempus: complex circuits are built up from simpler technologies, the way Hora’s watches are built from stable subassemblies. Going from nothing to an 8-bit adder is like Tempus trying to build an entire and very complex watch by getting every step perfect. Much easier to be like Hora, and be the one that only needs to get the next few steps to a stable subassembly correct: adding a few components to a 6-bit adder to get a 7-bit adder, then adding a few to that one to get an 8-bit adder, and so on.

We can illustrate this more clearly with a modified version of Arthur’s circuit search. In this version, rather than trying to fulfill a huge collection of goals, we’re just trying to find the design for a specific 8-bit adder made from 68 NAND gates. Rather than build this up from simpler sub-components (7-bit adders, 6-bit adders, full adders), in this simulation we simply go NAND gate by NAND gate. Each iteration we add a NAND gate, and randomly wire it to our existing set of NAND gates. If we get the wiring correct, we keep it, and go on to try adding the next gate. If it’s incorrect, we discard it and try again.

We can think of this as a sort of modular construction, akin to building a complex circuit up from simpler circuits; at each level, we’re just combining two components, our existing subassembly and one additional NAND gate. This loses verisimilitude, since each subassembly no longer implements some particular functionality (we essentially just dictate that the simulation knows when it stumbles upon the correct gate wiring). But we don’t lose that much: it is, notably, possible to build an 8-bit adder with a hierarchy that requires just two components at almost every level (a few steps require three components). And this simpler simulation has the benefit of making it very easy to calculate the combinatorics at each step.

Hierarchical 8-bit adder. FA is a full-adder (which adds 3 input values together), HA is a half-adder which adds 2 input values together). Full decomposition down to NAND gates not shown.

68 NAND gates can create around 2^852 possible wiring arrangements with 16 inputs and 9 outputs. This is much less than the 10^177,554 possible 16-input, 9-output functions, but it’s still an outrageously enormous number. If we tried to find the right wiring arrangement by randomly guessing all 68 gates at once, we’d never succeed: even if every atom in the universe were a computer, each one trying a trillion guesses a second, we’d still be guessing for about 10,000,000,000,000...(140 more zeroes)...000 years.

But by going gate-by-gate, the correct arrangement can be found in 453,000 iterations, on average. Each time we add a gate, there’s only a few thousand possible ways that it can be connected, so after a few thousand iterations we guess it correctly, lock the answer in, and move on to the next gate. By determining whether each step is correct, instead of trying to guess the complete answer all at once, the search becomes feasible.

This is why Arthur’s original simulation couldn’t fulfill complex needs without fulfilling simpler needs first: if you try to take too many steps at once, the combinatorics become too punishing, and it becomes almost impossible to find the correct answer by random guessing. In our 68 NAND gate search, finding an 8-bit adder is relatively easy if we go one gate at a time, but if we change that to two gates at a time (randomly adding one gate, then another gate, then checking to see if we’re correct), the expected number of iterations rises from 453,000 to 1.75 billion: if the probability of guessing one gate correctly is 1/1,000, the probability of guessing two gates is 1/1,000,000. If we try to guess three gates at a time (1 in a billion odds of guessing correctly), the number of expected iterations to guess all 68 gates correctly rises to ~9.3 trillion.
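A back-of-the-envelope sketch makes the blowup concrete. (The 1,000-wirings-per-gate figure below is an illustrative assumption, not the exact number from our simulation, which is why the totals differ from the 453,000 and 1.75 billion above.)

```python
def expected_iterations(num_gates=68, wirings_per_gate=1000, k=1):
    """Expected random guesses to place all gates, guessing k gates per step.

    Each step succeeds with probability (1/wirings_per_gate)**k, so a step
    takes wirings_per_gate**k guesses on average, and there are num_gates/k
    steps. Illustrative parameters only, not Arthur's exact simulation.
    """
    steps = num_gates / k
    return steps * wirings_per_gate ** k

print(expected_iterations(k=1))  # one gate at a time: 68,000 guesses
print(expected_iterations(k=2))  # two at a time: 34,000,000 -- a 500x blowup
```

Even with these made-up numbers, the shape of the result is the same: batching gates multiplies the cost by orders of magnitude.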

The explosive combinatorics give us a better understanding of some of the results that come out of Arthur’s simulation. For instance, in each iteration the simulation combines up to twelve components, then checks to see if a working circuit has been found. But you can vary that maximum without much impact: Arthur states of the various simulation settings that “[e]xtensive experiments with different settings have shown that our results are not particularly sensitive to the choice of these parameters.” Indeed, if we re-run the simulation and only allow it to try a maximum of 4 components at once, it works essentially as well as with 12. The more random components you combine together, the more the combinatorial possibilities explode, and the lower the chance of finding something useful. The probability of finding a useful circuit among the possibilities becomes so immensely low with larger numbers of components that you lose little by not bothering with them at all. Similarly, this explains another result in the paper: it’s easier to find complex goals if you specify only a narrower subset of the simpler goals related to them. Arthur notes that a complex 8-bit adder is found much more quickly if you give the simulation only a few goals related to building adders. With fewer goals specified, the pool of possible technology components stays smaller, the number of possible random combinations stays lower, and the complex goals become easier to find.

In essence, using simpler components as stepping stones to more complex ones is a kind of hill-climbing. The simulation looks in various directions (possible combinations of building blocks) until it finds one that’s higher up the hill (a circuit that meets some simple goal), restarts the search at that new, higher point, and repeats until it reaches a peak (satisfies a complex goal). The simulation is able to satisfy complex goals because it specified a series of simpler ones that provide a path up the hill to the complex goals. Arthur notes that “[t]he algorithm works best in spaces where needs are ordered (achievable by repetitive pattern), so that complexity can bootstrap itself by exploiting regularities in constructing complicated objects from simpler ones.”

Trying to go to complex circuits directly, then, is akin to just testing random locations in the landscape and seeing if they’re a high point: this is obviously much worse than following the slope of the landscape to find the high points.

Technological search and information

We can sharpen these ideas even further by bringing in some concepts from information theory. Information theory was invented by Claude Shannon at Bell Labs in the late 1940s, and it provides a framework for quantifying your uncertainty, and how much a given event reduces that uncertainty.

I find the easiest way to understand information theory is with binary numbers. The normal math we use day to day uses base 10 numbers. When we count upward from zero, we go from 0 to 9, then reset the first digit to 0 and increment the next digit: 10. With binary, or base 2, we increment the next digit after we get to 1. So 1 in base 10 is 1 in binary, but 2 is 10, 3 is 11, 4 is 100, and so on.

Decimal (base 10) and binary (base 2) numbers.

In binary, each binary digit, or bit, doubles the potential size of the number we can represent. So with two digits, we can define 4 possible values (0, 1, 2, and 3 in base 10). With 3 digits, that doubles to 8 possible values (getting us from 0 through 7), with 4 digits that doubles again to 16 possible values, and so on. A 16-bit binary number can represent 2^16 = 65,536 possible values, which is why in computer programming the largest value an unsigned 16-bit integer can represent is 65,535.
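A quick sanity check of the doubling, in Python:

```python
# each additional bit doubles the number of representable values
for bits in (2, 3, 4, 16):
    values = 2 ** bits
    print(f"{bits} bits -> {values} values (0 through {values - 1})")
```

The last line printed shows where the familiar 65,535 ceiling comes from.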

Say you have a string of bits, but don’t know whether they’re ones or zeroes. Because each bit doubles the number of possible values that can be represented, each unknown bit you fill in reduces the number of possible values by half. If you have 3 binary digits, there are 8 possible numbers that could be represented. Each time you learn what one of the bits is, you reduce the number of possible values by half.

With information theory, we generalize this concept somewhat. In information theory one bit of information reduces the space of possibilities by 50%; in other words, each bit reduces our uncertainty by half. Say you’re like me, and you often lose your phone in your jacket pockets. If you’re wearing a coat with 2 pockets and you know the phone is in one of them, specifying the location of your phone, narrowing it down from 2 possibilities to 1, takes one bit of information. If you’re wearing a coat with 4 pockets, you now need 2 bits of information: 1 bit to tell you whether it’s on the right or the left, and another bit to tell you whether it’s an upper or lower pocket. The first bit cuts the possibilities in half, leaving you with two possibilities, and the second bit cuts it in half again. If your jacket has 8 pockets, now you need 3 bits to specify its location, and so on. The more places that something could be, the more information it takes to specify its location.

Information theory is particularly useful for quantifying how much information we get from some particular outcome. Say someone flips a fair coin; how much information do I get when they reveal whether it was heads or tails? Well, before they reveal it, I knew it could be one of two options, heads or tails. Revealing it narrows the number of possibilities from two down to one. We’ve cut the number of possibilities in half, and thus gained 1 bit of information. More generally, the information provided by some outcome is equal to -log2(the probability of that outcome). So revealing how a fair coin was flipped gives us -log2(0.5) = 1 bit. If we’re dealt a single card from a deck face down, when we reveal that card we’ve reduced the number of possible cards from 52 down to 1, and gained -log2(1/52) = 5.7 bits of information.
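Both of those values fall straight out of the −log2 formula; a quick check:

```python
from math import log2

def info_bits(p):
    """Information (in bits) gained from observing an outcome of probability p."""
    return -log2(p)

print(info_bits(1 / 2))   # 1.0 bit: a fair coin flip revealed
print(info_bits(1 / 52))  # ~5.7 bits: one card revealed from a full deck
```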

For a repetitive process, we also want to know a related quantity: entropy. Entropy is determined by calculating the information received from each possible outcome, multiplying it by the probability of that outcome, then summing all those values together. It’s the expected quantity of information you’ll get by taking some particular action.

Say I’ve lost my phone in my jacket with eight pockets, and am looking for it by randomly trying pockets until I find it. A random guess has a 1/8 chance of successfully finding the phone, and a 7/8 chance of coming up empty. Guessing correctly will yield me -log2(1/8) = 3 bits of information, as expected: once I guess correctly, I know the phone’s location. But an incorrect guess will yield me only 0.19 bits of information: I already knew most of the pockets don’t have the phone, so failing to find the phone in one pocket doesn’t tell me much that I didn’t already know. The entropy of a guess is -[(1/8) * log2(1/8) + (7/8) * log2(7/8)] = 0.54 bits. When I first check a pocket, I can expect to get a little more than half a bit of information. (If I rule out pockets that I’ve already checked, the expected amount of information I get will rise each time, though if you’re like me you might have to check the same pocket a few times before you find the phone.)

Because each bit of information we get cuts the number of possibilities remaining by 50%, it doesn’t take that much information to narrow down an enormously large search space. The 2^852 possible circuits that can be created by wiring up our 68 NAND gates require only 852 bits — 852 times cutting the number of possibilities in half — to specify. (That’s approximately the same number of bits it takes to specify all the letters of this sentence.)

A key aspect of entropy is that we maximize how much information we get when each outcome is equally plausible. So the entropy of a fair coin, with a 50% chance of coming up heads, is 1 bit. But if the coin has a 90% chance of coming up heads, the entropy is now just 0.47 bits. If the coin has a 99% chance of coming up heads, the entropy falls to 0.08 bits. When one outcome is very likely, you learn much less on each attempt, because you mostly get the outcome you already knew was likely. This is why when playing the game “20 questions,” the most efficient strategy is to ask questions whose answer divides the number of possibilities in half. “Is it bigger than a breadbox?” is a good starting question because there are probably roughly similar numbers of items that are bigger and smaller than a breadbox. “Is it a 1997 Nissan Sentra?” is a bad starting question because most possibilities are not a 1997 Nissan Sentra, so we learn very little when the answer is “no.”
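All of the entropy figures in this section can be verified in a few lines. (This is just the standard Shannon entropy formula, nothing specific to Arthur’s simulation.)

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over all outcomes."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # 1.0    (fair coin)
print(entropy([0.9, 0.1]))    # ≈0.469 (90% biased coin)
print(entropy([0.99, 0.01]))  # ≈0.081 (99% biased coin)
print(entropy([1/8, 7/8]))    # ≈0.544 (one random guess at 8 pockets)
```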

We can think of our 68 NAND gate search as flipping a series of very biased coins, each one with a ~ 1/(several thousand) probability of coming up heads (where “heads” is “guessing the right wiring combination for that particular NAND gate”). The entropy of this process — the expected amount of information that we get — is very low, around 0.003 bits per attempt. Each attempt we learn very little about the correct wiring diagram (“it wasn’t this arrangement, it wasn’t this arrangement either, or this one”) so we need a lot of attempts — around 453,000, on average — to accumulate the 852 bits needed to specify the correct wiring for our 8-bit adder.3
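We can check this bookkeeping ourselves. Treating every attempt as a coin flip with one fixed success probability is a simplification (in the real search, early gates have fewer possible wirings than later ones, so the per-attempt entropy varies), but back-solving a constant p from 453,000 iterations over 68 gates gets us in the right ballpark:

```python
from math import log2

# assumed constant per-attempt success probability, back-solved from
# ~453,000 iterations / 68 gates ≈ 6,660 attempts per gate (a simplification)
p = 1 / 6660

# entropy (expected information) per attempt, in bits
h = -(p * log2(p) + (1 - p) * log2(1 - p))

print(h)            # on the order of 0.002 bits per attempt
print(453_000 * h)  # a bit over 900 bits across the whole search
```

The total lands somewhat above the 852 bits strictly needed, consistent with the “information overhead” of a sequential search described in footnote 3.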

Trying to guess two gates at a time is like biasing the coin even further: now each one has a ~1/(several million) probability of coming up heads. We thus get vastly less information per attempt — less than 0.000001 bits per attempt on average — so it takes us many, many more attempts to accumulate the information needed.

A useful way of thinking about our 68 NAND gate search is that it’s like a huge, branching tree. At every step — every time we add a gate — there are thousands of branches, each one representing one possible way to wire up the gate. Each branch then splits into thousands more (representing all the possible ways of wiring up the next gate), which split into thousands more, which split into thousands more, until at the end we have 2^852 possible “leaves,” each one representing a unique way of wiring up all 68 gates. Trying to get all 68 gates right at once, and then checking to see whether or not you did, is like examining one single leaf, one path from the base of the tree all the way to the tip of a branch. Not only are you overwhelmingly likely to guess wrong, but you haven’t narrowed down your possibilities at all: all you know is that one single leaf wasn’t the right answer, leaving you with the rest of the 2^852 possible leaves to sift through.

Checking to see whether each gate we add is correct before we proceed to the next one, by contrast, massively narrows down the number of possibilities. Whenever we determine a gate isn’t in the right spot, it eliminates every possibility that branches off from that point. If there are 1000 possible ways to wire each gate, each time we guess correctly we’ve narrowed down the possibilities downstream of that choicepoint by 99.9%. Huge swaths of possibilities get eliminated at each correct guess, letting us converge on the correct answer much more quickly.

The same basic logic applies to Arthur’s simulation. (In fact, in another publication, Arthur uses a very similar metaphor, describing technological search as trying to find a working path up a mountain, which is full of various obstacles.) Building up complex functions without the aid of intermediate, simpler ones is like trying to find a single leaf on a tree the size of the universe. Building up to complex circuits gradually, using simpler components as building blocks, lets you screen off huge branches of the tree at once. Once you have a working 2-bit adder, every branch that has a non-working 2-bit adder in it gets screened off. Your iterations yield massively more information, and the search problem becomes tractable.

Conclusion

The logic of Arthur’s simulation, and our simpler simulation, also applies to creating new technologies more generally. Logic circuits are a useful model to explore, because they’re real technology that is very amenable to simulation (they have a well-defined, simple behavior), but technology in general can be thought of as a combination of simpler components or elements arranged in various ways to create more complex ones. As Arthur notes:

…in 1912 the amplifier circuit was constructed from the already existing triode vacuum tube in combination with other existing circuit components. The amplifier in turn made possible the oscillator (which could generate pure sine waves), and these with other components made possible the heterodyne mixer (which could shift signals’ frequencies). These two components in combination with other standard ones went on to make possible continuous-wave radio transmitters and receivers. And these in conjunction with still other elements made possible radio broadcasting. In its collective sense, technology forms a network of elements in which novel elements are continually constructed from existing ones. Over time, this set bootstraps itself by combining simple elements to construct more complicated ones and by using few building-block elements to create many.

One takeaway from this paper, as Arthur notes and we explored more deeply, is that a hierarchical arrangement of components, where a complex technology is made of simpler components, which are in turn made from even simpler components, makes it much easier to create some new technology. But a more general takeaway is that successfully creating some new technology means getting new information as quickly as possible. Working from (or towards) a hierarchical, modular design for some technology, where each element has some specific job it must do, makes it easier to find new technologies in part because you learn vastly more from each attempt at building one of those subparts. Knowing whether some entire complex function works or not tells you much less than knowing which individual component is working right, and what specific functionality needs to be corrected to fix the problem.

1

In addition to Brian Arthur, some other folks who I think have done really good work on this are Bernard Carlson, Clay Christensen, Joel Mokyr, Hugh Aitken, Edward Constant, and various folks associated with the Trancik Lab. There’s also a few folks, such as Joan Bromberg and Lillian Hoddeson, who have produced multiple very good technological histories that I return to often.

2

Indeed, we find that if we just randomly combine dozens of NAND gates, we get a random truth table almost every time, and never solve even medium-complex functions with a few inputs and outputs.

3

Adding things up, you find that the search actually yields over 900 bits, rather than 852 bits. This is due to the information overhead of a sequential search: you end up getting “extra” information that you don’t need. In our 8-pocket jacket search, if we just guess randomly it will take us on average 8 tries to find the phone. 8 attempts * 0.54 bits per attempt yields 4.32 bits, more than the 3 bits we need to actually specify the phone’s location.

Relativity, Hermeus, Astrion and Divergent executives join Fortastra C-suite

SAN FRANCISCO – Los Angeles startup Fortastra has hired veterans from Relativity Space, Hermeus, Astrion and Divergent Technologies to design and operate maneuverable spacecraft for on-orbit security. Josh Jetter, former Relativity senior director of avionics engineering and manufacturing, will be Fortastra’s chief technology officer. Sahil Desai, Fortastra’s vice president of product, was Divergent Technologies’ vice president […]

The post Relativity, Hermeus, Astrion and Divergent executives join Fortastra C-suite appeared first on SpaceNews.

Optical terminals still a bottleneck in Pentagon’s proliferated constellation

GP Sandhoo: ‘From an optical communications terminal perspective, we’re not there yet on how many we need’


Pentagon awards Raytheon $45 million for GPS ground system as program future is reassessed

The ‘unpriced change order’ supports satellite launches while officials reassess long-delayed ground system


Moog Technology Successfully Steers Artemis II Launch


East Aurora, NY — Moog Inc. (NYSE: MOG.A and MOG.B), a worldwide designer, manufacturer, and systems integrator of high-performance precision motion and fluid controls and control systems, provided the critical […]


Carmel Ortiz on the evolving landscape of satellite communications

In this week’s episode of Space Minds, host Mike Gruss interviews Carmel Ortiz, senior vice president of medium-Earth-orbit constellation programs at SES. They discuss new technology to help satellites communicate […]


Phantom Space buys thermal specialist to support orbital data center push

Phantom Space has acquired satellite thermal hardware provider Thermal Management Technologies to bolster development of its planned Phantom Cloud orbital data center constellation.


China’s commercial Tianlong-3 rocket fails on debut launch


The first launch of the Tianlong-3 rocket from Chinese commercial firm Space Pioneer failed Friday after suffering an anomaly in its ascent phase.


Artemis 2 heads to the moon


NASA’s Artemis 2 mission is on its way to the moon after a successful maneuver April 2.


Swift spacecraft reorientation buys time for reboost mission


NASA modified operations of an astrophysics spacecraft in a decaying orbit to buy more time for a mission later this year that will attempt to raise its orbit.


Former Sierra Space CEO Tom Vice to lead Astrion

Huntsville-based defense contractor focuses on systems engineering and integration, and space mission assurance


Stanford remembers John Roberts (1945-2026)

Economist John Roberts, leader in organizational research, dies at 80
The Stanford professor’s work brought game theory to management practices in firms around the world.

"Donald John Roberts, the John H. Scully Professor of Economics, Strategic Management and International Business, Emeritus, died Jan. 23 after a long illness. He was 80.

"His start at Stanford GSB was carefully cultivated. When economics professor Robert Wilson began growing the economics faculty at the business school in the late 1970s, he had already recruited an impressive group of young scholars. But he needed someone to shape the intellectual direction of the program.

"Wilson believed Roberts was that person.

"At the time, Roberts was a young professor at Northwestern University’s Kellogg School of Management, already known for his teaching credentials and research in economic theory. Wilson persuaded him to join Stanford in 1980, bringing him west to help build what would become one of the most influential economics groups in academia.

“John played a central role in shaping the direction of the economics group in those years,” says Wilson, the Adams Distinguished Professor of Management, Emeritus, and winner of the Nobel Memorial Prize in Economic Sciences. “He had a remarkable ability to see where an idea could lead and to push it until the logic became clear.”

"Roberts remained at the school until his retirement in 2012. At Stanford GSB, he helped lead the doctoral program, mentored younger faculty, and played a central role in recruiting a generation of economists whose work reshaped the field. His four decades of research helped transform how economists study organizations and their management, bringing rigorous economic theory to questions about how firms function internally.

...
“Besides his scholarship, John was an institution builder who helped shape the intellectual culture of the school,” says David M. Kreps, the Adams Distinguished Professor of Management, Emeritus. “John helped create an environment where both ambitious research and professional education thrived. He was the personification of balanced excellence.” 

When trauma becomes trope


Humanitarian journalism is a moral calling to document human suffering. But in practice, it’s an ethically murky undertaking

- by Cathy Otten

Read on Aeon

The Happiness Crash of 2020

From the still-active Sam Peltzman:

I document a sudden, sharp and historically unprecedented decline in self-reported happiness in the US population. It occurred during 2020, the year of the Covid pandemic, and mainly persists through 2024. This happiness crash spread across nearly all typical demographics and geographies. The happiest groups pre-Covid (e.g., whites, high income, well-educated and politically/ideologically right-leaning) tend to show the largest happiness reductions. The glaring exception is marital status, which has consistently been an important marker for happiness. The already wide happiness premium for marriage has, if anything, become slightly wider. With both married and unmarried reporting large declines in happiness the country has become segregated: slightly over half (the married adults) remain happy on balance; the unmarried, nearly half, are now distinctly unhappy. I also show that across a number of aspects of personal and social capital post-Covid deterioration is the norm, including a collapse of belief in the fairness of others and of trust in the US Supreme Court.

Here is the paper, via the excellent Kevin Lewis.

The post The Happiness Crash of 2020 appeared first on Marginal REVOLUTION.

       


Gas Town: from Clown Show to v1.0

TL;DR: Gas Town and Beads have both released version 1.0.0 today. Enjoy!

Gas Town and Beads hit v1.0.0

It has been a wild 3-month ride since I launched Gas Town.

First there was the part where I was like nooo don’t use it, and everyone was like, hold my beer. I am so glad some of you ignored me so hard. It’s just what I’d hoped for. You early adopters helped pave the way for everyone else.

And we went through some chaotic times early on. There were the serial killer sprees, viciously taking out random workers mid-job. (It’s always the Deacon, the modern-day Butler in the Gas Town murder mysteries.) There was the 22-nose Clown Show, where the Mayor scored a new clown nose every time it had massive data loss, which went on for weeks. And more. We’ve had our share of trying times, honking alert noses, piles of worker corpses. All long past us now.

The Gas Town Serial Murders and Clown Show

Despite the early bumps, we’ve continued to enjoy absolutely massive community engagement. Even though Gas Town “only” has 13k stars (at 3 months!), it has hundreds of enthusiastic committers, and bugs get noticed and fixed fast.

It’s safe to say that Gas Town has largely been in maintenance mode since the Dolt migration finished up, and that was well over a month ago. I’ve continued to allow a few nice features here and there, but for the most part we are now directing people’s creative efforts to the successor, Gas City, which is in alpha testing and on track for a fast GA.

And maintenance mode is a good thing! It means it’s not thrashing. Gas Town “just works.” It does its job, it has tons of integration points, and it has been stable for many weeks. People are using it to build real stuff.

As one example, Gene Kim and I were chatting with a very cool midsize company, who are making a company-wide move to adopt agentic AI. A person in their Communications department, who is a Comms major four years out of school, shared with us (to our lasting astonishment) that she has been using Gas Town since “a few weeks after it came out.” She decided to build a replacement for a niche but pricey SaaS product their company has been paying for. She’s working with another non-technologist on it, and it’s so good the company is getting ready to switch over to it. Amazing!

Anyone can build software with Gas Town — and people are!

Non-technologists using Gas Town to build software! It sounds crazy but I’m seeing it all over. People in academia, non-technical knowledge workers, even just people curious about vibe coding; all are figuring it out.

So as far as I’m concerned, Gas Town is ready. That’s why I feel it merits a 1.0.0 release.

To get started, you just have your coding agent install it, and talk to the Mayor. More on that below. The Mayor is cool. You’ll like the Mayor.

Importantly, we are also rolling Beads to version 1.0.0 today. Beads is the secret sauce that makes Gas Town and Gas City both possible and best-of-class. I’ll spend some time talking about Beads before we get back to Gas Town.

Beads: The Memory Revolution

Last year I noticed that agents were struggling with basic stuff: working memory, and simple task tracking. They had zero attention span and developed progressive dementia. That led me to create Beads, which is a drop-in, generic, unopinionated memory system and knowledge graph for coding agents. Beads gives your coding agent sudden clarity and long-horizon planning capability.

Beads started life back in October as a lightweight issue tracker with version control. But it quickly became clear that it was like Adderall for your agent. It is an instant cognitive upgrade for any coding agent, even replacing the agent’s built-in memory and task-tracking systems with a system that’s more powerful, more portable, and every bit as transparent and easy to use. You don’t need to know Beads in order to use it; your agent handles it all.

Over time, it became clear that Beads was a sort of universal discovery, a gift that keeps giving. It’s way more than an issue tracker, and is evolving gradually into something more like a universal ledger for all knowledge work. One that agents happen to really, really like.

It was a high-level insight from Chris Sells, an old friend of mine and (with Julian Knutsen) the co-creator of Gas City, that helped crystallize for me why Beads seems to solve so many problems at once: Beads is the Why.

Beads is the Missing Why

In Beads, every work item becomes a bead. A bead is just a structure with some fields: a lightweight bug/issue report, with a title, description, status, etc. Beads are stored and versioned in Git, linked together as a multi-graph, and they are queryable with SQL like a database. Best of all worlds.
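Purely as an illustration (the field names here are hypothetical, invented for this sketch, and not Beads’ actual schema), a bead-like record might look something like:

```python
from dataclasses import dataclass, field

@dataclass
class Bead:
    """Hypothetical sketch of a bead-like work item; NOT the real Beads schema."""
    id: str
    title: str
    description: str = ""
    status: str = "open"  # e.g. open / in_progress / closed
    depends_on: list[str] = field(default_factory=list)  # edges in the graph

bug = Bead(id="bd-001", title="Fix flaky sync test")
bug.status = "in_progress"
```

The point is just that each item is a small, structured record that can be linked to others and versioned over time.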

Versioning your Beads is critical: you get a complete historical log of every change to any Bead, trivial to query. So in multi-agent environments, everyone using Beads can tell what everyone else using Beads did, and why.

Your project’s Git commit history has always been your permanent ledger that contains the What, Where, Who, and How of what happened to your code. But as Chris Sells astutely observed a few months back, Beads is the Why — the missing piece in your commit history. It completes the data-warehousing picture of your project needed (by agents) for forensics, recovery, onboarding, design, and more. Having this information handy is invaluable for agents when they are trying to reconstruct how we got where we are.

Beads: The Missing Why for your projects

Individual beads capture and record all your work on the Git ledger, through its entire lifecycle, from planning/design, through implementation, and then they form the audit trail after the work is closed. This isn’t limited to development work, either; you can use it for anything. Someone once told me they use Beads for their grocery shopping (which they do with an agent).

A key insight was that you can use Beads for defining and tracking orchestration work, which is how Gas Town and Gas City operate. Beads string together into “molecules” that have deterministic steps to follow, for patrols, releases, etc. Every step an agent takes in a Beads-based workflow is recorded on a ledger. This acts like a save-game that you can roll back to, or at least use to see how you got where you are.

Beads is for Literally Everyone, and Everything

Beads is completely unaware of Gas Town (though Gas Town uses Beads as a dependency). You can use Beads by itself and get a vastly improved agentic experience, no matter which coding agent you’re using. Unlike Gas Town, which only works with a handful of agents today, Beads works with anything and everything, as long as it’s roughly as smart as Claude Sonnet 3.5 was.

People who switch to Beads soon realize they can build their own workflows and orchestration using nothing but Beads. It’s an incredibly powerful and versatile data plane. Once you start storing stuff in Beads, you kind of want everything there. It doesn’t solve the memory problem by itself, but it certainly gives you a solid foundation for solving it your own way.

Beads crossed 20k stars on GitHub this week, a bit over 5 months since launch. I have not been much of a GitHub user for most of my career, so I didn’t appreciate how unusual 20k stars is until this week. Chris, Julian and I all guessed that roughly 10k-20k repos would be that popular. But we were way off. You can browse them all with a query. There are currently 1988 with more than 20k stars.

So Beads is already in roughly the top 2000 GitHub repos, out of some 300 million. That’s pretty rarified air. But it makes sense. I mean, it just works. It’s soooo easy. You start using Beads, everything becomes a bead, and life with agents just starts getting easier.

Beads enters the stratosphere with 20k stars

Beads: Ready for v1

I held out on a v1.0.0 release for Beads for months because I had a feeling it would become clear when it’s ready for prime time. With the recent completion of Beads with Embedded Dolt by the amazing Dolt team, Beads is finally back to its Day Zero experience. We have managed to land on an architecture and implementation that serve all the key audiences:

  • Solo, single-player users with just a coding agent or a chat session. Great experience out of the box, simple setup, syncs to GitHub automatically.
  • Multi-agent power users who might be working on multiple projects or workflows.
  • Gas Town users doing high-velocity orchestration on heavy-duty project work.
  • Gas City users doing multiple projects and enterprise-level orchestration setups.
  • Wasteland and other federation users, who want the power of Dolt, Git, and Beads as a work federation protocol.
  • Anyone building their own orchestrator.

Many of these audiences were poorly served by the original, janky Beads architecture of SQLite + JSONL + awkward syncing and tons of merge conflicts. When it did work, it was often with last-write-wins semantics, SQLite just taking “whatever” happened to win. Not exactly enterprise-grade building material.

All that jank is gone. Torn out of the code base entirely. Beads is now backed with the power of Dolt, which itself sports an impressive 22k GH stars. The inherent fragility of the v0.x Beads architecture, with its bidirectional sync, 3-way merge, two sources of truth, race conditions, and tombstone hell — that’s all gone now. Dolt was designed to handle all this stuff gracefully. We got incredibly lucky that it exists at all.
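To make the contrast concrete, here is a toy sketch (mine, not actual Beads or Dolt code) of why last-write-wins loses work when two agents touch the same record, while a cell-level three-way merge, the kind of semantics a versioned database like Dolt provides, keeps both edits:

```python
def last_write_wins(base, a, b):
    """The old model: whichever writer syncs last clobbers the whole record."""
    return b  # a's changes are silently lost


def field_merge(base, a, b):
    """Cell-level three-way merge: keep each side's change to a field,
    and conflict only when both sides changed the same field."""
    merged = dict(base)
    for key in set(base) | set(a) | set(b):
        av, bv = a.get(key), b.get(key)
        if av == bv:
            merged[key] = av
        elif av == base.get(key):
            merged[key] = bv      # only b changed this field
        elif bv == base.get(key):
            merged[key] = av      # only a changed this field
        else:
            raise ValueError(f"conflict on {key!r}")
    return merged


base = {"status": "open", "assignee": "alice"}
a = {"status": "closed", "assignee": "alice"}   # agent A closes the issue
b = {"status": "open", "assignee": "bob"}       # agent B reassigns it

print(last_write_wins(base, a, b))  # {'status': 'open', 'assignee': 'bob'} (A's close is lost)
print(field_merge(base, a, b))      # {'status': 'closed', 'assignee': 'bob'} (both survive)
```

With last-write-wins, agent A's work simply evaporates; the merge-aware version surfaces a conflict only when two writers genuinely disagree about the same cell.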

Now that Beads is stable on Dolt, with both embedded and server-mode fully supported, v1.0.0 is the right call. I’ve moved the Beads repo into the gastownhall org (soon to be gascityhall), where we will continue to support Beads as a first-class standalone product for non-GT/GC users.

Dolt: Migration complete!

Gas Town: It’s That Dang Mayor

I want to chat a little more about what we’ve learned about Gas Town before we wrap.

One of the reasons that people like Gas Town is that they don’t have to read as much, or even pay attention as closely. It’s more like DM’ing with a friend.

Claude Code makes you read. A lot. It doesn’t matter if you don’t like reading, or if this isn’t your native tongue, or if you’re busy, or tired. With coding agents, you’re gonna do some reading. Read read read read. It’s like a stevey post gone wild, on a rampage, in every session. But make sure you don’t miss anything important!

I read just fine, and even I didn’t like doing all that reading. Most of it clearly could have been read by a model, saving me the trouble. I wanted something else, some other interface, but wasn’t sure what. I just didn’t want to have to read so much unnecessary cruft.

I spent a bunch of time building orchestrators last year, trying to get Claude to run Claude. At first, I was trying to achieve the elusive “visibility without reading” by chasing classic Observability. I was initially thinking that I wanted dashboards or activity feeds or some other visualization of my town’s workers. And some people still do like those, and they can be handy.

But after a while I realized I just wanted someone to talk to, while the system was working. And perhaps, as occasion might demand, someone to yell at.

The Mayor abstraction turned out to be perfect. Mayors are there to get yelled at. A Mayor isn’t so distant, like some higher-level governor or executive, to whom yelling seems like it will go unheard. A city mayor is ostensibly someone who has your local interests at heart, so the mayor is who you yell at first. It’s a social custom going back centuries. As one famous and rather wise U.S. mayor put it a week ago, if your constituents aren’t yelling at you, it’s because they aren’t around at all, and you don’t want that.

Programming in 2026 will become talking to a face

With the Gas Town Mayor, you feel like you’re operating at a special level, a VIP, above all the workers. You are talking to someone important: the mayor of a factory the size of a town. You have access to someone with resources, someone who gets you, someone who appreciates how busy you are.

Working with regular coding agents just doesn’t give you that special feeling. I’m not making this up; this is a pretty consistent report I get from the field, from people around the world, particularly nontechnical people. I truly think it comes down to the Mayor giving you less stuff to read.

Claude Code only has one way to tell you what’s going on, which is to tell you what’s going on. It babbles while it works. “Now I will run this awk script s@(*fj$&h(*!&. Now I will print 8 pages of recaps. Now I’m deleting your database. Now I’m printing more recaps, and running another script here is the code #$AWESR@#$.”

Claude Code is a wall of scrolling text. The harder it works, the scrollier it gets. Now imagine having 10 standard coding agents running. Any agent, could be Codex, whatever. Ten of them puking out text. And you have to sort through all their output to find the nuggets of actually interesting stuff you need to know about, like the part where they’re deleting the database.

This is why people love Gas Town. The Mayor reads all that crap that the workers are printing. The Mayor knows your context, your hopes and dreams. The Mayor has an army of polecats it can whip up when it needs to. The Mayor has all these cool-sounding resources, like the Crew and Convoys and Dog patrols, that it can bring to bear on your problem. Just say the word, the Mayor’s on it for you.

Claude Code and some other agents are trying to turn themselves into dark factories, by running subagents, and providing their own task management, memory systems, etc. But so far, they’re all trying to do it with a product lens, no platform to speak of — a monolith. I’ve read some interesting blog posts about that approach, but safe to say I’m not a fan.

Gas Town at least lets you talk to your agents as first class citizens, with externally visible identities; Gas City takes it further and decomposes the entire stack into a modular platform architecture.

In short, what’s behind the Mayor also matters, and as soon as people start getting curious, today’s coding agents immediately disappoint. And believe you me, people are getting curious. And they’re finding their way to Gas Town.

The Mayor does your reading for you, so you can supervise

Ultimately the Mayor is doing way more than just saving you a bunch of reading. It is your personal concierge. If Claude Code is an Executive Assistant, then the Gas Town Mayor is more like your Chief of Staff, who manages a full team of capable EAs, all working for you behind the scenes.

I’ve been saying since last year that by the end of 2026, people will be mostly programming by talking to a face. There’s absolutely NO reason to type with the Mayor. You should be able to chat with them like a person. You’ll have a cartoon fox there onscreen, in costume, building and managing your production software, and showing you pretty status updates whenever you ask for them. This is the end state for IDEs.

On to Gas City

As I mentioned in my long-overdue Vibe Maintainer post, we’re going to start gently nudging everyone towards Gas City. You literally just install it, import your Gas Town configuration, and then you’re using that instead of Gas Town. It’s functionally identical, when used as a dev IDE.

Except with Gas City, you can now build your own orchestrators using all the Gas Town primitives: identity, roles, messaging, mail, sessions, cost tracking, multi-model dispatch, skills, prompting and priming, hooks, GUPP, NDI, formulas, molecules, beads, epics, convoys, orders, patrols, plugins, tmux, seances, and more more more. It’s all there. You can mix and match to create arbitrarily simple or fancy orchestrators, with all their work logged to a beautiful set of git ledgers.

There will be nothing like it. You are going to want to use Gas City. We will have some imitators, but I’m not worried. Ask your agent to dig into Dolt federation and have a look at our Wasteland, and you’ll quickly see why this is a superior way to do work.

I can’t begin to express my excitement about Gas City. It is all MIT-licensed, supported by a growing team of enthusiasts, and it is already starting to have legit hosting options for people who want to build orchestration in the cloud. I will have a detailed post about it when we get closer to GA, when it’s late beta and ready for wider adoption.

But no need to wait. At a high level, Gas City is the answer to all your problems. Ha! At least, for certain classes of problem, such as, “How can I bring AI into my company and still pass an audit,” “How can I rid myself of gougy niche SaaS by in-sourcing it all to AI,” and similar. I know you’re all thinking about it!

Stay tuned. I have another blog post hot on this one’s heels. I’m giving some talks next week, one in NYC and one in San Jose, and I figured, why not just spill all my talk secrets in a public blog so that I have nothing interesting or new to add during my talks!

Anyway, that’s a wrap! Congrats to the Beads community for riding the wave to the 1.0 release and 20k stars, and for finally getting a solid embedded-Dolt experience. Thanks to the Dolt team and Dustin Brown for that! And congrats to everyone who has used Gas Town to do something cool. I couldn’t be happier!

Finally, a huge thank you to the core team who have all worked incredibly hard to bring you Gas Town, Gas City, the Wasteland, and much more to come. From left to right, skipping me the panda, there’s Matt Beane, Chris Sells, Julian Knutsen, Tim Sehn, and Brendan Hopper. We’ve got so much more in store for this ecosystem. Come join our Discord at gastownhall.ai!

Gas Town Ecosystem Generals: Matt, Chris, Steve, Julian, Tim, Brendan

April 2, 2026. Security Squander?

I guess, for the sake of temporary reprieve, we’re glad for the push to get TSA workers paid again and back to work. The security lines were becoming abysmal.

Count me among those a little disappointed, however. Ultimately, what we need isn’t to keep TSA funded, but quite the opposite. We need to dismantle the entire thing and start over. As I’ve been opining for years, our approach to airport security is, for the most part, a colossal waste of time and money. It needs to be rethought and rebuilt.

We may have lost a moment.

 

The post April 2, 2026. Security Squander? appeared first on AskThePilot.com.

New Music Is Slowly Dying

New music is slowly dying.

  • The major record labels have abandoned it—investing in old songs, not new artists.

  • Streaming platforms are even worse, promoting AI slop and algorithmic crud.

  • Meanwhile the whole technocracy wants to turn music-making into digital content farming—and will spend a trillion dollars to make that happen.

Each year, fewer and fewer new songs reach the charts. Every genre gets turned into a museum, where antiquated works predominate. And fans can’t even remember the name of the artist they heard online—because they never learned it in the first place.

Below are the latest numbers, courtesy of Chartmetric, showing the decline of new music as a percentage of streaming hits. The trend is ominous.


If you want to support my work, please take out a premium subscription (just $6 per month).



The decline in 2025 was ugly—the collapse of new music accelerated during those 12 months. But now it looks like 2026 will be even worse.

Source: Chartmetric

How bad is it?

“Instead of going to music school, you are advised to find a wealthy spouse—that’s how you will prepare for a music career in the future.”

Here’s one measure. Do you remember when radio stations played top 40 hits? In 2026, you would struggle to find 40 new songs that qualify as hits.


Barents Sea Tied to Low Arctic Sea Ice

Thin, broken-up sea ice and areas of open water dominate the northern Barents Sea in this image acquired on March 17, 2026, by the MODIS (Moderate Resolution Imaging Spectroradiometer) on NASA’s Terra satellite.

At the top of the planet, the cap of sea ice across Arctic waters grows and shrinks with the seasons, usually reaching its annual maximum extent in March. In 2026, this peak occurred on March 15, when the extent reached 14.29 million square kilometers, matching the lowest maximum observed since satellite monitoring began in 1979. One of the key areas contributing to the low maximum this year was the Barents Sea.

The Barents Sea lies at the periphery of the Arctic Ocean, bordered to the northwest by the Norwegian archipelago of Svalbard, and to the northeast and east by the Russian islands of Franz Josef Land and Novaya Zemlya, respectively. It is one of more than a dozen subregions—including the Central Arctic Ocean and nearby seas, bays, and waterways—across which scientists use remote sensing to track sea ice. The region is important for fisheries, shipping routes, and scientific research.

On March 17, 2026, the Terra satellite captured this image of the northern Barents Sea. Near Franz Josef Land, broken sea ice drifted near areas of open water closer to Novaya Zemlya. The region is often cloudy, as it was that day, but most clouds were thin enough to reveal the sea ice and water below.

In addition to the low extent, data from NASA’s ICESat-2 satellite indicate that Barents sea ice in mid-March 2026 was also very thin, according to Nathan Kurtz, chief of the Cryospheric Sciences Laboratory at NASA’s Goddard Space Flight Center.

Previous years, such as 2021 and 2025, also saw especially thin ice around the time of the maximum. “What was striking this year, however, was that the ice was also completely melted away in more of the Barents Sea, in addition to areas of thinning spreading northward,” Kurtz said.

On the opposite side of the Arctic, the Sea of Okhotsk also contributed to the low total sea ice extent across the Arctic in March 2026. But the factors driving the losses differ between the two regions.

In the Barents, studies have shown that the main driver is large-scale atmospheric circulation, with winds channeling warm, humid air from the North Atlantic straight into the area, accelerating melt. These winds can be influenced by tropical weather thousands of miles away. Disturbances originating over the Maritime Continent near Indonesia can “send ripples through the atmosphere that reach the Arctic within one to two weeks,” Kurtz said.

In contrast, the Sea of Okhotsk mostly has thin, seasonal ice that changes thickness from year to year. Local winds play a big role, sometimes pushing the ice together to create thicker, ridged areas, and other times spreading it out, making it thinner. Because of this, the ice loss there is mainly driven by local weather, unlike in the Barents Sea, where distant atmospheric forces have a greater impact.

NASA Earth Observatory image by Michala Garrison, using MODIS data from NASA EOSDIS LANCE and GIBS/Worldview. Story by Kathryn Hansen.


The post Barents Sea Tied to Low Arctic Sea Ice appeared first on NASA Science.

What I’ve been reading

1. Allister Sparks, The Mind of South Africa: The Story of the Rise and Fall of Apartheid.  This history book actually tries to explain to the reader how things were.  Oh such books are so rare!  (Why is that?)  Definitely recommended, written at the very end of the apartheid era which gives it yet another angle of interest.

2. Nic von Wielligh and Lydia von Wielligh-Steyn, The Bomb: South Africa’s Nuclear Weapons Programme.  I had been looking for a book on this topic for a long time, and finally I found the right one in a South African bookshop.  They did build six atomic bombs, almost seven, and this is the story of how that started and was later reversed.  Hundreds of pages of substantive detail, and I had not realized how much the conflict in Angola, and Cuban/Soviet involvement, was a major factor in the whole episode.

3. David Stuart, The Four Heavens: A New History of the Ancient Maya.  We keep on learning lots about the Maya, and this is the best book to follow what has been going on.  Well-written and clear, and it does not numb your mind with details you may not care about.

4. Mark B. Smith, Exit Stalin: The Soviet Union as a Civilization 1953-1991.  I am seeing an increasing number of excellent books on what the Soviet Union really was.  This one is well written, broad in scope, and yet rich in detail, treating the covered era as a living, breathing time in human history.  It makes the time and place imaginable.  The book also goes a long way toward disaggregating different Soviet eras, rather than just the end of Stalinism.

5. Kevin Hartnett, The Proof is in the Code: How a Truth Machine is Transforming Math and AI.  A very useful book about the history of proving math theorems by computer.

The post What I’ve been reading appeared first on Marginal REVOLUTION.

       


Beyond the Lowest Bid: Identifying a Printer That Can Scale with Your Campaign

In the world of politics, time isn’t just money – it’s momentum, and momentum is everything. Election cycles are notoriously volatile, moving from a quiet stroll to a full-blown sprint in the blink of an eye. For a campaign manager, the pressure to stay visible while responding to a rapidly changing landscape is a constant weight. You need a team behind you that understands that a delay of even a few days can feel like a lifetime when the polls are about to open.

Most local print shops are great for a small business that needs a few hundred business cards or a single banner for a grand opening. However, those same shops often crumble when they are hit with an order for ten thousand yard signs on a Tuesday afternoon. Political work requires a level of intensity and a specific understanding of deadlines that your average commercial printer simply isn’t built to handle. You aren’t just looking for a vendor; you’re looking for a logistical partner who can survive the storm with you.

Choosing the wrong partner can lead to empty street corners and missed opportunities just when the race is heating up. It’s about finding a facility that has the horsepower to keep up with your growth and the flexibility to pivot when the strategy changes. Knowing how to choose a political printer that can handle the unique demands of a campaign is essential to protecting your candidate’s success.

Analyzing Throughput and Production Capacity

When you’re vetting a potential printer, the first thing you need to look at is their actual “throughput capacity.” This isn’t just about how many machines they have on the floor, but how fast those machines can actually turn a digital file into a finished product. In a tight race, you might need thousands of signs printed, dried, and ready for pickup within a forty-eight-hour window. If a shop can’t guarantee that kind of speed, they are a liability to your field operation.

High-volume printing requires specialized equipment that can run around the clock without breaking down or losing quality. You want a shop that has invested in industrial-grade presses designed for speed and consistency across every single unit. Ask them about their peak capacity and how they handle multiple large orders simultaneously during the busy season. A printer that is already at eighty percent capacity before you even place your order is a recipe for disaster.

Efficiency in the back-end logistics is just as important as the speed of the press itself. You need to know that they have the staff to handle the trimming, the grommeting, and the packaging without creating a bottleneck in the warehouse. A fast printer with a slow finishing department is still a slow printer at the end of the day. Verify that their entire workflow is optimized for the kind of rapid-fire production that political campaigns demand.

The Importance of the Union Label

For many political organizations and candidates, the presence of a “Union Label”—often called the Union Bug—is a non-negotiable requirement for all printed materials. This small mark signals to voters and labor groups that the campaign supports fair wages and professional working conditions. In a competitive primary or a general election, failing to include this label on your signs can lead to significant blowback from key stakeholders. It’s a small detail that carries a massive amount of political weight.

The Union Bug acts as a symbol of solidarity and a commitment to the local workforce that resonates deeply with many voter demographics. It shows that you aren’t just looking for the cheapest possible option, but that you value the people who are actually building your campaign materials. For some organizations, the absence of this mark is enough to withhold an endorsement or a donation. It is a vital part of your brand’s “street cred” and its overall standing in the community.

Not every print shop is authorized to use these labels, so you must verify this capability long before you sign a contract. A shop that claims they can “just add it” without a legitimate union agreement is putting your campaign at serious legal and reputational risk. Make sure you’re working with a shop that is fully certified and understands the specific placement rules for these marks. The Union Bug is an essential tool for building trust with a large and influential part of the electorate.

Prioritizing Reliability Over the Initial Low Bid

When you’re managing a tight budget, it is incredibly tempting to just go with the shop that provides the lowest initial bid. However, in the world of political printing, a lower price often comes with a hidden cost in terms of reliability and speed. If a sign is five cents cheaper but arrives three days late, it’s actually much more expensive for the campaign. Saving a few dollars isn’t worth the risk of being invisible during a critical voting window.

The real value of a printing partnership is found in the “total cost of success,” which includes the peace of mind that the job will be done right. You want to pay for a team that knows your history, understands your branding, and is committed to your candidate’s victory. This relationship allows for a much more efficient workflow where errors are minimized and the quality remains high across every single piece. A trusted partner is an investment in the overall health and momentum of the race.

Ultimately, the goal is to build a foundation that allows the candidate to focus on the voters rather than the logistics of the signs. By choosing a printer based on capacity, certification, and reliability, you’re setting the stage for a much smoother and more effective campaign. A few extra cents per sign is a small price to pay for the confidence that your message will be on the street when it matters most. Success is built on the quality of the people you choose to have in your corner.

Photo: bearfotos via Freepik.



The post Beyond the Lowest Bid: Identifying a Printer That Can Scale with Your Campaign appeared first on DCReport.org.

Why is NASA bothering to go back to the Moon if we've already been there?

KENNEDY SPACE CENTER, Fla.—The first time NASA launched humans toward the Moon, in December 1968, the United States was a deeply fractured nation.

The historic flight of three people into the unknown brought a measure of solace to a country riven by assassinations, riots, political discord, and a deeply unpopular foreign war.

If history does not repeat itself, it certainly rhymes. Today, four humans are on the way to the Moon: Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen. They do so, once again, amid a troubled world.


US Bans All Foreign-Made Consumer Routers

This is for new routers; you don’t have to throw away your existing ones:

The Executive Branch determination noted that foreign-produced routers (1) introduce “a supply chain vulnerability that could disrupt the U.S. economy, critical infrastructure, and national defense” and (2) pose “a severe cybersecurity risk that could be leveraged to immediately and severely disrupt U.S. critical infrastructure and directly harm U.S. persons.”

More information:

Any new router made outside the US will now need to be approved by the FCC before it can be imported, marketed, or sold in the country.

In order to get that approval, companies manufacturing routers outside the US must apply for conditional approval in a process that will require the disclosure of the firm’s foreign investors or influence, as well as a plan to bring the manufacturing of the routers to the US.

Certain routers may be exempted from the list if they are deemed acceptable by the Department of Defense or the Department of Homeland Security, the FCC said. Neither agency has yet added any specific routers to its list of equipment exceptions.

[…]

Popular brands of router in the US include Netgear, a US company, which manufactures all of its products abroad.

One exception to the general absence of US-made routers is the newer Starlink WiFi router. Starlink is part of Elon Musk’s company SpaceX.

Presumably US companies will start making home routers, if they think this policy is stable enough to plan around. But they will be more expensive than routers made in China or Taiwan. Security is never free, but policy determines who pays for it.

Possible US Government iPhone Hacking Tool Leaked

Wired writes (alternate source):

Security researchers at Google on Tuesday released a report describing what they’re calling “Coruna,” a highly sophisticated iPhone hacking toolkit that includes five complete hacking techniques capable of bypassing all the defenses of an iPhone to silently install malware on a device when it visits a website containing the exploitation code. In total, Coruna takes advantage of 23 distinct vulnerabilities in iOS, a rare collection of hacking components that suggests it was created by a well-resourced, likely state-sponsored group of hackers.

[…]

Coruna’s code also appears to have been originally written by English-speaking coders, notes iVerify’s cofounder Rocky Cole. “It’s highly sophisticated, took millions of dollars to develop, and it bears the hallmarks of other modules that have been publicly attributed to the US government,” Cole tells WIRED. “This is the first example we’ve seen of very likely US government tools—based on what the code is telling us—spinning out of control and being used by both our adversaries and cybercriminal groups.”

TechCrunch reports that Coruna is definitely of US origin:

Two former employees of government contractor L3Harris told TechCrunch that Coruna was, at least in part, developed by the company’s hacking and surveillance tech division, Trenchant. The two former employees both had knowledge of the company’s iPhone hacking tools. Both spoke on condition of anonymity because they weren’t authorized to talk about their work for the company.

It’s always super interesting to see what malware looks like when it’s created through a professional software development process. And the TechCrunch article has some speculation as to how the US lost control of it. It seems that an employee of L3Harris’s surveillance tech division, Trenchant, sold it to the Russian government.

Four astronauts are now inexorably bound for the Moon

The Orion spacecraft successfully fired its main engine for 5 minutes and 50 seconds on Thursday, sending four astronauts on a free-return trajectory around the Moon. For NASA and the Artemis II crew members, this marked a point of no return for more than a week.

About three-quarters of the American population has not witnessed humans leaving low-Earth orbit in their lifetimes. The last time this occurred was 1972, with the final Apollo Moon mission.

The “translunar injection” burn of Orion’s main engine occurred about one day after the successful launch of the mission on NASA’s Space Launch System rocket from Kennedy Space Center on Wednesday. This burn was the last major firing of Orion’s main engine and sets the crew on a course to fly around the Moon on Monday, slingshot back toward Earth under lunar gravity, and splash down in the Pacific Ocean on Friday, April 10.


TPM Live: Please Explain What the Hell Congress is Doing

Airports in chaos, Senate Republicans caving to Senate Democrats, House Republicans caving to Senate Republicans, a huge bill for Iran, the sweeping, voter-suppressing SAVE Act: there’s a lot that Congress is (in theory) handling right now with (in practice) limited success. TPM reporter Emine Yücel and I will try to make sense of it all at noon. Watch here.

Always Stuck in the 1950s, Trump Courts His Own Suez

Iran said today that after the war with the U.S. and Israel concludes, it will “oversee” transit through the Strait of Hormuz. It says it will do so in some kind of common arrangement with Oman. (Oman is the country on the other side of the narrowest point of the Strait.) This was mixed with statements that this does not mean ships will be blocked. Basically Iran and Oman will try to make it a better cargo experience for everyone. The Times reports that Kazem Gharibabadi, Iran’s deputy foreign minister for legal and international affairs, says that this oversight “will naturally not mean restrictions; rather, they are intended to facilitate and ensure safe passage and to provide better services to ships passing through this route.”

Obviously what Iran says will happen and what will happen are not necessarily the same thing. But when Iran and the President of the United States are saying essentially the same thing it starts to seem like this is what will happen. The geopolitical impact of this whole adventure starts to seem very reminiscent of the Suez Crisis of 1956. (Simply put: the UK and France got together and with a secret agreement with Israel tried to assert control over the Suez Canal. But the plan fell apart, the U.S. refused to support the scheme and the whole thing blew up in the former colonial powers’ faces. The UK and France were the past; the U.S. was the future.)

Perhaps it’s not quite what happened to the United Kingdom and France on the global stage, the way Suez cemented the secondary status of these two former Great Powers. One of the great advantages the U.S. has always had is internal wealth, vast land mass, a massive and highly educated population, relative isolation dominating a whole hemisphere. But it still looks like what no one is quite yet willing to call a massive, almost unimaginable strategic defeat. Taking Iran, which after the events of 2024 and 2025 was weaker than it had been in almost 50 years, and allowing it to emerge from a direct military confrontation with the United States as arbiter of roughly 20% of the global supply of oil and gas, simply beggars belief.

We’ve Only Got a Handful of Tickets Left for Our Austin Show

In less than a week, the TPM team is heading down to Austin to hang with our Texas readers and friends at the Observer. If you haven’t gotten your tickets yet, now is the time!

Remember, Inside members get free access to all events. And as a Prime member, you get 33% off your tickets. Forgot or didn’t receive the discount code? Just email Joe Ragazzo at joe@talkingpointsmemo.com

As a reminder, the night will begin with a conversation between TPM founder and editor-in-chief Josh Marshall and Texas Observer’s politics editor, Justin Miller. They’ll be talking the Sen. John Cornyn vs. AG Ken Paxton runoff and the Trump endorsement that wasn’t; whether James Talarico can become the first Democratic senator in Texas in more than 30 years; and the state of the redistricting wars.

Then, D.C. reporter Kate Riga and Josh will record a live episode of The Josh Marshall Podcast. After the pod, there will be an audience Q&A and then we’ll wrap up the night in the bar.

We’re excited to see you there!

Schrödinger’s Attorney General

News is breaking now that Trump has fired Pam Bondi from her job as attorney general. Some reports suggest he may replace her with EPA head Lee Zeldin.

But Fox News reports that she’s actually been out of the job for the better part of a day now:

Bondi met with Trump in the Oval Office Wednesday night ahead of his speech to the nation on the war in Iran, where she reportedly was informed of her ouster, according to two sources familiar with the meeting. 

One of those sources said that by the time Trump took his place behind the podium for the address, Bondi already lost her job and was on her way back to Florida.

Todd Blanche is now running DOJ as acting attorney general, NBC reports.

Update, 1:27 p.m.: Here’s Trump’s inevitable announcement. “We love Pam,” who will be “transitioning” to an “important new job in the private sector.” Blanche is in charge.

Thursday 2 April 1663

Up by very betimes and to my office, where all the morning till towards noon, and then by coach to Westminster Hall with Sir W. Pen, and while he went up to the House I walked in the Hall with Mr. Pierce, the surgeon, that I met there, talking about my business the other day with Holmes, whom I told my mind, and did freely tell how I do depend upon my care and diligence in my employment to bear me out against the pride of Holmes or any man else in things that are honest, and much to that purpose which I know he will make good use of. But he did advise me to take as few occasions as I can of disobliging Commanders, though this is one that every body is glad to hear that he do receive a check.

By and by the House rises and I home again with Sir W. Pen, and all the way talking of the same business, to whom I did on purpose tell him my mind freely, and let him see that it must be a wiser man than Holmes (in these very words) that shall do me any hurt while I do my duty. I to remember him of Holmes’s words against Sir J. Minnes, that he was a knave, rogue, coward, and that he will kick him and pull him by the ears, which he remembered all of them and may have occasion to do it hereafter to his owne shame to suffer them to be spoke in his presence without any reply but what I did give him, which, has caused all this feud. But I am glad of it, for I would now and then take occasion to let the world know that I will not be made a novice.

Sir W. Pen took occasion to speak about my wife’s strangeness to him and his daughter, and that believing at last that it was from his taking of Sarah to be his maid, he hath now put her away, at which I am glad.

He told me, that this day the King hath sent to the House his concurrence wholly with them against the Popish priests, Jesuits, &c., which gives great content, and I am glad of it. So home, whither my father comes and dines with us, and being willing to be merry with him I made myself so as much as I could, and so to the office, where we sat all the afternoon, and at night having done all my business I went home to my wife and father, and supped, and so to bed, my father lying with me in Ashwell’s bed in the red chamber.


Artemis II, NASA's boldest mission in generations, launches crew to the Moon

KENNEDY SPACE CENTER, Fla.—Three Americans and one Canadian launched into orbit from Florida's Space Coast on Wednesday, flying the most powerful rocket ridden by humans on the first leg of a nine-day voyage around the Moon.

Perched atop the 322-foot-tall (98-meter) Space Launch System rocket, the four astronauts lifted off from NASA's Kennedy Space Center at 6:35 pm EDT (22:35 UTC).

Four hydrogen-fueled RS-25 engines and two solid rocket boosters flashed to life to push the nearly 6 million-pound rocket from its moorings at Launch Complex 39B. The engines and boosters collectively generated 8.8 million pounds of thrust, outclassing NASA's Saturn V rocket used for Apollo lunar missions.


Links 4/2/26

Links for you. Science:

Trump Administration Orders Dismantling of the U.S. Forest Service. The headquarters is going to Utah. Every regional office is being shuttered. The research program is being destroyed.
Child vaccination rate drops sharply in Michigan under RFK Jr’s influence
London, San Francisco and Beijing achieve ‘remarkable reductions’ in air pollution
NASA’s next X-ray mission, AXIS, has been killed
Science has an Epstein problem. Women in paleontology say it’s a symptom of a deeper misogyny
menB outbreak in Kent — initial thoughts
NIH pivots away from agency-directed science. US biomedical funding behemoth says the approach will boost innovation, but some researchers worry that understudied areas of science will suffer.

Other:

Senate Democrats Should Kill the Filibuster
Finally, Democrats—of All Stripes—Are Coming After the Wealthy’s Money
“What if We Didn’t Suck?”
Team USA’s Soulless Militarism Was Their Undoing in the WBC
‘Should my son not run for president?’ GOP senator lashes out over Trump’s learning disabilities crack
Here Comes the Self-Driving Traffic Surge
A Disturbing New Low in the Polymarket Era. Maybe turning war into a casino was a bad idea?
Cesar Chavez, a Civil Rights Icon, Is Accused of Abusing Girls for Years
This Is Your Kid’s Brain on AI Slop
Absurd AI-Powered Lawsuits Are Causing Chaos in Courts, Attorneys Say, “Clogging the System” and Driving Up Costs
US Mint takes down video of meeting criticizing proposed Trump 24K gold coin
Is it rude to throw dog poop bags in someone else’s trash? In a time of zero-sum politics and slashed city budgets, the matter is getting more urgent
Following Trump, Republicans in Congress Propose to Ban Most Voting by Mail. A restrictive voter I.D. bill under consideration in the Senate could severely limit mail-in voting. Conservatives are pressing to end the practice outright, taking aim at an option that is widely used by voters.
Trump’s Warm Body: The SEAL He Picked to Beat Massie
But How Will We Pay For That
House panel gives green light to bill to eliminate DC traffic cameras
Nick Fuentes is just the beginning of the GOP’s Nazi problem
Markwayne’s World: The ‘Cinematic’ And ‘Fantastical’ Life Of Trump’s DHS Pick
Strait of Hormuz standoff puts supply of America’s generic drug prescriptions at risk
The Party of AOC or AIPAC? In Illinois, Neither Bloc Could Dominate
Mamdani Halts NYPD’s Criminal Crackdown on Cyclists, Ending Harsher Treatment of Bicyclists Than Car Drivers
AI is exhausting workers so much, researchers have dubbed the condition ‘AI brain fry’
‘He has to justify what he did’: Black leaders slam JB Pritzker after Illinois primary
Increasing supply decreases prices
Let’s Catch Up On The Hilarious Afroman Defamation Trial
Whopper of the Week: In a Deadly Flu Season, RFK Jr. Discourages Vaccination
N.Y.C. High School Student Freed After 10 Months in ICE Detention
ICE Detains DACA Recipient on His Way to See Baby in NICU
MAGA Demands Unwavering Media Support For Their Shitty, Unpopular War
Maples’ voting address under scrutiny in District 87 special election (found some election fraud!)

The Absurdity of the SAVE Act

It is pretty clear at this point that Trump really wants to pass the SAVE Act, even though Senate Republicans do not want it to pass: depending on the state, these ID-based restrictions would likely hurt Republicans. One absurdity in all of this is that a think tank reported in 2022 that there have been only 51 cases of either non-citizens trying to vote or voter identity fraud out of hundreds of millions of ballots cast, which is precisely the fraud the SAVE Act is supposed to stop.

The name of that think tank? The Heritage Foundation. Yes, the fascist-friendly think tank that is staffing the Trump Administration.

They know they are lying about the need for the SAVE Act. The question is whether any of the large media organizations will ask why we need the SAVE Act when the premier right-wing think tank says otherwise. I bet I can guess the correct answer!

📙 #084 - Art, links, pen holders, no Dark Forest and videos.

LOVE this icosahedron plotted by Dave Mawer onto four layers of tracing paper. We’ve been seeing a lot of animated plotter drawings recently, but this method of - I assume - using the z-axis to decide which layer of paper to plot onto adds another dimension.

You can see more of his work over on Instagram, go say “Hi”.

Meanwhile, LB ALLIX (IG) made this, and the sheer chunkiness of it makes me smile.

Tiny jam jars are great for adding weight to pens, but this custom 3D print is probably far more practical and allows you to adjust the weight by adding more or fewer nuts to the nut caddy.

The files are over here: https://www.thingiverse.com/thing:7313939


# VPYPE, BUT NICE

vpype correctly describes itself as “the Swiss-Army-knife command-line tool for plotter vector graphics,” and it will solve a lot of your pen plotting troubles, so it’s worth getting to grips with.

However, sometimes you just want to do the most common optimising tasks with little fuss.

Draw Scape (IG) - who has a lot of experience dealing with this kind of thing - has you covered with a web tool that handles the most common use cases.

https://drawscape.io/resources/optimize

Also handy that it shows you the commands so you can (eventually) figure out how to do it yourself.


# HEXAGON PEN HOLDERS

Those with long memories may remember a project called “Hexagones Landscape” [sic] that released on fxhash back in November 2022. It looked a bit like this…

Well, apparently a side project of mine is making pen holders. Because I always have a lot of pens kicking around, and the most efficient way of keeping them ready to go while taking up minimal space is to store them vertically. Here’s some I made to hold TWSBI Eco pens.

Now I need to do the same for my Pentel brush pens that are scattered all over the place. Here’s a v.2 that allows you to both hold pens and create your own hexagon landscape.

v.3 was fun, but not practical…

…and v.4 is about to be sent to be printed next, so we’ll see how that goes.


# DARK FOREST OS

Last time I said I’d write about DFOS, and I had a few emails from people saying they were interested in hearing more.

Sadly, I haven’t had a chance to play with it much more since last time. The plan was to use it some more, set up a few things, have some content there so it’s not a ghost town, then finally send out invite links.

That, and I’m waiting for a couple more features to be rolled out that I think really need to be there to make the whole thing a lot easier for people, before really going to town with it.

Hopefully I’ll have more of an update, and an invite link either next newsletter or the one after.


# DRAWING MACHINES 101

Two more videos since last time, I wanted these to go out at the same time as they work well back-to-back.

For a lot of people who already know a bit of code, those two are going to be all you need to get up and running writing code to make your own SVG files.

I think the second one also does a good job of showing that you don’t need much code to start to do interesting things, and the code is only half of it. With different paper, pens and even moving the machine around (sometimes while it plots) you can take some simple lines quite far.

I also uploaded the March Patreon Q&A video, which answers a bunch of questions. Even if you don’t watch the whole thing, I like the longer answer about how to get yourself into an idea creating mindset at 01:44 and the brush pen plotting at 18:52.

The whole thing is here…

I’ve finally reached the virtuous circle of creating pen plots during the making of the tutorials, which’ll form the basis of what I send out to the Patreon supporters. Getting systems and processes in place is often a hard slog, but it’s nice when it finally pays off.


# THE END

LAMY, whose popular SAFARI fountain pen I know a bunch of you use, just introduced the SAFARI Roll-Ink: a rollerball pen that takes the BLUE T10 ink cartridge from the fountain pen, but not the ink converter, and they don’t recommend any of the other ink cartridges.

So fuck that, right? I washed out the cartridge and replaced the ink with iroshizuku to-ro ink, and so far it’s worked just fine.

I’ll update if it’s still working in the next newsletter on Thursday 18th, catch you then!

Love you all
Dan
🧡

When you suffer from vagina neck ...

I do not believe in look-shaming.

I think it’s wrongheaded and gross and also exposes a jarring lack of self-awareness. Like, I’m 53. I’m balding. I have the body of a relatively fit-yet-aging man of my years. Sometimes I have to yank a hair from my nostril. Or ear. I suck in my gut quite regularly.

I know who I am, just as most readers (I assume) know who they are.

Which leads me to this post, via Donald Trump earlier today …

I honestly believe Donald Trump gazes into a mirror and sees Brad Pitt. Which is weird, because (in no particular order):

• Donald Trump has a vagina neck.

• Donald Trump spray tans his face and hands in a bright orange/pink glow.

• Donald Trump is probably 30 pounds overweight.

• Donald Trump (according to many people) smells of poop and wears an adult diaper.

• Donald Trump’s mouth droops.

• Donald Trump’s hair appears similar to that of the Wizard of Oz’s Scarecrow.

And I just wanna say—who cares? Like, truly, who cares? Some people are attractive, some people are less attractive, most people fall into the big middle pool of meh.

But it’s beyond weird that our physically unattractive president, what with his vagina neck and his fake tan and his paperclip-sized hands, outwardly ridicules a revered rock ‘n roll icon who looks like … this.

So, seriously, just shut up and focus on Iran.

That’s the country you bombed without reason or plan.

April 1, 2026

Today, for the first time in U.S. history, a sitting president attended oral arguments at the United States Supreme Court. President Donald J. Trump broke precedent to take a seat in the front row of the Supreme Court’s public seating area, alongside Attorney General Pam Bondi and Commerce Secretary Howard Lutnick, to observe arguments in Trump v. Barbara, the case through which Trump hopes to end the birthright citizenship guaranteed by the Fourteenth Amendment.

The case argued before the court today grew out of Trump’s executive order of January 20, 2025, the day he took the oath of office a second time, titled “Protecting the Meaning and Value of American Citizenship.” Fulfilling a campaign promise, the order declared that, contrary to the Fourteenth Amendment, individuals born in the United States are not citizens if their parents do not have legal permanent status.

With the help of the American Civil Liberties Union (ACLU) and other partners, three families who represented the many people endangered by this order sued the administration. Barbara, for whom the case is named, is an applicant for asylum from Honduras whose baby was due after the order was set to go into effect.

Trump has called for ending birthright citizenship since his first term as part of his appeal to his racist supporters who want to end Black and Brown equality in the United States. But his argument would overturn the central idea of the United States articulated in the Declaration of Independence, that we are all created equal.

The Fourteenth Amendment that established birthright citizenship came out of a very specific moment and addressed a specific problem. After the Civil War ended in 1865, former Confederates in the American South denied their Black neighbors basic rights. To remedy the problem, the Republican Congress passed a civil rights bill in 1866 establishing “[t]hat all persons born in the United States and not subject to any foreign power, excluding Indians, not taxed, are hereby declared to be citizens of the United States; and such citizens of every race and color…shall have the same right[s] in every State and Territory in the United States.”

But President Andrew Johnson, who was a southern Democrat elected in 1864 on a union ticket with President Abraham Lincoln, a Republican, vetoed the 1866 Civil Rights Bill. While the Republican Party organized in the 1850s to fight the idea that there should be different classes of Americans based on race, Democrats tended to support racial discrimination. In that era, not only Black Americans, but also Irish, Chinese, Mexican, and Indigenous Americans, faced discriminatory state laws.

In contrast to the Democrats, Republicans stated explicitly in their 1860 platform that they were “opposed to any change in our naturalization laws or any state legislation by which the rights of citizens hitherto accorded to immigrants from foreign lands shall be abridged or impaired; and in favor of giving a full and efficient protection to the rights of all classes of citizens, whether native or naturalized, both at home and abroad.”

When Republicans tried to enshrine civil rights into federal law in 1866, Johnson objected that the proposed law “comprehends the Chinese of the Pacific States, Indians subject to taxation, the people called Gipsies, as well as the entire race designated as blacks,” as citizens, and noted that if “all persons who are native-born already are, by virtue of the Constitution, citizens of the United States, the passage of the pending bill cannot be necessary to make them such.” And if they weren’t already citizens, he wrote, Congress should not pass a law “to make our entire colored population and all other excepted classes citizens of the United States” when eleven southern states were not represented in Congress.

When Congress wrote the Fourteenth Amendment to the Constitution, it took Johnson’s admonition to heart. It did not confer citizenship on the groups Johnson outlined; it simply acknowledged that the Constitution had already established their citizenship. The first sentence of the Fourteenth Amendment reads: “All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside.”

In the short term, Americans recognized that the Fourteenth Amendment overturned the 1857 Dred Scott v. Sandford decision, in which the Supreme Court ruled that people of African descent “are not included, and were not intended to be included, under the word ‘citizens’ in the Constitution, and can therefore claim none of the rights and privileges which that instrument provides for and secures to citizens of the United States.” The Fourteenth Amendment established that Black men were citizens.

But the question of whether the amendment recognized birthright citizenship for all immigrants quickly became an issue in the American West, where white settlers were not terribly concerned about Black Americans—there were only 4,272 Black Americans in California in 1870, while there were almost half a million white Americans—but wanted no part of allowing Chinese men to be part of American society.

Western state legislatures continued to discriminate against Asian immigrants by falling back on the country’s early naturalization laws, finalized in 1802, to exclude first Chinese immigrants and then others from citizenship. Those laws were carefully designed to clarify that Afro-Caribbeans and Africans—imported to be enslaved—would not have the same rights as Euro-Americans. Those laws permitted only “free white persons” to become citizens.

In the late nineteenth century, state and territorial legal systems kept people of color at the margins, using treaties, military actions, and territorial and state laws that limited land ownership, suffrage, and intermarriage.

As late as 1922, in the case of Takao Ozawa v. United States, the Supreme Court ruled that Takao Ozawa, born in Japan, could not become a citizen under the 1906 Naturalization Act because that law had not overridden the 1790 naturalization law limiting citizenship to “free white persons.” The court decided that “white person” meant “persons of the Caucasian Race.” “A Japanese, born in Japan, being clearly not a Caucasian, cannot be made a citizen of the United States,” it said.

The next year, the Supreme Court decision in United States v. Bhagat Singh Thind upheld the argument that only “free white persons” could become citizens. In that case, the court said that Thind, an Indian Sikh man who identified himself as Indo-European, could not become a U.S. citizen because he was not a “white person” under U.S. law, and only “free white persons” could become citizens. After the Thind decision, the United States stripped the citizenship of about fifty South Asian Americans who had already become American citizens.

Those discriminatory laws would stand until after World War II, when U.S. calculations of who could be a citizen shifted along with global alliances and Americans of all backgrounds turned out to save democracy.

But despite the longstanding use of laws designed to perpetuate human enslavement to prevent certain immigrants from becoming citizens, the Supreme Court always upheld the citizenship of their children. In 1882, during a period of racist hysteria, Congress passed the Chinese Exclusion Act agreeing that Chinese immigrants could not become citizens.

Wong Kim Ark was born around 1873, the child of Chinese parents who were merchants in San Francisco. In 1889 he traveled with his parents when they repatriated to China, where he married. He then returned to the U.S., leaving his wife behind, and was readmitted. After another trip to China in 1894, though, customs officials denied him reentry to the U.S. in 1895, claiming he was a Chinese subject because his parents were Chinese.

Wong sued, and his lawsuit was the first to climb all the way to the U.S. Supreme Court, thanks to the government’s recognition that with the U.S. in the middle of an immigration boom, the question of birthright citizenship must be addressed. In the 1898 U.S. v. Wong Kim Ark decision, the court held by a vote of 6–2 that Wong was a citizen because he was born in the United States.

Immigration scholar Hidetaka Hirota of the University of California, Berkeley, explains that the government went even further to protect children born in the U.S. In 1889 the Treasury Department—which then oversaw immigration—decided that a native-born child could not be sent out of the country with her foreign-born mother. Nor did the government want to hurt the U.S. citizen by expelling her mother and leaving her without a guardian. So it admitted the foreign-born mother to take care of the citizen child.

The Treasury concluded that it was not “the intention of Congress to sever the sacred ties existing between parent and child, or forcibly banish and expatriate a native-born child for the reason that its parent is a pauper.”

In May 2023, then–presidential candidate Donald J. Trump released a video promising that on “Day One” of a new presidential term, he would issue an executive order that would end birthright citizenship. He claimed that the understanding that anyone born in the United States is automatically a citizen is “based on an historical myth, and a willful misinterpretation of the law by the open borders advocates.”

But one judge after another has sided against him on this issue, and he apparently showed up at the Supreme Court today to try to intimidate the three judges who owe their seats on the bench to him into supporting his own radical reworking of one of the key principles of our nation. He left after an hour and a half, before Cecillia Wang, the ACLU lawyer arguing for the plaintiffs, began to speak.

Later, Wang described what it was like to argue in court today. She explained, it’s “a nerve-wracking experience to argue any case in the Supreme Court, and especially one as weighty as this one, where the president of the United States is taking aim at a cherished American tradition and individual right of citizenship based on your birth in this country. I myself am a Fourteenth Amendment citizen because my parents had not yet naturalized when I was born. So I walked in today with the spirit of my parents and so many people’s ancestors in that first generation of Americans—whether they naturalized or not, I consider them all Americans. They came to this country with hopes and dreams, and they gave birth to future Americans, and that’s us.”

Notes:

https://apnews.com/article/supreme-court-immigration-trump-birthright-citizenship-e97c0c6f37fc68a70acc6075ff7d8e47

https://www.washingtonpost.com/politics/2026/04/01/trump-supreme-court-birthright-citizenship/

https://www.factcheck.org/2023/06/trumps-dubious-promise-to-end-birthright-citizenship/

https://www.oyez.org/cases/2025/25-365

https://www.presidency.ucsb.edu/documents/republican-party-platform-1860

Edward McPherson, The Political History of the United States of America during the Period of Reconstruction (Washington: Solomons & Chapman, 1875), pp. 74–75, 78, at https://www.google.com/books/edition/The_Political_History_of_the_United_Stat/x7HmnHL1OvQC

https://werehistory.org/immigrant-parents/

https://www.archives.gov/milestone-documents/dred-scott-v-sandford

https://case-law.vlex.com/vid/takao-ozawa-v-united-889889672

https://tile.loc.gov/storage-services/service/ll/usrep/usrep261/usrep261204/usrep261204.pdf

https://supreme.justia.com/cases/federal/us/169/649/

Bluesky:

sifill.bsky.social/post/3mihywa5sms2q


SQLAlchemy 2 In Practice - Chapter 3 - One-To-Many Relationships

This is the third chapter of my SQLAlchemy 2 in Practice book. If you'd like to support my work, I encourage you to buy this book, either directly from my store or on Amazon. Thank you!

In the previous chapter you learned how to execute a variety of queries on the products table. Interestingly, some of those queries were designed to obtain product manufacturers and not products, and this required duplicates to be removed by grouping the results.

Saul Steinberg’s Cartography

Artist Saul Steinberg may be best known for “View of the World from 9th Avenue,” an illustration that appeared as the well-known cover of the 29 March 1976 issue of The New…

The One Phrase That Explains Trump's Twisted Psychology

The Cross Section is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

Last weekend, Yonatan Touval wrote an essay in the New York Times with an explanation for the American and Israeli governments’ apparent failure to consider that if they attacked Iran, the Iranians might, you know, do things in response, making choices colored by their history, their beliefs, their culture, and their politics. “Our leaders preside over an extraordinary machinery of destruction, but they remain strikingly obtuse about human beings — about their pride, shame, convictions and historical memory,” Touval wrote.

Donald Trump in particular is incapable of empathy, the capacity to see the world from the perspective of someone else, even only for a moment. Some responded to Touval’s essay by saying Trump has no theory of mind, no capacity to imagine how someone else thinks and makes decisions. But that’s not quite true. He has a theory; it’s just one in which all other minds exist only to regard him with awe. Everyone is a member of his audience, watching him and reading about him and shaking our heads in wonder at him.

You can see it in Trump’s obsession with the gaze of the crowd, which has gripped him all his life. The true measure of a person, an action, or an event, he believes, is that it is seen, and by how many. And the highest compliment one can pay, the greatest superlative imaginable, is that the crowd will say “We’ve never seen anything like it before.”

Take a look at some of the things he said in the speech he gave from the White House Wednesday night, the purpose of which was to convince the public that the war in Iran was a great idea and is going splendidly:

  • “In these past four weeks, our armed forces have delivered swift, decisive, overwhelming victories on the battlefield, victories like few people have ever seen before.”

  • “I also want to thank our troops for the masterful job they did in taking the country of Venezuela in a matter of minutes. That hit was quick, lethal, violent and respected by everyone all over the world.”

  • “In June, I ordered a strike on Iran’s key nuclear facilities in Operation Midnight Hammer. Nobody’s ever seen anything like it.”

  • “We just learned that, we took them out, we took them all out so that no one would really dare stop them and their race for a nuclear bomb, a nuclear weapon like nobody’s ever seen before.”

  • “Our armed forces have been extraordinary. There’s never been anything like it militarily. Everyone is talking about it.”

  • “With our historic tax cuts, where people are just now talking about receiving larger refunds than they ever thought possible, they are getting so much more money than they thought.”

  • “The whole world is watching and they can’t believe the power, strength and brilliance, they just can’t believe what they’re seeing, they, leave it to your imagination, but they can’t believe what they’re seeing, the brilliance of the United States military.”

  • “Because of the actions we have taken, we are on the cusp of ending Iran’s sinister threat to America and the world. And I’ll tell you, the world is watching.”

Importantly, almost all of these comments were in the ad-libbed portions of the speech, when Trump adds his own commentary and emphasis to what his aides have written for him. The picture he paints is one in which everyone — individual people, countries, the world as a whole — is an eternal spectator in a constant state of awe, slack-jawed in amazement at either events as they unfold or, more often, the greatness of Trump and his accomplishments. Everything he does is something no one has ever seen before, and our purpose is to stand back and behold him in his glory.

This is the perspective of a man whose brain has been poisoned by his life-long pursuit of fame and admiration. He exists only if he’s being watched, and his decisions are good only so far as they are seen. This is why Trump is the purest expression of this cursed moment in cultural history: He is every fame-lusting Real Housewife, every hungry influencer trying to boost their middling follower count, every looksmaxxing douchebro, taken to a terrifying extreme.

And underneath all the desperate bravado is a well of insecurity so deep and dark it would destroy the world. Way back in 2017, I wrote an article about Trump’s obsession with the idea that other countries were laughing at the United States, the most horrible fate that could befall us:

Send President Trump abroad to rub shoulders with a bunch of foreigners, and chances are somewhere around 100 percent that he’ll come back thinking about whether anyone is laughing at us. “Russian officials must be laughing at the U.S. & how a lame excuse for why the Dems lost the election has taken over the Fake News,” the president tweeted on Tuesday morning.

If you’ve been paying any attention at all over the last couple of years, you know this is a topic he returns to again and again. Search Trump’s Twitter feed and you’ll find that who’s laughing at whom is an obsession for him, with the United States usually the target of the laughter. “The world is laughing at us.” China is “laughing at USA!” Iran is “laughing at Kerry & Obama!” “ISIS & all others laughing!” “Mexican leadership has been laughing at us for many years.” “Everybody is laughing at Jeb Bush.” “Putin is laughing at Obama.” “OPEC is laughing at how stupid we are.” “Dopey, nobody is laughing at me!” I could go on (and on, and on), but I’ll spare you.

Before he ran for president, Trump was certainly mocked here and there in the media for his vulgar nouveau-riche excesses, but it wasn’t until he achieved his goal of conquering the world’s attention that the laughing truly began. Indeed, by now there is no figure in the world, and perhaps in all of history, who has been the object of so much laughter and ridicule. This incident was one that must have wounded him to the core:

All the cool kids making fun of him behind his back! No wonder he hates them so much.

Today, those leaders and their successors want nothing to do with his disastrous war, a reaction that has sent him into such a tizzy that he may finally try to withdraw from NATO. But in the meantime, he’ll continue to trumpet his unprecedented success, insisting that all eyes are upon him and no one can believe what they’re seeing.

Which is true. Just not in the way he means.

Thank you for reading The Cross Section. This site has no paywall, so I depend on the generosity of readers to sustain the work I present here. If you find what you read valuable and would like it to continue, consider becoming a paid subscriber.


March 2026 sponsors-only newsletter

I just sent the March edition of my sponsors-only monthly newsletter. If you are a sponsor (or if you start a sponsorship now) you can access it here. In this month's newsletter:

  • More agentic engineering patterns
  • Streaming experts with MoE models on a Mac
  • Model releases in March
  • Vibe porting
  • Supply chain attacks against PyPI and NPM
  • Stuff I shipped
  • What I'm using, March 2026 edition
  • And a couple of museums

Here's a copy of the February newsletter as a preview of what you'll get. Pay $10/month to stay a month ahead of the free copy!

Tags: newsletter

datasette-llm 0.1a6

Release: datasette-llm 0.1a6

  • The same model ID no longer needs to be repeated in both the default model and allowed models lists - setting it as a default model automatically adds it to the allowed models list. #6
  • Improved documentation for Python API usage.

Tags: llm, datasette

Huge Round Barn in Oregon

I was delving around in the photo files from our book Home Work, published in 2004. This is the so-called round barn, built by cattleman Peter French, in what is now the Malheur Wildlife Refuge in southeast Oregon.

In 1872, French set out for Oregon from Sacramento, California with 1200 head of select shorthorn cattle, six Mexican vaqueros, and a Chinese cook.

He drove the cattle across the Sacramento River and then northward up into Eastern Oregon, where he settled on the west side of Steens Mountains.

Over the years, his ranching empire grew to encompass 200,000 acres and 45,000 head of cattle, one of the largest cattle empires west of the Rockies.

In the late 1870s or early 1880s, French built three round barns for breaking horses in the winter months. This one is 100 feet in diameter, the conical roof framed with a 35-foot center pole of juniper (about 40” diameter at the bottom, tapering to maybe 28” at the top), a ring of 14 surrounding juniper posts, and a third ring of posts at the perimeter about 8 feet high.

This configuration provided an unbroken interior circular ring inside the barn so horses could run around and get exercise during severe weather.

He built three such barns, with this one being preserved.

It’s a breathtaking building; I spent a couple of hours there in Spring, 2003, shooting photos.

Barns are my cathedrals.

It’s a great story, with 7 more photos, told on pages 206 to 207 of Home Work.

I did over 5000 blog posts back in the day. Every once in a while I’ll drag something over to Substack, like this one.

Trump Doesn't Even Have the Courage to Run Away

A short video rather than my usual morning post

Donald Trump doesn’t even have the courage to run away.

Hi, I’m Paul Krugman. I’m not going to do a regular post today — it’s Thursday morning — because I wanted to wait and see what was in the big speech from Donald Trump last night. And I thought I could just do a short video about it.

It turns out that the speech was sort of an anticlimax, although not in a good way. Many people expected Trump to pull the mother of all TACOs, to declare victory and surrender. He did not do that. He declared victory, of course, but he did not actually announce an end to hostilities. On the contrary, he said we’re going to bomb Iran into the Stone Age. So add massive war crimes to your schedule.

There is clearly no strategy here. There’s no endgame. There’s nothing. It’s hard to tell, as always, whether Trump is delusional or just completely unable to admit something that he actually knows.

One of the moments that really struck me in the speech was him declaring that the whole world was extremely impressed by what happened. He said,

the whole world is watching and they can’t believe the power, strength and brilliance. They just can’t believe what they’re seeing. The world can’t believe what it’s seeing.

What it’s seeing is that the world’s greatest military power took on a fourth-rate power. Again, as I said the other day, Iran’s military budget is a rounding error in our military budget. And we lost. For all practical purposes, we’ve left ourselves in a much weaker position and Iran in a stronger position than it was before.

But Trump has to believe or has to claim that he believes that the whole world is extremely impressed. You might say, why do we care? Well, he cares, obviously. His whole thing is about dominance and believing that we’ve got the world awed by our strength.

If you want the real verdict on the speech, well, Brent oil futures were under $100 when Trump started speaking. They are over $108 as I record this. The oil market, I think, is a clearer gauge, although the stock market has also reacted.

Basically, everybody said, oh my God, we thought that this was going to be at least the beginning of the end, and instead it looks like an endless quagmire. I still think that people are not fully taking into account the implications for global oil prices and everything else of the Strait of Hormuz remaining closed for the indefinite future.

So this is going to be really bad. But anyway, it was radically disappointing even to people who are, you know, the markets and a lot of people in the world were actually hoping that the United States would give up. I mean, it’ll be terrible. We really don’t want a medievalist theocracy empowered. But since this is heading nowhere except for, again, massive war crimes, better to end it. But we’re not getting that.

What really strikes me, and there’s obviously deeper stuff in here, but it is a question of character. It’s funny, I don’t think there’s a sort of, if you like, native English term for the Yiddish — but it’s effectively English now — word mensch. A mensch is literally a person, but it means somebody who takes responsibility for their actions, who accepts defeats as being defeats and tries to move on, who tries to improve, basically just being a mensch.

It’s hard to imagine somebody who’s less of a mensch than Donald Trump, except maybe for some of the members of his cabinet. It’s incredible that they’re so lacking in the basics of character.

The thing about what this means for America’s role in the world is not only that Trump and company are doing great damage, but the whole world is watching. They saw that this guy, and it wasn’t hard to see what kind of a person Trump was, that America elected this guy twice. It appears that the American public has completely lost sight of what it means to be a responsible, serious person.

I might say, since this masculinity posturing is such a part of it, they’ve forgotten what it is to be a man. Obviously, that applies to all genders. A country that will elect somebody like that twice is not a country anyone can rely on. And that is the ultimate lesson here.

We have Trump lecturing the world and saying, why are you cowards? Why don’t you come in and help us in this ill-conceived, disastrous war that we started without checking with you? But the reality is that the world is looking and saying, my God, what is wrong with America? They may still have a lot of bombs — although not as many as we started with — but it’s not a country anybody can trust for anything. And that, even more than the price of oil, is going to be the legacy of this war.

I guess have a great day.

★ David Pogue’s ‘Apple: The First 50 Years’

Pogue was my guest on The Talk Show a few weeks ago to talk about his new book, Apple: The First 50 Years, and the show was a lot of fun. But the book is so good, so comprehensive, so fun that it feels essential to link to it whilst we celebrate Apple’s 50th year. I’m a print guy, generally, but the print edition of this book is especially good — it’s a gorgeous book printed in full color throughout (not just, say, 16 color pages in the middle). Apple’s history is both literally and figuratively colorful, and the photos and screenshots Pogue includes are terrific.

The book is nothing short of an instant classic — simultaneously a very enjoyable read, and a meticulously-researched reference for the decades to come. Pogue both covers well-known ground and reports umpteen nuggets, anecdotes, and details that have never been told before. For example, we all know that Steve Jobs was resistant to opening the iPhone to third-party apps. But Pogue interviewed Scott Forstall and got this story, about just how far Steve Jobs thought Apple could go to expand the iPhone’s software library while not opening it to third-party developers:

“I want you to make a list of every app any customer would ever want to use,” he told Forstall. “And then the two of us will prioritize that list. And then I’m going to write you a blank check, and you are going to build the largest development team in the history of the world, to build as many apps as you can as quickly as possible.”

Forstall, dubious, began composing a list. But on the side, he instructed his engineers to build the security foundations of an app store into the iPhone’s software, “against Steve’s knowledge and wishes,” Forstall says. [...]

Two weeks after the iPhone’s release, someone figured out how to “jailbreak” the iPhone: to hack it so that they could install custom apps.

Jobs burst into Forstall’s office. “You have to shut this down!”

But Forstall didn’t see the harm of developers spending their efforts making the iPhone better. “If they add something malicious, we’ll ship an update tomorrow to protect against that. But if all they’re doing is adding apps that are useful, there’s no reason to break that.”

Jobs, troubled, reluctantly agreed.

Week by week, more cool apps arrived, available only to jailbroken phones. One day in October, Jobs read an article about some of the coolest ones.

“You know what?” he said. “We should build an app store.”

Forstall, delighted, revealed his secret plan. He had followed in the footsteps of Burrell Smith (the Mac’s memory-expansion circuit) and Bob Belleville (the Sony floppy-drive deal): He’d disobeyed Jobs and wound up saving the project.

The book is just under 600 pages, including a comprehensive index, and it isn’t padded. It is a veritable encyclopedia of Apple history. Just a remarkable, essential, and unique work. If you haven’t ordered a copy, you should, and if you do, here are some make-me-rich affiliate links:

David Pogue: ‘Apple and Me’

David Pogue, on his new blog at Substack:

When the iPhone was about to go on sale in 2007, a thousand people lined up around the block at New York City’s Apple Store.

I’d written a parody of “My Way,” with the crazy idea of filming a music video with the participation of people standing in that line. It was a total blast; everyone in line was game. I edited the results together and uploaded it — and for six hours, ladies and gentlemen, it was the most watched video on YouTube. (It’s still there.)

Anyway. That night, I got a call from Jobs’s assistant. “I have Steve on the line,” she said. “Can you take the call?”

I was out to dinner with my family, but I said yes.

“David?” Jobs said when he came on the line. “I saw that song video you posted today.”

Oh GREAT, I thought. I steeled myself for another epic reaming by the CEO of Apple.

“I just wanted to say, it was the funniest fucking thing I’ve ever seen,” he said.

 ★ 

Chris Espinosa, Employee #8, Profiled in The New York Times

Kalley Huang, writing for The New York Times (gift link):

As that happened, Apple laid off staff “again and again and again,” Mr. Espinosa said. His manager told him that he had been spared because he had worked for the company for so long that his severance package would be too expensive.

“I was wondering what I was going to do because I had no college degree and I had only worked at one company,” Mr. Espinosa said. Then he figured: “I was here when we turned the lights on. I might as well stick around until we turn the lights off.”

Lovely read.

 ★ 

The Talk Show: ‘Apple at 50’

Who better to join the show to commemorate Apple’s 50th anniversary than John Siracusa?

Sponsored by:

  • Sentry: A real-time error monitoring and tracing platform. Use code TALKSHOW for $80 in free credits.
  • Notion: The AI workspace where teams and AI agents get more done together.
  • Factor: Healthy eating, made easy. Get 50% off your first box, plus free breakfast for 1 year, with code talkshow50off.
 ★ 

Jason Snell on Covering Apple for 33 Years

Jason Snell, writing at Macworld, regarding joining the staff at MacUser back in 1993:

But as amazing and revelatory as the Mac was for me as a writer and editor of print and online publications, I rapidly discovered that the Apple of the period was a mess. My first day as a full-time employee, a copy editor popped his head over the cubicle wall and asked me if I had heard anything about layoffs. Welcome to the media, kid.

 ★ 

‘Great Things in Business Are Never Done by One Person. They’re Done by a Team of People.’

60 Minutes published a short clip of a 2003 Dan Rather interview with Steve Jobs, and it’s a good one. Seems apt both regarding Apple’s continued success after Jobs’s death, and a refutation of the personality cult in The White House.

 ★ 

Thursday assorted links

1. The most important woman in Kant’s life.

2. Incentives matter?  Dealing with Iranian scientists (New Yorker).

3. “In the months that followed, US tariff policy changed more than 50 times, spanning rate increases, rate decreases, new product exemptions, and new product inclusions.”

4. The British minimum wage.

5. The next generation of books and publishing?

6. More Scott Sumner movie reviews.

7. How people actually use ChatGPT.  A massive new dataset from OpenAI.

8. They added a baby to the end of Tristan!???

9. Results on reproducibility.  And a simple visual, comparing different fields.  “Education” does not do great.  And the Nature link.  None of this should come as a surprise.

10. Resilient societies: a Mercatus call for proposals.

11. “There are now ten toilets in space.” (And the aliens?)

12. The anti-data center coalitions and their squabbles.

Lots to read and ponder in today’s links…

The post Thursday assorted links appeared first on Marginal REVOLUTION.


Artemis 2 crew blasts off on historic moon mission

Photographers at the Press Site capture the launch of the Artemis 2 mission at 6:35 p.m. EDT on April 1, 2026. Photo: Michael Cain/Spaceflight Now.

A three-man, one-woman crew blasted off on a voyage to the moon Wednesday, riding atop the world’s most powerful operational rocket as it roared away on a trail-blazing flight to help pave the way for upcoming lunar landings and an American moon base.

It was the first piloted moonshot since the end of the Apollo program 53 years ago, a flight expected to carry Artemis 2 commander Reid Wiseman, Victor Glover, Christina Koch and Canadian astronaut Jeremy Hansen farther from Earth than any astronauts before them.

The crew will not land on the moon or even go into lunar orbit. But they plan to thoroughly test their Orion capsule, making only its second flight — its first with a crew on board — to make sure it’s up to the task.

At the same time, the mission will test flight controllers and procedures needed to safely send astronauts back to the moon for long-duration stays as NASA sets its sights on winning a superpower space race with China, which plans to send its own taikonauts to the moon before the end of the decade.

“This is a test flight,” NASA Administrator Jared Isaacman told CBS News. “This is the opening act in a series of missions that will send astronauts to and from the moon with great frequency as we return to stay, to build the moon base and realize the scientific and economic potential on the lunar surface.”

For the Artemis 2 astronauts, named to the mission with great fanfare in 2023, the launching came two months later than planned because of work to fix hydrogen leaks in the Space Launch System rocket’s first stage and to resolve an upper stage propellant pressurization problem.

The Artemis 2 astronauts depart for the launch pad. (left to right) Jeremy Hansen, Victor Glover, Reid Wiseman and Christina Koch. Image: Adam Bernstein/Spaceflight Now.

On Wednesday, the launch team ran into a couple of what turned out to be minor problems, extending a final planned hold in the countdown at the T-minus 10-minute mark to make sure everything was shipshape and ready to go.

Launch Director Charlie Blackwell-Thompson then conducted a poll of engineers in Firing Room 1 and asked Wiseman if the crew was “go” for launch. The astronauts all said yes.

“On this historic mission, you take with you the heart of this Artemis team, the daring spirit of the American people and our partners across the globe and the hopes and dreams of a new generation,” Blackwell-Thompson said to the astronauts. “Good luck, Godspeed, Artemis II, let’s go.”

From that point, the countdown ticked smoothly to zero, and the SLS rocket thundered to life with billowing clouds of steam at 6:35:12 p.m. EDT, just 11 minutes late, when its four shuttle-era main engines ignited and throttled up to a combined two million pounds of thrust.

After a lightning-fast round of computer checks, the rocket’s two extended strap-on solid fuel boosters ignited, explosive bolts holding the SLS to its launch pad shattered and the 5.7-million-pound rocket climbed away from pad 39B at the Kennedy Space Center atop a combined 8.8 million pounds of thrust.

Like the Orion, it was the rocket’s second launch in three years and its first with astronauts on board.

Generating an ear-splitting roar that shook the ground for miles around, the huge rocket reached about 120 mph — straight up — in less than 10 seconds. Consuming 8,000 gallons of liquid propellant and 24,000 pounds of solid fuel per second, the SLS rapidly accelerated as it burned through propellant and lost weight.

Moments after clearing the launch pad’s gantry and lightning towers, the SLS arced away to the east over the Atlantic Ocean, putting on a spectacular show for tens of thousands of area residents and tourists who flocked to Florida’s “Space Coast” to witness NASA’s first piloted moon launch in a half century.

The SLS rocket broke through the “sound barrier” 55 seconds after liftoff and smoothly raced through the region of maximum aerodynamic pressure as it plowed out of the dense lower atmosphere.

The Space Launch System rocket thunders away from Kennedy Space Center, carrying four astronauts on a mission to loop around the moon and back. Image: Michael Cain/Spaceflight Now.

The twin strap-on boosters, providing two-thirds of the rocket’s liftoff thrust, exhausted their propellant and fell away about two minutes after launch. The SLS core stage continued the ascent on the power of its four RS-25 main engines.

Eight minutes and 10 seconds or so after liftoff, the engines shut down, the core stage fell away and the Orion crew capsule, the astronauts now weightless, continued coasting upward, still attached to the rocket’s upper stage, known as the Interim Cryogenic Propulsion Stage, or ICPS. The spacecraft’s four solar wings unfolded a few minutes later.

At that point, the astronauts were in an elliptical orbit with a high point, or apogee, of about 1,380 miles and a low point, or perigee, of just 17 miles or so. The ICPS fired its main engine for the first time about 50 minutes after liftoff, raising the low point to a safe 115 miles.

An hour later, the ICPS engine fired a second time, raising the high point of the orbit to some 43,760 miles, higher than any astronauts have flown since the final Apollo moon mission in 1972.

The Orion capsule, attached to a European Space Agency-supplied service module housing air, water, propellant, maneuvering thrusters and a single main engine, separated from the ICPS three hours and 20 minutes after launch.

The orbit adjustments were designed to put the astronauts in a highly elliptical 24-hour-long orbit, giving them plenty of time to check out the Orion capsule, making sure the ship’s communications, navigation, propulsion and life support systems are working properly before heading to the moon.

That includes the capsule’s cramped toilet compartment, resembling a small telephone booth built into the floor of the capsule. Koch reported problems shortly after reaching orbit as she was activating the system.

“Christina, with the toilet, the fault that you reported, the toilet cannot spin up,” a flight controller radioed. “You can still use it for fecal collection, but you’ll have to use (contingency bags) for urine.”

He said engineers were working on a repair plan and within an hour or so, Koch was able to restore it to normal operation.

A major objective of the flight came a little more than three hours into the mission when Glover took over manual control of the Orion capsule, flying in formation with the spent ICPS stage that helped boost them into orbit. He said he was able to precisely re-position the capsule with no problems, approaching the ICPS and backing away as planned.

He described the sound and feel of Orion’s thrusters firing as “a little rumble, like driving on a rocky road.”

“We are essentially going to make sure that the vehicle flies the way that we think it does, that we designed it to do,” Glover said before launch. “And so we’re going to not only fly the vehicle manually, we’re going to execute the six degrees of freedom, so (moving) forward, backwards, left, right, up and down.”

He also re-oriented the capsule in roll, nose up-and-down pitch and side-to-side yaw.

“But we also want to give qualitative and quantitative feedback to the ground team, so letting them know what it feels like now that we can hear and feel the thrusters and to just understand the human experience.”

The crew will end an 18-hour day with two four-hour “sleep” periods early Thursday. They’ll get up after the first break to monitor a firing of their own service module engine to again raise the low point of the orbit and slightly boost the high point up to around 44,555 miles. At that point, the crew will get another four hours to nap.

In the meantime, NASA’s mission management team will review Orion’s performance to that point and, if all goes well, declare the spacecraft “go” for the all-important “trans-lunar injection,” or TLI, service module main engine firing.

The planned six-minute TLI burn, starting around 7:30 p.m. Thursday, will increase the spacecraft’s velocity by about 900 mph, breaking the ship out of Earth orbit to finally head for the moon.

The TLI burn will put the Orion on a free-return trajectory. From that point on, the crew’s path back to Earth will be set. As the ship loops around the moon, lunar gravity will bend the trajectory back toward a precisely targeted Pacific Ocean splashdown off the southern California coast on April 10.

The coast out to the moon will take about four days. All the while, Earth’s gravity will continue pulling on Orion, steadily slowing the ship as it flies farther away. But on Monday, the astronauts will enter the “lunar sphere of influence” and begin speeding up again as the moon’s gravitational pull finally begins exceeding Earth’s.

Later that day, the spacecraft is expected to reach a distance of 248,655 miles from Earth, equaling and then passing a record set by the Apollo 13 crew in 1970.

The Orion will pass behind the leading edge of the moon as seen from Earth and out of contact with mission control for about 40 minutes starting around 6:40 p.m. Monday. Sailing over the far side of the moon, the astronauts will pass within about 4,000 miles of the lunar surface at close approach and reach a maximum distance from Earth of some 252,800 miles.

During passage around the far side, about a quarter of the moon will be in sunlight, giving the astronauts a chance to observe, photograph and shoot video of features never before seen by human eyes.

“We are going to maximize every minute of looking at that far side,” Koch said. “There are launch windows where we could have illumination that will allow us to see things for the first time ever with human eyes, and that actually makes a difference to the people doing the scientific data analysis.”

Added Glover: “Twenty-four men have seen the moon, and we’re going to send the first set of woman’s eyes. They think that she can potentially see colors that we may not see. And so I think that’s also very important.”

The flyby phase of the flight is expected to come to a close Monday evening and the spacecraft will leave the lunar sphere of influence Tuesday afternoon as it heads back to Earth, steadily picking up speed as the planet’s gravity again becomes dominant.

Next Thursday, the astronauts will attempt a ship-to-ship call with the crew of the International Space Station followed by a crew news conference later that afternoon. That will set the stage for re-entry on Friday, April 10.

A critical thruster firing Friday afternoon will fine-tune the crew’s approach before they jettison the no-longer-needed service module.

Flying heat shield forward, the Orion will hit the top of the discernible atmosphere around 8 p.m. while moving at some 25,000 mph. The heat shield will experience temperatures of up to 5,000 degrees as the spacecraft rapidly slows in a blaze of atmospheric friction.

Once through the zone of maximum heating, the capsule will be descending at a much more sedate velocity. A series of parachutes will sequentially deploy to slow the craft to a relatively gentle 15 mph splashdown. Navy crews will be standing by to help the astronauts out of their spaceship for short helicopter rides to a nearby recovery ship.

“I think Jeremy said it best, when that hatch opens on the Pacific Ocean, we’ll probably be pretty ready to get out,” Koch said. “But a part of us will know that there are some moments left that we will miss forever and probably won’t ever get to have back.”

The astronauts will be extracted from Orion and flown by helicopter to a waiting recovery ship for initial medical checks and calls home to family and friends before heading home to Houston for debriefing and reunions with family. The Orion, meanwhile, will be towed into the recovery ship’s flooded “well deck” and secured for the trip back to shore.

With the Artemis 2 crew back on the ground, NASA’s focus will shift to the Artemis 3 mission and beyond, gearing up for another Orion crew to test rendezvous and docking procedures next year with one or both moon landers being built by SpaceX and Blue Origin.

If that goes well, NASA plans to launch one and possibly two moon landing missions in 2028 using whichever landers are deemed safe and ready for flight. Agency managers say they plan to increase the flight rate to moon landings every six months to begin building a moon base near the lunar south pole.

But that will depend on steady funding from Washington across multiple presidential administrations. The Trump administration kicked off the Artemis program, but it’s not yet known how the project will fare over the long haul.

Isaacman is optimistic.

“It’s important because we’re fulfilling a promise … for America’s return to the moon as a stepping stone for all the things that we are going to do farther out into our solar system, like some day American astronauts planting the stars and stripes on Mars,” he said in an interview with CBS News.

“So you’re doing it for the scientific potential, the economic potential as a technological proving ground to do the things on the moon that you’re going to need on Mars.

“And how about inspiring the next generation?” he added. “How many kids after this mission are going to dress up as astronauts for Halloween and want to grow up and contribute to this great adventure?”

Sam Altman’s prediction has come through

From his house in Los Angeles, Mr. Gallagher, 41, used A.I. to write the code for the software that powers his company, produce the website copy, generate the images and videos for ads and handle customer service. He created A.I. systems to analyze his business’s performance. And he outsourced the other stuff he couldn’t do himself.

His start-up, Medvi, a telehealth provider of GLP-1 weight-loss drugs, got 300 customers in its first month. In its second month, it gained 1,000 more. In 2025, Medvi’s first full year in business, the company generated $401 million in sales.

Mr. Gallagher then hired his only employee, his younger brother, Elliot. This year, they are on track to do $1.8 billion in sales.

Here is more from Erin Griffith at the NYT.  Maybe Sam said “one person” running a billion-dollar company, but if the two are closely genetically related I will still count this.

The post Sam Altman’s prediction has come through appeared first on Marginal REVOLUTION.


China shock fact of the day

China’s share in US imports at 9% is back down to what it was right before China joined the WTO (2001).

From Gita Gopinath.

Addendum: If you are curious, here is GPT on how much is now shipped through third countries.

The post China shock fact of the day appeared first on Marginal REVOLUTION.


U.S.A. fact of the day

New Penn-Wharton study shows per-capita federal spending on each age group:

Seniors: $43,700

Children and young adults: $4,300.

Here is more from Jessica Riedl.

The post U.S.A. fact of the day appeared first on Marginal REVOLUTION.


How to revive science in America by Harvey V. Fineberg, in PNAS

Here's a paper in the latest PNAS that begins with this epigraph:

“Don’t tell me where your priorities are. Show me where you spend your money, and I’ll tell you what they are.” — attributed to James W. Frick (Vice President, University of Notre Dame, 1965–1983) 

The rest is commentary (and a figure is worth a thousand (1,000) words).

How to revive science in America by Harvey V. Fineberg, PNAS, March 26, 2026   https://doi.org/10.1073/pnas.2537854123


The hidden world of plant roots


Plant roots don’t have a nervous system, yet can produce sophisticated responses. What does that say about intelligence?

- by Aeon Video

Watch on Aeon

The house is a work of art


Frank Lloyd Wright exalted the individual and made ordinary life beautiful. But his life was marked by scandal and grief

- by Andrew Deming

Read on Aeon

From Columbus to Chávez: L.A.'s disappearing, disfigured and displaced statues

Statues in L.A. are not as immobile as you’d think. They’re here, they’re there, they move from pedestal to pedestal. Sometimes they disappear altogether.

My very interesting Conversation with Arthur C. Brooks

Here is the audio, video, and transcript.  Here is part of the episode summary:

Tyler and Arthur cover how scarcity makes savoring possible and why knowing you’ll die young sharpens the mind, what twin studies tell us about the genetics of well-being and why that’s not actually depressing, the four habits of the genuinely happy, the placebo theory of happiness books, curiosity as an evolved positive emotion, the optimal degree of self-deception, why Arthur chose Catholicism rather than Orthodoxy, what the research says about accepting death, how he became an economist via correspondence school, AI’s effect on think tanks, the future of classical music, whether Trumpism or Reaganism is the equilibrium state of American conservatism, whether his views on immigration have changed, what he and Oprah actually agree on, which president from his lifetime he most admires, Barcelona versus Madrid, what 60-year-olds are especially good at, why he’s reading Josef Pieper, how he’ll face death, and much more.

Excerpt:

COWEN: What do you think of the view that books on happiness or the meaning of life, they’re a kind of placebo? They don’t help directly, but you feel you’ve done something to become happier, and the placebo is somewhat effective.

BROOKS: I think that there’s probably something to that, although there’s some pretty interesting new research that shows that the placebo effect is actually not real. Have you seen some of that new research?

COWEN: Yes, but I don’t believe it. Nocebos also seem to work in many situations.

BROOKS: I know. I take your broader point. I take your broader point. I think that the reason for that is that when people read most of the self-improvement literature, not just happiness literature, what happens is that they get a flush of epiphany, a new way of thinking. That feels really good. That feels really inspirational. The problem is it doesn’t take root.

It’s like the seeds that are thrown on a path in the biblical parable. They don’t go through the algorithm that I just talked about, and so not all of these things can be compared. I would not have gotten into this line of research and this line of teaching if I thought that it was just going to add another book to a long line of self-improvement books that make people feel good but don’t ultimately change their lives.

COWEN: Say a person reads a new and different book on happiness once a year at the beginning of the year. Now, under the placebo view, that’s a fine thing to do. It’ll get you a bit happier each year. Under your view, it seems there’s something wrong. Isn’t the placebo view doing a bit better there? You should read a book on happiness every year, a different one. It’ll revitalize you a bit. Whether or not it’s new only matters a little.

BROOKS: Yes. It might remind you of some things that you knew to be the truth that you had fallen away from. One of the things that I like to do is I like to read a good book by one of the church fathers, for example. They’re more or less saying the same thing. It reminds me of something that I learned as a boy and that I’ve forgotten as an adult. It might actually remind me to come back to many of these practices and many of these views.

I think that there are real insights. There’s real value that can come from science-based knowledge about how to live a better life. I think that you and I are both dedicated to science in the public interest and also science in the private interest as well. I think there is some good to be gotten through many of these ideas. Not all. Once again, not all happiness literature is created equal.

And:

COWEN: Why not cram all that contemplation of death into your last three months rather than your last 18 months? Do intertemporal substitution, right? Accelerate it. Ben Sasse probably is facing a pretty short timeline, but he’s done a remarkable job, even publicly, of coming to terms with what’s happening. Isn’t that better than two years of the same?

And:

COWEN: I think it’s fair to say what we call the right wing in America, it’s become much, much more Trumpy. Does this shift you to the left or make you question what the right wing was to begin with, or do you just feel lost and confused, or do you say, that’s great, I’m more Trumpy, too? How have you dealt with that emotionally and intellectually?

BROOKS: Yes. I’ll answer, but you’re going to have to answer after me, will you?

COWEN: Sure.

Interesting throughout.

The post My very interesting Conversation with Arthur C. Brooks appeared first on Marginal REVOLUTION.


Réunion Island Lava Reaches the Sea

Lava flows east in this thermal image captured by the Thermal Infrared Sensor (TIRS) on Landsat 9 on March 28, 2026.
NASA Earth Observatory/Michala Garrison

Located 700 kilometers (440 miles) east of Madagascar, Réunion Island is the product of a long-lived mantle hotspot on the floor of the Indian Ocean. The island first emerged above the ocean’s surface about 2 million years ago. It remains active today, with frequent eruptions from Piton de la Fournaise, a shield volcano on the island’s eastern side.

Since the 17th century, the volcano has had more than 150 documented eruptions. The most recent began within the Enclos Fouqué caldera on February 13, 2026, with the opening of four fissures that fueled sustained lava fountains reaching 10 to 50 meters (30 to 160 feet). Throughout February and March, basaltic lava spilled down the volcano, advancing through forested and grassy areas toward its eastern side.

This thermal satellite image shows lava flowing east toward the ocean on March 28, 2026. The signal reveals the amount of heat emanating from surfaces on Earth based on detections of thermal radiation in two wavelengths. Warmer areas are mapped in yellow and cooler surfaces in blue. The thermal data were overlaid on a digital elevation model of the island.

“The hottest areas, shown as the brightest tones, correspond to the eruptive vent, the active lava channel, and the flow front,” said Adele Campus, a University of Turin volcanologist. From the vent, lava flows downslope for several kilometers, often through lava tubes. “The places where lava re-emerges at the surface through breakouts appear as localized hotspots,” she added. Campus and colleagues analyzed more than two decades of NASA and NOAA satellite observations in a 2025 study, identifying key trends and patterns in the volcano’s thermal activity and rate of lava effusion.

On March 13, lava cut through the island’s Route Nationale 2 (RN2). By March 16, it had begun to spill into the Indian Ocean, producing acidic plumes of steam and volcanic gases, known as laze, according to the Observatoire Volcanologique du Piton de la Fournaise (OVPF). Scientists on the ground measured lava temperatures of 1,100 to 1,130 degrees Celsius (2,010 to 2,070 degrees Fahrenheit) as lava neared the ocean. Thermal surveys also showed that water temperatures exceeded 36°C (97°F) up to 600 meters from the entry point, according to OVPF. As of March 24, materials entering the ocean had created a new lava delta that extended the coastline by 190 meters.

“This eruption appears to be longer and to have produced a larger volume of lava than usual,” said Diego Coppola, a professor of volcanology at the University of Turin who coauthored the analysis with Campus. Such characteristics are often associated with the onset or end of an eruptive cycle. The most recent cycle began in 2014, culminated in 2015, and ended in July 2023. “The current activity,” he said, “likely marks the onset of a new cycle of frequent eruptive activity at Piton de la Fournaise.”

NASA Earth Observatory image by Michala Garrison, using Landsat data from the U.S. Geological Survey and elevation data from the Shuttle Radar Topography Mission (SRTM). Story by Adam Voiland.


The post Réunion Island Lava Reaches the Sea appeared first on NASA Science.

MRU high school fellowship

The post MRU high school fellowship appeared first on Marginal REVOLUTION.


How can we see what is invisible?


Axios, Super Popular NPM Package, Was Compromised in Attack on the Module’s Maintainer

StepSecurity:

If you have installed axios@1.14.1 or axios@0.30.4, assume your system is compromised.

There are zero lines of malicious code inside axios itself, and that’s exactly what makes this attack so dangerous. Both poisoned releases inject a fake dependency, plain-crypto-js@4.2.1, a package never imported anywhere in the axios source, whose sole purpose is to run a postinstall script that deploys a cross-platform remote access trojan. The dropper contacts a live command-and-control server, delivers separate second-stage payloads for macOS, Windows, and Linux, then erases itself and replaces its own package.json with a clean decoy. A developer who inspects their node_modules folder after the fact will find no indication anything went wrong.

This was not opportunistic. It was precision. The malicious dependency was staged 18 hours in advance. Three payloads were pre-built for three operating systems. Both release branches were poisoned within 39 minutes of each other. Every artifact was designed to self-destruct. Within two seconds of npm install, the malware was already calling home to the attacker’s server before npm had even finished resolving dependencies. This is among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package.

Could be my bigotry against JavaScript speaking, but I find it unsurprising that this happened to the same framework that this and this happened to.
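One practical upshot of the report: since the dropper erases itself from node_modules, the lockfile is a better place to look for indicators of compromise than the installed tree. Here's a sketch of that check — the version numbers come from the StepSecurity report above, but the lockfile-walking logic is my own assumption about npm's v2/v3 lockfile layout, not an official detection tool:

```javascript
// Sketch: scan a parsed package-lock.json (npm lockfile v2/v3) for the
// poisoned axios releases and the fake dependency they inject.
// Indicator versions are from the StepSecurity report quoted above.
const COMPROMISED = {
  axios: new Set(["1.14.1", "0.30.4"]),
  "plain-crypto-js": new Set(["4.2.1"]),
};

function findIndicators(lockfile) {
  const hits = [];
  // Lockfile v2/v3 keys entries by install path, e.g. "node_modules/axios"
  // or "node_modules/axios/node_modules/plain-crypto-js" when nested.
  for (const [path, entry] of Object.entries(lockfile.packages ?? {})) {
    const name = path.split("node_modules/").pop();
    if (COMPROMISED[name]?.has(entry.version)) {
      hits.push(`${name}@${entry.version}`);
    }
  }
  return hits;
}

// Example against a minimal fabricated lockfile:
const example = {
  packages: {
    "": { name: "my-app" },
    "node_modules/axios": { version: "1.14.1" },
    "node_modules/axios/node_modules/plain-crypto-js": { version: "4.2.1" },
  },
};
console.log(findIndicators(example)); // [ 'axios@1.14.1', 'plain-crypto-js@4.2.1' ]
```

In a real project you'd `JSON.parse(fs.readFileSync("package-lock.json"))` and feed that in; `npm ls axios` answers the same question for the currently resolved tree.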

 ★ 

Active Spring-Like Pattern Across the Eastern Half of the Country