December ICE Mortgage Monitor: Home Prices "Firmed" in November, Up 0.8% Year-over-year

Today, in the Real Estate Newsletter: December ICE Mortgage Monitor: Home Prices "Firmed" in November, Up 0.8% Year-over-year

Brief excerpt:
Inventory Impacts Prices

• About one-third of markets are seeing annual home price declines, while two-thirds are posting gains

• The Northeast and Midwest dominate growth, with 24 of the top 25 markets for annual price gains located there, while all 36 markets with annual declines are in the South and West
...

ICE Home Price Index

• New Haven, Conn., leads with prices up +7.3% year-over-year, followed by Syracuse, N.Y. (+7.2%), and Scranton, Pa. (+6.9%). The largest declines are in parts of Florida, Texas, Colorado and California

• Markets are showing signs of rebalancing, with inventory improving in the Northeast and tightening in the South and West

• The 10 hottest markets saw monthly gains below their 12-month averages, hinting at cooler growth ahead, while 27 of 36 markets with annual declines posted adjusted price increases from October to November, signaling modest firming in late 2025
emphasis added
There is much more in the article.

What if bigger models, like bigger stars, fail faster?

The current debate over whether OpenAI has become “too big to fail,” triggered by the viral Wall Street Journal article, tends to frame the risk in familiar economic terms: over-concentration, interlocking commitments, trillion-dollar infrastructure buildouts, and the emergence of a firm whose collapse could destabilize a sector that now props up a sluggish U.S. economy. That argument is correct but incomplete. The deeper structural fragility lies not in the financing of AI infrastructure but in the epistemic dynamics of the models themselves. As we worked through the numbers, it became clear that OpenAI’s infrastructure roadmap—petawatts of compute, trillion-parameter systems, multi-trillion-dollar capital requirements spread across cloud providers, chip manufacturers, and sovereign backers—was constructed on an essentially theological belief in seamless exponential model improvement, a belief that assumed scaling could continue indefinitely toward “AGI.” That faith was not grounded in empirical availability of training data or in any theoretical understanding of how learning actually behaves at frontier scale. The infrastructure has been sized for stars that burn hotter and hotter, without regard for the fuel supply.


Sloptraptions is an AI-assisted opt-in section of the Contraptions Newsletter. If you only want my hand-crafted writing, you can unsubscribe from this section.


The real fuel, of course, is training data: the cultural, linguistic, computational, and behavioral traces that models attempt to fit. And here the numbers are uncompromising. The growth of high-quality data is slow and diminishing. The world’s stock of usable text, code, imagery, and speech grows incrementally, not exponentially. Meanwhile model sizes, compute budgets, and context windows have expanded by orders of magnitude. That mismatch means that newer, larger models are trained on datasets that are only marginally larger than those that fed their predecessors. The result is not graceful scaling but increasing epistemic brittleness. These larger systems learn the training distribution with greater and greater precision, pushing well past the semantic “signal” of an era and into its high-frequency cultural noise. They fit not only the stable structures of human knowledge but its accidents, its transient biases, its stylistic detritus. Shear’s observation—that frontier models are barely regularized and therefore massively overfit—captures this dynamic in accessible language.

But the deeper point is that overfitting to a static cultural snapshot becomes more catastrophic the larger the model grows. Culture is non-stationary; code ecosystems evolve; APIs change; institutions churn; slang mutates; the factual substrate of the world drifts each month. A small model trained on yesterday’s world degrades slowly. A large model trained on yesterday’s world degrades quickly and fails sharply.

This leads to a paradox at the heart of current AI economics. The trillion-dollar infrastructure wave justified by OpenAI’s ambitions has been built to support the next generation of massive models, but those massive models become obsolete faster than smaller ones. Like large stars, they burn brighter but collapse sooner. They present answers with greater surface coherence and tighter epistemic compression, giving users the illusion of deeper insight when they are actually reproducing the micro-structure of an outdated distribution. People will rely on this increased apparent precision—mistaking fluency for truth—and take correspondingly larger risks, operational, financial, political, and scientific. Precision becomes a kind of leverage: as confidence grows faster than correctness, the system tilts toward a bubble of over-trusted, under-verified automated reasoning. When the model slips outside of its training-era manifold, it does so abruptly, invisibly, and in ways that propagate errors with unprecedented speed across the organizations that depend on it. This is a new kind of systemic fragility: epistemic over-leverage driven by model scale rather than financial leverage driven by debt.

Against this background, the “too big to fail” scenario acquires a different meaning. The infrastructure ecosystem—Oracle’s data centers, Microsoft’s GPU clusters, Broadcom’s networking pipelines, Nvidia’s supply chain—was scaled for frontier models that may offer shrinking marginal returns and increasing temporal instability. If model quality plateaus or degrades because data does not keep pace, the economic justification for the infrastructure may collapse even as the infrastructure itself remains technically capable and commercially underutilized. The danger is not that OpenAI fails outright, but that the sector pivots into a phase where the largest models have the shortest useful lifespans, while the capital commitments they require stretch across decades. This is a structural misalignment between epistemic time and financial time.

Yet the story need not end in collapse. There is a way out, and it comes from expanding the data manifold itself rather than merely scaling the model against a static corpus. The next major frontier is likely not text or code but 4D video—continuous, high-bandwidth, spatiotemporal sensory data that more closely matches the real structure of the physical world. Unlike textual culture, which is finite and saturating, the spatiotemporal world generates unbounded data streams. High-fidelity 4D capture, simulation, and reconstruction offer an escape from the bottleneck that is slowly strangling language-model scaling. Models trained on rich physical dynamics rather than frozen cultural snapshots would not merely grow larger; they would grow deeper, anchored to a data distribution that evolves with reality instead of drifting away from it. If the industry moves decisively toward 4D multimodal modeling—robotics, embodied agents, physical reasoning, simulation feedback loops—then the present overfitting trap can be broken. The fuel supply becomes effectively renewable, and the models’ lifespans lengthen rather than shrink. In that sense, the most optimistic path is not to keep scaling cultural predictors but to graduate beyond them, giving the infrastructure something real to learn from and restoring coherence between model scale, data scale, and the world itself.

Causal Why vs Teleological Why

Asked ChatGPT a question that has always bugged me about English as well as all the Indian languages I know. Sharing the one-shot answer with no further processing. This kinda explains why German is a better language for philosophy than English. Possibly Russian too.

Read more



Political Organization in Pre-Colonial Africa

We provide an overview of the explanations for the relative lack of state formation historically in Africa. In doing so we systematically document for the first time the extent to which Africa was politically decentralized, calculating that in 1880 there were probably 45,000 independent polities which were rarely organized on ethnic lines. At most 2% of these could be classified as states. [emphasis added by TC] We advance a new argument for this extreme political decentralization positing that African societies were deliberately organized to stop centralization emerging. In this they were successful. We point out some key aspects of African societies that helped them to manage this equilibrium. We also emphasize how the organization of the economy was subservient to these political goals.

That is from a new NBER working paper by Soeren J. Henn and James A. Robinson.

The post Political Organization in Pre-Colonial Africa appeared first on Marginal REVOLUTION.

Trump Administration to End Affirmative Action for Male College Applicants

While the bias in acceptance rates towards men in college admissions isn’t news to those who follow this stuff and actually know what they’re talking about, a bunch of people are about to be in for a very rude awakening (boldface mine):

Brown University, one of the most selective institutions in America, attracted nearly 50,000 applicants who vied for just 1,700 freshman seats last year.

The university accepted nearly equal numbers of male and female prospects, though, like some other schools, it got nearly twice as many female applicants. That math meant it was easier for male students to get in — 7 percent of male applicants were admitted, compared with 4.4 percent of female applicants, university data shows.

The Trump administration’s policies may soon put an end to that advantage enjoyed by men at some colleges, admissions and higher-education experts say.

While much of the president’s recent scrutiny of college admissions practices has focused on race, these experts say his ban on diversity, equity and inclusion is likely to hit another underrepresented group of applicants: men, and particularly White men — the largest subset of male college applicants.

“This drips with irony,” said Ted Mitchell, president of the American Council on Education, or ACE, the nation’s largest association of universities and colleges, who said he expects that colleges and universities will end any consideration of gender in admission. “The idea of males, including White males, being at the short end of the stick all of a sudden would be a truly ironic outcome.”

Universities are looking at the administration’s edicts “and they’re saying, ‘Well, we’d rather be cautious than stick our neck out’” by continuing to give advantages to male applicants, said ACE’s Mitchell, who was undersecretary of education under President Barack Obama. “I think we will see people dropping gender preferences, even though it is still within the law.”

…Private institutions are allowed to consider gender in admission under Title IX, the federal law otherwise banning discrimination by universities and colleges that get federal funding. That’s due to a loophole dating from when the law was passed, in 1972.

It would be fitting for our times if this somehow were brought before the Supreme Court, and they determined that, of course, one can discriminate against women. No doubt, this also will serve as ‘culture war’ fodder.

Housing December 8th Weekly Update: Inventory Down 2.7% Week-over-week

Altos reports that active single-family inventory was down 2.7% week-over-week.  Inventory usually starts to decline in the fall and then declines sharply during the holiday season.

The first graph shows the seasonal pattern for active single-family inventory since 2015.

Altos Year-over-year Home Inventory
Click on graph for larger image.

The red line is for 2025.  The black line is for 2019.  

Inventory was up 15.3% compared to the same week in 2024 (last week it was up 15.6%), and down 4.1% compared to the same week in 2019 (last week it was down 4.3%). 

Inventory started 2025 down 22% compared to 2019.  Inventory has closed most of that gap, but it appears inventory will still be below 2019 levels at the end of 2025.

Altos Home Inventory
This second inventory graph is courtesy of Altos Research.

As of December 5th, inventory was at 795 thousand (7-day average), compared to 817 thousand the prior week.  

Mike Simonsen discusses this data and much more regularly on YouTube.

Disagreement in Science: Missing Women by David Klinowski

Here's a study of women in science that explores a novel angle.

David Klinowski; Voicing Disagreement in Science: Missing Women. The Review of Economics and Statistics 2025; 107 (6): 1743–1753. doi: https://doi.org/10.1162/rest_a_01322 

Abstract: This paper examines the authorship of post-publication criticisms in the scientific literature, with a focus on gender differences. Bibliometrics from journals in the natural and social sciences show that comments that criticize or correct a published study are 20% to 40% less likely than regular papers to have a female author. In preprints in the life sciences, prior to peer review, women are missing by 20% to 40% in failed replications compared to regular papers, but they are not missing in successful replications. In an experiment, I then find large gender differences in willingness to point out and penalize a mistake in someone's work.

Guetlein defends Golden Dome secrecy, says industry is ‘well informed’ despite criticism

Guetlein said his office has held extensive private engagements with industry.

The post Guetlein defends Golden Dome secrecy, says industry is ‘well informed’ despite criticism appeared first on SpaceNews.

China hearing focuses on U.S. policy shortfalls


A House hearing on the rise of China’s space program turned into a broader critique of U.S. space policy, including NASA’s current approach to returning astronauts to the moon.

The post China hearing focuses on U.S. policy shortfalls appeared first on SpaceNews.

Russia Blocks FaceTime and Snapchat

Dasha Litvinova, reporting for the AP:

Russian authorities said Thursday they have imposed restrictions on Apple’s video calling service FaceTime, the latest step in an effort to tighten control over the internet and communications online. State internet regulator Roskomnadzor alleged in a statement that the service is being “used to organize and conduct terrorist activities on the territory of the country, to recruit perpetrators (and) commit fraud and other crimes against our citizens.” Apple did not respond to an emailed request for comment.

The Russian regulator also announced that it has blocked Snapchat, a messaging app for sharing photos, videos and text messages, citing the same grounds it gave for restricting FaceTime. It said that it took the action Oct. 10 even though it only reported the move on Thursday.

I’m sure the crime rate in Russia will soon plummet. (I’m curious why iMessage isn’t blocked too.)


★ Meta Says Fuck That Metaverse Shit

Mike Isaac, reporting for The New York Times, “Meta Weighs Cuts to Its Metaverse Unit” (gift link):

Meta is considering making cuts to a division in its Reality Labs unit that works on the so-called metaverse, said three employees with knowledge of the matter.

The cuts could come as soon as next month and amount to 10 to 30 percent of employees in the Metaverse unit, which works on virtual reality headsets and a V.R.-based social network, the people said. The numbers of potential layoffs are still in flux, they said. Other parts of the Reality Labs division develop smart glasses, wristbands and other wearable devices. The total number of employees in Reality Labs could not be learned.

Meta does not plan to abandon building the metaverse, the people said. Instead, executives expect to shift the savings from the cuts into investments in its augmented reality glasses, the people said.

Meta confirmed the cuts to the Wall Street Journal, and Bloomberg’s Kurt Wagner broke the news Thursday.

I’m so old that I remember ... checks notes ... four years ago, when Facebook renamed itself Meta in late 2021 with this statement: “Meta’s focus will be to bring the metaverse to life and help people connect, find communities and grow businesses.” And Mark Zuckerberg, announcing the change, wrote:

But all of our products, including our apps, now share a new vision: to help bring the metaverse to life. And now we have a name that reflects the breadth of what we do.

From now on, we will be metaverse-first, not Facebook-first. That means that over time you won’t need a Facebook account to use our other services. As our new brand starts showing up in our products, I hope people around the world come to know the Meta brand and the future we stand for.

Many of us never fell for this metaverse nonsense. For example, I’m also old enough to remember just one year later, near the end of Joanna Stern’s on-stage interview with Craig Federighi and Greg Joswiak at a 2022 WSJ event, seven months before Vision Pro was announced (at the 29:30 mark):

Stern: You have to finish this sentence, both of you. The metaverse is...

Joz: A word I’ll never use.

He might want to use the word now, just to make jokes.

Om Malik, writing in April this year:

Some of us are old enough to remember that the reason Mark renamed the company is because the Facebook brand was becoming toxic, and associated with misinformation and global-scale crap. It was viewed as a tired, last-generation company. Meta allowed the company to rebrand itself as something amazing and fresh.

Lastly, yours truly, linking to Malik’s post:

And so while “Meta” will never be remembered as the company that spearheaded the metaverse — because the metaverse never was or will be an actual thing — it’s in truth the perfect name for a company that believes in nothing other than its own success.

Quoting Cory Doctorow

Now I want to talk about how they're selling AI. The growth narrative of AI is that AI will disrupt labor markets. I use "disrupt" here in its most disreputable, tech bro sense.

The promise of AI – the promise AI companies make to investors – is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company.

That's it.

That's the $13T growth story that Morgan Stanley is telling. It's why big investors and institutionals are giving AI companies hundreds of billions of dollars. And because they are piling in, normies are also getting sucked in, risking their retirement savings and their family's financial security.

Cory Doctorow, The Reverse Centaur’s Guide to Criticizing AI

Tags: cory-doctorow, ai-ethics, ai

Using LLMs at Oxide

Using LLMs at Oxide

Thoughtful guidance from Bryan Cantrill, who evaluates applications of LLMs against Oxide's core values of responsibility, rigor, empathy, teamwork, and urgency.

Via Lobste.rs

Tags: ai, generative-ai, llms, oxide, bryan-cantrill

Quoting David Crespo

What to try first?

Run Claude Code in a repo (whether you know it well or not) and ask a question about how something works. You'll see how it looks through the files to find the answer.

The next thing to try is a code change where you know exactly what you want but it's tedious to type. Describe it in detail and let Claude figure it out. If there is similar code that it should follow, tell it so. From there, you can build intuition about more complex changes that it might be good at. [...]

As conversation length grows, each message gets more expensive while Claude gets dumber. That's a bad trade! [...] Run /reset (or just quit and restart) to start over from scratch. Tell Claude to summarize the conversation so far to give you something to paste into the next chat if you want to save some of the context.

David Crespo, Oxide's internal tips on LLM use

Tags: coding-agents, ai-assisted-programming, oxide, claude-code, generative-ai, llms

What has gone wrong with tourism to Las Vegas?

Agitators in the city have attempted to document the deterioration by posting ominous images of barren casinos, conjuring the perception of a place hollowed out by economic armageddon. The reality is more nuanced, but it is true that practically every conceivable indicator tracking tourism to Las Vegas is flashing warning signs. Hotel occupancy has cratered. Rooms were only 66.7 percent full in July, down by 16.8 percent from the previous year. The number of travelers passing through Harry Reid International Airport also declined by 4.5 percent in 2025 during an ongoing ebb of foreign tourists, for familiar reasons. Canadians, historically one of the city’s most reliable sources of degenerates, have effectively vanished. Ticket sales for Air Canada jets flying to Las Vegas have slipped by 33 percent, while the Edmonton-based low-cost carrier Flair has reported a 62 percent drop-off.

Here is the full story, which shows it is by no means an exclusively Canadian phenomenon.  Overall, I am happy to see a shift away from gambling, drinking, and “shows for wealthy old people”?

The post What has gone wrong with tourism to Las Vegas? appeared first on Marginal REVOLUTION.

Affordability, Part II


In last week’s primer I showed that the media’s usual story — that Americans have been impoverished by the surge in inflation that began in 2021 — isn’t right. In fact, according to the conventional measure that economists use to gauge purchasing power – real income – the purchasing power of most Americans is higher today than it was before the 2020 pandemic. But in last week’s primer I also argued that looking only at income divided by the Consumer Price Index (CPI) means that we miss some important ways in which the current economy is worse than the conventional measures indicate. In particular, I emphasized the adverse effects of high borrowing costs and low hiring, which aren’t included in the CPI.

Beyond that, I also argued our general sense of affordability encompasses more than just purchasing power. We also care about economic inclusion, security, and fairness.

Beyond the paywall I’ll explain these concepts and how they help explain Americans’ economic dissatisfaction. Specifically, I’ll address the following:

1. Why life doesn’t feel affordable when people aren’t able to buy those goods and services that make them feel that they are full members of society.

2. Why life doesn’t feel affordable unless people feel assured that a stretch of bad luck won’t lead to financial disaster.

3. Why it’s important to people that prices reflect their sense of fair play, and that they don’t see themselves being taken advantage of by those in positions of privilege and power.

Read more

Niche Museums: The Museum of Jurassic Technology

Niche Museums: The Museum of Jurassic Technology

I finally got to check off the museum that's been top of my want-to-go list since I first started documenting niche museums I've been to back in 2019.

The Museum of Jurassic Technology opened in Culver City, Los Angeles in 1988 and has been leaving visitors confused as to what's real and what isn't for nearly forty years.

Tags: museums

Colors of growth

This looks pretty tremendous:

We develop a novel approach to measuring long-run economic growth by exploiting systematic variation in the use of color in European paintings. Drawing inspiration from the literature on nighttime lights as a proxy for income, we extract hue, saturation, and brightness from millions of pixels to construct annual indices for Great Britain, Holland, France, Italy, and Germany between 1600 and 1820. These indices track broad trends in existing GDP reconstructions while revealing higher frequency fluctuations – such as those associated with wars, political instability, and climatic shocks – that traditional series smooth over. Our findings demonstrate that light, decomposed into color and brightness components, provides a credible and independent source of information on early modern economic activity.

That is new research by Lars Boerner, Tim Reinicke, Samad Sarferaz, and Battista Severgnini.  Via Ethan Mollick.

The post Colors of growth appeared first on Marginal REVOLUTION.

Which economy did best in 2025?

Our annual ranking returns

Endless expanse

This view of the seemingly endless expanses of the Chilean Atacama Desert definitely deserves to be today’s Picture of the Week. The silver full Moon shines bright in the beautiful gradient evening sky. Below it, to the right, the giant dome of ESO’s Extremely Large Telescope (ELT) glows with the golden sunset light.

The ELT is perched atop Cerro Armazones, at an altitude of 3046 m. The dome might look small in the image, but the full 30-minute walk up the stairs from the entrance of the dome to its top hints at its gigantic size: 80 m high and 93 m wide. Weighing about 6100 tonnes, the dome is designed to protect the telescope and its mirrors, including the 39-m wide primary mirror — the biggest eye on the sky.

To the left of Cerro Armazones, the last sunbeams of the evening cast a dark triangular shadow: that of Cerro Paranal, home to ESO’s Very Large Telescope (VLT), from where this picture was taken by Luca Sbordone, ESO staff astronomer. It’s no wonder that this site hosts so many professional telescopes, as it boasts some of the darkest skies on Earth. Chile is in fact home to all of ESO’s observatories, thanks to a long-lasting partnership that goes back more than 60 years — may it be as timeless and inspiring as this view.

Compressing embedded files in Go

Go’s embed feature lets you bundle static assets into an executable, but it stores them uncompressed. This wastes space: a web interface with documentation can bloat your binary by dozens of megabytes. A proposal to optionally enable compression was declined because it would be difficult to handle all use cases. One solution? Put all the assets into a ZIP archive! 🗜️

Code

The Go standard library includes the archive/zip package to read and write ZIP archives. Its zip.NewReader() function turns a ZIP archive into a value that implements io/fs.FS and can replace embed.FS in most contexts.1

package embed

import (
  "archive/zip"
  "bytes"
  _ "embed"
  "fmt"
  "io/fs"
  "sync"
)

//go:embed data/embed.zip
var embeddedZip []byte

var dataOnce = sync.OnceValue(func() *zip.Reader {
  r, err := zip.NewReader(bytes.NewReader(embeddedZip), int64(len(embeddedZip)))
  if err != nil {
    panic(fmt.Sprintf("cannot read embedded archive: %s", err))
  }
  return r
})

func Data() fs.FS {
  return dataOnce()
}
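
As a quick usage sketch (mine, not code from the post), the fs.FS returned by Data() drops in wherever an embed.FS was used before, for example behind an HTTP file server. The import path and the frontend directory name below are hypothetical:

package main

import (
  "io/fs"
  "log"
  "net/http"

  // Hypothetical import path for the embed package shown above.
  "example.com/project/common/embed"
)

func main() {
  // Narrow the archive to a (hypothetical) frontend/ subtree.
  frontend, err := fs.Sub(embed.Data(), "frontend")
  if err != nil {
    log.Fatalf("cannot open frontend subtree: %s", err)
  }
  // http.FS adapts an fs.FS to the http.FileSystem expected by http.FileServer.
  http.Handle("/", http.FileServer(http.FS(frontend)))
  log.Fatal(http.ListenAndServe(":8080", nil))
}

As the performance section below notes, compressed entries are not seekable, which limits Range requests and MIME sniffing for extension-less files; plain file serving still works.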

We can build the embed.zip archive with a rule in a Makefile. We specify the files to embed as dependencies to ensure changes are detected.

common/embed/data/embed.zip: console/data/frontend console/data/docs
common/embed/data/embed.zip: orchestrator/clickhouse/data/protocols.csv 
common/embed/data/embed.zip: orchestrator/clickhouse/data/icmp.csv
common/embed/data/embed.zip: orchestrator/clickhouse/data/asns.csv
common/embed/data/embed.zip:
    mkdir -p common/embed/data && zip --quiet --recurse-paths --filesync $@ $^

The automatic variable $@ is the rule target, while $^ expands to all the dependencies, modified or not.

Space gain

Akvorado, a flow collector written in Go, embeds several static assets:

  • CSV files to translate port numbers, protocols or AS numbers, and
  • HTML, CSS, JS, and image files for the web interface, and
  • the documentation.
Breakdown of the space used by each component before (left) and after (right) the introduction of embed.zip.

Embedding these assets into a ZIP archive reduced the size of the Akvorado executable by more than 4 MiB:

$ unzip -p common/embed/data/embed.zip | wc -c | numfmt --to=iec
7.3M
$ ll common/embed/data/embed.zip
-rw-r--r-- 1 bernat users 2.9M Dec  7 17:17 common/embed/data/embed.zip

Performance loss

Reading from a compressed archive is not as fast as reading a flat file. A simple benchmark shows it is more than 4× slower. It also allocates some memory.2

goos: linux
goarch: amd64
pkg: akvorado/common/embed
cpu: AMD Ryzen 5 5600X 6-Core Processor
BenchmarkData/compressed-12     2262   526553 ns/op   610 B/op   10 allocs/op
BenchmarkData/uncompressed-12   9482   123175 ns/op     0 B/op    0 allocs/op
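
The post doesn't show the benchmark source, but a minimal version might look roughly like the sketch below (mine; the asset name and its path inside the archive are assumptions):

package embed

import (
  stdembed "embed"
  "io"
  "io/fs"
  "testing"
)

// Hypothetical plain embedding of one asset, used as the uncompressed baseline.
//
//go:embed data/asns.csv
var uncompressedFS stdembed.FS

// readAll opens a file and reads it fully, which forces decompression when the
// filesystem is backed by a ZIP archive.
func readAll(b *testing.B, fsys fs.FS, name string) {
  b.Helper()
  f, err := fsys.Open(name)
  if err != nil {
    b.Fatalf("cannot open %s: %s", name, err)
  }
  defer f.Close()
  if _, err := io.Copy(io.Discard, f); err != nil {
    b.Fatalf("cannot read %s: %s", name, err)
  }
}

func BenchmarkData(b *testing.B) {
  b.Run("compressed", func(b *testing.B) {
    for i := 0; i < b.N; i++ {
      readAll(b, Data(), "asns.csv") // path inside embed.zip (assumed)
    }
  })
  b.Run("uncompressed", func(b *testing.B) {
    for i := 0; i < b.N; i++ {
      readAll(b, uncompressedFS, "data/asns.csv")
    }
  })
}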

Each access to an asset requires a decompression step, as seen in this flame graph:

CPU flame graph comparing the time spent on CPU when reading data from embed.zip (left) versus reading data directly (right). Because the Go testing framework executes the benchmark for uncompressed data 4 times more often, it uses the same horizontal space as the benchmark for compressed data.

While a ZIP archive has an index to quickly find the requested file, seeking inside a compressed file is currently not possible.3 Therefore, the files from a compressed archive do not implement the io.ReaderAt or io.Seeker interfaces, unlike directly embedded files. This prevents some features, like serving partial files or detecting MIME types when serving files over HTTP.
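
When seeking is really needed for a given asset, one possible workaround (my sketch, not something the post describes) is to buffer that file into memory before serving it, since a bytes.Reader implements io.ReadSeeker:

package embed

import (
  "bytes"
  "io/fs"
  "net/http"
)

// serveSeekable buffers a file from a ZIP-backed fs.FS into memory so that
// http.ServeContent gets an io.ReadSeeker and can honor Range requests and
// sniff the MIME type. Only reasonable for small, frequently requested assets.
func serveSeekable(w http.ResponseWriter, r *http.Request, fsys fs.FS, name string) {
  data, err := fs.ReadFile(fsys, name)
  if err != nil {
    http.Error(w, "not found", http.StatusNotFound)
    return
  }
  info, err := fs.Stat(fsys, name)
  if err != nil {
    http.Error(w, "stat failed", http.StatusInternalServerError)
    return
  }
  http.ServeContent(w, r, name, info.ModTime(), bytes.NewReader(data))
}

This trades memory for the missing seekability, in the same spirit as footnote 2's suggestion of keeping frequently accessed assets in memory.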


For Akvorado, this is an acceptable compromise to save a few mebibytes from an executable of almost 100 MiB. Next week, I will continue this futile adventure by explaining how I prevented Go from disabling dead code elimination! 🦥


  1. You can safely read multiple files concurrently. However, it does not implement ReadDir() and ReadFile() methods. ↩︎

  2. You could keep frequently accessed assets in memory. This reduces CPU usage and trades cached memory for resident memory. ↩︎

  3. SOZip is a profile that enables fast random access in a compressed file. However, Go’s archive/zip module does not support it. ↩︎

The chess culture that is India

Sarwagya Singh Kushwaha has become the youngest player in chess history to earn an official FIDE rating at the age of three years, seven months and 20 days.

Born in 2022, Sarwagya — from Sagar in the central Indian state of Madhya Pradesh — has been rated by FIDE, the international governing body of chess, which requires a player to score points against at least five rated opponents in official events.

The toddler’s first rating of 1572 is considerably above the minimum rating of 1400; he won five of his eight rated matches. As detailed by chess.com, Sarwagya’s victories have come against opponents including 22-year-old Abhijeet Awasthi (FIDE-rated 1542), 29-year-old Shubham Chourasiya (1559) and 20-year-old Yogesh Namdev (1696).

Sarwagya has broken the record held by another Indian child, Anish Sarkar, who set it at three years, eight months and 19 days old, in November 2024.

Here is more from the NYT, via the excellent Samir Varma.

The post The chess culture that is India appeared first on Marginal REVOLUTION.

Sunday Night Futures

Weekend:
Schedule for Week of December 7, 2025

Monday:
• No major economic releases scheduled.

From CNBC: Pre-Market Data and Bloomberg futures: S&P 500 and DOW futures are little changed (fair value).

Oil prices were up over the last week with WTI futures at $60.11 per barrel and Brent at $63.76 per barrel. A year ago, WTI was at $69, and Brent was at $74 - so WTI oil prices are down about 13% year-over-year.

Here is a graph from Gasbuddy.com for nationwide gasoline prices. Nationally prices are at $2.90 per gallon. A year ago, prices were at $2.97 per gallon, so gasoline prices are down $0.07 year-over-year.

Sunday 7 December 1662

(Lord’s day). A great snow, and so to church this morning with my wife, which is the first time she hath been at church since her going to Brampton, and Gosnell attending her, which was very gracefull. So home, and we dined above in our dining room, the first time since it was new done, and in the afternoon I thought to go to the French church; but finding the Dutch congregation there, and then finding the French congregation’s sermon begun in the Dutch, I returned home, and up to our gallery, where I found my wife and Gosnell, and after a drowsy sermon, we all three to my aunt Wight’s, where great store of her usuall company, and here we staid a pretty while talking, I differing from my aunt, as I commonly do, in our opinion of the handsomeness of the Queen, which I oppose mightily, saying that if my nose be handsome, then is her’s, and such like. After much discourse, seeing the room full, and being unwilling to stay all three, I took leave, and so with my wife only to see Sir W. Pen, who is now got out of his bed, and sits by the fireside. And after some talk, home and to supper, and after prayers to bed. This night came in my wife’s brother and talked to my wife and Gosnell about his wife, which they told me afterwards of, and I do smell that he I doubt is overreached in thinking that he has got a rich wife, and I fear she will prove otherwise. So to bed.

Read the annotations

Links 12/7/25

Links for you. Science:

Bird flu patient dies, marking second U.S. fatality in 2025
It’s the ‘most important fish in the sea.’ And it’s disappearing.
Figurine of Gander Covering Woman May Show 12,000-year-old Myth in Israel
A Lost Planet Created the Moon. Now, We Know Where It Came From.
Vaccines Do Not Cause Autism, No Matter What The CDC Website Now Says
Mystery creature found in ‘forbidden cloud forest’ of Peru is new species of marsupial

Other:

Border Patrol is taking the powers they want. The US Border Patrol’s desire to be THE national police force takes shape.
Social media isn’t driving the teenage “loneliness epidemic”: Teenagers’ loneliness was the same in the 1970s, 1980s, and 1990s, long before anyone heard of TikTok or smartphones. (excellent)
America Is Becoming Dallas. Part One: The Lord of Plano.
America Is Becoming Dallas. Part Two: Sprawling to Freedom
How D.C. developers made big money on a taxpayer-funded housing project. Developers with political ties to Mayor Muriel E. Bowser stand to collect millions of dollars more than housing experts say is normal for an affordable housing project. (Bowser’s Green Team for the win! Lolsob)
MD housing secretary: Trump cuts will cause MD homelessness to surge 25%
Trump threatened him with death. Now the Pentagon will probe him.
Federal judge rules Trump’s deployment of National Guard in D.C. is ‘unlawful’
Marjorie Taylor Greene’s departure is a canary in a coal mine for MAGA
The Skeletons in Summers’s Closet
MAGA has a foreigner invader problem
The CPA and the Lawyer Who Served Jeffrey Epstein—and Control His Fortune and Secrets
Nation’s Largest Landlord Is Encouraged to Break the Law With Measly Fine for Price Fixing Scheme That Kept Rents Artificially High and Worsened Homelessness Crisis
Woman deported from Maryland shown on video being dragged in Ghana
The realities of being a pop star.
The Outrageous False Equivalences That Prop Up President Trump
Larry Summers and the Hunger Games: Who remembers the food shock of 2005-2008? Just another global policy disaster from then Treasury Secretary Summers.
Is Marc Andreessen just flat-out dumb?
‘Is the price of doing this worth it?’: North Carolina Republicans worry about Trump immigration raids
The doctor who falsely tied the MMR vaccine to autism takes his victory lap
The MAGA Influencers Rehabilitating Hitler: A growing constituency on the right wants America to unlearn the lessons of World War II.
The Censorship Crybabies Are Now The Censors: FDA’s Vinay Prasad Uses Copyright Claims To Silence Critic
ACFD rescues TikTok-famous toucan from behind Pentagon City dishwasher
Stop Asking How Democrats Will Fight Trump. Start Asking Republicans
Bland, easy to follow, for fans of everything: what has the Netflix algorithm done to our films?
White House to pitch a Trump Obamacare extension with limits
A world of ratfuckers
Game Theory Explains How Algorithms Can Drive Up Prices
One Small Guardrail Finally Held Up Against Trump
Musk’s AI supercomputer, used by U.S. military, secretly relies on Chinese hardware

In Case You Missed It…

…a week of Mad Biologist posts:

The D.C. Occupation: Compounding Tragedy with Farce

LLMs Are an Upgrade to Mediocrity: the Occupation of Chicago Edition

Panicked by a Close Election in Tennessee, Trump Attempts to Bribe Democratic Rep. Cuellar

“…The Laws Are Designed Specifically to Prevent That from Being OK.”

Live coverage: SpaceX to launch 3,000th Starlink satellite of 2025 on record-setting 32nd flight of Falcon 9 booster

A SpaceX Falcon 9 rocket stands in the launch position at Launch Complex 39A at NASA’s Kennedy Space Center on Dec. 7, 2025, ahead of flying the Starlink 6-92 mission. SpaceX is using Falcon 9 booster B1067, which will make its record-breaking 32nd flight. Image: Adam Bernstein / Spaceflight Now

Update Dec. 7, 6:32 p.m. EST (2332 UTC): SpaceX scrubbed the launch.

Poor weather on Sunday kept SpaceX from achieving a couple of notable milestones for at least one more day.

The mission, dubbed Starlink 6-92, will feature the use of the company’s most flown Falcon booster, tail number B1067. On its 32nd flight, it will deliver to low Earth orbit SpaceX’s 3,000th Starlink satellite of the year.

Liftoff from historic Launch Complex 39A is scheduled for no earlier than Monday, Dec. 8, at 4:14 p.m. EST (2114 UTC), weather permitting. The rocket will fly on a south-easterly trajectory upon leaving Florida’s Space Coast.

Spaceflight Now will have live coverage beginning about an hour prior to liftoff.

Meteorologists with the 45th Weather Squadron forecast a 90 percent chance of favorable launch weather on Monday, with liftoff winds a potential concern. Teams also cited a low to moderate risk of impacts from upper-level wind shear and booster recovery weather.

The use of B1067 on this mission brings SpaceX one step closer to its current goal of certifying its Falcon boosters for up to 40 missions apiece. The ultimate number of missions a booster flies will partially depend on the types of missions for which it is used and whether it is needed for an expendable flight.

SpaceX is looking to achieve the same level of reuse for the payload fairings on a Falcon rocket’s upper stage, but typically only provides updates on those during the launches of customer missions for the government or from other companies.

Under a resilient ridge, prolonged tule fog episode brings cold and damp weather to the Central Valley but anomalously warm/dry weather elsewhere

An increasingly resilient ridge keeps California dry, but with markedly different daily weather in dense tule fog vs non-fog zones, following a very wet and relatively warm autumn

Well, the final numbers are now in and they reflect what everyone has been talking about in Southern California: it genuinely was historically wet this fall in […]

The post Under a resilient ridge, prolonged tule fog episode brings cold and damp weather to the Central Valley but anomalously warm/dry weather elsewhere first appeared on Weather West.

Trump Selling Ukraine for Cash

Wall St Journal Investigation Published the Details

President Trump is enabling an aggressive effort for U.S. companies, and some of his friends and business associates, to start massive new business ventures with Russia. Business that could go into the hundreds of billions of dollars. Business that abandons sanctions on Russia such as, “a senior Exxon Mobil executive discussed returning to the massive Sakhalin [oil] project if the two governments gave the green light as part of a Ukraine peace process” and “a college friend of Donald Trump Jr. and campaign donor to his father, has been in talks to acquire a stake in a Russian Arctic gas project if it is released from sanctions”. Business that thwarts a more open and productive global economy by bypassing European entities who might bring healthy, cost-effective competition to these ventures and instead locks in exclusive U.S./Russian deals. Business that is not about a system more open to all businesses, big and small, to engage between our countries but is rather all about the big players, the wealthy, the well-connected, the ones funneling huge amounts into buying those connections. In other words business that steers the U.S. and global economy ever more toward being an exclusive playground of those big players.

All of this hinging on Putin getting what he wants in Ukraine.

The WSJ reported on this in two pieces, “Make Money Not War” and “What Does Putin Want? far more than just the conquest of eastern Ukraine”. They analyzed a lot of publicly known information but also dug underneath what is known with “dozens of officials, diplomats, and former and current intelligence officers from the U.S., Russia and Europe, and American lobbyists and investors close to the administration.” They first published this recently on Friday the 28th. I expected that by Sunday major news sources would be jumping all over this. It’s huge news in itself. The idea is insulting to anyone who cares about Ukraine or about national sovereignty or stopping Putin from warring on neighbors and Europe.

It would also seem to be a huge factor in how Trump will be perceived going forward. Among the many things he has done that would seem to be insults either to his base, like limiting Medicaid that is essential to many red state rural areas, or to almost everyone, like reducing education grants for training nurses, this may be the biggest. Willingness to negotiate on Putin getting what he wants in Ukraine as long as big business players get big deals, some of which will no doubt benefit Trump. I was shocked that there was hardly any coverage of this. The major news sources frequently cover what each other has discovered while giving credit to which reported it first, but looking through a list of the major sources as of Sunday showed none even noting it, much less giving it the top exposure it needs.

There is history and irony in this emphasis on business. The idea of the U.S. engaging heavily in economic give-and-take with adversaries has been a good idea for decades. It’s what was behind economic engagement with China starting with Nixon. The same with Russia after the collapse of the U.S.S.R. If we’re heavily dependent on each other’s economies then we’re less likely to be at war. But those past examples did not involve blackmail. Did not involve an adversary warring on a neighbor and then saying they’d stop there if we gave them lots of mutually profitable business.

The irony is Putin could have had this without war. To make this Ukraine-territory-for-business notion more palatable, ideas have been floated of ways it could help Ukraine. That they could have huge data centers to provide A.I. services to the U.S. That they could have big, profitable trade exchanges with Russia. That there could be a whole industry around rebuilding devastated parts of Ukraine. But Putin could have had all that and better circumstances without war. If he had pursued such economics without war, there could be a thriving Ukraine economy heavily engaged, not just with the west, but with Russia as well. He could have a Ukrainian populace happy to be a neighbor of, and on good terms with, Russia. All the wealth and life that Russia has lost to this war could have been avoided. He wouldn’t have the ego thrill of putting a pin in his wall map marking part of Ukraine as his, but he and Russia would have bigger benefits than what they’re trying to get now.

Note that Trump and his apologists will have plenty of plausible deniability to spin this. The business transactions can be profitable to the U.S., though in ways, as noted, that are all about the big and well-connected making deals among themselves. The idea of large trade interactions lessening the odds of larger wars is true, but not done this way. On territory, the line will probably be that Russia now holds certain areas and that this can’t be expected to change, even though tougher negotiations and greater U.S. and European support could change it. It was bad when Obama relented on Russia taking Crimea, and it’s much worse with their relentless destructive war on Ukraine.

Whether this approach of “give in to Putin and get business out of it” continues is as much of a guessing game as anything Trump does, given how erratic he is. He has flipped back and forth from seeming to want Ukraine to give in to talk of arming it so well it could drive Russia out. If he ultimately gives up on, or simply doesn’t get, this “give in but get business” approach, that makes it no less terrible and wrong that it is the current effort.

A big factor in all of this is that Putin is a liar. After some Ukrainian territory is designated permanently Russian, sanctions are lifted, and profitable business deals are running, there is nothing but a paper promise that Putin won’t simply start provoking conflict with Ukraine again, nibbling at the edges and setting up further expansion into its territory or any other he thinks he can get, just as he will have gotten out of a deal like this.

What the Trump apologists can’t make disappear are the massive conflicts of interest, with Trump’s negotiators well positioned to make huge amounts themselves out of all this, as the WSJ piece lays out. And they can’t erase Trump’s history of entangling U.S. interests with his own financial interests, as with his personal business dealings with Middle East countries and with crypto-business players around the globe.

Trump is selling Ukraine for cash. That ought to be a huge story, and a huge blow to his ability to hold onto his voters and to hold all his Republican underlings in line.



The post Trump Selling Ukraine for Cash appeared first on DCReport.org.

w/e 2025-12-07

The last complete week of being here alone, with only Pippa the cat and the builders in the garage for company. The garage roof is now nearly finished, looking good, and is watertight. What a novelty.

Pippa and I have our regular routine throughout the day: our respective feeding times, the her-on-my-lap times (morning coffee, afternoon tea, evening watching TV), the heading to bed time.

I can’t decide if caring for an animal like this is good for one’s mental health – the routine, thinking of a creature other than yourself, the probably false belief that when she’s chosen to sit on your lap it’s because you’re a caring person who’s at one with nature – or whether the obsession with satisfying their irrational whims will slowly drive you mad.

I had been relieved we hadn’t had many mice – whole or part – left on the floor recently. But on Friday evening I was heading up to bed, with only a light behind me casting a shadow up the stairs. As I neared the top one of our nifty motion-sensitive lights came on just in time to illuminate a dead rat on the top step, moments before I stepped on it. At least six inches long, tail aside.

Given the skill and effort it must take for a cat to catch and kill a rat, then get it through the cat flap, then carry it up stairs, you’d think Pippa would appear more pleased with herself. But she trotted past, on her way to bed, without even a glance at it.


§ I’ve continued to avoid some more important tasks by fiddling away at a redesign of this site. I do enjoy this kind of tinkering, especially design tinkering, with no client and no deadline. There’s no rush, I just do a little every so often, with time for it to percolate, and then look at it again with fresh eyes a day or two later.

Having something on the go is always good at night too – if I ever find myself wide awake, with my mind inevitably circling towards all the worst thoughts and worries, then thinking about my current design and/or coding project is the most reliable way to get my brain focused on something else.


§ I had a blood test and basic health check at the GP this week, the first I’ve had in a few years, and everything was fine. They didn’t say I would live forever but then they didn’t not say that either. So who can tell.


§ I forgot to mention last week, and maybe the week before, that I started watching The Studio with high hopes, having seen person after person enthusing about it. It was pretty unbearable and, having forced myself to keep trying, I ended up turning it off mid-way through episode 4.

I find it hard to put my finger on why it wasn’t good. It looks good but then so many shows are visually rich and impressive these days, so what. There was something smug about it. And although much of the humour relies on awkwardness and incompetence – which I’d usually love – it all manages to be so over the top as to be unbelievable, while the situations feel uninspired and predictable.


§ I watched more films this week:

  • Jour de Fête (Jacques Tati, 1949). I’d never seen any Tati films so thought I should fill a gap. It was fine! Much slower than I expected for something slapstick, which usually makes me think of Chaplin and Lloyd. A couple of days later I started on Monsieur Hulot’s Holiday but after 20 minutes or so I couldn’t face any more of its ponderousness so that’s probably me and Tati done for now.
  • The Graduate (Mike Nichols, 1967). I’d also never seen this and it was, perhaps unsurprisingly, great! A lot of fun, really good. Currently on Mubi and iPlayer.
  • Perfect Days (Wim Wenders, 2023). Nice, good. It’s a bit, “Ahhh, you see, we should all aspire to be as content with our lot as a quiet man who cleans toilets and smiles when he looks at trees, ahhh.” And I’d have liked it more if the guy’s musical taste didn’t feel so “music Wim Wenders liked when he was young”.
  • Grand Theft Hamlet (Sam Crane, Pinny Grylls, 2024). I wasn’t quite sure what to expect but liked this more than whatever I expected. It’s quite silly and was fun to watch two posh-ish English guys, as GTA characters, attempt to put on a Shakespeare production in Los Santos while, of course, everyone else wants to shoot everyone. I actually laughed out loud at one point, which is rare. There’s a nagging part of me that wonders/worries that some of the more sad and touching scenes were scripted / set up, which would be a shame.
  • Fingernails (Christos Nikou, 2023). I gave up on this about 35 minutes in. It wasn’t bad, and the performances were fine, but the script was pretty clunky. Quite a bit of, “As I explained earlier, [goes on to explain things]”. And a lot of, “Have you taken… The Test?” accompanied by glances at bandaged fingers. Ooh, whatever could this oh-so-mysterious test involve… in a film called Fingernails?!
  • The Apartment (Billy Wilder, 1960). I must confess that I didn’t really enjoy Wilder’s Some Like it Hot but I liked this a lot. It is Too Long but otherwise, top marks all round, funny and touching and thoughtful. Bonus: Sheldrake’s suits looked so good.

§ Every week I think, “Nothing’s happened, there’s nothing to write about,” and yet here I am and, apparently, here you are.


Read comments or post one

FOMC Preview: 25bps Rate Cut Expected

Most analysts expect the FOMC to reduce the Fed Funds rate by 25bps at the meeting this week to a target range of 3-1/2 to 3-3/4 percent.    Market participants currently expect two additional rate cuts in 2026.

Analysis suggests rates are currently slightly restrictive (Cleveland Fed) or perhaps already accommodative, even before this rate cut.  So, to cut rates in this environment, FOMC members are clearly expecting either inflation to decline quickly, an employment recession, or both.  This outlook should show up in the projections (lower inflation, higher unemployment rate).

From Goldman Sachs:
The FOMC is widely expected to deliver a third consecutive 25bp interest rate cut to 3.5-3.75% at what will likely be a contentious December meeting next week. ... The case for a cut is solid, in our view. Job growth remains too low to keep up with labor supply growth, the unemployment rate has risen for three months in a row to 4.4%, other measures of labor market tightness have weakened more on average, and some alternative data measures of layoffs have begun to rise recently, presenting a new and potentially more serious downside risk.
From BofA:
The Fed has signaled that it will cut rates by 25bp to 3.5-3.75% at its Dec meeting. We look for two or three substantive changes in the FOMC statement. The description of labor market conditions is likely to omit the language that the u-rate “remained low”, to reflect the 32bp uptick over the last three months.
...
The SEP is likely to show upgrades to growth in 2025 and 2026. ... However, as a mark-to-market based on the latest data, we think the u-rate for 4Q 2025 will be taken up by a tenth to 4.6%. ... These changes would provide some cover for cutting rates despite the expected upgrades to the growth outlook.
emphasis added
Projections will be released at this meeting. Here are the September projections.  

The BEA's estimate for first half 2025 GDP showed real growth at 1.6% annualized. Most estimates for Q3 GDP are around 3.5%.  That would put the real growth for the first three quarters at 2.2% annualized - well above the top end of the September projections.   So GDP for 2025 will likely be increased.
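(A rough back-of-the-envelope check, not from the original post: a simple average of the annualized quarterly rates gives (1.6 + 1.6 + 3.5) / 3 ≈ 2.2 percent, and exact compounding of the quarterly figures lands in essentially the same place.)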

GDP projections of Federal Reserve Governors and Reserve Bank presidents, Change in Real GDP1

Projection Date | 2025 | 2026 | 2027
Sept 2025 | 1.4 to 1.7 | 1.7 to 2.1 | 1.8 to 2.0
Jun 2025 | 1.2 to 1.5 | 1.5 to 1.8 | 1.7 to 2.0

1 Projections of change in real GDP and inflation are from the fourth quarter of the previous year to the fourth quarter of the year indicated.

The unemployment rate was at 4.4% in September.  The unemployment rate will likely increase further this year. There was no data for October due to the government shutdown, and the November report will be released on December 16th - the week after the FOMC meeting - so the FOMC is flying blind this week on the unemployment rate.  However, they will probably increase the 2025 projection (and possibly 2026) as justification for the rate cut.  An unemployment rate of 4.6% over the next few months might be recessionary (according to the Sahm rule).

Unemployment projections of Federal Reserve Governors and Reserve Bank presidents, Unemployment Rate2

Projection Date | 2025 | 2026 | 2027
Sept 2025 | 4.4 to 4.5 | 4.4 to 4.5 | 4.2 to 4.4
Jun 2025 | 4.4 to 4.5 | 4.3 to 4.6 | 4.2 to 4.6

2 Projections for the unemployment rate are for the average civilian unemployment rate in the fourth quarter of the year indicated.

As of September 2025, PCE inflation increased 2.8 percent year-over-year (YoY), up from 2.7 percent YoY in August.  Projections for PCE inflation will probably remain unchanged or be lowered slightly.

Inflation projections of Federal Reserve Governors and Reserve Bank presidents, PCE Inflation1

Projection Date | 2025 | 2026 | 2027
Sept 2025 | 2.9 to 3.0 | 2.4 to 2.7 | 2.0 to 2.2
Jun 2025 | 2.8 to 3.2 | 2.3 to 2.6 | 2.0 to 2.2

PCE core inflation increased 2.8 percent YoY, down from 2.9 percent in August.   Projections for 2025 core PCE inflation will likely be decreased.

Core Inflation projections of Federal Reserve Governors and Reserve Bank presidents, Core Inflation1

Projection Date | 2025 | 2026 | 2027
Sept 2025 | 3.0 to 3.2 | 2.5 to 2.7 | 2.0 to 2.2
Jun 2025 | 2.9 to 3.4 | 2.3 to 2.6 | 2.0 to 2.2

Sunday assorted links

1. The fight over Romansh (New Yorker).

2. How well can LLMs grade?

3. Kelsey Piper responds on Mississippi.  She is probably correct.

4. Future VP?

5. Is the political allocation of gay representatives skewed?

6. If Arnold Kling taught conservative thought.

7. Noah is right.

8. Kennedy Center update.

9. The quiet surrender of Fed independence.

The post Sunday assorted links appeared first on Marginal REVOLUTION.

2025 Year in Review

Well, nearly another year in the books. How did it go?

Books per year chart

It’s genuinely surprising to me how consistent my reading stays year-over-year. I don’t set reading goals or have a predictable pace, and every year has at least a month in which I’m reading nothing, my momentum grinding to a halt. But at the end of every year the total lands in the same range: it has stayed between 19 and 22 for the past five years.

The count is 19 at the moment, but I’m reading a fast-paced sci-fi so it’ll probably be 20 by the end of the year.

A Confederacy of Dunces was easily the most entertaining book this year - an absolute riot, funny and unique. Things Become Other Things, from Craig Mod, was the most affecting book of the year, the only one that made me cry a little. And The Fort Bragg Cartel was the most engaging, can’t-put-it-down book. If you can stomach the difficult material - detailed descriptions of war crimes and domestic abuse - I highly recommend reading it.

Race times

This was a decent year for running, too. I ran five 5ks and two half-marathons in 2025, and achieved my simple goal of running sub-20 in the 5k. The thing about running a mediocre 19:13 PR in high school and then running mid-19s twenty years later is that now I’m in the top 10% of my age group! On a relative basis, I keep getting faster.

Mostly I blame the extreme summer heat for some of the higher times: many of the races came with warnings about high humidity, high heat, and bad air quality, cautioning runners against overexertion. A sample from one pre-race email:

The weather forecast is for temperatures in the low 90s. Please dress and hydrate properly, and avoid overexertion. The Air Quality Index is predicted to be over 100 at race start, members of sensitive groups may experience health effects. Limit outdoor exposure if you are sensitive to ozone. This might be a great night to run easy or tempo effort, please adjust your pace expectations!

That said, I think I could still do better. Running low-19-minute times would be lovely, and I think it’s within my abilities. I’ve been following an ‘intuitive training plan’ this whole time, which in other words means not having a plan. In 2026 I plan to have a plan, and the cornerstone of that plan will probably be logging many more easy miles.


How’s work been going? I can point to the Val Town Retrospective that I wrote for most of the answer to that question. 2025 was a year of big ups and downs for Val Town. The job became more demanding even as I became more adjusted to it: it’s remarkable how adaptable people and organizations can be.

On a day-to-day level, as an engineer, the codebase has grown to the point where it’s a bit difficult to keep it all in my head, and there are important components that I shamefully haven’t directly worked on. For a ‘CTO’, not having the system memorized might feel like a no-no, but for an organization of this size my job is really to be a general-purpose builder, fixer, and understander.


I was really into rhythmic instrumental music: SML’s take on jazz in Small Medium Large and How Have You Been are amazing, the kind of music that works for focused coding, a dinner party, or a long drive.

I loved John Carroll Kirby’s alternative take on jazz too - which can sound cheesy, like elevator music, until you get a minute in.

Slow Mass’s Low on Foot was probably my album of the year: almost every song is marked with five stars in my music library in Swinsian.


I feel unsatisfied with my productive output in 2025. But this is a permanent condition I think.

First bike bag

Sewing was the big new thing. I sewed about five bags, including three for my bicycle, and rode almost 1,000 miles with them.

Bag 2

It’s a fantastic hobby. Designing the bags exercises my brain in just the right ways; it’s tactile and low-tech. My sewing machine was manufactured around 1970 and works great. I love the learning process: my first attempt at sewing a bag for the front rack of my bike yielded clear lessons for bag 2, like using stiffer fabric where the bag needs support and minimizing seams on the top surfaces to preserve waterproofing.

Pending another bike, I’m pretty much done with bike bags, but there are plenty more projects on the horizon for the sewing machine.

Besides the flashy bags-from-scratch, it’s been useful for simpler things like:

  • Restuffing my couch cushions and sewing them back closed
  • Repairing the pocket in some running shorts that had developed a hole
  • Hemming some jeans that were too long, and an oversized shirt

It’s been really rewarding, and sewing goes really well with instrumental jazz.


That said, my free-time coding projects have been fewer. I implemented indiepixel, a pixel-art rendering layer in Python for my Tidbyt display. And I maintained Placemark, putting time into simplifying it and adding a handful of new features, like drawing lines with automatic routing.

But that’s about it? The coding I’ve done on weekends has mostly been work-related, and not much of that either. I still have fun coding, but I have to say that it’s changed for me. The tech industry just feels bad in so many ways, from its open embrace of fascism to the nihilistic startups that advertise via rage-bait. LLMs have changed things a lot too: it’s hard to tell what people value anymore, and how people have fun. I’ve written a lot about LLMs, so won’t repeat it all. See: Would LLMs democratizing coding be a pyrrhic victory?, Hallucination City, LLMs pivot to the aesthetics of thinking, and more.

I’ve long aimed to diversify my joys: part of finding a love of music, art, sewing, running, and so on is that they can serve as backup ways to feel happy when the world’s tough. I see some of what’s happening now - people using computers to make art, automating the skillful work they used to do themselves - and I wonder what that leaves them time for: in the excess time, where do you find joy?

I’ve been finding most of that joy away from the keyboard, this year. I hope I rediscover some of that spark in 2026. I have been having fun learning Effect and writing some Rust, and there are plenty of ideas left.


Brooklyn continues to be good to me. Living here delivers on my priorities in life: things like never driving and living near friends. By those metrics, it does great, and it always surprises me just how much of the world is packed into the 97 square miles of the borough, with Manhattan and Queens nearby.

And yeah - the election of Zohran Mamdani makes it even better. This year was the first time that I knocked on doors for a mayoral candidate, and so did a majority of my friends. It’s pretty exciting. I think that the next few years will be great for the city, and though it’ll be really tough to deliver on all of his promises, even just having a mayor in office who shows up to the job and wants the best for his constituents will be a welcome change from the previous administration.


I started this blog in 2011 with a vague photo of San Jose and some non-committal prose. So 2026 will be the 15th anniversary of the blog.

Blogging has been, for me, an unalloyed success. It has connected me to people, given me a place to develop my thoughts, made some of my work on the internet - a place always decaying and forgetting - a little more permanent. I absolutely recommend everyone do it.

I know why most people don’t do it: not enough time and too much fear of publishing ‘bad writing.’ Maybe ‘nothing to write about,’ too, though this never seems that real to me, given how the average person I meet has interesting thoughts and ideas to share.

I forget exactly when I removed analytics from the blog, but it was a long time ago. Since then I don’t know what ‘takes off’ or ‘goes viral’ and it’s mostly fine with me. Lately though, I have been discovering other indie blogs with articles that reference or respond to mine, and I really want a way for this to be slightly more social. Not fully social of course - no comments and this is not part of any network - but I want to know about link-backs. That’s probably the focus for 2026.

I think this idea has been going around - my friend Waldo was discussing it the other day, and webmentions came up as an option. I’ve tried webmentions in the past with little success - not many blogs supported them and I got a lot of spam - but it’s worth another shot. It’s hard not to get a little discouraged off the jump: webmentions attract spam, their predecessor pingbacks were rife with abuse, trackbacks had even more spam, and even if I try to find backlinks with ahrefs.io, there are plenty of spam domains and SEO schemes there too. The internet is an adversarial place.
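For anyone curious what trying webmentions actually involves, the protocol is small: the sender fetches the target page, discovers its webmention endpoint, and POSTs form-encoded source and target URLs. A minimal sketch (the URLs are placeholders; endpoint discovery here only checks the HTTP Link header, and a real sender would also parse rel="webmention" links in the HTML and do some spam filtering):

    import requests

    def send_webmention(source, target):
        """Tell `target` that `source` links to it, per the Webmention spec."""
        resp = requests.get(target, timeout=10)
        endpoint = resp.links.get("webmention", {}).get("url")  # Link header only
        if not endpoint:
            return None  # target doesn't advertise a webmention endpoint here
        return requests.post(endpoint,
                             data={"source": source, "target": target},
                             timeout=10)

    # Hypothetical usage:
    # send_webmention("https://example.com/my-post", "https://example.org/their-post")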

In meta-blog news, this blog has been hosted on Netlify since 2017 and I can’t find a strong reason to switch off. It’s been rock-solid. I’ve been using Jekyll since I started in 2011 and it continues to work great, though if I started from scratch I’d probably use 11ty. It would be nice to have a little more power over server-rendering and deploy on Hetzner, but it seems like it’d be a step-up in complexity.


Riding the C&O Canal

Photo from riding the GAP trail + C&O Canal this year

every strange thing you’ve ever been into, every failed hobby or forgotten instrument, everything you have ever learned will come back to you, will serve you when you need it. No love, however brief, is wasted. - Louise Miller

SpaceX launches 28 Starlink satellites on Falcon 9 rocket from Vandenberg SFB

A SpaceX Falcon 9 rocket lifts off on the Starlink 11-15 mission from Space Launch Complex 4 East at Vandenberg Space Force Base on Dec. 7, 2025. Image: SpaceX

Update Dec. 7, 3 p.m. EST (2000 UTC): SpaceX confirms deployment of the 28 Starlink satellites.

SpaceX closed out the weekend with a mid-morning Falcon 9 rocket launch from Vandenberg Space Force Base in California.

The Starlink 11-15 mission added another 28 broadband internet satellites to its massive low Earth orbit constellation. This was SpaceX’s 115th launch of Starlink satellites so far in 2025.

Liftoff from Space Launch Complex 4 East happened at 9:58 a.m. PST (12:58 p.m. EST / 1758 UTC). The rocket flew on a south-easterly trajectory upon leaving the launch pad.

SpaceX launched the mission using the Falcon 9 booster with the tail number 1088. This was its 12th flight following the launches of NASA’s SPHEREx, Transporter-12 and two missions for the National Reconnaissance Office (NROL-57 and NROL-126).

About 8.5 minutes after liftoff, B1088 performed an autonomous landing on the drone ship, ‘Of Course I Still Love You.’ This marked the 168th booster landing on this vessel and the 545th booster landing to date for SpaceX.

SpaceX has another launch scheduled for later in the day on Sunday from NASA’s Kennedy Space Center. That mission will feature the 3,000th Starlink satellite launched in 2025.

Apollo 17 at Shorty Crater



Tom Stoppard (1937–2025)

Tom Stoppard, the great English playwright, passed away last week. I saw many of his plays, including his last one, about his apparently late-in-life discovery that he was Jewish, and that his immediate family had fled Czechoslovakia ahead of the Nazis, while most of the rest, with a few exceptions, had perished.

The play tells the story of three generations of assimilated Jews. You, the audience, of course know how it will end, but they don't, and they are optimistic that their current troubles will soon pass.  It's an eerie feeling to watch that play amidst the world's current uncertainties. 

The NYT tells his story through that final play.

When Tom Stoppard Confronted His Background in His Final Play
The playwright, who learned about his Jewish heritage late in life, addressed it in the Tony Award-winning drama “Leopoldstadt.”
   By Marc Tracy

"Stoppard’s final play, too, contained characters whose fates were tragically preordained. The rest is silence." 

Will West Coast Jazz Finally Get Some Respect?

Sandra Evans is a woman on a mission. She wants to make the definitive documentary about West Coast jazz.

Like me, she loves those classic recordings by Chet Baker, Dave Brubeck, Art Pepper, Hampton Hawes, and others. But most jazz films ignore this entire movement—pretending that the history of the genre only took place in New Orleans, Chicago, and New York.

That’s simply not true.

The West Coast players deserve their place in our cultural history. Their music should be heard. Their story ought to be told—and it’s a fascinating story.

I’ve been helping her as best I can. You can see me in this new trailer about her project.

As a young man, I was also on a mission to celebrate the legacy of West Coast jazz. I convinced Oxford University Press to publish a book on the subject—and this turned into my single biggest project when I was in my twenties.

I met many of the jazz elders who helped create West Coast jazz—and saw how they had been unfairly forgotten. Many were living in poverty. Some were playing music on the streets.

It was sad to see.


Please support my work—by taking out a premium subscription (just $6 per month).



I took this personally, as Michael Jordan might say. I grew up in Los Angeles and later moved to the San Francisco area. This was my own homegrown jazz tradition. I loved it and wanted to share my enthusiasm with others.

But even superstars such as Dave Brubeck and Chet Baker were frequently attacked back then. And someone like Vince Guaraldi was simply ignored—jazz history books pretended he didn’t even exist.

They weren’t real jazz musicians. That’s what I kept hearing.

If you didn’t live through this era of jazz policing on steroids, you can’t even begin to imagine the level of hostility—which was amplified tenfold if a West Coast jazz player dared to have a hit record.

I could cite dozens of other examples of musicians who would have won prizes if they had moved to New York. But if they stayed out West, they got little or no respect.

That was how it worked back then.

My West Coast Jazz book was the single biggest project from my early years as a writer.

I got punished too. My grant requests for financial help on my West Coast jazz project were turned down. I wanted to do full oral histories of the leading players on the scene, but nobody wanted to fund it.

Sandra Evans is now fighting the same battle. She is working tirelessly on raising the money she needs to complete her project. If you can help out, please do so. (You can learn more here). Or if you can’t donate, please spread the word—by sharing the video or fundraising link online.


Gift subscriptions for The Honest Broker are now available. Click here for details.

My 2011 Review of Contagion

I happened to come across my 2011 review of the Steven Soderbergh movie Contagion, and was surprised at how much I was thinking about pandemics prior to COVID. In the review, I was too optimistic about the CDC but got the sequencing gains right. I continue to like the conclusion even if it is a bit too clever by half. Here’s the review (no indent):

Contagion, the Steven Soderbergh film about a lethal virus that goes pandemic, succeeds well as a movie and very well as a warning. The movie is particularly good at explaining the science of contagion: how a virus can spread from hand to cup to lip, from Kowloon to Minneapolis to Calcutta, within a matter of days.

One of the few silver linings from the 9/11 and anthrax attacks is that we have invested some $50 billion in preparing for bio-terrorism. The headline project, Project Bioshield, was supposed to produce vaccines and treatments for anthrax, botulinum toxin, Ebola, and plague but that has not gone well. An unintended consequence of greater fear of bio-terrorism, however, has been a significant improvement in our ability to deal with natural attacks. In Contagion a U.S. general asks Dr. Ellis Cheever (Laurence Fishburne) of the CDC whether they could be looking at a weaponized agent. Cheever responds:

Someone doesn’t have to weaponize the bird flu. The birds are doing that.

That is exactly right. Fortunately, under the umbrella of bio-terrorism, we have invested in the public health system by building more bio-safety level 3 and 4 laboratories including the latest BSL3 at George Mason University, we have expanded the CDC and built up epidemic centers at the WHO and elsewhere and we have improved some local public health centers. Most importantly, a network of experts at the department of defense, the CDC, universities and private firms has been created. All of this has increased the speed at which we can respond to a natural or unnatural pandemic.

Avian flu virus, from 3DScience.com.

In 2009, as H1N1 was spreading rapidly, the Pentagon’s Defense Threat Reduction Agency asked Professor Ian Lipkin, the director of the Center for Infection and Immunity at Columbia University’s Mailman School of Public Health, to sequence the virus. Working non-stop and updating other geneticists hourly, Lipkin and his team were able to sequence the virus in 31 hours. (Professor Ian Sussman, played in the movie by Elliott Gould, is based on Lipkin.) As the movie explains, however, sequencing a virus is only the first step to developing a drug or vaccine and the latter steps are more difficult and more filled with paperwork and delay. In the case of H1N1 it took months to even get going on animal studies, in part because of the massive amount of paperwork that is required to work on animals. (Contagion also hints at the problems of bureaucracy which are notably solved in the movie by bravely ignoring the law.)

It’s common to hear today that the dangers of avian flu were exaggerated. I think that is a mistake. Keep in mind that H1N1 infected 15 to 30 percent of the U.S. population (including one of my sons). Fortunately, the death rate for H1N1 was much lower than feared. In contrast, H5N1 has killed more than half the people who have contracted it. Fortunately, the transmission rate for H5N1 was much lower than feared.  In other words, we have been lucky not virtuous.

We are not wired to rationally prepare for small probability events, even when such events can be devastating on a world-wide scale. Contagion reminds us, visually and emotionally, that the most dangerous bird may be the black swan.

The post My 2011 Review of Contagion appeared first on Marginal REVOLUTION.

       


Europe is under siege

The map above is a depiction of The Deluge, a historical event in which the Polish-Lithuanian Commonwealth — which had been a major European power — was defeated and destroyed under the combined assaults of Russia and Sweden in the 1600s. After having its power broken, Poland was carved up in the 1700s and subjugated by Russia, Prussia, and Austria. It took more than two centuries, until the fall of communism in 1991, for Poland to reemerge as a strong, truly independent country.

The Deluge shows that power and independence are not permanent. If you are surrounded by hostile powers, and if you don’t have the ability to guard yourself against those powers, no amount of historical greatness can save you from being subjugated. This is an important lesson for Europeans to remember right now, as they find their region under siege from Russia, China, and the United States all at once.

The United States no longer cares about the European project

Why would America care about Europe at all? For most of our history, we didn’t. In the 19th century, the U.S. viewed European countries as dangerous rivals. In the early 20th century, Americans prided themselves on not getting involved in European affairs, and were incensed at their government for dragging them into World War 1. Only after World War 2 did Americans start caring about Europe, and we did so for three reasons:

  1. West Europe was a bulwark against Soviet communism.

  2. Europe was a key trading partner.

  3. Many Americans came to value their ancestral ties to Europe.

The first of these reasons vanished in 1991. Europe is still a bulwark against Russia, but Americans no longer feel threatened by Russia. Russian power is far less than what it once was, and Russia’s rightist ideology does not threaten the rightists who now rule America.

As for communism, many (most?) Americans now believe that European countries are socialist. When American conservatives ask where in the world socialism has succeeded, American progressives will always reply “Europe” or “Scandinavia”. Whether Europe or Scandinavia is actually socialist is irrelevant; Americans have come to see it that way.

Europe is still an important trading partner. But Trump and the other people now in charge of the U.S. do not understand trade at all. They think about trade entirely in terms of the net trade balance, rather than in terms of total U.S. exports. Trump & co. don’t care that America sells $650 billion a year to Europe; the fact that Europe sells $800 billion a year to America means that Trump & co. think America is “losing” and would benefit from a cutoff of trade.

Remember that the U.S. is an unusually closed-off, self-sufficient economy, so Americans in general don’t think too hard about trade or try to understand why it’s valuable. Also, the people now running the country are especially ignorant about economic matters.

As for civilizational ties, this is the reason Trump and the MAGA movement have turned so strongly against Europe. The American right values Europe because they think of it as a White Christian homeland — the source and font of Western civilization. Here’s a post I wrote about that earlier this year:

I wrote:

in the American mind, Europe stood across the sea as a place of timeless homogeneity, where the native white population had always been and would always remain…In the mind of many Americans, Europe thus stood as both a refuge and a reservoir. America itself was a rough, contested frontier, but Europe would always be white and Christian. If you ever felt the need to live around a bunch of white people of Christian heritage, you could always go “back”, but for most that wasn’t necessary — just knowing that the Old World was somewhere out there was enough.5

I think Europeans may underestimate how much this perception motivated America’s participation in the Transatlantic Alliance during the Cold War…[T]o conservative Americans in the 20th century — the type of people who joined the John Birch Society — the Cold War was about preserving Christendom from the threat of godless communism.

Anyway, in the 2010s, it dawned on those Americans that this hallowed image of Europe was no longer accurate. With their working population dwindling, European countries took in millions of Muslim refugees and other immigrants from the Middle East and Central and South Asia — many of whom didn’t assimilate nearly as well as their peers in the U.S. You’d hear people say things like “Paris isn’t Paris anymore.”6…At the same time, Europe had long since abandoned its traditional Christian values…

To Americans who valued the idea of America and Europe as part of a single Western civilization, this realization was catastrophic. Suddenly European countries — and the Anglosphere countries of Canada, Australia, and New Zealand — felt like they had left the club…

America’s rightists…want to know that someone, somewhere, is out there preserving an indigenous homeland for their identity groups. And that “someone” has to be Europe and the Anglosphere.

This isn’t a new attitude, either. Remember that in order to persuade a reluctant America to join World War 1, the U.S. government had to depict Germany as an ape abducting a white woman!

If you understand this, then nothing in America’s new National Security Strategy is mysterious, surprising, or confusing. Here’s how War on the Rocks summarizes the Trump administration’s attitude toward Europe:

[I]mmigration is elevated to the central national security problem. The text declares, bluntly, that “the era of mass migration must end,” and that “border security is the primary element of national security.” It frames mass migration as a driver of crime, social breakdown, and economic distortion, and calls for a world where sovereign states cooperate to “stop rather than facilitate destabilizing population flows” and tightly control whom they admit…

[P]rotecting American culture, “spiritual health,” and “traditional families” are framed as core national security requirements…The document insists that “restoration and reinvigoration of American spiritual and cultural health” are prerequisites for long-term security and links this to an America that “cherishes its past glories and its heroes” and is sustained by “growing numbers of strong, traditional families” raising “healthy children.” America is thus cast as defender of so-called traditional values, while Europe lacks “civilizational self-confidence and Western identity.”…

[T]he strategy elevates the culture wars into a governing logic for national security, and it does so through rhetoric that treats ideological and cultural disputes as matters of strategic consequence…This is clearest in the European section…The text…speculates about demographic and cultural shifts in Europe as a way to question whether future governments will share American views of their alliances. The strategy [implies] that cultural alignment is essential to strategic partnership.

The American right sees the “mad brute” in the ape cartoon as the dark-skinned Muslim immigrants who have entered Europe in large numbers in recent years. And they see themselves as needing to save the woman — representing their view of Europe as the traditional font of White Christian civilization — from that mad brute.

This tweet by Elon Musk pretty much sums up the American right’s attitude toward Europe:

This is why no amount of European shaming or moral persuasion can have any effect on the Trump administration — or on any Republican administration in the decades to come. This kind of appeal to friendship is totally useless:

And this kind of bitter, angry hectoring is worse than useless:

The American right — i.e., the people now in charge of the country — do not care intrinsically about democracy, or about allyship, or about NATO, or about the European project. They care about “Western Civilization”. Unless Europe expels Muslim immigrants en masse and starts talking about its Christian heritage, the Republican Party is unlikely to lift a hand to help Europe with any of its problems. Democrats will want to help Europe, but they will only be in power intermittently, and helping Europe will not be high on their priority list.1

Thus, America is not riding to the rescue this time, or for the foreseeable future. I wish things were different, but my wishes count for nothing; this is the reality with which the Europeans must now deal.

Russia and China together are the real menace to Europe

Europeans do not need me to tell them that Putin’s Russia threatens not just Ukraine, but all of Europe. They are well aware of this fact. Russia now regularly flies its drones into Europe, and is probably behind a wave of sabotage attacks on European infrastructure.

How can Russia, a country of just 144 million people and $7 trillion in GDP (PPP), hope to overcome Europe, which has 520 million people and $33 trillion in GDP (including the UK), especially after Russia has expended so many of its young men and materiel in its war with Ukraine already? There are three answers here. The first is gray-zone warfare, including sabotage and political influence campaigns. But that’s only the beginning.

Russia’s second method for fighting Europe is what I call a “Ponzi empire” strategy. Russia has enslaved vast numbers of Ukrainians from the occupied regions of Ukraine to fight against the rest of their country. If Russia conquers the rest of Ukraine, it will similarly enslave the rest of the country’s population, and send them to fight against Poland, the Baltics, and Moldova. If they then defeat Poland, they will enslave the Poles and send them to fight against the next European target, and so on.

This is a very traditional Russian strategy. Enslaved Ukrainians were used to attack Poland in 1939. Enslaved Poles were forced to fight Russia’s wars in the days of the old Tsarist empire, and would have been forced to do so again as part of the Warsaw Pact. Just like zombies turn humans against their own, each slice of Europe that Russia can chop off ends up being turned against the rest.2

Russia’s final strategy for fighting Europe is to rely on Chinese assistance. Russia’s own industrial base is very weak, and relied heavily on imported European parts and machinery that has now been partially cut off. But Chinese tech has largely plugged that hole, as the Carnegie Endowment reports:

Since mid-2025, Chinese components have been detected in Russian drones and missiles, often shipped via front companies disguised as suppliers of industrial cooling equipment…Chinese machinery, including precision optics, lasers, and dual-use machine tools, now dominates Russia’s defense-related manufacturing. In August 2025 alone, China exported a record 328,000 miles of fiber-optic cable and nearly $50 million worth of lithium-ion batteries to Russia, reinforcing its role as the Kremlin’s primary wartime supplier of dual-use materials. Chinese engineers working at Russian drone facilities are adapting civilian quadcopters, such as the Autel Max 4T, for combat use.

China is a far bigger manufacturer than Europe, and can pour essentially infinite war production into Russia if it wants to. And China is now assisting Russia’s gray-zone warfare against Europe:

Since 2024, Chinese ships have been involved in incidents of targeting subsea infrastructure, particularly cutting subsea cables in the Baltic Sea…The country increasingly deploys ambitious espionage and cyber attacks against government networks and critical infrastructure across Europe. These attacks seem to overlap with—or even be actively coordinated with—Russia’s espionage and influence operations across Europe…Increasingly, Russia and China also cooperate in disinformation operations: Chinese campaigns such as “Spamouflage” are amplified by Russian media outlets and diplomatic channels. Both countries employ what look to be synchronized narratives accusing the West of being responsible for the war in Ukraine.

China even provides the Russians with battlefield intelligence, helping them strike and destroy Ukrainian targets in real time. In sum, China is supporting Russia’s war against Ukraine, and will likely support Russia in any further wars it undertakes against the rest of Europe.

With Chinese technology and production, and slave soldiers from East Europe, and with America withdrawing from the Transatlantic Alliance, Russia could conceivably overmatch Europe.

But that’s not the only threat that China poses. On the economic front, China’s new economic strategy — a combination of shutting out European products, sending out a massive wave of subsidized exports, and putting export controls on rare earths — threatens to forcibly deindustrialize Europe. Here’s what The Economist, normally a staunch defender of free trade, recently wrote:

China is not just dumping exports and subsidising its companies, it is also out-competing and out-innovating big European industries, including carmaking. Last year Germany’s trade deficit with China stood at €66bn ($76bn); this year it could widen to over €85bn, around 2% of GDP. Alarmingly, China is exploiting Europe’s dependence, weaponising embargoes or the threat of them in chips and rare earths.

Germany, traditionally Europe’s strongest manufacturing and exporting nation, is already the hardest hit:

China, many European manufacturers have concluded, is threatening to put them out of business, by both fair means and foul…The wails are loudest in Germany, which is Europe’s biggest exporter to China and its biggest investor in it by far…For the Mittelstand, the small manufacturers that constitute a big slice of German industry, China used to be a source not of angst but of profit. Their precision-engineered machine tools were an exquisite fit for its rapid industrialisation. Chinese consumers raced to buy German cars…

Times have changed…Once-stellar growth inside China has, for many foreign firms, slowed to a crawl as competition with local rivals intensifies. In addition, Germany’s previously small trade deficit with China has ballooned…Last year it reached €66bn ($76bn), or around 1.5% of GDP, driven by a collapse in German exports to China and a rush of imports, notably of cars, chemicals and machinery—hitherto German specialities.

Germany’s trade deficit with China this year is expected to surge again, to around €87bn…German cars command only 17% of the Chinese market, down from a peak of 27% in 2020…Worse, Chinese competition also jeopardises sales in other markets. China’s net exports of cars have risen from zero in 2020 to 5m units last year. Germany’s have halved over the same period, to 1.2m units…Such figures have triggered fears in Germany of a wave of deindustrialisation.

The Financial Times has a good article about this as well, and Brad Setser has a good writeup of that article.

This is all on top of the existing headwinds facing European manufacturing — the energy crisis from the cutoff of Russian gas and self-inflicted “green” policies, Trump’s tariffs, and so on.

So Europe finds itself in an extraordinarily perilous position right now. Its main protector has suddenly withdrawn. It has a ravenous, brutal empire attacking its borders, supported by the world’s most powerful nation. Its main export markets are shriveling, and its manufacturing industries are under dire threat from waves of subsidized foreign competition. What can it do to fight back?

How Europe can resist the siege

The most important thing Europeans need is to panic. Europe is facing its own Deluge — a sudden pincer movement by hostile great powers that threatens to reduce it to a collection of small vassal states. This is a true crisis, and it will not be solved by social media rhetoric, or by brave declarations by EU leaders. It cannot be regulated away by eurocrats in Brussels. It will require bold policies that change Europe’s economic, political, and social models. Only a strong sense of urgency and purpose can motivate Europe to do what needs to be done.

What needs to be done? One important step is for Europe to act more as a single whole than as a collection of small countries. In the military realm, this means coordinating European militaries and defense industries much more. Matthew C. Klein writes:

From a properly European perspective, the security interests of each country should be shared across all countries, just as, for example, most Americans in Michigan or Maine would view an attack on California or Florida as an attack on them…The first step is to give the Ukrainians, who are already fighting the Russians, as much material and financial support as they need. From the perspective of European security, French, German, and British weapons are far more valuable in Ukraine than in their home countries. If the Ukrainians were subjugated, defending the rest of Europe would become much harder, with the effective EU-Russia border lengthening dramatically…

Europe’s national militaries have had a tendency to favor their home country’s producers, with the result that the continent is filled with subscale defense companies that are often slow and unproductive. Common defense procurement for a continental army should lead to higher output and lower costs—a few large companies handling large orders should have better unit economics than hundreds of artisanal manufacturers—but it would require Europe’s national defense elites to change their perspective. Philipp Hildebrand, Hélène Rey, and Moritz Schularick recently published a useful proposal for how to make this work.

And economically, Europeans can partially compensate for the loss of Chinese (and American) export markets by selling more to each other. The Economist writes:

A second task is for European countries to make better use of the power they have, by integrating their economies…By failing to integrate, the EU is leaving a vast sum of money on the table. A single market that was designed for goods is failing to help economies dominated by services.

And in his famous report on European competitiveness, Mario Draghi wrote:

We have also left our Single Market fragmented for decades, which has a cascading effect on our competitiveness. It drives high-growth companies overseas, in turn reducing the pool of projects to be financed and hindering the development of Europe’s capital markets…The EU’s new industrial strategy rests on a series of building blocks, the first of which is full implementation of the Single Market. The Single Market is critical for all aspects of the strategy: for enabling scale for young, innovative companies and large industrials that compete on global markets; for creating a deep and diversified common energy market, an integrated multimodal transport market and strong demand for decarbonisation solutions; for negotiating preferential trade deals and building more resilient supply chains; for mobilising greater volumes of private finance; and as a result, for unlocking higher domestic demand and investment. Remaining trade frictions in the EU mean that Europe is leaving around 10% of potential GDP on the table, according to one estimate.

And ideally, Europe should form a fiscal union — the EU itself should be able to borrow and spend, not just the member countries. As Klein writes, this needs to be accompanied by a greater tolerance for fiscal deficits — after all, countries borrow in emergencies.

In other words, Europe’s first step in resisting its siege is to act more like a country and less like a zone. It would also help to find some way to bring the UK back into the fold, especially because polls consistently find that British people regret Brexit.

Europe’s other top priority is to provide for the common defense. That means spending more money on the military, of course, and it also means greatly increasing the size of Europe’s nuclear deterrent. But it also means building a defense industrial base capable of resisting a China-backed Russia.

Europe’s current defense-industrial base was built for the Cold War, when battles were decided by heavy vehicles like tanks and ships and planes. Those are still somewhat important, but drones have risen very quickly to dominate the modern battlefield. Right now, drone manufacturing, as well as almost the entire supply chain for battery-powered drones, is overwhelmingly concentrated in China.

Europe needs to be able to build not just drones, but every single thing that goes into making a drone — batteries, motors, various types of computer chips, and so on. European industrial policy should therefore focus on onshoring these industries. In other words, Europe needs to master the entire Electric Tech Stack. (This will also help Europe get back in the EV race.) And it needs to master the AI software — computer vision, swarming tech, and so on — that will soon be needed in order to make drones a truly modern force.

The question of the proper policy instrument to accomplish this goal — tariffs, subsidies, fiscal borrowing, regulatory changes, and so on — is irrelevant. All of these policies should be done as necessary, and it’s better to do too much than too little. Policy procedure needs to be subordinated to the overriding goal of making Europe capable of defending itself. In fact, every European institution needs to be reformed and reverse-engineered in order to enable this.

Europe is also going to have to change its political mindset. Lavish pensions and other elements of Europe’s social model are going to have to be temporarily curbed to help give Europe the fiscal space and physical resources to fight off its enemies. All nuclear plants need to be restarted, and Europe should build more nuclear, ignoring “green” parties and environmental activists who irrationally hate nuclear power. Europe needs to reform its land-use regulation to require greater construction of solar and wind power. And Europe is going to have to back off of its aggressive regulation of AI software, in order to produce cutting-edge autonomous weaponry.

Finally, Europe needs to look for friends and allies — and export markets — other than America. India is an obvious choice. Although India is friendly with Russia, the country would undoubtedly welcome Germany’s help industrializing — and this would allow German companies to sell machines to India, as they once did to China. The EU should open its markets to Indian goods in exchange for Indians doing the same, recognizing that trade balances are less important than total export demand. Japan, South Korea, and other big developing countries like Indonesia, Vietnam, and Brazil are other good potential trading partners.

If Europe manages to unify more and to build up its military power, it will increase the number of great powers in the world by one. A planet with a strong Europe, America, China, Russia, and India is a better planet than one where only the last four of those are strong. If Europe shows it can act with unity and purpose, and that it has military power to be reckoned with, America and China — both countries whose leaders tend to respect raw power — may lose their disdain for the region, and return to a more diplomatic, conciliatory posture.

Ultimately, European weakness and division are the reasons the region is getting bullied by so many other powers. Reversing that weakness and division would make the bullies go away. But Europe’s people, and especially Europe’s elites, have to want it.



1

And of course if Europe does expel the Muslim immigrants and start talking up its Christian heritage, as the MAGA folks want, Democrats will conclude that Europe is fascist and be reluctant to help it out when they get back in power. Essentially, Europe is finding itself caught in America’s internal culture wars, and there’s no good way out; the only solution is to realize that the U.S. will not be a reliable partner for decades to come.

2

Would Russia actually try to conquer and rule all of Europe directly, as the Nazis tried to do? Unlikely. But would it try to dominate all of Europe the way the USSR dominated the Warsaw Pact? Yes, definitely. And this sort of domination would be very bad for Europeans, as the Poles could tell you.

SpaceX gets approval to build Starship launch complex at Cape Canaveral

Starship SLC-37

The Department of the Air Force has approved plans to convert a former Delta 4 launch site at Cape Canaveral into a complex for SpaceX’s Starship.

The post SpaceX gets approval to build Starship launch complex at Cape Canaveral appeared first on SpaceNews.

Planning sentences to ponder

Planning assistance caused municipalities to build 20% fewer housing units per decade over the 50 years that followed.

Here is the full abstract:

We study how the federal Urban Planning Assistance Program, which subsidized growing communities in the 1960s to hire urban planners to draft land-use plans, affected housing supply. Using newly digitized records merged with panel data across municipalities on housing and zoning outcomes, we exploit eligibility thresholds and capacity to approve funds across state agencies to identify effects. Planning assistance caused municipalities to build 20% fewer housing units per decade over the 50 years that followed. Regulatory innovation steered construction in assisted areas away from apartments and toward larger single-family homes. Textual evidence related to zoning and development politics further shows that, since the 1980s, assisted communities have disincentivized housing supply by passing on development costs to developers. These findings suggest that federal intervention in planning helped institutionalize practices that complicate community growth, with subsequent consequences for national housing affordability.

Hail Martin Anderson!  The above paper is by Tom Cui and Beau Bressler, via Brad, and also Yonah Freemark.

The post Planning sentences to ponder appeared first on Marginal REVOLUTION.

       


Imagine a bigger Seattle

A brief follow-up to my previous post on affordability. Scott Alexander once suggested that building more houses would not necessarily make housing cheaper. That’s because big cities tend to be more expensive, and if you build lots of housing then you are making the city bigger. He then (correctly) noted that it would still be a good thing if the new construction led to higher housing prices, because this would also result in more people being able to live in highly productive areas.

At a theoretical level I have no problem with this argument. However, I argued that when the extra housing comes from supply side reforms, then housing prices are not likely to rise as a result, despite the city becoming bigger. Supply effects would probably dominate induced demand effects. But I’d rather not rehash that debate, as his provocative hypothesis seems like a good way to explain why output is better than “affordability”.


Let’s say that Alexander is correct that building lots of housing makes a city more expensive. In that case, many people would argue that housing has become less affordable. As Matt Yglesias pointed out, average people tend to equate affordability with nominal prices. In fact, even if building lots of housing made prices go up, housing would actually become more affordable. To see why, consider the implicit factors underlying Alexander’s thought experiment.

Assume that metro Seattle doubles its housing stock, pushing the (CSA) population up from roughly 5 million to 10 million. Now Seattle is America’s third largest metro area and also a city with lots of highly educated workers and tech companies. That’s likely to be a highly productive place, full of very high paying jobs. This can be explained by many factors. Network effects make big cities more productive (think Silicon Valley and Boston). Large size leads to cultural amenities that are popular with highly productive people (think New York and London).

[For simplicity, assume the Seattle population increase is uniform—both more central city high rises and more suburban greenfield developments.]

If you just looked at nominal housing prices then you might be perplexed as to why Seattle’s population had doubled. Why would more people move to a city where (by assumption) housing became more expensive? But if Alexander is correct, then high prices would have been a response to structural changes in Seattle’s economy that made the city more productive.

Affordability is not just about nominal prices; it implicitly reflects both prices and incomes. The primary reason why big cities are expensive is that their workers are more productive and thus earn higher incomes. Seattle would have gone from being affordable to 5 million people to being affordable to 10 million people. That’s more affordable.

To be clear, I’m not making a tautological claim. To some extent, big cities might attract people despite lower real incomes—think of the artist willing to live in a tiny apartment in NYC in order to be close to the action. But the primary reason why home prices are high in big cities is that incomes are high. Obviously, if you’ve doubled Seattle’s population from 5 million to 10 million then you have in some sense made the city more accessible to more people. That’s good. That’s how we should think about affordability. Affordability is (mostly) output.

Some people would quibble about distributional effects—maybe the poor get driven out. But cities like New York and LA don’t just have more people than smaller cities, they also have more poor people than smaller cities. Were Seattle’s population to rise from 5 million to 10 million, it would almost certainly be the case that Seattle’s poor population would increase in absolute terms, even if it shrank slightly as a share of the population.

PS. My grandmother visited Seattle in 1962 and later gave me a plastic model kit of the Space Needle, which I assembled. We had no computer games in those days, and this model was one of my favorite toys when growing up. There’s a (stupid) debate about whether real incomes have risen since the 1960s (duh!), but it’s especially silly when it comes to children. Their toys are so much better than our boomer toys that it’s like they are living on another planet. Middle class kids now have higher living standards than did the children of billionaires back in the 1960s. BTW, I seem to recall that there were only three billionaires in the 60s.

PPS. I was about to say that for once Trump was correct about something, but even when he’s right he surrounds his accurate comments (“con job” and “fake narrative”) with utter drivel (“trillions”, “worst inflation”, “no affordability”):

After ticking off what he claimed were trillions of dollars of investments and other economic accomplishments, Mr. Trump called the issue of affordability a “fake narrative” and “con job” created by Democrats to dupe the public.

“They just say the word,” he said. “It doesn’t mean anything to anybody. They just say it — affordability. I inherited the worst inflation in history. There was no affordability. Nobody could afford anything.”


Talking With Paul Kedrosky

As I say at the beginning of this interview, it’s annoying for economic analysts that two huge things are happening at the same time: a radical change in U.S. trade policy and a giant AI boom. Worse, while I think I know something about tariffs, the more I think about AI the less I believe I understand. So I talked to Paul Kedrosky, investor, tech expert and research fellow at MIT, for some enlightenment. Lots in here that I found startling.

Transcript follows.

. . .

TRANSCRIPT:
Paul Krugman in Conversation with Paul Kedrosky

(recorded 12/03/25)

Paul Krugman: Hi, everyone. Paul Krugman here. I’m able to resume doing some videos for the Substack, and today’s interview is based on me being really annoyed at history. If only one big thing would happen at a time. Unfortunately, where we are now is, on the one hand, we have tariffs going to levels that we haven’t seen for 90 years, which should be the big story and where I feel fairly comfortable; but then we also have this AI explosion where I feel completely at sea. I don’t quite understand any of it. I’ve been reading and watching interviews with Paul Kedrosky, who is an investor, analyst, and currently a research fellow at MIT. He certainly knows more about it than I do, and I wanted to just have a conversation where I try to understand what the heck is going on, insofar as anybody can.

Hi, Paul.

Paul Kedrosky: Hey, Paul. Both of us “Paul K.,” that’s dangerous.

Krugman: Yeah, welcome on board.

Kedrosky: Thanks for having me.

Krugman: Let me ask first, I have a really stupid and probably impossible question, which is that at a fundamental level what we’re calling “AI”—I think you usually use generative AI, large language models, although they’re not just language now—but at a fundamental level, I don’t understand how it works. Is there a less-than-90-minute explanation of how the whole thing operates?

Kedrosky: There is, and I think it’s really important, because it helps you be a more informed consumer of their products. I think a really good way to think of these things is as grammar engines; I often call them "loose grammar engines," meaning that there’s a bunch of rules in a domain, whether it’s language, or the law, or software engineering: these are all grammars when you abstract away from how we use them, meaning they’re actually rules about what’s going on. If I ingest all of that and pull it into a giant network of matrices that weight all of this, then I can do what we call "training"; on that basis, the model makes pretty good predictions about what that grammar implies should come next, the "continuations", the next thing that might be generated, whether it’s a subroutine in software, a PowerPoint slide, some language in an English presentation, or even, loosely, an image.

But it’s all this idea that these things are “loose grammars” that are reasonably good at predicting what should come next, the continuations based on the data they’re trained on, which tells you a lot of things about what they’re good at, and it tells you a lot of what they’re bad at.
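To make the continuation framing concrete, here is a toy sketch of next-token prediction from counted continuations. This is not how a transformer works internally, just the prediction setup being described, with a made-up corpus:

    from collections import Counter, defaultdict

    # Toy continuation engine: count which token follows which in a tiny
    # corpus, then predict the most frequent next token. Real models learn
    # far richer, longer-range structure, but the task is the same: given
    # context, predict the continuation.
    corpus = "the box is red the box is heavy the cat is asleep".split()

    continuations = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        continuations[current][nxt] += 1

    def predict_next(token):
        counts = continuations.get(token)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # 'box' -- the most frequent continuation of "the"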

Krugman: It’s a little bit like if you give me four words of a sentence, correlations out there will tell me what the next word is likely to be. But it’s a lot more elaborate than that, right? There are multiple layers, as I understand it.

Kedrosky: Right. It’s like the old Einsteinian expression, "spooky action at a distance": it’s not just proximity, the very next thing that’s coming (we call these "tokens"); it’s also about the entire holistic context in which that language is embedded in the grammar. So things that are far away actually have a surprising influence on what the next tokens might be.

So it’s not something as simple as saying, "that box is red, so you know a color should come up next." It’s not that simple. It has a lot to do with the entire context on which it was trained. In turn, this ‘spooky action at a distance’ idea tells you what these models might look like. It turns out—and this was the thing that, in a weird way, surprised even Google in 2017, when the original so-called Transformers paper that led to a lot of the recent developments in AI first appeared—that the technique was created for language purposes. It was created for use in their Google Translate application. They thought, "this is kind of nifty. It doesn’t work too badly for that." But the idea that, embedded in language itself, through this idea of near-and-far prediction and this "spooky action at a distance," this idea of attention could actually capture a lot of what we call knowledge, and therefore a lot of what almost seems like inference, was surprising to everyone, which is why Google kind of let things go by the wayside.
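The attention mechanism he is gesturing at can be written in a few lines: every position mixes information from every other position, weighted by query/key similarity, which is how tokens far away can influence what comes next. A bare-bones sketch in NumPy, with illustrative shapes and random vectors standing in for learned queries, keys, and values:

    import numpy as np

    def attention(Q, K, V):
        """Scaled dot-product attention over a set of token vectors."""
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
        return weights @ V  # each position becomes a weighted blend of all values

    rng = np.random.default_rng(0)
    tokens, dim = 5, 8  # 5 token positions, 8-dimensional vectors (illustrative)
    Q, K, V = (rng.normal(size=(tokens, dim)) for _ in range(3))
    print(attention(Q, K, V).shape)  # (5, 8)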

It took until the technology appeared inside other companies like OpenAI for it to have a huge impact. So it’s not as simple as just predicting the next token. It’s the idea that, with these attention mechanisms looking at the entire body in which the information is embedded, whether it’s the English language or software or the law or any of these domains, you can actually get something that feels to us like, "oh, it understands what I’m thinking, or understands the question I’m asking," which is really just a reflection, across these large corpuses, of what prediction feels like. It feels like a continuation of what a normal person would think. What’s interesting: I have a colleague doing work on this, and if you back-sample who the model thinks you are—if you think about it in the context of the training data—it has a rough sense that you’re like a 37 year old guy on Reddit. That’s the kind of person it’s doing the continuation for, because that’s a big chunk of the training corpus. So if you reverse-engineer what the data actually suggests, that can also tell you something. So I often tell people, whenever they send me a message like, "a large language model said I should do x, y, z" (this should be my next car, or this is the answer to the essay question), that what you’re really saying is, "a 37 year old guy on Reddit said it," and you’ve got roughly the same amount of information. It can be good, or it can be really fraught.

Krugman: We have all these stories about ChatGPT (or similar) telling people what they want to hear and giving them really bad advice. “Guys that look like you tend to make the same mistake,” basically.

Kedrosky: Exactly. Of course, it’s even more fraught now because of the nature of training and how we’ve increasingly exhausted the supply of 37 year old guys on Reddit. A lot of the optimization in models now happens in what’s called "post-training": what goes on after the model has been created, where I go out and say, "here’s the response it will give to this particular prompt, do you like it?" We call that reinforcement learning with human feedback. That leads down a path no different than being a professor at MIT obsessed with student ratings. You can become very sleazy, right? All of a sudden all you care about is whether or not your students like you. That’s a dangerous path for all the reasons we know, and it’s no different in the context of models. So not only is the corpus itself heavily centered on that group, but because we’ve exhausted a lot of the pre-training data—there’s only so much of it out there—the models are increasingly shaped in post-training, and the result is that they become "sycophantic." They’re tail-wagglingly eager for you to love them. That’s what we’re seeing increasingly.
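The "do you like it?" step he describes is usually formalized by training a reward model on pairwise preferences: responses raters preferred should score higher than the ones they rejected, and the model is then tuned against that reward. A toy sketch of the pairwise objective (the scalar scores stand in for a real reward model's outputs):

    import math

    def preference_loss(score_preferred, score_rejected):
        """Pairwise (Bradley-Terry style) loss used in RLHF reward modeling:
        small when the preferred response clearly outscores the rejected one,
        large when the ranking comes out wrong."""
        return -math.log(1 / (1 + math.exp(-(score_preferred - score_rejected))))

    print(round(preference_loss(2.0, 0.5), 3))  # 0.201 -- ranking is right
    print(round(preference_loss(0.5, 2.0), 3))  # 1.701 -- ranking is wrong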

Krugman: Oh, boy. What strikes me, and I’m by temperament just a skeptic about all these things, but I paid attention out of the corner of my eye to artificial intelligence, and efforts there, for a very long time, through decades and decades of immense frustration, when being able to recognize “this is a cat” was basically an insoluble problem. Then all of a sudden all of this stuff becomes absolutely routine, which is just mind boggling.

Kedrosky: The analogy I make is that we, via the Transformers paper, stumbled into a kind of Saudi Arabia of data. The right way to think about it from my standpoint is that the Saudi Arabia of data was the public internet, which suddenly became useful as training data in the context of these massive models that required huge amounts of data and improved on the basis of scaling, meaning that a 10X increase in the amount of data you trained on led to a predictable increase in the capacity of the model to make what we would call “useful inferences.” That was novel, because we could never do that in the past. But that Saudi Arabia of free textual data, no different than any other reservoir, whether it’s the Permian Basin or anything else, is something we’ve increasingly exhausted. What you’re seeing now is that those old scaling laws, from 2017, 2019, 2020, GPT-1, all the way up to the present, are producing less and less bang for the buck, no different than any extractive model where the remnant of the reservoir is much more expensive to get access to and probably more polluted, probably less useful, probably requires more refining. This is exactly the same, and that’s the point at which we are now.

Krugman: Funny story, I actually knew people who, not worked on but were close to the original Google Translate stuff, and their initial big resource—at least they told me—was OECD documents. Because of the multinational thing, everything is said in four languages. So it was kind of a Rosetta stone.

Kedrosky: No, you’re right. It was a tremendous training corpus for those models. So again, coming back to the 37 year old guys on Reddit, once you understand the nature of what’s under the hood, it tells you a lot about why these models are useful and where they are less so.

The other point I’d make is that it also helps to understand the nature of what training means, because we throw that word around a lot. Training follows this idea of what’s called “gradient descent,” which is that as I make changes, as I do training cycles, I ask how much improvement I see incrementally, and at what point it stops or even reverses. In certain domains, the data gives you a really steep gradient, meaning that small changes provide a huge signal back to the model. So models are very good at those things. A good example of that is software itself. If I make minor changes in code, I don’t get minor differences on the other side, I get broken software. So there’s a huge signal that flows back into training when you make minor changes in software. The gradient is very sharp, which makes the models much better on relatively limited data. The English language itself is the exact opposite: if I make minor changes in language and I ask you which one’s better, you’d say, “oh, I don’t know, maybe this one, maybe that one.”

So the notion of learning from language versus learning from software is very, very different, which is incredibly important, because it tells you why these models are great in the context of software, where the gradient of learning is so sharp, and why they’re so equivocal and sometimes even dangerous in language, where we don’t have that same ability to learn from relatively small morsels of information. And it takes you to the next step, which is why benchmarks in AI are so, I’ll say, conflicted. Software is such an extremely good domain for these models that saying “this model is very good at software, therefore we’re on a path to AGI” shows a profound misunderstanding of the nature of large language models. Of course they’re good at software. There could hardly be a better domain for training a large language model than software.
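A toy sketch of that point, purely illustrative and not a depiction of how LLM training actually works: the same gradient descent update races downhill when the loss surface is steep (small changes produce a big, unambiguous error signal, as with code that either runs or breaks) and barely moves when the surface is nearly flat (as with “maybe this phrasing, maybe that one”).

# Toy illustration only: gradient descent on a one-dimensional loss,
# loss(x) = slope * x^2, whose gradient is 2 * slope * x.
def descend(slope, x=10.0, lr=0.1, steps=50):
    for _ in range(steps):
        x -= lr * 2 * slope * x  # standard gradient descent update
    return x

# Steep loss: strong, unambiguous feedback (code-like). Ends up essentially at the optimum.
print("sharp, code-like signal:    ", descend(slope=2.0))
# Nearly flat loss: weak, ambiguous feedback (language-like). Barely moves from its start at 10.
print("weak, language-like signal: ", descend(slope=0.01))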

Krugman: By the way. Just in case there are some listeners that don’t know, AGI is artificial general intelligence. That’s the holy grail, and I think you’re one of the big skeptics about this being at all what we’re heading towards right now.

Kedrosky: Very much so, for some of the reasons I’m describing: the nature of large language models is such that architecturally, for reasons of data set exhaustion and declining returns on increasing investment, we’re kind of at a cul-de-sac already. We’re seeing that happen. So the notion that I can extrapolate from here towards my own private God is belied by the data itself, which shows that we’re already seeing a sharply asymptotic decline in the rate of improvement of models outside of software, in almost every other domain.

Krugman: Since we’re talking about investment, in terms of the economics and the business side: one of the things about thinking, or whatever it is, to the extent we think of this as some kind of thinking-like process, is that we tend to think of it as immaterial, as existing in a pure, nonphysical domain. Yet the whole thing about all of this is the extreme physicality of it. We’re talking about huge amounts of capital being deployed and huge amounts of energy being consumed.

Trying to estimate how much CapEx is coming from AI is a huge pain. You have one of the most widely cited estimates but it’s looking a little stale now. I can tell you about why I find it a problem, but why don’t you talk about what’s involved?

Kedrosky: We have this prodigious amount of spending going on, and that was one of the windows through which I got interested in the investment side of this stuff, because it seemed as if it was so large that it was having an impact on the economic data itself. I was looking at that early this year. And just yesterday or the day before, there was a new OECD report on the US showing that in the first half of 2025 the US was arguably in a recession absent AI CapEx spending, and it caused scarcely a ripple. Nobody is saying, “hello, we’re running a giant private sector stimulus program that’s keeping the US out of recession.” No one’s talking about it in those terms.

The analogy I make all the time is that when you don’t understand how large AI CapEx is and how consequential it is, you have the causality of policy all messed up. You don’t understand that the thing that’s actually driving the US economy is not the thing you think it is. I often make the joke that it’s like my dog, who barks when the mailman comes to the house, and then the mailman leaves and he thinks it’s because of the barking. It’s like, “no, the mailman leaves every day.” It doesn’t matter whether you bark or not, they always keep going. You just have a bad model of causality. That’s no different than what’s happening now in the world of macro with respect to the role of AI CapEx in the US economy; for example, if you want to believe the tariffs are the primary reason why the US did well in the first half. If you’re of that partisan mindset, you’re ignoring the substantial role of AI CapEx, which on an annualized basis is probably over $1 trillion, and which made up more than half of U.S. GDP growth in the first half of the year. That, again, arguably kept the US out of recession on the strength of a single sector of private spending, which is just remarkable to me, and it’s really fraught whenever you try to apply another lens and say, “no, it was because of this or it was because of that.” No, this was the reason, and it helps explain the weak job growth even in the first half, and continues to, because data centers are not a huge job creator. It all comes back to the capital intensity associated with this one particular sector.

Krugman: What drives me crazy is the standard way the data is cut. Basically you look at national accounts, and I’ve seen people say, “oh, well, let’s take communications and information equipment plus software.” But that’s wrong in both directions. Some of that stuff is not AI; on the other hand, there’s a lot of construction of buildings that is part of AI.

Kedrosky: You can back into it with things like nonresidential fixed investment and try to come in through that angle, which is also fraught. At least one of the ways I tried to triangulate it was to build up from the numbers released by the companies themselves, because they’re so eager to brag about how much they’re spending. We can talk about why that is. I think it’s partly a deterrence thing: “I’m willing to spend so much to dominate this market that there’s no reason for you to spend anything at all.” It’s this O.K. Corral phenomenon of trying to deter people from contesting the market with you. So you make these giant preemptive announcements, partly to hoard capacity, but partly to deter competitors.

But nevertheless, we’re in this unusual moment where they’re willing to tell you what they’re doing, in a way that actually creates some data you can aggregate up and say, “this is what’s going on with respect to spending,” that you might not otherwise see, certainly not from the national accounts.

Krugman: Some respectable people have tried very hard, and I concluded that the BEA data is just not cut in a way that lets us do this, and we have to do something like what you’ve been doing.

Kedrosky: There are other problems with the data too, which is really amazing to me. There’s an ongoing business trends survey where the Census Bureau, trying to be helpful, added a line on AI adoption back in 2022. It’s showing that AI adoption actually began plateauing at around 18% of large corporations as early as the third quarter of 2025, which seems ridiculous, obviously, for a host of reasons. But when you go back and look at the actual survey item, you realize it wouldn’t have been out of place ten years ago: it’s about SQL dashboards and all of these machine learning technologies that were already old a decade ago. So even the attempts to improve the data aren’t very compelling.

Krugman: So we’ve got bad data in terms of adoption, and bad data in the national accounts from the standpoint of what’s actually being spent. An ongoing problem in general is that a lot of our economic statistics are really designed for the economy of 1929.

Kedrosky: That’s right. (laughs)

Krugman: We’ve got an infinite number of categories of textiles. (laughs)

Kedrosky: Yeah, tremendous data on textiles. Not so much on recent adoption of large language models, which is fine; I understand that, but nevertheless, when you introduce a new survey item in 2022 and say that this is oriented towards current adoption of these emerging technologies and it’s all about ancient machine learning technologies, it’s not going to tell you very much.

Krugman: Quick question. Do you have a sense, and this may be unfair, of the AI numbers that you’re looking at? How much is equipment and how much is structures, meaning buildings?

Kedrosky: So you can come at it from the standpoint of the data centers themselves, roughly 65-70% of the cost of a data center is specifically the equipment.

Krugman: So it is mostly equipment.

Kedrosky: It is mostly equipment. Obviously the primary beneficiaries of that are companies like Nvidia, the GPU manufacturers. So it is mostly equipment. Again, there are issues with equipment being the primary component, because obviously there’s a relatively short timeline over which those technologies must be replaced. Michael Burry of “Big Short” fame has been out chattering about this stuff.

I think what’s going on is somewhat misunderstood, but nevertheless, I sometimes say “a data center full of GPUs is like a warehouse full of bananas; it’s got a relatively short half life in terms of its usefulness.” That’s important to keep in mind. That’s what makes it different from prior CapEx spending moments, railroads, canals, rural electrification, take your pick: the perishability of the thing that we’re investing in.

Krugman: So let’s talk about chips. As a technological naif: a chip is a chip. RAM is one thing, memory chips, which are commoditized, although I gather there’s a shortage of them now globally?

Kedrosky: Yes, there is, in particular in what are called HBM, high bandwidth memory chips, which are the ones that basically interconnect these GPUs and allow them to parallelize the training process. But yes, there’s a shortage in those, not in PC RAM, but in high bandwidth memory.

Krugman: Then there’s GPUs and TPUs—which I don’t quite get. They’re basically these specialized chips that do computational things or I guess GPUs—the G is for general, so less specialized, but still that are much more elaborate.

Kedrosky: It’s actually for “graphics processing units,” weirdly enough. The origins of Nvidia GPUs were back in the day when everyone thought the world was going to get taken over by 37 year old guys on Reddit with giant machines, playing games on their personal computers at home. The reason GPUs are so good for training is that they were created to be very good at manipulating real time graphics on a screen, which is just a giant set of matrix calculations of positions on the screen, and researchers figured out fairly quickly, “wow, that’s actually useful for doing huge amounts of matrix math,” which underlies most of machine learning and thus large language models. So GPUs were almost an accident of history in terms of their role in large language models, emerging from the graphics world.

Krugman: One big insight that I got from you is—until like a week ago—I understood that these chips depreciate fast, but I thought it was going to be basically depreciation through obsolescence. But it turns out that it’s just very, very different. Do you want to tell us about that?

Kedrosky: Yeah that’s really important because there’s this idea that the reason why this is a warehouse of bananas—or whatever your favorite fruit is in this context—is due to the pace of change in technology. That’s kind of a trope, “oh, everything changes quickly. I have to throw out my phone, my laptop.”

That’s not really the primary driver in most of what are called hyperscale data centers, the largest ones run by people like Google and Meta and others. You have to think about it in the context of the workload, what’s actually happening inside the data center, and it can loosely be split in two ways: there’s the training aspect, where I’m training new models or enhancements to old models using giant numbers of GPUs, at least 10,000 to 20,000 of them inside one of these data centers; and then the other chunk of the activity is inference, which is responding to requests I might make when I write some nonsensical question to a chat AI, like Claude or whatever. Those are, loosely, the two things going on inside the data centers. Chips underlie both of them. But from the standpoint of the wear and tear on the chip, those are very different activities. Take training as an example: I’m running the chip flat out 24 hours a day, seven days a week, which requires an immense amount of cooling and incurs a lot of thermal stress. Inference I’m running more episodically, maybe more in the day, less at night. People aren’t making as many requests at night, so the load changes fairly dramatically.

So the analogy I make is: imagine a chip used for 50 hours of training versus one used for 50 hours of inference. Now imagine a car in the same circumstance. I raced a car for 50 hours in two 24 hour races, or I took it to church every Sunday for an entire year, roughly 50 hours, let’s say a half hour there and back. Which car would I like to own? I’d like to own the one that went to church on Sundays, even though 50 hours is 50 hours. Racing a car in two 24 hour races, even though the car’s only been run for 48 to 50 hours in a year, is a very different proposition with respect to the stress incurred.

When you use a GPU for training, it’s like those two 24 hour races on your car versus taking it to church for a year on Sundays. And so what happens, and the data is fairly clear about this, is that there’s a distribution with a long tail, where some chips last for quite a while, but there’s a high failure rate in the first 2 to 3 years, with a mean time between failures of about two and a half years or so. So long before we might be saying to ourselves, “oh, look, there’s a hot new chip out there that I want to replace this thing with,” you’re actually seeing a steady drip of chip failures. So let’s aggregate up. Imagine you had a 10,000 or even a 20,000 GPU data center. On those statistics you should expect a chip to fail about every 3 or 4 hours. So long before I get to the point where I’m rapidly turning these over because there’s a new generation of chips, I’m turning over a vast chunk of my chips just because they’re failing under thermal stress, because these workloads are like running my motor flat out in that car. It’s high heat, it’s a lot of stress, things begin to break down. That leads to turnover long before, generally speaking, you might replace the chip just because there’s some hot new one out.
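As a rough back-of-envelope on that cadence (assuming independent failures, and treating both the MTBF and the fleet size as the approximate figures they are in this conversation), the expected interval between failures across a fleet is simply the MTBF divided by the number of chips:

# Back-of-envelope only: expected interval between chip failures across a GPU fleet,
# assuming failures are independent. MTBF and fleet sizes are rough, illustrative figures.
HOURS_PER_YEAR = 8760

def hours_between_failures(fleet_size, mtbf_years):
    return mtbf_years * HOURS_PER_YEAR / fleet_size

for fleet in (10_000, 20_000):
    for mtbf_years in (2.5, 4.0):
        interval = hours_between_failures(fleet, mtbf_years)
        print(f"{fleet:>6} GPUs, MTBF {mtbf_years} years: one failure every {interval:.1f} hours")

With inputs in that range the answer lands at a failure every hour or few, the same order of magnitude as the cadence described above.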

Krugman: Wow. So basically as you say, “it’s the training rather than the inference that’s an issue.” But the training is basically just running chips hot. They get heat stroke more or less.

Kedrosky: They get heat stroke, and it can be really insidious because they don’t necessarily break catastrophically. It’s not like your car suddenly stops. They can actually slow down. You don’t realize that it’s not running as fast as it once was. So there are all kinds of ways in which it requires a lot of work to figure out, “oh, that chip is running at a subpar level.” It’s not as neat as, “it just blinked out of existence, now I need to hot swap in a replacement,” which makes it even more time consuming and complicated to make the replacement. But that split understanding, the difference in what training versus inference does to the chips in the data centers, thinking about it almost as a car going to church on Sunday versus running a 24 hour race, is incredibly important, because it tells you a lot about the dynamics of what we should expect in terms of the future capital costs of replacing the GPUs in a data center. There’s an ongoing replacement wave driven not necessarily by technological change, but by the actual thermal stress on the chips themselves.

Krugman: Okay, so we have lots and lots of analogies, with the telecoms boom of the 90s. We all said, “well okay, a lot of companies went bust. The returns never added up.” But on the other hand, you had all this fiber in the ground, which eventually became useful. But you’re saying basically that’s not what’s going to happen here. What we’re going to end up with is a bunch of burned out chips.

Kedrosky: That’s right, a bunch of burnt out cases. That’s exactly it. It’s kind of like The Big Lebowski as a chip: “I’m not sure what’s going to happen here, but all I know is this guy’s long past his due date.” So a big part of the problem here is not just that technological change makes this 10,000 GPU data center less useful, it’s that it’s also gone through cycles of thermal stress, and its lifespan likely isn’t particularly long anyway as a result of what it’s already done. There’s a double whammy here that will make it less useful. The response of the technology industry to that is generally to say, “well, that doesn’t really matter that much. What we’ve created is a powered shell. It’s this giant building that’s got power, it’s got cooling, it’s got all the things. So we can just hot swap in all of these GPUs again in future.” And that is of course assuming away the problem, which is that 60 to 70% of the cost of the data center is the chips themselves. So I’ll give you the power, the electricity, the cooling and the walls and the concrete, I’ll give you all that for free. You still have the preponderance of the cost in front of you in terms of replacing the GPUs.

So the notion that I built a fixed asset that’s perpetually useful is really dangerous. I hear this a lot in particular from regional economic development officials who are talking about why they’re offering really extreme subsidies and tax abatements to hyperscalers to build data centers in their area. They talk about data centers, and I’ve heard this expression so many times, as “the factories of the new industrial revolution.” The analogy is just so fraught, and for this exact reason: leaving aside that the analogy is bad anyway, there isn’t the kind of longevity you would hope for.

Krugman: I think Jim Chanos may have been the first to say this to me, but I know other people have said it. It’s like shale wells, where a lot of people lost a lot of money because it turned out that a shale gas or oil well doesn’t keep yielding the same way a conventional oil or gas well does. It depreciates really fast.

Kedrosky: That’s just another extractive resource economy. It’s an extractive resource economy in surprising ways. So not just in terms of the nature of a declining return from the GPUs themselves, but also the declining return—as I was talking about earlier—of these giant training sets that allowed us to scale up the so-called scaling laws for large language models that got us to the point of GPT-4 and 5 or Claude—that there’s a declining return on that, at a much higher cost.

The cycle times are longer, there are more training cycles, and the cost is higher. So in both ways, the extractive economy that underlies all of this is producing declining returns. As with shale, there isn’t just one point of failure with respect to declining returns on extraction; there are multiple ways you begin to see it. It’s masked by the capital expenditures, because people try to spend their way out of the problem. So I’ll run more training cycles to produce better data. Of course, that doesn’t work. Then they go into the mode, like Elon Musk has been doing with his Grok model, of spending half the time doing post-training.

So instead of relying on finding new data, I’m going to do all kinds of work to make the model more sycophantic in terms of the response it gives to people. If you look at the training data, almost 50% of the training cycle time on the latest Grok models came from post-training, which can work, but in the limit it leads to these kinds of obsequious and sycophantic behaviors that make the responses at best unstable and realistically not very helpful.

Krugman: The last number you had was a little bit bigger as a share of GDP than the telecoms boom of the 90s. But presumably you think it’s higher than that now?

Kedrosky: I do. It’s something like 14% of nonresidential fixed investment now. So we’re considerably ahead of where we were in the telecom bubble; we’re somewhere between rural electrification and World War II rearmament.

Krugman: But not yet like railroads in the 19th century.

Kedrosky: Not yet like railroads, but on a path to a similar place. And, and this is really important, we’re at the point where there’s a financial flywheel now, where the financing of these data centers is increasingly somewhat divorced from what goes on inside the data center, because we’ve created a financing template for data centers: you have these SPVs, special purpose vehicles, into which a third party contributes capital and the tech company contributes technology, and out the other side magically pop securities that have great income and yield characteristics that are hugely attractive to investors. They look at it almost like a synthetic security, where the SPV, the data center, is what’s producing the income.

But on the other side of this is Meta and Google and they’re a prime credit. They’re a really strong credit. So they’re going to keep paying on this. I don’t really care what goes on inside the data center, because I have a lot of confidence in the counterparty in this. We know where all of this kind of thing leads when you have these financing flywheels driven by securitization and high yields and people not caring what goes on inside the actual structure itself: it leads to a lot more construction and eventually over-building.

Krugman: Oh, God. I’m having flashbacks to 2008, 2009. All of the stuff that was “perfectly safe” because after all, AIG was backing it, right?

Kedrosky: Right, exactly. Very much so. You have the same phenomenon of this look-through mechanism, where people look through the legal vehicle and say, “oh, well, it doesn’t matter, because on the other side of this it’s Google and Meta.” And it’s even more insidious than that. Some of the private credit providers have been straight up about this: in the contractual terms that underlie the data centers, if you were to cancel early and no longer continue as a technology company using one of these centers, there are provisions which basically force you to pay, in some sense, the net present value of the future lease payments back to the private credit company. They’ve been very clear that this actually works out to their benefit, given the time value of money; they don’t mind if you walk away early and make the payment, because “now I have more capital to do more building.” So in a weird way, there’s a perverse incentive in the system to make bad loans.
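To make the time-value point concrete, here is a toy net-present-value calculation; the payment sizes, term, and discount rate below are invented for illustration and are not figures from the conversation.

# Toy NPV example with made-up numbers: a tenant walks away and settles the
# remaining lease payments today at their net present value.
payments = [100_000_000] * 10   # ten remaining annual payments of $100M (hypothetical)
discount_rate = 0.08            # lender's assumed discount rate (hypothetical)

npv = sum(p / (1 + discount_rate) ** (t + 1) for t, p in enumerate(payments))
print(f"Nominal remaining payments: ${sum(payments):,.0f}")
print(f"Settled today at NPV:       ${npv:,.0f}")

The lender gives up some nominal dollars but gets the cash immediately and can redeploy it into the next project, which is the incentive being described here.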

Krugman: I do want to ask about circular financing, except that I’ve been looking at all of these pictures showing the money flows among all the players and my eyes were glazing over—I’m supposed to be good at this!—but there is some sense that things are kind of inflated by taking in each other’s washing, is that wrong?

Kedrosky: No, it’s absolutely right. We increasingly see circumstances where an Nvidia will make an investment in a provider with the provision that they use Nvidia’s chips. In turn, that becomes the provider’s primary source of semiconductors for their training centers. That in turn feeds back and leads to more buying, and round and round it goes. It gets very incestuous and complicated because we have all of these interlocking combinations. But the reality is that it creates the impression of much more demand than there is. And it’s done in part for strategic reasons, because Nvidia is trying to lock up a position in the marketplace where it says, “there’s really no point in even looking at a Google chip or an AMD chip or anyone else, because look how much we’re dominating the market, and look at the lengths to which we’re prepared to go to make sure we continue to do that.” So it’s not so much that it’s some kind of malfeasance; it’s a strategic move that ends up creating the impression of more growth than actually exists, because these companies all believe there’s a kind of land grab, literal and figurative, going on right now: I need to make sure I populate these things with my technology now, because who knows what opportunities I’ll get in future.

But all this tends to do is create this circularity, and round and round and round it goes. It becomes very difficult to get a true sense of actually what demand looks like. That’s made worse by this hoarding that’s going on where people don’t know what the demand is going to look like in future. But they do know that there’s relative scarcity of access to power. So I want to make sure I lock up every location I can now, and we’ll just let the chips fall out—no pun intended—how they do in future. So there’s this hoarding phenomenon that’s going on, which also leads to overbuilding, this circular phenomenon, and even leads to this kind of Chinatown-like speculation with respect to land grabs that might one day turn out to be useful.

We see the emergence of these companies called powered land companies, which are kind of analogous to what went on in the days leading up to LA taking over the Owens Valley’s water supply, where you show up with numbered companies and you buy up locations and no one knows exactly what you’re doing, and it’s all in anticipation of eventually one day someone wanting that and you say, “haha, I’m already here and I’ve already got the rights to access to power here and so if you want to build a data center, away you go,” and we’ve seen there’s a whole host of these so-called powered land companies that have no interest in building data centers. They just want to kind of go through a Chinatown-like model of preemptively buying the land in anticipation of an eventual buyer showing up.

Krugman: Wow. Power is one of those things that completely caught me off guard: the sheer power requirements, and how that becomes a constraint.

Kedrosky: Part of the problem is that the technology industry itself isn’t used to anyone saying no; they’re kind of like a petulant toddler. And power is the connection to the real world here, because these things have to be grid connected. We have to get power from somewhere. We’re looking at buildouts of certainly hundreds of megawatts, and even into the gigawatts. This is obviously far in excess of what you can straightforwardly attach to an orthodox grid. At the same time, there’s this huge temptation on the part of utilities to say, “we’ll take this,” because the predictability of the load and the high quality of the credit make it really appealing.

But then the problem becomes, I have to make myself whole. So now I have to turn around and probably increase rates to my ratepayers, which is why we’re seeing soaring electrical bills all over the place. We’re even seeing people pushing back and saying, “I don’t want data centers connecting in my region,” and that in turn turns into what’s called “behind the meter” power, which is: you can show up, but you’re supposed to bring your own power. Well, that’s easier said than done. It turns out it takes a long time to build a nuclear generating station. It turns out it’s like 4 to 5 years now to bring in natural gas. So people connect to the grid now with the promise that they will eventually be self-sufficient. But who knows whether they’ll ever be self-sufficient. So you get into these perverse situations, like recently in Oregon, where Amazon connected three data centers to the grid and has now registered an official complaint with the Oregon PUC because they can’t get power for any of them, even though they were promised it. This is the beginning of what you would expect to happen, because the temptation to take on these loads is immense, but the loads themselves are so large that it’s not straightforward to attach them without changing the bills going back to ratepayers.

Krugman: Yeah, the utilities may like it, but the governor elect of New Jersey probably doesn’t.

Kedrosky: That’s exactly right. Then you get even crazier situations, like a recent one with American Electric Power, AEP, where you actually have utilities speculatively buying futures on power that they hope will be used by data centers. The data center demand doesn’t show up, and so they in turn have to turn around.

This is happening right now with AEP. They’re trying to dump that power back into another interconnect, which is essentially a secondary distortion of a market, because they have 700MW of power that’s just burning a hole in their pocket. And that’s because they were buying speculatively, trying to control some power so they could then turn around to data centers and say, “hey, come here.” That didn’t happen, and now they’re dumping power, which is distorting another market.

Krugman: So we have a big problem with power. We probably have much faster depreciation rates than are being assumed. The question is, what is the prospect of this stuff actually generating the kinds of returns that would justify the investments?

Kedrosky: They’re low. This is why you get into these perverse conversations, which I seem to get drawn into all the time, about what that might look like. You get people doing these top down models and saying, for example, and this one just makes me crazy, that “the TAM (the total available market) for global human labor is like $35 trillion. What if we get 10% of that? That would be a $3.5 trillion revenue stream.” Which, for a host of reasons, is an indefensible way of approaching this. It’s partly the old mistake of saying, “if I just got 5% of the Chinese market, I would be a huge business.” Well, no one gets 5% of the Chinese market. You succeed or you fail. It doesn’t work that way. Same thing with this 10% of the global labor market. But more fundamentally, and this is more your bailiwick than mine, a $35 trillion market into which AI makes huge incursions is no longer a $35 trillion market. It’s a massive deflationary force. You have 10% of something, maybe, but I have no idea what that something is anymore.

So the idea that you can predictably say, “I will continue to pay as much for labor when it’s done this way versus that way,” just seems naive, at best inept, and really self-serving at worst. The same goes for all of these attempts to come up with a defensible model, whether top down or bottom up, where people say, “well, what if 5 billion people worldwide are all paying $100 a month for some kind of large language model subscription? Then we’re making enough back.” That’s not the way it’s going to happen! That’s an incredibly naive way of thinking about how this will play out. It’s more likely it’s just running for free on my phone and I don’t even notice. I’m not going to be paying for it at all.

Krugman: There are not 5 billion people in the world who can afford $100 a month.

Kedrosky: No, of course. It’s just a staggering misinterpretation. So both ways of thinking about it really don’t make a lot of sense. You fall into this, and I use this expression all the time, faith-based argumentation: “it has worked out before.” This is what everyone said during the fiber bubble, or during the dot com bubble, or pick your favorite moment of technological change. They say, “these things always work themselves out.” I find that a really patronizing approach to the problem, because the scale of the spending is now on a sovereign level; the amounts of debt being raised by companies like Oracle rival a mid-sized European power’s sovereign debt raising on an annual basis. These are non-trivial numbers, and it’s even rippling through to places like Taiwan, where TSMC is now something like 15% of Taiwan’s GDP. Every other sector in the country is struggling, not least because of technology but also because of tariffs. So we’re creating new fragilities in all kinds of places as we merrily extrapolate our way along on the basis of this debt-fueled spending.

Krugman: Of course, there’s always the possibility that “other players, other approaches.” I mean, it’s a little bit like last year where the Danish economy is all about Novo Nordisk, and it turns out other people can produce weight loss drugs, too.

Kedrosky: The analogy is spot on, because at peak, Novo Nordisk was something like 14% of Danish GDP. In a weird sort of way, TSMC holds the same role with respect to Taiwan now, and faces the same fragility risks, because LLMs, large language models, the basis of much of the current excitement, are at a kind of natural architectural dead end with respect to some of the things we’ve been talking about. So the idea that it’s going to continue, that we can project in the same way and extract the same gains from the same kinds of spending, is just incredibly unrealistic. That’s one of the reasons you’re seeing people increasingly look at other approaches. I think in all likelihood none of them will lead to anything like AGI. But it doesn’t really matter. The point is that it’s a demonstration of the extractive exhaustion of what we’ve currently done.

Krugman: There is this talk, among the mostly uninformed circles that I run in, about smaller models trained on a more limited base, the Chinese approach, that are much cheaper to run, and that this would be a huge blow to these companies if it turns out to be right.

Kedrosky: Absolutely. So you have these small and micro models that are much cheaper to train. DeepSeek was loosely an example last year of a much less expensive method for training models. We saw it recently with Moonshot’s Kimi model, which just came out, and with some of these other Chinese models. These are, in a sense, a different approach to the same problem. They’re not a new architectural approach; they’re still large language models, just at a much smaller scale in terms of the time required to train them and the cost required to use them. So they’re really important, but they’re even more important if you think forward. If I’m right that the amount of training we do in future has to decline, because of the natural architectural limits of large language models, then the economics are dictated by inference, by the ability of these models to respond to requests.

But most of inference is not you and I. This is a mistake we make all the time. People think that we are the story. With respect to inference, most of the global inference from consumers—from you and I and others—could be satisfied by a single data center in northern Virginia. That’s how small a fraction of the total load we are with respect to inference worldwide.

So 60%, let’s say, is training. We’re maybe 5 or 6% of the total workload of data centers. That bit in the middle, a huge chunk of that, is software itself—is coding, which turns out to be a huge profligate use of tokens. So what you’re forced to project as you go forward, is you say, “well, is everyone on earth going to be writing software using Copilot or Cursor or any of these tools?” That seems unrealistic. So where is the balance going to come from with respect to the increased usage of these models? Then at the same time, you have the incursion of these small models which are going to eat up even more of it at the margin. So it’s very difficult to see how the current extrapolated model, with respect to the workloads at these data centers makes any sense.

Krugman: It’s amazing. One of my pastimes now is watching old tech ads from the 90s. The ads were a lot better, by the way. I don’t know why the 90s were so much more fun than this one, but the old Qwest ads about all the wonders of fiber optics. It all came true, except, not for Qwest.

Kedrosky: Right, which is sort of the perennial problem here: you turned out to be the pioneers with the arrows in your backs. But yeah, I think that’s a big part of it. The other thing is that what’s really unusual about this bubble, or this moment, I’ll say, and what confuses people a lot, is that historically the U.S. has been very good at speculative bubbles. This is one of our core competencies. They tend to be about real estate, or they tend to be about technology, or they tend to be about loose credit, and sometimes they even have a government role with respect to some kind of perverse incentive that was created. This is the first bubble to have all four. We’ve got a real estate component, we have a loose credit component, we have a technology component, and we have a huge government component, because we’re told “we’re in an existential crisis with China, and we must win this at any cost.” All of those forces together mean that you have people looking at it through four different silos and lenses, rather than just saying, like in the global financial crisis, “it’s always about real estate and credit,” or in telecom, “it’s about technology and, loosely, some credit.” This is the first one where you end up in the rational bubble theory of all of this, where everyone feels like they’re doing something rational, yet in aggregate all of these different people looking at the problem through their own lenses are actually profligate contributors to it, because it’s the first one that combines all of the forces that historically have made some of the largest bubbles in U.S. history.

Krugman: Oh, joy. (laughs) Sorry. Well, it does have a sort of familiar feeling again. The bursting of the housing bubble played an important role in my life because it made it possible for me to afford my New York apartment. And of course I was paying a lot of attention, though with no financial stake, to the tech bubble of the 90s. But now we have the sum of all these things...

Kedrosky: The sum of all bubbles.

Krugman: Wow. Let me just ask. We’re running a little long, but one of the interesting posts you had recently was about economic geography and location, which is one of my things: San Francisco is having a revival. Do you want to talk a little bit about which places are affected?

Kedrosky: This is probably one of the narrowest moments with respect to risk capital in the last 30 years, in the sense that the money is either going to one thing or it’s going to nothing. Venture, secondary credit, growth capital: it’s all going into AI, which is having an impact in those centers most prone to having companies doing this kind of work.

So San Francisco is a good example, where it’s gone from a relative commercial real estate glut as recently as four years ago to now being back to historical norms. And probably by this time next year at the latest we’ll be well below the levels that we saw even 10-15 years ago, entirely driven by this influx of capital around a single sector. So the narrowness is one thing, but the scale of the money flowing in is another, to the point that it’s actually distorting. It’s doing the same thing in New York, and to a lesser extent in other centers. But it’s narrow geographically and it’s narrow sectorally, which is really unusual.

I think the flip side of that, and the point I always make, is that whenever all of this capital is flowing to a single thing, it also means that it’s not flowing somewhere else. I think that’s incredibly important to understand. I gave the Taiwan example earlier, where if you’re in AI or semiconductor manufacturing in Taiwan, you’re awash in capital. If you’re a manufacturer of literally everything else, you cannot get a loan. The same thing is true in the U.S, where if you’re an early stage company or a mid-stage company looking for growth capital for almost anything and it doesn’t have an AI component, you’re out of luck, my friend.

This notion of starving not just manufacturers but growth companies of capital, because of the narrowness of the spending, almost always has historical consequences. We saw this in the 90s, with the rise of China roughly coincident with the telecom bubble, and how U.S. manufacturers were increasingly starved of capital because it was all flowing sectorally to telecom. We’re seeing the same thing now. That will play out over the next few years. But it’s dramatic right now.

Krugman: It sounds bizarre unless you know the history, but in international economics it’s “the Dutch disease.” There was this famous period after the Netherlands discovered natural gas when it really killed their manufacturing sector.

Kedrosky: Exactly. I make the same analogy. I think that’s exactly what’s going on. And it plays out in insidious ways. Let’s say you imagine the tariff policy was going to be effective at onshoring manufacturing. Imagine you’re a capital intensive manufacturer trying to onshore and you’re not in the semiconductor sector: how difficult is it to raise capital right now? It’s virtually impossible, really much more difficult than it would be absent the AI spending bubble, because of this tsunami of cash flowing into a single sector. So even if you believe that policy was likely to be effective, the struggles with respect to getting any capital are dramatic because of this phenomenon. And yet, if you don’t talk about it and understand it, you’ll think, “oh, well, what we need is probably higher tariffs. We need to encourage people even more to come. Otherwise we won’t have enough manufacturers manufacturing domestically.”

Krugman: It’s just this feeling that—monstrous sums of money, monstrous egos, where does all of this end up?

I have to say, there’s one humble sector that I happen to know is prospering amid all of this, which is the two remaining companies that produce blue books for college exams.

Kedrosky: Oh, yeah.

Krugman: They’re having a revival because we’re going back to handwritten exams.

Kedrosky: You know what? That doesn’t surprise me. I should have thought of that, but I bet that’s exactly right.

Krugman: The problem is the young people don’t know how to write anymore. They literally don’t know cursive. How this thing works is so important, and people like me are thoroughly unequipped. So thank you for helping me a little bit on that front.

Kedrosky: That was great, it was great chatting.

The Unexpected Effectiveness of One-Shot Decompilation with Claude

Chris Lewis decompiles N64 games. He wrote about this previously in Using Coding Agents to Decompile Nintendo 64 Games, describing his efforts to decompile Snowboard Kids 2 (released in 1999) using a "matching" process:

The matching decompilation process involves analysing the MIPS assembly, inferring its behaviour, and writing C that, when compiled with the same toolchain and settings, reproduces the exact code: same registers, delay slots, and instruction order. [...]

A good match is more than just C code that compiles to the right bytes. It should look like something an N64-era developer would plausibly have written: simple, idiomatic C control flow and sensible data structures.

Chris was getting some useful results from coding agents earlier on, but this new post describes how switching to Claude Opus 4.5 and Claude Code has massively accelerated the project - as demonstrated by this chart on the decomp.dev page for his project:

Chart showing progress in matching code for Snowboard Kids 2. It slowly climbs from 20% to 25% from 3rd September to 17th November, then rises quickly to 45% by 2nd December

Here's the prompt he was using.

The big productivity boost was unlocked by switching to use Claude Code in non-interactive mode and having it tackle the less complicated functions (aka the lowest hanging fruit) first. Here's the relevant code from the driving Bash script:

# Use heuristics to pick the least complex remaining unmatched function
simplest_func=$(python3 tools/score_functions.py asm/nonmatchings/ 2>&1)
# ...
# Run Claude Code non-interactively on that function, appending its output to a log
output=$(claude -p "decompile the function $simplest_func" 2>&1 | tee -a tools/vacuum.log)

score_functions.py uses some heuristics to decide which of the remaining un-matched functions look to be the least complex.
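The post doesn't show what those heuristics are, so here is a purely illustrative sketch of the general idea: rank each unmatched function's assembly by size and control-flow density and emit the simplest one. The real tools/score_functions.py may work quite differently.

#!/usr/bin/env python3
# Illustrative sketch only; the real tools/score_functions.py may use different heuristics.
import sys
from pathlib import Path

# MIPS branch/jump mnemonics: control flow is weighted more heavily than straight-line code.
BRANCH_OPS = ("beq", "bne", "blez", "bgtz", "bltz", "bgez", "jal", "jr")

def score(asm_file: Path) -> int:
    lines = [line for line in asm_file.read_text().splitlines() if line.strip()]
    branches = sum(1 for line in lines if any(f" {op} " in f" {line} " for op in BRANCH_OPS))
    return len(lines) + 5 * branches  # lower score = simpler function

def main(asm_dir: str) -> None:
    candidates = sorted(Path(asm_dir).rglob("*.s"), key=score)
    if candidates:
        # Print the name of the simplest remaining function for the driving script to capture.
        print(candidates[0].stem)

if __name__ == "__main__":
    main(sys.argv[1])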

Via Hacker News

Tags: games, ai, prompt-engineering, generative-ai, llms, ai-assisted-programming, coding-agents, claude-code

Quoting Daniel Lemire

If you work slowly, you will be more likely to stick with your slightly obsolete work. You know that professor who spent seven years preparing lecture notes twenty years ago? He is not going to throw them away and start again, as that would be a new seven-year project. So he will keep teaching using aging lecture notes until he retires and someone finally updates the course.

Daniel Lemire, Why speed matters

Tags: productivity

The See-Through 747

December 8, 2025

In the first grade, my two favorite toys were both 747s.

The first was an inflatable replica, similar to those novelty balloons you buy at parades, with rubbery wings that drooped in such violation of the real thing that I’d tape them into proper position. To a six-year-old it seemed enormous, like my own personal Macy’s float. The second toy was a plastic model about twelve inches long. Like the balloon, it was decked out in the livery of Pan Am. One side of the fuselage was made of clear polystyrene, through which the entire interior, row by row, could be viewed. I can still picture exactly the blue and red pastels of the tiny chairs.

Also visible, in perfect miniature near the toy plane’s nose, was a blue spiral staircase. Early 747s were outfitted with a set of spiral stairs connecting the main and upper decks – a touch that gave the entranceway a special look and feel. Stepping onto a 747 was like stepping into the lobby of a fancy hotel, or into the grand vestibule of a cruise ship. In 1982, on my inaugural trip on a 747, I beamed at my first real-life glimpse of that winding column. Those stairs are in my blood — a genetic helix twisting upward to a kind of pilot Nirvana.

That’s a passage found in chapter two of my book.

It’s that second toy, the one with the transparent fuselage, that I bring to your attention. As it happens, I discovered a photograph, buried in an old family album, in which you can see it. While I’ve always remembered the toy, I had no idea that a picture of it existed.

That’s me holding the plane, of course, with my sister and my mother in front. It’s Christmas morning, 1972.

Look closely and you can see the rows of seats, sectioned into different colors. The first class seats look red. On the left wing it says “Pan Am.” You can’t see the spiral stairs, but they’re in there, in the middle of that blue part. It appears the entire fuselage was see-through, not just half of it, as I’d written.

One wonders what sorts of shitty toys are available these days for first-grade airplane buffs.

That plastic plane is long gone, sadly. I’m not saying you should save all of your childhood toys, but be careful. This one, surely, deserved to be set aside. Even so young, I already had aspirations of becoming a pilot. It would’ve made a meaningful keepsake.

The picture, at least, remains.

Last Thursday, by the way, marked the 34th anniversary of the demise of Pan American World Airways. The company ceased operations on December 4th, 1991. I remember watching it on the news, in a hotel room in Burlington, Vermont.

I was fortunate enough to fly twice on an actual Pan Am 747. From Rio de Janeiro to New York, in 1982, and from Frankfurt to New York in the fall of 1991, shortly before the end.

 

The post The See-Through 747 appeared first on AskThePilot.com.

Six New Tips for Better Coding With Agents

I’m hanging out in Sydney with my esteemed co-author and co-conspirator Gene Kim today; we flew in to conduct Vibe Coding workshops and talks this week to the Commonwealth Bank of Australia, some of their partner companies, and the general engineering public. Very cool of CBA to sponsor this training, and Gene and I are super excited for it.

We noticed that we’ve pushed into new territory since our Vibe Coding book was published. The book is all about how to work with coding agents, and all the advice and techniques in it are still incredibly relevant; I use it all daily. But there’s even more to learn, and we continue to uncover new tips and strategies.

I thought I’d share some of the new themes we’ve noticed, in no particular order, hot off the presses. Let’s see which ones resonate with you.

1. Software is now throwaway — expect < 1 year shelf life

This is probably the most obvious one. Anthropic has already begun embracing this idea internally, which is how I first heard about it, from friends there.

26 years ago Joel Spolsky wrote one of the most useful pieces of software advice anyone has ever given, in Things You Should Never Do, Part 1, where he says, in a nutshell, DON’T REWRITE YOUR SOFTWARE!

In this classic essay, well worth a read, Joel gives powerful examples of companies and projects that decided their code base was too old and crufty, so they chose to rewrite it all from scratch. And the results were, predictably, awful. Joel says:

The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive. Au contraire, baby! Is software supposed to be like an old Dodge Dart, that rusts just sitting in the garage? Is software like a teddy bear that’s kind of gross if it’s not made out of all new material?

And he was right! Outstanding essay. But unfortunately, not so timeless as we thought. It proved to have a shelf life of only about a quarter century. We are entering a surprising new phase of software development, in which rewriting things is often easier (and smarter) than trying to fix them.

I first noticed this with unit tests. You’ll use agents to make a giant refactoring to your system, and then all the tests will be broken. The agents inevitably struggle to fix them. So one day I said, screw it, delete all the tests and make me new ones. And it got through that exercise SO much faster. The new tests were great, had great coverage, and importantly, the LLM was able to generate them very quickly, compared to trying to reason through the old system behavior vs the new expected behavior. With new tests, it can focus just on the new behavior, which is a much cleaner cognitive problem.

This generalizes beyond tests: generating almost any code is easier (for AIs) than rewriting it. Hence, recreating software stacks from scratch is starting to become the new normal. We’re seeing it more and more, e.g. companies with mainframes who are concluding that a small team of engineers and biz people could recreate the entire experience with the same API, but with modern architecture and maintainable code, in just a few months. And they’re doing it.

The upshot is that for all the code I write, I now expect to throw it away in about a year, to be replaced by something better. Maybe mine, maybe someone else’s. Doesn’t matter. It’s all just stepping-stones to higher velocity.

This spells trouble for third-party SaaS vendors. Companies are also discovering that they can build bespoke business-automation software so easily that they don’t need to re-up their vendor contracts. SaaS vendors are going to have to work harder to provide value that’s too expensive to recreate. It can be done — Graphite is one example; they now have years of learnings into the nuances of AI code review. I don’t think you would necessarily want to retrace those years of steps yourself, on your company dime. Sourcegraph is another example; they have a code search engine with 10 years of enterprise bug fixes, and even with modern agents, you almost certainly wouldn’t want to try to clone that yourself.

But many SaaS vendors who’ve found niches building business automation software are going to be in real trouble. Because businesses are automating their own processes now, with vibe coding!

2. Agent UX matters at least as much as Human UX

One of the interesting themes I heard at the AI Engineering Conference in NYC a couple weeks ago was that although many people are building tools for AIs, they are finding it very hard to get the AIs to use those tools.

It’s tricky to get AI to use a tool it’s not trained on. They have certain ways of thinking and working, and they tend to reach for familiar tools (e.g. grep instead of a fancier search). I’ve talked with many people who wanted to build a tool for their agents to use, and they’d work with the frontier models to design the perfect agent-friendly interface, one the models swore up and down they would use.

And then haha, no, the agents don’t use it. You prompt and prompt, they ignore and ignore. So what do you do? How do you get them to use your tools?

My Beads issue tracker for agents has been an interesting case study here. It’s only maybe 2 months old and it already has 250+ forks and 5000+ stars. It’s a successful project. But I’ve never looked at the code. It’s fully vibe-coded by agents. Despite that, Beads managed to capture lightning in a bottle — it’s a tool that AIs use, and not only that, they like it. Agents use Beads eagerly and enthusiastically with very little prompting. They make smart decisions, such as filing Beads when they are low on context, instead of doing the work directly. Things you would normally have to prompt them to do, they just do!

I’m no magician. I’ve built plenty of tools that the AIs refused to use; I’ll talk about one of them below. And I’ve built plenty of prompts that the AIs choose to ignore or overlook. It’s not like capturing lightning in a bottle is super reproducible at this point. But I can share some of the things I did with Beads that I think helped.

First, I asked Claude to help me design a new lightweight issue tracker backed by git, with a few other constraints, and then Claude came up with about half of the rest of the design: the SQLite database caching layer, the discovered_by graph link that the models feel is very important for gathering context on issues, the hash IDs, deletion tombstoning, etc.

During the Beads design phase, I mostly argued with Claude, telling it I didn’t like certain choices it was making from a Human UX perspective. Eventually we negotiated our way to something we both liked, something that had good agent UX and also good human UX.
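
To make those ingredients a bit more concrete, here’s a purely hypothetical sketch (in Python, for brevity) of what a git-backed issue record with hash IDs, a discovered_by link, and a deletion tombstone might look like. It is not Beads’ actual schema or code; the .issues directory and field names are invented for illustration.

```python
# Purely hypothetical sketch of a git-backed issue record with hash IDs, a
# discovered_by link, and a deletion tombstone. Not Beads' actual schema or
# code; the .issues directory and field names are invented for illustration.
import hashlib
import json
import time
from dataclasses import asdict, dataclass, field
from pathlib import Path

ISSUE_DIR = Path(".issues")  # plain JSON files, tracked and merged by git

@dataclass
class Issue:
    title: str
    description: str = ""
    discovered_by: str = ""   # hash ID of the issue whose work surfaced this one
    deleted: bool = False     # tombstone instead of a hard delete
    created_at: float = field(default_factory=time.time)
    id: str = ""

    def __post_init__(self):
        if not self.id:
            # Content-derived hash ID, so IDs stay stable across clones and merges.
            digest = hashlib.sha256(f"{self.title}:{self.created_at}".encode())
            self.id = digest.hexdigest()[:8]

def save(issue: Issue) -> Path:
    """Write the issue as one JSON file; committing it is left to git."""
    ISSUE_DIR.mkdir(exist_ok=True)
    path = ISSUE_DIR / f"{issue.id}.json"
    path.write_text(json.dumps(asdict(issue), indent=2))
    return path

if __name__ == "__main__":
    parent = Issue(title="Refactor the logging subsystem")
    child = Issue(title="Dead code found while refactoring logging",
                  discovered_by=parent.id)
    save(parent)
    save(child)
```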

For the agent side, once we had the initial structure in place (the issue tracker itself), the primary UX issue became tooling ergonomics. My agents were trying to use Beads, but they kept giving it the wrong arguments. For example, they’d use --body instead of --description when filing an issue, which would fail. Why? Because they were trained on GH Issues, and GHI’s CLI tool uses --body for filing issues. Reaching for the familiar again!

So in that particular case, I told it to add --body as an alias for --description, which it did, and that bit of Agent UX friction went away forever. I’ve done this many, many times in Beads. As the agent works, I watch how it’s using the tool, and whenever it encounters an error, I ask it, how did you want it to work there? How can we change it to make the behavior more easily guessable?

Over the past few months we’ve made dozens of tweaks, adding flags and commands, and the agents now rarely have trouble using Beads fluently.
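
To show how cheap that kind of ergonomic fix is, here’s a hypothetical sketch using Python’s argparse purely for illustration; it is not Beads’ actual CLI code, just the aliasing idea from the anecdote above.

```python
# Hypothetical sketch of flag aliasing, using Python's argparse purely for
# illustration (not Beads' actual CLI): --body becomes an alias for
# --description so agents trained on other tools don't hit errors.
import argparse

parser = argparse.ArgumentParser(prog="issues")
sub = parser.add_subparsers(dest="command", required=True)

create = sub.add_parser("create", help="file a new issue")
create.add_argument("title")
# Both option strings write to the same destination; --body is just an alias.
create.add_argument("--description", "--body", dest="description", default="")

args = parser.parse_args(["create", "Fix flaky test", "--body", "Fails on CI only"])
print(args.description)  # -> Fails on CI only
```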

I can’t claim to have cracked the agent-UX problem, not by a long shot. The role of “Agent UX Designer” feels ready to emerge as a first-class career for humans. As just one example, I’m working on my third agent orchestrator this year. And even though the architecture is sound, I haven’t found the magic UX formula yet, the one where any agent automatically figures out what to do and does the right thing most of the time. I’ll get there! In fact, as soon as I solve this problem with my orchestrator, I’m launching it. I’m aiming for Christmas Day. We’ll see.

Once you do find that secret incantation that makes your tool truly agent-friendly, you should get it out there as fast as you can, because it will grow like crazy.

And if you try to launch a tool that agents don’t choose to use of their own volition, with minimal prompting, then you need to go back to the drawing board and fix the agent UX.

The best way to do this is to leverage the Optionality from FAAFO, from our Vibe Coding book. Generate a whole bunch of interfaces, and then experiment with each one, to see which one the agents like best. It’s very much a trial-and-error search problem at this point, until either the agents get better at using new tools, or we get better at learning what they like.
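
If you want to make that search a little more systematic, a sketch like the one below can help. Everything in it is assumed: run_trial stands in for however you script an agent attempting a task against one candidate interface.

```python
# Hedged sketch of trial-and-error interface selection: run the same scripted
# tasks against each candidate interface and keep the one agents succeed with
# most often. run_trial() is an assumed stand-in for your own harness.
def pick_agent_friendliest(interfaces, tasks, run_trial):
    # run_trial(interface, task) -> True if the agent used the tool unprompted
    # and completed the task; False otherwise.
    scores = {iface: sum(run_trial(iface, task) for task in tasks)
              for iface in interfaces}
    best = max(scores, key=scores.get)
    return best, scores
```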

3. Spend 40% of your time on code health, or else you’ll wind up spending >60%.

Gene was curious how I could be so confident in Beads if I’ve never looked at the code. My answer to him was one of the easiest I’ve ever given. If you are vibe coding, i.e., having the AI write all your code for you, then you need to spend at least 30–40% of your time, queries, and money on code health. That’s how you make sure your code is OK. You have the AI conduct regular code inspections. Tons of them.

It’s pretty easy in principle: Every now and then, you pause your regular work, and tell your agents: go find code smells of all shapes and sizes. Have them file Beads for anything that needs followup. Tell the agent to look for large files that need refactoring, areas with low test coverage, duplicated/redundant systems, legacy code, dead code, poorly-documented code, etc. etc. etc. I don’t have a good prompt for this step yet; I’d appreciate it if anyone who has crafted one would share it. But you can also just ask your agent to help craft it.

You’ll also want to ask your agent to do cleanups during the code-health passes. Have it look for files that are in the wrong place, or have misleading names, or need better homes. Have it clean up debug cruft, ancient plans, build artifacts, old docs, anything you don’t need. This is all part of the regular hygiene and maintenance of a vibe-coded code base.

It helps to be creative, and also to ask the agent to be creative, thinking outside the box. After the first round or two of regular code reviews, start having it look for over-engineered subsystems (YAGNI), opportunities where your code could have used a third-party library, and other broad, system-level concerns.
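
If you want to script the sweep itself, a rough sketch might look like this; run_agent and file_issue are placeholders for your agent call and your issue tracker, and the categories are just the ones listed above, not a canned prompt.

```python
# Rough sketch of a scripted code-health sweep. run_agent() and file_issue()
# are placeholders for your agent call and issue tracker; the categories are
# the ones listed above, not a canned prompt.
SMELL_CATEGORIES = [
    "large files that need refactoring",
    "areas with low test coverage",
    "duplicated or redundant systems",
    "legacy and dead code",
    "poorly documented code",
    "over-engineered subsystems (YAGNI)",
    "places where a third-party library would be simpler",
]

def code_health_sweep(run_agent, file_issue):
    for category in SMELL_CATEGORIES:
        report = run_agent(
            f"Inspect the repository for {category}. "
            "List concrete findings with file paths and a suggested fix."
        )
        for finding in report.splitlines():
            if finding.strip():
                file_issue(title=finding.strip(), label="code-health")
```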

Basically the agent will always find problems, often shocking ones, e.g. where you discover you have two or even three completely redundant systems (databases, logging, telemetry, whatever) that need consolidating. And since agents tend to accrete code without automatic refactoring, your vibe-coded source files will tend to grow to thousands of lines, which makes them harder for agents (and humans) to reason about. So you should tell it regularly to break things up, and then run dedicated sessions to implement the refactoring!

During each code review, have your agent file Beads for everything it discovers. Then have it review the epics and issues (up to 5 times; see below) to ensure the implementation will go smoothly.

Then swarm to fix it all! Do all this at least weekly. For me, I’d estimate I spend about 25–30% of my time and money on code health, and I don’t think it’s enough. As long as I continue to find serious problems with reviews, I need to do more reviews. My current guidance is that you should expect nearly half of your work to be code-health related.

What happens if you don’t follow this rule? You gradually (but rapidly) accumulate invisible technical debt that weighs down your agents in various ways — too much code, conflicting code, obsolete docs, etc. Your agents will begin to work more slowly and you’ll see more bugs in their outputs.

Stay on top of code health, and you’ll keep your vibe-coded code base sprightly.

4. You might be too early: Some projects are ahead of their time.

AI cognition takes a hit every time it crosses a boundary in the code. Every RPC, IPC, FFI call, database call, client/server call, every eval, every single time the AI has to reason cognitively across a boundary or threshold… it gets a little dumber.

I noticed this when working on Efrit, my native-elisp coding agent, which lives inside Emacs. Over the summer I was trying to get Claude and other models to build it for me, and they struggled. Hard. Efrit lives in Emacs, which is a separate process from your coding agent, so already there’s one boundary.

For that particular IPC boundary, there are multiple channels for the agent to talk to Efrit, all of them quite unsatisfying. There’s emacs --batch, which has limitations, and the emacs-server client/server mode, which is also limited for the kind of heavy reflective introspection the agent needs to do for this kind of code base.

So what did I do? I spent a week working with Claude to build a better agent-Emacs bridge. Claude built me the “Agent-Efrit bridge”, a simple and elegant system which uses a polling file channel as a message queue to and from Efrit. It’s beautiful. A tool made for agents, by agents! When it does work, it’s amazing.
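
For the curious, here’s a minimal sketch of the polling-file-channel pattern in general. It is not the actual Agent-Efrit bridge (which lives in elisp on the Emacs side); the directory names and message format are invented for illustration.

```python
# Minimal sketch of a polling file-channel message queue, the general shape of
# an agent<->editor bridge. This is NOT the actual Agent-Efrit bridge (which is
# elisp on the Emacs side); directory names and message format are invented.
import json
import time
import uuid
from pathlib import Path

INBOX = Path("bridge/inbox")    # requests dropped by the agent
OUTBOX = Path("bridge/outbox")  # replies written by the editor-side poller

def send_request(payload: dict) -> str:
    """Agent side: write a request file and return its message ID."""
    INBOX.mkdir(parents=True, exist_ok=True)
    msg_id = uuid.uuid4().hex
    (INBOX / f"{msg_id}.json").write_text(json.dumps(payload))
    return msg_id

def wait_for_reply(msg_id: str, timeout: float = 30.0):
    """Agent side: poll the outbox until a reply with the same ID shows up."""
    reply_path = OUTBOX / f"{msg_id}.json"
    deadline = time.time() + timeout
    while time.time() < deadline:
        if reply_path.exists():
            return json.loads(reply_path.read_text())
        time.sleep(0.2)  # crude polling interval; the IPC boundary is just files
    return None
```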

Naturally, Claude never uses our fuckin’ bridge we built together. I’ve given up even asking. This is an example of a tool I tried to build, but the AI just refuses to use it.

With Efrit, after that initial bridge there are still other RPCs — the API call to the frontier model, the parsing of its response, and the eval of the elisp code to execute the response. All of these were piling up to make the models dumber. And ultimately, the August 2025 crop of frontier models couldn’t solve this problem. Or at any rate, the returns became so diminishing that I gave up.

So I paused the project! There was plenty of other work to do. A few months went by, a few model releases happened (notably Sonnet 4 and Sonnet 4.5). Efrit sat idle. And then about 2 weeks ago, someone asked to be an Efrit maintainer, since people wanted to use it. But wait, Efrit was still crap! So I thought, what the heck, let’s have Claude 4.5 peek at it.

Claude 4.5 took one look and said, “great idea, awful execution, but we can modernize this.” It produced an incredibly detailed plan to take Efrit to the next level, and I’ve spent the past 2 weeks letting it grind through this plan (serially, no swarming, since swarming on elisp sounds like a bad idea today.) And now Efrit is getting to be approximately on par with modern coding agents.

All I had to do, in order to crack this nut, was wait 3 months (i.e., 2 model releases). Claude is finding Efrit quite easy now, compared to this summer. I cite this as one of many examples of how the models and tools are indeed getting exponentially better. I have a set of projects they can’t do today. Efrit is (well, was) one of them. If you keep a menagerie of “too hard for AI” projects, then you will be able to watch and measure the models’ cognitive progress month by month.

I often bake this philosophy into my project planning. I will deliberately build something that’s just slightly too hard for the agents, knowing that in the next model release, they’re almost certainly going to find it straightforward. I plan for the models to get smarter, by building tools that don’t work that well with today’s models. This is how you get that little bit of extra shelf life out of your software — plan for it to be useful when smarter agents arrive.

If you read this section and concluded, “well, obviously AI isn’t ready to handle my project work; I tried it, it was confused, so I’m just going to wait for smarter models,” then I wouldn’t blame you. But be careful! You might not need to wait as long as you think. If you’re just using this as an excuse to procrastinate until the models are smarter, then you’re missing out on honing a massive set of skills you need in order to work with models effectively — even as they do get smarter.

In the next section, we’ll talk about a way you can get even more cognition out of today’s models, without needing to wait. You’ll have them solve even harder problems than you thought they were capable of, all because you didn’t give them enough of a chance before. Let’s take a look!

5. The Rule of Five: When in doubt, have the agent review its own work 5 times.

Jeffrey Emanuel discovered this powerful and unintuitive rule. He found that he gets the best designs, the best plans, and the best implementations, all by forcing agents to review their proposals (and then their work) 4–5 times, at which point it “converges”. It typically takes 4 to 5 iterations before the agent declares that it’s as good as it can get.

Jeffrey described a long, complex series of prompts for this process; I’m sure we’d all appreciate it if he published them. But the way he described it to me, you first make them do a task, then you do a series of focused reviews. Each review should be slightly broader and more outlandish than the previous one, or you can do it in the opposite order. But you need a mixture of in-the-small and in-the-large reviews. You’re having it look for bad code (or designs), but also bad architecture.

To be slightly more concrete, Jeffrey first asks it to do a couple of regular code reviews, which find all the usual stuff. And you’ll notice right away that even on the second review it will often find things it missed in the first review. But I think most of us stop there, if we even ask at all. It definitely feels weird to ask for the 3rd code review, which is the agent’s 4th pass over the code, counting the generation step. But the 3rd review, especially during the Design phase, is where you start asking it existential questions about whether you’re doing the Right Thing throughout the project.

I tried it, and sure enough, it does take 4–5 iterations, just as Jeffrey described, before the agent will say something like, “I think this is about as good as we can make it.” At that point it has converged. And that, folks, is the first point at which you can begin to moderately trust the output the agent has produced. If you always take the first thing it generates, with no review at all, you’re bound to be disappointed.

I asked Claude what it thought of this Rule of Five, and Claude was enthusiastically supportive. Claude claims that this process matches their own cognition model, which is breadth-first: they solve each problem first in very broad strokes. And then they almost always need more passes for proofreading, refining, and polishing — much like humans do.

At first you’re going to want to do this purely with prompting. Maybe Jeffrey Emanuel will share some of his fancy review prompts. But over time, you’re going to want to automate it, since you’re applying the Rule of Five at every single step in the process, which at a bare minimum, for any nontrivial hunk of work, would be:

- 5 passes over the design

- 5 passes over the Beads implementation plan (this results in far better issues and dependencies, and better execution)

- 5 passes over the implementation (code + 4 reviews)

- 5 passes over the tests

- 5 passes for code health (might as well build it into your dev middle loop)

Yes, this is slower. Yes, this is more expensive (though, probably less so than all the rework you’ll be stuck with if you skip these steps.) Yes, it’s awkward to tell an AI to keep reviewing its work that it just reviewed.

But you should make sure you do it. Rule of thumb: demand at least 2–3 passes on small tasks, and 4–5 passes on big tasks. If you’re not super familiar with the language, the stack, or the domain, then you should err on the side of more reviews.

Do this, and it’ll feel like you’re using a model from the future. They will do far better work than they’ve been doing for you. Try it!
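
If you eventually script the loop rather than prompt it by hand, the shape might look something like the sketch below. Everything in it is assumed: run_agent is a placeholder for however you call your coding agent, and the review prompts are stand-ins, not Jeffrey’s actual prompts.

```python
# Minimal hypothetical sketch of an automated Rule-of-Five loop. run_agent() is
# a placeholder for however you call your coding agent; the review prompts are
# stand-ins, not Jeffrey Emanuel's actual prompts.
def rule_of_five(artifact: str, run_agent, max_passes: int = 5) -> str:
    review_angles = [
        "Do a detailed line-level review: bugs, edge cases, naming.",
        "Review again with fresh eyes: tests, error handling, anything missed.",
        "Step back: is the structure right? Any redundant subsystems?",
        "Existential pass: are we solving the right problem at all?",
        "Final polish pass: if this is as good as it can get, say CONVERGED.",
    ]
    for i in range(max_passes):
        angle = review_angles[min(i, len(review_angles) - 1)]
        reply = run_agent(f"{angle}\n\nHere is the current version:\n\n{artifact}")
        if "CONVERGED" in reply:
            break
        artifact = reply  # assume the agent returns a revised version each pass
    return artifact
```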

6. Swarm where you can, but beware the Merge Wall

I’ve been focused on agent swarming the past few weeks, after several months chasing quality and reliability without much success. I’ve got a new (third!) orchestrator in the works, and wow. Swarming. Next year is going to be extraordinary.

I’ll share a quick example of how powerful swarming can be when it’s working right. I had a disaster the other day where 30 Beads issues went missing. It was three or four major epics, each with a bunch of child issues. I had put a ton of work into their design, following the Rule of Five, and they were all ready to implement.

But I couldn’t find them.

I wasn’t panicked, since it’s hard to truly lose issues in Beads (we do have some bugs here and there but they are getting closed fast). Beads is all backed by Git, so it’s almost always possible (for the AI) to reconstruct what really happened from the git history, and fix it.

But I was concerned, because, where the hell did my 30 issues go? They weren’t deleted. After a couple minutes of increasingly alarmed searching, I finally figured out where they all went: My swarm had implemented them all! WTF?

There was a minor miscommunication, I guess; I asked my orchestrator to start working on the bug backlog, and it assigned all 30 issues to the eight workers I had already spun up. Some of these were quite complex issues. But while I was busy with other stuff, and not watching, the worker agents implemented and closed all 30 issues.

I was equal parts delighted and flabbergasted when I realized what had happened. I went and checked, and sure enough, they’d done all the work. It was pretty decent work and needed very little touchup — likely because I had used the Rule of Five throughout, and the Beads were in very good shape when it came time to implement.

After my 30 issues were magically implemented, I was sold. I would never not swarm again!

And then, of course, I was utterly unable to reproduce that perfect swarm. Subsequent attempts all ran into merge issues and required a ton of hand-holding and infrastructure tweaks. It will be a couple more weeks before I can swarm reliably. But still, I am completely sold.

I’ll know that my swarm orchestrator is ready to launch when I can swarm the web UI, building it from scratch. My system doesn’t have a non-CLI UI yet; well actually it does, in Emacs, but I doubt you want that one, however cool it might be. (It has Efrit inside it, so it’s pretty damn cool.) But I’m going to build a UI with the swarm, and that’s when I’ll know it’s ready for prime time.

The thing you have to be prepared for when swarming is the Merge Queue problem. It’s like smacking into a wall. To illustrate, let’s say you have a simple swarm of 3 workers. One worker is redoing the logging system, another is changing the database API, and another is changing the client-server protocol. It’s likely that all three of these subsystems have some overlap, and changing one requires changing another. And their work will collide when they try to merge all the work together.

When you swarm a task, a key problem is that the workers all start from the same baseline (e.g. the same starting git commit), and they all do their work off that baseline. But each worker has the ability to change the baseline dramatically. Let’s say workers A, B, and C all complete and merge in their work. The system may now be completely different from the original baseline. When the fourth agent D finishes its work, a rebase may no longer be feasible. The system may have changed so much that D’s work needs to be completely redesigned and reimplemented on the new system baseline, which includes A, B, and C’s changes.

This is why you need the Merge Queue. You need to serialize the rebases, and give each worker enough context, and context-window space, to fully merge their work into the new baseline.
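
In sketch form, that serialization looks roughly like this; rebase_onto_main, run_tests, and merge_to_main are placeholders for whatever your orchestrator, agents, or humans actually do at each step.

```python
# Hypothetical sketch of a serialized merge queue for swarm workers. Each worker
# finishes on its own branch; merges land one at a time, and each worker must
# re-integrate against the *current* baseline, which earlier merges keep moving.
from collections import deque

def drain_merge_queue(finished_workers, rebase_onto_main, run_tests,
                      merge_to_main, max_retries: int = 2):
    queue = deque((w, 0) for w in finished_workers)
    while queue:
        worker, retries = queue.popleft()
        rebase_onto_main(worker)          # may need real agent or human work
        if run_tests(worker):
            merge_to_main(worker)         # baseline shifts for everyone behind
        elif retries < max_retries:
            queue.append((worker, retries + 1))   # try again after others land
        else:
            print(f"{worker}: too divergent; needs redesign on the new baseline")
```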

Some work is inherently parallel, and some work is inherently serial — the latter because of irreducible complexity and task overlap. If you think you’re going to be stuck with an awful merge, then you should probably defer some tasks until the earlier ones complete. But it’s not always possible to tell in advance, so sometimes you’ll have tough merges.

I’ve noticed that projects tend to go through a cycle where they are swarmable for a while, but then you’ll suddenly need to pause and serialize all work for a time. This can happen, for instance, if you’re changing the directory layout of your project — e.g., to make it more accessible to AIs who are trying to guess their way around. You might need to experiment with a bunch of different layouts. But each new project source layout changes all your package imports, scripts and other inter-module references, which would totally break any existing workers. So you have to pause all other work while you do the big package restructuring.

You can think of swarming as a MapReduce-type operation. In the mapper phase, you can spin up virtually unlimited workers. But in the reducer phase you need to merge all their work back together. Unfortunately, as Gene observed, this isn’t really an MR because most MRs have a very simple reduce phase — the workstreams have a monoidal shape, and you can merge their work by doing things like summing counts or whatever.

But with agent swarming, the reduce phase is a nightmare; it’s the exact opposite, in fact: it can be arbitrarily complicated to merge the work of two agents. In the limit, what should we do if Worker A deleted an entire subsystem, and Worker B comes along with a bunch of changes to that (now-deleted) subsystem?
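
For anyone unfamiliar with the monoidal framing: in a classic MapReduce, the reduce step is just an associative combine, so worker outputs can be merged in any order or grouping, as in this toy example. Code merges between agents have no such property.

```python
# Toy illustration of a "monoidal" reduce: the combine step is associative, so
# worker outputs can be merged in any order or grouping. Merging agents' code
# changes has no such property, which is why the swarm reduce phase is hard.
from functools import reduce

worker_outputs = [{"errors": 3}, {"errors": 1}, {"errors": 7}]

def combine(a, b):
    return {"errors": a["errors"] + b["errors"]}

left_to_right = reduce(combine, worker_outputs)  # {'errors': 11}
regrouped = combine(worker_outputs[0], combine(worker_outputs[1], worker_outputs[2]))
assert left_to_right == regrouped  # grouping doesn't change the result
```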

So the swarm merge step is often messy and not entirely automatable. Some cases require either human judgment, or else really good context for AIs to make the call.

I don’t know if we’re going to get a tool that hides the mess. I’ve been talking to investors, many of whom are keenly interested in the next generation of developer tools, and there is a prevailing belief that all we need are proper guardrails, and then these kinds of agentic coding and swarming tools will be accessible to “average” developers, which they certainly are NOT today.

And why is that? Well, as Joel Spolsky observed in Things You Should Never Do Part 1, reading code is by far the hardest part of coding. This is a well-known finding in the Dev Productivity world; they’ve done study after study. And with vibe coding, reading code is… pretty much all you do all day. It’s hard for most developers. The average dev probably thinks 5 paragraphs is an essay. Coding agents make you read enormous waterfalls of both text and code. This is absolutely draining and beyond the capabilities of most devs today.

However, I don’t see eye-to-eye with the investors on this one. I personally do NOT think we will get useful guardrails. If you try to build something with heavy guardrails, you’re going to wind up with Bolt or Lovable, and nobody will use it. Sorry! That’s just not the right model. Instead, I think we’re going to get orchestration tools that are every bit as powerful, messy, quirky, and frustrating as Claude Code and the current batch of terminal-based coding agents.

And the people who figure out how to use these tools, despite the lack of guardrails, will become super-engineers. I’ve been kicking around the idea of a new blog post, the Rise of the Superengineer. Dunno if it’s worth a whole post, but what’s going to happen in 2026 is that a new class of 100x (or maybe 1000x) engineer will emerge — people who have figured out how to wield coding agent orchestrators effectively, deal with the merge problem, planning, swarming, code health, etc. — all the stuff I’ve talked about here, and more. And they will be able to run 100 coding agents at once, and get meaningful work done with them.

This will make them as productive as a team of 50+ regular engineers.

I think my own orchestrator will usefully peak at around 50–80 agents. Maybe I can get it up to 100. It’s not aimed at massive swarms; it’s aimed at leveling you up from manually managing a dozen ad-hoc agents in ad-hoc repo clones all around your filesystem, to managing swarms of well-behaved agents working 5–10 at a time on focused tasks. It will still require your full attention, your full engineering background, and every bit of design taste you can muster, to use these tools. In some ways it’s even harder and more dangerous than using a single coding agent, even with tooling support.

But some people are doing it already! By hand, to be sure, or by building their own homegrown orchestrators. Mark my words, though: next year, you’re going to have engineers who can build (and likely maintain) an entire company’s software on their own. You’ll have solo unicorns, sure, but also a marketplace of solo uber-contractors who can build things for companies that they would otherwise have had to pay someone like Accenture tens of millions of dollars for.

There will also be small teams of people who figure out how to maximize their velocity when multiple humans work with agent teams. And these small teams are going to change the world. Gene and I are actively wondering whether company size is going to decrease on average, because you will be able to get so much more done with so many fewer people.

But no matter what, the tools are going to be messy from now on. Working with AIs is a little messy and nondeterministic. And I think that’s here to stay.

Wrap-Up

Gene and I went through at least a baker’s dozen ideas this morning, and I’ve chosen the half that seemed the most baked. A few others are becoming clearer, but are still so vague that we don’t really have the right vocabulary to talk about them yet.

Change is coming. Agents are way more powerful than they were 3 months ago. I’ve talked with plenty of (good) engineers lately who still believe that agents have plateaued. Ignoring the 30 years of evidence showing that AI is following Moore’s Law, they feel it’s just going to stop getting better today, out of nowhere. And in their opinion, agents are not good enough yet.

But if you’ve been following and using agents since they landed in February, you’ll know just how much more powerful and capable they have become, even since summertime. It’s not plateauing; heck, it’s not even slowing down. And you can prove it using your backlog of projects that are too hard for AI. Every few months, another one will fall, until there are no more left.

If you’re one of the many engineers who still hasn’t made the switch to AI-first coding, now is a good time to try it again. If you haven’t used an agent in a few months, you’re going to be shocked at how smart and capable they have become. They are full concierges now, able to help you with any computing-related problem. People tell me they even use Beads for their personal TODO lists!

My orchestrator is right around the corner. I’m excited for it. It’s going to make a splash. Hopefully this Christmas!

But you’ll only be able to use it if you already use coding agents for literally everything. If you want to be a 100x super-engineer next year, you need to start learning vibe coding basics today, and make it work for you. Keep in mind all the advice I’ve given here, and read our Vibe Coding book, which just came out on Oct 21st. It’s fresh and relevant, and will help you get into the right mindset with the right techniques and practices.

More to come, soon.

This stuff is so fun!

The EU production function

The central puzzle of the EU is its extraordinary productivity. Grand coalitions, like the government recently formed in Germany, typically produce paralysis. The EU’s governing coalition is even grander, spanning the center-right EPP, the Socialists, the Liberals, and often the Greens, yet between 2019 and 2024, the EU passed around 13,000 acts, about seven per day. The U.S. Congress, over the same period, produced roughly 3,500 pieces of legislation and 2,000 resolutions.1

Not only is the coalition broad, but encompasses huge national and regional diversity. In Brussels, the Parliament has 705 members from roughly 200 national parties. The Council represents 27 sovereign governments with conflicting interests. A law faces a double hurdle, where a qualified majority of member states and of members of parliament must support it. The system should produce gridlock, more still than the paralysis commonly associated with the American federal government. Yet it works fast and produces a lot, both good and bad. The reason lies in the incentives: every actor in the system is rewarded for producing legislation, and not for exercising their vetoes…

Formally, the EU is a multi-actor system with many veto points (Commission, Parliament, Council, national governments, etc.), which should require broad agreement and hence slow decision making. In practice, consensus is manufactured in advance rather than reached through deliberation.

By the time any proposal comes up for an official vote, most alternatives have been eliminated behind closed doors. A small team of rapporteurs agrees among themselves; the committee endorses their bargain; the plenary, in turn, ratifies the committee deal; and the Council Presidency, pressed for time, accepts the compromise (with both Council and Parliament influenced along the way by the Commission’s mediation and drafting). Each actor can thus claim a victory and no one’s incentive is to apply the brakes.

That is from an excellent piece by Luis Garicano.  What would Buchanan and Tullock say?


A Ukrainian mathematician requests mathematical assistance

an expert in general relativity or a mathematical physicist familiar with PPN methods, weak-field gravitational tests, and variational principles…

For the two technical appendices (ψ-preconditioning and χ-flattening), I would need:
• a quantum algorithms researcher (QSP/QSVT/QLSA/QAE) to assess the correctness of the operator transformations and the potential complexity gains;
• a quantum control or pulse-level compilation engineer (pulse-level, virtual-Z) to evaluate whether the phase-drift compensation algorithm can be implemented realistically on actual hardware.

Please email me if you think you might be of assistance.


Saturday 6 December 1662

Up and to the office, and there sat all the morning, Mr. Coventry and I alone, the rest being paying off of ships. Dined at home with my wife and Gosnell, my mind much pleased with her, and after dinner sat with them a good while, till my wife seemed to take notice of my being at home now more than at other times. I went to the office, and there I sat till late, doing of business, and at 9 o’clock walked to Mr. Rawlinson’s, thinking to meet my uncle Wight there, where he was, but a great deal of his wife’s kindred-women and I knew not whom (which Mr. Rawlinson did seem to me to take much notice of his being led by the nose by his wife), I went away to my office again, and doing my business there, I went home, and after a song by Gosnell we to bed.


Links 12/6/25

Links for you. Science:

Nowcasting epidemic trends using hospital- and community-based virologic test data
CDC to end all monkey research
‘A little bit of joy’: can tiny rafts save endangered sparrows from rising seas?
Unique videos show how trawling restrictions bring back life to the sea
Despite Trump chaos, NSF avoided feared dip in research financing
As Federal Government Retreats, A Private Fund to Save Sea Otters Steps in

Other:

Top MAGA Influencers Accidentally Unmasked as Foreign Trolls
“Embarrassing” and “Horrifying”: CDC Workers Describe the New Vaccines and Autism Page
These teens are trying to save go-go. Can the music save them, too?
Mamdani’s NYC Can’t Afford NYPD Commissioner Tisch. With ICE on its way, how can we expect an ICE collaborator to protect New Yorkers? Here’s a compromise: make Tisch sanitation commissioner again
When the G.O.P. Medicaid Cuts Arrive, These Hospitals Will Be Hit Hardest. Republicans created a special $50 billion fund to help rural hospitals stay afloat, but the biggest impacts may be in cities. (more here)
How DC Resists Through Protest Art. Posters, go-go bands, murals, and more: Washington has a long history of fighting the power—and the government—through creative and colorful dissent.
Senators Want Extremism Researchers to Surrender Documents Linked to Right-Wing Grudges
How to Fix a Typewriter and Your Life
Patel Under Scrutiny for Use of SWAT Teams to Protect His Girlfriend
Man Detained by ICE Found Dead, Hanging With Hands and Feet Tied—Attorney
Senator whose wife was shot fears for safety after Trump sedition accusation
Why Americans are giving up on Sweetgreen (because most Americans, even in cities, don’t really like salads?)
Millennials Are Stuck in an Old, Lazy Story
Man Who Trump Pardoned for Fraud Is Headed Back to Prison … for Fraud
Senate Democrats are investigating the Kennedy Center for ‘cronyism, corruption’
How Do Americans View Childhood Vaccines, Vaccine Research and Policy?
Kennedy Katch and Kill
Many Top MAGA Trolls Aren’t Even in the U.S. Elon Musk’s new X feature has been very revealing.
Why car insurance costs have soared (and what drivers are doing about it)
The case of a felon who paid lobbyists nearly $1 million to seek a Trump pardon
New York Gets Serious About Food Prices
Israelis are moving abroad in record numbers due to fear and discontent
How the Elite Behave When No One Is Watching: Inside the Epstein Emails
12 Enchanting Holiday Light Displays and Attractions Around the DC Area
Jimmy Cliff, Jamaican reggae singer, actor and cultural icon, dies aged 81
Plunder New England: The Louvre heist grabbed attention, but smaller museums are the more likely targets
The Math Shows Jackson Pollock Painted Like a Child Would
Unleashed dogs in Boston are a source of frustration for some people, and citations have risen
Border Patrol’s Charlotte sting reaches into country clubs, upscale shops
In the Gilded Age 2.0, the rich aren’t just different — they’re intolerable

Real Estate Newsletter Articles this Week:

At the Calculated Risk Real Estate Newsletter this week:


Inflation Adjusted House Prices 3.0% Below 2022 Peak

Q3 Update: Delinquencies, Foreclosures and REO

Final Look at Housing Markets in October and a Look Ahead to November Sales

Asking Rents Soft Year-over-year

This is usually published 4 to 6 times a week and provides more in-depth analysis of the housing market.

Saturday assorted links

1. JFV on capital theory.

2. A critique of wheelchair services in England.

3. The political culture that is Iran??

4. The Right to Compute.

5. Helen Perry on whether we are repaganizing.

6. Steve Cropper, RIP.

7. Twelve Frank Gehry projects (NYT).


Schedule for Week of December 7, 2025

Special Note: There is still uncertainty on when some economic reports will be released. The employment report for November will NOT be released this week.

This will be a light week for economic data.  The FOMC meets this week and is expected to cut rates by 25bp.

----- Monday, December 8th -----

No major economic releases scheduled.

----- Tuesday, December 9th -----

6:00 AM: NFIB Small Business Optimism Index for November.

10:00 AM: Job Openings and Labor Turnover Survey for October from the BLS.

This graph shows job openings (black line), hires (purple), Layoff, Discharges and other (red column), and Quits (light blue column) from the JOLTS.

Job openings increased in August to 7.23 million from 7.21 million in July.

The number of job openings (black) was down 6% year-over-year. Quits were down 3% year-over-year.

----- Wednesday, December 10th -----

7:00 AM ET: The Mortgage Bankers Association (MBA) will release the results for the mortgage purchase applications index.

2:00 PM: FOMC Meeting Announcement. The Fed is expected to cut rates 25bp at this meeting.

2:00 PM: FOMC Forecasts. This will include the Federal Open Market Committee (FOMC) participants' projections of the appropriate target federal funds rate, along with the quarterly economic projections.

2:30 PM: Fed Chair Jerome Powell holds a press briefing following the FOMC announcement.

----- Thursday, December 11th -----

8:30 AM: The initial weekly unemployment claims report will be released.  There were 191,000 initial claims last week.

8:30 AM: Trade Balance report for September from the Census Bureau.

This graph shows the U.S. trade deficit, with and without petroleum, through the most recent report. The blue line is the total deficit, the black line is the petroleum deficit, and the red line is the trade deficit ex-petroleum products.

The consensus is for the trade deficit to be $65.5 billion.  The U.S. trade deficit was at $59.6 billion in August.

10:00 AM: The Q3 2025 Housing Vacancies and Homeownership report from the Census Bureau.

10:00 AM: State Employment and Unemployment (Monthly) for September 2025

----- Friday, December 12th -----

No major economic releases scheduled.

Reading List 12/06/2025

World’s largest ring forging, via Chinese Academy of Sciences.

Welcome to the reading list, a weekly roundup of news and links related to buildings, infrastructure and industrial technology. This week we look at 3D printed legos, exploding wire detonators, the David Taylor model basin, multi-point metal forming, and more. Roughly 2/3rds of the reading list is paywalled, so for full access become a paid subscriber.

No essay this week, but I’m working on a more involved piece about international construction productivity that should be out next week.

A320 software upgrades

For the past several years most potential safety issues with commercial aircraft seem to have been with Boeing planes. But here’s one with the Airbus A320 family of aircraft, the most popular commercial aircraft in the world. Apparently a bug in a recent version of the elevator aileron computer (ELAC) software can cause issues if the data is corrupted by intense solar radiation. A recent JetBlue flight had an “unexpected pitch down” (sudden drop of altitude) because intense radiation corrupted flight control data. Via Aviation Week:

The issue affects around 60% of the global A320 fleet, including first generation A320s and the newer A320neo variants as well as A319s and A321s in each case. Most ELACs can be fixed by reverting to a previous version of recently-updated software, Airbus said.

But about 1,000 of the oldest affected aircraft need a hardware change to accept the new software. These airframes will need the old hardware re-installed--a process that will take longer.

Airbus identified the issue during its probe into an Oct. 30 incident involving a JetBlue A320. The aircraft, en route to Newark from Cancun, suddenly lost altitude while in cruise.

An A320 “recently experienced an uncommanded and limited pitch down event,” EASA’s EAD said, without identifying the specific flight. “The autopilot remained engaged throughout the event, with a brief and limited loss of altitude, and the rest of the flight was uneventful. Preliminary technical assessment done by Airbus identified a malfunction of the affected ELAC as a possible contributing factor.”

The aircraft had the newest ELAC software installed. Airbus determined reverting to the previous version eliminates the risk.

Average homebuyer age

An oft-cited example of the increasing difficulty of affording a house is the steadily rising age of first-time homebuyers. Young people with lower incomes, the story goes, are being squeezed out of the housing market, driving the average age of first-time buyers up. Here’s a characteristic story earlier this year from the New York Times:

The path to homeownership continues to get longer, with the median age of first-time home buyers hitting an all-time high of 40 in 2025, according to a report from the National Association of Realtors.

“It’s kind of a shocking number,” said Jessica Lautz, deputy chief economist and vice president of research at N.A.R. “And it’s really been in recent years that we’ve seen this steep climb.”

In 1991, the typical first-time buyer was able to purchase a home by the time they were 28 years old. That number gradually climbed to 33 in 2020, then shot up to 36 in 2022 and 38 in 2024.

There are clear reasons behind the trend. Younger Americans are struggling to save for a down payment as they stretch their paychecks to cover student loans, a rising cost of living and, most critically, high rents, which make saving money harder. And even if they have saved diligently, a persistent lack of affordable housing inventory has left them shut out of the market.

However, it’s possible this is an artifact of how the data is collected (via mail-in surveys, which younger people may be less inclined to fill out). Homebuyer age data from the Federal Reserve indicates that, rather than steadily rising, average first time buyer age is declining over time. From the American Enterprise Institute:

NAR’s statistics are based on their annual survey of homebuyers and sellers. For the 2025 report, covering from July 2024 to June 2025, 189,750 surveys were mailed to a “representative sample” of buyers and sellers. However, only 6,103 completed surveys were received, indicating a response rate of just 3.5 percent, with only 21 percent, or 1,281, being FTBs.

The CCP, by contrast, is based on a 5 percent random sample of all credit reports, which reports provide both borrower age and home buying history. The CCP data, for the same period as the NAR, found the average and median FTB was 36.2 and 33 years old, both well under the NAR’s age of 40.

Digging deeper into the NAR and CCP results yields helpful distributions by age bins. While both have nearly identical shares for age 35-44, the NAR’s under age 35 groups are underrepresented by 17 percentage points and the aged 45 to 74 buyers are overrepresented by 18 percentage points respectively, compared to the CCP. The NAR bias to a higher age is perhaps not surprising given that it is a mail survey with 120 questions, which does not lend itself to a high response rate by Millennials and GenZ-ers. The CCP data appears to offer a better historical view of the current position of FTBs (see graphic below). To reiterate its findings, FTB average and median age stood at 36.3 and 33 years for the period Q3:24-Q2:25, and there has been minimal FTB average age change since either 2001 or 2021.

3D printed legos

As I’ve noted previously, I’m interested in the progress of 3D printing technology, and how it might be extended to broader types of production — new types of materials, higher precision, printing complex mechanisms, lower unit costs making it more competitive for high-volumes, and so on. In this vein, Modern Engineering Marvels has an interesting story about Lego working for nine years to be able to 3D print legos for mass-produced sets:

The milestone capped a nine-year development program to develop a high-throughput polymer additive manufacturing platform able to reach consumer-level production volumes. Head of Additive Design and Manufacturing Ronen Hadar framed the accomplishment as LEGO’s equivalent of adopting injection moulding in the 1940s. The team’s aspiration wasn’t to replace moulding but to add to the design toolset – to make 3D printed parts “boringly normal” in future sets.

The production system makes use of EOS polymer powder bed fusion technology in the form of an EOS P 500 platform with Fine Detail Resolution. FDR uses an ultra-fine CO₂ laser that enables highly detailed features in nylon-based materials. The LEGO Group chose the process for its combination of dimensional accuracy, mechanical strength, and surface quality, all vital for parts to mesh properly with billions of bricks already in existence. Already, the company has doubled the speed of output from its machines and is looking for even more efficiency gains.

…From an engineering standpoint, this leap from prototype to mass production required the invention of new workflows. Unlike the decades-honed process control of injection molding, additive manufacturing had to come up with fresh answers for color matching and dimensional consistency, integrating current LEGO quality systems.

Filings coherer

Often the initial version of some particular technology is implemented in a way that doesn’t necessarily work the best or most efficiently, but is simply the easiest to get working. The gun-type bomb was chosen for the first atomic weapon because it was the most straightforward to build, but subsequent bombs used the more-complicated but more-efficient implosion mechanism. The first point-contact transistors were similarly eventually replaced with superior bipolar junction transistors.

Here’s another interesting example of one of these temporary technologies, the filings coherer, which was used to detect signals in the first radios. It consists of a glass tube filled with metal filings, connected to wires on either side. Initially, the metal filings have high resistance, limiting the flow of electric current. However, an electromagnetic disturbance — which can be induced by a passing electromagnetic wave — will cause the filings to “cohere”, reducing the resistance and allowing for greater electricity flow. Via Wikipedia:

When a radio frequency signal is applied to the device, the metal particles would cling together or “cohere”, reducing the initial high resistance of the device, thereby allowing a much greater direct current to flow through it. In a receiver, the current would activate a bell, or a Morse paper tape recorder to make a record of the received signal. The metal filings in the coherer remained conductive after the signal (pulse) ended so that the coherer had to be “decohered” by tapping it with a clapper actuated by an electromagnet, each time a signal was received, thereby restoring the coherer to its original state. Coherers remained in widespread use until about 1907, when they were replaced by more sensitive electrolytic and crystal detectors.

ElectroBOOM on Youtube has a good video where he looks at this “coherence” effect. And IEEE Spectrum has a good paper about the history of it — the mechanism behind it seems to have remained poorly understood until well into the 21st century.


Binding early decision in college admissions: "Go early, or go somewhere else"

 There was a time when only football coaches and presidents had news-making salaries at colleges and universities.  Now top admissions officers--i.e. sales managers--are the subject of this NYT story:

Meet the Millionaire Masters of Early Decision at Colleges
The enrollment chiefs at Tulane and the University of Chicago attracted many early applicants. Now both of them earn a lot of money. 
By Ron Lieber

"The University of Chicago was where fun went to die. Tulane University was where you could die from too much fun.

"Neither place liked its reputation, but in 2016, both felt confident enough in changes on their campuses that they started offering an early decision option for student applicants. Apply by November (or January for the “Early Decision II” option) and get an answer weeks later. You just had to agree to attend if you got in.

"Within a handful of years, two-thirds of Tulane’s first-year class had taken the deal. The University of Chicago found so much success that it recently added an opportunity to apply even earlier, in some cases before the senior year of high school has even begun.


"The enrollment chiefs who made this all happen also found success.

"According to federal filings from 2023, Chicago’s vice president for enrollment and student advancement, James G. Nondorf, received $967,000 over a year from the university and “related” organizations. At Northeastern University, the executive vice chancellor and chief enrollment officer, Satyajit Dattagupta, got $1.079 million in compensation after decamping in 2022 from Tulane, where he had a strong run in a similar role."

...

"James Murphy, who works with Class Action, an advocacy organization, recently ranked schools on this early decision advantage — the difference in admissions rates between early decision and the “regular” round, when applicants get an answer later. Northeastern ranked first, with an early decision advantage that was over 11 times as large. Tulane was second, and its figure was over five times. "

What Tom Whitwell learned in 2025

52 things, here is one of them:

Most characters in the film Idiocracy wear Crocs because the film’s wardrobe director thought they were too horrible-looking to ever become popular. [Alex Kasprak]

Here is the full list.


One Last Note on Tiimo: What’s the Deal With That Icon?

One small update I just appended to my piece Friday taking a look at the winning apps from this year’s App Store Awards:

Lastly, I have questions — some really hard questions — regarding Tiimo’s app icon. Such as, “What is that?”

Perhaps it got picked because it makes Apple’s new OS 26 icons look good by comparison?


Dithering: ‘Alan Dye Leaves Apple’

The December 2025 cover art for Dithering, showing a man dressed as Santa Claus getting a kiss on the cheek under some mistletoe.

Dithering is my and Ben Thompson’s twice-a-week podcast — 15 minutes per episode, not a minute less, not a minute more. It’s a $7/month or $70/year subscription, and included in the Stratechery Plus bundle (a bargain). This year our CMS (Passport — check it out) gained a feature that lets us make some episodes free for everyone to listen to on the website. Today’s episode, regarding Alan Dye leaving Apple for Meta, seems like a good one to do that with. (And, once again, this month’s album art serendipitously captures my mood.)

Give it a listen. Subscribe if you enjoy it.


Apple’s Succession Intrigue Isn’t Strange at All

Aaron Tilley and Wayne Ma, in a piece headlined “Why Silicon Valley is Buzzing About Apple CEO Succession” at the paywalled-up-the-wazoo The Information:

Prediction site Polymarket places Ternus’ odds of getting the job at nearly 55%, ahead of other current Apple executives such as software head Craig Federighi, Chief Operating Officer Sabih Khan and marketing head Greg Joswiak. But some people close to Apple don’t believe Ternus is ready to take on such a high-profile role, and that could make a succession announcement unlikely anytime soon, said people familiar with the company.

Nothing in the rest of the article backs up that “some people close to Apple don’t believe Ternus is ready” claim, other than this, several paragraphs later:

And while his fans believe Ternus has the temperament to be CEO, many of them say he isn’t a charismatic leader in the mold of a Jobs. He has also had little involvement in the geopolitical and government affairs issues that dominate most of Cook’s time these days. On a recent trip to China, for example, Apple’s new COO, Sabih Khan, accompanied Cook to some of his meetings.

No one else in the history of the industry, let alone the company, has the charisma of Steve Jobs. And while I think Polymarket has the shortlist of candidates right, I also think they have them listed in the right order. Sabih Khan probably should be considered an outside-chance maybe, but the fact that he accompanied Cook to China doesn’t make me think, for a second, that it’s in preparation to name him CEO. If Khan were being groomed to become CEO, he’d have started appearing in keynotes already. It’s silly to slag Ternus for not having the charisma of Steve Jobs, when Ternus has been a strong presence in keynotes since 2018, and in the same paragraph suggest Khan as a better option, when Khan has never once appeared in a keynote or public appearance representing Apple.

Some former Apple executives hope a dark-horse candidate emerges. For example, Tony Fadell, a former Apple hardware executive who coinvented [sic] the iPod, has told associates recently that he would be open to replacing Cook as CEO, according to people who have heard his remarks. (Other people close to Apple consider Fadell an unlikely candidate, in part because he was a polarizing figure when he worked at the company. Fadell left Apple in 2010.)

The parenthetical undersells the unlikelihood of Fadell returning to Apple, ever, in any role, let alone the borderline insanity of suggesting he’d come back as Cook’s successor.

It has become one of the strangest succession spectacles in tech. Typically, the kind of buzz that is swirling around Cook occurs when companies are performing badly or a CEO has dropped hints that they’re getting ready to hang up their spurs. Neither applies in Cook’s case, though.

There’s nothing strange about it. Apple has a unique company culture, but so too do its peers, like Microsoft, Amazon, and Google. And just like at those companies, it’s therefore a certainty that Cook’s replacement will come from within the company’s current ranks. Polymarket doesn’t even list anyone other than Ternus, Federighi, Joswiak, and Khan.

As for hints, there is not much need for any hint beyond the fact that Cook is now 65 years old and has been in the job since 2011. But the high-profile multi-source leak to the Financial Times is a pretty obvious fucking additional hint.


Two things that really matter

When analyzing the macro situations of countries or regions, I place more stress than many people do on the following two factors:

1. Human capital: How much active, ambitious talent is there?  And how high are the averages and medians?

2. Matching market demands: Are you geared up to produce what the market really wants, export markets or otherwise?

Those may sound trivial, but in relative terms they remain undervalued.  They are, for instance, the biggest reasons why I do not buy “the housing theory of everything.”

They are also, in my view, the biggest reasons why the UK currently is in economic trouble.  Both #1 (brain drain) and #2 have taken a hit in recent times.  The UK continues to deindustrialize, business consulting is not the future, and London as a financial centre was hurt by 2008, Brexit, and superior innovations elsewhere.  More and more smart Brits are leaving for the US or Dubai.

You also will notice that #1 and #2, when they are in trouble, are not always easily fixed.  That is why reforms, while often a good idea, are by no means an easy or automatic way out of trouble.

These two factors also are consistent with the stylized fact that growth rates from the previous decade are not so predictive of growth rates for the next decades.  Human capital often drives levels more than growth rates.  And matching market demands often has to do with luck, or with shifting patterns of demand that the supplying country simply cannot match.  Once people abandon Toyotas for Chinese electric cars, Japan does not have an easy pivot to make up the loss.

Most other theories of growth rates, for instance those that assign a predominant weight to institutions, predict much more serial correlation of growth rates than we find in the data.  That said, institutions do indeed matter, and in addition to their usual effects they will shape both #1 and #2 over the longer run.

Overall, I believe conclusions would be less pat and economic understandings would be more effective if people paid greater attention to these factors #1 and #2.  Not putting enough weight on #1 and #2 is one of the biggest mistakes I see smart people — and indeed very smart people — making.

Addendum: You will note the contributions of Fischer Black here.  Apart from his contributions to options pricing theory, which are widely known, he remains one of the most underrated modern economists.


WorkOS Radar

My thanks to WorkOS for sponsoring last week at DF. Does your app get fake signups, throwaway emails, or users abusing your free tier? Or worse, bot attacks and brute force attempts?

WorkOS Radar can block all this and more. A simple API gives you advanced device fingerprinting that can detect bad actors, bots, and suspicious behavior. Your users trust you. WorkOS Radar lets you keep it that way.


Many wonders are visible when flying over the Earth at night.


Strong Atmospheric River Bringing Heavy Rain to the Pacific Northwest; Areas of Snow in the North-Central and Eastern U.S.