Tom Stoppard (1937–2025)

Tom Stoppard, the great English playwright, passed away last week. I saw many of his plays, including his last one, about his apparently late-in-life discovery that he was Jewish, and that his immediate family had fled Czechoslovakia ahead of the Nazis while most of the rest of his family perished.

The play tells the story of three generations of assimilated Jews. You, the audience, of course know how it will end, but they don't, and they are optimistic that their current troubles will soon pass. It's an eerie feeling to watch that play amidst the world's current uncertainties.

The NYT tells his story through that final play.

When Tom Stoppard Confronted His Background in His Final Play
The playwright, who learned about his Jewish heritage late in life, addressed it in the Tony Award-winning drama “Leopoldstadt.”
   By Marc Tracy

"Stoppard’s final play, too, contained characters whose fates were tragically preordained. The rest is silence." 

Will West Coast Jazz Finally Get Some Respect?

Sandra Evans is a woman on a mission. She wants to make the definitive documentary about West Coast jazz.

Like me, she loves those classic recordings by Chet Baker, Dave Brubeck, Art Pepper, Hampton Hawes, and others. But most jazz films ignore this entire movement—pretending that the history of the genre only took place in New Orleans, Chicago, and New York.

That’s simply not true.

The West Coast players deserve their place in our cultural history. Their music should be heard. Their story ought to be told—and it’s a fascinating story.

I’ve been helping her as best I can. You can see me in this new trailer about her project.

As a young man, I was also on a mission to celebrate the legacy of West Coast jazz. I convinced Oxford University Press to publish a book on the subject—and this turned into my single biggest project when I was in my twenties.

I met many of the jazz elders who helped create West Coast jazz—and saw how they had been unfairly forgotten. Many were living in poverty. Some were playing music on the streets.

It was sad to see.


Please support my work—by taking out a premium subscription (just $6 per month).

I took this personally, as Michael Jordan might say. I grew up in Los Angeles and later moved to the San Francisco area. This was my own homegrown jazz tradition. I loved it and wanted to share my enthusiasm with others.

But even superstars such as Dave Brubeck and Chet Baker were frequently attacked back then. And someone like Vince Guaraldi was simply ignored—jazz history books pretended he didn’t even exist.

They weren’t real jazz musicians. That’s what I kept hearing.

If you didn’t live through this era of jazz policing on steroids, you can’t even begin to imagine the level of hostility—which was amplified tenfold if a West Coast jazz player dared to have a hit record.

I could cite dozens of other examples of musicians who would have won prizes if they moved to New York. But if they stayed out West, they got little or no respect.

That was how it worked back then.

My West Coast Jazz book was the single biggest project from my early years as a writer.

I got punished too. My grant requests for financial help on my West Coast jazz project were turned down. I wanted to do full oral histories of the leading players on the scene, but nobody wanted to fund it.

Sandra Evans is now fighting the same battle. She is working tirelessly on raising the money she needs to complete her project. If you can help out, please do so. (You can learn more here). Or if you can’t donate, please spread the word—by sharing the video or fundraising link online.


My 2011 Review of Contagion

I happened to come across my 2011 review of the Steven Soderbergh movie Contagion, and was surprised at how much I was thinking about pandemics prior to COVID. In the review, I was too optimistic about the CDC but got the sequencing gains right. I continue to like the conclusion even if it is a bit too clever by half. Here's the review (no indent):

Contagion, the Steven Soderbergh film about a lethal virus that goes pandemic, succeeds well as a movie and very well as a warning. The movie is particularly good at explaining the science of contagion: how a virus can spread from hand to cup to lip, from Kowloon to Minneapolis to Calcutta, within a matter of days.

One of the few silver linings from the 9/11 and anthrax attacks is that we have invested some $50 billion in preparing for bio-terrorism. The headline project, Project Bioshield, was supposed to produce vaccines and treatments for anthrax, botulinum toxin, Ebola, and plague, but that has not gone well. An unintended consequence of greater fear of bio-terrorism, however, has been a significant improvement in our ability to deal with natural attacks. In Contagion a U.S. general asks Dr. Ellis Cheever (Laurence Fishburne) of the CDC whether they could be looking at a weaponized agent. Cheever responds:

Someone doesn't have to weaponize the bird flu. The birds are doing that.

That is exactly right. Fortunately, under the umbrella of bio-terrorism, we have invested in the public health system: we have built more bio-safety level 3 and 4 laboratories, including the latest BSL-3 at George Mason University; we have expanded the CDC and built up epidemic centers at the WHO and elsewhere; and we have improved some local public health centers. Most importantly, a network of experts at the Department of Defense, the CDC, universities, and private firms has been created. All of this has increased the speed at which we can respond to a natural or unnatural pandemic.

Avian flu virus, from 3DScience.com.

In 2009, as H1N1 was spreading rapidly, the Pentagon’s Defense Threat Reduction Agency asked Professor Ian Lipkin, the director of the Center for Infection and Immunity at Columbia University’s Mailman School of Public Health, to sequence the virus. Working non-stop and updating other geneticists hourly, Lipkin and his team were able to sequence the virus in 31 hours. (Professor Ian Sussman, played in the movie by Elliott Gould, is based on Lipkin.) As the movie explains, however, sequencing a virus is only the first step to developing a drug or vaccine and the latter steps are more difficult and more filled with paperwork and delay. In the case of H1N1 it took months to even get going on animal studies, in part because of the massive amount of paperwork that is required to work on animals. (Contagion also hints at the problems of bureaucracy which are notably solved in the movie by bravely ignoring the law.)

It's common to hear today that the dangers of avian flu were exaggerated. I think that is a mistake. Keep in mind that H1N1 infected 15 to 30 percent of the U.S. population (including one of my sons). Fortunately, the death rate for H1N1 was much lower than feared. In contrast, H5N1 has killed more than half the people who have contracted it. Fortunately, the transmission rate for H5N1 was much lower than feared. In other words, we have been lucky, not virtuous.

We are not wired to rationally prepare for small probability events, even when such events can be devastating on a world-wide scale. Contagion reminds us, visually and emotionally, that the most dangerous bird may be the black swan.

The post My 2011 Review of Contagion appeared first on Marginal REVOLUTION.

       


Europe is under siege

The map above is a depiction of The Deluge, a historical event in which the Polish-Lithuanian Commonwealth — which had been a major European power — was defeated and destroyed under the combined assaults of Russia and Sweden in the 1600s. After having its power broken, Poland was carved up in the 1700s and subjugated by Russia, Prussia, and Austria. It took more than two centuries, until the fall of communism in 1991, for Poland to reemerge as a strong, truly independent country.

The Deluge shows that power and independence are not permanent. If you are surrounded by hostile powers, and if you don’t have the ability to guard yourself against those powers, no amount of historical greatness can save you from being subjugated. This is an important lesson for Europeans to remember right now, as they find their region under siege from Russia, China, and the United States all at once.

The United States no longer cares about the European project

Why would America care about Europe at all? For most of our history, we didn’t. In the 19th century, the U.S. viewed European countries as dangerous rivals. In the early 20th century, Americans prided themselves on not getting involved in European affairs, and were incensed at their government for dragging them into World War 1. Only after World War 2 did Americans start caring about Europe, and we did so for three reasons:

  1. West Europe was a bulwark against Soviet communism.

  2. Europe was a key trading partner.

  3. Many Americans came to value their ancestral ties to Europe.

The first of these reasons vanished in 1991. Europe is still a bulwark against Russia, but Americans no longer feel threatened by Russia. Russian power is far less than what it once was, and Russia’s rightist ideology does not threaten the rightists who now rule America.

As for communism, many (most?) Americans now believe that European countries are socialist. When American conservatives ask where in the world socialism has succeeded, American progressives will always reply “Europe” or “Scandinavia”. Whether Europe or Scandinavia is actually socialist is irrelevant; Americans have come to see it that way.

Europe is still an important trading partner. But Trump and the other people now in charge of the U.S. do not understand trade at all. They think about trade entirely in terms of the net trade balance, rather than in terms of total U.S. exports. Trump & co. don’t care that America sells $650 billion a year to Europe; the fact that Europe sells $800 billion a year to America means that Trump & co. think America is “losing” and would benefit from a cutoff of trade.

Remember that the U.S. is an unusually closed-off, self-sufficient economy, so Americans in general don't think too hard about trade or try to understand why it's valuable. Also, the people now running the country are especially ignorant about economic matters.

As for civilizational ties, this is the reason Trump and the MAGA movement have turned so strongly against Europe. The American right values Europe because they think of it as a White Christian homeland — the source and font of Western civilization. Here’s a post I wrote about that earlier this year:

I wrote:

in the American mind, Europe stood across the sea as a place of timeless homogeneity, where the native white population had always been and would always remain…In the mind of many Americans, Europe thus stood as both a refuge and a reservoir. America itself was a rough, contested frontier, but Europe would always be white and Christian. If you ever felt the need to live around a bunch of white people of Christian heritage, you could always go “back”, but for most that wasn’t necessary — just knowing that the Old World was somewhere out there was enough.

I think Europeans may underestimate how much this perception motivated America’s participation in the Transatlantic Alliance during the Cold War…[T]o conservative Americans in the 20th century — the type of people who joined the John Birch Society — the Cold War was about preserving Christendom from the threat of godless communism.

Anyway, in the 2010s, it dawned on those Americans that this hallowed image of Europe was no longer accurate. With their working population dwindling, European countries took in millions of Muslim refugees and other immigrants from the Middle East and Central and South Asia — many of whom didn’t assimilate nearly as well as their peers in the U.S. You’d hear people say things like “Paris isn’t Paris anymore.”…At the same time, Europe had long since abandoned its traditional Christian values…

To Americans who valued the idea of America and Europe as part of a single Western civilization, this realization was catastrophic. Suddenly European countries — and the Anglosphere countries of Canada, Australia, and New Zealand — felt like they had left the club…

America’s rightists…want to know that someone, somewhere, is out there preserving an indigenous homeland for their identity groups. And that “someone” has to be Europe and the Anglosphere.

This isn’t a new attitude, either. Remember that in order to persuade a reluctant America to join World War 1, the U.S. government had to depict Germany as an ape abducting a white woman!

If you understand this, then nothing in America’s new National Security Strategy is mysterious, surprising, or confusing. Here’s how War on the Rocks summarizes the Trump administration’s attitude toward Europe:

[I]mmigration is elevated to the central national security problem. The text declares, bluntly, that “the era of mass migration must end,” and that “border security is the primary element of national security.” It frames mass migration as a driver of crime, social breakdown, and economic distortion, and calls for a world where sovereign states cooperate to “stop rather than facilitate destabilizing population flows” and tightly control whom they admit…

[P]rotecting American culture, “spiritual health,” and “traditional families” are framed as core national security requirements…The document insists that “restoration and reinvigoration of American spiritual and cultural health” are prerequisites for long-term security and links this to an America that “cherishes its past glories and its heroes” and is sustained by “growing numbers of strong, traditional families” raising “healthy children.” America is thus cast as defender of so-called traditional values, while Europe lacks “civilizational self-confidence and Western identity.”…

[T]he strategy elevates the culture wars into a governing logic for national security, and it does so through rhetoric that treats ideological and cultural disputes as matters of strategic consequence…This is clearest in the European section…The text…speculates about demographic and cultural shifts in Europe as a way to question whether future governments will share American views of their alliances. The strategy [implies] that cultural alignment is essential to strategic partnership.

The American right sees the “mad brute” in the ape cartoon as the dark-skinned Muslim immigrants who have entered Europe in large numbers in recent years. And they see themselves as needing to save the woman — representing their view of Europe as the traditional font of White Christian civilization — from that mad brute.

This tweet by Elon Musk pretty much sums up the American right’s attitude toward Europe:

This is why no amount of European shaming or moral persuasion can have any effect on the Trump administration — or on any Republican administration in the decades to come. This kind of appeal to friendship is totally useless:

And this kind of bitter, angry hectoring is worse than useless:

The American right — i.e., the people now in charge of the country — do not care intrinsically about democracy, or about allyship, or about NATO, or about the European project. They care about “Western Civilization”. Unless Europe expels Muslim immigrants en masse and starts talking about its Christian heritage, the Republican Party is unlikely to lift a hand to help Europe with any of its problems. Democrats will want to help Europe, but they will only be in power intermittently, and helping Europe will not be high on their priority list.1

Thus, America is not riding to the rescue this time, or for the foreseeable future. I wish things were different, but my wishes count for nothing; this is the reality with which the Europeans must now deal.

Russia and China together are the real menace to Europe

Europeans do not need me to tell them that Putin’s Russia threatens not just Ukraine, but all of Europe. They are well aware of this fact. Russia now regularly flies its drones into Europe, and is probably behind a wave of sabotage attacks on European infrastructure.

How can Russia, a country of just 144 million people and $7 trillion in GDP (PPP), hope to overcome Europe, which has 520 million people and $33 trillion in GDP (including the UK), especially after Russia has already expended so many of its young men and so much materiel in its war with Ukraine? There are three answers here. The first is gray-zone warfare, including sabotage and political influence campaigns. But that's only the beginning.

Russia’s second method for fighting Europe is what I call a “Ponzi empire” strategy. Russia has enslaved vast numbers of Ukrainians from the occupied regions of Ukraine to fight against the rest of their country. If Russia conquers the rest of Ukraine, it will similarly enslave the rest of the country’s population, and send them to fight against Poland, the Baltics, and Moldova. If they then defeat Poland, they will enslave the Poles and send them to fight against the next European target, and so on.

This is a very traditional Russian strategy. Enslaved Ukrainians were used to attack Poland in 1939. Enslaved Poles were forced to fight Russia's wars in the days of the old Tsarist empire, and would have been forced to do so again as part of the Warsaw Pact. Just as zombies turn humans against their own, each slice of Europe that Russia can chop off ends up being turned against the rest.2

Russia's final strategy for fighting Europe is to rely on Chinese assistance. Russia's own industrial base is very weak and has relied heavily on imported European parts and machinery, which have now been partially cut off. But Chinese tech has largely plugged that hole, as the Carnegie Endowment reports:

Since mid-2025, Chinese components have been detected in Russian drones and missiles, often shipped via front companies disguised as suppliers of industrial cooling equipment…Chinese machinery, including precision optics, lasers, and dual-use machine tools, now dominates Russia’s defense-related manufacturing. In August 2025 alone, China exported a record 328,000 miles of fiber-optic cable and nearly $50 million worth of lithium-ion batteries to Russia, reinforcing its role as the Kremlin’s primary wartime supplier of dual-use materials. Chinese engineers working at Russian drone facilities are adapting civilian quadcopters, such as the Autel Max 4T, for combat use.

China is a far bigger manufacturer than Europe, and can pour essentially infinite war production into Russia if it wants to. And China is now assisting Russia’s gray-zone warfare against Europe:

Since 2024, Chinese ships have been involved in incidents of targeting subsea infrastructure, particularly cutting subsea cables in the Baltic Sea…The country increasingly deploys ambitious espionage and cyber attacks against government networks and critical infrastructure across Europe. These attacks seem to overlap with—or even be actively coordinated with—Russia’s espionage and influence operations across Europe…Increasingly, Russia and China also cooperate in disinformation operations: Chinese campaigns such as “Spamouflage” are amplified by Russian media outlets and diplomatic channels. Both countries employ what look to be synchronized narratives accusing the West of being responsible for the war in Ukraine.

China even provides the Russians with battlefield intelligence, helping them strike and destroy Ukrainian targets in real time. In sum, China is supporting Russia’s war against Ukraine, and will likely support Russia in any further wars it undertakes against the rest of Europe.

With Chinese technology and production, slave soldiers from Eastern Europe, and America withdrawing from the Transatlantic Alliance, Russia could conceivably overmatch Europe.

But that’s not the only threat that China poses. On the economic front, China’s new economic strategy — a combination of shutting out European products, sending out a massive wave of subsidized exports, and putting export controls on rare earths — threatens to forcibly deindustrialize Europe. Here’s what The Economist, normally a staunch defender of free trade, recently wrote:

China is not just dumping exports and subsidising its companies, it is also out-competing and out-innovating big European industries, including carmaking. Last year Germany’s trade deficit with China stood at €66bn ($76bn); this year it could widen to over €85bn, around 2% of GDP. Alarmingly, China is exploiting Europe’s dependence, weaponising embargoes or the threat of them in chips and rare earths.

Germany, traditionally Europe’s strongest manufacturing and exporting nation, is already the hardest hit:

China, many European manufacturers have concluded, is threatening to put them out of business, by both fair means and foul…The wails are loudest in Germany, which is Europe’s biggest exporter to China and its biggest investor in it by far…For the Mittelstand, the small manufacturers that constitute a big slice of German industry, China used to be a source not of angst but of profit. Their precision-engineered machine tools were an exquisite fit for its rapid industrialisation. Chinese consumers raced to buy German cars…

Times have changed…Once-stellar growth inside China has, for many foreign firms, slowed to a crawl as competition with local rivals intensifies. In addition, Germany’s previously small trade deficit with China has ballooned…Last year it reached €66bn ($76bn), or around 1.5% of GDP, driven by a collapse in German exports to China and a rush of imports, notably of cars, chemicals and machinery—hitherto German specialities.

Germany’s trade deficit with China this year is expected to surge again, to around €87bn…German cars command only 17% of the Chinese market, down from a peak of 27% in 2020…Worse, Chinese competition also jeopardises sales in other markets. China’s net exports of cars have risen from zero in 2020 to 5m units last year. Germany’s have halved over the same period, to 1.2m units…Such figures have triggered fears in Germany of a wave of deindustrialisation.

The Financial Times has a good article about this as well, and Brad Setser has a good writeup of that article.

This is all on top of the existing headwinds facing European manufacturing — the energy crisis from the cutoff of Russian gas and self-inflicted “green” policies, Trump’s tariffs, and so on.

So Europe finds itself in an extraordinarily perilous position right now. Its main protector has suddenly withdrawn. It has a ravenous, brutal empire attacking its borders, supported by the world's most powerful nation. Its main export markets are shriveling, and its manufacturing industries are under dire threat from waves of subsidized foreign competition. What can it do to fight back?

How Europe can resist the siege

The most important thing Europeans need to do is panic. Europe is facing its own Deluge — a sudden pincer movement by hostile great powers that threatens to reduce it to a collection of small vassal states. This is a true crisis, and it will not be solved by social media rhetoric or by brave declarations by EU leaders. It cannot be regulated away by eurocrats in Brussels. It will require bold policies that change Europe's economic, political, and social models. Only a strong sense of urgency and purpose can motivate Europe to do what needs to be done.

What needs to be done? One important step is for Europe to act more as a single whole than as a collection of small countries. In the military realm, this means coordinating European militaries and defense industries much more. Matthew C. Klein writes:

From a properly European perspective, the security interests of each country should be shared across all countries, just as, for example, most Americans in Michigan or Maine would view an attack on California or Florida as an attack on them…The first step is to give the Ukrainians, who are already fighting the Russians, as much material and financial support as they need. From the perspective of European security, French, German, and British weapons are far more valuable in Ukraine than in their home countries. If the Ukrainians were subjugated, defending the rest of Europe would become much harder, with the effective EU-Russia border lengthening dramatically…

Europe’s national militaries have had a tendency to favor their home country’s producers, with the result that the continent is filled with subscale defense companies that are often slow and unproductive. Common defense procurement for a continental army should lead to higher output and lower costs—a few large companies handling large orders should have better unit economics than hundreds of artisanal manufacturers—but it would require Europe’s national defense elites to change their perspective. Philipp Hildebrand, Hélène Rey, and Moritz Schularick recently published a useful proposal for how to make this work.

And economically, Europeans can partially compensate for the loss of Chinese (and American) export markets by selling more to each other. The Economist writes:

A second task is for European countries to make better use of the power they have, by integrating their economies…By failing to integrate, the EU is leaving a vast sum of money on the table. A single market that was designed for goods is failing to help economies dominated by services.

And in his famous report on European competitiveness, Mario Draghi wrote:

We have also left our Single Market fragmented for decades, which has a cascading effect on our competitiveness. It drives high-growth companies overseas, in turn reducing the pool of projects to be financed and hindering the development of Europe’s capital markets…The EU’s new industrial strategy rests on a series of building blocks, the first of which is full implementation of the Single Market. The Single Market is critical for all aspects of the strategy: for enabling scale for young, innovative companies and large industrials that compete on global markets; for creating a deep and diversified common energy market, an integrated multimodal transport market and strong demand for decarbonisation solutions; for negotiating preferential trade deals and building more resilient supply chains; for mobilising greater volumes of private finance; and as a result, for unlocking higher domestic demand and investment. Remaining trade frictions in the EU mean that Europe is leaving around 10% of potential GDP on the table, according to one estimate.

And ideally, Europe should form a fiscal union — the EU itself should be able to borrow and spend, not just the member countries. As Klein writes, this needs to be accompanied by a greater tolerance for fiscal deficits — after all, countries borrow in emergencies.

In other words, Europe’s first step in resisting its siege is to act more like a country and less like a zone. It would also help to find some way to bring the UK back into the fold, especially because polls consistently find that British people regret Brexit.

Europe’s other top priority is to provide for the common defense. That means spending more money on the military, of course, and it also means greatly increasing the size of Europe’s nuclear deterrent. But it also means building a defense industrial base capable of resisting a China-backed Russia.

Europe’s current defense-industrial base was built for the Cold War, when battles were decided by heavy vehicles like tanks and ships and planes. Those are still somewhat important, but drones have risen very quickly to dominate the modern battlefield. Right now, drone manufacturing, as well as almost the entire supply chain for battery-powered drones, is overwhelmingly concentrated in China.

Europe needs to be able to build not just drones, but every single thing that goes into making a drone — batteries, motors, various types of computer chips, and so on. European industrial policy should therefore focus on onshoring these industries. In other words, Europe needs to master the entire Electric Tech Stack. (This will also help Europe get back in the EV race.) And it needs to master the AI software — computer vision, swarming tech, and so on — that will soon be needed in order to make drones a truly modern force.

The question of the proper policy instrument to accomplish this goal — tariffs, subsidies, fiscal borrowing, regulatory changes, and so on — is irrelevant. All of these policies should be done as necessary, and it’s better to do too much than too little. Policy procedure needs to be subordinated to the overriding goal of making Europe capable of defending itself. In fact, every European institution needs to be reformed and reverse-engineered in order to enable this.

Europe is also going to have to change its political mindset. Lavish pensions and other elements of Europe’s social model are going to have to be temporarily curbed to help give Europe the fiscal space and physical resources to fight off its enemies. All nuclear plants need to be restarted, and Europe should build more nuclear, ignoring “green” parties and environmental activists who irrationally hate nuclear power. Europe needs to reform its land-use regulation to require greater construction of solar and wind power. And Europe is going to have to back off of its aggressive regulation of AI software, in order to produce cutting-edge autonomous weaponry.

Finally, Europe needs to look for friends and allies — and export markets — other than America. India is an obvious choice. Although India is friendly with Russia, the country would undoubtedly welcome Germany's help industrializing — and this would allow German companies to sell machines to India, as they once did to China. The EU should open its markets to Indian goods in exchange for Indians doing the same, recognizing that trade balances are less important than total export demand. Japan and South Korea, along with big developing countries like Indonesia, Vietnam, and Brazil, are other good potential trading partners.

If Europe manages to unify more and to build up its military power, it will increase the number of great powers in the world by one. A planet with a strong Europe, America, China, Russia, and India is a better planet than one where only the last four of those are strong. If Europe shows it can act with unity and purpose, and that it has military power to be reckoned with, America and China — both countries whose leaders tend to respect raw power — may lose their disdain for the region, and return to a more diplomatic, conciliatory posture.

Ultimately, European weakness and division are the reasons the region is getting bullied by so many other powers. Reversing that weakness and division would make the bullies go away. But Europe’s people, and especially Europe’s elites, have to want it.



1

And of course if Europe does expel the Muslim immigrants and start talking up its Christian heritage, as the MAGA folks want, Democrats will conclude that Europe is fascist and be reluctant to help it out when they get back in power. Essentially, Europe is finding itself caught in America’s internal culture wars, and there’s no good way out; the only solution is to realize that the U.S. will not be a reliable partner for decades to come.

2

Would Russia actually try to conquer and rule all of Europe directly, as the Nazis tried to do? Unlikely. But would it try to dominate all of Europe the way the USSR dominated the Warsaw Pact? Yes, definitely. And this sort of domination would be very bad for Europeans, as the Poles could tell you.

SpaceX gets approval to build Starship launch complex at Cape Canaveral

Starship SLC-37

The Department of the Air Force has approved plans to convert a former Delta 4 launch site at Cape Canaveral into a complex for SpaceX's Starship.

The post SpaceX gets approval to build Starship launch complex at Cape Canaveral appeared first on SpaceNews.

Planning sentences to ponder

Planning assistance caused municipalities to build 20% fewer housing units per decade over the 50 years that followed.

Here is the full abstract:

We study how the federal Urban Planning Assistance Program, which subsidized growing communities in the 1960s to hire urban planners to draft land-use plans, affected housing supply. Using newly digitized records merged with panel data across municipalities on housing and zoning outcomes, we exploit eligibility thresholds and capacity to approve funds across state agencies to identify effects. Planning assistance caused municipalities to build 20% fewer housing units per decade over the 50 years that followed. Regulatory innovation steered construction in assisted areas away from apartments and toward larger single-family homes. Textual evidence related to zoning and development politics further shows that, since the 1980s, assisted communities have disincentivized housing supply by passing on development costs to developers. These findings suggest that federal intervention in planning helped institutionalize practices that complicate community growth, with subsequent consequences for national housing affordability.

Hail Martin Anderson!  The above paper is by Tom Cui and Beau Bressler, via Brad, and also Yonah Freemark.

The post Planning sentences to ponder appeared first on Marginal REVOLUTION.

       


Imagine a bigger Seattle

A brief follow-up to my previous post on affordability. Scott Alexander once suggested that building more houses would not necessarily make housing cheaper. That’s because big cities tend to be more expensive, and if you build lots of housing then you are making the city bigger. He then (correctly) noted that it would still be a good thing if the new construction led to higher housing prices, because this would also result in more people being able to live in highly productive areas.

At a theoretical level I have no problem with this argument. However, I argued that when the extra housing comes from supply-side reforms, housing prices are not likely to rise as a result, despite the city becoming bigger. Supply effects would probably dominate induced demand effects. But I'd rather not rehash that debate, as his provocative hypothesis seems like a good way to explain why output is better than "affordability".

Let's say that Alexander is correct that building lots of housing makes a city more expensive. In that case, many people would argue that housing has become less affordable. As Matt Yglesias pointed out, average people tend to equate affordability with nominal prices. In fact, even if building lots of housing made prices go up, housing would actually become more affordable. To see why, consider the implicit factors underlying Alexander's thought experiment.

Assume that metro Seattle doubles its housing stock, pushing the (CSA) population up from roughly 5 million to 10 million. Now Seattle is America's third largest metro area and also a city with lots of highly educated workers and tech companies. That's likely to be a highly productive place, full of very high paying jobs. This can be explained by many factors. Network effects make big cities more productive (think Silicon Valley and Boston). Large size leads to cultural amenities that are popular with highly productive people (think New York and London).

[For simplicity, assume the Seattle population increase is uniform—both more central city high rises and more suburban greenfield developments.]

If you just looked at nominal housing prices then you might be perplexed as to why Seattle’s population had doubled. Why would more people move to a city where (by assumption) housing became more expensive? But if Alexander is correct, then high prices would have been a response to structural changes in Seattle’s economy that made the city more productive.

Affordability is not just about nominal prices; it implicitly reflects both prices and incomes. The primary reason why big cities are expensive is that their workers are more productive and thus earn higher incomes. Seattle would have gone from being affordable to 5 million people to being affordable to 10 million people. That’s more affordable.
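
To put rough numbers on this (the figures below are invented for illustration, not actual Seattle data), think of affordability as a price-to-income ratio rather than a nominal price:

```python
# Hypothetical numbers, purely for illustration -- not actual Seattle statistics.
before = {"median_home_price": 750_000, "median_household_income": 110_000}
after = {"median_home_price": 900_000, "median_household_income": 150_000}   # bigger, richer city

ratio_before = before["median_home_price"] / before["median_household_income"]
ratio_after = after["median_home_price"] / after["median_household_income"]

print(round(ratio_before, 2))  # 6.82 years of median income per median home
print(round(ratio_after, 2))   # 6.0  -- nominal prices rose, yet housing became more affordable
```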

To be clear, I’m not making a tautological claim. To some extent, big cities might attract people despite lower real incomes—think of the artist willing to live in a tiny apartment in NYC in order to be close to the action. But the primary reason why home prices are high in big cities is that incomes are high. Obviously, if you’ve doubled Seattle’s population from 5 million to 10 million then you have in some sense made the city more accessible to more people. That’s good. That’s how we should think about affordability. Affordability is (mostly) output.

Some people would quibble about distributional effects—maybe the poor get driven out. But cities like New York and LA don’t just have more people than smaller cities, they also have more poor people than smaller cities. Were Seattle’s population to rise from 5 million to 10 million, it would almost certainly be the case that Seattle’s poor population would increase in absolute terms, even if it shrank slightly as a share of the population.

PS. My grandmother visited Seattle in 1962 and later gave me a plastic model kit of the Space Needle, which I assembled. We had no computer games in those days, and this model was one of my favorite toys when growing up. There’s a (stupid) debate about whether real incomes have risen since the 1960s (duh!), but it’s especially silly when it comes to children. Their toys are so much better than our boomer toys that it’s like they are living on another planet. Middle class kids now have higher living standards than did the children of billionaires back in the 1960s. BTW, I seem to recall that there were only three billionaires in the 60s.

PPS. I was about to say that for once Trump was correct about something, but even when he’s right he surrounds his accurate comments (“con job” and “fake narrative”) with utter drivel (“trillions”, “worst inflation”, “no affordability”):

After ticking off what he claimed were trillions of dollars of investments and other economic accomplishments, Mr. Trump called the issue of affordability a “fake narrative” and “con job” created by Democrats to dupe the public.

“They just say the word,” he said. “It doesn’t mean anything to anybody. They just say it — affordability. I inherited the worst inflation in history. There was no affordability. Nobody could afford anything.”

Talking With Paul Kedrosky

As I say at the beginning of this interview, it’s annoying for economic analysts that two huge things are happening at the same time: a radical change in U.S. trade policy and a giant AI boom. Worse, while I think I know something about tariffs, the more I think about AI the less I believe I understand. So I talked to Paul Kedrosky, investor, tech expert and research fellow at MIT, for some enlightenment. Lots in here that I found startling.

Transcript follows.

. . .

TRANSCRIPT:
Paul Krugman in Conversation with Paul Kedrosky

(recorded 12/03/25)

Paul Krugman: Hi, everyone. Paul Krugman here. I'm able to resume doing some videos for the Substack, and today's interview is based on me being really annoyed at history. If only one big thing would happen at a time. Unfortunately, where we are now is, on the one hand, we have tariffs going to levels that we haven't seen for 90 years, which should be the big story and where I feel fairly comfortable; but then we also have this AI explosion where I feel completely at sea. I don't quite understand any of it. I've been reading and watching interviews with Paul Kedrosky, who is an investor, analyst, and currently a research fellow at MIT. He certainly knows more about it than I do, and I wanted to just have a conversation where I try to understand what the heck is going on, insofar as anybody can.

Hi, Paul.

Paul Kedrosky: Hey, Paul. Both of us “Paul K.,” that’s dangerous.

Krugman: Yeah, welcome on board.

Kedrosky: Thanks for having me.

Krugman: Let me ask first, I have a really stupid and probably impossible question, which is that at a fundamental level what we’re calling “AI”—I think you usually use generative AI, large language models, although they’re not just language now—but at a fundamental level, I don’t understand how it works. Is there a less-than-90-minute explanation of how the whole thing operates?

Kedrosky: There is, and I think it's really important, because it helps you be a more informed consumer of their products. A really good way to think of these things is as grammar engines, and I often call them "loose grammar engines," meaning that there's a bunch of rules in a domain that I can instantiate, whether that domain is language, or the law, or software engineering: these are all grammars when you abstract away from how we use them, meaning that they're actually rules about what's going on. If I ingest all of that and pull it into a giant network of matrices that weight all of it, I can then do what we call training on that basis, and the model makes pretty good predictions about the "continuations," the next thing that might be generated, whether it's a subroutine in software, a PowerPoint slide, some language in an English presentation, or even, loosely, an image.

But it’s all this idea that these things are “loose grammars” that are reasonably good at predicting what should come next, the continuations based on the data they’re trained on, which tells you a lot of things about what they’re good at, and it tells you a lot of what they’re bad at.

Krugman: It’s a little bit like if you give me four words of a sentence, correlations out there will tell me what the next word is likely to be. But it’s a lot more elaborate than that, right? There are multiple layers, as I understand it.
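
(To make the "correlations tell me the next word" intuition concrete, here is a toy next-word predictor built from nothing but bigram counts; a minimal sketch with an invented corpus, nowhere near what an actual language model does.)

```python
from collections import Counter, defaultdict

# Minimal sketch: predict the next word purely from co-occurrence counts.
# The "corpus" is invented; real models use vastly more data and context.
text = "the box is red . the box is heavy . the cat is on the box .".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    bigrams[prev][nxt] += 1            # count which word tends to follow which

def predict(word):
    follows = bigrams[word]
    return follows.most_common(1)[0][0] if follows else None

print(predict("box"))  # 'is' -- the most common continuation in this toy corpus
```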

Kedrosky: Right. It's like the old Einsteinian expression of "spooky action at a distance": it's not just about proximity, in terms of the very next thing that's coming (we call these "tokens"); it's also about the entire holistic context in which that language is embedded in the grammar. So things that are far away actually have a surprising influence on what the next tokens might be.

So it's not something as simple as saying, "that box is red, so you know a color should come up next." It's not that simple. It has a lot to do with the entire context on which it was trained. In turn, this "spooky action at a distance" tells you a lot about what the output might look like. It turns out (and this, in a weird way, surprised even Google in 2017, when the original so-called Transformers paper that led to a lot of the recent developments in AI first appeared) that the architecture was created for language purposes. It was created for use in their Google Translate application. They thought, "this is kind of nifty, it doesn't work too badly for that." But the idea that, embedded in language itself, through this "near and far prediction" and this "spooky action at a distance," this idea of attention could actually capture a lot of what we call knowledge, and therefore a lot of what seems almost like inference, was surprising to everyone, which is why Google kind of let things go by the wayside.
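
(For the curious, a minimal sketch of the attention idea in NumPy: every position's output is a weighted mix of all positions, near and far. The embeddings are random placeholders and there are no learned projections, unlike a real transformer.)

```python
import numpy as np

# Minimal single-head attention sketch: each token's output mixes information
# from every position in the sequence. Vectors are random stand-ins; real
# models use learned query/key/value projections and many heads and layers.
rng = np.random.default_rng(0)
seq_len, d = 6, 16
x = rng.normal(size=(seq_len, d))      # stand-in for 6 token embeddings

scores = x @ x.T / np.sqrt(d)          # relevance of every token to every other token
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the whole sequence
context = weights @ x                  # each position blends near and distant tokens

print(weights.round(2))                # note the nonzero weight on distant positions
```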

It took until the technology appeared inside other companies, like OpenAI, for it to have a huge impact. So it's not as simple as just predicting the next token. The idea is that, in the context of these attention mechanisms that look at the entire body in which this information is embedded, whether it's the English language or software or the law or any of these domains, you can actually get something that feels to us like, "oh, it understands what I'm thinking," or "it understands the question I'm asking," which is really just a reflection, across these large corpuses, of what prediction feels like. It feels like a continuation of what a normal person would think. What's interesting (I have a colleague doing work on this) is that if you back-sample who it thinks you are, in the context of the training models, it has a rough sense that you're something like a 37 year old guy on Reddit. That's the kind of person it's doing the continuation for, because that's a big chunk of the training corpus. So if you back-engineer what the data actually suggests, that can also tell you something. I often tell people, whenever they send me a message like "a large language model said I should do x, y, z" (for instance, this should be my next car, or this is the answer to the essay question): what you're really saying is, "a 37 year old guy on Reddit said it," and you've got roughly the same amount of information. So it can be good, or it can be really fraught.

Krugman: We have all these stories about ChatGPT (or similar) telling people what they want to hear and giving them really bad advice. “Guys that look like you tend to make the same mistake,” basically.

Kedrosky: Exactly. Of course, it's even more fraught now because of the nature of training and how we've increasingly exhausted the supply of 37 year old guys on Reddit. A lot of the optimization in models now happens in what's called "post-training": what goes on after the model has been created, where I go out and say, "here's the response it will give you to this particular prompt, do you like it?" We call that reinforcement learning with human feedback. That leads down a path no different than being a professor at MIT obsessed with student ratings. You can become very sleazy, right? All of a sudden all you care about is whether or not your students like you. This is a dangerous path for all the reasons we know, and it's no different in the context of models. So not only is the corpus itself heavily weighted toward that group, but because the models are increasingly shaped in post-training (we've exhausted a lot of the pre-training data; there's only so much of it out there), the models become "sycophantic." They're tail-wagglingly eager for you to love them. That's what we're seeing increasingly.
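
(A minimal sketch of the post-training idea described above: a toy linear "reward model" fit to pairwise preferences with a logistic, Bradley-Terry-style objective. All data here is synthetic and the model is deliberately simplistic; real RLHF pipelines are vastly more involved.)

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
true_pref = rng.normal(size=dim)        # hidden "what raters happen to like" (synthetic)

# Synthetic preference data: (features of chosen reply, features of rejected reply).
pairs = []
for _ in range(500):
    a, b = rng.normal(size=dim), rng.normal(size=dim)
    pairs.append((a, b) if true_pref @ a > true_pref @ b else (b, a))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(dim)                       # toy linear reward model
lr = 0.05
for _ in range(50):                     # gradient ascent on the pairwise log-likelihood
    for chosen, rejected in pairs:
        margin = w @ chosen - w @ rejected
        w += lr * (1.0 - sigmoid(margin)) * (chosen - rejected)

# A model tuned to maximize this learned reward drifts toward whatever raters
# happened to like, which is where the "sycophancy" worry comes from.
print(np.corrcoef(w, true_pref)[0, 1])  # close to 1: it learned the raters' taste
```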

Krugman: Oh, boy. What strikes me — and I'm by temperament just a skeptic about all these things — is that I paid attention out of the corner of my eye to artificial intelligence, and efforts there, for a very long time, through decades and decades of immense frustration, when being able to recognize "this is a cat" was basically an insoluble problem. Then all of a sudden all of this stuff becomes absolutely routine, which is just mind boggling.

Kedrosky: The analogy I make is that we—via the Transformers paper—stumbled into a kind of Saudi Arabia of data. The right way to think about it, from my standpoint, is that the Saudi Arabia of data was the public internet, which suddenly became useful as training data in the context of these massive models that required huge amounts of data and improved on the basis of scaling, meaning that a 10X increase in the amount of data you trained on led to a predictable increase in the capacity of the model to make what we would call "useful inferences." That was novel, because we could never do that in the past. But that Saudi Arabia of free textual data is no different than any other reservoir, whether it's the Permian Basin or anything else: we've increasingly exhausted it. What you're seeing now is that those old scaling laws (the goddess from 2017, 2019, 2020, GPT-1, all the way up to the present) are producing less and less bang for the buck, no different than any extractive model where the remnant of the reservoir is much more expensive to get access to, probably more polluted, probably less useful, and probably requires more refining. This is exactly the same, and that's the point at which we are now.
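
(To illustrate the "less bang for the buck" point: scaling behavior of this kind is usually described as a power law, so each additional 10x of data buys a smaller absolute gain. The constants below are invented for illustration and are not fitted to any real model.)

```python
# Illustrative power-law scaling curve; every constant here is made up.
def loss(n_tokens, irreducible=1.7, scale=30.0, alpha=0.13):
    return irreducible + scale * n_tokens ** (-alpha)

prev = None
for n in [1e9, 1e10, 1e11, 1e12, 1e13]:
    current = loss(n)
    gain = "" if prev is None else f"  (improvement: {prev - current:.2f})"
    print(f"{n:.0e} tokens -> loss {current:.2f}{gain}")
    prev = current
# Each extra 10x of data yields a smaller absolute gain -- the "exhausted reservoir" effect.
```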

Krugman: Funny story, I actually knew people who, not worked on but were close to the original Google Translate stuff, and their initial big resource—at least they told me—was OECD documents. Because of the multinational thing, everything is said in four languages. So it was kind of a Rosetta stone.

Kedrosky: No, you're right. It was a tremendous training corpus for those models. So again, back to the 37 year old guys on Reddit: once you understand the nature of what's under the hood, it tells you a lot about why these models are useful and where they are less so.

The other point I'd make is that it also helps to understand what training actually means, because we throw that word around a lot. Training follows this idea of what's called "gradient descent": as I make changes, as I do training cycles, how much incremental improvement do I see, and at what point does it stop or even reverse? In certain domains, the data has a really high rate of gradient descent, meaning that small changes provide a huge signal back to the model, so the models are very good at those things. A good example of that is software itself. If I make minor changes in code, I don't get minor differences on the other side; I get broken software. So there's a huge signal that flows back into training when you make minor changes in software, and the gradient descent is very sharp, which makes the models much better on relatively limited data. The English language is the exact opposite: if I make minor changes in language and ask you which one's better, you'd say, "oh, I don't know, maybe this one, maybe that one."

So the notion of learning from language versus learning from software is very, very different, which is incredibly important: it tells you why these models are great in the context of software, because the gradient descent of learning is so sharp, and why they're so equivocal and sometimes even dangerous in language, where we don't have that same ability to learn from relatively small morsels of information. It also takes you to the next step, which is why benchmarks in AI are so, I'll say, conflicted. Software is such an extremely good domain for these models that saying "this model is very good at software, therefore we're on a path to AGI" shows a profound misunderstanding of the nature of large language models. Of course they're good at software. There could hardly be a better domain for training a large language model than software.
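
(A minimal sketch of the "sharp versus weak training signal" contrast: plain gradient descent on two one-dimensional objectives, one steep and one nearly flat. The curvature values are arbitrary illustration numbers.)

```python
# Gradient descent on loss(x) = c * x**2: the steeper the curvature c, the more
# signal each small parameter change feeds back, and the faster learning converges.
def descend(curvature, steps=20, lr=0.1, x0=5.0):
    x = x0
    for _ in range(steps):
        grad = 2 * curvature * x       # derivative of curvature * x**2
        x -= lr * grad
    return x

print("sharp objective (strong feedback):", round(descend(curvature=4.0), 6))   # near 0 quickly
print("flat objective  (weak feedback):  ", round(descend(curvature=0.05), 6))  # barely moved
```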

Krugman: By the way. Just in case there are some listeners that don’t know, AGI is artificial general intelligence. That’s the holy grail, and I think you’re one of the big skeptics about this being at all what we’re heading towards right now.

Kedrosky: Very much so, for some of the reasons I'm describing: the nature of large language models is such that, architecturally—for reasons of data-set exhaustion and of declining returns on increasing investment—we're kind of at a cul-de-sac already. We're seeing that happen. So the notion that I can extrapolate from here towards my own private God is belied by the data itself, which shows that we're already seeing a sharply asymptotic decline in the rate of improvement of models not in software, but in almost every other domain.

Krugman: Since we’re talking about investment in terms of the economics and the business side, one of the things that we tend to “think of thinking”—or whatever it is, to the extent we think of this as some kind of thinking-like process—we tend to think of that as being kind of immaterial, as existing in a pure, nonphysical domain. Yet the whole thing about all of this is the extreme physicality of it. We’re talking about particularly huge amounts of capital being deployed, huge amounts of energy being consumed.

Trying to estimate how much CapEx is coming from AI is a huge pain. You have one of the most widely cited estimates but it’s looking a little stale now. I can tell you about why I find it a problem, but why don’t you talk about what’s involved?

Kedrosky: We have this prodigious amount of spending going on, and that was one of the windows through which I got interested in the investment side of this stuff, because it seemed so large that it was having an impact on the economic data itself. I was looking at that early this year, and then just yesterday or the day before there was a new OECD report on the US showing that in the first half of 2025 the US was arguably in a recession absent AI CapEx spending. That caused scarcely a ripple, in terms of people saying, "hello, we're running a giant private-sector stimulus program that's keeping the US out of recession"; no one's talking about it in those terms.

The analogy I make all the time is that when you don't understand how large AI CapEx is and how consequential it is, you have the causality of policy all messed up. You don't understand that the thing that's actually driving the US economy is not the thing you think it is. I often make the joke that it's like my dog, who barks when the mailman comes to the house, and then the mailman leaves and he thinks it's because of the barking. It's like, "no, the mailman leaves every day." It doesn't matter whether you bark or not; they always keep going. You just have a bad model of causality. That's no different than what's happening now in the world of macro with respect to the role of AI CapEx in the US economy; for example, if you want to believe tariffs are the primary reason why the US did well in the first half. If you're of that mindset, you're ignoring the substantial role of AI CapEx, which on an annualized basis is probably over $1 trillion, and which made up more than half of U.S. GDP growth in the first half of the year. That arguably kept the US out of recession on the strength of a single sector of private spending, which is just remarkable to me, and it's really fraught whenever you try to apply another lens and say, "no, it was because of this or because of that." No, this was the reason, and it helps to explain the job growth numbers in the first half as well, and continues to: data centers are not a huge job creator. All of this comes back to the capital intensity associated with this one particular sector.

Krugman: What drives me crazy is that you look at the standard way the data is cut—basically you look at national accounts—and I've seen people say, "oh, well, let's take communications and information equipment plus software." But that's wrong in both directions. Some of that stuff is not AI; on the other hand, there's just a lot of construction of buildings that's part of AI.

Kedrosky: You can back into it with things like nonresidential fixed investment and try to come in through that angle, which is also fraught. At least one of the ways I tried to triangulate it was to build up from the numbers released by the companies themselves, because they're so eager to brag about how much they're spending. We can talk about why that is; I think it's partly a deterrence thing: "I'm willing to spend so much to dominate this market that there's no reason for you to spend anything at all." It's this O.K. Corral phenomenon of trying to deter people from actually contesting the market with you. So you make these giant preemptive announcements, partly to hoard capacity, but partly to deter competitors.

But nevertheless, we're in this unusual moment where they're willing to tell you what they're doing, in a way that actually creates some data you can aggregate up and say, "this is what's going on with respect to spending," that you might not otherwise see, certainly not from the national accounts.

Krugman: Some respectable people have tried very hard, and I concluded that the BEA data is just not cut in a way that lets us do this, and we have to do something like what you’ve been doing.

Kedrosky: There are other problems with the data too, which is really amazing to me. There's an ongoing business trends survey where the Census Bureau, trying to be helpful, added a line on AI adoption back in 2022. It's showing that AI adoption actually began plateauing at around 18% of large corporations already in the third quarter of 2025, which seems ridiculous, obviously, for a host of reasons. But when you go back and look at the actual survey item, you realize that it wouldn't have been out of place ten years ago: it's about SQL dashboards and all of these machine learning technologies that were ancient ten years ago. So even the attempts to improve the data aren't very compelling.

Krugman: So we’ve got bad data on adoption, and bad data in the national accounts on what’s actually being spent. An ongoing problem in general is that a lot of our economic statistics are really designed for the economy of 1929.

Kedrosky: That’s right. (laughs)

Krugman: We’ve got an infinite number of categories of textiles. (laughs)

Kedrosky: Yeah, tremendous data on textiles. Not so much on recent adoption of large language models, which is fine; I understand that. But when you introduce a new survey item in 2022, say it’s oriented toward current adoption of these emerging technologies, and then it’s all about ancient machine learning technologies, it’s not going to tell you very much.

Krugman: Quick question. Do you have a sense—this may be unfair—of how the AI numbers you’re looking at break down? How much is equipment and how much is structures, meaning buildings?

Kedrosky: You can come at it from the standpoint of the data centers themselves: roughly 65-70% of the cost of a data center is the equipment.

Krugman: So it is mostly equipment.

Kedrosky: It is mostly equipment, and obviously the primary beneficiaries are companies like Nvidia: the GPU manufacturers. Again, there are issues with that being the primary component, because there’s a relatively short timeline over which those technologies must be replaced. Michael Burry of “Big Short” fame has been out chattering about this stuff.

I think what’s going on is somewhat misunderstood, but nevertheless, I sometimes say a data center full of GPUs is like a warehouse full of bananas: it’s got a relatively short half-life in terms of its usefulness. That’s important to keep in mind, and it’s what makes this different from prior CapEx moments, railroads, canals, rural electrification, take your pick, because of the perishability of the thing we’re investing in.

Krugman: So let’s talk about chips. To a technological naif, a chip is a chip. RAM is one thing: memory chips, which are commoditized, although I gather there’s a shortage of them globally now?

Kedrosky: Yes, there is, in particular in what are called HBM, high bandwidth memory chips, which are the ones that basically interconnect with these GPUs and allow them to parallelize the training process. But yes, there’s a shortage in those: not in PC RAM, but in high bandwidth memory.

Krugman: Then there’s GPUs and TPUs—which I don’t quite get. They’re basically these specialized chips that do computational things, or I guess GPUs—the G is for general, so less specialized—but still much more elaborate.

Kedrosky: It’s actually for “graphics processing units,” weirdly enough. The origins of Nvidia’s GPUs were back in the day when everyone thought the world was going to get taken over by 37-year-old guys on Reddit with giant machines, playing games on their personal computers at home. The reason GPUs are so good for training is that they were created to be very good at manipulating real-time graphics on a screen, which is just a giant set of matrix calculations of positions on the screen, and researchers figured out fairly quickly, “wow, that’s actually useful for doing huge amounts of matrix math,” which underlies most of machine learning and thus large language models. So GPUs really were almost an accident of history: their role in large language models emerged from the graphics world.

Krugman: One big insight I got from you is that, until like a week ago, I understood these chips depreciate fast, but I thought it was going to be basically depreciation through obsolescence. It turns out it’s very, very different. Do you want to tell us about that?

Kedrosky: Yeah that’s really important because there’s this idea that the reason why this is a warehouse of bananas—or whatever your favorite fruit is in this context—is due to the pace of change in technology. That’s kind of a trope, “oh, everything changes quickly. I have to throw out my phone, my laptop.”

That’s not really the primary driver in most of what are called hyperscale data centers, the largest ones run by people like Google and Meta and others. You have to think about it in the context of the workload, what’s actually happening inside the data center, and it can loosely be split in two: there’s the training aspect, where I’m training new models, or enhancements to old models, using giant amounts of compute, at least 10,000 to 20,000 GPUs inside one of these data centers; and then the other chunk of the activity is inference, which is responding to requests, like when I write some nonsensical question to a chat AI like Claude or whatever. So those are, loosely, the two things going on inside the data centers, and chips underlie both of them. But from the standpoint of wear and tear on the chip, they are very different activities. Take training as an example: I’m running the chip flat out 24 hours a day, seven days a week, which requires an immense amount of cooling and incurs a lot of thermal stress. Inference I’m running more episodically, maybe more in the day, less at night. People aren’t making as many requests at night, so the load changes fairly dramatically.

So the analogy I make is: imagine one chip used for 50 hours of training and another used for 50 hours of inference. Now imagine cars in the same circumstance. I raced one car for 50 hours in two 24-hour races, and I took the other to church every Sunday for an entire year, roughly 50 hours, say half an hour there and back. Which car would I rather own? The one that went to church on Sundays, even though 50 hours is 50 hours, because racing a car in two 24-hour races, even though the car has only been run for 48 to 50 hours in a year, imposes a very different kind of stress.

When you use a GPU for training, it’s like those two 24-hour races; inference is like taking the car to church on Sundays for a year. And the data is fairly clear about this: there’s a long-tailed distribution, where some chips last quite a while, but there’s a high failure rate in the first 2 to 3 years, with a mean time between failures of about two and a half years or so. So long before we might be saying, “oh, look, there’s a hot new chip out there that I want to replace this thing with,” you’re seeing a steady drip of chip failures. Aggregate that up: imagine you had a 10,000 or even a 20,000 GPU data center. On those statistics you should expect a chip to fail about every 3 or 4 hours. So long before I get to the point where I’m turning these over because there’s a new generation of chips, I’m turning over a vast chunk of my chips just because they’re failing under thermal stress. These workloads are like running the motor flat out in that race car: it’s high heat, it’s a lot of stress, things begin to break down. That leads to turnover long before, generally speaking, you might turn the chips over just because there’s some hot new chip out.
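A naive back-of-envelope check of that cadence, assuming independent failures and the roughly 2.5-year mean time between failures quoted above (my arithmetic, not Kedrosky’s model): a fleet of 10,000 to 20,000 GPUs sees a failure on the order of every one to a few hours, the same ballpark as the figure he cites; the exact interval depends on the effective MTBF and how many chips are actually under load.

# Back-of-envelope GPU failure cadence, assuming independent failures and
# the ~2.5-year mean time between failures mentioned in the interview.
MTBF_HOURS = 2.5 * 365 * 24          # ~21,900 hours per GPU

for fleet_size in (10_000, 20_000):
    failures_per_hour = fleet_size / MTBF_HOURS
    print(f"{fleet_size} GPUs: one failure roughly every "
          f"{1 / failures_per_hour:.1f} hours")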

Krugman: Wow. So basically, as you say, it’s the training rather than the inference that’s the issue. But the training is basically just running chips hot. They get heat stroke, more or less.

Kedrosky: They get heat stroke, and it can be really insidious, because they don’t necessarily break catastrophically. It’s not like your car suddenly stops. They can actually slow down, and you don’t realize the chip isn’t running as fast as it once was. So it takes a lot of work to figure out, “oh, that chip is running at a subpar level.” It’s not as neat as “it just blinked out of existence, now I need to hot swap in a replacement,” which makes the replacement even more time consuming and complicated. But that split, and the difference in what the two workloads do to the chips in the data centers (the car going to church on Sundays versus running 24-hour races), is incredibly important, because it tells you a lot about the future capital costs of replacing the GPUs in a data center. There’s an ongoing replacement wave driven not necessarily by technological change, but by the actual thermal stress on the chips themselves.

Krugman: Okay, so we have lots and lots of analogies, with the telecoms boom of the 90s. We all said, “well okay, a lot of companies went bust. The returns never added up.” But on the other hand, you had all this fiber in the ground, which eventually became useful. But you’re saying basically that’s not what’s going to happen here. What we’re going to end up with is a bunch of burned out chips.

Kedrosky: That’s right, a bunch of burnt out cases. That’s exactly it. It’s kind of like The Big Lebowski as a chip: “I’m not sure what’s going to happen here, but all I know is this guy’s long past his due date.” A big part of the problem here is not just that technological change makes this 10,000 GPU data center less useful; it’s also that the chips have gone through cycles of thermal stress, so their remaining lifespan isn’t particularly long anyway as a result of what they’ve already done. There’s a double whammy that will make it less useful. The response of the technology industry is generally this idea: “well, that doesn’t really matter that much. What we’ve created is a powered shell. It’s this giant building that’s got power, it’s got cooling, it’s got all the things, so we can just hot swap in new GPUs in future.” And that is of course assuming away the problem, which is that 60 to 70% of the cost of the data center is the chips themselves. So I’ll give you the power, the electricity, the cooling, the walls and the concrete; I’ll give you all that for free. You still have the preponderance of the cost in front of you in terms of replacing the GPUs.

So the notion that I’ve built a fixed asset that’s perpetually useful is really dangerous. I hear this a lot, in particular from regional economic development officials, who are talking about why they’re offering really extreme subsidies and tax abatements to hyperscalers to build data centers in their area. They talk about data centers as—and I’ve heard this expression so many times—“the factories of the new industrial revolution.” The analogy is just so fraught, for exactly this reason: leaving aside that the analogy is bad, there isn’t the longevity you would hope for.

Krugman: I think Jim Chanos may have been the first to say this to me, but I know other people have said it: it’s like shale wells, where a lot of people lost a lot of money because it turned out that a shale gas or oil well doesn’t keep yielding the way a conventional well does. It depreciates really fast.

Kedrosky: It’s just another extractive resource economy, and it’s extractive in surprising ways. Not just in the declining return from the GPUs themselves, but also the declining return, as I was talking about earlier, from these giant training sets that allowed us to scale up the so-called scaling laws for large language models, the ones that got us to GPT-4 and 5 or Claude. There’s a declining return on that, at a much higher cost.

The cycle times are longer, there are more training cycles, and the cost is higher. So in both ways, the extractive economy that underlies all of this is producing declining returns, and unlike shale, it isn’t just the one point of failure with respect to declining returns to extraction; there are multiple places where you begin to see it. It’s masked by the capital expenditures, because people try to spend their way out of the problem: I’ll run more training cycles to produce better data. Of course, that doesn’t work. Then they go into the mode—like Elon Musk has been doing with his Grok model—of spending half the time on post-training.

So instead of relying on finding new data, I’m going to do all kinds of work shaping the responses the model gives to people. If you look at the training data, almost 50% of the training cycle time on the latest Grok models was post-training, which can work, but in the limit leads to these obsequious and sycophantic behaviors that make the responses unstable at best and realistically not very helpful.

Krugman: The last number you had was a little bit bigger as a share of GDP than the telecoms boom of the 90s. But presumably you think it’s higher than that now?

Kedrosky: I do. As a share of nonresidential fixed investment it’s probably around 14% now. So we’re considerably ahead of where we were in the telecom bubble; we’re somewhere between rural electrification and World War II rearmament.

Krugman: But not yet like railroads in the 19th century.

Kedrosky: Not yet like railroads, but on a path to a similar place. And—this is really important—we’re at the point where there’s a financial flywheel, where increasingly the financing of these data centers is somewhat divorced from what goes on inside the data center, because we’ve created a financing template. You have these SPVs, special purpose vehicles, into which a third party contributes capital and the tech company contributes technology, and out the other side magically pop these securities with great income and yield characteristics that are hugely attractive to investors. They look at it almost like a synthetic security: I understand it’s the SPV, the data center, that’s producing this.

But on the other side of this is Meta or Google, and they’re a prime credit, a really strong credit, so they’re going to keep paying on this. I don’t really care what goes on inside the data center, because I have a lot of confidence in the counterparty. We know where this kind of thing leads when you have financing flywheels driven by securitization, high yields, and people not caring what goes on inside the actual structure itself: it leads to a lot more construction and eventually over-building.

Krugman: Oh, God. I’m having flashbacks to 2008, 2009. All of the stuff that was “perfectly safe” because after all, AIG was backing it, right?

Kedrosky: Right, exactly. Very much so. You have the same phenomenon of a look-through mechanism, where people look through the legal vehicle and say, “oh, well, it doesn’t matter, because on the other side of this it’s Google and Meta.” And it’s even more insidious. Some of the private credit providers have been straight up about this: in the contractual terms that underlie the data centers, if you cancel early and no longer continue as the technology company using one of these centers, there are whole provisions which basically force you to pay, in some sense, the net present value of the future lease payments back to the private credit company. They’ve been very clear that this actually works out to their benefit, given the time value of money: they don’t mind if you walk away early and make the payment, because “now I have more capital to do more building.” So in a weird way, there’s a perverse incentive in the system to make bad loans.

Krugman: I do want to ask about circular financing, except that I’ve been looking at all of these pictures showing the money flows among all the players and my eyes were glazing over—I’m supposed to be good at this!—but there is some sense that things are kind of inflated by everyone taking in each other’s washing. Is that wrong?

Kedrosky: No, it’s absolutely right. We increasingly see circumstances where an Nvidia will make an investment in a provider with the provision that they use Nvidia’s chips; in turn, that becomes the provider’s primary source of semiconductors for its training centers, which feeds back and leads to more buying. And round and round it goes. It gets very incestuous and complicated, because we have all of these interlocking combinations, but the reality is that it creates the impression of much more demand than there is. It’s done in part for strategic reasons, because Nvidia is trying to lock up a position in the marketplace where it says, “there’s really no point in even looking at a Google chip or an AMD chip or anyone else, because look how much we’re dominating the market, and look at the lengths we’re prepared to go to make sure we continue to do that.” So it’s not so much malfeasance as a strategic move that ends up creating the impression of more growth than actually exists, because these companies all believe there’s a land grab, literal and figurative, going on right now: I need to make sure I populate these things with my technology now, because who knows what opportunities I’ll get in the future.

But all this tends to do is create circularity, and round and round it goes. It becomes very difficult to get a true sense of what demand actually looks like. That’s made worse by the hoarding that’s going on: people don’t know what demand will look like in future, but they do know there’s relative scarcity of access to power, so they want to lock up every location they can now and let the chips fall where they may—no pun intended—in future. So there’s this hoarding phenomenon, which also leads to overbuilding, this circular phenomenon, and even to a kind of Chinatown-like speculation around land grabs that might one day turn out to be useful.

We see the emergence of these companies called powered land companies, which are kind of analogous to what went on in the days leading up to LA taking over the Owens Valley’s water supply: you show up with numbered companies and buy up locations, and no one knows exactly what you’re doing. It’s all in anticipation of someone eventually wanting that land, so you can say, “haha, I’m already here, and I’ve already got the rights to power here, so if you want to build a data center, away you go.” There’s a whole host of these so-called powered land companies that have no interest in building data centers themselves. They just want to go through a Chinatown-like model of preemptively buying the land in anticipation of an eventual buyer showing up.

Krugman: Wow. Power is one of the things that completely caught me off guard: the sheer power requirements and how they become a constraint.

Kedrosky: Part of the problem is that the technology industry itself isn’t used to anyone saying no; it’s kind of like a petulant toddler. And power is the connection to the real world, because these things have to be grid connected; we have to get power from somewhere. We’re looking at buildouts of certainly hundreds of megawatts, and even into the gigawatts, which is obviously far in excess of what you can straightforwardly attach to an orthodox grid. At the same time, there’s a huge temptation on the part of utilities to say, “we’ll take this,” because the predictability of the load and the high quality of the credit make it really appealing.

But then the problem becomes that the utility has to make itself whole, so it has to turn around and probably increase rates to its ratepayers, which is why we’re seeing soaring electrical bills all over the place. We’re even seeing people pushing back and saying, “I don’t want data centers connecting in my region,” and that in turn turns into what’s called “behind the meter” power: you show up, but you’re supposed to bring your own power. Well, that’s easier said than done. It turns out it takes a long time to build a nuclear generating station, and it’s now something like 4 to 5 years to bring on natural gas. So people connect to the grid with the promise that they’ll eventually be self-sufficient, but who knows whether they ever will be. You get into these perverse situations, like recently in Oregon, where Amazon connected three data centers to the grid and has now registered an official complaint with the Oregon PUC because it can’t get power for any of them, even though it was promised. This is the beginning of what you would expect to happen, because the temptation to take on these loads is immense, but the loads themselves are so large that it’s not straightforward to attach them without pushing the costs back onto ratepayers.

Krugman: Yeah, the utilities may like it, but the governor elect of New Jersey probably doesn’t.

Kedrosky: That’s exactly right. Then you get even crazier situations, like a recent one with AEP, American Electric Power, where you actually have a utility speculatively buying power futures that it hopes will be used by data centers. The data center demand doesn’t show up, and so they have to turn around.

That’s happening right now with AEP. They’re trying to dump that power back into another interconnect, so it’s essentially a secondary distortion of a market, because they have 700MW of power that’s just burning a hole in their pocket. They were buying speculatively, trying to control power so they could then turn to data centers and say, “hey, come here.” That didn’t happen, and now they’re dumping power, which is distorting another market.

Krugman: So we have a big problem with power. We probably have much faster depreciation rates than are being built into the models. The question is, what is the prospect of this stuff actually generating the kinds of returns that would justify the investments?

Kedrosky: They’re low. This is why you get into these perverse conversations, which I seem to get drawn into all the time, about what that might look like. You get people doing these top-down models and saying, for example—and this one just makes me crazy—that the TAM (the total available market) for global human labor is something like $35 trillion. What if we get 10% of that? That would be a $3.5 trillion revenue stream. For a host of reasons, that’s an indefensible way of approaching this. It’s partly the old mistake of saying, “if I just got 5% of the Chinese market, I would be a huge business.” Well, no one gets 5% of the Chinese market; you succeed or you fail. It doesn’t work that way, and the same goes for 10% of the global labor market. But more fundamentally—and this is more your bailiwick than mine—a $35 trillion market into which AI makes huge incursions is no longer a $35 trillion market. It’s a massive deflationary force. You have 10% of something, maybe, but I have no idea what that something is anymore.

So the idea that you can confidently say, “people will continue to pay as much for labor when it’s done this way versus that way,” seems naive at best, inept, or really self-serving at worst. The same goes for all of these attempts at a defensible model, whether top-down or bottom-up, where people say, “well, what if 5 billion people worldwide are all paying $100 a month for some kind of large language model subscription? Then we’re making enough back.” That’s not the way it’s going to happen! That’s an incredibly naive way of thinking about how this will play out. It’s more likely it’s just running for free on my phone and I don’t even notice; I’m not going to be paying for it at all.

Krugman: There are not 5 billion people in the world who can afford $100 a month.

Kedrosky: No, of course. It’s just a staggering misinterpretation. So both ways of thinking about it really don’t make a lot of sense. You fall into this—and I use this expression all the time—faith-based argumentation: “it has worked out before.” This is what everyone said during the fiber bubble, or during the dot-com bubble, or pick your favorite moment of technological change: “these things always work themselves out.” I find that a really patronizing approach to the problem, because the scale of the spending is now at a sovereign level; the amounts of debt being raised by companies like Oracle rival a mid-sized European power’s annual sovereign debt raising. These are non-trivial numbers, and it’s rippling through to places like Taiwan, where TSMC is now something like 15% of Taiwan’s GDP while every other sector in the country is struggling, not least because of technology, but also because of tariffs. So we’re creating new fragilities in all kinds of places as we merrily extrapolate our way along on the basis of this debt-fueled spending.

Krugman: Of course, there’s always the possibility of other players, other approaches. It’s a little bit like last year, when the Danish economy was all about Novo Nordisk, and it turned out other people can produce weight loss drugs too.

Kedrosky: The analogy is spot on, because at peak Novo Nordisk was something like 14% of Danish GDP. In a weird sort of way, TSMC now holds the same role with respect to Taiwan, and faces the same fragility risks, because LLMs, large language models, the basis of much of the current excitement, are at a kind of natural architectural dead end with respect to some of the things we’ve been talking about. So the idea that it’s going to continue, that we can project in the same way and extract the same gains from the same kinds of spending, is just incredibly unrealistic. That’s one of the reasons you’re seeing people increasingly look at other approaches. In all likelihood, none of them will lead to anything like AGI, but it doesn’t really matter. The point is that it’s a demonstration of the extractive exhaustion of what we’ve currently done.

Krugman: There is this talk, in the mostly uninformed circles that I run in, about smaller models trained on a more limited base, along the lines of the Chinese approach, that are much cheaper to run, and that this would be a huge blow to these companies if it turns out to be right.

Kedrosky: Absolutely. So you have these small and micro models that are much cheaper to train. DeepSeek was loosely an example last year of a much less expensive method for training models, and we saw it recently with Moonshot’s Kimi model, which just came out, among other Chinese models. In a sense these are a different approach to the same problem: they’re not a new architecture, they’re still large language models, just at a much smaller scale in terms of the time required to train them and the cost required to use them. So they’re really important, but they’re even more important if you think forward. If I’m right that the amount of training we do in future has to decline, because of the natural architectural limits of large language models, then the economics are dictated by inference, by the ability of these models to respond to requests.

But most of inference is not you and me. This is a mistake we make all the time: people think that we are the story. With respect to inference, most of the global inference from consumers—from you and me and others—could be satisfied by a single data center in northern Virginia. That’s how small a fraction of the total load we are with respect to inference worldwide.

So 60%, let’s say, is training, and we’re maybe 5 or 6% of the total workload of data centers. A huge chunk of the bit in the middle is software itself: coding, which turns out to be a hugely profligate use of tokens. So what you’re forced to ask as you project forward is, “is everyone on earth going to be writing software using Copilot or Cursor or any of these tools?” That seems unrealistic. So where is the balance going to come from with respect to increased usage of these models? And at the same time you have the incursion of these small models, which are going to eat up even more of it at the margin. So it’s very difficult to see how the current extrapolated model of the workloads at these data centers makes any sense.

Krugman: It’s amazing. One of my pastimes now is watching old tech ads from the 90s. The ads were a lot better, by the way; I don’t know why the 90s were so much more fun than this decade. But the old Qwest ads about all the wonders of fiber optics: it all came true, except not for Qwest.

Kedrosky: Right, which is sort of the perennial problem here: you turned out to be the pioneers with the arrows in your backs. But yeah, I think that’s a big part of it. The other thing that’s really unusual about this bubble—or this moment, I’ll say—and that confuses people a lot, is that historically the U.S. has been very good at speculative bubbles. This is one of our main core competencies. They tend to be about real estate, or about technology, or about loose credit, and sometimes they even have a government role, with some kind of perverse incentive that was created. This is the first bubble to have all four. We’ve got a real estate component, a loose credit component, a technology component, and a huge government component, because we’re told “we’re in an existential crisis with China, and we must win this at any cost.” All of those forces together mean that you have people looking at it through four different silos and lenses, rather than just saying, as in the global financial crisis, “it’s about real estate and credit,” or in telecom, “it’s about technology and, loosely, some credit.” This is the first one where you end up in the rational bubble theory of all of this, where everyone feels like they’re doing something rational. Yet in aggregate, all of these different people looking at the problem through their own lenses are profligate contributors to the problem, because it’s the first bubble that combines all of the forces that historically have made some of the largest bubbles in U.S. history.

Krugman: Oh, joy. (laughs) Sorry. Well, it does have that sort of feeling again. The bursting of the housing bubble played an important role in my life because it made it possible for me to afford my New York apartment. And of course I was paying a lot of attention, though with no financial stake, to the tech bubble of the 90s. But now we have the sum of all these things...

Kedrosky: The sum of all bubbles.

Krugman: Wow. Let me just ask—we’re running a little long, but one of the interesting posts you had recently was about economic geography and location, which is one of my things: San Francisco is having a revival. Do you want to talk a little bit about which places are affected?

Kedrosky: This is probably one of the narrowest moments for risk capital in the last 30 years, in the sense that either the money goes to one thing or it goes to nothing. Venture, secondary credit, growth capital: it’s all going into AI, which is having this impact on the centers most prone to having companies doing this kind of work.

San Francisco is a good example, where it’s gone from a relative commercial real estate glut as recently as four years ago back to historical norms, and probably by this time next year at the latest we’ll be well below the levels we saw even 10-15 years ago, entirely driven by this influx of capital around a single sector. So the narrowness is one thing, but the scale of the money flowing in is another, to the point that it’s actually distorting. It’s doing the same thing in New York, and to a lesser extent in other centers. But it’s narrow geographically and it’s narrow sectorally, which is really unusual.

I think the flip side of that, and the point I always make, is that whenever all of this capital is flowing to a single thing, it also means it’s not flowing somewhere else. That’s incredibly important to understand. I gave the Taiwan example earlier: if you’re in AI or semiconductor manufacturing in Taiwan, you’re awash in capital; if you’re a manufacturer of literally anything else, you cannot get a loan. The same thing is true in the U.S., where if you’re an early stage company, or a mid-stage company looking for growth capital for almost anything without an AI component, you’re out of luck, my friend.

This notion of starving not just manufacturers but growth companies of capital, because of the narrowness of the spending, almost always has historical consequences. We saw it in the 90s, with the rise of China roughly coincident with the telecom bubble, when U.S. manufacturers were increasingly starved of capital because it was all flowing sectorally to telecom. We’re seeing the same thing now. It will play out over the next few years, but it’s dramatic right now.

Krugman: It sounds bizarre unless you know the history, but in international economics it’s “the Dutch disease.” There was a famous period when, after the Netherlands discovered natural gas, it really killed their manufacturing sector.

Kedrosky: Exactly. I make the same analogy; I think that’s exactly what’s going on. And it plays out in insidious ways. Say you believe the tariff policy was going to be effective at onshoring manufacturing: imagine you’re a capital-intensive manufacturer trying to onshore and you’re not in the semiconductor sector. How difficult is it to raise capital right now? It’s virtually impossible, and much more difficult than it would be absent the AI spending bubble, because of this tsunami of cash flowing into a single sector. So even if you believe that policy was likely to be effective, the struggle to get any capital is dramatic because of this phenomenon. And yet, if you don’t talk about it and understand it, you’ll think, “oh, well, what we need is probably higher tariffs; we need to encourage people even more to come, otherwise we won’t have enough manufacturers manufacturing domestically.”

Krugman: It’s just this feeling that—monstrous sums of money, monstrous egos, where does all of this end up?

I have to say, there’s one humble sector that I happen to know is prospering amid all of this, which is the two remaining companies that produce blue books for college exams.

Kedrosky: Oh, yeah.

Krugman: They’re having a revival because we’re going back to handwritten exams.

Kedrosky: You know what? That doesn’t surprise me. I should have thought of that, but I bet that’s exactly right.

Krugman: The problem is the young people don’t know how to write anymore; they literally don’t know cursive. Anyway, how this thing works is so important, and people like me are thoroughly unequipped, so thank you for helping me a little bit on that front.

Kedrosky: That was great, it was great chatting.

The Unexpected Effectiveness of One-Shot Decompilation with Claude

Chris Lewis decompiles N64 games. He wrote about this previously in Using Coding Agents to Decompile Nintendo 64 Games, describing his efforts to decompile Snowboard Kids 2 (released in 1999) using a "matching" process:

The matching decompilation process involves analysing the MIPS assembly, inferring its behaviour, and writing C that, when compiled with the same toolchain and settings, reproduces the exact code: same registers, delay slots, and instruction order. [...]

A good match is more than just C code that compiles to the right bytes. It should look like something an N64-era developer would plausibly have written: simple, idiomatic C control flow and sensible data structures.

Chris was getting some useful results from coding agents earlier on, but this new post describes how switching to a new process using Claude Opus 4.5 and Claude Code has massively accelerated the project - as demonstrated by this chart on the decomp.dev page for his project:

Chart showing progress in matching code for Snowboard Kids 2. It slowly climbs from 20% to 25% from 3rd September to 17th November, then rises quickly to 45% by 2nd December

Here's the prompt he was using.

The big productivity boost was unlocked by switching to use Claude Code in non-interactive mode and having it tackle the less complicated functions (aka the lowest hanging fruit) first. Here's the relevant code from the driving Bash script:

simplest_func=$(python3 tools/score_functions.py asm/nonmatchings/ 2>&1)
# ...
output=$(claude -p "decompile the function $simplest_func" 2>&1 | tee -a tools/vacuum.log)

score_functions.py uses some heuristics to decide which of the remaining unmatched functions look to be the least complex.
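The post doesn't include the script itself, so purely as an illustration, a heuristic of that sort might look something like the sketch below (hypothetical, not Chris's actual code): score each unmatched assembly file by instruction count and branch density, and print the name of the cheapest-looking function for the Bash driver to hand to Claude.

# Hypothetical sketch of a score_functions.py-style heuristic (not Chris
# Lewis's actual script): rank unmatched MIPS functions by a crude
# complexity score and print the name of the simplest one.
import sys
from pathlib import Path

BRANCH_OPS = ("beq", "bne", "blez", "bgtz", "bltz", "bgez", "j ", "jal")

def score(asm_path: Path) -> int:
    lines = asm_path.read_text().splitlines()
    branches = sum(1 for line in lines if line.strip().startswith(BRANCH_OPS))
    # Fewer instructions and fewer branches -> presumably simpler to match.
    return len(lines) + 5 * branches

def main(asm_dir: str) -> None:
    candidates = sorted(Path(asm_dir).rglob("*.s"), key=score)
    if candidates:
        # Assumes one function per file, named after the function.
        print(candidates[0].stem)

if __name__ == "__main__":
    main(sys.argv[1])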

Via Hacker News

Tags: games, ai, prompt-engineering, generative-ai, llms, ai-assisted-programming, coding-agents, claude-code

Quoting Daniel Lemire

If you work slowly, you will be more likely to stick with your slightly obsolete work. You know that professor who spent seven years preparing lecture notes twenty years ago? He is not going to throw them away and start again, as that would be a new seven-year project. So he will keep teaching using aging lecture notes until he retires and someone finally updates the course.

Daniel Lemire, Why speed matters

Tags: productivity

The See-Through 747

December 8, 2025

In the first grade, my two favorite toys were both 747s.

The first was an inflatable replica, similar to those novelty balloons you buy at parades, with rubbery wings that drooped in such violation of the real thing that I’d tape them into proper position. To a six-year-old it seemed enormous, like my own personal Macy’s float. The second toy was a plastic model about twelve inches long. Like the balloon, it was decked out in the livery of Pan Am. One side of the fuselage was made of clear polystyrene, through which the entire interior, row by row, could be viewed. I can still picture exactly the blue and red pastels of the tiny chairs.

Also visible, in perfect miniature near the toy plane’s nose, was a blue spiral staircase. Early 747s were outfitted with a set of spiral stairs connecting the main and upper decks – a touch that gave the entranceway a special look and feel. Stepping onto a 747 was like stepping into the lobby of a fancy hotel, or into the grand vestibule of a cruise ship. In 1982, on my inaugural trip on a 747, I beamed at my first real-life glimpse of that winding column. Those stairs are in my blood — a genetic helix twisting upward to a kind of pilot Nirvana.

That’s a passage found in chapter two of my book.

It’s that second toy, the one with the transparent fuselage, that I bring to your attention. As it happens, I discovered a photograph, buried in an old family album, in which you can see it. While I’ve always remembered the toy, I had no idea that a picture of it existed.

That’s me holding the plane, of course, with my sister and my mother in front. It’s Christmas morning, 1972.

Look closely and you can see the rows of seats, sectioned into different colors. The first class seats look red. On the left wing it says “Pan Am.” You can’t see the spiral stairs, but they’re in there, in the middle of that blue part. It appears the entire fuselage was see-through, not just half of it, as I’d written.

One wonders what sorts of shitty toys are available these days for first-grade airplane buffs.

That plastic plane is long gone, sadly. I’m not saying you should save all of your childhood toys, but be careful: this one, surely, deserved to be set aside. Even so young, I already had aspirations of becoming a pilot. It would’ve made a meaningful keepsake.

The picture, at least, remains.

The post The See-Through 747 appeared first on AskThePilot.com.

Six New Tips for Better Coding With Agents

I’m hanging out in Sydney with my esteemed co-author and co-conspirator Gene Kim today; we flew in to conduct Vibe Coding workshops and talks this week to the Commonwealth Bank of Australia, some of their partner companies, and the general engineering public. Very cool of CBA to sponsor this training, and Gene and I are super excited for it.

We noticed that we’ve pushed into new territory since our Vibe Coding book was published. The book is all about how to work with coding agents, and all the advice and techniques in it are still incredibly relevant; I use it all daily. But there’s even more to learn, and we continue to uncover new tips and strategies.

I thought I’d share some of the new themes we’ve noticed, in no particular order, hot off the presses. Let’s see which ones resonate with you.

1. Software is now throwaway — expect < 1 year shelf life

This is probably the most obvious one. Anthropic has already begun embracing this idea internally, which is how I first heard about it, from friends there.

26 years ago Joel Spolsky wrote one of the most useful pieces of software advice anyone has ever given, in Things You Should Never Do, Part 1, where he says, in a nutshell, DON’T REWRITE YOUR SOFTWARE!

In this classic essay, well worth a read, Joel gives powerful examples of companies and projects that decided their code base was too old and crufty, so they chose to rewrite it all from scratch. And the results were, predictably, awful. Joel says:

> The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive. Au contraire, baby! Is software supposed to be like an old Dodge Dart, that rusts just sitting in the garage? Is software like a teddy bear that’s kind of gross if it’s not made out of all new material?

And he was right! Outstanding essay. But unfortunately, not so timeless as we thought. It proved to have a shelf life of only about a quarter century. We are entering a surprising new phase of software development, in which rewriting things is often easier (and smarter) than trying to fix them.

I first noticed this with unit tests. You’ll use agents to make a giant refactoring to your system, and then all the tests will be broken. The agents inevitably struggle to fix them. So one day I said, screw it, delete all the tests and make me new ones. And it got through that exercise SO much faster. The new tests were great, had great coverage, and importantly, the LLM was able to generate them very quickly, compared to trying to reason through the old system behavior vs the new expected behavior. With new tests, it can focus just on the new behavior, which is a much cleaner cognitive problem.

This generalizes beyond tests: generating almost any code is easier (for AIs) than rewriting it. Hence, recreating software stacks from scratch is starting to become the new normal. We’re seeing it more and more, e.g. companies with mainframes who are concluding that a small team of engineers and biz people could recreate the entire experience with the same API, but with modern architecture and maintainable code, in just a few months. And they’re doing it.

The upshot is that for all the code I write, I now expect to throw it away in about a year, to be replaced by something better. Maybe mine, maybe someone else’s. Doesn’t matter. It’s all just stepping-stones to higher velocity.

This spells trouble for third-party SaaS vendors. Companies are also discovering that they can build bespoke business-automation software so easily that they don’t need to re-up their vendor contracts. SaaS vendors are going to have to work harder to provide value that’s too expensive to recreate. It can be done — Graphite is one example; they now have years of learnings into the nuances of AI code review. I don’t think you would necessarily want to retrace those years of steps yourself, on your company dime. Sourcegraph is another example; they have a code search engine with 10 years of enterprise bug fixes, and even with modern agents, you almost certainly wouldn’t want to try to clone that yourself.

But many SaaS vendors who’ve found niches building business automation software are going to be in real trouble. Because businesses are automating their own processes now, with vibe coding!

2. Agent UX matters at least as much as Human UX

One of the interesting themes I heard at the AI Engineering Conference in NYC a couple weeks ago was that although many people are building tools for AIs, they are finding it very hard to get the AIs to use those tools.

It’s tricky to get AI to use a tool it’s not trained on. They have certain ways of thinking and working, and they tend to reach for familiar tools (e.g. grep instead of a fancier search). I’ve talked with many people who wanted to build a tool for their agents to use, and they’d work with the frontier models to design the perfect agent-friendly interface — one the models swore up and down would get them to use it.

And then haha, no, the agents don’t use it. You prompt and prompt, they ignore and ignore. So what do you do? How do you get them to use your tools?

My Beads issue tracker for agents has been an interesting case study here. It’s only maybe 2 months old and it already has 250+ forks and 5000+ stars. It’s a successful project. But I’ve never looked at the code. It’s fully vibe-coded by agents. Despite that, Beads managed to capture lightning in a bottle — it’s a tool that AIs use, and not only that, they like it. Agents use Beads eagerly and enthusiastically with very little prompting. They make smart decisions, such as filing Beads when they are low on context, instead of doing the work directly. Things you would normally have to prompt them to do, they just do!

I’m no magician. I’ve built plenty of tools that the AIs refused to use; I’ll talk about one of them below. And I’ve built plenty of prompts that the AIs choose to ignore or overlook. It’s not like capturing lightning in a bottle is super reproducible at this point. But I can share some of the things I did with Beads that I think helped.

First, I asked Claude to help me design a new lightweight issue tracker backed by git, with a few other constraints, and then Claude came up with about half of the rest of the design: the SQLite database caching layer, the discovered_by graph link that the models feel is very important for gathering context on issues, the hash IDs, deletion tombstoning, etc.
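For a sense of the shape of those pieces, here's a hypothetical sketch of what a Beads-style record with hash IDs, a discovered_by link, and tombstoned deletions might look like (illustrative only, not the actual schema):

# Hypothetical sketch of a Beads-style issue record (not the real schema):
# content-hash IDs, a discovered_by link so agents can trace how an issue
# was found, and a tombstone flag instead of hard deletion.
import hashlib
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Bead:
    title: str
    description: str
    discovered_by: str | None = None   # ID of the issue/work that surfaced this one
    tombstoned: bool = False           # "deleted" beads are flagged, never removed
    id: str = field(init=False)

    def __post_init__(self):
        digest = hashlib.sha256(f"{self.title}\n{self.description}".encode()).hexdigest()
        self.id = f"bd-{digest[:8]}"

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

parent = Bead("Refactor logging", "Two redundant logging systems need consolidating.")
child = Bead("Remove legacy logger", "Found while auditing logging.", discovered_by=parent.id)
print(child.to_json())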

During the Beads design phase, I mostly argued with Claude, telling it I didn’t like certain choices it was making from a Human UX perspective. Eventually we negotiated our way to something we both liked, something that had good agent UX and also good human UX.

For the agent side, once we had the initial structure in place (the issue tracker itself), the primary UX issue became tooling ergonomics. My agents were trying to use Beads, but they kept giving it the wrong arguments. For example, they’d use --body instead of --description when filing an issue, which would fail. Why? Because they were trained on GH Issues, and GHI’s CLI tool uses --body for filing issues. Reaching for the familiar again!

So in that particular case, I told it to add --body as an alias for --description, which it did, and that bit of Agent UX friction went away forever. I’ve done this many, many times in Beads. As the agent works, I watch how it’s using the tool, and whenever it encounters an error, I ask it: how did you want it to work there? How can we change it to make the behavior more easily guessable?

Over the past few months we’ve made dozens of tweaks, adding flags and commands, and the agents now rarely have trouble using Beads fluently.
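The underlying pattern is cheap to apply in any CLI: accept the flag the agents keep reaching for as an alias for the one you designed. A hypothetical argparse sketch of the idea (not Beads' actual code, which isn't shown here):

# Hypothetical sketch of the "alias the flag the agent expects" pattern
# (illustrative only; this is not Beads' real CLI).
import argparse

parser = argparse.ArgumentParser(prog="bd", description="file an issue")
sub = parser.add_subparsers(dest="command", required=True)

create = sub.add_parser("create")
create.add_argument("title")
# Agents trained on GitHub's CLI keep reaching for --body, so accept both
# spellings and store them in the same destination.
create.add_argument("--description", "--body", dest="description", default="")

args = parser.parse_args(["create", "Fix flaky test", "--body", "Fails under load"])
print(args.description)   # prints "Fails under load"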

I can’t claim to have cracked the agent-UX problem, not by a long shot. I think the role of “Agent UX Designer” feels like it’s ready to emerge as a first-class career for humans. As just one example, I’m working on my third agent orchestrator this year. And even though the architecture is sound, I haven’t found the magic UX formula yet, to where any agent automatically just figures out what to do, and does the right thing most of the time. I’ll get there! In fact, as soon as I solve this problem with my orchestrator, I’m launching it. I’m aiming for Christmas Day. We’ll see.

Once you do find that secret incantation that makes your tool truly agent-friendly, you should get it out there as fast as you can, because it will grow like crazy.

And if you try to launch a tool that agents don’t choose to use of their own volition, with minimal prompting, then you need to go back to the drawing board and fix the agent UX.

The best way to do this is to leverage the Optionality from FAAFO, from our Vibe Coding book. Generate a whole bunch of interfaces, and then experiment with each one, to see which one the agents like best. It’s very much a trial-and-error search problem at this point, until either the agents get better at using new tools, or we get better at learning what they like.

3. Spend 40% of your time on code health, or else you’ll wind up spending >60%.

Gene was curious how I could be so confident in Beads if I’ve never looked at the code. My answer to him was one of the easiest I’ve ever given. If you are vibe coding, i.e., having the AI write all your code for you, then you need to spend at least 30–40% of your time, queries, and money on code health. That’s how you make sure your code is OK. You have the AI conduct regular code inspections. Tons of them.

It’s pretty easy in principle: Every now and then, you pause your regular work, and tell your agents: go find code smells of all shapes and sizes. Have them file Beads for anything that needs followup. Tell the agent to look for large files that need refactoring, areas with low test coverage, duplicated/redundant systems, legacy code, dead code, poorly-documented code, etc. etc. etc. I don’t have a good prompt for this step yet; would appreciate it if anyone has crafted one. But you can also just ask your agent to help craft it.

You’ll also want to ask your agent to do cleanups during the code-health passes. Have it look for files that are in the wrong place, or have misleading names, or need better homes. Have it clean up debug cruft, ancient plans, build artifacts, old docs, anything you don’t need. This is all part of the regular hygiene and maintenance of a vibe-coded code base.
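I don't have a prompt I'd endorse for this yet, but mechanically the loop can be as simple as driving Claude Code's non-interactive mode (claude -p) over a handful of code-health prompts. A sketch, with placeholder prompts rather than a vetted set, and assuming Beads is installed so the agent can file issues:

# Minimal driver for a recurring code-health pass using Claude Code's
# non-interactive mode. The prompts are placeholders, not a vetted set;
# adapt them to your codebase.
import subprocess

HEALTH_PROMPTS = [
    "Scan the repo for files over 1000 lines and file a bead for each one that needs refactoring.",
    "Look for duplicated or redundant subsystems (logging, config, telemetry) and file beads.",
    "Find dead code, stale docs, and leftover debug cruft; file beads for the cleanup work.",
]

for prompt in HEALTH_PROMPTS:
    subprocess.run(["claude", "-p", prompt], check=True)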

It helps to be creative, and also to ask the agent to be creative, thinking outside the box. After the first round or two of regular code reviews, start having it look for over-engineered subsystems (YAGNI), opportunities where your code could have used a third-party library, and other broad, system-level concerns.

Basically the agent will always find problems, often shocking ones, e.g. where you discover you have two or even three completely redundant systems (databases, logging, telemetry, whatever) that need consolidating. And since agents tend to accrete code without automatically refactoring, your vibe-coded source files will tend to grow to thousands of lines, which makes them harder for agents (and humans) to reason about. So you should tell it regularly to break things up, and then run dedicated sessions to implement the refactoring!

During each code review, have your agent file Beads for everything it discovers. Then have it review the epics and issues (up to 5 times; see below) to ensure the implementation will go smoothly.

Then swarm to fix it all! Do all this at least weekly. For me, I’d estimate I spend about 25–30% of my time and money on code health, and I don’t think it’s enough. As long as I continue to find serious problems with reviews, I need to do more reviews. My current guidance is that you should expect nearly half of your work to be code-health related.

What happens if you don’t follow this rule? You gradually (but rapidly) accumulate invisible technical debt that weighs down your agents in various ways — too much code, conflicting code, obsolete docs, etc. Your agents will begin to work more slowly and you’ll see more bugs in their outputs.

Stay on top of code health, and you’ll keep your vibe-coded code base sprightly.

4. You might be too early: Some projects are ahead of their time.

AI cognition takes a hit every time it crosses a boundary in the code. Every RPC, IPC, FFI call, database call, client/server call, every eval, every single time the AI has to reason cognitively across a boundary or threshold… it gets a little dumber.

I noticed this when working on Efrit, my native-elisp coding agent, which lives inside Emacs. Over the summer I was trying to get Claude and other models to build it for me, and they struggled. Hard. Efrit lives in Emacs, which is a separate process from your coding agent, so already there’s one boundary.

For that particular IPC boundary, there are multiple channels for the agent to talk to Efrit, all of them quite unsatisfying. There’s emacs --batch, which has limitations, and the emacs-server client/server mode, which is also limited for the kind of heavy reflective introspection the agent needs to do on this kind of code base.

So what did I do? I spent a week working with Claude to build a better agent-Emacs bridge. Claude built me the “Agent-Efrit bridge”, a simple and elegant system which uses a polling file channel as a message queue to and from Efrit. It’s beautiful. A tool made for agents, by agents! When it does work, it’s amazing.

Naturally, Claude never uses our fuckin’ bridge we built together. I’ve given up even asking. This is an example of a tool I tried to build, but the AI just refuses to use it.

With Efrit, after that initial bridge there are still other RPCs — the API call to the frontier model, the parsing of its response, and the eval of the elisp code to execute the response. All of these were piling up to make the models dumber. And ultimately, the August 2025 crop of frontier models couldn’t solve this problem. Or at any rate, the returns diminished so much that I gave up.

So I paused the project! There was plenty of other work to do. A few months went by, a few model releases happened (notably Sonnet 4 and Sonnet 4.5). Efrit sat idle. And then about 2 weeks ago, someone asked to be an Efrit maintainer, since people wanted to use it. But wait, Efrit was still crap! So I thought, what the heck, let’s have Claude 4.5 peek at it.

Claude 4.5 took one look and said, “great idea, awful execution, but we can modernize this.” It produced an incredibly detailed plan to take Efrit to the next level, and I’ve spent the past 2 weeks letting it grind through this plan (serially, no swarming, since swarming on elisp sounds like a bad idea today.) And now Efrit is getting to be approximately on par with modern coding agents.

All I had to do, in order to crack this nut, was wait 3 months (i.e., 2 model releases). Claude is finding Efrit quite easy now, compared to this summer. I cite this as one of many examples of how the models and tools are indeed getting exponentially better. I have a set of projects they can’t do today. Efrit is (well, was) one of them. If you keep a menagerie of “too hard for AI” projects, then you will be able to watch and measure their cognitive progress increasing month by month.

I often bake this philosophy into my project planning. I will deliberately build something that’s just slightly too hard for the agents, knowing that in the next model release, they’re almost certainly going to find it straightforward. I plan for the models to get smarter, by building tools that don’t work that well with today’s models. This is how you get that little bit of extra shelf life out of your software — plan for it to be useful when smarter agents arrive.

If you read this section and concluded, “well, obviously AI isn’t ready to handle my project work; I tried it, it was confused, so I’m just going to wait for smarter models,” then I wouldn’t blame you. But be careful! You might not need to wait as long as you think. And if you’re just using this as an excuse to procrastinate until the models are smarter, then you’re missing out on honing a massive set of skills you need in order to work with models effectively — even as they do get smarter.

In the next section, we’ll talk about a way you can get even more cognition out of today’s models, without needing to wait. You’ll have them solve even harder problems than you thought they were capable of, all because you didn’t give them enough of a chance before. Let’s take a look!

5. The Rule of Five: When in doubt, have the agent review its own work 5 times.

Jeffrey Emanuel discovered this powerful and unintuitive rule. He found that he gets the best designs, the best plans, and the best implementations, all by forcing agents to review their proposals (and then their work) 4–5 times, at which point it “converges”. It typically takes 4 to 5 iterations before the agent declares that it’s as good as it can get.

Jeffrey described a long, complex series of prompts for this process; I’m sure we’d all be appreciative if he published them. But the way he described it to me, you first have the agent do a task, then you do a series of focused reviews. Each review should be slightly broader and more outlandish than the previous one, or you can do it in the opposite order. But you need a mixture of in-the-small and in-the-large reviews. You’re having it look for bad code (or designs), but also bad architecture.

To be slightly more concrete, Jeffrey first asks it to do a couple of regular code reviews, which find all the usual stuff. And you’ll notice right away that even on the second review it will often find things it missed in the first review. But I think most of us stop there, if we even ask at all. It definitely feels weird to ask for the 3rd code review, which is the agent’s 4th pass over the code, counting the generation step. But the 3rd review, especially during the Design phase, is where you start asking it existential questions about whether you’re doing the Right Thing throughout the project.

I tried it, and sure enough, it does take 4–5 iterations, just as Jeffrey described, before the agent will say something like, “I think this is about as good as we can make it.” At that point it has converged. And that, folks, is the first point at which you can begin to moderately trust the output the agent has produced. If you always take the first thing it generates, with no review at all, you’re bound to be disappointed.

I asked Claude what it thought of this Rule of Five, and Claude was enthusiastically supportive. Claude claims that this process matches their own cognition model, which is breadth-first: they solve each problem first in very broad strokes. And then they almost always need more passes for proofreading, refining, and polishing — much like humans do.

At first you’re going to want to do this purely with prompting. Maybe Jeffrey Emanuel will share some of his fancy review prompts. But over time, you’re going to want to automate it, since you’re applying the Rule of Five at every single step in the process, which at a bare minimum, for any nontrivial hunk of work, would be:

- 5 passes over the design

- 5 passes over the Beads implementation plan (this results in far better issues and dependencies, and better execution)

- 5 passes over the implementation (code + 4 reviews)

- 5 passes over the tests

- 5 passes for code health (might as well build it into your dev middle loop)
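Here’s a minimal sketch of what automating that loop might look like. The run_agent callable is a stand-in for whatever agent or CLI you actually drive, and the prompts and the CONVERGED sentinel are illustrative, not Jeffrey’s actual prompts:

```python
# Minimal sketch of automating the Rule of Five: generate once, then run
# up to four increasingly broad review passes, stopping early if the agent
# reports convergence. `run_agent` is a placeholder for whatever agent you
# actually call; the prompts are illustrative.
from typing import Callable

REVIEW_PROMPTS = [
    "Do a careful code review and return an improved version.",
    "Review it again with fresh eyes; fix anything the first pass missed.",
    "Step back: review the design and architecture, not just the code.",
    "Existential pass: is this the Right Thing for the project as a whole?",
]
CONVERGED = "CONVERGED"

def rule_of_five(task: str, run_agent: Callable[[str], str]) -> str:
    work = run_agent(task)  # pass 1: generation
    for pass_num, prompt in enumerate(REVIEW_PROMPTS, start=2):
        response = run_agent(
            f"{prompt} If you cannot meaningfully improve it, "
            f"reply with the single word {CONVERGED}.\n\nCurrent work:\n{work}"
        )
        if CONVERGED in response:
            print(f"Agent reports convergence on pass {pass_num}")
            break
        work = response
    return work
```

In practice you’d wire something like this into your middle loop, so every design, plan, and implementation gets its passes without you having to remember to ask.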

Yes, this is slower. Yes, this is more expensive (though, probably less so than all the rework you’ll be stuck with if you skip these steps.) Yes, it’s awkward to tell an AI to keep reviewing its work that it just reviewed.

But you should make sure you do it. Rule of thumb: demand at least 2–3 passes on small tasks, and 4–5 passes on big tasks. If you’re not super familiar with the language, the stack, or the domain, then you should err on the side of more reviews.

Do this, and it’ll feel like you’re using a model from the future. They will do far better work than they’ve been doing for you. Try it!

6. Swarm where you can, but beware the Merge Wall

I’ve been focused on agent swarming the past few weeks, after several months chasing quality and reliability without much success. I’ve got a new (third!) orchestrator in the works, and wow. Swarming. Next year is going to be extraordinary.

I’ll share a quick example of how powerful swarming can be when it’s working right. I had a disaster the other day where 30 Beads issues went missing. It was three or four major epics, each with a bunch of child issues. I had put a ton of work into their design, following the Rule of Five, and they were all ready to implement.

But I couldn’t find them.

I wasn’t panicked, since it’s hard to truly lose issues in Beads (we do have some bugs here and there but they are getting closed fast). Beads is all backed by Git, so it’s almost always possible (for the AI) to reconstruct what really happened from the git history, and fix it.

But I was concerned, because, where the hell did my 30 issues go? They weren’t deleted. After a couple minutes of increasingly alarmed searching, I finally figured out where they all went: My swarm had implemented them all! WTF?

There was a minor miscommunication, I guess; I asked my orchestrator to start working on the bug backlog, and it assigned all 30 issues to the eight workers I had already spun up. Some of these were quite complex issues. But while I was busy with other stuff, and not watching, the worker agents implemented and closed all 30 issues.

I was equal parts delighted and flabbergasted when I realized what had happened. I went and checked, and sure enough, they’d done all the work. It was pretty decent work and needed very little touchup — likely because I had used the Rule of Five throughout, and the Beads were in very good shape when it came time to implement.

After my 30 issues were magically implemented, I was sold. I would never not swarm again!

And then, of course, I was utterly unable to reproduce that perfect swarm. Subsequent attempts all ran into merge issues and required a ton of hand-holding and infrastructure tweaks. It will be a couple more weeks before I can swarm reliably. But still, I am completely sold.

I’ll know that my swarm orchestrator is ready to launch when I can swarm the web UI, building it from scratch. My system doesn’t have a non-CLI UI yet; well actually it does, in Emacs, but I doubt you want that one, however cool it might be. (It has Efrit inside it, so it’s pretty damn cool.) But I’m going to build a UI with the swarm, and that’s when I’ll know it’s ready for prime time.

The thing you have to be prepared for when swarming is the Merge Queue problem. It’s like smacking into a wall. To illustrate, let’s say you have a simple swarm of 3 workers. One worker is redoing the logging system, another is changing the database API, and another is changing the client-server protocol. It’s likely that all three of these subsystems have some overlap, and changing one requires changing another. And their work will collide when they try to merge it all back together.

When you swarm a task, a key problem is that the workers all start from the same baseline (e.g. the same starting git commit), and they all do their work off that baseline. But each worker has the ability to change the baseline dramatically. Let’s say workers A, B, and C all complete and merge in their work. The system may now be completely different from the original baseline. When the fourth agent D finishes its work, a rebase may no longer be feasible. The system may have changed so much that D’s work needs to be completely redesigned and reimplemented on the new system baseline, which includes A, B, and C’s changes.

This is why you need the Merge Queue. You need to serialize the rebases, and give each worker enough context, and context-window space, to fully merge their work into the new baseline.
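As a rough sketch of what “serialize the rebases” means in practice, here is a toy merge queue. The branch names and repo layout are hypothetical, and the hard part — resolving whatever conflicts each rebase surfaces — still needs an agent or a human with enough context:

```python
# Minimal sketch of a serialized merge queue: each worker branch is rebased
# onto the current baseline one at a time, so every later worker sees all
# earlier merges. Conflicts will surface as failed rebase commands.
import subprocess

def git(*args: str) -> None:
    subprocess.run(["git", *args], check=True)

def merge_queue(baseline: str, worker_branches: list[str]) -> None:
    for branch in worker_branches:
        # Rebase this worker onto whatever the baseline has become.
        git("checkout", branch)
        git("rebase", baseline)  # conflicts here need an agent (or a human)
        # Fold the rebased work into the baseline before the next worker.
        git("checkout", baseline)
        git("merge", "--ff-only", branch)

# merge_queue("main", ["worker-a", "worker-b", "worker-c"])
```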

Some work is inherently parallel, and some work is inherently serial — the latter because of irreducible complexity and task overlap. If you think you’re going to be stuck with an awful merge, then you should probably defer some tasks until the earlier ones complete. But it’s not always possible to tell in advance, so sometimes you’ll have tough merges.

I’ve noticed that projects tend to go through a cycle where they are swarmable for a while, but then you’ll suddenly need to pause and serialize all work for a time. This can happen, for instance, if you’re changing the directory layout of your project — e.g., to make it more accessible to AIs who are trying to guess their way around. You might need to experiment with a bunch of different layouts. But each new project source layout changes all your package imports, scripts and other inter-module references, which would totally break any existing workers. So you have to pause all other work while you do the big package restructuring.

You can think of swarming as a MapReduce-type operation. In the mapper phase, you can spin up virtually unlimited workers. But in the reducer phase you need to merge all their work back together. Unfortunately, as Gene observed, this isn’t really an MR, because most MRs have a very simple reduce phase — the workstreams have a monoidal shape, and you can merge their work by doing things like summing counts or whatever.

But with agent swarming, the reduce phase is a nightmare; it’s the exact opposite, in fact: it can be arbitrarily complicated to merge the work of two agents. In the limit, what should we do if Worker A deleted an entire subsystem, and Worker B comes along with a bunch of changes to that (now-deleted) subsystem?
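A toy contrast makes the point. When the per-worker results form a monoid, like word counts, the reduce step is trivial and order-independent; there is no analogous operation for merging two agents’ divergent code changes. (This example is mine, just to illustrate the shape of an easy reduce.)

```python
# A MapReduce-style reduce is easy when worker results form a monoid:
# Counter addition is associative and has an identity (Counter()), so the
# order of combination doesn't matter. Merging agents' code has no such op.
from collections import Counter
from functools import reduce

worker_outputs = [
    Counter({"foo": 3, "bar": 1}),
    Counter({"foo": 2, "baz": 5}),
    Counter({"bar": 4}),
]

total = reduce(lambda a, b: a + b, worker_outputs, Counter())
print(total)  # foo, bar, and baz each total 5
```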

So the swarm merge step is often messy and not entirely automatable. Some cases require either human judgment, or else really good context for AIs to make the call.

I don’t know if we’re going to get a tool that hides the mess. I’ve been talking to investors, many of whom are keenly interested in the next generation of developer tools, and there is a prevailing belief that all we need are proper guardrails, and then these kinds of agentic coding and swarming tools will be accessible to “average” developers, which they certainly are NOT today.

And why is that? Well, as Joel Spolsky observed in Things You Should Never Do Part 1, reading code is by far the hardest part of coding. This is a well-known finding in the Dev Productivity world; they’ve done study after study. And with vibe coding, reading code is… pretty much all you do all day. It’s hard for most developers. The average dev probably thinks 5 paragraphs is an essay. Coding agents make you read enormous waterfalls of both text and code. This is absolutely draining and beyond the capabilities of most devs today.

However, I don’t see eye-to-eye with the investors on this one. I personally do NOT think we will get useful guardrails. If you try to build something with heavy guardrails, you’re going to wind up with Bolt or Lovable, and nobody will use it. Sorry! That’s just not the right model. Instead, I think we’re going to get orchestration tools that are every bit as powerful, messy, quirky, and frustrating as Claude Code and the current batch of terminal-based coding agents.

And the people who figure out how to use these tools, despite the lack of guardrails, will become super-engineers. I’ve been kicking around the idea of a new blog post, the Rise of the Superengineer. Dunno if it’s worth a whole post, but what’s going to happen in 2026 is that a new class of 100x (or maybe 1000x) engineer will emerge — people who have figured out how to wield coding agent orchestrators effectively, deal with the merge problem, planning, swarming, code health, etc. — all the stuff I’ve talked about here, and more. And they will be able to run 100 coding agents at once, and get meaningful work done with them.

This will make them as productive as a team of 50+ regular engineers.

I think my own orchestrator will usefully peak at around 50–80 agents. Maybe I can get it up to 100. It’s not aimed at massive swarms; it’s aimed at leveling you up from manually managing a dozen ad-hoc agents in ad-hoc repo clones all around your filesystem, to managing swarms of well-behaved agents working 5–10 at a time on focused tasks. It will still require your full attention, your full engineering background, and every bit of design taste you can muster, to use these tools. In some ways it’s even harder and more dangerous than using a single coding agent, even with tooling support.

But some people are doing it already! By hand, to be sure, or by building their own homegrown orchestrators. Mark my words, though: next year, you’re going to have engineers who can build (and likely maintain) an entire company’s software on their own. You’ll have solo unicorns, sure, but also a marketplace of solo uber-contractors who can build, for companies, the things they would have had to pay someone like Accenture tens of millions of dollars for.

There will also be small teams of people who figure out how to maximize their velocity when multiple humans work with agent teams. And these small teams are going to change the world. Gene and I are actively wondering whether company size is going to decrease on average, because you will be able to get so much more done with so many fewer people.

But no matter what, the tools are going to be messy from now on. Working with AIs is a little messy and nondeterministic. And I think that’s here to stay.

Wrap-Up

Gene and I went through at least a baker’s dozen ideas this morning, and I’ve chosen the half that seemed the most baked. A few others are becoming clearer, but are still so vague that we don’t really have the right vocabulary to talk about them yet.

Change is coming. Agents are way more powerful than they were 3 months ago. I’ve talked with plenty of (good) engineers lately who still believe that agents have plateaued. Ignoring the 30 years of evidence showing that AI is following Moore’s Law, they feel it’s just going to stop getting better today, out of nowhere. And in their opinion, agents are not good enough yet.

But if you’ve been following and using agents since they landed in February, you’ll know just how much more powerful and capable they have become, even since summertime. It’s not plateauing; heck, it’s not even slowing down. And you can prove it using your backlog of projects that are too hard for AI. Every few months, another one will fall, until there are no more left.

If you’re one of the many engineers who still hasn’t made the switch to AI-first coding, now is a good time to try it again. If you haven’t used an agent in a few months, you’re going to be shocked at how smart and capable they have become. They are full concierges now, able to help you with any computing-related problem. People tell me they even use Beads for their personal TODO lists!

My orchestrator is right around the corner. I’m excited for it. It’s going to make a splash. Hopefully this Christmas!

But you’ll only be able to use it if you already use coding agents for literally everything. If you want to be a 100x super-engineer next year, you need to start learning vibe coding basics today, and make it work for you. Keep in mind all the advice I’ve given here, and read our Vibe Coding book, which just came out on Oct 21st. It’s fresh and relevant, and will help you get into the right mindset with the right techniques and practices.

More to come, soon.

This stuff is so fun!

The EU production function

The central puzzle of the EU is its extraordinary productivity. Grand coalitions, like the government recently formed in Germany, typically produce paralysis. The EU’s governing coalition is even grander, spanning the center-right EPP, the Socialists, the Liberals, and often the Greens, yet between 2019 and 2024, the EU passed around 13,000 acts, about seven per day. The U.S. Congress, over the same period, produced roughly 3,500 pieces of legislation and 2,000 resolutions.

Not only is the coalition broad, but it encompasses huge national and regional diversity. In Brussels, the Parliament has 705 members from roughly 200 national parties. The Council represents 27 sovereign governments with conflicting interests. A law faces a double hurdle, where a qualified majority of member states and of members of parliament must support it. The system should produce gridlock, more still than the paralysis commonly associated with the American federal government. Yet it works fast and produces a lot, both good and bad. The reason lies in the incentives: every actor in the system is rewarded for producing legislation, and not for exercising their vetoes…

Formally, the EU is a multi-actor system with many veto points (Commission, Parliament, Council, national governments, etc.), which should require broad agreement and hence slow decision making. In practice, consensus is manufactured in advance rather than reached through deliberation.

By the time any proposal comes up for an official vote, most alternatives have been eliminated behind closed doors. A small team of rapporteurs agrees among themselves; the committee endorses their bargain; the plenary, in turn, ratifies the committee deal; and the Council Presidency, pressed for time, accepts the compromise (with both Council and Parliament influenced along the way by the Commission’s mediation and drafting). Each actor can thus claim a victory and no one’s incentive is to apply the brakes.

That is from an excellent piece by Luis Garicano.  What would Buchanan and Tullock say?

The post The EU production function appeared first on Marginal REVOLUTION.

       


A Ukrainian mathematician requests mathematical assistance

an expert in general relativity or a mathematical physicist familiar with PPN methods, weak-field gravitational tests, and variational principles…

For the two technical appendices (ψ-preconditioning and χ-flattening), I would need:
• a quantum algorithms researcher (QSP/QSVT/QLSA/QAE) to assess the correctness of the operator transformations and the potential complexity gains;
• a quantum control or pulse-level compilation engineer (pulse-level, virtual-Z) to evaluate whether the phase-drift compensation algorithm can be implemented realistically on actual hardware.

Please email me if you think you might be of assistance.

The post A Ukrainian mathematician requests mathematical assistance appeared first on Marginal REVOLUTION.

       


Saturday 6 December 1662

Up and to the office, and there sat all the morning, Mr. Coventry and I alone, the rest being paying off of ships. Dined at home with my wife and Gosnell, my mind much pleased with her, and after dinner sat with them a good while, till my wife seemed to take notice of my being at home now more than at other times. I went to the office, and there I sat till late, doing of business, and at 9 o’clock walked to Mr. Rawlinson’s, thinking to meet my uncle Wight there, where he was, but a great deal of his wife’s kindred-women and I knew not whom (which Mr. Rawlinson did seem to me to take much notice of his being led by the nose by his wife), I went away to my office again, and doing my business there, I went home, and after a song by Gosnell we to bed.

Read the annotations

Links 12/6/25

Links for you. Science:

Nowcasting epidemic trends using hospital- and community-based virologic test data
CDC to end all monkey research
‘A little bit of joy’: can tiny rafts save endangered sparrows from rising seas?
Unique videos show how trawling restrictions bring back life to the sea
Despite Trump chaos, NSF avoided feared dip in research financing
As Federal Government Retreats, A Private Fund to Save Sea Otters Steps in

Other:

Top MAGA Influencers Accidentally Unmasked as Foreign Trolls
“Embarrassing” and “Horrifying”: CDC Workers Describe the New Vaccines and Autism Page
These teens are trying to save go-go. Can the music save them, too?
Mamdani’s NYC Can’t Afford NYPD Commissioner Tisch. With ICE on its way, how can we expect an ICE collaborator to protect New Yorkers? Here’s a compromise: make Tisch sanitation commissioner again
When the G.O.P. Medicaid Cuts Arrive, These Hospitals Will Be Hit Hardest. Republicans created a special $50 billion fund to help rural hospitals stay afloat, but the biggest impacts may be in cities. (more here)
How DC Resists Through Protest Art. Posters, go-go bands, murals, and more: Washington has a long history of fighting the power—and the government—through creative and colorful dissent.
Senators Want Extremism Researchers to Surrender Documents Linked to Right-Wing Grudges
How to Fix a Typewriter and Your Life
Patel Under Scrutiny for Use of SWAT Teams to Protect His Girlfriend
Man Detained by ICE Found Dead, Hanging With Hands and Feet Tied—Attorney
Senator whose wife was shot fears for safety after Trump sedition accusation
Why Americans are giving up on Sweetgreen (because most Americans, even in cities, don’t really like salads?)
Millennials Are Stuck in an Old, Lazy Story
Man Who Trump Pardoned for Fraud Is Headed Back to Prison … for Fraud
Senate Democrats are investigating the Kennedy Center for ‘cronyism, corruption’
How Do Americans View Childhood Vaccines, Vaccine Research and Policy?
Kennedy Katch and Kill
Many Top MAGA Trolls Aren’t Even in the U.S. Elon Musk’s new X feature has been very revealing.
Why car insurance costs have soared (and what drivers are doing about it)
The case of a felon who paid lobbyists nearly $1 million to seek a Trump pardon
New York Gets Serious About Food Prices
Israelis are moving abroad in record numbers due to fear and discontent
How the Elite Behave When No One Is Watching: Inside the Epstein Emails
12 Enchanting Holiday Light Displays and Attractions Around the DC Area
Jimmy Cliff, Jamaican reggae singer, actor and cultural icon, dies aged 81
Plunder New England: The Louvre heist grabbed attention, but smaller museums are the more likely targets
The Math Shows Jackson Pollock Painted Like a Child Would
Unleashed dogs in Boston are a source of frustration for some people, and citations have risen
Border Patrol’s Charlotte sting reaches into country clubs, upscale shops
In the Gilded Age 2.0, the rich aren’t just different — they’re intolerable

Real Estate Newsletter Articles this Week:

At the Calculated Risk Real Estate Newsletter this week:


Inflation Adjusted House Prices 3.0% Below 2022 Peak

Q3 Update: Delinquencies, Foreclosures and REO

Final Look at Housing Markets in October and a Look Ahead to November Sales

Asking Rents Soft Year-over-year

This is usually published 4 to 6 times a week and provides more in-depth analysis of the housing market.

Saturday assorted links

1. JFV on capital theory.

2. A critique of wheelchair services in England.

3. The political culture that is Iran??

4. The Right to Compute.

5. Helen Perry on whether we are repaganizing.

6. Steve Cropper, RIP.

7. Twelve Frank Gehry projects (NYT).

The post Saturday assorted links appeared first on Marginal REVOLUTION.

       


Friday Squid Blogging: Vampire Squid Genome

The vampire squid (Vampyroteuthis infernalis) has the largest cephalopod genome ever sequenced: more than 11 billion base pairs. That’s more than twice as large as the biggest squid genomes.

It’s technically not a squid: “The vampire squid is a fascinating twig tenaciously hanging onto the cephalopod family tree. It’s neither a squid nor an octopus (nor a vampire), but rather the last, lone remnant of an ancient lineage whose other members have long since vanished.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Schedule for Week of December 7, 2025

Special Note: There is still uncertainty on when some economic reports will be released. The employment report for November will NOT be released this week.

This will be a light week for economic data.  The FOMC meets this week and is expected to cut rates by 25bp.

----- Monday, December 8th -----

No major economic releases scheduled.

----- Tuesday, December 9th -----

6:00 AM: NFIB Small Business Optimism Index for November.

10:00 AM: Job Openings and Labor Turnover Survey for October from the BLS.

This graph shows job openings (black line), hires (purple), Layoff, Discharges and other (red column), and Quits (light blue column) from the JOLTS.

Job openings increased in August to 7.23 million from 7.21 million in July.

The number of job openings (black) were down 6% year-over-year. Quits were down 3% year-over-year.

----- Wednesday, December 10th -----

7:00 AM ET: The Mortgage Bankers Association (MBA) will release the results for the mortgage purchase applications index.

2:00 PM: FOMC Meeting Announcement. The Fed is expected to cut rates 25bp at this meeting.

2:00 PM: FOMC Forecasts. This will include the Federal Open Market Committee (FOMC) participants' projections of the appropriate target federal funds rate along with the quarterly economic projections.

2:30 PM: Fed Chair Jerome Powell holds a press briefing following the FOMC announcement.

----- Thursday, December 11th -----

8:30 AM: The initial weekly unemployment claims report will be released.  There were 191,000 initial claims last week.

8:30 AM: Trade Balance report for September from the Census Bureau.

This graph shows the U.S. trade deficit, with and without petroleum, through the most recent report. The blue line is the total deficit, the black line is the petroleum deficit, and the red line is the trade deficit ex-petroleum products.

The consensus is for the trade deficit to be $65.5 billion. The U.S. trade deficit was at $59.6 billion in August.

10:00 AM: the Q3 2025 Housing Vacancies and Homeownership from the Census Bureau.

10:00 AM: State Employment and Unemployment (Monthly) for September 2025

----- Friday, December 12th -----

No major economic releases scheduled.

The Yakverse Chronicles

Those of you who’ve been reading me a while know that I ran a short-run newsletter called the Art of Gig from 2018 to 2020. It was mainly nonfiction essays about the gig economy and indie consulting life, which I published as a two-volume set a couple of years ago. But I also wrote a series of absurdist consulting fiction stories that I didn’t get around to publishing then.

Well, thanks to ChatGPT I was able to put together a nice online Volume 3 (now titled The Yakverse Chronicles) that you can now read online for free at the newly vibe-recoded Art of Gig site. You can also buy the two non-fiction volumes via that link.

These stories haven’t been available online since I shuttered the newsletter in 2020, and I periodically get requests for access from those who remember enjoying them. Well, here you go. I may publish ebook/print versions later, but at least an online version is available now.

Fun fact: the Yak Collective, which I helped start in 2020, was named for the secret society and yak motif that features in these stories. It’s been chugging along for 5 years, and is one of my most rewarding activities these days!

A bit more backstory. Originally, I put these stories into a Roam graph with the intent of trying to bootstrap an extended universe project around them, with contributions from others set in the Yakverse. I’d intended to call this original set of stories “The Original Series.” I haven’t had the bandwidth to pull that off, but if there is still interest, I’m happy to revive that project. If you’re interested, join the Yak Collective Discord and indicate your interest in this thread. Naturally I’ll try to host the project there.

This project has also been a great exercise in a new, AI-assisted model of self-publishing. I’ve always wanted to do online books, but did not like the heavyweight solutions available. Thanks to AI and vibe-coding, I don’t need them. This simple static HTML version is all I want. At least for now. I barely had to touch the code. ChatGPT did 99.99% of what I wanted. There are a few rough edges left that I’ll get around to fixing later.

For those interested, here is the publishing protocol (in this case there was a detour through Roam and markdown, but you should be able to do this directly with posts exported as html from Substack).

Yakverse Static Book Production Protocol

1. Ingest & Normalize the Source

  • Accept a ZIP containing Markdown or HTML chapters, a toc.md, and an images/ folder.

  • Extract the directory.

  • Normalize filenames into consistent, slugified chapter identifiers.

  • Load the TOC as the spine that defines canonical chapter order.


2. Parse, Clean, and Canonicalize Content

Apply a deterministic cleaning pass:

  • Strip Roam-specific tokens, block markers, and unwanted formatting.

  • Convert Roam-style dialog bullets into real paragraphs + typographic dialog formatting.

  • Convert bold → italics consistently.

  • Replace Roam image stubs ([[IMGTOKEN]]) with correct relative paths (images/...).

  • Convert outbound links to footnotes and resolve any internal cross-links to the correct chapter pages.

  • Normalize headings, spacing, paragraph breaks, em-dashes, ellipses, and punctuation.


3. HTML Conversion Pipeline

For each chapter:

  • Convert cleaned Markdown → semantic HTML using a consistent template.

  • Insert chapter number, title, and a slugline (“In which…”) generated by reading the content.

  • Inject consistent navigation elements (prev / next / index).

  • Preserve structural invariants: spacing, footnote format, image float behavior.

Bug-fixing loop rule:
Regenerate only the affected chapter, not the whole book, ensuring global consistency remains intact.


4. Global Assembly

  • Generate a unified style.css with typography, layout, image styling, footnotes, and mobile responsiveness.

  • Generate an index.html representing Volume 3 and incorporating the Art of Gig site structure.

  • Integrate the two print volumes (covers, blurbs, Amazon links) into a coherent 3-volume homepage layout.

Additional revision cycle included:

  • Moving to a two-column TOC + cover layout for Volume 3.

  • Rewriting the homepage copy manually and reintegrating into the static generator.

  • Ensuring consistency of metadata, headers, and project framing across all volumes.


5. Cover Production Loop

Three-step process for the Volume 3 cover:

  1. Extract visual style from embedded illustrations (palette, hand-drawn line qualities, tone).

  2. Generate three conceptual cover designs (iconographic, narrative vignette, abstract glyph).

  3. Select one (yak silhouette + glowing briefcase) and regenerate it:

    • With exact preservation of the originally generated image

    • Add title, author, and subtype text

    • Re-run with iteration until the layout matched the original composition precisely

Final spec delivered as a high-resolution PNG suitable for both web and print.


6. Deployment Prep

  • Package all files (HTML, CSS, images, cover) into a static site.

  • Validate links, image paths, and footnotes.

  • Test on desktop and mobile.

  • Deliver as a deployable ZIP.


7. Optional Future Extensions (Protocol Hooks)

The protocol is built to support:

  • EPUB or MOBI export from the cleaned Markdown

  • Adding search or analytics to the static site

  • Re-running selective chapters through cleaning/formatting rules

  • Using a different source format (HTML instead of Markdown)

  • Rebuilding the homepage with new framing or additional volumes

Everything remains deterministic and supports incremental regeneration.
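For flavor, here is a minimal sketch of what the spine-driven chapter conversion (steps 1 and 3) could look like. It assumes the third-party markdown package and a toc.md that lists one chapter file per line; the filenames and the bare-bones HTML template are illustrative, not the actual generator ChatGPT built:

```python
# Minimal sketch of a spine-driven Markdown-to-HTML chapter build with
# slugified filenames and prev/next navigation. Assumes `pip install markdown`
# and a toc.md listing one chapter filename per line (hypothetical layout).
import re
from pathlib import Path

import markdown

def slugify(name: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

def build(src: Path, out: Path) -> None:
    out.mkdir(parents=True, exist_ok=True)
    spine = [ln.strip() for ln in (src / "toc.md").read_text().splitlines() if ln.strip()]
    slugs = [slugify(Path(ch).stem) for ch in spine]
    for i, chapter in enumerate(spine):
        body = markdown.markdown((src / chapter).read_text())
        prev_link = f"<a href='{slugs[i-1]}.html'>prev</a>" if i > 0 else ""
        next_link = f"<a href='{slugs[i+1]}.html'>next</a>" if i + 1 < len(slugs) else ""
        page = (
            "<html><body>"
            f"<nav>{prev_link} <a href='index.html'>index</a> {next_link}</nav>"
            f"<h1>Chapter {i + 1}</h1>{body}"
            "</body></html>"
        )
        (out / f"{slugs[i]}.html").write_text(page)

# build(Path("extracted_zip"), Path("site"))
```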

Reading List 12/06/2025

World’s largest ring forging, via Chinese Academy of Sciences.

Welcome to the reading list, a weekly roundup of news and links related to buildings, infrastructure and industrial technology. This week we look at 3D printed legos, exploding wire detonators, the David Taylor model basin, multi-point metal forming, and more. Roughly 2/3rds of the reading list is paywalled, so for full access become a paid subscriber.

No essay this week, but I’m working on a more involved piece about international construction productivity that should be out next week.

A320 software upgrades

For the past several years most potential safety issues with commercial aircraft seem to have been with Boeing planes. But here’s one with the Airbus A320 family of aircraft, the most popular commercial aircraft in the world. Apparently a bug in a recent version of the elevator aileron computer (ELAC) software can cause issues if the data is corrupted by intense solar radiation. A recent JetBlue flight had an “unexpected pitch down” (sudden drop of altitude) because intense radiation corrupted flight control data. Via Aviation Week:

The issue affects around 60% of the global A320 fleet, including first generation A320s and the newer A320neo variants as well as A319s and A321s in each case. Most ELACs can be fixed by reverting to a previous version of recently-updated software, Airbus said.

But about 1,000 of the oldest affected aircraft need a hardware change to accept the new software. These airframes will need the old hardware re-installed--a process that will take longer.

Airbus identified the issue during its probe into an Oct. 30 incident involving a JetBlue A320. The aircraft, en route to Newark from Cancun, suddenly lost altitude while in cruise.

An A320 “recently experienced an uncommanded and limited pitch down event,” EASA’s EAD said, without identifying the specific flight. “The autopilot remained engaged throughout the event, with a brief and limited loss of altitude, and the rest of the flight was uneventful. Preliminary technical assessment done by Airbus identified a malfunction of the affected ELAC as a possible contributing factor.”

The aircraft had the newest ELAC software installed. Airbus determined reverting to the previous version eliminates the risk.

Average homebuyer age

An oft-cited example of the increasing difficulty of affording a house is the steadily rising age of first-time homebuyers. Young people with lower incomes, the story goes, are being squeezed out of the housing market, driving the average age of first-time buyers up. Here’s a characteristic story earlier this year from the New York Times:

The path to homeownership continues to get longer, with the median age of first-time home buyers hitting an all-time high of 40 in 2025, according to a report from the National Association of Realtors.

“It’s kind of a shocking number,” said Jessica Lautz, deputy chief economist and vice president of research at N.A.R. “And it’s really been in recent years that we’ve seen this steep climb.”

In 1991, the typical first-time buyer was able to purchase a home by the time they were 28 years old. That number gradually climbed to 33 in 2020, then shot up to 36 in 2022 and 38 in 2024.

There are clear reasons behind the trend. Younger Americans are struggling to save for a down payment as they stretch their paychecks to cover student loans, a rising cost of living and, most critically, high rents, which make saving money harder. And even if they have saved diligently, a persistent lack of affordable housing inventory has left them shut out of the market.

However, it’s possible this is an artifact of how the data is collected (via mail-in surveys, which younger people may be less inclined to fill out). Homebuyer age data from the Federal Reserve indicates that, rather than steadily rising, the average first-time buyer age has stayed roughly flat (and may even be declining) over time. From the American Enterprise Institute:

NAR’s statistics are based on their annual survey of homebuyers and sellers. For the 2025 report, covering from July 2024 to June 2025, 189,750 surveys were mailed to a “representative sample” of buyers and sellers. However, only 6,103 completed surveys were received, indicating a response rate of just 3.5 percent, with only 21 percent, or 1,281, being FTBs.

The CCP, by contrast, is based on a 5 percent random sample of all credit reports, which reports provide both borrower age and home buying history. The CCP data, for the same period as the NAR, found the average and median FTB was 36.2 and 33 years old, both well under the NAR’s age of 40.

Digging deeper into the NAR and CCP results yields helpful distributions by age bins. While both have nearly identical shares for age 35-44, the NAR’s under age 35 groups are underrepresented by 17 percentage points and the aged 45 to 74 buyers are overrepresented by 18 percentage points respectively, compared to the CCP. The NAR bias to a higher age is perhaps not surprising given that it is a mail survey with 120 questions, which does not lend itself to a high response rate by Millennials and GenZ-ers. The CCP data appears to offer a better historical view of the current position of FTBs (see graphic below). To reiterate its findings, FTB average and median age stood at 36.3 and 33 years for the period Q3:24-Q2:25, and there has been minimal FTB average age change since either 2001 or 2021.

3D printed legos

As I’ve noted previously, I’m interested in the progress of 3D printing technology, and how it might be extended to broader types of production — new types of materials, higher precision, printing complex mechanisms, lower unit costs making it more competitive for high volumes, and so on. In this vein, Modern Engineering Marvels has an interesting story about Lego working for nine years to be able to 3D print legos for mass-produced sets:

The milestone capped a nine-year development program to develop a high-throughput polymer additive manufacturing platform able to reach consumer-level production volumes. Head of Additive Design and Manufacturing Ronen Hadar framed the accomplishment as LEGO’s equivalent of adopting injection moulding in the 1940s. The team’s aspiration wasn’t to replace moulding but to add to the design toolset – to make 3D printed parts “boringly normal” in future sets.

The production system makes use of EOS polymer powder bed fusion technology in the form of an EOS P 500 platform with Fine Detail Resolution. FDR uses an ultra-fine CO₂ laser that enables highly detailed features in nylon-based materials. The LEGO Group chose the process for its combination of dimensional accuracy, mechanical strength, and surface quality, all vital for parts to mesh properly with billions of bricks already in existence. Already, the company has doubled the speed of output from its machines and is looking for even more efficiency gains.

…From an engineering standpoint, this leap from prototype to mass production required the invention of new workflows. Unlike the decades-honed process control of injection molding, additive manufacturing had to come up with fresh answers for color matching and dimensional consistency, integrating current LEGO quality systems.

Filings coherer

Often the initial version of some particular technology is implemented in a way that doesn’t necessarily work the best or most efficiently, but is simply the easiest to get working. The gun-type bomb was chosen for the first atomic weapon because it was the most straightforward to build, but subsequent bombs used the more-complicated but more-efficient implosion mechanism. The first point-contact transistors were likewise eventually replaced by superior bipolar junction transistors.

Here’s another interesting example of one of these temporary technologies, the filings coherer, which was used to detect signals in the first radios. It consists of a glass tube filled with metal filings, connected to wires on either side. Initially, the metal filings have high resistance, limiting the flow of electric current. However, an electromagnetic disturbance — which can be induced by a passing electromagnetic wave — will cause the filings to “cohere”, reducing the resistance and allowing for greater electricity flow. Via Wikipedia:

When a radio frequency signal is applied to the device, the metal particles would cling together or “cohere”, reducing the initial high resistance of the device, thereby allowing a much greater direct current to flow through it. In a receiver, the current would activate a bell, or a Morse paper tape recorder to make a record of the received signal. The metal filings in the coherer remained conductive after the signal (pulse) ended so that the coherer had to be “decohered” by tapping it with a clapper actuated by an electromagnet, each time a signal was received, thereby restoring the coherer to its original state. Coherers remained in widespread use until about 1907, when they were replaced by more sensitive electrolytic and crystal detectors.

ElectroBOOM on Youtube has a good video where he looks at this “coherence” effect. And IEEE Spectrum has a good paper about the history of it — the mechanism behind it seems to have remained poorly understood until well into the 21st century.

Read more

The Bipolar Jets of KX Andromedae

Blasting outward from variable star KX Andromedae,


Binding early decision in college admissions: "Go early, or go somewhere else"

 There was a time when only football coaches and presidents had news-making salaries at colleges and universities.  Now top admissions officers--i.e. sales managers--are the subject of this NYT story:

Meet the Millionaire Masters of Early Decision at Colleges
The enrollment chiefs at Tulane and the University of Chicago attracted many early applicants. Now both of them earn a lot of money. 
By Ron Lieber

"The University of Chicago was where fun went to die. Tulane University was where you could die from too much fun.

"Neither place liked its reputation, but in 2016, both felt confident enough in changes on their campuses that they started offering an early decision option for student applicants. Apply by November (or January for the “Early Decision II” option) and get an answer weeks later. You just had to agree to attend if you got in.

"Within a handful of years, two-thirds of Tulane’s first-year class had taken the deal. The University of Chicago found so much success that it recently added an opportunity to apply even earlier, in some cases before the senior year of high school has even begun.


"The enrollment chiefs who made this all happen also found success.

"According to federal filings from 2023, Chicago’s vice president for enrollment and student advancement, James G. Nondorf, received $967,000 over a year from the university and “related” organizations. At Northeastern University, the executive vice chancellor and chief enrollment officer, Satyajit Dattagupta, got $1.079 million in compensation after decamping in 2022 from Tulane, where he had a strong run in a similar role."

...

"James Murphy, who works with Class Action, an advocacy organization, recently ranked schools on this early decision advantage — the difference in admissions rates between early decision and the “regular” round, when applicants get an answer later. Northeastern ranked first, with an early decision advantage that was over 11 times as large. Tulane was second, and its figure was over five times. "

Publishing Is Getting Smaller—and Maybe Better

Welcome to the latest installment of our interview series here at The Honest Broker—also available on our new YouTube channel. You can also find it on Apple Podcasts and other podcasting platforms.

Today, I’m excited to share my conversation with Ross Barkan.


Please support The Honest Broker by taking out a premium subscription (just $6 per month).

Subscribe now


Ross is a busy man. He is not only the writer behind his own Substack — he’s also a contributor to venues like New York Magazine, the author of the novels Glass Century and Colossus, and editor-in-chief of The Metropolitan Review. So naturally, I wanted to talk to Ross about writing and publishing.

Once we started talking, we couldn’t stop. This interview is cut down from nearly three hours of continuous conversation. We discussed the state of publishing, the difficulty of launching a new culture review, America’s political and literary history, AI art, and the ways that platforms like Substack are changing how we write and what we can get away with.

Below are highlights from the interview. For the rest of our conversation, check out the video at the top of the page.

Ross Barkan

Highlights from the Ross Barkan Interview

Jared: I was prepping for this interview, and I was talking to a mutual friend of ours, Alexander Sorondo. He asked if I was going to talk to you about politics or about literature and writing. I had to confess to him that I didn’t know you wrote about politics. I knew you exclusively from novels and things like The Metropolitan Review.

Ross: I like politics, but my love lies with literature and culture, and I think that comes across on Substack. That’s why Substack’s been so great, because the literary world is very hard to penetrate. Media is hard, but there is a very straightforward way that I could tell someone to break in. Come up with an idea, look at what the publication publishes, find the editor’s email, pitch them. They might not respond, but you can always pitch again, and at some point they might respond.

The media world still moves at a pretty quick pace, and even though it is very desiccated due to all these economic forces, there are still outlets out there. The literary world is still this very strange organism, and it really took Substack for me to have any kind of literary career or stature of any kind. Substack’s not perfect. I don’t want to turn into a Substack fanboy, but it is different. It has opened up so many pathways. You mentioned Alexander Sorondo. We published his 15,000-word profile of William Vollmann in The Metropolitan Review. This is a piece that he could not get published anywhere. And to me, that’s insane.

Jared: I think I was the third reader of Alex’s novel, Cubafruit. He sent me a copy before it was released, and I read it, and I was like ‘This is great. I love it.’ He went through that whole slog. He had an agent who loved his novel. He was getting personalized rejections, and every rejection would be effusive with praise, and they would say “We don’t know where to place this.” He’s a writer who just doesn’t fit into an easy mold. There is no niche for him right now. He’s doing something interesting, and the current media environment doesn’t know where to place him, and so he had to just go find something on his own. Insofar as I’m ever a fan of a platform, it’s because it gives people an opportunity to do something cool.

Ross: I was starting my career at the height of the 2010s digital upstarts. That was supposed to save writing and media, and it did not. And it’s fascinating to see with Substack that it has inculcated genuinely original writing. Sorondo and others like him write differently. That’s what’s so exciting.

You don’t see that from a lot of mainstream outlets anymore. There’s less room for literary nonfiction. When you look at New Journalism, with people like Tom Wolfe, or Joan Didion, or Gay Talese — they had very particular styles, right? They didn’t all sound the same. What bothered me about the internet era was it felt like there was a real flatness to the prose. I have really enjoyed this era much more.

Jared: It’s gone past that kind of voiceless, generic, Millennial snark.

Ross: I called it Gawker speak. The snark voice. That was so dominant. Very irony-drenched, very casual, humorous but kind of bitter. When it started, there was something refreshing about it. But then that took over the internet. I felt like the capital ‘L’ literary was lost in that, and I also felt like other types of idiosyncratic writing could not break through in the same way.

Jared: So tell me a little bit about the thought that went behind founding The Metropolitan Review. What are you trying to accomplish with this? Because I do think it sits in a really interesting space. It’s long-form. Pieces are usually over 3,000 words, which on Substack is huge. You’re also going to have a print edition.

Ross: Yes, we are planning for a print edition. It’s going to be very nice. We have a great team: Lou Bahet, Vanessa Ogle, Django Ellenhorn. Lou wanted to do something longer-lasting, that could sit on your bookshelf. So, we’re taking our time to get it there, and we have a printer in place, and now it’s really just getting these logistics in order.

I wanted to start a publication that’s going to review books, because it is harder and harder to get books reviewed. I also wanted a publication that lets the writer be the writer. We do edit The Metropolitan Review, but the edit will never erase someone’s voice. You will sound like yourself. We don’t have a single house style.

Jared: I think that the choice you’re making to have a premium printing is the right one. I read a lot of science fiction and fantasy authors as well. They expect that their books won’t sell well unless they get big on TikTok. But they make their money by later printing 1,500 copies of an ultra-premium edition.

Ross: That’s amazing. It’s a great idea.

Jared: We’re not going to see a return to the old paperback era, where you could make a living just churning out science fiction paperbacks over the weekend. But you can cultivate a fan base that will pay for the real thing, and they don’t just want a hardback. They want a really luxurious item that they can put on their shelf.

Ross: Exactly.

Jared: When I was reading Glass Century, I noticed that the way you write it had an ambivalence about identity throughout. Very early on, there’s a fake wedding between Mona and Saul. There are these little conversations that let people know everyone’s ethnic backgrounds. Saul will tell you the difference between German Jews and Russian Jews. So, I thought this was an ‘identity-first’ book. But your main character, Mona…she doesn’t care. Was that intentional?

Ross: I think I have always felt very ambivalent about it. I am a secular Jew. I had a bar mitzvah and did a little Hebrew school, but I never identified myself first as a Jew or even as a white person. I always felt that I am me and my interests. Some would say that’s a luxury and a privilege, and I accept that. Having grown up in New York City, I always understood identity was very complicated.

I didn’t grow up with a great distinction between the German and the Russian Jew, but I knew my history. I understood that the German Jew and the Russian Jew are, in fact, quite distinct. I’m descended from the Russian Jews, the Jews who came from the pogroms of the 19th century. They fled here during the era of mass immigration, and that’s why I’m always going to be pro-immigrant. So, it’s a little similar to Saul in the book. The German Jew is very assimilated, and was wealthier. They also tended to be almost Christian-passing.

Jared: I think Saul uses the phrase ‘barely part of the tribe.’

Ross: Yes, that’s how the German Jew is seen. Dianne Feinstein, the late senator from California, is a great example of this. She was a German Jew who grew up in San Francisco. She attended a Catholic school, and that was something that was not uncommon if you were part of this older community.

Jared: I liked Glass Century because it was a book that took identity seriously, but it didn’t make it the only focus.

Ross: I’ve found the woke/anti-woke binary very exhausting. I think we need to move back to universal values. There’s a very healthy way to talk about identity. You can’t talk about American history without the sins of slavery. Before we recorded this, I walked to the Texas capitol. It was very interesting to see a Confederate monument. And it sits there, very distinctly, a block away from an Austin pride flag. We can’t dismiss the sins of the past, but we have to acknowledge progress.

Jared: One of my favorite cultural institutions in the United States is the Library of America. I ask myself ‘Why doesn’t every country have a publisher that does this?’ And I love the fact that in the Library of America, you read the foundational documents of the United States. You read Black writers during Reconstruction. You read New York City Jews in the 1970s. It’s remarkable.

Ross: You see throughout American history that there’s this push-pull. It’s cycles. It’s battles. It is great terror and great failure, mixed with great hope and great success. And that is the story of America that should be told, because that is the story of America.

Jared: Let me read something to you: ‘We are fattened, bloody flesh sacks doomed to obsolescence, pacified by programs that will do all the thinking and feeling for us. If we stop thinking, what is left? To submit? To putter along like amoebae?’

That’s you writing about the New Romanticism and AI. So why don’t you tell us how you really feel?

Ross: New Romanticism is very interesting to me. Ted Gioia originated the idea, and it drew me in right away. I think it captures a general mood. A growing number of people are very disenchanted with technology as it stands today. They are looking to older forms, a return to a more interpersonal and in-person dynamic, a real turn away from techno-optimism. I believe that we are in a space where we understand technology’s ill effects.

Why is it that, for tens of thousands of years, humanity could paint paintings, write novels, use imagination? Why do you need a machine to replace that? I get why you need a machine to lift a heavy object. I get why you need a machine to do complex mathematical calculations. I get why you need a machine to do medical exams. We need machines for many things. The proponents of AI don’t really say why you need a machine to make what is now very mediocre art.

Jared: Do you have a book recommendation for our audience?

Ross: Ken Kesey is very famous for writing One Flew Over the Cuckoo’s Nest, which is a great book. No one knows his second novel, Sometimes a Great Notion, which is this wonderful epic of the Pacific Northwest. It was a formative novel in my youth, and I highly recommend it.

Jared: Ross Barkan, thanks for joining me.

Ross: Thank you for having me. This was wonderful.

What Tom Whitwell learned in 2025

52 things, here is one of them:

Most characters in the film Idiocracy wear Crocs because the film’s wardrobe director thought they were too horrible-looking to ever become popular. [Alex Kasprak]

Here is the full list.

The post What Tom Whitwell learned in 2025 appeared first on Marginal REVOLUTION.

       


Northrop Grumman continues solid rocket motor development and test program

SMART Demo test

Northrop Grumman tested a solid rocket motor Dec. 4 as part of an internal program to advance solid rocket propulsion technologies.

The post Northrop Grumman continues solid rocket motor development and test program appeared first on SpaceNews.

Orbex trails other European Launcher Challenge companies as U.K. delays funding decision

Orbex received far less funding than the other four ESA European Launcher Challenge companies after the U.K. deferred a decision on funding allocations.

The post Orbex trails other European Launcher Challenge companies as U.K. delays funding decision appeared first on SpaceNews.

China faces temporary emergency launch gap after space station lifeboat crisis

China could be without emergency launch capability to Tiangong space station for months, leaving no rapid-response option for any new crisis following the Shenzhou-20 incident.

The post China faces temporary emergency launch gap after space station lifeboat crisis appeared first on SpaceNews.

Mobile networks want to use the satellite airwaves we need to track climate change

Researchers at JPL, alongside colleagues in Belize, used 20 years of data from MODIS, an instrument on NASA’s Aqua satellite, to assess risk to Belize’s coral reefs due to human activity and climate change. MODIS captured this image of the Yucatán Peninsula, including Belize, in February 2022. Credits: NASA

At next year’s World Radiocommunications Conference (WRC-25), governments will face a choice that goes to the heart of how we monitor our warming planet. Some regulators are wondering whether to […]

The post Mobile networks want to use the satellite airwaves we need to track climate change appeared first on SpaceNews.

The space economy isn’t for everyone

Starlink satellite stack

Projections for the booming space economy often come with trillion-dollar headlines, but the lion’s share of near-term revenue looks destined for just a handful of massive constellations with the funds to invest in vertical integration. It’s relatively slim pickings for the many other manufacturers, launch providers and technology suppliers hoping to ride the wave. Manufacturing […]

The post The space economy isn’t for everyone appeared first on SpaceNews.

★ 2025 App Store Award Winners: Tiimo, Essayist, and Detail

Apple, today: “Announcing the 2025 App Store Awards”:

This year’s winners represent the best-in-class apps and games we returned to again and again. We hope you enjoy them as much as we do.

I did not enjoy all of them as much as Apple did.

Tiimo

iPhone app of the year Tiimo bills itself as an “AI Planner & To-do” app that is designed with accommodations for people with ADHD and other neurodivergences. Subscription plans cost $12/month ($144/year) or $54/year ($4.50/month). It does not offer a native Mac app, and at the end of onboarding/account setup, it suggests their web app for use on desktop computers. When I went to the web app, after signing in with the “Sign in With Apple” account I created on the iPhone app, Tiimo prompted me to sign up for an annual subscription for $42/year ($3.50/month), or monthly for $10 ($120/year). The in-app subscriptions offer a 30-day free trial; the less expensive pay-on-the-web subscriptions only offer a 7-day free trial. The web app doesn’t let you do anything without a paid account (or at least starting a trial); the iOS app offers quite a bit of basic functionality free of charge.

From Apple’s own description for why it gave Tiimo the award:

Built to support people who are neurodivergent (and anyone distracted by the hum of modern life), Tiimo brought clarity to our busy schedules using color-coded, emoji-accented blocks. The calming visual approach made even the most hectic days feel manageable.

It starts by syncing everything in Calendar and Reminders, pulling in doctor’s appointments, team meetings, and crucial prompts to walk the dog or stand up and stretch. Instead of dumping it all into a jumbled list, the app gives each item meaning by automatically assigning it a color and an emoji. (Tiimo gave us the option to change the weightlifter emoji it added to our workout reminders, but its pick was spot on.)

While on the move with coffee in one hand and keys in the other, we sometimes talked to Tiimo with the AI chatbot feature to add new tasks or shift appointments. When we felt overwhelmed by our to-do list, Tiimo kept us laser-focused by bubbling up just high-priority tasks, while its built-in Focus timer (accessible from any to-do with a tap) saved us from the pitfalls of multitasking.

But Tiimo really stood out when we faced a big personal project, like getting our Halloween decorations up before Thanksgiving. With the help of AI, the app suggested all the smaller tasks that would get us there: gathering the decorations from the garage, planning the layout, securing the cobwebs, and doing a safety check.

Aside from the web app, Tiimo is iOS exclusive, with apps only for iPhone, iPad, and Apple Watch. No Android version. It seems to do a good job with native platform integration (Calendar integration is free; Reminders integration requires a subscription). Animations in the app feel slow to me, which makes the app itself feel slow. And, personally, I find Tiimo’s emphasis on decorating everything with emoji distracting and childish, not clarifying.

The app seems OK, but not award-worthy to me. But, admittedly, I’m not in the target audience for Tiimo’s ADHD/neurodivergent focus. I don’t need reminders to have coffee in the morning, start work, have dinner, or to watch TV at night, which are all things Tiimo prefilled on my Today schedule after I went through onboarding. As I write this sentence, I’ve been using Tiimo for five minutes, and it’s already prompted me twice to rate it on the App Store. Nope, wait, I just got a third prompt. That’s thirsty, and a little gross. (And, although I’m not an ADHD expert, three prompts to rate and review the app in the first 10 minutes of use strikes me as contrary to the needs of the easily distracted.)

Essayist

Mac app of the year Essayist bills itself as “The Word Processor designed for Academic Writing” (capitalization verbatim). Subscriptions cost $80/year ($6.67/month) or $10/month ($120/year). Its raison d’être is managing citations and references, and automatically formatting the entire document, including citations, according to a variety of standards (MLA, Chicago, etc.). Quoting from Apple’s own description of Essayist:

Essayist gives you an easy way to organize a dizzying array of primary sources. Ebooks, podcasts, presentations, and even direct messages and emails can be cataloged with academic rigor. Using macOS Foundation Models, Essayist extracts all the key info needed to use it as a source.

For example, paste a YouTube URL into an entry and Essayist automatically fills in the name of the video, its publication date, and the date you accessed it. Drag in an article as a PDF to have Essayist fill in the title, author, and more — and store the PDF for easy access. You can also search for the books and journal articles you’re citing right in the app.

Essayist is a document-based (as opposed to library-based) app, and its custom file format is a package with the adorable file extension “.essay”. The default font for documents is Times New Roman, and the only other option is, of all fonts, Arial — and you need an active subscription to switch the font to Arial. (Paying money for the privilege to use Arial... Jiminy fucking christ. I might need a drink.) I appreciate the simplicity of severely limiting font choices to focus the user’s attention on the writing, but offering Times New Roman and Arial as the only options means you’re left with the choice between “the default font’s default font” and “font crime”. The Essayist app itself has no Settings; instead, it offers only per-document settings.

The app carries a few whiffs of non-Mac-likeness (e.g. the aforementioned lack of Settings, and some lame-looking custom alerts). The document settings window refers to a new document, even after it has been saved with a name, as “Untitled” until you close and reopen the document. Reopened documents do not remember their window size and position. But poking around with otool, it appears to be written using AppKit, not Catalyst. I suspected the app might be Catalyst because there are companion iOS apps for iPhone and iPad, which seem to offer feature sets identical to the Mac app’s. Essayist uses a clever system where, unless you have a subscription, documents can only be edited on the device on which they were created, but you can open them read-only on other devices. That feels like a good way to encourage paying while giving you a generous way to evaluate Essayist free of charge. There is no Android, Windows, or web app version — it’s exclusive to Mac and iOS.

I’ve never needed to worry about adhering to a specific format for academic papers, and that’s the one and only reason I can see to use Essayist. In all other aspects, it seems a serviceable but very basic, almost primitive, word processor. There’s no support for embedding images or figures of any kind in a document, for example. [Correction: Essayist does support figures, but I missed the UI for how to insert them.]

Detail

iPad app of the year Detail bills itself, simply and to the point, as an “AI Video Editor”. The default subscription is $70/year ($5.83/month) with a 3-day free trial; the other option is to pay $12/month ($144/year) with no free trial. After a quick test drive, Detail seems like an excellent video editing app, optimized for creating formats common on social media, like reel-style vertical videos where you, the creator, appear as a cutout in the corner, in front of the video or images that you’re talking about. The iPhone version seems equally good. The iPad version of Detail will install and run on MacOS, but it’s one of those “Designed for iPad / Not verified for macOS” direct conversions. But they do offer a standalone Mac app, Detail Studio, which is a real Mac app, written using AppKit, which requires a separate subscription to unlock pro features ($150/year or $22/month). Detail only offers apps for iOS and MacOS — no Windows, Android, or web.

From Apple’s own acclaim for Detail:

When we used Detail to record a conversation of two people sitting side by side, the app automatically created a cut that looked like it was captured with two cameras. It zoomed in on one speaker, then cut away to the other person’s reaction. The app also made it easy to unleash our inner influencer. We typed a few key points, and the app’s AI wrote a playful script that it loaded into its teleprompter so we could read straight to the camera.

Most importantly, Detail helped us memorialize significant life moments all while staying present. At a birthday party, we propped an iPad on a table and used Detail to record with the front and back cameras simultaneously. The result was a split-screen video with everyone singing “Happy Birthday” on the left and the guest of honor blowing out the candles on the right. (No designated cameraperson needed.)

Detail has a bunch of seemingly genuinely useful AI-based features. But putting all AI features aside, it feels like a thoughtful, richly featured manual video editor. I suspect that’s why the AI features might work well — they’re an ease-of-use / automation layer atop a professional-quality non-AI foundation. Basically, Detail seems like what Apple’s own Clips — recently end-of-life’d — should have been. It turns your iPad (or iPhone) into a self-contained video studio. Cool.


Of these three apps — Tiimo on iPhone, Essayist on Mac, and Detail on iPad — Detail appeals to me the most, and strikes me as the most deserving of this award. If I were to start making videos for modern social media, I’d seriously evaluate Detail as my primary tool.

Apple still has no standalone category for AI apps, but all three of these apps emphasize AI features, and Apple itself calls out those AI features in its praise for them. It’s an obvious recurring theme shared by all three, along with their shared monetization strategies of being free to download with in-app subscriptions to unlock all features, and the fact that all three winners are exclusive to iOS and Mac (and, in Tiimo’s case, the web).

Netflix Agrees to Buy Warner Bros., Including HBO, for $83 Billion

Meg James, reporting for The Los Angeles Times (News+ link):

The two companies announced the blockbuster deal early Friday morning. The takeover would give Netflix such beloved characters as Batman, Harry Potter and Fred Flintstone.

Fred Flintstone?

“Our mission has always been to entertain the world,” Ted Sarandos, co-CEO of Netflix, said in a statement. “By combining Warner Bros.’ incredible library of shows and movies — from timeless classics like Casablanca and Citizen Kane to modern favorites like Harry Potter and Friends — with our culture-defining titles like Stranger Things, KPop Demon Hunters and Squid Game, we’ll be able to do that even better.”

Not sure Squid Game belongs in the same comparison as Citizen Kane, but the Warners library is incredibly deep. Stanley Kubrick’s post-2001: A Space Odyssey films were all for Warner Bros.

Netflix’s cash and stock transaction is valued at about $27.75 per Warner Bros. Discovery share. Netflix also agreed to take on more than $10 billion in Warner Bros. debt, pushing the deal’s value to $82.7 billion. [...] Warner’s cable channels, including CNN, TNT and HGTV, are not included in the deal. They will form a new publicly traded company, Discovery Global, in mid-2026.

I don’t know if this deal makes sense for Netflix, but Netflix has earned my trust. Netflix is a product-first company. They care about the quality of their content, their software, their service, and their brand. If you care about the Warner/HBO legacy, an acquisition by Netflix is a much, much better outcome than if David Ellison had bought it to merge with Paramount.

The LA Times article goes on to cite concerns from the movie theater industry, based on Netflix’s historic antipathy toward theatrical releases for its films. Netflix is promising to keep Warner Bros.’s film studio a separate operation, maintaining the studio’s current support for theatrical releases. I hope they do. I grew up loving going to the movies. I still enjoy it, but the truth is I go far less often as the years go on. Movie theaters shouldn’t be a protected class of business just because there’s so much affection and nostalgia for them. If they continue sliding into irrelevance, so be it. That’s how disruption, progress, and competition work.

 ★ 

Castelion raises $350 million to scale hypersonic missile production

The company founded by SpaceX veterans is opening a solid rocket motor manufacturing campus in New Mexico

The post Castelion raises $350 million to scale hypersonic missile production appeared first on SpaceNews.

Crooked as We Wanna Be, Say the Corrupt GOP Six

Kate goes deeper on the new definition of “eve” the Court promulgated to help Republicans hold the House next year. (I don’t think it’ll be enough, but that’s another matter.) They take a principle that has some logic in extreme cases: there needs to be some balance between the merits of a case and potential disruption to an election. But given that we have House elections every two years, one year out cannot be the “eve” of an election. In any case, it’s more evidence of what we already know: we’re dealing with a corrupt Court at war with the Constitution. They do what they need to do to get the result they want. Read Kate.

Dithering: ‘Alan Dye Leaves Apple’

The December 2025 cover art for Dithering, showing a man dressed as Santa Claus getting a kiss on the cheek under some mistletoe.

Dithering is my and Ben Thompson’s twice-a-week podcast — 15 minutes per episode, not a minute less, not a minute more. It’s a $7/month or $70/year subscription, and included in the Stratechery Plus bundle (a bargain). This year our CMS (Passport — check it out) gained a feature that lets us make some episodes free for everyone to listen to on the website. Today’s episode, regarding Alan Dye leaving Apple for Meta, seems like a good one to do that with. (And, once again, this month’s album art serendipitously captures my mood.)

Give it a listen. Subscribe if you enjoy it.

 ★ 

Apple’s Succession Intrigue Isn’t Strange at All

Aaron Tilley and Wayne Ma, in a piece headlined “Why Silicon Valley is Buzzing About Apple CEO Succession” at the paywalled-up-the-wazoo The Information:

Prediction site Polymarket places Ternus’ odds of getting the job at nearly 55%, ahead of other current Apple executives such as software head Craig Federighi, Chief Operating Officer Sabih Khan and marketing head Greg Joswiak. But some people close to Apple don’t believe Ternus is ready to take on such a high-profile role, and that could make a succession announcement unlikely anytime soon, said people familiar with the company.

Nothing in the rest of the article backs up that “some people close to Apple don’t believe Ternus is ready” claim, other than this, several paragraphs later:

And while his fans believe Ternus has the temperament to be CEO, many of them say he isn’t a charismatic leader in the mold of a Jobs. He has also had little involvement in the geopolitical and government affairs issues that dominate most of Cook’s time these days. On a recent trip to China, for example, Apple’s new COO, Sabih Khan, accompanied Cook to some of his meetings.

No one else in the history of the industry, let alone the company, has the charisma of Steve Jobs. And while I think Polymarket has the shortlist of candidates right, I also think they have them listed in the right order. Sabih Khan probably should be considered an outside-chance maybe, but the fact that he accompanied Cook to China doesn’t make me think, for a second, that it’s in preparation to name him CEO. If Khan were being groomed to become CEO, he’d have started appearing in keynotes already. It’s silly to slag Ternus for not having the charisma of Steve Jobs, when Ternus has been a strong presence in keynotes since 2018, and in the same paragraph suggest Khan as a better option, when Khan has never once appeared in a keynote or public appearance representing Apple.

Some former Apple executives hope a dark-horse candidate emerges. For example, Tony Fadell, a former Apple hardware executive who coinvented [sic] the iPod, has told associates recently that he would be open to replacing Cook as CEO, according to people who have heard his remarks. (Other people close to Apple consider Fadell an unlikely candidate, in part because he was a polarizing figure when he worked at the company. Fadell left Apple in 2010.)

The parenthetical undersells the unlikelihood of Fadell returning to Apple, ever, in any role, let alone the borderline insanity of suggesting he’d come back as Cook’s successor.

It has become one of the strangest succession spectacles in tech. Typically, the kind of buzz that is swirling around Cook occurs when companies are performing badly or a CEO has dropped hints that they’re getting ready to hang up their spurs. Neither applies in Cook’s case, though.

There’s nothing strange about it. Apple has a unique company culture, but so too do its peers, like Microsoft, Amazon, and Google. And just like at those companies, it’s therefore a certainty that Cook’s replacement will come from within the company’s current ranks. Polymarket doesn’t even list anyone other than Ternus, Federighi, Joswiak, and Khan.

As for hints, there is not much need for any hint beyond the fact that Cook is now 65 years old and has been in the job since 2011. But the high-profile multi-source leak to the Financial Times is a pretty obvious fucking additional hint.

 ★ 

Lisa Jackson on The Talk Show Back in 2017

This interview was both interesting and a lot of fun. Worth a listen or re-listen.

 ★ 

Apple Announces a Few Other Executive Transitions

Apple Newsroom, yesterday:

Apple today announced that Jennifer Newstead will become Apple’s general counsel on March 1, 2026, following a transition of duties from Kate Adams, who has served as Apple’s general counsel since 2017. She will join Apple as senior vice president in January, reporting to CEO Tim Cook and serving on Apple’s executive team.

In addition, Lisa Jackson, vice president for Environment, Policy, and Social Initiatives, will retire in late January 2026. The Government Affairs organization will transition to Adams, who will oversee the team until her retirement late next year, after which it will be led by Newstead. Newstead’s title will become senior vice president, General Counsel and Government Affairs, reflecting the combining of the two organizations. The Environment and Social Initiatives teams will report to Apple chief operating officer Sabih Khan. [...]

Newstead was most recently chief legal officer at Meta and previously served as the legal adviser of the U.S. Department of State, where she led the legal team responsible for advising the Secretary of State on legal issues affecting the conduct of U.S. foreign relations.

Monday’s announcement that AI head John Giannandrea is retiring, and that the hierarchy for AI-related projects is being further reshuffled under software head Craig Federighi, was significant, but not surprising, given how things went this year for Apple with AI.

Wednesday’s announcement that VP of design and Liquid Glass frontman Alan Dye is leaving Apple for Meta was a shock, both inside and outside the company. As I wrote this week, I think it’s great news for Apple, but not by plan.

This news yesterday is just typical planned retirements. The timing is slightly unfortunate, though: in the eyes of observers unfamiliar with the company, these retirements might be misconstrued as signs of executive upheaval, occurring on the heels of the minor and major dramas of Giannandrea’s and Dye’s departures. The Jackson / Adams / Newstead transitions announced yesterday are nothing of the sort.

Jackson had a very nice run at Apple and carved out a rather unique position within the company. Apple’s environmental efforts expanded tremendously under her leadership. I’ve never met anyone with a bad word to say about her, and in my own interactions, found her downright delightful.

As for Adams, the responsibilities of Apple’s general counsel are generally far afield from my interests. The only two times I’ve mentioned her at DF were when she got the job in 2017, and a passing reference when the FBI sent a letter to Apple, addressed to Adams, in 2020 regarding the locked phone of a mass shooter in Pensacola, Florida. That’s a sign of a good run for a general counsel — it’s a job where no news is good news.

Lastly, I wouldn’t read anything into Newstead coming to Apple by way of Meta. But it is a bit funny that it was announced the day after Dye left Apple for Meta. She seems to have an excellent wide-ranging background to spearhead Apple’s government affairs. Her stint in the State Department was during the first (now seemingly sane) Trump administration, but she clerked for liberal Supreme Court Justice Stephen Breyer.

 ★ 

Democrats Need to Treat the Supreme Court Like the Villain It Is

“I drew a picture of me taking bribes!” “Very good Donald, you’ll be allowed to do that soon enough.” (Official White House Photo by Andrea Hanks)


On Thursday, the Supreme Court offered the latest in its ongoing series of shocking-but-not-surprising rulings, this one giving its stamp of approval to Texas’ vulgar redistricting, which the state undertook at President Trump’s instruction. As usual, the decision was judicial Calvinball, made according to an ever-changing set of rules trotted out in order to achieve the singular end to which the Court is devoted: Republicans Always Win.

When this happens, one of the court’s three liberals writes an angry dissent, people like me pen outraged op-eds, some Democrats in Congress say they’re deeply troubled by the decision, and nothing much changes. In the short run, nothing much can. But this Court has created a crisis, and Democrats need to start thinking about how they’re going to solve it.

The solution has to come from all levels, both the grassroots and elected Democrats. Pressure needs to be built so that when the 2028 Democratic presidential nominating contest begins (it will commence immediately after the 2026 midterms), all the candidates feel compelled to take serious, aggressive stands in favor of dramatic and sweeping court reform.

Not with expressions of deep concern, and not with a promise to appoint a commission to study the issue. Joe Biden appointed a commission — do you remember it? You don’t, because the members very sincerely did their work, and then Biden ignored it for two and a half years, until he released a “plan” for court reform a week after he withdrew from the 2024 race.

No more of that. Democratic voters have to force their candidates to embrace real, aggressive Supreme Court reform. That means not only coming up with a plan (including, most likely, term limits and court expansion) but being unrestrained in how they talk about these six villains who are destroying our democracy. They are every bit as much of a threat to what we hold dear as Donald Trump is, but if Democrats won’t say so, they can’t sow the ground for the reform that is so terribly necessary.

The redistricting case

Let’s spend a moment on the redistricting case, because while it may not be the most appalling thing this court has ever done, it’s illustrative of the way it operates and why it must be reined in — and it’s one more step toward the realization of the goal that has animated Chief Justice John Roberts’ entire career, the eventual destruction of the Voting Rights Act.

After its trial, the district court ruled that in pursuing its aim of maximizing the number of Republican House seats, the Texas legislature violated the Constitution and clear Supreme Court precedent. The legislature set out to break up “coalition districts” that include a majority made up of multiple racial groups into separate Black districts (which will vote Democratic) and Hispanic districts (which they believe will vote Republican, though this may or may not turn out to be true). This, the district court found, was an explicit racial gerrymander and therefore illegal.

But the six conservative justices decided to toss the district court’s extensive fact-finding in the trash, for one reason only: It didn’t produce the result they wanted. In three snide paragraphs, Justice Samuel Alito dismissed the 160-page district court ruling, for two absurd reasons. First, Alito wrote, “the District Court failed to honor the presumption of legislative good faith” on the part of Texas Republicans. That is simply bogus; rather than relying solely on anyone’s good faith, the district court examined the evidence at length, and judged accordingly. Second, said Alito, the district court supposedly erred by rejecting the new map, which no one has ever voted under, “on the eve of an election,” i.e. an election that is 11 months away.

This is called the “Purcell principle,” which begins from the quite reasonable idea that courts should refrain from changing the rules of elections, including district lines, right before an election takes place. But in practice, Purcell has become an infinitely flexible tool that the conservatives apply or ignore at their whim. If a change in the rules will benefit Democrats, Purcell is deployed to strike it down, no matter how far away the next election actually is. If the change will benefit Republicans, no matter how close the election is, Purcell is placed gently back into its scabbard, and the changes are allowed. But in this case, the Court deployed Purcell to force a change — a new map, rather than the one that has been in place since 2021 — while claiming it was doing exactly the opposite.

The Supreme Court is out of control, and Democrats have to say so

That’s how this Court operates; there are dozens of other cases to illustrate the point, up to and including the case in which it granted Donald Trump, far and away the most corrupt president to ever occupy the Oval Office, the right to do all the crimes he wants. But most critically for the future, the court has created a set of principles and tools meant for itself to use to achieve the policy goals it wants. Purcell is one; the end of “Chevron deference,” which previously said that federal agencies can decide how to apply statutes, is another; the “major questions doctrine,” which they invented out of whole cloth, is a third. The mother of them all is “originalism,” which states that any law’s constitutionality is determined by whether a right-wing justice’s law clerk can find a quote from the Federalist Papers or a bill considered in the Virginia House of Burgesses in 1750 that seems to support whatever outcome the conservatives want.

The six conservatives have deployed these tools again and again to advance the interests of the Republican Party and their own policy preferences. Precedents have no bearing, the plain text of statutes has no meaning, and their own authority has no limit. They are out of control, and their reign of judicial terror must come to an end.

Any Democrat who says “Voters don’t really care about this stuff” needs a good smack in the head. The answer to that problem is to make them care. Republicans do this all the time; if they have something they wish was on the agenda, they force it on the agenda, no matter how ridiculous it is or how removed it is from people’s lives. How many Americans cared five years ago about whether some middle school trans kid a hundred miles from where they live wanted to play softball? But they care about it now, because Republicans made them care.

Democrats need to do the same with the Supreme Court — loudly, angrily, personally, relentlessly. If they don’t, the next Democratic president is utterly screwed. That president is going to have an extraordinarily challenging job before them, perhaps the most difficult since FDR took office in 1933. Trump will have left the federal government in ruins, and rebuilding it will take years — and a kind of urgent, vigorous exercise of authority that goes against Democrats’ inclinations toward caution and consensus. If and when that Democrat gets down to the business of rebuilding, the Court’s conservatives are going to do everything in their power to sabotage that effort. Without court reform, four years later that Democrat will be considered a failure, and another Republican will take office to continue Trump’s work of destruction.

Avoiding the practicality trap

If Democratic candidates do what I’m suggesting, they will immediately be met with a particular response, both from centrists within their party and from the elite news media. Shifting attention from the crisis the Court has created, they will descend upon the aggressive Democrats advocating court reforms with questions about practicality. Surely you’re being unrealistic, they’ll say, fingers wagging furiously at this naïve, pie-in-the-sky notion.

We saw this vividly in 2020 with the debate Democrats had about health care. The party’s constituents were eager for progress, not just to shore up the Affordable Care Act but to create genuinely universal (and affordable) coverage. Even the “moderates” in the race, including Joe Biden, presented plans far more progressive than the ACA itself. But those who advocated some form of single-payer, especially Bernie Sanders, were buried under an avalanche of “But how would you pay for it???” scolding, the kind of question never asked of anything Republicans propose except in the most perfunctory way.

This is how establishment media respond to any sign of vigor on the part of Democrats, by using their own instincts toward seriousness against them, and it’s what they’ll do when Democrats advocate court reform. Can you get the votes in Congress? Isn’t it going to be complicated? Won’t it cause confusion? The answer is not to just ignore any question of practicality, but to always shift the debate back to the villainy of this Supreme Court and the need to stop them. There are plenty of reform proposals out there, and they’re perfectly practical; Congress can change the size of the court (as it has done many times before), impose term limits, and much more. All that’s necessary is the will to do it.

That will can be created, if Democrats are willing to push hard enough. But they have to start now.

Thank you for reading The Cross Section. This site has no paywall, so I depend on the generosity of readers to sustain the work I present here. If you find what you read valuable and would like it to continue, consider becoming a paid subscriber.


MAGA’s Affordability Crisis Will Soon Get Worse

Long ago, probably in a long-forgotten hotel room, I watched a 1974 movie called The Internecine Project — now available, as you’ll see if you follow the hyperlink, on YouTube. In truth, it’s a pretty bad movie. But what made it memorable was the unusual nature of the villain, played by James Coburn: An evil, murderous chair of the President’s Council of Economic Advisers. Yes, you read that correctly – an evil, murderous chair of the CEA.

In the opening scene, Coburn is on a talk show, being quizzed about the high rate of inflation. He responds that people shouldn’t be upset about rising prices, because their incomes are rising even faster. Presumably the scriptwriters intended this to show how smarmy and cynical he is.

And that’s why Donald Trump won the 2024 election. No, Democrats didn’t lose because they use big words, or advocate for open borders, or talk too much about trans rights. None of those things actually happened to any significant degree, regardless of what Trump or the self-defeating wing of the Democratic Party says. They lost because Americans were angry about higher prices and not mollified by the fact that most people’s wages had risen more than overall consumer prices. In this coming Sunday’s primer, I’ll talk more about the underlying economics of the affordability issue, and in the following primer I’ll talk about specific strategies for Democrats to adopt to address it.

But what I want to focus on today is the politics of affordability. As current polls show, swing voters are increasingly blaming Trump, rather than Biden, for the cost of living. And the public’s ire is likely to get worse for the Republicans as time goes on.

In the summer of 2024, as Trump was lagging in the polls behind Kamala Harris, he began to repeatedly and explicitly promise not simply to reduce inflation but to deliver large declines in consumer prices: “Starting the day I take the oath of office, I will rapidly drive prices down.” Although economists warned that there was no way he could deliver on those promises, enough voters believed him to swing the election.

Now that Trump has in fact utterly failed to deliver, those voters — especially those Black and Latino voters who believed him — have swung back to the Democrats with a vengeance:

[Chart: polling trend showing those voters swinging back toward the Democrats]

Trump is handling this reversal with his usual style and grace: in the past few days he has repeatedly called affordability a “hoax” and a “con job.” According to Axios, he’s planning a nationwide retribution tour to convince voters that things are going great and that they’re wrong to be so down on the economy. Democratic strategists must be rubbing their hands with glee.

And if you are one of those Republicans reconsidering your future career options, know that things are going to get worse. A lot worse.

Lately I’ve been revisiting the work of the political scientist Suzanne Mettler. Mettler asked why so many people who are dependent on government social programs vote for conservatives who want to slash those programs. She focused in particular on Kentucky, where 28 percent of the population is covered by Medicaid, yet which gave Trump a more than 30 point margin last year.

My quick summary of Mettler’s analysis emphasizes two points. First, many people who benefit from government social programs don’t actually think of them as social programs. This is true not only for implicit aid like the tax exemptions for mortgage interest payments and employer-provided health insurance, but also for explicit aid programs like Medicare and Social Security. What Mettler documented is that many people who depend on government benefits don’t consider them “benefits” but rather something they’ve earned.

Medicare Recipients Against Handouts - The New York Times

Second, Mettler documented that recipients of means-tested government benefits such as Medicaid and food stamps are relatively poor, less educated, and often fail to vote.

I will add a third point: Most Americans aren’t close followers of policy debates. Telling them how an election promise is likely to affect their future benefits simply doesn’t register for most people. Instead, there has to be a clear demonstration of the policy change before it is made real to them. Take the example of Obamacare, which was famously unpopular before it went into effect. But once people experienced the benefits of Obamacare it went on to garner very strong public support. Furthermore, most people don’t mobilize in support of popular programs until it’s very obvious that they’re under imminent threat. Trump’s anti-Obamacare rhetoric during the 2016 campaign didn’t appear to hurt him, but his actual attempt to kill the program in 2017 helped Democrats win big in the 2018 midterms.

Which brings us to the health care earthquake that’s soon to hit — an earthquake that, based on my read of Mettler, is going to inflict significant political damage on the Republicans.

For those who haven’t been keeping up: The Affordable Care Act requires that health insurance companies offer the same policies to everyone, with no discrimination based on pre-existing conditions. It also provides significant subsidies to help people pay insurance premiums — specifically, limiting the amount families have to pay out of pocket as a percentage of their income — with the subsidies on a sliding scale based on income. These subsidies have an important secondary benefit: They encourage even healthy people to buy insurance, which improves the risk pool and therefore holds overall premiums down.

Mandatory disclaimer for liberals: Yes, it would be much simpler just to have single-payer healthcare, paid for with progressive taxes. But that wasn’t politically possible when Obamacare was created, and it still isn’t. Obamacare was more or less the best we could get.

As originally drafted, however, Obamacare was underpowered and underfinanced. Insurance was still hard for many Americans to afford, even with the subsidies. And there was an upper income limit for the subsidies: you still received substantial support as long as your income was less than four times the poverty line, but as soon as you crossed that line all support was cut off. This is the kind of “notch” everyone who studies tax and benefit policy is adamant that you want to avoid.

So in 2021 the Biden administration enhanced the subsidies. Out of pocket payments were reduced for everyone. And the “notch” was eliminated: maximum premium payments as a percentage of income were capped no matter how high one’s income was, although this limit wasn’t relevant for the truly affluent.
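
To make the “notch” concrete, here is a minimal sketch of my own, with made-up numbers (a stand-in poverty line, benchmark premium, and percentage caps, not the actual ACA schedule), contrasting the original cliff with a 2021-style cap:

# Illustrative only: the constants below are hypothetical, chosen to show the shape
# of the policy, not the real ACA parameters.
POVERTY_LINE = 15_000        # stand-in single-person poverty line
BENCHMARK_PREMIUM = 12_000   # stand-in unsubsidized annual premium

def premium_original_rules(income):
    # Pre-2021 style: help phases with income, then vanishes above 4x poverty.
    if income > 4 * POVERTY_LINE:
        return BENCHMARK_PREMIUM                      # the cliff: full premium, no subsidy
    return min(BENCHMARK_PREMIUM, 0.095 * income)     # pay at most ~9.5% of income (illustrative)

def premium_enhanced_rules(income):
    # 2021-enhancement style: premium capped as a share of income no matter how high income goes.
    return min(BENCHMARK_PREMIUM, 0.085 * income)     # ~8.5% cap (illustrative)

for income in (59_000, 60_000, 61_000):
    print(income, premium_original_rules(income), premium_enhanced_rules(income))
# With these stand-in numbers, crossing the 4x-poverty line ($60,000 here) jumps the
# original-rules premium from $5,700 to the full $12,000, while under the enhanced
# rules the premium changes only slightly. That jump is the "notch."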

But the legislation providing these enhanced subsidies expires at the end of this month. And Republicans in Congress are adamantly opposed to maintaining them. Even Trump has pleaded with his party to agree to a temporary extension, but seems to be getting nowhere. Visceral GOP dislike for anything that helps ordinary Americans may be partly to blame. Moreover, bolstering the ACA would be an implicit admission by the Republicans that they have been wrong all along about health care.

So let’s think about the politics of what’s about to happen: Millions of Americans are about to see a sudden rise in health care costs — not a hypothetical future rise, but a sudden jump on January 1.

And who will be hit worst? Here are Charles Gaba’s estimates for Florida:

[Chart: Charles Gaba’s estimates of ACA premium increases for Florida enrollees]

Almost all ACA enrollees will be paying more. However, the really huge premium increases will fall on older Floridians who are relatively well off — that is, those with incomes above the maximum allowable to receive subsidies. According to Gaba, these people are likely to see their insurance bills rising by more than $2500 a month — more than $30,000 a year! And these people, unlike many Medicaid or food stamp beneficiaries, have a high propensity to vote.

This ACA premium shock will hit as other forces are exacerbating the sense of crisis over affordability. Businesses are starting to fully pass onto consumers the cost of Trump’s tariffs. Electricity prices are soaring as data centers inflict the cost of their enormous power demands on consumers. In addition, Trump’s deportation policies are increasing the cost of food.

Trump may believe that affordability is a con job, but it isn’t. It’s going to hit him and his allies hard. And it couldn’t happen to a more deserving group of people.

MUSICAL CODA

TIL: Subtests in pytest 9.0.0+

I spotted an interesting new feature in the release notes for pytest 9.0.0: subtests.

I'm a big user of the pytest.mark.parametrize decorator - see Documentation unit tests from 2018 - so I thought it would be interesting to try out subtests and see if they're a useful alternative.

Short version: this parameterized test:

@pytest.mark.parametrize("setting", app.SETTINGS)
def test_settings_are_documented(settings_headings, setting):
    assert setting.name in settings_headings

Becomes this using subtests instead:

def test_settings_are_documented(settings_headings, subtests):
    for setting in app.SETTINGS:
        with subtests.test(setting=setting.name):
            assert setting.name in settings_headings

Why is this better? Two reasons:

  1. It appears to run a bit faster
  2. Subtests can be created programmatically after running some setup code first (see the sketch below)
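
To make point 2 concrete, here's a minimal sketch of the "setup first, then subtests" pattern. It's my own illustration, not from the post: load_settings() and load_documented_headings() are hypothetical setup helpers standing in for whatever work has to happen before the test cases are even known.

def load_documented_headings():
    # Hypothetical setup step, e.g. parsing a docs file on disk.
    return {"database_url", "log_level"}

def load_settings():
    # Hypothetical setup step, e.g. reading the app's runtime configuration.
    return ["database_url", "log_level", "cache_ttl"]

def test_discovered_settings_are_documented(subtests):
    # The cases are discovered at runtime, after setup, which a
    # @pytest.mark.parametrize decorator (evaluated at collection time)
    # can't easily depend on.
    headings = load_documented_headings()
    for name in load_settings():
        # Each iteration is reported as its own subtest, so one missing
        # heading doesn't mask the others.
        with subtests.test(setting=name):
            assert name in headings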

I had Claude Code port several tests to the new pattern. I like it.

Tags: python, testing, ai, pytest, til, generative-ai, llms, ai-assisted-programming, coding-agents, claude-code

Two things that really matter

When analyzing the macro situations of countries or regions, I place more stress than many people do on the following two factors:

1. Human capital: How much active, ambitious talent is there?  And how high are the averages and medians?

2. Matching market demands: Are you geared up to produce what the market really wants, export markets or otherwise?

Those may sound trivial, but in relative terms they remain undervalued.  They are, for instance, the biggest reasons why I do not buy “the housing theory of everything.”

They are also, in my view, the biggest reasons why the UK currently is in economic trouble.  Both #1 (brain drain) and #2 have taken a hit in recent times.  The UK continues to deindustrialize, business consulting is not the future, and London as a financial centre was hurt by 2008, Brexit, and superior innovations elsewhere.  More and more smart Brits are leaving for the US or Dubai.

You also will notice that #1 and #2, when they are in trouble, are not always easily fixed.  That is why reforms, while often a good idea, are by no means an easy or automatic way out of trouble.

These two factors also are consistent with the stylized fact that growth rates from the previous decade are not so predictive of growth rates for the next decades.  Human capital often drives levels more than growth rates.  And matching market demands often has to do with luck, or with shifting patterns of demand that the supplying country simply cannot match.  Once people abandon Toyotas for Chinese electric cars, Japan does not have an easy pivot to make up the loss.

Most other theories of growth rates, for instance those that assign a predominant weight to institutions, predict much more serial correlation of growth rates than we find in the data.  That said, institutions do indeed matter, and in addition to their usual effects they will shape both #1 and #2 over the longer run.

Overall, I believe conclusions would be less pat and economic understandings would be more effective if people paid greater attention to these factors #1 and #2.  Not putting enough weight on #1 and #2 is one of the biggest mistakes I see smart people — and indeed very smart people — making.

Addendum: You will note the contributions of Fischer Black here.  Apart from his contributions to options pricing theory, which are widely known, he remains one of the most underrated modern economists.

The post Two things that really matter appeared first on Marginal REVOLUTION.

       


The Week Observed: December 5, 2025

What City Observatory Did This Week

Oregon’s Climate Failure: Transportation Emissions Keep Rising Despite Pledges

Oregon loves to talk climate action, but when it comes to actually reducing transportation emissions—the state’s largest source of greenhouse gases—it’s failing spectacularly. Portland metro area transportation emissions have increased 0.8 percent annually over five years, even as the state’s official goal calls for cutting them by more than five percent per year.

The culprit? A decade of wildly optimistic assumptions. When Oregon’s Land Conservation and Development Commission set targets in 2009, planners bet heavily on technology: cleaner cars, rapid electrification, consumers ditching SUVs for efficient vehicles. Every single assumption proved wrong. Americans are keeping their gas-guzzlers longer, buying bigger trucks and SUVs, and adopting electric vehicles at a crawl.

The 2009 rules wisely required a reality check every four years. That moment arrived December 5, when LCDC was supposed to reckon with these failures and adjust course. Instead? Staff recommended essentially no changes to targets that are already being missed by miles.

This is policy malpractice dressed up as climate leadership—making ambitious long-term pledges while ignoring present-day failure. With each passing year of inaction, the remaining window to address climate change shrinks. Oregon needs honest accounting and aggressive course correction, not more aspirational targets destined to be ignored.

 

Must Read

Time for foundations to hold grantees accountable for NIMBY lobbying.  In many ways, one of the biggest problems confronting the US–and a key cause of homelessness–is growing housing unaffordability, which is closely associated with local land use restrictions.  A wave of reformers have tried to push back against “Not in My Backyard” (NIMBY) opposition, to make it easier to build more housing, to increase supply, lower rents, and improve affordability.  But along the way, they’ve run into widespread opposition from ostensibly “progressive” groups that have taken the side of anti-housing groups.  Nate Resnikoff, writing in Inside Philanthropy, argues that these groups have benefited significantly from foundation funding, from many of the foundations that purport to care about poverty and affordability.  Resnikoff reports that

. . . at least $260 million in funding by 15 major foundations since 2018 to nonprofits across California that have actively opposed pro-housing legislation in recent years.

Resnikoff offers the example of Oakland’s PolicyLink, which has gotten tens of millions in grants from the Ford Foundation, the California Endowment and other donors.  PolicyLink has produced NIMBY-themed communications materials like a “Housing Justice Narrative Toolkit,” which downplays any role for increasing housing supply as a way to deal with affordability and homelessness.  PolicyLink opposed key YIMBY legislation (making it easier to build more housing) because it was a “trickle-down, market-based model.”

As Resnikoff writes, anti-YIMBY nonprofits in California

. . .  continue to argue that legalizing more housing production in high-cost cities will actually raise rents, despite — again — the large and continually growing body of evidence to the contrary. This was part of the grounds for their opposition to SB 79, a 2025 bill that legalizes denser housing construction near transit. No fewer than 29 nonprofits — including Strategic Actions for a Justice Economy (SAJE) (the recipient of at least $4.64 million from large foundations since 2018) and Public Advocates (which has received grants from top California foundations that include the Hewlett Foundation, the Silicon Valley Community Foundation and The California Endowment) — cosigned a letter opposing the bill.

 

Pennies for streetcars get scrutiny, billions for freeway widening get a free pass.  In Milwaukee, there’s a stark contrast between two proposed transportation expenditures.  One is the city’s $4 million or so annual subsidy to cover the operating costs of Milwaukee’s “Hop” streetcar line.  The other is the state highway department launching a multi-year, multi-billion dollar project to widen I-94 that’s likely to disrupt traffic (and business) for the next eight years or more.  If it stays on budget–as few such projects do–the I-94 East-West widening will cost $1.7 billion to widen about 3.5 miles of freeway (a mere $500 million per mile).  Local columnist Dan Schaefer writes that while one—the streetcar—is hotly debated in the City Council, the highway widening gets hardly a mention.

If I tell you they’re going to be spending billions to widen a highway in America, nobody panics because it’s all part of the plan. Spend a miniscule fraction of that on a light rail project and everybody loses their mind.

He quotes a recent letter to the editor in the city’s paper of record as emphasizing this point:

“Wisconsin, despite occasional detours into “alternative transit,” has wisely charted a course for true progress: adding lanes. The wisdom of this choice manifests itself all around us. Just look at the flawless, free-flowing utopias of Los Angeles, Chicago and Atlanta, where traffic congestion was solved decades ago.”

There’s a deep double standard in most discussions of transportation policy, with expensive and perennially ineffective spending on roads meriting no serious discussion, while even modest efforts to provide alternatives are put under a microscope.

New Knowledge

Demand destruction and community revival:  the Hammersmith Bridge.  Long-time readers of City Observatory will know the story of induced travel (that adding more road capacity simply elicits more travel, and does nothing to reduce congestion).  They’ll likely also know the story of traffic evaporation (the opposite effect: when road capacity is abruptly reduced, travel also quickly declines).  A new, and very nuanced, story from London shows how traffic flows, business activity, and community well-being change when a major transport artery is severed.

In April 2019, city officials closed the Hammersmith Bridge in  London, after finding serious cracks in the bridge’s more than century-old cast iron support members.

As is so often the case, the closure prompted predictions of carmageddon:  a combination of tortuous traffic jams and delays, and lasting economic damage to nearby businesses because of the loss of accessibility.  The bridge has been closed to vehicles since 2019, but despite dire predictions of economic dislocation, nothing like that has happened.  In fact, traffic patterns and the local economy have evolved in ways that are surprising.

When the bridge closed, about 25,000 vehicles crossed it daily and Transport for London (TfL) predicted a severe economic impact.

Six years later, 9,000 of those journeys have vanished – not diverted to other crossings, but simply evaporated. Yet the local economy has adapted, air quality has improved, and overall traffic congestion has lessened.

This counterintuitive outcome begs the question: are we actually solving the right problem?

The closure of the bridge quickly prompted changes in commuting patterns, and, significantly, did not inflict aggregate economic damage on nearby neighborhoods.  Charge card data showed that consumer spending in the local area increased faster than in the rest of London, driven mostly by increased patronage for local-serving retailers.  None of this is to say that closing bridges is a panacea, but the experience here shows that urban systems are much more resilient and adaptable than we generally think and that many trips get taken simply because there’s transport capacity to provide the trip.

In the News

Leslie Carlson of In Common Agency and City Observatory’s Joe Cortright co-authored an op-ed in the Portland Business Journal on the role that popular protests have played in reviving Portland’s civic spirit.  This fall, Portland residents have met masked, armed federal ICE paramilitaries (sent to the city based on false Presidential claims that the city was war-torn) with a combination of absurdity and joy, in the process providing a global symbol of protest.  It’s something to build on.

For years, many in the business community have lamented rising office vacancies and lagging foot traffic on downtown streets. Between Portland’s joyful protests at the ICE facility and our “No Kings” marchers, we have begun to reverse this trend, giving millions of people across the globe a glimpse of what makes our city great, while bringing tens of thousands of people downtown.

 

Oregon’s climate fail

For decades, Oregon has acknowledged the reality of climate change, and repeatedly pledged to do something about it.  Sadly, though, when it comes to the single largest source of greenhouse gases in the state, transportation emissions, Oregon is losing ground:  rather than cutting back emissions robustly as it has pledged to do since 2007, transportation greenhouse gases continue to increase.

Data gathered by ClimateTrace.org shows that over the past five years, transportation related greenhouse gas emissions in the Portland metropolitan area (Clackamas, Multnomah and Washington Counties) have increased from 8.2 million metric tons to about 8.5 million metric tons, about 0.8 percent per year.  In contrast, Oregon’s adopted goal for reducing transportation greenhouse gas emissions is for them to fall by more than five percent per year.


All this is particularly important today, because the state land use planning agency, the Land Conservation and Development Commission, is discharging its mandated duty to take a look at how well state efforts to reduce greenhouse gas emissions are working.  The 2009 Legislature directed LCDC to come up with measures to improve land use planning to reduce car dependence and contribute to state efforts to deal with climate change by planning for more compact development that lowers vehicle miles of travel (and thereby greenhouse gas emissions).

At the time the state first adopted rules, there was considerable uncertainty about the path of future technology:  would cleaner cars, cleaner fuel, and vehicle electrification drive down per-mile emissions enough to lower transportation greenhouse gases on their own, or would additional efforts to reduce emissions by lowering vehicle miles traveled (VMT) be necessary?  In adopting its rules a decade ago, LCDC made some heroic assumptions: that technology would steadily improve, that consumers would rapidly replace old, dirty vehicles with newer, cleaner ones, that the vehicle fleet would shift to smaller, more efficient cars, rather than larger, dirtier trucks and SUVs, and that much of the fleet would be electrified.  Then, based on those assumptions, LCDC determined that localities around the state should plan to reduce VMT to make up the rest of the needed reduction in greenhouse gases from transportation.

Unfortunately, virtually every assumption about cleaner cars, cleaner fuel, and less driving has proven to be wrong, and has consistently over-estimated emission reductions.  Owners are holding on to dirty old cars even longer than before; new cars tend to be larger, less efficient trucks and SUVs; relatively few cars are electric.

The upshot of all of these wrong assumptions is that the state is not just making less progress than hoped; rather total transportation emissions are increasing.  In short, the policy is failing.

To their credit, the architects of Oregon’s strategy recognized that their assumptions about future technology trends were, at best, speculative.  And, as a result, they prescribed a requirement that LCDC look every four years to see whether the targets they set to guide land use planning still made sense in light of actual progress.  That issue comes before the Land Conservation and Development Commission on December 5.  Unfortunately, the commission’s staff report simply ignores all of these many shortcomings, and proposes essentially no changes to the adopted targets.

It’s easy to pledge that, decades from now, you’ll do things to reduce greenhouse gases.  Oregon has done this for decades.   Much of the time that was available to address this problem in a serious way has already passed.  The failure to honestly acknowledge how little progress we’ve made, and how large is the task before us, makes a mockery of the state’s earnest pledge.

My mental model of the AI race

I left a loose end the other day when I said that AI is about intent and context.

That was when I said "what’s context at inference time is valuable training data if it’s recorded."

But I left it at that, and didn’t really get into why training data is valuable.

I think we often just draw a straight arrow from “collect training data,” like ingest pages from Wikipedia or see what people are saying to the chatbot, to “now the AI model is better and therefore it wins.”

But I think it’s worth thinking about what that arrow actually means. Like, what is the mechanism here?

Now all of this is just my mental model for what’s going on.

With that caveat:

To my mind, the era-defining AI company is the one that is the first to close two self-accelerating loops.

Both are to do with training data. The first is the general theory; the second is specific.


Training data for platform capitalism

When I say era-defining companies, to me there’s an era-defining idea, or at least era-describing, and that’s Nick Srnicek’s concept of Platform Capitalism (Amazon).

It is the logic that underpins the success of Uber, Facebook, Amazon, Google search (and in the future, Waymo).

I’ve gone on about platform capitalism before (2020) but in a nutshell Srnicek describes a process whereby

  • these companies create a marketplace that brings together buyers and sellers
  • they gather data about what buyers want, what sellers have, how they decide on each other (marketing costs) and how decisions are finalised (transaction costs)
  • then use that data to (a) increase the velocity of marketplace activity and (b) grow the marketplace overall
  • thereby gathering data faster, increasing marketplace efficiency and size faster, gathering data faster… and so on, a runaway loop.

Even to the point that in 2012 Amazon filed a patent on anticipatory shipping (TechCrunch) in which, if you display a strong intent to buy laundry tabs, they’ll put them on a truck and move them towards your door, only aborting delivery if you end up not hitting the Buy Now button.

And this is also kinda how Uber works right?

Uber has a better matching algorithm than you keeping the local minicab company on speed dial (which only works when you’re in your home location anyway), and surge pricing moves drivers to hotspots in anticipation of matching with passengers.

And it’s how Google search works.

They see what people click on, and use that to improve the algo which drives marketplace activity, and AdSense keyword cost incentivises new entrants which increases marketplace size.

So how do marketplace efficiency and marketplace size translate to, say, ChatGPT?

ChatGPT can see what success looks like for a “buyer” (a ChatGPT user).

They generate an answer; do users respond well to it or not? (However that is measured.)

So that usage data becomes training data to improve the model to close the gap between user intent and transaction.
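To make that mechanism concrete, here’s a minimal sketch of what “usage data becomes training data” might look like in code. Everything here is hypothetical (the field names, the reward signal, the threshold); it’s an illustration of the shape of the loop, not anyone’s actual pipeline.

```python
# Hypothetical sketch of the usage-data-to-training-data loop described above.
# None of these names reflect any real provider's pipeline.
import json
from dataclasses import dataclass, asdict

@dataclass
class Interaction:
    prompt: str      # what the user asked, plus whatever context was recorded
    response: str    # what the model generated
    reward: float    # some success signal: a thumbs up, a click, a completed purchase

def log_interaction(log_file: str, interaction: Interaction) -> None:
    """Record one live turn; context at inference time becomes data at rest."""
    with open(log_file, "a") as f:
        f.write(json.dumps(asdict(interaction)) + "\n")

def build_training_set(log_file: str, reward_threshold: float = 0.5) -> list[dict]:
    """Keep the turns users responded well to; these become fine-tuning examples."""
    examples = []
    with open(log_file) as f:
        for line in f:
            record = json.loads(line)
            if record["reward"] >= reward_threshold:
                examples.append({"input": record["prompt"], "target": record["response"]})
    return examples

# Each product cycle: ship the model, log usage, filter by success, retrain, repeat.
```

The point is only that the product has to be instrumented so that intent, response and outcome land in the same record; that record is the asset.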

Right now, ChatGPT itself is the “seller”. To fully close the loop, they’ll need to open up to other sellers, with ChatGPT itself transitioning to being the market-maker (and taking a cut of transactions).

And you can see that process with the new OpenAI shopping feature, right?

This is the template for all kinds of AI app products: anything that people want, any activity, if there’s a transaction at the end, the model will bring buyers and sellers closer together – marketplace efficiency.

Also there is marketplace size.

Product discovery: OpenAI can see what people type into ChatGPT. Which means they know how to target their research way better than the next company which doesn’t have access to latent user needs like that.

So here, training data for the model mainly comes from usage data. It’s a closed loop.

But how does OpenAI (or whoever) get the loop going in the first place?

With some use cases, like (say) writing a poem, the “seed” training data was in the initial web scrape; with shopping the seed training data came as a result of adding web search to chat and watching users click on links.

But there are more interesting products…

How do product managers triage tickets?

How do plumbers do their work?

You can get seed training data for those products in a couple of ways, but I think there’s an assumption that the AI companies need to trick people out of their data: by being present in their file system, or by adding an AI agent to their SaaS software at work, then hiding something in the terms of service that says the data can be used to train future models.

I just don’t feel like that assumption holds, at least not for the biggest companies.

Alternate access to seed training data method #1: just buy it.

I’ll take one example which is multiplayer chat. OpenAI just launched group chat in ChatGPT:

We’ve also taught ChatGPT new social behaviors for group chats. It follows the flow of the conversation and decides when to respond and when to stay quiet based on the context of the group conversation.

Back in May I did a deep dive into multiplayer AI chat. It’s really complicated. I outlined all the different parts of conversational turn-taking theory that you need to account for to have a satisfying multiplayer conversation.

What I didn’t say at the end of that post was that, if I were building it, the whole complicated breakdown I provided is not what I would do.

Instead I would find a big corpus of group chats for seed data and just train the model against that.

And it wouldn’t be perfect but it would be good enough to launch a product, and then you have actual live usage data coming in and you can iteratively train from there.
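As a sketch of what “just train the model against a corpus of group chats” could mean in practice, here’s one plausible (and entirely hypothetical) way to turn transcripts into seed examples. It includes a stay-quiet target so the model can also learn when not to speak, the behaviour described in the launch note above; the corpus, the speaker roles and the special token are all assumptions.

```python
# Hypothetical sketch: turning group-chat transcripts into seed training examples.
# The corpus, the speaker roles, and the <no_reply> token are all assumptions.

STAY_QUIET = "<no_reply>"

def make_examples(transcript: list[dict], target_speaker: str) -> list[dict]:
    """transcript: ordered messages like {"speaker": "alice", "text": "..."}.

    At every point in the chat, the input is everything said so far; the target is
    target_speaker's next message if they spoke next, otherwise STAY_QUIET, so the
    model also learns conversational turn-taking (when to hold back).
    """
    examples = []
    for i in range(1, len(transcript)):
        context = "\n".join(f'{m["speaker"]}: {m["text"]}' for m in transcript[:i])
        nxt = transcript[i]
        target = nxt["text"] if nxt["speaker"] == target_speaker else STAY_QUIET
        examples.append({"input": context, "target": target})
    return examples

chat = [
    {"speaker": "alice", "text": "anyone free for lunch?"},
    {"speaker": "bob", "text": "can't, back-to-back meetings"},
    {"speaker": "carol", "text": "I'm free at 12:30 if that works"},
]
print(make_examples(chat, target_speaker="carol"))
```

Once the product is live, the incoming usage data fits the same format, which is why the seed corpus only has to be good enough.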

Where did that seed data come from for OpenAI? I don’t know. There was that Reddit deal last year; maybe it was part of the bundle.

So they can buy data.

Or they can make it.

Alternate access to seed training data #2: cosplay it.

Every so often you hear gossip about how seed training data can be manufactured… I remember seeing a tweet about this a few months ago and now there’s a report:

AI agents are being trained on clones of SaaS products.

According to a new @theinformation report, Anthropic and OpenAI are building internal clones of popular SaaS apps so that they can train AI agents how to use them.

Internal researchers are giving the agents cloned, fake versions of products like Zendesk and Salesforce to teach the agents how to perform the tasks that white collar workers currently do.

The tweet I ran across was from a developer saying that cloning business apps for the purpose of being used in training was a sure-fire path to a quick acquisition, but that it felt maybe not ok.

My point is that AI companies don’t need to sneak onto computers to watch product managers triaging tickets in Linear. Instead, given the future value is evident, it’s worth it to simply build a simulation of Linear, stuff it with synthetic data, then pay fake product managers to cosplay managing product inside fake Linear, and train off that.
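To make the cosplay route concrete, here’s a toy sketch of what might be harvested from a cloned app: a simulated tracker stuffed with synthetic tickets, a scripted “product manager” working the queue, and the resulting state-action trajectory saved as training data. FakeTracker and everything in it is invented for illustration; it is not a description of how any lab actually builds these environments.

```python
# Toy sketch of harvesting agent trajectories from a cloned ticket tracker.
# "FakeTracker" is an invented stand-in for a simulated Linear/Zendesk-style app.
import random
from dataclasses import dataclass, field

@dataclass
class FakeTracker:
    """A minimal simulated ticket queue stuffed with synthetic tickets."""
    tickets: list[dict] = field(default_factory=list)

    def seed(self, n: int = 5) -> None:
        severities = ["low", "medium", "high"]
        self.tickets = [
            {"id": i, "title": f"Synthetic bug #{i}", "severity": random.choice(severities), "status": "open"}
            for i in range(n)
        ]

    def triage(self, ticket_id: int, label: str) -> dict:
        ticket = next(t for t in self.tickets if t["id"] == ticket_id)
        ticket["status"] = label
        return ticket

def record_cosplay_session(env: FakeTracker) -> list[dict]:
    """A scripted 'product manager' works the queue; each step becomes a (state, action) pair."""
    trajectory = []
    for ticket in list(env.tickets):
        action = "escalated" if ticket["severity"] == "high" else "backlog"
        trajectory.append({"state": dict(ticket), "action": action})
        env.triage(ticket["id"], action)
    return trajectory

env = FakeTracker()
env.seed()
print(record_cosplay_session(env))  # seed trajectories to train an agent against
```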

Incidentally, the reason I keep saying seed training data is that the requirement for it is one-off. Once the product loop has started, the product creates its own. Which is why I don’t believe that revenue from licensing social network data or scientific papers is real. There will be a different pay-per-access model in the future.

I’m interested in whether this model extends to physical AI.

Will they need lanyards around the necks of plumbers in order to observe plumbing and to train the humanoid robots of the future?

Or will it be more straightforward to scrape YouTube plumbing tutorials to get started, and then build a simulation of a house (physical or virtual, in Unreal Engine) and let the AI teach itself?

What I mean is that AI companies need access to seed training data, but where it comes from is product-dependent and there are many ways to skin a cat.


That’s loop #1 – an LLM-mediated marketplace loop that (a) closes on transactions and (b) throws off usage data that improves market efficiency and reveals other products.

Per-product seed training data is a one-off investment for the AI company and can be found in many ways.

This loop produces cash.


Coding is the special loop that changes everything

Loop #2 starts with a specific product from loop #1.

A coding product isn’t just a model which is good at understanding and writing code. It has to be wrapped in an agent for planning, and ultimately needs access to collaboration tools, AI PMs, AI user researchers, and all the rest.

I think it’s pretty clear now that coding with an agent is vastly quicker than a human coding on their own. And not just quicker but, from my own experience, I can achieve goals that were previously beyond my grasp.

The loop closes when coding agents accelerate the engineers who are building the coding agents and also, as a side effect, working on the underlying general purpose large language model.

There’s an interesting kind of paperclip maximisation problem here which is, if you’re choosing where to put your resources, do you build paperclip machines or do you build the machines to build the paperclip machines?

Well, it seems like all the big AI companies have made the same call right now, which is to pile their efforts into accelerating coding, because doing that accelerates everything else.


So those are the two big loops.

Whoever closes those loops first will win; that’s how I think about it.

I want to add two notes on this.


On training data feeding the marketplace loop:

Running the platform capitalism/marketplace loop is not the only way for a company to participate in the AI product economy.

Another way is to enable it.

Stripe is doing this. They’re working hard to be the default transaction rails for AI agents.

Apple has done this for the last decade or so of the previous platform capitalism loop. iPhone is the place to reach people for all of Facebook, Google, Amazon, Uber and more.

When I said before that AI companies are trying to get closer to the point of intent, part of what I mean is that they are trying to figure out a way that a single hardware company like Apple can’t insert itself into the loop and take its 30%.

Maybe, in the future, device interactions will be super commoditised. iPhone’s power is that it bundles together an interaction surface, connectivity, compute, identity and payment, and we have one each. It’s interesting to imagine what might break that scarcity.


On coding tools that improve coding tools:

How much do you believe in this accelerating, self-improving loop?

The big AI research labs all believe – or at least, if they don’t believe, they believe that the risk of being wrong is worse.

But, if true, “tools that make better tools that allow grabbing bigger marketplaces” is an Industrial Revolution-like driver: technology went from the steam engine to the transistor in less than 200 years. Who knows what will happen this time around.

Because there’s a third loop to be found, and that’s when the models get so good that they can be used for novel R&D, and the AI labs (who have the cash and access to the cheapest compute) start commercialising wheels with weird new physics or whatever.

Or maybe it’ll stall out. Hard to know where the top of the S-curve is.



Friday 5 December 1662

Up, it being a snow and hard frost, and being up I did call up Sarah, who do go away to-day or to-morrow. I paid her her wages, and gave her 10s. myself, and my wife 5s. to give her. For my part I think never servant and mistress parted upon such foolish terms in the world as they do, only for an opinion in my wife that she is ill-natured, in all other things being a good servant. The wench cried, and I was ready to cry too, but to keep peace I am content she should go, and the rather, though I say nothing of that, that Jane may come into her place.

This being done, I walked towards Guildhall, thither being summoned by the Commissioners for the Lieutenancy; but they sat not this morning. So meeting in my way W. Swan, I took him to a house thereabouts, and gave him a morning draft of buttered ale; he telling me still much of his Fanatique stories, as if he were a great zealot, when I know him to be a very rogue. But I do it for discourse, and to see how things stand with him and his party; who I perceive have great expectation that God will not bless the Court nor Church, as it is now settled, but they must be purified. The worst news he tells me, is that Mr. Chetwind is dead, my old and most ingenious acquaintance. He is dead, worth 3,000l., which I did not expect, he living so high as he did always and neatly. He hath given W. Symons his wife 300l., and made Will one of his executors.

Thence to the Temple to my counsel, and thence to Gray’s Inn to meet with Mr. Cole but could not, and so took a turn or two in the garden, being very pleasant with the snow and frost. Thence to my brother’s, and there I eat something at dinner and transcribed a copy or two of the state of my uncle’s estate, which I prepared last night, and so to the Temple Church, and there walked alone till 4 or 5 o’clock, and then to my cozen Turner’s chamber and staid there, up and down from his to Calthrop’s and Bernard’s chambers, till so late, that Mr. Cole not coming, we broke up for meeting this night, and so taking my uncle Thomas homewards with me by coach, talking of our desire to have a peace, and set him down at Gracious-street end, and so home, and there I find Gosnell come, who, my wife tells me, is like to prove a pretty companion, of which I am glad. So to my office for a little business and then home, my mind having been all this day in most extraordinary trouble and care for my father, there being so great an appearance of my uncle’s going away with the greatest part of the estate, but in the evening by Gosnell’s coming I do put off these thoughts to entertain myself with my wife and her, who sings exceeding well, and I shall take great delight in her, and so merrily to bed.


“…The Laws Are Designed Specifically to Prevent That from Being OK.”

In a very good piece by Elie Mystal about Defense Secretary Hegseth’s recent murder spree, he writes (boldface mine):

What it all comes down to is this: If we’re not at war, Hegseth is a murderer; if we are at war, Hegseth is still a murderer. Hegseth and MAGA keep trying to throw up justifications to allow them to kill 83 defenseless people without evidence, and I’m telling you that the laws are designed specifically to prevent that from being OK.

In a more general sense, this is something that has become all too common in the Trump era. There is a belief in the validity of legal casuistry that defies the obvious meaning of the law. For example, there were (still are) Trumpist apparatchiks who argued that the 22nd Amendment did not prevent Trump from running for a third term, even though it’s obvious what the 22nd Amendment says, as well as the original intent (remember that?) of its drafters.

So remember: when Trumpists make a ridiculous argument, we might have to take the argument seriously because they have power, but the laws often say otherwise, and we shouldn’t let them gaslight us into thinking they don’t.

Links 12/5/25

Links for you. Science:

In which I fruitlessly beg NIH grant-seeking folks to focus on what is actually important.
New Flu Variant Could Bring Another Severe U.S. Season
Washington resident dies of complications from bird flu strain never before reported in humans
The missing heritability question is now (mostly) answered
CDC data confirms US is 2 months away from losing measles elimination status
How ProPublica Investigated a Bird Flu Outbreak in America’s Heartland

Other:

Today is the day Trump lost control of the Republicans
MAGA Disarray And The Accumulated Debt Of Corruption. They’d bring America down rather than face accountability, but accountability is not optional.
Olivia Nuzzi Addresses The Worm
You Won’t Have Marjorie Taylor Greene To Kick Around Anymore
Basketball Is Too Hard For All This Soft Tissue
Imagining A Lame-Duck Power Grab
Joe Rogan Subreddit Bans ‘Political Posts’ But Still Wants ‘Free Speech’
Shocker: Appointing A 29-Year-Old Derelict Pet Store Owner With No Politics Experience To San Francisco Board Of Supervisors Not Actually A Masterstroke
Ukraine Is Jamming Russia’s ‘Superweapon’ With a Song
How a Cash-Strapped Louisiana is Profiting from Trump’s Deportation Frenzy
Larry Summers Recedes From Public Life Due To His Gross Emails With Jeffrey Epstein
“I shouldn’t have done this.”
Hollywood’s A.I. Doppelgänger Hunters
Musk and Epstein in Hell
A lot of axolotls: the amphibian-themed banknote Mexicans don’t want to spend
A Climate ‘Shock’ Is Eroding Some Home Values. New Data Shows How Much.
The online clip factory that’s radicalizing teen boys. How a wildly popular YouTube “dating” show heaps mockery on women to create “so many more incels”
Raising taxes on the ultrarich: A necessary first step to restore faith in American democracy and the public sector
Trump had a bad week. Fox News polling suggests things are going to get worse.
The Glen Powell Paradox
Donald Trump is relishing his pardon power — and using it
Not-So-Private Ryan: Congressman Pat Ryan, a Democrat from the deep purple heart of New York’s Hudson Valley, thinks his party has a lot of work to do despite its off-year election win. Herewith, he makes the case for a broad Democratic coalition, praises the pugnacity of A.O.C., and explains why Trump isn’t a lame duck—yet.
How Trump’s MRI spurred the Epstein vote
Rep. Tlaib’s Economic Dignity for All Agenda Would Bring Economic Security for Millions
‘Maybe It’s Time to Pick a Fucking Side,’ Says Murphy After Trump Calls for Execution of Lawmakers
Grand jury investigating potential misconduct in probes of Trump critics (Night of the Long Sporks)
‘Time for Them to Leave’: Charlotte Communities Rise Up Against ICE Invasion
Charlotte raids expose hollow core of Trump’s immigration crackdown
‘The Main Course Is Inflation’: Thanksgiving Costs Surge Under Trump
The Way Billionaires Are Using AI May Cause Concern They Have Actual Brain Damage
The trustbuster advising Mamdani who could bring aggressive approach to affordability
Is This Finally and Blessedly the End of the Larry Summers Era?
Charges dropped against Utah doctor accused of throwing away $28,000 in COVID vaccine doses

AAR Rail Traffic in November: "Continued Economic Uncertainty Reflected in Rail Volumes"

From the Association of American Railroads (AAR) AAR Data Center. Graph and excerpts reprinted with permission.
Continued Economic Uncertainty Reflected in Rail Volumes
...
In November 2025, total U.S. rail carloads were up 1.5% over November 2024, and 9 of the 20 major rail carload categories posted year-over-year gains. ...

U.S. rail intermodal shipments, which are driven primarily by consumer goods, fell 6.5% in November 2025 from November 2024. Year-to-date intermodal volume through November was 13.00 million containers and trailers, up 1.9% (nearly 247,000 units) over last year.
emphasis added
[Graph: Intermodal]
The AAR Freight Rail Index (FRI) combines seasonally adjusted month-to-month rail intermodal shipments with carloads excluding coal and grain. The index is a useful gauge of underlying freight demand associated with the industrial and consumer economy. The index fell 0.4% in November 2025 from October 2025, its seventh decline in the past eight months. The index is 4.4% below its year-earlier level, largely because of the intermodal slowdown in recent months.

They have to be able to talk about us without us

It’s absolutely vital to be able to communicate effectively and efficiently to large groups of people. I’ve been lucky enough to get to refine and test my skills in communicating at scale for a few decades now, and the power of talking to communities is the one area where I’d most like to pass on what I’ve learned, because it’s this set of skills that can have the biggest effect on deciding whether good ideas and good work can have their greatest impact.

My own work crosses many disparate areas. Over the years, I’ve gotten to cycle between domains as distinct as building technology platforms and products for developers and creators, enabling activism and policy advocacy in service of humanist ideals, and more visible external-facing work such as public speaking or writing in various venues like magazines or on this site. (And then sometimes I dabble in my other hobbies and fun stuff like scholarship or research into areas like pop culture and media.)

What’s amazing is, in every single one of these wildly different areas, the exact same demands apply when trying to communicate to broad groups of people. This is true despite the broadly divergent cultural norms across all of these different disciplines. It can be a profoundly challenging, even intimidating, job to make sure a message is being communicated accurately, and in high fidelity, to everyone that you need to reach.

That vital task of communicating to a large group gets even more daunting when you inevitably realize that, even if you were to find the perfect wording or phrasing for your message, you’d still never be able to deliver your story to every single person in your target audience by yourself anyway. There will always be another person whom you’re trying to reach that you just haven’t found yet. So, is it hopeless? Is it simply impossible to effectively tell a story at scale if you don’t have massive resources?

It doesn’t have to be. We can start with one key insight about what it takes to get your most important stories out into the world. It’s a perspective that seems incredibly simple at first, but can lead to a pretty profound set of insights.

They have to be able to talk about us without us.

They have to be able to talk about us without us. What this phrase means, in its simplest form, is that you have to tell a story so clear, so concise, so memorable and evocative that people can repeat it for you even after you’ve left the room. And the people who hear it need to be able to do this the first time they hear the story. Whether it’s the idea behind a new product, the core promise of a political campaign, or the basic takeaway from a persuasive essay (guess what the point of this one is!) — not only do you have to explain your idea and make your case, you have to be teaching your listener how to do the same thing for themselves.

This is a tall order, to be sure. In pop music, the equivalent is writing a hit where people feel like they can sing along to the chorus by the time they get to the end of the song for the first time. Not everybody has it in them to write a hook that good, but if you do, that thing is going to become a classic. And when someone else has done it, you know it because it gets stuck in your head. Sometimes you end up humming it to yourself even if you didn’t want to. Your best ideas — your most vital ideas — need to rest on a messaging platform that solid.

Delivering this kind of story actually requires substance. If you’re trying to fake it, or to force a narrative out of fluff or fakery, that will very immediately become obvious. When you set out to craft a story that travels in your absence, it has to have a body if it’s going to have legs. Bullshit is slippery and smells terrible, and the first thing people want to do when you leave the room is run away from it, not carry it with them.

The mission is the message

There’s another challenge to making a story that can travel in your absence: your ego has to let that happen. If you make a story that is effective and compelling enough that others can tell it, then, well…. those other people are going to tell it. Not you. They’ll do it in their own words, and in their own voices, and make it theirs. They may use a similar story, but in their own phrasing, so it will resonate better with their people. This is a gift! They are doing you a kindness, and extending you great generosity. Respond with gratitude, and be wary of anyone who balks at not getting to be the voice or the face of a message themselves. Everyone gets a turn telling the story.

Maybe the simple fact that others will be hearing a good story for the first time will draw them to it, regardless of who the messenger is. Sometimes people get attached to the idea that they have to be the one to deliver the one true message. But a core precept of “talk about us without us” is that there’s a larger mission and goal that everyone is bought into, and this demands that everyone stay aligned to their values rather than to their own personal ambitions around who tells the story.

The question of who will be most effective is what decides who tells the story in any given context. And this is a forgiving environment, because even if someone doesn’t get to be the voice one day, they’ll get another shot, since repetition and consistency are also key parts of this strategy, thanks to the disciplined approach that it brings to communication.

The joy of communications discipline

At nearly every organization where I’ve been in charge of onboarding team members in the last decade or so, one of the first messages we’ve presented to our new colleagues is, “We are disciplined communicators!” It’s a message that they hopefully get to hear as a joyous declaration, and as an assertion of our shared values. I always try to explicitly instill this value into teams I work with because, first, it’s good to communicate values explicitly, but also because this is a concept that is very seldom directly stated.

It is ironic that this statement usually goes unsaid, because nearly everyone who pays attention to culture understands the vital importance of disciplined communications. Brands that are strictly consistent in their use of things like logos, type, colors, and imagery get such wildly-outsized cultural impact in exchange for relatively modest investment that it’s mind-boggling to me that more organizations don’t insist on following suit. Similarly, institutions that develop and strictly enforce a standard tone of voice and way of communicating (even if the tone itself is playful or casual) capture an incredibly valuable opportunity at minimal additional cost relative to how much everyone’s already spending on internal and external communications.

In an era where every channel is being flooded with AI-generated slop, and when most of the slop tools are woefully incapable of being consistent about anything, simply showing up with an obviously-human, obviously-consistent story is a phenomenal way of standing out. That discipline demonstrates all the best of humanity: a shared ethos, discerning taste, joyful expression, a sense of belonging, an appealing consistency. And best of all, it represents the chance to participate for yourself — because it’s a message that you now know how to repeat for yourself.

Providing messages that individuals can pick up and run with on their own is a profoundly human-centric and empowering thing to do in a moment of rising authoritarianism. When the fascists in power are shutting down prominent voices for leveling critiques that they would like to censor, and demanding control over an increasingly broad number of channels, there’s reassurance in people being empowered to tell their own stories together. Seeing stories bubble up from the grassroots in collaboration, rather than being forced down upon people from authoritarians at the top, has an emotional resonance that only strengthens the substance of whatever story you’re telling.

How to do it

Okay, so it sounds great: Let’s tell stories that other people want to share! Now, uh… how do we do it? There are simple principles we can follow that help shape a message or story into one that is likely to be carried forward by a community on its own.

  • Ground it in your values. When we began telling the story of my last company Glitch, the conventional wisdom was that we were building a developer tool, so people would describe it as an “IDE” — an “integrated development environment”, which is the normal developer jargon for the tool coders use to write their code in. We never described Glitch that way. From day one, we always said “Glitch is the friendly community where you'll build the app of your dreams” (later, “the friendly community where everybody builds the internet”). By talking about the site as a friendly community instead of an integrated development environment, it was crystal clear what expectations and norms we were setting, and what our values were. Within a few months, even our competitors were describing Glitch as a “friendly community” while they were trying to talk about how they were better than us about some feature or the other. That still feels like a huge victory — even the competition was talking about us without us! Make sure your message evokes the values you want people to share with each other, either directly or indirectly.
  • Start with the principle. This is a topic I’ve covered before, but you can't win unless you know what you're fighting for. Identify concrete, specific, perhaps even measurable goals that are tied directly to the values that motivate your efforts. As noted recently, Zohran Mamdani did this masterfully when running for mayor of New York City. While the values were affordability and the dignity of ordinary New Yorkers, the clear, understandable, measurable principle could be something as simple as “free buses”. This is a goal that everyone can get in 5 seconds, and can explain to their neighbor the first time they hear it. It’s a story that travels effortlessly on its own — and that people will be able to verify very easily when it’s been delivered. That’s a perfect encapsulation of “talk about us without us”.
  • Know what makes you unique. Another way of putting this is to simply make sure that you have a sense of self-awareness. But the story you tell about your work or your movement has to be specific. There can’t be platitudes or generalities or vague assertions as a core part of the message, or it will never take off. One of the most common failure states for this mistake is when people lean on slogans. Slogans can have their use in a campaign, for reminding people about the existence of a brand, or supporting broader messaging. But very often, people think a slogan is a story. The problem is that, while slogans are definitely repeatable, slogans are almost definitionally too vague and broad to offer a specific and unique narrative that will resonate. There’s no point in having people share something if it doesn’t say something. I usually articulate the challenge here like this: Only say what only you can say.
  • Be evocative, not comprehensive. Many times, when people are passionate about a topic or a movement, the temptation they have in telling the story is to work in every little detail about the subject. They often think, “if I include every detail, it will persuade more people, because they’ll know that I’m an expert, or it will convince them that I’ve thought of everything!” In reality, when people are not subject matter experts on a topic, or if they’re not already intrinsically interested in that topic, hearing a bunch of extensive minutia about it will almost always leave them feeling bored, confused, intimidated, condescended-to, or some combination of all of these. Instead, pick a small subset of the most emotionally gripping parts of your story, the aspects that have the deepest human connection or greatest relevance and specificity to the broadest set of your audience, and focus on telling those parts of the story as passionately as possible. If you succeed in communicating that initial small subset of your story effectively, then you may earn the chance to tell the other more complex and nuanced details of your story.
  • Your enemies are your friends. Very often, when people are creating messages about advocacy, they’re focused on competition or rivals. In the political realm, this can be literal opposing candidates, or the abstraction of another political party. In the corporate world, this can be (real or imagined) competitive products or companies. In many cases, these other organizations or products or competitors occupy so much more mental space in your mind, or your team’s mind, than they do in the mind of your potential audience. Some of your audience has never heard of them at all. And a huge part of your audience thinks of you and your biggest rival as… basically the same thing. In a business or commercial context, customers can barely keep straight the difference between you and your competition — you’re both just part of the same amorphous blob that exists as “the things that occupy that space”. Your competitor may be the only other organization in the world that’s fighting just as hard as you are to create a market for the product that you’re selling. The same is true in the political space; sometimes the biggest friction arises over the narcissism of small differences. What we can take away from these perspectives is that our stories have to focus on what distinguishes us, yes, but also on what we might have in common with those whom we might otherwise have perceived to have been aligned with the “enemy”. Those folks might not have sworn allegiance to an opposing force; they may simply have chosen another option out of convenience, and not even seen that choice as being in opposition to your story at all.
  • Find joy in repetition. Done correctly, a disciplined, collaborative, evocative message can become a mantra for a community. There’s a pride and enthusiasm that can come from people becoming proficient in sharing their own version of the collective story. And that means enjoying when that refrain comes back around, or when a slight improvement in the core message is discovered, and everyone finds a way to refine the way they’re communicating about the narrative. A lot of times, people worry that their team will get bored if they’re “just telling the same story over and over all the time”. In reality, as a brilliant man once said, there’s joy in repetition.
  • Don’t obsess over exact wording. This one is tricky; you might say, “but you said we have to be disciplined communicators!” And it’s true: it’s important to be disciplined. But that doesn’t mean you can’t leave room for people to put their own spin on things. Let them translate to their own languages or communities. Let them augment a general principle with a specific, personal connection. If they have their own authentic experience which will amplify a story or drive a point home, let them weave that context into the consistent narrative that’s been shared over time. As long as you’re not enabling a “telephone game” where the story starts to morph into an unrecognizable form, it’s perfectly okay to add a human touch by going slightly off script.

Share the story

Few things are more rewarding than when you find a meaningful narrative that resonates with the world. Stories have the power to change things, to make people feel empowered, to galvanize entire communities into taking action and recognizing their own power. There’s also a quiet reward in the craft and creativity of working on a story that travels, in finding notes that resonate with others, and in challenging yourself to get far enough out of your own head to get into someone else’s heart.

I still have so much to learn about being able to tell stories effectively. I still screw it up so much of the time, and I can look back on many times when I wish I had better words at hand for moments that sorely needed them. But many of the most meaningful and rewarding moments of my life have been when I’ve gotten to be in community with others, as we were not just sharing stories together, but telling a united story together. It unlocks a special kind of creativity that’s a lot bigger than what any one of us can do alone.

Collections: Hoplite Wars, Part IIIa: An Archaic Phalanx?

This is the third part of our four-part series (I, II) discussing the debates surrounding ancient Greek hoplites and the formation in which they (mostly?) fought, the phalanx. Last week, we looked at the equipment which defined the hoplite – hoplite (ὁπλίτης), after all, means ‘equipped man’ – and how it weighs in on the debate.

And what I expressed last time is that I found the ‘strong’ versions of both the orthodox and heterodox arguments uncompelling. The notion that the hoplite was effectively an ultra-encumbered turtle who couldn’t fight outside of a close huddle simply doesn’t stand up when comparing hoplite equipment – heavy, but not extremely so; somewhat constraining, but not particularly so – to other historical heavy infantry equipment. At the same time, the heterodox vision, where hoplites are as at home in open-order or fluid skirmishing as they are in the confines of a shield wall, doesn’t hold up either. You can fight that way with hoplite equipment, but the panoply is terribly adapted for it while being very well adapted for the context of a shield wall, suggesting to me that this was always its primary intended purpose (albeit with a meaningful amount of flexibility built in).

We’re now going to carry those observations forward to discuss tactics. To the degree that the broad public understands the hoplite debates, they understand them as a debate over tactics and often reduce them to the question, “did they shove?” But there are quite a few more tactical questions here than simply the question of the nature of the othismos. As with some of the previous questions, a lot of these questions are linked but weakly so, meaning it is possible to a degree to ‘mix and match’ without adopting a position that is incoherent. So we’ll begin by outlining what I view as the main differences here and also some of the significant elements of those positions I see as meaningfully unsatisfactory.

As we’ll see chronology also matters here: while the orthodox school generally imagines hoplite warfare to have emerged all at once (a position we’ve already seen can no longer be sustained given the archaeological evidence), reached tactical maturity in the phalanx relatively quickly and then remained rigid and relatively unchanged until the end of the fifth century, the heterodox school instead argues for a lot more chronological change.

Now, I wanted to do the discussion of tactics in a single post so that we could get into some of the interesting implications for polis society more quickly, but there really are too many moving parts and I realized – at the point where I had run out of most of the week, written 7,000 words and barely gotten through the Archaic – that this post needed to be split. The split is, as a result, horribly awkward.

This week, we’re going to look at the ‘strong’ orthodox hoplite model (and dismiss it) and then at parts of the ‘strong’ heterodox model (which we’ll also find unsatisfying, but not entirely without value), before finally working through what a ‘proto-phalanx‘ of the late 600s or 500s might have looked like, thinking in terms of comparative models and what little evidence we have.

Then next week we’ll turn to the ‘mature’ phalanx of the classical period, looking at how we might imagine it functions – tactics, ‘standard’ depth, role of supporting arms, etc. – along with the broader question of defining what exactly the phalanx is (and why I think a more flexible definition is more useful).

Since we’re leaving the definitional work to next week, we’re going to avoid calling much of anything a ‘phalanx’ this week, even though these two posts are fundamentally about the phalanx. One of the things I view as a real problem in this debate is the hard definitional boundaries imposed by both sides, which derive from an overly rigid vision – Konijnendijk’s ‘Prussians’ again – of how the phalanx functioned. The problem is that while the orthodox insist that anything called a phalanx must fit that rigid (and as we’ll see, quite implausible) model, heterodox scholars often insist that anything that does not fit the model is not a phalanx, in order to push the date for ‘the phalanx’ back. In my view it is well past time to let the evidence lead the definition rather than the other way around – the phalanx is what the phalanx does, not how we define it – so we’ll lead with the evidence and revisit the definitional scrum only at the end.

As always, if you like what you are reading, please share it as I rely on word-of-mouth to find readers! And if you really like it, you can support this project over at Patreon; I don’t promise not to use the money to buy a full hoplite panoply, but I also don’t not promise to do that. And if you want updates whenever a new post appears, you can click below for email updates or follow me on Twitter and Bluesky for updates when posts go live and my general musings; I have largely shifted over to Bluesky (I maintain some de minimis presence on Twitter), given that it has become a much better place for historical discussion than Twitter.

Let Us Shove Off

As we’ve noted – nearly ad nauseam at this point – the orthodox and heterodox ‘camps’ differ both in their understanding of the chronology by which something called ‘the phalanx’ developed and in their sense of the mechanics of what something called ‘the phalanx’ was and how it functioned. I think both tactical models are substantially flawed. I should note that while putting this together, Paul Bardunias linked his own synthesis (presented here in video form), which I hadn’t seen developed in full. It is not exactly my synthesis, but it is actually pretty close (I think it is a perfectly good, defensible, plausible model, which is more than I can say for the ‘strong’ models we’re about to discuss), as we’ll see, and it is good to see someone working on a synthesis position.

One crucial difference between the orthodox and heterodox models of hoplite warfare is that orthodoxy generally imagines a tactically stable (or stagnant) phalanx: it doesn’t change after emerging and rapidly reaching ‘mature’ form. By contrast, the heterodox model assumes significant development over time. Now I do want to treat the evidence for tactics in the Archaic and Classical periods separately, because as we’ve already seen, I think the heterodox school is fundamentally correct in assuming meaningful change over time, but first I think it is worthwhile to dispense with the orthodox tactical vision, at least in its narrowest form. We ought to do that in the beginning because – since the orthodox view is that the phalanx is tactically stagnant – this model is supposed to be valid in every period. So rather than repeat myself, we can deal with it once here.

The modern version of orthodox hoplite tactics comes directly from The Western Way of War and so that is the ‘strong’ version of the model I will focus on here. The orthodox vision is that in a phalanx formation, hoplites were densely spaced (file widths of 45-60cm, shoulder-to-shoulder), they advanced at a run and then collided at speed with the two formations smashing together at full tilt. Then, the orthodox suppose the othismos was a kind of rugby-scrum style shoving match where the formations tried to push through each other (while also striking over and beneath shields) and as gaps and tears formed in the line from this pushing action, one phalanx would fall apart. Such fighting naturally fully excluded light infantry and cavalry. Moreover, as we’ve seen chronologically, the orthodox camp argues this form of warfare developed swiftly in the 8th and early 7th century and remained pure and unchanged from then to the late fifth century, a long period of relatively static hoplite warfare.

That vision exists within a sort of assumed framework, particularly among earlier scholars, as Roel Konijnendijk notes in his book,1 that derives more from early modern gunpowder warfare than from ancient warfare: there is an assumption of rigid command and control, supported by both training in arms (that is, practice with weapons as opposed to just fitness training) and drill (that is, practice moving in unison) of a sort that is, bluntly put, not really attested in our sources until the late Classical period (if even then). Victor Davis Hanson’s work, coming later out of the Face of Battle school, instead emphasizes the amateur citizen-soldier nature of hoplites (and thus doesn’t really assume lots of drill or practice) but keeps the rigid tactical system.

This vision is, frankly, nuts. No other shield wall behaves this way, shoving in a mass rugby scrum. It is physically possible – these presses have been demonstrated, it will not necessarily crush the men in the middle – but it cuts against human psychology in combat (humans tend not to want to stay in the ‘danger zone’ of enemy weapons – called ‘measure’ – for very long) and more important against the sort of casualty figures we get, which suggest losses for victors in hoplite battles could be relatively low and thus most casualties occurred after the rout.2 If this kind of shoving were normal, we’d expect knives and daggers, not spears, to be the weapon of choice (and I should note that while Greek swords are generally on the short side, a xiphos is not a knife or a dagger) and one man with a knife pressed at the front could make a terrible mess very quickly as he can easily stab over the shields of his enemies into the neck from the side where even the Corinthian helmet offers less than perfect protection. Indeed, notably, something like a combat dagger isn’t even a standard element of the hoplite’s kit (rare to see them in artwork) and won’t be a standard piece of equipment in the Eastern Mediterranean until the early Roman imperial period (by which point the Romans have fallen in love with a devilish dagger from Spain they call a pugio).3

Crucially, as heterodox scholars have been pointing out for decades now, nothing in the source tradition requires us to interpret othismos (a term that is not used in every or even most hoplite battles!) this literally: plenty of cultures describe ‘presses’ and ‘pushes’ of infantry that are not literal shoving. At no point does any source clearly describe the othismos as literal shoving; instead it is used to mean what we might term ‘coming into contact’ or ‘shock’ (e.g. Hdt. 7.225.1, 9.62.2, Xen. Anab. 5.2.17, etc.etc.), that is, two formations moving into melee range, or in the sense of a given ‘push’ of effort to achieve victory – we use the same phrase metaphorically of infantry assaults with guns that don’t involve anyone getting within 50 yards of a shoving match. While we start to see lines of men in Greek artwork, seemingly in close-order, as early as the 650s, we never see obvious scenes of mass shoving or even a lot of ‘combat grappling’ (it is hard to grapple with one hand secure in a two-point grip on a shield).4 It is striking that the orthodox school in its modern incarnation is thus arguing that the primary mode of high-status Greek hoplite warfare – the supposed shoving othismos – is both the core of experience of battle in the late Archaic and Classical Greek world and also never depicted in artwork, not even once. That is simply, to me, an unsustainable reading of the evidence.

I am struck that early modern European artwork furnishes more examples of nearly-scrum-like engagements (see below) involved in the push-of-pike, but even in the most chaotic push-of-pike scenes, soldiers are not shoving but instead have recourse to draw their swords (generally the katzbalger, which at 70-80cm is not very much larger than a xiphos or kopis) and cut with them.

Via Wikipedia, the classic Hans Holbein the Younger scene of a push of pike (early 16th cent.). I should note not every artist depicts these clashes this way – often they do seem to have been ‘poking matches’ at the edge of pike’s reach, but evidently could produce melees of this sort. That said, while we do see some men grappling at very close range with daggers, many still use their pikes or else draw their swords, suggesting there is still enough space, even in this mass, to use such weapons.

One may well imagine that two shield walls coming together may have created a temporary press similar to crowd collapses or rushes that happen sometimes at overcrowded concerts and similar crowded spaces, but there’s no sign this was the intended goal. As we’ll see in a moment, I suspect rival hoplite formations probably did often collide at some speed (though not perhaps intentionally), but if they did, I would expect them to ‘accordion’ back out rather than for the men in the rear to press their friends into the points of enemy spears. Crowd crushes happen because the psychological pressure is urging people in the back to push forward but in combat the psychological pressure is urging everyone to move away from the enemy.

Given how speculative and awkward the ‘shoving’ othismos is (as opposed, as we’ll see, to othismos-as-pulse) it is a bit frustrating that it persists in many reenactment circles, presumably because – as Roel Konijnendijk once suggested to me – it is a reasonably ‘safe’ way to do a hoplite reenactment as opposed to, you know, jabbing sharp weapons at people.

Problems pile up for the orthodox model from there. The very tight shoulder-to-shoulder spacing seems quite clearly to be a product of reasoning from modern musket formations; no shock formation I know of was ever this dense (including early modern pike formations). As we’ll see in a moment, I don’t think the spacing was loose generally (> 100cm file width), but I also do not think it was ultra-tight generally (< 60cm). Since we’re not shoving, after all, we need some space to actually use our shield and weapon (though nowhere near as much space as some heterodox scholars imagine, more on that next week).

Meanwhile, the developmental timeline does not work either: hoplite equipment didn’t emerge suddenly and so the ‘mature’ all-hoplite phalanx couldn’t have done so either. Moreover, as the heterodox will frequently note, light troops and cavalry continue to appear frequently in Archaic artwork and battle scenes, often intermingled with hoplites, suggesting they still have a battlefield role. Tyrtaeus, writing in the mid-7th century describes “You light-armed men, wherever you can aim/from the shield-cover, pelt them with great rocks/and hurl at them your smooth-shaped javelins” (Fr. 11 West, trans. West), which sure implies that the light-armed have a job to do even c. 650 or so and that it involves being at least in the same zip-code as the shield wall of hoplites (since they are aiming “from the shield-cover”). And of course throwing javelins and rocks would hardly be feasible if the two opposing lines were locked in contact in a shoving match, as you’d end up hitting your own fellows as often as the enemy. So this orthodox vision will not do, especially for the Archaic.

So what will work?

The Archaic Phalanx Did Not Pine For the Fjords

Having beaten up quite a lot on the orthodox vision, I think we must now turn and beat up a bit on the heterodox vision, particularly the version developed by Hans van Wees. Now here I want to note that while the orthodox school has effectively a single vision of hoplite combat, the heterodox school can sometimes contain multitudes and so not every ‘heterodox’ scholar shares Hans van Wees’ combat model. However it is also the case that Hans van Wees is also pretty much the only scholar in print to lay out a complete model, so we have to deal with it.

And I want to begin with a fairly big reasoning problem involving some dead birds. Hans van Wees, it must be noted, is coming at the question of Greek warfare chronologically from the ‘other side’ in that his work before Greek Warfare: Myths and Realities (2004) was focused on war and violence in Homer, so he is advancing forward from the early archaic towards the classical rather than reasoning backwards from the classical towards the archaic.

Van Wees presents in Greek Warfare and again in his chapter in Men of Bronze (2013) warfare among the Dani people of the highlands of Western Papua New Guinea as a kind of ‘key’ to understand Homeric warfare and thus early hoplite warfare. He cites for this Gardner and Heider, Gardens of War: life and death in the New Guinea Stone Age (1968), the print publication of this research, but most people, if they are aware of this work will be aware of it through the famous and foundational documentary film made during that research, Dead Birds (1963), also made by Robert Gardner. The film presents an idealized vision of a single battle among the Dani people, a people living with stone-age technology (no metal working) in the highlands of Papua New Guinea, though the footage is actually a pastiche of several battles fitted together. That said, Dead Birds is essentially the only footage we have of a society waging a real life-and-death battle with contact weapons.

This is an important piece of scholarship and a crucial tool in our understanding of warfare in the past and I have been on and on so far about how I think the study of hoplite warfare would benefit from comparative evidence so you may be expecting me to praise the use of this material as a tool for understanding Greek warfare, but I cannot.

Van Wees clearly reads this warfare – and perhaps, though he does not cite it, watches the film – and sees in it things Homer is describing (remember, he is coming at this originally as a Homerist): initially massed ranks that break up into no-order open skirmishes, spear-throwing, front line fighters advancing and retreating and so on.5 The failure here is not the effort to use comparative evidence (that’s a good instinct) but the failure to ask if the comparandum – the thing being compared6 – is a good match for warfare in the Greek archaic.

Via Wikipedia, warriors of the Dani people from the central highlands. Now we need to suspend our cultural assumptions for a moment and avoid focusing on whether these fellows look ‘strange’ (we probably look strange to them, and all of us would look strange to the Greeks). Instead, we want to ask whether these fellows are equipped to fight similarly to hoplites or other iron-age Greeks.
And the answer just has to be ‘no, obviously not.’ They don’t have helmets, or shields, or armor, or shields, or clothing, or shields, or iron-tipped spears, or shields, or swords of any kind OR SHIELDS.

Because it pretty clearly isn’t. In this documented last phase of Dani warfare (they don’t do these battles anymore), the Dani still had an effectively stone-age level of technology, compared to iron-age Greeks. I cannot stress this enough: that is a very big difference, an enormous gap in weapons and armor capabilities which in turn comes with enormous implications for tactics. Metal – be it bronze or iron (much less steel) – is so much better a material for weapons that it significantly alters battlefield dynamics.

The Dani fight not only unarmored, but almost entirely nude and do not generally use shields in contrast to armored Greeks and Homeric heroes whose armor ‘clatters’ (ἀρᾰβεῖν, ‘to rattle, clang, clatter’ (of armor)) to the ground when slain and who regularly bear shields. In part, this is because Dani weapons are much less lethal than iron-age weapons, a point that jumps out if one actually watches Dead Birds. These men are trying to kill each other (and to not be killed) but fighting at distance it takes a lot of luck for their weapons to actually inflict lethal harm (and indeed, the casualties for these battles are very low). An arrow with a bone tip, or a spear that is merely a sharpened wooden stake can only be so sharp. Multiple individuals in Dead Birds are hit by arrows or javelins which simply do not penetrate to lethal depth (though one man does eventually die of a wound) despite striking the target. Remember these are unarmored, nude combatants who have been hit directly with a weapon. The contrast with what a sharp, iron-tipped broadhead arrow launched from a war bow can do against an unarmored target is quite stark; ancient and medieval artwork regularly show combatants with arrows transfixed in their bodies – all the way through and out the other side. As is typical with ‘first system‘ warfare, the high casualty bursts in Dani warfare come not from battles, which are generally symbolic affairs, but from ambushes and raids.

But even Homer’s heroes are clearly practicing ‘second system’ warfare: they are laying siege to a large fortified city, with an army that Homer clearly understands to include tens of thousands of warriors (Homer’s Catalog of Ships, 2.494-756 describes the Greeks as bringing a total of 1,186 ships; if taken literally it might imply an army of c. 150,000, though of course this is all subject to heroic exaggeration). Those warriors wield weapons – typically described by Homer as bronze, though iron is known to him – and wear body armor and helmets and carry large shields. As van Wees notes (op. cit., 166), the most prominent weapon in early Archaic artwork is actually the sword (spears are very common too), a weapon which the Dani did not have and were not capable of manufacturing with any material available to them. Homer’s own world is part of a broader military system that by 750 BC includes large, sophisticated professional armies in the Middle East (the Neo-Assyrians), employing complex siege craft (indeed, more complex than what the Greeks will have for centuries) and increasingly true cavalry. Homer seems to be blending a vague memory of late bronze age warfare (chariots! bronze weapons!) with early iron age warfare on the edge of ‘civilization.’7

So while in absolute chronology the Dani are c. 2,700 years in Homer’s future, in a kind of relative developmental chronology, their warfare is at least two thousand years in Homer’s past (taking the Greek bronze age to start very roughly at c. 3200). We might as well be trying to use footage of Roman warfare as the key to understanding the World Wars. Sure, humans and human psychology don’t change, so there may be some valuable insights (and indeed there are some about human psychology in combat which are useful in pushing back against the orthodox model), but we would need to be alert to everything that is different, which is a lot.

Approaching Archaic warfare through the lens of Homer, the Dani and Dead Birds sets van Wees’ entire foundation askew. That doesn’t mean everything in his model is wrong, but it throws a lot of things off.

In particular, the van Wees model of archaic hoplite warfare runs thusly: hoplites emerge in the context of a kind of warfare that looks a lot like the way the Dani fight: extended skirmishes with missiles, with individual warriors occasionally running forward to take more risk (and be more lethal) doing battle at closer range, sometimes with javelins, sometimes with contact weapons (swords and spears). This is, for van Wees, the environment in which the hoplite emerges. Hoplites initially show up carrying two spears (one for throwing), which to van Wees suggests continued participation in the skirmish (see my doubt below) rather than being pure ‘shock’ specialists. For much of the archaic, in van Wees’ model, hoplites continue to fight in open order or even no order at all, with unarmored skirmishers – poorer Greeks – mixed in with them, taking cover behind the shields of hoplites in an intermixed and largely unorganized formation.

Over time, the hoplite grows gradually in importance, with other warriors not vanishing from artwork or literature (Tyrtaeus, importantly) but becoming less prominent; those lights remain scattered ‘here and there’ amidst the hoplites even well into the sixth century, with light infantry prominent on the battlefield even to the Persian Wars at the end of the archaic. Van Wees admits no regular formation for hoplites prior to the first explicit mention of such in text in 426 (Aristophanes, Babylonians, F. 72) and contends that intervals less than six feet (180cm!) would have been unworkable even in the classical period (op. cit. 185).

For van Wees, these formations do not rush into a collision and then the ‘shoving-match’ othismos, but rather charge to release the psychological pressure of the fear of battle (thus the Spartans, better disciplined, walking into contact)8 but then slow down to a stop eis doru (‘into spear’s reach’) to jab at each other with overhead spear strikes. Formation collapse is thus not a result of shoving; rather, the line of hoplites breaks due to psychological pressure and casualties (more the former than the latter).

And I should be clear at the outset: some of this is workable. But a lot of it is not.

As we’ve already seen, I think the idea that the hoplite panoply emerged for open-order skirmishing is simply not tenable: no one commits to open-order or no-order skirmishing wearing heavy armor and using a large round shield (instead, globally, the most common ‘kit’ for this kind of fighting in metal-working societies is little or no armor, but relatively large oblong shields that can provide full coverage for the body from missiles). Van Wees insists that a hoplite could advance and retreat just as well wearing their heavier equipment as a light infantryman (op. cit., 171) and that is just…obviously not true. The man in 4-8kg of equipment (a ‘light’) is obviously going to be able to run down the man in 18kg of equipment (the hoplite). That is a real liability in a ‘Dead Birds’ combat scenario because the ‘front’ moves so far forward and so far back: either side often mounts sudden advances which send the other side scurrying backwards. But if you are wearing 2-3 times as much kit as your mates, when your line scurries backwards to get out of range (and those lights aren’t sticking around for you; they’re unarmored and so in real danger of being instantly killed by close-range javelin or arrow shots), you are going to fall behind, those enemy lights are going to catch you, and all of the armor in the world isn’t going to save you in a fight outnumbered four-to-one.

And I think here is a good time to stop and talk about how hard it can be to interpret artwork and we can take for our example one of the most important pieces of evidence in all of this, the hoplite artwork on the Chigi Vase (c. 645 BC).

Via Wikimedia Commons, three images of the Chigi Vase’s hoplite scene (there is a second scene below), c. 645 BC. Use the flutist to keep your bearings as to how these images come together – there is only the one guy playing the flute (an aulos, technically). So from (our) left to right, we have a shield and some weapons on the ground and men who look like they are gearing up and running to join a battle line (bottom left), then we have the flutist, then a battle line (top) meeting another, with men in lines, spears raised, and then (bottom right) we have a better view of the second battle line, with shields presented as overlapping and a second line of men coming behind it.

And the thing is, almost every aspect of that evidence – which seems clear at first glance – is open to multiple interpretations, especially in the context of a two-decade-old fight where no one wants to admit they might have been wrong. We can begin with the weapons: while orthodox scholars will point to a dense formation of hoplite-armed heavy infantry (with no light infantry in sight!), Hans van Wees and other heterodox scholars point to the fact that each hoplite here carries two spears, potentially with throwing loops, and suggest that this two-spear configuration (which fades out by the end of the 600s) is indicative of hoplites still skirmishing.

And I want to stop for a minute and examine that point, because I think it is suggestive of one of the problems I keep coming back to in these debates: “having a throwing spear alongside a thrusting spear means you probably skirmish” is a position that cannot survive a working knowledge of ancient Mediterranean warfare, much less warfare generally. After all, Roman heavy infantry famously carry two javelins (the pilum) and yet are very clearly shock heavy infantry.9 Likewise, in Spain among both Iberians and Celtiberians, a javelin (frequently of the soliferreum type, sometimes of other types) was a standard weapon to pair with the ubiquitous thrusting spear; we very frequently find them in pairs in grave deposits, suggesting they were basically always carried one-and-one, yet Fernando Quesada Sanz has spent the last two decades arguing – persuasively – that Iberian and Celtiberian warriors fought frequently as ‘line infantry’ in a sort of shield wall.10 Likewise, we know that in certain periods Gallic infantry carried javelins, and no one would accuse the Gauls of generally operating like skirmish infantry. More broadly, history is full of examples of shock infantry that expected to shoot a single volley at close range right before closing into combat, be that the Roman volley-and-charge with pila in the third century BC or post-gunpowder shock tactics like the 17th century Highland Charge or the contemporary Swedish Gå–På (“go on”). It is significant that these hoplites still carry a throwing spear, but it absolutely does not make them skirmishers.

But the heterodox folks are right that there is a lot of interpretive difficulty here. Van Wees (op. cit.) wants to read the image as representing a single moment of combat, with some men fighting in the front, others holding back and still more gearing up in the ‘everyone do their own things’ Dead Birds style of battle, but of course one could just as easily read the image as chronological, showing the battle line forming up, then marching into battle (it’s a pity we don’t have more of the other side). On the other hand, there is the question of what to do with the fact that each battle line is shown in two ranks, one separated by a flutist, the other just by an open interval. The orthodox reading is that this is an indication of formation depth, a crucial component in their definition of the phalanx, whereas the heterodox note that there’s a separation here, no sign of shoving, and so perhaps the second rank is well behind the first, a distant reserve. Everett Wheeler, in exasperation, pointed out once that contact infantry basically never fight without depth in just a single thin line and I tend to think he is right about that objection, but there is certainly no shoving othismos here.11 In terms of spacing, I read these soldiers as tightly spaced, indicating a close-order formation, but the heterodox will dismiss such closeness as artistic license, noting that soldiers are often drawn more tightly packed in artwork than they would have been in reality.

We might note that what we see here looks somewhat similar to something like the Bayeux Tapestry, which we know to depict a shield wall, but of course a chasm of time and art style separates the two, so this is hardly decisive.

Via Wikipedia (though I have cropped) the English shield-wall at Hastings (1066) as depicted on the Bayeux Tapestry (c. 1070s). Note the one little archer fellow, drawn smaller than the heavy infantry around him (because they’re more important), expressing the idea of some English archers being present, although to go by our sources for the battle, not many (far more Norman archers).

For my own part, my reading of the Chigi Vase is closer to the orthodox one: those men are in close order and the second rank of each formation does imply depth, even if the artist has created some space for us to see the flutist. I think what is being expressed here is a chronological sequence, showing the formation forming up, then advancing and finally coming into contact, likely showing us the moment of volley before the charge. In this sense it is actually similar to the chronological scroll of the Bayeux Tapestry, where many scenes ‘blend’ into each other. The fact that the opposing formation is also shown at least two ranks deep suggests to me that depth – not a sequence of two widely separated lines – is intended. We’ll come back to definitions next week, but I would call the thing on the Chigi Vase a ‘phalanx’ of a sort (we’re going to see my definition of ‘phalanx’ is a bit broader than some). But as you can see, everyone has their own interpretation and the chances of convincing anyone of anything – something that seems promising when you first look at the evidence – are slim.

At the same time van Wees is fundamentally right about some things. Light infantry with bows and javelins do not go away in Archaic artwork, though they do diminish over time, from being perhaps half of all depicted figures in the early Archaic to only showing up infrequently in ones and twos by the end. That might indicate an actual reduction in their numbers, but even a fairly casual reading of Herodotus suggests otherwise: they’re still there, but they’ve become less politically and socially important and so are less frequently depicted or described. So we need a model of archaic battle which allows for both hoplites and light infantry with ranged weapons to share the battlefield; the ‘all hoplite’ Archaic phalanx of the orthodox school will not do, given the evidence.

Towards Better Models

Instead, we need to think with iron-age comparanda about how heavy infantry work in concert with lighter ranged infantry. One possible comparison, contemporary to the Greek archaic, is the warfare system dominant in the Near East at the time: Neo-Assyrian infantry working in matched pairs of shield-bearing contact infantry (with spears) and foot archers. As best we can tell (our evidence is not fantastic), these fellows were expected to set up relatively static battlefield formations, with the shield-bearers providing both protection from ranged attack (with their large but thin shields) and also from sudden cavalry or contact infantry attack (with their spears). The archers could then safely develop ‘fire.’12 This has the advantage of being contemporary, and there are lines in Tyrtaeus and artwork that support the idea of light infantry sheltering behind the shields of hoplites (van Wees, op. cit., 166-77 assembles the relevant examples). But that Neo-Assyrian paired infantry was also, from what we know, a quite well organized, professional standing infantry force, which is not very much like our hoplites, and the status distinction ran the other way (it was archery, not contact warfare, which seems to have been the higher status way to fight); nothing gives us the sense that hoplites fought with lights in something like assigned pairs, save perhaps some hint for the Spartans towards the end of the Persian Wars (op. cit. 182), and even then it is hardly strong evidence. I think we need to be aware that this combat model was, certainly by the late archaic if not earlier, available to the Greeks (at least some of them), but I do not think it was how they organized.

Another potential comparandum here is the early medieval shield walls I’ve alluded to before. I thought I would have to write a whole big paragraph about this, but actually Paul Bardunias walked through exactly this comparison and reconstruction, using a lot of knowledge gleaned from reenactment and safe combat sparring experiments, and I don’t think I can improve very much on it. He presents this ‘hybrid’ shield wall as having a few ranks of heavy infantry, in relatively close order (we’ll get to intervals below), at the front forming a protective wall, with light infantry skirmishers deployed behind. Those skirmishers might equally be able – with some difficulty – to filter through the ranks (since ‘close order’ does not mean ‘shoulder-to-shoulder’), so your skirmishers could move out in advance to screen the shield wall or drop back behind it if pressed. In this system, the shield wall becomes a kind of ‘base’ from which skirmishers can operate and since, as noted, hoplites are still often carrying a throwing spear of their own, it can also project some amount of ranged threat.

I think this is a workable mental model, though it seems like it may need a bit of modification to fully fit the evidence. I want to be clear that this isn’t me saying it is wrong. Greek artists in the archaic tend to show skirmishers intermixed with hoplites when they show them, but it is really tricky to know how to gauge that. As you are presumably seeing from the artwork I’m showing here, going from a stylized 2D representation of a formation to understanding the actual formation is tricky, and artists often have to distort, compressing intervals (very frequent in medieval artwork, where formations we know were not shoulder-to-shoulder get compressed until they look it; cf. also the Columns of Trajan and Marcus Aurelius for the same effect), removing depth (so showing only a single rank) and so on. Likewise, my reading of Tyrtaeus’ description of hoplites in battle suggests that while there are certainly light infantrymen running about, there is an offensiveness to the ideal hoplite (who doesn’t just stand under ranged fire but gets in close to the enemy) that speaks to me of something closer to what Bardunias terms a ‘bludgeon’ shield wall (which he associates with the classical period).

By fierce deeds let him teach himself to fight,
and not stand out of fire – he has a shield –
but get in close, engage and stab with lance
or sword, and strike his adversary down.
Plant foot by foeman’s foot, press shield on shield,
thrust helm at helm and tangle plume with plume,
opposing breast to breast: that’s how you fight,
with the long lance or sword-grip in your hand.
– Tyrtaeus fr. 11 West (trans. M.L. West)

I might suggest a third comparative model: warfare in pre-gunpowder coastal West Africa, within the range of the tsetse fly. While north of this region, in the Sahel (too dry for the tsetse fly), warfare was dominated by cavalry, the tsetse fly’s sleeping sickness is lethal to horses and so warfare further south along the coast (along the Gulf of Guinea, down through to the Congo River) was an infantry affair. Armies here consisted of two kinds of troops: a broad (lower status) militia force, which made up the bulk of the army and was armed as relatively light skirmishers, and a ‘core’ of better trained professional warriors maintained by local kings, who formed the backbone of the army and were better equipped (notably including large shields, although not much body armor). A battle between two armies might begin with the engagement of skirmishers, intended to soften up the enemy force (and perhaps screen the higher status warriors). But at the right moment those higher status warriors with their large shields and contact weapons would charge forward in a dense mass, ideally scattering the enemy (who would have their own ‘base’ of heavier warriors too), thus winning the victory. Here the battlefield is open enough for the skirmishing troops to work in and around the ‘heavies,’ who initially function as a defensive bulwark to the army but then at the right moment are deployed offensively.13

Via Wikimedia Commons, an African warrior with weapons, including several iron-tipped javelins and a large shield, c. 1641. This warrior was painted fighting in Brazil, but was likely originally from the Kongo people.

Now I want to immediately caveat this model (I’ve spent so much time harrying van Wees for not doing so, I can hardly not do so myself): there are some major differences. The first is armor: this West African system had large shields (generally oblong, more useful against missiles, rather than round) but not much body armor, and that’s a really big difference. They do have iron weapons, so those shields are necessary to limit the lethality of the skirmish, and that professional core of contact infantry might wield deadly iron swords and iron-tipped spears (just like early hoplites). However, whereas warfare in Greece (and much of Eurasia) was about control of land, warfare in this part of West Africa was frequently about control of people (really, control of laborers) and as a result there is an emphasis in the local kit on capture weapons like clubs, not because these guys are primitive, but because they want to take enemies alive as captives. Those are some pretty meaningful differences and so I am by no means suggesting sub-Sahelian West African pre-gunpowder warfare as a 1-to-1 analogue of early Archaic hoplite warfare: instead it is just another tool we can use to think about how people might combine light infantry and something like a shield wall.

But you can see how this model might work, especially if we work in elements of Bardunias’ model as well. Towards the close of the 8th century, the wealthier Greeks begin equipping themselves as ‘specialist’ contact infantry (albeit still carrying perhaps a single throwing weapon), probably suggesting that ‘contact infantry’ (as distinct from skirmisher) was a role that had already existed and was generally the higher status role (as, frankly, Homer clearly seems to think). Fairly quickly these fellows end up grouped together rather than mixed up indiscriminately with the skirmishers, either in a single block as the core of the army (the ‘West African’ model) or as a line in the front of it (the ‘Early Medieval’ model), but still working hand in glove with the skirmishers. As these fellows group up, the equipment that makes the most sense in that context – what will eventually be the hoplite kit – begins to predominate.

By the late 600s, we see the last of the throwing spears carried by hoplites in artwork drop away, which suggests that these fellows are now exclusively contact infantry. That in turn suggests to me that ‘shock action’ has likely been the decisive part of the fight – or at least perceived as such – for some time. As noted above, I suspect that one retained throwing spear was not for the skirmish, but rather for volley-and-charge tactics. Instead I suspect this body of heavy infantry has, probably for most of the 600s, been used a bit like those West African troops: screened by the skirmishers, providing protection to them, but then being expected to close, hurl spears and engage for a decisive shock action. The decline of throwing spears may indicate that the pre-shock skirmish phase is starting to be truncated to the point that it is no longer even useful to carry a second spear you aren’t going to get a chance to throw at a good target. That ‘at a good target’ may be operative: another hoplite in a shield wall is not all that vulnerable to a single thrown spear, but a skirmishing ‘light’ might be – as the pre-shock skirmish phase gets shorter and more and more focus goes into the direct clash of hoplites, that might lead to the diminished use of a simple throwing spear.14 Light infantry is still doing things, but their diminished place in artwork may represent their increasingly subordinate role: by c. 600 or perhaps 550, an ‘offensive shield wall’ composed of hoplites is understood to be the decisive component of battle (albeit screened and supported by ‘lights’).

That model of Archaic warfare puts me more or less in the middle between the ‘strong’ gradualism of van Wees et al. and the ‘strong’ orthodox position, but I think it best fits the evidence we have.

But that leaves a fairly big pair of questions, because you’ll notice in all of this I have avoided using a very important word: the phalanx. We need to push into the classical period – where our sources at last get decent – and ask what is a phalanx and how does it function? Which is where we will turn next week.

Rocket Report: Blunder at Baikonur; do launchers really need rocket engines?

Welcome to Edition 8.21 of the Rocket Report! We’re back after the Thanksgiving holiday with more launch news. Most of the big stories over the last couple of weeks came from abroad. Russian rockets and launch pads didn’t fare so well. China’s launch industry celebrated several key missions. SpaceX was busy, too, with seven launches over the last two weeks, six of them carrying more Starlink Internet satellites into orbit. We expect between 15 and 20 more orbital launch attempts worldwide before the end of the year.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Another Sarmat failure. A Russian intercontinental ballistic missile (ICBM) was fired from an underground silo on the country’s southern steppe on November 28 in a scheduled test to deliver a dummy warhead to a remote impact zone nearly 4,000 miles away. The missile didn’t even make it 4,000 feet, Ars reports. Russia’s military has been silent on the accident, but the missile’s crash was seen and heard for miles around the Dombarovsky air base in Orenburg Oblast near the Russian-Kazakh border. A video posted by the Russian blog site MilitaryRussia.ru on Telegram and widely shared on other social media platforms showed the missile veering off course immediately after launch before cartwheeling upside down, losing power, and then crashing a short distance from the launch site.


Classical music of 2025

These are the releases that I kept on listening to, in no particular order:

Aart Bergwerff, Bach, Six Trio Sonatas for Organ.

Jonathan Ferrucci, Bach Toccatas.

Tom Hicks, Chopin Nocturnes.  So little rubato, this one took time getting used to but now I love it.

Linos-Ensemble, Schoenberg-Webern-Berg, The Waltz Arrangements.  I am surprised I like this one at all, it brings together the two main strands of Viennese music at the time.

Yuja Wang, Shostakovich Piano Concerti and pieces from Op.87.

Cuarteto Casals, Shostakovich, complete String Quartets.

I am selecting these based on a) are they truly great and important pieces of classical music, and b) does this particular recording add something to the interpretations already out there?

The post Classical music of 2025 appeared first on Marginal REVOLUTION.

       


Q3 GDP Tracking: Mid 3%

The advance release of Q3 GDP has been cancelled. Q3 GDP will be released on Dec 23rd.

From BofA:
Since our last weekly publication, 3Q GDP tracking increased from 2.8% q/q saar to 3.0%. The upward revision was largely due to the strong September durable goods report that led us to revise higher our equipment estimate. [December 5th estimate]
emphasis added
From Goldman:
We lowered our Q3 GDP tracking estimate by 0.3pp to +3.5% (quarter-over-quarter annualized) and our Q3 domestic final sales estimate by 0.2pp to +2.6%. [December 5th estimate]
And from the Atlanta Fed: GDPNow
The GDPNow model estimate for real GDP growth (seasonally adjusted annual rate) in the third quarter of 2025 is 3.5 percent on December 5, down from 3.8 percent on December 4. After this morning’s personal income and outlays release from the US Bureau of Economic Analysis, the nowcast for third-quarter real personal consumption expenditures growth declined from 3.1 percent to 2.7 percent. [December 5th estimate]

Friday assorted links

1. Best DC art works? (FT)  Surely Manet’s The Railway should be on the list?  Does Dulles Airport count?  The Iwo Jima Memorial or Vietnam Memorial?  Maybe even the Air Force Memorial?

2. The raccoon culture that was Virginia and I suppose still is a little bit?

3. Stoppard’s liberal individualism.

4. Jerry Z Muller on conservatism.

5. SPEAK, new organization for free speech in the UK.

6. On heritability debates.  And a comment from Pinker.

7. The new Annie Jacobsen book on biological warfare.

The post Friday assorted links appeared first on Marginal REVOLUTION.

       


PCE Measure of Shelter Declined to 3.7% YoY in September

Here is a graph of the year-over-year change in shelter from the CPI report and housing from the PCE report this morning, both through September 2025.

CPI Shelter was up 3.6% year-over-year in September, down slightly from 3.6% in August, and down from the cycle peak of 8.2% in March 2023.

Housing (PCE) was up 3.7% YoY in September, down from 3.9% in August and down from the cycle peak of 8.3% in April 2023.

Since asking rents are mostly flat year-over-year, these measures will slowly continue to decline over the next year as rents for existing tenants continue to increase.

The second graph shows PCE prices, Core PCE prices and Core ex-housing over the last 3 months (annualized):

Key measures are above the Fed's target on a 3-month basis. 

3-month annualized change:
PCE Price Index: 2.8%
Core PCE Prices: 2.7%
Core minus Housing: 2.6%
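For readers unfamiliar with the convention, here is a minimal sketch of how a "3-month annualized change" of this kind is typically computed from monthly index levels by compounding; this is not BEA's or Calculated Risk's own calculation, and the index values in it are hypothetical, chosen only for illustration.

```python
# A minimal sketch (not BEA's or Calculated Risk's code): annualizing the
# change in a monthly price index over 3 months via standard compounding.
def three_month_annualized_pct(index_now: float, index_3mo_ago: float) -> float:
    """Express the 3-month change in an index as an annualized percent rate."""
    return ((index_now / index_3mo_ago) ** 4 - 1.0) * 100.0

# Hypothetical index levels for illustration only:
print(round(three_month_annualized_pct(126.00, 125.13), 1))  # -> roughly 2.8
```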

New Anonymous Phone Service

A new anonymous phone service allows you to sign up with just a zip code.

Wholesale Used Car Prices Increased in November; Unchanged Year-over-year

From Manheim Consulting today: Manheim Used Vehicle Value Index: November 2025 Trends
The Manheim Used Vehicle Value Index (MUVVI) rose to 205.4, reflecting a 1.3% increase in November’s wholesale used-vehicle prices (adjusted for mix, mileage, and seasonality) compared to October. The index is mostly unchanged compared to November 2024. The long-term average monthly move for November is a decrease of 0.6%.
emphasis added
Manheim Used Vehicle Value Index. Click on graph for larger image.

This index from Manheim Consulting is based on all completed sales transactions at Manheim’s U.S. auctions.

The Manheim index suggests used car prices increased in November (seasonally adjusted) and were mostly unchanged YoY.

Personal Income Increased 0.4% in September; Spending Increased 0.3%

From the BEA: Personal Income and Outlays, September 2025
Personal income increased $94.5 billion (0.4 percent at a monthly rate) in September, according to estimates released today by the U.S. Bureau of Economic Analysis. Disposable personal income (DPI)—personal income less personal current taxes—increased $75.9 billion (0.3 percent) and personal consumption expenditures (PCE) increased $65.1 billion (0.3 percent).

Personal outlays—the sum of PCE, personal interest payments, and personal current transfer payments—increased $70.7 billion in September. Personal saving was $1.09 trillion in September and the personal saving rate—personal saving as a percentage of disposable personal income—was 4.7 percent.
...
From the preceding month, the PCE price index for September increased 0.3 percent. Excluding food and energy, the PCE price index increased 0.2 percent.

From the same month one year ago, the PCE price index for September increased 2.8 percent. Excluding food and energy, the PCE price index increased 2.8 percent from one year ago.
emphasis added
The September PCE price index increased 2.8 percent year-over-year (YoY), up from 2.7 percent YoY in August.

The PCE price index, excluding food and energy, increased 2.8 percent YoY, down from 2.9 percent in August.

The following graph shows real Personal Consumption Expenditures (PCE) through August 2025 (2017 dollars). Note that the y-axis doesn't start at zero to better show the change.

Personal Consumption Expenditures. Click on graph for larger image.

The dashed red lines are the quarterly levels for real PCE.

Personal income was at expectations and spending was below expectations.

Inflation was slightly lower than expected.

International Space Station prepares for new commander, heads into final five years of planned operations

The International Space Station is pictured from the SpaceX Dragon crew spacecraft during a fly around of the orbiting lab that took place following its undocking from the Harmony module’s space-facing port on Nov. 8, 2021. Image: ESA / NASA / T. Pesquet

After 25 years of continuous human presence, the International Space Station is heading into its final half decade of planned habitation.

NASA and its international partners are planning to intentionally deorbit the orbiting laboratory around 2030 or shortly thereafter. SpaceX was awarded a contract valued at up to $843 million to build the United States Deorbit Vehicle (USDV), which will help guide the space station towards a splashdown in an uninhabited portion of the Pacific Ocean.

On Sunday, Dec. 7, NASA astronaut Mike Fincke will assume the role of ISS Commander, taking over from Russian cosmonaut Sergey Ryzhikov. The cosmonaut, along with his colleague Alexey Zubritsky and NASA astronaut Jonny Kim, will then board their Soyuz spacecraft and undock Monday evening to complete their 245-day mission in orbit.

The seven-member Expedition 73 crew gathers together for a portrait on Nov. 27, 2025, celebrating NASA astronaut Mike Fincke’s (center) 500 cumulative days in space over four missions since 2004. In the front from left are Roscosmos cosmonaut Sergey Ryzhikov, NASA astronaut Zena Cardman, Mike Fincke, and NASA astronaut Jonny Kim. In the back are JAXA (Japan Aerospace Exploration Agency) astronaut Kimiya Yui and Roscosmos cosmonauts Oleg Platonov and Alexey Zubritsky. Image: NASA

With funding from the recent budget bill from Congress and renewed promise from NASA Administrator nominee Jared Isaacman to “maximize the scientific value of every dollar that Congress affords the agency,” the space station will continue to be a bustling hub of science for its final five years.

On Thursday, the Center for the Advancement of Science in Space (CASIS) announced an extension of its cooperative agreement with NASA to allow the non-profit to continue managing the ISS National Laboratory through 2030. This allows CASIS, through the ISS National Lab, to continue managing up to 50 percent of the flight allocation on cargo missions and up to 50 percent of U.S. Operating Crew time for science backed by them.

The ISS National Lab backed more than 940 payloads launched to the space station during the period of CASIS management, which began in 2011.

“For nearly 14 years, NASA has entrusted CASIS with managing this incredible asset for our nation and for the benefit of humanity,” said Ramon (Ray) Lugo, principal investigator and chief executive officer of CASIS. “We are honored that NASA has extended this unique partnership through 2030, and we will continue to work in collaboration, pushing the limits of space-based R&D for the benefit of life on Earth while driving a robust and sustainable market economy in space.”

Going to and fro

While the science planned for the ISS is in no short supply, the methods of getting it and its inhabitants to and from the space station are a trickier matter.

The most recent wrinkle came in the wake of the Soyuz MS-28 launch. After the launch of NASA astronaut Chris Williams along with Russian cosmonauts Sergey Kud-Sverchkov and Sergei Mikaev, the mobile service platform, which allows technicians access to the engine section of the rocket prior to launch, collapsed into the flame duct at Site 31.

According to Russia-based journalist Anatoly Zak, there are varying estimates of how long repairs could take, with at least one source telling him that it could take “up to two years” and that the immediate path forward wasn’t clear.

In a statement published to its official Telegram account, Roscosmos said that the damage would be fixed “in the nearest time,” but didn’t provide details. Spaceflight Now reached out to the Russian space agency for comment and is waiting to hear back.

For its part, NASA mostly deferred questions to Roscosmos. Russia’s Progress cargo spacecraft deliver not only supplies but also propellants for the Russian side of the complex, used to maintain the station’s orbital altitude and also to assist with attitude control.

Some reboost functions are being performed by a SpaceX Cargo Dragon vehicle outfitted with a special boost kit in its unpressurized trunk. A NASA spokesperson said that this Dragon, launched to the ISS on the Commercial Resupply Services 33 (CRS-33) mission “will undock in late January 2026, before splashing down and returning critical science and hardware to teams on Earth.”

“Station has sufficient capability for reboost and attitude control, and there are no expected impacts to this capability,” a spokesperson said on Thursday.

As for crew capabilities, it’s unclear how much the Site 31 pad damage will delay the launch of the Soyuz MS-29 mission, if at all. A July 2025 press release from NASA announcing its astronaut, Anil Menon, as a crew member stated that the Soyuz MS-29 mission would launch in June 2026.

However, on Thursday, a spokesperson for the agency said the mission “has always been scheduled to launch in July 2026.”

As for U.S. crewed missions, the SpaceX Crew-12 mission is the next up to bat after NASA confirmed that the next flight of a Boeing CST-100 Starliner spacecraft (Starliner-1) would be a cargo-only mission.

Starliner may carry crew on the flight after that, but that depends on the outcome of the Starliner-1 mission.

Sketching the future

In these final five years, NASA and its partners will begin winding down station operations, and in the immediate years before its demise, the station will be slowly lowered using atmospheric drag and the station’s thrusters over the course of two to two-and-a-half years, according to Dana Weigel, ISS Program Manager, speaking during a post-launch Crew-11 briefing.

“The Russian segment is prime for doing all of that. So, all of the attitude control, debris avoidance, anything we do with actively lowering is from the Russian segment,” Weigel said.

“Once we get down to the point of actually deorbiting, our current plan is to have the Russian segment do attitude control and the USDV do actual thrusting and boost,” she added. “That gives us additional layers of redundancy, so that if something happened with the attitude control, you can then switch over to the USDV. So, it’s very much an integrated plan and an integrated solution.”

An artist’s impression of SpaceX’s ISS Deorbit Vehicle pushing the lab toward a controlled re-entry and breakup in the 2030 timeframe, after a formal decision to retire the lab complex after three decades of operation. Graphic: SpaceX

Delays to future Progress vehicle launches could also affect the station in another way: stocking up on fuel for those future lowering burns as well as for attitude control.

“Part of what Roscosmos is working on right now is fuel delivery. So, we’ve got to get the fuel reserves on station to the point where they can do their portions of this,” Weigel said in early August. “Latest predictions are that will probably be at the right level in early 2028 and we’ll probably start drifting down in mid-2028. We’ve got to make sure we have the fuel there and everyone’s ready to go. And then the USDV will arrive mid-2029.”

As for the crews onboard, assuming the current schedule holds, the final years onboard station may look something like the following:

  • Feb. 2026 – SpaceX Crew-12
  • July 2026 – Soyuz MS-29
  • Oct. 2026 – SpaceX Crew-13 or Starliner-2
  • March 2027 – Soyuz MS-30
  • June 2027 – Dragon or Starliner
  • Nov. 2027 – Soyuz MS-31
  • Feb. 2028 – Dragon or Starliner
  • July 2028 – Soyuz MS-32
  • Oct. 2028 – Dragon or Starliner
  • March 2029 – Soyuz MS-33
  • June 2029 – Dragon or Starliner
  • Nov. 2029 – Soyuz MS-34
  • Feb. 2030 – Dragon or Starliner

Asked whether NASA would want its final crew onboard the station to be composed of seasoned veterans instead of making sure its newest astronauts get flight experience, Weigel told Spaceflight Now following the Crew-11 briefing that it’s a complicated question.

“I think there are so many different factors that can work on that. One of the things from a medical consideration standpoint is we do limit radiation exposure for crew members and if we’re asking for a year-long mission, we have to factor all of that in for crew health,” Weigel said.

“So, in an ideal sense, you’d say, ‘Yeah, send me somebody who’s flown, who’s great at spacewalks, this, that and the other.’ But too much experience puts you over the radiation limit.”

Ludwig Amadeus Minelli (5 December 1932 – 29 November 2025), leader of Dignitas assisted suicide organization

 The Washington Post has the story

Ludwig Minelli, founder of leading assisted suicide group, ends his life at 92.  Dignitas, which Mr. Minelli founded, has helped thousands of people to die, some from countries where assisted suicide is illegal.  By Maham Javaid

 "Ludwig Minelli, who became a leader of the death-with-dignity movement as the founder of Dignitas, a Swiss organization with more than 10,000 members that provides and advocates for access to assisted suicide, died Saturday, ending his life through the process he helped promote. He was 92 and would have celebrated his 93rd birthday on Friday.

...

"Mr. Minelli, a lawyer specializing in human rights, was the general secretary of Dignitas, which since 1998 has helped thousands of people from around the world, including from countries where assisted suicide is illegal, to die. 

...

"Mr. Minelli and his group claimed responsibility for major milestones in the field of assisted death. In 2011 the European Court of Human Rights confirmed the right and freedom of a competent individual to decide on the manner and the time of their own end of life. In 2022, the German Federal Constitutional Court declared a law that made providing professional assistance in suicide impossible in Germany was unconstitutional. The same year, Austria also revoked a blanket prohibition on assisted suicide.

"In recent years, Australia, Canada and New Zealand have shifted their stance on assisted dying.

"Dignitas has participated in nearly 4,200 accompanied suicides since Mr. Minelli founded the group in 1998, the group reported in 2024. More than a third of those people lived in Germany, and there were over 600 people each from France and Britain. The group says it has more than 10,000 members. "

#########

Here is the statement/obituary from Dignitas: Passing of a pioneer and warrior 

Punished for Bleeding: How Periods In Prison Become A Trap

Many incarcerated women and trans people are forced to choose between maintaining their dignity and health — or facing penalties.

The tampons were stacked and bound together with a rubber band. The incarcerated people at the Patrick O’Daniel Unit — a women’s prison in Central Texas — referred to these bundles as “dynamite sticks.”

Behind bars, these household items could be a liability. People on their periods might beg their peers for tampons or even take them. Correctional officers might write someone up for having more than the 12 tampons permitted per month, which was the practice until the state removed those limits in 2019. The punishments for those violations could range from losing phone or visitation privileges to fines to solitary confinement.

Jennifer Toon would hide her dynamite sticks behind the bookshelves of the prison library where she worked. “I saw girls get written up because they’re hoarding. Like, they’re stashing in their cubicle,” said Toon, who was incarcerated twice over two decades and last released in 2018.

But the prison commissary at that time could barely keep extra tampons in stock, she said — and that was assuming people had the money to afford them. To guard against this low supply, Toon and others at the prison would collect a personal stash and tuck them into nooks and crannies so they wouldn’t face consequences. Any infractions on their record could affect something as significant as their eligibility for parole.

“Who wants to get a major case over having extra tampons? And that sounds really ridiculous to people on the outside, but I mean, that would happen,” said Toon, who is now the executive director of Lioness Justice Impacted Women’s Alliance, an advocacy nonprofit in Austin, Texas.

The system was a vicious cycle, and in many cases it felt like a trap. Across the country, incarcerated women, trans and nonbinary people are punished for having periods, according to a new analysis published by the Prison Policy Initiative (PPI), in partnership with researcher Miriam Vishniac, the founder and director of the Prison Flow Project, a database focused on access to menstrual products in U.S. prisons.

While prison disciplinary policies do not cite periods directly, the PPI report identified at least six types of prison policies used to punish menstruating people: These include rules concerning damage of prison property, personal hygiene requirements, contraband restrictions, “feigning” illness and being absent from an assigned location.

For example, in Texas, where Toon was incarcerated, “any item possessed in excess of the amounts authorized,” could be considered contraband and punished as a “level 2” offense, which is the second most severe offense category in the state’s disciplinary rulebook. This can result in a loss of good conduct credits that go toward eligibility for parole, educational or work opportunities and other benefits.

Stories from people inside underscore a larger culture of control and dehumanization that incarcerated people endure, Toon and Vishniac said. It also reflects how little attention is given to the health needs of women and trans people in the criminal legal system. Prisons and jails are largely designed with cisgender men in mind, given that they make up about 90 percent of the country’s incarcerated population.

Formerly incarcerated people like Toon have reported male correctional staff and supervisors being oblivious to how menstruation works. They don’t appear to understand, for example, why women might go through more toilet paper than incarcerated cisgender men, or that the quality among different menstrual products varies.

“I knew I needed the tampons because the pads that we were issued were just terrible,” Toon said. “They’re going to fall apart in your panties.”

People using standard store-bought products outside of prison will typically go through three to six tampons or menstrual pads each day during a period, which can last for seven days. But the menstrual pads provided in prisons were “not much more useful than a panty liner,” said Stacy Burnett, 50, who was incarcerated for three stretches of time in New York before her release in 2019. During Burnett’s time inside, each person received two packs of 12 pads as well as about 8 tampons per month. The tampons were better for flow control, Burnett said, but could only handle lighter flow days and would still require using a pad as a backup.

“The quality of products provided or available for purchase is usually extremely poor — so poor that they do not fulfill their intended function,” said Vishniac, who completed her dissertation at the University of Edinburgh on the topic. “People have to use six pads at a time to prevent leakage, but they have strict limits on how many they are allowed.”

As a result of the limited access to period products and the poor quality, menstruating people have several options:

  • Bleed freely through their uniforms — and risk being written up for poor hygiene or damaging prison property
  • Hoard and hide as many tampons as they could find (or purchase from the commissary) — and risk being written up for contraband
  • Barter and trade tampons with other incarcerated people — and risk being written up for improper exchange of property
  • Make their own tampons out of whatever they could get their hands on: toilet paper, dirty rags, fabric torn from a t-shirt or filling from their mattresses — and risk both an infection and being written up for misuse of prison property
  • Use their tampons and pads for multiple days — and risk an infection like Toxic Shock Syndrome. Many guidelines recommend that menstrual products are changed every 4 to 6 hours.

Or, they could “beg like dogs” for more period products, Vishniac said. “It was never as simple as asking for a product and getting it, because employees are trained to question every request incarcerated people make,” she said.

Trans and gender-nonconforming people who menstruate often faced added scrutiny when requesting tampons, and were mocked or questioned for needing them. (Emily Scherer for The 19th)

For Nathan Osborne, asking prison staff for period products opened the door to being mocked and degraded. Osborne, a 65-year-old transgender man, first became incarcerated in California in 1981 and was released from custody three months ago. He had a complicated relationship with menstruating as a man and often felt shame.

It didn’t help that when he requested menstrual products, “You would get the look; you would get, ‘Oh, men don’t have periods, why do you need a tampon?’” Osborne said. That humiliation took a toll, so he started making his own tampons by tightly wadding up tissue paper and inserting it inside himself. Plenty of others did this, he said, but one day he was caught during a strip search.

“[The wad of paper] stuck out a little bit. I didn’t have it all the way in,” he said. “So they took me and had me strapped down and had the doctor go up in me and pull it out, because they were trying to say that it was narcotics.”

Osborne said the doctor warned him that doing this again could cause an infection and sent him on his way. He felt violated by the experience, but he also left with a lingering question: What other choice did he have?

Oftentimes, the most damaging punishment behind bars isn’t being officially written up or losing privileges. It’s the demeaning comments from prison staff. Vishniac said not all staff participate in this culture of shame, but the ones who do instill a sense of fear that ripples through women’s correctional units.

Like Osborne, Toon experienced strip searches while imprisoned before 2018. She remembers one day when she was scheduled to leave the prison to attend a conference for peer health educators, incarcerated women assigned to teach others in prison about sexual violence prevention, HIV/AIDS awareness and other health-related topics. Getting to attend the conference was “a treat,” Toon said. It was something she was looking forward to.

But in order to leave the prison, she had to be strip searched. Toon knew the routine: She and the other incarcerated women shuffled into the tiny room known as the “strip shack” near the back gate of the prison and began to undress. Typically this process can require the removal of clothing and underwear, as well as any pads or tampons. To avoid having to remove her tampon in front of 20 people, Toon said she learned a trick to clip the tampon string short enough so the staff could not tell. But this time, a woman staffer noticed the extra unwrapped tampon that fell out of Toon’s pocket.

“I know you have a tampon in there.”  — “there” being Toon’s vagina.

“I want to see it,” Toon recalled the woman officer saying.

“You’re not going anywhere until I see it.”

“So here I am, in front of 20 women, I squat down and I had to get in it,” Toon recalled. “I had to reach all the way in there and get that little string and I pulled it out.”

Droplets of blood fell to the floor as Toon pulled out her second-day tampon. The woman officer “looked at me with so much disgust,” Toon said. Toon looked over at her friend, Janet, who had tears running down her face.

Some cities and states are trying to make a shift in this culture. In response to questions from The 19th, a spokeswoman with the Texas Department of Criminal Justice said the culture Toon described “would be inaccurate to the state of TDCJ today.” In 2019, the department started providing unlimited access to menstrual products, according to the TDCJ spokeswoman. The department also “completed a large educational campaign,” concerning menstrual health care in women’s facilities, she said, and hired a consultant to work with the agency to improve female services and programming.

New York, Maryland, Alabama and Colorado have passed legislation requiring that people in state prisons receive menstrual products for free, though implementation and enforcement have been inconsistent. At least 14 states have passed a Dignity of Incarcerated Women Act aimed at improving certain conditions, including the quality and accessibility of period products.

But Vishniac emphasized that a singular law is simply a Band-Aid that does not address the root of the larger prison culture.

“I think some of the bigger changes that are really necessary — the oversight, the accountability, the transparency — those require us to grapple a bit more with a system that we have a really hard time questioning,” Vishniac said. “If we really, truly want to make sure that nobody is bleeding on themselves or punished for bleeding on themselves, we have to also understand that this stigmatization, and mass incarceration and warehousing people is part of that.”



The post Punished for Bleeding: How Periods In Prison Become A Trap appeared first on DCReport.org.

Welcome to the Crazy CAFE

To let Americans buy smaller cars, Trump had to weaken fuel-efficiency standards. Does that sound crazy? Small cars, of course, have much higher fuel efficiency. Yet this is exactly how the Corporate Average Fuel Economy (CAFE) standards work.

Photo Keith Hopper, https://www.iobt.org/temple-blog/210-small-lessons-from-a-kei-truck-by-keith-hopper

Since 2011, fuel-economy targets scale with a vehicle’s “footprint” (wheelbase × track width). Big vehicles get lenient targets; small vehicles face demanding ones. A microcar that gets 40 MPG might be judged against a target of 50-60 MPG, while a full-size truck doing 20 MPG can satisfy a 22 MPG requirement. The small car is clearly more efficient, yet it fails the rule that the truck passes.

The policy was meant to be fair to producers of large vehicles, but it rewards bloat. Make a car bigger and compliance gets easier. Add crash standards built around heavier vehicles and it’s obvious why the US market produces crossovers and trucks while smaller and much less expensive city-cars, familiar in Europe and Asia, never show up. At a press conference rolling back CAFE standards, Trump noted he’d seen small “kei” cars on his Asia trip—”very small, really cute”—and directed the Transportation Secretary to clear regulatory barriers so they could be built and sold in America.
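To make that incentive concrete, here is a minimal, purely hypothetical sketch: the target curve and every number in it are invented for illustration and are not NHTSA's actual formula or values. It shows how a footprint-scaled target can let the same powertrain comply in a bigger body while falling short in a smaller one.

```python
# Hypothetical illustration only: the real CAFE targets come from published
# NHTSA curves; the coefficients below are invented to show the shape of the
# incentive, not to reproduce the regulation.

def footprint_sq_ft(wheelbase_in: float, avg_track_in: float) -> float:
    """Footprint = wheelbase x average track width, converted to square feet."""
    return (wheelbase_in * avg_track_in) / 144.0

def toy_target_mpg(footprint: float) -> float:
    """Toy target curve: a larger footprint earns a lower (easier) MPG target."""
    return max(24.0, min(55.0, 78.0 - 0.9 * footprint))

ACHIEVED_MPG = 38.0  # the same hypothetical powertrain in both bodies

for name, wheelbase, track in [("small body", 95, 58), ("stretched body", 125, 68)]:
    fp = footprint_sq_ft(wheelbase, track)
    target = toy_target_mpg(fp)
    verdict = "meets its target" if ACHIEVED_MPG >= target else "misses its target"
    print(f"{name}: {fp:.1f} sq ft footprint, target {target:.1f} MPG -> "
          f"{ACHIEVED_MPG:.0f} MPG {verdict}")
```

Under any curve of this shape, enlarging the footprint lowers the bar the vehicle must clear, which is exactly the "rewards bloat" dynamic described above.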

Trump’s rollback—cutting the projected 2031 fleet average from roughly 50.4 MPG to 34.5 MPG—relaxes the math enough that microcars could comply again. Only Kafka would appreciate a fuel-economy system that makes small fuel-efficient cars hard to sell and giant trucks easy. Yet the looser rules remove a barrier to greener vehicles while also handing a windfall to big truck makers. A little less Kafka, a little more Tullock.

The post Welcome to the Crazy CAFE appeared first on Marginal REVOLUTION.

       


They need to make you hate some group

Photo by Fibonacci Blue via Wikimedia Commons

In the 2010s, a bunch of right-wing types suddenly became big fans of Martin Luther King Jr.’s views on race. If you saw someone on Twitter quote MLK’s nostrum that people should “not be judged by the color of their skin but by the content of their character”, it was almost certainly someone on the right — quite a change from the type of person who probably would have cited King’s words half a century earlier. This is from an Associated Press story back in 2013:

King’s quote has become a staple of conservative belief that “judged by the color of their skin” includes things such as unique appeals to certain voter groups, reserving government contracts for Hispanic-owned businesses, seeking more non-white corporate executives, or admitting black students to college with lower test scores.

Many progressives railed against the idea of a colorblind society, arguing that statistical disparities between racial groups — income gaps, wealth gaps, incarceration gaps, and so on — couldn’t be remedied without writing race into official policy and becoming much more race-conscious in our daily lives.

In the policy space, this idea manifested as DEI, which implemented racially discriminatory hiring policies across a broad swath of American business, government, academia, and nonprofits. In the media space, this manifested as a torrent of op-eds collectively criticizing white people as a group — “White men must be stopped: The very future of mankind depends on it”, “It’s Time for White People to Understand Their Whiteness”, “What is Wrong With America is Us White People”, and so on. Reputable institutions brought in speakers who made claims like “Whites are psychopaths,” and so on. Making nasty jokes about white people carried few if any professional consequences.

In that kind of environment, it’s understandable that lots of people on the right would turn to individualist principles like the ones espoused by MLK in his famous speech. Asking to be judged by the content of your character is a reasonable defense against people who are trying to judge you based on your membership in a racial group.

Fast-forward a few years, however, and the shoe is on the other foot. The Wall Street Journal released an editorial urging us not to blame Afghan immigrants as a group for the Afghan man who shot two National Guardsmen in Washington, D.C. a week ago:

[I]t would be a shame if this single act of betrayal became the excuse for deporting all Afghan refugees in the U.S…Tens of thousands are building new lives here in peace and are contributing to their communities. They shouldn’t be blamed for the violent act of one man.

Stephen Miller, Trump’s powerful Homeland Security Advisor, responded with a dismissal of individualism and an indictment of Afghans as a group:

This is the great lie of mass migration. You are not just importing individuals. You are importing societies. No magic transformation occurs when failed states cross borders. At scale, migrants and their descendants recreate the conditions, and terrors, of their broken homelands.

And that same week, it was revealed that some Somalis in Minnesota had committed a massive welfare fraud:

Federal prosecutors charged dozens of people with felonies, accusing them of stealing hundreds of millions of dollars from a government program meant to keep children fed during the Covid-19 pandemic…At first, many in the state saw the case as a one-off abuse…But…Over the last five years, law enforcement officials say, fraud took root in pockets of Minnesota’s Somali diaspora as scores of individuals made small fortunes by setting up companies that billed state agencies for millions of dollars’ worth of social services that were never provided…Federal prosecutors say…more than $1 billion in taxpayers’ money has been stolen[.]

In the wake of those revelations, Trump condemned Somalis as a group:

Here are Trump’s exact words:

I don’t want em in our country. Their country’s no good for a reason. Their country stinks…I could say that about other countries too…We don’t want em…We have to rebuild our country. You know, our country’s at a tipping point. We could go bad…We’re going to go the wrong way if we keep taking garbage into our country. Ilhan Omar is garbage. Her friends are garbage…And from where they came from, they got nothing…When they come from Hell, and they complain, and do nothing but bitch, we don’t want em in our country. Let em go back to where they came from, and fix it.

Here you see the very same idea that Stephen Miller expressed. Trump and Miller both judge people by their ethnic group, and they judge those ethnic groups by the condition of their ancestral country. Somalia is a bad place, therefore Somalis are bad, therefore if you’re a Somali you’re bad and you shouldn’t be allowed into America. Afghanistan is a bad place, therefore Afghans are bad, therefore if you’re an Afghan you’re bad and you shouldn’t be allowed into America.

In fact, this idea was very popular a century ago, when America enacted harsh restrictions on immigration. Restrictionists argued that immigrants from South and East Europe were undesirable, because South and East Europe were relatively underdeveloped places. For example, here’s what Francis Walker, the president of MIT and a staunch opponent of immigration, wrote in The Atlantic in 1896:

Only a short time ago, the immigrants from southern Italy, Hungary, Austria, and Russia together made up hardly more than one per cent of our immigration. To-day the proportion has risen to something like forty per cent…The entrance into our political, social, and industrial life of such vast masses of peasantry, degraded below our utmost conceptions, is a matter which no intelligent patriot can look upon without the gravest apprehension and alarm. These people have no history behind them which is of a nature to give encouragement…They are beaten men from beaten races; representing the worst failures in the struggle for existence…They have none of the ideas and aptitudes which fit men to take up readily and easily the problem of self-care and self-government. [emphasis mine]

This is a form of racial collectivism. It’s judging people by ethnic, racial, and national groups instead of as individuals. In his landmark 1955 book Strangers in the Land, which chronicles the anti-immigration movement of the late 19th and early 20th centuries, historian John Higham labeled this attitude “racism”. Today, of course, we can’t use that word, since it has been repurposed to mean so many other things. But the word feels like a perfect fit — it’s an ideology (an “ism”) that holds that people are to be judged according to the collective accomplishments of their race.

When I see people on the right spouting this sort of rhetoric, I think: What happened to MLK? What happened to judging people based on the content of their character? What happened to the colorblind society? What happened between 2018 and now that makes collective judgment of racial groups suddenly ok?

The answer, of course, is “The right got the upper hand in American politics.” It turns out that individualism is a bit like free speech — a principle that lots of people tend to support when their tribe is losing, only to abandon it as soon as they’re back on top. A lot of people really do believe in individualism, of course, especially in America. But a lot of others just use it as a cynical shield when they’re on the defensive. And we’re finding out that most of the MAGA movement was always the latter type.

MAGA’s overriding goal is immigration restriction. They care about this much more than any other policy issue — more than inflation, more than trade, more than crime, more than anything. And the reason they want immigration restriction, I believe, is because they think that Somalis and Afghans and Haitians and so on are going to make America more like those countries. When Trump and Miller talk about this, I think they’re being completely honest. And after Trump is gone, I think this idea will be at the core of the new right-wing ideology that will sustain the MAGA movement. Racial collectivism is absolutely at the core of their worldview.

But MAGA has a big problem: While that worldview has some appeal to Americans, overall they aren’t on board. Every poll we have shows pro-immigration sentiment on the rise again, after a dip during the Biden years:

Source: Gallup

A lot of Americans are also in favor of individualism — that is, of treating people based on their individual traits rather than what group they belong to. Americans of most races supported the recent Supreme Court decision banning racial preferences in university admissions; even black Americans were about evenly split. And while Americans disagree about lots of racial issues, they tend to overwhelmingly say they support things like equal opportunity regardless of race.

And although there are differences in American attitudes toward immigrants from different regions of the world, the differences aren’t huge, and they don’t perfectly line up with how developed the regions are. For example, here’s a 2015 poll by Pew, finding that immigration from Africa is viewed more favorably than immigration from the much more developed regions of Latin America and the Middle East:

Source: Pew

Here’s a 2021 poll from Cato that finds the same pattern:

Source: Cato

So although some Americans are probably evaluating immigrants based on their racial group and on the condition of their source country, like Trump and Miller are, Americans in general probably don’t think this way. They get mad at illegal immigration, and at the disorderly quasi-legal immigration that Biden tolerated — but illegal entry is an individual action, not a group trait.

Which makes sense. The U.S. immigration system is highly selective; Lazear (2017) shows that selectivity explains a very large fraction of the differences in average educational attainment across immigrant groups in America.

As Matt Yglesias points out, nowhere is this more evident than with Indian immigrants. The country of India is still poor; despite solid recent growth, its GDP per capita is lower than that of El Salvador or Guatemala. Infrastructure has improved a lot but is still subpar, and the country has pockets of startling poverty. By the racial-collectivist logic of Miller and Trump, or of the restrictionists of a century ago, Indian immigrants should be turning America into a third-world country.

And yet the exact opposite is happening. Indian Americans are arguably the most successful group in the United States. They have the highest median household income of any national ancestry group, and the highest average level of education. Even Indians who are poor when they arrive in America end up making well above the median — a level of mobility rivaled only by Chinese Americans. There are more billionaires in America from India than from any other ethnic group.

Nor has Indian immigration turned anywhere in America into a version of India. Fremont, California is probably the U.S. city with a population over 100,000 that has the highest percentage of Indian residents — about 29%. And yet Fremont is one of the cleanest, nicest, richest, safest towns in the whole country, with a murder rate so low that many European countries would envy it, and arguably the best public schools in the country. A recent survey identified Fremont as the happiest city in America.

Almost all of the MAGA people screaming about Indian immigration on the internet live in places less nice than Fremont.

A big part of this, of course, is because immigration from India is so selective. India is the world’s most populous country; it’s not too hard to grab a few million smart people from a country that big. But this isn’t the only reason. American institutions are also important.

As another example, take El Paso. The overwhelming majority of people in El Paso are of Mexican descent. Mexican immigration is among the least selective, because Mexico is so close to America and there was so much illegal immigration in the past. And yet despite being filled with ethnic Mexicans, El Paso looks absolutely nothing like Juarez, the Mexican city that sits right next to it on the opposite side of the border. El Paso’s murder rate is 3.8 per 100,000, very low for an American city, while Juarez is one of the most violent, chaotic cities on planet Earth.

Mexicans didn’t turn El Paso into Mexico, and the reason is American institutions. America’s economy offers El Paso’s residents the chance to get ahead without joining drug gangs. American culture is a more positive-sum, less violent culture than Mexico’s. And the U.S. Military has a big presence in El Paso, because Fort Bliss is there. Even without selectivity, institutions matter a lot.

So Stephen Miller is just flat-out wrong. Immigrants do not recreate the conditions of their homelands in America. Yes, there is some amount of carryover, including some negative influences like the old Sicilian mafia, or modern gangs like MS-13. But the differences between American immigrant populations and their source countries far outweigh the similarities.

In order for MAGA to win, they need to convince America otherwise — they need to persuade you, the American citizen, that the fiction that undergirds their ideology is actually true. To this end, they need to get you to judge people in terms of their group, rather than as individuals. So they keep looking around for a group they think they can convince you to fear, to disdain, and ultimately to hate.

Remember last year, during the campaign season, when Trump and JD Vance declared that Haitian immigrants were eating people’s pets in Springfield, Ohio?

It was all B.S., of course. News crews descended on Springfield, but not even the most right-wing reporters could find a credible report of a single pet being eaten. JD Vance awkwardly begged the internet to “keep the cat memes flowing”, and never apologized for smearing a whole group of people, but at some point everyone realized it was a hoax.

That’s why you didn’t hear anything about cat-eating Haitian-Ohioans before the campaign season of 2024. And that’s why you haven’t heard anything about it since then. It wasn’t real; you were being played.

Now they’re trying again, with the Somalis of Minnesota. This time, they probably have a better shot at success. For one thing, Somalis in America are much poorer than their Haitian-American counterparts — Haitians in the U.S. have slightly below average income and average education levels, they commit few crimes, and they’re not prominent in politics. They’re basically just quiet middle-class people living pretty normal American lives.

Somalis, on the other hand, are an extremely poor group, with very high poverty rates and much lower income than Haitians, or immigrants in general; this is largely because most of them are refugees or the children of refugees, the least selected type of immigrant. Somalis are Muslim, unlike Haitians, which makes them both visually distinct (because of the hijab) and mentally associated with civilizational conflict. They’re not known for violence, but now they’re associated with Minnesota’s massive organized welfare fraud.

And unlike the Haitians of Ohio, the Somalis of Minnesota are prominent and powerful in local politics. They effectively took over the Minneapolis Democratic Party, nominating one of their own, Omar Fateh, as the Democratic candidate over incumbent mayor Jacob Frey. Frey managed to beat Fateh in the general election, but only by courting a rival Somali clan and making flamboyant appeals to the Somali community.

This is hardly unprecedented in American politics — Irish immigrants built political machines that dominated the politics of many American cities in the 19th century. Given many decades, it’s likely that Somalis will assimilate, the same way the Irish did, and turn the organizational skills that allowed them to swindle the state of Minnesota and take over the Minneapolis Democrats to some more constructive use, like building drone factories (or whatever humans are doing 80 years from now).

But “many decades” is a very long time for Americans to wait before they can stop worrying about culture clash. Americans aren’t used to urban ethnic machine politics these days,1 and the notion of an iconic American city being at the mercy of clan rivalries from one of the world’s poorest and most violent nations will naturally lend force to Trump’s argument that Somalis are trying to make Minnesota into another Somalia.

If Trump and MAGA succeed in getting a critical mass of regular Americans to reject Somalis categorically, as a racial group, then they win a crucial victory — not over the Somalis, who pose them no actual threat, but in changing the terms of the discourse around race and immigration in America.

Once MAGA can convince you that “Are the Somalis bad?” is a legitimate question to ask, they then pretty much automatically get to ask the same question about every other group in America. They get to ask “Are Afghans bad?”, and “Are Haitians bad?”. They’ll get to ask “Are Jews bad?”, “Are Indians bad?”, and “Are Chinese people bad?”. Eventually they might even get around to asking “Are Italians bad?”, and so on. They will push as far as they can.

Even if those questions get answered in those groups’ favor — even if Italians and Indians and Haitians can all successfully defend their right to be in America by appealing to the court of MAGA opinion — the mere fact that they had to defend themselves as racial groups, instead of as individuals, will redefine what America is all about. It will move America toward being an estate society — a society where rights and privileges are accorded to groups instead of individuals.

In the 20th century, American liberals successfully overcame all of the people who wanted to make the country a racial estate society — Jim Crow was outlawed, immigration laws were made (more or less) race-neutral, and so on. Liberals accomplished this by appealing to Americans’ deep-seated value of individualism — of the idea that people shouldn’t be judged by the group they were born into. That idea, captured most eloquently in MLK’s famous speech but repeated ad infinitum by leaders, writers, and activists, ultimately carried the day and made America the liberal nation I grew up in.

What I fear is that by embracing identity politics in the 2010s, progressives have thrown away liberals’ ultimate weapon. Appeals to individualism carry much less moral force when the people making those appeals just spent the last decade decrying colorblindness as a tool of systemic racism (or embracing people who made that claim).

This is not to say that rightists’ push to turn America into a balkanized racial hierarchy is progressives’ fault — it isn’t. Rightists are always trying to do this sort of thing; it’s not a reaction to anything progressives did. But there’s a reason this sort of racial collectivism was defeated and suppressed for a hundred years, and there’s a reason it’s breaking through now when it couldn’t before.



1. To be honest, they weren’t very relaxed about it in the 19th century either; anti-Irish sentiment resulted in pogroms, gang wars, and whole newspapers devoted to spreading vicious anti-Irish rumors.

Political pressure on the Fed

From a forthcoming paper by Thomas Drechsel:

This paper combines new data and a narrative approach to identify variation in political pressure on the Federal Reserve. From archival records, I build a data set of personal interactions between U.S. Presidents and Fed officials between 1933 and 2016. Since personal interactions do not necessarily reflect political pressure, I develop a narrative identification strategy based on President Nixon’s pressure on Fed Chair Burns. I exploit this narrative through restrictions on a structural vector autoregression that includes the President-Fed interaction data. I find that political pressure to ease monetary policy (i) increases the price level strongly and persistently, (ii) does not lead to positive effects on real economic activity, (iii) contributed to inflationary episodes outside of the Nixon era, and (iv) transmits differently from a typical monetary policy easing, by having a stronger effect on inflation expectations. Quantitatively, increasing political pressure by half as much as Nixon, for six months, raises the price level by about 7% over the following decade.

That is not entirely a positive omen for the current day.

The post Political pressure on the Fed appeared first on Marginal REVOLUTION.

★ Alan Dye Was in Tim Cook’s Blind Spot

NBC News, back in March 2018:

Speaking at a town hall event hosted by MSNBC’s Chris Hayes and Recode’s Kara Swisher, Cook said Facebook put profits above all else when it allegedly allowed user data to be taken through connected apps. [...]

When asked what he would do if he were in Zuckerberg’s position, Cook replied: “What would I do? I wouldn’t be in this situation.”

“The truth is we could make a ton of money if we monetized our customer, if our customer was our product,” Cook said. “We’ve elected not to do that.”

“Privacy to us is a human right. It’s a civil liberty, and something that is unique to America. This is like freedom of speech and freedom of the press,” Cook said. “Privacy is right up there with that for us.”

Perhaps Cook now needs to define “us”.

This was a rather memorable interview. Cook’s “What would I do? I wouldn’t be in this situation” is one of the stone-coldest lines he’s ever zinged at a rival company. (In public, that is.) That was just ice cold. Cook is a consummate diplomat. Most non-founder big company CEOs are. Satya Nadella, Sundar Pichai, Andy Jassy — none of them are known for throwing shade, let alone sharp elbows, at competitors. Cook has made an exception, multiple times, when it comes to Facebook/Meta (and to a lesser degree, Google).

So it’s not just that Alan Dye jumped ship from Apple for the chief design officer role at another company.1 It’s not just that he left for a rival company. It’s that he left Apple for Meta, of all companies. Given what Cook has said about Meta publicly, one can only imagine what he thinks about them privately. Apple executives tend to stay at Apple. The stability of its executive team is unparalleled. But Dye is a senior leader who left not only for a rival, but for the one rival that Cook and the rest of Apple’s senior leadership team consider the most antithetical to Apple’s ideals.

It would have been surprising if Dye had jumped ship to Google or Microsoft. It would have been a little more surprising if he’d left for Amazon, if only because Amazon seemingly places no cultural value whatsoever on design, as Apple practices it. But maybe with Amazon it would have been seen as Andy Jassy deciding to get serious about design, and thus, in a way, less surprising after the fact. But leaving Apple for Meta, of all companies, feels shocking. How could someone who would even consider leaving Apple for Meta rise to a level of such prominence at Apple, including as one of the few public faces of the company?

So it’s not just that Alan Dye is a fraud of a UI designer and leader, and that Apple’s senior leadership had a blind spot to the ways Dye’s leadership was steering Apple’s interface design deeply astray. That’s problem enough, as I emphasized in my piece yesterday. It’s also that it’s now clear that Dye’s moral compass was not aligned with Apple’s either. Tim Cook and the rest — or at least most? — of Apple’s senior leadership apparently couldn’t see that, either.


  1. I’d have thrown OpenAI in that list of companies where it would have been surprising, but not shocking, for Dye to leave Apple for. But that simply wasn’t possible given Jony Ive’s relationship with Sam Altman, LoveFrom’s collaboration with OpenAI on the io project, and Ive’s utter disdain for Dye’s talent, leadership, and personality. ↩︎

Corrupt Court Corrupts

SCOTUS rescues Texas gerrymander, says lower court erred by rejecting the map on “eve” of election.

Text a community college librarian

I take tap dance evening classes at the College of San Mateo community college. A neat bonus of this is that I'm now officially a student of that college, which gives me access to their library... including the ability to send text messages to the librarians asking for help with research.

I recently wrote about Coutellerie Nontronnaise, a historic knife manufactory in Nontron, France, on my Niche Museums website. They had a certificate on the wall claiming that they had previously held a Guinness World Record for the smallest folding knife, but I had been unable to track down any supporting evidence.

I posed this as a text message challenge to the librarians, and they tracked down the exact page from the 1989 "Le livre guinness des records" describing the record:

The smallest

The Établissements Nontronnaise made a knife 10 mm long for the Festival d’Aubigny, Vendée, which took place on July 4 and 5, 1987.

Thank you, Maria at the CSM library!

Tags: research, museums, libraries

Subtests in pytest 9.0.0+

pytest 9.0.0 was released on November 8th 2025. I just got around to looking at the release notes and the biggest new feature is subtests, previously available as the separate pytest-subtests plugin.

I copied the documentation into Claude Code and told it to find a good place to try them in the Datasette test suite. It suggested tests/test_docs.py, which currently makes heavy use of the @pytest.mark.parametrize mechanism. I wrote about how that works a few years ago in Documentation unit tests - here's an example test:

@pytest.mark.parametrize("setting", app.SETTINGS)
def test_settings_are_documented(settings_headings, setting):
    assert setting.name in settings_headings

Understanding subtests

The idea behind subtests is to allow a test to programmatically create new subtests within itself at runtime.

My above example does the same thing using @pytest.mark.parametrize - but it relies on the list of settings being known at test collection time. This might not be possible for things that need to be introspected after the test has run some initial setup code.

Here's the above test ported to use subtests instead:

def test_settings_are_documented(settings_headings, subtests):
    for setting in app.SETTINGS:
        with subtests.test(setting=setting.name):
            assert setting.name in settings_headings

subtests is a new default pytest fixture - if you list that as a parameter to your test function you can use it in the body of the test.

Using with subtests.test(...) creates a new subtest. Here I'm doing that in a loop. The keyword arguments passed to subtests.test() are used to identify the subtest in the test report.

That's all it takes! Here's a commit that ported several of my parameterized tests to use subtests instead.

How subtests differ from parametrize

If you use @pytest.mark.parametrize pytest will behave as if every one of your parameter combinations is a separate test function. Running the old pytest tests/test_docs.py tests looked like this:

============================= test session starts ==============================
platform darwin -- Python 3.14.0, pytest-9.0.1, pluggy-1.6.0
SQLite: 3.50.4
rootdir: /private/tmp/datasette
configfile: pytest.ini
plugins: anyio-4.12.0, xdist-3.8.0, timeout-2.4.0, asyncio-1.3.0
asyncio: mode=Mode.STRICT, debug=False, asyncio_default_fixture_loop_scope=None, asyncio_default_test_loop_scope=function
collected 120 items                                                            

tests/test_docs.py ..................................................... [ 44%]
...................................................................      [100%]

============================= 120 passed in 0.84s ==============================

Porting to subtests causes each test to be reported just once:

...
collected 9 items                                                             

tests/test_docs.py .........                                            [100%]

============================== 9 passed in 0.15s ==============================

But... if you add -v for verbose output you get back a report that does include every subtest. Truncated, that looks like this:

...
collected 9 items                                                                                                

tests/test_docs.py::test_settings_are_documented SUBPASSED(setting='default_page_size')                    [ 11%]
tests/test_docs.py::test_settings_are_documented SUBPASSED(setting='max_returned_rows')                    [ 11%]
...
tests/test_docs.py::test_settings_are_documented PASSED                                                    [ 11%]
tests/test_docs.py::test_plugin_hooks_are_documented SUBPASSED(plugin='actor_from_request')                [ 22%]
tests/test_docs.py::test_plugin_hooks_are_documented SUBPASSED(plugin='actors_from_ids')                   [ 22%]
tests/test_docs.py::test_plugin_hooks_are_documented PASSED                                                [ 22%]...

...
tests/test_docs.py::test_rst_heading_underlines_match_title_length PASSED                                  [ 66%]
tests/test_docs.py::test_homepage PASSED                                                                   [ 77%]
tests/test_docs.py::test_actor_is_null PASSED                                                              [ 88%]
tests/test_docs.py::test_signed_cookie_actor PASSED                                                        [100%]

============================== 9 passed, 116 subtests passed in 0.15s ==============================

The last line shows how many subtests passed in addition to how many tests.

It looks to me like subtests run substantially faster than the equivalent parameterized tests. I'm more interested in the fact that subtests can now be programmatically generated at runtime based on test setup code.
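
To make that concrete, here's a small hypothetical sketch (not from the Datasette test suite) where the files being checked are created by setup code inside the test itself. @pytest.mark.parametrize couldn't have enumerated them at collection time, but each one still gets reported as its own subtest:

import json

def test_generated_files_are_valid_json(tmp_path, subtests):
    # Setup that only happens at runtime - in a real suite these files
    # might come from a build step or another fixture.
    (tmp_path / "a.json").write_text('{"ok": 1}')
    (tmp_path / "b.json").write_text('{"ok": 2}')

    # The list of files only exists now, after setup, so parametrize
    # couldn't have enumerated it at collection time - but each file
    # still gets its own subtest in the report.
    for path in sorted(tmp_path.glob("*.json")):
        with subtests.test(file=path.name):
            json.loads(path.read_text())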

Thoughts on Go vs. Rust vs. Zig


Thoughtful commentary on Go, Rust, and Zig by Sinclair Target. I haven't seen a single comparison that covers all three before and I learned a lot from reading this.

One thing that I hadn't noticed before is that none of these three languages implement class-based OOP.

Via Hacker News

Tags: go, object-oriented-programming, programming-languages, rust, zig

The Resonant Computing Manifesto


Launched today at WIRED’s The Big Interview event, this manifesto (of which I'm a founding signatory) encourages a positive framework for thinking about building hyper-personalized AI-powered software - while avoiding the attention hijacking anti-patterns that defined so much of the last decade of software design.

This part in particular resonates with me:

For decades, technology has required standardized solutions to complex human problems. In order to scale software, you had to build for the average user, sanding away the edge cases. In many ways, this is why our digital world has come to resemble the sterile, deadening architecture that Alexander spent his career pushing back against.

This is where AI provides a missing puzzle piece. Software can now respond fluidly to the context and particularity of each human—at scale. One-size-fits-all is no longer a technological or economic necessity. Where once our digital environments inevitably shaped us against our will, we can now build technology that adaptively shapes itself in service of our individual and collective aspirations.

There are echoes here of the Malleable software concept from Ink & Switch.

The manifesto proposes five principles for building resonant software: Keeping data private and under personal stewardship, building software that's dedicated to the user's interests, ensuring plural and distributed control rather than platform monopolies, making tools adaptable to individual context, and designing for prosocial membership of shared spaces.

Steven Levy talked to the manifesto's lead instigator Alex Komoroske and provides some extra flavor in It's Time to Save Silicon Valley From Itself:

By 2025, it was clear to Komoroske and his cohort that Big Tech had strayed far from its early idealistic principles. As Silicon Valley began to align itself more strongly with political interests, the idea emerged within the group to lay out a different course, and a casual suggestion led to a process where some in the group began drafting what became today’s manifesto. They chose the word “resonant” to describe their vision mainly because of its positive connotations. As the document explains, “It’s the experience of encountering something that speaks to our deeper values.”

Tags: ai, alex-komoroske, ai-ethics

Django 6.0 released


Django 6.0 includes a flurry of neat features, but the two that most caught my eye are background workers and template partials.

Background workers started out as DEP (Django Enhancement Proposal) 14, proposed and shepherded by Jake Howard. Jake prototyped the feature in django-tasks and wrote this extensive background on the feature when it landed in core just in time for the 6.0 feature freeze back in September.

Kevin Wetzels published a useful first look at Django's background tasks based on the earlier RC, including notes on building a custom database-backed worker implementation.
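
Based on the django-tasks prototype, using the new framework should look roughly like this. This is a minimal sketch that assumes the task decorator and enqueue() method carried over into django.tasks unchanged, so check the 6.0 release notes for the exact names:

# Hypothetical sketch, assuming Django 6.0's API mirrors the django-tasks
# prototype: a @task() decorator plus .enqueue() to queue work for a
# background worker instead of running it during the request.
from django.tasks import task

@task()
def send_welcome_email(user_id):
    # Runs later in a background worker process.
    ...

# Somewhere in a view or signal handler: queue the task and return immediately.
result = send_welcome_email.enqueue(user_id=42)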

Template Partials were implemented as a Google Summer of Code project by Farhan Ali Raza. I really like the design of this. Here's an example from the documentation showing the neat inline attribute which lets you both use and define a partial at the same time:

{# Define and render immediately. #}
{% partialdef user-info inline %}
    <div id="user-info-{{ user.username }}">
        <h3>{{ user.name }}</h3>
        <p>{{ user.bio }}</p>
    </div>
{% endpartialdef %}

{# Other page content here. #}

{# Reuse later elsewhere in the template. #}
<section class="featured-authors">
    <h2>Featured Authors</h2>
    {% for user in featured %}
        {% partial user-info %}
    {% endfor %}
</section>

You can also render just a named partial from a template directly in Python code like this:

return render(request, "authors.html#user-info", {"user": user})

I'm looking forward to trying this out in combination with HTMX.
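
Here's a rough sketch of how that combination might look. The view and the context dictionary are hypothetical, not from the Django docs; the only real pieces are the HX-Request header that HTMX sends and the template#partial syntax shown above:

# views.py - hypothetical sketch combining HTMX with a Django 6.0 template partial
from django.shortcuts import render

def author_detail(request, username):
    user = {"username": username, "name": "Example Author", "bio": "Placeholder bio"}
    # HTMX sets the HX-Request header on the requests it makes; when present,
    # render only the user-info partial from authors.html instead of the full page.
    if request.headers.get("HX-Request"):
        return render(request, "authors.html#user-info", {"user": user})
    return render(request, "authors.html", {"user": user})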

I asked Claude Code to dig around in my blog's source code looking for places that could benefit from a template partial. Here's the resulting commit that uses them to de-duplicate the display of dates and tags from pages that list multiple types of content, such as my tag pages.

Tags: django, python, ai, generative-ai, llms, ai-assisted-programming, htmx, coding-agents, claude-code

Emergent Ventures winners, 50th cohort

Geby Jaff, Berkeley, publication medium for AI-generated science.

Laura Ryan, London, data for the AIs.

Tara Rezaei, MIT, general career support/AI/o1.

Mihir Rao, Princeton, bio and AI.

Lorna MacLean, London, AI medical diagnosis of endometriosis.

David Yu, Waterloo, Ontario/Taiwan, fellowship program for agentic Taiwanese college students.

Aniket Panjwani, Lombard, Illinois, EconNow, AI-based software for economics.

Zixuan (Eric) Ma, GMU, to write about China.

Ivan Khalamendyk, Lviv, “I’m an independent Ukrainian physicist developing a ψ-field model of the universe – a single real wave ψ(x,t) that reproduces quantum matter, forces and gravity.”

José Luis Sabau, Mexico City, Perpetuo, Substack for Mexico.

Soleil Wizman, Yale University, longevity.

The post Emergent Ventures winners, 50th cohort appeared first on Marginal REVOLUTION.

How Classmates Became One Of The Most Familiar Names In Digital Connection For Adults Across The Country

Classmates has been part of the internet landscape long enough to feel like a legacy brand, yet it keeps finding ways to stay useful in a world that reinvents itself every five minutes. People who grew up during the first wave of social media often treat it like a digital time capsule, a place where they can revisit pieces of their past without digging through the attic or tracking down old yearbooks. That sense of familiarity still matters, especially as the broader tech world keeps chasing whatever the next trend promises to be. Classmates leans into something simpler. It preserves real history, it offers organization in a space that usually rewards chaos, and it gives users a way to see their own stories in context.

A Platform Built Around Personal History

While most social platforms want to speed up your day, Classmates.com tends to slow it down in a good way. People arrive with a purpose. They want to revisit graduation photos, track down an old club they forgot they joined, or see how many editions of their high school yearbook have survived the decades. The site has essentially built a curated archive out of the moments everyone swears they will keep track of, but rarely do. For a business, that creates a unique position. Nostalgia is powerful, but it only works when it stays grounded and authentic. Classmates.com handles that by keeping its focus on actual artifacts and verifiable information so that users always feel like they are stepping back into something real, not a reconstructed version of their past.

Why Digital Memory Still Matters

As people rely on their devices for almost everything, the idea of preserving personal history can feel like an afterthought. Still, the appetite for grounded digital memory continues to grow. Classmates capitalizes on that by organizing what would otherwise live in scattered boxes, lost email threads, or forgotten phone galleries. It turns those fragments into something coherent. That is valuable for users and equally valuable for brands that need consistent engagement without resorting to gimmicks. The platform stands at a midpoint between personal storytelling and digital archiving, which gives it a staying power many trend driven platforms struggle to maintain.

Expanding Beyond Reconnection And Into Shared Experiences

Social connections often strengthen when people do something together, which is why digital platforms continue to experiment with collaborative features. Classmates.com has explored ways to enhance that sense of participation by creating spaces where users can bounce between memory and activity. After reconnecting, you can video chat, play games online with friends, or even meet up in person for coffee. Shared experiences give older connections new life. For a platform built on reunion energy, leaning into interactive engagement helps it remain relevant to users who want more than a static look back at their past.

A Quiet Strength In An Overcrowded Market

Tech evolves quickly, and consumer expectations evolve with it. Classmates.com does not try to compete with platforms that chase instant novelty. Instead, it focuses on clarity and purpose. That strategy has helped it maintain a steady audience that values durability over hyperactive change. While new apps appear every year promising reinvention, Classmates continues to refine its tools for browsing, organizing, and discovering long term personal history. The brand occupies a rare corner of the digital world where consistency feels like an asset, not a sign of complacency.

How Brands Interpret The Value Of Longevity

Businesses often talk about the importance of retention, but Classmates demonstrates what it looks like in practice. Users may not log in every day, yet when they return, they usually have a reason. That kind of intentional engagement is hard to manufacture. Companies watching that behavior can see how longevity, trust, and clarity can shape brand identity without extravagant branding or constant reinvention. It proves that a steady presence can be just as influential as rapid growth, especially when people want something familiar that still works the way they expect it to.

Where Classmates Positions Itself For The Future

Digital heritage is becoming an industry of its own. As more platforms fade or pivot, the need for stable archives grows. Classmates.com continues to serve users who care about preserving their real stories in a simple, navigable format. The company appears to be building on that foundation rather than chasing entirely new identities. That approach gives it room to expand thoughtfully, whether through enhanced discovery tools, better cross-generational access, or features that help users bring their offline memorabilia into their online collections.

Classmates stands out by knowing exactly what it offers and leaning into it with calm confidence. In a space dominated by reinvention, it has carved out a thoughtful niche that treats personal history with care and clarity. It reminds users that some corners of the internet are meant to preserve, not overwhelm.

Photo: rawpixel.com via Freepik.



The post How Classmates Became One Of The Most Familiar Names In Digital Connection For Adults Across The Country appeared first on DCReport.org.

Friday: Personal Income and Outlays

Mortgage Rates note: Mortgage rates are from MortgageNewsDaily.com and are for top tier scenarios.

Friday:
• At 10:00 AM ET, Personal Income and Outlays for September. The consensus is for a 0.4% increase in personal income, and for a 0.4% increase in personal spending. And for the Core PCE price index to increase 0.2% (up 2.9% YoY).

• Also at 10:00 AM, University of Michigan's Consumer sentiment index (Preliminary for December).

Apollo 17 at Shorty Crater



Tracking Weekend Storm Impacts