SIGINT During World War II

The NSA and GCHQ have jointly published a history of World War II SIGINT: “Secret Messengers: Disseminating SIGINT in the Second World War.” This is the story of the British SLUs (Special Liaison Units) and the American SSOs (Special Security Officers).

AI Applications in Cybersecurity

There is a really great series of online events highlighting cool uses of AI in cybersecurity, titled Prompt||GTFO. Videos from the first three events are online. And here’s where to register to attend, or participate, in the fourth.

Some really great stuff here.

My excellent Conversation with Nate Silver

Here is the audio, video, and transcript.  Here is part of the episode summary:

Tyler and Nate dive into expected utility theory and random Nash equilibria in poker, whether Silver’s tell-reading abilities transfer to real-world situations like NBA games, why academic writing has disappointed him, his move from atheism to agnosticism, the meta-rationality of risk-taking, electoral systems and their flaws, 2028 presidential candidates,  why he thinks superforecasters will continue to outperform AI for the next decade, why more athletes haven’t come out as gay, redesigning the NBA, what mentors he needs now, the cultural and psychological peculiarities of Bay area intellectual communities, why Canada can’t win a Stanley Cup, the politics of immigration in Europe and America, what he’ll work on next, and more.

Excerpt:

COWEN: If you think about the Manifold types in terms of the framework in your book, how they think about risk — is there a common feature that they’re more risk-averse, or that they worry more? Is there a common feature that they like the idea that they hold some kind of secret knowledge that other people do not have? How do you classify them? They’re just high in openness, or what is it?

SILVER: They’re high in openness to experience. I think they’re very high in conscientiousness.

COWEN: Are they? I don’t know.

SILVER: Some of them are. Some of them are, yes.

COWEN: I think of them as high variance in conscientiousness, rather than high in it.

[laughter]

SILVER: The EAs and the rationalists are more high variance, I think. There can be a certain type of gullibility is one problem. I think, obviously, EA took a lot of hits for Sam Bankman-Fried, but if anything, they probably should have taken more reputational damage. That was really bad, and there were a lot of signs of it, including his interviews with you and other people like that. It contrasts with poker players who have similar phenotypes but are much more suspicious and much more street smart.

Also, the Bay Area is weird. I feel like the West Coast is diverging more from the rest of the country.

COWEN: I agree.

SILVER: It’s like a long way away. Just the mannerisms are different. You go to a small thing. You go to a house party in the Bay Area. There may not be very much wine, for example. In New York, if the host isn’t drinking, then it’d be considered sacrilege not to have plenty of booze at a party. Little things like that, little cultural norms. You go to Seattle — it feels like Canada to me almost, and so these things are diverging more.

COWEN: Why is belief in doom correlated with practice of polyamory? And I think it is.

SILVER: If you ask Aella, I guess, she might say, if we’re all going to die or go to whatever singularity there is, we might as well have fun in the meantime. There’s some of that kind of hedonism. Although in general, it’s not a super hedonistic movement.

COWEN: It seems too economistic to me. Even I, the economist — I don’t feel people think that economistically. There’s more likely some psychological predisposition toward both views.

SILVER: I guess you could argue that society would be better organized in a more polyamorous relationship. People do it implicitly in a lot of ways anyway, including in the LGBTQ [laughs] community, which has different attitudes toward it potentially. And if there’s not as much childbearing, that can have an effect, potentially. I think it’s like they are not being constrained by their own society thing that is taken very seriously in that group. There’s enough disconnectedness and aloofness where they’re able to play it out in practice more.

That creeps a little bit into Silicon Valley too, which can be much more whimsical and fanciful than the Wall Street types I know, for example.

Recommended.  Here is my 2024 episode with Nate, here is my 2016 episode with him.


How North Korea Infiltrated American Companies With Fake Tech Workers

For the past few months, The Wall Street Journal’s Bob McMillan has been writing a series of stories on fake North Korean workers who have infiltrated American companies. In this episode, we break the whole situation down with McMillan, who is a longtime friend and a top-notch security reporter.


The short of the tale is this: North Koreans hop on LinkedIn and other job sites and pose as American remote workers looking for gigs. Once they get hired, the North Koreans then recruit Americans to help them deal with some of the job mechanics like submitting tax paperwork and running company laptops from inside the US.

McMillan has found some Americans who are managing dozens of laptops at their homes on behalf of these North Korean workers. Each morning, the American patsy wakes up, turns the laptops on, and then logs their North Korean workers into their jobs. It’s a practice now known as laptop farming.

The North Koreans tend to be pretty good workers! That is until they start siphoning off money and intellectual property for the Great Leader.

Last month, Arizona resident Christina Marie Chapman pled guilty to wire fraud and other crimes linked to this scheme. Per the Department of Justice, Chapman “was sentenced today to 102 months in prison for her role in a fraudulent scheme that assisted North Korean Information Technology (IT) workers posing as U.S. citizens and residents with obtaining remote IT positions at more than 300 U.S. companies. The scheme generated more than $17 million in illicit revenue for Chapman and for the Democratic People’s Republic of Korea (DPRK or North Korea).”

All told, the DoJ reckons North Korea has pulled in hundreds of millions of dollars from its network of laptop farmers. McMillan writes about it all here.

If you’re an employer on the lookout for one of these fake remote workers, you’ll want to scan for Kevins in your organization who are really into the Minions. We explain in the episode - promise.

Enjoy!

The Core Memory podcast is made possible by the genius investors at E1 Ventures. We’re not sure if E1 is into the Minions or not, but they are into investing in great companies.


A 90-minute video of making a batch of woodblock prints “from blank paper to finished print” from the printer’s POV. Relaxing & ASMR-adjacent.


Ariane 6 launches European weather satellite


An Ariane 6 successfully launched a European weather satellite with an Earth science hosted payload Aug. 12 in the third flight of that vehicle.


Live coverage: SpaceX to launch 28 Starlink satellites on Falcon 9 rocket from Cape Canaveral

File: A SpaceX Falcon 9 rocket stands at Space Launch Complex 40 (SLC-40) at Cape Canaveral Space Force Station. Image: Adam Bernstein/Spaceflight Now

SpaceX is preparing to launch a batch of 28 of its Starlink V2 Mini satellites into low Earth orbit minutes before sunrise on Thursday morning.

Liftoff of the Falcon 9 rocket on the Starlink 10-20 mission from pad 40 at Cape Canaveral Space Force Station is scheduled for 8:29 a.m. EDT (1229 UTC). This will be the 69th orbital launch from Florida so far this year.

Spaceflight Now will have live coverage beginning about an hour prior to liftoff.

The 45th Weather Squadron forecast a 90 percent chance of favorable weather conditions for liftoff during the four-hour launch window. Meteorologists said on Wednesday that they only had slight concerns with interference from “any sneaky cumulus that pushes onshore.”

“Overnight/early morning winds are expected to maintain a more southwesterly flow for both primary and backup launch windows, which should help limit any cumulus clouds from tracking onshore from activity that develops over the Atlantic,” launch weather officers wrote.

SpaceX will use the Falcon 9 first stage booster with the tail number B1085 to launch this mission on its 10th flight. Its previous missions included NASA’s Crew-9, Firefly Aerospace’s Blue Ghost Mission 1 and Fram2.

About 8.5 minutes after liftoff, B1085 will target a landing on the droneship ‘Just Read the Instructions.’ If successful, this will be the 132nd landing on this vessel.

For military space, what tasks should be automated?

SALT LAKE CITY – It’s easy to talk about satellite autonomy but significant work remains to determine exactly which tasks should be handled by machines, according to speakers at the 2025 Small Satellite Conference. Military aircraft have extensive built-in autonomy thanks to decades of experience identifying useful features in combat exercises. U.S. Space Force satellite […]


Strengthening ties in orbit: the expanding U.S.-UAE space partnership

The United Arab Emirates photographed by an Expedition 38 crew member on the International Space Station. Credit: NASA

The United States and the United Arab Emirates (UAE), though differing in size and history, have forged a vibrant partnership in space. In just over a decade, this bond has accelerated the UAE’s rise as a spacefaring nation while opening new avenues for U.S. industry and diplomacy.  With the UAE’s capital and speed, and the […]


Burnt space insurers are getting back in the game

Illustration of the SARah-Passiv Earth observation satellite pair on either side of the SARah-Active satellite. Credit: OHB

Insurers are returning to the space industry after retreating in the wake of harrowing losses just a few years ago. At least three firms have announced capacity for space risks in recent months: Phemis and Hive, both revived from former space underwriting teams, and Whitecap, a solo effort led by an underwriter from a now-defunct insurance […]


Rogue expands staff ahead of planned double launch

SALT LAKE CITY – Rogue Space Systems is reorganizing to prepare for growth in its space logistics business and a double launch in 2027. Brook Leonard, a retired U.S. Space Force major general, is the new CEO. Former CEO Jon Beam will serve as Rogue president and chief strategy officer. David Franklin, a retired Space […]


Impulse Space sees strong demand for GEO rideshare program


A year after announcing plans to offer rideshare missions to geostationary orbit, Impulse Space says the demand has been strong enough to plan an annual series of them.


Gabe Zimmerman on customer needs and scaling SmallSat production


In this episode of Space Minds, host Mike Gruss speaks with Gabe Zimmerman, Director, In-Space at Ursa Major.


Cambrian Works Selects Astroscale U.S. as its Mission Partner for NASA Swift Observatory Boost Mission Concept Study


Commercial mission concept could give the Neil Gehrels Swift Observatory a new lease on life, preserving its search for the universe’s most powerful explosions


Burt’s parting thoughts on the Space Force and its next chapter

Lt. Gen. DeAnna Burt, deputy chief of space operations, is retiring after 33 years of service. She reflects on the progress achieved, and the challenges ahead for the military space branch


Flying ‘standby’ proves popular for SpaceX rideshares


Five years into its program to provide smallsat rideshare launch services, SpaceX is emphasizing flexibility to accommodate growing demand. SpaceX’s rideshare program has now launched more than 1,400 satellites across more than 30 missions, said Ronnie Foreman, the company’s senior sales manager for rideshare. Foreman spoke during an Aug. 12 side session at SmallSat 2025. […]


White House issues executive order to revamp commercial space regulations


The White House has issued a widely anticipated executive order addressing several commercial space regulatory issues, from launch licensing reform to mission authorization.


Kongsberg gearing up for Arctic smallsat expansion

Norway’s Kongsberg Defence & Aerospace is preparing to expand its small satellite footprint over the Arctic amid rising surveillance and communications needs in the increasingly strategic region.


What lies in the heart of Orion?


Was It Ghislaine All Along?

Audrey Strauss, Acting United States Attorney for the Southern District of New York, speaks during a news conference to announce charges against Ghislaine Maxwell for her alleged role in the sexual exploitation and abuse of multiple minor girls by Jeffrey Epstein, Thursday, July 2, 2020, in New York. (AP Photo/John Minchillo)

I’m mildly fascinated by this piece in New York Magazine’s Intelligencer section. It’s the review of a new biography of Andrew, Duke of York, by a guy named Andrew Lownie. (The piece appears to be free for a limited time.) What sparked my interest is the major if not central role of Ghislaine Maxwell and thus Jeffrey Epstein. In fact, the upshot of the whole thing is to make Maxwell a much more central and dominating figure in the Epstein story than perhaps even Epstein himself, certainly in Andrew’s life and perhaps in Epstein’s as well.

At one level I could not care less about any of these people. As I’ve noted in my other Epstein posts, I’m interested in the story because of the way other people are interested in it — lots of people — and how that interest both intersects with our politics and in some material ways explains our politics.

As I noted, this biography and the article about it portray Maxwell not as Epstein’s procurer and sometimes girlfriend, but instead as the center of gravity manipulating or perhaps using and guiding both Epstein and Andrew, and seemingly many others. She’s less of an extension of Epstein’s criminality and more of a guiding force.

Where all of this connected for me is adding to the impression that I’ve gotten from a decent amount of Epstein-iana I’ve read and watched over the last month or so. As I wrote a few weeks ago, I’ve always been very skeptical of the idea that Epstein was running either a high end pedophilia/prostitution ring or, from another perspective, a blackmail and extortion racket. Those ideas just seem too fanciful, and more importantly there’s just very, very little evidence of either being true. What does come through very clearly in the various things I’ve read about the man is that he was an inveterate social climber, someone obsessed with building social and intellectual prestige. You see this in the way that he collected college professors and intellectuals, contributing several million dollars to Harvard University alone as part of that effort. A Times article from a week ago sheds more light on Epstein’s Manhattan townhouse and it starts with a typed letter Woody Allen wrote as a gift for Epstein’s 63rd birthday. It’s one of many letters other luminaries and moguls served up: Ehud Barak, Mort Zuckerman et al.

Allen’s is very interesting. Whatever your views of Allen, he’s a perceptive man. The letter is obviously tongue in cheek to a degree, embellished and exaggerated. But if I’m reading it right, the essence is on the level: Epstein’s movers-and-shakers dinners didn’t start out as elegant and sumptuous affairs. As Allen relates it, the first versions had him as something more like an arriviste bro living alone in a cavernous seven-story townhouse with only the clumsiest idea of how entertaining at that stratospheric level is done. He made progress over time. The review of the Lownie book suggests Maxwell was an important part of that: not only how to put on a proper dinner/salon, but also the mores and methods of building connections and friendships with people who operated at her station in the world. The book suggests that Maxwell used Prince Andrew — who was too entitled, thirsty and simple to grasp how she exploited him — as a tool to burnish her and Epstein’s reputation for clout and insiderdom. You go to an Epstein event and there’s Prince Andrew!

None of this is necessarily here or there in whatever President Trump fears being released in the trove of material the federal government seized from Epstein upon his arrest in 2019. But I think it’s a more accurate picture of what this guy and this world were about: a collector of people and an inveterate social climber using money, charm, other famous people — and yes, young women and girls — as magnets to pull the wealthy, powerful, glamorous and respected into his orbit. That’s always seemed the more plausible understanding of this than the global blackmail ring.

Perplexity Is on the Prowl to Buy Web Browsers

The Information (paywalled, alas):

In December, Perplexity discussed possibly buying the six-year-old The Browser Co. with that company’s leaders, according to two people with knowledge of the discussions. Talks with the company, which operates an AI-powered web browser, Dia, did not progress, and no price was discussed.

OpenAI has also spoken to The Browser Co. executives about possibly selling to the ChatGPT creator, according to two people familiar with the discussions. Those discussions went further, to the point of a possible price, but ended after the two sides couldn’t agree on terms.

Then, earlier this summer, Perplexity offered to buy Brave, a San Francisco–based company that runs a privacy-focused web browser and search engine, for around $1 billion, primarily in Perplexity’s stock, according to a person with direct knowledge of the situation. But the two sides couldn’t agree on price and the deal discussions didn’t move forward, the person said.

Meanwhile, investors in Perplexity and DuckDuckGo tried to arrange meetings between the two companies’ leaders, according to people close to the companies. Perplexity CEO Srinivas and Gabriel Weinberg, CEO of 17-year-old DuckDuckGo, met and discussed Perplexity’s interest in acquiring browsers as a way of reaching more consumers, one of the people said. The conversations didn’t lead to any offer.

DuckDuckGo is even more privacy-focused than Brave. It’s their entire brand. Perplexity isn’t privacy-focused at all. And Perplexity has already made their own browser, Comet, which is so cool you currently have to pay $200/month to get access to it. Comet is, unsurprisingly, forked from Chromium, and pretty much looks like an uglier version of Chrome. Brave Browser looks almost exactly like Chrome (and is probably my favorite Chromium-based browser).

None of this makes any sense, so it’s no surprise most of these talks didn’t go far, and Brave rejected the supposed $1 billion offer in Perplexity stock, which as far as I’m concerned might as well have been $1 billion in Monopoly money. Why not call Eddy Cue and see if Apple wants to sell Safari?

 ★ 

★ Max Read’s ‘A Literary History of Fake Texts in Apple’s Marketing Materials’

Max Read, two years ago, “A Literary History of Fake Texts in Apple’s Marketing Materials”:

I’m talking about the mocked-up texts and emails Apple puts together to demonstrate new messaging features in its operating-system updates, presumably written by some well-paid professionals in Apple’s marketing department. These eerily cheery, aggressively punctuated messages suggest an alternate dimension in which polite, good-natured, rigorously diverse groups of friends and coworkers use Apple products exactly how they are designed to be used, without complaint or error. [...]

If there is still mystery in Apple events, it is located here, in the uncanny fictional world suggested in these images: Who are these people? And what is wrong with them that they text like this?

A proper literary study of fake Apple texts has yet to be undertaken, but with the help of the Wayback Machine, we can sift through more than a decade’s work of marketing materials to identify certain trends and themes. For the sake of precision, let’s begin our survey in 2011, with the launch of iMessage in iOS 5. Here, so far as I can tell, is the first-ever fake Apple iMessage conversation.

I’ve been sitting on this one since shortly after Read published it, and came upon it again today in my pile of I-should-post-about-this-but-I-have-a-lot-I-need-to-say-about-it links. Now — just under four weeks away from Apple’s expected keynote for the iPhones 17 and new Apple Watches — seems as good a time as ever to finally link to it. It really is a lot of fun, and Read seemingly found every marketing screenshot of Messages or Mail from 2011–2023 that he could.

But re-reading it today, I realize why I sat on it. There’s a cynicism to the whole thing that grates. Read is disdainful of everything about these messages — their cheerful tone, professional-grade photography, even their attentive punctuation. But of course they’re not realistic. Of course every person in every chat “use[s] Apple products exactly how they are designed to be used, without complaint or error.” Of course everyone is always happy and friendly and having a good time. Of course the groups are always diverse.1 What other kinds of fictional people are going to be portrayed by Apple in their marketing screenshots? Ugly unhappy illiterates who take bad photos and never go anywhere? It’d be really weird if Apple’s fake texts for keynotes were anything other than idyllic — if the photos kinda sucked, if words were misspelled and entirely lowercase, if punctuation were omitted.

So of course the fake texts in Apple marketing are, upon consideration, obviously phony. What I’ve long thought interesting is just how much effort Apple clearly puts into them. They’re good phony. Pitch-perfect for Apple’s Designed-in-California brand. A lot of work goes into the fake trips and parties portrayed and described in these threads, and it shows. But they’re not so interesting as to distract from the keynote. Imagine if a screenshot flew by with a Messages thread between colleagues gossiping about someone getting fired for expense account fraud, or about an extramarital affair. The purpose of these fake texts is the opposite of the supposed intention of the Liquid Glass design language: it’s fake content meant to put the emphasis on the real software. They’re actually worth the deep dive Max Read produced, genuinely interesting for what they are — but somehow Read can’t bring himself to say that. The withering cynicism of his tone is at odds with the care he took to document their history so thoroughly.

Searching the DF archive for Read’s name, I came up with one hit, and it explains his overly cynical schtick. Read was editor-in-chief at Gawker, before Peter Thiel and his puppet Hulk Hogan (RIP) sued them out of business in 2016. And when I previously mentioned Read (in 2020), it was because he was one of two ex-Gawkerites who sold a show to Apple TV+ called Scraper2 about a thinly-veiled fictional version of Gawker, but which show was nixed, after several episodes had already been filmed, supposedly by Tim Cook himself, out of his personal loathing for Gawker. To be clear, I’m not suggesting Read took an overly snarky attitude to describing Apple’s fake-text literary history because Cook pulled the plug on Scraper. I’m saying that Gawker was infused with the sort of attitude that holds all marketing in contempt. I, of course, firmly believe that many subjects are worthy of withering scorn. But the Gawker attitude was that no subject was worthy of anything but withering scorn. I never could abide that, and there’s something like it undergirding this otherwise splendid piece from Read. It’s like an otherwise delightful cocktail with one distinctive unpleasant ingredient, which ingredient was added, deliberately, to imbue the libation with an aftertaste of spite.


  1. Except for age. You’ll spot few, if any, gray hairs in the photos and Memojis these characters share. That’s not a complaint. Youth is aspirational, and there are gray hairs enough amongst the executives who present these keynote segments. ↩︎︎

  2. A perfect title, I have to say. “Scraper” would have been a better name for Gawker than “Gawker” was. (Much like how The Studio’s “Continental Studios” sounds like a real century-old peer to Paramount and Columbia Pictures.) ↩︎

Updated Design for Pebble Time 2 Watch

Eric Migicovsky:

First off, for those who didn’t catch the news from a few weeks ago — we’ve been able to recover the Pebble trademark! Our new watches will change from being called Core 2 Duo → Pebble 2 Duo, and Core Time 2 → Pebble Time 2.

The big news today is that we’re revealing the final design for Pebble Time 2. The design that we showed off back in March were preliminary designs. We’ve been able to tweak and improve the industrial design quite a bit since then. I think it’s turned out fantastically well! I even have a working albeit early engineering sample on my wrist.

These look good. Fundamentally Pebble-y but with smaller-than-ever (for Pebble) screen bezels.

I stand by what I wrote back in March, though. They should make just one new watch, not two. In March I suggested making that one single new model the black-and-white display one, to lean into Pebble’s differentiation from Apple Watch and other leading smartwatches. Seeing these new designs for the color display Time 2 — and Migicovsky’s obvious personal enthusiasm for this model — makes me think this should be the one true new Pebble.

Even their naming scheme is confusing. The $150 plastic, 1.2-inch black-and-white-display model is the Pebble 2 Duo. The $225 steel, 1.5-inch color-display model is the Pebble Time 2. Why is the “2” in different places? Nothing about the names “Duo” or “Time” suggests which one is higher-end than the other. If anything, “Time” sounds more simplistic to my ears, like maybe it only tells the time — but that’s the nicer one.

Maybe they know something I don’t, and pre-orders are strong for the uglier, plastic, black-and-white-display 2 Duo. But even if that’s true, that’s selling into the existing Pebble fanbase. If they have any hope of expanding to new users, they ought to put all their wood behind one arrow, and that ought to be the clearly superior, better-looking, bigger-display Time 2. The Time 2 costs just $75 more, but seems way more than $75 better.

 ★ 

We Did It. You Did It.

As you can see we hit our goal of raising $500,000 during this year’s drive. The drive will continue until later in the month. So if you didn’t get a chance to contribute, by all means the door remains very wide open. We can always put more dollars to good use. But $500,000 was the goal because that’s the number we need/needed to make good on our plans. So we’ll ramp back the reminders and pleas and so forth. We hit the finish line we needed to hit. We’re all set.

I’m writing this to thank you. One of our challenges running TPM is not treating things as routine even as they become in some sense factually routine. Our audience, you, just contributed half a million dollars in four weeks simply because we asked and said we would put it to good use. That’s amazing. And you’ve had our back, caught us in this organizational trust fall every time we’ve done this, which now goes back five years. It’s a testament to the trust you put in our team and the quality you see in their work. I’m thankful to them for doing that work. I’m thankful to you for recognizing it, for valuing it. This organization, this community has an extraordinary commerce in dedication and trust, passing those back and forth between the people who write the articles and those who read them. It’s a pleasure and an honor to be associated with all of it. Truly.

Thursday: Unemployment Claims, PPI

Mortgage Rates Note: Mortgage rates are from MortgageNewsDaily.com and are for top tier scenarios.

Thursday:
• At 8:30 AM ET, The initial weekly unemployment claims report will be released. The consensus is for initial claims to increase to 228 thousand from 226 thousand last week.

• Also at 8:30 AM, The Producer Price Index for July from the BLS. The consensus is for a 0.2% increase in PPI, and a 0.2% increase in core PPI.

pyx: a Python-native package registry, now in Beta


Since its first release, the single biggest question around the uv Python environment management tool has been Astral's business model: Astral are a VC-backed company and at some point they need to start making real revenue.

Back in September Astral founder Charlie Marsh said the following:

I don't want to charge people money to use our tools, and I don't want to create an incentive structure whereby our open source offerings are competing with any commercial offerings (which is what you see with a lot of hosted-open-source-SaaS business models).

What I want to do is build software that vertically integrates with our open source tools, and sell that software to companies that are already using Ruff, uv, etc. Alternatives to things that companies already pay for today.

An example of what this might look like (we may not do this, but it's helpful to have a concrete example of the strategy) would be something like an enterprise-focused private package registry. [...]

It looks like those plans have become concrete now! From today's announcement:

TL;DR: pyx is a Python-native package registry --- and the first piece of the Astral platform, our next-generation infrastructure for the Python ecosystem.

We think of pyx as an optimized backend for uv: it's a package registry, but it also solves problems that go beyond the scope of a traditional "package registry", making your Python experience faster, more secure, and even GPU-aware, both for private packages and public sources (like PyPI and the PyTorch index).

pyx is live with our early partners, including Ramp, Intercom, and fal [...]

This looks like a sensible direction to me, and one that stays true to Charlie's promises to carefully design the incentive structure to avoid corrupting the core open source project that the Python community is coming to depend on.

Via @charliermarsh

Tags: open-source, packaging, python, uv, astral, charlie-marsh

Screaming in the Cloud: AI’s Security Crisis: Why Your Assistant Might Betray You


I recorded this podcast conversation with Corey Quinn a few weeks ago:

On this episode of Screaming in the Cloud, Corey Quinn talks with Simon Willison, founder of Datasette and creator of LLM CLI about AI’s realities versus the hype. They dive into Simon’s “lethal trifecta” of AI security risks, his prediction of a major breach within six months, and real-world use cases of his open source tools, from investigative journalism to OSINT sleuthing. Simon shares grounded insights on coding with AI, the real environmental impact, AGI skepticism, and why human expertise still matters. A candid, hype-free take from someone who truly knows the space.

This was a really fun conversation - very high energy and we covered a lot of different topics. It's about a lot more than just LLM security.

Tags: ai, prompt-injection, podcast-appearances, lethal-trifecta, corey-quinn

How Does A Blind Model See The Earth?


Fun, creative new micro-eval. Split the world into a sampled collection of latitude/longitude points and for each one ask a model:

If this location is over land, say 'Land'. If this location is over water, say 'Water'. Do not say anything else.

Author henry goes a step further: for models that expose logprobs, they use the relative probability scores of Land or Water to get a confidence level; for other models, they prompt four times at temperature 1 to get a score.
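
Here is a minimal sketch of that scoring loop. This is a reconstruction rather than the author's code: the choice of the llm library as the client, the way the coordinates are appended to the prompt, the model id, and the 10° grid spacing are all assumptions made for illustration.

import llm
import matplotlib.pyplot as plt

# Hypothetical framing: the eval's instruction plus the coordinates of the sampled point.
PROMPT = (
    "If this location is over land, say 'Land'. If this location is over water, "
    "say 'Water'. Do not say anything else. Location: {lat:.1f}, {lon:.1f}"
)

def land_probability(model, lat, lon, samples=4):
    # For models without logprobs: prompt several times at temperature 1 and
    # treat the fraction of 'Land' answers as the probability of land.
    hits = 0
    for _ in range(samples):
        text = model.prompt(PROMPT.format(lat=lat, lon=lon), temperature=1.0).text()
        hits += text.strip().lower().startswith("land")
    return hits / samples

model = llm.get_model("gpt-4.1-mini")  # any model registered with llm works here
lats = list(range(-80, 81, 10))        # coarse illustrative grid
lons = list(range(-180, 181, 10))
grid = [[land_probability(model, lat, lon) for lon in lons] for lat in lats]

# Blue (water, 0.0) through green (land, 1.0), roughly matching the charts in the post.
plt.imshow(grid, origin="lower", extent=[-180, 180, -80, 80], cmap="winter")
plt.colorbar(label="P(land)")
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.show()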

And then.. they plot those probabilities on a chart! Here's Gemini 2.5 Flash (one of the better results):

A global map of land probability from google/gemini-2.5-flash: longitude (-180° to 180°) on the x-axis, latitude (-80° to 80°) on the y-axis, on a blue-to-green scale where blue is water (0.0) and green is land (1.0). The continental outlines of North America, South America, Africa, Europe, Asia, and Australia are clearly visible against the blue oceans.

This reminds me of my pelican riding a bicycle benchmark in that it gives you an instant visual representation that's very easy to compare between different models.

Via @natolambert

Tags: ai, generative-ai, llms, evals

simonw/codespaces-llm


GitHub Codespaces provides full development environments in your browser, and is free to use for anyone with a GitHub account. Each environment has a full Linux container and a browser-based UI using VS Code.

I found out today that GitHub Codespaces come with a GITHUB_TOKEN environment variable... and that token works as an API key for accessing LLMs in the GitHub Models collection, which includes dozens of models from OpenAI, Microsoft, Mistral, xAI, DeepSeek, Meta and more.

Anthony Shaw's llm-github-models plugin for my LLM tool allows it to talk directly to GitHub Models. I filed a suggestion that it could pick up that GITHUB_TOKEN variable automatically and Anthony shipped v0.18.0 with that feature a few hours later.

... which means you can now run the following in any Python-enabled Codespaces container and get a working llm command:

pip install llm
llm install llm-github-models
llm models default github/gpt-4.1
llm "Fun facts about pelicans"

Setting the default model to github/gpt-4.1 means you get free (albeit rate-limited) access to that OpenAI model.
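
The post shows the CLI; the same call should also work from Python via llm's API, assuming the plugin is installed and GITHUB_TOKEN is available in the Codespace (this snippet is my own sketch, not taken from the post):

import llm

# Assumes llm and llm-github-models are installed and that the plugin picks up
# GITHUB_TOKEN to authenticate against GitHub Models.
model = llm.get_model("github/gpt-4.1")
response = model.prompt("Fun facts about pelicans")
print(response.text())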

To save you from needing to even run that sequence of commands I've created a new GitHub repository, simonw/codespaces-llm, which pre-installs and runs those commands for you.

Anyone with a GitHub account can use this URL to launch a new Codespaces instance with a configured llm terminal command ready to use:

codespaces.new/simonw/codespaces-llm?quickstart=1

Screenshot of a GitHub Codespaces VS Code window showing the codespaces-llm README (a Codespaces environment with LLM, Python 3.13, uv and the GitHub Copilot extension, plus a "Launch Codespace" button) above a terminal tab running llm 'Fun facts about pelicans', which has generated a five-item list of pelican facts covering their beaks, fishing technique, flying, buoyancy, and dive-bombing.

While putting this together I wrote up what I've learned about devcontainers so far as a TIL: Configuring GitHub Codespaces using devcontainers.

Tags: github, projects, python, ai, til, openai, generative-ai, llms, llm, github-codespaces, anthony-shaw

Hackification


On Monday I wrote about Donald Trump’s disastrous press conference touting the economy along with Stephen Moore, a former chief economist at the Heritage Foundation. As I noted, Moore is a dishonest partisan hack, which is only to be expected, but also bizarrely incompetent, incapable of ever getting his facts right. To explain the phenomenon, I invoked Hannah Arendt:

Totalitarianism in power invariably replaces all first-rate talents, regardless of their sympathies, with those crackpots and fools whose lack of intelligence and creativity is still the best guarantee of their loyalty.

Let me call this Arendt’s Law: Totalitarian and wannabe totalitarian regimes only hire incompetent hacks.

So when Trump nominated E.J. Antoni, the current chief economist at Heritage, to head the Bureau of Labor Statistics, it seemed safe to assume that he would be cut from the same cloth. But although Trump’s Truth Social post announcing the nomination declared that Antoni is a Highly Respected Economist, I and most of the economists I talk with knew nothing about him.

Fortunately, Menzie Chinn of the University of Wisconsin, who actually is a Highly Respected Economist and whose blog Econbrowser has been influential for many years, has been on Antoni’s case for a while. And sure enough, Arendt’s Law remains undefeated.

Before I get to Chinn’s work, yesterday morning’s Antoni headline. I’ve argued since before Trump took office that this administration would eventually get around to cooking the economic books. But I didn’t expect it right away. The process of constructing a monthly jobs report is complex. You can’t just take a Sharpie and write in the numbers you want. Corrupting the data would require firing or intimidating a large number of people, which would, I thought, take time.

But one should never underestimate the audacity of hacks. On Monday Antoni went on Fox Business and suggested that the BLS should stop issuing monthly jobs reports until the “problems” at the agency are fixed.

I guess that would be one way to let Trump continue claiming that the economy is booming — just stop publishing the data showing that it isn’t.

True, Antoni did say that the BLS should continue issuing quarterly reports, but scrapping monthly numbers would give Trump’s people more time to corrupt the data — and wanna bet that if the next quarterly report looks bad, Antoni, if confirmed at the BLS, would find reasons to hold off on its release?

Incidentally, as Claudia Sahm reminds us, the BLS is legally required to issue monthly employment reports. So Antoni’s proposal, aside from being a transparently corrupt attempt to hide bad news, would be flatly illegal. I’m pretty sure that canceling publication of the Consumer Price Index, which will be next on the agenda once the full impact of Trump’s tariffs is felt, would also be illegal. But does that sort of thing matter these days?

But let me get to Menzie Chinn who, as I said, has been on Antoni’s case for a while. It turns out that for someone with almost no publications, who has been largely invisible from policy discourse, Antoni has been responsible for a surprisingly large number of bad economic analyses.

Chinn puts special emphasis on a 2024 paper circulated by Antoni and Peter St. Onge claiming that real GDP peaked at the end of 2021, and never recovered — that is, that the U.S. economy was in a deep recession for Joe Biden’s last three years in office.

Chinn tried to replicate their results, and even using what they claimed was their (weird) methodology couldn’t get anywhere close to their numbers. My guess is that a forensic analysis, should anyone bother (I don’t recommend it) would find that Antoni and St. Onge committed a Stephen Moore: They just made some math mistakes or copied some numbers down incorrectly.

But why bother? An economist who gets results completely at odds with every other piece of available evidence — remember, in 2024 The Economist described the U.S. economy as “The envy of the world” — owes it both to himself and his readers to provide a detailed, reproducible explanation of why his story is so different. Antoni didn’t.

I’d like to think that Antoni’s utter professional inadequacy for the role of BLS Commissioner will keep Congress from confirming him. But as I mentioned Monday, Stephen Moore already had a well-established reputation for surreal incompetence by the time Trump tried to install him on the Federal Reserve Board. Yet he would probably have been confirmed anyway if unsavory facts about his personal life hadn’t surfaced.

So there’s a good chance that Antoni will, in fact, take over the BLS. And the result will be the total destruction of one of the world’s greatest statistical agencies — an agency that has, among other things, been a crucial aid to business decision-making. It won’t even matter whether the Trumpists cook the books (although they will.) For from the moment Antoni takes full control, nobody will believe any numbers coming out of BLS.

Fortunately, the same thing won’t be happening to other government agencies providing crucial information, like the Centers for Disease Control. Oh, wait.

MUSICAL CODA

Wednesday 13 August 1662

Up early, and to my office, where people come to me about business, and by and by we met on purpose to enquire into the business of the flag-makers, where I am the person that do chiefly manage the business against them on the King’s part; and I do find it the greatest cheat that I have yet found; they having eightpence per yard allowed them by pretence of a contract, where no such thing appears; and it is threepence more than was formerly paid, and than I now offer the Board to have them done. We did not fully end it, but refer it to another time.

At noon Commr. Pett and I by water to Greenwich, and on board the pleasure-boats to see what they wanted, they being ordered to sea, and very pretty things I still find them, and so on shore and at the Shipp had a bit of meat and dined, there waiting upon us a barber of Mr. Pett’s acquaintance that plays very well upon the viollin. Thence to Lambeth; and there saw the little pleasure-boat in building by the King, my Lord Brunkard, and the virtuosoes of the town, according to new lines, which Mr. Pett cries up mightily, but how it will prove we shall soon see.

So by water home, and busy at my study late, drawing a letter to the yards of reprehension and direction for the board to sign, in which I took great pains. So home and to bed.


After first operational launch, here’s the next big test for ULA’s Vulcan rocket

United Launch Alliance delivered multiple US military satellites into a high-altitude orbit after a prime-time launch Tuesday night, marking an important transition from development to operations for the company's new Vulcan rocket.

This mission, officially designated USSF-106 by the US Space Force, was the first flight of ULA's Vulcan rocket to carry national security payloads. Two test flights of the Vulcan rocket last year gave military officials enough confidence to certify it for launching the Pentagon's medium-to-large space missions.

United Launch Alliance's third 202-foot-tall (61.6-meter) Vulcan rocket lifted off from Cape Canaveral Space Force Station, Florida, at 8:56 pm EDT Tuesday (00:56 UTC Wednesday). Two methane-burning BE-4 main engines, supplied by Jeff Bezos' space company Blue Origin, and four solid-fueled boosters from Northrop Grumman powered the rocket off the launch pad with nearly 3 million pounds of thrust.


Match The Pipes

Donuts

I once worked for a CEO who brought in a box of donuts every Saturday morning. If the donuts weren’t gone by Monday morning, we’d get a tongue lashing about how we didn’t care enough.

We set up a rota of employees who lived near the office to go in and throw away the donuts Sunday night.

Quick Quiz, No Tricks

How much water flows through a pipe of capacity 5?

Pipe with capacity 5 and a question mark on the right side

5. Told you—no tricks. How much water flows through a sequence of pipes, both with capacity 5?

Now it's 2 pipes connected

Same same. 5. (Yes, we can worry about turbulence & leakage at the boundary & all that. I’m making an analogy.)

What happens to the capacity of the system if we expand the capacity of the second pipe to 20?

The second pipe is now bigger

What comes out of the second pipe? 5. Why? Because that’s all that went into it. Doesn’t matter how big the second pipe is. We can make it a million & all that will trickle out is 5.

To complete the picture, what if we expand the first pipe?

Instead the first pipe is bigger

Capacity as measured at the end? 5. Again, doesn’t matter how much we expand the first pipe. We can’t put, as my Pappy would have said, 10 pounds of manure in a 5 pound bag. Well, Pappy wouldn’t have said “manure”, but we’re being professional here.
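
If it helps to see the arithmetic spelled out, here is a toy sketch. The stage names and capacities below are invented for illustration; the point is only that the flow out of a chain equals the smallest capacity in it, and that the narrowest pipe is the one worth expanding.

def system_throughput(capacities):
    # Flow out of a chain of pipes is capped by its narrowest pipe.
    return min(capacities)

stages = {"Product": 20, "Design": 5, "Engineering": 12, "Operations": 9}

print(system_throughput(stages.values()))  # 5: expanding any other pipe changes nothing
print(min(stages, key=stages.get))         # 'Design': the narrowest pipe, expand this one first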

Application

As an executive you accept a trade. You live a life of service, no longer able to do the detailed work. For this loss of control you get wider influence, greater rewards.

One benefit of an executive position is that you can see the whole process, in ways those at the coalface cannot.

Product → Design → Engineering → Operations & Sales & Marketing is exactly one of these multi-pipe situations. Each department’s incentives align with improving their own capacity. You are uniquely positioned to look across the whole chain. (You are also responsible for improving across the whole chain.)

What can you do?

  • Find the narrowest pipe. Look upstream & you’ll find half-done work piling up. You can do this. They can’t. They are all working to keep their pipe going.

  • Make sure there’s enough buffer to keep the pipe busy, but no more. This may require reducing the output of upstream pipes. You can do this. They can’t. No one can voluntarily reduce their output.

  • Expand the narrowest pipe. Don’t bother with the pipes that aren’t narrowest. At best expanding them won’t help. At worst you’ll flood the system. You can do this. They can’t. They are overwhelmed with daily work.

  • Reduce demand on the narrowest pipe. Can you route some of its work elsewhere, even if that elsewhere isn’t ideal? Again, you can do this. They can’t. The folks getting their work re-routed won’t be happy. Boo hoo.

  • Stop improvement & shift attention when a new pipe is narrowest. You can do this. They can’t. Your people got credit for improvements (if you did it right). They want more credit.

Mea Culpa

I read the “80-hour weeks or else” pronouncement, got mad, & clapped back.

Here’s why I got emotional. That kind of pressure doesn’t work to get more accomplished. There is a natural limit to how much work comes out of a system. Pressure stresses the system, it doesn’t change the system, not in any positive way. So pressure is low benefit. How about cost?

I’ve seen mental health breakdowns (& had one myself). I’ve seen divorces, kids scarred. I’ve lived through 2 colleague suicides. The cost of pressure is high & permanent. Also, & this is what pisses me off, the consequences accrue to those least able to push back.

So I applied Thinkie Complain & Propose & wrote the above.

Conclusion

Drucker introduces so many effective management skills. Deming too. Matching the pipes is just the start.

You’ve chosen a grave responsibility, with life-changing consequences for your decisions. I wish for you the toolbox to go with that responsibility.


Interested in executive coaching? Augmented coding creates a slew of new Match the Pipes problems to go with the ones we were already struggling with. You can get help.

Corporations aren't the reason your rent is too high

Photo by Jim.henderson via Wikimedia Commons

Donald Trump is choking off U.S. manufacturing with tariffs, replacing statistical agency personnel with apparatchiks who will manipulate data to make the President look good, and so on. Yet some progressives remain convinced that the key to winning back the country is to harness a wave of populist anger by attacking big corporations. I’m not sure I see the political logic there, but I guess I’m not much of an expert on politics.

Anyway, I’m sympathetic to the notion that monopoly power has increased in the U.S. economy since the turn of the century, and that this is making life harder for some Americans. But corporate power is simply not the cause of many of the problems regular Americans face — there are a lot of other things going on too. And because antitrust progressives insist on fitting every problem into the paradigm of corporate power, they end up believing a number of false things about the world. One example I’ve written about before is that of health insurers, whom antitrust progressives view as the chief architects of everything that’s wrong with the U.S. health system; in fact, these companies make almost no profits and are fairly efficient.

Another important example is the housing market. Overall, housing has not actually gotten more expensive throughout America; if you compare median personal income to the CPI measure for rent of primary residence, you’ll see that income has actually gone up slightly faster than rent since 1980:

But in the attractive cities where most people would like to live if they had the choice, rent has gone up much faster than in the decayed Rust Belt cities and small towns where most Americans would prefer not to live. The rental crisis is a local one, but it’s real.

Abundance liberals blame this problem on lack of housing supply, and support YIMBY policies to build more housing in cities. But although some progressives are coming around on this, many are strongly opposed to the abundance agenda. Instead, they want to blame high rents on powerful companies who buy up all the houses and then jack up prices.

A few years ago, this manifested as a panic about BlackRock buying up large amounts of the housing stock in America. This was a silly mistake; BlackRock doesn’t buy homes, except indirectly by investing in stocks called REITs. People were probably thinking of Blackstone, a much smaller asset manager with a similar name, which does buy up homes.

In addition to this silly mistake, the broader panic just wasn’t based on facts. In 2021, Derek Thompson did a great job of debunking the myth:

The U.S. has roughly 140 million housing units, a broad category that includes mansions, tiny townhouses, and apartments of all sizes. Of those 140 million units, about 80 million are stand-alone single-family homes. Of those 80 million, about 15 million are rental properties. Of those 15 million single-family rentals, institutional investors own about 300,000; most of the rest are owned by individual landlords. Of that 300,000, the real-estate rental company Invitation Homes—in which BlackRock is an investor—owns about 80,000. (To clear up a common confusion: The investment firm Blackstone, not BlackRock, established Invitation Homes. Don’t yell at me; I didn’t name them.)

Megacorps such as BlackRock, then, are not removing a large share of the market from individual ownership. Rental-home companies own less than half of one percent of all housing, even in states such as Texas, where they were actively buying up foreclosed properties after the Great Recession. Their recent buying has been small compared with the overall market.

The actual number of homes Blackstone (or BlackRock) was buying was tiny — far too tiny to affect rental prices in any significant way, except perhaps in a few very localized areas.

But somehow, despite its lack of connection to reality, the meme stuck around, and the size of the problem grew as the story was repeated around the internet. There are still people who think BlackRock is buying up much of the housing in America. In fact, even some right-wingers are convinced of this:

The exact form of the claim varies. Sometimes it’s 44% of the housing that the evil corporations are buying up, sometimes it’s just 20%. Sometimes it’s BlackRock alone that’s responsible, sometimes it’s the private equity industry:

But the meme remains false. Many news outlets have debunked it over the years. For example, Logan Mohtashami posted the following charts in Yahoo Finance in 2024:

The first of the two charts shows that institutional buyers — which includes private equity, BlackRock, etc. — own only a tiny sliver of the homes in America. The second chart shows that there was a corporate home-buying spree in 2022, but it never even hit 5% of home purchases at its peak — far lower than the rumors claim.

Kriston Capps also wrote a great debunking of the “corporate landlords” myth in 2024. Here was his chart:

Almost zero of the U.S. housing stock gets bought by owners who own more than 9 units. Corporate landlords just aren’t significant enough to be driving the rental crisis in America’s most desirable cities. (Of course, this doesn’t stop anticorporate types from mocking the very idea that high rents are caused by something other than market power.)

In fact, it gets even worse for the antitrust story here. It turns out that corporate landlords probably don’t even do what antitrust people think they do! The common story is that corporations buy up all the houses in an area, thus creating a local monopoly, and using that local monopoly to jack up rents — which in turn causes gentrification and pushes poor people and minorities out of the neighborhood.

Except Konhee Chang, an economics job market candidate, found evidence that corporate landlords actually make housing cheaper for lower-income folks, and lead to diversification instead of gentrification!

Using property-level data on tenants, home prices, rents, and acquisition timing, I show that increasing rental supply in American suburbs, where rentals are scarce and expensive relative to owner-occupied housing, reduces segregation by enabling lower-income, disproportionately non-White renters to move into neighborhoods where they otherwise could not afford to own. In response, nearby incumbent households are more likely to move out, perceiving renters as a disamenity. Large-scale landlords expand rental supply by converting owner-occupied homes into rentals, exploiting cost efficiencies from geographic concentration.

Chang found that corporate landlords drive down rents and slightly raise the price of buying a house:

Source: Chang (2024)

This is only to be expected, since what the corporate landlords are doing is buying up housing (which raises purchase prices because it increases demand) and converting it to rental units (which lowers rents because it increases supply).1

So as far as we can tell, corporate landlords — at least, right now — aren’t causing the harms that the antitrust progressives (and some right-wing pundits) claim. The cause of the housing crisis in desirable metro areas must lie elsewhere. The obvious culprit here is just supply limitations — i.e., land-use regulations and NIMBYism. And the obvious way to address that is the abundance agenda.2

For antitrust progressives, the problem with that conclusion is that it doesn’t place the blame on their class enemies. If corporate power isn’t the problem when it comes to high rents, then it means fewer opportunities for progressives to harness populist rage against the business class. In addition, antitrust progressives probably still believe that they can harness a wave of anticorporate populist sentiment to buoy Democrats back to victory. I would be very surprised if that strategy worked.

But in any case, the story that corporate landlords are making American housing unaffordable is simply false. It’s just another free-floating memetic myth that keeps getting in the way of our ability to solve our very real problems. And the zeal with which antitrust progressives have embraced and propagated that myth should make us a little more pessimistic about their ability to accomplish positive change in the current political economy.



1

This ought to be an open-and-shut case, but a progressive economist popped up to try to argue otherwise, seizing on the paper's finding that measured welfare goes down.

As I explained, the reason welfare goes down in the paper is that corporate landlords push down prices enough that poor Black and Hispanic people are able to move into previously richer, whiter neighborhoods. Chang, the author of the paper, assumes — probably correctly — that rich white homeowners do not like to live next to poor Black and Hispanic renters. And so the rich white homeowners lose utility from corporate landlords, because they no longer get to exclude people from their neighborhoods along racial and class lines!

Somehow, this point was lost on the progressive economist, who ended up defending white flight as a socially desirable thing.

2

Of course, even the abundance agenda won't be able to fully negate the local impacts of big increases in demand for housing in coastal cities. Demand for life in those cities is just extremely high. But supply increases will blunt those impacts, and create affordability farther from the centers of New York City, San Francisco, etc.

Links 8/13/25

Links for you. Science:

This system is critical to Americans’ health. We must defend it.
Real-world effectiveness and causal mediation study of BNT162b2 on long COVID risks in children and adolescents
In Patagonia, a Frog Makes a Comeback
Strategies and mechanisms of contact-dependent predation in bacteria
What U.S. science stands to lose without international graduate students and postdoctoral researchers
‘Things keep evolving into anteaters.’ Odd animals arose at least 12 separate times

Other:

How Axios rebranded conservative ideology as objectivity
Elon Musk’s Secret Army of Progressive Lobbyists. Why are the people being paid to fix the problems DOGE created also taking money from Musk?
Cheap Tricks for Hard Problems
On Epstein, Democrats have Mike Johnson running scared
Did Trump Just Confess He Learned about Virginia Giuffre before Jeffrey Epstein Recruited Someone Else at Mar-a-Lago?
The party is the problem
US to allow federal workers to promote religion in workplaces
Work time reduction via a 4-day workweek finds improvements in workers’ well-being
Imagining A Democratic Majority. What do they intend to do about all this?
Faculty Support of George Mason’s President Draws Federal Investigation. The Faculty Senate at George Mason University in Virginia adopted a resolution supporting the school’s president and his work related to diversity. The Justice Department says it will investigate. (cancel culture something something)
Trump Wants to Deport a Children’s Hospital Chaplain to Egypt, Where He ‘Faces Death’
Like D.C., New Jersey has its own unjust budget
Mexico’s Molar City Could Transform My Smile. Did I Want It To?
Scientist on green card detained for a week without explanation, lawyer says
Tech elites are turning AI into ChatGPTrump: The administration’s new AI plan gives Silicon Valley everything it wants — for a price.
Is Donald Trump A Pathological Liar? (pretty funny)
The Actual Conspiracy Theory Surrounding Trump and Epstein (if the FBI is sitting on something like this, I’m not sure even Trump can survive; also this is a very graphic description of some of the affidavits that are public)
Poll: New York Dems side with Mamdani on Israel, Netanyahu
Joe Rogan’s friends followed him to Texas. They all seem to hate it.
A “Modest” Number of Deaths. One member of the CDC’s “new” vaccine advisory committee made a surprising comment during a recent meeting.
The Pedro Pascal smear campaign
Toddler bites cobra to death after venomous 3ft-long snake coiled itself around his hands in India
GOP senator floats broken plan to buy off Americans mad about tariffs
Generative AI–Asahi Linux Documentation (lmfao)
D.C. Council struggles with final vote on nearly $22 billion budget
Mamdani Has Done Something Special. Progressives Need Black Voters to Make It Last.
Where Did All the Las Vegas Tippers Go? (Trump take Vegas)
Marine Veteran’s Wife Arrested While Seeking Her Green Card, Leaving Behind Toddler and Newborn
CEO Brags That He Gets “Extremely Excited” Firing People and Replacing Them With AI

The Absurdity of Trump’s D.C. Police Response

Since the national punditocracy is commenting on the Grave Implications of federalizing D.C.'s police (which are actually kinda grave!), I want to give a D.C. native and resident's brief view of what Trump is doing (like all such reflections, this will be personal, which is to say, myopic). Monday night, at least a dozen police officers were stationed along 18th Street NW, from P Street all the way to Willard Street. By stationed, I mean police officers' cars were blocking one lane of traffic (on a two-lane road) at every intersection.

In one of the safest places in the city. This is a waste of resources, and it seems designed to placate a bunch of pants-shitting suburbanites who don’t want to experience a single moment of annoyance or discomfort. Note that “pants-shitting suburbanites” includes a bunch of Republican staffers and apparatchiks who have moved to D.C. and expect it to be a lily white suburb. If you’re scared to walk down 18th, north of the circle on an August evening, I don’t know what to tell you. Maybe this city just isn’t for you.

Anyway, fuck Trump and fuck Republicans.

Wednesday assorted links

1. Brian Potter, The Origins of Efficiency, now available for pre-order.

2. Does China underconsume?

3. Biotech’s lost archive.

4. Those new service sector jobs?

5. Small business favoritism.


The Decline in Summer Teen Employment

Here is a look at the change in teen employment over time.

The graph below shows the employment-population ratio for teens (16 to 19 years old) since 1948.

The graph is Not Seasonally Adjusted (NSA), to show the seasonal hiring of teenagers during the summer.

A few observations:
1) Although teen employment has recovered somewhat since the Great Recession, it has been trending down overall. This is probably because more people are staying in school (a long-term positive for the economy).

2) Teen employment was significantly impacted in 2020 by the pandemic.


3) A smaller percentage of teenagers are obtaining summer employment. The seasonal spikes are smaller than in previous decades. 

The teen employment-population ratio was 35.2% in July 2025, down from 37.9% in July 2024. Excluding 2020 due to the pandemic, this is the lowest ratio since 2015, in the aftermath of the financial crisis.

The teen participation rate was 42.0% in July 2025, down from 43.6% the previous July. 

This has pushed the teen unemployment rate (NSA) up to 16.1% from 13.2% in July 2024.
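As a quick sanity check, these three rates hang together through the standard labor-market identity: employment-population ratio = participation rate × (1 − unemployment rate). A short Python sketch using the figures quoted above (the small gap for 2024 is rounding):

# Consistency check: employment-population ratio = participation rate * (1 - unemployment rate)
participation_2025, unemployment_2025 = 0.420, 0.161   # teen rates, July 2025 (NSA)
participation_2024, unemployment_2024 = 0.436, 0.132   # teen rates, July 2024 (NSA)

print(f"{participation_2025 * (1 - unemployment_2025):.1%}")  # ~35.2%, matching the reported ratio
print(f"{participation_2024 * (1 - unemployment_2024):.1%}")  # ~37.8%, vs. the reported 37.9% (rounding)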

So, a smaller percentage of teenagers are joining the labor force during the summer as compared to previous years. This could be because of fewer employment opportunities, or because teenagers are pursuing other activities during the summer.

The decline in teenage participation is one of the reasons the overall participation rate has declined (of course, retiring baby boomers are the main reason the overall participation rate has declined over the last 20+ years).

Thread Meeting

Hey, so did you ever finish your video series about Cassie and the caterpillar morph? I loved the first three, but never ... no, sorry, I get it, this isn't the place. Sorry! Sorry.

My talent podcast with Yonatan Ben Shimon

Here is YouTube, here is Spotify.  Yonatan is a very smart venture capitalist.  About 34 minutes, recommended!


Selling Out to Putin

Donald Trump, the would-be master dealmaker, is setting up Ukraine and the rest of us for either an appeasing sellout to Russian leader Vladimir Putin and the abandonment of an ally, or a failed summit with no purpose.

By agreeing to the summit at all, perhaps without the direct participation of Ukrainian President Volodymyr Zelenskyy, Trump has already given Putin a victory: a face-to-face summit that recognizes Russia's assumed place in the world as a Big Power ready to slice up the world map.

Even before its scheduled start Friday at an unpublicized site in Alaska, the promised meeting feels depressingly wrong on several grounds.

Any other U.S. president would have had diplomats from both sides do substantial groundwork before acceding to a summit. Trump skipped the work, thinking that all international agreements are simple, the result of personal relationships between leaders, and dependent on what he has heard from his friend Steve Witkoff, who has met five times with Putin. (Why is non-diplomat Witkoff doing this instead of Secretary of State Marco Rubio?)

Now Trump says the meeting "is really a feel-out meeting, a little bit," to find out Putin's thinking.

To anyone but Trump, Putin's thinking has been clear since even before he invaded Ukraine. He wants to swallow the former Soviet republic whole, or at least bite off as much as he can, replace its government with a friendly leader, and keep Ukraine from joining NATO or Europe — all with utter disdain for the loads of civilians he kills along the way.

To underscore his disdain for a ceasefire, Putin has escalated militarily even in the last two days. Does dealmaker Trump think there is anything else substantive to learn?

Trump and Putin

As we all have seen, Trump has a special affinity for Putin. The threatened drop-dead date for Putin to agree to a ceasefire has passed without any Trump action — exactly the kind of inaction he has accused his Democratic predecessors of.

No other U.S. president would seek to settle this war with “land swaps” that involve no swaps, but a giveaway of Ukrainian territory — especially without the approval of the Ukrainian leader at the table, who has specifically rejected them. No other U.S. president would walk away from Ukraine as an ally for democracy without prior assurances from Russia about security safeguards that Russia specifically rejects. In any case, the “swaps” seem one-way; Russia isn’t talking about withdrawing from designated, contested areas.

It smacks of Britain's Neville Chamberlain appeasing Adolf Hitler in 1938, which solved exactly nothing before the conflict spread into World War II.

In like fashion, Putin expects Trump to hand him what he apparently cannot win on the battlefield — control of about 20 percent of Ukraine, with no guarantee not to go back for more.

Erratic Acts Towards Friend and Foe

Trump is letting Putin manipulate him again, as columnist Max Boot writes in The Washington Post.

Boot argues that this premature summit came about because of Trump's obsession with winning a Nobel Peace Prize for ending a complicated war that he promised to stop within 24 hours of becoming president.

That obsession has led to a series of ever-changing stances on support for Ukraine, including attacks on Zelenskyy, and a frustration that finally led him to recognize that Russia, not Ukraine, is the problem here. But Russia is not stopping its bombs, missiles and drones from attacking Ukraine's civilian population.

Trump is using his tariffs and trade policies as a weapon to force India to stop buying Russian oil, but other countries, including China, are buying plenty of it.  Trump is back to claiming that “President Putin I believe wants to see peace.”

Land in Ukraine is not Trump's to carve up, and even if he were Zelenskyy's designated agent, Trump's idea of negotiation appears to be giving away the prize without any guarantee that this will indeed be the end. Russia broke previous agreements to stop taking Ukrainian land before launching its invasion three years ago.

As the Institute for the Study of War notes, Russia has been trying, and failing, to capture all of Donbas since 2014. Putin is simply trying to achieve at the negotiating table what his troops have not been able to achieve on the ground, argues Boot.

Trump seems to believe Putin’s assertions that if he gains full control over eastern Ukrainian sectors and Crimea, Russia will stop. With the battle lines frozen, the Wall Street Journal reports, a final end to the war would supposedly be negotiated later.

Does anyone besides Trump believe this? How about a summit in which Trump is sufficiently prepared to know what constitutes a good, achievable goal that both sides can accept?

Would Trump offer a similar deal to Israel? He hasn’t. Would he offer China half of Taiwan? No. Would he give South Korean land to the North Korean leader or any of the Middle East territories to Iran? Of course not.

We are left not only with a continuing war, but with a dithering U.S. president who abandons even his own promise of sanctions aimed at forcing Russia to stop. Offering Putin a summit as an international reward for nothing seems the opposite of any art of the deal.



2nd Look at Local Housing Markets in July

Today, in the Calculated Risk Real Estate Newsletter: 2nd Look at Local Housing Markets in July

A brief excerpt:
Tracking local data gives an early look at what happened the previous month and also reveals regional differences in both sales and inventory.

Closed sales in July were mostly for contracts signed in May and June, when mortgage rates, according to the Freddie Mac PMMS, averaged 6.82% in May and 6.82% in June (somewhat higher than for closed sales in June).

Closed Existing Home Sales

In July, sales in these early reporting markets were down 1.4% YoY. Last month, in June, these same markets were up 3.8% year-over-year Not Seasonally Adjusted (NSA).

Important: There were the same number of working days in July 2025 (22) as in July 2024 (22). So, the year-over-year change in the headline SA data will be similar to the NSA data.
...
Many more local markets to come!
There is much more in the article.

The New Pillars of European Research: How Nordic CROs Are Reinventing Clinical Trials

As clinical research becomes more global and data-driven, Europe continues to serve as a vital region for innovation and high-quality execution. While countries like Germany and France have traditionally dominated the pharmaceutical research landscape, a quiet transformation is happening further north. The Nordic region — particularly Sweden and Denmark — is becoming an increasingly strategic hub for Contract Research Organizations (CROs) that support complex clinical development programs across a range of therapeutic areas.

What’s behind the rise of these Northern European players, and how do they compare to other CROs in the broader European context?

Why Sponsors Are Rethinking Their CRO Strategy

The clinical trials ecosystem has grown dramatically more complex over the past decade. Sponsors now face rising costs, intricate regulatory demands, and increased pressure to generate high-quality data faster. In this environment, the role of a reliable clinical trials CRO has never been more critical.

A well-established CRO doesn’t just execute studies — it helps sponsors navigate protocol design, identify ideal patient populations, handle regulatory affairs, and ensure data integrity throughout the trial. With the growing need for regional specialization, many biopharma companies are turning to local CROs that understand the clinical and cultural landscape of specific countries.

Sweden and Denmark are becoming go-to locations for this kind of partnership — not just because of their healthcare infrastructure, but because of the way CROs in these nations have evolved.

What Sets Nordic CROs Apart?

CROs in the Nordic region operate with a unique blend of efficiency, precision, and innovation. Unlike some larger, globally spread CROs, these providers often offer tailored services, fast response times, and deep knowledge of local trial ecosystems.

Here’s what makes them stand out:

  • Strong national health registries and EHR systems for data-rich recruitment
  • High standards for regulatory compliance and ethics oversight
  • Culturally aligned with transparency, quality, and collaboration
  • Efficient public healthcare systems that facilitate fast trial setup
  • High levels of English proficiency and digital adoption

This combination of factors enables sponsors to launch and manage studies with fewer delays and higher confidence in data accuracy.

CRO in Sweden: A Quiet Giant of Research Excellence

Sweden has long been a leader in public health and medical technology, but in recent years it has also emerged as a major force in the CRO landscape. A CRO in Sweden offers sponsors the advantage of one of the world’s most structured and integrated healthcare systems. Patient data is securely accessible, enabling faster identification of eligible participants.

Moreover, Swedish CROs often work in close collaboration with academic institutions, government agencies, and ethics boards — making regulatory pathways smoother and communication more streamlined.

Denmark’s Clinical Research Strength Lies in Integration

Like Sweden, Denmark offers a unique ecosystem where CROs thrive by being close to national health data and a supportive innovation policy environment. Working with a CRO in Denmark gives sponsors access to cutting-edge technologies, AI-driven data platforms, and experienced clinical teams.

Danish CROs are particularly well-versed in cross-border trial coordination, thanks to Denmark’s strong participation in EU-wide research programs and high connectivity with neighboring countries.

Key Services Offered by Nordic CROs

While each CRO varies in scope and size, most top-tier providers in the Nordic region offer comprehensive support for all phases of clinical development.

Typical services include:

  • Clinical trial protocol design and feasibility assessments
  • Site identification and initiation
  • Patient recruitment and retention strategy
  • Regulatory submission and compliance management
  • Clinical monitoring and site audits
  • Data collection, management, and statistical analysis

Some providers also specialize in post-marketing studies and real-world evidence generation, thanks to robust national data infrastructure and favorable conditions for long-term follow-up.

Choosing the Right CRO for Your Clinical Research

Not all CROs are created equal, and sponsors must weigh factors such as therapeutic specialization, trial phase experience, and regulatory expertise. Working with a regional CRO — especially one based in Northern Europe — offers advantages beyond logistics. It enables a more agile, culturally attuned collaboration that’s often difficult to achieve with global giants.

When evaluating CROs in Sweden, Denmark, or any part of Europe, consider the following questions:

  • Do they have experience in your target therapeutic area?
  • Can they demonstrate successful trial execution in similar markets?
  • How robust is their data management and regulatory strategy?
  • Are they agile enough to adjust to protocol amendments or real-world challenges?

A high-performing CRO should act not just as a vendor, but as an extension of your clinical team — one that is committed to outcomes, patient safety, and regulatory success.

Northern Europe’s CRO Momentum Is Just Beginning

The growing appeal of Nordic CROs reflects a broader trend in decentralizing clinical research — away from traditional hubs and toward regions that offer smart infrastructure, streamlined processes, and scientific credibility. Sweden and Denmark, in particular, are proving that smaller countries can lead big initiatives when the environment supports it.

Sponsors seeking efficiency, precision, and meaningful partnerships in their research journey are increasingly looking north — and finding CROs ready to deliver.



Covid vaccines saved millions of lives, mostly of older people

Here's a recent estimate of lives saved by Covid vaccines.

Ioannidis JPA, Pezzullo AM, Cristiano A, Boccia S. Global Estimates of Lives and Life-Years Saved by COVID-19 Vaccination During 2020-2024. JAMA Health Forum. 2025;6(7):e252223. doi:10.1001/jamahealthforum.2025.2223 

"Question  What was the global impact of COVID-19 vaccinations on deaths during the 2020-2024 period?

Findings  This comparative effectiveness study found that COVID-19 vaccinations averted 2.5 million deaths during 2020-2024 (sensitivity range estimates, 1.4-4.0 million) and saved 15 million life-years (sensitivity range estimates, 7-24 million life-years). The estimated benefits had a steep age gradient.

Meaning  COVID-19 vaccinations had a substantial benefit on global mortality during 2020-2024, but this benefit was mostly limited to a minority of the population of older individuals."
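A quick back-of-the-envelope calculation, using only the headline numbers quoted above, illustrates that age gradient: dividing life-years saved by deaths averted gives roughly six life-years per averted death, far fewer than the decades a younger person would have ahead of them, which fits a benefit concentrated among the very old.

# Back-of-the-envelope check using the central estimates quoted above.
deaths_averted = 2.5e6      # deaths averted, 2020-2024
life_years_saved = 15e6     # life-years saved, 2020-2024

print(f"Life-years saved per death averted: {life_years_saved / deaths_averted:.0f}")  # ~6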

My Book "The Origins of Efficiency" is Now Available for Preorder

I’m happy to announce that my book, The Origins of Efficiency, is now (officially) available for preorder, and will be released on September 23rd.1 You can preorder on Amazon, Stripe, Barnes and Noble, or Bookshop.com.

Why I wrote this book

Seven years ago, I joined a well-funded, high-profile construction startup called Katerra. At the time I was a structural engineer with about ten years of experience, and over the course of my career I had become increasingly disillusioned with the inefficient state of the construction industry. Over and over again, engineers and architects were designing similar sorts of buildings, instead of building multiple copies of the same design. Contractors were putting up buildings on-site, by hand, in a way that didn't seem to have changed much in over a century.

Productivity statistics told much the same story: unlike productivity in other industries, like manufacturing or agriculture, construction productivity has been flat or declining for decades, and it’s only getting more expensive to build homes, apartments, and other buildings over time.

Katerra had a plan for changing all that. It would efficiently build its buildings in factories rather than on-site, doing for construction what Henry Ford had done for cars. Katerra already had several factories in operation or under construction when I joined, and planned to build even more. At the time, I thought Katerra’s approach was exactly what was needed to change the industry. It seemed obvious that factory-built construction would be more efficient, but that some combination of incentives, inertia, and risk aversion kept the industry locked into using old, inefficient methods of production. I thought that with a sufficient jolt, the industry could be disrupted and new, better ways of building could take root. Katerra had raised over $2 billion in venture capital to supply that jolt.

But within a little over three years, Katerra had burned through all its venture capital, gone through increasingly brutal rounds of layoffs (one of which cut the engineering team I was leading by around 75%), and declared bankruptcy. I survived the layoffs, but had left the company several months before its final end. When Katerra closed its doors, I was working at a new engineering job, back to designing the same sort of buildings, built using the same old methods I’d used for most of my career.

Since then, the construction industry hasn’t improved its track record of productivity. In fact, post-Covid, construction costs had one of their steepest rises in history.

As Katerra was collapsing, I became obsessed with understanding why things had gone so wrong, and what it would take to actually make the construction industry more efficient. I started writing Construction Physics just after I left the company, as a way to work through an understanding of how the construction industry worked and why Katerra had failed to change it.

Some of Katerra’s failure can be blamed on operational missteps: trying to develop too many products at once before finding product-market fit, failing to integrate acquired companies, and so on. But it also gradually became clear that Katerra’s fundamental thesis — that construction would be much cheaper and more efficient if it was done in factories — was either incorrect, or woefully incomplete. Time and again at Katerra, the engineering team would design some new factory-produced building, only for the costs to come back too high. Time and again executives would complain how hard it was for Katerra, with its expensive factories and high overheads, to compete against “Bubba and his truck,” who could put up buildings using little more than hand tools. Rather than enabling radically cheaper construction, as Ford’s assembly line had done for the Model T, Katerra’s factories were making it hard to even compete in the existing market.

It also became clear that Katerra was far from the first company to try and fail to transform the industry using factory-built construction. I learned that such attempts stretch back decades: from the Lustron Corporation in the 1940s, to Stirling Homex and the other Operation Breakthrough companies in the 1960s, to Veev in the 2020s. Often these companies failed as quickly as Katerra did. Other times, they managed to carve out successful businesses: National Homes successfully built prefab homes for nearly 50 years. But no one had managed to use prefab to become the Henry Ford of housing.

And the problem didn’t seem limited to the US. While prefab, factory-built construction is often much more popular in other countries than it is in the US, nowhere did I find it ushering in dramatic cost reductions. Toyota, arguably one of the best manufacturers in the world, formed a prefab housing company in Japan in the 1970s to apply its manufacturing expertise to homebuilding. But while in car manufacturing Toyota was so efficient and successful that it eventually set the global standard, Toyota remains a niche player in homebuilding, and its homes are surprisingly expensive compared to site-built construction.2 In Sweden, nearly all single-family homes are factory-built, but their construction costs are higher than conventionally-built homes in the US.

It eventually became clear to me that we didn’t really understand what it took to make some process more efficient. Those of us hoping to use factory-built construction to drive down costs were acting as a sort of cargo cult — duplicating what we saw work elsewhere, without truly understanding the mechanisms that made the process successful. To understand why the construction industry was so resistant to efficiency improvements, and why it never seemed to get cheaper to construct buildings, I needed to understand how, specifically, things get cheaper to produce over time.

This book is the fruit of that effort.

Over the course of roughly a year and a half, I looked at dozens of production processes — from cigarette making to package delivery, from steel production to tomato harvesting — to see how they improved over time. I read about the history of virtually every industrial improvement system: from scientific management to Lean production, from DFMA to SMED, from Taguchi methods to statistical process control. I read the work of economic historians who had studied industrial progress like Joel Mokyr, Alfred Chandler, David Hounshell, Gregory Clark, Sam Hollander, and John Enos. I bought lots and lots of books. The bibliography of the book has around 600 separate references, and on its own would be one of the longest pieces of text I’ve ever produced. (Writing this book is what forced me to start using proper reference management software.)

I initially thought that turning all this reading into a book would be essentially the same process as writing the newsletter. At the time, my newsletters averaged around 2,000 words each, and I estimated that the book would be around 100,000 words, or require around 50 newsletters' worth of work. But I was sorely mistaken. It turns out that it's comparatively simple to write 50 standalone pieces of text that collectively add up to the length of a book. Tying many disparate ideas into a single, coherent thesis, and then explaining that thesis via separate chapters that draw on and reference each other, is much more complex. (This is why you see so many books that are loose collections of thematically similar chapters: it's much easier to write that sort of book.)

Over the course of writing the newsletter I had developed a writing process that worked very well, but this process was ill-suited to produce a book-length work. Once I signed the contract with Stripe Press, I thought I'd immediately be able to start cranking out chapters, but for roughly six months I struggled to produce anything useful (there were multiple times when I sent a draft chapter to someone, then quickly followed up with "Don't read that, it's trash," and started over).

Writing the newsletter is essentially a linear process: I read everything I can on a topic, take note of the especially interesting or relevant facts and information, and then come up with a structure for an essay that can incorporate it all. Once this structure is in hand, it's simply a matter of expanding the basic structure into a draft, and a draft into the final version. For the book, I eventually figured out that I needed to turn this linear process into a cyclical one. Instead of going from reading to structure to draft to final version, I would read, come up with some tentative ideas for a structure, then use those ideas to guide further reading, repeating the process until I felt like things had “clicked” and my structure was robust. Only then could I start fleshing out the basic ideas into a draft.

After several months of this, I was able to condense down the mass of information I had collected into a relatively simple framework: a small number of specific things you can do to make a process more efficient.

The book is 10 chapters, plus an introduction and conclusion. Chapter 1 explains this basic framework: the structure of a production process and the options you have for making it more efficient. Chapters 2 through 6 explain each one of these strategies, drilling into the details and nuances of how they work, when they don’t work, and examples of how they’ve reduced costs or improved operations in actual production processes. Chapters 7, 8, and 9 explain how these various strategies can work together and reinforce each other, collectively creating enormous efficiency improvements. Chapter 10 explains what happens when these paths to efficiency improvement are blocked, using the construction industry as an example.

I’m immensely proud of this book; if it sounds interesting or valuable to you, I hope you’ll pre-order it.

1

It’s technically been listed on Amazon for months, but only recently have the cover and everything else been finalized.

Welcome to the Cosmopolis

Tomorrow, Wednesday 13th, at 10 AM PT (1700 UTC) I'll be doing a talk I'm kinda excited about, aiming to bring together a bunch of questions and themes I've been circling all summer, and in a looser way, for several years, across all my projects, both consulting and personal. It's a level of synthesis I find easiest to do in big, unwieldy talks (I should call these contraption talks). I last took a shot at synthesizing this particular set of themes at an event in Singapore in 2023 (slides).

My rather ponderous title is Cosmopolis, Metropolis, Nation-State: 3 protocols for articulating civilizational memory. This biggish essay is a prequel for the talk. You don’t have to read (or llmread) it, but you’ll probably get more out of the talk if you do.

Here is the YouTube livestream link (also embedded below). You can also watch via Substack live.

Part of the purpose of this talk is to preview the Protocol Symposium I’m helping organize the week of September 12-19 and drum up participation. It’s a week-long fully online event comprising a Protocol School, a Protocol Foundations Workshop, and a hackathon/coworking track for protocolish projects. The registration deadline for both is August 22 (10 days from today). The symposium is free, but capacity is limited so sign up early.

For me personally, this talk is a first attempt at an ambitious synthesis challenge I've set myself — describing an achievable class of cosmopolitan futures I'd actually want to live in. Futures that comprehend both emerging computing technologies and the big, ongoing, decidedly non-cosmopolitan shifts in the global political landscape.

I’ve been writing about these topics for several years now, especially in the Mediocre Computing series (for the tech piece) and the Protocol Narratives series (for the geopolitics piece). These themes also show up elsewhere in my writing and speechifying.

I think I have the broad contours of the synthesis marked out now, so I feel ready to do a first draft talk. What in mechanical engineering is called a test assembly. This essay is by way of being a mise en place for cooking up the talk (I still have to do my slides).

Central to the synthesis is the idea of a cosmopolis.

The Cosmopolis

The core idea in the synthesis is that major new technologies always induce sprawling technologically mediated geographies — or cosmopolises — that don’t map cleanly to conventional geographic units of governance. Instead, they act as a new kind of “soil” for new kinds of societies, driving unbundling and rebundling of geographic units of governance. This is the reason we speak of “land grabs” around new technologies.

The cosmopolis, it turns out, is not just a powerful unit of societal analysis, but a powerful unit of constitutive synthesis, categorically related to the other two geographic units in my title, the nation-state and the metropolis. You cannot set out to build a cosmopolis the same way you can set out to build a nation-state or metropolis, as a missionary project, but you can act intentionally in ways that help it cohere and emerge.

  • A nation-state is a territorially defined political unit.

  • A metropolis is a dense agglomeration of physical network nodes that coincide within a tight geography, in the form of a converged physical supernode that requires high human density to function.

  • A cosmopolis is the geographic field of diffusion of a set of behaviors associated with a particular powerful technology.

Territorial logic, nodal logic, and behavioral logic all induce societal protocols of various sorts, but it is my contention that of the three kinds of entities, the cosmopolis, by virtue of being the most ethereal, tends to co-evolve with technology the most. As a result, a cosmopolis is typically also a technological frontier of some sort, digesting the fruits of ongoing technological evolution into new and persistent civilizational layers that transform political geography.

Cosmopolises are constrained, but not defined, by physical technologies of connection, for both bits and atoms, across space and time. They coincide with nation-states and metropolises at various points, but cannot be identified with either category.

Depending on the generality, diffusivity, and maturity of the catalytic technology, a cosmopolis might span no more than a regional pocket (such as "Silicon Valley" or "Shenzhen" in their respective pre-global early decades), extend across continents (such as American and Chinese style internet ecologies today), or cover the entire planet (such as the global air travel system and the "frequent flyer cosmopolis" it might be said to induce).

We systematically underestimate the power of the cosmopolis as a civilizational unit because it is an emergent entity, lacking the intentional patterns of top-down political organization that define nation-states and metropolises. Most cosmopolises lack even the rudimentary affordance of a name we can use to point to them.

This does not mean, however, that cosmopolises are undesigned wildernesses. While they do comprehend and accommodate wildness in ways that nation states and metropolises struggle to, they also embody the strongest forms of civilized order.

The element of design enters a cosmopolis as a functionally narrow but composable unit of behavioral logic, the protocol. The architecture of a cosmopolis is a result of a vast number of protocol-design decisions made around a powerful new core technology.

Protocols exist within the territorial logic of nation states and network logic of metropolises too, of course, but they are constitutive in the case of cosmopolises.

When you take away the protocols of a cosmopolis, nothing remains. This is both its greatest strength, and greatest weakness. To a greater extent than competing constructs, it is made up purely of ideas, not materialities. A natural harbor region with pleasant weather and a river running through it is always a proto-city. A region bound by natural geographic borders is always a proto political-territory. But a book without a cosmopolis of literacy is just kindling, or a doorstop.

The computer, and the computational cosmopolis it induces, is merely the most recent cosmopolitical technology. We can identify similar structures throughout history. Arguably, even the Bronze Age ought to be considered the cosmopolis of tin, since it relied on a globalized trade in tin, and associated metallurgical knowledge.

But I want to focus on a particular class of cosmopolises that includes the emerging computational one — those that reshape civilizational memories. We might think of these as first-class cosmopolises. At once the most ethereal and fragile on the one hand, and the most powerful and inexorable on the other.

In the past (particularly in my Breaking Smart essays), I have referred to technologies that induce cosmopolises as soft technologies, but now I prefer the term memory technologies. While all technologies leave traces behind that can be “read” as texts, not all technologies induce intentional modes of remembering and knowing, or new modes of consciousness (new modes of unconsciousness though, are a dime a dozen in every technological era).

Let’s start with a particularly important one: the cosmopolis of print.

The Cosmopolis of Print

The story of print, as it concerns us, is told in one of the big reads that strongly influenced my evolving thesis over the summer: Elizabeth Eisenstein's The Printing Revolution in Early Modern Europe (a pick in our book club this year), which I've been recommending to everyone.

The book clearly draws out the difference between the technology of the printing press, and the protocols of print culture that took root in the abstract soil it created. The emergent European effects of these protocols — the Renaissance, the Reformation, and the Enlightenment — soon reshaped the entire world.

Based on Eisenstein’s model, the impact of the Gutenberg press can be separated into two components.

First, we have the direct impact, in the form of the emergence of a small, specialized industry (with associated trades and crafts) to build printing presses and keep them supplied with consumables such as lead, ink and paper.

Second, we have the much larger space of print-based protocols that reshaped the rest of society according to the logic of print (what McLuhan called the “Gutenberg Galaxy,” a rather cosmopolitical name; Eisenstein’s book is a more scholarly update to McLuhan’s book). For instance, extensive scholarly travel between sites of manuscript preservation (often monasteries) gave way to a culture of larger personal and institutional libraries and extensive correspondence — the so-called “Republic of Letters” corner of the larger cosmopolis of print. Antiquities-style manuscript trading and scribal copying protocols gave way to distribution and book stocking/selling protocols.

At a larger scale of emergence, we got the awkwardly chimerical constitutive notion of a nation-state, the nation bit being tied to literal soil, in the sons-of-soil (or autochthonous) sense, and the state bit being tied to the more conceptual soil of print culture (in a mode that was made explicit in Benedict Anderson’s notion of imagined communities bound together by printed artifacts like newspapers). Whether through scriptures or constitutions, the post-Westphalian political landscape was populated by entities defined by the creative tension between soil and type. From this tension, early-modern Europe emerged as the core of the cosmopolis of print by 1700.

We can understand the historical process of getting there as normalization through protocolization.

Protocolization as Normalization

The notion of a cosmopolis introduces, somewhat surreptitiously, the idea that an extant technological field is a historically and culturally contingent rather than necessary factor (an idea loosely in the spirit of Yuk Hui’s notions of cosmotechnics and cosmopolitics). More than one distinctive cosmopolis may emerge in response to a technological stimulus, and the set of cosmopolises may not be either mutually exclusive or collectively exhaustive in relation to either the planet or the political world. A cosmopolis is not a planetarity. It is a smaller unit of analysis, and a legibly embodied geographic reality in a way a planetarity is not. We can sketch out cosmopolises on maps.

If you’ve been following my writing for longer than a decade, you’ll note that this is a significant update to my older thinking and writing on this theme.1

In brief, new technologies induce new normals through protocolization of what is initially a weird and scary sort of monstrousness irrupting across a frontier. Beyond that frontier lies a new kind of territory, a new kind of “soil” on which societies can be built. Protocols are the engines of what I called manufactured normalcy a decade ago, and cosmopolises correspond loosely to what I called Manufactured Normalcy Fields in that essay (though cosmopolis is both a less clumsy and more comprehensive term).

While a major new technology is being protocolized into normalcy, we experience Disturbed Realities. When the process is complete, we get the soil of a new cosmopolis beneath our feet, which we take for granted, and barely notice enough to name. As I argued in the linked article and my 2023 talk, through this process Labatutian unnameables transform into Lovecraftian horrors, and Lovecraftian horrors into Ballardian banalities.

The protocol-generated cosmopolis is the constitutive form of the entity that David Foster Wallace tried to point to in his classic commencement address This is Water. Protocols are how you make a new kind of water out of a new technology.

Print “created” normal modernity by two means.

First, it created a new “soil” for elite governance, shifting political power from land and artisanal crafts based on oral culture to scalable knowledge-based behaviors that required print culture to exist and persist.

Second, it introduced what Eisenstein calls “fixity” (or “hardness” in our emerging language of protocol-speak) into historical memory, stabilizing, refining, and canonizing new understandings of past, present, and future.

The Articulation of Memory

Unlike the cosmopolis of gunpowder, or the cosmopolis of sail (which emerged in the same period), the cosmopolis of print was based on a powerful memory technology. As such, it subsumed the others, and overrode their logics where they were in conflict.

In scribal culture, it is doubtful whether the pen was truly mightier than the sword, but in the print era, there is no doubt — the printing press was definitely mightier than either the gun or the ship (or the converged mercantilist metropolis-based mobile supernode that combined both, the gunship).

Print refactored how we situate ourselves in the world, in space and time. Those who understood their place in the world through print constituted the cosmopolis of print.

Print led to a new kind of articulation of civilizational memory, where by articulation I mean the patterns of joinery and relative mobility in a mechanistically construed assemblage (aka "contraption"!) that constitutes our understanding of the world and modes of agency within it. It is no accident that memory has emerged as a central topic in protocols research over the past three years (there is a Special Interest Group on memory within the Summer of Protocols program).

Civilizational memory can be understood in terms of three entangled strands — affective, declarative, and procedural. These correspond, roughly, to myth-based, event-based, and econo-legal senses of history.

The US constitution, for instance, embodies all three elements in a single print-based artifact.

  1. There is the sacralized mythology (bordering on theology, complete with a reverential hermeneutic tradition) of what the Founding Fathers believed, desired, or intended. This is affective memory.

  2. There is the creation of the constitution as a genesis event that occurred in a particular place and time (a metropolitan region — Philadelphia, 1787), with associated discoverable and disputable historical facts. This is declarative memory.

  3. And finally, there are the 250 years of legal tradition continuously refining and enacting the idea of America in behavioral terms, through constitutional (and constitutive) understandings of matters such as rights, contracts, currencies, traffic laws, standards for weights and measures, and so on. This is procedural memory.

While all three elements are present in all units of societal organization, to a first approximation, nation-states organize affective memories into a vibe-based territorial logic, metropolises organize declarative memories into capability based physical network supernodes that are dense population centers (there’s a reason a lot of factual history, as opposed to mythology, plays out in major cities), and cosmopolises organize procedural memories into widely diffused infrastructures.

This last category is particularly important to our understanding of protocolization as normalization, since unlike the affective and declarative strands of historical memory, which typically remain in the foreground, procedural strands have a habit of disappearing from view into the background upon maturity, turning into David Foster Wallace’s “water” (my “soil”).

The settling and ossification of procedural memory, into Whitehead’s “operations we can perform without thinking about them,” arguably constitutes the primary dimension of normalization through protocolization. Declarative and affective memories comprise the shifting and transient contents of the protocols of a cosmopolitical dispensation. To the extent they persist at all, they tend to turn into procedural scaffolding.

Territories and cities can forget. Only the cosmopolis truly remembers.

Historical memories either die as fading oozes of affectively and noisily recalled events associated with cryptic, monumental ruins, or turn into the scaffoldings of living realities that we struggle to recognize as history at all. It becomes harder with time to recall the lessons of the fall of the Roman Empire with sufficient gravitas. Affective and declarative memories transform into some mix of camp, vibes, nostalgia, and texts only historians can read. But procedural memories can remain alive and potent in ways we are barely aware of.

The most important parts of the Roman Empire, perhaps, did not decline and fall at all, but turned into an invisible cosmopolis all around us, buried under newer cosmopolitical layers.

Memory as Engineered Argument

One of the more significant conceptual advances we’ve made in three years through the Summer of Protocols program is a deceptively simple definition: a protocol is an engineered argument.

What we understand as normalcy is not an equilibrium so much as an ongoing argument we consciously manage, to navigate endemic creative tensions in a civilizational order. Among the most important such tensions are the ones that arise from “arguments” among older and newer cosmopolitical layers, which take the form of arguments between modes of memory. Tensions not between the old and the new, but between old and new ways of knowing and remembering. New cosmopolises do not resolve inherited historical tensions so much as re-enshrine them in new forms.

Eisenstein’s “fixity of print” is one example of such re-enshrinement of tensions across all three strands of historical memory.

The affective strand of what was still understood as a mythic “Christendom” (defined in relation to Islam through the Crusades) was fixed by re-enshrining the Bible in a set of printed editions corresponding to the political map of 16th/17th century religious conflict. The ecology of the printed Bible mirrored — and to some extent helped ossify into persistent modern borders — the contours of that conflict.

The event-based strand acquired fixity through a fixed and increasingly chronologically accurate relationship to the political events of classical antiquity, establishing a managed tension between the Christian and pagan heritages of Europe.

And finally, the procedural strand acquired fixity as economic-legal traditions began to stabilize through improvements in producing printed currencies, contracts, and governance documents. These re-enshrined governance tensions as old as Greece and Rome in new forms.

The resulting cosmopolis of print, which took shape through wave after wave of transformative macro-level changes, is what we model as the cumulative result of three overlapping epochal events — the Renaissance, the Reformation, and the Enlightenment. A set of newly re-enshrined historical arguments.

The accumulation of procedural memories in the form of the protocols of print created what we understand as "modernity," roughly between 1770 and 1825. I pick that period because two modern technologies — fossil power and interchangeable parts manufacturing — began creating the next cosmopolitical layer in that period. It is important to note, though, that these newer technologies did not induce first-class waves of protocolization. Though they transformed the planet to an extent that we now recognize a new planetary geological era, the Anthropocene, they did not create new articulations of civilizational memory. Rather, they worked within the articulations wrought by print culture.

If there is to be a new cosmopolitan consciousness for the Anthropocene, it will be induced by the protocols of computation, not print.

Cosmopolises and Consciousness

It is no accident that the same word, enlightenment, is used for both a historical transformation of Europe’s idea of itself, and the sorts of transformations of interiority wrought by spiritual practices. First-class cosmopolises emerge when a technology is potent enough to induce highly contagious transformations both in the material condition of the world and in the minds of the humans inhabiting it. The fact of a new catalytic technology being a memory technology is the primary “tell” of a first-class cosmopolis in the making.

The archetypal kind of technology that has this sort of interior-exterior effect is language. So we should not be surprised to find, in inventorying the cosmopolitical layers of world history, that a large proportion can be characterized in linguistic terms. Languages we typically think of as "classical," with a sprawling footprint across the elite classes of multiple political regions over a period of time, on a scale larger than the largest units of stable political integration, are among the most obvious ones. The cosmopolises associated with Greek, Latin, Sanskrit, classical Chinese, Persian, and Arabic are prominent examples. The presence of a lively cosmopolis is often thought of as "soft power" projecting from a "hard power," but this is an impoverished understanding of the phenomenon.

I learned the term “cosmopolis” from a book about Southeast Asia that characterized the early history of the region as being part of the “Sanskrit cosmopolis.” In the terms of reference we are developing here, it was a historical articulation of memory. Affective memories still linger in traditions of performance of epics like the Ramayana and Mahabharata. Declarative memories have now hardened into a fairly accurate historiography of the pre-Islamic era, dominated by Hindu and Buddhist kingdoms. And procedural memories take the form of lingering habits of governance loosely clustered around the Mandala concept. And all three feature in still-live tensions.

It is worth noting that the Indic strand of Southeast Asian culture is in fact dominated by a non-Sanskritic language (Tamil). That we still refer to it as the Sanskrit cosmopolis underlines the extent to which cosmopolitanism is elite-coded (in this case, associated with the diffusion of the Sanskrit-based procedural memories of the Brahmin political-administration caste).

The Sanskrit cosmopolis is now not so much dead as buried alive-and-asleep beneath newer cosmopolitical layers — Islamic, European-Colonial, American, modern Chinese. And it sits above older native cosmopolitical cultures, such as the semi-legendary memories of Sundaland, and associated animistic articulations of memory (the focus of a set of explorations earlier this year, South Beast Asia).

A cosmopolis then, is not just a consciousness, it is an accretive and immortal sort of consciousness. To emerge, a cosmopolis must negotiate its presence with older cosmopolitical layers, shaping and being shaped by them, resulting in a re-enshrinement of the past.

And once installed, cosmopolises can never be buried or killed, since they too turn into legacy articulations of memory that can never entirely be reconstituted within new civilizational logics. Only re-enshrined.

To be a cosmopolitan then, is to identify with, and primarily situate oneself in, the accretive, immortal geology of a layered, evolving conceptual territory whose logic inexorably overwhelms the logic of literal soil over historical time scales.

But over shorter time-scales, the logic of soil matters.

Accretion vs. Exclusion

Given enough time, the behavior-based ordering logic of the cosmopolis transcends both the territorial logic of the nation-state and the structural logic of a particular converged site of intensive knowledge work — the metropolis.

The nation-state is the most recent of a parade of historical forms based on exclusionary control of territory. The present form is characterized by a particular tension between territories and texts that is relatively modern, but shares with its predecessors the logic of land and borders as the dominant organizing principle.

Nation-states obviously demand exclusive control of a territory, organized around dominantly autochthonic identities (the idea of “heritage Americans” is an example of an age-old pattern). Less obviously, they also demand exclusive control of historical memories, constructing them in particular exclusionary ways, erasing inconvenient threads of memory (particularly affective and declarative memory, though procedural memories, despite being hardier, are not invulnerable).

It should come as no surprise then, that the nation-state, throughout its brief existence of 400 years, has consistently defined itself in opposition to the much older, accretive logic of the cosmopolis. The central dilemma of the nation-statist is the curious durability of the cosmopolitical, despite its seemingly more fragile foundations in concepts, rather than the atoms that constitute blood and soil.

That memes can eat genes has been the perennial existential crisis of the territorialist.

The correct view of the metropolis is rather more surprising, and for a long time, I confused myself by identifying the metropolis as the site of the cosmopolitical, rather than a competing class of entity.

Large, diverse, pluralist cities, as is widely recognized, have historically tended to outlast entire eras of exclusionary territorial logic. The city outlived the feudal and imperial eras, and seems set to outlive the industrial nation-state era. Cities like Istanbul have endured through entirely distinct civilizational phases over thousands of years.

The reason is obvious. Even though important cities may be constrained by territorial factors (access to water, grain, energy), they are not defined by them. They are defined by the knowledge capital they aggregate, activate, and harness, in the form of dense agglomerations of human minds. The more enduring metropolises typically sit on fault lines between adjacent but not-quite-compatible territorial logics, playing them off against each other.

The relationship between cities and protocols is not immediately obvious. Certainly, like the cosmopolis, the metropolis evolves as an accretion of living procedural memories. Affective and declarative memories of Christian and pagan-Greek periods may be obscured today, but the procedural memories of both lurk beneath Islamic Istanbul. At a more banal level, infrastructural technologies — roads, waterways, bridges, plumbing — typically endlessly rehearse their histories in tightly confined metropolitan spaces, since there’s typically not enough room to reimagine them from first principles.

What defines a metropolis, in fact, is not the territorial bounding box it exists in, at the pleasure of larger containing entities, but the many overlapping protocols of which it constitutes a converged physical nexus, or supernode. The city has historically been a supernode of so many important networks — education, trade, finance, military power — that one may be forgiven for identifying a cosmopolis with the set of metropolitan regions that happen to dominate it.

Perhaps most importantly, the metropolis, like the cosmopolis, typically also exists in a state of perpetual partial tension with the territorially defined host entities that contain it (most recently, the nation-state).

It is not surprising then, that the term cosmopolitan is often used as a synonym for metropolitan. It is tempting to conclude that to first order, they are two sides of the same reality.

They are not.

A cosmopolis, construed as a behavioral logic extending over a region of space-time, is not co-extensive with the metropolitan cores that might make up its backbone. Indeed, the more advanced the technological basis, the more the two diverge geographically. Computer programmers may aggregate in cities, but computers aggregate in datacenters in the technological hinterland. Computer chips may be designed in American cities, but they are fabricated in Asian suburbs. Cities (and travel between them) may account for a large fraction of energy use, but energy is produced in sparsely staffed refineries in low-population regions, and increasingly, in remote regions boasting plentiful wind or sunshine.

Changes in shipping technology can create and destroy entire port-based cities (containerization, famously, destroyed the breakbulk-port-based economies of many large cities, by moving the heavily automated operations to neighboring smaller cities with cheaper land).

The cosmopolis and the metropolis then, are at best occasional allies, rather than co-extensive realities. To the extent they stand united against the logic of the nation-state and its territorially based ancestors, it is a case of the enemy of my enemy is my friend, rather than a full convergence of interests.

For a great many ideologues and idealists of modernity, this is an uncomfortable idea. To be an architect of computational postmodernity, for many, is to be an architect of the convivial, human-centric city of the future. But in the battle for the post-nation-state future, the metropolis and the cosmopolis are destined to be rivals, competing to fill the growing twin vacuums of collapsing state capacity and nationalism narratives.

Already, some of the signs are clear. Metropolitan areas increasingly feature a deep commitment to “IRL” culture, nostalgic forms of aesthetic and cultural localism at odds with the emerging economics of dense living, wars over housing that rehearse geopolitical patterns of border-based territorial conflicts on a fractally smaller scale, and a worrying fondness for archaic “heritage” infrastructural forms at odds with computation-based cosmopolitical logics.

The Computational Schism

Which brings me to the idea I will be talking about on Wednesday (and possibly writing up as a follow-up essay). The Computational Cosmopolis.

What I hope to do is unpack the emerging logic of global organization being shaped by computation as it nears the end of the era of easy Moore’s Law gains and low energy usage.

We are headed towards a world where computers run everything, consume an alarming (but not unreasonable, when calibrated against biological life) amount of energy, and perhaps most importantly, radically re-articulate historical memory.

When I wrote the Breaking Smart essays a decade ago, many of the most important frontiers of computing were hard to reconnoiter. Blockchains were young, fragile, and seemed like toys. Deep learning was limited to surreal dream-like imagery, translation, and gobbledygook generative text. It had not yet mastered language or solved Go. Self-driving cars seemed so near, yet so far (I took on a long bet arguing that they would not be mainstream in ten years, and won — by a whisker).

The hypothesis that informed that essay collection, “software is eating the world,” clearly needs updating.

Among other things, we’ve realized in the last decade that it’s not about the software but the data. This was a point that was already dimly recognized in 2015. Google researchers wrote an influential paper as early as 2009, The Unreasonable Effectiveness of Data, prefiguring the future we’re now in. The main argument was that more data and simpler algorithms beat less data and more complex algorithms.

With deep learning, we’re realizing the extraordinary extent to which that’s true. Meta-true in fact, since code is now just another unstructured data type, to be gathered in by the exabyte, processed with industrial-scale model training protocols, and provisioned as models with billions (soon to be trillions) of parameters.

The main challenge with scaling AI today is orchestrating data movements at a vast scale — think gigawatt datacenter scale. Multiplication of matrices of near-cosmic dimensions (as is perhaps appropriate for cosmotechnological infrastructures).

The interplay of computation and data is subtler in the world of blockchains, but equally unmistakeable. While AI deals with data along the quantitative axis, measured in low-precision exabytes, blockchains deal with data along a qualitative axis, working with properties such as immutability, auditability, verifiability, censorship resistance, Sybil resistance, unbreakable (including quantum-resistant) encryption, long-term (decades to centuries) persistence, and provenance.

If the authors of the 2009 Google paper had thought to cover blockchains (an unreasonable expectation, since the Bitcoin whitepaper was published in October 2008), they might have speculated about the “unreasonable economics of validated data.”

The main problem with scaling blockchains is scaling their historical memory capacity without weighing them down with the enormous computational demands of validating it. Even with the gradual ascendancy of proof-of-stake blockchains and the increasingly favorable comparison with the energy usage of AI compute, blockchains remain energy-hungry beasts. And a big part of that hunger has to do with data handling and memory preservation rather than proof-of-work computations.

The Bitcoin world (which has remained attached to PoW) has scaled back its ambitions, contenting itself with the idea of serving as a post-nation-state payment-rail and store-of-value infrastructure. The Ethereum world will soon embark on what is perhaps the most difficult chapter of its roadmap — what Vitalik Buterin has dubbed the “Purge” phase — advances meant to lower the burden of maintaining the increasingly voluminous historical records in the immutable ledger.

And if you think blockchains and AI represent the peak of the resource demands from the emerging computational cosmopolis, think again. Robotics, sensor networks, mixed reality infrastructures, and the Internet of Things (which may have disappeared from the headlines but continues to grow robustly in the background) will likely increase the energy demands of computation by at least another factor of 2-3.

We’re just beginning to understand that these frontiers of computation too, are primarily defined and shaped by their data-management demands. A robot is primarily a locus of edge-learning with a local firehose of camera and sensor data, and a memory-point-of-view. Every IoT device pumps streams of data in and out of the world’s compute and memory infrastructure, articulating, perhaps for the first time in history, something like a proprioceptive memory in the body politic of the computational cosmopolis. Climate technologies, once we are past the current phase of head-in-sand denialism, will add their own data firehoses to the mix.

Perhaps what we are witnessing is the birth of a cosmopolis characterized by a vastly deeper and more comprehensive kind of fixity than that introduced by print technology. An articulation of civilizational memory so rich, deep, and alive, it constitutes something like a planetary awakening, not merely into a new consciousness, but a new memory of itself.

Perhaps what is happening is that the emerging articulation of civilizational memory is turning eternity — an objective temporal infinity — into immortality (a subjective temporal infinity). This is the essence of the computational cosmopolis.

In my talk on Wednesday, I hope to dive into some of the specifics of what I think is happening, or about to happen, and how best we can situate ourselves within the transformation. I will also share more about the Protocol Symposium, and why that might be a portal for entering the emerging computational cosmopolis for at least some of you. If you’re interested in participating in any component, I recommend you register sooner rather than later. The deadline, again, is August 22.

1

A decade ago, I offered up a notion of “generative pluralism” as my synthesis of technological determinism and social determinism, which constructed the future as, in some sense, a necessary function of the ascendant technologies of an era. Now, while I still think the future is a necessary function of the technological present, I don’t think technology is sufficient to define the future. This opens the door for multiple technological futures. So for instance, there is more than one world that corresponds to the forecast that “software is eating the world.” There is more than one software-eaten world out there, in the fan of futures we are navigating. The subset we end up inhabiting constitutes a set of cosmopolises.

Free the Patient: A Competitive-Federalism Fix for Telemedicine

During the pandemic, many restrictions on telemedicine were lifted, making it far easier for physicians to treat patients across state lines. That window has largely closed. Today, unless a doctor is separately licensed in a patient’s state—or the states have a formal agreement—remote care is often illegal. So if you live in Virginia and want a second opinion from a Mayo Clinic physician in Florida, you may have to fly to Florida, unless that Florida physician happens to hold a Virginia license.

The standard framing says this is a problem of physician licensing. That leads directly to calls for interstate compacts or federalizing medical licensure. Mutual recognition is good. Driver’s licenses are issued by states but are valid in every state. No one complains that Florida’s regime endangers Virginians. But mutual recognition or federal licensing is not the only solution nor the only way to think about this issue.

The real issue isn’t who licenses doctors. It’s that patients are forbidden from choosing a licensed doctor in another state. We can keep state-level licensing, but free the patient. Let any American consult any physician licensed in any state. That’s competitive federalism—no compacts, no federal agency, just patient choice.

A close parallel comes from credit markets. After Marquette Nat. Bank v. First of Omaha (1978), host states could no longer block their residents from using credit cards issued by national banks chartered elsewhere. A Virginian can legally borrow on a South Dakota credit card at South Dakota’s rates. Nothing changed about South Dakota’s licensing; what changed was the prohibition on choice.

Consider Justice Brennan’s argument in this case:

“Minnesota residents were always free to visit Nebraska and receive loans in that state.” It hadn’t been suggested that Minnesota’s laws would apply in that instance, he added. Therefore, they shouldn’t be applied just because “the convenience of modern mail” allowed Minnesotans to get credit without having to visit Nebraska.

Exactly analogously, everyone agrees that Virginia residents are free to visit Florida and be treated by Florida physicians. No one suggests that Virginia’s laws should follow Virginia residents to Florida. Therefore, Virginia’s laws shouldn’t be applied just because the convenience of modern online tools allows Virginians to get medical advice and consultation without having to visit Florida.

In short, patients should be allowed to choose physicians as easily as borrowers choose banks.

The post Free the Patient: A Competitive-Federalism Fix for Telemedicine appeared first on Marginal REVOLUTION.

       


Where are all of these meteors coming from?


Palantir might be the most over-valued firm of all time

What would make it worth buying?

MBA: Mortgage Applications Increase in Latest Weekly Survey

From the MBA: Mortgage Applications Increase in Latest MBA Weekly Survey
Mortgage applications increased 10.9 percent from one week earlier, according to data from the Mortgage Bankers Association’s (MBA) Weekly Mortgage Applications Survey for the week ending August 8, 2025.

The Market Composite Index, a measure of mortgage loan application volume, increased 10.9 percent on a seasonally adjusted basis from one week earlier. On an unadjusted basis, the Index increased 10 percent compared with the previous week. The Refinance Index increased 23 percent from the previous week and was 8 percent higher than the same week one year ago. The seasonally adjusted Purchase Index increased 1 percent from one week earlier. The unadjusted Purchase Index increased 1 percent compared with the previous week and was 17 percent higher than the same week one year ago.

“The 30-year fixed mortgage rate declined to 6.67 percent last week, which spurred the strongest week for refinance activity since April. Borrowers responded favorably, as refinance applications increased 23 percent, driven mostly by conventional and VA applications,” said Joel Kan, MBA’s Vice President and Deputy Chief Economist. “Refinances accounted for 46.5 percent of applications and as seen in other recent refinance bursts, the average loan size grew significantly to $366,400. Borrowers with larger loan sizes continue to be more sensitive to rate movements.”

Added Kan, “Given the relative attractiveness of ARM rates compared to fixed rate loans, ARM applications increased 25 percent to their highest level since 2022, and the ARM share of all applications was almost 10 percent. However, lower rates were not enough to entice more homebuyers back into the market, as purchase applications were only up around one percent over the week, although still stronger than last year’s pace.”
...
The average contract interest rate for 30-year fixed-rate mortgages with conforming loan balances ($806,500 or less) decreased to 6.67 percent from 6.77 percent, with points increasing to 0.64 from 0.59 (including the origination fee) for 80 percent loan-to-value ratio (LTV) loans.
emphasis added
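As a rough, hedged illustration of what that rate move means for the average refinance loan size cited above, here is a back-of-the-envelope Python sketch (principal and interest only, ignoring points, taxes, and insurance):

# Back-of-the-envelope: monthly principal-and-interest payment on a
# 30-year fixed loan at last week's vs. this week's average rate.
def monthly_payment(principal, annual_rate, years=30):
    r = annual_rate / 12                      # monthly interest rate
    n = years * 12                            # number of monthly payments
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

loan = 366_400                                # average refinance loan size cited by the MBA
old, new = monthly_payment(loan, 0.0677), monthly_payment(loan, 0.0667)
print(f"At 6.77%: ${old:,.0f}/mo   At 6.67%: ${new:,.0f}/mo   Difference: ${old - new:,.0f}/mo")

The ten-basis-point drop works out to only about $24 a month on a loan of that size, which is part of why refinance activity tends to respond in bursts rather than continuously.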
Mortgage Purchase Index

The first graph shows the MBA mortgage purchase index.

According to the MBA, purchase activity is up 17% year-over-year unadjusted. 

Red is a four-week average (blue is weekly).  

Purchase application activity is still depressed, but above the lows of October 2023 and slightly above the lowest levels during the housing bust.  

Mortgage Refinance Index
The second graph shows the refinance index since 1990.

The refinance index increased and is picking up a little with lower mortgage rates.

Africa’s cultural landmarks: rock-hewn churches of Tigray, Ethiopia

Photo of a person in white walking along a cliffside path near a stone structure with a scenic landscape view.

Ascend steep cliffs to discover Ethiopia’s ancient churches carved into rock, still serving as places of worship today

- by Aeon Video

Watch at Aeon

David Beckworth on stablecoins and stability

A key reason for the global financial cycle, as outlined by Hélène Rey, is that many firms and financial institutions in developing countries borrow heavily in U.S. dollars while their revenues, assets, and cash flows are denominated in local currency. When the Fed tightens policy, the dollar appreciates, global financial conditions tighten, and these firms suddenly find themselves squeezed by rising dollar debt burdens and falling asset values. This balance sheet shock forces cutbacks and retrenchment. This is one of the key channels through which U.S. monetary policy spills over globally.

But what Rashad Ahmed noted in our discussion is that if households and firms begin holding dollar assets via stablecoins—in addition to borrowing in dollars—they begin to build a natural hedge on their balance sheets. A stronger dollar no longer only increases liabilities; it also raises the value of their dollar assets, helping to offset the shock. In effect, stablecoins can act as a decentralized balance sheet stabilizer, muting one of the very mechanisms that drives global financial volatility.
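To make the mechanism concrete, here is a deliberately stylized Python sketch with hypothetical numbers (nothing here is drawn from the underlying post): a firm with dollar debt and local-currency assets, with and without a stablecoin dollar buffer, facing a 20 percent dollar appreciation.

# Stylized, hypothetical example: how dollar assets held via stablecoins
# can partially offset the balance-sheet hit from a stronger dollar.
def net_worth_local(stablecoin_dollars, dollar_debt, local_assets, fx):
    # fx = local-currency price of one dollar
    return local_assets + stablecoin_dollars * fx - dollar_debt * fx

fx_before, fx_after = 10.0, 12.0    # dollar appreciates 20% against the local currency
local_assets = 1_000.0              # firm's local-currency assets
dollar_debt = 50.0                  # dollar-denominated borrowing

for stablecoins in (0.0, 30.0):     # without vs. with a dollar buffer
    before = net_worth_local(stablecoins, dollar_debt, local_assets, fx_before)
    after = net_worth_local(stablecoins, dollar_debt, local_assets, fx_after)
    print(f"stablecoin holdings = {stablecoins:>4}: net worth {before:,.0f} -> {after:,.0f}")

With no dollar assets the firm’s net worth falls by 100 in local-currency terms; with the stablecoin buffer the loss shrinks to 40, which is the offsetting effect Ahmed describes.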

Here is the full post, featuring also a podcast on the topic.

The post David Beckworth on stablecoins and stability appeared first on Marginal REVOLUTION.

       


The “Incriminating Video” Scam

A few years ago, scammers invented a new phishing email. They would claim to have hacked your computer, turned your webcam on, and videoed you watching porn or having sex. BuzzFeed has an article talking about a “shockingly realistic” variant, which includes photos of you and your house—more specific information.

The article contains “steps you can take to figure out if it’s a scam,” but omits the first and most fundamental piece of advice: If the hacker had incriminating video about you, they would show you a clip. Just a taste, not the worst bits so you had to worry about how bad it could be, but something. If the hacker doesn’t show you any video, they don’t have any video. Everything else is window dressing.

I remember when this scam was first invented. I calmed several people who were legitimately worried with that one fact.

*Saving Can-Do*

The author is Philip K. Howard and the subtitle is How to Revive the Spirit of America.  The book is short, to the point, in the “abundance + state capacity” genre.  Excerpt, noting I will not double indent:

“Three major changes are needed to restore the authority to achieve results: a new legal framework, a new institution that can inspire trust in ongoing decisions, and a special commission to design the details of these changes.

New legal framework defining official authority.

Here’s a sketch of what a new infrastructure decision-making framework might look like:

  1. Separate agencies should be designated as decision-makers for each category of infrastructure.  The head of that agency should have authority to approve permits. For federal approvals, all decisions should be subject to White House oversight. For projects with national or regional significance, federal decisions should preempt state and local approvals.
  2. Fifty years of accumulated mandates from multiple agencies should be restated as public goals that can be balanced against other public goals….a recodification commission is needed to reframe thousands of pages of detailed regulatory prescriptions into codes that are goal-oriented and honor public tradeoffs. But until this can happen, Congress should authorize the executive branch to approve permits “notwithstanding provisions of law to the contrary” — provided the executive branch identifies the relevant provisions and provides a short statement of why the approvals are in the public interest.
  3. Processes should be mainly tools for transparency and should be understood by courts as general principles reviewed for abuse of discretion, not as rules requiring strict compliance. NEPA has been effectively rewritten by judicial fiat, so it should be amended to return to its original goals — to provide environmental transparency, public comment, and a political judgment.
  4. The jurisdiction of courts must be sharply limited. Lawsuits should be allowed for approvals that transgress boundaries of executive responsibility, not for inadequate review of process, unless these are so deficient as to be arbitrary.

Changing law is always politically difficult, but the second challenge is perhaps even harder: creating new institutions that can inspire trust.”

TC again: All worth a ponder.

The post *Saving Can-Do* appeared first on Marginal REVOLUTION.

       


Configuring GitHub Codespaces using devcontainers

GitHub Codespaces provides full development environments in your browser, and is free to use for anyone with a GitHub account. Each environment has a full Linux container and a browser-based UI using VS Code.

I'm a huge fan of Codespaces for running workshops: it means you can skip that awful half hour at the beginning of any workshop where you try to ensure everyone has a working development environment.

With Codespaces a fresh development environment is a case of clicking a button and then waiting for a couple of minutes. If you break it, click the button again to get a new one.

Codespaces generally launch from a GitHub repository, which can be configured to use a specific configuration. Here's the pattern I'm using for these, inspired by this Python 3.13 example by Pamela Fox.

A Python environment with some VS Code plugins

My simonw/codespaces repository contains a very simple configuration that provides Python 3.13 and Node.js LTS plus VS Code with some useful plugins.

The only required file is .devcontainer/devcontainer.json. Here's that file in full:

{
  "name": "Python 3.13",
  "image": "mcr.microsoft.com/devcontainers/python:3.13-bullseye",
  "features": {
    "ghcr.io/devcontainers/features/node:1": {
      "version": "latest"
    }
  },
  "customizations": {
    "vscode": {
      "settings": {
        "python.defaultInterpreterPath": "/usr/local/bin/python",
        "python.linting.enabled": true
      },
      "extensions": [
        "ms-python.python",
        "ms-python.vscode-pylance",
        "ms-python.vscode-python-envs",
        "GitHub.copilot"
      ]
    }
  },
  "postCreateCommand": "pip install uv"
}

This would work with just the name and image fields.

I'm using Microsoft's Dev Containers base image for Python 3.13 on Debian Bullseye.

mcr.microsoft.com/devcontainers/python:3.13-bullseye

There's a full list of those images in the src/ directory of their devcontainers/images repository. They have them for Go, Rust, Java, PHP and more.

It's useful to have Node.js LTS installed so that NPM etc works out of the box. Here I'm using the "features" object to add that.

You can find a list of more features in the devcontainers/features repository, again in the src/ directory.

That features/node:1 one is defined by this install.sh script.

I copied the "vscode" block from Pamela, but I added the "GitHub.copilot" extension to enable Copilot in VS Code out of the box.

I also added that last line:

"postCreateCommand": "pip install uv"

Anything in postCreateCommand will be run after the container is first created. Here I'm using pip to install uv, after which uv tool install X etc will be available.

Providing a link to launch the container

Once you have added a .devcontainer/devcontainer.json to a repository you can construct a link that will launch that repository as a Codespace like so:

https://codespaces.new/simonw/codespaces?quickstart=1

Here's the documentation for this feature.

The ?quickstart=1 parameter causes the page to consider any Codespaces you already have running against that repository and suggest using those rather than starting a new one. It's a better option in my opinion.
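If you're generating these launch links for several repositories at once (say, for a workshop handout), here's a trivial Python helper that assembles them; the owner and repository names below are just the ones from this post:

def codespace_launch_url(owner, repo, quickstart=True):
    # Build a codespaces.new launch link for a GitHub repository.
    url = f"https://codespaces.new/{owner}/{repo}"
    return (url + "?quickstart=1") if quickstart else url

print(codespace_launch_url("simonw", "codespaces"))
# https://codespaces.new/simonw/codespaces?quickstart=1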

Bonus: Installing and configuring LLM

I created a second Codespaces repo, simonw/codespaces-llm, which is almost identical to the above except the postCreateCommand contains the following:

"postCreateCommand": "pip install uv && uv tool install llm && llm install llm-github-models && llm models default github/gpt-4.1"

Here's that but more readable:

pip install uv &&
uv tool install llm &&
llm install llm-github-models &&
llm models default github/gpt-4.1

This installs uv, then uses uv tool install to install llm, then uses the llm install command to install the llm-github-models plugin, and finally sets the default model used by LLM to github/gpt-4.1.

The net effect of this is that the user will then be able to run commands like this:

llm "Fun facts about pelicans"

GitHub Codespaces automatically sets a GITHUB_TOKEN environment variable with a token for the current user.

The llm-github-models plugin provides access to the GitHub Models collection, which can be accessed using that GITHUB_TOKEN as an API key.

Usage of GPT-4.1 is free using that key (albeit rate-limited), so setting the default model to github/gpt-4.1 means users get access to a very competent model for free!
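The same setup also works from Python code inside the Codespace via LLM's Python API. Here's a minimal sketch — it assumes the llm-github-models plugin registers the model under the same github/gpt-4.1 identifier and picks up GITHUB_TOKEN the way the CLI does:

import llm

# Uses the plugin installed by postCreateCommand and the GITHUB_TOKEN
# that Codespaces sets automatically for the current user.
model = llm.get_model("github/gpt-4.1")
response = model.prompt("Fun facts about pelicans")
print(response.text())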

Time to Call Conservatives Out For Their Weakness


When confronted with the latest lawless and authoritarian move from America’s mad king and the band of thugs with whom he has surrounded himself, there is no single perfect answer to the question “How should we react to this?” Outrage, warnings, correction of misinformation, lawsuits, organizing, community support? Yes to all.

But let me put in a word for good old-fashioned mockery. Trump’s insistence that he has to send in troops to patrol the mean streets of Washington, D.C. should be countered with all of the above. But one of the main assertions of authoritarianism is that it is strong, which is something a great many people are attracted to. So if you want to undermine it, one fruitful strategy is to convince people that it’s actually weak.

Before we get to the lily-livered conservative babies and their terror of urban density, let’s start with some facts, because while liberals like to flatter themselves that they are the rational ones and conservatives are emotional, the truth is that we all respond to both facts and feelings. So yes, it is important that everyone understands that despite the lies Trump tells, crime in Washington is way down, as it is in most of the country. Here’s a chart showing the long-term trend, from Jeff Asher:

The basic story is that crime peaked in the early 1990s and has been on a long decline since. While there was a spike during covid — which happened in both urban and rural areas (though nobody talks about the latter) — the decline has continued. But as a general rule, people don’t believe it. Significant numbers of Americans always say that crime is going up:

But as that chart shows, people are influenced by what they hear. Why was there an increase in Republicans saying crime was up in 2016? Because their presidential nominee was going around the country telling everyone that we were all living through The Purge, except instead of one day a year it was every day. Then when crime actually did increase in 2020, he and other Republicans told people it wasn’t just bad, it was the worst thing anyone had ever seen, that every American city was in flames and the streets had become rivers of blood.

Behold the hellscape that is Washington, D.C. (Photo by BeyondDC, CC BY-NC 2.0)

How to fight back against the fear-mongering

The reaction of most Democrats to Republican fear-mongering is usually to say “We too are concerned about crime! We too are tough! Which we will demonstrate by advocating policies that are, while not quite as draconian as what Republicans suggest, still pretty darn tough!” This is seldom a compelling argument.

It’s also not particularly successful as a policy matter; the cities that have had the most striking recent success in reducing violent crime, such as Baltimore and Boston, have done so with comprehensive strategies built on deep community engagement, which is much more labor-intensive than just telling cops to go out and crack some skulls.

What Democrats usually neglect is the emotional component of the issue. When Trump says, as he did Monday, that “Our capital city has been overtaken by violent gangs and bloodthirsty criminals, roving mobs of wild youth, drugged out maniacs and homeless people,” he’s not appealing to the rational part of people’s brains.

So one effective way to counter that messaging is to begin defining Republican fears about urban crime and the Republican rhetoric meant to make everyone afraid of it as the product of weakness. Let’s say you hear something like this:

You can say that Congressman Burchett is misguided about crime rates, but you can also say that he’s a sniveling little candy-ass hiding in his office because he’s afwaid of the dark, while America’s liberals happily enjoy the fruits of urban living. Or this, from a testosterone-fueled hunk of man-meat who works at a right-wing legal organization:

I’m not a big fan of people zooming around in groups on ATVs, but I have it on good authority that ATVs are quite popular in rural America, and some of the people who ride them there even pop the occasional wheelie. So what might be different about the ATV riders — pardon me, “gang of youths” — that master Chamberlain witnessed in D.C. that made him shout “Crime!”, tears of panic streaming down his face as he cowered behind a mailbox? Hmm…give it a moment, it’ll come to you.

Folks like Burchett and Chamberlain are getting dragged with the appropriate vigor on social media, but what I’d really like to see is mockery of Republican weakness become the norm from a wider variety of voices on the left, including Democratic elected officials. Imagine if every time you saw a Democratic member of Congress talk about what’s happening now they said not just “This is an unacceptable authoritarian power grab,” but also “If Donald Trump and the other whiny fraidy-cats in his party want to tighten up their diapers and come visit America’s vibrant and dynamic cities where millions of us are living fulfilling lives every day, we’ll be happy to show them around.”

A little childish? Sure. And I’m not begrudging anyone’s decision to live in a suburb, a small town, or a remote mountain cabin; everyone has different tastes and different things they value in a place to live. But these people are weaklings. And they ought to be mocked.


Exclusive: Inside San Francisco's Robot Fight Club

For the past few months, Cix Liv – real name – has been operating his company REK out of a no-frills warehouse space off Van Ness in San Francisco. The office has a couple of makeshift desks with computers and a bunch of virtual reality headsets on some shelves. More to the point, REK also has four humanoid-style robots hanging from gantries, and they’ve been outfitted with armor, boxing gloves, swords and backstories.

These machines represent the start of a robot fight club for Liv and his small REK team. They’re at the vanguard of a movement taking place in San Francisco to create a new sport in which robots piloted remotely by people will do battle inside of cages. This sport would mix flavors from mixed martial arts, pro wrestling, the tech world and anime, and, in so doing, would intermingle skill and theater in equal doses. “This is going to be the next UFC,” says Liv. “When this guy's walking around and he has full swords, you can feel the pounding in the ground. You know deep in your soul that this thing could kill you. It’s like when you see a lion or something and the hairs go up on the back of your neck. Once people can really feel this and see this, it’ll be fully mainstream.”


We’ve obviously had robot competitions for years. BattleBots started back in the 1990s, giving hardware nerds a chance to show off their cool contraptions. Those bots, though, were mostly ground dwellers and gimmicky. Now, however, we’re amid the rise of humanoid robots being built by lots of start-ups in the U.S. and, more notably, China. The humanoids bring with them the chance to create a combat sport that looks more familiar to mainstream viewers and the opportunity for better storytelling as we anthropomorphize the bots and develop tales for their pilots.

Some early stabs at these robot battle competitions have taken place recently in San Francisco. Two underground, invite-only events have been held in the parking garage of a downtown building as part of the Ultimate Fighting Bots (UFB) league. Pairs of robots squared off against each other, while a couple hundred people cheered them on from the sides. During intermissions at the first fight, humans stepped into the ring to chase each other with tasers while the fighters prepared their bots for the next battle. (Yes, really.)

The people behind UFB are Michael Cho and Xenia and Vitaly Bulatov, who are married. Cho is a longtime entrepreneur who has been working in the robotics field for years and originated the UFB concept. The Bulatovs also work in the robotics realm and have been organizing the events in San Francisco and pumping social media full of clips from their contests.

The scene is reminiscent of the early parts of the movie Big Hero 6 in which people of all types turn up in shady lairs with their battle bots while onlookers imbibe and gamble. Only the backdrop for that movie was San Fransokyo, and San Fransokyo was edgier and had better bots. These first competitions have been somewhere between exciting and farcical with the robots overheating and bumbling around in between their moments of ferocity. Liv has been the burgeoning star of these fights, winning the first competition and then shifting into a sort of Joe Rogan announcer role for the second, and he thinks he has a plan to uplevel the competitions.

Cix Liv with his trophy from the first robot battle

LIV IS a large, brawny man who favors muscle tees that promote his biceps. He grew up in the worlds of online gaming and virtual reality. His name is, in fact, a gaming handle, and he made it legal after an identity theft incident. “I called some agency and they said that I could have my credit frozen for the rest of my life or legally change my name,” he says. “They meant the second option as kind of a joke. But, I was like, ‘Fuck it. I’m Cix now.’”

The Liv part is a nod to a company Cix founded in 2016 that let people livestream their virtual reality sessions. The technology developed by the company (LIV) stood as one of the first major efforts to transport the action taking place inside a VR headset out onto a screen for other people to see. Many people consider LIV videos to be the reason that the game Beat Saber took off as a viral success. And VR is now key to REK’s robot combat sport plans.

In REK’s idealized vision, pilots will don VR headsets, slide their arms into combat controllers and enter a virtual fighting cockpit. The pilots will then be able to initialize a series of attacks that are translated by software into movements carried out by the robots. We’re talking full-on punches and slaps and swings, and we’re talking about sword-wielding robots trying to butcher each other as god intended.

The technology required to make all this work is somewhere well beyond daunting. REK has already started training AI models on fighting moves gathered from existing data sets and videos and has been converting that training into maneuvers that can actually be performed by the bots. It’s also built an early prototype of its VR software that gives the pilot a holographic view of a robot’s body, surroundings, health and other performance metrics.

One problem for REK and anyone else that wants to get into this sport is that the current humanoids on the market aren’t really made for fighting. The ones being used most often in the San Francisco fights so far come from Booster Robotics. They’re the size of children and weigh 30kg each. The bots do have some balancing and fighting skills built in but tend to overheat when their actuators are fired in quick succession for, say, a flurry of jabs. To get around this overheating issue, the robots lower their torque automatically, which then lessens the power of the jabs. These robots also can recover pretty well when they’re falling forward but do a bad job of recovering when they’ve been pushed back.1

REK has focused on using bots made by Unitree Robotics. It has two mid-sized bots and then two larger bots that weigh about 90kg each. These robots have a wide range of motion and can even come with some built-in boxing skills. Still, they’re not general-purpose fighters either and suffer from balancing issues as well. The big ones cost up to $100,000 each, and they’re usually found in university and corporate settings where they’re being used for research rather than as fodder for combat sports training.


Both Booster and Unitree are based in China, which is outpacing the U.S. when it comes to humanoid robots. The key technology on these systems is the actuators that control the movements in the limbs, and China is the actuator capital of the world. When REK wants to make a software change to the bots or order a replacement part, it’s often gated by dealing with overseas engineers and shipping schedules. “It would help if I knew Mandarin,” Liv says.

What REK really has going for it is its vast virtual reality experience. There’s Liv and then the company’s chief technology officer Amanda Watson. Before REK, she worked at Oculus/Meta for more than seven years, spending some of that time sitting alongside John Carmack. Watson led the development of much of the Link technology that made it possible to connect a VR headset to a gaming PC via Wi-Fi, which let players stop being tethered via cables to their computers. Also on the REK team is Nima Zeighami, who has spent more than a decade working on virtual reality, augmented reality, and game engines, and did a stint at Leia building cameras and light-field displays.

Amanda Watson being a VR guru

Watson has a track record of approaching the latency problems that can plague VR in novel ways, which could be key here. REK must make the pilots’ moves feel natural and real-time if the fights are to resemble true combat sports. “If you know the real issues associated with latency, then you can control for them,” Watson says. “A good programmer can make it look as though things are happening very responsively.”

IN JULY, Liv posted a video in which one of REK’s robots went nuts. The team had sent a full-body command to the robot, but its feet weren’t touching the ground at the time because it hung from a gantry. The robot could not depend on its usual stability mechanisms and compensated by flailing around wildly. Liv had genuine fear and panic on his face as he approached the hulking mass and tried to decide how to stop it from wrecking itself and the REK office and its humans.2

This video told me two things. First, it’s obvious that the journey toward a robot fighting league will be an arduous one. The humanoid robots are not yet being mass produced, so they’re expensive and the costs for experimentation – accidental or otherwise – will be high. Add on all the software and virtual reality work that needs doing, and the slog becomes very real. Second, it told me that Liv might just have the media savvy needed to create a robot fighting league. He posted this low moment for REK on purpose, knowing it would go viral and generate interest in the cause.


While Liv has been participating in the Ultimate Fighting Bots events, it’s unclear if he and REK will stay linked to UFB or go off and create their own league. (We’ll have more on the UFB backers soon.) At the moment, Liv is convinced that REK is the only company working on the underlying technology components required to turn robot battles into a popular, spectator sport. “We’re building all this really complicated tech to make the mass consumption of this possible,” he says. “Our method of controlling the robots will be more capable than what anyone else is doing.” In one scenario, REK creates the software and hardware foundation of this new sport and sells its technology to others.

So far, REK has been mostly self-funded. “I’d made some money from a prior start-up and was deciding whether to get a mortgage on a house or have robots,” Liv says. “I chose to do this instead of having a house.”

During my two-day stint hanging out at the REK office, I would see Liv light up again and again when talking about what future fights could look like. His mind goes to patriotic storylines. “You could have Japanese fighters show up with a samurai and battle against a robot outfitted in chainmail from the U.K.,” he says. “Or you could have Tesla’s Optimus versus Unitree. If Elon was taking on China, it would be broadcast to the whole world, and you would literally have state-level engineers trying to ensure their country wins.”

Liv also likes the opportunities this type of sport offers for unconventional competitive pairings. Men versus women. Kids versus adults. The young versus the old. “It would be the first combat sport where you have parity and everyone is competing on a level playing field,” he says.

REK has already been building out the backstories for its bots. One of them is Derek – the robot who doesn’t want to fight because he’s peaceful at heart but has to fight to earn his freedom. He’s the one that spun out of control. “Poor Derek just wants to be free,” Watson jokes. Another machine is named Rambot and has dog tags belonging to John Rambo.

China is clearly ahead on producing the hardware that these bots need but, so far, has not embraced the full-on fighting aspect. Companies there have been doing sedate demos. Liv is convinced that San Francisco and the U.S. will be the robot battling leaders. “China is ahead on production and engineering,” Liv says. “But the U.S. is still the cultural battleground for the world. The winner of the robot fights will probably be an American using Chinese robots – at least for now.”

In its current incarnation, the technology is cool but unpolished. The robots can look goofy at times, and they’re far from executing long series of sophisticated moves. The scene itself, though, is exciting, and there is a feeling of inevitability to all of this. Like, obviously we’re going to have humanoids try and slaughter each other. And obviously people will want to be jacked in controlling these things as the slaughter takes place. If nothing else, it seems better to have bots damaging themselves than humans damaging themselves. “Gen Z doesn’t want to get punched in the face anymore,” Liv says. “Parents don’t want their kids getting concussions. The future of combat sports is robots.”

Of course, the bots will probably have their own opinions on all of this soon enough.


1

Cix used this knowledge to his advantage in the first robot fight. Instead of spamming his controller like some other pilots, he made judicious punching decisions that guaranteed maximum torque. He also tried to nudge the other robots toward a spot in the ring where a bump in the floor would cause them to tumble backwards.

2

The flailing immediately disconnected the robot’s Ethernet cable, making it impossible to send it a stop command.


ULA launches Vulcan rocket on first Space Force mission

United Launch Alliance’s Vulcan rocket roared off the pad at Space Launch Complex 41 to begin the USSF-106 mission for the U.S. Space Force. This was the first national security launch using a Vulcan rocket and the 101st national security mission for ULA. Image: Adam Bernstein/Spaceflight Now

United Launch Alliance fired off its first fully operational Vulcan rocket Tuesday, boosting two military satellites into space in the first Space Force-sanctioned flight of a new launcher that eventually will replace the company’s Atlas 5 and already-retired Deltas.

Equipped with four solid-fuel strap-on boosters for additional takeoff power, the 198-foot-tall Vulcan’s two methane-fueled BE-4 engines thundered to life at 8:56 p.m. EDT, instantly propelling the rocket away from pad 41 at the Cape Canaveral Space Force Station.

Arcing over the Atlantic Ocean on an easterly trajectory, the Vulcan put on a spectacular sky-lighting show as it roared aloft atop nearly 3 million pounds of thrust and a jet of brilliant exhaust visible for scores of miles around.

The four strap-on boosters were jettisoned about 90 seconds after liftoff, followed three-and-a-half minutes later by burnout and separation of the Vulcan’s 109-foot-tall first stage.

The four solid rocket boosters that helped augment the power of the BE-4 engines on the Vulcan booster were jettisoned less than five minutes into the launch of the USSF-106 mission on Aug. 12, 2025. Image: Michael Cain/Spaceflight Now

The Centaur second stage’s two hydrogen-fueled Aerojet Rocketdyne RL10C engines ignited and took over from there, but in keeping with standard policy for military missions, ULA ended its launch commentary at that point and the rest of the flight was carried out in secrecy.

At least two satellites were believed to be on board: one fully classified spacecraft and an experimental satellite that will carry out tests of upgraded atomic clocks and navigation technology that could lead to more accurate, jam-proof Global Positioning System-type data for military and commercial users.

Both satellites were bound for geosynchronous orbit 22,300 miles above the equator where spacecraft take 24 hours to complete one orbit and thus appear stationary in the sky.
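As a quick sanity check on that altitude figure, Kepler's third law gives the orbit whose period matches one rotation of the Earth. A short Python sketch (standard values for Earth's gravitational parameter and radius):

import math

MU_EARTH = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
EARTH_RADIUS_KM = 6378.137      # equatorial radius, km
SIDEREAL_DAY_S = 86164.1        # one rotation of the Earth, seconds

# Kepler's third law: a^3 = mu * T^2 / (4 * pi^2)
semi_major_axis_m = (MU_EARTH * SIDEREAL_DAY_S**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = semi_major_axis_m / 1000 - EARTH_RADIUS_KM
print(f"Geostationary altitude: {altitude_km:,.0f} km (~{altitude_km * 0.621371:,.0f} miles)")

That works out to roughly 35,800 km, or about 22,200 miles, consistent with the figure quoted above.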

GPS satellites operate in 12,500-mile-high orbits, but Navigation Technology Satellite 3, or NTS-3, will operate from its much higher perch using an advanced phased array antenna that can electronically direct signals to receivers in multiple locations across broad regions.

It is the Pentagon’s first experimental navigation satellite since GPS precursors were launched in the 1970s. Along with the NTS-3 satellite, designed and built by L3Harris Technologies, the program includes a ground-based control system and receivers linked by software that enables rapid reprogramming as needed for upgrades or to utilize different signals.

“GPS is such an integral part of our lives today,” said Joanna Hinks, a senior aerospace engineer with the Air Force Research Laboratory at Kirtland Air Force Base in New Mexico. “You probably all use it in ways that you didn’t even realize throughout your morning.

“And with NTS-3, we are going to be experimenting with a number of different technologies that look at how we can continue to evolve and augment GPS to make sure that it remains the gold standard that our warfighters need.”

An artist’s impression of the NTS-3 experimental navigation technology satellite. Graphic: L3Harris

While the major goal of the flight is launching the USSF-106 payloads, the launch marked a major milestone for United Launch Alliance.

It was the third launch of the powerful new Vulcan after two test flights last year and the first to be “certified” by the Space Force to carry costly national security spy satellites and other expensive military spacecraft.

“This mission is headed directly to geosynchronous orbit and will be one of our longest missions to date,” said Gary Wentz, ULA vice president of government and commercial programs. “This is the sole purpose of this vehicle. It was purposely designed to support these missions doing direct inject to geo for the Space Force.”

The Vulcan is replacing ULA’s already-retired Delta family of rockets and the venerable Atlas 5, which is powered by a Russian-built RD-180 first stage engine. Criticism of ULA’s use of Russian engines for launches of American military satellites and NASA spacecraft helped fuel congressional pressure for a new all-American launcher.

Thirteen Atlas 5’s are left in ULA’s inventory, all of them slated for civilian launches as ULA, a partnership of Boeing and Lockheed Martin, transitions to an all-Vulcan fleet.

In the meantime, SpaceX dominates the world launch market with its partially reusable and highly successful kerosene-fueled Falcon 9 and triple-core Falcon Heavy rockets. So far this year, SpaceX has launched 97 Falcon 9s.

But ULA President and CEO Tory Bruno said the Vulcan’s first stage, using high-performance BE-4 engines provided by Blue Origin (owned by Amazon-founder Jeff Bezos), and its high-power Centaur upper stage make the rocket particularly well suited for launching heavy military payloads into hard-to-reach orbits.

“It is specifically designed for these exotic orbits that are primarily for the government,” he said. “And this particular mission is the quintessential example. It is a direct injection to geosynchronous orbit. That means that it is a very, very long-duration mission.”

United Launch Alliance’s Vulcan rocket roared off the pad at Space Launch Complex 41 to begin the USSF-106 mission for the U.S. Space Force. This was the first national security launch using a Vulcan rocket and the 101st national security mission for ULA. Image: Michael Cain/Spaceflight Now

He said the first stage is, in effect, delivering the Centaur to space with a full load of propellant “to go from LEO (low-Earth orbit) to somewhere else, like all the way to the geo belt, which is 20 times higher up. And what that translates to in capability (is) certainly more mass and more accuracy than is easily done by others.”

While he didn’t mention SpaceX or its Falcon Heavy by name, or ULA’s retired Delta 4 Heavy, Bruno said “if you’re a typical three-core heavy launch vehicle and … really derived from a vehicle optimized for that LEO mission, you’re going to have to have three cores to get out there, and you’re going to have to expend all of them.

“And here’s the really complicated rocket science. You know, one core is cheaper and more efficient than three expendable cores. It’s literally that simple.”

That, coupled with the high-energy Centaur upper stage, gives ULA the capability to launch heavy payloads directly to high orbits without requiring satellites to use their own thrusters — and limited propellant — in transit.

ULA is expanding its ground infrastructure and expects to launch nine flights in 2025, reaching a cadence of two per month by end of year. The company expects to launch between 20 and 25 flights in 2026.

Viasat Unveils HaloNet Capability Portfolio for Near-Earth Communications and Beyond


Viasat’s modular portfolio of capabilities offers space-to-ground connectivity and relay solutions for resilient and responsive data transport

The post Viasat Unveils HaloNet Capability Portfolio for Near-Earth Communications and Beyond appeared first on SpaceNews.

NASA emphasizes smallsats for science amid budget uncertainty

Nicola Fox, NASA associate administrator for science, delivers the keynote address at SmallSat 2025 Monday. Credit: Allison Bills / SmallSat

SALT LAKE CITY – The head of NASA’s science directorate said the agency remains committed to using small satellites to carry out a variety of missions, although those plans face uncertain budgets. In a keynote at the Small Satellite Conference here Aug. 11, Nicola Fox, NASA associate administrator for science, highlighted the role that smallsats were […]

The post NASA emphasizes smallsats for science amid budget uncertainty appeared first on SpaceNews.

Tom Walkinshaw on pocket-sized satellites

Tom Walkinshaw

In this episode of Space Minds, host Mike Gruss speaks with Tom Walkinshaw, founder and CEO of Alba Orbital, a Scotland-based manufacturer specializing in pocket-sized satellites known as PocketQubes.

The post Tom Walkinshaw on pocket-sized satellites appeared first on SpaceNews.

ESCAPADE trajectory design creates new options for Mars smallsat missions

ESCAPADE

An alternative trajectory developed for a NASA Mars smallsat mission could enable other missions to go to Mars outside the constraints of conventional launch windows.

The post ESCAPADE trajectory design creates new options for Mars smallsat missions appeared first on SpaceNews.

Rocket Lab completes acquisition of U.S. satellite sensor maker Geost

Geost’s technology is used in U.S. national security missions, including missile warning and tracking, space surveillance, and tactical reconnaissance

The post Rocket Lab completes acquisition of U.S. satellite sensor maker Geost appeared first on SpaceNews.

TraCSS moving past beta test of space traffic coordination system

Dmitry Poisik

The Office of Space Commerce is moving beyond the beta-test phase of its space traffic coordination system as it prepares to enter service in January.

The post TraCSS moving past beta test of space traffic coordination system appeared first on SpaceNews.

Fully funded key-market coverage plan lifts AST SpaceMobile

Illustration of an AST SpaceMobile BlueBird cell service satellite. Credit: AST SpaceMobile

AST SpaceMobile shares rose more than 8% after the direct-to-smartphone operator said it has secured all the funding needed to deploy 45–60 satellites, enough to provide continuous coverage in the United States and other key markets.

The post Fully funded key-market coverage plan lifts AST SpaceMobile appeared first on SpaceNews.

Deep-space radar in Australia begins tracking satellites for AUKUS partners

The facility is the first location in the Deep-Space Advanced Radar Capability (DARC) network

The post Deep-space radar in Australia begins tracking satellites for AUKUS partners appeared first on SpaceNews.

Todd Master on Umbra’s evolution

Todd Master

In this episode of Space Minds, host Mike Gruss speaks with Todd Master, Chief Operating Officer of Umbra, about the evolving small satellite market, supply chain challenges, and the company’s expansion from SAR data services into satellite components.

The post Todd Master on Umbra’s evolution appeared first on SpaceNews.

Pale Blue teams up with Mitsubishi Electric to advance water propulsion

Japanese water propulsion startup Pale Blue is exploring jointly developing systems with Mitsubishi Electric, after the satellite maker joined the University of Tokyo spin-off’s $10 million Series C funding round.

The post Pale Blue teams up with Mitsubishi Electric to advance water propulsion appeared first on SpaceNews.

ULA’s Vulcan Centaur launches first national security mission

The launch of USSF-106 marks the Defense Department’s formal shift to flying national security payloads exclusively on domestic rockets powered by U.S.-made engines

The post ULA’s Vulcan Centaur launches first national security mission appeared first on SpaceNews.

Mission Control offers in-orbit testbed for AI models

SALT LAKE CITY – Mission Control Space Services is inviting organizations to test machine-learning models on the Canadian startup’s Persistence mission launched in June. “Whether you’re from a for-profit company, nonprofit or even a school, we think that the need for autonomy to meet the requirements of an increasingly complex space environment is here to […]

The post Mission Control offers in-orbit testbed for AI models appeared first on SpaceNews.

Wednesday: MBA Mortgage Applications

Mortgage Rates Note: Mortgage rates are from MortgageNewsDaily.com and are for top tier scenarios.

Wednesday:
• At 7:00 AM ET, The Mortgage Bankers Association (MBA) will release the results for the mortgage purchase applications index.

Chrysalis: Designing a Generation Ship

Chrysalis: Designing a Generation Ship

If you want to explore the history of generation ships in science fiction, you might start with a story by Don Wilcox. Writing in 1940 for Amazing Stories, Wilcox conceived a slick plot device in his “The Voyage that Lasted 600 Years,” a single individual who comes out of hibernation once every century to see how the rest of the initial crew of 33 is handling their job of keeping the species going. Only room for one hibernation chamber, and this means our man becomes a window into social change aboard the craft. The breakdown he witnesses forces him into drastic action to save the mission.

In a plot twist that anticipates A. E. van Vogt’s far superior “Far Centaurus,” Wilcox has his ragged band finally arrive after many generations at destination, only to find that a faster technology has long ago planted a colony there. Granted, Konstantin Tsiolkovsky had written about generation ships before Wilcox, and in a far more learned way. Fictional precedents like Laurence Manning’s “The Living Galaxy” (Wonder Stories, 1934) and Olaf Stapledon’s Star Maker (1937) imagined entire worlds as stellar wanderers, but we can give Wilcox a nod for getting the concept of generations living and dying aboard a constructed craft in front of the public. Heinlein’s “Universe” wouldn’t appear until 1941, and the generation ship was soon to become a science fiction trope.

We can hope that recent winners of the generation ship contest for Project Hyperion have produced designs that avoid the decadence and forgetfulness that accompany so many SF depictions. We do, after all, want a crew to reach destination aware of their history and eager to add to the store of human knowledge. And we have some good people working these issues, scientists such as Andreas Hein, who has been plucky enough to have led Project Hyperion since 2011. Working with the Initiative for Interstellar Studies, Hyperion has announced a contest winner that leverages current technologies and speculates in the best science fiction tradition about how they can be extended.

Hein is an energetic visionary, a man who understands that imaginative forays can help us define key issues and sketch out solutions. The winning design is reminiscent of the kind of space habitats Gerard O’Neill advocated, a 58-kilometer multi-layered cylinder dubbed Chrysalis that offers space enough for Earth-like amenities such as grasslands and parks, art galleries and libraries. The notion includes animals, though only as a token of biodiversity in a culinary scene where vegetarianism is the order of the day.

Interstellar Necessities

What intrigues me about the Chrysalis design is its emphasis on cultural as well as physical survival in a society utterly closed off for centuries. Thus Chrysalis offers habitable conditions for 1,000 people, plus or minus 500, with care taken to ensure the handing off of experience and knowledge to future generations, critical both for societal health and for the maintenance of the ship’s own technologies. This presumes, after all, the kind of closed-loop life support we have yet to prove we can create here on Earth (more on that in a minute). Gravity is provided through rotation of the craft.

Chrysalis is designed around a journey to Proxima Centauri, with the goal of entering orbit around Proxima b in some 400 years. And here we hit an immediate caveat. Absent any practical means of propelling something of this magnitude to another star at present (much less of building it in the first place), the generation ship designers have no choice but to fall back on extrapolation. As in the tradition of hard science fiction, the idea is to stick rigorously within the realm of known physics while speculating on technologies that could one day prove feasible. This is not intended as a criticism; it’s just a reminder of how speculative the Chrysalis design is, given that I keep seeing that 400-year figure mentioned in press coverage of the contest. We might well have said 600. Or 4,000. Or 40,000.

Image: Chrysalis, the Project Hyperion winner. Credit: Project Hyperion/i4IS.

Like the British Interplanetary Society’s Daedalus starship, Chrysalis is envisioned as using deuterium and helium-3 to power its fusion engines, with onboard power also supplied by fusion generators within the ship. The goal is a cruise speed of 0.01 c, with 0.1 g of acceleration during the acceleration and deceleration phases. As to cruise, we learn this about the fusion power sources that will prove crucial:

All Chrysalis power generators consist of toroidal nuclear fusion reactors housed in the hull frame structure and the habitat axial frame structure separating the various stages. The multiple redundancy of the generators for each shell and each stage guarantees a high tolerance to failure in the event of the failure of one or more reactors. The D and He3 liquid propellant is contained in the propellant tank units located in the forward and after interface propellant bays of the habitat module…

Inside Chrysalis

What would it be like to live aboard a generation starship? The Chrysalis report is stuffed with images and ideas. I like the concept of structures designed around capturing what the team calls ‘generational memories.’ These appear to be tall, massive cylinders designed around what can only be called the aesthetics of worldship travel. Thus:

Each treelike structure hosts multi-story and multi-purpose environments [such] as halls, meeting rooms, and other kinds of infrastructure used by all the inhabitants as collective spaces. There are enough of these public environments to have redundant spaces and also to allow each generation to leave a mark on creation (paintings, sculptures, decorations, etc) for future generations…

The Chrysalis slide show makes it tricky to capture the extensive interior design in a blog format like this, but I advocate paging through it so you can blow the imagery up for a closer look at the included text. As with some of the O’Neill concepts, there is an almost idyllic feel to some of these vistas. Chrysalis is divided into five sections, and within each section there are levels that rotate to provide artificial gravity. The report refers to Chrysalis as a ‘biome ark,’ saying that within each stage there are two shells for dedicated biomes and one for agricultural food production.

Here, of course, we run into a key problem (and readers of Kim Stanley Robinson’s novel Aurora (2015) certainly get a taste of this conundrum). Let me quote the Chrysalis report, which describes ‘controlled ecological bio-regenerative life support systems (CEBLSS)’:

Through a controlled ecological BLSS all chemicals are recycled and reused in a closed loop ecosystem together with a circular bio-economy system in which all organic wastes from the living environments are reintroduced and composted in the agricultural soils.

The acronym nudges the idea into credibility, for we tend to use acronyms on things we’ve pinned down and specified. But the fact is that closed-loop life support is as big a problem as propulsion when it comes to crafting a ship made to sustain human beings for perhaps thousands of years. The Soviet BIOS-1 and subsequent BIOS projects made extensive experiments with human crews, succeeding with full closure for up to 180 days in one run at Krasnoyarsk, while in the U.S., Biosphere 2 ran into serious problems in CO2 and food production. As far as I know, the Chinese Yuegong-1 experiments produced a solid year of closed ecological life support, although I haven’t been able to verify whether this system was 100 percent closed.

Daily Life Between the Stars

So I think we’re making progress, and the Chrysalis report certainly lays out how we might put closed-loop life support to work on the millennial scale. But all this does make me reflect on the fact that we’ve spent most of our energies in interstellar studies trying to work out propulsion, when we’re still in the early days when it comes to onboard ecologies, no matter how beautifully designed. In the same way, we know how to get a payload to Mars, but how to get a healthy crew to the Red Planet and back is still opaque. We need a dedicated orbital facility studying both near and long-term human physiology in space.

The Chrysalis living spaces are made to order as science fiction settings. Interior walls can be functional screens producing panoramic views from Earth environments to overcome the spatial (and psychiatric) limitations of the craft. The inhabitants are given the capability of continually engineering their own living spaces through customizable 3D printing technologies so that the starship itself can be seen as evolving as the crew generations play out their lives. Individuals are provided with parks and gardens to enhance privacy, no small consideration in such a ship. The authors’ slide show goes into considerable detail on ecology and sustainability, social organization and mental health.

In a lovely touch, the team envisions a ‘Cosmos Dome,’ a giant glassy structure where the plenary council for the mission would transact its business. One gets a goose bump or two here, reminiscent as all this is of, say, the control room in Heinlein’s Orphans of the Sky. Burst in there and you suddenly are reminded of just where you are, with Sol behind and Alpha Centauri ahead.

How exactly to select and train a crew, or maybe I should say ‘initial passenger list,’ for such a mission? The Hyperion team’s forays into sociology are curious and almost seem totalitarian. Consider their Antarctic strategy: Three or four generations of crew will live in experimental biospheres in Antarctica…

…to select and monitor all the characteristics that an interstellar population should have. In addition, the creation of a strong group identity and an almost tribal sense of cooperation among the generations of inhabitants is intended to enhance the inter-generational cooperative attitude of the future Chrysalis starship population.

If I’m reading this correctly, it presupposes people who are willing to consign their entire lives to living in Antarctica so that their descendants several generations along can get a berth on Chrysalis. That’s a pretty tough sell, but it emphasizes how critical the suppression of conflict in a tiny population can be. I’m reminded of John Brunner’s “Lungfish,” which ran in the British SF magazine Science Fantasy in 1957 (thanks to Elizabeth Stanway, whose “Journey of (more than) a Lifetime” covers generation ship fictional history well). Here the descendants have no interest at all in life on a planet. As Brunner says:

These had been children like any other children: noisy, inquisitive, foolhardy, disobedient…. And yet they had grown up into these frighteningly self-reliant people who could run the ship better than the earthborn any time they put their minds to it, and still refused to take the initiative.

Definitely an outcome to be avoided!

Language and Stability

The Chrysalis team describes their crew’s mental stability as being enhanced by many reminders of their home:

Chrysalings will also be able to take walks within the different terrestrial biomes of Shell 1 to be in contact with natural elements and plants of the terrestrial biosphere. In Shell 2 there will be opportunities to do concerts, experience theater activities, access ancient Earth materials (books, art objects, etc.), make crafts and other handmade hobby activities. Shell 2 is the real beating heart of the society, where people come together and can freely co-create new cultures and ideas. Thanks to the use of recyclable materials with which the buildings were constructed, residents can also decide to recreate new architectural forms with different shapes and spaces more suitable to their cultural style.

I think the linguistic notion here is quite a reach, for the team says that to avoid language problems, everyone on board the spacecraft will speak a common initial artificial language “used and improved by the Antarctic generations in order to render it a natural language.” And a nod to Star Trek’s holodeck:

The inhabitants may also occasionally decide to meet in simulated metaverses through a deep integration system for cyberspace…to transcend the physical barriers of the starship and experience through their own twin-avatar new worlds or simulations of life on Earth.

Image: The people behind Chrysalis. Left to right: Giacomo Infelise (architect/designer), Veronica Magli (economist/social innovator), Guido Sbrogio (astrophysicist), Nevenka Martinello (environmental engineer/artist), Federica Chiara Serpe (psychologist). Team Chrysalis.

Anyone developing a science fiction story involving generation ships will want to work through the Chrysalis slide show, as the authors leave few aspects of such a journey untouched. I’ve simply been cherry-picking items that caught my eye out of this extensively developed presentation. If we ever become capable of sending humans and not just instruments to nearby stars, we’ll have to have goals and aspirations firmly fixed, and compelling reasons for sending out an expedition that will have no chance of ever returning. Just defining those issues alone is subject for investigations scientific, medical, biological and philosophical, not to mention the intricate social issues that humans pose in closed environments. Chrysalis pushes the discussion into high relief. Nice work!

‘Unusual Agreements’

Amrith Ramkumar and Robbie Whelan, reporting for The Wall Street Journal (gift link):

Nvidia and Advanced Micro Devices have agreed to give the Trump administration a portion of the sales from their artificial-intelligence chips to China, unusual agreements that deepen their relationships with the U.S. government.

The Trump administration will receive 15% of the sales as part of a deal to approve exports of Nvidia’s H20 AI chip to China, according to people familiar with the matter. That could amount to billions of dollars given demand for the H20 chips and is the latest example of the White House employing novel tactics to raise revenue. The administration has reached the same agreement with AMD for its MI308 chip, the people said. Details of the arrangements and the financial structures are still being worked out.

Tripp Mickle, reporting on the same story for The New York Times (also a gift link):

Nvidia and Advanced Micro Devices are expected to pay the United States 15 percent of the money they take in from selling artificial intelligence chips to China, as part of a highly unusual financial agreement with the Trump administration.

On Wednesday, Jensen Huang, Nvidia’s chief executive, met with President Trump at the White House and agreed to give the federal government its 15 percent cut, essentially making the federal government a partner in Nvidia’s business in China, said the people familiar with the deal. The Commerce Department began granting licenses for A.I. chip sales two days later, these people said. [...]

There are few precedents for the Commerce Department agreeing to grant licenses for exports in exchange for a share of revenue. But the unorthodox payments are consistent with Mr. Trump’s increasingly interventionist role in international business deals involving American companies. In June, the administration approved investment by Nippon Steel, a Japanese company, in U.S. Steel in a deal that included a so-called golden share in the company, a rarely used practice where the government takes a stake in a business.

“Unusual agreements” is quite the euphemism for a shakedown. US companies pay the Treasury a share of their revenue all the time, of course. That’s called taxation. But taxes are laws, written by Congress. There’s no tax here. Congress has played zero role whatsoever in these deals.

CNN, today:

President Donald Trump defended a deal he struck with Nvidia CEO Jensen Huang to allow the sale of certain semiconductor chips to China in exchange for the company giving the U.S. government 15 percent of the revenue.

“I said, ‘I want 20 percent if I’m going to approve this for you,’” Trump told reporters Monday during a White House press conference. “For the country, for our country. I don’t want it myself. … And he said, ‘Would you make it 15?’ So we negotiate a little deal.”

The tell here, revealing just how fucked up this whole thing is getting, is that Trump felt the need to say “For the country, for our country. I don’t want it myself.”

 ★ 

Perplexity Made an Offer to Buy TikTok — Well, Half of TikTok — Back in January

Haleluya Hadero and Christopher Rugaber, reporting for the AP back on January 26, six days into Trump 2.0:

Perplexity AI has presented a new proposal to TikTok’s parent company that would allow the U.S. government to own up to 50% of a new entity that merges Perplexity with TikTok’s U.S. business, according to a person familiar with the matter. The proposal, submitted last week, is a revision of a prior plan the artificial intelligence startup had presented to TikTok’s parent ByteDance on Jan. 18, a day before the law that bans TikTok went into effect.

They should have added a similar provision to their Chrome offer sheet today. Give 25% to the US Treasury and another 25% to Trump’s future presidential “library”.

 ★ 

Perplexity Jumps the Shark, Makes Clownish $34.5 Billion Stunt Offer to Buy Chrome From Google

Katherine Blunt, reporting for The Wall Street Journal (main link is a paywall-puncturing gift link; also on News+):

Artificial-intelligence startup Perplexity on Tuesday offered to purchase Google’s Chrome browser for $34.5 billion as it works to challenge the tech giant’s web-search dominance.

Perplexity’s offer is significantly more than its own valuation, which is estimated at $18 billion. The company told The Wall Street Journal that several investors including large venture-capital funds had agreed to back the transaction in full. Estimates of Chrome’s enterprise value vary widely but recent ones have ranged from $20 billion to $50 billion.

Perplexity apparently also told the Journal that the story was theirs exclusively, despite the fact that they revealed the stunt offer to Bloomberg as well. Prefixing a headline with “Exclusive:” is irresistible catnip to business/investor-oriented publications. The Journal, at least, had the good sense to raise a skeptical eyebrow at the premise in its headline (“Perplexity Makes Longshot $34.5 Billion Offer for Chrome”1). Bloomberg, not so much (“AI Startup Perplexity Makes $34.5 Billion Bid for Google’s Chrome Browser”).

The whole premise is ludicrous. Start with the fact that Perplexity is only valued at $18 billion. Add to that the fact that Perplexity is almost certainly overvalued at that price. I don’t know anyone who uses Perplexity, and Perplexity doesn’t develop or run their own LLMs.

But all of this stuff about Google possibly being forced (as a remedy in the US v. Google antitrust case they lost) to sell Chrome doesn’t consider that Chrome, on its own, divested from Google and thus disconnected from Chrome users’ Google accounts, is likely worth little to nothing. I wrote about this at length back in April. Chrome is tremendously valuable to Google. It has very little value on its own. Chrome generates no revenue on its own — it simply serves as an outlet for Google to show its own lucrative search ads without paying traffic acquisition fees to a browser owned by someone else (like, say, Apple or Mozilla or Samsung). Chromium is open source. Microsoft Edge is forked from it. Brave is forked from it. Opera (remember them?) forked from it over a decade ago. Perplexity (or any actually credible would-be buyer of Chrome) could just start their own fork.

There are two things Chrome has that other Chromium browsers don’t: billions of users, and integration with Google account services. Chrome has those billions of users because of the Google account integration. With Chrome severed from Google, its users would lose those essential features — possibly including Google Search — and they’d likely begin switching away in droves.

I wrote just last week that Perplexity looks like a scam. Someone is spreading rumors that Apple is sniffing around at buying them, despite the fact that the two companies are an absurdly bad cultural match. I think what’s happening is that the LLM chatbot field is maturing (exemplified by OpenAI’s launch of GPT-5 last week), and Perplexity CEO Aravind Srinivas is getting increasingly desperate. Desperate moves to seek an edge in product, and desperate moves to seek publicity that Perplexity’s product can’t garner on its meager merits.


  1. Color me mildly surprised that the Journal’s style guide spells longshot closed up. ↩︎

 ★ 

‘The Quid Pro Quo Arrangement Is Unprecedented’

Demetri Sevastopulo and Michael Acton, reporting for the Financial Times:

Nvidia and AMD have agreed to give the US government 15 per cent of the revenues from chip sales in China, as part of an unusual arrangement with the Trump administration to obtain export licences for the semiconductors. [...]

The quid pro quo arrangement is unprecedented. According to export control experts, no US company has ever agreed to pay a portion of their revenues to obtain export licences.

But the deal fits a pattern in the Trump administration where the president urges companies to take measures, such as domestic investments, for example, to prevent the imposition of tariffs in an effort to bring in jobs and revenue to America.

This FT report starts out on shaky ground, using the same “unusual agreement” euphemism as the WSJ and NYT reports, but they soon found a little backbone with “The quid pro quo arrangement is unprecedented.”

This is not merely unusual. It is unprecedented. And, seemingly, plainly unconstitutional.

Quipped a friend: “Nvidia and AMD’s general counsels must be wondering how much of this money they can eventually get back, if we ever reverse the banana republic index.”

 ★ 

[Sponsor] Dekáf Coffee Roasters

Forget everything you think you know about decaf.

Nine single origins. Six signature blends. Four Mizudashi cold brews. All micro-lot and top-rated coffees shipped to you within 24 hours of roasting. No shortcuts. No crash. This is coffee at its most refined, just without the caffeine.

20% off for DF readers with code “DF” at dekaf.com.

We’re betting you’ll never look at decaf the same way again. But that’s kind of the point.

 ★ 

Predator and Rake-Stomp, The Curious Folkways of Trump 2.0

After Friday and Monday’s Backchannels, full of the ominous progress of the Trump White House, we can see again today the dual nature of Trumpism, both predatory and absurd, methodical and feckless. The key to grappling with Trumpism is recognizing that both are simultaneously true and neither reality invalidates the other. Trump’s federalization of the DC Metro police is a case in point. The President can take control of the DC police for up to 48 hours. With notification of relevant committees of Congress, the president can maintain that control for an additional 30 days. After that he requires Congress’s authorization to continue to control the DC police.

Can Trump clean up the DC crime hellscape in 32 days? It seems unlikely. Will Congress allow him to continue past 32 days? Possibly. But by no means certainly. Trump’s margins remain razor thin, and it’s the kind of issue where at least a few Republicans might refuse. Will the President remain focused on becoming the DC police chief and mayor, or will the whole effort go by the wayside? Was any of it more than an excuse for a news-cycle-driving press conference?

Meanwhile, The Washington Post reports that the Pentagon is in the planning stages for a “rapid reaction force” of state National Guard troops in a constant state of readiness, available to deploy in as little as one hour to whatever American city happens to be hellscaping at the given moment. The Post doesn’t publish the report on the plan, which it characterizes as “pre-decisional” (a very Pentagon diction). But they relay elements of it. From those tidbits it appears to be written by someone whose subtext might best be summarized as “this is probably not a good idea.” It notes that it’s probably bad for retention and morale of the state National Guards, since most Guardsmen don’t relish being deployed against American civilians. They also don’t like the constant pace of deployments that goes with having hundreds locked down on military bases in Arizona and Alabama for 90 days at a time, which is required in order to be able to deploy to hellscaping cities in as little as one hour.

Then there’s the expense. Such a rapid mandated turnaround time for deployment requires vehicles, planes and all the hardware required for such deployments to be in a constant state of inspection and readiness. That costs a lot of money. (The report then notes that it might be cheaper and wiser, if admittedly much less cool, to book flights on American and Southwest airlines. Low energy, yes. But that’s what the report appears to recommend.)

Then there’s what the report refers to obliquely as the political “friction” created by the whole effort as well as potential legal difficulties involved in deploying Guard from one state to another absent permission, what we might call the backdrop “what the fuck” factor that haunts the whole endeavor.

Of course the whole reason you’re using the National Guard, a part-time force, to do this deeply full-time mission (basically a domestic 82nd Airborne) is because there is a mountain of law and regulation against deploying the regular military against U.S. civilians or on American soil. You can do it, to a degree, under certain conditions and restrictions. But it’s much more complicated and frequently runs into cases and uses that are plainly illegal.

This half-cocked workaround captures most of what is happening in round two of Trumpism. They are actively trying to subvert the Constitution and the federal state. They routinely try to do things they are not allowed to do. Getting as close as possible to doing so usually involves both breaking laws and doing things in highly inefficient ways, even ways that might amount to not doing them at all save for a high-octane rollout. Both realities are true simultaneously. Neither invalidates the other.

These things are feckless, incomplete, much less than they seem in many cases. And yet they come at a sufficient pace that even incomplete they leave small assertions of degenerate illegal power, along with pathways and mechanisms to assert it. And these new pathways, new organizations and mechanics build on each other over time.

So Close

Today marks four weeks since we kicked off this year’s annual TPM Journalism Fund drive. You can help us meet our goal today. We’re currently at $483,504, just under $17,000 short of our goal of raising $500,000. Want to help push us over the top? Just click right here.

Cassette 1.0

New app from Devin Davies, developer of the ADA-winning Crouton:

Cassette is an app for iPhone and iPad that helps you enjoy your home videos like the good old days. With a retro interface Cassette auto plays through videos from your devices’ Photo Library. Mirror your device to a nearby Apple TV for a true kick back experience.

The VHS retro pastiche is fun, but don’t get the wrong impression from it. Cassette is not an app that makes your modern videos look like they were shot on an old camcorder. (Rarevision VHS is a fun app for that, if that’s what you’re looking for.)

Cassette’s tape-playing pastiche is more about putting you in the right mindset. Instead of watching one video from your Photos library, or two or three, it mimics popping in a tape labelled, say, “2014” and sitting back and watching an entire hour of videos from a decade ago. It just launched today but I’ve been using the beta via TestFlight for a few weeks, and you really have to try it to see how effective it is. Davies made a nice video teaser to pitch the app. It’s good, you should watch it.

The way I’d pitch it is that Cassette is to the videos in your Photos library what the Kodak Carousel was to your 35mm film slides back in the 1960s. Nostalgia. It’s delicate, but potent.

 ★ 

Cusp

Our drive is currently at $499,844. The assist is in motion. Come in for the slam dunk. Click here.

Tuesday 12 August 1662

Up early at my office, and I find all people beginning to come to me. Among others Mr. Deane, the Assistant of Woolwich, who I find will discover to me the whole abuse that his Majesty suffers in the measuring of timber, of which I shall be glad. He promises me also a modell of a ship, which will please me exceedingly, for I do want one of my own. By and by we sat, and among other things Sir W. Batten and I had a difference about his clerk’s making a warrant for a Maister, which I would not suffer, but got another signed, which he desires may be referred to a full board, and I am willing to it. But though I did get another signed of my own clerk’s, yet I will give it to his clerk, because I would not be judged unkind, and though I will stand upon my privilege. At noon home and to dinner alone, and so to the office again.

Where busy all the afternoon till 10 o’clock at night, and so to supper and to bed, my mind being a little disquieted about Sir W. Batten’s dispute to-day, though this afternoon I did speak with his man Norman at last, and told him the reason of my claim.

Read the annotations

Quoting Nick Turley

I think there's been a lot of decisions over time that proved pretty consequential, but we made them very quickly as we have to. [...]

[On pricing] I had this kind of panic attack because we really needed to launch subscriptions because at the time we were taking the product down all the time. [...]

So what I did do is ship a Google Form to Discord with the four questions you're supposed to ask on how to price something.

But we got with the $20. We were debating something slightly higher at the time. I often wonder what would have happened because so many other companies ended up copying the $20 price point, so did we erase a bunch of market cap by pricing it this way?

Nick Turley, Head of ChatGPT, interviewed by Lenny Rachitsky

Tags: chatgpt, discord, generative-ai, openai, llm-pricing, ai, llms

LLM 0.27, the annotated release notes: GPT-5 and improved tool calling

I shipped LLM 0.27 today (followed by a 0.27.1 with minor bug fixes), adding support for the new GPT-5 family of models from OpenAI plus a flurry of improvements to the tool calling features introduced in LLM 0.26. Here are the annotated release notes.

GPT-5

  • New models: gpt-5, gpt-5-mini and gpt-5-nano. #1229

I would have liked to get these out sooner, but LLM had accumulated quite a lot of other changes since the last release and I wanted to use GPT-5 as an excuse to wrap all of those up and get them out there.

These models work much the same as other OpenAI models, but they have a new reasoning_effort option of minimal. You can try that out like this:

llm -m gpt-5 'A letter advocating for cozy boxes for pelicans in Half Moon Bay harbor' -o reasoning_effort minimal

Setting "minimal" almost completely eliminates the "thinking" time for the model, causing it to behave more like GPT-4o.

Here's the letter it wrote me, at a cost of 20 input tokens and 706 output tokens = $0.007085, which is 0.7085 cents.
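
As a sanity check on that number (assuming GPT-5's list prices of $1.25 per million input tokens and $10 per million output tokens, which are my assumption rather than something quoted in these notes), the arithmetic works out:

input_tokens, output_tokens = 20, 706
# Assumed list prices: $1.25 per million input tokens, $10 per million output tokens
cost = input_tokens * 1.25 / 1_000_000 + output_tokens * 10 / 1_000_000
print(f"${cost:.6f}")  # $0.007085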

You can set the default model to GPT-5-mini (since it's a bit cheaper) like this:

llm models default gpt-5-mini

Tools in templates

I think this is the most important feature in the new release.

I added LLM's tool calling features in LLM 0.26. You can call them from the Python API but you can also call them from the command-line like this:

llm -T llm_version -T llm_time 'Tell the time, then show the version'

Here's the output of llm logs -c after running that command.

This example shows that you have to explicitly list all of the tools you would like to expose to the model, using the -T/--tool option one or more times.
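
The Python API route mentioned above looks roughly like this - a minimal sketch, assuming the model.chain() interface introduced in LLM 0.26, with a hypothetical multiply function standing in as the tool:

import llm

def multiply(a: int, b: int) -> int:
    "Multiply two integers."  # the docstring describes the tool to the model
    return a * b

model = llm.get_model("gpt-5")
# chain() lets the model request the tool, runs it, and feeds the result back
response = model.chain("What is 1234 * 4346?", tools=[multiply])
print(response.text())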

In LLM 0.27 you can now save these tool collections to a template. Let's try that now:

llm -T llm_version -T llm_time -m gpt-5 --save mytools

Now mytools is a template that bundles those two tools and sets the default model to GPT-5. We can run it like this:

llm -t mytools 'Time then version'

Let's do something more fun. My blog has a Datasette mirror which I can run queries against. I'm going to use the llm-tools-datasette plugin to turn that into a tool-driven template. This plugin uses a "toolbox", which looks a bit like a class. Those are described here.

llm install llm-tools-datasette

# Now create that template
llm --tool 'Datasette("https://datasette.simonwillison.net/simonwillisonblog")' \
  -m gpt-5 -s 'Use Datasette tools to answer questions' --save blog

Now I can ask questions of my database like this:

llm -t blog 'top ten tags by number of entries' --td

The --td option there stands for --tools-debug - it means we can see all tool calls as they are run.

Here's the output of the above:

Top 10 tags by number of entries (excluding drafts):
- quora — 1003
- projects — 265
- datasette — 238
- python — 213
- ai — 200
- llms — 200
- generative-ai — 197
- weeknotes — 193
- web-development — 166
- startups — 157

Full transcript with tool traces here.

I'm really excited about the ability to store configured tools in templates like this.

I want to build a tool that can render SVG to an image, then return that image so the model can see what it has drawn. For reasons.

  • New methods on the Toolbox class: .add_tool(), .prepare() and .prepare_async(), described in Dynamic toolboxes. #1111

I added these because there's a lot of interest in an MCP plugin for Datasette. Part of the challenge with MCP is that the user provides the URL to a server but we then need to introspect that server and dynamically add the tools we have discovered there. The new .add_tool() method can do that, and the .prepare() and .prepare_async() methods give us a reliable way to run some discovery code outside of the class constructor, allowing it to make asynchronous calls if necessary.
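
Based on the method names above, here's a rough sketch of what a dynamic toolbox could look like - assuming Toolbox is importable as llm.Toolbox and that .add_tool() accepts plain Python functions; the discovery step is a stand-in for real MCP introspection:

import datetime
import llm

def discover_remote_tools(url):
    # Stand-in for real introspection: a real plugin would call the server
    # at `url` and build Python functions from the tool definitions it returns.
    def server_time() -> str:
        "Return the current time (toy example)."
        return datetime.datetime.now().isoformat()
    return [server_time]

class RemoteTools(llm.Toolbox):
    def __init__(self, url):
        self.url = url

    def prepare(self):
        # Discovery runs here rather than in the constructor, via the new hook
        for fn in discover_remote_tools(self.url):
            self.add_tool(fn)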

  • New model.conversation(before_call=x, after_call=y) attributes for registering callback functions to run before and after tool calls. See tool debugging hooks for details. #1088
  • Raising llm.CancelToolCall now only cancels the current tool call, passing an error back to the model and allowing it to continue. #1148

These hooks are useful for implementing more complex tool calling at the Python API layer. In addition to debugging and logging they allow Python code to intercept tool calls and cancel or delay them based on what they are trying to do.
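
Here's a minimal sketch of wiring these up. I'm deliberately not guessing at the exact callback signatures, so the callbacks just print whatever they are given - see the tool debugging hooks documentation for the real arguments:

import llm

def before_call(*args, **kwargs):
    # A callback can raise llm.CancelToolCall here to veto the pending call
    print("about to call a tool:", args, kwargs)

def after_call(*args, **kwargs):
    print("tool call finished:", args, kwargs)

model = llm.get_model("gpt-5")
conversation = model.conversation(before_call=before_call, after_call=after_call)
# Tool calls made during prompts sent via this conversation now hit both hooks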

  • Some model providers can serve different models from the same configured URL - llm-llama-server for example. Plugins for these providers can now record the resolved model ID of the model that was used to the LLM logs using the response.set_resolved_model(model_id) method. #1117

This solves a frustration I've had for a while where some of my plugins log the same model ID for requests that were processed by a bunch of different models under the hood - making my logs less valuable. The new mechanism now allows plugins to record a more accurate model ID for a prompt, should it differ from the model ID that was requested.

  • New -l/--latest option for llm logs -q searchterm for searching logs ordered by date (most recent first) instead of the default relevance search. #1177

My personal log database has grown to over 8,000 entries now, and running full-text search queries against it often returned results from last year that were no longer relevant to me. Being able to find the latest prompt matching "pelican svg" is much more useful.
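
So something like this should now surface the most recent matching prompts first, rather than ranking purely by relevance:

llm logs -q 'pelican svg' --latest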

Everything else was bug fixes and documentation improvements:

Bug fixes and documentation

  • The register_embedding_models hook is now documented. #1049
  • Show visible stack trace for llm templates show invalid-template-name. #1053
  • Handle invalid tool names more gracefully in llm chat. #1104
  • Add a Tool plugins section to the plugin directory. #1110
  • Error on register(Klass) if the passed class is not a subclass of Toolbox. #1114
  • Add -h for --help for all llm CLI commands. #1134
  • Add missing dataclasses to advanced model plugins docs. #1137
  • Fixed a bug where llm logs -T llm_version "version" --async incorrectly recorded just one single log entry when it should have recorded two. #1150
  • All extra OpenAI model keys in extra-openai-models.yaml are now documented. #1228

Tags: projects, python, ai, datasette, annotated-release-notes, generative-ai, llms, llm, llm-tool-use, gpt-5

Trump Plays the Carnage Card

First, some personal news. For those who may not know, I received a great honor this weekend:

I have now added “Deranged BUM” to my Substack profile.

But enough about me. Let’s move on to the subject of today’s post, Trump’s takeover of Washington, DC, where he has seized control of the city’s police force and sent in the National Guard.

Actually, there’s some relationship between Trump’s rage-tweeting about yours truly and his move against the nation’s capital.

Ever since that latest weak jobs report, Trump has been frantically trying to convince the American public that the economy is doing great. He is failing, and predictably so. Experience shows that trying to talk up the economy when people don’t perceive it as good never works, even if the data are favorable. It’s even less likely to work when the data actually aren’t good, and calling people who point out economic weakness BUMs isn’t likely to help.

On the other hand, telling people things are bad even when they’re actually good can work. This is sometimes true when it comes to the economy. It’s definitely true when we’re talking about crime.

In his press conference announcing that he was seizing power in the District of Columbia, Trump declared that

Our capital city has been overtaken by violent gangs and bloodthirsty criminals, roving mobs of wild youth, drugged out maniacs and homeless people.

He forgot to mention deranged bums. Anyway, the media were in general pretty good at pointing out that crime in DC has in fact been falling rapidly. According to the U.S. attorney’s office, violent crime is at a 30-year low. The invaluable Jeff Asher has a chart:

Source: Jeff Asher

As I understand it, there are some technical data issues for 2022. But the basic picture is that DC is safer than it has been since the 1960s. The same is true for the nation as a whole:

But will Trump be universally ridiculed for his absurd claims? Will people understand that what we’re seeing, aside from an attempt to seize even more power, is an attempt to change the subject from the weakening economy and the Epstein affair?

I’m not sure.

Residents of DC will surely notice that Trump’s description of a violence-ridden dystopia bears no resemblance to the city they actually inhabit. But we know that crime is an issue on which people tend to believe that things are getting worse even when they are getting markedly better.

As you can see from the chart above, there was a truly epic decline in crime from the early 1990s to the mid-2010s. Yet throughout that period, according to Gallup, a large majority of Americans said that crime was getting worse. What’s going on?

One possible answer is that there are lies, damned lies and statistics. Maybe official crime numbers are, as Trump would say, RIGGED — although that would be really hard to do with murders, which are kind of hard either to fabricate or to conceal, and have fallen even more than overall crime. Or maybe people’s lived experience just doesn’t match what the crime data say.

But I don’t buy that explanation, among other things because I’m a New Yorker. Much of the nation sees the Big Apple as a dystopian hellhole, but anyone who actually lives there can tell you that the city feels quite safe — certainly safer than at any earlier point in my adult life.

Or if you don’t consider me a reliable narrator, look at actual behavior. According to Trump officials, people are afraid to ride the subway because they’re terrified of crime. But actual subway ridership has been soaring since Covid. It’s still somewhat depressed on weekdays, in part because remote work means fewer commuters, but weekend ridership — which mostly means people who could choose not to take the subway if they were terrified of crime — is rising fast:

But if the data showing falling crime are accurate, and people aren’t behaving as if they personally are terrified of crime, how can public perceptions about crime trends be so negative?

Part of the answer is the old line “if it bleeds it leads.” There are occasional acts of violence on the New York subway, and they make the news. The system’s overall safety — taking the subway is much, much safer than driving — doesn’t.

But I’d also argue that a large part of the answer is that many Americans believe that crime is running rampant — just not where they happen to live. Fox News tells suburban and small-town Americans that New York and Los Angeles are crime-ridden hellscapes, and they believe it.

According to Gallup, last year 56 percent of Americans believed that crime in the United States was an extremely or very serious problem — but only 14 percent said it was an extremely or very serious problem “in the area where you live.”

Returning for a second to my home subject, we saw something quite similar in assessments of the economy during the Biden years, when Americans had a much more positive assessment of their local economies than they did of the economy as a whole:

Source: Federal Reserve

Which brings us back to Trump’s claim that he’s seizing power in DC because the city is descending into lawless chaos. Anyone who either lives there or looks at crime data knows that this is malicious nonsense. But we can’t take it for granted that the rest of the country will understand that he’s lying.

And if I may say, it’s the responsibility of the news media to make that clear. Don’t say “Trump makes contentious claims about DC crime.” Don’t say that there’s “dispute over DC crime data.” Just say that he’s lying.

MUSICAL CODA

Claude Sonnet 4 now supports 1M tokens of context

Claude Sonnet 4 now supports 1M tokens of context

Gemini and OpenAI both have million-token models, so it's good to see Anthropic catching up. This is 5x the previous 200,000-token context length limit of the various Claude Sonnet models.

Anthropic have previously made 1 million tokens available to select customers. From the Claude 3 announcement in March 2024:

The Claude 3 family of models will initially offer a 200K context window upon launch. However, all three models are capable of accepting inputs exceeding 1 million tokens and we may make this available to select customers who need enhanced processing power.

This is also the first time I've seen Anthropic use prices that vary depending on context length:

  • Prompts ≤ 200K: $3/million input, $15/million output
  • Prompts > 200K: $6/million input, $22.50/million output

Gemini have been doing this for a while: Gemini 2.5 Pro is $1.25/$10 below 200,000 tokens and $2.50/$15 above 200,000.

Here's Anthropic's full documentation on the 1m token context window. You need to send a context-1m-2025-08-07 beta header in your request to enable it.
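
Here's a minimal sketch of what that looks like from the Anthropic Python SDK - assuming the extra_headers mechanism and the claude-sonnet-4-20250514 model ID, so check the docs for the current identifiers:

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model ID for Claude Sonnet 4
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this very long document: ..."}],
    extra_headers={"anthropic-beta": "context-1m-2025-08-07"},
)
print(response.content[0].text)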

Note that this is currently restricted to "tier 4" users who have purchased at least $400 in API credits:

Long context support for Sonnet 4 is now in public beta on the Anthropic API for customers with Tier 4 and custom rate limits, with broader availability rolling out over the coming weeks.

Via @claudeai

Tags: ai, generative-ai, llms, anthropic, claude, llm-pricing, long-context

Kent's Going Home

So I’m going to have a place. Nine years after getting divorced & pushing the reset button on my finances & my emotional life, the seller accepted my offer on a house. I won’t say much about it in a vain attempt to preserve my privacy, but open a window & you can hear waves. I’ve always wanted to hear waves.

I’m pretty emotional, as you might guess. I’m …

Read more

The Coffee of Oaxaca

The Coffee Culture of Oaxaca

“Oaxaca coffee is squarely not a “modern” coffee profile— its signal isn’t loud or head-turning, but rather clear and quiet — subtly elegant. It doesn’t scream at you with experimental processing methods like anaerobic fermentation, the well-branded honey process, or even the natural process in a region where natural is not the tradition. Nor does it showcase sensational modern flavors through the now widely-planted hybrid and/or highly-selected varieties across the world. …

Oaxaca coffees are the opposite of these nouveau profiles in the best possible way. Soft-spokenly old school, they are quiet while complex enough to make you want more: a coffee pro’s coffee.…”

https://redfoxcoffeemerchants.com/truly-classic-oaxaca-coffees/

Coffee is one of my few remaining vices. A morning vice (in a North Beach café once in a while). If I have coffee after noon, I’ll be sleepless that night. But it sure perks up my mornings.

My coffee world was reborn in Oaxaca. Oaxacan coffee is incredible!

Adjacent to the large Cafiver coffee roasting factory near Orizaba

Reminds me of my discovery, maybe 10 years ago, of single malt scotch. A whole new ballgame.

Licor de café. I’ve recently quit drinking…er, actually about 90% of my drinking…and here was a treat in that 10% fun allowance. A better coffee liqueur than Kahlúa. There were little chunks of ice floating on top…

I got this kilo of very dark beans in, I believe, Xalapa (or Coatepec), where Chilón went to get coffee for his two boys. It’s robust, earthy — two qualities I like in food, wine, places — and people.

Live From California with Lloyd Kahn is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

For years, I’ve been getting Malabar Gold green espresso beans (from The Coffee Project <orders@coffeeproject.com>), and roasting them in a funky old roaster. I have a couple of months of roasted Oaxaca beans here and when they’re gone, I’ll see if I can get green beans from Oaxaca, or get more roasted beans (and keep in airtight jars).

Gonna be my coffee from now on…
Er, ahem…

Above: around Orizaba. You tap on your glass and the waiter comes by with hot milk to make what the French call café au lait.

Pastries everywhere in Oaxaca were really good.
Apple crepes with morning coffee in Texcoco

Thanks for reading Live From California with Lloyd Kahn! This post is public so feel free to share it.

Share

LLMs as Parts of Systems

LLMs as Parts of Systems

Towers of Hanoi is a boring game, anyway.

Over on the Kiro blog, I wrote a post about Kiro and the future of AI spec-driven software development, looking at where I think the space of AI-agent-powered development tools is going. In that post, I made a bit of cheeky oblique reference to a topic I think is super important. I asked Kiro to build a Towers of Hanoi game.

It's an oblique reference to Apple's The Illusion of Thinking paper, and the discourse that followed it. The question of whether LLMs can scalably play Towers of Hanoi is an interesting one theoretically and scientifically, but not the most important question. The more important one is: can systems built with LLMs play these games? By picking Towers of Hanoi in that other post, I was pointing out that the answer is clearly yes. And has been for several LLM generations.

As a system builder, I’m much more interested in what systems of LLMs and tools can do together. LLMs and code interpreters. LLMs and databases. LLMs and browsers. LLMs and SMT solvers. These systems can do things, today, that LLMs alone simply can’t, and will never be able to do. More importantly, they can do things today orders of magnitude more cheaply and quickly than LLMs can, even in the case where they can do the same things.

You know, this kind of thing:

> Generate a python snippet that counts the number of rs in a string.

def count_rs(input_string):
    return input_string.lower().count('r')

Trivial? Yes. But I've now created a system that can solve problems that this LLM can't. A better LLM can, but at about six orders of magnitude higher cost per example. Systems, fundamentally, are more than the sum of their components. A good system can do things that no component can do alone.

The trivial example is trivial, but you can imagine how that power could extend to being able to use decades of progress in algorithms. And not only count, but much more powerful things like SMT solvers, or ILP approximation, or MCMC. But, given the efficiency, even trivial things like count, sum, and grep are interesting.

A more advanced example is Amazon Bedrock’s Automated Reasoning Checks. Automated Reasoning checks use LLMs to extract the underlying rules from a set of documents, and facts from an LLM or agent response, and then uses an SMT solver to verify that the facts are logically consistent with the rules. Danilo’s blog post goes through a detailed example of what this looks like, if you’d like to understand more.

Automated Reasoning checks, like my trivial example above, combine LLMs with other methods of reasoning to create a system that’s greater than the sum of its parts. LLMs are used for what they’re good at - extracting facts and rules from the messy natural language that humans use. SMT solvers are used for what they’re great at - reasoning precisely through a set of logical steps, and providing formal justification for that reasoning. The current generation of LLMs can’t do this type of reasoning alone, but systems composed of LLMs and other tools (SMT solvers in this case) can do it.
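
To make the shape of that concrete, here's a toy sketch of the pattern (not Bedrock's actual implementation, just the general idea): the LLM's job is to turn prose into the structured rule and facts below, and an SMT solver like Z3 then checks whether the facts are consistent with the rule.

from z3 import Bool, Int, Implies, Solver, unsat

# Rule an LLM might extract from a policy document:
#   "Premium accounts get at most a 20% discount."
# Facts extracted from an agent's response:
#   "This premium account was given a 30% discount."
premium = Bool("premium")
discount = Int("discount")

solver = Solver()
solver.add(Implies(premium, discount <= 20))  # the rule
solver.add(premium, discount == 30)           # the claimed facts

# unsat means the facts contradict the rule, so the response fails the check
verdict = "consistent" if solver.check() != unsat else "contradicts the policy"
print(verdict)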

The hype and buzz around LLMs makes it easy to forget this point, but it’s a critical one. LLMs are more powerful, more dependable, more efficient, and more flexible when deployed as a component of a carefully designed system.

It’s a very exciting time to be a systems person, because we’ve been handed a new and extremely powerful component that can be used to build better systems with new capabilities. Some old ideas will go away, but the fundamental ideas won’t. They’ll be more relevant than ever.

What About The Bitter Lesson?

Is this view of LLMs as parts of systems consistent with Rich Sutton’s The Bitter Lesson? Sutton says:

The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.

and

We have to learn the bitter lesson that building in how we think we think does not work in the long run.

I’m not proposing that these systems, systems composed of LLMs and other computational and storage tools, should build in how we think we think1. The way I read Sutton’s point is not at all incompatible with the idea that there are better (more efficient, more reliable, etc) and worse (less efficient, less reliable, etc) ways to do computing. For example, Sutton doesn’t seem to be implying that generating and executing (or memoizing, retrieving, and executing) code to do a task is less good than doing it with a lot of linear algebra.

We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.

Indeed. Computing, from Python to SMT, has been a powerful tool of discovery for over eighty years. Making these tools available to the systems we build, and specifically to the AI agents we build, gives them more power and leverage. Not by encoding the way we think, but by encoding the things computer science has learned about the way the universe works.

Footnotes

  1. One could look at the heuristics that SAT solvers use to guide their searches as an encoding of how we think we think, at least to some extent. I'm not a deep expert, but I do think it's reasonable to believe that a more computational approach will allow us to discover new and more effective search heuristics for some classes of problems. The fundamental algorithms (like DPLL and CDCL) look pretty compute-maximalist to me.

Space Force officials take secrecy to new heights ahead of key rocket launch

After more than a decade of development and testing, US military officials are finally ready to entrust United Launch Alliance's Vulcan rocket to haul a batch of national security satellites into space.

An experimental military navigation satellite, also more than 10 years in the making, will ride ULA's Vulcan rocket into geosynchronous orbit more than 22,000 miles (nearly 36,000 kilometers) over the equator. There are additional payloads buttoned up inside the Vulcan rocket's nose cone, but officials from the US Space Force are mum on the details.

The Vulcan rocket is set for liftoff from Cape Canaveral Space Force Station, Florida, at 7:59 pm EDT (23:59 UTC) Tuesday. There's an 80 percent chance of favorable weather during the one-hour launch window. It will take several hours for the Vulcan rocket's Centaur upper stage to reach its destination in geosynchronous orbit. You can watch ULA's live launch webcast below.

Read full article

Comments

At least five interesting things: Cool research edition (#68)

I’ve got quite a few great podcasts for you today. One is this excellent live show that Erik Torenberg and I did with Dwarkesh Patel, in which we interview Dwarkesh about his thoughts on AI and the economy. The picture of me here is quite silly-looking, but the conversation was excellent:

I also went on Pascal-Emmanuel Gobry’s podcast to debate him about illegal immigration:

And some Japanese media folks at a company called Glasp interviewed me about AI and jobs, and about foreign direct investment in Japan!

Finally, here’s an episode of Econ 102, where Erik and I discuss Javier Milei and various other topics:

Anyway, on to the roundup!

1. It doesn’t look like AI is taking jobs yet

It’s practically conventional wisdom that AI is going to take jobs away from large numbers of humans, leaving them without anything useful to do in the economy. People are so convinced of this that they’ll jump at practically any hint in the data that allows them to believe that it’s happening. A little while ago I wrote a post about why both economists and popular commentators are getting way over their skis on this:

Anyway, Sarah Eckhardt and Nathan Goldschlag of the Economic Innovation Group have a good new report on this, which shows that as far as we can tell, AI isn’t taking jobs yet — at least, not on any measurable scale.

Eckhardt and Goldschlag start with a measure of predicted AI exposure for various jobs. These measures don’t tell you which jobs are going to be replaced by AI; instead, they just tell you which jobs currently involve more tasks that can probably be done by AI. These measures actually have a pretty good track record at predicting which workers will end up using AI.

Basically, Eckhardt and Goldschlag find no correlation — or even a negative correlation — between that measure of AI exposure and any measure of labor market distress. For example, here’s the unemployment rate of workers with varying degrees of predicted AI exposure (1 is the least exposed, 5 is the most exposed):

Source: EIG

There has been a recent rise in unemployment, but it’s concentrated among the people who are least exposed to AI, while those who are the most exposed almost all still have jobs. The same is true when we look only at recent college graduates, who have been the focus of the most concern in the media:

Source: EIG

And the same is true when we look at which workers are exiting the labor force completely:

Source: EIG

And one more interesting finding is that the most-exposed workers are actually less likely to switch to less-exposed occupations than they were before generative AI hit the market! In other words, coders and paper-pushers are not becoming plumbers to protect themselves from AI:

Source: EIG

The researchers also try using alternative measures of AI exposure, and they find pretty much the same thing.

In other words, AI job displacement just hasn’t happened yet. It may happen in the future, but so far, every time people have jumped at a particular data point to claim it’s finally happening, it has turned out to be a mirage.

2. Bernie’s bad chart

Bernie Sanders and his followers deeply believe that America’s economy is in a prolonged state of crisis — that capitalist economic policies have steadily immiserated the American public, creating a country where regular people are economically drowning even as corporate fat cats enrich themselves. Their absolute faith in this narrative often leads them to interpret economic statistics in dubious or even ridiculous ways. The latest example of this is when Bernie Sanders posted a chart of housing versus wages:

This is a pretty ridiculous chart. Why would you plot home prices on the same y-axis as weekly income? Does anyone think these two things should be even remotely close to the same size? Do we think people should be able to afford a house on a single week of income? That’s ridiculous.

A non-ridiculous way to present this data would be to divide home prices by weekly earnings. That would show us how many weeks a typical worker would need to work in order to afford a home. But actually, the “median weekly earnings” number is for full-time workers only, so instead we should use median personal income, which counts everybody. Here’s what that looks like:

In the 80s and 90s, it took about 8 years of work to afford a home. Since then, the number has climbed to about 10 years — a significant and concerning decline in affordability, but not a catastrophic one. A breakdown by the Economic Innovation Group shows that mortgages are about as affordable as they ever were, but down payments have gotten less affordable:

Source: Ben Glasner
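
If you want to reproduce the "years of income" calculation yourself, it's a one-liner once you have the two series. Here is a minimal sketch; the FRED series IDs mentioned in the comments are my assumptions, so verify them before relying on this:

```python
# Minimal sketch of the "years of income to buy a home" ratio. The FRED series
# IDs mentioned in the comments are assumptions; double-check them on FRED.
import pandas as pd

def years_of_income_to_buy(home_price: pd.Series, annual_income: pd.Series) -> pd.Series:
    """Both inputs are nominal-dollar time series with a DatetimeIndex, e.g.
    median home sales price (FRED: MSPUS, assumed) and median personal income
    (FRED: MEPAINUSA646N, assumed). Returns price / income by calendar year."""
    price_by_year = home_price.groupby(home_price.index.year).mean()
    income_by_year = annual_income.groupby(annual_income.index.year).mean()
    return (price_by_year / income_by_year).dropna()

# Usage with pandas_datareader, if installed:
# from pandas_datareader import data as pdr
# ratio = years_of_income_to_buy(pdr.DataReader("MSPUS", "fred")["MSPUS"],
#                                pdr.DataReader("MEPAINUSA646N", "fred")["MEPAINUSA646N"])
# The chart above shows this ratio going from roughly 8 to roughly 10.
```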

While the drop in housing affordability over the past half century is certainly a problem, it’s not the kind of crisis that Bernie paints it as. Using silly charts in service of alarmist narratives ultimately just weakens trust in your movement — or at least, it should.

3. Personalist dictatorships are probably bad for the economy

The three most powerful countries in the world are now all ruled by strongmen. In China, Xi Jinping has subdued all rivals, and concentrated what used to be a dispersed bureaucratic oligarchy under his own personal rule. In Russia, Putin is effectively an emperor. The U.S. is still officially a democracy, but democratic norms and institutions are eroding rapidly, and in April a majority of Americans called Trump a “dictator”.

The question is what effect these personalistic regimes will have on the economy. China’s growth over the past four decades pretty much proves that democracy isn’t necessary for a strong or even dominant economy. But there’s a difference between countries ruled by a single strongman, and countries ruled by a system of elite institutions that distribute power among a number of oligarchs.

A new paper by Blattman, Gehlbach, and Yu shows that personalist regimes tend to experience lower economic growth than either democracies or autocracies with more distributed power. The difference isn’t huge, but you can see it on a graph:

It’s not clear which direction the causation runs here; it could be that countries with bad economies tend to turn to strongmen to save them. But Blattman et al. test for this using variables that tend to predict regime transitions, and the result doesn’t change. That implies that personalist regimes actually make mistakes that slow down economic growth.
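
To make the comparison concrete, here is a toy sketch of the kind of regime-type growth comparison the paper is doing, with made-up data and invented effect sizes. It is not the authors' actual specification, which uses real panel data and more careful identification:

```python
# Toy illustration only, not Blattman, Gehlbach, and Yu's actual specification.
# Compare average GDP growth across regime types in made-up country-year data,
# with effect sizes invented to echo "lower but not hugely lower" growth
# under personalist rule.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 3000
regime = rng.choice(["democracy", "collective autocracy", "personalist"], size=n)
base_growth = {"democracy": 2.5, "collective autocracy": 2.5, "personalist": 1.5}
growth = np.array([base_growth[r] for r in regime]) + rng.normal(0, 4, size=n)

panel = pd.DataFrame({"regime": regime, "gdp_growth": growth})
print(panel.groupby("regime")["gdp_growth"].agg(["mean", "sem"]))
# A real analysis would add country and year fixed effects and, as the authors
# do, check whether pre-existing conditions predict transitions to personalist
# rule, to guard against reverse causation.
```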

Xi, Putin, and Trump certainly don’t seem to be violating that rule of thumb. China’s growth has slowed relentlessly under Xi, and his industrial policy seems to be simply driving Chinese companies into unprofitability rather than extricating the country from its economic slump. Putin’s war in Ukraine is slowly crushing the life out of the Russian economy, while Trump’s tariffs continue to wear down the resilient U.S. economy.

The trend toward strongmen is a bad one.

4. Will AI save us from the social media trolls?

Does it ever seem like modern political discourse is dominated by crazy idiots? Well, that’s because it is. In a new paper entitled “Dark personalities in the digital arena: how psychopathy and narcissism shape online political participation”, Ahmed and Masood find that your intuition isn’t wrong:

This cross-national study investigates how psychopathy, narcissism, and fear of missing out (FoMO) influence online political participation, and how cognitive ability moderates these associations. Drawing on data from the United States and seven Asian countries, the findings reveal that individuals high in psychopathy and FoMO are consistently more likely to engage in online political activity….Conversely, higher cognitive ability is uniformly associated with lower levels of online political participation. Notably, the relationship between psychopathy and participation is stronger among individuals with lower cognitive ability in five countries, suggesting that those with both high psychopathy and low cognitive ability are the most actively involved in online political engagement.

Almost everyone blames recent political trends on their chosen enemy group, but the real culprit is social media, which has elevated the worst people in our society to positions of influence from which they were previously shut out.

What force can defeat the terrible power of social media and its armies of crazy idiots? In my Fourth of July post, I expressed hope that AI algorithms could be harnessed to defeat the hordes of humanity’s worst:

LLMs give platforms the ability to cheaply and quickly filter content according to sentiment. Simply having an LLM downrank angry content and uprank positive content would lean against the natural tendencies of social media technology. Call it Digital Walter Cronkite.
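
The mechanics of that idea are simple enough to sketch with an off-the-shelf sentiment classifier standing in for a platform-scale LLM. This is purely illustrative, not anything any platform actually runs, and the function name and penalty parameter are my own inventions:

```python
# Purely illustrative sketch of sentiment-based reranking, using an
# off-the-shelf classifier (Hugging Face's default sentiment pipeline) as a
# stand-in for a platform-scale LLM. Not any platform's actual ranking code.
from transformers import pipeline

def rerank_by_sentiment(posts: list[str], anger_penalty: float = 1.0) -> list[str]:
    """Return posts ordered so positive ones rise and negative ones sink."""
    clf = pipeline("sentiment-analysis")  # downloads a small default model
    scored = []
    for post, result in zip(posts, clf(posts)):
        sign = 1.0 if result["label"] == "POSITIVE" else -anger_penalty
        scored.append((sign * result["score"], post))
    return [post for _, post in sorted(scored, key=lambda x: x[0], reverse=True)]

# Example:
# feed = rerank_by_sentiment(["What a lovely community garden!",
#                             "Everyone involved in this is an idiot."])
```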

Of course, that solution would depend on the willingness of platform owners like Elon Musk to unleash algorithms in service of moderation and reasonability. That seems a bit like wishful thinking, I admit.

But there’s another possibility, which is that AI itself will simply naturally drive humans off of social media, by generating infinite amounts of slop. A new paper by Campante et al. finds evidence that AI-generated images nudge news consumers toward more trustworthy human-gatekept media:

We study how AI-generated misinformation affects demand for trustworthy news…Readers were randomly assigned to a treatment highlighting the challenge of distinguishing real from AI-generated images. The treatment raised concern with misinformation…and reduced trust in news…Importantly, it affected post-survey browsing behavior: daily visits to [the mainstream newspaper’s] digital content rose by 2.5%…[S]ubscriber retention increased by 1.1% after five months…Results are consistent with a model where the relative value of trustworthy news sources increases with the prevalence of misinformation, which may thus boost engagement with those sources even while lowering trust in news content.

A similar effect might happen with AI agents, which are already flooding social media with trash commentary. As X and other social media companies lose the battle against the bot swarms, human users may stop relying on those feeds for their window on the world. The psychopaths and attention-seekers might simply get drowned out in the automated cacophony.

That would be a weird end to the age of mass social media, but honestly it’s not the worst ending I could think of.

5. No, China isn’t just selling stuff to America through third countries

The American and Chinese economies continue to decouple. Even though Trump keeps “pausing” his tariffs on China, China is selling less and less to America:

Source: Bloomberg

But you’ll notice that China’s exports to Europe and Southeast Asia are still growing strongly. This has led some commentators to claim that China is simply shipping its goods through third-party countries, avoiding tariffs (or the threat of tariffs) by essentially just slapping a different “made in” label on stuff that was actually made in China.

Those claims are wrong. You can see that they’re wrong by looking at the actual products that China is selling to countries in Southeast Asia (the region usually accused of transshipping Chinese goods to the U.S.), versus the products it used to sell to the U.S. The two sets of products don’t match up very well, meaning that only a modest portion of Chinese trade with Southeast Asia could reflect diversion of trade from the U.S. Gerard DiPippo did this exercise:

Facing U.S. tariffs, China’s exports to the U.S. are down—while exports to Southeast Asia are up. Is that trade diversion and potential transshipment? My estimate: at most 34% of the increased PRC exports to SE Asia in Q2 could reflect trade diverted from the United States.

In fact, this is an upper bound. Many of the countries being accused of transshipping Chinese goods — Mexico, Vietnam, etc. — have their own industries as well, which export a lot to the U.S. Increased U.S. imports from those countries are likely to at least partially — or perhaps mostly — be locally made goods.
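
For the curious, here is one way an upper bound like DiPippo's can be computed from product-level trade data. This is a sketch of the general logic, not his exact methodology, and the column names and data values are hypothetical:

```python
# A sketch of the general logic behind an upper bound like DiPippo's (not his
# exact methodology): for each product category, diverted trade can be at most
# the smaller of (a) the drop in China-to-US exports and (b) the rise in
# China-to-Southeast-Asia exports. Column names and data are hypothetical.
import pandas as pd

def max_diversion_share(trade: pd.DataFrame) -> float:
    """trade has one row per product category, with dollar-value columns
    'us_before', 'us_after', 'sea_before', 'sea_after'."""
    us_drop = (trade["us_before"] - trade["us_after"]).clip(lower=0)
    sea_rise = (trade["sea_after"] - trade["sea_before"]).clip(lower=0)
    max_diverted = pd.concat([us_drop, sea_rise], axis=1).min(axis=1).sum()
    # Share of the Southeast Asia increase that could, at most, be diverted trade
    return max_diverted / sea_rise.sum()

# toy = pd.DataFrame({"us_before": [10, 5], "us_after": [6, 5],
#                     "sea_before": [8, 2], "sea_after": [9, 6]})
# print(max_diversion_share(toy))  # 0.2 for this toy example
```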

All this goes to show that you can’t draw conclusions about decoupling just from macro data.

6. The rise of the power trad couples

There are a number of popular ideas out there about who marries whom. One is that rich men primarily want physically attractive wives and don’t care about social status, education, and so on. Another is that power couples tend to be dual earners.

In fact, both of these stereotypes are wrong. As Lyman Stone shows in a post for the Institute for Family Studies, rich men tend to marry highly educated, high-earning women who become housewives after marriage. Here are some charts:

Source: Lyman Stone
Source: Lyman Stone
Source: Lyman Stone

I don’t like the use of the word “overwhelmingly” in any of these charts, but the point is clear — rich men, who presumably have greater choice in whom they marry, tend to prefer women who are educated and high-earning before marriage, but many of these women become homemakers after marriage. Call it the “power trad” couple.



Links 8/12/25

Links for you. Science:

Ancient human highways revealed beneath the sea
Driving a protective allele of the mosquito FREP1 gene to combat malaria
How your research can survive a US federal grant termination
Science Agency Staffers Speak Out about Trump Administration’s Actions
The first 100% effective HIV prevention drug is approved and going global
Nearly 4,000 NASA employees opt to leave agency through deferred resignation program

Other:

Calling D.C. a ‘horror show,’ Trump takes over MPD and deploys the National Guard
National Capital Region Delegation Statement On Trump’s Police Actions In The District Of Columbia
The AI explosion means millions are paying more for electricity
The Professors Who Supported the Student Deportation Frenzy
ChatGPT Gave Instructions for Murder, Self-Mutilation, and Devil Worship
Trump Team’s Plans to Exploit Public Lands Follow the Blueprint of Reagan’s Interior Secretary. James Watt led a similar effort to privatize natural resources for mining, energy development, logging, and sprawl.
Trump has turned FEMA itself into a disaster
Speaking Reassurance to Power
What scrapping a $3 billion coastal project means for Louisiana’s future
A Love Letter to Music Listings
D.C.’s sole mental health crisis team for kids under threat in new budget. The Child and Adolescent Mobile Psychiatric Service, more commonly called ChAMPS, faces a more than 60 percent funding cut.
ICE Agents Invade a Manhattan Little League Field. Youman Wilder has coached local kids for twenty-one years—including four who have gone pro. When masked agents tried to interrogate his players, he told them, “You don’t have more rights than they do.”
Tom Lehrer, master satirist of Cold War era, dies at 97
‘Epstein’ is the language of America’s unheard. The elites still don’t listen.
Trump is creating a selfish, miserable world. Here’s what we can do
How Anti-Woke Went Intellectually Bankrupt
Sure Why Not (Josh Shapiro is not good)
The Elite Panic at the Heart of Liberal Attacks on Mamdani
Bad Wolf
These West Virginians love Trump and food stamps—but they can’t have both
Mamdani’s Master Class on How to Talk to Voters
DOGE types keep departing because without Musk, this place is no fun
The Real Reason for Zohran’s Success Should Rattle National Democrats
What the f-ck is the Donald J. Trump Center for the Performing Arts? (unclear if this is something Trump requested, or if this is working towards the fuhrer)
Homeless people visited ER less after moving into King County’s hotels
A transit-first vision for RFK doesn’t end with Metrorail
D.C.’s ‘Two-Tiered Justice System’: How the RENTAL Act Threatens Black Tenants
Oh My God, TAKE IT DOWN Kills Parody
This construction project was on time and on budget. Then came ICE.
Lawyers question legality of Alligator Alcatraz, ask federal judge to intervene
A Democrat for the Trump Era

Are consumers hostile to high-falutin’ claims?

We find that decreases in Michelin stars improve consumer review ratings…The analysis of review content further shows that a loss in Michelin stars leads consumers to become less focused on value and become less demanding regarding service.

Here is the paper.  Has implications for online life, GPT-5 reviews, and much more.  Via the excellent Kevin Lewis.

The post Are consumers hostile to high-falutin’ claims? appeared first on Marginal REVOLUTION.

Cleveland Fed: Median CPI increased 0.3% and Trimmed-mean CPI increased 0.2% in July

The Cleveland Fed released the median CPI and the trimmed-mean CPI.

According to the Federal Reserve Bank of Cleveland, the median Consumer Price Index rose 0.3% in July. The 16% trimmed-mean Consumer Price Index increased 0.2%. "The median CPI and 16% trimmed-mean CPI are measures of core inflation calculated by the Federal Reserve Bank of Cleveland based on data released in the Bureau of Labor Statistics’ (BLS) monthly CPI report".

Inflation Measures (click on graph for larger image).

This graph shows the year-over-year change for these four key measures of inflation. 

On a year-over-year basis, the median CPI rose 3.6% (unchanged from 3.6% YoY in June), the trimmed-mean CPI rose 3.2% (unchanged from 3.2%), and the CPI less food and energy rose 3.1% (up from 2.9%). 

Core PCE for June was up 2.8% YoY, unchanged from 2.8% in May.
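
For intuition, here is a minimal sketch of how the two statistics are computed from component-level price changes, using made-up data. The Cleveland Fed's published procedure has more detail (for example, it splits shelter by region and pro-rates components that straddle the trim boundaries), but the core idea is a weighted median and a mean that discards 8% of expenditure weight from each tail:

```python
# Minimal sketch of the median CPI and 16% trimmed-mean CPI, with made-up
# component data. A simplified version of the official procedure: components
# that straddle the trim boundaries are dropped or kept whole here rather
# than pro-rated.
import numpy as np

def weighted_median_and_trimmed_mean(changes, weights, trim=0.08):
    changes, weights = np.asarray(changes, float), np.asarray(weights, float)
    order = np.argsort(changes)
    changes, weights = changes[order], weights[order]
    cum = np.cumsum(weights) / weights.sum()      # cumulative weight share
    median = changes[np.searchsorted(cum, 0.5)]   # component at the 50% mark
    keep = (cum > trim) & (cum <= 1 - trim)       # drop 8% from each tail
    trimmed_mean = np.average(changes[keep], weights=weights[keep])
    return median, trimmed_mean

# Example: monthly % changes for a few CPI components and their weights
# print(weighted_median_and_trimmed_mean([0.9, 0.4, 0.3, 0.1, -2.2],
#                                        [0.05, 0.34, 0.25, 0.2, 0.16]))
```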

Tuesday assorted links

1. Bells ringing before Vespers.

2. Sensible comments on AI.

3. At least thirty-three crypto kidnappings this year?

4. University presidents arguing with each other.

5. Some reasons why DC has been getting worse; of course add Work from Home (which generally I favor, but which is bad for DC) to that list.  And violence in DC is quite high.  Note I favor DC absorption by Maryland, not rule by the feds.  At least most of DC should be absorbed; you might keep a narrow sliver, the part with the major government buildings, as an autonomous district.  There is no reason why Chevy Chase has to belong to an autonomous district rather than to Maryland.

6. Scott Sumner on Lerner symmetry.

7. Is air travel getting worse?

The post Tuesday assorted links appeared first on Marginal REVOLUTION.

Infinite Remix: The Ghibli Moment

Watch it now on YouTube. Like, share and subscribe.

Infinite Remix: The Ghibli Moment Transcript

We uploaded everything. Every song, every selfie, every secret.

And now AI is remixing all of it.

And we are astonished... and confused... and angry.

But this is only the latest chapter in a very old story.

We’ve always remixed. Nothing begins from scratch, everything begins from something else. Every melody, every shot, every move, gets passed on, sampled, transformed, made new.

But now there's a plot twist. And this plot twist is upending our ideas of creativity, ownership, and meaning — it hits hard.

Every second of every day, machines are remixing everything we've ever uploaded.

So we can now tap into our collective imagination and orchestrate it. This is called generative AI, or gen AI: ChatGPT, Veo, Sora, Midjourney, Claude, and a flood of others.

Gen AI is simultaneously amazing. And so awful.

It's useful and exciting. It's also invasive and empty.

This is the AI experience. It's a warp speed journey through wonder and horror.

For better or for worse, AI is the new creative frontier. And it is the terrain we will navigate together.

  • I'll show you the big picture of what's happening.

  • What this means for your creative future.

  • When to embrace these tools.

  • When to resist them.

And most importantly: what should we make, when we can make anything?

Welcome to Infinite Remix.

My name is Kirby, and like you, I'm trying to figure out what it means to be creative when machines can create.

Let's start with the big picture. This all started 30 years ago.

The Great Upload

In the nineties, we began uploading ourselves, one kilobyte at a time.

We're emailing. We're chatting. We're blogging. There were a lot of cats involved.

We want to find our people. See what they’re doing. Show what we can do.

We want to matter, even if it's just for a second.

Then this happened.

We get the internet on our phones. We don't just go online anymore. We live there.

We upload every joke, every opinion, every stray thought. We confess, console, insult, perform, and sell and sell and sell.

Anything that's not bolted down becomes data.

In just a few decades, we sent hundreds of billions of terabytes upward into the cloud.

This was The Great Upload.

It's miraculous. It's monstrous. It's transcendent. It's radioactive.

We love it. We fear it. It changed us.

And for a while—not very long, really—it seemed like that was the whole story. We had this endless horizon of human creativity to explore and add to.

But then... the upload started to... I guess, dream?

This happened because of something called deep learning. Deep learning watches everything we’ve made, learns the patterns, and predicts what should come next.

It doesn’t feel, it doesn’t care. It predicts. But it’s so good at this that it can sort of create.

Deep learning transformed The Great Upload into generative AI. Not just an archive, but a remix engine.

When it works, gen AI feels like magic. It can pull ideas from thin air, remix your thoughts in ways you never imagined, and give you raw material that sparks something new.

Gen AI is creatively exciting. People are doing things we've never seen before. The results are hilarious, surreal, disturbing, sometimes beautiful.

And gen AI is just fun.

You can make your kid a Pixar character.

You can create a realistic video of whatever goofy thing you can think up.

And the cat memes have risen to new heights.

But of all the AI trends that went viral, only one felt like it truly meant something, only one touched something sacred. This was The Ghibli Moment.

The Ghibli Moment

I strongly feel that this is an insult to life itself.

Hayao Miyazaki

"An insult to life itself." This is what Hayao Miyazaki said in 2016 after watching an AI-generated animation.

But the truth is, Miyazaki was talking about something a lot more important than AI art.

Studio Ghibli and its visionary cofounder Hayao Miyazaki create highly stylized, densely human films that are widely beloved.

Their aesthetic requires painstaking manual labor. This four-second shot took 15 months to animate.

AI art, on the other hand, is fast, easy, and very good at mimicking styles.

In March 2025, the iconic Ghibli style collided with the gen AI behemoth.

ChatGPT got an update that could convincingly recreate the Ghibli look. And the internet ran with it, cranking out Ghibli-ized family photos and memes.

The backlash from artists and Ghibli fans was intense.

AI won’t make your photos Ghibli. Ghibli is hand drawn and each character has insane emotional depth. As scary it is to admit the result looks decent, it's nothing like Ghibli and will never be. Big fuck you to AI. This is a terrible advancement of technology.

Some of this anger was misdirected. Most people weren’t trying to replace Ghibli, or even make art. They were playing. They wanted to see themselves in this beautiful world, they wanted to be a part of something magical.

But the critique still stands. Something vital is missing from these images... and all AI art.

These outputs are undeniably impressive — the colors, the textures, it all looks right. ChatGPT was trained on greatness, and it shows.

But look closer and the cracks appear. A hand with four fingers. Things that look right at first but make no sense on closer inspection. Expressions that look simple and generic.

Now contrast this with that four-second shot from _The Wind Rises_.

Every frame was hand-tuned by Miyazaki himself.

Notice that everyone here has a story. A group struggling to move a cart. A panicked horse. A mother shielding her children.

In a Ghibli film, everything has purpose. Everything has meaning. Everything is cared for.

And you feel this when you experience their films. You can feel the life of another person. They were here.

You can also get this with a great song. A perfectly designed app. A meal made by someone who loves you. That sense that a person meant for this to matter.

These experiences have depth. You can sense the unmistakable presence of a human soul, shaped by their life, their taste, and their search for meaning.

AI images don't have depth. They resemble depth.

What AI gives you is the average of everything it’s seen.

And greatness stands out from the average, but even masterpieces have plenty of ordinary parts.

Master artists have worked this way for centuries.

The Renaissance painter Michelangelo painted the most important parts himself. But assistants filled in less important sections, like backgrounds and drapery.

Right now, that’s where AI is most impactful: helping with the ordinary.

With AI you can run your own workshop, your own studio, staffed with infinite, tireless assistants.

You may not make a Miyazaki masterpiece, but you can make something greater than what you could have done on your own, and perhaps even deeper and more meaningful.

And that famous Miyazaki quote? It's not even about AI art.

He wasn’t reacting to the technology — he was reacting to the content: this grotesque, flailing humanoid crawling across the screen.

After he watches the clip, he talks about a close friend with a disability and how much that friend suffered.

What offended Miyazaki wasn’t how it was made. It was the lack of empathy, the lack of care, the lack of soul.

That's what he’s really afraid of. That's what we're really afraid of. Not machines, not styles.

We’re afraid of a world without soul.

Close

We are the very first humans to confront machines that learn from us... and remix us.

It’s a collective existential moment. And whether we’re amazed or afraid, many of us are saying the same thing:

“Yeah guys, I think we're cooked.”

But to me, human creators seem very far from done.

I used gen AI more in this video than ever before. And it is light years from replacing me. Even this... is mostly me. It's my voice. It's based on video of me.

What AI did was change how I work. It's faster, weirder, more expansive. It extended what I could do.

That’s the future I see, a collaboration. A blend of human and machine, where AI handles the ordinary, and we bring the extraordinary: the depth, the meaning, the soul.

What we all need to do is something bold: let's see AI as it is—no panic, no hype.

Let's unpack what AI really can do. And use it to create work that matters.

Continue the Journey

Hi everybody—if this resonated with you and you want to keep going, I’ve got something for you.

I’m running a 5-week live cohort class called Infinite Remix Live, plus a one-time 3-hour workshop.

These are all about using gen AI tools like the brand new ChatGPT-5, Claude, Midjourney, Veo, and more to create faster and smarter while keeping your human voice at the center.

It's for creative professionals, marketers, educators, writers, filmmakers, entrepreneurs—and the creatively curious.

If you missed enrollment, no worries. You can sign up to get notified about future classes, explore our on-demand courses, or dig into some great free content.

Hope to see you there.

My podcast with Mitch Daniels

Done with Liberty Fund, I very much enjoyed chatting with him.  Here is the link.  Here is their episode summary:

Tyler Cowen joins Mitch Daniels to explore AI’s promise, economic threats from debt and regulation, and the need for bold, intelligent policy to secure economic growth, innovation, and individual liberty. Cowen discusses his views on immigration, COVID lockdowns and addresses societal fear of confronting rapid technological change.

A very down to earth conversation, in the best sense of that term.

The post My podcast with Mitch Daniels appeared first on Marginal REVOLUTION.

Early Look at 2026 Cost-Of-Living Adjustments and Maximum Contribution Base

The BLS reported earlier today:
The Consumer Price Index for Urban Wage Earners and Clerical Workers (CPI-W) increased 2.5 percent over the last 12 months to an index level of 316.349 (1982-84=100). For the month, the index increased 0.1 percent prior to seasonal adjustment.
CPI-W is the index that is used to calculate the Cost-Of-Living Adjustments (COLA). The calculation dates have changed over time (see Cost-of-Living Adjustments), but the current calculation uses the average CPI-W for the three months in Q3 (July, August, September) and compares to the average for the highest previous average of Q3 months. Note: this is not the headline CPI-U and is not seasonally adjusted (NSA).

• In 2024, the Q3 average of CPI-W was 308.729.

The 2024 Q3 average was the highest Q3 average, so we only have to compare Q3 this year to last year.

CPI-W and COLA Adjustment (click on graph for larger image).

This graph shows CPI-W since January 2000. The red lines are the Q3 average of CPI-W for each year.

Note: The year labeled is for the calculation, and the adjustment is effective for December of that year (received by beneficiaries in January of the following year).

CPI-W was up 2.5% year-over-year in July (down from 2.6% YoY in June), and although this is early - we need the data for July, August and September - my early guess is COLA will probably be close to 3% this year, up from 2.5% in 2025.
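
The arithmetic behind that guess is straightforward. Here is a minimal sketch; the 2024 Q3 base average comes from the post above, while the August and September 2025 values in the example are placeholders, since that data isn't out yet:

```python
# Minimal sketch of the COLA arithmetic described above. The 2024 Q3 average
# (308.729) is from the post; the 2025 monthly CPI-W values in the example
# are placeholders, since August and September data aren't out yet.
def cola_from_q3(cpiw_q3_this_year, q3_base_average=308.729):
    """COLA = percentage increase of this year's Q3 average CPI-W over the
    highest previous Q3 average, rounded to the nearest 0.1 percentage point."""
    q3_avg = sum(cpiw_q3_this_year) / len(cpiw_q3_this_year)
    return round(100 * (q3_avg / q3_base_average - 1), 1)

# Hypothetical July/August/September 2025 CPI-W levels, for illustration only:
# print(cola_from_q3([316.349, 317.3, 318.2]))  # -> a COLA estimate in percent
```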

Contribution and Benefit Base

The contribution base will be adjusted using the National Average Wage Index. This is based on a one-year lag. The National Average Wage Index is not available for 2024 yet, although we know wages increased solidly in 2024. If wages increased 5% in 2024, then the contribution base will increase to around $185,000 in 2026, from the current $176,100.

Remember - this is an early look. What matters is average CPI-W, NSA, for all three months in Q3 (July, August and September).

Trans People in Georgia Prisons Are Being Forced to Detransition. Now They’re Suing.


A Class Action Lawsuit Representing Nearly 300 Incarcerated People Says the State’s Refusal To Allow Gender-Affirming Care Constitutes Cruel and Unusual Punishment.

A group of incarcerated transgender women and men have sued Georgia corrections officials, challenging a new law that prevents them from receiving gender-affirming medical care. The lawsuit, filed Friday morning, accuses the state of violating the Eighth Amendment, which prohibits cruel and unusual punishment.

Five transgender plaintiffs — two men and three women — brought the class action lawsuit on behalf of nearly 300 other people in Georgia state prisons, who argue that the state’s law will have “catastrophic consequences.” In some cases it is forcing trans people who have already received hormone replacement therapy and other services for years to detransition without their consent.

“We are very much in the thick of seeing policies like this be adopted,” said Chinyere Ezie, a senior staff attorney at the Center for Constitutional Rights, which brought the lawsuit with co-counsel Bondurant Mixson & Elmore LLP. “It’s really unfortunate, I think that it has and will cost people’s lives. I think that the plan is to really just eradicate trans people from public life, to really — contrary to medicine — make the treatment of gender dysphoria a culture war, as opposed to a serious medical need that requires treatment.”

In May, Gov. Brian Kemp, a Republican, signed SB 185, a bill passed by the state’s majority conservative legislature that prohibits the use of state funds or resources for surgery, hormone replacement therapy, cosmetic procedures and other treatments used to address gender dysphoria. The law states explicitly that incarcerated people may still receive treatments like hormone replacement therapy if they are medically necessary for conditions other than gender dysphoria.

Ezie told The 19th that the bill’s sponsors indicated during hearings that incarcerated trans people would not be allowed to pay for treatment themselves, either. Her team has also heard this from their clients, she said. The 19th reached out to the Georgia Department of Corrections to confirm whether incarcerated trans people can pay for these procedures themselves.

Gender dysphoria is a condition recognized by medical journals and professionals. It is defined as the sense of discomfort or anxiety a person feels when their physical gender feels out of sync with their gender identity. This can lead to long-term mental health effects, including periods of depression, thoughts or acts of self-harm. Forced detransition due to anti-trans legislation, coupled with the poor conditions and discrimination that incarcerated trans people often experience, can worsen these mental health consequences. From a physical standpoint, doctors recommend that any termination of hormone replacement therapy takes place gradually over three to six months, rather than cold turkey.

“Taking away individuals’ access to gender-affirming therapies while in prison constitutes cruel and unusual punishment and increases the likelihood of abuse and detrimental health consequences,” Jan T. Mooney, an Atlanta-based psychologist, and Mark Spencer, an Atlanta-area internal medicine physician, wrote in an April column about the Georgia bill.

“Abrupt cessation or forced weaning of medically necessary, ongoing treatment is a health risk. Physical effects of hormone withdrawal are accompanied by psychological distress, which may manifest as anxiety, depression, and suicidality,” they continued.

The class action lawsuit comes 10 years after Ezie first sued over another ban on gender-affirming care in Georgia prisons. In that case Ashley Diamond, a Black transgender woman, sued the Georgia Department of Corrections in 2015 after being held in men’s prisons and denied hormone treatments.

The case drew attention from the U.S. Department of Justice, which said at the time that blanket policies barring hormone therapy violate the Eighth Amendment “because they do not provide for individualized assessment and treatment.” Diamond won an undisclosed settlement in 2016, and her case prompted policy changes in Georgia meant to facilitate better treatment of incarcerated transgender people. But six years later, Diamond sued again, asserting that the state failed to provide her adequate health care or protect her from sexual assault after a second incarceration. Ultimately, Diamond dropped that lawsuit to protect her mental health, according to her lawyers.

To see the same issues come up again in 2025 feels like being “in a time machine going back in time,” said Ezie, who represented Diamond 10 years ago.

“I think that is a feeling that’s shared by many trans rights activists,” Ezie said. “It feels like, rather than seeing a forward march of progress when it comes to securing basic rights and basic dignity for transgender people, we are now fighting to hold on to very basic legal wins that you know we achieved, at times, decades ago.”

Ezie and the legal team at the Center for Constitutional Rights are hopeful that the courts will overturn Georgia’s law, as they have in other similar cases. For example, Wisconsin enacted a law in 2005 that barred prison doctors from providing hormone therapy or gender-affirming surgery to incarcerated transgender people in state custody. But a federal court ruled that denying this medical care constitutes cruel and unusual punishment. Another state-level class action lawsuit in Colorado resulted in a negotiated settlement agreement between the state and the Department of Corrections requiring an overhaul of how it houses incarcerated transgender women and provides medical care to all trans people behind bars.

If the Center for Constitutional Rights is successful in Georgia, Ezie anticipates that there will still be a long road ahead in the effort to challenge anti-trans legislation and policies.

“We’re going to continue to use the courts, we’re going to continue to organize, we’re going to continue to — as we did prior to the passage of this bill — lobby against bills like this that seek to cause so much preventable harm,” she said. “This is why we fight.”



The post Trans People in Georgia Prisons Are Being Forced to Detransition. Now They’re Suing. appeared first on DCReport.org.

‘AI’ Is Still Very Good at Mediocrity

Yesterday’s post about AI rolled out a day early, so if you missed it, you can find it here.

Meanwhile, here is a very pretty front door (Swann St. NW, between 18th and New Hampshire Ave., Dupont Circle):

Front door

BLS: CPI Increased 0.2% in July; Core CPI increased 0.3%

From the BLS:
The Consumer Price Index for All Urban Consumers (CPI-U) increased 0.2 percent on a seasonally adjusted basis in July, after rising 0.3 percent in June, the U.S. Bureau of Labor Statistics reported today. Over the last 12 months, the all items index increased 2.7 percent before seasonal adjustment.

The index for shelter rose 0.2 percent in July and was the primary factor in the all items monthly increase. The food index was unchanged over the month as the food away from home index rose 0.3 percent while the food at home index fell 0.1 percent. In contrast, the index for energy fell 1.1 percent in July as the index for gasoline decreased 2.2 percent over the month.

The index for all items less food and energy rose 0.3 percent in July, following a 0.2-percent increase in June. Indexes that increased over the month include medical care, airline fares, recreation, household furnishings and operations, and used cars and trucks. The indexes for lodging away from home and communication were among the few major indexes that decreased in July.

The all items index rose 2.7 percent for the 12 months ending July, after rising 2.7 percent over the 12 months ending June. The all items less food and energy index rose 3.1 percent over the last 12 months. The energy index decreased 1.6 percent for the 12 months ending July. The food index increased 2.9 percent over the last year.
emphasis added
The change in core CPI was above expectations. I'll post a graph later today after the Cleveland Fed releases the median and trimmed-mean CPI.

Heroines of organ donation

Last week I had the opportunity to see a screening of the movie Abundant, about non-directed organ donors, who have donated organs to strangers.  Here is a snapshot of me and two of the donors who tell their story in the film, Laurie Lee and Laura Diaz Moore.  They are both pretty inspiring.

 

 See also this story.

YoY Measures of Inflation: Services, Goods and Shelter

Here are a few measures of inflation:

The first graph is the one Fed Chair Powell had mentioned two years ago as something to watch.  

Services ex-Shelter (click on graph for larger image).

This graph shows the YoY price change for Services and Services less rent of shelter through July 2025.

Services were up 4.0% YoY as of July 2025, up from 3.8% YoY in June.

Services less rent of shelter was up 3.8% YoY in July, unchanged from 3.8% YoY the previous month.

Goods CPI

The second graph shows that goods prices started to increase year-over-year (YoY) in 2020 and accelerated in 2021 due to both strong demand and supply chain disruptions.

Durables were up 1.2% YoY as of July 2025, up from 0.6% YoY the previous month.

Commodities less food and energy commodities were at 1.1% YoY in July, up from 0.6% YoY the previous month.

Shelter

Here is a graph of the year-over-year change in shelter from the CPI report (through July) and housing from the PCE report (through June).

Shelter was up 3.7% year-over-year in July, down from 3.8% in June. Housing (PCE) was up 4.1% YoY in June, unchanged from 4.1% in May.

This is still catching up with private new lease data (this includes renewals whereas private data is mostly for new leases).

Core CPI ex-shelter was up 2.5% YoY in July, up from 2.1% YoY in June.

Launch preview: ULA to launch first national security mission on a Vulcan rocket

A United Launch Alliance Vulcan rocket stands at Space Launch Complex 41 at Cape Canaveral Space Force Station ahead of the launch of the USSF-106 mission. Image: Michael Cain/Spaceflight Now

More than four months after it was certified to fly national security payloads for the United States government, United Launch Alliance is on the cusp of launching just such a mission with its Vulcan rocket.

The 202-foot-tall (61 m) rocket will launch a pair of satellites on a mission collectively referred to as United States Space Force (USSF)-106. The two-stage rocket will fly on a trajectory that will take it due east from the launch pad at Cape Canaveral Space Force Station.

“This mission is heading directly to geosynchronous orbit and will be one of our longest missions to date,” said Gary Wentz, the vice president of Government and Commercial Programs for ULA during a prelaunch teleconference. “It was purposefully designed to support these missions, direct inject to GEO for the Space Force. This is our 101st mission for national security space and we’re proud to deliver the majority of our country’s critical satellites to orbit.”

On Monday, ULA rolled the rocket about a third of a mile from the government Vertical Integration Facility (VIF-G) to the pad at Space Launch Complex 41. The trip took a little more than an hour from first motion to settling down at the pad.

Spaceflight Now will have live coverage of this mission beginning about 1.5 hours ahead of liftoff, which is scheduled for 7:59 p.m. EDT (2359 UTC), the opening of a one-hour long window. This livestream will also contain coverage of the countdown and launch for Arianespace’s Ariane 6 rocket, which has an instantaneous launch time of 8:37 p.m. EDT (0037 UTC).

The 45th Weather Squadron forecast an 80 percent chance for favorable weather during the hour-long launch window, with cumulus clouds and solar activity being the two potential impacts.

The mission will be the third launch so far for ULA in 2025. The first two were Atlas 5 rockets that carried a total of 54 satellites for Amazon’s Project Kuiper broadband constellation.

A United Launch Alliance Vulcan rocket stands at Space Launch Complex 41 at Cape Canaveral Space Force Station ahead of the launch of the USSF-106 mission. Image: Michael Cain/Spaceflight Now

Returning to national security space launches

The launch of the USSF-106 mission comes a few years after it was originally intended to fly, but critically marks ULA’s return to launching payloads as part of the National Security Space Launch (NSSL) program for the Space Force and the National Reconnaissance Office (NRO).

Its last such launch, using an Atlas 5 rocket, was the USSF-51 mission, which launched a little more than a year ago on July 30, 2024. A decade prior to that, following Russia’s initial invasion of Ukraine, U.S. launch providers were required to cut ties with Russian-made engines and shift to American-built hardware.

That put ULA on the path to developing Vulcan and Northrop Grumman to move away from its Antares 230+ rocket, also powered by engines with heritage from the former Soviet Union.

After years of development and two certification flights, ULA’s Vulcan rocket was cleared to fly NSSL payloads.

“We’re excited to be here today. Pretty historic point in our program’s history,” said Col. James Horne, USSF-106 Mission Director. “We officially end our reliance on Russian-made engines with this launch and we continue to maintain our assured access to space with two, independent — at least two independent — rocket service companies that we can leverage to get our capabilities in orbit.”

Getting Vulcan cleared to fly wasn’t a simple process though. An anomaly with one of the nozzles on a Northrop Grumman-built solid rocket motor during the second certification mission in October 2024 caused a months-long delay in finishing certification to begin launching NSSL missions.

“We worked very closely with the ULA and Northrop Grumman teams, as we always do in those sorts of situations. We’ve done a couple of full-scale static fires, extensive sub-scale analysis and modeling to get to launch [Tuesday] at an acceptable risk,” Horne said. “So, that’s the process that we had to work through as we got ready for this mission and we handled that by our mission-specific certification process.

“So we certified the design of the vehicle in March and then we worked through our mission-specific risk analysis to get to launch [on Tuesday.]”

He added that the Space Force also looked closely at multiple other aspects of the rocket, like the two Blue Origin-built BE-4 engines that power the Vulcan booster at liftoff, given that the BE-4 is a brand new engine that first flew on a Vulcan rocket.

“I think we got excellent data from Cert-2 that showed just how capable of an engine that is with its ability to overcome the issue we saw with the SRB,” Horne said. “We qualified a lot of new structures on this rocket: the tanks and the composite interstage adapters and heat shields and things like that.”

Horne said while Vulcan is now certified to fly what he called “A and B missions,” the Space Force is still in the process of certifying the so-called “heavy version” of the Vulcan rocket, which sports six solid rocket motors.

ULA’s first launch with six solids will be a flight for Amazon’s Project Kuiper constellation, which will launch 45 of those broadband satellites into low Earth orbit. Horne said that flight alone won’t be sufficient to gain certification for the Vulcan in this configuration.

“The Kuiper launch will factor into certification, for final certification for the pad out of Vandenberg and for that variant of Vulcan to launch from that, but there’ll be further analysis and certification activities related to the heavy launch and ULA has a good schedule and pace for that,” Horne said. “That will be in advance of any heavy missions that we need.”

After it launches the USSF-106 mission, the next NSSL flight for ULA will be USSF-87. A launch date for the mission hasn’t been announced, but last week ULA President and CEO Tory Bruno said there could be a couple of Atlas 5 rocket launches before the next national security mission.

Upgrading GPS

The USSF-106 mission consists of two satellites. One of those remains a mystery, with Space Force officials unwilling to provide any details during Monday’s prelaunch news briefing.

The other, which is described as the primary payload onboard the Vulcan rocket, is the brainchild of the Air Force Research Laboratory and is called the Navigation Technology Satellite-3 (NTS-3). The mission and launch together come with a $250 million price tag.

Dr. Joanna Hicks, a senior research aerospace engineer within the AFRL’s Space Vehicles Directorate and principal investigator for the NTS-3, said she’s excited to finally launch a satellite that she’s helped work on for years.

It follows in the footsteps of NTS-1 and NTS-2, which were launched in the 1970s.

“This is the first experimental navigation satellite in 48 years. The last one was NTS-2 that launched in 1977,” Hicks said. “At the lab, we think that we are overdue for an experiment in this area. GPS is such an integral part of our lives today… and with NTS-3, we are going to be experimenting with a number of different technologies that look at how we can continue to evolve and augment GPS to make sure that it remains the gold standard that our war fighters need.”

Rendering of NTS-3 Navigational Satellite over North America. Graphic: L3Harris Technologies

Once deployed from Vulcan’s Centaur upper stage, the NTS-3 satellite will take a couple weeks for checkouts and commissioning on orbit before it can start its work. Hicks said she and her team would conduct more than 100 PNT (position, navigation, timing) experiments that could help augment the GPS system.

Some of those experiments include better time-keeping methods and testing an electronically steerable phased array antenna, which Hicks said will help “deliver higher power to get through interference to the location where it’s needed.” Another is called Chimera, which the AFRL said is meant to “jointly authenticate satellite orbit data and measurements of the range between the satellite and user, to provide an extremely robust protection against GPS spoofing for civil users.”

“As a reprogrammable architecture, we don’t have to have everything planned out before we go on orbit and before we see what the threats are,” Hicks said. “This is not just on the satellite side. We’re pairing that with reprogrammable user equipment that’s able to receive new signals that we’ve defined even after we’ve launched. So we’re very excited about that.”

The prime contractor that manufactured the satellite is L3Harris Technologies, which built upon Northrop Grumman’s ESPAStar satellite bus. The company was reportedly awarded an $84 million contract for the spacecraft in 2018, which passed its critical design review in 2020.

At that point, launch was anticipated in 2022. However, the satellite wasn’t delivered to AFRL’s integration and test facility at Kirtland Air Force Base in New Mexico until January 2023, which shifted the planned launch date to later that year.

“L3Harris, as the prime contractor for the program, has been responsible for the design, development, integration and test of the space vehicle,” said Andrew Builta, the vice president of Strategy and Business Development at L3Harris Technologies. “We also developed a portion of the ground control segment, supported launch vehicle integration, integration with the control and user segment and will support on orbit operations.”

The various tests will involve work both in the laboratory and out in the field. Hicks said some of the lessons learned from this, like calibrating a spot beam antenna, can be applied to the next generation of GPS satellites, called GPS 3-F, which are being developed and built by Lockheed Martin.

“One of the things that NTS-3 is testing… is the multi-orbit constellation concept,” Hicks said. “So, can we receive signals from NTS-3 at GEO as well as GPS at MEO (medium Earth orbit) and take advantage of all of them. Maybe in the future, we’ll be able to put some of these technologies in LEO, for example.

“We don’t currently have that as a planned mission, but that’s something that could conceivably happen in the future.”

Why love matters most

Black and white photo of a thoughtful woman and two children, one holding a crying baby, indoors near wooden doors.

For Iris Murdoch, morality is not about duties and rules but stopping our ego fantasies and attending to others with love

- by Cathy Mason

Read at Aeon

Why did air conditioning spread so fast in Mexico?

A common theme in the vast literature on climate change is the estimation of models using historical data to make predictions many decades into the future. Although there is a large and growing number of these types of studies, researchers rarely return later to check the accuracy of their predictions. In this paper, we perform such an exercise. In Davis and Gertler (2015), we used household-level microdata from Mexico to predict future air conditioning adoption as a function of income and temperature. Revisiting these predictions with 12 years of additional data, we find that air conditioning in Mexico has accelerated, significantly exceeding our predictions. Neither errors in predicting income growth or rising temperatures, nor migration patterns, nor an overly restrictive model can explain the large prediction gap. Instead, our results point to the failure to account for falling electricity prices and technological changes in air conditioner efficiency as key drivers of the prediction gap.

That is from a new NBER working paper by Lucas W. Davis and Paul Gertler.  As of 2022, the rate of air conditioning access in Mexico was about 18.5%, only slightly less than that of Europe.

The post Why did air conditioning spread so fast in Mexico? appeared first on Marginal REVOLUTION.

Emergent Ventures winners, 45th cohort

Anya Singh, Hawthorne, CA/YC, to help protect IP.

Patrick Murphy, Limerick, travel grant.

Daryna Hrybchuk, 18, Lviv, general career support.

Ari Shtein, Ann Arbor, Michigan/Yale, 17, general career support.

Vadzim Rayinchick, Belarus/SF, “Confessions”.

Garret Thomas Molloy, Dublin/Stanford, travel and study grant.

Jon Cooper, UK and Stanford, AI and historical archives.

Jerusalem Demsas, general career support for new projects.

Manuel Martin Morante, Extremadura, to visit MIT, eventual biotech start-up.

Jal Patel, 16, Regina, Canada, general career support for AI and biotech.

Ayana Farooq, Mississauga, brain neurons.

Adria Moret, Barcelona, AI and philosophy, so LLMs understand animal welfare better.

The post Emergent Ventures winners, 45th cohort appeared first on Marginal REVOLUTION.

Tuesday: CPI

From Matthew Graham at Mortgage News Daily: Mortgage Rates Steady Ahead of High Stakes Inflation Report
The average top tier 30yr fixed rate held exceptionally steady last week after moving just a bit lower over the weekend. By comparison, today's rates are much closer to Friday's latest levels and still very close to the lowest we've seen since October, 2024.

If the two key economic considerations for interest rates are jobs and inflation, the two key economic reports are the jobs report seen earlier this month and the Consumer Price Index which comes out tomorrow morning. It's often repeated that the PCE Price Index is a preferable gauge of inflation, but CPI comes out 2 weeks earlier and thus gets most of the market's attention.

Just like last month, market participants are watching to see the extent of tariff-driven inflation in tomorrow's data. If it contributes to a higher-than-expected result, we'll likely see some upward pressure on rates. [30 year fixed 6.58%]
emphasis added
Tuesday:
• At 6:00 AM ET, NFIB Small Business Optimism Index for July.

• At 8:30 AM, The Consumer Price Index for July from the BLS. The consensus is for a 0.2% increase in CPI, and a 0.3% increase in core CPI.  The consensus is for CPI to be up 2.8% year-over-year and core CPI to be up 3.0% YoY.

Critical Fire Weather in the West; Monitoring Tropical Storm Erin

CPHC Central North Pacific Outlook





Tropical Weather Outlook
NWS Central Pacific Hurricane Center Honolulu HI
Issued by NWS National Hurricane Center Miami FL
800 PM HST Wed Aug 13 2025

For the central North Pacific...between 140W and 180W:

Tropical cyclone formation is not expected during the next 7 days.

Forecaster Gibbs


NHC Atlantic Outlook





Tropical Weather Outlook
NWS National Hurricane Center Miami FL
200 AM EDT Thu Aug 14 2025

For the North Atlantic...Caribbean Sea and the Gulf of America:

Active Systems:
The National Hurricane Center is issuing advisories on Tropical
Storm Erin, located over the central tropical Atlantic.

1. Southwestern Gulf (AL98):
A broad area of low pressure near the west coast of the Yucatan
Peninsula is producing disorganized showers and thunderstorms. The
broad low is forecast to emerge off the coast later this morning and
move west-northwestward across the southwestern Gulf during the next
day or two, where environmental conditions are marginally conducive
for further development. The system is forecast to move inland over
northeastern Mexico by late Friday, ending its chances of formation.
* Formation chance through 48 hours...low...20 percent.
* Formation chance through 7 days...low...20 percent.



Forecaster Kelly


NHC Eastern North Pacific Outlook





Tropical Weather Outlook
NWS National Hurricane Center Miami FL
1100 PM PDT Wed Aug 13 2025

For the eastern and central North Pacific east of 180 longitude:

Tropical cyclone formation is not expected during the next 7 days.

Forecaster Gibbs