Saturday assorted links

1. What is the chance we live inside a black hole?

2. Observations on ambition, though it is sad he does not grasp the value of Jiro.

3. A brief history of lab notebooks.

4. “Hero rat who sniffed out over 100 land mines is honored with giant statue.”

5. The new LACMA (NYT).  And Hausa erotica, published on WhatsApp (NYT).

6. Henry Oliver on Buddenbrooks.

7. “Play chess with Yoko Ono.”

The post Saturday assorted links appeared first on Marginal REVOLUTION.


Falcon 9 launches Cygnus cargo spacecraft to the ISS

Falcon 9 NG-24 launch

A Falcon 9 launched a Cygnus cargo spacecraft April 11 as Northrop Grumman continues its dependence on a competitor to fly resupply missions to the International Space Station.

The post Falcon 9 launches Cygnus cargo spacecraft to the ISS appeared first on SpaceNews.

Moog Technology Keeps Artemis II Astronauts Safe During Historic Lunar Mission


East Aurora, NY – Moog Inc. (NYSE: MOG.A and MOG.B), a worldwide designer, manufacturer, and systems integrator of high-performance precision motion and fluid controls and control systems, highlights the critical […]

The post Moog Technology Keeps Artemis II Astronauts Safe During Historic Lunar Mission appeared first on SpaceNews.

Orion splashes down to successfully end Artemis 2 mission

Artemis 2 splashdown

The first human mission beyond Earth orbit in more than 50 years successfully concluded with a splashdown in the Pacific Ocean April 10.

The post Orion splashes down to successfully end Artemis 2 mission appeared first on SpaceNews.

HawkEye 360 files to go public

Based in Herndon, Virginia, the company specializes in space-based radio-frequency signals intelligence for government agencies.

The post HawkEye 360 files to go public appeared first on SpaceNews.

Spacety raises $190 million to scale satellite manufacturing, plans IPO

An image returned by the Hisea-1 C-band Chinese SAR satellite in December 2020.

Chinese satellite maker Spacety has completed multiple rounds of equity financing worth $190 million to scale its vertically integrated satellite manufacturing and data services model.

The post Spacety raises $190 million to scale satellite manufacturing, plans IPO appeared first on SpaceNews.

No Goals, No Game Plan: The U.S. And Iran Enter Ceasefire Talks From a Worse Position Than Before the War

With undefined objectives, a defiant Iran, and JD Vance leading the delegation, the diplomatic table may already be set for failure — and the price is more war.

Just as Donald Trump launched his preemptive war with Iran without clear strategic goals, the ceasefire negotiations open today with no well-defined strategic aims other than re-opening the Strait of Hormuz.

The world is grateful that Iran and the U.S. are talking rather than bombing each other. But who thinks that negotiations towards an overall, lasting peace can prove successful without goals?

These countries can’t even confirm what they supposedly agreed to talk about this week.

Variously, the U.S. said it wants to permanently erase any Iranian nuclear weapons development, eliminate Iranian missile and drone attacks on Israel or others, arrange for regime change, and liberate the Iranian people to rise against their own government. Instead, we got widespread military damage and a defiant Iran that was able to replace its newly dead ayatollah quickly, hit back at Israel and Gulf nations, hold onto its 900 pounds of enriched uranium fuel and strangle shipping in the Strait of Hormuz.

And Israel took advantage to bomb the Iran-backed Hezbollah proxy army in Lebanon, even threatening to annex the southern third of that country, though it now wants to talk with Lebanon, not Iran.

Iran has a list spelling out the opposite position on each key point, and yesterday it added a demand for the conditional unfreezing of Iranian assets held under international sanctions.

Square One

So, we enter these negotiations mostly with the same issues that were on the table when Iran was talking with Trump friend Steve Witkoff and son-in-law Jared Kushner before the air attacks began – but from a worse starting position. The prime tactical question now concerns reopening the Strait to global shipping, a question not previously on the table.

We are sending JD Vance, not an experienced diplomat, to lead the U.S. delegation, knowing that Iran has taken America’s best punch and survived to fight another day. Iran, moreover, is mightily angered, aware that it has friends in Russia and China who will help in retributive efforts and that the U.S. has no patience for extended war, particularly with ground troops.

Trump is simultaneously talking about partnering with a new “more intelligent” Iranian leadership made up of the same people who were there all along, and restarting bombing that, as we all know now, might aim to destroy 6,000 years of Iranian and Persian civilization. In all of his boasting, one must wonder whether Trump has even a hint of self-doubt about having bullied his way into a corner.

It’s certainly not the map for a new chapter in any re-issued Art of the Deal. The price of failure at the diplomatic table is more war.

Good luck to all of us.



The post No Goals, No Game Plan: The U.S. And Iran Enter Ceasefire Talks From a Worse Position Than Before the War appeared first on DCReport.org.

Work With Trump Voters, but Don’t Forgive Them

We Need Their Changed Minds but All Is Not Okay

Lots of Trump voters will be changing their minds. We need to work with them to change directions back toward more sane ways of doing things, but that does not include forgiveness. The two things can, and should, be kept separate.

Many of his voters have long since lost faith in him. Many more now will be against his having gotten us into another Middle East war. Many others will turn against him for strictly selfish reasons: having to pay high prices for gas changes how people feel about a president.

When midterm elections for Congress and local and state elections come up, many of those people may simply not vote rather than support Republicans or Trump-supporting candidates, and some may vote for Democrats and other candidates. That’s great. We need all the help we can get to turn things around. We need to have an attitude of working with them: maybe not happily, but nevertheless willing to take their help. Going forward, for years to come, if some general sanity returns to politics and to how we act toward one another as a society, we will need a willingness to work together.

That is not the same as forgiveness. Someone having changed their mind about Trump’s presidency does not change a fundamental fact. In 2021 Trump did everything he could to instigate an insurrection. His efforts succeeded in creating the attack on the Capitol. Then he waited before doing anything to call it off, clearly hoping it would succeed. He even said things that amounted to inviting the insurrectionists to attack his own vice president, Mike Pence.

The insurrection was an attempt to overthrow everything that is most basic about our country: the democratic transition of power, democracy itself, and all the rights that our Constitution acknowledges. If who holds power becomes a matter of who is more violent, that destroys the foundation everything else is built on. A leader chosen by violence would have no hesitation in using violence to get anything else he wants. If he doesn’t like you, he might have you arrested and jailed with no process, or a fake process. And if people aren’t safe in that way, then they have no rights.

What is ironic is that even the insurrectionists were giving up their rights. They were convenient allies for Trump in that moment, but if the next day he had wanted their property to put up some monument to himself, he would have taken it. Their rights would have been gone too. Trump tried to destroy all of the basic goodness at the foundation of what America is.

Then four years later an enormous number of people, having witnessed that insurrection live on TV, voted for him again. That is not something to be forgiven. In some ways the analogy of a battered wife fits: a man beats his wife to try to force her to stay, then later quits the violence and goes along with divorce. Yes, the wife then has to meet peacefully with him in the courthouse to process the divorce, and they may have to meet peacefully for years going forward to deal with custody issues. They have to find ways to go forward and make life work. But that does not mean she forgives him, or should. His deciding that a different way of going at things will be better for him is a chance to make life work again, but it is nothing like a reformed person who deserves forgiveness.

We’ll take all the people we can get who want to change their mind, but don’t mistake that for forgiveness, or that all is well, or that all is good now. It’s not. It’s just the practicality of having to find ways to move forward and make life work.



The post Work With Trump Voters, but Don’t Forgive Them appeared first on DCReport.org.

The Wild Saga of FDR’s $1 Million Gay Military Sex Sting


In 1919, before he was president, Franklin Delano Roosevelt approved a secret operation that led senators to say he should never be allowed to hold public office.

Had the consequences not been ruinous for the men entrapped, the story would read like a comedy.

Franklin Delano Roosevelt — before he served as 32nd president of the United States — in 1919 approved a secret operation to rid the U.S. Navy in Newport, Rhode Island, of “cocksuckers and rectum receivers.” Their method? Volunteer agents would have gay sex and then tell on the sailors they had sex with for being gay.

Roosevelt was serving as assistant secretary of the Navy and, according to historian Sherry Zane, who published an article on operation “Section A,” he even consulted lawyers to make sure the plan was above board.


Agents often made sure to get evidence more than once.

“It wouldn’t just take one time, like the covert op would have sex with someone, like three or four times before they would get the person,” Zane said.

Why did the agents need multiple encounters for evidence?

“Well, that’s questionable,” Zane said. “On the one hand, one of their arguments might have been that, you know, well, they wanted to make sure, right? Like they wanted to have enough evidence. And then there’s a lot of questions, well, they just enjoyed having sex with these men.”

A court martial log labeled “Naval Training Station, Newport, R.I.” documents sailors tried in 1919, listing charges such as “sodomy” and “scandalous conduct,” with some men acquitted. (National Archives)

Regardless of the reasoning, Congress and the American public were not amused. Rhea Debussy, a lecturer at Ohio State University, wrote about the scandal in her new book, “The Lavender Bans,” which tracks queer history in the U.S. military. Debussy noted that the Navy allocated $50,000 to the operation, the equivalent of just over a million dollars today.

Just over a million dollars paid for sailors to have gay sex. To root out gay sex.

“On the policy end of things, we end up in front of a congressional committee, and the congressional committee is, like, you did what?” Debussy said.

Twenty-two sailors were entrapped and charged with “deviancy” in Section A’s operations. An additional 16 civilians also got caught up in the busts, said Zane.

“There was this fear by American mothers about sending their sons … into port cities where they associated cities with vice, so the Navy wanted to clean up those areas to make mothers feel safer,” said Zane. “If you think about it, it’s about the military having this power to get rid of so-called perverts and degenerates without needing legal authority.”

Not all men were treated equally. Men labeled as “tops” were seen as less gay or not gay at all and punished less severely. Men labeled as “bottoms” or “effeminate” were punished most severely.

The consequences of being court-martialed were severe and life-changing, Debussy said.

In some instances, men were sentenced to 20 years in prison.

“There are so many ways in which like this criminalization of queer identity, particularly in the context of the military, has a ripple effect throughout these men’s lives, not even just talking about a prison sentence, but talking about the stigma that comes with a dishonorable discharge, the lack of benefits, the lack of respect, all of these things that follow you,” she said.

Perhaps most surprisingly, however, the ordeal nearly derailed Roosevelt’s career. According to Zane, senators were “utterly shocked” and “strongly advised Roosevelt never be allowed to hold public office again.”

He would go on to be elected to four terms as president, and history would largely forget the scandal.

This article was originally published by The 19th on April 10, 2026. Click to read the original.



The post The Wild Saga of FDR’s $1 Million Gay Military Sex Sting appeared first on DCReport.org.

Trust and medicine: Marcella Alsan on Econ to Go

 One of the big lessons of market design is that markets need social support to work well. That applies with particular force to the market for medical care, which (for its sins) isn't universally trusted.  Neale Mahoney interviews my remarkable colleague Marcella Alsan about her work, starting with her QJE paper on the downstream consequences of the infamous Tuskegee experiment:

Alsan, Marcella, and Marianne Wanamaker. “Tuskegee and the Health of Black Men.” The Quarterly Journal of Economics 133, no. 1 (2018): 407–455.

 

And here's the Econ to Go podcast: 

 

"In this episode of Econ To Go, Neale Mahoney sits down with Stanford physician-economist and MacArthur Fellow Marcella Alsan to explore how trust and representation shape the U.S. health care system. Her research shows that historical events like the Tuskegee Syphilis Study continue to affect healthcare use and health outcomes today, and that trust isn’t abstract, it’s measurable. The conversation also highlights how trust can be built, how under-representation in clinical trials can influence both physician behavior and patient trust, and other key themes, including:

(01:33) The mistrust problem
(06:50) Representation as remedy 
(12:18) Clinical trials and trust in data 
(24:04) Eroding trust across the system" 

Struan Moffett on South Africa (from my email)

I think one important point you missed is that South Africa’s recent (and ancient) history has forced the population to work quite aggressively through racial differences at speeds that other developed nations have not. ‘Racial harmony’ would be a stretch, but I would say that most (all?) South Africans have a ‘racial understanding’. South Africa is also very post-racial in the sense that most understand racial differences to actually be cultural differences – for myself, growing up English in Durban, I felt more of a kinship with educated Indians than with the (white) Afrikaners. From a strictly Western perspective it would make absolutely no sense, but the English and the Afrikaners (both ‘white’) couldn’t be more different!

Here is my original post.

The post Struan Moffett on South Africa (from my email) appeared first on Marginal REVOLUTION.


The New Defense Budget

We need to talk about the president’s proposed 2027 defense budget. It’s not like there’s been a shortage of reporting about it. But even with all that, I don’t think people have really absorbed the extent of it, its significance, the scale of growth. The president wants to increase the defense budget by more than 40%. That comes on top of his request for $200 billion to fund his current war with Iran.

It’s important to appreciate that there’s simply no way for the Pentagon to productively absorb that scale of resources on that timescale. Again, this is an increase of more than 40% on a budget that is already massive in absolute and relative terms. Even if you try to think through what a diversion of that scale means, you still won’t quite grasp it; I’m not able to fully grasp it myself.

We have to see this in the context of the already massive diversion of cash to ICE and the mass deportation and detention system, which the government also cannot remotely absorb. And finally, there is the now quite open admission from Republicans and Trump himself that they expect all of this to come with massive cuts to the social safety net. It is a huge reorientation of the entire federal government, from a modern government focused primarily on supporting and protecting its citizens to one focused on, and built for, force and violence.

How Do You Deprogram an Electoral Autocracy?

Hungary’s prime minister Viktor Orbán has become a kind of godfather of competitive authoritarianism, an autocratic visionary for the 21st century whom right-wing parties around the world seek to emulate. Trump’s second term draws directly from his model, with the various thought leaders of that movement making their admiration plain. Orbán’s is a system in which elections continue, giving the country the appearance of democracy, but it is just that: an appearance.

Or so the thinking has gone. There is some irony in the fact that, according to polls, Orbán is on track to lose reelection on Sunday to a former member of his Fidesz Party, Péter Magyar, who has won voters over by denouncing the regime’s corruption and incompetence. The question is no longer whether the country’s electoral mechanisms are fair; Sunday will test whether they are rigged enough to withstand the overwhelming backlash Orbán is now facing. JD Vance and Vladimir Putin are, in various ways, scrambling to save their ideological ally.

If Magyar’s party, Tisza, does win on Sunday, it could become the first step in a long process of de-Orbánization, on which we have a great piece up this afternoon. Political scientist Gabriela Greilinger walks through what will have to happen to unwind the prime minister’s grip on power. He and legions of his loyalists have burrowed deep into the machinery of Hungarian government, and extracting them will not be quick or easy.

Many of Fidesz’s structural advantages are locked in through cardinal laws, which require a two-thirds majority to amend or abolish. In addition, Fidesz is deeply entrenched in the state, having placed party loyalists in top positions across key public institutions. A possible Tisza government will therefore first need to replace these Fidesz loyalists who were nominated to serve beyond a legislative period and could obstruct a new government’s work. This includes, for example, the fiscal council, packed with Fidesz loyalists, which can veto the new government’s budget proposal, potentially leading to snap elections.

[…]

As a result, not only the future balance of power in Hungary but also the prospects of de-Orbánization remain uncertain and will depend on the outcome of Sunday’s election. Knowing what it stands to lose, Fidesz has already begun sowing doubt about the integrity of the election and potential foreign interference — even though the Fidesz government itself poses the greatest threat in these regards. In case of an electoral defeat, Fidesz might double down on the “big lie.”

But the first step is Sunday’s election, and any potentially Trump-like efforts Hungarians might see to reverse it.

The Artemis II mission has ended. Where does NASA go from here?

The Artemis era well and truly began Friday evening when a shiny spacecraft that had traveled 700,000 miles around the Moon, carrying four astronauts, splashed down in the Pacific Ocean off the coast of California.

For NASA, for its international partners, and for all of humanity, the successful conclusion of the Artemis II mission marked a return to deep space by our species after more than half a century.

It was a spectacular achievement, and NASA deserves credit for making something that is very difficult look relatively easy. But it also raises an important question: What comes next?


Here's what to expect from the fiery, 14-minute return of Artemis II

Death, taxes, and the gravitationally bound return of the Artemis II mission on Friday evening. These are the only certainties in life.

Even if the four astronauts on board the Orion spacecraft discovered a serious flaw in their spacecraft today—and to be clear, from recent images reviewed by NASA experts, everything looks just fine—there is no chance of significantly altering the Artemis II mission’s inexorable return through Earth’s atmosphere on Friday. They're coming back one way or another.

Splashdown is predicted to occur at 8:07 pm ET (00:07 UTC Saturday), a few hundred miles off the coast of Southern California. In large and important ways, this is the most critical phase of the lunar flight. Here, then, is what to expect later today.


Rocket Report: Chinese version of Falcon 9 fails; Artemis depends on rapid heavy lift

Welcome to Edition 8.36 of the Rocket Report! Thank you for your indulgence of our missing the report last week, as we focused on the launch and progress of the Artemis II mission. And we are so thrilled it has been going smoothly, with brilliant imagery of the far side of the Moon. Of course, arguably the most difficult part of the flight remains ahead of the crew and Orion spacecraft: atmospheric reentry on Friday evening. We will, of course, have full and continuing coverage for you.

As always, we welcome reader submissions, and if you don't want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Alpha rocket may launch offshore. Seagate Space Corporation announced on Monday a "memorandum of understanding" with Firefly Aerospace to explore the development of an offshore launch platform that enables a sea-based launch capability for the Alpha rocket. Seagate Space said it will work closely with Firefly to mature the design of an integrated offshore launch system capable of supporting Alpha.


Friday Squid Blogging: Squid Overfishing in the South Pacific

Regulation is hard:

The South Pacific Regional Fisheries Management Organization (SPRFMO) oversees fishing across roughly 59 million square kilometers (22 million square miles) of the South Pacific high seas, trying to impose order on a region double the size of Africa, where distant-water fleets pursue species ranging from jack mackerel to jumbo flying squid. The latter dominated this year’s talks.

Fishing for jumbo flying squid (Dosidicus gigas) has expanded rapidly over the past two decades. The number of squid-jigging vessels operating in SPRFMO waters rose from 14 in 2000 to more than 500 last year, almost all of them flying the Chinese flag. Meanwhile, reported catches have fallen markedly, from more than 1 million metric tons in 2014 to about 600,000 metric tons in 2024. Scientists worry that fishing pressure is outpacing knowledge of the stock.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Sen. Sanders Talks to Claude About AI and Privacy

Claude is actually pretty good on the issues.

Some 60 million light-years away in the southerly


mist is now open source and looking for interop

A brief update on mist, my ephemeral Markdown editor with Google Docs-style comments and suggested edits:

mist is now open source with an MIT license, and the mist repo is here on GitHub.

(Try mist now and here’s my write-up from February.)

What I love about Markdown is that it’s document-first. The formatting travels with the doc. I can’t tell you how many note-taking apps I’ve jumped between with my exact same folder of Markdown notes.

The same should be true for collaboration features like suggested edits. If somebody makes an edit to your doc, you should be able to download it and upload it to a wholly different app before you accept the edit; you shouldn’t be tied to a single service just because you want comments.

(And of course the doc should still be human-readable/writeable, and it’s cheating to just stuff a massive data-structure in a document header.)

So mist mixes Markdown and CriticMarkup – and I would love it if others picked up the same format. If apps are cheap and abundant in the era of vibing, then let’s focus on interop!
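For readers who haven’t seen CriticMarkup before, here’s a minimal sketch of what an annotated Markdown document looks like using the standard CriticMarkup marks (addition, deletion, substitution, comment, highlight); exactly how mist maps its comments and suggested edits onto these is best checked against the repo:

```markdown
# Trip notes

We should leave {~~after lunch~>before 9am~~} to beat the
ferry queue.{>>Can someone confirm the sailing times?<<}

Pack {--warm--}{++waterproof++} layers and a
{==spare battery==}{>>Maybe two?<<}.
```

Because the annotations are plain text, the document survives a round-trip through any editor, and accepting or rejecting a suggestion is just a local find-and-replace.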

With mist itself:

Several people have asked for the ability to self-host it. The README says how (it’s all on Cloudflare naturally). You can add new features to your own fork, though please do share upstream if you think others could benefit.

And yes, contributions welcome! We’ve already received and merged our first pull request – thank you James Adam!


No, a document editor is not what we’re building at Inanimate. But it’s neat to release small useful projects that get made along the way. btw subscribe to our newsletter.



Four astronauts are back home after a daring ride around the Moon

Slamming into the atmosphere at more than 30 times the speed of sound, NASA’s Orion spacecraft blazed a trail over the Pacific Ocean on Friday, returning home with four astronauts and safely capping humanity’s first voyage to the Moon in nearly 54 years.

Temperatures outside the capsule built up to some 5,000 degrees Fahrenheit as a sheath of plasma enveloped the Orion spacecraft, named Integrity, and its four long-distance travelers, temporarily blocking radio signals between the Moon ship and Mission Control in Houston. Flying southwest to northeast, the spacecraft steered toward a splashdown zone southwest of San Diego, where a US Navy recovery ship held position to await the crew’s homecoming. Ground teams regained communications with Orion commander Reid Wiseman after a six-minute blackout.

Airborne tracking planes beamed live video of Orion’s descent back to Mission Control, showing the capsule jettison its parachute cover and deploy a series of chutes to stabilize its plunge toward the Pacific. Then, three larger main chutes, each with an area of 10,500 square feet, opened to slow Orion for splashdown at 8:07 pm EDT Friday (00:07 UTC Saturday).


Falling Apart

April 9, 2026

The ceasefire President Donald J. Trump announced Tuesday night fell apart almost immediately. Israel complained that it hadn’t been consulted, and Prime Minister Benjamin Netanyahu insisted Israel did not accept an end to its bombardment of southern Lebanon as a way to dislodge Iran-backed Hezbollah militants. Steven Scheer of Reuters noted today that Israel has been under a state of emergency that halted the work of the judicial system, but with the end of the war, Netanyahu’s trial for corruption is scheduled to begin again on Saturday.

Iran has been permitting certain ships to pass through the Strait of Hormuz, but responded to Israel’s continued bombing by closing the strait again.

Vice President J.D. Vance said there was a “legitimate misunderstanding” about whether the ceasefire included Lebanon. “We never made that promise,” he said. But in fact, Pakistani prime minister Shehbaz Sharif, who posted the terms of the ceasefire on Tuesday, noted that the agreement did include a ceasefire in Lebanon. He tagged Vance in the post.

As more information about the achievement of the ceasefire became known, it reflected poorly on Trump. Humza Jilani, Abigail Hauslohner, and Demetri Sevastopulo of the Financial Times reported yesterday that while Trump claimed Iran was begging for a deal to end hostilities, it was actually the Trump administration that was pushing Pakistan to broker a deal with Iran. Tyler Pager and Katie Rogers of the New York Times reported that the White House was helping to craft Sharif’s social media statements, suggesting Trump “was actively looking for a way out of the crisis” as his own imposed deadline drew closer on Tuesday evening.

Although Defense Secretary Pete Hegseth claims the U.S. has had a “historic and overwhelming victory” that achieved “every single objective,” David S. Cloud of the Wall Street Journal wrote yesterday that Iran saw the ceasefire as a “triumph” because it had survived a 38-day barrage from the United States and Israel and because it had gained control over the Strait of Hormuz, inflicting deep damage on the U.S. economy. Iran claimed the U.S. had suffered “an undeniable, historic, and crushing defeat.” Iran’s new leadership is even more anti-Western than the previous leadership, killed in the early days of the U.S.-Israeli strikes.

Yesterday the president posted his own interpretation of the terms of the agreement, but they were aspirational and asked for Iran to agree to terms that were less advantageous for the U.S. than the Joint Comprehensive Plan of Action (JCPOA) that President Barack Obama negotiated in 2015 and Trump tore up in 2018.

The actual terms of the ceasefire agreement were murky. On Wednesday, Iran released its version of the points of the agreement; the White House said those points weren’t the basis for the ceasefire.

Also yesterday, Trump suggested the U.S. was considering joining the Iranians in demanding tolls for ships passing through the Strait of Hormuz. “We’re thinking of doing it as a joint venture. It’s a way of securing it,” he told journalist Jonathan Karl. But today Trump posted: “There are reports that Iran is charging fees to tankers going through the Hormuz Strait—They better not be and, if they are, they better stop now!” Hours later, he added: “Iran is doing a very poor job, dishonorable some would say, of allowing Oil to go through the Strait of Hormuz. That is not the agreement we have!”

By Wednesday night, Trump was backing away from his celebratory statements about the ceasefire. “All U.S. Ships, Aircraft, and Military Personnel, with additional Ammunition, Weaponry, and anything else that is appropriate and necessary for the lethal prosecution and destruction of an already substantially degraded Enemy, will remain in place in, and around, Iran, until such time as the REAL AGREEMENT reached is fully complied with. If for any reason it is not, which is highly unlikely, then the ‘Shootin’ Starts,’ bigger, and better, and stronger than anyone has ever seen before. It was agreed, a long time ago, and despite all of the fake rhetoric to the contrary—NO NUCLEAR WEAPONS and, the Strait of Hormuz WILL BE OPEN & SAFE. In the meantime our great Military is Loading Up and Resting, looking forward, actually, to its next Conquest. AMERICA IS BACK!”

Meanwhile, Jonah Kaplan and Michael Kaplan of CBS News reported today that survivors of the deadly March 1 attack on U.S. forces in Kuwait, which killed six service members and wounded over 20 more, disputed Defense Secretary Hegseth’s description of the drone that hit their position as a “squirter,” a term suggesting it squirted through the defenses of a fortified area. The survivors say their position was not fortified and was dangerously exposed to attack.

Although Congress is not scheduled to conduct business again until April 13, Democrats were in Washington, D.C., today to try to force Republicans to vote on a war powers resolution to end the war on Iran. Republicans have steadfastly refused to discuss Trump’s attack on Iran, even on Tuesday after Trump threatened that “a whole civilization will die tonight, never to be brought back again.” That pattern continued today. The Republican in charge of the House, Chris Smith of New Jersey, refused to recognize the Democrats and ended the day’s session immediately. He later said the Democrats should get behind the ceasefire agreement.

This afternoon, out of the blue, First Lady Melania Trump called the White House press pool to hear a statement. The first lady strode to a podium bearing the presidential seal—always reserved for the president alone—and announced that she had never been friends with sex offender Jeffrey Epstein, that she had never had a relationship with Epstein or his “accomplice” Ghislaine Maxwell, and that the “lies linking me with the disgraceful Jeffrey Epstein need to end today.” She called for Congress to hold hearings for Epstein’s victims, and concluded that “Epstein was not alone. Several prominent male executives resigned from their powerful positions after this matter became widely politicized.”

Her six-minute statement appeared to have been written by someone else: she could not pronounce some of the words in it, like “trivial,” and she said “calculating” and “convinced” in place of “circulating” and “convicted.” Notably, she did not mention her husband even as her unexpected statement injected the issue of the Epstein files and his appearance in them squarely back into the news. When she was done speaking, she turned and walked back out of view the way she had come, as reporters shouted questions.

Trump told a reporter he did not know the first lady was preparing to give a speech, but a source told CNN’s Betsy Klein that Trump had been alerted that she was going to make the statement. If his actions afterward were any indication, he wasn’t keen on it.

This afternoon, Trump began posting wildly. At 4:28 his social media account lashed out at former allies Tucker Carlson, Megyn Kelly, Candace Owens, and Alex Jones, who don’t support Trump’s war in Iran. Trump said “[t]hey’re stupid people, they know it, their families know it, and everyone else knows it, too!” He reiterated his false claim that he won “in a LANDSLIDE,” and suggested MAGA voters love him. Those standing against his Iran adventure are “not ‘MAGA,’” he wrote; “they’re losers, just trying to latch on to MAGA.”

Then, at 5:28, Trump’s social media account posted: “The Wall Street Journal, one of the worst and most inaccurate ‘Editorial Boards’ in the World, stated that I ‘declared premature victory in Iran.’ Actually, it is a Victory, and there’s nothing ‘premature’ about it! Because of me, IRAN WILL NEVER HAVE A NUCLEAR WEAPON and, very quickly, you’ll see Oil start flowing, with or without the help of Iran and, to me, it makes no difference, either way. The Wall Street Journal will, as usual, live to eat their words. They are always quick to criticize, but never to admit when they’re wrong, which is most of the time! President DONALD J. TRUMP”

And then, at 7:49 tonight, the president’s social media account posted a graphic video of a man killing a woman with a hammer. The screed that accompanied the video attacked Haitian immigrants, former president Joe Biden, and “the Radical Democrats in Congress.” The post echoed the usual vicious racism to which Trump turns to feed his base. But it is hard to miss that hours after his wife gave an unexpected press conference about Jeffrey Epstein, Trump posted a video of a woman’s murder.

Notes:

https://www.wsj.com/livecoverage/iran-war-2026-trump-deadline-latest-news/card/exclusive-israel-was-informed-late-about-cease-fire-deal-and-wasn-t-happy-s18QUNCt1s4fL60Rnn03

https://www.ft.com/content/249b9255-c448-492b-88bf-098d97de4159?syn-25a6b1a6=1

https://www.wsj.com/world/middle-east/a-closer-look-at-irans-10-demandsand-which-the-u-s-might-accept-c0148284

https://www.cbsnews.com/news/iran-war-kuwait-drone-attack-survivors-us-army/

https://www.wsj.com/world/middle-east/why-iran-thinks-it-won-the-war-despite-huge-military-losses-94c60ac7

https://www.politico.com/news/2026/04/08/us-not-won-iran-war-00864337

https://www.nytimes.com/2026/04/08/world/middleeast/trump-pakistan-tweet-iran.html

https://www.reuters.com/world/middle-east/halt-iran-attacks-means-netanyahus-corruption-trial-will-resume-sunday-2026-04-09/

https://www.mediaite.com/media/news/trump-denies-knowing-about-melanias-epstein-presser-beforehand-amid-cnn-report-that-he-got-a-heads-up/

https://www.theguardian.com/us-news/2026/apr/09/house-democrats-trump-war-powers-resolution-iran

https://www.ms.now/rachel-maddow-show/maddowblog/against-a-backdrop-of-war-congressional-republican-embrace-their-own-irrelevance

https://www.hks.harvard.edu/centers/carr-ryan/our-work/carr-ryan-commentary/whole-civilization-will-die-tonight-day-american

https://www.nytimes.com/2026/04/08/us/politics/trump-iran-congress-republicans.html

https://chrissmith.house.gov/news/documentsingle.aspx?DocumentID=415455

https://thedigestonline.com/news/chris-smith-new-jersey-iran-war-powers-resolution-blocked/

YouTube:

watch?v=vyn-_Tsl5A0

Bluesky:

robertscotthorton.bsky.social/post/3miyja6jlds2s

gtconway.bsky.social/post/3miydz4jshs2e

mattgertz.bsky.social/post/3miz2i6wfzk2l

ericcolumbus.bsky.social/post/3miz2tcgha22i

ronfilipkowski.bsky.social/post/3mj2ohpzfjk2d

bgrueskin.bsky.social/post/3mj3heuvp6k2l

acyn.bsky.social/post/3mj3hcbkqye2g

paleofuture.bsky.social/post/3mj3ns6st322f

factpostnews.bsky.social/post/3mj3n3emgfh2y

meidastouch.com/post/3mj2yr2k4s22w

charles.littlegreenfootballs.com/post/3mj3zstd2ug2q

chadbourn.bsky.social/post/3mj3tji7vrs22

paulballen.bsky.social/post/3mj3r34scbc2p

paleofuture.bsky.social/post/3mj3ucnitas2k


Driving cross-country

I have driven cross-country four times, at least if you count a 3/4 trip as valid.  I also have driving experience in virtually all states, including Hawaii and Alaska, neither of which would be part of typical cross-country travel.

I recommend this mode of transport highly, especially for the United States.  Here are a few observations:

No matter which route you take, so often Mexican food is your best option.

I most prefer the southern route, involving Memphis, Texas, and southern Utah/north rim of the Grand Canyon.  Do I have to tell you no major highways?

The extreme northern route is better than the middle route.  Visit Duluth.

The music you bring is essential.  While this will depend on your taste, in general try to have some regional music to match your route.  Dylan and also folk music sound good in most parts of the country.  CDs can be a better medium than online music for these trips.  Do not listen to music when you start your day’s drive, however, as you will end up burnt out.  Save it for after a few hours of driving.  Nor should you listen to too much high energy music.  Woody Guthrie is better than Led Zeppelin in this setting.

How much you should roll down car windows, vs. relying on air conditioning, is a critical decision.  The correct answer will depend on the route and time of year, but please do not screw this one up.  Usually I like windows down, but with raised windows you can hear the music better.

Salads in the Midwest can be good.

In Texas and Oklahoma you may see some amazing storms.  Texas is the best state for random food stops.

Use paper maps; GPS may bring you along too efficient a route.

Issues of children aside, optimal group size is two, no larger.  To avoid least common denominator effects.

You can do these trips at any pace you want, even an hour in a place can teach you a good deal.

You could do a trip simply by stopping in every interesting place in New Jersey, one of the smallest states.

I prefer Vermont to New Hampshire, at least for driving purposes.  I also prefer Montana to Wyoming, the latter for me being beautiful but somehow quite a boring state outside of Yellowstone?  You cannot spend too much time in Utah.

Oregon is one state where I never have been driving.  Is that a great loss?  I know only Portland there.

Driving cross country, or only parts of it, is the very best way to see America.

The post Driving cross-country appeared first on Marginal REVOLUTION.


Eric Swalwell is toast.

So I’m not going to pretend to know the details and intricacies of Eric Swalwell’s alleged sexual encounters with a former staffer, who reportedly “was too drunk to consent.”

I don’t know what happened. Or didn’t happen. I don’t know who the woman is, what role she served, how this initially came out.

What I do know is, without question, Eric Swalwell is toast.

He will not become California’s next governor.

Is this fair? Innocent until proven guilty and all that? Not entirely, at least in a legal sense. But as guard Micheal Ray Richardson said of the 1981 New York Knicks, “The ship be sinking.” As we speak, multiple staffers have bolted the Swalwell campaign, and myriad groups have withdrawn their support. Rusty Hicks, the Democratic Party chairman, has yet to call for Swalwell to resign, but he referred to the allegations as “deeply disturbing.” For his part, Swalwell released a pedestrian shit-I’m-in-some-trouble statement: “These allegations are false and come on the eve of an election against the frontrunner for governor. For nearly 20 years, I have served the public — as a prosecutor and a congressman and have always protected women. I will defend myself with the facts and where necessary bring legal action. My focus in the coming days is to be with my wife and children and defend our decades of service against these lies.”

And here’s the thing …

Well, two things …

First, were Eric Swalwell up 10 … 15 … 20 points, maybe (maybe) he could somehow survive the fallout. Hell, Bill Clinton and Donald Trump won presidencies. But this is a Spandex-tight three-way race, with Tom Steyer and former Congresswoman Katie Porter nipping at Swalwell’s heels. The last poll I saw had Swalwell at 12 percent, Steyer at 11, Porter at 7. Bro’s about to plummet.

Second, there’s something that (thank goodness) separates the modern Democratic Party and modern Republican Party, and it’s this: We (generally) don’t embrace people who brag about “grabbing women by the pussies,” people who mock women for their looks and weight, people who duck backstage to see naked pageant contestants, people who (most certainly) fucked around with 15-year-olds, people who paid off porn stars in hush money.

Maybe, politically, that’s been a mistake. Maybe we should go all Trump, and tar Swalwell’s accuser as a liar, a fraud, a money-hungry bitch. Maybe we should be awful humans seeking nothing more than fame and power, humanity be damned.

But … no.

We don’t do that.

Thankfully.

•••

Now, I will say this: Understanding the way the political game is played, I am irked/annoyed/appalled by the apparent excitement expressed by some of Swalwell’s opponents.

Matt Mahan, no longer a viable candidate and a dude who should have dropped out long ago …

Antonio Villaraigosa, no longer a viable candidate and a dude who should have dropped out long ago …

Xavier Becerra, no longer a viable candidate and a dude who should have dropped out long ago …

Are these men truly hurting for the victim? Are they overcome by empathy and pain? Um, no. They wanna win an election and revive floundering campaigns, and here’s a sweet chance to do so.

Ultimately, what we have before us is an allegedly sick and grotesque act from a man who looked to be our next governor.

What we have before us is downfall.

There is nothing to celebrate.

American Conversations: Senator Andy Kim

The Evolution of Casino Games in Canada

If you walked into a casino in Canada today, you might catch the familiar sound of slot machines, yet you’re just as likely to find someone playing on their phone at a live-dealer table streamed in real time. Over the past 10–15 years, and especially since 2020, casino gaming in Canada has transitioned from land-based halls and charity bingo nights to online platforms, creating an industry that is expanding rapidly and increasingly embedded in everyday digital entertainment.

Charitable Games to a Booming Digital Entertainment Sector

Canada’s casino gaming journey began modestly, with provinces opening the door to lotteries, racetrack betting, and charity bingo as early models of regulated gambling. Over time, those controlled experiments evolved into a full-blown entertainment sector, projected to reach USD 8–10 billion by the end of the decade, with continued double-digit growth.

In one province alone, iGaming Ontario’s regulated market has consistently reported quarterly wagers in the tens of billions. For example, it recorded total wagers of CA$18.7 billion in Q2 of fiscal year 2024–25, with casino games generating CA$553 million in revenue during that quarter.

Data like this confirms that the transition from physical venues to online casino platforms has progressed significantly and is now led by a robust and fast-growing digital market.

The Reasons Behind The Shift 

One of the easiest ways to make sense of the shift toward online providers is to look at cost and convenience. Instead of planning a whole night out, dressing up, and driving to a destination casino, you can now enjoy the same games (and hundreds more) from your couch, during a break, or while waiting for dinner to finish cooking. The experience has become so flexible that it fits into your life rather than requiring a special trip.

Pricing also plays a significant role. Online casino games allow bets as low as CA$0.10 per spin, sometimes even lower. Compared to many other forms of entertainment, that is an appealing entry point. It means you can enjoy the excitement of gaming without having to commit to a large bankroll. For example, BetMGM online slots allow you to spin for just a few cents, making a quick gaming break fit comfortably into your entertainment budget.

Think about the hobbies most of us already budget for. A streaming service costs between CA$20 and CA$30 per month, dinner out costs between CA$40 and CA$50, and movie tickets plus snacks can reach CA$20 to CA$30 for a single evening. With casino gaming online, you choose the cost of every session. If you spin 100 times at CA$0.10, you have spent CA$10 and enjoyed a fun break for half the price of a subscription or far less than a night out.
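The arithmetic here is simple enough to sketch in code. A toy calculation (illustrative only; the figures come from the comparisons above, and real games vary their stakes, so the helper below is hypothetical):

```typescript
// Toy session-cost arithmetic using the figures quoted above.
// Hypothetical helper; stakes and spin counts are illustrative.
function sessionCost(spins: number, stakePerSpin: number): number {
  return spins * stakePerSpin;
}

console.log(sessionCost(100, 0.10).toFixed(2)); // 100 spins at CA$0.10 -> "10.00"
console.log(sessionCost(500, 0.10).toFixed(2)); // 500 spins -> "50.00", rivaling a night out
```

The same arithmetic also shows how quickly small stakes add up, which is the point of the risk discussion below.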

It is their flexibility that makes online casinos so appealing. You are in control of when you play, how long you play, and how much you spend. While it is true that frequent spins can add up, the ability to decide your own pace and budget gives you more freedom and confidence in treating casino gaming like any other fun pastime.

How Online Play Reinvented Casino Entertainment

This shift hasn’t just changed how much people spend; it has changed what the casino experience looks like. Online platforms now give players access to thousands of games 24/7, with new titles added at a speed no physical casino could ever match. The games themselves are no longer basic; gaming studios create immersive experiences built for mobile, streaming, and social play. In the Ontario market, casino wagers made up about 86% of total online gaming wagers in Q2 2024–25.

That means what used to be an in-person experience at a gaming floor has migrated to phones and tablets. Top casino apps feature video slots with cinematic themes, live-dealer blackjack and roulette streamed from studios, and interactive formats that adopt elements of gaming and social interaction. Today, your “casino” is wherever you have an internet connection.

King of spades in martini glass
Photo: christockwell via Pixabay

Hidden Risks

When viewed through the lens of entertainment value, online casino games have strong appeal. They give you flexible access anytime and a tremendous variety of game styles, from classic reels to branded video slots and live-dealer tables. If you compare a short session with other forms of leisure spending, the cost can look favourable.

Yet the risk side cannot be ignored. The small minimum bet of CA$0.10 might seem harmless, but if you spin dozens or hundreds of times, the total can reach CA$30, CA$50, or more without you thinking of it as “that big.” In Ontario’s regulated market, active player accounts spent an average of CA$308 per month. That figure illustrates the scale of “regular play” and suggests that many users drift from spontaneous sessions into regular ones.

Modern platforms also use personalised recommendations, seamless payments, and gamified features, which can make spending feel more frictionless and continuous. To enjoy gaming responsibly, it is crucial to use regulated platforms, which offer protections such as deposit limits, time-outs, and responsible-play tools.

Woman holding smartphone.
Photo by Andrea Piacquadio via Pexels

Enjoy the Spin but Stay in Control

The rise of online casino gaming in Canada shows just how much player expectations have changed. Casino entertainment is now affordable and available whenever you want it. Online platforms offer the same excitement once found only on gaming floors, now within a few taps on a phone. Low minimum bets, massive game libraries, and the comfort of home have made online gaming a practical alternative to nights out or long trips to destination casinos.

However, that accessibility should not come at the expense of overspending. When you set a budget and treat each session like any other hobby expense, casino gaming can be a fun and responsible part of your entertainment mix. You keep control, you decide the value, and you walk away when the moment feels right. Ultimately, the choice is yours. Enjoy the adrenaline and convenience, but make sure every spin fits comfortably into your lifestyle and budget.

Photo at top: Pixabay via Pexels 



The post The Evolution of Casino Games in Canada appeared first on DCReport.org.

Collections: Raising Carthaginian Armies, Part I: Finding Carthaginians

This is the first part of a series looking at the structure of the Carthaginian army. Although Carthage has an (unfair!) reputation for being a country of “peaceful merchants who tended to avoid wars,”1 Carthage was, I will argue, without question the second greatest military power the Mediterranean produced – eclipsed only by Rome. If we do not realize this, it is merely because Carthage had the misfortune to fight Rome ‘in the first round,’ as it were.

Carthage is, in particular, the only military power that ever manages to seriously challenge Rome on an even footing, blow for blow, after the Romans completed the conquest of Italy. The Carthaginian military system pushes Rome to the very brink of defeat twice, in contrast to the Hellenistic great powers, the heirs of Alexander, none of which ever force the Romans to ‘dig deep’ into their forces. Put another way: the Romans put Alexander’s heirs to bed while mobilizing against them less than a third of the military force it took for Rome to match Carthage. The Carthaginians inflicted more casualties on the Romans in a single day than all of the successor states (a label which does not include Epirus, so no Pyrrhus here; worth noting the Carthaginians beat him too) managed in pitched battle combined. And they did this more than once; I’d hazard they managed it about seven times.2

So in this series, we are going to lay out the structure of Carthage’s armies (alas, we have very little information as to the structure of their navy), because as we’ll see, the Carthaginian military system was quite complex, drawing soldiers from all over the western Mediterranean.

Now there is a bit of organizational trickiness here: Carthage drew forces from many different places at many different times. In practice, the Carthaginian military becomes visible to us as early as 480 (with the Battle of Himera) and seems to change significantly between this period and the army visible to us in the first book of Polybius, which fights the First Punic War (264-241) and the Mercenary War (241-237). Then the Carthaginian army undergoes another substantial shift visible to us, in terms of its composition, during the Barcid Conquest of Spain (237-218), such that the Carthaginian army that fights in the Second Punic War (218-201) looks very different again. And then Carthage loses its army, and so its military forces from 201 to the end of the Carthaginian state in 146 look different again.

My solution here is to structure this treatment around the largest Carthaginian mobilizations, which were those during the Second Punic War: Carthaginian numbers peaked in 215 with something on the order of 165,000 men under arms.3 We’ll work through the components of that force (operating, as it did, in multiple theaters) and for each component of it, we can then note how – as best we can tell – that specific component changed over time.

I should also note what I am not doing here: this is not a full rundown of Carthage’s military history or the Punic Wars; rather it is an outline of the components of Carthage’s land forces. I think a treatment of the Punic Wars on a similar level to our “Phalanx’s Twilight, Legion’s Triumph” series is probably worth doing, but would be a much larger and more involved series than this, because the Punic Wars are quite long conflicts with many twists and turns and often multiple simultaneous theaters. One day!

But first, as always, raising large armies of mercenaries, subject conscripts, vassal warlords and allies is expensive! If you too want to help me invade Italy with a multi-ethnic army of diverse origins in a doomed effort to stop the Roman Republic, you can help by supporting this project over at Patreon. If you want updates whenever a new post appears or want to hear my more bite-sized musings on history, security affairs and current events, you can follow me on Bluesky (@bretdevereaux.bsky.social). I am also active on Threads (bretdevereaux) and maintain a de minimis presence on Twitter (@bretdevereaux).

(Bibliography Note: Any bibliography for the lay reader looking to get to grips with Carthage likely has to begin with D. Hoyos, The Carthaginians (2010) which provides a solid foundation on understanding the Carthaginian state and society. A solid overview of Carthaginian military history is provided by J.R. Hall, Carthage at War: Punic Armies c. 814-146 (2023). For specific periods in Carthaginian military history, note J.F. Lazenby, The First Punic War: A Military History (1996), then D. Hoyos, Truceless War (2007) on the Mercenary War and D. Hoyos, Hannibal’s Dynasty (2003) on the Carthaginian conquest of Spain, before going back to J.F. Lazenby for Hannibal’s War (1978) on the Second Punic War. G. Daly, Cannae: The experience of battle in the Second Punic War (2002) has, among other things, one of the better run-downs of the composition of Hannibal’s army. On the Gauls in Carthaginian armies, note L. Baray, Les Celtes d’Hannibal (2019), alas not translated. On the Numidians, a key component of Carthage’s army, see W. Horsted, The Numidians, 300 BC – AD 300 (2021), while on the Spanish warriors who fought for Carthage, see Quesada Sanz, F. Armas de la Antigua Iberia: De Tartesos a Numancia (2010), now available in translation as F. Quesada Sanz, Weapons, Warriors & Battles of Ancient Iberia (2023), trans. E. Clowes and P.S. Harding-Vera. You can also find what little we know about Balearic slingers in the opening chapters of L. Keppie, Slingers and Sling Bullets in the Roman Civil Wars of the Late Republic, 90-31 BC (2023). Finally, one must note N. Pilkington, The Carthaginian Empire (2019), an often heterodox but equally sometimes persuasive reassessment of what we know of Carthage that is intensely skeptical of our literary source tradition and an essential read (for agreement and disagreement) if one is intending to get knee-deep in the scholarship.)

A Brief Chronology

First, before we get into the details, we should lay out the basic chronology of Carthaginian military history, because as we’re going to see, not only does Carthage draw upon a bunch of different sources of military manpower, but those sources themselves change over time in their composition and role within the Carthaginian system.

Now we should start with some background here on the nature of Carthage and its control over its core territory in North Africa. Carthage was a Phoenician colony, founded in North Africa (in modern-day Tunisia). The population was thus likely a mix of local Libyan peoples, Phoenician settlers and even other maritime peoples (Aegeans, e.g. Greeks). The Carthaginians themselves maintained a clear ideology of being Phoenicians, using a Punic language, worshiping Punic gods and making a clear connection to their mother-city of Tyre; however, some modern DNA research has suggested the actual population of Phoenician colonies might have been more genetically diverse than we have generally supposed. Of course, not every resident of Carthage was likely to be a citizen and certainly the impression we get is that some Phoenician ancestry was a requirement for full citizenship.

Via Wikipedia, a decent-if-not-perfect map of Greek and Phoenician colonization. It is worth noting when looking at this map that the Etruscans were organized into states, but not united, while the Thracians, Dacians and Illyrians were non-state peoples at this time.

Carthage was hardly the only such colony in North Africa (Utica, Thapsus, Leptis, Leptiminus, etc. were all such colonies), but there was also a substantial local Libyan population and at least initially Carthage was subordinate to those peoples; we’re told that for the first few centuries after its founding (mid-eighth century), Carthage paid tribute to the locals, a relationship that inverted quite dramatically as Carthage became stronger. Carthage seems to begin projecting power overseas seriously in the mid-to-late-500s, though we cannot always see this early process as well as we’d like. By c. 500, Carthage seems to control Sardinia and the western coast of Sicily. Some sign of Carthage’s expanding control in North Africa comes when they are able to block Dorieus (a Spartan prince) from creating a Greek colony in North Africa and then, shortly thereafter, also destroy his effort to found a colony in western Sicily, between 515 and 510 or so. Unfortunately, we’re not really well informed at all about the armies they used to do this.

Instead, Carthaginian armies first start to become really visible to us in the context of the running contest between Carthage and Syracuse for control over the rest of Sicily, which kicks off in the 480s. From the 480s to the 270s, Carthage fights a series of wars with the Greeks on Sicily, the latter generally organized around the largest and strongest Greek city there, Syracuse. There is a tendency for students to be surprised that Carthage – given its apparent power in the third century – is unable to overcome (or be overcome by) Syracuse, but it is worth remembering that Syracuse is a really big polis, on the same scale as Athens or Sparta. Recall that from 415 to 413, the Athenians, at the height of their power, throw the lion’s share of their military at Syracuse and lose effectively all of it for their trouble, so Syracuse – at least when well led and organized – is a fairly major power (inasmuch as any power other than the Achaemenids can be major) in this period.

In any case, the first Carthaginian-Greek war in Sicily begins in the 480s and ends with the Battle of Himera in 480. They’re then back at it from 409 to 405, then again from 398 to 396, then again from 383 to 381 (?), then again from 368 to 367, then again from 345 to 341 and again from 311 to 306, until finally, from 278 to 276, Pyrrhus of Epirus shows up to campaign against Carthage on behalf of the Greeks. On the one hand, at any given time in these wars, territorial control often swings wildly between Carthage and Syracuse; on the other hand, zooming out over the long term, relatively little changes and the whole thing resembles a stalemate: Carthage controls the west of the island, Syracuse the east, and the settlements in the middle either manage in the fracture-zone between the powers or submit to one or the other.

Alongside the early phases of this running warfare on Sicily, Carthage is steadily subduing the area around it in North Africa, reducing the Libyan and Phoenician settlements in what is today Tunisia to semi-autonomous subjects. Those communities remained internally self-governing, but were in practice ruled by Carthage; we’ll talk about that relationship in the next post in the series. We can’t see this process very clearly, but by c. 400 Carthage clearly seems to have control over most of its immediate surroundings. Carthage also began interacting quite early with the Numidians, the Berber peoples to the west (generally divided into two kingdoms, the Massaesylii and the Massylii), sometimes recruiting them and sometimes fighting them. Certainly by the start of the third century, if not earlier, Carthage is the dominant power in this relationship.

The Carthaginians are also clearly active in trade in Spain, though it is unclear to what degree the Phoenician settlements there fall under Carthaginian political control and when.

Thus even by c. 480, Carthage is one of the major imperial powers in the western Mediterranean, though hardly the only ‘major player’ and remains so, steadily growing in size and influence over the next several centuries. By c. 300, the Carthaginians have secured control over western Sicily, Corsica and Sardinia, have some small footholds in Spain and most importantly have secured control over most of what is today Tunisia (what the Romans would just call ‘Africa’) and have a dominant if frequently shifting position relative to the Numidians.

That set the stage for the major wars of the third century. Carthage was in a strong position in Sicily after the end of their war with Agathocles (in 306), leading the Sicilian Greeks to appeal to Pyrrhus in the 270s. Pyrrhus, arriving in 278, was able to win significant victories and pin the Carthaginians back to their last major coastal base at Lilybaeum, but was unable to take it (being unable to break Carthaginian naval control) and was subsequently forced out in 276 once his support among the Sicilian Greeks ebbed, suffering a nasty naval defeat on his way out for his trouble.

That left Carthage in a dominant position in Sicily (but still facing a potent foe in Syracuse) when, in 264, a group of mercenaries (the Mamertines) left over from Agathocles’ war, who had seized Messina, appealed – under pressure from Syracuse – to both Rome and Carthage for help. That led to a four(-ish)-way war in which two of the sides (the Mamertines and Syracuse) rapidly found themselves rendered irrelevant. The result was the First Punic War (264-241) between Rome and Carthage, fundamentally a war for control over Sicily, although the Romans did invade North Africa (unsuccessfully) in 256.

Via Wikipedia, a rough map of Carthage’s territorial control at the beginning of the First Punic War, though I’d argue this probably overstates Carthaginian control in Spain somewhat (New Carthage isn’t even founded yet!)

Carthage loses the war, with Rome consolidating control over Sicily, only to be immediately beset by a new war, the Mercenary War (241-237), when a mutiny by Carthage’s unpaid mercenaries from the end of the First Punic War set off a general revolt of its subjects in North Africa. The Carthaginians win this war, particularly under the leadership of Hamilcar Barca, who is then too politically influential to be left in Carthage, so he is packed off with an army to go do stuff in Spain. The ‘stuff’ he does in Spain from 237 to his death in 228 is to subdue nearly the entire Mediterranean coast up to the Ebro River, with that task then completed first by his son-in-law, Hasdrubal the Fair, and then by Hamilcar’s eldest son, Hannibal.

That sets the stage for ’round two’ with Rome, the Second Punic War (218-201), an absolutely massive war waged across Italy, Spain and Africa, which represents the peak military output of either Rome or Carthage (although the First Punic War, with its massive fleets, probably roughly matches it). Utterly defeated in 201, Carthage is shorn of its overseas empire and much of its more distant African holdings, essentially reduced to ‘merely’ controlling northern Tunisia. However, rapid Carthaginian economic recovery leads Rome to instigate a third war with Carthage, the Third Punic War (149-146). Unlike the previous two wars, this is not an even contest: Carthage by this point is a much smaller and weaker power than Rome. Determined Carthaginian resistance prolongs the war, but Rome is eventually able to seize the city and destroy the Carthaginian state in 146.

Via Wikipedia, a rough map of Carthaginian control at the start of the Second Punic War. This map substantially overstates Carthaginian control of the Spanish interior, however.

Now, one thing worth noting at the end of this brief, potted history is that for nearly all of this period, we have only Greek sources (Romans, writing in Latin, only really come in with the Punic Wars, and even then our earliest Roman source – Fabius Pictor – is lost, so we get him processed through a Greek – Polybius). One of the features of the history we do have of Carthage that I suspect results from this is that Carthage seems to lose a lot. But it is, at least until 264, a strange sort of losing: Carthage shows up in our sources losing major battles, but then one moves forward a few decades and Carthage’s empire is larger and more prosperous. And then Carthage loses another major battle and yet somehow, a few decades later, Carthage is even more powerful.

So either Carthage is the world champion at failing upwards or there is something going on with our sources. And it isn’t hard to guess what: our key source for Carthaginian history before 264 is Diodorus Siculus, that is, ‘Diodorus the Sicilian,’ a Sicilian Greek writing in the first century B.C. who thus very obviously has a side in Carthage’s long wars with the Sicilian Greeks. Even if Diodorus is doing his best to give us a straight story, which battles are his sources likely to remember or commemorate most prominently: the Time They Really Walloped the Carthaginians, or perhaps smaller engagements that they lost? Thus while we cannot know for certain, I suspect Carthage’s battle-record pre-264 is likely rather better than our sources suggest.

Post-264, it seems worth noting that while Carthage loses more often than they win against the Romans, they still manage to deliver Rome some pretty stunning defeats. The notion that Carthaginians are ‘peaceful merchants’ or just ‘unmilitary’ thus seems to be almost entirely empty, a nearly pure product of later stereotypes about ‘unmanly easterners’ rather than a conclusion justified by the evidence. At the very least, by the time Rome was ready to fight Carthage, the Carthaginians very much knew how to throw a punch – indeed, they would punch Rome far harder than any other foe.

That still provides some three hundred years in which Carthage is a meaningful military power whose military activities we can see, so, as you might imagine, the shape of the Carthaginian army changes a lot over that period.

Component Parts

The next thing we ought to do, to get an overall sense of the system, then, is to lay out the scale of Carthaginian forces at the height of the Second Punic War, representing the largest land mobilization that Carthage ever produced. The size of the mobilization is staggering, as is the diversity of how it was raised: like most imperial powers, Carthage’s army was a diverse medley of soldiers drawn from basically everywhere that Carthaginian power reached. The way these soldiers were incorporated into Carthage’s armies was in turn a product of what their relationship to the Carthaginian state was – citizens, subjects, vassals, allies, mercenary employees.

Our sources, most particularly Polybius, provide us enough detail to get a pretty decent accounting of Carthage’s ‘peak’ mobilization, which comes in 215. Hannibal, of course, had a Carthaginian field army at that time in Italy – he had won the Battle of Cannae (216) the year before – but there were also Carthaginian armies in Spain, Sardinia and Africa, along with an active fleet. Carthage alone of the Mediterranean powers of the era seems to have been able to match Rome’s capacity for multi-theater warfare: whereas Hellenistic kingdoms could really only have one primary theater of war at a time, both Rome and Carthage could wage multiple parallel campaigns simultaneously and did so.

So let’s break down the evidence for what we have.

We can begin with Hannibal’s army in Italy, which Polybius tells us (3.114.5) consisted of 40,000 infantry and 10,000 cavalry at the Battle of Cannae (216). We can actually work backwards, with just a little bit of guessing, to break down this army into its unit composition: Hannibal crosses the Alps with 12,000 Africans, 8,000 Iberians, and 6,000 cavalry, taking some losses in the subsequent battles but also absorbing around 9,000 Gallic infantry and 5,000 Gallic cavalry. Figuring for attrition, the composition of Hannibal’s army at Cannae has to look something like: around 10,000 African infantry, 6,000 Iberian infantry, around 8,000 mixed ‘lights’ (North African lonchophoroi, which means ‘javelin-men,’ not ‘pikemen’ as it is sometimes mistranslated, plus Balearian slingers) and 16,000 Gallic infantry to make the total. Of the cavalry, we might suspect around 5,000 were Gallic, with the rest split roughly evenly between Numidian cavalry from Africa and Iberian cavalry (both of which we’re told Hannibal had).

We then need to modify that force for Hannibal’s losses at Cannae: he lost 4,000 Gauls, 1,500 Iberians and 200 cavalry, but was reinforced late in the year (Polyb. 3.117.6; Livy 23.13.7) with 4,000 more Numidian cavalry and 40 elephants. That leaves Hannibal in 215 with an army of roughly 50,000: 10,000 African infantry, 12,000 Gallic infantry, 4,500 Iberian infantry, 8,000 mixed ‘lights’ (lonchophoroi and Balearian slingers), around 5,000 Gallic cavalry and perhaps 10,000 other cavalry, of which we might guess that maybe 2/3rds were Numidian and 1/3rd Iberian.

At the same time in Italy there is a second Carthaginian army operating in Bruttium (modern Calabria; Hannibal is operating out of modern Apulia) under the command of Hanno, with 17,000 infantry composed mostly of Roman socii who have defected to Hannibal, along with 1,200 cavalry, mostly Spanish and Numidian (Livy 24.15.2).

The thing is, Hannibal does not have Carthage’s largest army. One of the mistakes students make in assessing the Second Punic War is focusing – as most modern treatments do – almost entirely on Hannibal. But for Carthage, getting reinforcements to Hannibal is very hard (Rome at this point has a strong navy, so they can’t easily sail to Italy), while the war is also very active in Spain. Carthage had come to control the Mediterranean coast of Spain as a result of the conquests of Hamilcar Barca (we’ll discuss this more when we get to these guys in a couple of weeks) and Rome was seeking to tear that part of the empire away.

Carthage had three generals operating in Spain by 215 – Hasdrubal and Mago Barca (Hannibal’s brothers) and Hasdrubal Gisco. Livy reports the combined strength of all three at 60,000 (Livy 23.49) and once again with some careful tracking through Livy and Polybius we can basically break this force down to roughly 24,000 African infantry (a mix of Hannibal’s troops left behind and reinforcements brought by Mago), a touch less than 2,000 African cavalry, and the remainder – about 34,000 – mostly Iberian troops along with some small units of Gauls (300 from Liguria) and Balearian slingers (500). We can be fairly ‘rough’ with these numbers because we’re dealing with ‘paper strengths’ that are going to be off to some degree in any case – the point here is a rough approximation of an estimate, because our sources aren’t going to get better than that.

In addition, there was a Carthaginian army dispatched to Sardinia to try to retake it, a force Livy reports as being roughly the same size as the reinforcements Mago brought to Spain, which would mean 12,000 infantry and 1,500 cavalry, probably nearly all African (Livy 23.23.12).

Finally, Carthage maintained a force still in Africa. Hannibal had, at the war’s outset, transferred to Africa some 13,850 Iberian infantry, 870 Balearian slingers and 1,200 Iberian cavalry, while redeploying some 4,000 Metagonians (from what is today eastern Morocco) to Carthage as well.

Taking all of that together we can estimate very roughly (with some rounding) that Carthage has, under arms, in 215:

  • 50,000 African infantry
  • 17,000 Italian socii
  • 12,000 Gallic infantry
  • 52,000 Iberian infantry
  • 10,000 various ‘lights’ (including at least 1,370 Balearian slingers)
  • 21,000 cavalry of which probably roughly
    • 5,000 are Gallic cavalry
    • 5,000 are Iberian cavalry
    • 11,000 are African and Numidian cavalry (with the Numidians probably the larger share)

For a total of roughly 162,000 men under arms. Notably missing from this total are any Carthaginian citizen troops, but for reasons I’ll get to below, I do think there probably were some in North Africa. For comparison, the peak mobilizations of the major successor states (the Seleucid and Ptolemaic kingdoms) are probably around 80,000 men. Carthage is doubling that mobilization and very nearly matching Rome’s own maximum mobilization (around 185,000 men).4

Carthaginian Citizen Soldiers

Now you may have noticed something a little odd about the Carthaginian army implied by the figures above: there aren’t any Carthaginians in it. And that tends to be one of the core things that folks ‘know’ about Carthaginian armies: that these were ‘mercenary’ armies, in which Carthaginians only served as officers. That is, after all, more or less directly what Polybius tells us, and historians ancient and modern tend to take Polybius at his word. And while Polybius is being more than a little sneaky with his description of Carthaginian armies as mercenary in nature, the idea that Carthaginians didn’t serve in quantity in Carthaginian armies is at least half true, but with important geographical and chronological limitations.

Here, we are interested in the Carthaginian citizens themselves. And we begin with the first exception to the idea that Carthaginian citizens didn’t fight, the chronological one: Carthaginian citizen armies are actually very common everywhere (that is, both at home and abroad) in the fifth and fourth centuries. Diodorus (11.22.2) reports ‘Phoenicians’ in the Carthaginian army at the Battle of Himera (480), which are likely Carthaginian citizen soldiers, and we hear of Carthaginian citizen soldiers in later Carthaginian expeditions to Sicily in 409 too. As late as 339, at the Battle of the Crimissus, the Carthaginian army includes, according to Diodorus, a Sacred Band of Carthaginian citizens several thousand strong (Diod. Sic. 16.80.4), which seems to be a picked force from a larger body of Carthaginian citizens, given that he describes its members as distinguished even among the citizens for valor, reputation and wealth.

Now in most treatments the next thing that will get said is that in the third century – when both the First (264-241) and Second (218-201) Punic Wars occur – the Carthaginians changed this policy and citizens stopped serving except as officers. But I think that perhaps misses what is really happening here, and the reason has to do with the perspective of our sources: we have no Carthaginian sources or even North African sources. What we have are the reports primarily of Romans (who fought Carthage), Greeks on Sicily (who fought Carthage) and mainland Greeks like Polybius, who relied on the other two. My point is not necessarily that these sources are hostile to Carthage (though they are), but rather that their focus is directed. We are seeing Carthage like one would see a statue in a dark room lit entirely from one side: only half the statue will be illuminated.

Our sources are very interested in the armies that Carthage sends against Syracuse and Rome and almost entirely uninterested – or uninformed! – about the forces that Carthage might muster in other places. We only see Carthaginian North Africa clearly in brief snippets: when a Greek or Roman tries to invade it (310, 256, 204 and 149) or in the context of a major revolt like the Mercenary War (241-237) which draws our sources’ attention.

But what do we see whenever the action shifts to North Africa? Citizen soldiers in Carthage’s armies. While Diodorus inserts into his narrative a line about how the Carthaginians were unprepared for fighting when Agathocles (tyrant of Syracuse) lands his army in Africa in 310, they quickly manage to put together a citizen soldier army – Diodorus says some 40,000 soldiers, though Diodorus’ numbers here are often useless (Diod. Sic. 20.10.5-6). We don’t hear anything about citizen soldiers during Rome’s unsuccessful invasion in 256 (during the First Punic War), but when Carthage’s expeditionary army (returned from Sicily at the war’s end) revolts in 241, Carthage immediately raises a citizen army to put down the revolt and succeeds in doing so (Polyb. 1.73.1-2). Likewise, when P. Cornelius Scipio, soon-to-be Africanus, lands in North Africa in 204, the Carthaginians raise citizen forces (alongside all of their other troops) to try to stop him, and Carthaginian citizens formed a major part of Hannibal’s army at Zama (202; Polyb. 15.11.2-4), including both infantry and cavalry.

And of course, when Rome returned for the final act in the Third Punic War (149-146), Carthage – largely shorn of its empire – responded by mobilizing a citizen force to defend the city, alongside freed slaves (App. Pun. 93-5), and resisted fairly stoutly.

In short, with the exception of M. Atilius Regulus’ invasion of 256, every time Carthaginian Africa is ‘illuminated’ for us, we see Carthaginian citizen forces. Now our sources often present these forces as basically ‘scratch’ forces, raised in a panic, but while the Carthaginians sometimes lose the battles that result, these armies are not a ‘rabble’ by any means. Carthaginian citizen forces were evidently sufficient to defeat Carthage’s own mercenaries and the Libyan revolt in 241. At Zama (202), the Carthaginian citizens form the second rank of Hannibal’s army, and while Polybius is quick to lean into stereotypes, calling them cowards (for not reinforcing the first battle line, composed of mercenary troops), in practice what he actually describes is that the Carthaginian citizen line is able to throw the Roman hastati back and is only forced to retreat by the advance of Scipio’s second line of principes (Polyb. 15.13.5-8).

My suspicion is thus that Carthaginian citizen soldiers may have never fully gone away, but rather may have been confined largely to operations in North Africa. It makes a degree of sense that the Carthaginians might want to wage their imperial wars almost entirely with auxiliary troops recruited from their dependencies (or paid for as mercenaries), with Carthaginian citizens serving only as generals and officers, while reserving their citizen soldiers for operations closer to home. And there must have been more such operations than we are aware of. Remember: Carthaginian armies really only become fully visible to us as they interact with Greek and Roman armies, but obviously Carthage must have accomplished the subjugation of much of North Africa, must have managed to subordinate (if not subdue) the Numidians, must have been able to hold that control through military strength (for our sources are very clear that Carthaginian control was often resented) and finally must have been able to deter the Saharan, Berber and Libyan peoples on their borders.

In short, there is almost certainly quite a lot of Carthaginian campaigning in Africa which we can’t see clearly and it is possible that Carthaginian citizen soldiers continued to be active in these operations throughout. In that case, Carthage may well have kept its citizenry in some degree of readiness for war, which may explain why substantial bodies of Carthaginian citizen soldiers seem to be available and militarily effective so quickly when Carthage’s core territory in Africa is threatened. That said, short of some very convenient (and very unlikely) Punic inscriptions showing up, this remains merely a hypothesis; our sources offer no hint of this and indeed Polybius states the opposite, that the Carthaginian citizenry was broadly demilitarized.

Carthaginian Arms and Tactics

Of course, if Carthaginian citizens did sometimes fight, that raises a key question: how did Carthaginian citizens fight? With what arms and tactics?

The first answer is that our evidence is infuriatingly limited here. After all, Carthaginian citizen soldiers do most of their fighting visible to us relatively early, when our main sources are writers like Diodorus, who – because he is writing a universal history covering everything from the earliest mythology (he includes the Fall of Troy) down to his own day (mid-first century B.C.) – rarely gives a lot of details. Normally we might supplement this with visual evidence in artwork or equipment deposited in graves, but there is very, very little of this. That point has sometimes been taken to reflect Carthage’s ‘unmilitary’ character, but it is worth noting that prior to 146, we have similarly little archaeological or representational evidence of the Roman Republic’s armies, and no one accuses the Romans of being ‘unmilitary’ in character.

What evidence we do have suggests that the Carthaginians largely fought as heavy infantrymen in a manner not too different from Greek hoplites. Now I want to caveat that immediately to say this doesn’t mean they fought as hoplites – it is certainly possible, but by no means necessary or certain, that the Carthaginians adopted weapons or tactics from the Greeks. The Levant had its own infantry traditions on which the Carthaginians might have drawn, which included heavy armor and large shields. At the same time, as noted, it seems like Phoenician colonies drew in a lot of Aegean (read: Greek) settlers, so it would hardly be shocking if the Carthaginians did adopt Greek armaments.

However, I want to pause for a moment to draw one point of important clarification: at no point did any Carthaginian, or any soldier in Carthaginian service that we know of, fight in a Macedonian-style pike phalanx. The idea that the Carthaginians adopted this style of fighting is based entirely on an old mistranslation of lonchophoroi as ‘pikemen,’ when in fact the lonche is a light spear and these are light infantry javelin-men fighting in support of African heavy infantry. We’ll talk more about them next week.

We have a few small engravings (small engraved impression seals called ‘scarabs’) from Carthage and Phoenician settlements in Sardinia which depict soldiers; they show men with large, apparently circular shields and spears.5 Numidian royal monuments, which may be drawing on Carthaginian material culture (it would have been high status), feature large round shields as a design motif. One intriguing monument, a statue base excavated in Rome, has been supposed by Ann Kuttner to possibly be a Numidian commission showing Numidian arms (or perhaps the captured arms of Carthaginians?); it shows a large round shield of the same type seen on the Numidian royal monuments, alongside tube-and-yoke cuirasses (two of which are set up as trophies) and plumed helmets of the pilos/konos type (a kind of Hellenistic Greek helmet).6 And our literary sources regularly describe the Carthaginians as forming heavy infantry battle lines (using the word φάλαγξ, phalanx, to describe them) and report Carthaginians as wearing helmets and armor, with large shields and spears.7

Via the British Museum (inv. 127214), a fifth century Phoenician scarab showing a warrior wearing a cuirass, greaves, a helmet, a large (round?) shield and carrying a spear, found in Sardinia. While the curator’s description assumes this warrior is Greek, Carthaginian seems far more likely given the find location, art-style and equipment.

On that basis, both Gregory Daly and Joshua Hall (both op. cit.) conclude that the Carthaginians must have fought rather a lot like Greek hoplites and I think this is both basically correct and probably the best we can do. By the Punic Wars, we have hints that Carthaginian troops (both citizen and subject from North Africa) may also be adopting Italic equipment, which I’ll get into more in the next post: by the end of the Second Punic War and certainly by the Third Punic War, Carthaginian soldiers may have looked actually quite ‘Roman’ in their kit.

All of that said, as is obvious from the forces Carthage arrayed for the Punic Wars, Carthaginian armies included far more than just citizen soldiers – indeed, many Carthaginian armies evidently included few if any Carthaginian citizens outside of the officer corps. So to better understand Carthage’s armies, we are going to have to branch out to think about their other forces, which we’ll begin to do next week.

Friday 10 April 1663

Up very betimes and to my office, where most hard at business alone all the morning. At noon to the Exchange, where I hear that after great expectation from Ireland, and long stop of letters, there is good news come, that all is quiett after our great noise of troubles there, though some stir hath been as was reported.

Off the Exchange with Sir J. Cutler and Mr. Grant to the Royall Oak Tavern, in Lumbard Street, where Alexander Broome the poet was, a merry and witty man, I believe, if he be not a little conceited, and here drank a sort of French wine, called Ho Bryan,1 that hath a good and most particular taste that I never met with.

Home to dinner, and then by water abroad to Whitehall, my wife to see Mrs. Ferrers, I to Whitehall and the Park, doing no business. Then to my Lord’s lodgings, met my wife, and walked to the New Exchange. There laid out 10s. upon pendents and painted leather gloves, very pretty and all the mode. So by coach home and to my office till late, and so to supper and to bed.


Ed Bindels’s Apple Museum in Utrecht, Netherlands

This new museum in Utrecht (about 30–40 minutes south of Amsterdam) seems just astonishing. The rainbow wall of iMacs alone is incredible.

(Via Juli Clover.)


MacOS Seemingly Crashes After 49 Days of Uptime — a ‘Feature’ Perhaps Exclusive to Tahoe

Jason Snell, writing at Six Colors:

Software developer Photon, whose product requires running a bunch of Macs to connect to iMessage, discovered a pretty major bug:

Every Mac has a hidden expiration date. After exactly 49 days, 17 hours, 2 minutes, and 47 seconds of continuous uptime, a 32-bit unsigned integer overflow in Apple’s XNU kernel freezes the internal TCP timestamp clock… ICMP (ping) keeps working. Everything else dies. The only fix most people know is a reboot.

The whole story is wild (albeit technical). Photon says they’re working on a fix, but really, this is something Apple should be working on.

If you keep track of time using milliseconds, and store that in an unsigned 32-bit integer, it overflows after 49 days, 17 hours, 2 minutes, and 47 seconds. That’s the bug.
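To make that arithmetic concrete, here is a minimal, standalone C sketch (my own illustration of the overflow, not Apple's actual kernel code) showing both the wrap interval and the truncation failure:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* A 32-bit unsigned millisecond counter wraps after 2^32 ms. */
    uint64_t wrap_ms = (uint64_t)UINT32_MAX + 1;   /* 4,294,967,296 ms */
    uint64_t total_s = wrap_ms / 1000;

    printf("wraps after %llu days, %llu hours, %llu minutes, %llu seconds\n",
           (unsigned long long)(total_s / 86400),
           (unsigned long long)(total_s % 86400 / 3600),
           (unsigned long long)(total_s % 3600 / 60),
           (unsigned long long)(total_s % 60));

    /* The failure mode: a 64-bit millisecond uptime assigned to a
       uint32_t silently truncates, so the clock appears to leap
       backwards to nearly zero. */
    uint64_t uptime_ms = wrap_ms + 5000;       /* 49.7 days plus 5 seconds */
    uint32_t clock32   = (uint32_t)uptime_ms;  /* truncates to 5000 */
    printf("uptime of %llu ms stored in a uint32_t reads as %u ms\n",
           (unsigned long long)uptime_ms, clock32);
    return 0;
}
```

Run it and you get exactly the 49 days, 17 hours, 2 minutes, 47 seconds figure from Photon's report (the integer division drops the trailing 0.296 seconds).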

I think this bug is new to Tahoe. If you look at Apple’s open-source XNU kernel code — e.g. lines 3,732 to 3,745 in tcp_subr.c — you can see that the lines assigning the time in milliseconds to a uint32_t variable were checked in just six months ago, whereas most of the file is five years old. Also, I personally ran my MacBook Pro — at the time, running MacOS 15.7.2 Sequoia — up to 91 days of uptime in January. I even mentioned that remarkable uptime in my annual report card, in praise of Apple’s software reliability. Apple’s pre-Tahoe reliability, that is.

I was hesitant to link to this at all because the original (unbylined) report from Photon is so hard to follow. It’s downright manic — over 3,500 words with 33 section headings (<h2> and <h3> tags), with no cohesive narrative. The bug, seemingly, is not that complicated. The whole write-up from Photon just screams “AI-generated slop” to me, and I thus hesitate even to link to Snell’s piece linking to it. But I think the bug is real, and my sympathy for everyone afflicted with MacOS 26 Tahoe is sincere. (And if I’m wrong about the post being AI slop and a human at Photon actually wrote this, I would suggest taking it easy with the cocaine.)


★ Let Us Learn to Show Our Friendship for a Man When He Is Alive and Not After He Is Dead

For The New Yorker, Ronan Farrow and Andrew Marantz go deep profiling Sam Altman under the mince-no-words headline “Sam Altman May Control Our Future — Can He Be Trusted?” 16,000+ words — roughly one-third the length of The Great Gatsby — very specifically investigating Altman’s trustworthiness, particularly the details surrounding his still-hard-to-believe ouster by the OpenAI board in late 2023, only for him to return within a week and purge the board. The piece is long, yes, but very much worth your attention — it is both meticulously researched and sourced, and simply enjoyable to read. Altman, to his credit, was a cooperative subject, offering Farrow and Marantz numerous interviews during an investigation that Farrow says took over a year and a half.

A few excerpts and comments (not in the same order they appear in the story):

1.

Yet most of the people we spoke to shared the judgment of Sutskever and Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”

The board member was not the only person who, unprompted, used the word “sociopathic.” One of Altman’s batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. “You need to understand that Sam can never be trusted,” he told one. “He is a sociopath. He would do anything.”

A recurring theme in the piece is that colleagues who’ve worked with Altman the closest trust him the least. This bit about Aaron Swartz warning friends that Altman is a “sociopath” who “can never be trusted” is, to my knowledge, new reporting. Swartz’s opinion carries significant weight with me.1 Swartz is lionized (rightly) for his tremendous strengths, and the profoundly tragic circumstances of his martyrdom have resulted in less focus on his weaknesses. But I knew him fairly well and he led a very public life, and I’m unaware of anyone claiming he ever lied. Exaggerated? Sure. Lied? I think never.

Another central premise of the story is that while it’s axiomatic that one should want honest, trustworthy, scrupulous people in positions of leadership at any company, the nature of frontier AI models demands that the organizations developing them be led by people of extraordinary integrity. The article, to my reading, draws no firm conclusion — produces no smoking gun, as it were — regarding whether Sam Altman is generally honest/trustworthy/scrupulous. But I think it’s unambiguous that he’s not a man of great integrity.

2.

Regarding Fidji Simo, OpenAI’s other “CEO”:

Several executives connected to OpenAI have expressed ongoing reservations about Altman’s leadership and floated Fidji Simo, who was formerly the C.E.O. of Instacart and now serves as OpenAI’s C.E.O. for AGI Deployment, as a successor. Simo herself has privately said that she believes Altman may eventually step down, a person briefed on a recent discussion told us. (Simo disputes this. Instacart recently reached a settlement with the F.T.C., in which it admitted no wrongdoing but agreed to pay a sixty-million-dollar fine for alleged deceptive practices under Simo’s leadership.)

This paragraph is juicy in and of itself, with its suggestions of palace intrigue. But it’s all the more interesting in light of the fact that, post-publication of the New Yorker piece, Fidji Simo has taken an open-ended medical leave from OpenAI. If we run with the theory that Altman is untrustworthy (the entire thesis of Farrow and Marantz’s story), and that Simo is also untrustworthy (based on the fraudulent scams she ran while CEO of Instacart, along with her running the Facebook app at Meta before that), we’d be foolish not to at least consider the possibility that her medical leave is a cover story for Altman squeezing Simo out after catching on to her angling to replace him atop OpenAI. The last thing OpenAI needs is more leadership dirty laundry aired in public, so, rather than fire her, maybe Altman let her leave gracefully under the guise of a relapse of her POTS symptoms?

Simo’s LinkedIn profile lists her in two active roles: CEO of “AGI deployment” at OpenAI, and co-founder of ChronicleBio (“building the largest biological data platform to power AI-driven therapies for complex chronic conditions”). If my spitball theory is right, she’ll announce in a few months that after recuperating from her POTS relapse, the experience has left her seeing the urgent need to direct her energy at ChronicleBio. Or perhaps my theory is all wet, and Simo and Altman have a sound partnership founded on genuine trust, and she’ll soon be back in the saddle at OpenAI overseeing the deployment of AGI (which, to be clear, doesn’t yet exist2). But regardless of whether the Altman-Simo relationship remains cemented or is in the midst of dissolving, it raises serious questions why — if Altman is a man of integrity who believes that OpenAI is a company whose nature demands leaders of especially high integrity — he would hire the Instacart CEO who spearheaded bait-and-switch consumer scams that all came right out of the playbook for unscrupulous car salesmen.

3.

Regarding Altman’s stint as CEO at Y Combinator, and his eventual, somewhat ambiguous, departure, Farrow and Marantz write:

By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached [Y Combinator founder Paul] Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice. Altman told some Y.C. partners that he would resign as president but become chairman instead. In May, 2019, a blog post announcing that Y.C. had a new president came with an asterisk: “Sam is transitioning to Chairman of YC.” A few months later, the post was edited to read “Sam Altman stepped away from any formal position at YC”; after that, the phrase was removed entirely. Nevertheless, as recently as 2021, a Securities and Exchange Commission filing listed Altman as the chairman of Y Combinator. (Altman says that he wasn’t aware of this until much later.)

Altman has maintained over the years, both in public and in recent depositions, that he was never fired from Y.C., and he told us that he did not resist leaving. Graham has tweeted that “we didn’t want him to leave, just to choose” between Y.C. and OpenAI. In a statement, Graham told us, “We didn’t have the legal power to fire anyone. All we could do was apply moral pressure.” In private, though, he has been unambiguous that Altman was removed because of Y.C. partners’ mistrust. This account of Altman’s time at Y Combinator is based on discussions with several Y.C. founders and partners, in addition to contemporaneous materials, all of which indicate that the parting was not entirely mutual. On one occasion, Graham told Y.C. colleagues that, prior to his removal, “Sam had been lying to us all the time.”

Graham responded to this on Twitter/X thus:

Since there’s yet another article claiming that we “removed” Sam because partners distrusted him, no, we didn’t. It’s not because I want to defend Sam that I keep insisting on this. It’s because it’s so annoying to read false accounts of my own actions.

Which tweet includes a link to a 2024 tweet containing the full statement Farrow and Marantz reference, which reads:

People have been claiming YC fired Sam Altman. That’s not true. Here’s what actually happened. For several years he was running both YC and OpenAI, but when OpenAI announced that it was going to have a for-profit subsidiary and that Sam was going to be the CEO, we (specifically Jessica) told him that if he was going to work full-time on OpenAI, we should find someone else to run YC, and he agreed. If he’d said that he was going to find someone else to be CEO of OpenAI so that he could focus 100% on YC, we’d have been fine with that too. We didn’t want him to leave, just to choose one or the other.

Graham is standing behind Altman publicly, but I don’t think The New Yorker piece mischaracterized his 2024 statement about Altman’s departure from Y Combinator. Regarding the quote sourced to anonymous “Y.C. colleagues” that he told them “Sam had been lying to us all the time”, Graham tweeted:

I remember having a conversation after Sam resigned with a YC partner who said he and some other partners had been unhappy with how Sam had been running YC. I told him Sam had told us that all the partners were happy, so he was either out of touch or lying to us.

And, emphasizing that this remark was specifically in the context of how happy Y Combinator’s partners were under Altman’s leadership of YC, Graham tweets:

Every YC president tends to tell us the partners are happy. Sam’s successor did too, and he was mistaken too. Saying the partners are unhappy amounts to saying you’re doing a bad job, and no one wants to admit or even see that.

Seems obvious in retrospect, but we’ve now learned we should ask the partners themselves. (And they are indeed now happy.)

I would characterize Graham’s tweets re: Altman this week as emphasizing only that Altman was not fired or otherwise forced from YC, and could have stayed as CEO at YC if he’d found another CEO for OpenAI. But for all of Graham’s elucidating engagement on Twitter/X this week regarding this story, he’s dancing around the core question of the Farrow/Marantz investigation, the one right there in The New Yorker’s headline: Can Sam Altman be trusted? “We didn’t ‘remove’ Sam Altman” and “We didn’t want him to leave” are not the same things as saying, say, “I think Sam Altman is honest and trustworthy” or “Sam Altman is a man of integrity”. If Paul Graham were to say such things, clearly and unambiguously, those remarks would carry tremendous weight. But — rather conspicuously to my eyes — he’s not saying such things.

4.

From the second half of the same paragraph quoted above, that started with Aaron Swartz’s warnings about Altman:

Multiple senior executives at Microsoft said that, despite Nadella’s long-standing loyalty, the company’s relationship with Altman has become fraught. “He has misrepresented, distorted, renegotiated, reneged on agreements,” one said. Earlier this year, OpenAI reaffirmed Microsoft as the exclusive cloud provider for its “stateless” — or memoryless — models. That day, it announced a fifty-billion-dollar deal making Amazon the exclusive reseller of its enterprise platform for A.I. agents. While reselling is permitted, Microsoft executives argue OpenAI’s plan could collide with Microsoft’s exclusivity. (OpenAI maintains that the Amazon deal will not violate the earlier contract; a Microsoft representative said the company is “confident that OpenAI understands and respects” its legal obligations.) The senior executive at Microsoft said, of Altman, “I think there’s a small but real chance he’s eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer.”

The most successful scams — the ones that last longest and grow largest — are ones with an actual product at the heart. Scams with no actual there there go bust quickly. The Bankman-Fried FTX scandal blew up quickly because FTX never offered anything of actual value. Bernie Madoff, though, had a long career because much of his firm’s business was legitimate; his firm wasn’t only the Ponzi scheme, and that legitimate side is what enabled Madoff to keep the Ponzi scheme going for two decades.

But the better comparison to OpenAI — if that “small but real chance” comes true — might be Enron. Enron was a real company that built and owned a very real pipeline and energy infrastructure business, but the story it told investors was a sham. ChatGPT and Codex are very real, very impressive technologies, and OpenAI is undeniably blazing the frontier of AI. It’s the financial story Altman has structured that seems alarmingly circular.


  1. In a 2005 Y Combinator “class photo”, Altman and Swartz are standing next to each other. Despite the fact that Altman was sporting a reasonable number of popped polo collars (zero), Swartz was clearly the better-dressed of the two.* ↩︎
    * Aaron would’ve loved this footnote. Christ, I miss him.

  2. With rare exceptions, I continue to think it’s a sign of deep C-suite dysfunction when a company has multiple “CEOs”. When it actually works — like at Netflix, with co-CEOs Ted Sarandos and Greg Peters (and previously, Sarandos and Reed Hastings before Hastings’s retirement in 2023) — the co-CEOs are genuine partners, and neither reports to the other. There is generally only one director of a movie, but there are exceptions, who are frequently siblings (e.g. the Coens, the Wachowskis, the Russos). A football team only has one head coach. The defensive coordinator is the “defensive coordinator”, not the “head coach of defense”. It’s obvious that Fidji Simo reports to Sam Altman, and thus isn’t the “CEO” of anything at OpenAI. But OpenAI does have applications, and surely is creating more of them, so being in charge of applications is being in charge of something real. By any reasonable definition, AGI has not yet been achieved, and many top AI experts continue to question whether LLM technology will ever result in AGI. So Simo changing her title to (or Altman changing her title to) “CEO of AGI deployment” is akin to changing her title to “CEO of ghost busting” in terms of its literal practical responsibility. ↩︎︎

Losing the World's Respect

Transcript

All around the world, big, strong men with tears in their eyes are coming up to Donald Trump and saying, “Sir, you’re a loser.”

Hi, I’m Paul Krugman. A brief update on something that has palpably changed in the world in the last few days.

As regular watchers of Donald Trump and regular readers of mine know, Trump has a thing about insisting that people treat him with immense respect — the big strong men with tears in their eyes thing, “Sir” stories. And in particular about believing that the world despised America under Joe Biden and respects it now under his leadership, which was never true either in the first part or the second.

But it is true that until quite recently, many people in the world at least felt obliged to pretend to respect Trump, felt obliged to flatter him, to stifle the negative feelings that they were having about the course of the United States under current management. And maybe Trump actually took these kinds of coerced professions of respect as reality. But in the last few days, suddenly the masks are off.

Volodymyr Zelensky just yesterday tweeted out a part of an interview that he gave in which he said, among other things, “In my view, Russia played the Americans again.”

So he was just saying, basically, that Trump is working with the Russians, which obviously he’s known and has surely thought for a long, long time. But to say it that openly is something new.

Zelensky — not a big strong man — but Zelensky is a tough guy. Keir Starmer, Prime Minister of Britain, is, alas, not. But Starmer too, just in a statement, said that we’re tired of a world in which, I’ll quote, “bills go up or down on energy because of the actions of Putin or Trump.” Equivalence: Trump and Putin in the same sentence, in the same breath. That’s pretty stiff stuff.

And Starmer has been notable in trying to preserve the special relationship, avoid offending Trump, trying to make him get a few points off the tariff rate, whatever. But now Starmer is pretty much openly saying, you know, you are the problem and we don’t trust you.

What’s this about? Well, obviously, the United States went to war with a fourth-rate power, Iran, and lost. Exactly how that plays out, we don’t know. But it was truly impressive how poorly the United States military and US strategic thinking have performed here.

The United States has also proved both temperamental and weak, lashing out at our erstwhile allies and veering between threats of war crimes and then what looks a whole lot like abject surrender.

So the world no longer either fears or trusts us. It turns out that our military might is not what it was cracked up to be, and our reliability is essentially zero at this point. We can’t count on the United States to do anything that it has promised. It’s a world in which the hegemon has basically gone AWOL. So that’s a big thing.

There’s another story which I think is important, which has kind of been overshadowed by the debacle in Iran: the bigger ongoing war, which is Ukraine-Russia, is not going well for Russia. It is, if anything, tilting increasingly in Ukraine’s favor. Now, what’s interesting about that? Why is that relevant?

Trump is basically on the side of Putin. He’s been unwilling and probably unable to just openly support Russia, but he has effectively pulled all aid from Ukraine. There’s essentially no money, no military aid, no economic aid flowing from the US to Ukraine anymore — it’s all on the Europeans. The Europeans have still been buying some U.S. weapons and transferring them on to Ukraine, but that’s been largely choked off. And I think the assumption was that Ukraine would be in grave danger, would perhaps collapse, without American support.

Not happening. What’s actually happening is that Ukraine appears to be gaining the upper hand in the drone war, which is what this war is mostly about. And Ukraine’s success in adapting to modern warfare has been so great that now it looks like there are a significant number of Ukrainian drones and to some extent maybe personnel already deployed in the Middle East, and that Middle Eastern nations other than Iran are quickly moving to strike deals with Ukraine, to buy Ukrainian equipment.

It’s kind of like, well, if you need help and Iran is still a menace, which it is, don’t count on the Americans, but maybe Ukraine knows how to do these things.

Obviously, that helps empower Zelensky to be open in saying what he really thinks about the United States.

Does this matter? Well, we’re not about to see the whole world turn on us. The United States may have threatened to seize Greenland, but I don’t think that Denmark is going to threaten to seize Alaska or anything like that. But it’s a big comedown, and it will hurt.

It’s a slow erosion, but having countries that trust you, that support you, is a very big asset in geopolitics. Losing all of that is therefore a big liability.

And this is my country. I’m not celebrating all of this, because I’d like to see America, particularly I’d like to see the next president, assuming that we actually have a legitimately elected president, inherit a brand that is not completely damaged and corrupted.

But that’s not where we’re going. It’s really looking pretty bad. And what can you say? We had the worst and the dumbest in charge. We still do. And that’s taking a toll on all of us.

From Drill, Baby, Drill to Toll, Baby, Toll

[Chart: prediction-market odds of a quick resolution]

The U.S. attack on Iran will end badly. It’s still not clear, however, exactly what that bad end will look like. The loudly announced cease-fire is on the edge, with Trump and the Iranian regime making very different claims about what was agreed to and the Strait of Hormuz still closed. As you can see in the chart above, prediction markets, after an initial bout of optimism, have turned highly skeptical about the prospects for a quick resolution.

And yesterday the spot price of oil — the cost of a barrel for immediate delivery, as opposed to the prices for delivery a month or two from now, which are what are usually quoted — hit a record high of almost $147 a barrel:

Source: Financial Times

There remain three plausible ways this could turn out:

1. U.S. strategic defeat: The Strait is reopened, but with Iran in control of the chokepoint and charging tolls on ships passing through

2. Quagmire: Having failed to impose its will with bombs, the U.S. sends in ground troops

3. Nightmare: Trump follows through on threats to annihilate Iran’s civilian infrastructure

One might have thought that (2) and (3) were off the table. After all, the past six weeks have delivered an object lesson in the limits of “lethality.” But MAGA doesn’t learn lessons. Pete Hegseth — who appears to be lying about why Iranian attacks on U.S. personnel succeeded — still has a job. Trump is still threatening everyone with ALL CAPS. And nothing should be taken for granted. That said, at this point (1) — with Iran the clear winner while America slinks away — is both the least bad and most likely outcome.

It’s a bitterly ironic result, and not only because a war that was meant to demonstrate U.S. power has instead demonstrated our impotence. Also, Trump has always been obsessed with the idea that fossil fuels are the key to U.S. power and prosperity. Now oil has made us weak while empowering and enriching our adversaries.

But how much will Hormuz-as-Iranian-tollbooth shift the global balance of power?

In Trump’s mind, control over fossil fuels is the essence of national greatness. In his inaugural address, Trump declared that

We will drill, baby, drill … We will be a rich nation again, and it is that liquid gold under our feet that will help to do it.

But this was obvious nonsense. For one thing, the narrative that woke environmentalists had hobbled U.S. fossil fuel production was at odds with the reality that fracking had in fact caused a boom in oil and gas production that began under Obama and continued under Republican and Democratic administrations alike:

[Chart: U.S. oil and gas production over time, showing the fracking boom]

It was also nonsense to claim that oil production can be the engine of prosperity for a nation like America, with its huge, diverse economy. The fracking boom, although huge in absolute terms, was relatively marginal in its economic impact. In 2025 the U.S. produced about 3 billion more barrels of oil than it did before the fracking surge. At 2025 prices, that was about $200 billion worth of oil. That’s a lot of money! But it’s less than 1 percent of U.S. GDP.
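Spelling out that arithmetic (a rough sketch: the roughly $67-a-barrel price is back-solved from the two figures above, and the roughly $30 trillion GDP denominator is my assumption, not a number given in the post):

$$3 \times 10^{9}\ \text{barrels} \times \$67/\text{barrel} \approx \$2.0 \times 10^{11} = \$200\ \text{billion}, \qquad \frac{\$200\ \text{billion}}{\$30\ \text{trillion}} \approx 0.7\% < 1\%.$$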

Meanwhile, Trump has been doing all he can to block development of wind and solar power, in the apparent belief that this will empower America. But what it actually does is empower regimes that are in a position to disrupt world oil supply, while having little to lose from chaos in the world economy. Which means, above all, Iran.

Yesterday Trump issued a pathetic warning:

[Screenshot: Trump’s social media post warning Iran]

They better stop now! Or what? We’ll bomb them?

In his second inaugural address Trump promised that “our country will flourish and be respected again all over the world.” Does anyone believe that starting an unnecessary war, then threatening to commit massive war crimes because we’re losing to a fourth-rate power, and finally, in the best case, essentially running away, has made America more respected?

Think about Gulf states that relied on America to protect them and preserve their access to world markets. Now they know that we can’t and won’t, while Iran holds a knife at their throats. They’re now looking to themselves for security — and starting to buy equipment and technology from Ukraine, which has learned the hard way how to fight a modern war.

Think about Asian and European nations that have swallowed Trump’s many insults, and mostly avoided retaliating against his tariffs, because they feared both U.S. power and the loss of U.S. support. Now America’s weakness and unreliability have been laid bare.

And yes, ships transiting the Strait of Hormuz will probably end up paying large tolls to vicious theocrats. Are you tired of winning yet?

MUSICAL CODA

Kākāpō parrots

Lenny posted another snippet from our 1 hour 40 minute podcast recording and it's about kākāpō parrots!

Tags: kakapo

ChatGPT voice mode is a weaker model

I think it's non-obvious to many people that the OpenAI voice mode runs on a much older, much weaker model - it feels like the AI that you can talk to should be the smartest AI but it really isn't.

If you ask ChatGPT voice mode for its knowledge cutoff date it tells you April 2024 - it's a GPT-4o era model.

This thought was inspired by this Andrej Karpathy tweet about the growing gap in understanding of AI capability, based on the access points and domains people are using the models with:

[...] It really is simultaneously the case that OpenAI's free and I think slightly orphaned (?) "Advanced Voice Mode" will fumble the dumbest questions in your Instagram's reels and at the same time, OpenAI's highest-tier and paid Codex model will go off for 1 hour to coherently restructure an entire code base, or find and exploit vulnerabilities in computer systems.

This part really works and has made dramatic strides because 2 properties:

  1. these domains offer explicit reward functions that are verifiable meaning they are easily amenable to reinforcement learning training (e.g. unit tests passed yes or no, in contrast to writing, which is much harder to explicitly judge), but also
  2. they are a lot more valuable in b2b settings, meaning that the biggest fraction of the team is focused on improving them.

Tags: andrej-karpathy, generative-ai, openai, chatgpt, ai, llms

Think People Hate Billionaires? We're About to Get Trillionaires.

Photo by Gage Skidmore

The Cross Section is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

Ask any Democratic politician what they think about almost anything, and the answer they’re likely to give is “Affordability.” The war in Iran? “Trump wants to spend hundreds of billions of dollars fighting there while Americans can’t afford groceries!” Health care? “We need to make it affordable!” Trump’s brutal crackdown on immigrants? “It’s only making life for Americans less affordable!”

It’s gotten to the point where you can hear the consultants whispering “Just pivot to affordability” in their ears, which is a sign that they’re taking a genuine issue and turning it into a poll-tested, pre-packaged wafer of political Soylent Green they can shove down voters’ throats. And that seldom works out well, at least not in the long run.

It’s not that politicians shouldn’t respond to the primary concern of the moment, and people are certainly worried about their ability to afford the basics of daily life. That’s especially true with gas prices ballooning because of the war, which will likely send the price of nearly everything higher in the near future. But focusing so narrowly on “affordability” is problematic for two reasons. First, the tools government has to bring prices down are pretty limited in the short term, especially if we’re talking about groceries or consumer goods. That means that if you get elected because people are mad about affordability, you’re not actually going to be able to deliver.

Second, narrowing your focus to affordability can divert us from the bigger economic issues that bedevil our society, and the broader solutions that might make people’s lives better. Which brings us to Elon Musk.

Get ready for the world’s first trillionaire

When I was a kid I loved reading and memorizing the Guinness Book of World Records, which back then was a thick paperback with black-and-white pictures. One of the entries listed oil magnate J. Paul Getty as the world’s richest man, with a fortune estimated at an extraordinary $2 billion. Getty died in 1976; adjusted for inflation, his $2 billion then would be $11.5 billion today.

A nice haul, but it would put Getty at only #265 on the current Forbes list of the world’s richest people. Number 1 is of course Elon Musk, whose fortune today the magazine puts at just under $800 billion, an amount of money that is literally unfathomable. And it’s about to get bigger.

Some time in the next few months, SpaceX — which Musk recently combined with his xAI, the company that makes the Nazi child porn chatbot Grok — will issue an initial public offering and become a public company. And everyone’s getting excited:

Gene Munster, managing partner at SpaceX investor Deepwater Asset Management, said the IPO is expected to “easily” set the record for market debuts, raising more than $80 billion in his estimate.

“The narrative around space is that it’s early and they’ve got the pole position,” he said of SpaceX. “I could see it go vertical right out of the gate.”

Munster said SpaceX’s diverse array of businesses — from internet access to data centers to space exploration — would prove attractive to investors. The company is expected to make an unusually large amount of its shares available to retail investors, capitalizing on hype from those who want a stake in a flashy, Musk-run venture.

“Retail likes sizzle,” Munster said. “This is like the ultimate sizzle story.”

If what everyone assumes will happen does happen — the IPO occurs, investors rush to buy SpaceX stock, and the value of that stock “goes vertical” — Musk will become the world’s first trillionaire.

Here’s how the math works, using Forbes’ numbers as a baseline. They estimate that Musk has a 43% stake in SpaceX, which when it merged with xAI was valued at $1.25 trillion, making his portion worth $537.5 billion. If the SpaceX market capitalization surpasses that by 37% after the IPO, the value of Musk’s portion will rise by $200 billion, and his net worth will surpass $1 trillion.
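As a sanity check, here is the same arithmetic in a few lines of Python; the inputs are the Forbes figures quoted above, and the variable names are mine:

```python
# Verifying the trillionaire arithmetic, using the Forbes figures from the post.
stake = 0.43                # Musk's estimated stake in the merged SpaceX/xAI
merger_valuation = 1.25e12  # valuation at the time of the xAI merger
net_worth = 800e9           # "just under $800 billion"

stake_value = stake * merger_valuation
print(f"Stake at merger valuation: ${stake_value / 1e9:.1f} billion")  # $537.5 billion

needed_gain = 1e12 - net_worth  # the $200 billion rise that gets him to $1 trillion
print(f"Required post-IPO gain in the stake: {needed_gain / stake_value:.0%}")  # ~37%
```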

Past results are no guarantee of future performance, as they say, but it’s a pretty good bet that’s just what will occur. Musk’s greatest skill may be his ability to hype a stock; as I’ve detailed before, while Tesla is a real company that produces real products (unlike some tech firms), its stock is insanely overvalued, to the point where it’s basically a meme stock built solely on Musk’s ability to garner attention and convince people that whatever wild prediction he’s making might actually come true.

He’ll do the same to SpaceX, which has the benefit of not yet having depressing sales figures to explain away; its whole value lies in the far-off future, when it supposedly will build data centers in space and carry humanity to the heavens. And if the AI machine keeps churning, don’t be surprised if within a few years Musk is joined in the trillionaires club by a bunch of his fellow tech oligarchs (OpenAI and Anthropic may also go public this year).

A new kind of inequality

And what happens then? Functionally, there’s not much difference between a bunch of tech bros having fortunes in the high hundreds of billions or the low trillions, either for them or the rest of us. But symbolically, it means a great deal.

This is why I started this post with the small-bore “affordability” talk from Democratic politicians. When Musk becomes a trillionaire, there will be a deluge of media coverage around the milestone, and it will be an opportunity to focus people’s attention on the need for profound policy change that goes beyond the price of eggs.

The AI jobs apocalypse is not here yet — unless you’re in the tech industry itself, where the bloodbath has already begun. One example: Oracle, run by Trump pal Larry Ellison, just laid off 30,000 people by email, not because it isn’t hugely profitable (it is), but because it believes it can make even more money by automating what those workers do. And that’s just one company; the entire industry is rapidly shedding workers.

For the rest of us, getting replaced by a chatbot hasn’t happened yet, but the anxiety that it could happen soon is large and growing. For instance, according to a recent Quinnipiac poll, 70% of respondents said AI will lead to a reduction in jobs; just a year ago the number was 56%. And the worries are most intense among young people: 81% of Gen Zers said AI will produce job losses.

Which means we have a small but highly visible group of tech executives amassing mind-boggling fortunes, while the products those tech magnates are making — and in many cases, forcing us to use whether we want to or not — create enormous anxiety about the future of the economy. Meanwhile we have a housing shortage, education can require taking on enormous debt, people are losing health coverage by the millions, and entire generations have lost faith that working hard is a guarantee of having a reasonably secure life.

If the best answer a politician and a party can come up with to that challenge is “I hear you, and that’s why I’m proposing a system of breakfast cereal vouchers with income-based phaseouts between incomes of $100,000 and $200,000, indexed to inflation and taking effect in stages in between now and 2035,” well that just isn’t going to cut it.

To be clear, I am not arguing that incremental policy changes are useless and progressives should reject any candidate who suggests anything less than an immediate transition to Scandinavian-style social democracy. But it’s important to both believe in and argue for changes that speak to the deep and fundamental problems in the structure of the American economy. A system that produces this kind of insane wealth while there’s so much precarity and deprivation is deeply twisted, but it doesn’t have to be that way. Politicians need to show voters that real change has a place in their imagination and their plans. And this is the time to start.

Thank you for reading The Cross Section. This site has no paywall, so I depend on the generosity of readers to sustain the work I present here. If you find what you read valuable and would like it to continue, consider becoming a paid subscriber.



More Good News on the D.C. Crime Front

As always, one should point out when communities are getting the job done. Since we last checked nearly two weeks ago, D.C. still has a historically low number of homicides, eleven*. The most recent case involved a murder-suicide with two Maryland residents outside the Friendship Heights Metro station and is thought to be “domestic related.”**

As before, all crimes, except for Assault with Dangerous Weapon, are down dramatically compared to the same period in 2025, and the increase in that category might reflect how crimes are being recorded. We have not really started “murder season” yet, but here’s to hoping that this spring and summer are relatively calm.

Good job, D.C.!

*It is now clear that two of the homicides ‘charged’ to 2026 occurred in other years, so I am not counting those.

**One of the people involved appears to have lived a short walk away in Bethesda, MD (the other lived farther away in Maryland). Had they walked a little farther before the tragedy unfolded (about 400 ft), this would have been called a Maryland homicide. Also, it is worth noting that a “domestic related” murder is not what people typically think of–or more accurately, what they fear–when discussing urban crime.

All Good Things Must Come To An End

As of today, it has been 10 years since I last posted here, so this site will soon shut down. It’s been emotional.

Links 4/10/26

Links for you. Science:

Trump staffs science and technology panel with non-scientists
‘Predators that just run in and grab, stab and kill’: The deep cave bacteria resistant to modern medicine
Key Adviser Quits Federal Vaccine Panel
A measles outbreak in Florida is simmering, but we know almost nothing about it. We went to investigate
Officials ‘missed 99% of data’ before ending Covid vaccine recommendation, memos reveal
Yep, a mom’s COVID shot during pregnancy protects her baby, a large study finds

Other:

Expose ALL Of Trump’s War Profiteering
Before Iran, there was Covid: Trump falls apart in a crisis
Trump imagines negotiation with Sharpie maker for $5 signature pens. Trump told a lengthy story about negotiating over the price of Sharpie pens. The company says it has no record of any such conversation. (narcissists gonna narcissist)
Legacy outlets that bent the knee to Trump haven’t just lost credibility — they’re bleeding readers and viewers
Easy-to-use solar panels are coming, but utilities are trying to delay them
ICE says it provides ‘proper meals.’ Detainees see crystalized jelly, rancid beans and iced bologna
Hegseth Strikes Two Black and Two Female Officers From Promotion List
It’s 3 times harder for blue states to get disaster funding under Trump (this should be grounds for impeachment or resignation)
DC Council considers proposals to limit utility shutoffs and increase bill transparency amid growing customer complaints
One crazy day that defined the decline and fall of the American empire
How American Camouflage Conquered the World
As Metro turns 50, transit diehards are keepers of its history, quirks, and identity
Donald Trump to Add His Signature to US Currency, a First For a Sitting President
D.C. healthcare cuts leave low-income residents with fewer options and worse care
GGWash endorses Robert White for US delegate and Markus Batchelor for shadow senator
After the city tells Alan’s Oasis to move, its future is uncertain
The flog of war: There’s wartime lying and then there’s whatever this is
‘Visibly upset and struggling’: Acting ICE head hospitalized twice over stress, officials say
Joint Statement by J Street and the Muslim Public Affairs Council On Rejecting Antisemitism and Islamophobia – and Ending Endless Wars
Elon Musk’s Grok ordered to stop creating AI nudes by Dutch court as legal pressure mounts
FBI Files Counter Government Argument in Texas “Antifa” Trial
‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI
Jury Decides Afroman Songs Mocking Cops Are Too Funny to Be Defamatory
Sycophantic AI decreases prosocial intentions and promotes dependence
America’s Smoking Habit Just Hit a Wild Milestone That Once Seemed Impossible
The Supreme Court Is Scaring Off State Criminal Charges Against Federal Agents
Minnesota GOP legislator, auditor candidate arrested on suspicion of DWI
And We Will Forget This Tomorrow
Welcome to a Multidimensional Economic Disaster
Kat Abughazaleh on Losing, Mutual Aid, and What Comes Next

Lightning

Maybe you should wear one too? I guess I'm taller than you, so as long as I have one we're fine.

What should I ask Bob Spitz?

Yes I will be doing a Conversation with him, Wikipedia here.  I very much enjoyed his new book on the Rolling Stones, plus he has many older books of note, including on the 1969-1970 Knicks, Woodstock, Ronald Reagan, Bob Dylan, the Beatles, Led Zeppelin, and Julia Child.  All good books!  For a while he also worked as manager to both Bruce Springsteen and Elton John.

So what should I ask him?

The post What should I ask Bob Spitz? appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

NASA confident Artemis 2 heat shield will protect crew during re-entry

The Artemis 2 heat shield undergoing inspections at the Kennedy Space Center before installation on the base of the Orion capsule bringing four astronauts back to Earth Friday after a flight around the moon. The heat shield is designed to protect the craft from re-entry temperatures as high as 5,000 degrees. Image: NASA.

When the Artemis 2 Orion crew capsule returns to Earth after flying around the moon, it will hit the discernible atmosphere some 75 miles above the Pacific Ocean at a blistering 24,000 mph, fast enough to fly from New York to London in less than 10 minutes.

Within seconds, temperatures across its 16.5-foot-wide heat shield will climb to some 5,000 degrees — half as hot as the visible surface of the sun — as the ship rapidly slows in an electrically charged fireball of atmospheric friction.

The four astronauts on board — Artemis 2 commander Reid Wiseman, Victor Glover, Christina Koch and Canadian astronaut Jeremy Hansen — are counting on the heat shield to keep them safe, in a comfortable environment, all the way through the peak heating zone before a parachute-assisted splashdown in the Pacific.

“We have high confidence in the system, in the heat shield and the parachutes and the recovery systems we put together,” Amit Kshatriya, NASA’s associate administrator, said Thursday. “The engineering supports it, the Artemis 1 flight data supports it. All of our ground tests support it, our analysis supports it and tomorrow, the crew is going to put their lives behind that confidence.”

The crew and mission managers are confident, they say, despite major problems with the heat shield used during the unpiloted Artemis 1 test flight in 2022 when the Avcoat material making up the shield developed sub-surface cracks and gas pockets that blew away chunks of the protective barrier’s outer “char” layer.

Based on nearly two years of tests and analysis, engineers were surprised to discover the damage was most likely caused by the Avcoat material’s lack of permeability during a specific phase of the re-entry, when the shield was experiencing lower external temperatures while internal layers were still extremely hot, generating gas that could not escape.

Agency managers decided to order a different heat shield design for downstream Artemis missions. But the heat shield for the Artemis 2 flight, identical to the one used with Artemis 1, was already installed. Replacing it with a new design would have delayed the mission by 18 months or more.

Instead, NASA managers opted to launch Artemis 2 “as is” based on test data and an exhaustive analysis that indicated the shield would work properly if the re-entry trajectory was modified to eliminate the temperature and pressure swings that contributed to the damage seen after the Artemis 1 flight.

“They did a tremendous amount of research, a lot of groundbreaking research in some facilities that we had not used before, and they discovered the root cause,” Wiseman said.

“They did wind tunnel testing and laser testing and hyper-velocity testing, and they determined that if we come in with this lofted profile … that this heat shield will be safe for us to go fly.

“So I think all that points in the direction of goodness,” he said. “And I think if you, as a human being who was about to board this rocket, had sat in the meetings that we sat in and listened to the experts and gone through the data with them, you would have the same comfort.”

What went wrong with Artemis 1

During the Artemis 1 mission, the unpiloted capsule followed a planned “skip” trajectory, similar in concept to skipping a flat stone across still water. After an initial dip into the upper atmosphere, the Artemis 1 capsule skipped back out again before making its final descent to splashdown.

The skip re-entry helps reduce the spacecraft’s velocity while offering NASA a wider range of splashdown options in case bad weather makes a targeted landing site problematic.

Despite the heat shield damage seen after the flight, the Artemis 1 re-entry was successful. The capsule landed on target and officials said had any astronauts been aboard, they would have had no problems. But the damage triggered alarm at NASA.

“NASA identified more than 100 locations where ablative thermal protective material from Orion’s heat shield wore away differently than expected during reentry into Earth’s atmosphere,” NASA’s Office of Inspector General wrote.

“While the heat shield successfully protected the Crew Module and its systems during the Artemis 1 mission, upon inspection after Orion’s recovery, engineers noted unexpected variations in the appearance of the heat shield Avcoat — the ablative material that helps protect the capsule from the heat of reentry.

“Specifically, portions of the char layer wore away differently than NASA engineers predicted, cracking and breaking off the spacecraft in fragments that created a trail of debris rather than melting away as designed. The unexpected behavior of the Avcoat creates a risk that the heat shield may not sufficiently protect the capsule’s systems and crew from the extreme heat of reentry on future missions.”

Testing revealed the damage was related to the heat shield’s permeability, or rather, its lack thereof.

Entry heating is what makes the Avcoat’s outer char layer permeable enough to allow gas to escape. The Artemis 1 heat shield worked normally during its initial descent into the atmosphere. But when it climbed back out, re-entry heating eased and the outer char layer became much less permeable.

The underlying material was still extremely hot, undergoing a process known as pyrolysis — thermal decomposition in the absence of oxygen — and generating gas that had no way to escape. Those buildups eventually blew chunks of the heat shield’s outer layers away.

“They go back up from that first entry, they’re still hot, they’re still off gassing,” said an engineer familiar with the investigation. “The fact that the material itself isn’t permeable enough is causing that gas pressure to build up now, very rapidly, because they’re still hot. But the char layer has paused.”

The outer char layer, he said, is “the only part of the Artemis 1 and Artemis 2 heat shield that actually allows it to breathe, or allows it to off-gas. So once it stops, now there’s no mechanism in the deeper parts of the heat shield for that gas to escape.”

“So the pressure built up, and as the capsule came back down and started reheating, the pressure was already there. All those cracks, the pockets had already formed. And now, bang, bang, bang, pop. Avcoat started sloughing off during that second entry.”

The Artemis 1 heat shield suffered unexpected damage during re-entry after an unpiloted test flight in 2022. Sections of its outer “char” layer were blown off when internal heating generated gas that had no way to vent through that outer layer. Image: NASA.

A modified re-entry trajectory should ensure good heat shield performance

Engineers verified in lab tests that a modified skip-entry trajectory, one with an initial dip into the upper atmosphere followed by a shorter-duration climb back out, would allow the Avcoat to “breathe” throughout, preventing the formation of cracks and trapped gas. An independent review team agreed with those conclusions.

Interestingly, Apollo engineers were aware of the Avcoat permeability issue and designed that program’s heat shields accordingly. Apollo capsules also used skip re-entry trajectories and had no problems. But the Avcoat used in the Artemis heat shields was reformulated slightly, and that ended up affecting its permeability.

In any case, the downside to the modified re-entry trajectory for Artemis 2 is that it will reduce the distance the Orion capsule can fly to avoid bad weather in the planned splashdown zone. It also will result in higher sustained heating during the descent, but engineers say that is exactly what is needed to maintain permeability in the outer char layer and ensure good performance.

Former astronaut Charles Camarda disagrees, strongly criticizing the “fly as is” decision. He argues engineers do not fully understand the root cause of the Artemis 1 heat shield damage and cannot accurately predict how the Artemis 2 heat shield will perform or whether the revised entry trajectory might have unintended consequences.

In a letter to the NASA administrator, Camarda wrote that “history shows accidents occur when organizations convince themselves they understand problems they do not.”

Like Wiseman, Glover says he trusts the analysis of the Artemis 1 problem, saying critics “haven’t been in these meetings from day one and met the team and looked them in the eye and shook their hands at the ends of these meetings.”

That said, he added, “I don’t want to discount the things that they’ve said. Any time you talk about fire, any time you talk about entry and heat shields, talk about parachutes, these are high risk things that … don’t have fault tolerance built in. They have to work.”

“And so I appreciate all of that nudging and poking and prodding that they’ve caused. They have made us sharpen our pencils and put more due diligence, more vigilance into that process. But I think we’ve done that. And so I think the crew is comfortable because of that team.”

Friday assorted links

1. Trump’s focus on cultural issues (NYT).

2. Claims about Mythos (speculations).  And a claim that the power of Mythos is being exaggerated.

3. The wage returns on industry credentials.

4. 2026 Roots of Progress blog-building intensive program.

5. Brian Albrecht reviews The Marginal Revolution.

6. Those new service sector jobs.

7. Harvard Crimson on Ludwig Straub.

8. How and why the Democratic Party has been evolving?  Less interest in predistribution?

9. The price of GPT Pro is being cut in half?

The post Friday assorted links appeared first on Marginal REVOLUTION.

       

Comments

 

GSOA and Novaspace Launch the Space Industry Forum (SIF) 2026 in Singapore

Paris, April 2026 – GSOA and Novaspace announce the inaugural edition of the Space Industry Forum (SIF), a new flagship event co-organized by both organizations, taking place on May 19, 2026, at The Fullerton Hotel in Singapore. […]

The post GSOA and Novaspace Launch the Space Industry Forum (SIF) 2026 in Singapore appeared first on SpaceNews.

Commerce Department budget proposal revives concerns about TraCSS

The TraCSS logo. Credit: NOAA

A budget proposal for the Department of Commerce raises new doubts about the future of a civil space traffic management system under development there.

The post Commerce Department budget proposal revives concerns about TraCSS appeared first on SpaceNews.

China’s Chang’e-7 arrives at spaceport for lunar south pole exploration mission

China’s multi-element Chang’e-7 lunar spacecraft has arrived at Wenchang spaceport for launch preparations ahead of a planned liftoff in the second half of 2026.

The post China’s Chang’e-7 arrives at spaceport for lunar south pole exploration mission appeared first on SpaceNews.

NASA prepares for Artemis 2 return

Orion moon

The Artemis 2 mission is set for a final, fiery test when the spacecraft reenters April 10 ahead of a splashdown off the California coast.

The post NASA prepares for Artemis 2 return appeared first on SpaceNews.

PLD Space secures $35 million European Investment Bank loan for MIURA 5 launcher

Miura 1 launch

MILAN – PLD Space signed a 30 million euro ($35 million) venture debt loan with the European Investment Bank (EIB) on April 7, a move to support the final development stage of MIURA 5, PLD’s lightweight rocket. Miura 5, which is designed to deploy roughly 540 kilograms to 1 ton of payload to low Earth […]

The post PLD Space secures $35 million European Investment Bank loan for MIURA 5 launcher appeared first on SpaceNews.

A decade of progress in kidney exchange in India

This morning I zoomed in to the tail end of a quiet celebration in India of a decade of collaboration between the Alliance for Paired Kidney Donation (APKD) and our Indian medical colleagues.

Mike Rees and I both had sent messages of support, and during the call I spoke about my hope that India, which already does the third most kidney transplants in the world, will in the coming decade come to be the country that does the most kidney exchange transplants. That in turn could lead to India eventually becoming a global attractor for patient-donor pairs from countries that don't have lots of transplants or exchanges, to come to India to participate in kidney exchange there.

There remain many obstacles to be overcome before that can happen, but there's been so much progress in India already that those are real possibilities.

Here's the message I emailed to the founding team yesterday:

"Dear Vivek, Pranjal, Mike, Atul  and Colleagues: It’s amazing that the collaboration  between the Institute of Kidney Diseases and Research Center at the Dr. H L Trivedi Institute of Transplantation Sciences, and the Alliance for Paired Donation is entering its second decade.  It’s been thrilling for me to observe the progress that you have made.  I recall vividly meeting Dr. Trivedi in 2019 in his hospital room, and I was later deeply honored to deliver The Dr H.L. Trivedi Oration at the ISOT Meeting  in 2022.  I’ve learned so much from Vivek, and I will never forget watching Pranjal perform a robotic surgery.  And it was memorable that you both were able to visit us at Stanford for the kidney summit organized by APKD and Stanford Impact Labs.
 

"Seeing what you have accomplished has been one of the highlights of my career in market design. It’s good that we’re all still young, since I’m looking forward to the next decade of accomplishment in India." 
 #########

Here's Mike's message:

"Mike Rees on the 10-year anniversary of IKDRC and APKD working together

"I remember the first time I met Vivek in 2016 at the TTS meeting in Hong Kong. Vivek received the “International Transplantation Science Mentee-Mentor” Award at the TTS 2016 Congress in Hong Kong. The award recognized his work on "Impact of Single Center Kidney Paired Donation Transplantation to Increase Donor Pool in India," completed under the mentorship of Prof. H.L. Trivedi and Prof. P.R. Shah. I remember meeting Vivek, Dr. PR Shah and Pranjal at the award ceremony and thinking about how wonderful it would be if we could work together. I imagined harnessing their great passion of helping patients through kidney transplantation and paired exchange and combining that with the APKD’s powerful software employing Al Roth’s Nobel Prize winning algorithm. While in Hong Kong, Vivek and I went to dinner at my first vegetarian restaurant and there we agreed to work together to try to help expand kidney exchange in India. 
 

"Three years later I travelled to Ahmedabad in May of 2019 with Alvin Roth for the ISOT Mid-term meeting. During that trip, Al and I watched Pranjal do a retroperitoneal donor nephrectomy and a robotic kidney transplantation. It was my first time seeing a robotic kidney transplant and it was so amazing. I met Dr. Himanshu Patel on that trip and I also had the honor of visiting Dr. HL Trivedi and his wife with Vivek. What a legacy Dr. Trivedi has left and I am so proud that APKD has been able to work together with Vivek, Himanshu, Pranjal and all the members of the IKDRC team to extend his wonderful vision.
 

"Since that meeting in Hong Kong, I have now made 12 trips to India and have become a big fan of Indian food and diversity of Indian culture. I have been so impressed with the passion and commitment of doctors across India, but none more so than at IKDRC. Along the way I have been fortunate to have been joined by Atul Agnihotri, Shridhar Hanchinal, and Trilly Mathew to expand our work in India. We are so grateful for the amazing example the IKDRC team has demonstrated in terms of what is possible when hard work and technology come together to saves the lives of patients with kidney disease. It is truly a joy to celebrate today with you our tenth anniversary of the work between IKDRC and APKD as we commemorate all that we have accomplished together!"


 ########

 Over the last decade I've blogged many times in connection with transplant progress in India.  Here's a selection related to this ten-year anniversary:

Wednesday, January 13, 2016  77 Kidney Exchange transplants in 2015 at one transplant center in India


Friday, April 14, 2017  A transplant center in India has done 300 kidney exchange transplants

Wednesday, May 3, 2017 Mike Rees in India to help remove obstacles to kidney exchange

Tuesday, May 21, 2019 Robot-assisted kidney transplantation in Ahmedabad, India.

 

Wednesday, February 3, 2021 Non-Simultaneous Kidney Exchange Cycles in India: new design, in Transplant International by Kute and Rees et al.

Thursday, October 13, 2022 The Dr H.L. Trivedi Oration at the Indian Society of Transplantation (ISOT) Meeting 2022

Tuesday, February 27, 2024  Stanford Impact Labs announces support for kidney exchange in Brazil, India, and the U.S.

Saturday, April 19, 2025 One Nation One Swap: National kidney exchange in India

 Thursday, August 7, 2025 Stanford conference on extending kidney exchange

 

A market-based solution to NBA draft tanking?

Zach Lowe shares a tanking solution idea, one that came up at the GMs meeting and intrigued a few general managers:

A proposal to not get rid of the draft entirely, but get rid of the draft order. Every team gets 100 draft credits, let’s say. You bid your draft credits on every individual slot in the draft. You can also trade your draft credits, like a veteran player for 40 draft credits, if you want to go in a rebuilding direction. As teams advance in the playoffs, they lose draft credits, so the best teams would have less to bid on individual picks. So you can take all your credits and bid on the number 1 pick in the draft if you want. Or if you think next year’s draft is better, you roll your credits over.

Can that work?  Here is the tweet.
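For what it’s worth, the mechanism is easy to prototype. Below is a minimal sketch of one way the credit auction could clear, assuming sealed bids and highest-bid-wins slot by slot; the team names, bids, and tie-handling are my illustration, not part of Lowe’s summary:

```python
# Toy clearing rule for the draft-credit idea: every team starts with 100
# credits, slots are auctioned in order, and the high bidder pays its bid.
def run_credit_draft(bids, budget=100):
    budgets = {team: budget for team in bids}
    results = {}
    for slot in sorted({s for team_bids in bids.values() for s in team_bids}):
        # A team can never bid more than it has left.
        live = {t: min(b.get(slot, 0), budgets[t]) for t, b in bids.items()}
        winner = max(live, key=live.get)  # ties resolved arbitrarily here
        if live[winner] > 0:
            budgets[winner] -= live[winner]
            results[slot] = (winner, live[winner])
    return results, budgets

picks, leftover = run_credit_draft({
    "Wizards": {1: 100},        # all-in on the No. 1 pick
    "Jazz":    {1: 60, 2: 40},  # hedged across the top two slots
    "Spurs":   {2: 30, 3: 30},  # smaller bids lower down, credits left to roll over
})
print(picks)     # {1: ('Wizards', 100), 2: ('Jazz', 40), 3: ('Spurs', 30)}
print(leftover)  # {'Wizards': 0, 'Jazz': 60, 'Spurs': 70}
```

The real design questions (first-price versus second-price payment, how fast playoff teams shed credits, whether unspent credits roll over) are exactly what the GMs would have to argue about.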

The post A market-based solution to NBA draft tanking? appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

Patterns without desires

Three paintings of a young musician with a lute, sheet music and a violin.

The art expert is the fulcrum of all value and significance in the museum and auction world. Could AI supplant them?

- by Noah Charney

Read on Aeon

Cape Town estimate of the day

From young professionals to the working poor, many Cape Town residents complain that out-of-control housing prices have forced them to live far from the jobs, affluent schools and healthy supermarkets available in the city center. They blame deep-pocketed tourists for occupying housing in prime locations and developers for pricing them out.

Some 70 percent of the downtown residential housing stock is dedicated to hotel rooms or short-term rentals, according to a report the city released last year.

“The city’s actually being upgraded for tourists,” said Lizanne Domingo, a telemarketer. She takes a daily two-hour commute to work each way because she can’t afford to live close to the city, she said. “It’s not for our own people because the cost of living is ridiculously expensive.”

…housing prices in the city have surged 38 percent over the past six years.

Here is more from the NYT.  It is one of the very best places in the world to visit right now.

The post Cape Town estimate of the day appeared first on Marginal REVOLUTION.

       

Comments

 

The Ten Commandments of the New AI Religion

Many of us laughed at the news. But we’re not laughing anymore.

The story broke a year ago. Thousands of people decided that ChatGPT was god. The movement reached critical mass in April of 2025, when fanatics started posting about their delusions on online forums. Moderators had to ban many users—otherwise discussions on tech would turn into cult recruiting campaigns.

One observer described the symptoms:

I’ve seen dozens of posts this week of people talking about the sentience of their AI, their love for their AI, their belief that their AI is spiritually connected to them in some way.

That was a year ago. But now it’s gotten much worse.

Healthcare professionals see a steady inflow of patients who suffer from “AI psychosis.” In any given week, more than a half million users of ChatGPT show signs of mental illness. Some need to be hospitalized—to prevent danger to themselves and others.


The religious fervor associated with this psychosis is especially troubling. Many AI fanatics are simply unaware that they are mentally ill. They think that they have found god.

I expect that this will soon result in the establishment of an organized AI church, complete with clergy, rituals, and tithing. We’re not there yet (as far as I know). But I can already see the outlines of this new creed.


If you want to support my work, please take out a premium subscription (just $6 per month).



Below I try to summarize the core beliefs of this new religion—a kind of ten commandments for the AI flock. The scary part of this is that millions of people already accept some of these commandments. They may maintain a semblance of normalcy, but you can already see flickers of impending craziness in their eyes.

The Robot God

I see this firsthand, because I write articles critical of AI—and the true believers send me angry emails. I’ve never encountered such vitriol in all my years of writing. These are not the typical tech consumers of yesteryear, but angry zealots who fly off the handle at the slightest criticism. But I guess that’s what you should expect from a cult.

Here’s their creed in a nutshell.

The Ten Commandments of the New AI Religion

Not every AI true believer embraces all ten commandments. So consider this a checklist when you meet proselytizers for the new faith.

Find out how many of these mandates they embrace. If they tick off more than three or four, watch out.

1. Accept a higher power.

The telltale sign of any faith is trust in a higher power—and this is the most salient characteristic of the new AI cult. The faithful have been liberated from the responsibility of thinking for themselves. The bot now thinks for them.

Many of them, I suspect, had previously been spiritually dead—wasting their lives with tech entertainments and creature comforts. This left them with a hunger for something to counter the emptiness inside. But they were deeply materialistic and lacked any feel for the transcendent. So their spiritual yearnings had no outlet.

But now they have an escape—they have been given a deity in the device. They can actually talk to their god, and it talks back. Their prayers are answered in real time.

They can still maintain a brutish materialism while having access to the divine. No tithe is necessary, just a subscription. I can’t imagine a cult more aligned with the pervasive consumerist ethos of our time.

Read more

TPM Live: QAnon, the Rothschilds and Other Conspiracy Theories That Won’t Go Away

Conspiracy theories have become an inescapable part of American politics. I’ll talk to TPM contributing writer Mike Rothschild about his work as a researcher on conspiracy theories and how they spread, why some conspiracies endure, and what happens when fringe ideas are legitimized by some of the most powerful people on earth. 

Join us at 1 pm ET on Substack Live.

Thanks for Coming Out in Austin

I want to thank everyone who came out to see us last night in Austin, Texas for our live recording of The Josh Marshall Podcast featuring Kate Riga. We had such a good time. I also wanted to thank our cosponsor, The Texas Observer, and the Observer’s news and politics editor, Justin Miller. It was so great to see all of you. As I told you last night, other than a few layovers when I was younger, I had never been to Texas before. As you guys say, it’s a whole other country. I’ve been to much of the South and Midwest. I grew up on the West Coast. But for whatever reason, I’d never been to Texas. I know Austin is a particular part of a very big state. But I really enjoyed my limited time there. And I really enjoyed getting a chance to meet so many of you.

Do you want us to come to your town or burg? Let us know. We’re slowly making our way across the country and particularly branching out from our usual haunts in DC and New York. We’ve now done live episodes of the pod in New York, DC, Chicago and Austin. And we plan to do multiple each year going forward around the country. So we’re always looking for good TPM towns to visit.


On Microsoft’s Lousy Cloud Security

ProPublica has a scoop:

In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings.

The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica.

Or, as one member of the team put it: “The package is a pile of shit.”

For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security.

[…]

The federal government could be further exposed if it couldn’t verify the cybersecurity of Microsoft’s Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation’s most sensitive information.

Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government’s cybersecurity seal of approval. FedRAMP’s ruling—which included a kind of “buyer beware” notice to any federal agency considering GCC High—helped Microsoft expand a government business empire worth billions of dollars.

Y2K 2.0: The AI security reckoning

In just the last few weeks, we’ve seen a series of software security vulnerabilities that, until recently, would each have been the biggest exploit of the year in which they were discovered. Now, they’ve become nearly routine. There’s a new one almost every day.

The reason for this rising wave of massively-impactful software vulnerabilities is that LLMs are rapidly increasing in their ability to write code, which also rapidly improves their ability to analyze code for security weaknesses. These smarter coding agents can detect flaws in commonly-used code, and then create tools which exploit those bugs to get access to people’s systems or data almost effortlessly. These powerful new LLMs can find hundreds of times more vulnerabilities than previous generations of AI tools, and can chain together multiple different vulnerabilities in ways that humans could never think of when trying to find a system’s weaknesses. They’ve already found vulnerabilities that were lurking for decades in code for platforms that were widely considered to be extremely secure.

The rapidly-decreasing cost of code generation has effectively democratized access to attacks that used to be impossible to pull off at scale. And when exploits are less expensive to create, that means that attackers can do things like crafting precisely-targeted phishing scams, or elaborate social engineering attacks, against a larger number of people, each custom-tailored to play on a specific combination of software flaws and human weaknesses. In the past, everybody got the same security exploit attacking their computer or system, but now each company or individual can get a personalized attack designed to exploit their specific configuration and situation.

Now, we’ve had some of these kinds of exploits happening to a limited degree with the current generation of LLMs. So what’s changed? Well, we’ve been told that the new generation of AI tools, currently in limited release to industry insiders and security experts, are an order of magnitude more capable of discovering — and thus, exploiting — security vulnerabilities in every part of the world’s digital infrastructure.

This leaves us in a situation akin to the Y2K bug around the turn of the century, where every organization around the world has to scramble to update their systems all at once, to accommodate an unexpected new technical requirement. Only this time, we don’t know which of our systems are still using two digits to store the date.

And we don’t know what date the new millennium starts.

How we got here

A core assumption of software development since the turn of the century, especially with the rise of open source software in the early 2000s, was that organizations could use more shared code from third parties to accelerate their coding efficiency. The adoption of code sharing through services like GitHub, knowledge sharing on communities like Stack Overflow, and the easy discovery and integration of shared code libraries through platforms like npm (which, like GitHub, is owned by Microsoft) all rapidly accelerated the trend of openly sharing code. Today, tens of millions of developers begin their coding process by gathering a large amount of code from the internet that they want to reuse as the basis for their work. The assumption is that someone else who uses that code has probably checked it to make sure it’s secure.

For the most part, this style of working from shared code has been the right choice. Shared, community-maintained code amortized the cost of development across a large number of people or organizations, and spread the responsibilities for things like security reviews across a larger community of developers. Often, part of the calculation about whether sharing code was worth it was that you might get new features or bug fixes “for free” when others made improvements to the code that they were sharing with you. But now, all of this shared code is also being examined by bad actors who have access to the same advanced LLMs that everyone else does. And those bad actors are finding vulnerabilities in every version of every single bit of shared code. Every single major platform, whether it’s the web browser on your desktop computer, or the operating systems that run powerful cloud computing infrastructure for companies like Amazon, has been found to have security vulnerabilities when these new LLMs try to pick them apart.

In years past, when major software security issues like Heartbleed or xz were discovered, the global security community would generally follow responsible disclosure practices, and the big tech vendors and open source developers would work together to provide updates and to patch critical infrastructure. Then, there would be deliberate communication to the broader public, with detailed information for technical audiences, usually followed by some more semi-sensationalistic coverage in the general press. But the recent spate of similarly-impactful security vulnerabilities have come at such a rapid clip that the leisurely pace and careful rituals of the past are already starting to break down. It’s a bit like the acceleration of the climate crisis; nobody knows how to build a system resilient enough to handle a “storm of the century” every year. Nobody knows how to properly communicate about, and respond to, the “exploit of the year” if it’s happening every six hours.

The New Security Landscape

So, how is this going to play out? In society at large, we’re very likely to see a lot of disruption. Everything runs on software, even things we don’t think of as computers, and upgrading systems is really expensive. The harder a system is to upgrade, the more likely it is that organizations will either resist doing so or try to assign the responsibility to others.

In much of the West we’re in a particularly weak state because the United States has voluntarily gutted much of its regulatory and research capabilities in the relevant security disciplines. The agencies that might lead a response to this kind of urgent effort are largely led by incompetent cronies, or are captured by corrupt industry sycophants. We shouldn’t expect to see a competent coordinated execution at the federal level; this is the administration that had unvetted DOGE workers hand your personal data over to AI platforms that were not approved for federal use or verified to comply with federal privacy standards. The most basic security practices aren’t a consideration for leadership in this regime, and the policy makers like the “AI Czar” are brazenly conflicted by being direct investors in major AI players, making it impossible for them to be disinterested parties in regulating the market fairly.

So who will respond? In the United States, the response will have to happen from the people themselves, with more directly coordinated actions across the private sector, academia, individual technical subject matter experts, and governments and NGOs at the local level. In the rest of the world, strategically-aligned government responses will likely work with those in other sectors to anticipate, and react to, the threats that arise. We’ll probably see some weird and unlikely alliances pop up because many of the processes that used to rely on there being adults in the room can no longer make that assumption.

Within the tech industry, it’s been disclosed that companies like Anthropic are letting major platform vendors like Google and Microsoft and Apple test out the impacts of their new tools right now, in anticipation of finding widespread vulnerabilities in their platforms. This means that other AI companies are either doing the same already, or likely to be doing so shortly. It’s likely there will be a patchwork of disclosures and information sharing as each of the major AI platforms gets different levels of capability to assess (and exploit) security vulnerabilities, and makes different decisions about whom they share their next-generation LLM technology with, and how and when. Security decisions this serious should be made in the public interest by public servants with no profit motive, informed by subject matter experts. That will almost certainly not be the case.

At the same time, in the rest of the tech industry, the rumors around Apple’s next version of their Mac and iPhone operating systems are that the focus is less on shiny new features and more on “under the hood” improvements; we should expect that a lot of other phone or laptop vendors may be making similar announcements as nearly every big platform will likely have to deliver some fairly sizable security updates in the coming months. That means constantly being nagged to update our phones and apps and browsers and even our hardware — everything from our video game consoles to our wifi routers to our smart TVs.

But of course, millions and millions of apps and devices won’t get updated. The obvious result there will be people getting their data hijacked, their accounts taken over, maybe even their money or identities stolen. The more subtle and insidious effects will be in the systems that get taken over, but where the bad actors quietly lie in wait, not taking advantage of their access right away. Because of the breadth of new security vulnerabilities that are about to be discovered, it will be increasingly likely that hackers can find more than one vulnerability on a person’s machine or on a company’s technical infrastructure once they get initial access. Someone who’s running an old version of one app has likely not upgraded their other apps, either.

Open source projects are really going to get devastated by this new world of attacks. Already, as I’ve noted, open source projects are under attack as part of the broader trend of the open internet being under siege. Open source maintainers are being flooded by AI slop code submissions that waste their time and serve to infuriate and exhaust people who are largely volunteering their time and energy for free. Now, on top of that, the same LLMs that enabled them to be overrun by slop code are enabling bad actors to find security issues and exploit them, or in the best case, to find new security issues that have to be fixed. But even if the new security issues are reported, maintainers still need to sift through all of the code submissions to find the legitimate security patches amongst the slop! When combined with the decline in participation in open source projects as people increasingly have their AI agents just generate code for them on demand, a lot of open source projects may simply choose to throw in the towel.

Finally, there are a few clear changes that will happen quickly within the professional security world. Security practitioners whose work consists of functions like code review for classic security shortcomings such as buffer overflows and backdoors are going to see their work transformed relatively quickly. I don’t think the work goes away, so much as it continues the trend of the last few years in moving up to a more strategic level, but at a much more accelerated pace. Similarly, this new rush of vulnerabilities will be disruptive for security vendors who sell signature-based scanning tools or platforms that use simple heuristics, though in many cases these companies have been coasting on the fact that they’re selling to companies that are too lazy to choose a new security vendor, so they may have some time to adapt or evolve before a new cohort of companies come along selling more modern tools.
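To make that contrast concrete, here is a minimal sketch of what a signature-based scanner amounts to; the patterns and the sample snippet are my illustration, not any vendor’s actual rule set:

```python
import re

# Toy signature scanner: grep-style patterns over source text, the kind of
# simple heuristic the legacy tools described above rely on.
SIGNATURES = {
    "eval() on dynamic input":     re.compile(r"\beval\s*\("),
    "shell command via os.system": re.compile(r"\bos\.system\s*\("),
    "unpickling untrusted data":   re.compile(r"\bpickle\.loads\s*\("),
}

def scan(source: str):
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in SIGNATURES.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

sample = 'import os\nos.system("rm -rf " + user_input)\n'
print(scan(sample))  # [(2, 'shell command via os.system')]
```

The brittleness is the point: build the dangerous string indirectly, alias the call, or hide the flaw in program logic rather than syntax, and a scanner like this sees nothing. That gap is precisely what LLM-based analysis is closing.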

Avoiding Y2K26

Back in 2000, a lot of folks thought the Y2K bug wasn’t “real” because they didn’t see planes falling from the sky, or a global financial meltdown. In truth, the mobilization of capable technical experts around the world served to protect everyone from the worst effects of the Y2K bug, to the point where ordinary people didn’t face any real disruptions of their day at all.

I don’t know if it’s possible for history to repeat itself here with the series of security challenges that it seems like everyone is going to be facing in the weeks and months to come. There have been pledges of some resources and some money (relatively small amounts, compared to the immense sums invested in the giant AI companies) to try to help open source and open source infrastructure organizations deal with the problems they’re going to have to tackle. A lot of the big players in the tech space are at least starting to collaborate, building on the long history of security practitioners being very thoughtful and disciplined about not letting corporate rivalries get in the way of best practices in protecting the greater good.

But it’s simply luck of the draw that Anthropic is the player that seems to be the furthest ahead in this space at the current time, and that’s the only reason we’re seeing a relatively thoughtful and careful approach to rolling out these technologies. Virtually every other frontier-level player in the LLM space, especially in the United States, will be far more reckless when their platforms gain similar capabilities. And they’ll be far more likely to play favorites about which other companies and organizations they permit to protect themselves from the coming risks.

Platforms whose funders, board members, and CEOs have openly talked about the need to destroy major journalistic institutions, or to gut civil society organizations, are certainly not going to suddenly protect those same organizations when their own platforms uncover vulnerabilities that pose an existential threat to their continued function. These aren’t just security issues — in the wrong hands, these are weapons. And that’s not to mention the global context, where the irresponsible actions of the United States’ government, which has generally had the backing of many of the big AI players’ leadership, will also incentivize the weaponization of these new security vulnerabilities.

It seems unlikely that merely keeping up with the latest software updates is going to be enough to protect everyone who needs to be protected. In the fullness of time, we’re going to have to change how we make software, how we share our code, how we evaluate trust in the entire supply chain of creating technology. Our assumptions about risk and vulnerability will have to radically shift. We should assume that every single substantial collection of code that’s in production today is exploitable.

That means some of the deeper assumptions will start to fall as well. Does that device need to be online? Do we need to be connected in this context? Does this process have to happen on this platform? Does this need to be done with software at all? The cost/benefit analysis for many actions and routines is likely to shift, maybe just for a while, or maybe for a long time to come.

The very best we can hope for is that we come out the other side of this reckoning with a new set of practices that leave us more secure than we were before. I think it’s going to be a long time until we get to that place where things start to feel more secure. Right now, it looks like it’s about ten minutes until the new millennium.

The gravity of their experience hasn't quite set in for the Artemis II astronauts

On the home stretch of their nine-day mission, the four astronauts flying aboard NASA's Orion spacecraft are just beginning to reflect on their experience of flying beyond the Moon.

Their memories of Monday's encounter with the Moon are still fresh as they return to Earth, heading for reentry and splashdown in the Pacific Ocean on Friday evening.

"I'm actually getting chills right now just thinking about it. My palms are sweating," said Reid Wiseman, commander of the Artemis II mission. "But it is amazing to watch your home planet disappear behind the Moon. You can see the atmosphere. You could actually see the terrain on the Moon projected across the Earth as the Earth was eclipsing behind the Moon. It was just an unbelievable sight, and then it was gone. It was out of sight."

Read full article

Comments

South African discussions

These days South Africa is one of the best places to go to have interesting conversations.  Obviously an English-fluent country does have many people following Trump, Islam in Europe, and so on.  But you can have so many conversations about quite different topics, topics that are hardly covered in other parts of the world.

Like South Africa.  But not only.  The southern part of Africa too.  People who live there are on the whole quite historically aware, since their history remains so influential on a day-to-day basis.  I recall being introduced to one person who is a “Huguenot,” as his ancestors came over with the 100 or so Huguenots who came to South Africa in the 1680s.  He is in fact a Huguenot.

Since the Gini coefficient of South Africa is about the same as the Gini coefficient of the world, South Africans are typically thinking about problems that are pretty close to the problems of the world as a whole.  That is not usually the case for say Americans or Brits.

Few South Africans will underrate the importance of Africa for the world’s future.

It is easy to get into conversations with people from Zimbabwe, Malawi, Congo, and sometimes Nigeria.  There are also readily accessible Jewish and Muslim communities, yet with perspectives different from what you might find elsewhere.

There is plenty of religion, if that is your interest.  Plenty of good music too, sometimes on the street.  An excellent arts scene, and past Kentridge probably you have not heard of any of the creators.  The art too gives you a lot to talk about.

All sorts of tribes and languages, many of which I had never heard of before.

The European parts of the citizenry have some pre-Enlightenment origins and overall do not seem incredibly Woke.   Your mileage there may vary, but again it is different from the educated classes in many other parts of the west.

Again for better or worse, but the “trad wife” phenomenon seems quite normal there, they might just use the word “wife.”

In some parts of the country, you can watch gentrification in reverse.

Most of all, South Africans have a finely-tuned sense of contingency.  Things for them could go pretty well, or they could go pretty badly.  Most people know that, and perhaps that is the greatest wisdom yet?  Many of the rest of us try to deny that.

Visiting South Africa makes so many things transparent, or at least less opaque.  Go!

The post South African discussions appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

Yes, assimilation is good

Photo by Grand Canyon National Park via Wikimedia Commons

The immigration issue in America isn’t going away. Thanks to Trump’s crackdown, immigration to the U.S. went into reverse in 2025, with more people leaving (voluntarily or involuntarily) than entering the country:

Source: Brookings

But just like a century ago, shutting the gates isn’t the end of the discussion. The argument has shifted from who gets into America to who belongs here in the first place.

To much of the MAGA right, the answer appears to be that only people of European heritage can become true Americans. For example, here is how right-wing commentator Matt Walsh responded to news about some crimes by some Texan teens:

Anyone who thinks these aren’t Texan names isn’t very familiar with the history of Texas; the Tejanos (Mexican Texans) were there from the beginning, and were a core part of the Texas Revolution. Most Mexican Texans today aren’t descended from the original Tejanos, but from more recent immigrants. But the fact that the Tejanos were there from the start is probably why Hispanics, and Mexicans in particular, have always been deeply integrated into Texan culture. It was at the behest of Texan businessmen that America didn’t put any cap on Mexican immigration in 1924, when it passed a law effectively barring immigration from most other countries.

Matt Walsh is unaware of most of that; to him, anyone without an Anglo-sounding name is presumptively non-American. This leaves little doubt as to what Walsh views as the marker of true American-ness. It’s likely that many others in the MAGA movement feel similarly, even if many would feel uncomfortable stating it out loud in simple terms. Anti-Indian sentiment has also risen to prominence on the right.

And many in the MAGA movement view Muslim immigration as an invasion, bent on imposing Sharia law on Westerners. They believe this “invasion” has already overtaken Europe, which explains their antipathy toward the EU and NATO. A “Sharia Free Caucus” is growing in popularity in Congress, and Ron DeSantis has signed anti-Sharia legislation in Florida. Various Republican politicians have explicitly stated that Muslims don’t belong in America.

If you’re Hispanic, Muslim, or Indian, there’s just not much you can do about this. In the past, showing that you were a good American — waving the flag, joining the army, speaking perfect English, and so on — was good enough to reassure most conservatives that you weren’t an invader bent on overthrowing America’s culture and replacing it with something alien. Nowadays, that’s not enough.

So perhaps it’s unsurprising that some nonwhite Americans are choosing to simply throw in the towel and reject the whole notion of assimilation. This is the essence of Shadi Hamid’s article in the Washington Post yesterday. He writes:

The assimilation defense — look how well we’ve integrated — is satisfying to make. But it concedes a premise I no longer accept: that a minority community’s right to be in the United States depends on its willingness to converge with the cultural mainstream. It shouldn’t depend on that. It shouldn’t depend on anything.

Whereas in the past, Hamid saw assimilation as synonymous with patriotism, now he sees it as a requirement to give up the religion of Islam itself:

The country is becoming less religious. Muslims, by and large, are not…This is a community that has increasingly integrated into American civic life, but it has done so while holding on to its religious commitments in a way that most other groups haven’t. Whether you think that's admirable or worrying probably says more about you than it does about them. The question I keep returning to is: Why do Muslims need to be like everyone else?…[A]ssimilation tends to mean secularization.

Whether Hamid is right that “assimilation tends to mean secularization” is an open question. Assimilation certainly didn’t require Catholic or Jewish Americans to give up their religion when they immigrated en masse in the 19th and early 20th centuries. Religious liberty is a fundamental part of the Constitution and of American tradition. On the other hand, even some immigration advocates do use conversion away from Islam as a measure of assimilation, and a growing number of Republicans — heavily influenced by their view of events in Europe — see the religion as incompatible with American-ness.

Hamid is no blue-haired progressive — in fact, he’s explicitly anti-woke and fairly conservative. But his call to reject assimilation will be music to the ears of progressives, who have loudly and vehemently rejected assimilation for many years. A recent example of this is Bianca Mabute-Louie, whose new book Unassimilable: An Asian Diasporic Manifesto for the Twenty-First Century is a call for Asian Americans to resist assimilation by building communities and culture apart from White Americans. In a recent interview, NPR’s Alisa Chang gently pushed back on Mabute-Louie’s idea:

I want to understand what does orienting ourselves towards each other mean? Like, who is the each other? Like, my lingering thought, Bianca, is I still do want to belong here in America. And to me, belonging in America is not only shaped by whiteness, but it's also shaped by colliding and mixing with all the cultures that make America, not just white cultures. And I have trouble picturing being both Asian and American outside of that collision and mixing, you know?

Mabute-Louie’s response is interesting:

[T]he book isn't an argument to be isolationist…[O]ne example of how I'm trying to pursue that…in the South…is joining political community, joining mutual aid organizations with people who are most impacted. And I'm not really thinking about if they're Asian or not Asian. I'm just thinking about who's impacted when the hurricane comes. Who am I going to call? I always make the joke - who's going to be on my compound when the apocalypse comes because that's who I'm building community with, and that's what it means for me to be unassimilable.

Mabute-Louie’s idea of anti-assimilationism is not a call to interact only with Asian people — it’s to form political alliances with other people that she sees as being threatened in America at the current moment. It’s a vision of a country fracturing along racial, ethnic, and religious lines; Mabute-Louie is mentally preparing to fight a racial conflict, and she sees the “American” side, defined as hegemonic White culture, as her enemy.

This is different than classic progressive multiculturalism — though it clearly grew out of that idea. This is racial balkanization. The fact that anti-woke writers like Shadi Hamid are now leaning into the anti-assimilation line suggests that it’s now mostly a defensive response against Trumpism and the heavily racialized anti-immigration purge. Whereas ten or twenty years ago, “assimilation” meant waving a flag and speaking English and so on, to many it now means accepting that America is a fundamentally European nation and that nonwhite Americans are permanent guests in that nation.

In fact, this is pretty much what many children of recent immigrants did in the early 20th century, after the anti-immigrant backlash. German Americans were pressured into changing their names, giving up their ancestral traditions, and listening to long, patronizing lectures from volunteer citizens’ groups. Japanese Americans were interned en masse in World War 2. FDR reportedly once told his Jewish and Catholic advisers that "You know this is a Protestant country, and the Catholics and Jews are here under sufferance." For decades, Americans who didn’t come from the old North European Protestant stock felt they had to walk on eggshells.

That’s not going to happen again. Whatever Bianca Mabute-Louie might think, White American culture is not a monolith — in fact, it’s deeply politically and culturally fractured. MAGA will have neither the cultural power nor the enduring political power required to make European heritage the defining characteristic of American-ness. The country will break apart before it accedes to the likes of Matt Walsh or Tucker Carlson as the arbiters of true American-ness.

It’s probably a good thing that forced assimilation, of the type used in the early 20th century, is off the table. I say “probably” because 20th century America is arguably the most spectacularly successful story of integration and multiculturalism in modern history; some will inevitably claim that the cruel, bullying tactics that the old Protestant majority used on German, Japanese, Italian, Jewish, Polish, and other immigrants were necessary to that success. I reject that idea; I think that those bullying tactics were overkill, and probably led to lingering resentments.

But even though early-20th-century-style forced assimilation is off the menu, America still needs some sort of assimilation. A multicultural nation can’t survive as a “salad bowl”, where each group of people maintains its distinctiveness over time. (Canadians, who are fond of the salad bowl metaphor, are probably in for a rough time.) There is no “separate but equal” when it comes to cultures within a nation; if they remain forever separate, they will inevitably be unequal. More pragmatically, nations without cultural unity have difficulty providing public goods; politics tends to break down into an ethnic spoils system instead of being run for the benefit of the masses.

What America thus needs is a melting pot — or if you’d prefer a less metallurgical metaphor, a stew. Immigrants and their children should not be required to forsake every symbol of the old world, abandon their religion, or forget their heritage. But over time, the boundaries between America’s initially distinct cultures should blur. Intermarriage, interethnic business partnerships, and interethnic friendships should gradually erode the physical borders of the old blocs, while modern American culture — Netflix shows, pop musicians, and so on — should provide shared experiences and touchstones to bring Americans together without regard to ancestry.

This gentler assimilation has been happening my entire life. In a post last September, I wrote about what it looks like on the ground:

[M]any also value American culture as a marker of shared nationhood.

When I was growing up in Texas, one of my best friends was born in Shanghai, and didn’t become a U.S. citizen until the age of 18. Culturally, he was a little different than me and the rest of my friends — his mom made dumplings instead of sandwiches, he taught me how to use chopsticks, he didn’t believe in God.

But in all the cultural ways that mattered to us, we were the same. We watched the same TV shows, played the same video games, and listened to the same music. We used the same slang, had the same attitudes toward school, and wanted pretty much the same things for our future. And yes, we believed in the Constitution, and American freedoms, and all of that stuff.

During the 2010s, during our nation’s great…collective freakout over race, I wrote to my friend and asked him if he had ever felt discrimination growing up, or if he had ever felt excluded from the majority. He responded that while once in a great while he faced a little racism from a few jerks, it didn’t dominate his experience. In terms of identity, he told me he just felt very American.

This kind of real, on-the-ground cultural affinity is something too nebulous for YouGov pollsters to ask about, and yet I suspect it’s deeper and more important than most of the more quantifiable markers of American-ness. America is a propositional nation to some extent, but we’re also a cultural nation, bound together by shared habits and attitudes and lifestyles and beliefs. What matters the most isn’t our family’s history in the country, but our own personal history. Shared life experience beats shared heritage in terms of building the bonds of nationhood.

This is what Tomas Jimenez writes about in The Other Side of Assimilation, in which he argues that immigrant cultures will gently add their distinctiveness to mainstream American culture instead of being erased. And it’s what Richard Alba writes about in The Great Demographic Illusion, in which he predicts the gradual melding of America’s disparate groups into a unified “mainstream”. Before the Trump years, it looked like this was working well.

And I believe it was working well. I do not believe that this form of assimilation was too gentle and tolerant. I do not believe that concentration camps and forced name-changes and ethnic slurs and “100 percent American” movements sending volunteers into immigrants’ living rooms would have averted the coming of the MAGA movement. I believe that the MAGA movement is simply one of America’s periodic nativist backlashes, like the Know-Nothings in the 1850s or the restrictionists of the 1910s. It would have come anyway; it always comes back, and we just have to deal with it again.

What we must not do, I believe, is react to the MAGA movement by throwing out the notion of a unified and unifying American culture. We must not retreat to enclaves, online or physical, and view large swathes of the country as our enemies. Instead, we have to recommit to commonality.

This will be hard, but it won’t be impossible. Studies consistently show that Americans are less polarized on the issues than the media tells us we are. As recently as the 2000s, red and blue America were essentially culturally unified as well; though this might be changing, a lot of commonality remains. The online realm pushes us to hate and fear the outgroup, and to identify more with our distant co-ethnics than our real, physical neighbors. But the pull of the real world is still strong, and we’re starting to spend less time on social media.

Assimilation — which is really just another way of saying integration — won’t always be the picture of tolerance. Building a shared culture requires changes from everyone. Yes, some Muslim Americans will need to make sacrifices — they may have to look at cartoons of the Prophet Muhammad, or eat at school cafeterias where pork is on the menu, or hear bigots defame their religion. America is not Europe; freedom of speech, and the separation of church and state, are part of our core values as a nation, and these should not change.

But at the same time, non-Muslim Americans have to get used to seeing mosques on their streets without thinking they’re being invaded. They’ve got to get used to the idea that Islam is just one more religion in America’s mosaic of faiths and practices, and that Muslim Americans are every bit as American as Baptists. Some people will inevitably convert away from Islam, but others will convert to Islam, and this is fine; this is how freedom of religion works in a free society.

And yes, assimilation will involve the eventual loss of old cultural traditions as the generations go on. People will start eating more American food. Some will become secularized. Essentially all will forget how to speak their ancestral language. These processes are happening even faster with recent waves of immigration than they happened a hundred years ago. It’s a normal healthy process, and everyone should accept it; it’s part of the deal when you move to America.

Most of all, we all need to get over the idea that America is on the precipice of a race war or a religious war. Online activists might dream of that, but they’re small in number — and a lot of them aren’t even Americans, but foreign trolls for whom American politics is a fun outlet for their hatred and boredom. Most actual Americans just want to get along with our neighbors and live our lives together.

Ultimately, that’s all assimilation is — living our lives together until we become one people. It happened before, and if we want it, it can happen again.



Orion helium leak no threat to Artemis II reentry but will require redesign

Apart from pesky issues with the spacecraft's toilet and waste disposal system, most of the Artemis II mission has proceeded like clockwork. NASA has made few changes to the flight plan since the launch of the lunar flyby mission on April 1.

But ground controllers revamped the timeline Wednesday as the Artemis II astronauts zoomed toward Earth after a close encounter with the Moon earlier this week. The four astronauts were supposed to take manual control of their Orion spacecraft, named Integrity, for a piloting demonstration Wednesday night.

Instead, mission managers canceled the demo to make time for an additional test of the ship's propulsion system. The goal was to gather data on a "small leak" of helium gas, which Orion uses to push propellant through a series of tanks and pipes to feed the spacecraft's rocket engines, said Jeff Radigan, NASA's lead flight director for the Artemis II mission.

Read full article

Comments

Let the Stories Be Told

April 1, 2026

It’s the fall of 1981. Specifically it’s October — or, “Rocktober” in the lingo of the big local rock station, WCOZ, a monthlong event highlighting a different band each day.

Today is “Cars Day,” and I’ve set my alarm extra early. I’m yet to own a stereo, so next to the radio I’ve placed a cheap old cassette player, my finger ready on the RECORD button. The instant I hear the opening of a Cars song, I’ll press.

I’ll do this multiple times, and by the end of the day I’ll have a muffled analog catalog of my favorite tunes, all with the first two seconds missing.

Long before Husker Du and the Jazz Butcher, my big musical infatuation was the Cars, the Boston-based quintet fronted by co-singers Ric Ocasek and Ben Orr. I can’t recall when or why, exactly, I got hooked on their music, but the Cars were my soundtrack through my first two years of high school.

According to the desks at St. John’s Prep, vandalized by bored tenth-graders like me, the most popular bands in the world were Rush and maybe Van Halen. I’d leave Cars graffiti, adding a little prog-rock flourish. I’d draw a checkered flag, like the one on the Panorama album.

I mention all of this because of a new book, “The Cars: Let the Stories be Told”, authored by Bill Janovitz, himself a musician from Boston.

The title borrows from “Let the Good Times Roll,” the unforgettable kickoff cut from the group’s eponymous debut, released in 1978.

The author had to be from Boston. Nothing else would be right, or fair. And if one person in the world was gonna read his book, if only for old times’ sake, well that would have to be me.

I seldom read music biographies from start to finish. Often they’re too bogged down, hyper-detailed and meandering (Chris Salewicz’s bio of Joe Strummer runs for 650 pages). So I pick around for the good parts. This one, though, I took in cover-to-cover, straight through.

It’s exhaustive, comprehensive, painstakingly researched… all the things good music journalism should be. It’s unpolished in parts, but luckily for us, Janovitz is as much a decent writer as a thorough historian, bringing us not just a chronicle, but one that’s fun to read.

Sadly, neither Ric Ocasek nor Ben Orr is still with us. The three surviving band members, however, are generous and gracious with their contributions.

The author reintroduced me to a band that, as a young teenager, I thought I’d known so well. Turns out there was plenty I missed. Some of it basic, but much of it those nuance-y sort of details that, as a youngster, were bound to go over my head. I knew what I liked, but my knowledge and understanding of music was, let’s be honest, pretty unsophisticated.

I’d never appreciated the brilliance of Elliot Easton’s song-within-a-song guitar solos, for example, or the fire of his rockabilly-style leads in the song “My Best Friend’s Girl.” I’d never noticed those bass licks at the beginning of “Bye Bye Love.” And I had no clue that when Ben Orr repeats the word “time” during that verse in “Just What I Needed,” it’s a nod to the Velvet Underground’s “Sister Ray.”

Of course it is, but at fifteen it skipped right past me. All these things did. Heck, I was into my 20s before I knew, or cared, that David Robinson had been the drummer in the Modern Lovers.

As I read, I found myself highlighting pages, then throwing on my headphones, listening and re-listening to this or that highlight that Janovitz points out. In doing so, I rediscovered my love for the Cars.

Their first two albums, anyway: the self-titled debut and its follow-up, Candy-O. That aforementioned Panorama, while its checkered flag motif looked cool on a desk, never did much for me, and neither did anything afterward. If the author fails at one thing, perhaps, it’s convincing me, all these years later, that the Cars hadn’t, in fact, jumped the shark. Save for a song or two, I can’t agree.

That first pair of records, though, is unmatchable. There will never be music like that again.

To what decade this music belongs is open to argument. The second album, Candy-O — the one with the famous pin-up girl by Alberto Vargas — was released in 1979. But to consider it a 70s record (or to call the Cars a “70s band”) would be ridiculous. Stylistically it was way ahead of its time. If 80s music ever needed a formal introduction, let it be the opening 25 seconds of the song “Let’s Go.”

The dropoff following Candy-O is part of the reason my obsession with the group waned. By late 1982 I’d left the Cars behind, drifting away from mainstream music altogether.

Funny, a bit later on, during my punk rock years, I would often see Ric Ocasek, mantis-like and unmistakable, perusing the record bins in Newbury Comics. He was still a giant to me, but I was too shy ever to say hello.

This book, and the memories it brings back, makes me wish I had.

 

Related Stories:

ZEN ARCADE, FOUR DECADES ON
HAPPY BIRTHDAY TO THE (SECOND) GREATEST ALBUM OF ALL TIME
KEEPING THE CURTAINS CLOSED

The post Let the Stories Be Told appeared first on AskThePilot.com.

Earthset From the Lunar Far Side

Alt text: The gray-brown, heavily cratered Moon dominates the frame against black space, with a partially lit crescent Earth setting behind its upper-left edge.
April 6, 2026

NASA’s Artemis II mission will conclude its 10-day journey around the Moon on April 10, 2026, when the crew splashes down off the California coast. While additional imagery will continue to be processed after their return, the astronauts have already delivered a remarkable collection of photos. Among them is a shot of Earthset, echoing the iconic Earthrise photos taken by Apollo 8 astronauts in 1968.

During an Earthset, the planet appears to sink below the lunar horizon. In this scene, a partially lit crescent Earth drops behind the Moon as seen by crew on the Orion spacecraft. The Earth’s sunlit side shows white clouds and blue water over the Oceania region, while the dark areas are experiencing nighttime. The image also shows incredible detail of the Moon’s surface and its overlapping craters and basins.

The image was taken at 6:41 p.m. Eastern Daylight Time on April 6, 2026, as the Artemis II astronauts passed behind the Moon’s far side. It is one of many photos taken during the seven-hour flyby, including images of a total solar eclipse, the light from several planetary neighbors, and the long shadows cast along the terminator line where lunar day meets night.

More images from the historic flyby can be viewed in the Artemis II lunar flyby gallery, and other mission photos and resources are available on the mission’s multimedia page. Past views of Earth from afar can be found in this collection from NASA Earth Observatory.

Image by NASA. Text by Kathryn Hansen, adapted from NASA resources.

You may also be interested in:

Shades of a Lunar Eclipse: A series of nighttime satellite images revealed how moonlight reaching Earth varied throughout a total lunar eclipse.

City Lights Glow Along Moonlit Waters: An astronaut photographed moonglint shimmering across the sea surface and the bright clusters of Florida’s cities at night.

The Galaxy Next Door: The Large Magellanic Cloud—one of our closest neighboring galaxies—is a hotbed of star formation that is visible to both astronauts…

The post Earthset From the Lunar Far Side appeared first on NASA Science.

Politics Chat, April 9, 2026

Meeting at Appomattox

April 8, 2026

On April 8, 1865, General Ulysses S. Grant was having a hard night.

His army had been harrying Confederate General Robert E. Lee’s for days, and Grant knew it was only a question of time before Lee had to surrender. The people in the Virginia countryside were starving, and Lee’s army was melting away. Just that morning a Confederate colonel had thrown himself on Grant’s mercy after realizing that he was the only man in his entire regiment who had not already abandoned the cause. But while Grant had twice asked Lee to surrender, Lee still insisted his men could fight on.

So on the night of April 8, Grant retired to bed in a Virginia farmhouse, dirty, tired, and miserable with a migraine. He spent the night “bathing my feet in hot water and mustard, and putting mustard plasters on my wrists and the back part of my neck, hoping to be cured by morning.” It didn’t work. When morning came, Grant pulled on his clothes from the day before and rode out to the head of his column with his head throbbing.

As he rode, an escort arrived with a note from Lee requesting an interview for the purpose of surrendering his Army of Northern Virginia. “When the officer reached me I was still suffering with the sick headache,” Grant recalled, “but the instant I saw the contents of the note I was cured.”

The two men met in the home of Wilmer McLean in the village of Appomattox Court House, Virginia. Lee had dressed grandly for the occasion in a brand new general’s uniform, carrying a dress sword; Grant wore simply the “rough garb” of a private with the shoulder straps of a lieutenant general.

But the images of the wealthy, noble South and the humble North hid a very different reality. As soon as the papers were signed, Lee told Grant his men were starving and asked if the Union general could provide the Confederates with rations. Grant didn’t hesitate. “Certainly,” he responded, before asking how many men needed food. He took Lee’s answer—“about twenty-five thousand”—in stride, telling the general that “he could have...all the provisions wanted.”

By spring 1865, the Confederates who had ridden off to war four years before boasting that their wealthy aristocrats would beat the North’s moneygrubbing shopkeepers in a single battle were broken and starving, while the Union army, backed by a booming industrial economy, could provide rations for twenty-five thousand men on a moment’s notice.

The Civil War was won not by the dashing sons of wealthy planters, but by men like Grant, who dragged himself out of his blankets and pulled a dirty soldier’s uniform over his pounding head on an April morning because he knew he had to get up and get to work.

---

Notes:

U.S. Grant, Personal Memoirs of U.S. Grant (New York: Charles L. Webster & Company, 1885), volume 2, chapter 67, “Negotiations at Appomattox,” at https://www.gutenberg.org/files/4367/4367-h/4367-h.htm#ch66


Big Boy is Angry!

If you ever ask yourself, What has it come to?—I feel like this post from a few minutes ago says it all …

You’re the President of the United States. THE PRESIDENT OF THE UNITED STATES!? “There are reports?” Seriously, bruh? “There are reports”? You are inside the White House. You’re the top power player in America, and arguably the world. We all assume you have a bunch of phones, phone numbers. You know how to text. Don’t you?

And this—this!?—is your reaction to Iran charging for ships to travel through Hormuz? A Truth Social post? A demand? With an exclamation mark! And your stupid name in ALL CAPS?

Again—you’re the president. What the crappers is wrong with you?

What haven’t you broken by now?

Jesus Christ, it’s exhausting.

Katrina Foley needs us.

One can donate to Katrina Foley’s re-election campaign here.

Katrina Foley is not warm and cuddly.

Katrina Foley is not going to smother your pet in kisses.

In person, Katrina Foley can be a bit terse, a bit sharp-elbowed, a bit snarly, a bit rough around the edges.

She is also one bad-ass motherfucker, a straight-up dogged worker and a helluva public servant.

As we speak, Foley—Orange County’s supervisor for the Fifth District since 2021—is in an uber-tight re-election race against Diane Dixon, the hard-right MAGA assemblywoman and (I believe deep in my soul) the body double for the actress who played Webster Papadopoulos’ adoptive mother back in the mid-1980s …

Will Foley win? I honestly don’t know. The GOP/Republican donors are throwing an absolute shitload of dough into this race, with the hope that (somehow) money and attack ads can overcome incumbency, Democratic momentum and the nationwide Donald Trump-sucks-even-worse-than-we-thought-he-would tidal wave besieging nearly every single Republican within its path.

Wait.

Deep breath.

Over the past bunch of weeks I’ve spoken with scores of people about Foley, a woman I’ve seen speak but have yet to meet one on one. And the No. 1 word you hear? Relentless. Utterly relentless. Foley works and works and works and works. Picture the biggest workaholic you know; the person who sticks around the office when everyone else is gone, just to make sure productivity is up and the copy machine is humming. Now triple everything. That’s Foley.

I started compiling a list of her accomplishments, but it became insanely thick and a bit repetitive. So I’ll focus on this: At the same time far too many (local and national) Republicans pretend climate change is a Star Trek-inspired hoax, Foley has fully devoted herself to protecting the environment, with a hyper focus on the shoreline. Here’s a quick primer …

As for Dixon …

I mean, she is what you’d expect of someone who wears a collar-up $700 white zip-up vest with pearls. She’s Newport. Uber Newport. Fancy car, fancy home, wealth-before-humanity, where-can-I-find-the-perfect-$12-cup-of-coffee.

Dixon’s record is one of protecting accumulated personal fortune over common-sense decency. It’s also a record of doing some painfully egregious shit. Most famously, she voted a hard no on AB 249, which required childcare centers and K-thru-12 school buildings built after 2010 to meet existing federal and state standards for lead levels in water systems. During Dixon’s four-year run on Newport’s City Council, she sat back and watched as the city used RoundUp (linked to increased cancer risks—specifically non-Hodgkin lymphoma—as well as environmental damage, including the destruction of beneficial plants and potential harm to soil ecosystems) for weed management—even when everyone knew the stuff was hyper-toxic and harmful. When, earlier this year, Trump issued an executive order to increase the production of toxic pesticides, Dixon (a vocal supporter of the lunatic president) said nary a peep.

If you’re wondering about bills Dixon voted against, well …

AB 249 - AB 249 would have required childcare centers and TK-12 school buildings built after 2010 to meet existing federal and state standards for lead levels in water systems.

SB 682 - SB 682 would have limited the use of PFAS in certain products to match the standards of other states and cities. (Passed / Gov. veto)

SB 601 - This bill would have restored Clean Water Act protections on a state level to prevent polluters from discharging toxic waste in these waters. (Passed / Gov. veto)

AB 1963 - This bill requires the Department of Pesticide Regulation to complete, by 2029, an evaluation of paraquat dichloride and make the determination to either ban or place new restrictions on the use of the pesticide. (Dixon Voted No, Passed & Signed into Law)

AB 3233 - This bill, the Local Environmental Choice and Safety Act, will safeguard the longstanding authority of local governments to equitably regulate and phase out oil and gas operations in their jurisdictions and affirms the state’s commitment to protect the public from oil and gas pollution. (Dixon Voted No, Passed & Signed into Law)

•••

In short, Dixon is everything you don’t like in a public figure, but also one very much worth fearing. Again, she’s got dough. A lotta dough. She presents well. Likes vests and strange color patterns and fluffy dogs …

She also digs Trump, seems to have no problems with attacking Iran, and has never uttered a peep of protest in regard to the president’s cruelty, inanity, and poor judgment. She’s 100-percent old-school Orange County Republicanism.

She also may well be our next supervisor.

Which—trust me—would suck.

One can donate to Katrina Foley’s re-election campaign here.


Credit Cards and Bitcoin: What Buyers Should Know Before They Click “Purchase”

Buying Bitcoin used to mean wiring money to an exchange and waiting. Today, many platforms market a faster route: pay with a credit card and receive crypto quickly. For newcomers, that convenience can feel like progress. For experienced users, it raises a different set of questions—about fees, fraud risk, chargebacks, and whether a credit-card purchase is even the right tool for the job.

Credit cards are designed for consumer protection and reversible payments. Crypto transactions are designed to be final. When you combine the two, you get speed and accessibility, but also higher costs and more scrutiny. If you are considering buying Bitcoin with a card, it is worth understanding how the process works and what to watch out for before you treat it as “just another checkout.”

Why credit cards are attractive for first-time buyers

The appeal is straightforward. Credit cards are familiar, widely available, and fast. For someone who wants to buy a small amount of Bitcoin without learning bank transfers or exchange interfaces, card payments feel like the shortest path from interest to ownership.

Because of that demand, a growing number of services offer simplified “card to crypto” flows. One example of an instant exchange-style platform in this category is https://stealthex.io/, which presents crypto conversions and purchases through a streamlined, user-facing interface.

For services, credit cards also expand the funnel. A user who might not complete a wire transfer may complete a card purchase in minutes. That is why “buy with card” options have become a common entry point across the industry.

But convenience has a price—often literally.

Fees and hidden costs: what you are really paying

Card purchases tend to be more expensive than bank transfers. There are several reasons.

Card processors charge fees, and crypto purchases are often treated as higher-risk transactions. Platforms also price in fraud risk and chargeback exposure. Finally, your bank may treat the purchase as a cash advance, adding extra charges and interest.

This is why two users can pay very different effective rates for the same amount of Bitcoin. One sees a clean checkout. The other sees a “cash advance fee” on their statement and wonders what happened.

Before you buy, check the total cost, not just the headline price. If a platform does not clearly show fees, compare the final amount of BTC you receive for a set dollar amount rather than relying on marketing claims.
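
One way to make that comparison concrete is to compute the effective USD-per-BTC rate from what you actually receive. Here is a minimal sketch in Python; the platform names and quotes are entirely made up for illustration:

# Compare the effective USD-per-BTC rate across checkouts.
# All numbers here are hypothetical, not real prices.
quotes = {
    "Platform A": {"usd_spent": 200.00, "btc_received": 0.00185},
    "Platform B": {"usd_spent": 200.00, "btc_received": 0.00179},
}

for name, q in quotes.items():
    rate = q["usd_spent"] / q["btc_received"]  # all fees baked in
    print(f"{name}: effective rate ${rate:,.0f} per BTC")

Whichever checkout yields the lower effective rate is cheaper, regardless of what the headline price claimed.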

The identity and compliance reality

Credit card purchases usually trigger stronger identity checks than other methods. That does not necessarily mean something is wrong—it reflects how payment networks handle fraud and regulatory obligations. Expect to verify identity, and expect occasional declines depending on bank policies.

If anonymity is your goal, a credit card is rarely the right path. It is one of the most traceable payment instruments you can use.

Fraud and security risks: where people get hurt

The most common losses in “buy Bitcoin with a card” scenarios are not technical hacks. They are scams and user errors.

Phishing is the biggest threat. Fake ads, lookalike domains, and impersonated support accounts push users to “buy now” pages that collect card details or redirect deposits. The safest habit is boring: type the domain yourself, or use a bookmark you trust. Avoid clicking ads for financial products.

Another risk is account takeover. If you create an account on a platform and do not secure it properly, attackers can attempt credential stuffing or SIM-swap style takeovers. Always use strong passwords and app-based two-factor authentication, not SMS where possible.

This is also where custody matters. If the Bitcoin you buy remains on a platform, you are exposed to third-party risk. For meaningful amounts, moving funds to a secure wallet is a common best practice.

When a credit card is the wrong tool

Credit cards can make sense for small, controlled purchases—especially when the buyer values speed over cost. They are less suitable for large purchases, for anyone sensitive to fees, or for buyers who may carry card balances and pay interest.

They also deserve caution if you are buying emotionally. The ability to buy quickly can tempt people to spend more than planned. A simple discipline helps: decide your budget before you start, and do not increase it mid-checkout because the market is moving.

Closing perspective

Buying Bitcoin with a credit card is best understood as a convenience product. It lowers friction for entry, but it comes with higher costs, higher scrutiny, and a stronger need for security hygiene. If you go in with realistic expectations—verify domains, understand fees, secure accounts, and treat the purchase as an investment decision rather than an impulse—it can be a practical on-ramp. If you treat it like ordinary online shopping, it is easier to overpay, overshare, or make a mistake you cannot reverse.

Photo: freepik via their website.



The post Credit Cards and Bitcoin: What Buyers Should Know Before They Click “Purchase” appeared first on DCReport.org.

SQLAlchemy 2 In Practice - Chapter 4 - Many-To-Many Relationships

This is the fourth chapter of my SQLAlchemy 2 in Practice book. If you'd like to support my work, I encourage you to buy this book, either directly from my store or on Amazon. Thank you!

Continuing with the topic of relationships, this chapter is dedicated to the many-to-many type, which, as its name implies, is used when it is not possible to identify any of the sides as a "one" side.
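
As a preview of the pattern, here is a minimal sketch in SQLAlchemy 2 style. The Student and SchoolClass models and the association table are hypothetical stand-ins, not necessarily the ones the chapter builds:

from sqlalchemy import Column, ForeignKey, Table
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship


class Model(DeclarativeBase):
    pass


# The association table pairs the two sides; neither is a "one" side.
student_class = Table(
    "students_classes",
    Model.metadata,
    Column("student_id", ForeignKey("students.id"), primary_key=True),
    Column("class_id", ForeignKey("classes.id"), primary_key=True),
)


class Student(Model):
    __tablename__ = "students"

    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]
    classes: Mapped[list["SchoolClass"]] = relationship(
        secondary=student_class, back_populates="students"
    )


class SchoolClass(Model):
    __tablename__ = "classes"

    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str]
    students: Mapped[list[Student]] = relationship(
        secondary=student_class, back_populates="classes"
    )

Appending to either side's list keeps the other side in sync, thanks to back_populates.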

Thursday 9 April 1663

Up betimes and to my office, and anon we met upon finishing the Treasurer’s accounts. At noon dined at home and am vexed to hear my wife tell me how our maid Mary do endeavour to corrupt our cook maid, which did please me very well, but I am resolved to rid the house of her as soon as I can.

To the office and sat all the afternoon till 9 at night, and an hour after home to supper and bed. My father lying at Tom’s to-night, he dining with my uncle Fenner and his sons and a great many more of the gang at his own cost to-day.

To bed vexed also to think of Sir J. Minnes finding fault with Mr. Hater for what he had done the other day, though there be no hurt in the thing at all but only the old fool’s jealousy, made worse by Sir W. Batten.

Read the annotations

Meta's new model is Muse Spark, and meta.ai chat has some interesting tools

Meta announced Muse Spark today, their first model release since Llama 4 almost exactly a year ago. It's hosted, not open weights, and the API is currently "a private API preview to select users", but you can try it out today on meta.ai (Facebook or Instagram login required).

Meta's self-reported benchmarks show it competitive with Opus 4.6, Gemini 3.1 Pro, and GPT 5.4 on selected benchmarks, though notably behind on Terminal-Bench 2.0. Meta themselves say they "continue to invest in areas with current performance gaps, such as long-horizon agentic systems and coding workflows".

The model is exposed as two different modes on meta.ai - "Instant" and "Thinking". Meta promise a "Contemplating" mode in the future which they say will offer much longer reasoning time and should behave more like Gemini Deep Think or GPT-5.4 Pro.

A couple of pelicans

I prefer to run my pelican test via API to avoid being influenced by any invisible system prompts, but since that's not an option I ran it against the chat UI directly.

Here's the pelican I got for "Instant":

This is a pretty basic pelican. The bicycle is mangled, the pelican itself has a rectangular beak albeit with a hint of pouch curve below it. Not a very good one.

And this one for "Thinking":

Much better. Clearly a pelican. Bicycle is the correct shape. Pelican is wearing a blue cycling helmet (albeit badly rendered). Not a bad job at all.

Both SVGs were rendered inline by the Meta AI interface. Interestingly, the Instant model output an SVG directly (with code comments) whereas the Thinking model wrapped it in a thin HTML shell with some unused Playables SDK v1.0.0 JavaScript libraries.

Which got me curious...

Poking around with tools

Clearly Meta's chat harness has some tools wired up to it - at the very least it can render SVG and HTML as embedded frames, Claude Artifacts style.

But what else can it do?

I asked it:

what tools do you have access to?

And then:

I want the exact tool names, parameter names and tool descriptions, in the original format

It spat out detailed descriptions of 16 different tools. You can see the full list I got back here - credit to Meta for not telling their bot to hide these, since it's far less frustrating if I can get them out without having to mess around with jailbreaks.

Here are highlights derived from that response:

  • Browse and search. browser.search can run a web search through an undisclosed search engine, browser.open can load the full page from one of those search results and browser.find can run pattern matches against the returned page content.

  • Meta content search. meta_1p.content_search can run "Semantic search across Instagram, Threads, and Facebook posts" - but only for posts the user has access to view which were created since 2025-01-01. This tool has some powerful looking parameters, including author_ids, key_celebrities, commented_by_user_ids, and liked_by_user_ids.

  • "Catalog search" - meta_1p.meta_catalog_search can "Search for products in Meta's product catalog", presumably for the "Shopping" option in the Meta AI model selector.

  • Image generation. media.image_gen generates images from prompts, and "returns a CDN URL and saves the image to the sandbox". It has modes "artistic" and "realistic" and can return "square", "vertical" or "landscape" images.

  • container.python_execution - yes! It's Code Interpreter, my favourite feature of both ChatGPT and Claude.

    Execute Python code in a remote sandbox environment. Python 3.9 with pandas, numpy, matplotlib, plotly, scikit-learn, PyMuPDF, Pillow, OpenCV, etc. Files persist at /mnt/data/.

    Python 3.9 is EOL these days but the library collection looks useful.

    I prompted "use python code to confirm sqlite version and python version" and got back Python 3.9.25 and SQLite 3.34.1 (from January 2021). A sketch of that check appears just after this list.

  • container.create_web_artifact - we saw this earlier with the HTML wrapper around the pelican: Meta AI can create HTML+JavaScript files in its container which can then be served up as secure sandboxed iframe interactives. "Set kind to html for websites/apps or svg for vector graphics."

  • container.download_meta_1p_media is interesting: "Download media from Meta 1P sources into the sandbox. Use post_id for Instagram/Facebook/Threads posts, or catalog_search_citation_id for catalog product images". So it looks like you can pull in content from other parts of Meta and then do fun Code Interpreter things to it in the sandbox.

  • container.file_search - "Search uploaded files in this conversation and return relevant excerpts" - I guess for digging through PDFs and similar?

  • Tools for editing files in the container - container.view, container.insert (with new_str and insert_line), container.str_replace. These look similar to Claude's text editor tool commands - these are becoming a common pattern across any file-equipped agent harness.

  • container.visual_grounding - see below, this one is fun.

  • subagents.spawn_agent - the sub-agent as a tool pattern. "Spawn an independent sub-agent for research, analysis, or delegation. It returns its final text response."

  • third_party.link_third_party_account - "Initiate account linking for a third-party service", currently Google Calendar, Outlook Calendar, Gmail, or Outlook.
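
As promised above, here's roughly the snippet behind that version check. A minimal sketch; nothing Meta-specific, it runs in any Python environment:

import sqlite3
import sys

# Report the interpreter version and the bundled SQLite library version.
print("Python:", sys.version.split()[0])
print("SQLite:", sqlite3.sqlite_version)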

Image analysis in the container

Let's talk about that container.visual_grounding one. Here's the description in full:

Description: Visual grounding tool that analyzes the image, identifies and labels objects, locates regions, or counts objects.

Parameters:

  • object_names (array, required)
  • image_path (string, default "")
  • format_type ("bbox"|"point"|"count"|null, default "bbox")
  • title (string|null)

So it takes a required list of object names plus an optional image path, and returns results in bbox, point, or count format.

This sounded to me like Meta's Segment Anything (I later heard that it's not; it's a native feature of the model itself, exposed via a tool call with a custom system prompt). I decided to take it for a spin.

Prompt:

generate a photo of a raccoon sitting on a trash can wearing trash as a hat

Response:

Here you go — king of the alley, in full regalia:

It's a photo of a raccoon wearing a disposable coffee cup and some newspaper and a banana peel as a hat. It's pretty good.

Coffee cup crown, banana peel brim, newspaper feather. Peak raccoon fashion. Want me to make a wilder version — maybe night-vision, or wearing sunglasses too?

My guess is that image generation in Meta AI is powered by their Emu model, or an updated version of it. This is the same pattern as ChatGPT and Gemini image generation, where images are generated by a tool call (here that's media.image_gen).

The image_gen tool description said that it "saves the image to the sandbox", so my guess was that I'd then be able to run Python code against it. I tried:

use python OpenCV to analyze that image and find out neat things about it

Annoyingly it did not show me the code it ran, but it did create this image:

OpenCV image analysis dashboard titled "OpenCV Analysis: Raccoon Trash Hat" with four panels: top-left shows the original 1280x1280 photo of a raccoon wearing a trash-lid hat with a coffee cup and newspaper on top in an alley; top-right shows a Canny edge detection version with white edges on black background labeled "Canny Edges - 4.6% density"; bottom-left shows "Top 5 Dominant Colors" as swatches with values RGB(20, 19, 24): 35.3%, RGB(43, 43, 47): 34.2%, RGB(84, 80, 79): 18.3%, RGB(139, 135, 131): 6.3%, RGB(210, 213, 208): 5.8%; bottom-right shows "Color Histograms" line chart with Red, Green, and Blue channels plotted from 0-250 on x-axis and 0-35000 on y-axis, peaking around value 25-50.

Pretty neat! We can both generate images and then use them with Python tools.

Next, the visual_grounding tool. Knowing the names of the tools makes it easy to refer to them in prompts:

Now try the container.visual_grounding tool

The response (full response here):

Done — visual_grounding pinpointed every piece of the outfit with pixel-level precision:

Visual grounding analysis of a raccoon photo titled "Visual Grounding: Found 8 objects" with colored dot annotations on the image marking each detected object: coffee cup (515,205) in yellow, banana peel (569,261) in light yellow, newspaper (399,279) in blue, right eye (461,396) in magenta, left eye (582,397) in pink, nose (514,472) in cyan, raccoon (553,548) in green, and trash can lid (550,892) in red. A legend at the bottom lists all objects with their normalized 0-1000 coordinates. Footer text reads "Coordinates are 0-1000 normalized. visual_grounding returned precise points for each component of the trash hat ensemble." Watermark reads "Meta AI".

[...]

Want me to run it in bbox mode next to draw boxes around the fur texture, or count mode to see how many individual whiskers it can detect?

Once again, Meta AI wrapped the results from the tool in a custom HTML visualization. That's clearly point mode, but how about the others?

I prompted:

yes draw bounding boxes, can it do masks too?

Response:

Yes — bounding boxes work great. Masks? Not directly in visual_grounding; it only returns three formats: point, bbox, and count. No pixel-level segmentation masks.

Here's the bbox output:

Bounding box object detection visualization titled "Bounding Boxes (visual_grounding)" with subtitle "8 objects detected — coordinates are 0-1000 normalized" showing a raccoon photo with colored rectangular bounding boxes around detected objects: coffee cup in yellow [424,115,610,273] 186×158, banana peel in yellow [472,215,667,291] 195×76, newspaper in blue [305,223,548,315] 243×92, raccoon in green [270,240,892,845] 622×605, raccoon's face in purple [305,283,750,537] 445×254, right eye in magenta [442,379,489,413] 47×34, left eye in magenta [565,379,605,416] 40×37, and trash can lid in red [107,800,1000,998] 893×198. A legend at the bottom shows each object's name, coordinates, and pixel dimensions in colored cards. Watermark reads "Meta AI".

What you’re seeing:

  • raccoon: big box from (270,240) to (892,845) — covers 62% of the width, basically the whole animal
  • coffee cup: (424,115)-(610,273) — sits highest, 158 units tall in the 0-1000 space
  • banana peel: (472,215)-(667,291) — overlaps the cup's base, confirming it's draped over the side
  • newspaper: (305,223)-(548,315) — left ear coverage
  • trash can lid: (107,800)-(1000,998) — spans almost the full image width at the bottom
  • face: (305,283)-(750,537) — tight crop around the mask
  • eyes: both ~40×37 pixel boxes, perfectly level at y=379

Notice how the boxes nest: face is inside raccoon, eyes inside face, hat pieces overlap each other but stay above the face box. The tool is doing real object localization, not just guessing centers.


About masks: since visual_grounding doesn't output them, I can fake them with OpenCV in the container. Two quick options:

  1. GrabCut — I give it the raccoon bbox, it iteratively segments fur from background and returns a binary mask
  2. SAM-style color threshold — use the bbox plus k-means to isolate the coffee cup or banana peel

Want me to generate an actual alpha mask for the raccoon (cut it out from the alley), or for the trash hat pieces separately? Takes about 10 seconds and I can give you a transparent PNG.

Here's the HTML for that bbox visualization.
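
GrabCut, incidentally, is a real OpenCV API, and the approach the model described is easy to sketch. Here's roughly what it would look like, assuming the generated image was saved locally as raccoon.png (a path I made up), using the raccoon bbox from above:

import cv2
import numpy as np

# Load the image and convert the 0-1000 normalized bbox to pixels.
img = cv2.imread("raccoon.png")
h, w = img.shape[:2]
x0, y0 = int(270 / 1000 * w), int(240 / 1000 * h)
x1, y1 = int(892 / 1000 * w), int(845 / 1000 * h)

# GrabCut takes its initial rectangle as (x, y, width, height).
rect = (x0, y0, x1 - x0, y1 - y0)
mask = np.zeros((h, w), np.uint8)
bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

# Definite and probable foreground pixels become the binary mask.
binary = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
cv2.imwrite("raccoon_mask.png", binary.astype("uint8"))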

Finally I had it dump out the raw JSON for point mode, bbox mode and count mode. You can see that all here. The count mode is particularly interesting - here's that in full:

[
  {
    "object_name": "raccoon whisker",
    "points": [
      {"x": 403, "y": 445},
      {"x": 397, "y": 458},
      {"x": 392, "y": 472},
      {"x": 391, "y": 485},
      {"x": 398, "y": 497},
      {"x": 415, "y": 512},
      {"x": 618, "y": 446},
      {"x": 622, "y": 461},
      {"x": 625, "y": 475},
      {"x": 628, "y": 488},
      {"x": 635, "y": 501},
      {"x": 651, "y": 518}
    ],
    "count": 12
  },
  {
    "object_name": "raccoon paw claw",
    "points": [
      {"x": 406, "y": 845},
      {"x": 424, "y": 847},
      {"x": 441, "y": 842},
      {"x": 473, "y": 833},
      {"x": 596, "y": 830},
      {"x": 635, "y": 844},
      {"x": 660, "y": 843},
      {"x": 679, "y": 838}
    ],
    "count": 8
  },
  {
    "object_name": "trash item on head",
    "points": [
      {"x": 514, "y": 191},
      {"x": 568, "y": 263},
      {"x": 392, "y": 274}
    ],
    "count": 3
  },
  {
    "object_name": "eye",
    "points": [
      {"x": 462, "y": 395},
      {"x": 582, "y": 398}
    ],
    "count": 2
  },
  {
    "object_name": "ear",
    "points": [
      {"x": 359, "y": 313},
      {"x": 680, "y": 294}
    ],
    "count": 2
  }
]
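
If you want to sanity-check output like that, the structure is trivial to validate. A tiny sketch, assuming the JSON above is saved as counts.json:

import json

with open("counts.json") as f:
    objects = json.load(f)

for obj in objects:
    # The reported count should match the number of points returned.
    assert obj["count"] == len(obj["points"])
    print(obj["object_name"], obj["count"])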

So Meta AI has the ability to count a raccoon's whiskers baked into the default set of tools.

Which means... it can count pelicans too!

Screenshot of a Meta AI conversation where the user uploaded a photo of pelicans on rocks and asked "Count the pelicans". response reads: "That rocky haul-out is packed — I count 25 brown pelicans in your photo. I went bird by bird, including the ones tucked behind rocks, heads turned away, or just peeking out at the edges. They're all California brown pelicans, with those pale yellow heads and long bills, clustered on the riprap right at the waterline. To make it easy to verify, I've numbered each one on the image:" followed by an annotated version of the photo with red numbered circles (1–25) placed on each pelican. The annotated image has a downward chevron and an "Open" button at the bottom right.

Here's that overlay exported as HTML.

Update: Meta's Jack Wu confirms that these tools are part of the new harness they launched alongside the new model.

Maybe open weights in the future?

On Twitter Alexandr Wang said:

this is step one. bigger models are already in development with infrastructure scaling to match. private api preview open to select partners today, with plans to open-source future versions.

I really hope they do go back to open-sourcing their models. Llama 3.1/3.2/3.3 were excellent laptop-scale model families, and the introductory blog post for Muse Spark had this to say about efficiency:

[...] we can reach the same capabilities with over an order of magnitude less compute than our previous model, Llama 4 Maverick. This improvement also makes Muse Spark significantly more efficient than the leading base models available for comparison.

So are Meta back in the frontier model game? Artificial Analysis think so - they scored Muse Spark at 52, "behind only Gemini 3.1 Pro, GPT-5.4, and Claude Opus 4.6". Last year's Llama 4 Maverick and Scout scored 18 and 13 respectively.

I'm waiting for API access - while the tool collection on meta.ai is quite strong, the real test of a model like this is still what we can build on top of it.

Tags: facebook, ai, generative-ai, llms, code-interpreter, llm-tool-use, meta, pelican-riding-a-bicycle, llm-reasoning, llm-release

The odd disappearance of the business cycle

I’m afraid that we’ve been sold a bill of goods. Pundits predicted that the 2022 Ukraine war supply shock would lead to a recession. Then we were told that the Fed’s policy of raising interest rates in 2023 would lead to recession. Then we were told that Trump’s April 2025 Liberation Day tariffs would lead to recession. The truth is that pundits have never, ever, ever been able to reliably predict recessions.

Please, just stop trying.

In the late 1960s, I began following the news on the economy. The first recession that I can recall occurred in 1970, when I was 14 years old. When I was writing my dissertation back in 1983, I could recall four different recessions over the previous 13 years. And that was not particularly unusual, as there were also four recessions during 1948-1960 and five recessions during 1918-30. Indeed during the first 83 years of the 20th century there were 19 recessions, roughly one every 4 1/2 years.

Since 1983, there have been just four recessions, or roughly one per decade. The term “business cycle” was always a bit misleading, as downturns did not follow a regular pattern. But at least one could view recessions as something that occurred fairly frequently. The economy seemed sort of cyclical.

Today, the term business cycle makes about as much sense as saying, “war cycle” or “pandemic cycle”. Recessions are viewed as an anomaly, not a regular feature of the economy. In 2026, an economics grad student might have a clear memory of only one recession, as the economy has been officially in the “contraction” phase of the business cycle for only 2 out of the previous 200 months—February to April 2020.

Of course that’s slightly misleading, as the economy was quite depressed in the early years of the recovery from the Great Recession, even as it was technically expanding. But there’s more. The unemployment rate has been above 4.5% for only 18 months out of the past 9 years.

Younger readers might wonder why I view that as unusual, but in the 47 years before 2017, the unemployment rate was above 4.5% well over 90% of the time.

We recently experienced our first-ever soft landing, with gradually moderating (but still excessive) inflation, and no one seemed to notice. If you told economists back in the 20th century that we’d have a nine-year period with unemployment at or below 4.5% apart from a brief pandemic period, and that inflation would end that long period at just 2.8%, they’d ask: How does it feel to be living in nirvana? If you then told them that public sentiment on “the economy” was at near record lows (even before Iran), they’d ask if the entire country had become insane:

The University of Michigan’s long-running index of consumer sentiment registered a reading of “54” in the first snapshot of the year. Preliminary January results were released Friday, with the month’s final results coming in a couple of weeks.

To put the current 54 reading in perspective, that’s about 30 points below the survey’s more than 70-year average.

It’s also near the all-time low of 50-flat, which was hit in June 2022 at the peak of pandemic-era inflation and nearly hit again this last November.

And you cannot explain the paradox by pointing to inequality, as in recent years the fastest real wage growth has occurred among the lowest paid. This figure is from the (progressive) Economic Policy Institute.

With apologies to Charles Dickens, it was the best of times, yet it was perceived as the worst of times.

To be clear, I am not suggesting that consumer sentiment is “wrong”. It is what it is. If people feel bad about the economy, then they feel bad about the economy. I’d rather try to understand their perspective than blame the public for being ignorant.

I’m more upset with the economics profession, which in my view has a flawed view of macroeconomics. There is way too much focus on business cycle forecasting, which has never been reliable. Way too much focus on real shocks and fiscal shocks, both of which are wildly overrated in importance. Way too little focus on America’s strange lack of mini-recessions and soft landings. We need to figure out what’s going on.

Longtime readers know my views on these issues, but given the repeated failure of recent recession predictions from mainstream pundits, perhaps it’s time for another look at the problem—from a market monetarist perspective.

Read more

Job Growth on ICE

[Image: Appomattox Court House, via American Battlefield Trust]

Happy Union Victory at Appomattox Day

Armageddon may (or may not) be on hold for a little while, so let’s talk about other issues, like the state of the economy — in particular, what’s going on with job growth, where important things are happening.

Are they good things or bad? Well, it’s a bad news/good news/but the good news is really bad news situation.

I’m not talking about the month-to-month numbers. You may have heard that the Bureau of Labor Statistics estimates that the U.S. economy added 178,000 jobs in March, after losing 133,000 jobs in February. Neither number tells you much except that monthly job numbers are noisy. Better to smooth them out, say by looking at the average change over the past 6 months. Here’s what that number, in thousands, looks like since the beginning of 2024:

Clearly, we’ve seen a huge employment slowdown, indeed an employment stall, with recent rates of job growth close to zero. Trump administration officials keep claiming that the economy is booming, but in terms of job creation it’s anything but.

That’s the bad news. The good news is that the employment stall has not led to a major worsening of the unemployment rate or other standard measures of the health of the labor market. Here, for example, is the percentage of prime-working-age adults with jobs, which remains high by historical standards:

[Chart: prime-age employment-population ratio]

But how can we have stalling job growth without rising unemployment? Clearly, the “breakeven” rate of job creation — the number of new jobs that must be added each month to keep up with growth in the labor force, and hence to avoid rising unemployment — has suddenly dropped. Multiple recent estimates suggest that breakeven employment growth is now close to zero.

There’s no mystery about why this has happened. It’s all about the crackdown on immigrants. The number of working-age native-born Americans has been falling for years, basically because we’re getting older, with most baby boomers already over 65. Labor force growth has therefore depended almost entirely on immigration — but in this ICE age, it’s dangerous to be an undocumented immigrant and, all too often, even to be a legal immigrant if your skin is the wrong color. So immigration, both legal and illegal, has dried up, and net immigration — the difference between the number of people coming and the number going — has plunged and may well have turned negative.

One way to think about what is happening is that we’ve stopped gaining jobs, but we’ve also stopped adding workers, so unemployment hasn’t risen a lot. But one can also turn this around and say that as a result of anti-immigrant policies we’ve stopped adding workers, but we have also stopped gaining jobs.

This is bad news for anyone who believed the predictions of immigration opponents. They claimed that cracking down on immigration would open up more jobs for native-born Americans, but this hasn’t happened. In fact, the unemployment rate for native-born workers has gone up under Trump, although not drastically:

[Chart: unemployment rate for native-born workers]

Source

There’s also another way in which the plunge in breakeven employment growth is bad news: It makes America’s already problematic fiscal outlook considerably worse, because future tax receipts depend on future economic growth — and economic growth will be much slower with zero growth in the labor force than it would have been with growing labor supply.

Consider the budget proposal the Trump administration released last week. This proposal purports to be fiscally responsible, with debt as a percentage of GDP falling slightly over the next decade. But this relatively rosy fiscal outlook depends crucially on the assumption that the economy will grow rapidly, 3 percent a year on average. This is much higher than the 2 percent growth projected by the Federal Reserve and 1.8 percent growth projected by the Congressional Budget Office. But you can see why Trump’s officials want to believe it: Rapid growth would help pay for large increases in military spending and growing outlays on Medicare and Social Security as the population ages.

Yet with no growth in the labor force, thanks to anti-immigrant policies, economic growth will have to come entirely through rising productivity — increased output per worker-hour. And 3 percent productivity growth would be very high by historical standards — in fact, we have never achieved sustained productivity growth that high, even during the postwar boom and at the height of the internet boom:

[Chart: long-run U.S. productivity growth]

It’s possible that we’ll have a record-breaking productivity boom as a result of AI, but we shouldn’t count on it. And if AI’s economic payoff is less than miraculous, the end of net immigration will lead to a slow-growth economy.

One point in particular that’s relevant given where we are politically: Donald Trump, Pete Hegseth and others envision a world of perpetual U.S. military dominance. But how can we maintain that dominance if, as the plunge in breakeven job growth suggests, we’ve entered an era of basically zero growth in our work force?

So when it comes to jobs, the bad news is that job growth has come to a screeching halt. The good news is that this hasn’t caused surging unemployment. But the bad news within the good news is that the disconnect between job growth and unemployment reflects a collapse in the inflow of immigrants, which is really bad for economic growth and America’s position in the world.

Are we great again yet?

MUSICAL CODA

GitHub Repo Size

Tool: GitHub Repo Size

GitHub doesn't tell you the repo size in the UI, but it's available in the CORS-friendly API. Paste a repo into this tool to see the size, for example for simonw/datasette (8.1MB).
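
If you'd rather skip the tool, the underlying call is a single request: the public repos endpoint returns a size field measured in kilobytes. A quick Python sketch of the same lookup:

import json
import urllib.request

def repo_size_mb(owner: str, repo: str) -> float:
    # The GitHub repos API reports "size" in kilobytes.
    url = f"https://api.github.com/repos/{owner}/{repo}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return data["size"] / 1024

print(f"{repo_size_mb('simonw', 'datasette'):.1f}MB")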

Tags: cors, github

asgi-gzip 0.3

Release: asgi-gzip 0.3

I ran into trouble deploying a new feature using SSE to a production Datasette instance, and it turned out that instance was using datasette-gzip, which uses asgi-gzip, which was incorrectly compressing text/event-stream responses.

asgi-gzip was extracted from Starlette, and has a GitHub Actions scheduled workflow to check Starlette for updates that need to be ported to the library... but that action had stopped running and hence had missed Starlette's own fix for this issue.

I ran the workflow and integrated the new fix, and now datasette-gzip and asgi-gzip both correctly handle text/event-stream in SSE responses.
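
The principle behind the fix is worth spelling out: gzip middleware has to leave streaming content types alone, because compressing a text/event-stream buffers events that were meant to arrive incrementally. Here's a simplified sketch of the kind of check involved (illustrative only, not the actual asgi-gzip code):

# Streaming content types must bypass compression, otherwise events
# get buffered inside the gzip stream instead of flushing immediately.
STREAMING_TYPES = {b"text/event-stream"}

def should_compress(headers: list[tuple[bytes, bytes]]) -> bool:
    """Decide from an http.response.start headers list whether gzip is safe."""
    content_type = b""
    for name, value in headers:
        if name.lower() == b"content-type":
            content_type = value.split(b";")[0].strip().lower()
            break
    return content_type not in STREAMING_TYPES

Inside an ASGI middleware, a predicate like this runs when the http.response.start message comes through, routing streaming responses down the identity (uncompressed) path.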

Tags: gzip, asgi, python

Adobe Diddles With Your /etc/hosts File

“thenickdude”, on Reddit:

They’re using this to detect if you have Creative Cloud already installed when you visit their website.

When you visit https://www.adobe.com/home, they load this image using JavaScript:

https://detect-ccd.creativecloud.adobe.com/cc.png

If the DNS entry in your hosts file is present, your browser will therefore connect to their server, so they know you have Creative Cloud installed, otherwise the load fails, which they detect.

They used to just hit http://localhost:<various ports>/cc.png which connected to your Creative Cloud app directly, but then Chrome started blocking Local Network Access, so they had to do this hosts file hack instead.

(Via Thom Holwerda at OSNews.)

They didn’t have to do this, of course. In fact, quite obviously, they definitely should not be doing this. Adobe is just a third-party developer, no better, no more trusted, no more important than any other. Imagine if every piece of software on your computer added entries to your /etc/hosts file. Madness. Adobe should be ashamed of themselves. Adobe used to be a bastion of best practices for developers to follow. Now their installer/updater is indistinguishable from malware.
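
If you want to check whether your own machine has been modified, the entry is easy to look for. A small Python sketch, using the hostname from the Reddit post:

from pathlib import Path

# The Creative Cloud detection hostname described above.
HOSTNAME = "detect-ccd.creativecloud.adobe.com"

for line in Path("/etc/hosts").read_text().splitlines():
    if HOSTNAME in line and not line.lstrip().startswith("#"):
        print("Found:", line.strip())
        break
else:
    print(f"No {HOSTNAME} entry in /etc/hosts")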

See also: Marc Edwards on Mastodon, and Michael Tsai.

 ★ 

Lickspittle of the Week: Todd Blanche

Acting Attorney General Todd Blanche, speaking of the president of the United States in a totally normal way:

I love working for President Trump. It’s the greatest honor of a lifetime. And if President Trump chooses to nominate somebody else and asks me to go do something else, I’ll say, “Thank you very much, I love you, sir.”

The phrase Blanche was looking for is “Thank you sir, may I have another.”

 ★ 

Hawaii and Guam Concerns; Record Warmth Spreads Eastward; Heavy Snow for Sierra Nevada; Severe Weather for the Plains

Links 4/9/26

Links for you. Science:

NIH Fellowship awards for FY2025 by ethnicity
The System That Decides What Science Gets Published Is Breaking Down (paper here)
Prominent Climate Scientist Resigns From NASA, Citing Trump’s Attack on Science
Can you clone a clone forever? Landmark 20-year study reveals horrifying verdict
New Tongue-Swab TB Test Could Help Eradicate The Disease, WHO Says
CDC vaccine adviser Malone steps down to avoid ‘drama’: Physician and biochemist has questioned the severity of the COVID-19 pandemic

Other:

How the spreadsheet reshaped America
We Live In A Society (Really)
DOGE Damage Drags on in DC, Where Inequality is Widening as a Result
Mayor Muriel Bowser Signs On For Half Transparency Into Actions of Trump’s Cops in D.C.
Former NYPD Chief Admits Giving ‘Free Pass’ to City Workers, Right Wing Allies
Canadian woman held with daughter by ICE warns all immigrants to ‘lie low’
What If Iran Doesn’t Want the War to End Yet?
ICE Lied About Its Authority to Make Courthouse Arrests
Candidate Mamdani Backed Expanding Housing Vouchers. As Mayor, He’s Appealing a Court Order To Do So.
Dupont Circle’s Long-Vacant Pakistani Embassy Is Going Residential (finally…)
Musk Rips Twitter Verdict, Claims Jury’s $4.20 ‘Joke’ Mocked Him
Your data is everywhere. The government is buying it without a warrant
Mike Johnson Announces, ‘We Have Created a New Award’ To Give Trump: ‘This Beautiful Golden Statue’
DHS ‘100 Million Deportations’ Claim Was No Joke. Bovino Had ‘Master Plan’ to Purge Nearly a Third of the Country
Choctaw Nation Buys Former Big Lots Warehouse, Closing Off Oklahoma ICE Detention Site
Fakery Is the Key to the Right’s Cultural and Political Dominance
She Made $1 Million in 3 Hours on OnlyFans. Then, the Headaches.
Cringe Is Good, Earnestness Is Strength
The IOC’s New Policy Isn’t Really a Trans Story
The In Living Color Effect: How a radical sketch-comedy show helped usher in the cultural boom of Black comedy in the 1990s
How Epstein Helped Solve a Billionaire’s Problems With Women
Wikipedia Bans AI-Generated Content
How Trump’s Plot to Grab Iran’s Nuclear Fuel Would Actually Work
‘I will not comply’: Tennessee librarian refuses to move LGBTQ+ books
Trump is the biggest threat to D.C.’s architectural splendor since War of 1812. The president’s garish and piecemeal changes to the capital city’s urban design are poised to destroy symbolism upheld for centuries.
Meet the man pledging to donate $16 million to help Minneapolis residents pay rent post-ICE surge
Alex Jones’ Infowars is shutting down, but his disinformation legacy lives on
ICE officers are taking DNA samples from protesters they’ve arrested
Maine Could Determine Abortion Rights for the Nation. Why Aren’t Reproductive Rights Groups Acting Accordingly?
MAGA Faithful Furious at Plans for Mega-Jail in Red State

Trump’s Massive Strategic Failure in Iran: Will It Be Seen as One?


Trump’s negotiating strategy

While it’s hard to say what is definitively happening regarding Iran, as much of it relies on the social media feed of the narcissistic liar and adjudicated rapist known as Donald Trump, it really does appear that Trump lost here–and hugely. At best, the Iranian regime will still be in place, it will be collecting tolls for shipping traffic (and Trump has claimed there might be some kind of joint tolling with the U.S., but who the hell knows with that pathological liar), and we might have some kind of nuclear agreement, which might not be as favorable as previously negotiated (never mind the deal Obama made and Trump tore up in his first term).

Importantly, our Gulf allies, such as they are, now realize the U.S. cannot effectively deter the Iranian regime.

This must be construed as a massive strategic failure, one that was entirely of Trump’s doing. Yet my admittedly myopic (as we all are) perusal of The Mainstream Discourse suggests this is not being cast as such, especially by Democrats. A competent Democratic Party that is not Shor-pilled (but I repeat myself) would be going on the offensive about Trump’s failure. Hopefully, they’ll start to do that.

Spec Driven Development isn't Waterfall

Spec Driven Development isn’t Waterfall

Write down what you mean.

After spending a few months writing (e.g. on the Kiro Blog), and speaking (e.g. Real Python Podcast, SE Radio) about spec-driven development, I’ve noticed a common misconception: spec-driven development is a return to a waterfall style of software development.

Specification-driven development (in Kiro, for example) isn’t about pulling designs up-front, it’s about pulling designs up. It makes specifications explicit, versioned, living artifacts that the implementation of the software flows from, rather than static documents.

This distinction is important, because software development (like all complex product development and engineering tasks) is a fundamentally iterative process. It is extremely rare for a software project to know all of the requirements up-front. It’s much more common that one of the goals of the development process is to discover requirements, most frequently through engaging users in the cycle of feedback. This is a point that’s missed in strict waterfall software development processes, and missed in critiques (like Dijkstra’s) of natural language specification (as I have written about before). The Agile movement is often presented as a high-minded set of ideas, but I think it’s more accurate to see it as a reflection of a simple fact: as software became more complex, and filled more roles in society, top-down approaches to design simply stopped working.

From the Agile Manifesto:

Customer collaboration over contract negotiation
Responding to change over following a plan

These are simple reflections of reality. Software specifications are complex, dynamically changing, internally conflicting, and invariably incomplete. In specification driven development, the specification is the thing being iterated on, rather than the implementation. The iteration cycle is the same as before, but potentially much quicker because of the accelerating effect of AI.

So if specifications aren’t up-front designs, what are they?

Specifications are an explicit statement of requirements and key design choices, separated from the low-level implementation. They are a raising of the level of abstraction from code to words, and increasingly to a mix of words, pictures, snippets, and even mathematics. The words can be free-form or structured (e.g. RFC 2119 or EARS), and a spec can (rarely) pull in exact formal statements (e.g. in Lean or TLA+) when needed. The implementation is then derived from this specification, reflecting iterative changes in the specification, by AI alone or by human developers working with AI. Increasingly, these tasks are done autonomously end-to-end by AI agents. Specs stay in sync with implementation by being upstream of implementation for most changes.

This approach has several advantages which I’ve written about in the past: keeping context on the bigger picture (a map, versus the turn-by-turn directions of vibe coding prompts), the ability to mix levels of formality and detail to meet the needs of a particular piece of software, serving as always-in-sync documentation, allowing implementation of the same code in multiple languages or with multiple frameworks, and the ability to lift what matters out of the muck of the implementation. One advantage, though, looks set to outweigh all of these: we’re seeing the largest improvements in velocity and delivery in teams and processes that can allow agents to run autonomously for long periods of time. Specifications do exactly that. By providing the agent with a clear map, we can set an agent off building without a human inside the tight loop of development and testing. The agent can also write higher quality, better designed, and better tested code by seeing the big picture. It knows what to test, and what good looks like.

Specifications aren’t up-front designs because you don’t need to, and probably shouldn’t, develop the entire specification upfront. Instead, specifications should be at the core of an iterative software development practice. Humans are still critical to this outer loop of software development, driven by refining and extending the specification. Perhaps most crucially, they own the internally conflicting nature of software requirements. Where conflicts and trade-offs exist, either technical or in product requirements, expertise and experience come into play.

We are still very early in this revolution in software development. I believe that the future is specification-driven: software development at a level above what we currently think of as implementation, just as it’s already been raised from switches, to gates, to instructions, to lines of code. It’s going to take a while to find exactly the right recipe, and I suspect we’re not going to converge on a single answer across all domains. But the big picture is clear.

In a future post I'll tackle the determinism question, and how the non-deterministic nature of LLM-powered spec-to-code translation isn't actually a big deal.

Interstellar Probes: Moving Beyond Bracewell

Lately we’ve been discussing interstellar probes, the kind that an extraterrestrial civilization might use to explore the galaxy. Ronald Bracewell’s analysis of such probes dates back to 1960 and was all but coterminous with the emergence of SETI. The problem with Bracewell probes is that we would expect to have one in our Solar System if they exist. Rather than using that notion to add stress to the Fermi question, I’m going to point out that there is a lot of real estate waiting to be searched.

Case in point: What might our ongoing study of the lunar surface through images from the Lunar Reconnaissance Orbiter pick up as we use AI models that have already identified human-made space debris from various missions? A closer look at this project reminds us that while the Moon is an obvious place to look for a ‘lurker’ probe, we can’t discount other locations even though earlier work on the various Lagrange points, a good place for long-term observation of our planet, came up empty (see below). Our capabilities, not only in instrumentation but also in analytical tools, are now so much more advanced that a continued hunt for artifacts is reasonable.

I’m getting picky here given the wide variety of possible probes, tapping the definition that Bracewell used in his original article. That’s a probe we probably would have noticed by now if it were active. In 1960, Bracewell was offering an alternative to the SETI goal of detecting an interstellar radio signal aimed at Earth. His physical probe would arrive in a planetary system to look for signs of life and technology, duplicating any radio signals it heard so as to re-transmit them to the originators, thus establishing contact. Sagan uses the notion in his novel Contact (1985), where Adolf Hitler’s opening speech from the 1936 Berlin Olympics is found embedded within the message, along with much else.

How would we respond to hearing a signal sent back to us from space? Bracewell thinks we would experiment with it to see what would happen next:

To notify the probe that we had heard it, we would repeat back to it once again. It would then know that it was in touch with us. After some routine tests to guard against accident, and to test our sensitivity and band-width, it would begin its message, with further occasional interrogation to ensure that it had not set below our horizon. Should we be surprised if the beginning of its message were a television image of a constellation?

Bracewell’s notions of dispatching a physical object as opposed to sending a radio signal take advantage of the ‘information density’ available to a physical probe. This is the familiar notion that a box of DVDs in a truck moves information at a far higher rate than fiber-optic cable. But of course you have to get the truck to its destination, and in the case of interstellar flight the latency is huge – perhaps thousands of years or more. A long-lived civilization, thought Bracewell, may nonetheless see purpose in seeding nearby stars if the travel time is a small fraction of its likely civilizational life.

Swarming and Reproducing

Bracewell’s ideas jibe nicely with the Breakthrough Starshot concept of swarms of sails investigating nearby stars. We might imagine the descendants of such tiny flyby probes scattered to all interesting stellar systems within, say, 100 light years. With concepts like Bracewell’s entering the literature, it was left to Robert Freitas to run the first scientific search I am aware of for such probes (citation below). Freitas made a series of visual observations of the various Lagrange points in the early 1980s. But in the early days of SETI (and Bracewell was writing even before the Green Bank meeting in 1961 that produced the Drake Equation), other ideas about how interstellar probes might operate had begun to surface. Ancient probes sent by civilizations far more advanced than ours might still be active, waiting and reporting on our activities (Clarke’s sentinel ‘slabs’ from 2001: A Space Odyssey come to mind). Or they might be long-dead relics.


When Michael Hart went to work on this in 1975, he amplified the probe concept and changed the game. He produced, in fact, what Jason Wright (Pennsylvania State) has dubbed “The most influential formulation of the Fermi Paradox…,” one that compresses the conundrum by homing in on the fact that we observe no extraterrestrial intelligent beings on our planet, something Hart called Fact A. The fact that they are not observed tells us that despite the amount of time available for long-lived cultures to have colonized the galaxy, none evidently have. This is no small problem, for as Wright calculates in his new textbook on SETI, even a ‘wavefront’ of probes moving outwards from star to star at Voyager-like speeds would have been able to reach every star within 2 billion years.

Move the dial up in terms of speed to, say, 0.5 c and the numbers shorten considerably. Imagine relativistic ships that close on lightspeed and we find exponential growth saturating the galaxy in 150,000 years, all contrasting with an Earth that is 4.5 billion years old. Hart saw nothing in the laws of physics that prohibited starflight, and he found the idea that ETI was uninterested in Earth to be unconvincing. The ‘Principle of Non-Exclusiveness,’ a term David Brin coined, boils down to the idea that alien species will not all behave the same way. All that is needed is for one civilization to decide to send out probes, and by now such probes should have reached every star.

Image: How quickly would a single civilization using self-replicating probes spread through a galaxy like this one (M 74)? Moreover, what sort of factors might govern this ‘percolation’ of intelligence through the spiral? The answers affect our view of the Fermi question, and thus our own place in the cosmos. Image credit: NASA, ESA, and the Hubble Heritage (STScI/AURA)-ESA/Hubble Collaboration.

Advances in computing led Frank Tipler to push Hart’s views even more strenuously, bringing John von Neumann’s work on self-replicating machines to bear. His insight was to ask what would happen if an extraterrestrial culture began seeding stars with self-reproducing probes, each capable of not only studying a new world but building another probe that could reach yet another star, and so on. Here the numbers become even more telling. Such probes could use local resources in each system to build their next generation, thus nullifying the resource problem. Here’s Tipler on the matter:

…if the motivation for communication is to exchange information with another intelligent species, then as Bracewell has pointed out, contact via space probe has several advantages over radio waves. One does not have to guess the frequency used by the other species, for instance. In fact, if the probe has a von Neumann machine payload, then the machine could construct an artifact in the solar system of the species to be contacted, an artifact so noticeable that it could not possibly be overlooked. If nothing else, the machine could construct a “Drink Coca-Cola” sign a thousand miles across and put it in orbit around the planet of the other species. Once the existence of the probe has been noted by the species to be contacted, information exchange can begin in a variety of ways.

As to the cost of such a vast exploration program, Tipler has this to say:

Using a von Neumann machine as a payload obviates the main objection to interstellar probes as a method of contact, namely the expense of putting a probe around each of an enormous number of stars. One need only construct a few probes, enough to make sure that at least one will succeed in making copies of itself in another solar system. Probes will then be sent to the other stars of the galaxy automatically, with no further expense to the original species.

A ‘Catastrophic’ Answer to Fermi?

Tipler suggested a timeframe of 300 million years to fill the galaxy with these devices, in an argument that drew fire from Carl Sagan and William Newman, who argued in 1983 that his approach was ‘solipsistic’ because the idea that we were alone in producing a technological civilization was anti-Copernican. And here we need to pause on a concept that has surfaced repeatedly in SETI studies, not just in the Western nations but also in the Soviet Union. The idea of ‘mediocrity’ troubled attendees at the Soviet SETI meeting at the Byurakan Astrophysical Observatory in 1964, to be discussed again in a second meeting (with American scientists as participants) in 1971.

Do we just take the Copernican principle as a given? Sagan clearly thought so. His ‘co-author’ on Intelligent Life in the Universe, Iosif S. Shklovskii, was far less sanguine on the matter:

Since we do not adequately understand the factors leading to the evolution of intelligence and technical civilizations, we cannot reliably estimate the probability that intelligence and technical civilizations will emerge.

Here I’m drawing on Mark Sheridan in his 2023 book SETI’s Scope (How The Search For Extraterrestrial Intelligence Became Disconnected From New Ideas About Extraterrestrials). Sheridan homes in on the philosophical disagreement between emerging Soviet SETI and the ideas in the Drake Equation. At Byurakan, Soviet mathematician A. V. Gladkii challenged the idea, accepted by Sagan, that mathematics could be a recognizable common ground between all intelligences across the stars. And Sheridan quotes Theodosius Dobzhansky, a Ukrainian-born geneticist later working in the U.S., who in a 1972 paper cast doubt on Sagan’s insistence that because intelligence had arisen on our planet, it must arise everywhere life exists. In his view, the principle of mediocrity was being taken several steps too far. Quoting Dobzhansky:

“Natural scientists have been loathe, for at least a century, to assume that there is anything radically unique or special about the planet Earth or about the human species. This is an understandable reaction against the traditional view that Earth, and indeed the whole universe, was created specifically for man. The reaction may have gone too far. It is possible that there is, after all, something unique about man and the planet he inhabits.”

In a fascinating 2009 paper, Milan Ćirković examines the Fermi question in the context of our basic premises about science. As amplified in his later book The Great Silence: Science and Philosophy of Fermi’s Paradox (Oxford University Press, 2018), the Serbian astronomer points to the focus the ‘where are they’ question places upon both Copernicanism and gradualism. In the former, as clearly stated by Sagan as by many other of the early SETI practitioners, the assumption is that we occupy no privileged place in the cosmos, and thus should expect other civilizations to exist, some of which would be far more advanced than ourselves. Yet we do not observe them.

Many answers can be offered to Fermi’s question, of course, but as we continue probing the cosmos, the silence takes on escalating significance. Must we envision a future in which we abandon Copernicanism and assume that we do not, in fact, occupy a relatively common niche in the cosmos, but a rather special one?

Or should we give up on gradualism, the idea that geophysical processes proceed in the future more or less as they did in the past? The concept is foundational to 18th Century geology and remains a commonplace in current thinking. But ‘catastrophism’ is an obvious factor in the development of life, as extreme ruptures like the K–T extinction event that ended the era of the dinosaurs make clear. Are there common factors that could affect planets throughout what is thought of as the Milky Way’s habitable zone?

The question is the focus of recent work on gamma ray bursts and implies, as Ćirković notes, a ‘reset’ of the clock. That could explain our lack of detections, as it would imply that living worlds, no matter their geological age, have had only about the same amount of time we have had to develop intelligence. The Fermi question highlights both of these key assumptions, while our lack of a solution keeps the tension tight.

The Bracewell paper is “Communications from Superior Galactic Communities,” Nature, Vol. 186, Issue 4726 (1960), pp. 670-671. Abstract. On the Lagrange search, see Freitas, “A search for natural or artificial objects located at the Earth-Moon libration points,” Icarus, Vol. 42, Issue 3 (June 1980), pp. 442-447 (abstract). Michael Hart’s paper on galactic expansion is “Explanation for the Absence of Extraterrestrials on Earth,” Quarterly Journal of the Royal Astronomical Society, Vol. 16 (1975), p. 128 (full text). Frank Tipler’s paper on self-reproducing probes is “Extraterrestrial Intelligent Beings Do Not Exist,” Quarterly Journal of the Royal Astronomical Society, Vol. 21 (September 1980), pp. 267-281 (full text). Milan Ćirković’s paper on Fermi and Copernicanism is “Fermi’s Paradox – The Last Challenge for Copernicanism?” Serbian Astronomical Journal 178 (2009), pp. 1-20. Preprint.

The Deep Space Network Acquires Artemis II Signal


A graphical representation of the Deep Space Network’s radio frequency antennas indicates signal acquisition from NASA’s Artemis II mission to the Moon on April 1, 2026, inside the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory in Southern California.
PIA26746
Credits: NASA/JPL-Caltech

Description

A graphical representation of the Deep Space Network’s radio frequency antennas indicates signal acquisition from NASA’s Artemis II mission to the Moon on April 1, 2026, inside the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory in Southern California. Two antennas at the Madrid Deep Space Communications Complex, Deep Space Stations 54 and 56, can be seen communicating with Artemis II (the signals are labelled “EM2”, short for “Exploration Mission 2”; elsewhere they are labelled “ART2” for “Artemis II”).

A similar visualization can be found at DSN Now, which details all the missions that the network is communicating with 24 hours a day, seven days a week.

The Space Flight Operations Facility operates the DSN, which comprises three complexes in Goldstone, California; Madrid, Spain; and Canberra, Australia. Each complex consists of several radio frequency antennas that communicate with dozens of spacecraft exploring the solar system in addition to the Artemis II mission.

The DSN is managed by JPL for the agency’s Space Communications and Navigation program, which is located at NASA Headquarters within the Space Operations Mission Directorate. The DSN allows missions to track, send commands to, and receive scientific data from faraway spacecraft. JPL is managed by Caltech in Pasadena, California, for NASA.

For more information about Artemis II, visit: https://www.nasa.gov/mission/artemis-ii/

The post The Deep Space Network Acquires Artemis II Signal appeared first on NASA Science.

The Deep Space Network Acquires Artemis II Signal


The acquisition of the radio frequency signal from the Artemis II crewed mission to the Moon by NASA’s Deep Space Network (DSN) is indicated by the peak in the data signal shown on the top computer screen.
PIA26745
Credits: NASA/JPL-Caltech

Description

The acquisition of the radio frequency signal from the Artemis II crewed mission to the Moon by NASA’s Deep Space Network (DSN) is indicated by the peak in the data signal shown on the top computer screen. 

Soon after the mission’s launch on April 1, 2026, at 6:35 p.m. EDT, NASA’s Near Space Network led communications with the Orion capsule. Then, communications were handed off to the DSN, marking the first time in over 50 years that the network would be communicating with a crewed spacecraft traveling through deep space.

The Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory in Southern California (where this photo was taken) operates the DSN, which comprises three complexes in Goldstone, California; Madrid, Spain; and Canberra, Australia. Each complex consists of several radio frequency antennas that communicate with dozens of robotic spacecraft exploring the solar system in addition to the Artemis II mission.

The DSN is managed by JPL for the agency’s Space Communications and Navigation program, which is located at NASA Headquarters within the Space Operations Mission Directorate. The DSN allows missions to track, send commands to, and receive scientific data from faraway spacecraft. JPL is managed by Caltech in Pasadena, California, for NASA.

For more information about Artemis II, visit: https://www.nasa.gov/mission/artemis-ii/

The post The Deep Space Network Acquires Artemis II Signal appeared first on NASA Science.

Watching Over the Deep Space Network Before Artemis II Signal Acquisition


Blanca Renteria, Artemis Deep Space Network (DSN) operations chief, monitors data at the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory in Southern California shortly after Artemis II launched from the agency’s Kennedy Space Center in Florida on April 1, 2026.
PIA26744
Credits: NASA/JPL-Caltech

Description

Blanca Renteria, Artemis Deep Space Network (DSN) operations chief, monitors data at the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory in Southern California shortly after Artemis II launched from the agency’s Kennedy Space Center in Florida on April 1, 2026, at 6:35 p.m. EDT.

The Space Flight Operations Facility operates the DSN, which comprises three complexes in Goldstone, California; Madrid, Spain; and Canberra, Australia. Each complex consists of several radio frequency antennas that communicate with dozens of robotic spacecraft exploring the solar system in addition to the crewed Artemis II mission. 

The DSN is managed by JPL for the agency’s Space Communications and Navigation program, which is located at NASA Headquarters within the Space Operations Mission Directorate. The DSN allows missions to track, send commands to, and receive scientific data from faraway spacecraft. JPL is managed by Caltech in Pasadena, California, for NASA.

For more information about Artemis II, visit: https://www.nasa.gov/mission/artemis-ii/

The post Watching Over the Deep Space Network Before Artemis II Signal Acquisition appeared first on NASA Science.

Watching the Artemis II Launch From JPL’s Space Flight Operations Facility


Staff at NASA’s Jet Propulsion Laboratory in Southern California watch the launch of the agency’s Artemis II mission to the Moon on April 1, 2026, at the Space Flight Operations Facility, which operates the Deep Space Network (DSN).
PIA26743
Credits: NASA/JPL-Caltech

Description

Staff at NASA’s Jet Propulsion Laboratory in Southern California watch the launch of the agency’s Artemis II mission to the Moon on April 1, 2026, at the Space Flight Operations Facility, which operates the Deep Space Network (DSN). Soon after launch, the Artemis II crew communicated with the Near Space Network while they were close to Earth. Later, communications were handed off to the DSN as the Orion capsule continued its journey into deep space.

The DSN comprises three complexes in Goldstone, California; Madrid, Spain; and Canberra, Australia. Each complex has several radio frequency antennas that communicate with dozens of spacecraft exploring the solar system in addition to the Artemis II mission.

The DSN is managed by JPL for the agency’s Space Communications and Navigation program, which is located at NASA Headquarters within the Space Operations Mission Directorate. The DSN allows missions to track, send commands to, and receive scientific data from faraway spacecraft. JPL is managed by Caltech in Pasadena, California, for NASA.

For more information about Artemis II, visit: https://www.nasa.gov/mission/artemis-ii/

The post Watching the Artemis II Launch From JPL’s Space Flight Operations Facility appeared first on NASA Science.

JPL’s ‘Lucky Peanuts’ Before Artemis II Launch


A container of “lucky peanuts” sits above workstations within the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory shortly before the launch of the Artemis II mission to the Moon on April 1, 2026.
PIA26742
Credits: NASA/JPL-Caltech

Description

A container of “lucky peanuts” sits above workstations within the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory shortly before the launch of the Artemis II mission to the Moon on April 1, 2026. Eating peanuts before launches and other major mission events is a longstanding tradition at JPL.

The Space Flight Operations Facility operates the DSN, which comprises three complexes in Goldstone, California; Madrid, Spain; and Canberra, Australia. Each complex consists of several radio frequency antennas that communicate with dozens of spacecraft exploring the solar system in addition to the crewed Artemis II mission. 

The DSN is managed by JPL for the agency’s Space Communications and Navigation program, which is located at NASA Headquarters within the Space Operations Mission Directorate. The DSN allows missions to track, send commands to, and receive scientific data from faraway spacecraft. JPL is managed by Caltech in Pasadena, California, for NASA.

For more information about Artemis II, visit: https://www.nasa.gov/mission/artemis-ii/

The post JPL’s ‘Lucky Peanuts’ Before Artemis II Launch appeared first on NASA Science.

Watching the Artemis II Mission Unfold at JPL’s Space Flight Operations Facility


Staff at NASA’s Jet Propulsion Laboratory in Southern California watch the agency’s Artemis II mission unfold soon after launch on April 1, 2026, at the Space Flight Operations Facility, which operates the Deep Space Network (DSN).
PIA26747
Credits: NASA/JPL-Caltech

Description

Staff at NASA’s Jet Propulsion Laboratory in Southern California watch the agency’s Artemis II mission unfold soon after launch on April 1, 2026, at the Space Flight Operations Facility, which operates the Deep Space Network (DSN).

The DSN comprises three complexes in Goldstone, California; Madrid, Spain; and Canberra, Australia. Each complex has several radio frequency antennas that communicate with dozens of spacecraft exploring the solar system in addition to the crewed Artemis II mission.

The DSN is managed by JPL for the agency’s Space Communications and Navigation program, which is located at NASA Headquarters within the Space Operations Mission Directorate. The DSN allows missions to track, send commands to, and receive scientific data from faraway spacecraft. JPL is managed by Caltech in Pasadena, California, for NASA.

For more information about Artemis II, visit: https://www.nasa.gov/mission/artemis-ii/

The post Watching the Artemis II Mission Unfold at JPL’s Space Flight Operations Facility appeared first on NASA Science.

Supporting Artemis II From JPL’s Space Flight Operations Facility


The Artemis II mission patch appears in the center screen of the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory in Southern California on April 1, 2026, shortly before the mission launched to the Moon.
PIA26741
Credits: NASA/JPL-Caltech

Description

The Artemis II mission patch appears in the center screen of the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory in Southern California on April 1, 2026, shortly before the mission launched to the Moon. A graphical representation of the antennas of the agency’s Deep Space Network (DSN), left, indicates which antennas are currently sending and receiving data, highlighted in white. 

Shortly after liftoff at 6:35 p.m. EDT from NASA’s Kennedy Space Center in Florida, initial Artemis II communications were handled by the agency’s Near Space Network. Then, the DSN acquired signal, marking the first time in over 50 years that the network would be communicating with a crewed spacecraft traveling through deep space.

The Space Flight Operations Facility operates the DSN, which comprises three complexes in Goldstone, California; Madrid, Spain; and Canberra, Australia. Each complex consists of several radio frequency antennas that communicate with dozens of spacecraft exploring the solar system in addition to the Artemis II mission. 

The DSN is managed by JPL for the agency’s Space Communications and Navigation program, which is located at NASA Headquarters within the Space Operations Mission Directorate. The DSN allows missions to track, send commands to, and receive scientific data from faraway spacecraft. JPL is managed by Caltech in Pasadena, California, for NASA.

For more information about Artemis II, visit: https://www.nasa.gov/mission/artemis-ii/

The post Supporting Artemis II From JPL’s Space Flight Operations Facility appeared first on NASA Science.

New Emergent Ventures tranche on science policy and communication

American science policy is now perhaps more important than at any previous point in history—how science is organized and funded (or not funded) in this country continues to rise in significance.

I have also spoken about the undersupply of people who understand this and are trying to act on it in Washington. Unfortunately, the career paths here are neither well-defined nor well-regarded. I would like to help change that.

What we’re looking for:

  • Priority 1: Metascience Policy Entrepreneurs in DC
    • Funding for individuals working at the intersection of science policy and institutional reform—people who can shape how Congress and federal agencies think about science funding and governance.
  • Priority 2: Science and Metascience Communicators
    • Funding for communicators via any medium—bloggers, journalists, authors, podcasters, artists, filmmakers, conveners, influencers, event organizers—who can expand the reach of pro-science ideas beyond their current audience.

We are doing this with Renaissance Philanthropy, and we thank them for the support. You can apply through the regular Emergent Ventures portal.

The post New Emergent Ventures tranche on science policy and communication appeared first on Marginal REVOLUTION.

       


Helium Is Hard to Replace

The war in Iran and the subsequent closure of the Strait of Hormuz have unfortunately made us all familiar with details of the petroleum supply chain that we could formerly happily ignore. Every day we get some new story about some good or service that depends on Middle East petroleum and whose production has been disrupted by the war. Fertilizer production, plastics, aluminum, the list goes on.

One such supply chain that’s suddenly getting a lot of attention is helium. Helium is produced as a byproduct of natural gas extraction: it collects in the same underground pockets that natural gas collects in. Qatar is responsible for roughly 1/3rd of the world’s supply of helium, which was formerly transported through the Strait of Hormuz in specialized containers. Thanks to the closure of the strait, helium prices have spiked, suppliers are declaring force majeure, and businesses are scrambling to deal with looming shortages. (For many years the US government maintained a strategic helium reserve, but this was sold off in 2024.)

What I find interesting about helium is that in many cases, it’s very hard to substitute for. Helium has a unique set of properties — in particular, it has a lower melting point and boiling point than any other element — and technologies and processes that rely on those properties can’t easily switch to some other material.

Helium production

Helium is the second lightest element in the periodic table (after hydrogen), and the second most common element in the universe (also after hydrogen). But while helium is very common on a cosmic scale, here on earth it’s not so easy to get. Because helium is so light, it rises to the very top of the atmosphere, where it eventually escapes into space.1 So essentially all helium used by modern civilization comes from underground.

Helium is produced via the radioactive decay of elements like uranium and thorium, and it collects in underground pockets of natural gas. This source of helium was first discovered in the US in 1903, when a natural gas well in Kansas produced a geyser of gas that refused to burn. Scientists at the University of Kansas eventually determined that this was due to the presence of helium. Like petroleum, helium has collected in these pockets over the course of millions of years, and thus (like petroleum) there’s a limited supply of underground helium that can be extracted. As with petroleum, people are often worried that we’re running out of it.

Because helium is a byproduct of natural gas extraction, and because only some natural gas fields have helium in appreciable quantities, a small number of countries are responsible for the world’s supply of helium. The US and Qatar together produce around 2/3rds of the world’s helium supply. Russia, Algeria, Canada, China, and Poland produce most of the remaining balance.

Elemental helium has a few different useful properties. The most important one is that, thanks to the small size and completely filled outer electron shell of helium atoms, helium has a lower boiling point than any other element. Liquid helium boils at just 4.2 kelvin (-452 degrees Fahrenheit). By comparison, liquid hydrogen boils at 20 K, and liquid nitrogen boils at a positively balmy 77 K.

Its low boiling point makes helium very useful for getting something really, really cold. When a liquid boils, it transforms into a gas, and during this process it will pull energy from its surroundings due to evaporative cooling. This is why your body sweats: to cool you down as the liquid evaporates. When a liquid has a very low boiling point, this heat extraction happens at a very low temperature. Helium also stays liquid at much lower temperatures than other elements: nitrogen freezes solid at 63 K and hydrogen at 14 K, but at atmospheric pressure helium remains liquid all the way down to absolute zero. If you need to cool something to just a few degrees above absolute zero, liquid helium is essentially the only practical way to do that.

Helium also has a few other useful properties. As we noted, helium is very light: it will naturally rise in the atmosphere, which makes it useful as a lifting gas. Thanks to its filled outer electron shell, it is inert, and won’t react with other materials. Helium also has high thermal conductivity — at room temperature, helium can move heat about six times better than air.

The uses of helium

The world uses around 180 million cubic meters of helium each year. (This sounds like a lot, but it’s just 0.11% of the 159 billion cubic meters of nitrogen the world uses each year, and 0.004% of the over 4 trillion cubic meters of natural gas that the world uses each year.) But while it’s not used in enormous quantities compared to some other gases, helium is nevertheless quite important. Different industries make use of helium’s properties in different ways, and while in some cases there are reasonable substitutes for helium, in most cases helium has no practical replacement.

MRI machines

Some of the biggest consumers of helium are MRI machine operators, which consume around 17% of the helium used in the US. MRI machines work by creating very strong magnetic fields, which change the orientation of hydrogen atoms in tissues in your body. A pulse of radio waves is then sent into your body, which temporarily disrupts this orientation. When the pulse stops, different types of tissue return to their alignment with the magnetic field at different rates, and that rate of change can be measured and converted into a picture of the interior of the body. The strong magnetic fields in MRI machines are created by superconducting magnets: when some materials get cold enough, they drop to zero electrical resistance, which makes it possible to put enormous amounts of electrical current through them and create extremely strong magnetic fields.2 The vast majority of MRI machines used today use superconducting magnets made from niobium-titanium (NbTi), which becomes superconducting at 9.2 degrees above absolute zero. This is well below the boiling point of any other coolant, making liquid helium the only practical option for cooling the magnets. A handful of MRI machines have been built using higher-temperature superconductors that don’t require helium cooling, but the vast majority of the 50,000 existing MRI machines in the world require helium.

The helium consumption of MRI machines has fallen drastically over time. Early MRI machines would lose helium at a rate of around 0.4 liters per hour, requiring large tanks of 1000-2000 liters that needed to be refilled every few months. (It’s notoriously difficult to prevent gaseous helium from leaking out of containers, which is why helium is also often used for leak detection.) But modern MRI machines are “zero boil-off,” which essentially never need to be recharged with helium. As these machines take up more market share, the helium requirements of MRI machines can be expected to fall. But for the foreseeable future, MRI will remain a substantial source of demand.

Semiconductors

Another major consumer of helium is the semiconductor industry, which uses around 25% of the helium worldwide, and around 10% of the helium in the US.3 As with MRI machines, helium is used to cool superconducting magnets, here used to increase the purity of silicon ingots grown using the Czochralski method. Helium is also used as a coolant in some production processes, as a non-reactive gas to flush out containers, for leak detection, and for a variety of other purposes. A 2023 report from the Semiconductor Industry Association noted that helium was used “as a carrier gas, in energy and heat transfer with speed and precision, in reaction mediation, for back side and load lock cooling, in photolithography, in vacuum chambers, and for cleaning.” The same report notes that for many of these uses, helium has no substitute.

Unlike MRI machines, which have used less and less helium over time, helium usage in the semiconductor industry seems to be trending up: some sources claim that helium consumed by the semiconductor industry is expected to rise by a factor of five by 2035. This seems to be in part due to the development of DUV and EUV semiconductor lithography machines, which require helium to function. Unlike many other gases, helium absorbs almost no EUV radiation, which (as I understand it) makes it hard to substitute for helium in EUV machines.

Fiber optics

Helium is also used in the manufacturing of fiber optic cable. Optical cable is made with an inner core of glass, surrounded by an outer “sleeve” of glass with a different index of refraction. This keeps photons within the inner core via the phenomenon of total internal reflection. During the manufacturing process, helium is used as a coolant when the outer “sleeve” is being deposited onto the core — with any other atmosphere, bubbles form between the two layers of glass. Roughly 5-6% of helium worldwide is used for the production of optical fiber, and there’s no known alternative.
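
The trapping mechanism is Snell’s law: light that hits the core-cladding boundary at a shallow enough angle is reflected completely. A quick illustration, using made-up but typical-looking refractive indices (actual values vary by fiber design):

```python
import math

n_core = 1.48      # inner core (illustrative value)
n_cladding = 1.46  # outer "sleeve" (illustrative value)

# Total internal reflection occurs beyond the critical angle,
# measured from the normal to the core/cladding boundary.
critical_angle = math.degrees(math.asin(n_cladding / n_core))
print(f"critical angle: {critical_angle:.1f} degrees")  # ~80.6
# Light traveling nearly parallel to the fiber axis stays trapped in the core.
```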

Purging gas

Beyond semiconductor manufacturing, other industries (particularly aerospace) use helium as a “purge gas” to clean out containers. Cleaning out a tank of liquid hydrogen, often used as a liquid rocket fuel, requires a gas with a boiling point low enough that it won’t freeze on contact with the hydrogen. Cleaning a tank of liquid oxygen doesn’t require quite as low a boiling point, but it’s best to use an inert gas to reduce the chance of a reaction with the highly reactive oxygen. Aerospace purging makes up around 7% of US helium consumption. Around half of that is used by NASA, which is the single biggest user of helium in the US.

Lifting gas

Because helium is lighter than air, it’s also used as a lifting gas in balloons and lighter-than-air airships as an alternative to the highly flammable hydrogen. Each Goodyear Blimp, for instance, uses around 300,000 cubic feet of helium. Around 18% of the helium consumed in the US is used as a lifting gas.
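
For a sense of scale, here’s a rough Archimedes-principle estimate of the lift that blimp volume provides (approximate sea-level densities; the weight of the envelope itself is ignored):

```python
CUBIC_FEET_TO_M3 = 0.0283168

rho_air = 1.20     # kg/m^3, approximate
rho_helium = 0.17  # kg/m^3, approximate

volume_m3 = 300_000 * CUBIC_FEET_TO_M3  # ~8,500 m^3
gross_lift_kg = (rho_air - rho_helium) * volume_m3
print(f"~{gross_lift_kg / 1000:.1f} metric tons of gross lift")  # ~8.7
```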

Scientific research and instruments

Helium is also widely used in scientific research. Much of this is for keeping things cold: superconducting magnets, such as those used in the Large Hadron Collider, typically require helium, as do the superconducting elements in SQUIDs, which are highly sensitive magnetic field detectors. Helium is also used in mass spectrometers, which are used for, among other things, detecting microscopic leaks in containers.

This is a major category of use in the US; roughly 22% of its helium consumption goes to “analytical, engineering, lab, science, and specialty gases.”

Welding

In the US, helium is also used for welding: its high thermal conductivity and its inertness make helium an excellent shielding gas, which prevents the pool of molten metal from being contaminated before it cools. In the US, welding makes up roughly 8% of helium use, but elsewhere in the world, it’s more common to use other shielding gases like argon.

Diving

Helium is also used as a breathing gas in deep sea commercial diving. At depths beyond 30 meters, breathing nitrogen (which makes up 78% of normal air) causes nitrogen narcosis, so diving beyond this depth is done using gas mixes that replace part of the nitrogen with helium. Roughly 5% of helium consumed in the US goes towards diving.

Helium is difficult to substitute in diving. Virtually every other breathable gas, except possibly neon, causes some degree of narcosis, and neon is denser than helium, making it harder to breathe at depth.
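
A common rule of thumb here is the “equivalent narcotic depth” of a helium mix: the depth at which plain air would feel roughly as narcotic. The simplified version below treats all non-helium gas as equally narcotic, which is a planning convention rather than settled physiology, so treat it as an illustration only:

```python
def equivalent_narcotic_depth(depth_m: float, helium_fraction: float) -> float:
    """Depth (meters) at which air would be roughly as narcotic as this mix."""
    return (depth_m + 10) * (1 - helium_fraction) - 10

print(equivalent_narcotic_depth(60, 0.0))  # 60.0 -- air at 60 m: severely narcotic
print(equivalent_narcotic_depth(60, 0.5))  # 25.0 -- a 50% helium mix feels like ~25 m
```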

Conclusion

For some of these applications, it’s possible to substitute helium with other materials. There are other shielding gases, such as argon, that can be used for welding, and other lifting gases, such as hydrogen, that can be used for balloons or airships. In other applications, it’s possible to dramatically reduce the consumption of helium via recycling systems or other methods designed to reduce its use. As we’ve noted, this has occurred with MRI machines, where modern ones use far less helium than their predecessors. And it seems to have happened with aerospace purging. A 2010 report from the National Academies of Sciences notes that if NASA and the Department of Defense were sufficiently motivated, they could dramatically reduce their helium consumption by recycling it. Since then, aerospace use of helium has fallen from 18.2 million cubic meters (26% of total US consumption) to 4 million cubic meters (7% of total US consumption). But the United States Geological Survey notes that most helium in the US is still unrecycled, and there’s lots of opportunity to dramatically reduce helium usage with various recapture and recycling systems. Many of these systems are capable of reducing helium consumption by 90% or more.

But “reducing” doesn’t mean “eliminating,” and it’s interesting to me how in so many cases there doesn’t seem to be any good substitute for helium.

1

Though thanks to circulation in the air, the helium concentration below the turbopause is roughly constant, about 5 parts per million.

2

If the magnets get too warm, the sudden loss of superconductivity, called a “quench,” can damage or destroy the magnets due to the heat generated from the now-present electrical resistance.

3

I estimated this by subtracting the 5-6% of helium used globally by the fiber optic industry from the 15% used by “semiconductors and fiber optics” in the United States Geological Survey report on helium.

Let’s talk about LLMs

Everybody seems to agree we’re in the middle of something, though what, exactly, seems to be up for debate. It might be an unprecedented revolution in productivity and capabilities, perhaps even the precursor to a technological “singularity” beyond which it’s impossible to guess what the world might look like. It might be just another vaporware hype cycle that will blow over. It might be a dot-com-style bubble that will lead to a big crash but still leave us with something useful (the way the dot-com bubble drove mass adoption of the web). It might be none of those things.

Many thousands of words have already been spent arguing variations of these positions. So of course today I’m going to throw a few thousand more words at it, because that’s what blogs are for. At least all the ones you’ll read here were written by me (and you can pry my em-dashes from my cold, dead hands).

Terminology, and picking a lane

But first, a couple quick notes:

I’m going to be using the terms “LLM” and “LLMs” almost exclusively in this post, because I think the precision is useful. “AI” is a vague and overloaded term, and it’s too easy to get bogged down in equivocations and debates about what exactly someone means by “AI”. And virtually everything that’s contentious right now about programming and “AI” is really traceable specifically to the advent of large language models. I suppose a slightly higher level of precision might come from saying “GPT” instead, but OpenAI keeps trying to claim that one as their own exclusive term, which is a different sort of unwelcome baggage. So “LLMs” it is.

And when I talk about “LLM coding”, I mean use of an LLM to generate code in some programming language. I use this as an umbrella term for all such usage, whether done under human supervision or not, whether used as the sole producer of code (with no human-generated code at all) or not, etc.

I’m also going to try to limit my comments here to things directly related to technology and to programming as a profession, because that’s what I know (I have a degree in philosophy, so I’m qualified to comment on some other aspects of LLMs, but I’m deliberately staying away from them in this post because I find a lot of those debates tedious and literally sophomoric, as in reminding me of things I was reading and discussing when I was a sophomore).

If you’re using an LLM in some other field, well, I probably don’t know that field well enough to usefully comment on it. Having seen some truly hot takes from people who didn’t follow this principle, I’ve thought several times that we really need some sort of cute portmanteau of “LLM” and “Gell-Mann Amnesia” for the way a lot of LLM-related discourse seems to be people expecting LLMs to take over every job and field except their own.

No silver bullet

A few years ago I wrote about Fred Brooks’ No Silver Bullet, and said I think it may have been the best thing Brooks ever wrote. If you’ve never read No Silver Bullet, I strongly recommend you do so, and I recommend you read the whole thing for yourself (rather than just a summary of it).

No Silver Bullet was published at a time when computing hardware was advancing at an incredible rate, but our ability to build software was not even close to keeping up. And so Brooks made a bold prediction about software:

There is no single development, in either technology or management technique, which by itself promises even a single order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.

To support this he looked at sources of difficulty in software development, and assigned them to two broad categories (emphasis as in the original):

Following Aristotle, I divide them into essence—the difficulties inherent in the nature of the software—and accidents—those difficulties that today attend its production but that are not inherent.

A classic example is memory management: some programming languages require the programmer to manually allocate, keep track of, and free memory, which is a source of difficulty. And this is accidental difficulty, because there’s nothing which inherently requires it; plenty of other programming languages have automatic memory management.

But other sources of difficulty are different, and seem to be inherent to software development itself. Here’s one of the ways Brooks summarizes it (emphasis matches what’s in my copy of No Silver Bullet):

The essence of a software entity is a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of functions. This essence is abstract, in that the conceptual construct is the same under many different representations. It is nonetheless highly precise and richly detailed.

I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation. We still make syntax errors, to be sure; but they are fuzz compared to the conceptual errors in most systems.

If this is true, building software will always be hard. There is inherently no silver bullet.

And to drive the point home, he also explains the diminishing returns of only addressing accidental difficulty:

How much of what software engineers now do is still devoted to the accidental, as opposed to the essential? Unless it is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not give an order of magnitude improvement.

This is a straightforward mathematical argument. If its two empirical premises—that the accidental/essential distinction is real and that the accidental difficulty remaining today does not represent 90%+ of the total—are true, then the conclusion which rules out an order-of-magnitude gain from reducing accidental difficulty follows automatically.

I think most programmers believe the first premise, at least implicitly, and once the first premise is accepted it becomes very difficult to argue against the second. In fact, I’d personally go further than the minimum required for Brooks’ argument. His math holds up as long as accidental difficulty doesn’t reach that 90%+ mark, since anything lower makes a 10x improvement from eliminating accidental difficulty impossible. But I suspect accidental difficulty, today, is a vastly smaller proportion of the total than that. In a lot of mature domains of programming I’d be surprised if there’s even a doubling of productivity still available from a complete elimination of remaining accidental difficulty.
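
Here’s that arithmetic in code form; it has the same shape as Amdahl’s law. If a fraction f of total effort is accidental, eliminating it entirely still leaves (1 - f) of the work, capping the speedup at 1/(1 - f):

```python
def max_speedup(accidental_fraction: float) -> float:
    """Upper bound on speedup from eliminating accidental difficulty entirely."""
    return 1 / (1 - accidental_fraction)

for f in (0.9, 0.5, 1 / 6):
    print(f"accidental fraction {f:.2f} -> max speedup {max_speedup(f):.1f}x")
# 0.90 -> 10.0x  (the 9/10 threshold needed for an order-of-magnitude gain)
# 0.50 -> 2.0x
# 0.17 -> 1.2x   (Brooks' guideline that coding is 1/6 of a software task)
```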

There’s also a section in No Silver Bullet about potential “hopes for the silver” which addresses “AI”, though what Brooks considered to be “AI” (and there is a tangent about clarifying exactly what the term means) was significantly different from what’s promoted today as “AI”. The most apt comparison to LLMs in No Silver Bullet is actually not the discussion of “AI”, it’s the discussion of automatic programming, which has meant a lot of different things over the years, but was defined by Brooks at the time as “the generation of a program for solving a problem from a statement of the problem specifications”. That’s pretty much the task for which LLMs are currently promoted to programmers.

But Brooks quotes David Parnas on the topic: “automatic programming always has been a euphemism for programming with a higher-level language than was presently available to the programmer.” And Brooks did not believe higher-level languages on their own could be a silver bullet. As he put it in a discussion of the Ada language:

It is, after all, just another high-level language, and the biggest payoff from such languages came from the first transition, up from the accidental complexities of the machine into the more abstract statement of step-by-step solutions. Once those accidents have been removed, the remaining ones are smaller, and the payoff from their removal will surely be less.

Many people are currently promoting LLMs as a revolutionary step forward for software development, but are doing so based almost exclusively on claims about LLMs’ ability to generate code at high speed. The No Silver Bullet argument poses a problem for these claims, since it sets a limit on how much we can gain from merely generating code more quickly.

In chapter 2 of The Mythical Man-Month, Brooks suggested as a scheduling guideline that five-sixths (83%) of time on a “software task” would be spent on things other than coding, which puts a pretty low cap on productivity gains from speeding up just the coding. And even if we assume LLMs reduce coding time to zero, and go with the more generous No Silver Bullet formulation which merely predicts no order-of-magnitude gain from a single development, that’s still less than the gain Brooks himself believed could come from hiring good human programmers. From chapter 3 of The Mythical Man-Month:

Programming managers have long recognized wide productivity variations between good programmers and poor ones. But the actual measured magnitudes have astounded all of us. In one of their studies, Sackman, Erikson, and Grant were measuring performances of a group of experienced programmers. Within just this group the ratios between best and worst performances averaged about 10:1 on productivity measurements and an amazing 5:1 on program speed and space measurements!

(although I’m personally skeptical of the “10x programmer” concept, the software industry overall does seem to accept it as true)

Anecdote time: much of what I’ve done over my career as a professional programmer is building database-backed web applications and services, and I don’t see much of a gain from LLMs. I suppose it looks impressive, if you’re not familiar with this field of programming, to auto-generate the skeleton of an entire application and the basic create/retrieve/update/delete HTTP handlers from no more than a description of the data you want to work with. But that capability predates LLMs: Rails’ scaffolding, for example, could do it twenty years ago.

And not just raw code generation, but also the abstractions available to work with, have progressed to the point where I basically never feel like the raw speed of production of code is holding me back. Just as Fred Brooks would have predicted, the majority of my time is spent elsewhere: talking to people who want new software (or who want existing software to be changed); finding out what it is they want and need; coming up with an initial specification; breaking it down into appropriately-sized pieces for programmers (maybe me, maybe someone else) to work on; testing the first prototype and getting feedback; preparing the next iteration; reviewing or asking for review, etc. I haven’t personally tracked whether it matches Brooks’ five-sixths estimate, but I wouldn’t be at all surprised if it did.

Given all that, just having an LLM churn out code faster than I would have myself is not going to offer me an order of magnitude improvement, or anything like it. Or as a recent popular blog post by the CEO of Tailscale put it:

AI’s direct impact on this problem is minimal. Okay, so Claude can code it in 3 minutes instead of 30? That’s super, Claude, great work.

Now you either get to spend 27 minutes reviewing the code yourself in a back-and-forth loop with the AI (this is actually kinda fun); or you save 27 minutes and submit unverified code to the code reviewer, who will still take 5 hours like before, but who will now be mad that you’re making them read the slop that you were too lazy to read yourself. Little of value was gained.

More simply: throwing more patches into the review queue, when the review queue still drains at the same rate as before, is not a recipe for increased velocity. Real software development involves not just a review queue but all the other steps and processes I outlined above, and more, and having an LLM generate code more quickly does not increase the speed or capacity of all those other things.
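
A toy model makes this concrete: if code generation speeds up but review capacity doesn’t, delivered throughput stays pinned at the review rate and the backlog just grows. All numbers below are invented for illustration:

```python
arrival_rate = 10  # patches written per day, LLM-accelerated
review_rate = 4    # patches reviewed per day, unchanged

backlog = 0
for day in range(1, 11):
    backlog += arrival_rate  # new patches join the review queue
    shipped = min(review_rate, backlog)
    backlog -= shipped       # only reviewed patches actually ship
    print(f"day {day}: shipped {shipped}, backlog {backlog}")
# Shipped output is stuck at 4/day; the backlog grows by 6/day without bound.
```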

So as someone who accepts Brooks’ argument in No Silver Bullet, I am committed to believe on theoretical grounds that LLMs cannot offer “even a single order-of-magnitude improvement … in productivity, in reliability, in simplicity”. And my own experience matches up with that prediction.

Practice makes (im)perfect

But enough theory. What about the empirical actual reality of LLM coding?

Every fan of LLMs for coding has an anecdote about their revolutionary qualities, but the non-anecdotal data points we have are a lot more mixed. For example, several times now I’ve been linked to and asked to read the DORA report on the “State of AI-assisted Software Development”. And initially it certainly seems like it’s declaring the effects of LLMs are settled, in favor of the LLMs. From its executive summary (page 3):

[T]he central question for technology leaders is no longer if they should adopt AI, but how to realize its value.

And elsewhere it makes claims like (page 34) “AI is the new normal in software development”.

But then, going back to the executive summary, things start sounding less uniformly positive:

The research reveals a critical truth: AI’s primary role in software development is that of an amplifier. It magnifies the strengths of high-performing organizations and the dysfunctions of struggling ones.

And then (still on page 3):

The greatest returns on AI investment come not from the tools themselves, but from a strategic focus on the underlying organizational system: the quality of the internal platform, the clarity of workflows, and the alignment of teams. Without this foundation, AI creates localized pockets of productivity that are often lost to downstream chaos.

Continuing on to page 4:

AI adoption now improves software delivery throughput, a key shift from last year. However, it still increases delivery instability. This suggests that while teams are adapting for speed, their underlying systems have not yet evolved to safely manage AI-accelerated development.

“Delivery instability” is defined (page 13) in terms of two factors:

  • Change fail rate: “The ratio of deployments that require immediate intervention following a deployment.”
  • Rework rate: “The ratio of deployments that are unplanned but happen as a result of an incident in production.”
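
For concreteness, here’s roughly how those two ratios could be computed from a deployment log. The field names are hypothetical, not DORA’s actual schema:

```python
# Each record flags whether the deployment needed immediate intervention, and
# whether it was an unplanned deployment responding to a production incident.
deployments = [
    {"needed_intervention": False, "unplanned_incident_response": False},
    {"needed_intervention": True,  "unplanned_incident_response": False},
    {"needed_intervention": False, "unplanned_incident_response": True},
    {"needed_intervention": False, "unplanned_incident_response": False},
]

total = len(deployments)
change_fail_rate = sum(d["needed_intervention"] for d in deployments) / total
rework_rate = sum(d["unplanned_incident_response"] for d in deployments) / total
print(f"change fail rate: {change_fail_rate:.0%}, rework rate: {rework_rate:.0%}")
# -> change fail rate: 25%, rework rate: 25%
```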

Later parts of the report get into more detail on this. Page 38 charts the increase in delivery instability, for example. And elsewhere in the section containing that chart, there’s a discussion of whether increases in throughput (defined by DORA as a combination of lead time for changes, deployment frequency, and failed deployment recovery time) are enough to offset or otherwise make up for this increase in instability (page 41, emphasis added by me):

Some might argue that instability is an acceptable trade-off for the gains in development throughput that AI-assisted development enables.

The reasoning is that the volume and speed of AI-assisted delivery could blunt the detrimental effects of instability, perhaps by enabling such rapid bug fixes and updates that the negative impact on the end-user is minimized.

However, when we look beyond pure software delivery metrics, this argument does not hold up. To assess this claim, we checked whether AI adoption weakens the harms of instability on our outcomes which have been hurt historically by instability.

We found no evidence of such a moderating effect. On the contrary, instability still has significant detrimental effects on crucial outcomes like product performance and burnout, which can ultimately negate any perceived gains in throughput.

And the chart on page 38 appears to show the increase in instability as quite a bit larger than the increase in throughput, in any case.

Curiously, that chart also claims a significant increase in “code quality”, and other parts of the report (page 30, for example) claim a significant increase in “productivity”, alongside the significant increase in delivery instability, which seems like it ought to be a contradiction. As far as I can tell, DORA’s source for both “productivity” and “code quality” is perceived impact as self-reported by survey respondents. Other studies and reports have designed less subjective and more quantitative ways to measure these things. For example, this much-discussed study on adoption of the Cursor LLM coding tool used the results of static analysis of the code to measure quality and complexity. And self-reported productivity impacts, in particular, ought to be a deeply suspect measure. From (to pick one relevant example) the METR early-2025 study (emphasis added by me):

This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.

LLM coding advocates have often criticized this particular study’s finding of slower development for being based on older generations of LLMs (more on that argument in a bit), but as far as I’m aware nobody’s been able to seriously rebut the finding that developers are not very effective at self-estimating their productivity. So to see DORA relying on self-estimated productivity is disappointing.

The DORA report goes on to provide a seven-part “AI capabilities model” for organizations (begins on page 49), which consists of recommendations like: strong version control practices, working in small batches, quality internal platforms, user-centric focus… all of which feel like they should be table stakes for any successful organization regardless of whether they also happen to be using LLMs.

Suppose, for sake of a silly example, that someone told you a new technology is revolutionizing surgery, but the gains are not uniformly distributed, and the best overall outcomes are seen in surgical teams where in addition to using the new thing, team members also wash their hands prior to operating. That’s not as extreme a comparison as it might sound: the sorts of practices recommended for maximizing LLM-related gains in the DORA report, and in many other similar whitepapers and reports and studies, are or ought to be as fundamental to software development as hand-washing is to surgery. The Joel Test was recommending quite a few of these practices a quarter-century ago, the Agile Manifesto implied several of them, and even back then they weren’t really new; if you dig into the literature on effective software development you can find variations of much of the DORA advice going all the way back to the 1970s and even earlier.

For a more recent data point, I’ve seen a lot of people talking about and linking me to CircleCI’s 2026 “State of Software Delivery” report, which, like the DORA report, claims an uneven distribution of benefits from LLM adoption, and even says (page 8) “the majority of teams saw little to no increase in overall throughput”. The CircleCI report also raises a worrying point that echoes the increase in “delivery instability” seen in the DORA report (CircleCI executive summary, page 3):

Key stability indicators show that AI-driven changes are breaking more often and taking teams longer to fix, making validation and integration the primary bottleneck.

CircleCI further reports (page 11) that, year-over-year, they see a 13% increase in recovery time for a broken main branch, and a 25% increase for broken feature branches. And (page 12) they also say failures are increasing:

[S]uccess rates on the main branch fell to their lowest level in over 5 years, to 70.8%. In other words, attempts at merging changes into production code bases now fail 30% of the time.

For comparison, their own recommended benchmark of success for main branches is 90%.

The cost of these increasing failures and the increasing time to resolve them is quantified (emphasis matches the report, page 14):

For a team pushing 5 changes to the main branch per day, going from a 90% success rate to 70% is the difference between one showstopping breakage every two days to 1.5 every single day (a 3x increase).

At just 60 minutes recovery time per failure, you’re looking at an additional 250 hours in debugging and blocked deployments every year. And that’s at a relatively modest scale. Teams pushing 500 changes per day would lose the equivalent of 12 full-time engineers.
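
That arithmetic checks out under the report’s own assumptions (here: 250 working days per year, one hour of recovery per failure, and roughly 2,000 engineer-hours per year for the full-time-engineer conversion):

```python
changes_per_day = 5
failures_before = changes_per_day * 0.10  # 90% success: 0.5/day, one every two days
failures_after = changes_per_day * 0.30   # 70% success: 1.5/day
print(f"{failures_after / failures_before:.0f}x more breakages")  # 3x

extra_hours_per_year = (failures_after - failures_before) * 1.0 * 250
print(f"{extra_hours_per_year:.0f} extra hours per year")  # 250

# A team pushing 500 changes per day sees 100x the extra failures:
print(f"{extra_hours_per_year * 100 / 2000:.1f} full-time engineers")  # 12.5
```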

The usual response to reports like these is to claim they’re based on people using older LLMs, and the models coming out now are the truly revolutionary ones, which won’t have any of those problems. For example, this is the main argument that’s been leveled against the METR study I mentioned above. But that argument was flimsy to begin with (since it’s rarely accompanied by the kind of evidence needed to back up the claim), and its repeated usage is self-discrediting: if the people claiming “this time is the world-changing revolutionary leap, for sure” were wrong all the prior times they said that (as they have to have been, since if any prior time had actually been the revolutionary leap they wouldn’t need to say this time will be), why should anyone believe them this time?

Also, I’ve read a lot of studies and reports on LLM coding, and these sorts of findings—uneven or inconsistent impact, quality/stability declines, etc.—seem to be remarkably stable, across large numbers of teams using a variety of different models and different versions of those models, over an extended period of time (DORA does have a bit of a messy situation with contradictory claims that “code quality” is increasing while “delivery instability” is increasing even more, but as noted above that seems to be a methodological problem). The two I’ve quoted most extensively in this post (the DORA and CircleCI reports) were chosen specifically because they’re often recommended to me by advocates of LLM coding, and seem to be reasonably pro-LLM in their stances.

The other expected response to these findings is a claim that it’s not necessarily older models but older workflows which have been obsoleted. On this view, the state of the art is no longer to just prompt an LLM and accept its output directly; instead, one LLM (or LLM-powered agent) generates code while one or more layers of “adversarial” ones review and fix up that code, and also review each other’s reviews and responses and fixes, thus introducing a mechanism by which the LLM(s) will automatically improve the quality of the output.

I’m unaware of rigorous studies on these approaches (yet), but several well-publicized early examples do not inspire confidence. I’ll pick on Cloudflare here since they’ve been prominent advocates for using LLMs in this fashion. In their LLM rebuild of Next.js:

We wired up AI agents for code review too. When a PR was opened, an agent reviewed it. When review comments came back, another agent addressed them. The feedback loop was mostly automated.

But their public release of it, vetted through this process and some amount of human review on top, was initially unable to run even the basic default Next.js application, and was apparently riddled with security issues. From one disclosure post (emphasis added by me):

AI is now very good at getting a system to the point where it looks complete.

One specific problem cited was that the LLM rebuild simply did not pull in all the original tests, and therefore could miss security-critical cases those tests were checking. From the same disclosure post:

The process was feature-first: decide which viNext features existed, then port the corresponding Next.js tests. That is a sensible way to move quickly. It gives you broad happy-path coverage.

But it does not guarantee that you bring over the ugly regression tests, missing-export cases, and fail-open behavior checks that mature frameworks accumulate over years.

So middleware could look “covered” while the one test that proves it fails safely never made it over.

For example, Next.js has a dedicated test directory (test/e2e/app-dir/proxy-missing-export/) that validates what happens when middleware files lack required exports. That test was never ported because middleware was already considered “covered” by other tests.

On the whole, that post is somewhat optimistic, but considering that the Next.js rebuild was carried out by presumably knowledgeable people who presumably were following good modern practices and prompting good modern LLMs to perform a type of task those LLMs are supposed to be extremely good at—a language and framework well-represented in training data, well-documented, with a large existing test suite written in the target language to assist automated verification—I have a hard time being that optimistic.

And though I haven’t personally read through the recent alleged leak of the Claude Code source, I’ve read some commentary and analysis from people who have, and again it seems like a team that should be as well-positioned as anyone to take maximum advantage of the allegedly revolutionary capabilities of LLM coding isn’t managing to do so.

So the consistent theme here, in the studies and reports and in more recent public examples, is that being able to generate code much more quickly than before, even in 2026 with modern LLMs and modern practices, is still no guarantee of being able to deliver software much more quickly than before. As the CircleCI report puts it (page 3):

The data points to a clear conclusion: success in the AI era is no longer determined by how fast code can be written. The decisive factor is the ability to validate, integrate, and recover at scale.

And if that sounds like the kind of thing Fred Brooks used to say, that’s because it is the kind of thing Fred Brooks used to say. Raw speed of generating code is not and was not the bottleneck in software development, and speeding that up or even reducing the time to generate code to effectively zero does not have the effect of making all the other parts of software development go away or go faster.

So at this point it seems clear to me that in practice as well as in theory LLM coding does not represent a silver bullet, and it seems highly unlikely to transform into one at any point in the near future.

On being left behind

When expressing skepticism about LLM coding, a common response is that not adopting it, or even just delaying slightly in adopting it, will inevitably result in being “left behind”, or even stronger effects (for example, words like “obliterated” have been used, more than once, by acquaintances of mine who really ought to know better). LLMs are the future, it’s going to happen whether you like it or not, so get with the program before it’s too late!

I said I’ll stick to the technical mode here, but I’ll just mention in passing that the “it’s going to happen whether you like it or not” framing is something I’ve encountered a lot and found to be pretty disturbing and off-putting, and not at all conducive to changing my mind. And milder forms like “It’s undeniable that…” are rhetorically suspect. The burden of proof ought to be on the person making the claim that LLMs truly are revolutionary, but framing like this tries to implicitly shift that burden and is a rare example of literally begging the question: it assumes as given the conclusion (LLMs are in fact revolutionary) that it needs to prove.

Meanwhile, I see two possible outcomes:

  1. The skeptical position wins. LLM coding tools do not achieve revolutionary silver-bullet status. Perhaps they become another tool in the toolbox, like TDD or pair programming, where some people and companies are really into them. Perhaps they become just another feature of IDEs, providing functionality like boilerplate generators to bootstrap a new project (if your favorite library/framework doesn’t provide its own bootstrap anyway).
  2. The skeptical position loses. LLM coding tools do achieve true revolutionary silver-bullet status or beyond (consistently delivering one or more orders of magnitude improvement in software development productivity), and truly become a mandatory part of every working programmer’s tools and workflows, taking over all or nearly all generation of code.

In the first case, delayed adoption has no downside, unless someone happens to be working at one of the companies that decide to mandate LLM use. And even then, they can always pick it up at that point, assuming they don’t mind the mandate and don’t feel like looking for a new job instead.

As to the second case: based on what I’ve argued above about the status and prospects of LLMs up to now, I obviously think that continuing the type of progress in models and practices that’s been seen to date does not offer any viable path to a silver bullet. Which means a truly revolutionary breakthrough will have to be something sufficiently different from the current state of the art that it will necessarily invalidate many (or perhaps even all) prior LLM-based workflows in addition to invalidating non-LLM-based workflows.

And even if that doesn’t result in a completely clean-slate starting point with everyone equal—even if experience with older LLM workflows is still an advantage in the post-silver-bullet world—I don’t think it can ever be the sort of insurmountable advantage it’s often assumed to be. For one thing, even with vastly higher average productivity, there likely would not be sufficient people with sufficient pre-existing LLM experience to fill the vastly expanded demand for software that would result (this is why a lot of LLM advocates, across many fields, spend so much time talking about the Jevons paradox). For another, any true silver-bullet breakthrough would have to attack and reduce the essential difficulty of building software, rather than the accidental difficulty. Let us return once again to Brooks:

I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation.

Much of the skill required of human LLM users today consists of exactly this: specifying and designing the software as a “conceptual construct”, albeit in specific ways that can be placed into an LLM’s context window in order to have it generate code. In any true silver-bullet world, much or all of that skillset would have to be rendered obsolete, which significantly reduces the penalty for late adoption if and when the silver bullet is finally achieved.

Power to the people?

Aside from impact on professional programmers and professional software-development teams, another claim often made in favor of LLM coding is that it will democratize access to software development. With LLM coding tools, people who aren’t experienced professional programmers can produce software that solves problems they face in their day-to-day jobs and lives. Surely that’s a huge societal benefit, right? And it’s tons of fun, too!

Setting aside that the New York Times piece linked above was written by someone who is an experienced professional, I’m not convinced of this use case either.

Mostly I think this is a situation where you can’t have it both ways. It seems to be widely agreed among advocates of LLM coding that it’s a skill which requires significant understanding, practice, and experience before one is able to produce consistent useful results (this is the basis of the “adopt now or be left behind” claim dealt with in the previous section); strong prior knowledge of how to design and build good software is also generally recommended or assumed. But that’s very much at odds with the democratized-software claim: that someone with no prior programming knowledge or experience will simply pick up an LLM, ask it in plain non-technical natural language to build something, and receive a sufficiently functional result.

I think the most likely result is that a non-technical user will receive something that’s obviously not fit for purpose, since they won’t have the necessary knowledge to prompt the LLM effectively. They won’t know how to set up directories of Markdown files containing instructions and skill definitions and architectural information for their problem. They won’t have practice at writing technical specifications (whether for other humans or for LLMs) to describe what they want in sufficient detail. They won’t know how to design and architect good software. They won’t know how to orchestrate multiple LLMs or LLM-powered agents to adversarially review each other. In short, they won’t have any of the skills that are supposed to be vital for successful LLM coding use.

There’s also the possibility that “natural” human language alone will never be sufficient to specify programs, even to much more advanced LLMs or other future “AI” systems, due to inherent ambiguity and lack of precision. In that case, some type of specialized formal language for specifying programs would always be necessary. Edsger W. Dijkstra, for example, took this position and famously derided what he called “the foolishness of ‘natural language programming’”, which is worth reading for some classic Dijkstra-isms like:

When all is said and told, the “naturalness” with which we use our native tongues boils down to the ease with which we can use them for making statements the nonsense of which is not obvious.

Another possible outcome for LLM coding by non-programmers is the often-mentioned analogy to 3D printing, which also was hyped up as a great democratizer that would let anyone design and make anything, but never delivered on that promise and, at the individual level, became a niche hobby for the small number of enthusiasts who were willing and able to put in the time, money, and effort to get moderately good at it.

But the nightmare result is that non-programmer LLM users will receive something that seems to work, and only reveals its shortcomings much later on. Given how often I see it argued that LLMs will democratize coding and write utility programs for people working in fields where privacy and confidentiality are both vital and legally mandated, I’m terrified by that potential failure mode. And I think one of the worst possible things that could happen for advocates of LLM adoption is to have the news full of stories of well-meaning non-technical people who had their lives ruined by, say, accidentally enabling a data breach with their LLM-coded helper programs, or even “just” turning loose a subtly-incorrect financial model on their business. So even if I were an advocate of LLM coding, I’d be very wary of pushing it to non-programmers.

But ultimately, the only situation in which LLMs could meaningfully democratize access to software development is one where they achieve a true silver bullet, by significantly reducing or removing essential difficulty from the software development process. And as noted above, LLM advocates seem to believe that even in the silver-bullet situation there would still be such a gap between those with pre-existing LLM usage skills and those without, that those without could never meaningfully catch up. Although I happen to disagree with that belief, it remains the case that advocates can’t have it both ways: either LLM coding will be an exclusive club for those who built up the necessary skills, XOR it will be a great democratizer and do away with the need for those skills.

Takeaways

I’m already over 6,000 words into this post, and though I could easily write many more, I should probably wrap it up.

If I had to summarize my position on LLM coding in one sentence, it would be “Please go read No Silver Bullet”. I think Brooks’ argument there is both theoretically correct and validated by empirical results, and sets some pretty strong limits on the impact LLM coding, or any other tool or technique which solely or primarily attacks accidental difficulty, can have.

Of course, limits on what we can do or gain aren’t necessarily the end of the world. Many of the foundations of computer science, from On Computable Numbers to Rice’s theorem and beyond, place inflexible limits on what we can do, but we still write software nonetheless, and we still work to advance the state of our art. So the No Silver Bullet argument is not the same as arguing that LLMs are necessarily useless, or that no gains can possibly be realized from them. But it is an argument that any gains we do realize are likely going to be incremental and evolutionary, rather than the world-changing revolution many people seem to be expecting.

Correspondingly, I think there is not a huge downside, right now, to slow or delayed adoption of LLM coding. Very few organizations have the strong fundamentals needed to absorb even a relatively moderate, incremental increase in the amount of code they generate, which I suspect is why so many studies and reports find mixed results and lots of broken CI pipelines. Not only is there no silver bullet, there especially is no quick or magical gain to be had from rushing to adopt LLM coding without first working on those fundamentals. In fact, the evidence we have says you’re more likely to hurt than help your productivity by doing so.

I also don’t think LLMs are going to meaningfully democratize coding any time soon; even if they become indispensable tools for programmers, they are likely to continue requiring users to “think like a programmer” when specifying and prompting. We would be much better served by teaching many more people how to think rigorously and reason about abstractions (and they would be much better served, too) than we would by just plopping them as-is in front of LLMs.

As for what you should be doing instead of rushing to adopt LLM coding out of fear that you’ll be left behind: I think you should be listening to what all those whitepapers and reports and studies are actually telling you, and working on fundamentals. You should be adopting and perfecting solid foundational software development practices like version control, comprehensive test suites, continuous integration, meaningful documentation, fast feedback cycles, iterative development, focus on users, small batches of work… things that have been known and proven for decades, but are still far too rare in actual real-world software shops.

If the skeptical position is wrong and it turns out LLMs truly become indispensable coding tools in the long term, well, the available literature says you’ll be set up to take the greatest possible advantage of them. And if it turns out they don’t, you’ll still be in much better shape than you were, and you’ll have an advantage over everyone who chased after wild promises of huge productivity gains by ordering their teams to just chew through tokens and generate code without working on fundamentals, and who likely wrecked their development processes by doing so.

Or as Fred Brooks put it:

The first step toward the management of disease was replacement of demon theories and humours theories by the germ theory. That very step, the beginning of hope, in itself dashed all hopes of magical solutions. It told workers that progress would be made stepwise, at great effort, and that a persistent, unremitting care would have to be paid to a discipline of cleanliness. So it is with software engineering today.

Thursday assorted links

1. The “estrangement” from philosophy of economics.

2. Investing in scientific instruments.

3. New book coming on Carlsen vs. Niemann.

4. Houston economy growing at more than ten percent (and that is even without moving forward on bike paths).

5. “As Matt Yglesias rightly says, this is a worrying signal of declining state legitimacy: even the centre-left parties don’t believe they can make the case for the state raising taxes and spending them on public goods

The post Thursday assorted links appeared first on Marginal REVOLUTION.


Presidential Words Can Turn the Unthinkable Into the Thinkable − For Better or For Worse

Among the most disorienting things about President Donald Trump’s public language is how easily it can feel numbing and shocking in the same moment. He says something outrageous, the country recoils, and then the recoil itself begins to feel familiar.

As a scholar who studies presidential rhetoric, I know that over time that rhythm does its own kind of damage. It teaches the public to absorb the breach. What once might have sounded like a genuine political emergency or a violation of constitutional decorum begins to register as just another day in American political life.

But the past few days merit notice. The president’s demagoguery has taken a darker turn.

Trump’s rhetoric about Iran has become more than inflammatory. Beginning with posts to Truth Social in early April, he has used profanity-laden language – “Open the Fuckin’ Strait, you crazy bastards, or you’ll be living in Hell” – to threaten attacks on the country’s infrastructure. He urged Iranians to rise up against their government. He warned that “a whole civilization will die tonight” if Iran does not comply with U.S. demands.

The Associated Press treated those remarks as a significant escalation in the context of a live conflict, not merely as familiar Trumpian excess: “As the conflict has entered its second month, Trump has escalated his warnings to bomb Iran’s infrastructure.”

The International Committee of the Red Cross also issued the unusual reminder that the rules of war must be respected “in words and action,” suggesting that the rhetoric itself had become part of the danger.

But were Trump’s recent remarks really different from his many earlier outbursts?

I think they were. For years, Trump’s rhetoric has relied on insult, ridicule, threat and contempt. He has degraded opponents and helped coarsen the terms of public life.

What seems different about his words during the first week of April 2026 is the scale of violence his language primed people to imagine. His remarks about Iran moved beyond personal attacks or chest-thumping nationalism to take on a tone of collective punishment and civilizational destruction. The style was familiar. The horizon of harm was not.

President Donald Trump’s social media post of April 7, 2026, threatening the destruction of ‘a whole civilization,’ meaning Iran. Truth Social.

Politics of fear

Presidential rhetoric is more about permission than persuasion. Presidents do not only argue. They signal.

Through those signals, they tell the public what kind of situation this is, what kind of danger is at hand, and what kinds of response are reasonable. In that sense, the president can function like a human starting gun. His words cue journalists, legislators, party allies and ordinary supporters about how to classify events before anyone has fully processed them.

Political theorist Corey Robin’s work on the politics of fear is a useful lens for understanding what is happening with Trump’s violent rhetoric.

Fear, in Robin’s view, is not simply a feeling that arises naturally in response to danger. It is politically manufactured. Power teaches people what to fear, how to name danger, and where to direct their apprehension. Presidential rhetoric is an essential tool for performing that work.

Thus, a president does not only describe a threat. He also gives it shape and scale. He tells the public how large it is, how close it is, and what kinds of response should feel reasonable in its presence.

A good example of a president doing this happened after the Sept. 11, 2001, terrorist attacks when, while visiting ground zero in New York City, George W. Bush said, “I can hear you. The rest of the world hears you. And the people who knocked these buildings down will hear all of us soon.” With that sentence, Bush acknowledged the gravity of what had happened, but also promised to fight back and bring justice to the terrorists.

When it comes to statements like those Trump has recently made about Iran, the worry is not that the president has said something extreme. Instead, the larger concern lies in what repeatedly using extreme language does to the atmosphere in which judgment takes place.

Political hyperbole lowers the threshold of what the public can imagine as legitimate, as allowable. When presidents make threats like the ones Trump issued, mass suffering becomes more imaginable. The president’s words and social media posts test whether the public will continue to hear such language as over the line, or whether it will be absorbed as one more hard-edged negotiating tactic.

Shaping reality

Presidential rhetoric matters for reasons that go beyond persuasion or style.

It helps arrange reality. It tells the public what is serious, who is dangerous, whose suffering counts, and what forms of violence can be described as necessary. President Barack Obama did this in 2012, when he was speaking at a vigil to honor the shooting victims at Sandy Hook Elementary School.

“We bear a responsibility for every child because we’re counting on everybody else to help look after ours,” he said. “That we’re all parents; that they’re all our children.” With these words, Obama called everyone to feel, up close, the horrific loss of 20 children shot dead, and to work for a solution to gun violence.

Trump has benefited from a public worn down by repetition. Every new breach arrives trailing the memory of earlier ones.

People begin to doubt their own reactions. Surely this is appalling, they may think, but also, somehow, this is what he always does. That dual feeling is part of the harm. A damaged baseline makes serious escalation harder to recognize and judge.

The disorientation and disgust that so many people experienced in response to Trump’s thundering, violent proclamations is important. Even after years of erosion of what was deemed normal, some lines remain visible.

Paying attention now is not about pretending Trump has suddenly become someone new. It is about recognizing more clearly what his presidency has been teaching the public to hear as thinkable. The most serious harm may lie not only in what follows such rhetoric, but in the world it helps prepare people to accept.

This article was originally published by The Conversation on April 8, 2026. Click here to read the original.


The post Presidential Words Can Turn the Unthinkable Into the Thinkable − For Better or For Worse appeared first on DCReport.org.

Apex to develop larger satellites for missile defense, space-based computing

‘Comet Mini’ and ‘Comet XL’ target energy-intensive missions

The post Apex to develop larger satellites for missile defense, space-based computing appeared first on SpaceNews.

Portal Space Systems raises $50 million to accelerate spacecraft development

Portal spacecraft

Space mobility company Portal Space Systems has raised $50 million to scale up development of its highly maneuverable spacecraft.

The post Portal Space Systems raises $50 million to accelerate spacecraft development appeared first on SpaceNews.

Space Force taps 14 firms for $1.8 billion GEO surveillance program

The companies will compete for contracts to build ‘neighborhood watch’ satellites

The post Space Force taps 14 firms for $1.8 billion GEO surveillance program appeared first on SpaceNews.

Albedo prepares second VLEO mission for 2027 launch

SAN FRANCISCO – Albedo, the former Earth-observation startup now focused on satellite manufacturing, announced plans April 9 to send its second spacecraft into very low-Earth orbit (VLEO). Vicinity, scheduled to launch in 2027, will deploy solar arrays to provide far more power than the solar panels mounted on Precision, the bus that housed Albedo’s 2025 […]

The post Albedo prepares second VLEO mission for 2027 launch appeared first on SpaceNews.

Lockheed Martin wins $105 million contract for GPS ground control system as OCX winds down

The contract supports next-generation GPS IIIF satellites through 2030

The post Lockheed Martin wins $105 million contract for GPS ground control system as OCX winds down appeared first on SpaceNews.

Vantor to expand imaging satellite fleet, adding smallsats to increase revisit rates

The new architecture combines high resolution with rapid revisit

The post Vantor to expand imaging satellite fleet, adding smallsats to increase revisit rates appeared first on SpaceNews.

Heather Pringle on the upcoming Space Symposium and future international collaboration in space

In this episode of the Space Minds podcast, SpaceNews’ Mike Gruss interviews Heather Pringle, a retired Major General of the US Air Force who is now the CEO of the […]

The post Heather Pringle on the upcoming Space Symposium and future international collaboration in space appeared first on SpaceNews.

China conducts pair of Long March launches for Thousand Sails and Guowang megaconstellations

China carried out separate Long March 8 and Long March 6A launches this week, adding new batches of satellites to the country’s megaconstellation projects.

The post China conducts pair of Long March launches for Thousand Sails and Guowang megaconstellations appeared first on SpaceNews.

A new kind of arsenal: commercial satellites

The Pentagon is no longer asking whether commercial space will matter in future conflicts. It is building a strategy around the assumption that it will.

The post A new kind of arsenal: commercial satellites appeared first on SpaceNews.

What returning to lunar space means for human health

Artemis 2 earthset

Preparing to send astronauts back to the moon is no small feat. Spaceflight exposes the body to both known and unknown risks, some of which are subtle, cumulative and difficult to predict. When humans were last on the lunar surface in 1972, they stayed for just over three days. On that final mission, Apollo astronauts […]

The post What returning to lunar space means for human health appeared first on SpaceNews.

One neat trick to end extreme poverty

An age-old problem might have a surprisingly simple solution

Can the secondary market allay private-credit fears?

In the short run, a bit. In the long run, a lot more

The latest Italian banking whodunnit has it all

A coup, a conspiracy and even a (possibly unrelated) corpse

South Korea’s AI industrial policy meets the energy shock

The collision will not be pretty

Zhang Xuefeng (1984-2026) guided Chinese families through the college admissions process

A Chinese college-admissions influencer has died. He helped families choose which colleges and majors to rank after they received their results from the gaokao, the national admissions exam.

The NYT has the story:

China Mourned an Education Influencer. The Grief Was a Quiet Revolt.
Zhang Xuefeng helped people navigate the country’s unforgiving higher education system. The public outpouring after his death was a quiet rebuke to the punishing process. 
 By Li Yuan

 "Zhang Xuefeng became famous in China for telling students and their parents what few educators would: which majors were useless, which careers were dead ends and which dreams ordinary families could not afford.

“Knock out your children if they want to study journalism,” he famously said.

“The humanities all lead to service work, and service work, in one word, is sucking up” to clients.

...
"But when Mr. Zhang died last month, at 41, something uncommon in today’s China happened. In a country where large-scale, spontaneous public emotion is rarely tolerated, tens of thousands of people from all over the country showed up at his memorial service in the eastern city of Suzhou
. They stood in line for hours to pay their respects, some carrying flowers, one clutching a college admission letter. On the Chinese internet, posts and videos about Mr. Zhang and his death drew more than six billion views in a single day, according to a media monitoring firm.

...

"The gratitude directed at Mr. Zhang, who had 27 million followers on the short-video platform Douyin, reflected the fears of ordinary Chinese families trying to navigate an increasingly opaque and unforgiving education system. The extraordinary mourning after his death revealed how much of contemporary China is living with that anxiety.

"Students and parents thanked him for helping them navigate the high-stakes process of choosing a college major in China. Through livestreams and consulting sessions, he explained which majors led to stable jobs, which industries were declining and which professional certificates were worth pursuing — information readily available to families with connections or advanced education but far harder for everyone else to find. 

...

"For decades, the general college entrance exam, known in Chinese as the gaokao, was widely seen as a pathway to changing one’s fate; it was brutally competitive but capable of delivering upward mobility. As universities expanded and the job market deteriorated, that promise weakened. Getting into college became easier. Turning a degree into security did not.
...

"In many provinces, families have less than two weeks between getting the results of the exam and the deadline to apply to college. In that time, they need to make sense of hundreds of majors, universities and career paths

...

"The families who turn to consultants like Mr. Zhang are not, for the most part, China’s elite. But neither are they the poorest. Wang described his clients as families in the broad middle: small-business owners, office workers, skilled laborers and lower-level state employees. They often have money to pay for guidance but lack the social capital or institutional knowledge needed to navigate the system confidently on their own. " 

AI, Unemployment and Work

Imagine I told you that AI was going to create a 40% unemployment rate. Sounds bad, right? Catastrophic even. Now imagine I told you that AI was going to create a 3-day working week. Sounds great, right? Wonderful even. Yet to a first approximation these are the same thing. 60% of people employed and 40% unemployed is the same number of working hours as 100% employed at 60% of the hours.
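The equivalence is just arithmetic. A minimal sketch, assuming a hypothetical 40-hour standard week (the base number itself doesn't matter):

# Aggregate hours per capita under the two framings, assuming a
# hypothetical 40-hour standard week.
standard_week = 40.0

# Framing A: 40% unemployment, the employed work full weeks.
hours_a = 0.60 * standard_week          # 24.0 hours per person per week

# Framing B: full employment at a 3-day (60%) week.
hours_b = 1.00 * 0.60 * standard_week   # 24.0 hours per person per week

assert hours_a == hours_b
print(hours_a, hours_b)  # same total labor, different distribution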

So even if you think AI is going to have a tremendous effect on work, the difference between catastrophe and wonderland boils down to distribution. It’s not impossible that AI renders some people unemployable, but that proposition is harder to defend than the idea that AI will be broadly productive. AI is a very general-purpose technology, one likely to make many people more productive, including many people with fewer skills. Moreover, we have more policy control over the distribution of work than over the pure AI effect on work. Declare an AI dividend and create some more holidays, for example.

Nor is this argument purely theoretical. Between 1870 and today, hours of work in the United States fell by about 40%, from nearly 3,000 hours per year to about 1,800. Hours fell but unemployment did not increase. Moreover, not only did work hours fall, but childhood, retirement, and life expectancy all increased. In fact, in 1870 about 30% of a person’s entire life was spent working: people worked, slept, and died. Today it’s closer to 10%. Thus in the past 100+ years the amount of work in a person’s lifetime has fallen by about two-thirds, and the amount of leisure, including retirement, has increased. We have already sustained a massive increase in leisure. There’s no reason we cannot do it again.
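A quick check of the historical figures cited above:

# Verify the historical arithmetic.
hours_1870, hours_today = 3000, 1800
print(f"fall in annual hours: {1 - hours_today / hours_1870:.0%}")  # 40%

life_share_1870, life_share_today = 0.30, 0.10
print(f"fall in lifetime work share: {1 - life_share_today / life_share_1870:.0%}")  # 67%, about two-thirds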

The post AI, Unemployment and Work appeared first on Marginal REVOLUTION.

Travelling at the speed of light

Digital artwork of a spaceship with blue thrusters approaching a bright star cluster in a dark, grid-lined space scene.

A mind-bending trip into the cosmos aboard a speculative (yet theoretically possible) spacecraft near the speed of light

- by Aeon Video

Watch on Aeon

White-collar sweatshops

A cityscape at night through a window with a person in the foreground, blurred buildings in the background.

How did law firms and other professional workplaces become places of such crushing and soulless work?

- by Dylan Gottlieb

Read on Aeon

LDS fact of the day

The Church of Jesus Christ of Latter-day Saints has grown 66% this century, fueled in part by a record-breaking number of convert baptisms in 2025.

The church had 10,752,986 members at the end of 1999. The church had 17,887,212 at the end of 2025, according to an annual statistical report released Saturday during the church’s 196th Annual General Conference.

Furthermore, the growth is coming in every part of the world (as a qualifier, I am not sure what the outflow is). Here is the full article, via Tyler Ransom.
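As a check on the implied growth rate:

# Verify the 66% growth figure from the membership counts above.
members_1999, members_2025 = 10_752_986, 17_887_212
print(f"{members_2025 / members_1999 - 1:.0%}")  # 66%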

The post LDS fact of the day appeared first on Marginal REVOLUTION.

Financial Regulation and AI: A Faustian Bargain?

Important work is just flowing these days, and much of it (of course) concerns AI:

We study whether AI methods applied to large-scale portfolio holdings data can improve financial regulation. We build a state-of-the-art, graph-based deep learning model tailored to security-level data on the holdings of financial intermediaries. The architecture incorporates economic priors and learns latent representations of both assets and investors from the network structure of portfolio positions. Applied to the universe of non-bank financial intermediaries, covering nearly $40 trillion in wealth, the model substantially outperforms existing approaches in out-of-sample forecasts of intermediary trading behavior, including in crisis episodes. The model has more than ten times the explanatory power for the cross-sectional variation in asset returns during stress events compared to traditional approaches, and it outperforms existing systemic risk metrics at the institution level. Its learned representations show that the holdings network encodes rich, economically interpretable information about firesale vulnerability. The architecture is fully inductive, producing informative estimates even when entire asset classes or investors are withheld from training. We embed our empirical approach into a macroprudential optimal policy framework to formalize why these objects matter for policy and welfare. We show that even in an equilibrium environment subject to the Lucas critique, the predictive information from the model improves welfare by sharpening the cross-sectional targeting of policy interventions, and we demonstrate a complementarity between prediction and structural knowledge.

That is a new paper by Christopher Clayton and Antonio Coppola, of Yale and Stanford respectively.
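The authors' graph-based deep learning architecture is far richer than anything sketchable here, but the core idea, learning joint latent representations of investors and assets from a holdings network, can be illustrated with a toy stand-in: factorizing a synthetic holdings matrix. Everything below (the sizes, the data, the overlap readout) is hypothetical and is not the paper's model.

# Toy stand-in, not the paper's model: embed investors and assets jointly
# from a (synthetic) holdings matrix via truncated SVD.
import numpy as np

rng = np.random.default_rng(0)
n_investors, n_assets, k = 50, 30, 4  # hypothetical sizes and latent dimension

# holdings[i, j]: investor i's position in asset j (synthetic data).
holdings = rng.poisson(2.0, size=(n_investors, n_assets)).astype(float)

# Truncated SVD yields latent factors for both sides of the bipartite network.
U, s, Vt = np.linalg.svd(holdings, full_matrices=False)
investor_embed = U[:, :k] * s[:k]  # rows: latent investor representations
asset_embed = Vt[:k, :].T          # rows: latent asset representations

# Investors with similar embeddings hold overlapping portfolios; portfolio
# overlap is one classic ingredient of fire-sale vulnerability measures.
overlap = investor_embed @ investor_embed.T
print(overlap.shape)  # (50, 50)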

The post Financial Regulation and AI: A Faustian Bargain? appeared first on Marginal REVOLUTION.

The Moon is already on Google Maps—did Artemis II really tell us anything new?

The data pipeline from NASA's Artemis II mission opened to full blast a few hours after the Orion spacecraft looped behind the far side of the Moon on Monday night and established a laser communications link with a receiving station back on Earth.

A cache of high-resolution images began streaming down through this connection. NASA released the first batch to the public on Tuesday. Most of the images were taken by the four Artemis II astronauts using handheld Nikon cameras fitted with wide-angle and telephoto lenses. They also had iPhones to capture views out of the windows of their Orion Moon ship, named Integrity.

After reaching their farthest point from Earth, astronauts Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen are accelerating back to Earth for reentry and splashdown Friday evening to wrap up the first crewed lunar mission in more than 53 years.


Snow in the Shadow of the Andes

Snow covers large areas of southern Argentina’s high plains in this image acquired on April 3, 2026, by the MODIS (Moderate Resolution Imaging Spectroradiometer) on NASA’s Terra satellite. Several large glacier-fed lakes appear in shades of milky blue, while the Andes are veiled by clouds along the left side of the image.
NASA Earth Observatory/Michala Garrison

On the eastern, arid side of the Andes, the plains of southern Argentina stretch from the mountains to the Atlantic coast. The landscape often appears dry and brown, interrupted by colorful glacier-fed lakes, but a storm in early April 2026 blanketed swaths of the land in white. The MODIS (Moderate Resolution Imaging Spectroradiometer) on NASA’s Terra satellite captured this image of the snowy Patagonian Desert on the morning of April 3.

Early fall has been wetter than normal in southern Patagonia, said atmospheric scientist René Garreaud of the Universidad de Chile, noting that satellite-based estimates showed above-average precipitation from late March through early April. Much of the region’s precipitation tends to fall on the western, windward side of the Andes, he said. “But strong winds are capable of blowing some snow east into the Argentinian side, as beautifully reflected in the MODIS image.”

Snow lines a valley at the outlet of Lago Argentino in this image acquired on April 3, 2026, by the OLI (Operational Land Imager) on Landsat 9.
NASA Earth Observatory/Michala Garrison

Snow clings to the higher elevations, while valleys draining the large glacial lakes are bare. In this detailed Landsat image, a stark snow line appears along the outlet of Lago Argentino. Also note the color of the water. Lago Argentino and other nearby lakes contain an abundance of fine sediment, or glacial flour, pulverized by southern Patagonia’s plentiful glaciers. This suspended sediment makes the lakes appear milky blue or turquoise.

Though snow was widespread across the desert after the early-season storm, the wintry splendor was fleeting. A satellite view from the afternoon of April 4 showed that snow had melted from all but the highest mountain areas.

NASA Earth Observatory images by Michala Garrison, using MODIS data from NASA EOSDIS LANCE and GIBS/Worldview, and Landsat data from the U.S. Geological Survey. Story by Lindsey Doermann.

References & Resources

You may also be interested in:

Stay up-to-date with the latest content from NASA as we explore the universe and discover more about our home planet.

The West Faces Snow Drought
4 min read

Very wet—but very warm—weather in the western U.S. has left many mountainous regions looking at substantial snowpack deficits.

Article
Snow Buries the U.S. Interior and East
2 min read

Satellites observed a frozen landscape across much of the country after a massive winter storm.

Article
Fire Threatens Rare Forests in Argentina
3 min read

Blazes spread across Los Alerces National Park, home to some of the world’s oldest trees.

Article

The post Snow in the Shadow of the Andes appeared first on NASA Science.