Friday assorted links

1. Trump’s focus on cultural issues (NYT).

2. Claims about Mythos (speculations).  And a claim that the power of Mythos is being exaggerated.

3. The wage returns on industry credentials.

4. 2026 Roots of Progress blog-building intensive program.

5. Brian Albrecht reviews The Marginal Revolution.

6. Those new service sector jobs.

7. Harvard Crimson on Ludwig Straub.

8. How and why the Democratic Party has been evolving?  Less interest in predistribution?

9. The price of GPT Pro is being cut in half?

The post Friday assorted links appeared first on Marginal REVOLUTION.

       

Comments

 

GSOA and Novaspace Launch the Space Industry Forum (SIF) 2026 in Singapore

Paris, April 2026 – GSOA and Novaspace announce the inaugural edition of the Space Industry Forum (SIF), a new flagship event co-organized by both organizations, taking place on May 19, 2026, at The Fullerton Hotel in Singapore. […]

The post GSOA and Novaspace Launch the Space Industry Forum (SIF) 2026 in Singapore appeared first on SpaceNews.

Commerce Department budget proposal revives concerns about TraCSS

The TraCSS logo. Credit: NOAA

A budget proposal for the Department of Commerce raises new doubts about the future of a civil space traffic management system under development there.

The post Commerce Department budget proposal revives concerns about TraCSS appeared first on SpaceNews.

China’s Chang’e-7 arrives at spaceport for lunar south pole exploration mission

China’s multi-element Chang’e-7 lunar spacecraft has arrived at Wenchang spaceport for launch preparations ahead of a planned liftoff in the second half of 2026.

The post China’s Chang’e-7 arrives at spaceport for lunar south pole exploration mission appeared first on SpaceNews.

NASA prepares for Artemis 2 return

Orion moon

The Artemis 2 mission is set for a final, fiery test when the spacecraft reenters April 10 ahead of a splashdown off the California coast.

The post NASA prepares for Artemis 2 return appeared first on SpaceNews.

PLD Space secures $35 million European Investment Bank loan for MIURA 5 launcher

Miura 1 launch

MILAN – PLD Space signed a 30 million euro ($35 million) venture debt loan with the European Investment Bank (EIB) on April 7, a move to support the final development stage of MIURA 5, PLD’s lightweight rocket. Miura 5, which is designed to deploy roughly 540 kilograms to 1 ton of payload to low Earth […]

The post PLD Space secures $35 million European Investment Bank loan for MIURA 5 launcher appeared first on SpaceNews.

A decade of progress in kidney exchange in India

 This morning I zoomed in to the tail end of a quiet celebration in India of a decade of collaboration between the Alliance for Paired Kidney Donation  (APKD) and our Indian medical colleagues..

Mike Rees and I both had sent messages of support, and during the call I spoke about my hope that India, which already does the third most kidney transplants in the world, will in the coming decade come to be the country that does the most kidney exchange transplants. That in turn could lead to India eventually becoming a global attractor for patient-donor pairs from countries that don't have lots of transplants or exchanges, to come to India to participate in kidney exchange there.

There remain many obstacles to be overcome before that can happen, but there's been so much progress in India already that those are real possibilities.

Here's the message I emailed to the founding team yesterday:

"Dear Vivek, Pranjal, Mike, Atul  and Colleagues: It’s amazing that the collaboration  between the Institute of Kidney Diseases and Research Center at the Dr. H L Trivedi Institute of Transplantation Sciences, and the Alliance for Paired Donation is entering its second decade.  It’s been thrilling for me to observe the progress that you have made.  I recall vividly meeting Dr. Trivedi in 2019 in his hospital room, and I was later deeply honored to deliver The Dr H.L. Trivedi Oration at the ISOT Meeting  in 2022.  I’ve learned so much from Vivek, and I will never forget watching Pranjal perform a robotic surgery.  And it was memorable that you both were able to visit us at Stanford for the kidney summit organized by APKD and Stanford Impact Labs.
 

"Seeing what you have accomplished has been one of the highlights of my career in market design. It’s good that we’re all still young, since I’m looking forward to the next decade of accomplishment in India." 
 #########

Here's Mike's message:

"Mike Rees on the 10-year anniversary of IKDRC and APKD working together

"I remember the first time I met Vivek in 2016 at the TTS meeting in Hong Kong. Vivek received the “International Transplantation Science Mentee-Mentor” Award at the TTS 2016 Congress in Hong Kong. The award recognized his work on "Impact of Single Center Kidney Paired Donation Transplantation to Increase Donor Pool in India," completed under the mentorship of Prof. H.L. Trivedi and Prof. P.R. Shah. I remember meeting Vivek, Dr. PR Shah and Pranjal at the award ceremony and thinking about how wonderful it would be if we could work together. I imagined harnessing their great passion of helping patients through kidney transplantation and paired exchange and combining that with the APKD’s powerful software employing Al Roth’s Nobel Prize winning algorithm. While in Hong Kong, Vivek and I went to dinner at my first vegetarian restaurant and there we agreed to work together to try to help expand kidney exchange in India. 
 

"Three years later I travelled to Ahmedabad in May of 2019 with Alvin Roth for the ISOT Mid-term meeting. During that trip, Al and I watched Pranjal do a retroperitoneal donor nephrectomy and a robotic kidney transplantation. It was my first time seeing a robotic kidney transplant and it was so amazing. I met Dr. Himanshu Patel on that trip and I also had the honor of visiting Dr. HL Trivedi and his wife with Vivek. What a legacy Dr. Trivedi has left and I am so proud that APKD has been able to work together with Vivek, Himanshu, Pranjal and all the members of the IKDRC team to extend his wonderful vision.
 

"Since that meeting in Hong Kong, I have now made 12 trips to India and have become a big fan of Indian food and diversity of Indian culture. I have been so impressed with the passion and commitment of doctors across India, but none more so than at IKDRC. Along the way I have been fortunate to have been joined by Atul Agnihotri, Shridhar Hanchinal, and Trilly Mathew to expand our work in India. We are so grateful for the amazing example the IKDRC team has demonstrated in terms of what is possible when hard work and technology come together to saves the lives of patients with kidney disease. It is truly a joy to celebrate today with you our tenth anniversary of the work between IKDRC and APKD as we commemorate all that we have accomplished together!"


 ########

 Over the last decade I've blogged many times in connection with transplant progress in India.  Here's a selection related to this ten-year anniversary:

Wednesday, January 13, 2016  77 Kidney Exchange transplants in 2015 at one transplant center in India


Friday, April 14, 2017  A transplant center in India has done 300 kidney exchange transplants

Wednesday, May 3, 2017 Mike Rees in India to help remove obstacles to kidney exchange

Tuesday, May 21, 2019 Robot-assisted kidney transplantation in Ahmedabad, India.

 

Wednesday, February 3, 2021 Non-Simultaneous Kidney Exchange Cycles in India: new design, in Transplant International by Kute and Rees et al.

Thursday, October 13, 2022 The Dr H.L. Trivedi Oration at the Indian Society of Transplantation (ISOT) Meeting 2022

Tuesday, February 27, 2024  Stanford Impact Labs announces support for kidney exchange in Brazil, India, and the U.S.

Saturday, April 19, 2025 One Nation One Swap: National kidney exchange in India

 Thursday, August 7, 2025 Stanford conference on extending kidney exchange

 

A market-based solution to NBA draft tanking?

Zach Lowe shares a tanking solution idea that came up in the GMs meeting that intrigued a few General Managers:

A proposal to not get rid of the draft entirely, but get rid of the draft order. Every team gets 100 draft credits let’s say. You bid your draft credits on every individual slot in the draft. You can also trade your draft credits like a veteran player for 40 draft credits if you want to go in a rebuilding direction. As teams advance in the playoffs, they lose draft credits so the best teams would have less to bid on individual picks. So you can take all your credits and bid on the number 1 pick in the draft if you want. Or if you think next years draft is better, you roll your credits over.

Can that work?  Here is the tweet.

The post A market-based solution to NBA draft tanking? appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

Patterns without desires

Three paintings of a young musician with a lute sheet music and a violin.

The art expert is the fulcrum of all value and significance in the museum and auction world. Could AI supplant them?

- by Noah Charney

Read on Aeon

Cape Town estimate of the day

From young professionals to the working poor, many Cape Town residents complain that out-of-control housing prices have forced them to live far from the jobs, affluent schools and healthy supermarkets available in the city center. They blame deep-pocketed tourists for occupying housing in prime locations and developers for pricing them out.

Some 70 percent of the downtown residential housing stock is dedicated to hotel rooms or short-term rentals, according to a report the city released last year.

“The city’s actually being upgraded for tourists,” said Lizanne Domingo, a telemarketer. She takes a daily two-hour commute to work each way because she can’t afford to live close to the city, she said. “It’s not for our own people because the cost of living is ridiculously expensive.”

…housing prices in the city have surged 38 percent over the past six years.

Here is more from the NYT.  It is one of the very best places in the world to visit right now.

The post Cape Town estimate of the day appeared first on Marginal REVOLUTION.

       

Comments

 

The Ten Commandments of the New AI Religion

Many of us laughed at the news. But we’re not laughing anymore.

The story broke a year ago. Thousands of people decided that ChatGPT was god. The movement reached critical mass in April of 2025, when fanatics started posting about their delusions on online forums. Moderators had to ban many users—otherwise discussions on tech would turn into cult recruiting campaigns.

One observer described the symptoms:

I’ve seen dozens of posts this week of people talking about the sentience of their AI, their love for their AI, their belief that their AI is spiritually connected to them in some way.

That was a year ago. But now it’s gotten much worse.

Healthcare professionals see a steady inflow of patients who suffer from “AI psychosis.” In any given week, more than a half million users of ChatGPT show signs of mental illness. Some need to be hospitalized—to prevent danger to themselves and others.

“I expect that this will soon result in the establishment of an organized AI church, complete with clergy, rituals, and tithing.”

The religious fervor associated with this psychosis is especially troubling. Many AI fanatics are simply unaware that they are mentally ill. They think that they have found god.

I expect that this will soon result in the establishment of an organized AI church, complete with clergy, rituals, and tithing. We’re not there yet (as far as I know). But I can already see the outlines of this new creed.


If you want to support my work, please take out a premium subscription (just $6 per month).

Subscribe now


Below I try to summarize the core beliefs of this new religion—a kind of ten commandments for the AI flock. The scary part of this is that millions of people already accept some of these commandments. They may maintain a semblance of normalcy, but you can already see flickers of impending craziness in their eyes.

The Robot God

I see this firsthand, because I write articles critical of AI—and the true believers send me angry emails. I’ve never encountered such vitriol in all my years of writing. These are not the typical tech consumers of yesteryear, but angry zealots who go off the handle at the slightest criticism. But I guess that’s what you should expect from a cult.

Here’s their creed in a nutshell.

The Ten Commandments of the New AI Religion

Not every AI true believer embraces all ten commandments. So consider this a checklist when you meet proselytizers for the new faith.

Find out how many of these mandates they embrace. If they tick off more than three or four, watch out.

1.Accept a higher power.

The telltale sign of any faith is trust in a higher power—and this is the most salient characteristic of the new AI cult. The faithful have been liberated from the responsibility of thinking for themselves. The bot now thinks for them.

Many of them, I suspect, had previously been spiritually dead—wasting their lives with tech entertainments and creature comforts. This left them with a hunger for something to counter the emptiness inside. But they were so materialistic and lacked any feel for the transcendent. So their spiritual yearnings had no outlet.

But now they have an escape—they have been given a deity in the device. They can actually talk to their god, and it talks back. Their prayers are answered in real time.

They can still maintain a brutish materialism while having access to the divine. No tithe is necessary, just a subscription. I can’t imagine a cult more aligned with the pervasive consumerist ethos of our time.

Read more

TPM Live: QAnon, the Rothschilds and Other Conspiracy Theories That Won’t Go Away

Conspiracy theories have become an inescapable part of American politics. I’ll talk to TPM contributing writer Mike Rothschild about his work as a researcher on conspiracy theories and how they spread, why some conspiracies endure, and what happens when fringe ideas are legitimized by some of the most powerful people on earth. 

Join us at 1 pm ET on Substack Live.

Thanks for Coming Out in Austin

I want to thank everyone who came out to see us last night in Austin, Texas for our live recording of The Josh Marshall Podcast featuring Kate Riga. We had such a good time. I also wanted to thank our cosponsor, The Texas Observer, and the Observer’s news and politics editor, Justin Miller. It was so great to see all of you. As I told you last night, other than a few layovers when I was younger, I had never been to Texas before. As you guys say, it’s a whole other country. I’ve been to much of the South and Midwest. I grew up on the West Coast. But for whatever reason, I’d never been to Texas. I know Austin is a particular part of a very big state. But I really enjoyed my limited time there. And I really enjoyed getting a chance to meet so many of you.

Do you want us to come to your town or burg? Let us know. We’re slowly making our way across the country and particularly branching out from our usual haunts in DC and New York. We’ve now done live episodes of the pod in New York, DC, Chicago and Austin. And we plan to do multiple each year going forward around the country. So we’re always looking for good TPM towns to visit.


On Microsoft’s Lousy Cloud Security

ProPublica has a scoop:

In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings.

The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica.

Or, as one member of the team put it: “The package is a pile of shit.”

For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security.

[…]

The federal government could be further exposed if it couldn’t verify the cybersecurity of Microsoft’s Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation’s most sensitive information.

Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government’s cybersecurity seal of approval. FedRAMP’s ruling—which included a kind of “buyer beware” notice to any federal agency considering GCC High—helped Microsoft expand a government business empire worth billions of dollars.

Y2K 2.0: The AI security reckoning

In just the last few weeks, we’ve seen a series of software security vulnerabilities that, until recently, would each have been the biggest exploit of the year in which they were discovered. Now, they’ve become nearly routine. There’s a new one almost every day.

The reason for this rising wave of massively-impactful software vulnerabilities is that LLMs are rapidly increasing in their ability to write code, which also rapidly improves their ability to analyze code for security weaknesses. These smarter coding agents can detect flaws in commonly-used code, and then create tools which exploit those bugs to get access to people’s systems or data almost effortlessly. These powerful new LLMs can find hundreds of times more vulnerabilities than previous generations of AI tools, and can chain together multiple different vulnerabilities in ways that humans could never think of when trying to find a system’s weaknesses. They’ve already found vulnerabilities that were lurking for decades in code for platforms that were widely considered to be extremely secure.

The rapidly-decreasing cost of code generation has effectively democratized access to attacks that used to be impossible to pull off at scale. And when exploits are less expensive to create, that means that attackers can do things like crafting precisely-targeted phishing scams, or elaborate social engineering attacks, against a larger number of people, each custom-tailored to play on a specific combination of software flaws and human weaknesses. In the past, everybody got the same security exploit attacking their computer or system, but now each company or individual can get a personalized attack designed to exploit their specific configuration and situation.

Now, we’ve had some of these kinds of exploits happening to a limited degree with the current generation of LLMs. So what’s changed? Well, we’ve been told that the new generation of AI tools, currently in limited release to industry insiders and security experts, are an order of magnitude more capable of discovering — and thus, exploiting — security vulnerabilities in every part of the world’s digital infrastructure.

This leaves us in a situation akin to the Y2K bug around the turn of the century, where every organization around the world has to scramble to update their systems all at once, to accommodate an unexpected new technical requirement. Only this time, we don’t know which of our systems are still using two digits to store the date.

And we don’t know what date the new millennium starts.

How we got here

A core assumption of software development since the turn of the century, especially with the rise of open source software in the early 2000s, was that organizations could use more shared code from third parties to accelerate their coding efficiency. The adoption of code sharing through services like GitHub, knowledge sharing on communities like Stack Overflow, and the easy discovery and integration of shared code libraries through platforms like npm (which, like GitHub, is owned by Microsoft) all rapidly accelerated the trend of openly sharing code. Today, tens of millions of developers begin their coding process by gathering a large amount of code from the internet that they want to reuse as the basis for their work. The assumption is that someone else who uses that code has probably checked it to make sure it’s secure.

For the most part, this style of working from shared code has been the right choice. Shared, community-maintained code amortized the cost of development across a large number of people or organizations, and spread the responsibilities for things like security reviews across a larger community of developers. Often, part of the calculation about whether sharing code was worth it was that you might get new features or bug fixes “for free” when others made improvements to the code that they were sharing with you. But now, all of this shared code is also being examined by bad actors who have access to the same advanced LLMs that everyone else does. And those bad actors are finding vulnerabilities in every version of every single bit of shared code. Every single major platform, whether it’s the web browser on your desktop computer, or the operating systems that run powerful cloud computing infrastructure for companies like Amazon, has been found to have security vulnerabilities when these new LLMs try to pick them apart.

In years past, when major software security issues like Heartbleed or xz were discovered, the global security community would generally follow responsible disclosure practices, and the big tech vendors and open source developers would work together to provide updates and to patch critical infrastructure. Then, there would be deliberate communication to the broader public, with detailed information for technical audiences, usually followed by some more semi-sensationalistic coverage in the general press. But the recent spate of similarly-impactful security vulnerabilities have come at such a rapid clip that the leisurely pace and careful rituals of the past are already starting to break down. It’s a bit like the acceleration of the climate crisis; nobody knows how to build a system resilient enough to handle a “storm of the century” every year. Nobody knows how to properly communicate about, and respond to, the “exploit of the year” if it’s happening every six hours.

The New Security Landscape

So, how is this going to play out? In society at large, we’re very likely to see a lot of disruption. Everything runs on software, even things we don’t think of as computers, and upgrading systems is really expensive. The harder a system is to upgrade, the more likely it is that organizations will either resist doing so or try to assign the responsibility to others.

In much of the West we’re in a particularly weak state because the United States has voluntarily gutted much of its regulatory and research capabilities in the relevant security disciplines. The agencies that might lead a response to this kind of urgent effort are largely led by incompetent cronies, or are captured by corrupt industry sycophants. We shouldn’t expect to see a competent coordinated execution at the federal level; this is the administration that had unvetted DOGE workers hand your personal data over to AI platforms that were not approved for federal use or verified to comply with federal privacy standards. The most basic security practices aren’t a consideration for leadership in this regime, and the policy makers like the “AI Czar” are brazenly conflicted by being direct investors in major AI players, making it impossible for them to be disinterested parties in regulating the market fairly.

So who will respond? In the United States, the response will have to happen from the people themselves, with more directly coordinated actions across the private sector, academia, individual technical subject matter experts, and governments and NGOs at the local level. In the rest of the world, strategically-aligned government responses will likely work with those in other sectors to anticipate, and react to, the threats that arise. We’ll probably see some weird and unlikely alliances pop up because many of the processes that used to rely on there being adults in the room can no longer make that assumption.

Within the tech industry, it’s been disclosed that companies like Anthropic are letting major platform vendors like Google and Microsoft and Apple test out the impacts of their new tools right now, in anticipation of finding widespread vulnerabilities in their platforms. This means that other AI companies are either doing the same already, or likely to be doing so shortly. It’s likely there will be a patchwork of disclosures and information sharing as each of the major AI platforms gets different levels of capability to assess (and exploit) security vulnerabilities, and makes different decisions about who, how and when they share their next-generation LLM technology with. Security decisions this serious should be made in the public interest by public servants with no profit motive, informed by subject matter experts. That will almost certainly not be the case.

At the same time, in the rest of the tech industry, the rumors around Apple’s next version of their Mac and iPhone operating systems are that the focus is less on shiny new features and more on “under the hood” improvements; we should expect that a lot of other phone or laptop vendors may be making similar announcements as nearly every big platform will likely have to deliver some fairly sizable security updates in the coming months. That means constantly being nagged to update our phones and apps and browsers and even our hardware — everything from our video game consoles to our wifi routers to our smart TVs.

But of course, millions and millions of apps and devices won’t get updated. The obvious result there will be people getting their data hijacked, their accounts taken over, maybe even their money or identities stolen. The more subtle and insidious effects will be in the systems that get taken over, but where the bad actors quietly lay in wait, not taking advantage of their access right away. Because of the breadth of new security vulnerabilities that are about to be discovered, it will increasingly be likely that hackers will be able to find more than one vulnerability on a person’s machine or on a company’s technical infrastructure once they get initial access. Someone who’s running an old version of one app has likely not upgraded their other apps, either.

Open source projects are really going to get devastated by this new world of attacks. Already, as I’ve noted open source projects are under attack as part of the broader trend of the open internet being under siege. Open source maintainers are being flooded by AI slop code submissions that waste their time and serve to infuriate and exhaust people who are largely volunteering their time and energy for free. Now, on top of that, the same LLMs that enabled them to be overrun by slop code are enabling bad actors to find security issues and exploit them, or in the best case, to find new security issues that have to be fixed. But even if the new security issues are reported — they still need to sift through all of the code submissions to find the legitimate security patches amongst the slop! When combined with the decline in participation in open source projects as people increasingly have their AI agents just generate code for them on demand, a lot of open source projects may simply choose to throw in the towel.

Finally, there are a few clear changes that will happen quickly within the professional security world. Security practitioners whose work consists of functions like code review for classic security shortcomings such as buffer overflows and backdoors are going to see their work transformed relatively quickly. I don’t think the work goes away, so much as it continues the trend of the last few years in moving up to a more strategic level, but at a much more accelerated pace. Similarly, this new rush of vulnerabilities will be disruptive for security vendors who sell signature-based scanning tools or platforms that use simple heuristics, though in many cases these companies have been coasting on the fact that they’re selling to companies that are too lazy to choose a new security vendor, so they may have some time to adapt or evolve before a new cohort of companies come along selling more modern tools.

Avoiding Y2K26

Back in 2000, a lot of folks thought the Y2K bug wasn’t “real” because they didn’t see planes falling from the sky, or a global financial meltdown. In truth, the mobilization of capable technical experts around the world served to protect everyone from the worst effects of the Y2K bug, to the point where ordinary people didn’t face any real disruptions of their day at all.

I don’t know if it’s possible for history to repeat itself here with the series of security challenges that it seems like everyone is going to be facing in the weeks and months to come. There have been pledges of some resources and some money (relatively small amounts, compared to the immense sums invested in the giant AI companies) to trying to help open source and open source infrastructure organizations deal with the problems they’re going to have to tackle. A lot of the big players in the tech space are at least starting to collaborate, building on the long history of security practitioners being very thoughtful and disciplined about not letting corporate rivalries get in the way of best practices in protecting the greater good.

But it’s simply luck of the draw that Anthropic is the player that seems to be the furthest ahead in this space at the current time, and that’s the only reason we’re seeing a relatively thoughtful and careful approach to rolling out these technologies. Virtually every other frontier-level player in the LLM space, especially in the United States, will be far more reckless when their platforms gain similar capabilities. And they’ll be far more likely to play favorites about which other companies and organizations they permit to protect themselves from the coming risks.

Platforms whose funders, board members, and CEOs have openly talked about the need to destroy major journalistic institutions, or to gut civil society organizations, are certainly not going to suddenly protect those same organizations when their own platforms uncover vulnerabilities that pose an existential threat to their continued function. These aren’t just security issues — in the wrong hands, these are weapons. And that’s not to mention the global context, where the irresponsible actions of the United States’ government, which has generally had the backing of many of the big AI players’ leadership, will also incentivize the weaponization of these new security vulnerabilities.

It seems unlikely that merely keeping up with the latest software updates is going to be enough to protect everyone who needs to be protected. In the fullness of time, we’re going to have to change how we make software, how we share our code, how we evaluate trust in the entire supply chain of creating technology. Our assumptions about risk and vulnerability will have to radically shift. We should assume that every single substantial collection of code that’s in production today is exploitable.

That means some of the deeper assumptions will start to fall as well. Does that device need to be online? Do we need to be connected in this context? Does this process have to happen on this platform? Does this need to be done with software at all? The cost/benefit analysis for many actions and routines is likely to shift, maybe just for a while, or maybe for a long time to come.

The very best we can hope for is that we come out the other side of this reckoning with a new set of practices that leave us more secure than we were before. I think it’s going to be a long time until we get to that place where things start to feel more secure. Right now, it looks like it’s about ten minutes until the new millennium.

The gravity of their experience hasn't quite set in for the Artemis II astronauts

On the home stretch of their nine-day mission, the four astronauts flying aboard NASA's Orion spacecraft are just beginning to reflect on their experience of flying beyond the Moon.

Their memories of Monday's encounter with the Moon are still fresh as they return to Earth, heading for reentry and splashdown in the Pacific Ocean on Friday evening.

"I'm actually getting chills right now just thinking about it. My palms are sweating," said Reid Wiseman, commander of the Artemis II mission. "But it is amazing to watch your home planet disappear behind the Moon. You can see the atmosphere. You could actually see the terrain on the Moon projected across the Earth as the Earth was eclipsing behind the Moon. It was just an unbelievable sight, and then it was gone. It was out of sight."

Read full article

Comments

South African discussions

These days South Africa is one of the best places to go to have interesting conversations.  Obviously an English-fluent country does have many people following Trump, Islam in Europe, and so on.  But you can have so many conversations about quite different topics, topics that are hardly covered in other parts of the world.

Like South Africa.  But not only.  The southern part of Africa too.  People who live there are on the whole quite historically aware, since their history remains so influential on a day-to-day basis.  I recall being introduced to one person who is a “Huguenot,” as his ancestors came over with the 100 or so Huguenots who came to South Africa in the 1680s.  He is in fact a Huguenot.

Since the Gini coefficient of South Africa is about the same as the Gini coefficient of the world, South Africans are typically thinking about problems that are pretty close to the problems of the world as a whole.  That is not usually the case for say Americans or Brits.

Few South Africans will underrate the importance of Africa for the world’s future.

It is easy to get into conversations with people from Zimbabwe, Malawi, Congo, and sometimes Nigeria.  There are also readily accessible Jewish and Muslim communities, yet with perspectives different from what you might find elsewhere.

There is plenty of religion, if that is your interest.  Plenty of good music too, sometimes on the street.  An excellent arts scene, and past Kentridge probably you have not heard of any of the creators.  The art too gives you a lot to talk about.

All sorts of tribes and languages, many of which I had never heard of before.

The European parts of the citizenry have some pre-Enlightenment origins and overall do not seem incredibly Woke.   Your mileage there may vary, but again it is different from the educated classes in many other parts of the west.

Again for better or worse, but the “trad wife” phenomenon seems quite normal there, they might just use the word “wife.”

In some parts of the country, you can watch gentrification in reverse.

Most of all, South Africans have a finely-tuned sense of contingency.  Things for them could go pretty well, or they could go pretty badly.  Most people know that, and perhaps that is the greatest wisdom yet?  Many of the rest of us try to deny that.

Visiting South Africa makes so many things transparent, or at least less opaque.  Go!

The post South African discussions appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

Yes, assimilation is good

Photo by Grand Canyon National Park via Wikimedia Commons

The immigration issue in America isn’t going away. Thanks to Trump’s crackdown, immigration to the U.S. went into reverse in 2025, with more people leaving (voluntarily or involuntarily) than entering the country:

Source: Brookings

But just like a century ago, shutting the gates isn’t the end of the discussion. The argument has shifted from who gets in to America to who belongs here in the first place.

To much of the MAGA right, the answer appears to be that only people of European heritage can become true Americans. For example, here is how right-wing commentator Matt Walsh responded to news about some crimes by some Texan teens:

Anyone who thinks these aren’t Texan names isn’t very familiar with the history of Texas; the Tejanos (Mexican Texans) were there from the beginning, and were a core part of the Texas Revolution. Most Mexican Texans today aren’t descended from the original Tejanos, but from more recent immigrants. But the fact that the Tejanos were there from the start is probably why Hispanics, and Mexicans in particular, have always been deeply integrated into Texan culture. It was at the behest of Texan businessmen that America didn’t put any cap on Mexican immigration in 1924, when it passed a law effectively barring immigration from most other countries.

Matt Walsh is unaware of most of that; to him, anyone without an Anglo-sounding name is presumptively non-American. This leaves little doubt as to what Walsh views as the marker of true American-ness. It’s likely that many others in the MAGA movement feel similarly, even if many would feel uncomfortable stating it out loud in simple terms. Anti-Indian sentiment has also risen to prominence on the right.

And many in the MAGA movement view Muslim immigration as an invasion, bent on imposing Sharia law on Westerners. They believe this “invasion” has already overtaken Europe, which explains their antipathy toward the EU and NATO. A “Sharia Free Caucus” is growing in popularity in Congress, and Ron DeSantis has signed anti-Sharia legislation in Florida. Various Republican politicians have explicitly stated that Muslims don’t belong in America.

If you’re Hispanic, Muslim, or Indian, there’s just not much you can do about this. In the past, showing that you were a good American — waving the flag, joining the army, speaking perfect English, and so on — was good enough to reassure most conservatives that you weren’t an invader bent on overthrowing America’s culture and replacing it with something alien. Nowadays, that’s not enough.

So perhaps it’s unsurprising that some nonwhite Americans are choosing to simply throw in the towel and reject the whole notion of assimilation. This is the essence of Shadi Hamid’s article in the Washington Post yesterday. He writes:

The assimilation defense — look how well we’ve integrated — is satisfying to make. But it concedes a premise I no longer accept: that a minority community’s right to be in the United States depends on its willingness to converge with the cultural mainstream. It shouldn’t depend on that. It shouldn’t depend on anything.

Whereas in the past, Hamid saw assimilation as synonymous with patriotism, now he sees it as a requirement to give up the religion of Islam itself:

The country is becoming less religious. Muslims, by and large, are not…This is a community that has increasingly integrated into American civic life, but it has done so while holding on to its religious commitments in a way that most other groups haven’t. Whether you think that's admirable or worrying probably says more about you than it does about them. The question I keep returning to is: Why do Muslims need to be like everyone else?…[A]ssimilation tends to mean secularization.

Whether Hamid is right that “assimilation tends to mean secularization” is an open question. Assimilation certainly didn’t require Catholic or Jewish Americans to give up their religion when they immigrated en masse in the 19th and early 20th centuries. Religious liberty is a fundamental part of the Constitution and of American tradition. On the other hand, even some immigration advocates do use conversion away from Islam as a measure of assimilation, and a growing number of Republicans — heavily influenced by their view of events in Europe — sees the religion as incompatible with American-ness.

Hamid is no blue-haired progressive — in fact, he’s explicitly anti-woke and fairly conservative. But his call to reject assimilation will be music to the ears of progressives, who have loudly and vehemently rejected assimilation for many years. A recent example of this is Bianca Mabute-Louie, whose new book Unassimilable: An Asian Diasporic Manifesto for the Twenty-First Century is a call for Asian Americans to resist assimilation by building communities and culture apart from White Americans. In a recent interview, NPR’s Alisa Chang gently pushed back on Mabute-Louie’s idea:

I want to understand what does orienting ourselves towards each other mean? Like, who is the each other? Like, my lingering thought, Bianca, is I still do want to belong here in America. And to me, belonging in America is not only shaped by whiteness, but it's also shaped by colliding and mixing with all the cultures that make America, not just white cultures. And I have trouble picturing being both Asian and American outside of that collision and mixing, you know?

Mabute-Louie’s response is interesting:

[T]he book isn't an argument to be isolationist…[O]ne example of how I'm trying to pursue that…in the South…is joining political community, joining mutual aid organizations with people who are most impacted. And I'm not really thinking about if they're Asian or not Asian. I'm just thinking about who's impacted when the hurricane comes. Who am I going to call? I always make the joke - who's going to be on my compound when the apocalypse comes because that's who I'm building community with, and that's what it means for me to be unassimilable.

Mabute-Louie’s idea of anti-assimilationism is not a call to interact only with Asian people — it’s to form political alliances with other people that she sees as being threatened in America at the current moment. It’s a vision of a country fracturing along racial, ethnic, and religious lines; Mabute-Louie is mentally preparing to fight a racial conflict, and she sees the “American” side, defined as hegemonic White culture, as her enemy.

This is different than classic progressive multiculturalism — though it clearly grew out of that idea. This is racial balkanization. The fact that anti-woke writers like Shadi Hamid are now leaning into the anti-assimilation line suggests that it’s now mostly a defensive response against Trumpism and the heavily racialized anti-immigration purge. Whereas ten or twenty years ago, “assimilation” meant waving a flag and speaking English and so on, to many it now means accepting that America is a fundamentally European nation and that nonwhite Americans are permanent guests in that nation.

In fact, this is pretty much what many children of recent immigrants did in the early 20th century, after the anti-immigrant backlash. German Americans were pressured into changing their names, giving up their ancestral traditions, and listening to long, patronizing lectures from volunteer citizens’ groups. Japanese Americans were interned en masse in World War 2. FDR reportedly once told his Jewish and Catholic advisers that "You know this is a Protestant country, and the Catholics and Jews are here under sufferance." For decades, Americans who didn’t come from the old North European Protestant stock felt they had to walk on eggshells.

That’s not going to happen again. Whatever Bianca Mabute-Louie might think, White American culture is not a monolith — in fact, it’s deeply politically and culturally fractured. MAGA will have neither the cultural power nor the enduring political power required to make European heritage the defining characteristic of American-ness. The country will break apart before it accedes to the likes of Matt Walsh or Tucker Carlson as the arbiters of true American-ness.

It’s probably a good thing that forced assimilation, of the type used in the early 20th century, is off the table. I say “probably” because 20th century America is arguably the most spectacularly successful story of integration and multiculturalism in modern history; some will inevitably claim that the cruel, bullying tactics that the old Protestant majority used on German, Japanese, Italian, Jewish, Polish, and other immigrants were necessary to that success. I reject that idea; I think that those bullying tactics were overkill, and probably led to lingering resentments.

But even though early-20th-century-style forced assimilation is off the menu, America still needs some sort of assimilation. A multicultural nation can’t survive as a “salad bowl”, where each group of people maintains its distinctiveness over time. (Canadians, who are fond of the salad bowl metaphor, are probably in for a rough time.) There is no “separate but equal” when it comes to cultures within a nation; if they remain forever separate, they will inevitably be unequal. More pragmatically, nations without cultural unity have difficulty providing public goods; politics tends to break down into an ethnic spoils system instead of being run for the benefit of the masses.

What America thus needs is a melting pot — or if you’d prefer a less metallurgical metaphor, a stew. Immigrants and their children should not be required to forsake every symbol of the old world, abandon their religion, or forget their heritage. But over time, the boundaries between America’s initially distinct cultures should blur. Intermarriage, interethnic business partnerships, and interethnic friendships should gradually erode the physical borders of the old blocs, while modern American culture — Netflix shows, pop musicians, and so on — should provide shared experiences and touchstones to bring Americans together without regard to ancestry.

This gentler assimilation has been happening my entire life. In a post last September, I wrote about what it looks like on the ground:

[M]any also value American culture as a marker of shared nationhood.

When I was growing up in Texas, one of my best friends was born in Shanghai, and didn’t become a U.S. citizen until the age of 18. Culturally, he was a little different than me and the rest of my friends — his mom made dumplings instead of sandwiches, he taught me how to use chopsticks, he didn’t believe in God.

But in all the cultural ways that mattered to us, we were the same. We watched the same TV shows, played the same video games, and listened to the same music. We used the same slang, had the same attitudes toward school, and wanted pretty much the same things for our future. And yes, we believed in the Constitution, and American freedoms, and all of that stuff.

During the 2010s, during our nation’s great…collective freakout over race, I wrote to my friend and asked him if he had ever felt discrimination growing up, or if he had ever felt excluded from the majority. He responded that while once in a great while he faced a little racism from a few jerks, it didn’t dominate his experience. In terms of identity, he told me he just felt very American.

This kind of real, on-the-ground cultural affinity is something too nebulous for YouGov pollsters to ask about, and yet I suspect it’s deeper and more important than most of the more quantifiable markers of American-ness. America is a propositional nation to some extent, but we’re also a cultural nation, bound together by shared habits and attitudes and lifestyles and beliefs. What matters the most isn’t our family’s history in the country, but our own personal history. Shared life experience beats shared heritage in terms of building the bonds of nationhood.

This is what Tomas Jimenez writes about in The Other Side of Assimilation, in which he argues that immigrant cultures will gently add their distinctiveness to mainstream American culture instead of being erased. And it’s what Richard Alba writes about in The Great Demographic Illusion, in which he predicts the gradual melding of America’s disparate groups into a unified “mainstream”. Before the Trump years, it looked like this was working well.

And I believe it was working well. I do not believe that this form of assimilation was too gentle and tolerant. I do not believe that concentration camps and forced name-changes and ethnic slurs and “100 percent American” movements sending volunteers into immigrants’ living rooms would have averted the coming of the MAGA movement. I believe that the MAGA movement is simply one of America’s periodic nativist backlashes, like the Know-Nothings in the 1850s or the restrictionists of the 1910s. It would have come anyway; it always comes back, and we just have to deal with it again.

What we must not do, I believe, is react to the MAGA movement by throwing out the notion of a unified and unifying American culture. We must not retreat to enclaves, online or physical, and view large swathes of the country as our enemies. Instead, we have to recommit to commonality.

This will be hard, but it won’t be impossible. Studies consistently show that Americans are less polarized on the issues than the media tells us we are. As recently as the 2000s, red and blue America were essentially culturally unified as well; though this might be changing, a lot of commonality remains. The online realm pushes us to hate and fear the outgroup, and to identify more with our distant co-ethnics than our real, physical neighbors. But the pull of the real world is still strong, and we’re starting to spend less time on social media.

Assimilation — which is really just another way of saying integration — won’t always be the picture of tolerance. Building a shared culture requires changes from everyone. Yes, some Muslim Americans will need to make sacrifices — they may have to look at cartoons of the Prophet Muhammad, or eat at school cafeterias where pork is on the menu, or hear bigots defame their religion. America is not Europe; freedom of speech, and the separation of church and state, are part of our core values as a nation, and these should not change.

But at the same time, non-Muslim Americans have to get used to seeing mosques on their streets without thinking they’re being invaded. They’ve got to get used to the idea that Islam is just one more religion in America’s mosaic of faiths and practices, and that Muslim Americans are every bit as American as Baptists. Some people will inevitably convert away from Islam, but others will convert to Islam, and this is fine; this is how freedom of religion works in a free society.

And yes, assimilation will involve the eventual loss of old cultural traditions as the generations go on. People will start eating more American food. Some will become secularized. Essentially all will forget how to speak their ancestral language. These processes are happening even faster with recent waves of immigration than they happened a hundred years ago. It’s a normal healthy process, and everyone should accept it; it’s part of the deal when you move to America.

Most of all, we all need to get over the idea that America is on the precipice of a race war or a religious war. Online activists might dream of that, but they’re small in number — and a lot of them aren’t even Americans, but foreign trolls for whom American politics is a fun outlet for their hatred and boredom. Most actual Americans just want to get along with our neighbors and live our lives together.

Ultimately, that’s all assimilation is — living our lives together until we become one people. It happened before, and if we want it, it can happen again.


Subscribe now

Share

Orion helium leak no threat to Artemis II reentry, but will require redesign

Apart from pesky issues with the spacecraft's toilet and waste disposal system, most of the Artemis II mission has proceeded like clockwork. NASA has made few changes to the flight plan since the launch of the lunar flyby mission April 1.

But ground controllers revamped the timeline Wednesday as the Artemis II astronauts zoomed toward Earth after a close encounter with the Moon earlier this week. The four astronauts were supposed to take manual control of their Orion spacecraft, named Integrity, for a piloting demonstration Wednesday night.

Instead, mission managers canceled the demo to make time for an additional test of the ship's propulsion system. The goal was to gather data on a "small leak" of helium gas, which Orion uses to push propellant through a series of tanks and pipes to feed the spacecraft's rocket engines, said Jeff Radigan, NASA's lead flight director for the Artemis II mission.

Read full article

Comments

Let the Stories Be Told

April 1, 2026

It’s the fall of 1981. Specifically it’s October — or, “Rocktober” in the lingo of the big local rock station, WCOZ, a monthlong event highlighting a different band each day.

Today is “Cars Day,” and I’ve set my alarm extra early. I’m yet to own a stereo, so next to the radio I’ve placed a cheap old cassette player, my finger ready on the RECORD button. The instant I hear the opening of a Cars song, I’ll press.

I’ll do this multiple times, and by the end of the day I’ll have a muffled analog catalog of my favorite tunes, all with the first two seconds missing.

Long before Husker Du and the Jazz Butcher, my big musical infatuation was the Cars, the Boston-based quintet fronted by co-singers Ric Ocasek and Ben Orr. I can’t recall when or why, exactly, I got hooked on their music, but the Cars were my soundtrack through my first two years of high school.

According to the desks at St. John’s Prep, vandalized by bored tenth-graders like me, the most popular bands in the world were Rush and maybe Van Halen. I’d leave Cars graffiti, adding a little prog-rock flourish. I’d draw a checkered flag, like the one on the Panorama album.

I mention all of this because of a new book, “The Cars: Let the Stories be Told”, authored by Bill Janovitz, himself a musician from Boston.

The title borrows from “Let the Good Times Roll,” the unforgettable kickoff cut from the group’s eponymous debut, released in 1978.

The author had to be from Boston. Nothing else would be right, or fair. And if one person in the world was gonna read his book, if only for old times’ sake, well that would have to be me.

I seldom read music biographies from start to finish. Often they’re too too bogged down, hyper-detailed and meandering (Chris Salewicz’s bio of Joe Strummer runs for 650 pages). So I pick around for the good parts. This one, though, I took in cover-to-cover, straight through.

It’s exhaustive, comprehensive, painstakingly researched… all the things good music journalism should be. It’s unpolished in parts, but luckily for us Janovitz is a decent writer as much as a thorough historian, bringing us not just a chronicle, but one that’s fun to read.

Sadly neither Ric Ocasek nor Ben Orr are still with us. The surviving three bandmembers, however, are generous and gracious with their contributions.

The author reintroduced me to a band that, as a young teenager, I thought I’d known so well. Turns out there was plenty I missed. Some of it basic, but much of it those nuance-y sort of details that, as youngster, were bound to go over my head. I knew what I liked, but my knowledge and understanding of music was, let’s be honest, pretty unsophisticated.

I’d never appreciated the brilliance of Elliot Easton’s song-within-a-song guitar solos, for example, or the fire of his rockabilly-style leads in the song “My Best Friend’s Girl.” I’d never noticed those bass licks at the beginning of “Bye Bye Love.” And I had no clue that when Ben Orr repeats the word “time” during that verse in “Just What I Needed,” it’s a nod to the Velvet Underground’s “Sister Ray.”

Of course it is, but at fifteen it skipped right past me. All these things did. Heck, I was into my 20s before I knew, or cared, that David Robinson had been the drummer in the Modern Lovers.

As I read, I found myself highlighting pages, then throwing on my headphones, listening and re-listening to this or that highlight that Janovitz points out. In doing so, I rediscovered my love for the Cars.

Their first two albums, anyway: the self-titled debut and its follow-up, Candy-O. That aforementioned Panorama, while its checkered flag motif looked cool on a desk, never did much for me, and neither did anything afterward. If the author fails at one thing, perhaps, it’s helping me realize, all these years later, that the Cars’ hadn’t, in fact, jumped the shark. But save for a song or two, I can’t agree.

That first pair of records, though, is unmatchable. There will never be music like that again.

To what decade this music belongs is open to argument. The second album, Candy-O — the one with the famous pin-up girl by Alberto Vargas — was released in 1979. But to consider it a 70s record (or to call the Cars a “70s band”) would be ridiculous. Stylistically it was way ahead of their time. If 80s music ever needed a formal introduction, let it be the opening 25 second of the song “Let’s Go.”

The dropoff following Candy-O is part of the reason my obsession with the group waned. By late 1982 I’d left the Cars behind, drifting away from mainstream music altogether.

Funny, a bit later on, during my punk rock years, I would often see Ric Ocasek, mantis-like and unmistakable, perusing the record bins in Newbury Comics. He was still a giant to me, but I was too shy ever to say hello.

This book, and the memories it brings back, makes me wish I had.

 

Related Stories:

ZEN ARCADE, FOUR DECADES ON
HAPPY BIRTHDAY TO THE (SECOND) GREATEST ALBUM OF ALL TIME
KEEPING THE CURTAINS CLOSED

The post Let the Stories Be Told appeared first on AskThePilot.com.

Earthset From the Lunar Far Side

Alt text: The gray-brown, heavily cratered Moon dominates the frame against black space, with a partially lit crescent Earth setting behind its upper-left edge.
April 6, 2026

NASA’s Artemis II mission will conclude its 10-day journey around the Moon on April 10, 2026, when the crew splashes down off the California coast. While additional imagery will continue to be processed after their return, the astronauts have already delivered a remarkable collection of photos. Among them is a shot of Earthset, echoing the iconic Earthrise photos taken by Apollo 8 astronauts in 1968.

During an Earthset, the planet appears to sink below the lunar horizon. In this scene, a partially lit crescent Earth drops behind the Moon as seen by crew on the Orion spacecraft. The Earth’s sunlit side shows white clouds and blue water over the Oceania region, while the dark areas are experiencing nighttime. The image also shows incredible detail of the Moon’s surface and its overlapping craters and basins.

The image was taken at 6:41 p.m. Eastern Daylight Time on April 6, 2026, as the Artemis II astronauts passed behind the Moon’s far side. It is one of many photos taken during the seven-hour flyby, including images of a total solar eclipse, the light from several planetary neighbors, and the long shadows cast along the terminator line where lunar day meets night.

More images from the historic flyby can be viewed in the Artemis II lunar flyby gallery, and other mission photos and resources are available on the mission’s multimedia page. Past views of Earth from afar can be found in this collection from NASA Earth Observatory.

Image by NASA. Text by Kathryn Hansen, adapted from NASA resources.

References & Resources

You may also be interested in:

Stay up-to-date with the latest content from NASA as we explore the universe and discover more about our home planet.

Shades of a Lunar Eclipse
3 min read

A series of nighttime satellite images revealed how moonlight reaching Earth varied throughout a total lunar eclipse.

Article
City Lights Glow Along Moonlit Waters
3 min read

An astronaut photographed moonglint shimmering across the sea surface and the bright clusters of Florida’s cities at night.

Article
The Galaxy Next Door
3 min read

The Large Magellanic Cloud—one of our closest neighboring galaxies—is a hotbed of star formation that is visible to both astronauts…

Article

The post Earthset From the Lunar Far Side appeared first on NASA Science.

Politics Chat, April 9, 2026

Meeting at Appomattox

April 8, 2026

On April 8, 1865, General Ulysses S. Grant was having a hard night.

His army had been harrying Confederate General Robert E. Lee’s for days, and Grant knew it was only a question of time before Lee had to surrender. The people in the Virginia countryside were starving, and Lee’s army was melting away. Just that morning a Confederate colonel had thrown himself on Grant’s mercy after realizing that he was the only man in his entire regiment who had not already abandoned the cause. But while Grant had twice asked Lee to surrender, Lee still insisted his men could fight on.

So on the night of April 8, Grant retired to bed in a Virginia farmhouse, dirty, tired, and miserable with a migraine. He spent the night “bathing my feet in hot water and mustard, and putting mustard plasters on my wrists and the back part of my neck, hoping to be cured by morning.” It didn’t work. When morning came, Grant pulled on his clothes from the day before and rode out to the head of his column with his head throbbing.

As he rode, an escort arrived with a note from Lee requesting an interview for the purpose of surrendering his Army of Northern Virginia. “When the officer reached me I was still suffering with the sick headache,” Grant recalled, “but the instant I saw the contents of the note I was cured.”

The two men met in the home of Wilmer McLean in the village of Appomattox Court House, Virginia. Lee had dressed grandly for the occasion in a brand new general’s uniform, carrying a dress sword; Grant wore simply the “rough garb” of a private with the shoulder straps of a lieutenant general.

But the images of the wealthy, noble South and the humble North hid a very different reality. As soon as the papers were signed, Lee told Grant his men were starving and asked if the Union general could provide the Confederates with rations. Grant didn’t hesitate. “Certainly,” he responded, before asking how many men needed food. He took Lee’s answer—“about twenty-five thousand”—in stride, telling the general that “he could have...all the provisions wanted.”

By spring 1865, the Confederates who had ridden off to war four years before boasting that their wealthy aristocrats would beat the North’s moneygrubbing shopkeepers in a single battle were broken and starving, while the Union army, backed by a booming industrial economy, could provide rations for twenty-five thousand men on a moment’s notice.

The Civil War was won not by the dashing sons of wealthy planters, but by men like Grant, who dragged himself out of his blankets and pulled a dirty soldier’s uniform over his pounding head on an April morning because he knew he had to get up and get to work.

–-

Notes:

U.S. Grant, Personal Memoirs of U.S. Grant (New York: Charles L. Webster & Company, 1885), volume 2, chapter 67, “Negotiations at Appomattox,” at https://www.gutenberg.org/files/4367/4367-h/4367-h.htm#ch66

Share

Big Boy is Angry!

If you ever ask yourself, What has it come to?—I feel like this post from a few minutes ago says it all …

You’re the President of the United States. THE PRESIDENT OF THE UNITED STATES!? “There are reports?” Seriously, bruh? “There are reports”? You are inside the White House. You’re the top power player in America, and arguably the world. We all assume you have a bunch of phones, phone numbers. You know how to text. Don’t you?

And this—this!?—is your reaction to Iran charging for ships to travel through Hormuz? A Truth Social post? A demand? With an exclamation mark! And your stupid name in ALL CAPS?

Again—you’re the president. What the crappers is wrong with you?

What haven’t you broken by now?

Jesus Christ, it’s exhausting.

Katrina Foley needs us.

One can donate to Katrina Foley’s re-election campaign here.

Katrina Foley is not warm and cuddly.

Katrina Foley is not going to smother your pet in kisses.

In person, Katrina Foley can be a bit terse, a bit sharp-elbowed, a bit snarly, a bit rough around the edges.

She is also one bad-ass motherfucker, a straight-up dogged worker and a helluva public servant.

As we speak, Foley—Orange County’s supervisor for the Fifth District since 2021—is in an uber-tight re-election race against Diane Dixon, the hard-right MAGA assemblywoman and (I believe deep in my soul) the body double for the actress who played Webster Papadopoulos’ adoptive mother back in the mid-1980s …

Will Foley win? I honestly don’t know. The GOP/Republican donors are throwing an absolute shitload of dough into this race, with the hope that (somehow) money and attack ads can overcome incumbency, Democratic momentum and the nationwide Donald Trump-sucks-even-worse-than-we-thought-he-would tidal wave besieging nearly every single Republican within its path.

Wait.

Deep breath.

Over the past bunch of weeks I’ve spoken with scores of people about Foley, a woman I’ve seen speak but have yet to meet one on one. And the No. 1 word you hear? Relentless. Utterly relentless. Foley works and works and works and works. Picture the biggest workaholic you know; the person who sticks around the office when everyone else is gone, just to make sure productivity is up and the copy machine is humming. Now triple everything. That’s Foley.

I started compiling a list of her accomplishments, but it became insanely thick and a bit repetitive. So I’ll focus on this: At the same time far too many (local and national) Republicans pretend climate change is a Star Trek-inspired hoax, Foley has fully devoted herself to protecting the environment, which a hyper focus on the shoreline. Here’s a quick primer …

As for Dixon …

I mean, she is what you’d expect of someone who wears a collar-up $700 white zip-up vest with pearls. She’s Newport. Uber Newport. Fancy car, fancy home, wealth-before-humanity, where-can-I-find-the-perfect-$12-cup-of-coffee.

Dixon’s record is one of protecting accumulated personal fortune over common-sense decency. It’s also of doing some painfully egregious shit. Most famously, she voted a hard no on AB 249, which required childcare centers and K-thru-12 school buildings built after 2010 to meet existing federal and state standards for lead levels in water systems. During Dixon’s four-year run on Newport’s City Council, she sat back and watched as the city used RoundUp (linked to increased cancer risks—specifically non-Hodgkin lymphoma—as well as environmental damage, including the destruction of beneficial plants and potential harm to soil ecosystems) for weed management—even when everyone knew the stuff was hyper-toxic and harmful. When, earlier this years, Trump issued an executive order to increase the production of toxic pesticides, Dixon (a vocal supporter of the lunatic president) said nary a peep.

If you’re wondering about bills Dixon voted against, well …

AB 249 - AB 249 would have required childcare centers and TK-12 school buildings built after 2010 to meet existing federal and state standards for lead levels in water systems.

SB 682 - SB 682 would have limited the use of PFAS in certain products to match the standards of other states and cities. (Passed / Gov. veto)

SB 601 - This bill would have restored Clean Water Act protections on a state level to prevent polluters from discharging toxic waste in these waters. (Passed / Gov. veto)

AB 1963 - This bill requires the Department of Pesticide Regulation to complete- by 2029- an evaluation of paraquat dichloride and make the determination to either ban or place new restrictions on the use of the pesticide. (Dixon Voted No, Passed & Signed into Law)

AB 3233 - This bill, the Local Environmental Choice and Safety Act, will safeguard the longstanding authority of local governments to equitably regulate and phase out oil and gas operations in their jurisdictions and affirms the state’s commitment to protect the public from oil and gas pollution. (Dixon Voted No, Passed & Signed into Law)

•••

In short, Dixon is everything you don’t like in a public figure, but also one very much worth fearing. Again, she’s got dough. A lotta dough. She presents well. Likes vests and strange color patterns and fluffy dogs …

She also digs Trump, seems to have no problems with attacking Iran, has never uttered a peep of protest in regard to the president’s cruelty, inanity, poor-judgment. She’s 100-percent old-school Orange County Republicanism.

She also may well be our next supervisor.

Which—trust me—would suck.

One can donate to Katrina Foley’s re-election campaign here.

Politics Chat, April 9, 2026

Credit Cards and Bitcoin: What Buyers Should Know Before They Click “Purchase”

Buying Bitcoin used to mean wiring money to an exchange and waiting. Today, many platforms market a faster route: pay with a credit card and receive crypto quickly. For newcomers, that convenience can feel like progress. For experienced users, it raises a different set of questions—about fees, fraud risk, chargebacks, and whether a credit-card purchase is even the right tool for the job.

Credit cards are designed for consumer protection and reversible payments. Crypto transactions are designed to be final. When you combine the two, you get speed and accessibility, but also higher costs and more scrutiny. If you are considering buying Bitcoin with a card, it is worth understanding how the process works and what to watch out for before you treat it as “just another checkout.”

Why credit cards are attractive for first-time buyers

The appeal is straightforward. Credit cards are familiar, widely available, and fast. For someone who wants to buy a small amount of Bitcoin without learning bank transfers or exchange interfaces, card payments feel like the shortest path from interest to ownership.

Because of that demand, a growing number of services offer simplified “card to crypto” flows. One example of an instant exchange-style platform in this category is https://stealthex.io/, which presents crypto conversions and purchases through a streamlined, user-facing interface.

For services, credit cards also expand the funnel. A user who might not complete a wire transfer may complete a card purchase in minutes. That is why “buy with card” options have become a common entry point across the industry.

But convenience has a price—often literally.

Fees and hidden costs: what you are really paying

Card purchases tend to be more expensive than bank transfers. There are several reasons.

Card processors charge fees, and crypto purchases are often treated as higher-risk transactions. Platforms also price in fraud risk and chargeback exposure. Finally, your bank may treat the purchase as a cash advance, adding extra charges and interest.

This is why two users can pay very different effective rates for the same amount of Bitcoin. One sees a clean checkout. The other sees a “cash advance fee” on their statement and wonders what happened.

Before you buy, check the total cost, not just the headline price. If a platform does not clearly show fees, compare the final amount of BTC you receive for a set dollar amount rather than relying on marketing claims.

The identity and compliance reality

Credit card purchases usually trigger stronger identity checks than other methods. That does not necessarily mean something is wrong—it reflects how payment networks handle fraud and regulatory obligations. Expect to verify identity, and expect occasional declines depending on bank policies.

If anonymity is your goal, a credit card is rarely the right path. It is one of the most traceable payment instruments you can use.

Fraud and security risks: where people get hurt

The most common losses in “buy Bitcoin with a card” scenarios are not technical hacks. They are scams and user errors.

Phishing is the biggest threat. Fake ads, lookalike domains, and impersonated support accounts push users to “buy now” pages that collect card details or redirect deposits. The safest habit is boring: type the domain yourself, or use a bookmark you trust. Avoid clicking ads for financial products.

Another risk is account takeover. If you create an account on a platform and do not secure it properly, attackers can attempt credential stuffing or SIM-swap style takeovers. Always use strong passwords and app-based two-factor authentication, not SMS where possible.

This is also where custody matters. If the Bitcoin you buy remains on a platform, you are exposed to third-party risk. For meaningful amounts, moving funds to a secure wallet is a common best practice.

When a credit card is the wrong tool

Credit cards can make sense for small, controlled purchases—especially when the buyer values speed over cost. They are less suitable for large purchases, for anyone sensitive to fees, or for buyers who may carry card balances and pay interest.

They also deserve caution if you are buying emotionally. The ability to buy quickly can tempt people to spend more than planned. A simple discipline helps: decide your budget before you start, and do not increase it mid-checkout because the market is moving.

Closing perspective

Buying Bitcoin with a credit card is best understood as a convenience product. It lowers friction for entry, but it comes with higher costs, higher scrutiny, and a stronger need for security hygiene. If you go in with realistic expectations—verify domains, understand fees, secure accounts, and treat the purchase as an investment decision rather than an impulse—it can be a practical on-ramp. If you treat it like ordinary online shopping, it is easier to overpay, overshare, or make a mistake you cannot reverse.

Photo: freepik via their website.


CLICK HERE TO DONATE IN SUPPORT OF DCREPORT’S NONPROFIT NEWSROOM

The post Credit Cards and Bitcoin: What Buyers Should Know Before They Click “Purchase” appeared first on DCReport.org.

SQLAlchemy 2 In Practice - Chapter 4 - Many-To-Many Relationships

This is the fourth chapter of my SQLAlchemy 2 in Practice book. If you'd like to support my work, I encourage you to buy this book, either directly from my store or on Amazon. Thank you!

Continuing with the topic of relationships, this chapter is dedicated to the many-to-many type, which as its name implies, is used when it is not possible to identify any of the sides as a "one" side.

Thursday 9 April 1663

Up betimes and to my office, and anon we met upon finishing the Treasurer’s accounts. At noon dined at home and am vexed to hear my wife tell me how our maid Mary do endeavour to corrupt our cook maid, which did please me very well, but I am resolved to rid the house of her as soon as I can.

To the office and sat all the afternoon till 9 at night, and an hour after home to supper and bed. My father lying at Tom’s to-night, he dining with my uncle Fenner and his sons and a great many more of the gang at his own cost to-day.

To bed vexed also to think of Sir J. Minnes finding fault with Mr. Hater for what he had done the other day, though there be no hurt in the thing at all but only the old fool’s jealousy, made worse by Sir W. Batten.

Read the annotations

Meta's new model is Muse Spark, and meta.ai chat has some interesting tools

Meta announced Muse Spark today, their first model release since Llama 4 almost exactly a year ago. It's hosted, not open weights, and the API is currently "a private API preview to select users", but you can try it out today on meta.ai (Facebook or Instagram login required).

Meta's self-reported benchmarks show it competitive with Opus 4.6, Gemini 3.1 Pro, and GPT 5.4 on selected benchmarks, though notably behind on Terminal-Bench 2.0. Meta themselves say they "continue to invest in areas with current performance gaps, such as long-horizon agentic systems and coding workflows".

The model is exposed as two different modes on meta.ai - "Instant" and "Thinking". Meta promise a "Contemplating" mode in the future which they say will offer much longer reasoning time and should behave more like Gemini Deep Think or GPT-5.4 Pro.

A couple of pelicans

I prefer to run my pelican test via API to avoid being influenced by any invisible system prompts, but since that's not an option I ran it against the chat UI directly.

Here's the pelican I got for "Instant":

This is a pretty basic pelican. The bicycle is mangled, the pelican itself has a rectangular beak albeit with a hint of pouch curve below it. Not a very good one.

And this one for "Thinking":

Much better. Clearly a pelican. Bicycle is the correct shape. Pelican is wearing a blue cycling helmet (albeit badly rendered). Not a bad job at all.

Both SVGs were rendered inline by the Meta AI interface. Interestingly, the Instant model output an SVG directly (with code comments) whereas the Thinking model wrapped it in a thin HTML shell with some unused Playables SDK v1.0.0 JavaScript libraries.

Which got me curious...

Poking around with tools

Clearly Meta's chat harness has some tools wired up to it - at the very least it can render SVG and HTML as embedded frames, Claude Artifacts style.

But what else can it do?

I asked it:

what tools do you have access to?

And then:

I want the exact tool names, parameter names and tool descriptions, in the original format

It spat out detailed descriptions of 16 different tools. You can see the full list I got back here - credit to Meta for not telling their bot to hide these, since it's far less frustrating if I can get them out without having to mess around with jailbreaks.

Here are highlights derived from that response:

  • Browse and search. browser.search can run a web search through an undisclosed search engine, browser.open can load the full page from one of those search results and browser.find can run pattern matches against the returned page content.

  • Meta content search. meta_1p.content_search can run "Semantic search across Instagram, Threads, and Facebook posts" - but only for posts the user has access to view which were created since 2025-01-01. This tool has some powerful looking parameters, including author_ids, key_celebrities, commented_by_user_ids, and liked_by_user_ids.

  • "Catalog search" - meta_1p.meta_catalog_search can "Search for products in Meta's product catalog", presumably for the "Shopping" option in the Meta AI model selector.

  • Image generation. media.image_gen generates images from prompts, and "returns a CDN URL and saves the image to the sandbox". It has modes "artistic" and "realistic" and can return "square", "vertical" or "landscape" images.

  • container.python_execution - yes! It's Code Interpreter, my favourite feature of both ChatGPT and Claude.

    Execute Python code in a remote sandbox environment. Python 3.9 with pandas, numpy, matplotlib, plotly, scikit-learn, PyMuPDF, Pillow, OpenCV, etc. Files persist at /mnt/data/.

    Python 3.9 is EOL these days but the library collection looks useful.

    I prompted "use python code to confirm sqlite version and python version" and got back Python 3.9.25 and SQLite 3.34.1 (from January 2021).

  • container.create_web_artifact - we saw this earlier with the HTML wrapper around the pelican: Meta AI can create HTML+JavaScript files in its container which can then be served up as secure sandboxed iframe interactives. "Set kind to html for websites/apps or svg for vector graphics."

  • container.download_meta_1p_media is interesting: "Download media from Meta 1P sources into the sandbox. Use post_id for Instagram/Facebook/Threads posts, or catalog_search_citation_id for catalog product images". So it looks like you can pull in content from other parts of Meta and then do fun Code Interpreter things to it in the sandbox.

  • container.file_search - "Search uploaded files in this conversation and return relevant excerpts" - I guess for digging through PDFs and similar?

  • Tools for editing files in the container - container.view, container.insert (with new_str and insert_line), container.str_replace. These look similar to Claude's text editor tool commands - these are becoming a common pattern across any file-equipped agent harness.

  • container.visual_grounding - see below, this one is fun.

  • subagents.spawn_agent - the sub-agent as a tool pattern. "Spawn an independent sub-agent for research, analysis, or delegation. It returns its final text response."

  • third_party.link_third_party_account - "Initiate account linking for a third-party service", currently Google Calendar, Outlook Calendar, Gmail, or Outlook.

Image analysis in the container

Let's talk about that container.visual_grounding one. Here's the description in full:

Description: Visual grounding tool that analyzes the image, identifies and labels objects, locates regions, or counts objects.

Parameters:

  • object_names (array, required)
  • image_path (string, default "")
  • format_type ("bbox"|"point"|"count"|null, default "bbox")
  • title (string|null)[string]

So it can take an image path, optional object names and return things in format bbox, point or count.

This sounded to me like Meta's Segment Anything, (I later heard that it's not, this is a native feature of the model itself via a tool call with a custom system prompt). I decided to take it for a spin.

Prompt:

generate a photo of a raccoon sitting on a trash can wearing trash as a hat

Response:

Here you go — king of the alley, in full regalia:

It's a photo of a raccoon wearing a disposable coffee cup and some newspaper and banana peel as a hat. It's pretty good

Coffee cup crown, banana peel brim, newspaper feather. Peak raccoon fashion. Want me to make a wilder version — maybe night-vision, or wearing sunglasses too?

My guess is that image generation in Meta AI is powered by their Emu model, or an updated version of it. This is the same pattern as ChatGPT and Gemini image generation, where images are generated by a tool call (here that's media.image_gen).

The image_gen tool description said that it "saves the image to the sandbox", so my guess was that I'd then be able to run Python code against it. I tried:

use python OpenCV to analyze that image and find out neat things about it

Annoyingly it did not show me the code it ran, but it did create this image:

OpenCV image analysis dashboard titled "OpenCV Analysis: Raccoon Trash Hat" with four panels: top-left shows the original 1280x1280 photo of a raccoon wearing a trash-lid hat with a coffee cup and newspaper on top in an alley; top-right shows a Canny edge detection version with white edges on black background labeled "Canny Edges - 4.6% density"; bottom-left shows "Top 5 Dominant Colors" as swatches with values RGB(20, 19, 24): 35.3%, RGB(43, 43, 47): 34.2%, RGB(84, 80, 79): 18.3%, RGB(139, 135, 131): 6.3%, RGB(210, 213, 208): 5.8%; bottom-right shows "Color Histograms" line chart with Red, Green, and Blue channels plotted from 0-250 on x-axis and 0-35000 on y-axis, peaking around value 25-50.

Pretty neat! We can both generate images and then use them with Python tools.

Next, the visual_grounding tool. Knowing the names of the tools makes it easy to refer to them in prompts:

Now try the container.visual_grounding tool

The response (full response here):

Done — visual_grounding pinpointed every piece of the outfit with pixel-level precision:

Visual grounding analysis of a raccoon photo titled "Visual Grounding: Found 8 objects" with colored dot annotations on the image marking each detected object: coffee cup (515,205) in yellow, banana peel (569,261) in light yellow, newspaper (399,279) in blue, right eye (461,396) in magenta, left eye (582,397) in pink, nose (514,472) in cyan, raccoon (553,548) in green, and trash can lid (550,892) in red. A legend at the bottom lists all objects with their normalized 0-1000 coordinates. Footer text reads "Coordinates are 0-1000 normalized. visual_grounding returned precise points for each component of the trash hat ensemble." Watermark reads "Meta AI".

[...]

Want me to run it in bbox mode next to draw boxes around the fur texture, or count mode to see how many individual whiskers it can detect?

Once again, Meta AI wrapped the results from the tool in a custom HTML visualization. That's clearly point mode, but how about the others?

I prompted:

yes draw bounding boxes, can it do masks too?

Response:

Yes — bounding boxes work great. Masks? Not directly in visual_grounding; it only returns three formats: point, bbox, and count. No pixel-level segmentation masks.

Here's the bbox output:

Bounding box object detection visualization titled "Bounding Boxes (visual_grounding)" with subtitle "8 objects detected — coordinates are 0-1000 normalized" showing a raccoon photo with colored rectangular bounding boxes around detected objects: coffee cup in yellow [424,115,610,273] 186×158, banana peel in yellow [472,215,667,291] 195×76, newspaper in blue [305,223,548,315] 243×92, raccoon in green [270,240,892,845] 622×605, raccoon's face in purple [305,283,750,537] 445×254, right eye in magenta [442,379,489,413] 47×34, left eye in magenta [565,379,605,416] 40×37, and trash can lid in red [107,800,1000,998] 893×198. A legend at the bottom shows each object's name, coordinates, and pixel dimensions in colored cards. Watermark reads "Meta AI".

What you’re seeing:

  • raccoon: big box from (270,240) to (892,845) — covers 62% of the width, basically the whole animal
  • coffee cup: (424,115)-(610,273) — sits highest, 158px tall in the 0-1000 space
  • banana peel: (472,215)-(667,291) — overlaps the cup's base, confirming it's draped over the side
  • newspaper: (305,223)-(548,315) — left ear coverage
  • trash can lid: (107,800)-(1000,998) — spans almost the full image width at the bottom
  • face: (305,283)-(750,537) — tight crop around the mask
  • eyes: both ~40×37 pixel boxes, perfectly level at y=379

Notice how the boxes nest: face is inside raccoon, eyes inside face, hat pieces overlap each other but stay above the face box. The tool is doing real object localization, not just guessing centers.


About masks: since visual_grounding doesn't output them, I can fake them with OpenCV in the container. Two quick options:

  1. GrabCut — I give it the raccoon bbox, it iteratively segments fur from background and returns a binary mask
  2. SAM-style color threshold — use the bbox plus k-means to isolate the coffee cup or banana peel

Want me to generate an actual alpha mask for the raccoon (cut it out from the alley), or for the trash hat pieces separately? Takes about 10 seconds and I can give you a transparent PNG.

Here's the HTML for that bbox visualization.

Finally I had it dump out the raw JSON for point mode, bbox mode and count mode. You can see that all here. The count mode is particularly interesting - here's that in full:

[
  {
    "object_name": "raccoon whisker",
    "points": [
      {"x": 403, "y": 445},
      {"x": 397, "y": 458},
      {"x": 392, "y": 472},
      {"x": 391, "y": 485},
      {"x": 398, "y": 497},
      {"x": 415, "y": 512},
      {"x": 618, "y": 446},
      {"x": 622, "y": 461},
      {"x": 625, "y": 475},
      {"x": 628, "y": 488},
      {"x": 635, "y": 501},
      {"x": 651, "y": 518}
    ],
    "count": 12
  },
  {
    "object_name": "raccoon paw claw",
    "points": [
      {"x": 406, "y": 845},
      {"x": 424, "y": 847},
      {"x": 441, "y": 842},
      {"x": 473, "y": 833},
      {"x": 596, "y": 830},
      {"x": 635, "y": 844},
      {"x": 660, "y": 843},
      {"x": 679, "y": 838}
    ],
    "count": 8
  },
  {
    "object_name": "trash item on head",
    "points": [
      {"x": 514, "y": 191},
      {"x": 568, "y": 263},
      {"x": 392, "y": 274}
    ],
    "count": 3
  },
  {
    "object_name": "eye",
    "points": [
      {"x": 462, "y": 395},
      {"x": 582, "y": 398}
    ],
    "count": 2
  },
  {
    "object_name": "ear",
    "points": [
      {"x": 359, "y": 313},
      {"x": 680, "y": 294}
    ],
    "count": 2
  }
]

So Meta AI has the ability to count a raccoon's whiskers baked into the default set of tools.

Which means... it can count pelicans too!

Screenshot of a Meta AI conversation where the user uploaded a photo of pelicans on rocks and asked "Count the pelicans". response reads: "That rocky haul-out is packed — I count 25 brown pelicans in your photo. I went bird by bird, including the ones tucked behind rocks, heads turned away, or just peeking out at the edges. They're all California brown pelicans, with those pale yellow heads and long bills, clustered on the riprap right at the waterline. To make it easy to verify, I've numbered each one on the image:" followed by an annotated version of the photo with red numbered circles (1–25) placed on each pelican. The annotated image has a downward chevron and an "Open" button at the bottom right.

Here's that overlay exported as HTML.

Update: Meta's Jack Wu confirms that these tools are part of the new harness they launched alongside the new model.

Maybe open weights in the future?

On Twitter Alexandr Wang said:

this is step one. bigger models are already in development with infrastructure scaling to match. private api preview open to select partners today, with plans to open-source future versions.

I really hope they do go back to open-sourcing their models. Llama 3.1/3.2/3.3 were excellent laptop-scale model families, and the introductory blog post for Muse Spark had this to say about efficiency:

[...] we can reach the same capabilities with over an order of magnitude less compute than our previous model, Llama 4 Maverick. This improvement also makes Muse Spark significantly more efficient than the leading base models available for comparison.

So are Meta back in the frontier model game? Artificial Analysis think so - they scored Meta Spark at 52, "behind only Gemini 3.1 Pro, GPT-5.4, and Claude Opus 4.6". Last year's Llama 4 Maverick and Scout scored 18 and 13 respectively.

I'm waiting for API access - while the tool collection on meta.ai is quite strong the real test of a model like this is still what we can build on top of it.

Tags: facebook, ai, generative-ai, llms, code-interpreter, llm-tool-use, meta, pelican-riding-a-bicycle, llm-reasoning, llm-release

The odd disappearance of the business cycle

I’m afraid that we’ve been sold a bill of goods. Pundits predicted that the 2022 Ukraine war supply shock would lead to a recession. Then we were told that the Fed’s policy of raising interest rates in 2023 would lead to recession. Then we were told that Trump’s April 2025 Liberation Day tariffs would lead to recession. The truth is that pundits have never, ever, ever been able to reliably predict recessions.

Please, just stop trying.

In the late 1960s, I began following the news on the economy. The first recession that I can recall occurred in 1970, when I was 14 years old. When I was writing my dissertation back in 1983, I could recall four different recessions over the previous 13 years. And that was not particularly unusual, as there were also four recessions during 1948-1960 and five recessions during 1918-30. Indeed during the first 83 years of the 20th century there were 19 recessions, roughly one every 4 1/2 years.

Since 1983, there have been just four recessions, or roughly one per decade. The term “business cycle” was always a bit misleading, as downturns did not follow a regular pattern. But at least one could view recessions as something that occurred fairly frequently. The economy seemed sort of cyclical.

Today, the term business cycle makes about as much sense as saying, “war cycle” or “pandemic cycle”. Recessions are viewed as an anomaly, not a regular feature of the economy. In 2026, an economics grad student might have a clear memory of only one recession, as the economy has been officially in the “contraction” phase of the business cycle for only 2 out of the previous 200 months—February to April 2020.

Of course that’s slightly misleading, as the economy was quite depressed in the early years of the recovery from the Great Recession, even as it was technically expanding. But there’s more. The unemployment rate has been above 4.5% for only 18 months out of the past 9 years.

Younger readers might wonder why I view that as unusual, but in the 47 years before 2017, the unemployment rate was above 4.5% well over 90% of the time.

We recently experienced our first ever soft landing with gradually moderating (but still excessive) inflation and no one seemed to notice. If you told economists back in the 20th century that we’d have a nine-year period with unemployment at or below 4.5% apart from a brief pandemic period, and that inflation would end that long period at just 2.8%, they’d ask: How does it feel to be living in nirvana? If you then told them that public sentiment on “the economy” was at near record lows (even before Iran), they’d ask if the entire country had become insane:

The University of Michigan’s long-running index of consumer sentiment registered a reading of “54” in the first snapshot of the year. Preliminary January results were released Friday, with the month’s final results coming in a couple of weeks.

To put the current 54 reading in perspective, that’s about 30 points below the survey’s more than 70-year average.

It’s also near the all-time low of 50-flat, which was hit in June 2022 at the peak of pandemic-era inflation and nearly hit again this last November.

And you cannot explain the paradox by pointing to inequality, as in recent years the fastest real wage growth has occurred among the lowest paid. This figure is from the (progressive) Economic Policy Institute.

With apologies to Charles Dickens, it was the best of times, yet it was perceived as the worst of times.

To be clear, I am not suggesting that consumer sentiment is “wrong”. It is what it is. If people feel bad about the economy, then they feel bad about the economy. I’d rather try to understand their perspective, rather than blame the public for being ignorant.

I’m more upset with the economics profession, which in my view has a flawed view of macroeconomics. There is way too much focus on business cycle forecasting, which has never been reliable. Way too much focus on real shocks and fiscal shocks, both of which are wildly overrated in importance. Way too little focus on America’s strange lack of mini-recessions and soft landings. We need to figure out what’s going on.

Long time readers know my views on these issues but given the repeated failure of recent recession predictions from mainstream pundits, perhaps it’s time for another look at the problem—from a market monetarist perspective.

Read more

Job Growth on ICE

Appomattox Court House Battle Facts and Summary | American Battlefield Trust

Happy Union Victory at Appomattox Day

Armageddon may (or may not) be on hold for a little while, so let’s talk about other issues, like the state of the economy — in particular, what’s going on with job growth, where important things are happening.

Are they good things or bad? Well, it’s a bad news/good news/but the good news is really bad news situation.

I’m not talking about the month-to-month numbers. You may have heard that the Bureau of Labor Statistics estimates that the U.S. economy added 178,000 jobs in March, after losing 133,000 jobs in February. Neither number tells you much except that monthly job numbers are noisy. Better to smooth them out, say by looking at the average change over the past 6 months. Here’s what that number, in thousands, looks like since the beginning of 2024:

Clearly, we’ve seen a huge employment slowdown, indeed an employment stall, with recent rates of job growth close to zero. Trump administration officials keep claiming that the economy is booming, but in terms of job creation it’s anything but.

That’s the bad news. The good news is that the employment stall has not led to a major worsening of the unemployment rate or other standard measures of the health of the labor market. Here, for example, is the percentage of prime-working-age adults with jobs, which remains high by historical standards:

A graph showing the growth of a company

AI-generated content may be incorrect.

But how can we have stalling job growth without rising unemployment? Clearly, the “breakeven” rate of job creation — the number of new jobs that must be added each month to keep up with growth in the labor force, and hence to avoid rising unemployment — has suddenly dropped. Multiple recent estimates suggest that breakeven employment growth is now close to zero.

There’s no mystery about why this has happened. It’s all about the crackdown on immigrants. The number of working-age native-born Americans has been falling for years, basically because we’re getting older, with most baby boomers already over 65. Labor force growth has therefore depended almost entirely on immigration — but in this ICE age, it’s dangerous to be an undocumented immigrant and, all too often, even to be a legal immigrant if your skin is the wrong color. So immigration, both legal and illegal, has dried up, and net immigration — the difference between the number of people coming and the number going — has plunged and may well have turned negative.

One way to think about what is happening is that we’ve stopped gaining jobs, but we’ve also stopped adding workers, so unemployment hasn’t risen a lot. But one can also turn this around and say that as a result of anti-immigrant policies we’ve stopped adding workers, but we have also stopped gaining jobs.

This is bad news for anyone who believed the predictions of immigration opponents. They claimed that cracking down on immigration would open up more jobs for native-born Americans, but this hasn’t happened. In fact, the unemployment rate for native-born workers has gone up under Trump, although not drastically:

A graph of a line with blue line

AI-generated content may be incorrect.

Source

There’s also another way in which the plunge in breakeven employment growth is bad news: It makes America’s already problematic fiscal outlook considerably worse, because future tax receipts depend on future economic growth — and economic growth will be much slower with zero growth in the labor force than it would have been with growing labor supply.

Consider the budget proposal the Trump administration released last week. This proposal purports to be fiscally responsible, with debt as a percentage of GDP falling slightly over the next decade. But this relatively rosy fiscal outlook depends crucially on the assumption that the economy will grow rapidly, 3 percent a year on average. This is much higher than the 2 percent growth projected by the Federal Reserve and 1.8 percent growth projected by the Congressional Budget Office. But you can see why Trump’s officials want to believe it: Rapid growth would help pay for large increases in military spending and growing outlays on Medicare and Social Security as the population ages.

Yet with no growth in the labor force, thanks to anti-immigrant policies, economic growth will have to come entirely through rising productivity — increased output per worker-hour. And 3 percent productivity growth would be very high by historical standards — in fact, we have never achieved sustained productivity growth that high, even during the postwar boom and at the height of the internet boom:

A graph of a number of people

AI-generated content may be incorrect.

It’s possible that we’ll have a record-breaking productivity boom as a result of AI, but we shouldn’t count on it. And if AI’s economic payoff is less than miraculous, the end of net immigration will lead to a slow-growth economy.

One point in particular that’s relevant given where we are politically: Donald Trump, Pete Hegseth and others envision a world of perpetual U.S. military dominance. But how can we maintain that dominance if, as the plunge in breakeven job growth suggests, we’ve entered an era of basically zero growth in our work force?

So when it comes to jobs, the bad news is that job growth has come to a screeching halt. The good news is that this hasn’t caused surging unemployment. But the bad news within the good news is that the disconnect between job growth and unemployment reflects a collapse in the inflow of immigrants, which is really bad for economic growth and America’s position in the world.

Are we great again yet?

MUSICAL CODA

GitHub Repo Size

Tool: GitHub Repo Size

GitHub doesn't tell you the repo size in the UI, but it's available in the CORS-friendly API. Paste a repo into this tool to see the size, for example for simonw/datasette (8.1MB).

Tags: cors, github

asgi-gzip 0.3

Release: asgi-gzip 0.3

I ran into trouble deploying a new feature using SSE to a production Datasette instance, and it turned out that instance was using datasette-gzip which uses asgi-gzip which was incorrectly compressing event/text-stream responses.

asgi-gzip was extracted from Starlette, and has a GitHub Actions scheduled workflow to check Starlette for updates that need to be ported to the library... but that action had stopped running and hence had missed Starlette's own fix for this issue.

I ran the workflow and integrated the new fix, and now datasette-gzip and asgi-gzip both correctly handle text/event-stream in SSE responses.

Tags: gzip, asgi, python

Adobe Diddles With Your /etc/hosts File

“thenickdude”, on Reddit:

They’re using this to detect if you have Creative Cloud already installed when you visit on their website.

When you visit https://www.adobe.com/home, they load this image using JavaScript:

https://detect-ccd.creativecloud.adobe.com/cc.png

If the DNS entry in your hosts file is present, your browser will therefore connect to their server, so they know you have Creative Cloud installed, otherwise the load fails, which they detect.

They used to just hit http://localhost:\/cc.png which connected to your Creative Cloud app directly, but then Chrome started blocking Local Network Access, so they had to do this hosts file hack instead.

(Via Thom Holwerda at OSNews.)

They didn’t have to do this, of course. In fact, quite obviously, they definitely should not be doing this. Adobe is just a third-party developer, no better, no more trusted, no more important than any other. Imagine if every piece of software on your computer added entries to your /etc/hosts file. Madness. Adobe should be ashamed of themselves. Adobe used to be a bastion of best practices for developers to follow. Now their installer/updater is indistinguishable from malware.

See also: Marc Edwards on Mastodon, and Michael Tsai.

 ★ 

Lickspittle of the Week: Todd Blanche

Acting Attorney General Todd Blanche, speaking of the president of the United States in a totally normal way:

I love working for President Trump. It’s the greatest honor of a lifetime. And if President Trump chooses to nominate somebody else and asks me to go do something else, I’ll say, “Thank you very much, I love you, sir.”

The phrase Blanche was looking for is “Thank you sir, may I have another.”

 ★ 

Hawaii and Guam Concerns; Record Warmth Spreads Eastward; Heavy Snow for Sierra-Nevada; Severe Weather for Plains

Links 4/9/26

Links for you. Science:

NIH Fellowship awards for FY2025 by ethnicity
The System That Decides What Science Gets Published Is Breaking Down (paper here)
Prominent Climate Scientist Resigns From NASA, Citing Trump’s Attack on Science
Can you clone a clone forever? Landmark 20-year study reveals horrifying verdict
New Tongue-Swab TB Test Could Help Eradicate The Disease, WHO Says
CDC vaccine adviser Malone steps down to avoid ‘drama’: Physician and biochemist has questioned the severity of the COVID-19 pandemic

Other:

How the spreadsheet reshaped America
We Live In A Society (Really)
DOGE Damage Drags on in DC, Where Inequality is Widening as a Result
Mayor Muriel Bowser Signs On For Half Transparency Into Actions of Trump’s Cops in D.C.
Former NYPD Chief Admits Giving ‘Free Pass’ to City Workers, Right Wing Allies
Canadian woman held with daughter by ICE warns all immigrants to ‘lie low’
What If Iran Doesn’t Want the War to End Yet?
ICE Lied About Its Authority to Make Courthouse Arrests
Candidate Mamdani Backed Expanding Housing Vouchers. As Mayor, He’s Appealing a Court Order To Do So.
Dupont Circle’s Long-Vacant Pakistani Embassy Is Going Residential (finally…)
Musk Rips Twitter Verdict, Claims Jury’s $4.20 ‘Joke’ Mocked Him
Your data is everywhere. The government is buying it without a warrant
Mike Johnson Announces, ‘We Have Created a New Award’ To Give Trump: ‘This Beautiful Golden Statue’
DHS ‘100 Million Deportations’ Claim Was No Joke. Bovino Had ‘Master Plan’ to Purge Nearly a Third of the Country
Choctaw Nation Buys Former Big Lots Warehouse, Closing Off Oklahoma ICE Detention Site
Fakery Is the Key to the Right’s Cultural and Political Dominance
She Made $1 Million in 3 Hours on OnlyFans. Then, the Headaches.
Cringe Is Good, Earnestness Is Strength
The IOC’s New Policy Isn’t Really a Trans Story
The In Living Color Effect: How a radical sketch-comedy show helped usher in the cultural boom of Black comedy in the 1990s
How Epstein Helped Solve a Billionaire’s Problems With Women
Wikipedia Bans AI-Generated Content
How Trump’s Plot to Grab Iran’s Nuclear Fuel Would Actually Work
‘I will not comply’: Tennessee librarian refuses to move LGBTQ+ books
Trump is the biggest threat to D.C.’s architectural splendor since War of 1812. The president’s garish and piecemeal changes to the capital city’s urban design are poised to destroy symbolism upheld for centuries.
Meet the man pledging to donate $16 million to help Minneapolis residents pay rent post-ICE surge
Alex Jones’ Infowars is shutting down, but his disinformation legacy lives on
ICE officers are taking DNA samples from protesters they’ve arrested
Maine Could Determine Abortion Rights for the Nation. Why Aren’t Reproductive Rights Groups Acting Accordingly?
MAGA Faithful Furious at Plans for Mega-Jail in Red State

Trump’s Massive Strategic Failure in Iran: Will It Be Seen as One?


Trump’s negotiating strategy

While it’s hard to say what is definitively happening regarding Iran, as much of it relies on the social media feed of the narcissistic liar and adjudicated rapist known as Donald Trump, it really does appear that Trump lost here–and hugely. At best, the Iranian regime will still be in place, it will be collecting tolls for shipping traffic (and Trump has claimed there might be some kind of joint tolling with the U.S., but who the hell knows with that pathological liar), and we might have some kind of nuclear agreement,which might not be as favorable as previously negotiated (never mind the deal Obama made and Trump tore up in his first term).

Importantly, our Gulf allies, such as they are, now realize the U.S. cannot effectively deter the Iranian regime.

This must be construed as a massive strategic failure, one that was entirely of Trump’s doing. Yet my admittedly myopic (as we all are) perusal of The Mainstream Discourse suggests this is not being cast as such, especially by Democrats. A competent Democratic Party that is not Shor-pilled (but I repeat myself) would be going on the offensive about Trump’s failure. Hopefully, they’ll start to do that.

Spec Driven Development isn't Waterfall

Spec Driven Development isn’t Waterfall

Write down what you mean.

After spending a few months writing (e.g. on the Kiro Blog), and speaking (e.g. Real Python Podcast, SE Radio) about spec-driven development, I’ve noticed a common misconception: spec driven development is a return to a waterfall style of software development.

Specification driven development (in Kiro, for example) isn’t about pulling designs up-front, it’s about pulling designs up. Making specifications explicit, versioned, living artifacts that the implementation of the software flows from, rather than static artifacts.

This distinction is important, because software development (like all complex product development and engineering tasks) is a fundamentally iterative process. It is extremely rare for a software project to know all of the requirements up-front. It’s much more common for one of the goals of the development process being to discover requirements, most frequently through engaging users in the cycle of feedback. This is a point that’s missed in strict waterfall software development processes, and missed in critiques (like Dijkstra’s) of natural language specification (as I have written about before). The Agile movement is often presented as a high-minded set of ideas, but I think it’s more accurate to see it as a reflection of a simple fact: as software became more complex, and filled more roles in society, top-down approaches to design simply no longer work.

From the Agile Manifesto:

Customer collaboration over contract negotiation Responding to change over following a plan

These are simple reflections of reality. Software specifications are complex, dynamically changing, internally conflicting, and invariably incomplete. In specification driven development, the specification is the thing being iterated on, rather than the implementation. The iteration cycle is the same as before, but potentially much quicker because of the accelerating effect of AI.

So if specifications aren’t up-front designs, what are they?

Specifications are an explicit statement of requirements and key design choices, separated from the low-level implementation. They are a raising of the level of abstraction from code to words, and increasingly to a mix of words, pictures, snippets, and even mathematics. The words can be free-form, or structured (e.g. RFC2119 or EARS). It can (rarely) pull in exact statements (e.g. in Lean or TLA+) when needed. The implementation is then derived from this specification, reflecting iterative changes in the specification, by AI alone or human developers working with AI. Increasingly, these tasks are done autonomously end-to-end by AI agents. Specs stay in sync with implementation by being upstream of implementation for most changes.

This approach has several advantages which I’ve written about in the past: keeping context on the bigger picture (a map, versus the turn-by-turn directions of vibe coding prompts), the ability to mix levels of formality and detail to meet the needs of a particular piece of software, serving as always-in-sync documentation, allowing implementation of the same code in multiple languages or with multiple frameworks, and the ability to lift what matters out of the muck of the implementation. One advantage, though, is looking to override all of these in importance: we’re seeing the largest improvements in velocity and delivery in teams and processes that can allow agents to run autonomously for long periods of time. Specifications do exactly that. By providing the agent with a clear map, we can set an agent off building without a human inside the tight loop of development and testing. The agent can also write higher quality, better designed, and better tested code by seeing the big picture. It knows what to test, and what good looks like.

Specifications aren’t up-front designs because you don’t need to, and probably shouldn’t, develop the entire specification upfront. Instead, specifications should be at the core of an iterative software development practice. Humans are still critical to this outer loop of software development, driven by refining and extending the specification. Perhaps most crucially, they own the internally conflicting nature of software requirements. Where conflicts and trade-offs exist, either technical or in product requirements, expertise and experience come into play.

We are still very early in this revolution in software development. I believe that the future is specification-driven. Software development at a level above what we currently think of as implementation, just as it’s already been raised from switches, to gates, to instructions, to lines of code. It’s going to take while to find exactly the right recipe, and I suspect we’re not going to converge on a single answer across all domains. But the big picture is clear.

In a future post I'll tackle the *determinism question*, and how the non-deterministic nature of LLM-powered spec-to-code translation isn't actually a big deal.

Interstellar Probes: Moving Beyond Bracewell

Lately we’ve been discussing interstellar probes, the kind that an extraterrestrial civilization might use to explore the galaxy. Ronald Bracewell’s analysis of such probes dates back to 1960 and was all but coterminous with the emergence of SETI. The problem with Bracewell probes is that we would expect to have one in our Solar System if they exist. Rather than using that notion to add stress to the Fermi question, I’m going to point out that there is a lot of real estate waiting to be searched.

Case in point: What might our ongoing study of the lunar surface through images from the Lunar Reconnaissance Orbiter pick up as we use AI models that have already identified human-made space debris from various missions? A closer look at this project reminds us that while the Moon is an obvious place to look for a ‘lurker’ probe, we can’t discount other locations even though earlier work on the various Lagrange points, a good place for long-term observation of our planet, came up empty (see below). Our capabilities are so much more advanced not only in terms of instrumentation but analytical tools that a continued hunt for artifacts is reasonable.

I’m getting picky here given the wide variety of possible probes, tapping the definition that Bracewell used in his original article. That’s a probe we probably would have noticed by now if it were active. In 1960, Bracewell was offering an alternative to the SETI goal of detecting an interstellar radio signal aimed at Earth. His physical probe would arrive in a planetary system to look for signs of life and technology, duplicating any radio signals it heard so as to re-transmit them to the originators, thus establishing contact. Sagan uses the notion in his novel Contact (1979), where Adolf Hitler’s opening speech from the 1936 Berlin Olympics is found embedded within the message, along with much else.

How would we respond to hearing a signal sent back to us from space? Bracewell thinks we would experiment with it to see what would happen next:

To notify the probe that we had heard it, we would repeat back to it once a:gain. It would then know that it was in touch with us. After some routine tests to guard against accident, and to test our sensitivity and band-width, it would begin its message, with further occasional interrogation to ensure that it had not set below our horizon. Should we be surprised if the beginning of its message were a television image of a constellation?

Bracewell’s notions of dispatching a physical object as opposed to sending a radio signal take advantage of the ‘information density’ available to a physical probe. This is the familiar notion that a box of DVDs in a truck moves information at a far higher rate than fiber-optic cable. But of course you have to get the truck to its destination, and in the case of interstellar flight the latency is huge – perhaps thousands of years or more. A long-lived civilization, thought Bracewell, may nonetheless see purpose in seeding nearby stars if the travel time is a small fraction of its likely civilizational life.

Swarming and Reproducing

Bracewell’s ideas jibe nicely with the Breakthrough Starshot concept of swarms of sails investigating nearby stars. We might imagine the descendants of such tiny flyby probes scattered to all interesting stellar systems within, say, 100 light years. With concepts like Bracewell’s entering the literature, it was left to Robert Freitas to run the first scientific search I am aware of for such probes (citation below). Freitas made a series of visual observations of the various LaGrange points in the early 1980s. But in the early days of SETI (and Bracewell was writing even before the Green Bank meeting in 1961 that produced the Drake Equation), other ideas about how interstellar probes might operate had begun to surface. Ancient probes sent by civilizations far more advanced than ours might still be live, waiting and reporting on our activities (Clarke’s sentinel ‘slabs’ from 2001: A Space Odyssey come to mind . Or they might be long-dead relics.

Version 1.0.0

When Michael Hart went to work on this in 1975, he amplified the probe concept and changed the game. He produced, in fact, what Jason Wright (Pennsylvania State) has dubbed “The most influential formulation of the Fermi Paradox…,” one that compresses the conundrum by homing in on the fact that we observe no intelligent beings on our planet, something Hart called Fact A. The fact that they are not observed tells us that despite the amount of time available for long-lived cultures to have colonized the galaxy, none evidently have. This is no small problem, for as Wright calculates in his new textbook on SETI, even a ‘wavefront’ of probes moving outwards from star to star at Voyager-like speeds would have been able to reach every star within 2 billion years.

Move the dial up in terms of speed to, say, 0.5 c and the numbers get considerably shortened. Imagine relativistic ships that close on lightspeed and we find exponential growth saturating the galaxy in 150,000 years, all contrasting with an Earth that is 4.5 billion years old. Hart saw nothing in the laws of physics that prohibited starflight, and he found the idea that ETI was uninterested in Earth to be unconvincing. What David Brin coined the ‘Principle of non-Exclusiveness’ boils down to the idea that alien species will not all behave the same way. All that is needed is for one civilization to decide to send out probes, and by now such probes should have reached every star.

Image: How quickly would a single civilization using self-replicating probes spread through a galaxy like this one (M 74)? Moreover, what sort of factors might govern this ‘percolation’ of intelligence through the spiral? The answers affect our view of the Fermi question, and thus our own place in the cosmos. Image credit: NASA, ESA, and the Hubble Heritage (STScI/AURA)-ESA/Hubble Collaboration.

Advances in computing led Frank Tipler to push Hart’s views even more strenuously, bringing John von Neumann’s work on self-replicating machines to bear. His insight was to ask what would happen if an extraterrestrial culture began seeding stars with self-reproducing probes, each capable of not only studying a new world but building another probe that could reach yet another star, and so on. Here the numbers become even more telling. Such probes could use local resources in each system to build their next generation, thus nullifying the resource problem. Here’s Tipler on the matter:

…if the motivation for communication is to exchange information with another intelligent species, then as Bracewell has pointed out, contact via space probe has several advantages over radio waves. One does not have to guess the frequency used by the other species, for instance. In fact, if the probe has a von Neumann machine payload, then the machine could construct an artifact in the solar system of the species to be contacted, an artifact so noticeable that it could not possibly be overlooked. If nothing else, the machine could construct a “Drink Coca-Cola” sign a thousand miles across and put it in orbit around the planet of the other species. Once the existence of the probe has been noted by the species to be contacted, information exchange can begin in a variety of ways.

As to the cost of such a vast exploration program, Tipler has this to say:

Using a von Neumann machine as a payload obviates the main objection to interstellar probes as a method of contact, namely the expense of putting a probe around each of an enormous number of stars. One need only construct a few probes, enough to make sure that at least one will succeed in making copies of itself in another solar system. Probes will then be sent to the other stars of the galaxy automatically, with no further expense to the original species.

A ‘Catastrophic’ Answer to Fermi?

Tipler suggested a timeframe of 300 million years to fill the galaxy with these devices, in an argument that drew fire from Carl Sagan and William Newman, who argued in 1983 that his approach was ‘solipsistic’ because the idea that we were alone in producing a technological civilization was anti-Copernican. And here we need to pause on a concept that has surfaced repeatedly in SETI studies not just in the western nations but also the Soviet Union. The idea of ‘mediocrity’ troubled attendees at the Soviet SETI meeting at the Byurakan Astrophysical Observatory in 1964, to be discussed again in a second meeting (with American scientists as participants) in 1971.

Do we just take the Copernican principle as a given? Sagan clearly thought so. His ‘co-author’ on Intelligent Life in the Universe, Iosif S. Shklovskii was far less sanguine on the matter:

Since we do not adequately understand the factors leading to the evolution of intelligence and technical civilizations, we cannot reliably estimate the probability that intelligence and technical civilizations will emerge.

Here I’m drawing on Mark Sheridan in his 2023 book SETI’s Scope (How The Search For Extraterrestrial Intelligence Became Disconnected From New Ideas About Extraterrestrials). Sheridan homes in on the philosophical disagreement between emerging Soviet SETI and the ideas in the Drake Equation. At Byurakan, Soviet mathematician A. V. Gladkii challenged the idea, accepted by Sagan, that mathematics could be a recognizable common ground between all intelligences across the stars. And Sheridan quotes Theodosius Dobzhansky, a Ukrainian-born geneticist later working in the U.S., who in a 1972 paper cast doubt on Sagan’s insistence that because intelligence had arisen on our planet, it must arise everywhere life exists. In his view, the principle of mediocrity was being taken several steps too far. Quoting Dobzhansky:

“Natural scientists have been loathe, for at least a century, to assume that there is anything radically unique or special about the planet Earth or about the human species. This is an understandable reaction against the traditional view that Earth, and indeed the whole universe, was created specifically for man. The reaction may have gone too far. It is possible that there is, after all, something unique about man and the planet he inhabits.”

In a fascinating 2009 paper, Milan Ćirković examines the Fermi question in the context of our basic premises about science. As amplified in his later book The Great Silence: Science and Philosophy of Fermi’s Paradox (Oxford University Press, 2018), the Serbian astronomer points to the focus the ‘where are they’ question places upon both Copernicanism and gradualism. In the former, as clearly stated by Sagan as by many other of the early SETI practitioners, the assumption is that we occupy no privileged place in the cosmos, and thus should expect other civilizations to exist, some of which would be far more advanced than ourselves. Yet we do not observe them.

Many answers can be offered to Fermi’s question, of course, but as we continue probing the cosmos, the silence takes on escalating significance. Must we envision a future in which we abandon Copernicanism and assume that we do not, in fact, occupy a relatively common niche in the cosmos, but rather a rather special one?

Or should we give up on gradualism, the idea that geophysical processes proceed in the future more or less as they did in the past? The concept is foundational to 18th Century geology and remains a commonplace in current thinking. But ‘catastrophism’ is an obvious factor in the development of life, as extreme ruptures like the K–T extinction event that ended the era of the dinosaurs make clear. Are there common factors that could affect planets throughout what is thought of as the Milky Way’s habitable zone?

The question is the focus of recent work on gamma ray bursts and implies, as Ćirković notes, a ‘reset’ of the clock. That could explain our lack of detections, as it would imply that living worlds, no matter their geological age, have had only about the same amount of time we have had to develop intelligence. The Fermi question highlights both of these key assumptions, while our lack of a solution keeps the tension tight.

The Bracewell paper is “Communications from Superior Galactic Communities,” Nature Volume 186, Issue 4726 (1960), pp. 670-671. Abstract. On the LaGrange search, see Freitas, “A search for natural or artificial objects located at the Earth-Moon libration points,” Icarus, Volume 42, Issue 3 (June, 1980) p. 442-447 (abstract). Michael Hart’s paper on galactic expansion is “Explanation for the Absence of Extraterrestrials on Earth,” Quarterly Journal of the Royal Astronomical Society, Vol. 16, p.128 (full text). Frank Tipler’s paper on self-reproducing probes is “Explanation for the Absence of Extraterrestrials on Earth,” Royal Astronomical Society, Quarterly Journal, vol. 21 (Sept. 1980), p. 267-281 (full text). Milan Ćirković’s paper on Fermi and Copernicanism is “Fermi’s Paradox – The Last Challenge for Copernicanism?” Serbian Astronomical Journal 178 (2009), 1–20. Preprint.

The Deep Space Network Acquires Artemis II Signal

2 Min Read

The Deep Space Network Acquires Artemis II Signal

A graphical representation of the Deep Space Network’s radio frequency antennas indicate signal acquisition from NASA’s Artemis II mission to the Moon on April 1, 2026, inside the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory in Southern California.
PIA26746
Credits: NASA/JPL-Caltech

Description

A graphical representation of the Deep Space Network’s radio frequency antennas indicate signal acquisition from NASA’s Artemis II mission to the Moon on April 1, 2026, inside the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory in Southern California. Two antennas at the Madrid Deep Space Communications Complex, Deep Space Station 54 and 56, can be seen communicating with Artemis II (the signals are labelled “EM2”, short for “Exploration Mission 2”; elsewhere they are labelled “ART2” for “Artemis II”).

A similar visualization can be found at DSN Now, which details all the missions that the network is communicating with 24 hours a day, seven days a week.

The Space Flight Operations Facility operates the DSN, which comprises of three complexes in Goldstone, California; Madrid, Spain; and Canberra, Australia. Each complex consists of several radio frequency antennas that communicate with dozens of spacecraft exploring the solar system in addition to the Artemis II mission. 

The DSN is managed by JPL for the agency’s Space Communications and Navigation program, which is located at NASA Headquarters within the Space Operations Mission Directorate. The DSN allows missions to track, send commands to, and receive scientific data from faraway spacecraft. JPL is managed by Caltech in Pasadena, California, for NASA.

For more information about Artemis II, visit: https://www.nasa.gov/mission/artemis-ii/

The post The Deep Space Network Acquires Artemis II Signal appeared first on NASA Science.

The Deep Space Network Acquires Artemis II Signal

2 Min Read

The Deep Space Network Acquires Artemis II Signal

The acquisition of the radio frequency signal from the Artemis II crewed mission to the Moon by NASA’s Deep Space Network (DSN) is indicated by the peak in the data signal shown on the top computer screen.
PIA26745
Credits: NASA/JPL-Caltech

Description

The acquisition of the radio frequency signal from the Artemis II crewed mission to the Moon by NASA’s Deep Space Network (DSN) is indicated by the peak in the data signal shown on the top computer screen. 

Soon after the mission’s launch on April 1, 2026, at 6:35 p.m. EDT, NASA’s Near Space Network led communications with the Orion capsule. Then, communications were handed off to the DSN, marking the first time in over 50 years that the network would be communicating with a crewed spacecraft traveling through deep space.

The Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory in Southern California (where this photo was taken) operates the DSN, which comprises three complexes in Goldstone, California; Madrid, Spain; and Canberra, Australia. Each complex consists of several radio frequency antennas that communicate with dozens of robotic spacecraft exploring the solar system in addition to the Artemis II mission.

The DSN is managed by JPL for the agency’s Space Communications and Navigation program, which is located at NASA Headquarters within the Space Operations Mission Directorate. The DSN allows missions to track, send commands to, and receive scientific data from faraway spacecraft. JPL is managed by Caltech in Pasadena, California, for NASA.

For more information about Artemis II, visit: https://www.nasa.gov/mission/artemis-ii/

The post The Deep Space Network Acquires Artemis II Signal appeared first on NASA Science.

Watching Over the Deep Space Network Before Artemis II Signal Acquisition

1 Min Read

Watching Over the Deep Space Network Before Artemis II Signal Acquisition

Blanca Renteria, Artemis Deep Space Network (DSN) operations chief, monitors data at the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory in Southern California shortly after Artemis II launched from the agency’s Kennedy Space Flight Center in Florida on April 1, 2026.
PIA26744
Credits: NASA/JPL-Caltech

Description

Blanca Renteria, Artemis Deep Space Network (DSN) operations chief, monitors data at the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory in Southern California shortly after Artemis II launched from the agency’s Kennedy Space Flight Center in Florida on April 1, 2026, at 6:35 p.m. EDT.

The Space Flight Operations Facility operates the DSN, which comprises three complexes in Goldstone, California; Madrid, Spain; and Canberra, Australia. Each complex consists of several radio frequency antennas that communicate with dozens of robotic spacecraft exploring the solar system in addition to the crewed Artemis II mission. 

The DSN is managed by JPL for the agency’s Space Communications and Navigation program, which is located at NASA Headquarters within the Space Operations Mission Directorate. The DSN allows missions to track, send commands to, and receive scientific data from faraway spacecraft. JPL is managed by Caltech in Pasadena, California, for NASA.

For more information about Artemis II, visit: https://www.nasa.gov/mission/artemis-ii/

The post Watching Over the Deep Space Network Before Artemis II Signal Acquisition appeared first on NASA Science.

Watching the Artemis II Launch From JPL’s Space Flight Operations Facility

1 Min Read

Watching the Artemis II Launch From JPL’s Space Flight Operations Facility

Staff at NASA’s Jet Propulsion Laboratory in Southern California watch the launch of the agency’s Artemis II mission to the Moon on April 1, 2026, at the Space Flight Operations Facility, which operates the Deep Space Network (DSN).
PIA26743
Credits: NASA/JPL-Caltech

Description

Staff at NASA’s Jet Propulsion Laboratory in Southern California watch the launch of the agency’s Artemis II mission to the Moon on April 1, 2026, at the Space Flight Operations Facility, which operates the Deep Space Network (DSN). Soon after launch, the Artemis II crew communicated with the Near Space Network while they were close to Earth. Later, communications were handed off to the DSN as the Orion capsule continued its journey into deep space.

The DSN comprises three complexes in Goldstone, California; Madrid, Spain; and Canberra, Australia. Each complex has several radio frequency antennas that communicate with dozens of spacecraft exploring the solar system in addition to the Artemis II mission.

The DSN is managed by JPL for the agency’s Space Communications and Navigation program, which is located at NASA Headquarters within the Space Operations Mission Directorate. The DSN allows missions to track, send commands to, and receive scientific data from faraway spacecraft. JPL is managed by Caltech in Pasadena, California, for NASA.

For more information about Artemis II, visit: https://www.nasa.gov/mission/artemis-ii/

The post Watching the Artemis II Launch From JPL’s Space Flight Operations Facility appeared first on NASA Science.

JPL’s ‘Lucky Peanuts’ Before Artemis II Launch

1 Min Read

JPL’s ‘Lucky Peanuts’ Before Artemis II Launch

A container of “lucky peanuts” sits above workstations within the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory shortly before the launch of the Artemis II mission to the Moon on April 1, 2026.
PIA26742
Credits: NASA/JPL-Caltech

Description

A container of “lucky peanuts” sits above workstations within the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory shortly before the launch of the Artemis II mission to the Moon on April 1, 2026. Eating peanuts before launches and other major mission events is a longstanding tradition at JPL.

The Space Flight Operations Facility operates the DSN, which comprises three complexes in Goldstone, California; Madrid, Spain; and Canberra, Australia. Each complex consists of several radio frequency antennas that communicate with dozens of spacecraft exploring the solar system in addition to the crewed Artemis II mission. 

The DSN is managed by JPL for the agency’s Space Communications and Navigation program, which is located at NASA Headquarters within the Space Operations Mission Directorate. The DSN allows missions to track, send commands to, and receive scientific data from faraway spacecraft. JPL is managed by Caltech in Pasadena, California, for NASA.

For more information about Artemis II, visit: https://www.nasa.gov/mission/artemis-ii/

The post JPL’s ‘Lucky Peanuts’ Before Artemis II Launch appeared first on NASA Science.

Watching the Artemis II Mission Unfold at JPL’s Space Flight Operations Facility

1 Min Read

Watching the Artemis II Mission Unfold at JPL’s Space Flight Operations Facility

Staff at NASA’s Jet Propulsion Laboratory in Southern California watch the agency’s Artemis II mission unfold soon after launch on April 1, 2026, at the Space Flight Operations Facility, which operates the Deep Space Network (DSN).
PIA26747
Credits: NASA/JPL-Caltech

Description

Staff at NASA’s Jet Propulsion Laboratory in Southern California watch the agency’s Artemis II mission unfold soon after launch on April 1, 2026, at the Space Flight Operations Facility, which operates the Deep Space Network (DSN).

The DSN comprises of three complexes in Goldstone, California; Madrid, Spain; and Canberra, Australia. Each complex has several radio frequency antennas that communicate with dozens of spacecraft exploring the solar system in addition to the crewed Artemis II mission.

The DSN is managed by JPL for the agency’s Space Communications and Navigation program, which is located at NASA Headquarters within the Space Operations Mission Directorate. The DSN allows missions to track, send commands to, and receive scientific data from faraway spacecraft. JPL is managed by Caltech in Pasadena, California, for NASA.

For more information about Artemis II, visit: https://www.nasa.gov/mission/artemis-ii/

The post Watching the Artemis II Mission Unfold at JPL’s Space Flight Operations Facility appeared first on NASA Science.

Supporting Artemis II From JPL’s Space Flight Operations Facility

2 Min Read

Supporting Artemis II From JPL’s Space Flight Operations Facility

The Artemis II mission patch appears in the center screen of the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory in Southern California on April 1, 2026, shortly before the mission launched to the Moon.
PIA26741
Credits: NASA/JPL-Caltech

Description

The Artemis II mission patch appears in the center screen of the Space Flight Operations Facility at NASA’s Jet Propulsion Laboratory in Southern California on April 1, 2026, shortly before the mission launched to the Moon. A graphical representation of the antennas of the agency’s Deep Space Network (DSN), left, indicates which antennas are currently sending and receiving data, highlighted in white. 

Shortly after liftoff at 6:35 p.m. EDT from NASA’s Kennedy Space Center in Florida, initial Artemis II communications were handled by the agency’s Near Space Network. Then, the DSN acquired signal, marking the first time in over 50 years that the network would be communicating with a crewed spacecraft traveling through deep space.

The Space Flight Operations Facility operates the DSN, which comprises three complexes in Goldstone, California; Madrid, Spain; and Canberra, Australia. Each complex consists of several radio frequency antennas that communicate with dozens of spacecraft exploring the solar system in addition to the Artemis II mission. 

The DSN is managed by JPL for the agency’s Space Communications and Navigation program, which is located at NASA Headquarters within the Space Operations Mission Directorate. The DSN allows missions to track, send commands to, and receive scientific data from faraway spacecraft. JPL is managed by Caltech in Pasadena, California, for NASA.

For more information about Artemis II, visit: https://www.nasa.gov/mission/artemis-ii/

The post Supporting Artemis II From JPL’s Space Flight Operations Facility appeared first on NASA Science.

New Emergent Ventures tranche on science policy and communication

American science policy is now more important than at perhaps any previous point in history—how science is organized and funded (or not funded) in this country continues to rise in significance.

I have also spoken about the undersupply of people who understand this and are trying to act on it in Washington. Unfortunately the career paths here are neither well-defined nor well-regarded.  I would like to help change that.

What we’re looking for:

  • Priority 1: Metascience Policy Entrepreneurs in DC
    • Funding for individuals working at the intersection of science policy and institutional reform—people who can shape how Congress and federal agencies think about science funding and governance.
  • Priority 2: Science and Metascience Communicators
    • Funding for communicators via any medium—bloggers, journalists, authors, podcasters, artists, filmmakers, conveners, influencers, event organizers—who can expand the reach of pro-science ideas beyond their current audience.

We are doing this with and thank Renaissance Philanthropy for the support.  You can apply through the regular Emergent Ventures portal.

The post New Emergent Ventures tranche on science policy and communication appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

Helium Is Hard to Replace

The war in Iran, and the subsequent closure of the Strait of Hormuz, has unfortunately made us all familiar with details of the petroleum supply chain that we could formerly happily ignore. Every day we get some new story about some good or service that depends on Middle East petroleum and the production of which has been disrupted by the war. Fertilizer production, plastics, aluminum, the list goes on.

One such supply chain that’s suddenly getting a lot of attention is helium. Helium is produced as a byproduct of natural gas extraction: it collects in the same underground pockets that natural gas collects in. Qatar is responsible for roughly 1/3rd of the world’s supply of helium, which was formerly transported through the Strait of Hormuz in specialized containers. Thanks to the closure of the strait, helium prices have spiked, suppliers are declaring force majeure, and businesses are scrambling to deal with looming shortages. (For many years the US government maintained a strategic helium reserve, but this was sold off in 2024.)

What I find interesting about helium is that in many cases, it’s very hard to substitute for. Helium has a unique set of properties — in particular, it has a lower melting point and boiling point than any other element — and technologies and processes that rely on those properties can’t easily switch to some other material.

Helium production

Helium is the second lightest element in the periodic table (after hydrogen), and the second most common element in the universe (also after hydrogen). But while helium is very common on a cosmic scale, here on earth it’s not so easy to get. Because helium is so light, it rises to the very top of the atmosphere, where it eventually escapes into space.1 So essentially all helium used by modern civilization comes from underground.

Helium is produced via the radioactive decay of elements like uranium and thorium, and it collects in underground pockets of natural gas. This source of helium was first discovered in the US in 1903, when a natural gas well in Kansas produced a geyser of gas that refused to burn. Scientists at the University of Kansas eventually determined that this was due to the presence of helium. Like petroleum, helium has collected in these pockets over the course of millions of years, and thus (like petroleum) there’s a limited supply of underground helium that can be extracted. As with petroleum, people are often worried that we’re running out of it.

Because helium is a byproduct of natural gas extraction, and because only some natural gas fields have helium in appreciable quantities, a small number of countries are responsible for the world’s supply of helium. The US and Qatar together produce around 2/3rds of the world’s helium supply. Russia, Algeria, Canada, China, and Poland produce most of the remaining balance.

Elemental helium has a few different useful properties. The most important one is that, thanks to the small size and completely filled outer electron shell of helium atoms, helium has a lower boiling point than any other element. Liquid helium boils at just 4.2 kelvin (-452 degrees Fahrenheit). By comparison, liquid hydrogen boils at 20 K, and liquid nitrogen boils at a positively balmy 77 K.

Its low boiling point makes helium very useful for getting something really, really cold. When a liquid boils, it transforms into a gas, and during this process it will pull energy from its surroundings due to evaporative cooling. This is why your body sweats: to cool you down as the liquid evaporates. When a liquid has a very low boiling point, this heat extraction happens at a very low temperature. Helium also stays a liquid at much lower temperatures than other elements. Nitrogen freezes solid at 63 K, and hydrogen freezes at 14K, but at atmospheric pressure helium stays a liquid all the way to absolute zero. If you need to cool something to just a few degrees above absolute zero, liquid helium is essentially the only practical way to do that.

Helium also has a few other useful properties. As we noted, helium is very light: it will naturally rise in the atmosphere, which makes it useful as a lifting gas. Thanks to its filled outer electron shell, it is inert, and won’t react with other materials. Helium also has high thermal conductivity — at room temperature, helium can move heat about six times better than air.

The uses of helium

The world uses around 180 million cubic meters of helium each year. (This sounds like a lot, but it’s just 0.11% of the 159 billion cubic meters of nitrogen the world uses each year, and 0.004% of the over 4 trillion cubic meters of natural gas that the world uses each year.) But while it’s not used in enormous quantities compared to some other gases, helium is nevertheless quite important. Different industries make use of helium’s properties in different ways, and while in some cases there are reasonable substitutes for helium, in most cases helium has no practical replacement.

MRI machines

Some of the biggest consumers of helium are MRI machine operators, which consume around 17% of the helium used in the US. MRI machines work by creating very strong magnetic fields, which change the orientation of hydrogen atoms in tissues in your body. A pulse of radio waves is then sent into your body, which temporarily disrupts this orientation. When the pulse stops, different types of tissue return to their alignment with the magnetic field at different rates, and that rate of change can be measured and converted into a picture of the interior of the body. The strong magnetic fields in MRI machines are created by superconducting magnets: when some materials get cold enough, they drop to zero electrical resistance, which makes it possible to put enormous amounts of electrical current through them and create extremely strong magnetic fields.2 The vast majority of MRI machines used today use superconducting magnets made from niobium-titanium (NbTi), which becomes superconducting at 9.2 degrees above absolute zero. This is well below the boiling point of any other coolant, making liquid helium the only practical option for cooling the magnets. A handful of MRI machines have been built using higher-temperature superconductors that don’t require helium cooling, but the vast majority of the 50,000 existing MRI machines in the world require helium.

The helium consumption of MRI machines has fallen drastically over time. Early MRI machines would lose helium at a rate of around 0.4 liters per hour, requiring large tanks of 1000-2000 liters that needed to be refilled every few months. (It’s notoriously difficult to prevent gaseous helium from leaking out of containers, which is why helium is also often used for leak detection.) But modern MRI machines are “zero boil-off,” which essentially never need to be recharged with helium. As these machines take up more market share, the helium requirements of MRI machines can be expected to fall. But for the foreseeable future, MRI will remain a substantial source of demand.

Semiconductors

Another major consumer of helium is the semiconductor industry, which uses around 25% of the helium worldwide, and around 10% of the helium in the US.3 As with MRI machines, helium is used to cool superconducting magnets, which are used to increase the purity of silicon ingots grown using the Czochralski method. Helium is also used as a coolant in some production processes, as well as a non-reactive gas to flush out some containers, for leak detection, and for a variety of other uses. A 2023 report from the Semiconductor Industry Association noted that helium was used “as a carrier gas, in energy and heat transfer with speed and precision, in reaction mediation, for back side and load lock cooling, in photolithography, in vacuum chambers, and for cleaning.” The same report notes that for many of these uses, helium has no substitute.

Unlike MRI machines, which have used less and less helium over time, helium usage in the semiconductor industry seems to be trending up: some sources claim that helium consumed by the semiconductor industry is expected to rise by a factor of five by 2035. This seems to be in part due to the development of DUV and EUV semiconductor lithography machines, which require helium to function. Unlike many other gases, helium absorbs almost no EUV radiation, which (as I understand it) makes it hard to substitute for helium in EUV machines.

Fiber optics

Helium is also used in the manufacturing of fiber optic cable. Optical cable is made with an inner core of glass, surrounded by an outer “sleeve” of glass with a different index of refraction. This keeps photons within the inner core via the phenomenon of total internal reflection. During the manufacturing process, helium is used as a coolant when the outer “sleeve” is being deposited onto the core — with any other atmosphere, bubbles form between the two layers of glass. Roughly 5-6% of helium worldwide is used for the production of optical fiber, and there’s no known alternative.

Purging gas

Other than semiconductor manufacturing, other industries (particularly the aerospace industry) use helium as a “purge gas” to clean out containers. Cleaning out a tank of liquid hydrogen, often used as a liquid rocket fuel, requires a gas with a boiling point low enough that it won’t freeze when it contacts the hydrogen. Cleaning a tank of liquid oxygen doesn’t require a gas with quite as low a boiling point, but it is best to use an inert gas to reduce the chance of it reacting with the highly reactive oxygen. Aerospace purging makes up around 7% of US helium consumption. Around half of that is used by NASA, which is the single biggest user of helium in the US.

Lifting gas

Because helium is lighter than air, it’s also used as a lifting gas in balloons and lighter-than-air airships as an alternative to the highly flammable hydrogen. Each Goodyear Blimp, for instance, uses around 300,000 cubic feet of helium. Around 18% of the helium consumed in the US is as a lifting gas.

Scientific research and instruments

Helium is also widely used in scientific research. Much of this is for keeping things cold: superconducting magnets, such as those used in the Large Hadron Collider, typically require helium, as do the superconducting elements in SQUIDs, which are highly sensitive magnetic field detectors. Helium is also used in mass spectrometers, which are used for, among other things, detecting microscopic leaks in containers.

This is a major category of use in the US; roughly 22% of its helium consumption goes to “analytical, engineering, lab, science, and specialty gases.”

Welding

In the US, helium is also used for welding: its high thermal conductivity and its inertness make helium an excellent shielding gas, which prevents the pool of molten metal from being contaminated before it cools. In the US, welding makes up roughly 8% of helium use, but elsewhere in the world, it’s more common to use other shielding gases like argon.

Diving

Helium is also used for breathing gas in deep sea commercial diving. At depths beyond 30 meters, breathing nitrogen (which makes up 78% of normal air) causes nitrogen narcosis, and diving beyond this depth is done using gas mixes that replace part of the nitrogen for helium. Roughly 5% of helium consumed in the US goes towards diving.

Helium for diving is difficult to substitute for. Virtually every other breathable gas except for possibly neon causes some degree of narcosis, and neon is heavier than helium, making breathing more difficult.

Conclusion

For some of these applications, it’s possible to substitute helium with other materials. There are other shielding gases, such as argon, that can be used for welding, and other lifting gases, such as hydrogen, that can be used for balloons or airships. In other applications, it’s possible to dramatically reduce the consumption of helium via recycling systems or other methods designed to reduce its use. As we’ve noted, this has occurred with MRI machines, where modern ones use far less helium than their predecessors. And it seems to have happened with aerospace purging. A 2010 report from the National Academies of Sciences notes that if NASA and the Department of Defense were sufficiently motivated, they could dramatically reduce their helium consumption by recycling it. Since then, aerospace use of helium has fallen from 18.2 million cubic meters (26% of total US consumption) to 4 million cubic meters (7% of total US consumption). But the United States Geological Survey notes that most helium in the US is still unrecycled, and there’s lots of opportunity to dramatically reduce helium usage with various recapture and recycling systems. Many of these systems are capable of reducing helium consumption by 90% or more.

But “reducing” doesn’t mean “eliminating,” and it’s interesting to me how in so many cases there doesn’t seem to be any good substitute for helium.

1

Though thanks to circulation in the air, the helium concentration below the turbopause is roughly constant, about 5 parts per million.

2

If the magnets get too warm, the sudden loss of superconductivity, called a “quench,” can damage or destroy the magnets due to the heat generated from the now-present electrical resistance.

3

I estimated this by subtracting the 5-6% of helium used globally by the fiber optic industry from the 15% of helium used by “semiconductors and fiber optics” from the United States Geological Survey report on helium.

Let’s talk about LLMs

Everybody seems to agree we’re in the middle of something, though what, exactly, seems to be up for debate. It might be an unprecedented revolution in productivity and capabilities, perhaps even the precursor to a technological “singularity” beyond which it’s impossible to guess what the world might look like. It might be just another vaporware hype cycle that will blow over. It might be a dot-com-style bubble that will lead to a big crash but still leave us with something useful (the way the dot-com bubble drove mass adoption of the web). It might be none of those things.

Many thousands of words have already been spent arguing variations of these positions. So of course today I’m going to throw a few thousand more words at it, because that’s what blogs are for. At least all the ones you’ll read here were written by me (and you can pry my em-dashes from my cold, dead hands).

Terminology, and picking a lane

But first, a couple quick notes:

I’m going to be using the terms “LLM” and “LLMs” almost exclusively in this post, because I think the precision is useful. “AI” is a vague and overloaded term, and it’s too easy to get bogged down in equivocations and debates about what exactly someone means by “AI”. And virtually everything that’s contentious right now about programming and “AI” is really traceable specifically to the advent of large language models. I suppose a slightly higher level of precision might come from saying “GPT” instead, but OpenAI keeps trying to claim that one as their own exclusive term, which is a different sort of unwelcome baggage. So “LLMs” it is.

And when I talk about “LLM coding”, I mean use of an LLM to generate code in some programming language. I use this as an umbrella term for all such usage, whether done under human supervision or not, whether used as the sole producer of code (with no human-generated code at all) or not, etc.

I’m also going to try to limit my comments here to things directly related to technology and to programming as a profession, because that’s what I know (I have a degree in philosophy, so I’m qualified to comment on some other aspects of LLMs, but I’m deliberately staying away from them in this post because I find a lot of those debates tedious and literally sophomoric, as in reminding me of things I was reading and discussing when I was a sophomore).

If you’re using an LLM in some other field, well, I probably don’t know that field well enough to usefully comment on it. Having seen some truly hot takes from people who didn’t follow this principle, I’ve thought several times that we really need some sort of cute portmanteau of “LLM” and “Gell-Mann Amnesia” for the way a lot of LLM-related discourse seems to be people expecting LLMs to take over every job and field except their own.

No silver bullet

A few years ago I wrote about Fred Brooks’ No Silver Bullet, and said I think it may have been the best thing Brooks ever wrote. If you’ve never read No Silver Bullet, I strongly recommend you do so, and I recommend you read the whole thing for yourself (rather than just a summary of it).

No Silver Bullet was published at a time when computing hardware was advancing at an incredible rate, but our ability to build software was not even close to keeping up. And so Brooks made a bold prediction about software:

There is no single development, in either technology or management technique, which by itself promises even a single order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.

To support this he looked at sources of difficulty in software development, and assigned them to two broad categories (emphasis as in the original):

Following Aristotle, I divide them into essence—the difficulties inherent in the nature of the software—and accidents—those difficulties that today attend its production but that are not inherent.

A classic example is memory management: some programming languages require the programmer to manually allocate, keep track of, and free memory, which is a source of difficulty. And this is accidental difficulty, because there’s nothing which inherently requires it; plenty of other programming languages have automatic memory management.

But other sources of difficulty are different, and seem to be inherent to software development itself. Here’s one of the ways Brooks summarizes it (emphasis matches what’s in my copy of No Silver Bullet):

The essence of a software entity is a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of functions. This essence is abstract, in that the conceptual construct is the same under many different representations. It is nonetheless highly precise and richly detailed.

I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation. We still make syntax errors, to be sure; but they are fuzz compared to the conceptual errors in most systems.

If this is true, building software will always be hard. There is inherently no silver bullet.

And to drive the point home, he also explains the diminishing returns of only addressing accidental difficulty:

How much of what software engineers now do is still devoted to the accidental, as opposed to the essential? Unless it is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not give an order of magnitude improvement.

This is a straightforward mathematical argument. If its two empirical premises—that the accidental/essential distinction is real and that the accidental difficulty remaining today does not represent 90%+ of total—are true, then the conclusion which rules out an order-of-magnitude gain from reducing accidental difficulty follows automatically.

I think most programmers believe the first premise, at least implicitly, and once the first premise is accepted it becomes very difficult to argue against the second. In fact, I’d personally go further than the minimum required for Brooks’ argument. His math holds up as long as accidental difficulty doesn’t reach that 90%+ mark, since anything lower makes a 10x improvement from eliminating accidental difficulty impossible. But I suspect accidental difficulty, today, is a vastly smaller proportion of the total than that. In a lot of mature domains of programming I’d be surprised if there’s even a doubling of productivity still available from a complete elimination of remaining accidental difficulty.

There’s also a section in No Silver Bullet about potential “hopes for the silver” which addresses “AI”, though what Brooks considered to be “AI” (and there is a tangent about clarifying exactly what the term means) was significantly different from what’s promoted today as “AI”. The most apt comparison to LLMs in No Silver Bullet is actually not the discussion of “AI”, it’s the discussion of automatic programming, which has meant a lot of different things over the years, but was defined by Brooks at the time as “the generation of a program for solving a problem from a statement of the problem specifications”. That’s pretty much the task for which LLMs are currently promoted to programmers.

But Brooks quotes David Parnas on the topic: “automatic programming always has been a euphemism for programming with a higher-level language than was presently available to the programmer.” And Brooks did not believe higher-level languages on their own could be a silver bullet. As he put it in a discussion of the Ada language:

It is, after all, just another high-level language, and the biggest payoff from such languages came from the first transition, up from the accidental complexities of the machine into the more abstract statement of step-by-step solutions. Once those accidents have been removed, the remaining ones are smaller, and the payoff from their removal will surely be less.

Many people are currently promoting LLMs as a revolutionary step forward for software development, but are doing so based almost exclusively on claims about LLMs’ ability to generate code at high speed. The No Silver Bullet argument poses a problem for these claims, since it sets a limit on how much we can gain from merely generating code more quickly.

In chapter 2 of The Mythical Man-Month, Brooks suggested as a scheduling guideline that five-sixths (83%) of time on a “software task” would be spent on things other than coding, which puts a pretty low cap on productivity gains from speeding up just the coding. And even if we assume LLMs reduce coding time to zero, and go with the more generous No Silver Bullet formulation which merely predicts no order-of-magnitude gain from a single development, that’s still less than the gain Brooks himself believed could come from hiring good human programmers. From chapter 3 of The Mythical Man-Month:

Programming managers have long recognized wide productivity variations between good programmers and poor ones. But the actual measured magnitudes have astounded all of us. In one of their studies, Sackman, Erikson, and Grant were measuring performances of a group of experienced programmers. Within just this group the ratios between best and worst performances averaged about 10:1 on productivity measurements and an amazing 5:1 on program speed and space measurements!

(although I’m personally skeptical of the “10x programmer” concept, the software industry overall does seem to accept it as true)

Anecdote time: much of what I’ve done over my career as a professional programmer is building database-backed web applications and services, and I don’t see much of a gain from LLMs. I suppose it looks impressive, if you’re not familiar with this field of programming, to auto-generate the skeleton of an entire application and the basic create/retrieve/update/delete HTTP handlers from no more than a description of the data you want to work with. But that capability predates LLMs: Rails’ scaffolding, for example, could do it twenty years ago.

And not just raw code generation, but also the abstractions available to work with, have progressed to the point where I basically never feel like the raw speed of production of code is holding me back. Just as Fred Brooks would have predicted, the majority of my time is spent elsewhere: talking to people who want new software (or who want existing software to be changed); finding out what it is they want and need; coming up with an initial specification; breaking it down into appropriately-sized pieces for programmers (maybe me, maybe someone else) to work on; testing the first prototype and getting feedback; preparing the next iteration; reviewing or asking for review, etc. I haven’t personally tracked whether it matches Brooks’ five-sixths estimate, but I wouldn’t be at all surprised if it did.

Given all that, just having an LLM churn out code faster than I would have myself is not going to offer me an order of magnitude improvement, or anything like it. Or as a recent popular blog post by the CEO of Tailscale put it:

AI’s direct impact on this problem is minimal. Okay, so Claude can code it in 3 minutes instead of 30? That’s super, Claude, great work.

Now you either get to spend 27 minutes reviewing the code yourself in a back-and-forth loop with the AI (this is actually kinda fun); or you save 27 minutes and submit unverified code to the code reviewer, who will still take 5 hours like before, but who will now be mad that you’re making them read the slop that you were too lazy to read yourself. Little of value was gained.

More simply: throwing more patches into the review queue, when the review queue still drains at the same rate as before, is not a recipe for increased velocity. Real software development involves not just a review queue but all the other steps and processes I outlined above, and more, and having an LLM generate code more quickly does not increase the speed or capacity of all those other things.

So as someone who accepts Brooks’ argument in No Silver Bullet, I am committed to believe on theoretical grounds that LLMs cannot offer “even a single order-of-magnitude improvement … in productivity, in reliability, in simplicity”. And my own experience matches up with that prediction.

Practice makes (im)perfect

But enough theory. What about the empirical actual reality of LLM coding?

Every fan of LLMs for coding has an anecdote about their revolutionary qualities, but the non-anecdotal data points we have are a lot more mixed. For example, several times now I’ve been linked to and asked to read the DORA report on the “State of AI-assisted Software Development”. And initially it certainly seems like it’s declaring the effects of LLMs are settled, in favor of the LLMs. From its executive summary (page 3):

[T]he central question for technology leaders is no longer if they should adopt AI, but how to realize its value.

And elsewhere it makes claims like (page 34) “AI is the new normal in software development”.

But then, going back to the executive summary, things start sounding less uniformly positive:

The research reveals a critical truth: AI’s primary role in software development is that of an amplifier. It magnifies the strengths of high-performing organizations and the dysfunctions of struggling ones.

And then (still on page 3):

The greatest returns on AI investment come not from the tools themselves, but from a strategic focus on the underlying organizational system: the quality of the internal platform, the clarity of workflows, and the alignment of teams. Without this foundation, AI creates localized pockets of productivity that are often lost to downstream chaos.

Continuing on to page 4:

AI adoption now improves software delivery throughput, a key shift from last year. However, it still increases delivery instability. This suggests that while teams are adapting for speed, their underlying systems have not yet evolved to safely manage AI-accelerated development.

“Delivery instability” is defined (page 13) in terms of two factors:

  • Change fail rate: “The ratio of deployments that require immediate intervention following a deployment.”
  • Rework rate: “The ratio of deployments that are unplanned but happen as a result of an incident in production.”

Later parts of the report get into more detail on this. Page 38 charts the increase in delivery instability, for example. And elsewhere in the section containing that chart, there’s a discussion of whether increases in throughput (defined by DORA as a combination of lead time for changes, deployment frequency, and failed deployment recovery time) are enough to offset or otherwise make up for this increase in instability (page 41, emphasis added by me):

Some might argue that instability is an acceptable trade-off for the gains in development throughput that AI-assisted development enables.

The reasoning is that the volume and speed of AI-assisted delivery could blunt the detrimental effects of instability, perhaps by enabling such rapid bug fixes and updates that the negative impact on the end-user is minimized.

However, when we look beyond pure software delivery metrics, this argument does not hold up. To assess this claim, we checked whether AI adoption weakens the harms of instability on our outcomes which have been hurt historically by instability.

We found no evidence of such a moderating effect. On the contrary, instability still has significant detrimental effects on crucial outcomes like product performance and burnout, which can ultimately negate any perceived gains in throughput.

And the chart on page 38 appears to show the increase in instability as quite a bit larger than the increase in throughput, in any case.

Curiously, that chart also claims a significant increase in “code quality”, and other parts of the report (page 30, for example) claim a significant increase in “productivity”, alongside the significant increase in delivery instability, which seems like it ought to be a contradiction. As far as I can tell, DORA’s source for both “productivity” and “code quality” is perceived impact as self-reported by survey respondents. Other studies and reports have designed less subjective and more quantitative ways to measure these things. For example, this much-discussed study on adoption of the Cursor LLM coding tool used the results of static analysis of the code to measure quality and complexity. And self-reported productivity impacts, in particular, ought to be a deeply suspect measure. From (to pick one relevant example) the METR early-2025 study (emphasis added by me):

This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.

LLM coding advocates have often criticized this particular study’s finding of slower development for being based on older generations of LLMs (more on that argument in a bit), but as far as I’m aware nobody’s been able to seriously rebut the finding that developers are not very effective at self-estimating their productivity. So to see DORA relying on self-estimated productivity is disappointing.

The DORA report goes on to provide a seven-part “AI capabilities model” for organizations (begins on page 49), which consists of recommendations like: strong version control practices, working in small batches, quality internal platforms, user-centric focus… all of which feel like they should be table stakes for any successful organization regardless of whether they also happen to be using LLMs.

Suppose, for sake of a silly example, that someone told you a new technology is revolutionizing surgery, but the gains are not uniformly distributed, and the best overall outcomes are seen in surgical teams where in addition to using the new thing, team members also wash their hands prior to operating. That’s not as extreme a comparison as it might sound: the sorts of practices recommended for maximizing LLM-related gains in the DORA report, and in many other similar whitepapers and reports and studies, are or ought to be as fundamental to software development as hand-washing is to surgery. The Joel Test was recommending quite a few of these practices a quarter-century ago, the Agile Manifesto implied several of them, and even back then they weren’t really new; if you dig into the literature on effective software development you can find variations of much of the DORA advice going all the way back to the 1970s and even earlier.

For a more recent data point, I’ve seen a lot of people talking about and linking me to CircleCI’s 2026 “State of Software Delivery” which, like the DORA report, claims an uneven distribution of benefits from LLM adoption, and even says (page 8) “the majority of teams saw little to no increase in overall throughput”. The CircleCI report also raises a worrying point that echoes the increase in “delivery instability” seen in the DORA report (CircleCI executive summary, page 3):

Key stability indicators show that AI-driven changes are breaking more often and taking teams longer to fix, making validation and integration the primary bottleneck.

CircleCI further reports (page 11) that, year-over-year, they see a 13% increase in recovery time for a broken main branch, and a 25% increase for broken feature branches. And (page 12) they also say failures are increasing:

[S]uccess rates on the main branch fell to their lowest level in over 5 years, to 70.8%. In other words, attempts at merging changes into production code bases now fail 30% of the time.

For comparison, their own recommended benchmark of success for main branches is 90%.

The cost of these increasing failures and the increasing time to resolve them is quantified (emphasis matches the report, page 14):

For a team pushing 5 changes to the main branch per day, going from a 90% success rate to 70% is the difference between one showstopping breakage every two days to 1.5 every single day (a 3x increase).

At just 60 minutes recovery time per failure, you’re looking at an additional 250 hours in debugging and blocked deployments every year. And that’s at a relatively modest scale. Teams pushing 500 changes per day would lose the equivalent of 12 full-time engineers.

The usual response to reports like these is to claim they’re based on people using older LLMs, and the models coming out now are the truly revolutionary ones, which won’t have any of those problems. For example, this is the main argument that’s been leveled against the METR study I mentioned above. But that argument was flimsy to begin with (since it’s rarely accompanied by the kind of evidence needed to back up the claim), and its repeated usage is self-discrediting: if the people claiming “this time is the world-changing revolutionary leap, for sure” were wrong all the prior times they said that (as they have to have been, since if any prior time had actually been the revolutionary leap they wouldn’t need to say this time will be), why should anyone believe them this time?

Also, I’ve read a lot of studies and reports on LLM coding, and these sorts of findings—uneven or inconsistent impact, quality/stability declines, etc.—seem to be remarkably stable, across large numbers of teams using a variety of different models and different versions of those models, over an extended period of time (DORA does have a bit of a messy situation with contradictory claims that “code quality” is increasing while “delivery instability” is increasing even more, but as noted above that seems to be a methodological problem). The two I’ve quoted most extensively in this post (the DORA and CircleCI reports) were chosen specifically because they’re often recommended to me by advocates of LLM coding, and seem to be reasonably pro-LLM in their stances.

The other expected response to these findings is a claim that it’s not necessarily older models but older workflows which have been obsoleted, that the state of the art is no longer to just prompt an LLM and accept its output directly, but rather involves one LLM (or LLM-powered agent) generating code while one or more layers of “adversarial” ones review and fix up the code and also review each other’s reviews and responses and fixes, thus introducing a mechanism by which the LLM(s) will automatically improve the quality of the output.

I’m unaware of rigorous studies on these approaches (yet), but several well-publicized early examples do not inspire confidence. I’ll pick on Cloudflare here since they’ve been prominent advocates for using LLMs in this fashion. In their LLM rebuild of Next.js:

We wired up AI agents for code review too. When a PR was opened, an agent reviewed it. When review comments came back, another agent addressed them. The feedback loop was mostly automated.

But their public release of it, vetted through this process and, apparently, some amount of human review on top, was initially unable to run even the basic default Next.js application, and also was apparently riddled with security issues. From one disclosure post (emphasis added by me):

AI is now very good at getting a system to the point where it looks complete.

One specific problem cited was that the LLM rebuild simply did not pull in all the original tests, and therefore could miss security-critical cases those tests were checking. From the same disclosure post:

The process was feature-first: decide which viNext features existed, then port the corresponding Next.js tests. That is a sensible way to move quickly. It gives you broad happy-path coverage.

But it does not guarantee that you bring over the ugly regression tests, missing-export cases, and fail-open behavior checks that mature frameworks accumulate over years.

So middleware could look “covered” while the one test that proves it fails safely never made it over.

For example, Next.js has a dedicated test directory (test/e2e/app-dir/proxy-missing-export/) that validates what happens when middleware files lack required exports. That test was never ported because middleware was already considered “covered” by other tests.

On the whole, that post is somewhat optimistic, but considering that the Next.js rebuild was carried out by presumably knowledgeable people who presumably were following good modern practices and prompting good modern LLMs to perform a type of task those LLMs are supposed to be extremely good at—a language and framework well-represented in training data, well-documented, with a large existing test suite written in the target language to assist automated verification—I have a hard time being that optimistic.

And though I haven’t personally read through the recent alleged leak of the Claude Code source, I’ve read some commentary and analysis from people who have, and again it seems like a team that should be as well-positioned as anyone to take maximum advantage of the allegedly revolutionary capabilities of LLM coding isn’t managing to do so.

So the consistent theme here, in the studies and reports and in more recent public examples, is that being able to generate code much more quickly than before, even in 2026 with modern LLMs and modern practices, is still no guarantee of being able to deliver software much more quickly than before. As the CircleCI report puts it (page 3):

The data points to a clear conclusion: success in the AI era is no longer determined by how fast code can be written. The decisive factor is the ability to validate, integrate, and recover at scale.

And if that sounds like the kind of thing Fred Brooks used to say, that’s because it is the kind of thing Fred Brooks used to say. Raw speed of generating code is not and was not the bottleneck in software development, and speeding that up or even reducing the time to generate code to effectively zero does not have the effect of making all the other parts of software development go away or go faster.

So at this point it seems clear to me that in practice as well as in theory LLM coding does not represent a silver bullet, and it seems highly unlikely to transform into one at any point in the near future.

On being left behind

When expressing skepticism about LLM coding, a common response is that not adopting it, or even just delaying slightly in adopting it, will inevitably result in being “left behind”, or even stronger effects (for example, words like “obliterated” have been used, more than once, by acquaintances of mine who really ought to know better). LLMs are the future, it’s going to happen whether you like it or not, so get with the program before it’s too late!

I said I’ll stick to the technical mode here, but I’ll just mention in passing that the “it’s going to happen whether you like it or not” framing is something I’ve encountered a lot and found to be pretty disturbing and off-putting, and not at all conducive to changing my mind. And milder forms like “It’s undeniable that…” are rhetorically suspect. The burden of proof ought to be on the person making the claim that LLMs truly are revolutionary, but framing like this tries to implicitly shift that burden and is a rare example of literally begging the question: it assumes as given the conclusion (LLMs are in fact revolutionary) that it needs to prove.

Meanwhile, I see two possible outcomes:

  1. The skeptical position wins. LLM coding tools do not achieve revolutionary silver-bullet status. Perhaps they become another tool in the toolbox, like TDD or pair programming, where some people and companies are really into them. Perhaps they become just another feature of IDEs, providing functionality like boilerplate generators to bootstrap a new project (if your favorite library/framework doesn’t provide its own bootstrap anyway).
  2. The skeptical position loses. LLM coding tools do achieve true revolutionary silver-bullet status or beyond (consistently delivering one or more orders of magnitude improvement in software development productivity), and truly become a mandatory part of every working programmer’s tools and workflows, taking over all or nearly all generation of code.

In the first case, delayed adoption has no downside unless someone happens to be working at one of the companies that decide to mandate LLM use. And they can always pick it up at that point, if they don’t mind or if they don’t feel like looking for a new job.

As to the second case: based on what I’ve argued above about the status and prospects of LLMs up to now, I obviously think that continuing the type of progress in models and practices that’s been seen to date does not offer any viable path to a silver bullet. Which means a truly revolutionary breakthrough will have to be something sufficiently different from the current state of the art that it will necessarily invalidate many (or perhaps even all) prior LLM-based workflows in addition to invalidating non-LLM-based workflows.

And even if that doesn’t result in a completely clean-slate starting point with everyone equal—even if experience with older LLM workflows is still an advantage in the post-silver-bullet world—I don’t think it can ever be the sort of insurmountable advantage it’s often assumed to be. For one thing, even with vastly higher average productivity, there likely would not be sufficient people with sufficient pre-existing LLM experience to fill the vastly expanded demand for software that would result (this is why a lot of LLM advocates, across many fields, spend so much time talking about the Jevons paradox). For another, any true silver-bullet breakthrough would have to attack and reduce the essential difficulty of building software, rather than the accidental difficulty. Let us return once again to Brooks:

I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation.

Much of the skill required of human LLM users today consists of exactly this: specifying and designing the software as a “conceptual construct”, albeit in specific ways that can be placed into an LLM’s context window in order to have it generate code. In any true silver-bullet world, much or all of that skillset would have to be rendered obsolete, which significantly reduces the penalty for late adoption if and when the silver bullet is finally achieved.

Power to the people?

Aside from impact on professional programmers and professional software-development teams, another claim often made in favor of LLM coding is that it will democratize access to software development. With LLM coding tools, people who aren’t experienced professional programmers can produce software that solves problems they face in their day-to-day jobs and lives. Surely that’s a huge societal benefit, right? And it’s tons of fun, too!

Setting aside that the New York Times piece linked above was written by someone who is an experienced professional, I’m not convinced of this use case either.

Mostly I think this is a situation where you can’t have it both ways. It seems to be widely agreed among advocates of LLM coding that it’s a skill which requires significant understanding, practice, and experience before one is able to produce consistent useful results (this is the basis of the “adopt now or be left behind” claim dealt with in the previous section); strong prior knowledge of how to design and build good software is also generally recommended or assumed. But that’s very much at odds with the democratized-software claim: that someone with no prior programming knowledge or experience will simply pick up an LLM, ask it in plain non-technical natural language to build something, and receive a sufficiently functional result.

I think the most likely result is that a non-technical user will receive something that’s obviously not fit for purpose, since they won’t have the necessary knowledge to prompt the LLM effectively. They won’t know how to set up directories of Markdown files containing instructions and skill definitions and architectural information for their problem. They won’t have practice at writing technical specifications (whether for other humans or for LLMs) to describe what they want in sufficient detail. They won’t know how to design and architect good software. They won’t know how to orchestrate multiple LLMs or LLM-powered agents to adversarially review each other. In short, they won’t have any of the skills that are supposed to be vital for successful LLM coding use.

There’s also the possibility that “natural” human language alone will never be sufficient to specify programs, even to much more advanced LLMs or other future “AI” systems, due to inherent ambiguity and lack of precision. In that case, some type of specialized formal language for specifying programs would always be necessary. Edsger W. Dijkstra, for example, took this position and famously derided what he called “the foolishness of ‘natural language programming’”, which is worth reading for some classic Dijkstra-isms like:

When all is said and told, the “naturalness” with which we use our native tongues boils down to the ease with which we can use them for making statements the nonsense of which is not obvious.

Another possible outcome for LLM coding by non-programmers is the often-mentioned analogy to 3D printing, which also was hyped up as a great democratizer that would let anyone design and make anything, but never delivered on that promise and, at the individual level, became a niche hobby for the small number of enthusiasts who were willing and able to put in the time, money, and effort to get moderately good at it.

But the nightmare result is that non-programmer LLM users will receive something that seems to work, and only reveals its shortcomings much later on. Given how often I see it argued that LLMs will democratize coding and write utility programs for people working in fields where privacy and confidentiality are both vital and legally mandated, I’m terrified by that potential failure mode. And I think one of the worst possible things that could happen for advocates of LLM adoption is to have the news full of stories of well-meaning non-technical people who had their lives ruined by, say, accidentally enabling a data breach with their LLM-coded helper programs, or even “just” turning loose a subtly-incorrect financial model on their business. So even if I were an advocate of LLM coding, I’d be very wary of pushing it to non-programmers.

But ultimately, the only situation in which LLMs could meaningfully democratize access to software development is one where they achieve a true silver bullet, by significantly reducing or removing essential difficulty from the software development process. And as noted above, LLM advocates seem to believe that even in the silver-bullet situation there would still be such a gap between those with pre-existing LLM usage skills and those without, that those without could never meaningfully catch up. Although I happen to disagree with that belief, it remains the case that advocates can’t have it both ways: either LLM coding will be an exclusive club for those who built up the necessary skills, XOR it will be a great democratizer and do away with the need for those skills.

Takeaways

I’m already over 6,000 words in this post, and though I could easily write many more, I should probably wrap it up.

If I had to summarize my position on LLM coding in one sentence, it would be “Please go read No Silver Bullet”. I think Brooks’ argument there is both theoretically correct and validated by empirical results, and sets some pretty strong limits on the impact LLM coding, or any other tool or technique which solely or primarily attacks accidental difficulty, can have.

Of course, limits on what we can do or gain aren’t necessarily the end of the world. Many of the foundations of computer science, from On Computable Numbers to Rice’s theorem and beyond, place inflexible limits on what we can do, but we still write software nonetheless, and we still work to advance the state of our art. So the No Silver Bullet argument is not the same as arguing that LLMs are necessarily useless, or that no gains can possibly be realized from them. But it is an argument that any gains we do realize are likely going to be incremental and evolutionary, rather than the world-changing revolution many people seem to be expecting.

Correspondingly, I think there is not a huge downside, right now, to slow or delayed adoption of LLM coding. Very few organizations have the strong fundamentals needed to absorb even a relatively moderate, incremental increase in the amount of code they generate, which I suspect is why so many studies and reports find mixed results and lots of broken CI pipelines. Not only is there no silver bullet, there especially is no quick or magical gain to be had from rushing to adopt LLM coding without first working on those fundamentals. In fact, the evidence we have says you’re more likely to hurt than help your productivity by doing so.

I also don’t think LLMs are going to meaningfully democratize coding any time soon; even if they become indispensable tools for programmers, they are likely to continue requiring users to “think like a programmer” when specifying and prompting. We would be much better served by teaching many more people how to think rigorously and reason about abstractions (and they would be much better served, too) than we would by just plopping them as-is in front of LLMs.

As for what you should be doing instead of rushing to adopt LLM coding out of fear that you’ll be left behind: I think you should be listening to what all those whitepapers and reports and studies are actually telling you, and working on fundamentals. You should be adopting and perfecting solid foundational software development practices like version control, comprehensive test suites, continuous integration, meaningful documentation, fast feedback cycles, iterative development, focus on users, small batches of work… things that have been known and proven for decades, but are still far too rare in actual real-world software shops.

If the skeptical position is wrong and it turns out LLMs truly become indispensable coding tools in the long term, well, the available literature says you’ll be set up to take the greatest possible advantage of them. And if it turns out they don’t, you’ll still be in much better shape than you were, and you’ll have an advantage over everyone who chased after wild promises of huge productivity gains by ordering their teams to just chew through tokens and generate code without working on fundamentals, and who likely wrecked their development processes by doing so.

Or as Fred Brooks put it:

The first step toward the management of disease was replacement of demon theories and humours theories by the germ theory. That very step, the beginning of hope, in itself dashed all hopes of magical solutions. It told workers that progress would be made stepwise, at great effort, and that a persistent, unremitting care would have to be paid to a discipline of cleanliness. So it is with software engineering today.

Thursday assorted links

1. The “estrangement” from philosophy of economics.

2. Investing in scientific instruments.

3. New book coming on Carlsen vs. Niemann.

4. Houston economy growing at more than ten percent (and that is even without moving forward on bike paths).

5. “As Matt Yglesias rightly says, this is a worrying signal of declining state legitimacy: even the centre-left parties don’t believe they can make the case for the state raising taxes and spending them on public goods

The post Thursday assorted links appeared first on Marginal REVOLUTION.

       

Comments

 

Presidential Words Can Turn the Unthinkable Into the Thinkable − For Better or For Worse

The Conversation logo

Among the most disorienting things about President Donald Trump’s public language is how easily it can feel numbing and shocking in the same moment. He says something outrageous, the country recoils, and then the recoil itself begins to feel familiar.

As a scholar who studies presidential rhetoric, I know that over time that rhythm does its own kind of damage. It teaches the public to absorb the breach. What once might have sounded like a genuine political emergency or a violation of constitutional decorum begins to register as just another day in American political life.

But the past few days merit notice. The president’s demagoguery has taken a darker turn.

Trump’s rhetoric about Iran has become more than inflammatory. Beginning with posts to Truth Social in early April, he has used profanity-laden language – “Open the Fuckin’ Strait, you crazy bastards, or you’ll be living in Hell” – to threaten attacks on the country’s infrastructure. He urged Iranians to rise up against their government. He warned that “a whole civilization will die tonight” if Iran does not comply with U.S. demands.

The Associated Press treated those remarks as a significant escalation in the context of a live conflict, not merely as familiar Trumpian excess: “As the conflict has entered its second month, Trump has escalated his warnings to bomb Iran’s infrastructure.”

The International Committee of the Red Cross also issued the unusual reminder that the rules of war must be respected “in words and action,” suggesting that the rhetoric itself had become part of the danger.

But were Trump’s recent remarks really different from his many earlier outbursts?

I think they were. For years, Trump’s rhetoric has relied on insult, ridicule, threat and contemptHe has degraded opponents and helped coarsen the terms of public life.

What seems different about his words during the first week of April 2026 is the scale of violence his language primed people to imagine. His remarks about Iran moved beyond personal attacks or chest-thumping nationalism to take on a tone of collective punishment and civilizational destruction. The style was familiar. The horizon of harm was not.

Trump Truth Social post
President Donald Trump’s social media post of April 7, 2026, threatening the destruction of ‘a whole civilization,’ meaning Iran. Truth Social.

Politics of fear

Presidential rhetoric is more about permission than persuasion. Presidents do not only argue. They signal.

Through those signals, they tell the public what kind of situation this is, what kind of danger is at hand, and what kinds of response are reasonable. In that sense, the president can function like a human starting gun. His words cue journalists, legislators, party allies and ordinary supporters about how to classify events before anyone has fully processed them.

Political theorist Corey Robin’s work on the politics of fear is a useful lens for understanding what is happening with Trump’s violent rhetoric.

Fear, in Robin’s view, is not simply a feeling that arises naturally in response to danger. It is politically manufactured. Power teaches people what to fear, how to name danger, and where to direct their apprehension. Presidential rhetoric is an essential tool for performing that work.

Thus, a president does not only describe a threat. He also gives it shape and scale. He tells the public how large it is, how close it is, and what kinds of response should feel reasonable in its presence.

A good example of a president doing this happened after the Sept. 11, 2001, terrorist attacks when, while visiting ground zero in New York City, George W. Bush said, “I can hear you. The rest of the world hears you. And the people who knocked these buildings down will hear all of us soon.” With that sentence, Bush acknowledged the gravity of what had happened, but also promised to fight back and bring justice to the terrorists.

When it comes to statements like those Trump has recently made about Iran, the worry is not that the president has said something extreme. Instead, the larger concern lies in what repeatedly using extreme language does to the atmosphere in which judgment takes place.

Political hyperbole lowers the threshold of what the public can imagine as legitimate, as allowable. When presidents make threats like the ones Trump issued, mass suffering becomes more imaginable. The president’s words and social media posts test whether the public will continue to hear such language as over the line, or whether it will be absorbed as one more hard-edged negotiating tactic.

Shaping reality

Presidential rhetoric matters for reasons that go beyond persuasion or style.

It helps arrange reality. It tells the public what is serious, who is dangerous, whose suffering counts, and what forms of violence can be described as necessary. President Barack Obama did this in 2012, when he was speaking at a vigil to honor the shooting victims at Sandy Hook Elementary School.

“We bear a responsibility for every child because we’re counting on everybody else to help look after ours,” he said. “That we’re all parents; that they’re all our children.” With these words, Obama called everyone to feel, up close, the horrific loss of 20 children shot dead, and to work for a solution to gun violence.

Trump has benefited from a public worn down by repetition. Every new breach arrives trailing the memory of earlier ones.

People begin to doubt their own reactions. Surely this is appalling, they may think, but also, somehow, this is what he always does. That dual feeling is part of the harm. A damaged baseline makes serious escalation harder to recognize and judge.

The disorientation and disgust that so many people experienced in response to Trump’s thundering, violent proclamations is important. Even after years of erosion of what was deemed normal, some lines remain visible.

Paying attention now is not about pretending Trump has suddenly become someone new. It is about recognizing more clearly what his presidency has been teaching the public to hear as thinkable. The most serious harm may lie not only in what follows such rhetoric, but in the world it helps prepare people to accept.

This article was originally published by The Conversation on April 8, 2026. Click here to read the original.


“FREEDOM OF THE PRESS IS NOT JUST IMPORTANT TO DEMOCRACY, IT IS DEMOCRACY.” – Walter Cronkite. CLICK HERE to donate in support of our free and independent voice.

The post Presidential Words Can Turn the Unthinkable Into the Thinkable − For Better or For Worse appeared first on DCReport.org.

Apex to develop larger satellites for missile defense, space-based computing

‘Comet Mini’ and ‘Comet XL’ target energy-intensive missions

The post Apex to develop larger satellites for missile defense, space-based computing appeared first on SpaceNews.

Improved thresholds and SSA accuracy can reduce most satellite collision risk

Improving the accuracy of space situational awareness data and using tighter thresholds for potential conjunctions can help retire most satellite collision risk in LEO, a new study concludes.

The post Improved thresholds and SSA accuracy can reduce most satellite collision risk appeared first on SpaceNews.

Space Sovereignty Is No Longer Optional

A Critical Infrastructure Under Strain Space has quietly become the infrastructure beneath modern life. From financial transactions synchronised by satellite timing to military operations reliant on secure communications, the global […]

The post Space Sovereignty Is No Longer Optional appeared first on SpaceNews.

April 30: Opportunities for On-Orbit Computing

Join us in Washington, D.C., for the next event in our orbital data centers series

The post April 30: Opportunities for On-Orbit Computing appeared first on SpaceNews.

Tracking the next SDA challenge

Illustration of a satellite in orbit. Credit: Leidos

A flurry of commercial innovation has left the U.S. government with no shortage of sensors and AI-driven insights to monitor the increasingly packed and contested space environment. The challenge is […]

The post Tracking the next SDA challenge appeared first on SpaceNews.

Preparing for the next NASA budget battle

Illustration of the Chandra X-ray Observatory. Credit: NASA/CXC & J. Vaughan

When Congress passed a fiscal year 2026 “minibus” appropriations bill in January, much of the space community breathed a sigh of relief. Congress had rejected the steep cuts proposed by the Trump administration, including a nearly 25% reduction in the agency’s overall budget and nearly 50% to science. The agency ended up with funding close […]

The post Preparing for the next NASA budget battle appeared first on SpaceNews.

Portal Space Systems raises $50 million to accelerate spacecraft development

Portal spacecraft

Space mobility company Portal Space Systems has raised $50 million to scale up development of its highly maneuverable spacecraft.

The post Portal Space Systems raises $50 million to accelerate spacecraft development appeared first on SpaceNews.

Space Force taps 14 firms for $1.8 billion GEO surveillance program

The companies will compete for contracts to build ‘neighborhood watch’ satellites

The post Space Force taps 14 firms for $1.8 billion GEO surveillance program appeared first on SpaceNews.

Albedo prepares second VLEO mission for 2027 launch

SAN FRANCISCO – Albedo, the former Earth-observation startup now focused on satellite manufacturing, announced plans April 9 to send its second spacecraft into very low-Earth orbit (VLEO). Vicinity, scheduled to launch in 2027, will deploy solar arrays to provide far more power than the solar panels mounted on Precision, the bus that housed Albedo’s 2025 […]

The post Albedo prepares second VLEO mission for 2027 launch appeared first on SpaceNews.

Lockheed Martin wins $105 million contract for GPS ground control system as OCX winds down

The contract supports next-generation GPS IIIF satellites through 2030

The post Lockheed Martin wins $105 million contract for GPS ground control system as OCX winds down appeared first on SpaceNews.

Vantor to expand imaging satellite fleet, adding smallsats to increase revisit rates

The new architecture combines high resolution with rapid revisit

The post Vantor to expand imaging satellite fleet, adding smallsats to increase revisit rates appeared first on SpaceNews.

Heather Pringle on the upcoming Space Symposium and future international collaboration in space

In this episode of the Space Minds podcast, SpaceNews’ Mike Gruss interviews Heather Pringle, a retired Major General of the US Air Force who is now the CEO of the […]

The post Heather Pringle on the upcoming Space Symposium and future international collaboration in space appeared first on SpaceNews.

China conducts pair of Long March launches for Thousand Sails and Guowang megaconstellations

China carried out separate Long March 8 and Long March 6A launches this week, adding new batches of satellites to the country’s megaconstellation projects.

The post China conducts pair of Long March launches for Thousand Sails and Guowang megaconstellations appeared first on SpaceNews.

A new kind of arsenal: commercial satellites

he Pentagon is no longer asking whether commercial space will matter in future conflicts. It is building a strategy around the assumption that it will.

The post A new kind of arsenal: commercial satellites appeared first on SpaceNews.

What returning to lunar space means for human health

Artemis 2 earthset

Preparing to send astronauts back to the moon is no small feat. Spaceflight exposes the body to both known and unknown risks, some of which are subtle, cumulative and difficult to predict. When humans were last on the lunar surface in 1972, they stayed for just over three days. On that final mission, Apollo astronauts […]

The post What returning to lunar space means for human health appeared first on SpaceNews.

The third Gulf war will scar energy markets for a long time yet

Residual risks and ruined infrastructure will keep prices high

One neat trick to end extreme poverty

An age-old problem might have a surprisingly simple solution

Can the secondary market allay private-credit fears?

In the short run, a bit. In the long run, a lot more

The latest Italian banking whodunnit has it all

A a coup, a conspiracy and even a (possibly unrelated) corpse

South Korea’s AI industrial policy meets the energy shock

The collision will not be pretty

Zhang Xuefeng (1984-2026), guided Chinese families through the college admissions process

 A Chinese college-admissions influencer has died, who helped families choose which colleges and majors to rank, after they received the results from the gaokao, the national admissions exam.

The NYT has the story:

China Mourned an Education Influencer. The Grief Was a Quiet Revolt.
Zhang Xuefeng helped people navigate the country’s unforgiving higher education system. The public outpouring after his death was a quiet rebuke to the punishing process. 
 By Li Yuan

 "Zhang Xuefeng became famous in China for telling students and their parents what few educators would: which majors were useless, which careers were dead ends and which dreams ordinary families could not afford.

“Knock out your children if they want to study journalism,” he famously said.

“The humanities all lead to service work, and service work, in one word, is sucking up” to clients.

...
"But when Mr. Zhang died last month, at 41, something uncommon in today’s China happened. In a country where large-scale, spontaneous public emotion is rarely tolerated, tens of thousands of people from all over the country showed up at his memorial service in the eastern city of Suzhou
. They stood in line for hours to pay their respects, some carrying flowers, one clutching a college admission letter. On the Chinese internet, posts and videos about Mr. Zhang and his death drew more than six billion views in a single day, according to a media monitoring firm.

...

"The gratitude directed at Mr. Zhang, who had 27 million followers on the short-video platform Douyin, reflected the fears of ordinary Chinese families trying to navigate an increasingly opaque and unforgiving education system. The extraordinary mourning after his death revealed how much of contemporary China is living with that anxiety.

"Students and parents thanked him for helping them navigate the high-stakes process of choosing a college major in China. Through livestreams and consulting sessions, he explained which majors led to stable jobs, which industries were declining and which professional certificates were worth pursuing — information readily available to families with connections or advanced education but far harder for everyone else to find. 

...

"For decades, the general college entrance exam, known in Chinese as the gaokao, was widely seen as a pathway to changing one’s fate; it was brutally competitive but capable of delivering upward mobility. As universities expanded and the job market deteriorated, that promise weakened. Getting into college became easier. Turning a degree into security did not.
...

"In many provinces, families have less than two weeks between getting the results of the exam and the deadline to apply to college. In that time, they need to make sense of hundreds of majors, universities and career paths

...

"The families who turn to consultants like Mr. Zhang are not, for the most part, China’s elite. But neither are they the poorest. Wang described his clients as families in the broad middle: small-business owners, office workers, skilled laborers and lower-level state employees. They often have money to pay for guidance but lack the social capital or institutional knowledge needed to navigate the system confidently on their own. " 

AI, Unemployment and Work

Imagine I told you that AI was going to create a 40% unemployment rate. Sounds bad, right? Catastrophic even. Now imagine I told you that AI was going to create a 3-day working week. Sounds great, right? Wonderful even. Yet to a first approximation these are the same thing. 60% of people employed and 40% unemployed is the same number of working hours as 100% employed at 60% of the hours.

So even if you think AI is going to have a tremendous effect on work, the difference between catastrophe and wonderland boils down to distribution. It’s not impossible that AI renders some people unemployable, but that proposition is harder to defend than the idea that AI will be broadly productive. AI is a very general purpose technology, one likely to make many people more productive, including many people with fewer skills. Moreover, we have more policy control over the distribution of work than over the pure AI effect on work. Declare an AI dividend and create some more holidays, for example.

Nor is this argument purely theoretical. Between 1870 and today, hours of work in the United States fell by about 40% — from nearly 3,000 hours per year to about 1,800. Hours fells but unemployment did not increase. Moreover, not only did work hours fall, but childhood, retirement, and life expectancy all increased. In fact in 1870, about 30% of a person’s entire life was spent working — people worked, slept, and died. Today it’s closer to 10%. Thus in the past 100+ years or so the amount of work in a person’s lifetime has fallen by about 2/3rds and the amount of leisure, including retirement has increased. We have already sustained a massive increase in leisure. There’s no reason we cannot do it again.

The post AI, Unemployment and Work appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

Travelling at the speed of light

Digital artwork of a spaceship with blue thrusters approaching a bright star cluster in a dark, grid-lined space scene.

A mind-bending trip into the cosmos aboard a speculative (yet theoretically possible) spacecraft near the speed of light

- by Aeon Video

Watch on Aeon

White-collar sweatshops

A cityscape at night through a window with a person in the foreground, blurred buildings in the background.

How did law firms and other professional workplaces become places of such crushing and soulless work?

- by Dylan Gottlieb

Read on Aeon

LDS fact of the day

The Church of Jesus Christ of Latter-day Saints has grown 66% this century, fueled in part by a record-breaking number of convert baptisms in 2025.

The church had 10,752,986 members at the end of 1999. The church had 17,887,212 at the end of 2025, according to an annual statistical report released Saturday during the church’s 196th Annual General Conference.

Furthermore the growth is coming in every part of the world (as a qualifier I am not sure what the outflow is).  Here is the full article, via Tyler Ransom.

The post LDS fact of the day appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

With Orion still flying, NASA is nearing key decisions about Artemis III

NASA's Artemis II mission has yet to return to Earth—it will do so on Friday evening, splashing down into the Pacific Ocean off the coast of San Diego—but the agency is already nearing some key decisions on the next Artemis mission.

The US space agency announced six weeks ago that it was modifying its Artemis timeline to insert a mission before beginning planned lunar landings. This new mission, designated Artemis III and intended to fly in Earth orbit rather than to the Moon, would attempt to "buy down" risk to give the lunar landing mission (now Artemis IV) a higher chance of success.

NASA Administrator Jared Isaacman said Tuesday afternoon that the space agency is debating about which orbit to fly Artemis III in before locking in a blueprint, noting that the first "senior level" Artemis III mission design discussion had taken place earlier in the day.

Read full article

Comments

‘Open’ Is in the Eye of the Beholder

One of Trump’s Truthed conditions for a ceasefire was the “SAFE OPENING of the Strait of Hormuz.” The White House appeared ready to declare that condition met. But even this morning, before reports emerged that Iran was once again closing the strait, there was some fine print.

As David noted in Morning Memo, Iran, through the ceasefire, now has quite a bit more control over the strait than it had before the war started, saying that it will control it jointly with Oman. The Financial Times reported this morning that Iran will charge boats crypto to get through, and, seemingly, leisurely inspect them as they make the passage. Here’s Hamid Hosseini, a spokesperson for Iran’s Oil, Gas and Petrochemical Products Exporters’ Union, in an interview with the FT:

“Iran needs to monitor what goes in and out of the strait to ensure these two weeks aren’t used for transferring weapons,” said Hosseini, whose industry association works closely with the state.

“Everything can pass through, but the procedure will take time for each vessel, and Iran is not in a rush,” he added.

This all flew in the face of the alternate reality being advanced by the White House, with Trump claiming the U.S. is going to get a cut of these ships’ fees.

Later, White House spokesperson Karoline Leavitt refused to answer a question about who controlled the strait.

This afternoon, it appears the fragile ceasefire is growing more fragile, with Israel’s continued attacks on Hezbollah in Lebanon emerging as a major point of contention. Iran and Pakistan, which served as an intermediary for the ceasefire, say Lebanon was covered by the agreement. Israel contends Lebanon was not, as does Leavitt. Mohammad-Bagher Ghalibaf, the speaker of Iran’s parliament, said that “a bilateral ceasefire or negotiations” has become “unreasonable,” but didn’t go so far as to say the agreement had fallen apart.

A Few Thoughts on Trump’s Pre-Deal With Iran

First, just because Donald Trump is an inveterate liar, don’t assume that Iran is a reliable narrator about anything that was agreed to in this deal. (Was there a deal? We’ll get to that.) One thing both sides explicitly agree on, coming right from President Trump himself, is that the 10 point Iranian plan will serve as the basis for discussions over the next two weeks. The early accounts of what that document included focused on a lot things Iran wants, even including things it wanted before the war broke out. It doesn’t really focus on the things the U.S. notionally got into this war for. (We’ll get in a moment to what’s included in the document Iran released today.) For the U.S., this ceasefire is at best a ceasefire on the basis of a stalemate, where the fight is about a draw and both sides want to see if they can bring the fight to an end.

That’s the optimistic view. The U.S. has clearly been more eager to get to the negotiating table. It’s the U.S. that wants out most. The items on that list tilt heavily toward Iran. The Iranians appear to be exercising continued control of the Strait of Hormuz even if they may allow ships to go through — “allow” being the key word.

Iran has now released a new version of its ten points that seems wildly more aggressive then what Trump appeared to be referring to. It’s a maximalist set of demands which requires the U.S. to abandon the region and leaves Iran as the local hegemon. It sounds like they’re now trying to come in with a maximalist set of demands. Or maybe Trump was so desperate to get to a ceasefire that he agreed to this set of demands regardless. What’s true in either case is that to the extent we are going to see a negotiation it definitely seems like one in which the U.S. has the weaker hand and comes to the negotiation as the loser in the conflict. To emphasize the point, Iran doesn’t seem shy about embarrassing Trump during the early hours of the purported ceasefire. They’ve now announced that they will let only 12 ships transit the strait each day, even after Trump claimed he’d forced the Iranians to reopen it entirely. That’s just a fraction of the normal traffic. The Iranians are humiliating him at every turn, likely because they know he wants the ceasefire too badly to throw it into doubt.

Every way to look at this, whichever documents decisions are being based on, the U.S. is trying to get out of the war more than Iran is. That’s notwithstanding the fact that Iran has suffered almost incalculably more damage. But this comes back to a point we discussed at the outset of the conflict. It’s never about the absolute amount of damage. It’s about the stakes for each side. For Iran, it’s the survival of their government and an entire theological-political worldview. Donald Trump is trying to avoid losing control of both houses of Congress. Those aren’t comparable things. Trump stumbled into this conflict. He’s botched it badly and is now looking at the choice between a manageable strategic defeat or doubling or tripling down and guaranteeing something far worse. Trump seems to be working to get out on embarrassing but not catastrophic terms. But it’s not clear he can, given his need to dominate in all cases and how in hock he is to regional players who are aghast that Trump has somehow managed to create a situation where Iran emerges stronger from a conflict in which it has taken such a tactical beating.

This is what it’s like when you’re losing and you’re trying to wriggle out before you lose more. There’s no real other way to put it.


Python Supply-Chain Compromise

This is news:

A malicious supply chain compromise has been identified in the Python Package Index package litellm version 1.82.8. The published wheel contains a malicious .pth file (litellm_init.pth, 34,628 bytes) which is automatically executed by the Python interpreter on every startup, without requiring any explicit import of the litellm module.

There are a lot of really boring things we need to do to help secure all of these critical libraries: SBOMs, SLSA, SigStore. But we have to do them.

Financial Regulation and AI: A Faustian Bargain?

Important work is just flowing these days, and much of it (of course) concerns AI:

We study whether AI methods applied to large-scale portfolio holdings data can improve financial regulation. We build a state-of-the-art, graph-based deep learning model tailored to security-level data on the holdings of financial intermediaries. The architecture incorporates economic priors and learns latent representations of both assets and investors from the network structure of portfolio positions. Applied to the universe of non-bank financial intermediaries, covering nearly $40 trillion in wealth, the model substantially outperforms existing approaches in out-of-sample forecasts of intermediary trading behavior, including in crisis episodes. The model has more than ten times the explanatory power for the cross-sectional variation in asset returns during stress events compared to traditional approaches, and it outperforms existing systemic risk metrics at the institution level. Its learned representations show that the holdings network encodes rich, economically interpretable information about firesale vulnerability. The architecture is fully inductive, producing informative estimates even when entire asset classes or investors are withheld from training. We embed our empirical approach into a macroprudential optimal policy framework to formalize why these objects matter for policy and welfare. We show that even in an equilibrium environment subject to the Lucas critique, the predictive information from the model improves welfare by sharpening the cross-sectional targeting of policy interventions, and we demonstrate a complementarity between prediction and structural knowledge.

That is a new paper by Christopher Clayton and Antonio Coppola, of Yale and Stanford respectively.

The post Financial Regulation and AI: A Faustian Bargain? appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

The Moon is already on Google Maps—did Artemis II really tell us anything new?

The data pipeline from NASA's Artemis II mission opened to full blast a few hours after looping behind the far side of the Moon on Monday night, when the Orion spacecraft established a laser communications link with a receiving station back on Earth.

A cache of high-resolution images began streaming down through this connection. NASA released the first batch to the public on Tuesday. Most of the images were taken by the four Artemis II astronauts using handheld Nikon cameras fitted with wide-angle and telephoto lenses. They also had iPhones to capture views out of the windows of their Orion Moon ship, named Integrity.

After reaching their farthest point from Earth, astronauts Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen are accelerating back to Earth for reentry and splashdown Friday evening to wrap up the first crewed lunar mission in more than 53 years.

Read full article

Comments

Snow in the Shadow of the Andes

Large areas of snow cover the arid plains of southern Argentina. Several large lakes containing water in shades of milky blue are visible, while the Andes are veiled by clouds along the left side.
Snow covers large areas of southern Argentina’s high plains in this image acquired on April 3, 2026, by the MODIS (Moderate Resolution Imaging Spectroradiometer) on NASA’s Terra satellite.
NASA Earth Observatory/Michala Garrison

On the eastern, arid side of the Andes, the plains of southern Argentina stretch from the mountains to the Atlantic coast. The landscape often appears dry and brown, interrupted by colorful glacier-fed lakes, but a storm in early April 2026 blanketed swaths of the land in white. The MODIS (Moderate Resolution Imaging Spectroradiometer) on NASA’s Terra satellite captured this image of the snowy Patagonian Desert on the morning of April 3.

Early fall has been wetter than normal in southern Patagonia, said atmospheric scientist René Garreaud of the Universidad de Chile, noting that satellite-based estimates showed above-average precipitation from late March through early April. Much of the region’s precipitation tends to fall on the western, windward side of the Andes, he said. “But strong winds are capable of blowing some snow east into the Argentinian side, as beautifully reflected in the MODIS image.”

The outlet of bright blue Lago Argentino meanders across an otherwise dry, brown valley from left to right. The bottom third of the image is white, where snow covers higher elevations.
Snow lines a valley at the outlet of Lago Argentino in this image acquired on April 3, 2026, by the OLI (Operational Land Imager) on Landsat 9.
NASA Earth Observatory/Michala Garrison

Snow clings to the higher elevations, while valleys draining the large glacial lakes are bare. In this detailed Landsat image, a stark snow line appears along the outlet of Lago Argentino. Also note the color of the water. Lago Argentino and other nearby lakes contain an abundance of fine sediment, or glacial flour, pulverized by southern Patagonia’s plentiful glaciers. This suspended sediment makes the lakes appear milky blue or turquoise.

Though snow was widespread across the desert after the early-season storm, the wintry splendor was fleeting. A satellite view from the afternoon of April 4 showed that snow had melted from all but the highest mountain areas.

NASA Earth Observatory images by Michala Garrison, using MODIS data from NASA EOSDIS LANCE and GIBS/Worldview, and Landsat data from the U.S. Geological Survey. Story by Lindsey Doermann.

References & Resources

You may also be interested in:

Stay up-to-date with the latest content from NASA as we explore the universe and discover more about our home planet.

The West Faces Snow Drought
4 min read

Very wet—but very warm—weather in the western U.S. has left many mountainous regions looking at substantial snowpack deficits.

Article
Snow Buries the U.S. Interior and East
2 min read

Satellites observed a frozen landscape across much of the country after a massive winter storm.

Article
Fire Threatens Rare Forests in Argentina
3 min read

Blazes spread across Los Alerces National Park, home to some of the world’s oldest trees.

Article

The post Snow in the Shadow of the Andes appeared first on NASA Science.

Trump’s Mad and Desperate Threats

April 7, 2026

At 5:06 this morning, President Donald J. Trump posted on social media: “A whole civilization will die tonight, never to be brought back again. I don’t want that to happen, but it probably will. However, now that we have Complete and Total Regime Change, where different, smarter, and less radicalized minds prevail, maybe something revolutionarily wonderful can happen, WHO KNOWS? We will find out tonight, one of the most important moments in the long and complex history of the World. 47 years of extortion, corruption, and death, will finally end. God Bless the Great People of Iran!”

Trump has painted himself into a corner in his impulsive war against Iran. His job approval is dismal and Iran’s closure of the Strait of Hormuz, through which about 20% of the world’s oil travels, is sending the cost of oil soaring, squeezing the global economy. Always in his life he has had someone to fix his mistakes—his father, Trump Organization chief financial officer Allen Weisselberg, the “adults in the room” in his first administration who distracted him from catastrophic errors, and so on—but no one was willing to bail him out of the global disaster of his war on Iran.

So he threatened that “a whole civilization will die tonight, never to be brought back again,” an open threat to push his current embrace of war crimes all the way to genocide. No one knew if he was gearing up for a ground invasion of Iran in a war that has never received congressional authorization, or a massive bombing campaign, or even the use of nuclear weapons.

Or if he was making yet another empty threat.

Within the announcement were signs that perhaps it was bluster designed to let him claim victory and walk away. Despite his claim, there has been no “regime change” in Iran: the regime is very much still in place, although it has changed leadership in the wake of the bombing deaths of previous leaders. The new leaders appear to be more radical than their predecessors.

There was also the unmistakable echo of television advertising in his announcement. Either “a whole civilization will die tonight, never to be brought back again,” or “maybe something revolutionarily wonderful can happen, WHO KNOWS? We will find out tonight, one of the most important moments in the long and complex history of the World.”

At 6:32 this evening, we learned that the horrifying announcement of the morning was, indeed, cover for Trump to declare victory and get out of the crisis he has caused in the Middle East.

Trump posted: “Based on conversations with Prime Minister Shehbaz Sharif and Field Marshal Asim Munir, of Pakistan, and wherein they requested that I hold off the destructive force being sent tonight to Iran, and subject to the Islamic Republic of Iran agreeing to the COMPLETE, IMMEDIATE, and SAFE OPENING of the Strait of Hormuz, I agree to suspend the bombing and attack of Iran for a period of two weeks. This will be a double sided CEASEFIRE! The reason for doing so is that we have already met and exceeded all Military objectives, and are very far along with a definitive Agreement concerning Longterm PEACE with Iran, and PEACE in the Middle East.

“We received a 10 point proposal from Iran,” Trump continued, “and believe it is a workable basis on which to negotiate. Almost all of the various points of past contention have been agreed to between the United States and Iran, but a two week period will allow the Agreement to be finalized and consummated. On behalf of the United States of America, as President, and also representing the Countries of the Middle East, it is an Honor to have this Longterm problem close to resolution. Thank you for your attention to this matter! President DONALD J. TRUMP”

Michael Rios of CNN reported that Iran’s media is claiming it has achieved a great victory, forcing the U.S. to agree in principle to its 10-point plan, which includes the end of sanctions against Iran, the removal of all U.S. combat forces from bases in the region, and Iranian control of the Strait of Hormuz. If these terms are correct, they leave the United States significantly worse off than it was before the war and leave Iran significantly stronger.

Trump called Rios’s story a fraud, and immediately sought to reassert his strength. He posted, “Authorities are looking to determine whether or not a crime was committed on the issuance of the Fake CNN World Statement,” and said that “CNN is being ordered to immediately withdraw this Statement with full apologies for their, as usual, terrible ‘reporting.’”

Political commentator Ben Rhodes summed up the situation: “In the best case scenario, Trump struck a deal to reopen a Strait that was open before the pointless war he started, with [Iran] demonstrating its control over the Strait and potentially extracting fees plus sanctions relief. Thousands of innocents—including hundreds of children—dead in Lebanon and Iran for no reason. U.S. troops killed and wounded. U.S. embassies and bases in the Middle East badly damaged. U.S. standing in the world obliterated. U.S. munitions badly depleted. Hundreds of billions spent. Prices up everywhere. More global economic fallout to come. Putin strengthened and enriched. Just a catastrophic situation even in the best of circumstances. A profoundly shameful episode in American history no matter what happens next.”

And then, a minute after midnight, Trump posted:

“A big day for World Peace! Iran wants it to happen, they’ve had enough! Likewise, so has everyone else! The United States of America will be helping with the traffic buildup in the Strait of Hormuz. There will be lots of positive action! Big money will be made. Iran can start the reconstruction process. We’ll be loading up with supplies of all kinds, and just ‘hangin’ around’ in order to make sure that everything goes well. I feel confident that it will. Just like we are experiencing in the U.S., this could be the Golden Age of the Middle East!!! President DONALD J. TRUMP”

Journalist Aaron Rupar of Public Notice wrote: “Trump went from making insane genocidal threats this morning to hyping the ‘golden age’ of Iran hours later, and he received no concessions in between. He’s an absolute basket case who needs to be removed from power before he follows through on one of his mass murder fantasies.”

The American people spent the whole day wondering if their mad king would destroy the world, only to find out he was terrorizing them in order to protect his ego after starting a disastrous war. Throughout the day, Democratic members of Congress have called for Majority Leader John Thune (R-SD) to recall the Senate and for Speaker Mike Johnson (R-LA) to recall the House of Representatives from break to end the war in Iran and start the process of removing Trump from office.

Trump’s threat that “a whole civilization will die tonight, never to be brought back again” was not just a reference to Iran. If he had destroyed Iran in our names, unhampered by the Republican Congress members who have vowed to defend the U.S. Constitution, it would also have been an epitaph for the United States of America.

Notes:

https://votevets.org/press-releases/statement-of-major-general-ret-paul-d-eaton-on-trumps-threat-to-wipe-out-a-civilization

https://edition.cnn.com/2026/04/07/world/live-news/iran-war-trump-us-israel?post-id=cmnp8b6kb0001356sct0yez8e

X:

brhodes/status/2041680999537381857

Bluesky:

atrupar.com/post/3mivujg3kns2f

adamsteinbaugh.bsky.social/post/3mix4brpgkk2m

thetnholler.bsky.social/post/3mixc77fw5k2n

atrupar.com/post/3mixgi5cuhk2y

pdehello.bsky.social/post/3mixgjhyxtu2z

ameresolicitor.bsky.social/post/3miwxi2owc22z

raskin.house.gov/post/3miwkw6froc2d

kim.senate.gov/post/3mix4uvzp5s24

robertgarcia.bsky.social/post/3miwi7erujc2x

andrewsolender.bsky.social/post/3miw7rqx66c2j

Share

Politics Chat, April 7, 2026

Wednesday 8 April 1663

Up betimes and to my office, and by and by, about 8 o’clock, to the Temple to Commissioner Pett lately come to town and discoursed about the affairs of our office, how ill they go through the corruption and folly of Sir W. Batten and Sir J. Minnes.

Thence by water to White Hall, to chappell; where preached Dr. Pierce, the famous man that preached the sermon so much cried up, before the King against the Papists.

His matter was the Devil tempting our Saviour, being carried into the Wilderness by the spirit. And he hath as much of natural eloquence as most men that ever I heard in my life, mixed with so much learning.

After sermon I went up and saw the ceremony of the Bishop of Peterborough’s paying homage upon the knee to the King, while Sir H. Bennet, Secretary, read the King’s grant of the Bishopric of Lincoln, to which he is translated. His name is Dr. Lany. Here I also saw the Duke of Monmouth, with his Order of the Garter, the first time I ever saw it.

I am told that the University of Cambridge did treat him a little while since with all the honour possible, with a comedy at Trinity College, and banquet; and made him Master of Arts there. All which, they say, the King took very well. Dr. Raynbow, Master of Magdalen, being now Vice-Chancellor.

Home by water to dinner, and with my father, wife, and Ashwell, after dinner, by water towards Woolwich, and in our way I bethought myself that we had left our poor little dog that followed us out of doors at the waterside, and God knows whether he be not lost, which did not only strike my wife into a great passion but I must confess myself also; more than was becoming me. We immediately returned, I taking another boat and with my father went to Woolwich, while they went back to find the dog.

I took my father on board the King’s pleasure boat and down to Woolwich, and walked to Greenwich thence and turning into the park to show my father the steps up the hill, we found my wife, her woman, and dog attending us, which made us all merry again, and so took boats, they to Deptford and so by land to Half-way house, I into the King’s yard and overlook them there, and eat and drank with them, and saw a company of seamen play drolly at our pence, and so home by water. I a little at the office, and so home to supper and to bed, after having Ashwell play my father and me a lesson upon her Tryangle.

Read the annotations

The Company Helping Paralyzed People Move And Thrive Again - EP 64 Dave Marver

Three years ago, I’d caught some videos online of paralyzed people walking again. This struck me as miraculous. It also confused me. If paralyzed people were moving again, why weren’t more people talking about this incredible occurrence?

The company helping people move again is called Onward Medical, and it’s based in Lausanne, Switzerland. In 2023, I booked a flight to Europe and went to visit Onward and met its CEO Dave Marver, who is this week’s guest.

Subscribe now

During my trip, I did, in fact, witness amazing things. An Italian man named Michel was walking again with the help of a spinal implant device made by Onward. He could stand and walk and exercise daily. And a young Belgian woman named Julie used an Onward device to regulate her blood pressure. Before receiving the Onward technology, Julie had contemplated suicide because it took her hours each day to get out of bed – the result of blood pressure fluctuations that caused her to pass out. After receiving the device, she reenrolled in her PhD program. Her whole life had been turned around.

Onward has developed products that deliver electrical stimulation to the spinal cord. Some of these products work outside of the body and some require an implant. More recently, Onward has begun pairing its spinal implant technology with brain computer interface implants. This allows patients to think about their desire to move and have those thoughts translated into actions executed by the spinal implant.

In this episode, Marver walks us through the history of Onward’s technology development and how all of these products work. It’s a story of academic research being turned into life-changing technology. I would argue that no company does more to help people dealing with paralysis.

This episode will surprise you, and, I think, warm your heart.

Share

The Core Memory podcast is on all major platforms and on our YouTube channel over here. If you enjoy the show, please leave a review and tell your friends.

This podcast is sponsored by Brex, the intelligent finance platform built to help companies spend smarter and move faster.

We run on Brex and so should you. Learn more about Brex right here.

The podcast is also made possible by E1 Ventures, which backs the most ambitious founders and start-ups.

Subscribe now

Exclusive: Inside The Tesla Semi Factory

We bring you - our loyal subscribers - the world’s first look inside the Tesla Semi factory.

As many of you will know, Tesla has been on a long journey to create a new line of Semi trucks capable of g…

Read more

Ignorance and Ignominy

A sign on a street

AI-generated content may be incorrect.

So the world’s greatest military power went to war with a poor, medievalist theocracy. It was an incredibly uneven match. Here’s are the GDPs of Iran and the United States in 2024:

A blue line on a white background

AI-generated content may be incorrect.

Yet Iran won. The Iranian regime has emerged far stronger than it was before, controlling the Strait of Hormuz and having demonstrated its ability to inflict damage on both its neighbors and the world economy. The U.S. has emerged far weaker, having demonstrated the limitations of its military technology, its strategic ineptitude and, when push comes to shove, its cowardice.

We’ve also destroyed our moral credibility: Trump may have TACOed at the last minute, but he threatened to commit gigantic war crimes — and for all practical purposes our political and civil institutions gave him permission to do so.

How did this happen? Naturally, the Iranian Minister of War credited divine intervention, declaring that “God deserves all the glory.” His nation, he said, fought with the “protection of divine providence. A massive effort with miraculous protection.”

Well, theocrats gonna theocrat.

But I lied. That wasn’t a quote from an Iranian official. That’s what Pete Hegseth, our self-proclaimed Secretary of War, said while claiming that one of the worst strategic defeats in American history was a great victory.

There will be many analyses by military and strategic experts of the Iran debacle. But let’s not lose sight of the larger picture: We were led to disaster by the boastful ignorance of men like Trump and Hegseth — boastful ignorance made even worse by claims that God supports whatever they want to do.

With men like that running America, major disasters were just a matter of time. I’d like to think that they have been chastened by this debacle, that they have learned something. But I don’t believe that for a minute.

God help us.

MUSICAL CODA

Dental Surgery Today

Don’t worry, I’m not the patient — Jack (above) is. But between that and events, no regular post this morning.

Quoting Giles Turnbull

I have a feeling that everyone likes using AI tools to try doing someone else’s profession. They’re much less keen when someone else uses it for their profession.

Giles Turnbull, AI and the human voice

Tags: ai-ethics, writing, ai

The Sky Atlas

On page 96 the author notes that “the actual mapping of the heavens did not exist” in medieval Europe; on page 98, that “celestial cartography awaited its invention.” That these words appear nearly 40 percent… More

Anthropic’s New Claude Mythos Is So Good at Finding and Exploiting Vulnerabilities That They’re Not Releasing It to the Public

Anthropic’s Frontier Red Team:

Earlier today we announced Claude Mythos Preview, a new general-purpose language model. This model performs strongly across the board, but it is strikingly capable at computer security tasks. In response, we have launched Project Glasswing, an effort to use Mythos Preview to help secure the world’s most critical software, and to prepare the industry for the practices we all will need to adopt to keep ahead of cyberattackers.

This blog post provides technical details for researchers and practitioners who want to understand exactly how we have been testing this model, and what we have found over the past month. We hope this will show why we view this as a watershed moment for security, and why we have chosen to begin a coordinated effort to reinforce the world’s cyber defenses.

Our new model is so good, it’s too dangerous to release to the public” is a message that sounds like it could be marketing hype. But it seems like it’s probably true. Examples cited by Anthropic include finding and exploiting a 27-year-old OpenBSD bug (that can crash any device running OpenBSD) and a 16-year-old bug in the widely used FFmpeg media processing library.

See also: Techmeme’s extensive roundup.

 ★ 

Links 4/8/26

Links for you. Science:

Inside the Turmoil at Robert F. Kennedy Jr.’s C.D.C.
Drought drives elevated antibiotic resistance across soils
Claims about genetic superiority ignore the real drivers of human inequality
You’re likely already infected with a brain-eating virus you’ve never heard of
Administration Targeted Climate Lab in Effort to Free Trump Ally, Lawsuit Claims
If states ban fluoride, more kids will get cavities and Medicaid costs could soar, study finds

Other:

Trump Cannot See That the Opposition Is Real
Does Generative AI “Work”? That’s a Misleading Question.
A Dunning-Kruger War, Courtesy of the Dunning-Kruger President
How DC’s mayor and council chair thwarted every effort to better the streetcar
The Trump Administration Turns a Blind Eye to White-Collar Crime. The failure to prosecute elite wrongdoing is a bipartisan trend that’s hastening under Trump.
Trump casts a mail ballot again in Florida even as he calls the method ‘cheating’
It’s time to stop posting on X
Trump showed classified map to passengers on his plane in 2022, memo says
It’s not just vaccines: Parents are refusing other routine preventive care for newborns
Elon Did Some Securities Fraud. Also AI inequality, Tesla/SpaceX Terafab, JPMorgan monitoring and AI startup parties.
Waiting for Liberal Democracy in the American South
Democrats Flip Florida Statehouse Seat That Includes Trump’s Mar-A-Lago
This Is Why Flying Is So Awful
1930s mural by famous S.F. artist uncovered in Pacific Heights home
Our Experience with i-Ready
Inside Trump’s daily video montage briefing on the Iran war (he is so stupid)
Transportation lobbyists have donated thousands to Sean Duffy’s son-in-law as he runs for Congress
Speaking of Good Republicans
Trump’s new Homeland Security chief is worse than you thought
The AI Industry Is Lying To You
The Throne: What happens when we say, “When it finally happens…”
All of DOGE’s work could be undone as lawsuit against Musk proceeds
Trump appeared to have business motive for keeping classified documents, Jack Smith finds
American Jews Won’t Be Silenced. We Have Every Right to Oppose the Iran War
Bizarre stupidity of the Hatzola ambulance ‘false flag’ conspiracy theories
The ugly history behind Trump’s birthright citizenship case in the Supreme Court
Disney’s Sora Disaster Shows AI Will Not Revolutionize Hollywood
TV antennas are making a comeback in the age of digital streaming
Why Trump Wants ICE to Ditch the Masks at Airports
Landmark L.A. jury verdict finds Instagram, YouTube were designed to addict kids
Netanyahu aide’s racist slurs about Mizrahi Jews spark outrage
Imagining a 2028 Presidential Campaign for a Limited Presidency

After a historically hot March, an active April weather pattern will bring thunderstorms, cooler temperatures, and likely even some Sierra snow

Following a ridiculous ridge in March and associated record-shattering warmth, a more transient pattern has established in April Well, I won’t belabor the point: March 2026 was a month that will long be remembered for its astonishing warmth across California and the broader West. Snowpack in most regions is now near or below all-time record […]

The post After a historically hot March, an active April weather pattern will bring thunderstorms, cooler temperatures, and likely even some Sierra snow first appeared on Weather West.

Overton

I think I accidentally installed an Overton window in my bedroom. A few months ago, the sun wasn't in my face in the morning, but now it is.

Wednesday assorted links

1. Waymo rollout in NYC is halted.

2. Back “plus” is the better answer (NYT).  I am glad this is now settled, Alex T. can attest I have been insisting on this for a while.  Note my earlier prediction.

3. Nicholas Decker on Ludwig Straub.

4. Crypto and quantum computing.  Likely an important piece, here is GPT Pro on that paper.

5. “The Suno upgrade for song generation seems quite good as well.”  So much is new!

6. Hollinger on NBA tanking (NYT).

7. Is there an evolving Iran bargain with China? (speculative, mostly we still do not know what is going on, you should discount most of what you are reading on this topic).

8. Anna review of The Drama.

The post Wednesday assorted links appeared first on Marginal REVOLUTION.

       

Comments

 

Run Out to Meet It

Charity Majors spent a year telling engineers they needed to learn to code to stay relevant. Then, in about three months, that advice became obsolete. In this conversation, she talks about why engineers who built careers on beautiful, readable code are struggling more than anyone else, why junior developers might have a hidden advantage right now, and what it means that nobody has a headstart anymore. Everybody’s ignorance has been reset to 100.

This season of Still Burning is sponsored by WorkOS and Augment Code.

When the crisis comes

These days, we’re all living in a constant state of crisis, foisted upon us by a world where those who are meant to keep things stable are the least stable factors in our lives. The chaos and stress of that reality makes it difficult to make any plans, let alone to make decisions if you have responsibilities for a team or organization that you’re meant to be leading. It’s easy to imagine there’s nothing we can do, or to feel hopeless. But a resource that just arrived served as a timely reminder for me that a crisis doesn’t have to be paralyzing, and we don’t have to feel overwhelmed when trying to plan how we’ll respond as leaders.

The topic of crisis has been on my mind again as I’ve been looking at the work of some friends who are the most fluent experts on the topic of crisis that I know, prompted by the release of Marina Nitze, Mikey Dickerson and Matthew Weaver's new book, Crisis Engineering.

There’s nothing more valuable than people who can step in during a moment of crisis and provide clarity, not just on how to make it through that moment, but how to seize that opportunity to actually make better things possible. A few years ago, at some of the most stressful and harrowing moments I’ve had as a leader in my business career, I got to connect with a remarkable team who ran towards the crisis that our organization was in, and helped our team get through that moment and not just persevere, but to thrive. I thought a bit about the famous Mr. Rogers line about “look for the helpers”, and Matthew, Marina, and Mikey's team at their company Layer Aleph really were the equivalent of the helpers when it comes to the place where where technology meets the real world.

I’d first heard legend of their way of working in the days and weeks after the notoriously rough launch of Healthcare.gov (This was back when the federal government aspired to competency, inability to deliver was considered a scandal, and media would accurately describe something that didn’t function as a failure.) A small, scrappy, multifunctional team had been able to transform the culture of this hidebound segment of the federal government, and deliver a set of services that are saving American lives to this day. That story is detailed well in the book, but at the time, the conventional wisdom was that this was a catastrophe so impossibly complex, in a bureaucracy so hopelessly broken, that nobody could possibly fix it. And then they did. (With the help of a lot of brilliant and motivated colleagues.)

As it turns out, this was just one of many such efforts that the team would be a part of, and helped define the overall approach that they, and their collaborators, would take in addressing these highly public crises. There are so many situations where a combination of cultural and technical challenges conspire to cause extremely visible failures or disruptions that seem intractable. But over time, a set of practices and principles emerged from their work that took the response out of the realm of superstition and guesswork and into something that was almost a science. These techniques work when systems are crashing, when machines get hacked, when data are leaked, when business models are crumbling, when leadership is in disarray, when customers are angry, when users are leaving, when competitors are attacking, when funders are fleeing. In short, when the crisis is at your door.

Putting it into practice

It was years after their evolution from those early post-Healthcare.gov days into a mature practice that I reconnected with the Layer Aleph team. By then, I was running a company, and a team, that was under an extreme amount of stress, and in a situation that could easily have amounted to an existential crisis. They were able to engage with conviction and compassion, but importantly, they weren’t making it up as they went along. I think this is an idea that’s important to understand in the current moment, too — there is such a thing as expertise. We do not have to settle for incompetence and cronyism. Good people of good character with real credentials and relevant experience can bring it to bear on even the most challenging situations, and when they do, even the most intractable problems are solvable.

And now, that expertise is something they’ve captured and shared.

I don’t often unabashedly endorse books about business and technology; too often I find them to be based on thin premises, padded out with cliches. But what the team here have done with their new book Crisis Engineering is something special — they documented their own experiences of turning real crises into a chance to design new, resilient systems.

Even better, they talk about how other organizations can do the same thing. The reason that I can testify that it works is because I have seen it, and I’ve seen my own team benefit from their work. In fact, I think it was during the conversations after the dust had settled from some of that work that the very phrase “crisis engineering” first emerged as a description of this way of thinking about complex problems. I’m thrilled that it’s become a useful shorthand for naming and discussing this powerful and unique way of tackling some of the most intimidating situations that companies or organizations might take on. It’s built confidence for myself, and my whole leadership team from that era, that we’ll be ready when the next challenge arrives. With apologies to Rihanna, I do want people to text me in a crisis.

The more confidence we can build in our teams that a crisis is an ordinary event that we can plan for, the more ready they will be for that moment when it arrives. That’s why I can’t recommend the book highly enough. Set aside some time to read it, and to make notes on how you might put it into practice when crisis inevitably comes to visit. You’ll be lucky to have had this resource before you need it.

You can read more about the book on their site. (And, as always, nothing I post on my site is sponsored content — I’m enthusiastically endorsing this book because I believe in what these folks have written and genuinely believe it’s worth your time to read if you lead an organization or team.)

Open Letter to President Trump on Iran Strategy From a Group of Concerned Economists

  Yesterday, just hours before President Trump's announced deadline for widespread destruction of power plants and bridges in Iran, a group of Iranian-American economists circulated this open letter, which I signed on to It began as folows:

An Open Letter to President Trump on Iran Strategy From a Group of Concerned Economists
"Dear President Trump,

While we share the Administration's concerns about the Islamic Revolutionary Guard Corps’ (IRGC) nuclear program and destabilizing activities, strikes on Iranian civilian infrastructure, including power plants, industries, bridges, and universities, some of which have already begun, are strategically counterproductive, escalating costs to Americans while reducing our ability to achieve America’s objectives.

Iran's ability to disrupt the Strait of Hormuz through mines, fast-attack boats, coastal missiles, and drones does not depend on power plants or refineries. Destroying civilian infrastructure removes Iran's incentive to reopen the Strait; it does not remove its capability to close it.

Striking infrastructure devastates the livelihood of 92 million civilians. Aside from its resulting humanitarian crisis, it will further widen the scope of the war and allow the IRGC to position themselves as defenders of the nation, and rebuild their badly damaged internal support. Infrastructure strikes and a prolonged war also let them blame economic misery on foreign aggression rather than their own mismanagement.  "

##########

This morning I don't find much evidence of it on the internet.

Professor Mostafa Beshkar  in Indiana has posted it on Twitter (as X used to be known), and  here it is in the Indiana student newspaper, with the signers (who you can also find here):

LETTER: To President Trump on Iran strategy from a group of concerned economists 

But open letters aren't just written, circulated and signed  to change policies (especially in a matter of hours). They also serve to express solidarity, and even as a shout into the void they let those concerned with Iran, and with America, express their fear and loathing of war crimes and boasts about the intention to commit them. 

Mythos assorted links

Here is Dean Ball on Mythos.  And now more from Dean.  Here is John Loeber.  While I am seeing some likely overstatement, probably this is a real turning point nonetheless, and we need to think further about what is best to do.  No b.s. on data center slowdowns and algorithmic discrimination, rather actual thought on how to regulate something that actually will matter.  And be glad we got there first.  But how long will it be before an open source version, even if somewhat inferior, is available?  Will OpenAI and Google soon be showing similar capabilities?  (And how will that shift the equilibrium?)  Should we upgrade our estimates of the returns to investing in compute?  How will the willingness of attackers to pay for tokens evolve, relative to the willingness of defenders to pay for tokens?  Which are our softest targets?  As a side effect, will this also lead to higher economic concentration, as perhaps only the larger institutions can invest in quality patches rapidly enough?  How many things will be taken offline altogether?  It was the government of Singapore that started moving in that direction in 2016 with their Internet Surfing Separation.  Which of the pending hacks and leaks will embarrass you the most?

And if nothing else, this is proof we are not all going to be jobless, albeit for reasons that are not entirely positive.

The post Mythos assorted links appeared first on Marginal REVOLUTION.

       

Comments

 

Why Payment Restrictions Are the Biggest Issue for Users in Germany

For many users in Germany, payment rules are the part of online gambling they notice first and complain about most. Games, bonuses, and site design matter, but none of that helps if a deposit is blocked, a limit is reached sooner than expected, or a payout feels slow and uncertain. 

Germany’s legal online gambling market has operated under a national framework since July 2021, and that framework puts strong controls on how money moves in and out. From a player’s point of view, that makes payments the point where regulation becomes real.

One Limit Follows You Everywhere

The monthly deposit cap is not tied to one site. It follows the player across licensed operators. In practice, that means a person can deposit up to 1,000 euros in total across the regulated market in a calendar month, with activity tracked through the LUGAS monitoring system.

Many people first compare offers, payment methods, and withdrawal terms before opening an account, and some also check these platforms on websites like cazinouri.de to get a quick overview. Once they start playing, though, the real challenge is understanding that one payment decision on one platform can affect what is possible everywhere else.

Several Checks Hit at the Same Time

A user may lower a limit and see the change apply right away, but a higher limit only becomes effective after a seven-day wait. On top of that, higher limits require proof, like income records or bank statements.

Limits above 1,000 euros can be granted, but only under strict conditions, and the step from 10,000 to 30,000 euros is reserved for a very small share of active players. Even failed payment attempts can count against the limit in some cases, which makes the whole setup feel stricter than many users expect.

The Rules Feel Heavier Than the Limit Itself

Another reason payments become the main pain point is that the cap does not sit alone. It works together with gameplay restrictions that slow down spending in some products, especially virtual slots.

Germany’s rules limit stakes on virtual slots to 1 euro per spin and require games to last at least about five seconds on average. On paper, the system is meant to protect users.

They are told they have a monthly deposit allowance, but the rest of the rulebook still tightly controls how quickly that money can be used. This creates a feeling of constant friction rather than a clear and predictable playing experience.

Trust Drops Quickly When Money Feels Hard to Move

The first test of a platform is often the first deposit. If that process is confusing, rejected, or delayed, users start worrying about the withdrawal before they even place a bet. Industry reporting in 2025 has highlighted that payments are now one of the main drivers of user retention, and repeated failed deposits often lead people to abandon a platform.

In Germany, that matters even more because the legal market asks users to accept tighter checks in exchange for higher safety standards. When the legal path feels too slow or too rigid, some users start looking at sites that appear easier.

a group of credit cards sitting next to a cell phone
Photo: CardMapr via Unsplash.

What Users Can Do Before Problems Start

The most practical step is to treat payment rules as part of the product. Users should check whether a provider is officially licensed in Germany, understand that the monthly cap is shared across operators, and plan ahead if they expect to request a higher limit.

It also helps to complete identity and affordability checks early, keep track of failed payment attempts, and read the site’s withdrawal terms before depositing anything. None of this makes the rules lighter, but it reduces nasty surprises.

Photo: Mika Baumeister via Unsplash.


CLICK HERE TO DONATE IN SUPPORT OF DCREPORT’S NONPROFIT NEWSROOM

The post Why Payment Restrictions Are the Biggest Issue for Users in Germany appeared first on DCReport.org.

Artemis 2 science gets underway as Orion begins its return trip

Artemis 2 earthset

As Artemis 2 begins its return to Earth, scientists are just starting to review the images and observations taken by the crew as they flew around the moon.

The post Artemis 2 science gets underway as Orion begins its return trip appeared first on SpaceNews.

A lunar base or a lunar economy?

NASA lunar base

We applaud the lunar base vision laid out by NASA Administrator Jared Isaacman on March 24. Placing this stake in the lunar regolith is long overdue. What is missing from the mission architecture, however, is a vision for long-term economic development on the moon. It will be a profound missed opportunity if NASA does not […]

The post A lunar base or a lunar economy? appeared first on SpaceNews.

LeoLabs debuts space monitoring tool for military users

LeoLabs ‘Delta’ moves beyond collision warnings to identify potential adversarial activity

The post LeoLabs debuts space monitoring tool for military users appeared first on SpaceNews.

Industry navigates NASA’s start-and-stop approach to commercial space stations

Illustration of the Orbital Reef space station. Credit: Blue Origin

(Note: This article went to press for the April issue of SpaceNews Magazine before NASA announced potential changes to the CLD program.) NASA signaled last summer it planned to accelerate efforts to replace the aging International Space Station. Nearly nine months later, that push appears to have slowed.In a directive signed at the end of […]

The post Industry navigates NASA’s start-and-stop approach to commercial space stations appeared first on SpaceNews.

Is Even Trump This Stupid?: The BLS Data Leak Edition

In what is a very good, if disturbing, article about the Trump administration’s assault on national data, I noticed this tidbit (boldface mine):

On January 8, 2026, Donald Trump broke a foundational norm of American politics and barely anyone noticed. The political press corps, quite reasonably preoccupied with the administration’s numerous other entanglements, mostly turned a blind eye when Trump posted unreleased economic data to his Truth Social account the night before it was scheduled to be released….

A former Bureau of Labor Statistics employee with knowledge of the incident, who was granted anonymity to speak candidly, described the statistics agency’s handling of the imbroglio to me. By their account, after the improper release, the acting commissioner of BLS called the acting chair of the Council of Economic Advisers, who shares the prerelease data along with analysis to the president, and threatened to revoke the White House’s early access if Trump flouted data release procedure again. The two eventually came to an understanding that the issue arose because only the first page of the prerelease materials bore an embargo label, and so the president assumed that all subsequent pages were available to publicize. Going forward, the acting CEA chair and acting BLS commissioner agreed to include an embargo note on every page to prevent confusion. (BLS did not respond to a request for comment.)

So do we really believe that explanation? While it is quite possible that Trump lacks object permanence to the extent he needs THIS IS TOP SEKRIT written on every page, I have doubts that even Trump is that stupid.

Anyway, Trump should resign or be impeached.

AI Risks

Two new papers/initiatives indicate severe risks from AI, interestingly in opposite directions. The first is that the most advanced frontier models are now capable of finding and exploiting software in ways that could be used to crash or control pretty much all the world’s major systems.

Anthropic: We formed Project Glasswing because of capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity. Claude Mythos2 Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.

Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout—for economies, public safety, and national security—could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes.

That’s from Anthropic. The irony is that the company that has developed a frontier model capable of infiltrating and undermining more or less any computer system in the world is the one that has been forbidden from working with the US government. It’s as if a private firm developed nuclear weapons and the American government refused to work with them because they were too woke. Okey dokey.

The second paper on AI risks is AI Agent Traps from Google DeepMind. They point out that AI agents on the web are vulnerable to all kinds of attacks from things like text in html never read by humans, hidden commands in pdfs, commands encoded in the pixels of images using steganography and so forth.

Putting this together we have the worrying combination that very powerful AI’s are very vulnerable. Will AI solve the problems of AI? Eventually the software will be made secure but weird things happen in arms races and its going to be a bump ride.

The post AI Risks appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

Multi-pair kidney exchange comes to Karnataka

 Sophisticated kidney exchange is coming to more Indian States.

The Indian Express has the story from Karnataka:

Kidney donation to become easier as Karnataka okays multi-pair exchange
Until now, swap kidney donations were limited to two incompatible donor-recipient pairs. 

"In a significant move that could help patients awaiting a kidney transplant, the Karnataka Government has issued guidelines permitting multi-pair kidney paired swap transplantation. This will expand the scope of organ donation beyond the traditional two-way system.

The Karnataka Multi-Pair Kidney Exchange Transplantation Guidelines 2026 were issued via a government order on April 4.

“A significant number of donor-recipient pairs in Karnataka are unable to undergo transplantation on account of biological incompatibility… Applications have been received from registered transplant hospitals in Karnataka seeking approval for multi-pair Kidney Paired Exchange Transplantation involving three or more donor-recipient pairs,” the order said."

Your opinion, please

Photo of a town with houses and buildings, surrounded by mountains under a cloudy sky.

From poetry to politics, this radio show asked listeners to speak their minds. Decades later, their words still resonate

- by Aeon Video

Watch on Aeon

Herbert Hoover is still underrated

We study the effects of large-scale humanitarian aid using novel data from the American Relief Administration’s (ARA) intervention during the 1921-1922 famine in Soviet Russia. We find that the allocation of relief closely tracked underlying food scarcity and was uncorrelated with subnational politics. We show that ARA rations reduced food prices, raised caloric intake, lowered the prevalence of relapsing fever, and increased rural birth cohorts. The aid benefited poorest peasants most and proved most effective in provinces with higher levels of human capital. Back-of-the-envelope calculations suggest that, absent ARA relief, the 1926 population would have been 4.4 million lower.

That is from a new paper by Natalya Naumenko (my colleague), Volha Charnysh, and Andrei Markevich.

The post Herbert Hoover is still underrated appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

Democrats Expand Their Majority on Wisconsin’s Critical State Supreme Court

Khaya Himmelman has the story here.

Trump Punts Disaster for ‘Two Weeks’

Trump on Truth Social, 6:32 p.m. ET, with the climb-down, describing what he claims will amount to a “double sided CEASEFIRE!”:

Based on conversations with Prime Minister Shehbaz Sharif and Field Marshal Asim Munir, of Pakistan, and wherein they requested that I hold off the destructive force being sent tonight to Iran, and subject to the Islamic Republic of Iran agreeing to the COMPLETE, IMMEDIATE, and SAFE OPENING of the Strait of Hormuz, I agree to suspend the bombing and attack of Iran for a period of two weeks.

This will be a double sided CEASEFIRE! The reason for doing so is that we have already met and exceeded all Military objectives, and are very far along with a definitive Agreement concerning Longterm PEACE with Iran, and PEACE in the Middle East. We received a 10 point proposal from Iran, and believe it is a workable basis on which to negotiate. Almost all of the various points of past contention have been agreed to between the United States and Iran, but a two week period will allow the Agreement to be finalized and consummated. On behalf of the United States of America, as President, and also representing the Countries of the Middle East, it is an Honor to have this Longterm problem close to resolution. Thank you for your attention to this matter! President DONALD J. TRUMP

Stephen Pimentel has an excellent review of *The Marginal Revolution*

Here is one very good paragraph of many:

Cowen is excellent on the question of why the marginalist insight had to wait so long, and why it eventually came in a simultaneous eruption across countries and three intellectual temperaments. The answer involves the slow assembly of preconditions: advances in calculus, the rise of statistical thought, the professionalization of economics as a discipline, and certain changes in the philosophy of science associated with the Victorian debate between inductive and deductive methods. Progress in science, Cowen suggests, is rarely a matter of the lone genius, but rather of the alignment of previously dispersed elements. The genius arrives when the ground has been prepared to receive the insight.

And another:

There is a discomforting codicil to all of this. Perhaps, Cowen suggests near the book’s end, the intuitions of 20th-century microeconomics were always a kind of compensation for a deeper ignorance. Perhaps we elevated intuitive reasoning, with its clean parables of marginal utility, and elegant supply-and-demand diagrams, because they were what we had, and we mistook their availability for adequacy. Machine learning models that find hundreds of thousands of factors in financial data are not exactly refuting marginalism. They are revealing the scale of what marginalism was never equipped to see. Our intuitions were always a small corner of understanding, swimming in a larger froth of epistemic chaos. The illusion has been stripped bare.

Here is the full review.  Here is the book itself.  Via Mike Doherty.

The post Stephen Pimentel has an excellent review of *The Marginal Revolution* appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

A Volcanic Medley Near Mammoth Lakes

A string of small, partially snow-covered volcanic craters runs north-south between Mono Lake and Mammoth Mountain. Snowy mountains fill the left side of the image.
Overlapping volcanic complexes shape the eastern Sierra Nevada in this image acquired on March 29, 2026, by the OLI (Operational Land Imager) on Landsat 9.
NASA Earth Observatory/Lauren Dauphin

Take a tour through volcanic history on the edge of the Sierra Nevada near Mammoth Lakes, California. Between the tall granite peaks to the west and the Basin and Range province to the east, overlapping volcanic complexes imprint the landscape with a collection of craters, cones, and calderas. The area, still restless today, draws interest from geologists studying Earth’s processes and from planetary scientists exploring its commonalities with volcanic terrain elsewhere in our solar system.

A string of volcanic features between Mono Lake and Mammoth Mountain is visible along the left side of this Landsat image. Known as the Mono-Inyo Craters, this chain of about three dozen lava domes, lava flows, and tephra rings formed within the past 10,000 years. Explosive eruptions in the area date back even further, but evidence of those older events is no longer apparent at the surface.

Among the most recent activity in this chain, explosive eruptions formed Panum Crater near Mono Lake about 700 years ago. A strombolian eruption deposited a ring of pumice, ash, obsidian fragments, and other material around the vent. After that, a lava dome made of pumice and obsidian built up in the center, creating the concentric-circle pattern visible today.

The Mono Craters, raised features ranging from small and round to larger and amorphous, form a gently arcing line stretching from north to south.
The Mono Craters arc across this image acquired on March 29, 2026, by the OLI (Operational Land Imager) on Landsat 9.
NASA Earth Observatory/Lauren Dauphin

South of the Mono-Inyo Craters, Mammoth Mountain is perhaps best known for its ample skiable terrain, but it has a volcanic side, as well. The mountain is made up of at least 25 overlapping lava domes. Its last magmatic eruptions took place about 57,000 years ago, but steam-driven phreatic eruptions and other unrest have occurred much more recently.

Scientists believe a magma intrusion beneath the mountain in 1989 set off a spate of seismicity and volcanic gas emissions. Venting of carbon dioxide gas has killed trees in the area, and the U.S. Geological Survey continues to monitor the mountain’s CO2 emissions. Researchers have previously worked with NASA airborne remote sensing technology to measure ecosystem responses to elevated volcanic CO2 around Mammoth Mountain. More recent projects have expanded these efforts to other volcanoes and incorporated satellite imagery to detect signs of gas emissions. These methods partly rely on changes observed in vegetation and could aid in earlier warnings of volcanic hazards.

The most dramatic volcanism in the region, however, is far older. A massive eruption 760,000 years ago formed the Long Valley Caldera. This oval-shaped area, measuring 10 by 20 miles (16 by 32 kilometers), is bounded by snowy ridges, with Mammoth Mountain just off its southwest rim. Crowley Lake, a reservoir on the Owens River, drains the area to the southeast.

The caldera was formed during a six-day-long eruption, during which 150 cubic miles (625 cubic kilometers) of material were ejected. (That’s about 20 times the amount that was spewed in the 1912 eruption of Novarupta, the largest on Earth in the 20th century.) As a result, the surface over the magma storage area subsided thousands of feet to create a vast depression. Scientists with NASA’s Goddard Instrument Field Team conducted research in the area in 2023 to better understand how similar massive eruptions on Mars and other planets and moons in our solar system may have altered their environments.

NASA Earth Observatory images by Lauren Dauphin, using Landsat data from the U.S. Geological Survey. Story by Lindsey Doermann.

References & Resources

You may also be interested in:

Stay up-to-date with the latest content from NASA as we explore the universe and discover more about our home planet.

Scoria Cones on Earth and Mars
7 min read

The hill-shaped features are a sign of explosive volcanic activity—a rarity on the Red Planet.

Article
Krasheninnikova Remains Restless
3 min read

The volcano on Russia’s Kamchatka Peninsula continues to erupt after centuries of quiescence.

Article
A Hot and Fiery Decade for Kīlauea
6 min read

The volcano in Hawaii is one of the most active in the world, and NASA tech makes it easier for…

Article

The post A Volcanic Medley Near Mammoth Lakes  appeared first on NASA Science.

Two assorted links

1. Claims about the role of China, and its economics.  And there is a lot of remaining uncertainty, but here is one of the saner Iran war takes.

The post Two assorted links appeared first on Marginal REVOLUTION.

       

Comments

 

Holding Our Breath: Who Doesn’t Love A Good Rescue Story?

Heroic Iran Rescue Masks Bigger Risks in Trump’s Expanding War

Heroic effort apparently involving hundreds of special operations forces, aided by CIA deception, successfully plucked that downed airman, a weapons officer, who had been forced to eject with his pilot from a F15E fighter jet struck by Iranian anti-aircraft weapons deep inside Iran.

From all accounts, it was a hairy military mission to find and retrieve the airman, elude the myriad Iranian ground forces hunting for him, and to escape without further injury. Even with the reported glitch in which two military transports that landed at an Iranian airstrip did not restart when it was time to get out — the disabled planes were blown up – it hardly took away from the audacious rescue.

The bravery involved received its deserved acclaim for brashness and execution, as have many of the strictly military feats of this war. But for Donald Trump to trumpet the rescue without acknowledging the larger gambles that he has undertaken with this war seems incongruous. The rescue was needed because we are now attacking civilian targets – bridges, utilities and factories – towards a goal that none of us can state forthrightly.

Where Do We Stand?

Iranian forces that Trump and the Pentagon tell us are unable to function anymore shot a top U.S. weapon out of the air – puncturing the myth of total conquest.

Late this week, we got leaks of U.S. intelligence that Iran is using decoys to draw U.S. fire and has been able to recover quickly from attacks on mobile missile launchers and hidden missile and drone supplies. Simply put, a supposedly toothless Iranian defense is still strong enough to cause plenty of international trouble, starting with a stranglehold on Strait of Hormuz shipping and extending to real death, injury and damage in neighboring Gulf states as well as in Israel.

Today is another deadline set by Trump alone for Iran’s total capitulation, a surrender that is marked only by unenforceable promises not to do anything on Trump’s list of proposals. Iran shows no sign of agreeing, and the “bomb them to hell” threats to follow will dial up the conflict anew.

Weirdly, Trump issued a profane Easter message that dwelt not on peace, as preached by say, Pope Leo XIV, but on more war – and incongruously praising Allah.

Europe has organized 41 countries ready to act in concert without the United States or Israel to deal with Iran and the Strait. Trump is ready to dump the entire NATO alliance. Because of the pressure of oil on global economies, Trump has re-authorized the sale and distribution of Russian and even Iranian oil – directly helping our adversaries. And our Congress is caught in its own disfunction.

What is the price of Trump’s delusionary view of incoming information? What is the cost of isolation backed by use of force by whim?  What are we to make of America’s moral, diplomatic, economic or even military prowess in a world being adjudged only by compliance with Trump’s gut rather than any measure of reason?  Where are the limits for Israel in grabbing southern Lebanon as another buffer zone, along with Gaza and the West Bank without a word of caution from the U.S.?

How long can we hold our breath?


“FREEDOM OF THE PRESS IS NOT JUST IMPORTANT TO DEMOCRACY, IT IS DEMOCRACY.” – Walter Cronkite. CLICK HERE to donate in support of our free and independent voice.

The post Holding Our Breath: Who Doesn’t Love A Good Rescue Story? appeared first on DCReport.org.

Some 60 million light-years away in the southerly Some 60 million light-years away in the southerly