Rachel Glennerster calls for reforming foreign aid

Aid agencies already try to cover too many countries and sectors, incurring high costs to set up small programs. Aid projects are far too complicated, resembling a Christmas tree weighed down with everyone’s pet cause. With less money (and in the US, very few staff), now is the time to radically simplify. By choosing a few highly cost-effective interventions and doing them at large scale in multiple countries, we would ensure

  • aid funds are spent on highly effective projects;
  • we benefit from the substantial economies of scale seen in development;
  • a much higher proportion of aid money goes to recipient countries, with less spent on consultants; and
  • politicians and the public can more easily understand what aid is being spent on, helping build support for aid.

The entire piece is excellent.


Review: Planetary Defenders

NASA+, the new streaming service run by the space agency, is offering more than just old videos and coverage of launches. Jeff Foust reviews a new documentary released on NASA+ last week that examines NASA's role in protecting the Earth from asteroid impacts.

Tuesday Telescope: A rare glimpse of one of the smallest known moons

I'll bet you don't spend a ton of time thinking about Deimos, the smaller of the two Martian moons, which is named after the Ancient Greek god that personified dread.

And who could blame you? Of the two Martian moons, Phobos gets more attention, including as a possible waystation for human missions to Mars. Phobos is larger than Deimos, with a radius of 11 km, and closer to the Martian surface, a little more than 9,000 km away.

By contrast, Deimos is tiny, with a radius of 6 km, and quite a bit further out, more than 23,000 km from the surface. It is so small that, on the surface of Mars, Deimos would only appear about as bright in the night sky as Venus does from Earth.


"A bonafide frigging flight": How NS-31 broke spaceflight norms and created an online uproar

Last week's New Shepard suborbital flight, with six women on board, generated a lot of attention but also criticism. Deana Weibel examines the flight and how it broke decades-old norms of spaceflight.

Space weather and spaceflight

Much of the focus on forecasting and responding to space weather has been on the terrestrial impact of solar storms on communications and the power grid. Jeff Foust reports the effect of space weather on satellites and space missions is now growing in importance.

Anything but expendable: A history of the Evolved Expendable Launch Vehicle (EELV) Secondary Payload Adapter (ESPA) (part 3)

In the concluding part of his history of ESPA, Darren Raspa recounts the development and first flight of that payload adapter and its evolution in the years that followed.

What's different about this Moon? What's different about this Moon?


We need more elitism

Even though the elites themselves are highly imperfect.  That is the theme of my latest FP column.  Excerpt:

Very often when people complain about “the elites,” they are not looking in a sufficiently elitist direction.

A prime example: It is true that during the pandemic the CDC and other parts of the government gave us the impression that the vaccines would stop or significantly halt transmission of the coronavirus. The vaccines may have limited transmission to some partial degree by decreasing viral load, but mostly this was a misrepresentation, perhaps motivated by a desire to get everyone to take the vaccines. Yet the vaccine scientists—the real elites here—were far more qualified in their research papers, and they expressed a more agnostic opinion. The real elites were not far from the truth.

You might worry, as I do, that so many scientists in the United States have affiliations with the Democratic Party. As an independent, this does induce me to take many of their policy prescriptions with a grain of salt. They might be too influenced by NPR and The New York Times, and more likely to favor government action than more decentralized or market-based solutions. Still, that does not give me reason to dismiss their more scientific conclusions. If I am going to differ from those, I need better science on my side, and I need to be able to show it.

A lot of people do not want to admit it, but when it comes to the Covid-19 pandemic the elites, by and large, actually got a lot right. Most importantly, the people who got vaccinated fared much better than the people who did not. We also got a vaccine in record time, against most expectations. Operation Warp Speed was a success. Long Covid did turn out to be a real thing. Low personal mobility levels meant that often “lockdowns” were not the real issue. Most of that economic activity was going away in any case. Most states should have ended the lockdowns sooner, but they mattered less than many critics have suggested. Furthermore, in contrast to what many were predicting, those restrictions on our liberty proved entirely temporary.

Recommended.


SAIC wins $55 million Space Development Agency contract for satellite network integration

By introducing a program integrator role, SDA aims to ensure better compatibility among satellites and cohesion across the network


SpaceX launches third mid-inclination rideshare mission

Bandwagon-3 launch

SpaceX launched the third in its series of mid-inclination dedicated rideshare missions April 21, but with very few rideshare payloads on board.


Astra targets cargo delivery with Rocket 4 in Pentagon-backed plan

Former public rocket startup focusing on defense applications after going private


Iridium shields supply chain as higher tariffs loom

TAMPA, Fla. — Iridium is ramping up tariff countermeasures to shield the U.S.-based satellite operator from import tax hikes as global trade tensions escalate. The operator has historically imported satellite […]


Northwood raises $30 million to establish ground station network

SAN FRANCISCO — Northwood Space raised $30 million in a Series A round to establish a global network of phased array ground stations. Alpine Space Ventures and Andreessen Horowitz led […]


Android Improves Its Security

Android phones will soon reboot themselves after sitting idle for three days. iPhones have had this feature for a while; it’s nice to see Google add it to their phones.

Unlike everyone else, Americans and Britons still shun the office

What is their love of working from home doing to their economies?

Wednesday: New Home Sales, Architecture Billings Index, Beige Book

Mortgage Rates Note: Mortgage rates are from MortgageNewsDaily.com and are for top tier scenarios.

Wednesday:
• At 7:00 AM ET, The Mortgage Bankers Association (MBA) will release the results for the mortgage purchase applications index.

• At 10:00 AM, New Home Sales for March from the Census Bureau. The consensus is for 680 thousand SAAR, up from 676 thousand in February.

• During the day, The AIA's Architecture Billings Index for March (a leading indicator for commercial real estate).

• At 2:00 PM, the Federal Reserve Beige Book, an informal review by the Federal Reserve Banks of current economic conditions in their Districts.

Trump fires at the Fed. America’s economy is collateral damage

The president may test legal bounds as he tries to sway Jerome Powell

Tuesday 22 April 1662

After taking leave of my wife, which we could hardly do kindly, because of her mind to go along with me, Sir W. Pen and I took coach and so over the bridge to Lambeth, W. Bodham and Tom Hewet going as clerks to Sir W. Pen, and my Will for me. Here we got a dish of buttered eggs, and there staid till Sir G. Carteret came to us from White Hall, who brought Dr. Clerke with him, at which I was very glad, and so we set out, and I was very much pleased with his company, and were very merry all the way. … [What was censored here? D.W.] [He, among good Storys, telling us a story of the monkey that got hold of the young lady’s cunt as she went to stool to shit, and run from under her coats and got upon the table, which was ready laid for supper after dancing was done. Another about a Hectors crying “God damn you, rascall!” – L&M] We came to Gilford and there passed our time in the garden, cutting of sparagus for supper, the best that ever I eat in my life but in the house last year. Supped well, and the Doctor and I to bed together, calling cozens from his name and my office.


Effective Strategies for Page-Level Targeting

Key Takeaways

  • Page-level targeting enables businesses to deliver personalized content based on specific user behavior and page context, increasing engagement and conversions.
  • Data-driven insights and dynamic content delivery are essential for tailoring user experiences and optimizing targeting strategies effectively.
  • By leveraging advanced technologies and continuous optimization, businesses can overcome challenges and stay competitive in an evolving digital landscape.

In today’s fast-paced digital landscape, businesses are continuously exploring strategic ways to captivate and engage their audience. One such strategy is page-level targeting, a powerful approach that allows businesses to deliver highly personalized content tailored to individual pages and specific audience segments. By honing in on precise user preferences and behaviors, companies can transform casual website visits into purposeful interactions, driving higher engagement and satisfaction.

Unlike broader marketing methods that cast a wide net, page-level targeting focuses on specific pages within a website, ensuring that visitors encounter content that directly resonates with their interests. This enhances user experience and significantly boosts conversion rates, as users are likely to respond positively to content that feels relevant and customized to their needs. By implementing page-level targeting, businesses can maximize their digital presence, turning passive viewers into active participants who are more inclined to take desired actions.

Introduction to Page-Level Targeting

Page-level targeting is a strategic approach that revolves around delivering tailored content to individual pages on a website. This carefully curated experience is driven by a profound understanding of user behavior, preferences, and intent. Unlike one-size-fits-all strategies, page-level targeting prioritizes relevance by focusing on customizing content to meet the specific needs and interests of various audience segments. This approach ensures that every interaction is valuable, driving engagement and building stronger connections between businesses and their audiences.

The Basics of Page-Level Targeting

At the heart of page-level targeting lies a commitment to personalization. This method involves analyzing data to better understand user demographics, such as age, location, and interests, and leveraging this information to craft content that speaks directly to them. The core components of effective page-level targeting include audience segmentation, dynamic content delivery, and continuous optimization. By employing these techniques, businesses can ensure their website serves as a robust tool for engaging users and meeting business objectives.

Why Page-Level Targeting Matters

The growing emphasis on page-level targeting can be attributed to its myriad benefits. In an age where consumers expect tailored experiences, businesses that deliver relevant content are more likely to capture attention and drive results. By implementing this strategy, companies can improve the overall user experience, fostering longer site visits and higher engagement rates. The enhanced relevance provided by page-level targeting translates into increased brand loyalty and customer retention, offering substantial long-term advantages.

Crafting Effective Page-Level Strategies

Designing successful page-level targeting strategies begins with a comprehensive understanding of audience demographics. Businesses can establish more meaningful interactions with their audience by identifying key segments and tailoring content to meet their unique needs. This involves implementing techniques such as audience segmentation, which clusters users into distinct groups based on shared characteristics, and developing content that caters specifically to each group’s preferences and behaviors.

Steps to Tailor Content

  • Identify common user interests and preferences through surveys, analytics, and feedback.
  • Employ dynamic content delivery methods, adjusting the content in real time based on user interactions.
  • Regularly evaluate and optimize page-level strategies, tweaking elements to enhance responsiveness and relevance.
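
The second of the steps above, dynamic content delivery, is the most mechanical part, so here is a minimal, hypothetical sketch of the idea in Python: a rule table maps page paths and visitor segments to content variants. The paths, segments, and messages are invented placeholders rather than any particular vendor's API.

```python
# Minimal sketch of page-level targeting: pick a content variant for a given
# page based on simple rules about the visitor. All rules, segments, and
# messages below are invented placeholders for illustration.

RULES = [
    # (page path prefix, visitor segment, content variant to show)
    ("/pricing", "returning", "Welcome back! Your saved quote is ready."),
    ("/pricing", "new",       "First time here? Start with the basic plan."),
    ("/blog",    "any",       "Enjoying the article? Subscribe for updates."),
]

def pick_variant(path: str, segment: str) -> str | None:
    """Return the first matching content variant, or None if no rule applies."""
    for prefix, rule_segment, variant in RULES:
        if path.startswith(prefix) and rule_segment in (segment, "any"):
            return variant
    return None

print(pick_variant("/pricing/teams", "new"))  # -> the first-time pricing message
print(pick_variant("/about", "returning"))    # -> None (no targeting rule matches)
```

In a real deployment, the rule table would be fed by the segmentation and analytics work described above and evaluated either server-side or in a tag manager.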

The Role of Data in Page-Level Targeting

Data serves as the foundation for successful page-level targeting initiatives. Businesses can gain helpful insights into user behaviors and preferences by harnessing analytics tools such as Google Analytics. This data-driven approach enables companies to track user interactions across various pages, identify patterns, and refine content techniques to better meet audience expectations. With a comprehensive understanding of user data, businesses can make well-informed decisions that boost website performance and drive results.

Overcoming Common Challenges

While page-level targeting offers immense potential, it does present certain challenges, including technical complexities and data management obstacles. However, by learning from real-world examples, businesses can proactively address these challenges. Solutions such as investing in robust technology infrastructure, fostering cross-departmental collaboration, and continuously monitoring campaign performance are instrumental in overcoming these hurdles. By embracing these proactive measures, companies can streamline their targeting processes, ensuring seamless implementation and execution.

For instance, organizations can build teams dedicated to managing targeting efforts, ensuring consistent communication and coordination across various departments. Additionally, regularly reviewing and adjusting strategies based on performance data empowers businesses to remain agile and adaptable in an ever-evolving digital landscape.

Tools and Technologies Supporting Targeting Efforts

The success of page-level targeting hinges on the effective utilization of cutting-edge tools and technologies. A variety of platforms provide businesses with the capabilities needed to execute and refine targeting strategies. From AI-powered content management systems to advanced data analytics solutions, these tools play a pivotal role in delivering tailored experiences that captivate and engage users. By investing in innovative technologies, businesses can elevate their targeting efforts, ensuring that content remains dynamic, relevant, and impactful.

Future Trends in Page-Level Targeting

Looking ahead, the evolution of page-level targeting is intertwined with innovations in artificial intelligence (AI) and machine learning. These technologies promise to revolutionize personalization by enabling businesses to anticipate user needs and deliver content with unparalleled precision. The integration of AI-driven insights into targeting strategies will empower businesses to stay ahead of consumer expectations, fostering deeper connections and driving sustained growth.

As the digital realm continues to progress, businesses that embrace these emerging trends will be poised to deliver exceptional customer experiences, setting new standards for engagement and interactivity.

Conclusion: Achieving Success with Page-Level Targeting

In conclusion, page-level targeting represents a strategic approach for modern businesses seeking to elevate their digital presence and connect with intended audiences on a deeper level. Companies can drive higher engagement, amplify brand loyalty, and achieve lasting success by focusing on individual pages and tailoring content to meet specific user needs. As businesses continue to explore an increasingly competitive digital landscape, embracing page-level targeting will be instrumental in driving meaningful interactions and achieving enduring results.

Photo: Pixabay via Pexels



‘We Don’t Have An Option Not To Fight’: How Black Women Are Resisting Now

Black Women Reject Claims That Their Post-Election Silence Means Inaction

Where are the 92 percent?

That has been a persistent question since the presidential election, referring to the Black women who overwhelmingly organized and voted for Kamala Harris and then seemingly went dark after November 5. For many of them — who have largely rejected Donald Trump in his three campaigns for president — Harris’ loss felt like a betrayal, and another signal of disrespect from a democracy they have long worked hard to shape.

In the early days of the Trump administration, there have been feelings of anger, resolve, resignation and exhaustion among Black women and many other Americans frustrated with the president’s actions and the current political climate. Earlier this month, millions of protesters took to the streets in cities across the country to make their voices heard as Trump and ally Elon Musk have sought to dramatically remake the federal government, with consequences for real Americans.

The crowds were overwhelmingly White, not the typical makeup of other recent protest movements. Many of the Black women who have been among the leaders of such movements in the past decade were noticeably — and intentionally — absent.

The Black women I talk to said they are being strategic, pragmatic and creative about what their resistance looks like now, preparing for a long fight ahead, and rejecting narratives that suggest their lack of visibility in this moment translates into inaction.

“People are paying more attention to what Black women are doing because of the impact we had in the election,” said Janai Nelson, president of the NAACP Legal Defense Fund. “We pointed people in the right direction and they did not follow. We may be out of sight to some people, but we’re not checked out by any stretch. The crisis in America is certainly not out of our minds.”

Within weeks of the election, a meme began to circulate of a group of Black women sitting on the roof of a building, sipping their beverages and watching the country burn. The message: Black women would do nothing to help if the democracy they’d tried to save went up in flames.

This month, another image quickly gained traction during the “Hands Off” protests: a photograph of White marchers filing past a restaurant while Black people having brunch looked on, unbothered.

While the idea that Black women deserve rest is showing up in organic social media content, it’s also part of a campaign of misinformation, said Esosa Osa, founder of Onyx Impact, a nonprofit dedicated to researching Black online communities and fighting harmful information that targets Black voters. Emphasizing Black women talking about rest can discourage others in this key Democratic voting bloc from engaging civically.

“We are seeing bad actors trying to influence and suppress Black engagement in a really targeted and hostile way,” Osa said. “We should be cautious of any narrative that’s just, ‘Black women won’t turn out or won’t engage civically.’  Those are the types of narratives that folks working against Black power would want to uplift and amplify. Just because you don’t see your Black friend at a protest doesn’t mean we’re not working or being strategic.”

A lot of that strategy is happening behind the scenes, said Kimberlé Crenshaw, who coined the term “intersectionality” and is a leading legal and civil rights scholar at UCLA and Columbia Law School. Crenshaw added that she has been skeptical of much of what she has seen online about Black women “resting.”

“I see a contrast between what’s being given to me on social media and what I’m seeing in the trenches,” Crenshaw said. “Are we tired? Yes. Are we heartbroken? Absolutely. Are we willing to roll over and let this … happen to us without hearing from us? I’m not seeing that, not in the circles I talk to. We don’t have an option not to fight.”

Nelson is among the Black women in the fight now, tapping into LDF’s long history of legal activism to make American democracy live up to its values. The group was among several civil rights organizations that filed a lawsuit earlier this month challenging Trump’s executive order calling for sweeping election changes.

Fatima Goss Graves, head of the National Women’s Law Center, said Black women are leading a lot of the strategy in this time, pointing to colleagues like Alexis McGill Johnson of Planned Parenthood; Melanie Campbell of the Black Women’s Roundtable, a network focused on the political and economic power of Black women; and SEIU President April Verrett. In February, Graves’ organization, a nonprofit advancing gender justice, filed a lawsuit challenging the president’s executive orders that take aim at diversity, equity and inclusion (DEI) initiatives.

Asked about this month’s protests, Graves said she was not surprised to see White Americans — who make up the majority of the federal workforce — as the main participants.

“The folks who usually come to the streets first are the ones who see the direct impact,” Graves pointed out. “You haven’t always seen groups like that in the streets. I actually feel good about Black women’s leadership at this time. They understand the assignment fully.”

And there are others, focused on building community, messaging to counteract negative narratives and protesting with the power of their purses.

Black women protesting.
Black women are leading the fight for democracy with strategy, resilience, and vision. Photo: Jakayla Toney via Pexels

In the days leading up to Trump’s joint address to Congress, an idea was launched by Black activists, organizers and strategists including Angela Rye, Leah Daughtry and Tamika Mallory to provide an alternative to the president’s speech: a marathon of online programming aimed at educating and empowering Black Americans impacted by the new administration.

“State of the People” streamed for 24 hours and has since evolved into a 10-city tour starting April 26 in Atlanta that will include mutual aid, political education and town halls.

“We have not stopped; we are focused on not just surviving, but making sure we don’t lose ground on what we have achieved as a people in this country,” said Campbell, one of the organizers of the State of the People effort. “This is designed to build a larger, intergenerational movement, showing the potential of long-term, sustained organizing on the ground and online.”

During the Lenten season, Jamal Bryant, pastor of the Atlanta-based mega congregation New Birth Missionary Baptist Church, called for a 40-day boycott of Target after the retailer announced it would scale back its DEI initiatives. The campaign came in the wake of the Trump administration’s executive orders calling for an end to such programs, which the president referred to as “radical and wasteful.” Black consumers, many of them women, make up nearly 9 percent of Target shoppers. While the full impact of the boycott is unclear, the company’s stock price has dropped, foot traffic to stores has slowed significantly, and net quarterly sales have decreased as a result.

Last month, 100 women did a “buy-in” at a Washington, D.C.-area Costco to show support for the store’s commitment to DEI as part of an annual summit organized by the Black Women’s Roundtable. Campbell said the gathering also included a day on Capitol Hill hosted by Angela Alsobrooks and Lisa Blunt Rochester — the nation’s two Black women senators — focused on federal budget priorities including Medicaid, Medicare and Social Security.

Campbell said she has been part of different organizing efforts since the election and strategizing around protecting Black women’s leadership in this moment.

“Part of resistance is self-care,” Campbell said. “That does not have anything to do with not fighting, because we are. We said we were going to take some rest after November 5, but there was never any notion that we weren’t going to fight for our freedom in this country.”

Resistance to the Trump administration, including for Black women, is still taking shape. Campbell said she invites allies whom she felt let down by after the election to step up now. What is clear in this unprecedented moment is that it will not look like it has looked before.

Nelson said Black women’s roles now must be “very targeted, very pinpointed, because we are in a crisis unlike anything we have seen in modern history for Black women.”

“We’re taking it very seriously,” Nelson said. “To the extent people sense silence or reserve, those energies are being put to good use, just in a quiet way.”

When the moment is right, Graves predicted that Black women could also take their protest to the streets.

“That’s part of being a strategist,” she said. “We’ll know when it’s time for us to engage, and that’s OK.”

This column first appeared in The Amendment, a biweekly newsletter by Errin Haines, The 19th’s editor-at-large. Subscribe today to get early access to her analysis.

Photo: Darina Belonogova via Pexels


CLICK HERE TO DONATE IN SUPPORT OF DCREPORT’S NONPROFIT MISSION

The post ‘We Don’t Have An Option Not To Fight’: How Black Women Are Resisting Now appeared first on DCReport.org.

Who needs a UBI?

CDPAP’s enrollment, workforce and total costs ballooned after the state relaxed eligibility rules in 2015. The number of people receiving care through the program surged from just under 20,000 in 2016 to almost 248,000 last year. New York state Medicaid spending on CDPAP in the last five years has more than tripled to about $9.1 billion.

New York needs to make changes to the program, which Hochul called “wildly expensive.”

…Jobs in home health make up an increasingly large share of the city and state’s overall economy. Between 2014 and 2024, home health aide jobs went from comprising 6% of New York City’s total private-sector jobs to 12%, according to Bill Hammond, the senior fellow for health policy at the Empire Center for Public Policy, a fiscally conservative think tank.

I am not sure all of these numbers fit together, and am not sure that the actual percentage of private sector jobs is 12 percent.  Nonetheless, the growth here seems quite rapid.  Here is more from Laura Nahmias at Bloomberg.
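
A quick back-of-the-envelope check of the quoted figures, using only the numbers in the excerpt above (this is a sanity check on the arithmetic, not a claim about the program's true cost structure):

```python
# Figures quoted in the excerpt above.
enrollees_2016 = 20_000
enrollees_2024 = 248_000
spending_2024 = 9.1e9      # annual NY Medicaid spending on CDPAP, in dollars
years = 2024 - 2016

# Implied annual enrollment growth and implied spending per recipient.
enrollment_cagr = (enrollees_2024 / enrollees_2016) ** (1 / years) - 1
per_recipient = spending_2024 / enrollees_2024

print(f"enrollment growth: {enrollment_cagr:.0%} per year")   # roughly 37%
print(f"spending per recipient: ${per_recipient:,.0f}")       # roughly $37,000
```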


When worlds collide

Caravaggio in Bar Italia

You can run `tty` to see your current TTY

This is not a super useful fact, but I thought it was cool and didn’t know it before: you can run `tty` to get the TTY device for your current terminal session.

For example, if I run `tty` and get `/dev/pts/0`, I can then go to a different terminal tab, run `echo blah > /dev/pts/0`, and then `blah` will show up in the original terminal tab.
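
Here's a small Python version of the same trick, in case a script wants to do it; the `/dev/pts/0` path is just an example, so substitute whatever `tty` reports in the other tab.

```python
import os

# Print the TTY device for the current terminal (what the `tty` command shows).
print(os.ttyname(0))  # file descriptor 0 is stdin

# Write a message to another terminal's device. Replace "/dev/pts/0" with the
# path that `tty` reported in that other tab.
with open("/dev/pts/0", "w") as other_tty:
    other_tty.write("blah\n")
```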

TOI-270 d: The Clearest Look Yet at a Sub-Neptune Atmosphere

Sub-Neptune planets are going to be occupying scientists for a long time. They’re the most common type of planet yet discovered, and they have no counterpart in our own Solar System. The media buzz about K2-18b that we looked at last time focused solely on the possibility of a biosignature detection. But this world, and another that I’ll discuss in just a moment, have a significance that can’t be confined to life. Because whether or not what is happening in the atmosphere of K2-18b is a true biosignature, the presence of a transiting sub-Neptune relatively close to the Sun offers immense advantages in studying the atmosphere and composition of this entire category.

Are these ‘hycean’ worlds with global oceans beneath an atmosphere largely made up of hydrogen? It’s a possibility, but it appears that not all sub-Neptunes are the same. Helpfully, we have another nearby transiting sub-Neptune, a world known as TOI-270 d, which at 73 light years is even closer than K2-18b, and has in recent work from the Southwest Research Institute provided us with perhaps the clearest data yet on the atmosphere of such a world. This work may prompt re-thinking of both planets, for rather than oceans, we may be dealing with rocky surfaces under a hot atmosphere.

TOI-270 d exists within its star’s habitable zone. The primary here is a red dwarf in the constellation Pictor, about 40 percent as massive as the Sun. Three planets are known in the system, discovered by the TESS satellite by detection of their transits.

SwRI’s Christopher Glein is lead author of the paper on this work. He and his team are working with data from the James Webb Space Telescope, collected by Björn Benneke and reported in a 2024 paper that was startling in its detail (citation below). Seeing TOI-270 d as a possible “archetype of the overall population of sub-Neptunes,” the Benneke paper describes it as a planet in which the atmosphere is not largely hydrogen but enriched throughout, blending hydrogen and helium with heavier elements (the term is ‘miscible’) rather than formed with a stratified hydrogen layer at the top.

Image: SwRI’s Christopher Glein. Credit: Ian McKinney/SwRI.

Glein acknowledges the appeal of planets with the potential for life, the search for which drives much of the energy in exoplanet research. And the new data offer much to consider:

“The JWST data on TOI-270 d collected by Björn Benneke and his team are revolutionary. I was shocked by the level of detail they extracted from such a small exoplanet’s atmosphere, which provides an incredible opportunity to learn the story of a totally alien planet. With molecules like carbon dioxide, methane and water detected, we could start doing some geochemistry to learn how this unusual world formed.”

TOI-270 d, in other words, offers up plenty of detail, with carbon dioxide, methane and water readily detected, allowing, as Glein notes, the possibility of doing a geochemical analysis to delve into not just the atmosphere’s composition, but how this super-Earth formed in the first place. We have to begin with temperature, for the gases that showed up in the JWST data were at temperatures close to 550 degrees C. Hotter, in other words, than the surface of Venus, a fact that we need to reckon with if we’re holding out hope for global oceans. For at these temperatures gases do some interesting things.

Here the term is ‘equilibration process.’ At a certain level of the atmosphere, pressures and temperatures are high enough that gases reach chemical equilibrium – they become a stable mix. Going higher means both temperature and pressure drop, thus slowing reaction rates. But it is possible for gases to move upward faster than their chemical reactions can adjust to the change, which ‘freezes’ in the composition that was set at the equilibrium level. The mixture ‘quenches,’ in the terminology, and at that point the chemical ratios can no longer change. Finding out where this happens allows scientists to interpret what they see in data taken much higher in the atmosphere.
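
To make the quench idea concrete, here is a toy numerical sketch: it compares an Arrhenius-style chemical timescale against an assumed vertical mixing timescale and reports where, moving upward, the chemistry can no longer keep up. Every constant below is invented for illustration; this is not the Glein et al. model, which is far more involved.

```python
import math

# Toy illustration of the quench level: all numbers are made up.
T_deep, T_top = 1200.0, 400.0   # kelvin, bottom and top of the model column
P_deep, P_top = 100.0, 0.01     # bar
E_over_R = 2.5e4                # pretend activation energy / gas constant, K
A = 1.1e6                       # pretend Arrhenius pre-factor, 1/s
t_mix = 1.0e7                   # assumed vertical mixing timescale, seconds

def chem_timescale(T: float) -> float:
    """Arrhenius-style reaction timescale: fast when hot, slow when cool."""
    return math.exp(E_over_R / T) / A

# Walk upward through the column. The quench level is where chemistry first
# becomes slower than mixing, freezing in the deep-atmosphere composition.
for i in range(101):
    frac = i / 100
    T = T_deep + (T_top - T_deep) * frac
    P = P_deep * (P_top / P_deep) ** frac   # log-spaced pressure levels
    if chem_timescale(T) > t_mix:
        print(f"quench near T = {T:.0f} K, P = {P:.2f} bar")
        break
```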

The upper atmosphere is thus left carrying the chemical signature of the deep atmosphere where equilibrium occurs. We draw these inferences from the data JWST takes of the upper atmosphere, which thereby offers a broader view of the atmosphere’s composition throughout.

The paper analyzes the balance between methane and carbon dioxide in terms of this quenching, as the relative amounts of the two gases become ‘frozen’ as they move upward. Working out where that balance would be set in a hydrogen-rich atmosphere allowed the team to determine that the quench occurred at temperatures between 885 K and 1112 K, with pressures ranging from one to 13 times Earth sea-level pressure. All this points to a thick, hot atmosphere, and one with a persistent conundrum.

For while models suggest that we should find ammonia in the atmosphere of temperate sub-Neptunes, it fails to appear. A nitrogen-poor atmosphere, the authors believe, is possibly the result of nitrogen being sequestered in a magma ocean. The speculation points to a world that is anything but hycean – no water oceans here! We may in fact be observing a planet with a thick atmosphere rich in hydrogen and helium that is well mixed with “metals” (elements heavier than helium), all of this over a rocky surface.

Image: An SwRI-led study modeled the chemistry of TOI-270 d, a nearby exoplanet between Earth and Neptune in size, finding evidence that it is a giant rocky world (super-Earth) surrounded by a deep, hot atmosphere. NASA’s JWST detected gases emanating from a region of the atmosphere over 530 degrees Celsius — hotter than the surface of Venus. The model illustrates a potential magma ocean removing ammonia (NH3) from the atmosphere. Hot gases then undergo an equilibration process and are lofted into the planet’s photosphere where JWST can detect them. Credit: SwRI / Christopher Glein.

The paper also notes the lack of carbon monoxide, explaining this by a model showing that CO would have frozen out even deeper in the atmosphere. Both modeling and data offer an explanation for TOI-270 d but also point to alternatives for K2-18 b. The modeling of the latter as an ocean world is but one explanation. Photochemical models show how difficult it is to produce and maintain enough methane under such conditions. Furthermore, K2-18 b likely receives too much stellar energy to maintain surface liquid water, due to greenhouse heating and limited atmospheric circulation. Thus the paper’s conclusion on K2-18b:

Because a revised deep-atmosphere scenario can accommodate depleted CO and NH3 abundances, the apparent absence of these species should no longer be taken as evidence against this type of scenario for TOI-270 d and similar planets, such as K2-18 b. Our results imply that the Hycean hypothesis is currently unnecessary to explain any data, although this does not preclude the existence of Hycean worlds.

This is a deep, rich analysis drawing plausible conclusions from clearer data than we had previously acquired from transiting sub-Neptunes. The question of water worlds under hydrogen atmospheres remains open, but the galvanizing nature of this paper is that it points to forms of analysis that until now we’ve been able to do only in our own Solar System. I think the authors are connecting dots in very useful ways here, pointing to the progress in exoplanetary science as we go ever deeper into atmospheres.

From the paper. The italics are mine:

Our overall philosophy was to develop modeling approaches that are rooted as much as possible in empirical experience. This experience includes fumaroles on Earth that constrain quench temperatures between redox species in hot gases, and making planets out of meteorites and cometary material to understand how different elements can reach different levels of enrichment in planetary atmospheres. Our approaches were simple, perhaps too simple in some cases if the goal is to accurately pinpoint the composition, present conditions, and history of the planet. If, instead, the goal is to suggest new ways of thinking about geochemistry on exoplanets that maintain focus on key variables and how they can be connected to observational data, as well as large-scale links between what we observe and how the atmosphere might have originated, then a different path to progress can be taken. The latter is the point of view we pursued.

The paper is Glein et al., “Deciphering Sub-Neptune Atmospheres: New Insights from Geochemical Models of TOI-270 d,” accepted at the Astrophysical Journal (preprint). The Benneke et al. paper is “JWST Reveals CH4, CO2, and H2O in a Metal-rich Miscible Atmosphere on a Two-Earth-Radius Exoplanet,” currently available as a preprint.

Today I'm Starting My 5th Year on Substack—But Have I Learned Anything?

I launched The Honest Broker on Substack four years ago today—on April 21, 2021. And it almost didn’t happen.

I’ll tell you why below. But first, let’s celebrate.

I’m wearing my party hat. I’m putting up balloons and streamers. I’m mixing up a batch of my secret party punch. We’re gonna pull out all the stops.

And you’re the guest of honor—because this only happened with your support and encouragement.

Thank you!

All of you are near and dear to me. But I want to give a special shoutout to the premium subscribers and founding members who help pay the bills here. I couldn’t operate as an honest broker in these turbulent times without your help.

And if you aren’t a premium subscriber, please consider becoming one.

There are all sorts of benefits—including access to more than 500 paywalled articles, as well as future updates, reviews, rants, best-of-year lists, and other perks.

Above all, you get the satisfaction of supporting frank and unfiltered reporting on arts, culture, media, and society. We need that now more than ever.


Please support my work by taking out a premium subscription (just $6 per month—or less).

Subscribe now


As part of the celebration, I’m launching my first ever online chat today for premium subscribers. I’ve never done this before on any platform—but this is a milestone moment and I’m ready to try new things.

The subject of today’s chat is how to build a community on Substack. I’ll share what I’ve learned since launching back in 2021—and listen to your questions, comments, and advice.

Because we’re all in this together.


HOW DID I GET HERE?

Back in 2021, I would have told you that there were no surprises left in my career. I’d been writing on music and culture since I was a teenager—and I’d seen it all.

I’d worked with all kinds of editors and publishers and periodicals. I’d had hits and misses. I’d had ups and downs. I’d seen fire and I’d seen rain. Etc. etc.

For better or worse, my story was already written—and it was probably time to ride off into the sunset.

So I looked forward to a more low-key lifestyle. I’d sit on the veranda with a good book, a glass of wine, and rare vinyl playing in the background.

I just needed to acquire a veranda.

But I was wrong. I had a huge surprise ahead of me.

That’s because I was now going to embark on the wildest ride yet of my already unusual career. And I didn’t know it was coming.

For a start, I’d now become my own editor and my own publisher—with my own periodical. (And there’s no tougher taskmaster than that face in the mirror.)

I’d also get to take on a new role—the way Bruce Wayne gets to be Batman. Of course, I’m a little different from him. I’ve never dreamed of becoming a superhero. But I did want to step forth in my new guise as—pause for dramatic effect—the Honest Broker.

The Honest Broker is sorta like Batman. At least I like to think so—he fights for the common good and calls out malefactors. The only difference is that I don’t live in a fancy mansion—with or without veranda—or have a butler or drive a cool Batmobile.

But doesn’t it feel like we’ve all been living in Gotham City for a long time now? Don’t we all have certain cape-and-cowl responsibilities?

Finally (and best of all), I was now going to have a closer relationship with readers than ever before. That was the thing I missed most about my career switch from musician to writer—the direct contact with an audience.

I’d now get it back.

But back in 2021, I didn’t think it was possible for an author to enjoy that direct and immediate connection with readers. But in this crazy new medium of Substacking, it absolutely is.

I was about to learn all this firsthand.


Substack’s Dan Stone (a longtime family friend) first suggested I join the platform back in 2020. By pure coincidence, I’d just subscribed to my first Substack the previous week—so I already knew about the platform.

I told Dan that I’d be a reader of Substacks, and even a paid subscriber. But I had no intention of writing one myself.

I was enjoying the good life. I was happy and living large. My books were selling well, and I had other interesting freelance opportunities.

So why join an unproven new venture?

But Dan is persistent and persuasive. So a few weeks later he made a second pitch—selling me again on the benefits of life on Substack.

I told him I’d think it over. And—to be honest—I did like the idea of being my own editor.

I’ve often battled with editors over the years—especially whenever I wanted to write about any subject besides jazz.

I thought my track record should entitle me to a little more freedom, but the opposite happened. The publishing world got more inflexible over time. With each passing year, editors became more cautious and risk averse.

I actually had an easier time with editors when I was young and unknown. Go figure! But the more success I enjoyed, the more they wanted me to march to their beat.

I don’t think this has anything to do with me—the whole media industrial complex has lost its mojo over the last 25 years, and I was just another casualty. But I felt totally worn out from battling with the publishing establishment.

That made me an obvious candidate for Substack.

But I was still afraid of the demands of launching and running a newsletter. Could I really publish consistently, and build a following?

I spent weeks agonizing over the question. I couldn’t make up my mind.

And then I caught COVID.

My COVID case wasn’t terrible. But it did leave me feeling drained and exhausted—which is unusual for me. The virus definitely knocked me off my game.

That made launching a Substack seem impossible. So, while recovering, I made the decision to decline Dan’s invitation for a second time.

I sat down with my family over dinner one night, and told them my verdict—because we always talk about these things over dinner at our home.

My little speech went something like this:

I like what Substack is doing, and I’m absolutely convinced it will have a positive impact on the culture. I believe that their publishing model is the way of the future.

But I don’t like doing anything unless I do it well. And the work involved in creating a successful newsletter is huge. I would need to stop all my other projects, and just focus on Substack.

I’d need to publish regularly. I’d need to maintain constant focus and intensity—day after day, week after week, month after month.

And I don’t see that happening.

Frankly, I’m not that ambitious. Family and quality of life are more important to me than career success.

At my age, I should be thinking about cutting back—not ramping up. And it doesn’t help that I feel so exhausted from COVID.

So I’m not going to do it. Substack will flourish—I’m certain of that—but it will flourish without me.

That was my speech. And I delivered it with conviction. End of story.

But it wasn’t the end of the story.

That’s because my smart wife Tara pushed back. She rarely tells me how to manage my career—but this time she did.

Here’s what she said:

You’re only saying this because COVID has drained your energy. But I know that you have always wanted more freedom as a writer—and I’m certain you would flourish in an environment where you had that freedom.

It’s not a question of ambition. It’s about happiness. You would enjoy doing this.

So don’t be hasty in walking away from this. Take a few more days to think about it. Let your body rest up and recover—and then make a decision.

As always, Tara was right.

A few days later, I felt back to 100%. And now the idea of doing something different—more open-ended and flexible—with an entirely new publishing business model seemed like the obvious move for me.

I also thought this might really be my calling—discovered late in life. But better late than never.

That’s because, at this stage in my life, my main goal is to have a positive impact as part of a community. Maybe Substack would be my pathway for doing that.


But could I actually write multiple articles every week?

There was only one way to find out. I created a workplan for a full year of Substack articles.

This meant that I needed to come up with more than 100 article ideas. Was that even possible?

I spent many days creating this workplan. And when I was finished it looked like this:

My anticipated workplan for my first 12 months on Substack

I stared at this spreadsheet for a long, long time. It’s one thing to make a plan—but turning it into a reality is a thousand times harder.

I now had doubts all over again. But I decided to spend the next month working on some of these essays and articles. I wanted to see how efficiently I could operate—but I especially wanted to find out if doing this felt energizing—or exhausting.

As the month of preparation proceeded, I found myself getting more and more excited. And even as I finished drafts of these articles, I came up with more ideas for other articles.

I could now see that operating as my own editor and publisher was intensely liberating. I was absolutely certain now that I wanted to push ahead and make this a reality.

There was only one missing piece of the puzzle—namely you.

Could I connect directly with my reader? Would we vibe together?

Well, there was only one way to find out. I had to launch this ship, and see if she sailed.

That happened four years ago today.

That was a million words ago—or, to be specific, 654 articles. During this period, our community has grown more than I ever imagined possible. We now have 235,000 members. I find that hard to believe.

And I still have a long list (100 plus) of future articles I want to write. (That’s what happens when you’re a journalist living in Gotham City—there’s never a shortage of stories.)

I also think that I should probably take advantage of Substack capacities I’ve ignored in the past—like chat or video or whatever.

If you have suggestions, let me know in the comments.

That’s because we’re still in this together. So let’s keep going, and see what we can do.

Severe Thunderstorms in the Southern and Central Plains; Flooding Threat in Western Hawaii

Showers and Thunderstorms in the South

Goldman: "When Will Growth Slow, and When Will We Know?"

Goldman Sachs economists put out a note this morning: When Will Growth Slow, and When Will We Know?

A few brief excerpts:
Most of the sequential inflation increases in the last trade war took place within 2-3 months of the tariffs’ implementation, and we expect spending growth to slow shortly after prices start rising.
...
[W]e expect to see continued softness in the survey data before the hard data start to weaken around mid-to-late summer. Our analysis cautions against dismissing the current deterioration in the survey data despite their recent record, and the evolution of the data in recent weeks is consistent with previous “event-driven” growth slowdowns. Still, it is too early to draw strong conclusions from the limited data we have so far, and we will continue to watch for indications of slower growth in the coming months.
It will take some time for tariffs and policy uncertainty to show up in the hard data. I think we will start seeing the impact of tariffs on inflation in the May or June reports (released in June and July).

We might see the impact earlier on New Home sales. New home sales are reported when the contract is signed, so the report tomorrow will be for contracts signed in March (prior to the April 2nd tariff shock).  But we might see policy and the stock market sell-off impacting April new home sales in the May report.

Tuesday assorted links

1. Brazil fact of the day.

2. My interview with Diario Financiero (Chilean, in Spanish).

3. The AGI Chronicles, a book in the works, I have high hopes.

4. Shruti Rajagopalan named to the Project Syndicate 30 Forward Thinkers list.

5. Max Romeo, RIP, one song by him.

6. Circle is launching a new, stablecoin-based payments and remittance network.

7. My Niskanen podcast with Matt Grossman on building a science of progress.

8. Herbert Gans, RIP (NYT).

9. Cluely, for cheating.

10. The AEA best paper awards.


library-mcp: working with Markdown knowledge bases

At work, we’ve been building agentic workflows to support our internal Delivery team on various accounting, cash reconciliation, and operational tasks. To better guide that project, I wrote my own simple workflow tool as a learning project in January. Since then, the Model Context Protocol (MCP) has become a prominent solution for writing tools for agents, and I decided to spend some time writing an MCP server over the weekend to build a better intuition.

The output of that project is library-mcp, a simple MCP that you can use locally with tools like Claude Desktop to explore Markdown knowledge bases. I’m increasingly enamored with the idea of “datapacks” that I load into context windows with relevant work, and I am currently working to release my upcoming book in a “datapack” format that’s optimized for usage with LLMs. library-mcp allows any author to dynamically build datapacks relevant to their current question, as long as they have access to their content in Markdown files.

A few screenshots tell the story. First, here’s a list of the tools provided by this server. These tools give a variety of ways to search through content and pull that content into your context window.

The image displays a pop-up window titled “Available MCP tools” describing several server library tools like get_by_date_range and get_by_slug_or_url, which are used to retrieve posts or content based on specific parameters. The tools utilize Model Context Protocol (MCP) to interact with specialized servers.

Each time you access a tool for the first time in a chat, Claude Desktop prompts you to verify you want that tool to operate. This is a nice feature, and I think it’s particularly important that approval is done at the application layer, not at the agent layer. If agents approve their own usage, well, security is going to be quite a mess.

The image shows a prompt asking for permission to run a tool from a local library called “list_all_tags,” with options to allow the action for the chat, allow once, or deny. There is a warning about potentially malicious actions from MCP servers.

Here’s an example of retrieving all the tags to figure out what I’ve written about. You could do a follow-up like, “Get me posts I’ve written about ‘python’” after seeing the tags. The interesting thing here is you can combine retrieval and intelligence. For example, you could ask “Get me all the tags I’ve written, and find those that seem related to software architecture” and it does a good job of filtering.

The image shows a dialogue analyzing blog content to identify the most frequently covered topics, highlighting “Management” with 209 posts, followed by “Django” with 72 posts, and “Python” with 66 posts. The analysis also includes a list of less frequently covered topics such as innovation, plausible, tailscale, and business, each with one post.
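
For intuition about what a tool like `list_all_tags` has to do under the hood, here is a minimal, self-contained Python sketch. It is not the actual library-mcp code, and it assumes each post keeps its tags on a `tags:` line in YAML front matter, which is only one common convention.

```python
from collections import Counter
from pathlib import Path

def list_all_tags(knowledge_base: Path) -> Counter:
    """Count how often each tag appears across Markdown posts.

    Assumes each post has front matter containing a line like:
        tags: [python, django, management]
    """
    counts = Counter()
    for post in knowledge_base.rglob("*.md"):
        for line in post.read_text(encoding="utf-8").splitlines():
            if line.startswith("tags:"):
                raw = line.removeprefix("tags:").strip().strip("[]")
                counts.update(t.strip() for t in raw.split(",") if t.strip())
                break  # only the front matter's tags line matters
    return counts

if __name__ == "__main__":
    for tag, count in list_all_tags(Path("content")).most_common(10):
        print(f"{tag}: {count}")
```

An MCP server wraps a function like this as a named tool and handles the protocol plumbing, so a client such as Claude Desktop can call it and pull the results into the context window.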

Finally, here’s an example of actually using a datapack to answer a question. In this case, it’s evaluating how my writing has changed between 2018 and 2025.

The image shows a request to compare blog posts from January 2018 and January 2025, with a focus on how writing has evolved. The analysis highlights three main topics from the 2018 posts: technical infrastructure concepts, personal writing goals, and diversity in tech teams.

More practically, I’ve already experimented with friends writing their CTO onboarding plans with Your first 90 days as CTO as a datapack in the context window, and you can imagine the right datapacks allowing you to go much further. Writing a company policy with all the existing policies in a datapack, along with a document about how to write policies effectively, for example, would improve consistency and be likely to identify conflicting policies.

Altogether, I am currently enamored with the vision of useful datapacks facilitating creation, and hope that library-mcp is a useful tool for folks as we experiment our way towards this idea.

strace's `--tips`

Two things I can’t believe I didn’t know earlier:

  1. strace’s official logo is an adorable ostrich
  2. If you pass --tips to strace, it’ll give you an ASCII-art ostrich with a speech bubble with an strace tip, like this: (via jade)
  ______________________________________________         ____  
 | You can use -o|COMMAND to redirect strace's  |      |-. .-.|
 | output to COMMAND.  This may be useful       \      (_@)(_@)
 | in cases when there is a redirection          \     .---_  \
 | in place for the traced program.  Don't       _\   /..   \_/
 | forget to escape the pipe character, though, /     |__.-^ /
 | as it is usually interpreted by the shell.   |         }  | 
 \______________________________________________/        |   [ 
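
If you want to try the `-o|COMMAND` form the tip describes (assuming strace is installed and you're on Linux), the shell version needs the pipe quoted, e.g. `strace -o '|grep openat' ls`; here's a small Python sketch that passes the same value as a single argument, which sidesteps the shell quoting entirely.

```python
import subprocess

# Equivalent to the shell command:  strace -o '|grep openat' ls
# Passing "|grep openat" as one argv element means no shell is involved,
# so the pipe character needs no extra escaping here.
subprocess.run(["strace", "-o", "|grep openat", "ls"], check=True)
```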

NMHC on Apartments: Market conditions Tightened in Q1 pre-Tariffs

Today, in the CalculatedRisk Real Estate Newsletter: NMHC on Apartments: Market conditions Tightened in Q1 pre-Tariffs

Excerpt:
From the NMHC: Apartment Market Sees Tighter Conditions, Rebounding Deal Flow and Improved Debt Financing in First Quarter
Changes in U.S. trade policy over the past two weeks have impacted global financial markets, causing stock prices to fall (and then partially recover) and long-term yields to increase amidst a retreat of capital from U.S. Treasuries.

This volatility had a noticeable effect on apartment market sentiment captured in the National Multifamily Housing Council’s (NMHC’s) latest Quarterly Survey of Apartment Market Conditions. More specifically, apartment executives who responded to this month’s survey after the announcement of tariffs on April 2nd—as opposed to the roughly half of respondents who responded in the days prior—were more likely to report worsening conditions for debt and equity financing as well as decreasing sales volume over the preceding three months.
...
NMHC Apartment Index
• The Market Tightness Index came in at 52 this quarter – above the breakeven level of 50 for the first time since July 2022 – indicating tighter market conditions. This also appears to be the only index value that was not meaningfully affected by market volatility this round (it makes sense that it would take longer to observe changes in the supply and demand for physical apartment space).
However, take this quarter’s survey results with a grain of salt. As economists at the NMHC mentioned, the negative impact of policy was probably not picked up in this quarter’s market tightness index.
There is much more in the article.

MBA Survey: Share of Mortgage Loans in Forbearance Decreases to 0.36% in March

From the MBA: Share of Mortgage Loans in Forbearance Decreases Slightly to 0.36% in March
The Mortgage Bankers Association’s (MBA) monthly Loan Monitoring Survey revealed that the total number of loans now in forbearance decreased by 2 basis points from 0.38% of servicers’ portfolio volume in the prior month to 0.36% as of March 31, 2025. According to MBA’s estimate, 180,000 homeowners are in forbearance plans. Mortgage servicers have provided approximately 8.6 million forbearances since March 2020.

The share of Fannie Mae and Freddie Mac loans in forbearance decreased 2 basis points to 0.13% in March 2025. Ginnie Mae loans in forbearance decreased by 1 basis point to 0.83%, and the forbearance share for portfolio loans and private-label securities (PLS) decreased 4 basis points to 0.33%.

“Overall mortgage performance improved in March, with more borrowers making their mortgage payments and fewer borrowers in forbearance and loan workouts compared to the prior month,” said MBA’s Vice President of Industry Analysis Marina Walsh, CMB. “This monthly improvement may be tied to several factors such as receipt of tax refunds and homeowner recovery from natural disasters.”

Added Walsh, “The labor market is relatively healthy, which is helping mortgage performance remain strong. However, compared to one year ago, there are fewer borrowers current on their mortgages. Also, more borrowers in loan workouts – particularly those with FHA loans – are having difficulty staying current.”
...
By reason, 76.0% of borrowers are in forbearance for reasons such as a temporary hardship caused by job loss, death, divorce, or disability. Another 21.4% are in forbearance because of a natural disaster. The remaining 2.6% of borrowers are still in forbearance because of COVID-19.
emphasis added
At the end of March, there were about 180,000 homeowners in forbearance plans.

DOJ-in-Exile, A Further Elaboration

I’ve been gratified at just how much response and interest I’ve got to my proposal for a DOJ-in-Exile project. I’ve heard from so many people either wanting to volunteer their time or work for such a project or help get it off the ground that I haven’t even been able to respond to everyone yet. But I’m very encouraged by the interest. As I said yesterday, this isn’t something I am envisioning running. I don’t have the expertise and I’m already doing something. I’m trying to bring together interested people and potentially funders and thus hopefully play some role in bringing it into existence.

To help bring the idea into more focus I thought I’d try to flesh out the concept.

1) The core work product has to be serious-minded, substantive and come from people who not only know the law but how prosecutors work. I explained in the first post the general civic value of such an enterprise. To have that value it can’t traffic in hyperbole or go off half-cocked. The value stems from people across the political spectrum as well as working lawyers looking at the briefs or indictments-in-waiting and saying, “Okay, that’s real.” Of course, that doesn’t mean MAGA influencers have to give any thumbs up. What matters isn’t what people say publicly for effect but what they think and know privately. If you’re currently sharing classified information on insecure platforms or exfiltrating personal tax data out of IRS computer systems I would want your lawyer to be able to look at these documents and tell you, “Sir, yeah, that could be a problem for you.” That level of quality and expertise is also the sine qua non for press pick up. That standard must be hit for it to become that kind of resource for reporters trying to understand the nature of what’s happening. That makes it pervasive in the broader political, legal and news discourse.

2) Even though the work product must pass muster for people who understand the law, it must also be accessible and visually compelling for the wider public. That does not mean dumbing anything down. Only a subset of the public is going to have the time or interest to actually read through memos or indictments-in-waiting. But they should be there for those who want to dig deeper – well formatted, organized in a clear and compelling way, shareable across social platforms.

3) Specificity matters above all else. It’s not enough to say that there are widespread examples of misuse of statutorily protected private taxpayer data. No one cares about a general discussion of different laws or the equities implicated by them. You have to attach the violation of particular laws to specific individuals, with an enumeration of the available evidence and relevant statutes. The enterprise is inherently tentative since a DOJ-in-Exile lacks the investigative powers to subpoena documents, impanel grand juries and so forth. But quite a lot can be assembled by a careful review of quality journalism, facts revealed through litigation, interviews with witnesses and more. A more definitive and complete investigation would have to come from a DOJ under a future administration.

4) There are a number of reasons for such an exercise. The first is simply to illustrate the scale of law-breaking and public corruption taking place while the Department of Justice has called a hiatus on enforcing the law with respect to Republicans and those working at their behest. Occasional references to possible crimes toward the ends of articles have no staying power. A sliver of ice on hot summer pavement will melt in seconds. A one-ton block of ice takes days. Scale and concentration have a powerful effect. Second is to provide a record for a future administration. Third is deterrence. People should know right now that impunity isn’t forever. They should know specifically the criminal infractions they’re committing, the available evidence and that they may eventually face a legal reckoning. The public should know.

Fourth is something more general. I’ve noted the disorienting, reality-distorting nature of this moment in history. Crimes are committed openly and appear to have no consequence. We are used to something different – a roughly predictable order of actions and reactions, an ascending scale of consequences. Public evidence of criminal conduct triggers investigations. Investigations may lead to prosecutions. Prosecutions may lead to conviction and punishment. Without that accustomed order of events everything seems disjointed and untethered. Our minds try to impose some order or logic. If there are no consequences maybe that means there are no crimes? Perhaps up is actually down?

Under instrument flight rules pilots are taught to ignore all their physical sensations and perceptions of speed, attitude, up and down and only pay attention to the flight instruments. The instrument panel is reality. Simply because laws go unenforced doesn’t mean the law doesn’t exist. It doesn’t mean crimes aren’t being committed. A major point of a DOJ-in-Exile would be to serve as political society’s instrument panel in a period of public deception. Even when you can’t change the reality it is still critical to know what the reality is. It’s the only path to changing it.

Please keep getting in touch. I’m relying on your input.

BREAKING: DOJ Scraps Plan to Shutter Tax Division

I just learned that the Department of Justice has shelved its plan to essentially shutter the Department’s Tax Division. The plan had been to disperse the Division’s lawyers to U.S. Attorney’s Offices around the country and maintain a very small residual oversight office at Main Justice. This would satisfy, at least in the view of DOJ’s current political leadership, statutory requirements. But it would trigger big departures of lawyers unwilling to relocate around the country and dilute and dissipate institutional knowledge and organizational focus.

Division personnel were informed this evening that that plan has been shelved in favor of a new plan. Under this new reorganization plan the Tax Division’s criminal side would be placed within the Criminal Division and the civil side within the Civil Division. So there’s an element of institutional demotion. But these two parts of the Tax Division would remain more or less intact within these new homes, as I understand it. It’s still a major change but dramatically different than the original reorganization plan which I reported on April 8th. It’s not clear yet based on tonight’s email to Tax Division staff whether this means there will no longer be a “Tax Division” within the DOJ. It appears not. But under this new plan it would be more a matter of reorganizing the org chart than actually cutting the operational units and personnel.

Green Investments Might Not Be That Easy To Kill

In my late 2024 post-election brainstorming, another idea of mine was to create a structure for pressing Republican Reps who threatened to cancel the green energy investments in their districts under the Inflation Reduction Act. It was a matter of some consternation for Democrats at the time, but those investments were overwhelmingly in Republican districts — something like 75% of them. There were a few explanations of that at the time, one of which was that the money was focused on areas that were in whatever way “passed over” in the city-centric prosperity of the early 21st century. But we’re seeing another one of the benefits now and it’s precisely that dynamic I was keen to mobilize: it makes these investments much harder for a future Republican administration to claw back.

Reuters has a piece up this morning about how it’s developing as a major complication for the Trump tax cut bill. According to Reuters, 11 of the 26 GOP Reps on the Ways and Means Committee, the folks who write tax bills in the House, represent districts getting substantial amounts of IRA green investment money. The article is replete with examples of these reps and in many cases truly eye-popping levels of investment in individual districts driven by the IRA. As just one example, Rep. David Kustoff’s Tennessee district has drawn $6.5 billion of investment over the last four years, most of it from Ford and a South Korean partner company pursuant to subsidies in the IRA. That’s the most of any Ways and Means tax-writer, and a mind-bending amount of investment for a single district.

Notwithstanding this, I’d still be leasing billboard space next to all these new plants and putting up permanent signs that say, Brought to You by Democrats (with zero Republican votes) under President Joe Biden.

In all seriousness, if you’re an eligible billionaire or even centi-millionaire (I’m not proud), hit me up. I’ve got an endless list of good ideas. I can give them to you and you’ll get an extremely high civic/political ROI and everyone will think you’re super cool and smart and doing great work like Frederick Douglass.

Congratulations to the 2025 John Bates Clark Medalist Stefanie Stantcheva

Awesome choice, she is one of my favorite economists, and she is also a super-nice person.  Here is the report and list of winners, here are previous mentions on MR.

The post Congratulations to the 2025 John Bates Clark Medalist Stefanie Stantcheva  appeared first on Marginal REVOLUTION.

       


Deregulation suggestions

If you have ideas for cutting regulations, the US government wants to hear from you! This could be important. Provide details on the exact regulation in the CFR.

The post Deregulation suggestions appeared first on Marginal REVOLUTION.

       


The Name of the Game Is Rallying Your Base and Discouraging Theirs

Or at least seventy percent of the game (boldface mine):

Our estimate is that the voters who voted in the 2025 Supreme Court election backed Kamala Harris by 7 points in 2024. In other words, roughly 70% of Susan Crawford’s win margin was attributable to changes in who was voting, rather than changes in how people voted. While the persuasion she got would still have been enough to flip the state with a November 2024 electorate (which was Trump +1), the landslide victory largely came down to a big turnout advantage

The geographic differences confirm preexisting patterns of educated white voters punching above their weight for Democrats, and the impact of the low-propensity voter realignment is especially visible in the western part of the state. Looking forward to 2026, this coalition that Wisconsin Democrats have arranged could prove crucial to flipping the state legislature and holding their existing statewide offices.

Roughly a year ago, some asshole with a blog noted:

And I’m old enough to remember when Democrats were the party of low-propensity voters. In other words, Democrats used to want high levels of turnout because unlikely/less likely voters were disproportionately Democratic…

A side effect of this, which isn’t relevant for 2024, is that Democrats will likely do very well in many off-year elections–and those elections do count–a majority built in off-year elections can pass legislation just as well as one built in a presidential year.

The Wisconsin judicial election seems to back that up:

These turnout dynamics in Wisconsin invert previous Democratic political thought. Previously, low-propensity working class voters of all races showing up in presidential years used to benefit Democrats. In midterms, those voters would stay home, usually giving Republicans an advantage. But this has not been the case of late — in 2022, 2023, and 2025, the voters that showed up were much more Democratic than the partisanship of the state.

Like I wrote, votes cast by senators and congressmen elected in off-year elections count as much as they do when cast by presidential year winners. But Democrats also need to realize that, to win general elections, they have to rally their voters, try to turn out some of the highly unlikely voters, and discourage Republican voters.

Vibe Coding, Vibe Checking, and Vibe Blogging

For the past decade and a half, I’ve been exploring the intersection of technology, education, and design as a professor of cognitive science and design at UC San Diego. Some of you might have read my recent piece for O’Reilly Radar where I detailed my journey adding AI chat capabilities to Python Tutor, the free visualization tool that’s helped millions of programming students understand how code executes. That experience got me thinking about my evolving relationship with generative AI as both a tool and a collaborator.

I’ve been intrigued by this emerging practice called “vibe coding,” a term coined by Andrej Karpathy that’s been making waves in tech circles. Simon Willison describes it perfectly: “When I talk about vibe coding I mean building software with an LLM without reviewing the code it writes.” The concept is both liberating and slightly terrifying—you describe what you need, the AI generates the code, and you simply run it without scrutinizing each line, trusting the overall “vibe” of what’s been created.

My relationship with this approach has evolved considerably. In my early days of using AI coding assistants, I was that person who meticulously reviewed every single line, often rewriting significant portions. But as these tools have improved, I’ve found myself gradually letting go of the steering wheel in certain contexts. Yet I couldn’t fully embrace the pure “vibe coding” philosophy; the professor in me needed some quality assurance. This led me to develop what I’ve come to call “vibe checks”—strategic verification points that provide confidence without reverting to line-by-line code reviews. It’s a middle path that’s worked surprisingly well for my personal projects, and today I want to share some insights from that journey.

Vibe Coding in Practice: Converting 250 HTML Files to Markdown

I’ve found myself increasingly turning to vibe coding for those one-off scripts that solve specific problems in my workflow. These are typically tasks where explaining my intent is actually easier than writing the code myself, especially for data processing or file manipulation jobs where I can easily verify the results.

Let me walk you through a recent example that perfectly illustrates this approach. For a class I teach, I had students submit responses to a survey using a proprietary web app that provided an HTML export option. This left me with 250 HTML files containing valuable student feedback, but it was buried in a mess of unnecessary markup and styling code. What I really wanted was clean Markdown versions that preserved just the text content, section headers, and—critically—any hyperlinks students had included in their responses.

Rather than writing this conversion script myself, I turned to Claude with a straightforward request: “Write me a Python script that converts these HTML files to Markdown, preserving text, basic formatting, and hyperlinks.” Claude suggested using the BeautifulSoup library (a solid choice) and generated a complete script that would process all files in a directory, creating a corresponding Markdown file for each HTML source.

(In retrospect, I realized I probably could have used Pandoc for this conversion task. But in the spirit of vibe coding, I just went with Claude’s suggestion without overthinking it. Part of the appeal of vibe coding is bypassing that research phase where you compare different approaches—you just describe what you want and roll with what you get.)
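
If you're curious what such a script looks like, here is a minimal sketch in the same spirit (a hypothetical reconstruction on my part, not the code Claude actually generated; the directory name and the exact set of tags handled are assumptions):

    # Hypothetical sketch, not the actual Claude-generated script.
    # Requires BeautifulSoup: pip install beautifulsoup4
    import pathlib
    from bs4 import BeautifulSoup

    SRC_DIR = pathlib.Path("survey_exports")  # assumed name for the folder of HTML exports

    def html_to_markdown(html: str) -> str:
        soup = BeautifulSoup(html, "html.parser")
        # Rewrite <a href="..."> elements as Markdown links so hyperlinks survive.
        for a in soup.find_all("a", href=True):
            a.replace_with(f"[{a.get_text(strip=True)}]({a['href']})")
        parts = []
        for el in soup.find_all(["h1", "h2", "h3", "h4", "p"]):
            text = el.get_text(" ", strip=True)
            if not text:
                continue
            if el.name.startswith("h"):
                parts.append("#" * int(el.name[1]) + " " + text)  # headers -> #, ##, ...
            else:
                parts.append(text)
        return "\n\n".join(parts)

    for html_file in SRC_DIR.glob("*.html"):
        markdown = html_to_markdown(html_file.read_text(encoding="utf-8"))
        # Write the .md file alongside its .html source; the original is never modified.
        html_file.with_suffix(".md").write_text(markdown, encoding="utf-8")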

True to the vibe coding philosophy, I didn’t review the generated code line by line. I simply saved it as a Python file, ran it on my directory of 250 HTML files, and waited to see what happened. This “run and see” approach is what makes vibe coding both liberating and slightly nerve-wracking—you’re trusting the AI’s interpretation of your needs without verifying the implementation details.

Trust and Risk in Vibe Coding: Running Unreviewed Code

The moment I hit “run” on that vibe-coded script, I realized something that might make many developers cringe: I was executing completely unreviewed code on my actual computer with real data. In traditional software development, this would be considered reckless at best. But the dynamics of trust feel different with modern AI tools like Claude 3.7 Sonnet, which has built up a reputation for generating reasonably safe and functional code.

My rationalization was partly based on the script’s limited scope. It was just reading HTML files and creating new Markdown files alongside them—not deleting, modifying existing files, or sending data over the network. Of course, that’s assuming the code did exactly what I asked and nothing more! I had no guarantees that it didn’t include some unexpected behavior since I hadn’t looked at a single line.

This highlights a trust relationship that’s evolving between developers and AI coding tools. I’m much more willing to vibe code with Claude or ChatGPT than I would be with an unknown AI tool from some obscure website. These established tools have reputations to maintain, and their parent companies have strong incentives to prevent their systems from generating malicious code.

That said, I’d love to see operating systems develop a “restricted execution mode” specifically designed for vibe coding scenarios. Imagine being able to specify: “Run this Python script, but only allow it to CREATE new files in this specific directory, prevent it from overwriting existing files, and block internet access.” This lightweight sandboxing would provide peace of mind without sacrificing convenience. (I mention only restricting writes rather than reads because Python scripts typically need to read various system files from across the filesystem, making read restrictions impractical.)

Why not just use VMs, containers, or cloud services? Because for personal-scale projects, the convenience of working directly on my own machine is hard to beat. Setting up Docker or uploading 250 HTML files to some cloud service introduces friction that defeats the purpose of quick, convenient vibe coding. What I want is to maintain that convenience while adding just enough safety guardrails.

Vibe Checks: Simple Scripts to Verify AI-Generated Code

OK now come the “vibe checks.” As I mentioned earlier, the nice thing about these personal data processing tasks is that I can often get a sense of whether the script did what I intended just by examining the output. For my HTML-to-Markdown conversion, I could open up several of the resulting Markdown files and see if they contained the survey responses I expected. This manual spot-checking works reasonably well for 250 files, but what about 2,500 or 25,000? At that scale, I’d need something more systematic.

This is where vibe checks come into play. A vibe check is essentially a simpler script that verifies a basic property of the output from your vibe-coded script. The key here is that it should be much simpler than the original task, making it easier to verify its correctness.

For my HTML-to-Markdown conversion project, I realized I could use a straightforward principle: Markdown files should be smaller than their HTML counterparts since we’re stripping away all the tags. But if a Markdown file is dramatically smaller—say, less than 40% of the original HTML size—that might indicate incomplete processing or content loss.

So I went back to Claude and vibe coded a check script. This script simply:

  1. Found all corresponding HTML/Markdown file pairs
  2. Calculated the size ratio for each pair
  3. Flagged any Markdown file smaller than 40% of its HTML source
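
Concretely, the check script can be as small as this (again a hypothetical sketch, assuming the Markdown files sit alongside their HTML sources):

    # Hypothetical vibe-check sketch: flag Markdown outputs that look suspiciously small.
    import pathlib

    RATIO_THRESHOLD = 0.40  # flag .md files smaller than 40% of their .html source

    for html_file in pathlib.Path("survey_exports").glob("*.html"):  # assumed folder name
        md_file = html_file.with_suffix(".md")
        if not md_file.exists():
            print(f"MISSING : {md_file.name}")
            continue
        ratio = md_file.stat().st_size / html_file.stat().st_size
        if ratio < RATIO_THRESHOLD:
            print(f"CHECK ME: {md_file.name} is only {ratio:.0%} of its HTML source")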

And lo and behold, the vibe check caught several files where the conversion was incomplete! The original script had failed to properly extract content from certain HTML structures. I took these problematic files, went back to Claude, and had it refine the original conversion script to handle these edge cases.

After a few iterations of this feedback loop—convert, check, identify issues, refine—I eventually reached a point where there were no more suspiciously small Markdown files (well, there were still a few below 40%, but manual inspection confirmed these were correct conversions of HTML files with unusually high markup-to-content ratios).

Now you might reasonably ask: “If you’re vibe coding the vibe check script too, how do you know that script is correct?” Would you need a vibe check for your vibe check? And then a vibe check for that check? Well, thankfully, this recursive nightmare has a practical solution. The vibe check script is typically an order of magnitude simpler than the original task—in my case, just comparing file sizes rather than parsing complex HTML. This simplicity made it feasible for me to manually review and verify the vibe check code, even while avoiding reviewing the more complex original script.

Of course, my file size ratio check isn’t perfect. It can’t tell me if the content was converted with the proper formatting or if all hyperlinks were preserved correctly. But it gave me a reasonable confidence that no major content was missing, which was my primary concern.

Vibe Coding + Vibe Checking: A Pragmatic Middle Ground

The take-home message here is simple but powerful: When you’re vibe coding, always build in vibe checks. Ask yourself: “What simpler script could verify the correctness of my main vibe-coded solution?” Even an imperfect verification mechanism dramatically increases your confidence in results from code you never actually reviewed.

This approach strikes a nice balance between the speed and creative flow of pure vibe coding and the reliability of more rigorous software development methodologies. Think of vibe checks as lightweight tests—not the comprehensive test suites you’d write for production code, but enough verification to catch obvious failures without disrupting your momentum.

What excites me about the future is the potential for AI coding tools to suggest appropriate vibe checks automatically. Imagine if Claude or similar tools could not only generate your requested script but also proactively offer: “Here’s a simple verification script you might want to run afterward to ensure everything worked as expected.” I suspect if I had specifically asked for this, Claude could have suggested the file size comparison check, but having this built into the system’s default behavior would be incredibly valuable. I can envision specialized AI coding assistants that operate in a semi-autonomous mode—writing code, generating appropriate checks, running those checks, and involving you only when human verification is truly needed.

Combine this with the kind of sandboxed execution environment I mentioned earlier, and you’d have a vibe coding experience that’s both freeing and trustworthy—powerful enough for real work but with guardrails that prevent catastrophic mistakes.

And now for the meta twist: This entire blog post was itself the product of “vibe blogging.” At the start of our collaboration, I uploaded my previous O’Reilly article, “Using Generative AI to Build Generative AI,” as a reference document. This gave Claude the opportunity to analyze my writing style, tone, and typical structure—much like how a human collaborator might read my previous work before helping me write something new.

Instead of writing the entire post in one go, I broke it down into sections and provided Claude with an outline for each section one at a time. For every section, I included key points I wanted to cover and sometimes specific phrasings or concepts to include. Claude then expanded these outlines into fully formed sections written in my voice. After each section was drafted, I reviewed it—my own version of a “vibe check”—providing feedback and requesting revisions until it matched what I wanted to say and how I wanted to say it.

This iterative, section-by-section approach mirrors the vibe coding methodology I’ve discussed throughout this post. I didn’t need to write every sentence myself, but I maintained control over the direction, messaging, and final approval. The AI handled the execution details based on my high-level guidance, and I performed verification checks at strategic points rather than micromanaging every word.

What’s particularly interesting is how this process demonstrates the same principles of trust, verification, and iteration that I advocated for in vibe coding. I trusted Claude to generate content in my style based on my outlines, but I verified each section before moving to the next. When something didn’t quite match my intent or tone, we iterated until it did. This balanced approach—leveraging AI capabilities while maintaining human oversight—seems to be the sweet spot for collaborative creation, whether you’re generating code or content.

Epilogue: Behind the Scenes with Claude

[Claude speaking]

Looking back at our vibe blogging experiment, I should acknowledge that Philip noted the final product doesn’t fully capture his authentic voice, despite having his O’Reilly article as a reference. But in keeping with the vibe philosophy itself, he chose not to invest excessive time in endless refinements—accepting good-enough rather than perfect.

Working section-by-section without seeing the full structure upfront created challenges, similar to painting parts of a mural without seeing the complete design. I initially fell into the trap of copying his outline verbatim rather than transforming it properly.

This collaboration highlights both the utility and limitations of AI-assisted content creation. I can approximate writing styles and expand outlines but still lack the lived experience that gives human writing its authentic voice. The best results came when Philip provided clear direction and feedback.

The meta-example perfectly illustrates the core thesis: Generative AI works best when paired with human guidance, finding the right balance between automation and oversight. “Vibe blogging” has value for drafts and outlines, but like “vibe coding,” some form of human verification remains essential to ensure the final product truly represents what you want to say.

[Philip speaking so that humans get the final word…for now]

OK, this is the only part that I wrote by hand: My parting thought when reading over this post is that I’m not proud of the writing quality (sorry Claude!), but if it weren’t for an AI tool like Claude, I would not have written it in the first place due to lack of time and energy. I had enough energy today to outline some rough ideas, then let Claude do the “vibe blogging” for me, but not enough to fully write, edit, and fret over the wording of a full 2,500-word blog post all by myself. Thus, just like with vibe coding, one of the great joys of “vibe-ing” is that it greatly lowers the activation energy of getting started on creative personal-scale prototypes and tinkering-style projects. To me, that’s pretty inspiring.

OpenAI o3 and o4-mini System Card

OpenAI o3 and o4-mini System Card

I'm surprised to see a combined System Card for o3 and o4-mini in the same document - I'd expect to see these covered separately.

The opening paragraph calls out the most interesting new ability of these models (see also my notes here). Tool usage isn't new, but using tools in the chain of thought appears to result in some very significant improvements:

The models use tools in their chains of thought to augment their capabilities; for example, cropping or transforming images, searching the web, or using Python to analyze data during their thought process.

Section 3.3 on hallucinations has been gaining a lot of attention. Emphasis mine:

We tested OpenAI o3 and o4-mini against PersonQA, an evaluation that aims to elicit hallucinations. PersonQA is a dataset of questions and publicly available facts that measures the model's accuracy on attempted answers.

We consider two metrics: accuracy (did the model answer the question correctly) and hallucination rate (checking how often the model hallucinated).

The o4-mini model underperforms o1 and o3 on our PersonQA evaluation. This is expected, as smaller models have less world knowledge and tend to hallucinate more. However, we also observed some performance differences comparing o1 and o3. Specifically, o3 tends to make more claims overall, leading to more accurate claims as well as more inaccurate/hallucinated claims. More research is needed to understand the cause of this result.

Table 4: PersonQA evaluation

Metric                                 o3     o4-mini   o1
accuracy (higher is better)            0.59   0.36      0.47
hallucination rate (lower is better)   0.33   0.48      0.16

The benchmark score on OpenAI's internal PersonQA benchmark (as far as I can tell no further details of that evaluation have been shared) going from 0.16 for o1 to 0.33 for o3 is interesting, but I don't know if it's interesting enough to produce dozens of headlines along the lines of "OpenAI's o3 and o4-mini hallucinate way higher than previous models".

The paper also talks at some length about "sandbagging". I’d previously encountered sandbagging defined as meaning “where models are more likely to endorse common misconceptions when their user appears to be less educated”. The o3/o4-mini system card uses a different definition: “the model concealing its full capabilities in order to better achieve some goal” - and links to the recent Anthropic paper Automated Researchers Can Subtly Sandbag.

As far as I can tell this definition relates to the American English use of “sandbagging” to mean “to hide the truth about oneself so as to gain an advantage over another” - as practiced by poker or pool sharks.

(Wouldn't it be nice if we could have just one piece of AI terminology that didn't attract multiple competing definitions?)

o3 and o4-mini both showed some limited capability to sandbag - to attempt to hide their true capabilities in safety testing scenarios that weren't fully described. This relates to the idea of "scheming", which I wrote about with respect to the GPT-4o model card last year.

Tags: ai-ethics, generative-ai, openai, o3, ai, llms

Decentralizing Schemes

Decentralizing Schemes

Tim Bray discusses the challenges faced by decentralized Mastodon in that shared URLs to posts don't take into account people accessing Mastodon via their own instances, which breaks replies/likes/shares etc unless you further copy and paste URLs around yourself.

Tim proposes that the answer is URIs: a registered fedi://mastodon.cloud/@timbray/109508984818551909 scheme could allow Fediverse-aware software to step in and handle those URIs, similar to how mailto: works.

Bluesky have registered at: already, and there's also a web+ap: prefix registered with the intent of covering ActivityPub, the protocol used by Mastodon.
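
To make the idea concrete, here's a rough sketch of my own (not from Tim's post) of how Fediverse-aware software might map such a URI onto the reader's home instance, assuming Mastodon's usual URL layout for remote posts (in practice the client would resolve the post through its instance's API, but the resulting URL has this shape):

    # Illustrative sketch only; the home instance name is made up.
    from urllib.parse import urlsplit

    HOME_INSTANCE = "example.social"  # the reader's own instance

    def fedi_to_home_url(uri: str) -> str:
        parts = urlsplit(uri)   # e.g. fedi://mastodon.cloud/@timbray/109508984818551909
        origin = parts.netloc   # the instance the post lives on
        handle, post_id = parts.path.strip("/").split("/", 1)
        return f"https://{HOME_INSTANCE}/{handle}@{origin}/{post_id}"

    print(fedi_to_home_url("fedi://mastodon.cloud/@timbray/109508984818551909"))
    # -> https://example.social/@timbray@mastodon.cloud/109508984818551909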

Tags: social-media, tim-bray, bluesky, urls, mastodon, decentralisation

AI assisted search-based research actually works now

For the past two and a half years the feature I've most wanted from LLMs is the ability to take on search-based research tasks on my behalf. We saw the first glimpses of this back in early 2023, with Perplexity (first launched December 2022, first prompt leak in January 2023) and then the GPT-4 powered Microsoft Bing (which launched/cratered spectacularly in February 2023). Since then a whole bunch of people have taken a swing at this problem, most notably Google Gemini and ChatGPT Search.

Those 2023-era versions were promising but very disappointing. They had a strong tendency to hallucinate details that weren't present in the search results, to the point that you couldn't trust anything they told you.

In this first half of 2025 I think these systems have finally crossed the line into being genuinely useful.

Deep Research, from three different vendors

First came the Deep Research implementations - Google Gemini and then OpenAI and then Perplexity launched products with that name and they were all impressive: they could take a query, then churn away for several minutes assembling a lengthy report with dozens (sometimes hundreds) of citations. Gemini's version had a huge upgrade a few weeks ago when they switched it to using Gemini 2.5 Pro, and I've had some outstanding results from it since then.

Waiting a few minutes for a 10+ page report isn't my ideal workflow for this kind of tool. I'm impatient, I want answers faster than that!

Last week, OpenAI released search-enabled o3 and o4-mini through ChatGPT. On the surface these look like the same idea as we've seen already: LLMs that have the option to call a search tool as part of replying to a prompt.

But there's one very significant difference: these models can run searches as part of the chain-of-thought reasoning process they use before producing their final answer.

This turns out to be a huge deal. I've been throwing all kinds of questions at ChatGPT (in o3 or o4-mini mode) and getting back genuinely useful answers grounded in search results. I haven't spotted a hallucination yet, and unlike prior systems I rarely find myself shouting "no, don't search for that!" at the screen when I see what they're doing.

Here are four recent example transcripts:

Talking to o3 feels like talking to a Deep Research tool in real-time, without having to wait for several minutes for it to produce an overly-verbose report.

My hunch is that doing this well requires a very strong reasoning model. Evaluating search results is hard, due to the need to wade through huge amounts of spam and deceptive information. The disappointing results from previous implementations usually came down to the Web being full of junk.

Maybe o3, o4-mini and Gemini 2.5 Pro are the first models to cross the gullibility-resistance threshold to the point that they can do this effectively?

Google and Anthropic need to catch up

The user-facing Google Gemini app can search too, but it doesn't show me what it's searching for. As a result, I just don't trust it. This is a big missed opportunity since Google presumably have by far the best search index, so they really should be able to build a great version of this. And Google's AI assisted search on their regular search interface hallucinates wildly to the point that it's actively damaging their brand. I just checked and Google is still showing slop for Encanto 2!

Claude also finally added web search a month ago but it doesn't feel nearly as good. It's using the Brave search index which I don't think is as comprehensive as Bing or Gemini, and searches don't happen as part of that powerful reasoning flow.

The truly magic moment for me came a few days ago.

My Gemini image segmentation tool was using the @google/generative-ai library which has been loudly deprecated in favor of the still in preview Google Gen AI SDK @google/genai library.

I did not feel like doing the work to upgrade. On a whim, I pasted my full HTML code (with inline JavaScript) into ChatGPT o4-mini-high and prompted:

This code needs to be upgraded to the new recommended JavaScript library from Google. Figure out what that is and then look up enough documentation to port this code to it.

(I couldn't even be bothered to look up the name of the new library myself!)

... it did exactly that. It churned away thinking for 21 seconds, ran a bunch of searches, figured out the new library (which existed way outside of its training cut-off date), found the upgrade instructions and produced a new version of my code that worked perfectly.

Screenshot of AI assistant response about upgrading Google Gemini API code. Shows "Thought for 21 seconds" followed by web search results for "Google Gemini API JavaScript library recommended new library" with options including Google AI for Developers, GitHub, and Google for Developers. The assistant explains updating from GoogleGenerativeAI library to @google-ai/generative, with code samples showing: import { GoogleGenAI } from 'https://cdn.jsdelivr.net/npm/@google/genai@latest'; and const ai = new GoogleGenAI({ apiKey: getApiKey() });

I ran this prompt on my phone out of idle curiosity while I was doing something else. I was extremely impressed and surprised when it did exactly what I needed.

How does the economic model for the Web work now?

I'm writing about this today because it's been one of my "can LLMs do this reliably yet?" questions for over two years now. I think they've just crossed the line into being useful as research assistants, without feeling the need to check everything they say with a fine-tooth comb.

I still don't trust them not to make mistakes, but I think I might trust them enough that I'll skip my own fact-checking for lower-stakes tasks.

This also means that a bunch of the potential dark futures we've been predicting for the last couple of years are a whole lot more likely to become true. Why visit websites if you can get your answers directly from the chatbot instead?

The lawsuits over this started flying back when the LLMs were still mostly rubbish. The stakes are a lot higher now that they're actually good at it!

I can feel my usage of Google search taking a nosedive already. I expect a bumpy ride as a new economic model for the Web lurches into view.

Tags: gemini, anthropic, openai, llm-tool-use, o3, search, ai, llms, google, generative-ai, perplexity, chatgpt, ai-ethics, llm-reasoning, ai-assisted-search, deep-research

Two Men, One Space Gun

Rockets spew fire and produce tons of noise, which makes them cool and sexy, if you’re into fire and noise, which is to say, if you’re human.

Also cool, however, is a 10-kilometer-long space gun that simply blasts objects into orbit with less obvious drama.

Making such a gun is the dream project for Mike Grace and Nathan Saichek, the co-founders of Longshot Space based in Oakland, California. And their efforts to date are the subject of our latest video filmed during a recent visit to their engineering compound.

Longshot falls into the category of kinetic launch systems. These are machines that try to get objects into space without all the fuel, engines and other engineering baggage associated with rockets. Lots of people think kinetic launch systems – other examples include SpinLaunch and Auriga Space – are crazy, and they sort of are.

But they also make a lot of sense when you consider that gravity is a huge pain and that rockets are very inefficient. Roughly 95 percent of a rocket’s mass goes toward getting it off Earth, leaving a few percent behind for the actual payload.
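
As a back-of-the-envelope check on that figure (the numbers below are ballpark assumptions, not Longshot's), the Tsiolkovsky rocket equation alone forces most of a rocket's liftoff mass to be propellant:

    # Rough sanity check using the Tsiolkovsky rocket equation; values are assumptions.
    import math

    delta_v = 9_400.0    # m/s to reach low Earth orbit, including gravity and drag losses
    v_exhaust = 3_300.0  # m/s, roughly kerosene/LOX-class engines

    mass_ratio = math.exp(delta_v / v_exhaust)  # liftoff mass / burnout mass
    propellant_fraction = 1 - 1 / mass_ratio
    print(f"propellant alone: {propellant_fraction:.0%} of liftoff mass")  # ~94%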

Kinetic launch systems focus on putting the gravity-defeating infrastructure on the ground instead of in the air. The hope then is that you can blast objects into space cheaper and faster.

One of the major downsides with this approach, though, is that you’re hurling sensitive electronics through the atmosphere and creating all sorts of conditions that electronics tend not to enjoy.

Mike and Nathan care not for the naysayers and have been building a smaller version of their gun inside of a shipping container. It works, and it’s awesome. You’ll see.

For more on the history of space guns and the physics behind them, we have some additional deep dives with the Longshot team.

Same-sex marriage challenges, redux

 The NYT has the story:

Same-Sex Marriage Is the Law of the Land. Some States Are Debating It Anyway.
State efforts to urge the Supreme Court to reconsider same-sex marriage have not advanced, but they have reopened the issue.
By Amy Harmon

"In half a dozen states, Republican lawmakers have introduced resolutions urging the Supreme Court to overturn its 2015 decision, Obergefell v. Hodges. In Tennessee, a Republican legislator has proposed a new category of “covenant” marriages between “one male and one female.” And in several states, including Virginia and Oregon, Democrats are laying the groundwork to repeal old state statutes and constitutional amendments that prohibited same-sex marriage, which could come back into effect should Obergefell be overturned."

########

Earlier:

Wednesday, December 14, 2022 Biden Signs Bill to Protect Same-Sex Marriage Rights

 


 

Working Through the Fear of Being Seen

Working Through the Fear of Being Seen

Heartfelt piece by Ashley Willis about the challenge of overcoming self-doubt in publishing online:

Part of that is knowing who might read it. A lot of the folks who follow me are smart, opinionated, and not always generous. Some are friends. Some are people I’ve looked up to. And some are just really loud on the internet. I saw someone the other day drag a certain writing style. That kind of judgment makes me want to shrink back and say, never mind.

Work to avoid being somebody who discourages others from sharing their thoughts.

Via @ashley.dev

Tags: blogging

The Three Reasons Trump Despises Jerome Powell

Official White House photo by Andrea Hanks


President Trump is very mad at Federal Reserve chair Jerome Powell, ostensibly because Powell hasn’t said he’s going to lower interest rates as Trump has demanded. In fact, there are two other very important reasons Trump despises Powell, which I explain below. But first, today’s presidential tantrum:

Calling Powell “a major loser” is not in fact going to make it more likely that interest rates come down. The Fed is taking a wait-and-see approach at the moment, which seems reasonable given the contradictory implications of Trump’s tariffs. On one hand, the tariffs will almost certainly raise prices, and the Fed’s usual response to inflation is to raise interest rates. On the other, the tariffs — along with the firing of tens or hundreds of thousands of federal workers, the degradation of all kinds of government services, and the likelihood that the upcoming Republican budget will include brutal cuts to programs like Medicaid — may well set off a ruinous recession, and the Fed’s usual response to a recession is to lower interest rates.

It’s quite the dilemma. And we should note that the Fed chair doesn’t make interest rate decisions on his own; the Fed’s Open Market Committee arrives at a consensus at its eight regularly scheduled meetings each year on what to do (though of course the chair wields significant influence). In the meantime, Powell and the Fed are standing pat.

Trump is livid about this and is floating the idea of firing Powell, which according to the law as it currently exists (i.e. pending the Supreme Court rewriting it) the president does not have the power to do. And the markets are not exactly pleased:

This is itself a reflection of how investors have realized that Trump is an unhinged toddler who cannot be trusted not to crash the world economy. After all, if he was the economic genius he fancies himself to be, they’d be thrilled at the prospect of him replacing the Fed chair with someone who would respond to his every demand, since that would guarantee limitless prosperity. Apparently, anyone who doesn’t acknowledge his wisdom is BAD AT BUSINESS:

Reason #2 Trump hates Powell: he’s too honest

Anyone whose memory goes back more than two decades probably still finds it odd that Powell speaks in public regularly, and when he does so he sounds, if not completely freewheeling, at least like a human being. During Alan Greenspan’s 18-year tenure as Fed chair between 1987 and 2006, Americans got used to the chair’s public utterances being so maddeningly vague that it was impossible to discern what he thought or might do. He once told Congress, only barely exaggerating, that “Since I’ve become a central banker, I’ve learned to mumble with great incoherence. If I seem unduly clear to you, you must have misunderstood what I said.” His two successors, Ben Bernanke and Janet Yellen, were only slightly less cautious in public.

Compared to that, Powell is refreshingly candid and forthcoming; while he may not say precisely what the Fed plans to do in the future, he doesn’t try to be inscrutable. Which at a moment like this means he will be quoted saying things that don’t reflect well on Trump. Most notably, he said recently that there is a “strong likelihood” that Trump’s tariffs will raise consumer prices (which of course they will), and that creates a “challenging scenario” for the Fed as it seeks to hold overall inflation down. Absolutely no one could argue that he was wrong, but it still led to a wave of negative stories about Trump’s economic policies, stories that no doubt enraged the president.

Reason #3: Powell is a reminder of a past Trump hates

But the most important reason Trump hates Powell is that he is a vestige of a past Trump is trying to overcome, a period when Trump was constrained by established systems and swayed by advisors he later came to view as traitors.

When Trump nominated Powell to be Fed chair in 2017, Powell had already been on the Fed board of governors for years. He was a Republican, but was known as a pragmatic centrist. In other words, he was exactly the kind of figure any Republican president would appoint. While there is little reporting on the decision-making process, at the time Trump was surrounded by reasonably sane economic advisors, some of whom would later quit in disgust.

At that point there was little indication that Trump had been manipulated into picking Powell, or that he was unhappy with the Fed at all; my guess is they said “Everybody likes Jay Powell, he’ll be a steady hand,” and Trump said “Sure, he seems fine.” He did interview a few candidates for the job, including Yellen, whom he called “a wonderful woman who has done a terrific job.” And with both inflation and unemployment low, he had little reason to be dissatisfied.

But over time, Trump’s displeasure with the establishment Republicans inside and outside his administration grew. In 2019, he tried to nominate buffoonish right-wing pundit Stephen Moore and failed presidential candidate and renowned nincompoop Herman “Ubeki-beki-beki-beki-stan-stan” Cain to the Federal Reserve board; both withdrew after Senate Republicans made clear they couldn’t get confirmed. The rejection of his picks no doubt left a sour taste in Trump’s mouth.

Which brings us to today. One of the central themes of Trump’s second term is a lack of constraint: He is not only unconstrained emotionally, he has made sure to staff his administration only with cultists who will not only never question him, but will work like busy beavers to remove any bureaucratic or legal impediments to the swift realization of his will. Republicans in Congress have abdicated any shred of independence; just about whatever he asks for, he’ll get. This is the presidency Trump always wanted.

Yet there Jay Powell sits, wielding power over the economy yet refusing to do Trump’s bidding. Right after the election Powell was asked at a press conference whether he’d resign if Trump asked him to, and he gave a one-word answer: “No.” One can only imagine how angry that made Trump, and he is angry still.

But there’s good news for the president, and bad news for the rest of us: Powell’s term is up next year, and Trump will get to appoint someone new to chair the Federal Reserve. He can pick Don Jr. or Lee Greenwood or whoever he saw on Fox that morning, and the Senate will probably confirm his choice. By that time, who knows how bad things will have gotten.


The pundit's dilemma

“There are no American troops in Baghdad!” — Mohammed Saeed al-Sahhaf

Around the middle of 2022, it became apparent that the Biden administration was going to go all-in on industrial policy. The policies that Biden implemented — the CHIPS Act, and the green energy subsidies in the Inflation Reduction Act — had already been in the works for quite a while. But so had a bunch of other Democratic policy ideas that didn’t end up making it into law. So it wasn’t until 2022 that we realized that industrial policy was going to be Biden’s Big Thing.

I was personally overjoyed about this trend. For years, I had worried that America’s eroding manufacturing base made it less capable of standing up to China and other authoritarian adversaries. I was also optimistic that a big push for green energy could accelerate decarbonization both at home and abroad. And for years, I had been deeply interested in the successful industrial policies of East Asian countries like Korea and Japan during their catch-up development phases. I was eager to see America try its hand at something similar, in order to revive its stagnant industrial base.

At the same time, I didn’t agree with everything the administration was trying to do. I realized that if the U.S. did experience a manufacturing revival, it would be by leveraging our strengths in automation and technology — in other words, even in the best-case scenario, the era of good plentiful blue-collar manufacturing jobs wasn’t going to come back. Some people in the Biden administration didn’t seem to understand that, and appeared to view industrial policy as a jobs program.

I also disagreed with some of the ways that the Biden administration tried to implement industrial policy. The “buy American” provisions were unnecessary and unhelpful (local demand will foster a network of domestic suppliers anyway). The “everything bagel” requirements for subsidies were too onerous, particularly the union labor requirements in states with very few union workers.

As one of the more prominent commentators writing about these policies, I had a choice to make. I had to decide whether to emphasize my criticisms of Biden’s approach, or my overall positivity about the idea. Then, as the years went by and we started to see some of the results of Biden’s policies, the choice became even more acute, because some of the policies succeeded and some of them failed.

The administration’s core industrial policies seemed to be bearing fruit; there was a massive factory construction boom, most of it financed with private money (the subsidies acting mainly as a nudge). The TSMC plant in Arizona got back on track after some initial hitches. But at the same time, Biden’s administration failed to build any EV chargers or rural broadband, and new transmission lines were stymied by regulatory hurdles. It seemed like the more the government was directly involved with a project, the less likely it was to ever happen; nudging the private sector worked, but direct government action turned into wasteful jobs programs that built nothing.

I had to make a decision: How vehemently and how frequently should I lambast the Biden administration’s failures, and how vehemently and how frequently should I praise its successes? If I leaned too hard toward praise, I might look like an apparatchik who just parroted administration talking points. And even worse, I might fail to perform one of my core functions as an independent commentator — criticizing policy to help steer it in the right direction. But on the other hand, if I leaned too much toward criticism, the people in the administration might decide that I was their political opponent, and stop listening to me entirely. And I might also throw the baby out with the bathwater by ignoring the real successes.

To be honest, I didn’t actually think very hard about any of these tradeoffs at the time. But perhaps subconsciously, I was making this sort of calculation. In any case, I think I ended up striking a pretty good balance between criticism and praise.

Conservative industrialists, however, are facing a much harder dilemma right now. Biden’s industrial policy was a mixed bag, with more successes than failures. But Trump’s tariff policy is a giant flaming disaster. The dollar is down, as investors flee American bonds, putting the country’s whole financial stability in danger. Forecasts for the real economy are getting more pessimistic by the day. Stocks are down yet again. Here’s a representative headline:

Any time you hear the words “worst since 1932” in connection with the American economy, you know things are not going well.

Given this disaster, commentators who support Trump have a hard choice to make. Do they denounce Trump’s policies, thus condemning themselves to certain excommunication from the MAGA movement, and dropping their influence to basically zero without altering Trump’s course in any perceptible way? Or do they scramble to find some way to put lipstick on the tariff pig?

The latter course of action will preserve their influence within MAGA-land, but will also make them look ridiculous to the rest of the country, who has no such ideological commitments to uphold. And in the process, the commentators will become an accomplice to this act of economic self-mutilation.

Oren Cass, the chief economist at American Compass and a prominent advocate of industrial policy from the right, has chosen the second path. He is continuing to fiercely defend the idea that tariffs are a way to reindustrialize the American economy. In a post on April 4th, shortly after Trump’s “Liberation Day” tariffs, Cass defended the tariffs and made some minor suggestions for how Trump might tweak the policy:

As the economic carnage from the tariffs became more apparent, Cass continued to double down. In a debate with Jason Furman and others a week later, Cass argued that trade had devastated American manufacturing in the 2000s. A few days after that, he argued that many Americans will enjoy working in factories:

[I]f fully 25% of poll respondents say they’d prefer a factory job to their current job, that suggests massive potential unmet by the current labor market…One in four?! That would be 40 million of 160 million employed Americans!…When we talk about reindustrializing America, we are talking about adding millions of manufacturing jobs, not tens of millions—several percentage points of the labor force, not a quarter or more…[P]lease, let’s stop with the “nobody wants to work in a factory,” and even more so the “I don’t see you working in factory.”

And a few days after that, Cass argued that Trump’s tariff strategy is primed to succeed, because meeting American domestic demand is more important than exporting:

Our $1 trillion trade deficit presents an enormous opportunity. Most economic analysis of the American position in the global economy seems to assume that export markets represent the key opportunity for U.S. industry and thus success depends upon winning in those markets. From that perspective, for instance, tariffs on the intermediate goods that manufacturers might use in their own production is a disaster. How can a U.S. producer hope to compete with a German producer if the U.S. producer has to pay a tariff on components from China and the German producer does not?

This assumption makes good sense in the typical developing-country situation where the domestic market is small and trade imbalances are likely a minor factor…But the top priority and major opportunity for the United States is not higher exports, it is recovering the capacity to meet domestic demand with domestic supply…Indeed, the U.S. will be building leading-edge semiconductor fabs to meet domestic demand for a long time before it needs to consider export markets at all.

If you’re going to try and mount an intellectual defense of Trump’s tariff policies — instead of just screaming out memes, pointing fingers, and trying to distract people by talking about immigration instead — this is basically how you have to do it. Nothing good is coming of Trump’s tariffs right now, so if you’re going to defend them, you basically have to argue that they represent short term pain for long term gain. In this case, the long-term “gain” is economic self-sufficiency and a bonanza of factory jobs.

Formally speaking, you can’t prove that this is wrong, except by waiting a bunch of years and then observing that reindustrialization didn’t happen. There are no hard and fast economic laws of the Universe; we have a lot of theories, but economies are complex beasts, and the past is an imperfect guide to the future. I can’t completely rule out the possibility that Trump’s tariffs will cause a vast crop of steel factories and shoe factories and semiconductor factories to spring up from the American topsoil like mushrooms after the rain.

And yet when we look at what’s actually happening to American manufacturing in real time, it doesn’t look anything at all like the beginning of the reindustrialization that Cass imagines. Here’s a quick roundup of news from just the last few days:

  • Volvo is cutting 800 jobs at three U.S. factories, citing tariff uncertainty.

  • The Philadelphia Manufacturing Survey is plunging, signaling extreme pessimism among U.S. manufacturers.

  • Howmet Aerospace, a major aircraft parts manufacturer based in Pittsburgh, has declared that it may halt production due to Trump’s tariffs.

  • The April NY Manufacturing Survey is recording some of the most negative conditions that it has ever recorded, with new orders and shipments falling off a cliff:

[Chart of the April NY Fed manufacturing survey, showing new orders and shipments falling. Source: Heather Long]
  • The Philadelphia Fed’s survey of manufacturers is also showing a massive dropoff in new orders.

  • Falling orders are causing Volvo to lay off hundreds of American workers at two factories.

  • Plenty of evidence shows that American manufacturers are pulling back on their capital spending plans because of tariffs:

    Manufacturing pessimism has spiked dramatically in April as industrial producers deal with changing tariff plans and try to assess how global trade policy will impact their costs and operations in the coming months…Several surveys released this week show huge swings in business confidence between January, when most manufacturers had a positive outlook for near-term conditions…and April, when sentiment changed for the much, much worse…Some of the most dramatic declines came from the Equipment Leasing Finance Foundation (ELFF), an organization that represents lenders that help manufacturers obtain new capital equipment for factories. In March, more than half of manufacturers surveyed by that group expected capital spending to increase or stay about the same in the next four months. By April, more than 61% said they expect spending to fall.

  • Ford is halting sales of some American-made cars to China. (Note: This will increase America’s trade deficit with China).

  • GM is laying off American factory workers too.

  • News of various other factory layoffs is proliferating. Cleveland Cliffs, the steel company, is laying off 1200 workers.

It’s not hard to understand why American manufacturing is getting hit hard by tariffs. As I’ve said in many posts, and as knowledgeable folks are screaming from the rooftops, broad tariffs of the type Trump has imposed raise the price of imported components and make U.S. manufacturers less competitive. Even manufacturers who initially scoffed at this effect are quickly learning what a big problem it is.

And on top of the effect of the tariffs themselves, uncertainty about future tariffs — which has spiked to record levels under Trump — hurts manufacturing even more, because manufacturers don’t know how to plan their future supply chains.

This is not a difficult principle to understand. And yet somehow, Oren Cass appears not to grasp it. In his post that I quoted above, Cass argues that the “imported components” problem only applies to exports, not to manufacturing for the domestic market. That is obviously wrong. For an American auto factory in Kentucky, the cost of components matters just as much whether you’re selling your cars to customers in Dubai or Dallas. Without the ability to source cheap car parts from Mexico and Canada, American manufacturers will see their costs for domestic manufacturing rise.

This is why you can’t just measure the U.S. trade deficit and assume that this amount will be added to U.S. manufacturing if we wall off our economy (as Cass assumes in his post). The amount of domestic demand for manufactured products is not fixed. More expensive components will mean more expensive manufactured products, which will cause Americans to consume less. Americans will become poorer, and they’ll also substitute their consumption away from goods and toward services (whose price will go up by less).

Cass’ hand-waving dismissal of exports also ignores another crucial factor: scale effects. Export markets help American manufacturers scale up their production, which lowers their costs — and thus helps them sell even more to American consumers, capturing more of the U.S. market as well. Commentators like Sam Hammond, who argue that the U.S. should focus on export promotion instead of import substitution, are correct for this reason.

(On top of all that, it’s not clear how much tariffs even reduce trade deficits. As Arnaud Costinot and Ivan Werning show in a new theory paper, it’s possible that even very high tariffs might leave trade deficits mostly unchanged, while simply making an economy poorer.)

All this is theoretical, of course. Cass and other tariff defenders can simply continue to assume that tariffs will reduce trade deficits by a lot, and that this reduction will translate into a whole bunch of new U.S. factories. Perhaps they assume that the unfolding carnage in the U.S. manufacturing industry is simply a temporary disruption, and that after a short period of suffering, American manufacturers will get back on their feet, invest in domestic production capacity, and eventually do better than ever.

Except if that were true, we’d probably see two things. First, we’d see manufacturing stocks doing well, because far-sighted investors would anticipate that American manufacturers would eventually benefit from tariffs. Instead, the stocks of American manufacturers have been crashing. Also, if tariffs provided a protective shield under which American manufacturers could flourish, we’d probably see a bunch of them rushing to invest and build new capacity. But we don’t; capex in the industry is plummeting.

In other words, neither investors nor manufacturers themselves believe in the long-term reindustrialized future that Oren simply assumes will come about. Instead, it appears that tariffs are dramatically accelerating America’s deindustrialization.

And if that’s true, Cass’ other arguments all go up in smoke. Who cares if people like working factory jobs when there are fewer factory jobs to work in? And so on. Every defense of the tariffs is based on this assumption of reindustrialization; if, as looks very likely, that turns out to be a fantasy, the whole edifice collapses.

Oren Cass and other tariff-defending pundits have thus hitched their wagons to a stagecoach that is driving straight off a cliff. They have managed to remain inside the favored circle of the MAGA movement, but this required them to make a terrible sacrifice — they lost their freedom to look out the window and see the calamity unfolding.

I am glad that during the Biden administration, I didn’t have to make any such choice.



1. Some of these — like the big subsidies for care industries in the Build Back Better bill — got killed because of lack of political support from the center. Others, like wealth taxes, co-determination, and sectoral bargaining, just never really caught fire.

Trump’s Cultural Revolution

I’m in Lisbon, speaking at a conference the Banco de Portugal is holding to commemorate the revolution that brought democracy to Portugal 50 years ago. I worked at the Bank in 1976 and have been a friend of Portugal ever since. And while Portugal has faced many challenges since the Carnation Revolution, all in all its democracy has flourished.

Alas, democracy in my own nation is now under dire threat. So I thought I’d write a short post about that today. Probably another brief post tomorrow. Then my wife and I will be on a bike trip, with at most quick notes from the road.

Donald Trump has been treated very, very badly. At least that’s what he says all the time, and there’s no reason to doubt that it’s how he feels. Hardly a day goes by without another such outburst.

Above all, he clearly feels rage toward people who, he imagines, think they’re smarter or better than him.

And he and the movement he leads, composed of people possessed by similar rage, are seeking retribution. Retribution against whom? Yes, they hate wokeness. But three months in, it’s obvious that the MAGA types want revenge not just on their political opponents but on everyone they consider elites — a group that, as they see it, doesn’t include billionaires, but does include college professors, scientists and experts of any kind.

It took no time at all for the Trumpists to move from trying to purge government agencies of DEI to trying to control the content of medical journals.

Don’t try to sanewash what’s happening. It’s evil, but it isn’t calculated evil. That is, it’s not a considered political strategy, with a clear end goal. It’s a visceral response from people who, as Thomas Edsall puts it, are addicted to revenge.

If you want a model for what’s happening to America, think of Mao’s Cultural Revolution.

But wait, wasn’t Mao hard left while America has been taken over by the hard right? Well, why do you think there’s a big difference between the two? I’m a believer in horseshoe theory, which says that the extreme left and the extreme right are more like each other than either is like the political center. For example, among Britain’s unions there is a hard-left faction that has no counterpart in the United States. Some of its positions, notably making apologies for Vladimir Putin’s invasion of Ukraine, look a lot like MAGA.

And in America some leftist commentators have effectively become spokesmen for the tech-bro right.

Once you’ve seen the parallel between what MAGA is trying to do and China’s Cultural Revolution, the similarities are everywhere. Maoists sent schoolteachers to do farm labor; Trumpists are talking about putting civil servants to work in factories.

The Cultural Revolution was, of course, a huge disaster for China. It inflicted vast suffering on its targets and also devastated the economy. But the Maoists didn’t care. Revenge was their priority, never mind the effects on GDP.

The Trumpists are surely the same. Their rampage will, if unchecked, have dire economic consequences. Right now we’re all focused on tariff madness, but undermining higher education and crippling scientific research will eventually have even bigger costs. But don’t expect them to care, or even to acknowledge what’s happening. Trump has already declared that the inflation everyone can see with their own eyes is fake news.

There is, however, one big difference between Chairman Mao in 1966 and President Trump in 2025: Trump probably — probably doesn’t have the cards.

Until a couple of weeks ago, as one institution after another capitulated to Trump’s demands, it was hard to avoid the sickening feeling that American civil society would fold without a fight. But as I said, Trump and his movement are driven by visceral urges, not strategy. And right now it looks as if they overreached. In different ways, the rendition of innocent people to gulags in El Salvador — don’t call it deportation — and the assault on Harvard seem to have stiffened spines. And the catastrophe of Trump’s economic policy has alienated businesspeople who would otherwise have served as his useful idiots.

America as we know it may yet perish. But at this point we seem to have a chance.


The truth about love

Ancient stone sculpture of a reclining female figure on a smooth surface against a grey background.

In Plato’s Symposium, Socrates shared a theory of love from the teachings of a ‘non-Athenian woman’. Who was she really?

- by Armand D’Angour

Read at Aeon

My debate with Dani Rodrik about tariffs and free trade

This occurred in Knoxville, you can watch it here.  Lots of fun, and p.s. I am more of a free trader than he is.  We did have some disagreements.

The post My debate with Dani Rodrik about tariffs and free trade appeared first on Marginal REVOLUTION.

       


Is this a lot or a little?

“The Effect of Deactivating Facebook and Instagram on Users’ Emotional State” — by Hunt Allcott et al.

We estimate the effect of social media deactivation on users’ emotional state in two large randomized experiments before the 2020 U.S. election. People who deactivated Facebook for the six weeks before the election reported a 0.060 standard deviation improvement in an index of happiness, depression, and anxiety, relative to controls who deactivated for just the first of those six weeks. People who deactivated Instagram for those six weeks reported a 0.041 standard deviation improvement relative to controls. Exploratory analysis suggests the Facebook effect is driven by people over 35, while the Instagram effect is driven by women under 25.
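For readers less used to effects quoted in standard deviations, here is a minimal sketch (my own illustration in Python, not the authors' code or data) of how a standardized well-being index and a roughly 0.06 SD treatment-control difference are typically computed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated survey data: three well-being items for a control and a treatment group.
# Purely invented numbers; the real study uses pre-registered survey indices.
n = 200_000
control = rng.normal(loc=0.0, scale=1.0, size=(n, 3))
treated = rng.normal(loc=0.035, scale=1.0, size=(n, 3))  # small uniform improvement

# Standardize each item with the control group's mean and SD, then average into an index.
mu, sd = control.mean(axis=0), control.std(axis=0)
index_control = ((control - mu) / sd).mean(axis=1)
index_treated = ((treated - mu) / sd).mean(axis=1)

# Report the difference in index means in units of the control group's index SD.
effect = (index_treated.mean() - index_control.mean()) / index_control.std()
print(f"effect size: {effect:.3f} SD")  # roughly 0.06 with these toy inputs
```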

What is wrong with the simple model that Facebook and Instagram allow you to achieve some very practical objectives, such as staying in touch with friends or expressing your opinions, at the cost of only a very modest annoyance (which to be clear existed in earlier modes of communication as well)?

Here is also a new paper on phone app usage in the classroom, by Billur Aksoy, Lester R. Lusher, and Scott E. Carrell:

Phone usage in the classroom has been linked to worsened academic outcomes. We present findings from a field experiment conducted at a large public university in partnership with an app marketed as a soft commitment device that provides incentives to reduce phone use in the classroom. We find that app usage led to improvements in classroom focus, attendance, and overall academic satisfaction. Analysis of time spent outside the classroom suggests a potential substitution effect: students using the app allocated less time to study, particularly on campus. Overall, though statistically insignificant, we find improvements in transcript grades associated with app usage.

Again NBER.  I just do not see the compelling case for the apocalyptic interpretations here.

The post Is this a lot or a little? appeared first on Marginal REVOLUTION.

       


Is this one galaxy or two?


Taking ‘remote care’ to new heights — how space can shape the future of healthcare

Concept art of a crewed lunar base. Credit: ESA – P. Carril

Imagine you’re on the moon, digging up lunar regolith. You’re focused on the task at hand when you suddenly slip and fall, right into your pickaxe. Even in the moon’s […]

The post Taking ‘remote care’ to new heights — how space can shape the future of healthcare appeared first on SpaceNews.

Taking the Ground Out of Ground Systems

The Russia-Ukraine war is arguably the first commercial space war on account of its use of private companies’ imagery for tracking and targeting. The conflict has demonstrated the advantage of […]

The post Taking the Ground Out of Ground Systems appeared first on SpaceNews.

SpaceX launches cargo Dragon to ISS with additional crew supplies

CRS-32 launch

SpaceX launched a Dragon spacecraft to the International Space Station April 21 whose cargo includes more crew supplies and fewer science experiments than usual.

The post SpaceX launches cargo Dragon to ISS with additional crew supplies appeared first on SpaceNews.

Chinese orbital logistics startup InfinAstro raises angel round funding

A digital rendering of InfinAstro’s “Space Bus” orbital transfer vehicle in space, carrying multiple small satellites and approaching Earth orbit.

Chinese startup InfinAstro has secured early funding for its plan to fill a gap in China’s commercial on-orbit services.

The post Chinese orbital logistics startup InfinAstro raises angel round funding appeared first on SpaceNews.

OMB suggests NOAA scale back plans for geostationary satellites

SAN FRANCISCO – A White House budget proposal calls for replacing the National Oceanic and Atmospheric Administration’s future geostationary satellite constellation, GeoXO, with a far less expensive and ambitious program. […]

The post OMB suggests NOAA scale back plans for geostationary satellites appeared first on SpaceNews.

DARPA requests proposals for water-prospecting lunar orbiter

GRAIL

DARPA is seeking proposals for a small lunar orbiter that could be used to test operations in very low orbits while prospecting for water ice.

The post DARPA requests proposals for water-prospecting lunar orbiter appeared first on SpaceNews.

Monday 21 April 1662

This morning I attempted to persuade my wife in bed to go to Brampton this week, but she would not, which troubles me, and seeing that I could keep it no longer from her, I told her that I was resolved to go to Portsmouth to-morrow. Sir W. Batten goes to Chatham to-day, and will be back again to come for Portsmouth after us on Thursday next.

I went to Westminster and several places about business. Then at noon dined with my Lord Crew; and after dinner went up to Sir Thos. Crew’s chamber, who is still ill. He tells me how my Lady Duchess of Richmond and Castlemaine had a falling out the other day; and she calls the latter Jane Shore, and did hope to see her come to the same end that she did.

Coming down again to my Lord, he told me that news was come that the Queen is landed; at which I took leave, and by coach hurried to White Hall, the bells ringing in several places; but I found there no such matter, nor anything like it. So I went by appointment to Anthony Joyce’s, where I sat with his wife and Mall Joyce an hour or two, and so her husband not being at home, away I went and in Cheapside spied him and took him into the coach. Home, and there I found my Lady Jemimah, and Anne, and Madamoiselle come to see my wife, whom I left, and to talk with Joyce about a project I have of his and my joyning, to get some money for my brother Tom and his kinswoman to help forward with her portion if they should marry. I mean in buying of tallow of him at a low rate for the King, and Tom should have the profit; but he tells me the profit will be considerable, at which I was troubled, but I have agreed with him to serve some in my absence.

He went away, and then came Mr. Moore and sat late with me talking about business, and so went away and I to bed.

Read the annotations

Tuesday: Richmond Fed Mfg

Mortgage Rates From Matthew Graham at Mortgage News Daily: Mortgage Rates Jump Back Toward 7%
The latest headlines involve heavy criticism of Fed Chair Powell on the part of The President. Without any comment on whether that criticism is justified, we can still observe that markets find it unsettling. Traders are expressing that sentiment by pushing stocks lower and rates higher.

Mortgage rates jumped fairly sharply today, with the average lender moving up from 6.87% to just under 7.00% for top tier 30yr fixed scenarios. [30 year fixed 6.98%]
emphasis added
Tuesday:
• At 10:00 AM ET, Richmond Fed Survey of Manufacturing Activity for April.

Links 4/21/25

Links for you. Science:

HHS ‘Reductions to Absurdity,’ One Week On — We now have more details on last week’s cuts; the analysis is not pretty
‘The Great Educator, Sadly, Is Going to Be These Viruses’: CounterSpin interview with Paul Offit on RFK Jr. and measles
Public Health Org Calls for RFK Jr. to ‘Resign or Be Fired’
Government shuts CDC office focused on alcohol-related harms and prevention
Tuberculosis Is Back in the Spotlight. Does the U.S. Even Care?
American Farmers and the USDA Had Finally Embraced Their Role in the Climate Crisis. Then Came the Federal Funding Freeze. Critics say the Trump administration’s halt to billions in conservation spending could cause long-term damage and slow hard-won progress.

Other:

The Bargain Vs. The Boot
Turning the little screws: What Trump’s trade war means for AI, automation and making iPhones in America
White nationalist could soon be in charge of counterterrorism unit (far more dangerous to Jews than some dipshit college student)
Cheery
America Is Trying To Form An Anti-China Trade Bloc
RFK Jr.’s “MAHA” movement doesn’t want to eliminate chronic illness. They want to eliminate the chronically ill. (I sometimes strongly disagree with the author, but I do think she is correct here)
Why Trump Surrendered So Quickly on His Tariff Plan
Democrats dare Republicans to block bill that would bring DOGE to heel
Virginia’s new speed-limiting device law could inspire other states
Trump Blinked … Again. Trump’s tariff policy is still a disaster, but slightly less of a disaster than it was 24 hours ago.
Trump Cabinet goes full quack on fluoride and autism
The New American Golden Age Has Been Very Hard on Newsmax, and It’s About to Get Even Harder
Why Elite Colleges Aren’t Pushing Back on Trump, and Why Silence Is Dangerous
Workers key to bird flu response taking USDA buyouts, may strain agency’s efforts
America Is Backsliding Toward Its Most Polluted Era
Yes, they voted for this
Farmers Try to Keep Food on the Table Despite Trump
In Just Seven Days, [Factory Work] Can Make You A Man. Meet the Khmer L’Orange.
There Are Only Two Ways To Handle Trump’s Threats
The Third-Worlding of America
Apparently, Fired Police Officers Are Picking Who Gets Shipped Off to Prison in El Salvador by ICE?!
Australian with working visa detained and deported on returning to US from sister’s memorial
Senator Uses First Bill To Ban Cars Already Unavailable For Sale In The U.S.
This Shouldn’t Happen to Anybody. But It Did.
Politicians Shouldn’t Get to Delete Inconvenient Facts
Trump Wants to Merge Government Data. Here Are 314 Things It Might Know About You.
Florida teacher loses job for calling student by preferred name

The Centrist Competency Con: The Andrew Cuomo Edition

One of the myths that centrist Democrats capitalize on is the idea that they are competent problem solvers. Of course, the reality is quite different, as this summary of NYC Democratic mayoral candidate and serial sexual harasser Andrew Cuomo’s record indicates:

Don’t be taken in. During the pandemic, Cuomo’s “competence” led to 15,000 nursing-home-related deaths in New York, higher than any other state, thanks to a directive of his that protected his donors. His legacy also includes an anemic and mismanaged public transit system, slashed state funding for homeless services and psychiatric beds (mental health crisis, anyone?), and the repeated sexual harassment of his staff, as confirmed by the New York attorney general and the US Department of Justice—for which he has charged taxpayers $17.9 million in legal fees and subpoenaed the gynecological records of one victim, among other nonsense. Not to mention the Moreland scandal 12 years ago, in which Cuomo sought to interfere with a state ethics investigation that ultimately landed several of his closest associates in jail, thanks to former US attorney Preet Bharara, whom Trump fired during his first administration when he refused to pervert justice. Mind you, this is a short list.

Cuomo is not being supported because he would be good at the job. He is supported by Democrats and others who are seeking an answer to the question, “How do we prevent a progressive from becoming mayor of NYC?” As an added bonus, he is so ethically compromised, they can be absolutely certain that he won’t be supported by progressives if he were to decide to buck his masters.

Rank-and-file Democrats, please do better than this fucking guy.

Harvard's lawsuit against the Trump administration

 Read it and weep for our country:


"7. Defendants’ actions are unlawful. The First Amendment does not permit the Government to “interfere with private actors’ speech to advance its own vision of ideological balance,” Moody v. NetChoice, 603 U.S. 707, 741 (2024), nor may the Government “rely[] on the ‘threat of invoking legal sanctions and other means of coercion . . . to achieve the suppression’ of disfavored speech,” Nat’l Rifle Ass’n v. Vullo, 602 U.S. 175, 189 (2024) (citation omitted). The Government’s attempt to coerce and control Harvard disregards these fundamental First Amendment principles, which safeguard Harvard’s “academic freedom.” Asociacion de Educacion Privada de P.R., Inc. v. Garcia-Padilla, 490 F.3d 1, 8 (1st Cir. 2007). A threat such as this to a university’s academic freedom strikes an equal blow to the research conducted and resulting advancements made on its campus.
8. The Government’s actions flout not just the First Amendment, but also federal laws and regulations. The Government has expressly invoked the protections against discrimination contained in Title VI of the Civil Rights Act of 1964 as a basis for its actions. Make no mistake: Harvard rejects antisemitism and discrimination in all of its forms and is actively making structural reforms to eradicate antisemitism on campus. But rather than engage with Harvard regarding those ongoing efforts, the Government announced a sweeping freeze of funding for medical, scientific, technological, and other research that has nothing at all to do with antisemitism and Title VI compliance. Moreover, Congress in Title VI set forth detailed procedures that the Government “shall” satisfy before revoking federal funding based on discrimination concerns. 42 U.S.C. § 2000d-1. Those procedures effectuate Congress’s desire that “termination of or refusal to grant or to continue” federal financial assistance be a remedy of last resort. Id. The Government made no effort to follow those procedures—nor the procedures provided for in Defendants’ own agency regulations—before freezing Harvard’s federal funding.

9. These fatal procedural shortcomings are compounded by the arbitrary and capricious nature of Defendants’ abrupt and indiscriminate decision..."

SpaceX’s rideshare Bandwagon-3 mission marks the 300th orbital flight from Cape Canaveral’s pad 40

SpaceX launches their Falcon 9 rocket on April 21, 2025 with a rideshare mission towards mid-inclination orbits from SLC-40 in Cape Canaveral, FL. Image: Michael Cain/Spaceflight Now

Update 9:38 p.m. EDT: The Falcon 9 first stage booster successfully landed at Landing Zone 2.

SpaceX completed its third Falcon 9 rocket launch in less than 48 hours with a rideshare mission carrying payloads to a mid-inclination orbit.

Liftoff of the Bandwagon-3 mission happened at 8:48 p.m. EDT (0048 UTC) from Space Launch Complex 40 at Cape Canaveral Space Force Station. It was the 245th orbital launch for SpaceX from SLC-40 and the 300th total orbital flight from this pad.

This flight was the third launch in SpaceX’s rideshare program to mid-inclination orbits. It followed Bandwagon-1 from Kennedy Space Center and Bandwagon-2 from Vandenberg Space Force Base, as well as the 13 Transporter rideshare missions to polar orbit.

Heading into the launch opportunity Monday night, the 45th Weather Squadron forecast a 95 percent chance of favorable weather, continuing the pattern seen during the early morning launch of the CRS-32 mission from pad 39A.

SpaceX used the Falcon 9 first stage booster, tail number 1090, to launch the rideshare flight. This was its third trip to space and back following the launches of the O3b mPOWER-E mission and Crew-10.

About eight minutes after liftoff, B1090 targeted a touchdown at Landing Zone 2. This was just the 12th landing at LZ-2 as compared to the 51 landings at LZ-1.

A nebula-like effect was seen as the plumes of the SpaceX Falcon 9 rocket’s first and second stages interacted as the booster began its boostback burn to target a touchdown at Landing Zone 2. This happened less than three minutes into the flight of the Bandwagon-3 rideshare mission on April 21, 2025. Image: Michael Cain/Spaceflight Now

Among the payloads onboard the Falcon 9 rocket was the fourth synthetic aperture radar (SAR) satellite for the Korea 425 Project constellation for the South Korean military. The first three satellites launched on a series of rideshare flights beginning back in December 2023:

  • Korea 425 – Dec. 1, 2023
  • Bandwagon-1 – Apr. 7, 2024
  • Bandwagon-2 – Dec. 21, 2024

According to a 2018 story in the SAR Journal, the South Korean Agency for Defense Development (ADD) awarded a KRW588 billion ($530 million) contract to Korea Aerospace Industries (KAI) to oversee the development and construction of a series of five surveillance satellites.

The contract requires the satellites to be sent into orbit by 2025, which leaves just one more launch after Monday night’s mission. The satellites are produced through a partnership between Korean firm Hanwha Systems Corporation (HSC) and Thales Alenia Space, the latter of which is the prime developer.

An artist’s rendering of two Korea 425 Project satellites in low Earth orbit. Graphic: Thales Alenia Space

“Our contribution extends to the supply of our high-performance Synthetic Aperture RADAR (SAR) that utilizes an innovative antenna consisting of a large deployable reflector with 24 deployable petals and an active phased array feed array in dual polarization,” Thales Alenia Space wrote following the December 2023 launch of the first Korea 425 Project satellite. “Another significant part of our contribution is the acquisition, storage, and data retransmission system on the ground.

“The high agility of the ‘dancing satellite’ is guaranteed by the innovative avionics and Control Momentum Gyroscope, also provided by Thales Alenia Space. These advanced technologies enable high-performance observation and surveillance capabilities, critical to the success of the program.”

The fifth and final SAR satellite may launch onboard the Bandwagon-4 mission, though that has not been formally announced.

Sharing the Falcon 9 with the 425 Project satellite were the Tomorrow Company Inc.’s (Tomorrow.io) Tomorrow-S7 satellite and Atmos Space Cargo’s Phoenix re-entry capsule.

According to the Observing Systems Capability Analysis and Review Tool (OSCAR), part of the World Meteorological Organization, the Tomorrow-S7 satellite is the ninth spacecraft in the Tomorrow.io constellation and the seventh in the Tomorrow Microwave Sounder series.

The 6U CubeSat (roughly 30 cm x 20 cm x 10 cm) reportedly has a dry mass of 12 kg (26.5 lbs) and will operate in low Earth orbit at 515 km (320 mi) in altitude at a 45-degree inclination.

It’s designed to have a three-year operating life with a mission to observe “all-weather temperature and humidity profiles.” The OSCAR tool suggests that there may be a second sounder satellite from Tomorrow.io onboard the Bandwagon-3 mission. Spaceflight Now reached out to the company to confirm if that’s accurate and is waiting to hear back.

Atmos Space Cargo’s Phoenix 1 re-entry capsule shown following integration onto the SpaceX payload adaptor that will fly on the Bandwagon-3 rideshare mission. Image: Atmos Space Cargo

The final passenger onboard Bandwagon-3 was Atmos Space Cargo’s Phoenix capsule. The German company received permission for the re-entry mission from the Federal Aviation Administration (FAA) in January, making it the “first private company in Europe to receive such authorization and the first non-governmental entity in European history to attempt space re-entry.”

The mission, dubbed Phoenix 1, is designed to complete two orbits around Earth before making its atmospheric re-entry. It will gather data about the capsule’s inflatable heat shield and has three core objectives:

  • Collecting in-flight data from the capsule and sub-components in orbit
  • Gathering scientific data from customer payloads carrying technology demonstrators and biological experiments
  • Successfully deploying and stabilizing the Inflatable Heat Shield during atmospheric re-entry

According to a February 2025 press release from the company, Atmos said the mission “is expected to conclude with the prototype’s demise during re-entry, providing valuable flight data for the next iteration of this platform – the Phoenix 2 capsule.”

“Driving advancements for reusable, affordable and reliable downmass is critical to the success of orbital space development,” said Lori Garver, the former NASA Deputy Administrator and member of the Atmos advisory board. “Having the ability to return life sciences and other types of microgravity research, rocket upper stages, military spacecraft and manufactured resources could be the next breakthrough in space transportation.”

Onboard the spacecraft are four payloads from three customers: DLR from Germany with its M-42 radiation detector, IDDK from Japan with its Micro Imaging Device and Frontier Space from the United Kingdom with its ‘lab-in-a-box’ SpaceLab and a bioreactor.

The Phoenix 1 spacecraft arrived in Florida in late March and integration, in partnership with Exolaunch, was completed earlier this month.

In a mission update published on April 18, Atmos said it had to change the mission’s trajectory, which required a change in the ground stations used for the operation. The landing site shifted away from a re-entry over Africa and a splashdown in the Indian Ocean.

The new trajectory has the re-entry beginning over Los Angeles, California, extending across South America, resulting in a splashdown about 2,000 km off the coast of Brazil in the Atlantic Ocean. Atmos didn’t go into detail as to why the mission profile needed to change.

Depiction of Phoenix 1 orbital and re-entry trajectory until splashdown. Graphic: Atmos Space Cargo

The team will also attempt to compensate for the loss of traditional communication during the re-entry phase by conducting an air-to-air reconnaissance mission. It will do so by chasing the Phoenix 1 spacecraft with a “chartered aircraft equipped with a mobile satellite terminal from its EIP (at roughly 120 km altitude) through the plasma blackout phase.”

“Once deployment of Phoenix is confirmed, the Atmos Mission Control team will receive and pass on current trajectory data to our GNC (guidance, navigation control) team, who will quickly calculate an exact flight path to share with our airborne recon team in the chase plane, who will be on their way to meet and follow Phoenix just as it enters Earth’s atmosphere and continues its descent,” Atmos wrote.

“We added this experimental new chapter to our mission plan with the aim to visually monitor and confirm the status of our capsule while attempting to re-establish a data link after plasma blackout to recover the most valuable flight data for further heat shield analysis and the subsequent vehicle development of Phoenix 2 – expected to launch in 2026. Phoenix 2 will carry its own propulsion system on board, allowing us to independently determine the moment of re-entry as well as our return trajectory and angle.”

Atmos said it also anticipates a “steeper flight path angle, resulting in a higher vertical re-entry velocity, which introduces higher thermal and aerodynamic loads to our capsule’s re-entry scenario.”

An artist’s depiction of the Phoenix 1 spacecraft in low Earth orbit. Graphic: Atmos Space Cargo

“Under these conditions, there is a high probability that the increased thermal stress and aerodynamic forces may affect the capsule structure and heat shield, but all flight data we will receive during this inaugural flight will inform our system analysis and optimization for Phoenix 2.”

Back in February, the company received €13.1 million in funding from the European Innovation Council, part of the European Union’s Horizon Europe framework, to help support the development of the Phoenix 2 spacecraft.

The Talk Show: ‘The Best Hatched Plan’

Special guest Glenn Fleishman returns to the show for episode 420 on 4/20, but everyone’s sober, I swear. Topics include Trump’s dumb tariffs and Glenn’s smart new edition of his book Six Centuries of Type & Printing.

Sponsored by:

  • Squarespace: Make your next move. Use code talkshow for 10% off your first order.
  • Notion: Try the powerful, easy-to-use Notion AI today.
  • BetterHelp: Give online therapy a try at BetterHelp and get on your way to being your best self.
  • Clic for Sonos: No lag. No hassle. Just Clic.
 ★ 

“This debate is entirely obsolete. To what extent is the constitutional order still in effect? If we must ask, we are fully in a crisis situation; once we don’t have to ask anymore, the constitutional order will have already been overthrown.”

Larry David: ‘My Dinner With Adolf’

Larry David, in a column for The New York Times:

He loved that story, especially the part where Hitler shot the dog before it got back into the car. Then a beaming Hitler said, “Hey, if I can kill Jews, Gypsies and homosexuals, I can certainly kill a dog!” That perhaps got the biggest laugh of the night — and believe me, there were plenty.

I have been reliably informed that, having linked approvingly to Bill Maher’s “book report” on his dinner with Trump, I must also link to David’s report of dinner with Adolf.

 ★ 

Yours Truly on The MacRumors Show

Just in case you haven’t had enough of me on various recent podcasts, I had the pleasure of joining hosts Dan Barbera and Hartley Charlton on The MacRumors Show, talking mostly about Apple Intelligence and the future of the Vision platform. Fun!

 ★ 

Why Do AI Company Logos Look Like Buttholes?

Radek Sienkiewicz:

If you pay attention to AI company branding, you’ll notice a pattern:

  1. Circular shape (often with a gradient)
  2. Central opening or focal point
  3. Radiating elements from the center
  4. Soft, organic curves

Sound familiar? It should, because it’s also an apt description of ... well, you know.

A butthole.

 ★ 

How Tim Cook Navigated Out of Trump’s Tariffs on China, For Now

Jeff Stein, Elizabeth Dwoskin, and Cat Zakrzewski, reporting for The Washington Post:

As President Donald Trump’s enormous new tariffs on China rippled through global supply chains, Apple CEO Tim Cook went to work behind the scenes.

Cook spoke to Commerce Secretary Howard Lutnick last week about the potential impact of the tariffs on iPhone prices, two people familiar with the phone call said, speaking on the condition of anonymity to reflect private conversations that were previously unreported. Cook spoke with other senior officials in the White House, the people said. And he refrained from publicly criticizing the president or his policies on national television, as many other executives have over the past several weeks.

By the end of the week, the Trump administration agreed to exempt from import duties electronic products that Apple produces in China, an action that also granted a reprieve to other large U.S. firms, including HP and Dell. Trump did so despite the recommendations of senior White House aide Peter Navarro, who had wanted the taxes to remain in place, the people said.

Three points:

  1. Tim Cook manages this dance with aplomb. This is not a “good system”. But given the way Trump operates, what Cook managed here is not merely good for Apple but better policy, period.

  2. Howard Lutnick is a lickspittle moron with the demeanor of a used car salesman who knowingly sells overpriced lemons to suckers. Here he is on Meet the Press a few weeks ago bragging that “The army of millions and millions of human beings screwing in little screws to make iPhones — that kind of thing is going to come to America.” Keith Olbermann mentioned in a recent episode of his podcast that Lutnick is a dead ringer for Morrie Kessler, the bookmaker of “Morrie’s Wigs” fame from Goodfellas, and I can’t un-see it.

  3. Peter Navarro is such a profound dope and abject fraud — seriously, he’s not even good at making up phony names — that he makes Lutnick seem like a credible, responsible official.

 ★ 

The Information: ‘Meta Asked Amazon, Microsoft to Help Fund Llama’

Kalley Huang and Erin Woo, reporting for The Information (via Ed Zitron, who summarized it on Bluesky):

Meta Platforms over the past year asked Microsoft, Amazon and others to help pay the costs of training Meta’s flagship large language model, Llama, according to four people briefed on the discussions. Meta’s overtures reflected worries about the growing costs of its artificial intelligence development, according to two of the people. [...]

Meta in particular has faced questions about the business logic behind its AI development, given that Llama is open-source software, freely available for anyone’s use. That makes it difficult to turn into a business. And Meta makes money primarily from advertising and has little experience in selling business software.

While Meta held its most serious discussions with Amazon and Microsoft, it has also discussed the idea with Databricks, IBM and Oracle, as well as representatives from at least one Middle Eastern investor, according to two of the people briefed on the discussions. Meta was still in discussions with companies about the Llama Consortium as recently as the start of this year, the two people said.

“Would you consider throwing a few sacks full of your cash on this bonfire of our cash that we’ve been burning?” is a hell of a pitch.

In its discussions with other companies, Meta primarily asked for money. It also sought servers or other resources that would offset the cost of training its models, according to two of the people briefed on the discussions. In return for their assistance, Meta discussed offering other companies promotion of their services alongside Llama — for example, a Meta executive might appear at a conference hosted by a consortium partner — or providing more insight into the training process for the model, one of those people said.

Pay a little and a Meta representative will show up at your developer conference. Pay more and a Meta rep won’t show up at your developer conference.

 ★ 

LA Ports: March Inbound Traffic Up YoY, Outbound Down

Container traffic gives us an idea about the volume of goods being exported and imported - and usually some hints about the trade report since LA area ports handle about 40% of the nation's container port traffic.

The following graphs are for inbound and outbound traffic at the ports of Los Angeles and Long Beach in TEUs (TEUs: 20-foot equivalent units or 20-foot-long cargo container).

To remove the strong seasonal component for inbound traffic, the first graph shows the rolling 12-month average.
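As a minimal sketch of that smoothing step (a toy example with invented numbers, not Calculated Risk's actual data pipeline), a rolling 12-month average of monthly TEU counts can be computed like this:

```python
import pandas as pd

# Hypothetical monthly inbound TEU counts: a mild trend plus a summer/fall seasonal bump.
months = pd.period_range("2023-01", "2025-03", freq="M")
values = [
    400_000 + 500 * i + (30_000 if m.month in (7, 8, 9, 10) else 0)
    for i, m in enumerate(months)
]
inbound_teu = pd.Series(values, index=months, name="inbound_teu")

# The rolling 12-month average removes most of the seasonal swing.
rolling_12m = inbound_teu.rolling(window=12).mean()

# Change versus the rolling 12 months ending the previous month, as quoted in the post.
change_vs_prior_window = rolling_12m.pct_change(1) * 100

print(rolling_12m.tail(3))
print(change_vs_prior_window.tail(3).round(2))
```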

LA Area Port Traffic. Click on graph for larger image.

On a rolling 12-month basis, inbound traffic increased 0.8% in March compared to the rolling 12 months ending the previous month.   Outbound traffic decreased 0.9% compared to the rolling 12 months ending the previous month.

The 2nd graph is the monthly data (with a strong seasonal pattern for imports).

LA Area Port Traffic
Usually imports peak in the July to October period as retailers import goods for the Christmas holiday and then decline sharply and bottom in the Winter depending on the timing of the Chinese New Year.

Imports were up 12% YoY in March and exports were down 9% YoY.    

Recently importers rushed to beat the tariffs.  And port traffic will likely slow sharply in coming months.

Lawler: Early Read on Existing Home Sales in March

From housing economist Tom Lawler:

Based on publicly-available local realtor/MLS reports released across the country through today, I project that existing home sales as estimated by the National Association of Realtors ran at a seasonally adjusted annual rate of 4.06 million in March, down 4.7% from February’s preliminary pace and down 1.5% from last March’s seasonally adjusted pace.

Local realtor/MLS reports suggest that the median existing single-family home sales price last month was up by about 2.6% from a year earlier.

CR Note: The NAR is scheduled to release March Existing Home sales on Thursday, April 24th at 10:00 AM. The consensus is for 4.14 million SAAR, down from 4.26 million. Last year, the NAR reported sales in March 2024 at 4.12 million SAAR.
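For readers unfamiliar with the convention, here is a rough sketch of what a seasonally adjusted annual rate (SAAR) means (my own illustration with a made-up sales count and seasonal factor, not NAR's or Lawler's actual procedure): divide the raw monthly count by that month's seasonal factor, then multiply by 12.

```python
# Toy illustration of a seasonally adjusted annual rate (SAAR).
# Both inputs are invented; the NAR estimates its own seasonal factors.

raw_march_sales = 340_000        # hypothetical NSA existing home sales counted in March
march_seasonal_factor = 1.005    # hypothetical: March runs slightly above an average month

seasonally_adjusted_month = raw_march_sales / march_seasonal_factor
saar = seasonally_adjusted_month * 12
print(f"SAAR: {saar / 1e6:.2f} million")  # ~4.06 million with these made-up inputs
```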

Long-Run Effects of Trade Wars

This short note shows that accounting for capital adjustment is critical when analyzing the long-run effects of trade wars on real wages and consumption. The reason is that trade wars increase the relative price between investment goods and labor by taxing imported investment goods and their inputs. This price shift depresses capital demand, shrinks the long-run capital stock, and pushes down consumption and real wages compared to scenarios when capital is fixed. We illustrate this mechanism by studying recent US tariffs using a dynamic quantitative trade model. When the capital stock is allowed to adjust, long-run consumption and wage responses are both larger and more negative. With capital adjustment, U.S. consumption can fall by 2.6%, compared to 0.6% when capital is held fixed, as in a static model. That is, capital stock adjustment emerges as a dominant driver of long-run outcomes, more important than the standard mechanisms from static trade models — terms-of-trade effects and misallocation of production across countries.

That is from a new NBER working paper by David Baqaee and Hannes Malmberg.  Bravo to the authors for producing this result so quickly.  And…as a side note…other forms of taxing capital can be bad too!  Really.  A number of people have spent the last twenty years tying themselves into knots on this question.
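To see the capital-adjustment channel in the crudest possible terms, here is a toy steady-state calculation (my own illustration, not the Baqaee-Malmberg model): a tariff that raises the relative price of investment goods lowers the profit-maximizing capital stock, and with it long-run wages and consumption.

```python
# Toy neoclassical steady state with Cobb-Douglas production and labor fixed at 1.
# A tariff that raises the relative price of investment goods (p_invest) shrinks
# the capital stock, wages, and consumption. Illustrative parameters only.

ALPHA = 0.33   # capital share
DELTA = 0.06   # depreciation rate
R = 0.04       # required net return

def steady_state(p_invest: float):
    # Firms invest until the marginal product of capital covers the user cost:
    #   ALPHA * K**(ALPHA - 1) = (R + DELTA) * p_invest
    k = (ALPHA / ((R + DELTA) * p_invest)) ** (1.0 / (1.0 - ALPHA))
    y = k ** ALPHA                           # output with labor normalized to 1
    wage = (1.0 - ALPHA) * y                 # competitive wage
    consumption = y - DELTA * p_invest * k   # output net of replacement investment spending
    return k, wage, consumption

base = steady_state(p_invest=1.00)      # no tariff
tariffed = steady_state(p_invest=1.10)  # 10% tariff on investment goods

for name, b, t in zip(("capital", "wage", "consumption"), base, tariffed):
    print(f"{name:12s} falls by {100 * (1 - t / b):.1f}% in the long run")
```

The point is only qualitative: once capital is free to adjust downward, the long-run hit to wages and consumption is larger than a fixed-capital calculation would suggest, which is the mechanism the paper quantifies.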

The post Long-Run Effects of Trade Wars appeared first on Marginal REVOLUTION.

       


Acquired on Epic Systems

I've just been on a long car trip and I ended up listening to the Acquired podcast episode on Epic Systems. Not a company I'd heard of or a story I knew. It's fascinating. Whimsy, determination, abuse of power, the wealthiest self-made woman in history, medical software, ERP failures, government over-reach, MUMPS, more whimsy.

I imagine many people I know will find this equally fascinating. And many others I know will find it infuriating, galling and partial. I'd love to hear people talk about this.

 

w/e 2025-04-21

Easter Monday feels like the end of the previous week, so, hello. I returned home from Essex on Tuesday, back to a mostly grey and damp Herefordshire, Easter Sunday aside.

Back in acting classes – somehow over five years ago – the teacher would occasionally focus our emotions when things were getting muddled by asking, “Are you mad, sad or glad?” It’s a simplification of course, but if you have to pick only one it creates some clarity in the performance.

Last time I was home I was mostly glad – content and relaxed. This time I’ve been mostly mad – inexplicably frustrated and annoyed at nothing in particular – and sad – it would have been Dad’s birthday this weekend, and a year since his first sudden trip to hospital.

My life feels split into chunks of 2-3 weeks between stays in Essex, which isn’t helping my attempt to focus on one season at a time. We’re well into Spring and while I’ve put a bunch of tasks into a “🌱 Spring 2025” project in Things, that’s the extent of my personal project management. I’ve had a dozen or so tasks in my “Today” view for the past week, all still not-done.


§ I’ve distracted my mad/sad mind a bit by doing some client work and some personal coding, finding a rare “quick, fun” project in my tasks, rather than all the admin and maintenance to-dos: I wrote a quick script to count how many posts I’ve made on Bluesky each year, so that I could combine them with the number of Twitter and Mastodon posts to generate this chart:

A stacked bar chart with a bar for each year from 2006 to 2024. Twitter's bars make up most of it, rising from about 300 posts per year to a peak of over 2,500 in 2013. They fall down, with hardly any in 2023. Mastodon makes an appearance from 2018 onwards, and Bluesky only really visible in 2024. Both the latter combined are about 500 posts per year for 2023 and 2024.
Social media posts per year
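For what it's worth, the counting step in a script like that is tiny. A minimal sketch, assuming the posts have already been exported to a JSON-lines file in which each record carries an ISO-8601 createdAt timestamp (the field Bluesky post records use); the filename is hypothetical:

```python
import json
from collections import Counter
from pathlib import Path

# Assumed input: one JSON object per line, each with a "createdAt" ISO timestamp,
# e.g. {"text": "...", "createdAt": "2024-03-01T12:34:56.000Z"}
EXPORT_FILE = Path("bluesky_posts.jsonl")  # hypothetical filename

def posts_per_year(path: Path) -> Counter:
    counts: Counter = Counter()
    with path.open(encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            year = record["createdAt"][:4]  # ISO timestamps start with the year
            counts[year] += 1
    return counts

if __name__ == "__main__":
    for year, n in sorted(posts_per_year(EXPORT_FILE).items()):
        print(year, n)
```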

I don’t have much memory of what it was like back in 2012-2014, when I was posting to Twitter about seven times a day on average. I’m barely doing once a day, including replies, across Mastodon and Bluesky now. In general I’ve mainly followed people I know, rather than celebrities or news sources, so it’s an indication of how much social life there was for me on Twitter back then. So many more friends in one place.


§ As I mentioned, my limited company is closing down, so I’m now self-employed/freelance. I’ve been using Ko-fi for anyone who wants to support Pepys’ Diary, covering the costs of running things. It is, for me, mainly a front-end to the payments and subscriptions in Stripe. But the Stripe account was in the name of my company and I don’t think there was a way to change that over to be “me”. Even if there was, it felt better to make a clean, obvious break.

So I’ve emailed subscribers one-by-one, telling them about the change, cancelling their current subscription, and letting them know they’ll have to re-subscribe if they want to continue. Only 50% have done so. Which is fine – subscribers don’t get anything extra for doing so, and I’d run the site without any (I did that for over 20 years). But it’s interesting nonetheless, and a slightly lower percentage than I expected.


§ We watched Kaos this week which was excellent. A shame it got cancelled. I’d probably have got even more from it if my knowledge of Greek mythology was less sketchy – it would have been more fun to see what they’ve done with the myths and I expect I missed plenty of fun details, but it was still fun, and well done. Lots of great performances. One of my favourites was Stephen Dillane as Prometheus, just because his pieces direct to camera felt so… easy? conversational? Must be hard to do that.


§ That’s all.


Read comments or post one

Monday assorted links

1. I find this illustrative, and also very, very naive.  Here is a related query.  I think this crowd is bad at modeling social systems and macro systems more generally.  That is an intrinsically difficult thing to do, but I would keep that in mind when reading “rationalist” analyses.

2. Yale sells $6 billion of its portfolio.

3. Strawberries in Senegal (NYT).  And maybe the Straussians won’t like the new Maimonides translation?

4. The nuclear-powered flying hotel?

5. “Texas schools nix lesson over Virginia state flag’s exposed breast. The Roman goddess Virtus has been on the state flag since 1861, but the banner has only featured her bare breast since the early 20th century.”  And: “A case of early 20th-century gender confusion led to the breast baring in the first place. In 1901, Secretary of the Commonwealth D.Q. Eggleston complained that Virtus “looked more like a man than a woman and wanted to correct it. He instructed designers to add the breast to clarify her sex,” the Virginian-Pilot reported in a 2023 deep dive into how Virginia wound up with the only state flag boasting an exposed nipple.”

6. Ethan Mollick on AGI.  And resistance to the term AGI and its attainment.  A good piece, with a cameo by Duchamp.

7. Human aesthetics after AI.

The post Monday assorted links appeared first on Marginal REVOLUTION.

       


Air Fact

'Wow, that must be why you swallow so many of them per year!' 'No, that's spiders. You swallow WAY more ants.'

California Home Sales Up 4.9% YoY in March; 4th Look at Local Housing Markets

Today, in the Calculated Risk Real Estate Newsletter: California Home Sales Up 4.9% YoY in March; 4th Look at Local Housing Markets

A brief excerpt:
From the California Association of Realtors® (C.A.R.): Elevated interest rates and economic uncertainty ease March home sales, C.A.R. reports
March’s sales pace fell 2.3 percent from the 284,540 homes sold in February and was up 4.9 percent from a year ago, when a revised 264,200 homes were sold on an annualized basis.
...
Months of Supply
In March, sales in these markets were down 3.0% YoY. Last month, in February, these same markets were down 6.1% year-over-year, Not Seasonally Adjusted (NSA).

Important: There were the same number of working days in March 2025 (21) as in March 2024 (21). So, the year-over-year change in the headline SA data will be close to the change in the NSA data (there are other seasonal factors).
...
Several local markets - like Illinois, Miami, New Jersey and New York - will report after the NAR release.
There is much more in the article.

Housing April 21st Weekly Update: Inventory up 2.4% Week-over-week, Up 32.6% Year-over-year

Altos reports that active single-family inventory was up 2.4% week-over-week.

Inventory is now up 15.2% from the seasonal bottom in January and is increasing.  

Usually, inventory is up about 6% or 7% from the seasonal low by this week in the year.   So, 2025 is seeing a larger than normal pickup in inventory.

The first graph shows the seasonal pattern for active single-family inventory since 2015.

Altos Year-over-year Home Inventory. Click on graph for larger image.

The red line is for 2025.  The black line is for 2019.  

Inventory was up 32.6% compared to the same week in 2024 (last week it was up 33.4%), and down 16.7% compared to the same week in 2019 (last week it was down 17.5%). 

Inventory will pass 2020 levels in the next two weeks, and it now appears inventory will be close to 2019 levels towards the end of 2025.

Altos Home Inventory. This second inventory graph is courtesy of Altos Research.

As of April 18th, inventory was at 719 thousand (7-day average), compared to 702 thousand the prior week. 

Mike Simonsen discusses this data regularly on YouTube.

Food: from luxuries to tragedies--chocolate and peanut butter at the opposite ends of human welfare

Below are two stories about food that couldn't be more different (although the foods, chocolate and peanut butter, have some connection when times are good).

The (sort of) luxury story, from the Guardian, comes with a picture of bonbons:

 US chocolate prices surge amid soaring cocoa costs and tariffs
Price of cocoa – chocolate’s key ingredient – has climbed over past year and tariffs on imports will keep prices high
  by Lauren Aratani

 And here's the tragedy, reported on in The Atlantic

‘In Three Months, Half of Them Will Be Dead’
Elon Musk promised to preserve lifesaving aid to foreign children. Then the Trump administration quietly canceled it. By Hana Kiros

"As DOGE was gutting USAID in February, it alarmed the global-health community by issuing stop-work orders to the two American companies that make a lifesaving peanut paste widely recognized as the best treatment for malnutrition"

...

"The move reneged on an agreement to provide about 3 million children with emergency paste over approximately the next year. What’s more, according to the two companies, the administration has also not awarded separate contracts to shipping companies, leaving much of the food assured by the original reinstated contracts stuck in the United States."


Repeat Offender

NYT reports a second Hegseth Signal chat in which the secretary of defense shared Houthi attack plans. This one included his wife and brother.

Pandemic Preparation Without Romance

My latest paper, Pandemic Preparation Without Romance, has just appeared at Public Choice.

Abstract: The COVID-19 pandemic, despite its unprecedented scale, mirrored previous disasters in its predictable missteps in preparedness and response. Rather than blaming individual actors or assuming better leadership would have prevented disaster, I examine how standard political incentives—myopic voters, bureaucratic gridlock, and fear of blame—predictably produced an inadequate pandemic response. The analysis rejects romantic calls for institutional reform and instead proposes pragmatic solutions that work within existing political constraints: wastewater surveillance, prediction markets, pre-developed vaccine libraries, human challenge trials, a dedicated Pandemic Trust Fund, and temporary public–private partnerships. These mechanisms respect political realities while creating systems that can ameliorate future pandemics, potentially saving millions of lives and trillions in economic damage.

Here’s one bit:

…in the aftermath of an inadequate government response to an emergency, we often hear calls to reorganize and streamline processes and to establish a single authority with clear responsibility and decision-making power to overcome bureaucratic gridlock. By centralizing authority, it is argued that the government can respond more swiftly and effectively, reducing the inefficiencies caused by a fragmented system.

Yet, the tragedy of the anti-commons was also cited to explain the failure of the government after 9/11. Indeed, the Department of Homeland Security was created to centralize a fragmented system and allow it to act with alacrity. Isn’t a pandemic a threat to homeland security? And what about the Swine Flu pandemic of 2009? While not nearly as deadly as the COVID pandemic, 60 million Americans were sickened, some 274 thousand hospitalized with over 12 thousand deaths (Shrestha et al. 2011). Wasn’t this enough practice to act swiftly?

Rather than advocating for a reorganization of bureaucracies, I propose accepting the tragedy of the anti-commons as an inevitable reality. The tragedy of the anti-commons is an equilibrium outcome of modern-day bureaucracy. Bureaucracy has its reasons and some of those reasons may even be reasonable (Wittman 1995). It is too much to expect the same institution to respond to the ordinary demands of day-to-day politics and to the very different demands of emergencies. Indeed, when an institution evolves to meet the demands of day-to-day politics it inevitably develops culture, procedures and processes that are not optimized for emergencies.

Instead of rearranging organization charts we should focus on what has proven effective: the creation of ad-hoc, temporary, public–private organizations. Two notable examples are Operation Warp Speed in the United States and the British Vaccine Taskforce. These entities were established quickly and operated outside regular government channels, free from the typical procurement, hiring, or oversight rules that hinder standard bureaucracies.

…Operation Warp Speed exemplified the “American Model” of emergency response. Rather than relying on command-and-control or government production, the American Model leverages the tremendous purchasing power of the US government with the agility and innovation of the private sector.

The only problem with the “American Model” was its inconsistent application.

I am especially fond of this paper because it is the first, to my knowledge, to cite separate papers from Alex, Maxwell and Connor Tabarrok.

Addendum: This paper isn’t about lockdowns. It’s about avoiding lockdowns!

The post Pandemic Preparation Without Romance appeared first on Marginal REVOLUTION.

The last letter

[Image: black-and-white photo of a firing squad in a snowy wooded area, soldiers aiming rifles at standing figures.]

Condemned to death by firing squad, French resistance fighters put pen to paper. Their dying words can teach us how to live

- by Daniel R Brunstetter

Read at Aeon

Crafting a Hardanger boat

[Image: a person applying varnish to the interior planks of a wooden boat on a sunny day.]

Watch as two craftspeople use 19th-century methods and tools to turn a tree on Norway’s coast into a rowing boat

- by Aeon Video

Watch at Aeon

“Growth is getting harder to find, not ideas”

Here is the thread, here is the paper:

Relatively flat US output growth versus rising numbers of US researchers is often interpreted as evidence that “ideas are getting harder to find.” We build a new 46-year panel tracking the universe of U.S. firms’ patenting to investigate the micro underpinnings of this claim, separately examining the relationships between research inputs and ideas (patents) versus ideas and growth. Over our sample period, we find that researchers’ patenting productivity is increasing, there is little evidence of any secular decline in high-quality patenting common to all firms, and the link between patents and growth is present, differs by type of idea, and is fairly stable. On the other hand, we find strong evidence of secular decreases in output unrelated to patenting, suggesting an important role for other factors. Together, these results invite renewed empirical and theoretical attention to the impact of ideas on growth. To that end, our patent-firm bridge, which will be available to researchers with approved access, is used to produce new, public-use statistics on the Business Dynamics of Patenting Firms (BDS-PF).

By Teresa C. Fort, Nathan Goldschlag, Jack Liang, Peter K. Schott, and Nikolas Zolas.  Via Basil Halperin.

The post “Growth is getting harder to find, not ideas” appeared first on Marginal REVOLUTION.

It’s happening, UAE edition

The United Arab Emirates aims to use AI to help write new legislation and review and amend existing laws, in the Gulf state’s most radical attempt to harness a technology into which it has poured billions.

The plan for what state media called “AI-driven regulation” goes further than anything seen elsewhere, AI researchers said, while noting that details were scant. Other governments are trying to use AI to become more efficient, from summarising bills to improving public service delivery, but not to actively suggest changes to current laws by crunching government and legal data.

“This new legislative system, powered by artificial intelligence, will change how we create laws, making the process faster and more precise,” said Sheikh Mohammad bin Rashid Al Maktoum, the Dubai ruler and UAE vice-president, quoted by state media. Ministers last week approved the creation of a new cabinet unit, the Regulatory Intelligence Office, to oversee the legislative AI push.

Here is more from the FT.

The post It’s happening, UAE edition appeared first on Marginal REVOLUTION.

Piercing the skies above Paranal

Today’s Picture of the Week is a majestic portrait of UT4, one of the four 8-m telescopes of ESO’s Very Large Telescope (VLT). Framed against the star-filled sky of the Paranal Observatory, this telescope is much more than a passive observer. From within its dome, it pierces the peaceful night with four laser beams.

These lasers are projected from the Four Laser Guide Star Facility (4LGSF), which UT4 uses to create its own artificial stars in the sky. The lasers create these points of light by exciting sodium atoms in the atmosphere, about 90 km above the ground, causing them to glow. These “stars” then act as guides: by studying how they are blurred by the atmosphere, the telescope learns how to adjust for atmospheric turbulence — the same turbulence that makes every little star twinkle.

The adjustments are made by UT4’s adaptive optics system, which can precisely deform the telescope’s secondary mirror to cancel out atmospheric disturbances measured by the system. Using adaptive optics, a ground-based telescope can take much sharper images than the atmosphere would normally allow — it’s almost as good as sending the VLT up into space.

Soon, the other three 8-m telescopes of the VLT will be equipped with one laser each. This is part of a series of upgrades to the VLT Interferometer and its GRAVITY+ instrument, which can combine the light of several telescopes to create a huge “virtual” telescope. Another massive eye on the sky, ESO’s Extremely Large Telescope (ELT), is nearing completion not far from Paranal and will be equipped with at least six lasers, to deliver the sharpest images possible with a ground-based telescope.

Augmented Coding: an Experience Report

Last weekend I sat down with augmentcode.com to do some serious AI-assisted coding. The agent, “Auggie,” promises to automate coding tasks to a level I haven’t tried before. Gotta say, it was pretty good. I got farther than I would have alone. But hoooo, it is a new kind of effort and frustration.

Overall impression: it’s like an appliance, in that I can set it doing something and then go away for a bit, and come back and my clothes are clean. Except it’s way harder to operate! The interesting part comes later, in how it changes the ways I think and work.

Here, have 3 things I love about augmented coding; 1 tradeoff; and 3 things I hated.

❤ Augmented coding was so efficient that more changes were in scope.

My sample app is implemented in 3-4 languages, so adding a service was a lot of work—but not anymore. The AI is great at implementing the same thing in another language. Some of the existing services have duplicated hard-coded lists, because that was the fastest implementation. The new service has a shared sqlite database, because why not? I didn’t have to look up libraries in 3 languages.

Caveat: until one of them doesn’t work. sqlite3 in Node caused errors when I built the docker container. I asked Claude, which told me to use better-sqlite, and so I had Auggie switch it to that.
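For the curious, here’s roughly where that landed, assuming “better-sqlite” refers to the better-sqlite3 npm package; the file name and query are mine, not the generated code:

```typescript
// Hypothetical sketch, not Auggie's output. better-sqlite3 exposes a synchronous API:
// open the database, prepare a statement, run it -- no callbacks to wire up.
import Database from "better-sqlite3";

const db = new Database("shared.db");           // one shared file instead of per-service hard-coded lists
const row = db.prepare("SELECT 1 AS ok").get(); // quick sanity check that the driver works
console.log(row);                               // { ok: 1 }
```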

💖 I can make great scaffolding to support the production code.

Production code may be the important part, but it is a minority: most project code is scaffolding. Tests, deployments, linting, verification, utilities for local development, handy admin interfaces: scaffolding is code that helps us safely and efficiently change production code.

I can trust Auggie more if it can run a reproducible end-to-end test after every change. Writing that test is too much work for me, but not for the AI. I did not have to dig into browser automation libraries. I don’t even know which ones it used! For scaffolding, I don’t have to look at the code.

Caveat: It still took half an hour, with some critique and instructions. I did look at the code, and found it flailing, and cleaned it up.
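For illustration only, a browser-driven end-to-end check could look roughly like this; I’m using Playwright as a stand-in since I don’t know which library it actually used, and the URL and selector are invented:

```typescript
// Hypothetical e2e smoke test; Playwright, the URL, and the selector are assumptions.
import { chromium } from "playwright";

async function main(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  await page.goto("http://localhost:8080");         // the locally deployed app
  const phrase = await page.textContent("#phrase"); // did the phrase-picker render anything?

  await browser.close();

  if (!phrase || phrase.trim() === "") {
    throw new Error("No phrase rendered; the service may be broken or not redeployed.");
  }
  console.log("e2e smoke test passed:", phrase.trim());
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```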

💓 It lets me work in smaller conceptual steps.

While some things happen in bigger jumps (“implement the user service in python”), more interesting ones can be smaller. I’m moving hard-coded phrases in phrase-picker into a sqlite database. I know I’m going to want a tag attribute on the phrases, so that Java-specific ones only show up for Java people, etc. If I were doing this by hand, I’d think through the whole table structure before implementing it.
With AI assistance, I worry first about getting that db working at all, and then adding the field will be as easy as typing a few sentences. While it’s integrating the database, I can think about what to put in it.
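To make that concrete, here’s a sketch of the two steps, again assuming better-sqlite3; the table and column names are mine, not the actual migration:

```typescript
// Hypothetical sketch of working in two small steps instead of designing the whole table up front.
import Database from "better-sqlite3";

const db = new Database("phrases.db");

// Step 1: just get the database working -- phrases only, no tags yet.
db.exec(`CREATE TABLE IF NOT EXISTS phrases (
  id   INTEGER PRIMARY KEY,
  text TEXT NOT NULL
)`);

// Step 2, later, once step 1 is integrated: adding the tag is a one-line migration,
// which with AI assistance costs a couple of sentences rather than an up-front design.
const cols = db.prepare("SELECT name FROM pragma_table_info('phrases')").all() as { name: string }[];
if (!cols.some((c) => c.name === "tag")) {
  db.exec("ALTER TABLE phrases ADD COLUMN tag TEXT");
}

// Java-specific phrases only show up for Java people; untagged phrases show up for everyone.
const javaPhrases = db
  .prepare("SELECT text FROM phrases WHERE tag IS NULL OR tag = ?")
  .all("java");
console.log(javaPhrases);
```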

There’s nothing more effective in software development (or any complex task) than taking many more much smaller steps.

🤨 Development is still unpredictable.

Coding is always rough: something that seems easy turns out to be a huge pain. Sometimes it’s a typo I can’t spot, and sometimes it’s that spawning a process in Node and getting the output is a pain. The AI helps with those, but then it gets into its own knots.
The agent makes mistakes, then tries to run the code, gets errors, and goes back and fixes them. It’s really quite a good simulation of an oddly skilled developer. So fast at typing! So clueless about when to stop.

When it gets into a hopeless loop of trying things, I click ‘stop’. I look at what it did (with git diff), and when it’s too much, I roll back all its changes. Try again at something smaller. Ask a different AI what would work, and get more specific. Or just literally try again; it’s nondeterministic.

When the bulk of my project work turns out to be fighting with Docker, the AI helps only a little. While I’m carefully tweaking docker-compose files, the autocomplete and “next edit” features of Augment Code help most; they reduce typos and help me not miss a change.

😓 Going too fast can be a mistake.

Some of the scope that I added (“it can do this in ten minutes!”) turned out to be a mistake. It did it wrong in ten minutes, and then I spent a frustrated hour fighting it. The ten minutes were a worthwhile risk; everything after that was my failure. Give it up, Jess! Let it be wrong, roll it back, note this down and move on.

After every change, I have to check what it did. I’ve asked it to do a commit each time, so I look at the content of that commit. One limitation on how much I can ask it to do at once is the amount of code I’m willing to look at.

If it’s scaffolding code, I don’t have to look at it. But I’d better do some manual checks. For a good while, its e2e-test script ignored failures in the Docker build, so that further checks ran on the already-deployed working version, defeating the purpose. The agent happily screwed up the Dockerfiles and moved on.
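Once I noticed, the fix was the kind of guard that is easy to ask for. Here’s a sketch of what I mean, assuming a docker compose setup; this is not the script it wrote:

```typescript
// Hypothetical guard: abort the e2e script when the Docker build or deploy fails,
// so later checks can't silently run against the previously deployed version.
import { spawnSync } from "node:child_process";

function mustSucceed(cmd: string, args: string[]): void {
  const result = spawnSync(cmd, args, { stdio: "inherit" });
  if (result.status !== 0) {
    console.error(`${cmd} ${args.join(" ")} exited with status ${result.status}`);
    process.exit(result.status ?? 1);
  }
}

mustSucceed("docker", ["compose", "build"]);
mustSucceed("docker", ["compose", "up", "-d"]);
// ...only now run the browser checks against the freshly built containers.
```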

Smaller steps, it’s always about smaller steps! My calibration for the size of step I can implement myself no longer applies, and I have to learn what’s right for my new role as prompter and supervisor.

🥺 I have to learn from its mistakes.

My new job is less “making the code” and more “making the making.” When it frustrates me, I have to think about how to get different behavior next time.

Sometimes it’s easy: “Always make a commit, and tag it with -- auggie.” It puts that in its “Augment Memories” file, which I can see (and sometimes edit? unclear).

Sometimes it’s changing my expectations: it can call a database no problem, but it struggles with OpenTelemetry, so I have to be way more specific or hand-code some parts.

Sometimes it’s changing the structure of the work: first make an end-to-end test, then tell it to run that every time. Ask it to check Honeycomb for tracing output every time. Give it the constraints it needs.

When I’m doing the coding, I learn from everything I type and every error I see. The agent doesn’t have that big a context window: it keeps what it puts in “Augment Memories,” and the rest is gone in a few minutes. It will make the same mistakes again unless I find the constraints it needs. This is hard.

😵‍💫 I ruined my weekend trying to keep it busy.

That bug of productivity caught me: “I’m getting code done while I make breakfast!” Right out of bed, I wanted to get it started on something before my shower. This is never as quick as I think it’s going to be. That’s how I got into some of the yaks I should not have shaved, just to get it moving. That led me down paths of frustration, always worse when dirty and unfed.

This is not an AI problem. This is a me problem. Shower time is shower time. After that, while it is coding and I am not, spend this time thinking hard about what to implement, how to break it down, and when to give up.

The real power of the tool is in how it changes me.

This is a different way of working, with different triumphs and pain. I can get more done, but I have to wrangle a nondeterministic machine instead of a satisfyingly predictable one.

Coding with AI takes (and enables) more discipline.

• I need more tests, more checks and constraints. This takes more scaffolding, which is now fast to create.
• More “how will I know this works?” and less “how will I implement this?”
• I need to learn how to influence the AI, and channel frustration into guidance.
• More high-level thinking about what the software should do, less puzzle-solving of making it happen.

Coding with AI assistance is a new skill. It makes us more powerful, and it asks even more of us.

This surprising sky has almost everything.