Facilitating AI adoption at Imprint

I’ve been working on internal “AI” adoption, which is really LLM-tooling and agent adoption, for the past 18 months or so. This is a problem that I think is, at minimum, a side-quest for every engineering leader in the current era. Given the sheer number of folks working on this problem within their own company, I wanted to write up my “working notes” of what I’ve learned.

This isn’t a recommendation about what you should do, merely a recap of how I’ve approached the problem thus far, and what I’ve learned through ongoing iteration. I hope the thinking here will be useful to you, or at least validate some of what you’re experiencing in your rollout. The further you read, the more specific this will get, ending with cheap-turpentine-esque topics like getting agents to reliably translate human-readable text representations of Slack entities into mrkdwn formatting of the correct underlying entity.

I am hiring: If you’re interested in working together with me on internal agent and AI adoption at Imprint, we are hiring our founding Senior Software Engineer, AI. The ideal candidate is a product engineer who’s spent some time experimenting with agents, and wants to spend the next year or two digging into this space.

Prework: building my intuition

As technologists, I think one of the basics we owe our teams is spending time working directly with new tools to develop an intuition for how they do, and don’t, work. AI adoption is no different.

Towards that end, I started with a bit of reading, especially Chip Huyen’s AI Engineering, and then dove into a handful of bounded projects: building my own rudimentary agent platform (using Claude Code for the implementation), creating a trivial MCP server for searching my blog posts, and building an agent to comment on Notion documents.

Each of these projects was two to ten hours, and extremely clarifying. Tool use, in particular, seemed like magic until I implemented a simple tool-using agent, at which point it became something extremely non-magical that I could reason about and understand.
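To make that concrete, here’s roughly the shape of the loop I mean. This is a sketch, not production code: the model call is stubbed out rather than wired to a real LLM API, and all of the names are illustrative.

from datetime import datetime, timezone

def get_time() -> str:
    return datetime.now(timezone.utc).isoformat()

TOOLS = {"get_time": get_time}

def call_model(messages: list[dict]) -> dict:
    # Stand-in for a real LLM API call. Given tool descriptions in its
    # prompt, the model returns either a tool request or a final answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_time", "args": {}}
    return {"answer": f"The current time is {messages[-1]['content']}"}

def agent_loop(user_message: str, max_turns: int = 5) -> str:
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        response = call_model(messages)
        if "answer" in response:        # the model decided it's done
            return response["answer"]
        tool = TOOLS[response["tool"]]  # the model asked for a tool
        result = tool(**response["args"])
        # Append the tool result to the context and loop again.
        messages.append({"role": "tool", "content": result})
    return "Giving up after too many turns."

print(agent_loop("What time is it?"))

The entire trick is that loop: parse the model’s output, invoke tools on its behalf, and append the results back into the context. Once you’ve written it yourself, agents stop being magic.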

Our AI adoption strategy

Imprint’s general approach to refining AI adoption is strategy testing: identify a few goals, pick an initial approach, and then iterate rapidly in the details until the approach genuinely works. In an era of crushing optics, senior leaders immersing themselves in the details is one of our few defenses.

First draft of Imprint’s strategy for AI adoption

Shortly after joining, I partnered with the executive team to draft the above strategy for AI adoption. After a modest amount of debate, the pillars we landed on were:

  1. Pave the path to adoption by removing obstacles, especially things like having to explicitly request access to tooling. There’s significant internal and industry excitement for AI adoption, and we should believe in our teams. If they aren’t adopting tooling, we predominantly focus on making adoption easier rather than spending time being skeptical or dismissive of their efforts.
  2. Opportunity for adoption is everywhere, rather than being isolated to engineering, customer service, or what not. To become a company that widely benefits from AI, we need to be solving the problem of adoption across all teams. It’s not that I believe we should take the same approach everywhere, but we need some applicable approach for each team.
  3. Senior leadership leads from the front to ensure what we’re doing is genuinely useful, rather than getting caught up in what we’re measuring.

As you can see from those principles, and my earlier comment, my biggest fear for AI adoption is that teams can focus on creating the impression of adopting AI, rather than focusing on creating additional productivity. Optics are a core part of any work, but almost all interesting work occurs where optics and reality intersect, which is what these pillars aim to support.


As an aside, in terms of the components of strategy in Crafting Engineering Strategy, this is really just the strategy’s policy. In addition, we used strategy testing to refine our approach, defined a concrete set of initial actions to operationalize it (they’re a bit too specific to share externally), and did some brief exploration to make sure I wasn’t overfitting on my prior work at Carta.

Documenting tips & tricks

My first adoption step was collecting as many internal examples of tips and tricks as possible into a single Notion database. I took a very broad view on what qualified, with the belief that showing many different examples of using tools–especially across different functions–is both useful and inspiring.

[Image: a Notion table of AI tips and trainings, with columns for the name of the tip and the relevant team, including topics like using Claude Code, adding Slack bots, and employing ChatGPT for marketing.]

I’ve continued extending this, with contributions from across the company, and it’s become a useful resource for both humans and bots alike to provide suggestions on approaching problems with AI tooling.

Centralizing our prompts

One of my core beliefs in our approach is that making prompts discoverable within the company is extremely valuable. Discoverability solves five distinct problems:

  1. Creating visibility into what prompts can do (so others can be inspired to use them in similar scenarios). For example, you can use our agents to comment on a Notion doc when it’s created, respond effectively in Slack channels, triage Jira tickets, and so on
  2. Showing what a good prompt looks like (so others can improve their prompts). For example, you can start moving complex configuration into tables and out of lists, which are harder to read and to modify accurately
  3. Serving as a repository of copyable sections to reuse across prompts. For example, you can copy one of our existing “Jira-issue triaging prompts” to start triaging a new Jira project
  4. Making prompts the joint property of a team or function, not the immutable construct of one person. For example, anyone on our Helpdesk team can improve the prompt responding to Helpdesk requests, not just one person with access to the prompt, and it’s not locked behind being comfortable with Git or GitHub (although I do imagine we’ll end up with more restrictions around editing our most important internal agents over time)
  5. Identifying repeating prompt sub-components that imply missing or hard-to-use tools. For example, earlier versions of our prompts had a lot of confusion around how to specify Slack users and channels, which I got comfortable working around, but others did not

My core approach is that every agent’s prompt is stored in a single Notion database which is readable by everyone in the company. Most prompts are editable by everyone, but some have editing restrictions.

Here’s an example of a prompt we use for routing incoming Jira issues from Customer Support to the correct engineering team.

[Image: the prompt for triaging Jira tickets, detailing steps for retrieving comments, updating labels, and determining responsible teams, with guidelines for using Slack for communication and references, and a list of teams with their on-call aliases and areas of responsibility.]

Here’s a second example, this time of responding to requests in our Infrastructure Engineering team’s request channel.

[Image: the prompt for a service-desk agent handling Slack messages about access requests for tools such as AWS, VPN, and NPM, with step-by-step guidelines for different scenarios, including retrieving user IDs, handling specific requests, and directing users to the appropriate resources or teams.]

Pretty much all prompts end with an instruction to include a link to the prompt in the generated message. This ensures it’s easy to go from a mediocre response to the prompt driving that response, so that you can fix it.

Adopting a standard platform

In addition to collecting tips and prompts, the next obvious step for AI adoption is identifying a standard AI platform to be used within the company, e.g. ChatGPT, Claude, Gemini or what not.

We’ve gone with OpenAI for everyone. In addition to standardizing on a platform, we made sure account provisioning was automatic and in place on day one. To the surprise of no one who’s worked in or adjacent to IT, a lot of revolutionary general AI adoption is… really just account provisioning and access controls. These are the little details that can so easily derail the broader plan if you don’t dive into them.

Within Engineering, we also provide both Cursor and Claude. That said, the vast majority of our Claude usage is done via AWS Bedrock, which we use to power Claude Code… and we use Claude Code quite a bit.

Other AI tooling

While there’s a general industry push towards adopting more AI tooling, I find that a significant majority of “AI tools” are just SaaS vendors that talk about AI in their marketing pitches. We have continued to adopt vendors, but have worked internally to help teams evaluate which “AI tools” are meaningful.

We’ve spent a fair amount of time going deep on integrating with AI tooling for chat and IVR tooling, but that’s a different post entirely.

Metrics

Measuring AI adoption is, like all measurement topics, fraught. Altogether, I’ve found measuring tool adoption very instructive for identifying the right questions to ask. Why haven’t you used Cursor? Or Claude Code? Or whatever? These are fascinating questions to dig into. I try to look at usage data at least once a month, with a particular focus on two questions:

  1. For power adopters, what are they actually doing? Why do they find it useful?
  2. For low or non-adopters, why aren’t they using the tooling? How could we help solve that for them?

At the core, I believe folks who aren’t adopting tools are rational non-adopters, and spending some time understanding the (appearance of) resistance goes further than a top-down mandate. I think it’s often an education gap that is bridged easily enough. Conceivably, at some point I’ll discover a point of diminishing returns, where progress is stymied by folks who are rejecting AI tooling–or because the AI tooling isn’t genuinely useful–but I haven’t found that point yet.

Building internal agents

The next few sections are about building internal agents. The core implementation is a single stateless lambda which handles a wide variety of HTTP requests, similar-ish to Zapier. This is currently implemented in Python, and is roughly 3,000 lines of code, much of it dedicated to oddities like formatting Slack messages, etc.
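As a sketch of the shape (the paths, payloads, and helper functions here are illustrative assumptions, not our actual code):

import json

def load_config(**scope) -> dict:
    # Stub: would look up the agent configuration for this channel/project.
    return {"allowed_tools": [], "prompt_id": None, **scope}

def run_agent(config: dict, payload: dict) -> None:
    # Stub: would run the agent loop with the configured prompt and tools.
    print("running agent with", config)

def handler(event: dict, context) -> dict:
    """Single stateless Lambda entrypoint: route each webhook to an agent."""
    path = event.get("rawPath", "/")
    body = json.loads(event.get("body") or "{}")
    if path == "/slack/events":
        # Slack's URL-verification handshake, then normal event routing.
        if body.get("type") == "url_verification":
            return {"statusCode": 200, "body": body["challenge"]}
        run_agent(load_config(channel=body["event"]["channel"]), body["event"])
    elif path == "/jira/webhook":
        key = body["issue"]["fields"]["project"]["key"]
        run_agent(load_config(project=key), body["issue"])
    return {"statusCode": 200, "body": "ok"}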

For the record, I did originally attempt to do this within Zapier, but I found that Zapier simply doesn’t facilitate the precision I believe is necessary to do this effectively. I also think that Zapier isn’t particularly approachable for a non-engineering audience.

What has fueled adoption (especially for agents)

As someone who spent a long time working in platform engineering, I still want to believe that you can build a platform, and users will come. Indeed, I think it’s true that a small number of early adopters will come, if the problem is sufficiently painful for them, as was the case for Uber’s service migration (2014).

However, what we’ve found effective for driving adoption is basically the opposite of that. What’s really worked is the intersection of platform engineering and old-fashioned product engineering:

  1. (product eng) find a workflow with a lot of challenges or potential impact
  2. (product eng) work closely with domain experts to get the first version working
  3. (platform eng) ensure that working solution is extensible by the team using it
  4. (both) monitor adoption as an indicator of problem-solution fit, or lack thereof

Some examples of the projects where we’ve gotten traction internally:

  • Writing software with effective AGENTS.md files guiding use of tests, typechecking and linting
  • Powering initial customer questions through chat and IVR
  • Routing chat bots that steer questions: solve the problem, provide the answer, or notify the correct responder
  • Issue triaging for incoming tickets: tagging them, and assigning them to the appropriate teams
  • Providing real-time initial feedback on routine compliance and legal questions (e.g. questions which occur frequently and with little deviation)
  • Writing weekly priorities updates after pulling a wide range of resources (Git commits, Slack messages, etc)

For all of these projects that have worked, the formula has been the opposite of “build a platform and they will come.” Instead it’s required deep partnership from folks with experience building AI agents and using AI tooling to make progress. The learning curve for effective AI adoption in important or production-like workflows remains meaningfully high.

Configuring agents

Agents that use powerful tools represent a complex configuration problem. First, exposing too many tools–especially tools that the prompt author doesn’t fully understand–makes it very difficult to create reliable workflows. For example, we have an exit_early command that allows terminating the agent early: this is very effective in many cases, but it also makes it easy to break your bot. Similarly, we have a slack_chat command that allows posting across channels, which can support a variety of useful workflows (e.g. warm handoffs of a question in one channel into a more appropriate alternative), but can also spam folks. Second, as tools get more powerful, they can introduce complex security scenarios.

To address both of these, we currently store configuration in a code-reviewed Git repository. Here’s an example configuration for a Jira project.

[Image: a configuration for a Jira agent, specifying project keys, a prompt ID, a list of allowed tools such as “notion_search” and “slack_chat”, the model “gpt-4.1”, and “respond_to_issue” set to False.]

Here’s another for specifying a Slack responder bot.

[Image: a configuration for an “eng-new-hires” Slack responder, specifying Slack channel IDs, a Notion prompt ID, a list of allowed tools like “notion_search” and “jira_search_jql”, and the model “gpt-4.1”.]

Compared to a JSON file, we can statically type the configuration, and it’s easy to extend over time. For example, we might want to extend slack_chat to restrict which channels a given bot is allowed to publish into, which would be easy enough. For most agents today, the one thing not under Git-version control is the prompts themselves, which are versioned by Notion. However, we can easily require specific agents to use prompts within the Git-managed repository for sensitive scenarios.
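For illustration, here’s roughly what that statically typed configuration can look like; the field names and values are loose reconstructions from the screenshots above, so treat the specifics as assumptions:

from dataclasses import dataclass, field

@dataclass
class JiraAgentConfig:
    project_keys: list[str]
    prompt_id: str                   # Notion page holding the prompt
    allowed_tools: list[str] = field(default_factory=list)
    model: str = "gpt-4.1"
    respond_to_issue: bool = False   # comment back on the issue, or stay silent

# Illustrative values only; the real project keys and IDs differ.
SUPPORT_TRIAGE = JiraAgentConfig(
    project_keys=["SUP"],
    prompt_id="notion-page-id-goes-here",
    allowed_tools=["notion_search", "slack_chat"],
)

Because this is just typed Python, adding something like a per-bot channel allowlist is a one-line field addition, plus enforcement in the slack_chat tool.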

After passing tests, linting and typechecking, the configurations are automatically deployed.

Resolving foreign keys

It’s sort of funny to mention, but one thing that has really interfered with easily writing effective prompts in practice is making it easy to write things like @Will Larson and have it translated into <@U12345>, or whatever the appropriate Slack identifier is for a given user, channel, or user group. The same problem exists for Jira groups, Notion pages and databases, and so on.

This is a good example of where centralizing prompts is useful. I got comfortable pulling the unique identifiers myself, but it became evident that most others were not. This eventually led to three tools for Slack resolution: slack_lookup, which takes a list of references to look up; slack_lookup_prefix, which finds all Slack entities that start with a given prefix (useful to pull all channels or groups starting with @oncall-, for example, rather than having to hard-code the list in your prompt); and slack_search_name, which uses string distance to find potential matches (useful for dealing with typos).

If this sounds bewildering, it’s largely the result of Slack not exposing relevant APIs for this sort of lookup. Slack’s APIs want IDs to retrieve users, groups, and channels, so you have to maintain your own cache of these items to perform a lookup. Performing the lookups, especially for users, is itself messy. Slack users have at least three ways they might be referenced: user.profile.display_name, user.name, and user.real_name, only a subset of which are set for any given user. The correct logic here is, as best I can tell, to look for a match against user.profile.display_name across all users and use it if one exists, then do the same for user.name, and finally user.real_name. If you instead take the first user that matches any of those three fields, you’ll use the wrong user in some scenarios.
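Here’s a sketch of that priority logic, assuming a locally cached list of users: exhaust one field across all users before falling back to the next field.

from typing import Optional

def resolve_user(name: str, users: list[dict]) -> Optional[dict]:
    """Resolve a human-readable name against a cached list of Slack users."""
    fields = [
        lambda u: u.get("profile", {}).get("display_name"),
        lambda u: u.get("name"),
        lambda u: u.get("real_name"),
    ]
    for get_field in fields:
        for user in users:               # exhaust one field across ALL users
            if get_field(user) == name:  # before trying the next field
                return user
    return None

# "Will Larson" resolves via display_name, even though a different
# user's real_name also matches:
users = [
    {"id": "U99999", "name": "wlarson2", "real_name": "Will Larson", "profile": {}},
    {"id": "U12345", "name": "will", "real_name": "William Larson",
     "profile": {"display_name": "Will Larson"}},
]
assert resolve_user("Will Larson", users)["id"] == "U12345"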

In addition to providing tools to LLMs for resolving names, I also have a final mandatory check for each response to ensure the returned references refer to real items. If not, I inject which ones are invalid into the context window and perform an additional agent loop with only entity-resolution tools available. This feels absurd, but it was only at this point that things really started working consistently.
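A sketch of that final check, with the regex and helper names as illustrative assumptions:

import re

# Matches rendered Slack references like <@U12345> or <#C67890|channel-name>.
SLACK_REF = re.compile(r"<[@#]([A-Z0-9]+)(?:\|[^>]*)?>")

def validate_references(response: str, known_ids: set[str], rerun_agent) -> str:
    invalid = [ref for ref in SLACK_REF.findall(response) if ref not in known_ids]
    if not invalid:
        return response
    # Inject the invalid references into the context and run one more
    # agent loop with only entity-resolution tools enabled.
    return rerun_agent(
        context=f"These references are invalid: {invalid}. Fix them.",
        previous_response=response,
        allowed_tools=["slack_lookup", "slack_search_name"],
    )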


As an aside, I was embarrassed by these screenshots, and earlier today I made the same changes for Notion pages and databases as I had previously made for Slack.

Formatting

Similar to foreign-key resolution, there’s a problem with Slack’s mrkdwn variant of Markdown and Jira’s Atlassian Document Format: they’re both strict about formatting.

The tools that call into those APIs now have strict instructions on formatting. These had been contained in individual prompts, but they started showing up in every prompt, so I knew I needed to bring them into the agent framework itself rather than forcing every prompt-author to understand the problem.
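To give a flavor of what those instructions encode, here’s a toy example of the Slack side of the problem. The rules shown (mrkdwn uses single asterisks for bold and <url|text> for links) are Slack’s documented divergences from standard Markdown, but the converter itself is just a sketch:

import re

def markdown_to_mrkdwn(text: str) -> str:
    # [text](url) becomes <url|text> in mrkdwn; do links before bold so
    # the two patterns don't interfere with each other.
    text = re.sub(r"\[([^\]]+)\]\(([^)]+)\)", r"<\2|\1>", text)
    # **bold** becomes *bold* in mrkdwn.
    text = re.sub(r"\*\*([^*]+)\*\*", r"*\1*", text)
    return text

assert markdown_to_mrkdwn("See the [docs](https://example.com), **today**") == \
    "See the <https://example.com|docs>, *today*"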

My guess is that I need to add a validation step similar to the one I added for entity resolution, and that until I do so, I’ll continue to have a small number of infrequent but annoying rendering issues. To be honest, I personally don’t mind the rendering issues, but they create a lot of uncertainty for others using agents, so I think solving them is a requirement.

Logging and debugging

Today, all logs, especially tool usage, are fed into two places. First, they go into Datadog for full logging visibility. Second, and perhaps more usefully for non-engineers, they feed into a Slack channel, #ai-logs, which creates visibility into which tools are used and with which (potentially truncated) parameters.
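The mechanism is simple; here’s a sketch, with the helper and channel wiring as illustrative assumptions:

import json
import logging

logger = logging.getLogger("agents")  # shipped to Datadog via the log pipeline

def log_tool_use(agent: str, tool: str, params: dict, slack_post=print) -> None:
    # Structured event for Datadog...
    logger.info(json.dumps({"agent": agent, "tool": tool, "params": params}))
    # ...and a truncated, human-readable line for the #ai-logs channel.
    summary = json.dumps(params)
    if len(summary) > 200:
        summary = summary[:200] + "..."
    slack_post(f"[{agent}] {tool} {summary}")

log_tool_use("jira-triage", "slack_lookup", {"references": ["@Will Larson"]})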

Longer term, I imagine this will be exposed via a dedicated internal web UX, but generally speaking I’ve found that the subset of folks who are actively developing agents are pretty willing to deal with a bit of cruft. Similarly, the folks who aren’t developing agents directly don’t really care: they want it to work perfectly every time, and they aren’t spending time looking at logs.

Biggest remaining gap: universal platform for accessing user-scope MCP servers

The biggest internal opportunity I see today is figuring out how to give non-engineers an experience equivalent to running Claude Code locally with all their favorite MCP servers plugged in. I’ve wanted ChatGPT or Claude.ai to provide this, but they don’t quite get there. Claude Desktop is close, but it’s somewhat messy to configure, which matters as we look for a tool that we can easily allow everyone internally to customize and use on a daily basis.

I’m still looking for the right tool here. If anyone has a great suggestion that we can be somewhat confident will still exist in two years, and that doesn’t require sending a bunch of internal data to a very early-stage company, I’m curious to hear it!

What’s next?

You’re supposed to start a good conclusion with some sort of punchy anecdote that illuminates your overall thesis in a new way. I’m not sure if I can quite meet that bar, but the four most important ideas for me are:

  1. We are still very early on AI adoption, so focusing on rate of learning is more valuable than anything else
  2. If you want to lead an internal AI initiative, you simply must be using the tools, and not just ChatGPT, but building your own tool-using agent using only an LLM API
  3. My experience is that real AI adoption on real problems is a complex blend of domain context on the problem, domain experience with AI tooling, and old-fashioned IT issues. I’m deeply skeptical of any internal AI adoption initiative that doesn’t anchor on all three. This is an advantage of earlier-stage companies, because you can often find aspects of all three in a single person, or at least across two people. In larger companies, you need three different organizations doing this work together, which is just objectively hard
  4. I think model selection matters a lot, but there are only 2-3 models you need at any given moment, and someone can just tell you what those 2-3 models are. For example, GPT-4.1 is just exceptionally good at following rules quickly. It’s a great model for most latency-sensitive agents

I’m curious what other folks are finding!

“Artemis II will launch without crew, and Artemis IV will be the crewed lunar landing”

This should be the headline, one day soon, in a world where these decisions are primarily technical and think long term. At no extra cost, while we wait for lunar landers, Artemis II uncrewed and Artemis III as the first flight of SLS/Orion with crew, would close out all non-lander risk. Artemis IV, with the … Continue reading “Artemis II will launch without crew, and Artemis IV will be the crewed lunar landing”

Hyperacute Interdynamics

Our models fall apart where the three theories overlap; we're unable to predict what happens when a nanometer-sized squirrel eats a grapefruit with the mass of the sun.

Tuesday: Job Openings

Mortgage Rates From Matthew Graham at Mortgage News Daily: Mortgage Rates Start Week Near 3 Month Highs
Both stocks and bonds lost ground on Monday. This pushed mortgage rates up near their highest levels in just over 3 months (because mortgages are based on bond prices). To put the 3-month highs in perspective, today's rates are right in line with those seen 2 weeks ago. [30 year fixed 6.36%]
emphasis added
Tuesday:
• At 6:00 AM ET, NFIB Small Business Optimism Index for November.

• At 10:00 AM, Job Openings and Labor Turnover Survey for October from the BLS.

Monday 8 December 1662

Up, and carrying Gosnell by coach, set her down at Temple Barr, she going about business of hers today. By the way she was telling me how Balty did tell her that my wife did go every day in the week to Court and plays, and that she should have liberty of going abroad as often as she pleased, and many other lies, which I am vexed at, and I doubt the wench did come in some expectation of, which troubles me.

So to the Duke and Mr. Coventry, and alone, the rest being at a Pay and elsewhere, and alone with Mr. Coventry I did read over our letter to my Lord Treasurer, which I think now is done as well as it can be. Then to my Lord Sandwich’s, and there spent the rest of the morning in making up my Lord’s accounts with Mr. Moore, and then dined with Mr. Moore and Battersby his friend, very well and merry, and good discourse. Then into the Park, to see them slide with their skeates, which is very pretty. And so to the Duke’s, where the Committee for Tangier met: and here we sat down all with him at a table, and had much good discourse about the business, and is to my great content. That done, I hearing what play it was that is to be acted before the King to-night, I would not stay, but home by coach, where I find my wife troubled about Gosnell, who brings word that her uncle, justice Jiggins, requires her to come three times a week to him, to follow some business that her mother intrusts her withall, and that, unless she may have that leisure given her, he will not have her take any place; for which we are both troubled, but there is no help for it, and believing it to be a good providence of God to prevent my running behindhand in the world, I am somewhat contented therewith, and shall make my wife so, who, poor wretch, I know will consider of things, though in good earnest the privacy of her life must needs be irksome to her. So I made Gosnell and we sit up looking over the book of Dances till 12 at night, not observing how the time went, and so to prayers and to bed.

Read the annotations

I’m working on the 2025 gift guide right now, but I wanted to separately shout-out my favorite gift recommendation of the year: Kelli Anderson’s incredible popup book about typography & the alphabet, Alphabet in Motion (Amazon).

💬 Join the discussion on kottke.org

The Surprising Choices Seniors Are Making To Stay Independent Longer

Aging has a funny habit of sneaking in new decisions when no one asks for them, and yet many older adults are navigating these choices with a level of clarity that deserves far more attention. The conversation around healthy independence is changing, not because of flashy trends but because people want to stay in control of their lives without turning daily routines into a project plan. The shift has opened doors to practical tools, smarter support, and better planning that actually lighten the load rather than complicate it.

The Growing Value Of Practical Fitness For Daily Life

Strength training and balance work used to be the thing gyms pushed as optional add ons, but seniors are choosing them for a very different reason. The goal is less about sculpted arms and more about keeping basic mobility comfortable so getting dressed or stepping over a curb does not feel like a chore. Functional training can look like gentle resistance moves, walking short intervals, or practicing controlled motions that mimic everyday tasks. Many older adults say they feel steadier and lighter when they focus on this kind of movement, and that sense of physical confidence carries over into everything from grocery shopping to spending time with grandkids. Small gains matter here because they build momentum without overwhelming anyone with complicated routines.

Indoor wellness programs have also become a reliable anchor, especially for people who prefer consistency over intensity. Water aerobics reduces joint strain, yoga encourages flexibility, and chair based exercise allows participation without pressure. The point is not perfection, it is comfort, and comfort makes people stick with a routine long enough to enjoy the benefits. A body that moves with ease often supports an outlook that feels more open to possibility.

Nutrition That Supports Energy Instead Of Restricting It

Food advice aimed at older adults used to read like a never ending rulebook. That is changing, thankfully, as nutritionists emphasize steady energy and overall nourishment instead of elimination. Seniors gravitate toward balanced meals that keep digestion calm and blood sugar steady, which usually means leaning into protein, fiber, and hydration while enjoying flavors that feel satisfying instead of sparse. Many find that their energy lasts longer when meals follow a predictable rhythm without becoming repetitive.

There is also a growing interest in nutrient density rather than diet culture. Omega rich foods, colorful produce, and fermented ingredients that support gut health have become staples because they help people feel grounded and clear headed. The goal is simple fuel, not complicated formulas that require weighing every part of a meal. When eating well fits naturally into daily life, it becomes far easier to maintain independence with confidence.

Technology Is Becoming A Quiet Partner In Senior Living

Even the most tech resistant individuals have started to notice a shift. Devices are becoming friendlier, designs are less intimidating, and the focus is shifting toward everyday usefulness rather than novelty. Smart medication systems reduce the pressure of remembering doses. Wearable monitors can track heart rate or movement patterns without constant fuss. Voice based interfaces make tasks like setting reminders or adjusting lighting surprisingly easy. These tools support autonomy without drawing attention to themselves, which is exactly why adoption is rising among older adults.

In community settings, innovations are making life smoother for residents who prefer structure without rigidity. Safety sensors, simple video communication systems, and adaptive scheduling tools help maintain normalcy in a way that feels respectful rather than intrusive. When tech in senior living communities is designed with subtlety, seniors feel like they are getting support rather than supervision. The heart of the shift is dignity, and dignity is what keeps people engaged and comfortable in their daily routines.

Financial Planning That Protects Lifelong Independence

Money decisions often land on seniors with all the grace of a falling tree. Retirement income, healthcare expenses, and supplemental coverage can feel tangled, especially when benefits change with little warning. The most successful planners tend to approach the topic early and revisit it often, not because they love spreadsheets but because financial clarity eases stress. Many older adults find that working with specialists who understand retirement age concerns gives them a clearer path toward decisions that preserve their long term comfort.

Some focus on simplifying recurring expenses so daily budgeting does not feel like detective work. Others shift priorities toward investments that create predictable income rather than chasing growth. What matters is that the plan matches the person, not the other way around. Independence does not happen by accident. It grows from steady choices that allow people to live the way they want without second guessing every financial step they take.

Support Systems That Strengthen Autonomy Instead Of Replacing It

Good support has a way of widening a person’s world. Seniors are increasingly drawn to planning tools and professional guidance that protect the life they have built rather than overhaul it. Health coverage assistance is a major part of that trend. Many older adults have learned that Medicare consultants like the ones at Senior Advisors in Scottsdale can save you serious money, especially when benefits feel like they are written in a secret code. Reducing unnecessary spending gives people more bandwidth for things that actually matter to them, from hobbies to travel to simply enjoying a calmer daily rhythm.

Beyond insurance, support comes from transportation services, community engagement programs, and home based resources that keep routines running smoothly. These approaches allow seniors to choose how they want to spend their time rather than being boxed in by logistics. Autonomy grows in spaces where people feel respected and heard, and thoughtful support makes that possible.

Looking Ahead With Confidence

The choices older adults make today are shaping a future where aging feels less like a series of limitations and more like a steady adjustment toward comfort and possibility. Independence thrives when people have the right tools, the right support, and the right information, and those pieces are far more accessible than they once were. There is strength in shaping the next chapter deliberately, and seniors are proving that thoughtful planning can keep life moving in a direction that feels empowered and steady.

Photo: Freepik via their website.


CLICK HERE TO DONATE IN SUPPORT OF DCREPORT’S NONPROFIT NEWSROOM

The post The Surprising Choices Seniors Are Making To Stay Independent Longer appeared first on DCReport.org.

Prolonged Atmospheric River in the Northwest; Snow in the North-Central US

Does studying economics and business make students more conservative?

College education is a key determinant of political attitudes in the United States and other countries. This paper highlights an important source of variation among college graduates: studying different academic fields has sizable effects on their political attitudes. Using surveys of about 300,000 students across 500 U.S. colleges, we find several results. First, relative to natural sciences, studying social sciences and humanities makes students more left-leaning, whereas studying economics and business makes them more right-leaning. Second, the rightward effects of economics and business are driven by positions on economic issues, whereas the leftward effects of humanities and social sciences are driven by cultural ones. Third, these effects extend to behavior: humanities and social sciences increase activism, while economics and business increase the emphasis on financial success. Fourth, the effects operate through academic content and teaching rather than socialization or earnings expectations. Finally, the implications are substantial. If all students majored in economics or business, the college–noncollege ideological gap would shrink by about one-third. A uniform-major scenario, in which everyone studies the same field, would reduce ideological variance and the gender gap. Together, the results show that academic fields shape students’ attitudes and that field specialization contributes to political fragmentation.

That is a recent paper from Yoav Goldstein and Matan Kolerman.  Here is a thread on the paper.

The post Does studying economics and business make students more conservative? appeared first on Marginal REVOLUTION.

       

Comments

Related Stories

 

Leading Index for Commercial Real Estate Decreased 1% in November

From Dodge Data Analytics: Dodge Momentum Index Decreases 1% in November
The Dodge Momentum Index (DMI), issued by Dodge Construction Network, decreased 1.1% in November to 276.8 (2000=100) from the downwardly revised October reading of 280.0. Over the month, commercial planning ticked down 0.1% and institutional planning declined by 3.4%. Year-to-date, the DMI is up 36% from the average reading over the same period in 2024.

“The influx of high-value data center work, compounded by inflationary cost pressures, continues to support elevated DMI levels,” stated Sarah Martin, Associate Director of Forecasting at Dodge Construction Network. “Overall, nonresidential construction is expected to strengthen in 2027, led primarily by data center and healthcare projects. Other nonresidential sectors are more likely to face softer demand and heightened macroeconomic risks.”

On the commercial side, activity slowed down for warehouses and hotels, while planning momentum was sustained for data centers, traditional office buildings and retail stores. On the institutional side, education, healthcare, public and recreational planning saw weaker momentum, after strong activity in recent months. Planning for religious buildings, however, continued to accelerate. Year-over-year, the DMI was up 50% when compared to November 2024. The commercial segment was up 57% (+36% when data centers are removed) and the institutional segment was up 37% over the same period.
...
The DMI is a monthly measure based on the three-month moving value of nonresidential building projects going into planning, shown to lead construction spending for nonresidential buildings by a full year to 18 months.
emphasis added
Dodge Momentum Index Click on graph for larger image.

This graph shows the Dodge Momentum Index since 2002. The index was at 276.8 in November, down from 280.0 the previous month.

According to Dodge, this index leads "construction spending for nonresidential buildings a full year to 18 months".  

Commercial construction is typically a lagging economic indicator.

Substitution Cipher Based on The Voynich Manuscript

Here’s a fun paper: “The Naibbe cipher: a substitution cipher that encrypts Latin and Italian as Voynich Manuscript-like ciphertext“:

Abstract: In this article, I investigate the hypothesis that the Voynich Manuscript (MS 408, Yale University Beinecke Library) is compatible with being a ciphertext by attempting to develop a historically plausible cipher that can replicate the manuscript’s unusual properties. The resulting cipher­a verbose homophonic substitution cipher I call the Naibbe cipher­can be done entirely by hand with 15th-century materials, and when it encrypts a wide range of Latin and Italian plaintexts, the resulting ciphertexts remain fully decipherable and also reliably reproduce many key statistical properties of the Voynich Manuscript at once. My results suggest that the so-called “ciphertext hypothesis” for the Voynich Manuscript remains viable, while also placing constraints on plausible substitution cipher structures.

A brief history of specifiers and protocols

If you’re running JavaScript on a server, how do you import a module? Traditionally, imports looked like this, with CommonJS:

const axios = require('axios');

But now they look like this with ECMAScript Modules:

import axios from 'axios';
// Or, less often, dynamically:
const axios = await import('axios');

However, there’s another layer of complexity: import specifiers.

2021: Node.js introduces the node: protocol

I think the first kind of specifier for a real runtime - and I’ll be specific about that because the Webpack bundler supported a tremendous variety of import styles before this - was Node.js, in 2021 when they introduced the node: protocol.

Before the node: protocol was introduced, in Node.js you’d import the OS library like this, by importing a module called os:

import os from 'os';

After they introduced the node: protocol, you’d do this:

import os from 'node:os';

This has some benefits:

NPM modules can’t contain the : character, so there can’t be overlap between these node: modules and userspace modules from NPM. So Node.js can introduce more modules in the future without fearing overlap from NPM. Plus, it’s more explicit - you immediately know which modules are from Node.js itself.

Node.js supports both versions still, but there are linter rules like useNodejsImportProtocol to push you to using the node: protocol.

2018: Deno introduces https imports

Deno (a Node alternative) originated from Ryan Dahl (Node’s creator) reflecting on his mistakes and building something new. In his talk 10 Things I Regret about Node.js from 2018, he proposed that defaulting to NPM was bad because it centralized on only one module registry. And then he presented the first look at Deno, which would work with “relative or absolute URLs ONLY.�

Deno imports announcement

From their v1 launch blog post, here’s what importing a module looks like with HTTPS imports:

import { serve } from "https://deno.land/std@0.50.0/http/server.ts";

Sidenote that the main popular language to also feature HTTP imports is the Go language. Here’s what importing the GitHub SDK in Go looks like if you import in a package:

import "github.com/google/go-github/v80/github"

2022: Deno introduces the npm: protocol

Unfortunately, this did not last. In 2022, Deno stabilized NPM compatibility and introduced the npm: protocol.

This let you import an NPM module like this:

import { chalk } from "npm:chalk@5";

Supporting the NPM ecosystem, which is the largest and most popular registry for JavaScript, was probably a necessity for Deno to have any traction. At this point, Deno did not support package.json, the NPM standard for storing which versions of NPM modules you were using. So compatibility looked like:

  • Node & Deno: import * from "chalk" only if Deno has an import map. In this case, you need an import map for Deno and a package.json file for Node.js.
  • Deno only:
    • import * from "https://esm.sh/chalk"
    • import * from "npm:chalk"

2023: Deno introduces package.json support

In 2023, Deno introduced support for package.json, which made it significantly more compatible with Node.js:

  • Node & Deno: import * from "chalk" with package.json
  • Deno only:
    • import * from "https://esm.sh/chalk"
    • import * from "npm:chalk"

2024: Deno introduces the jsr: protocol

Then Deno introduced JSR, an alternative to NPM. You could import JSR modules a bunch of different ways, but one of them was:

import * as chalk from "jsr:@nothing628/chalk";

So now Deno supports three protocols (jsr:, npm:, and node:) and Node supports one (node:). JSR has been a mixed bag so far, not clearly ‘better’ than NPM in ways that the community values, and it’s very hard to overcome the network effects of something like NPM.

2024: Deno moves away from HTTP imports

Also in 2024, Deno started moving away from HTTP imports, with blog posts about what HTTP imports got wrong and then in 2025, they published ‘If you’re not using npm specifiers, you’re doing it wrong’.

2025: The current messy state of affairs

Here’s a basic chart of the module specifier situation as of today (December 8, 2025), the best I can see based on manual testing and reading documentation.

Specifier Deno Node Bun Val Town
axios ✅ (with deno.lock) ✅ ✅ ⛔� (no user-provided deno.json/package.json yet)
npm:axios ✅ ⛔� ⛔� ✅
https://esm.sh/axios ✅ (discouraged) 🟠 (in userspace) ⛔� ✅
jsr:axios ✅ ⛔� ⛔� ✅
node:fs ✅ ✅ ✅ ✅

Note that runtime support trickles into downstream tools. So for example, the TypeScript compiler targets Node and doesn’t have any support for the npm: protocol, https imports, or other things that Deno does. The ‘safe subset’ of features for tools is what Node does, which is bare imports and the node: protocol.

Certainly one of the lessons of the last years for both Deno and Bun is that in order to compete effectively with Node they need to provide all of the functionality of Node and then more: there isn’t an angle for support https imports only, as Dahl optimistically planned in 2018.

Having invested heavily into https imports for Val Town, the state of play for module import specifiers is pretty important to me, and this is a difficult ecosystem to play with. The elegance of importing npm:chalk@9 from Deno - specifying the source, module, and version all in an import string - is super nice. HTTPS imports are really great in the sense that you don’t need to put everything on a huge filesystem and you don’t need to publish everything to NPM or create a lot of tarballs. But the future is unpredictable and sometimes we really take two steps forward and one step back.

What about “Nothing about us without us?”

As I was drafting my last piece on Friday, “They have to be able to talk about us without us”, my thoughts of course went to one of the most famous slogans of the disability rights movement, “Nothing about us without us.” I wasn’t unaware that there were similarities in the phrasing of what I wrote. But I think the topic of communicating effectively to groups, as I wrote about the other day, and ensuring that disabled people are centered in disability advocacy, are such different subjects that I didn’t want to just quickly gloss over the topic in a sidebar of a larger piece. They're very distinct topics that really only share a few words in common.

One of the great joys of becoming friends with a number of really thoughtful and experienced disability rights activists over the last several years has been their incredible generosity in teaching me about so much of the culture and history of the movements that they’ve built their work upon, and one of the most powerful slogans has been that refrain of “nothing about us without us”.

Here I should start by acknowledging Alice Wong, who we recently lost, who founded the Disability Visibility Project, and a MacArthur Fellow, and a tireless and inventive advocate for everyone in the disabled community. She was one of the first people to bring me in to learning about this history and these movements, more than a decade ago. She was also a patient and thoughtful teacher, and over our many conversations over the years, she did more than anyone else in my life to truly personify the spirit of “nothing about us without us” by fighting to ensure that disabled people led the work to make the world accessible for all. If you have the chance, learn about her work, and support it.

But a key inflection point in my own understanding of “nothing about us without us” came, unsurprisingly, in the context of how disabled people have been interacting with technology. I used to host a podcast called Function, and we did an episode about how inaccessible so much of contemporary technology has become, and how that kind of ruins things for everyone. (The episode is still up on Spotify and Apple Podcasts.) We had on Emily Ladau of The Accessible Stall podcast, Alex Haagaard of The Disabled List, and Vilissa Thompson of Ramp Your Voice. It’s well worth a listen, and Emily, Alex and Vilissa really do an amazing job of pointing to really specific, really evocative examples of obvious places where today’s tech world could be so much more useful and powerful for everyone if its creators were making just a few simple changes.

What’s striking to me now, listening to that conversation six years later, is how little has changed from the perspective of the technology world, but also how much my own lived experience has come to reflect so much of what I learned in those conversations.

Each of them was the "us" in the conversation, using their own personal experience, and the experience of other disabled people that they were in community with, to offer specific and personal insights that the creators of these technologies did not have. And whether it was for reasons of crass commercial opportunism — here's some money you could be making! — or simply because it was the right thing to do morally, it's obvious that the people making these technologies could benefit by honoring the principle of centering these users of their products.

Taking our turn

I’ve had this conversation on various social media channels in a number of ways over the years, but another key part of understanding the “us” in “nothing about us without us” when it comes to disability, is that the “us” is all of us, in time. It's very hard for many people who haven’t experienced it to understand that everyone should be accommodated and supported, because everyone is disabled; it’s only a question of when and for how long.

In contemporary society, we’re given all kinds of justifications for why we can’t support everyone’s needs, but so much of those are really grounded in simply trying to convince ourselves that a disabled person is someone else, an “other” who isn’t worthy or deserving of our support. I think deep down, everyone knows better. It’s just that people who don’t (yet) identify as disabled don’t really talk about it very much.

In reality, we'll all be disabled. Maybe you're in a moment of respite from it, or in that brief window before the truth of the inevitability of it has been revealed to you (sorry, spoiler warning!), but it's true for all of us — even when it's not visible. That means all of us have to default to supporting and uplifting and empowering the people who are disabled today. This was the key lesson that I didn’t really get personally until I started listening to those who were versed in the history and culture of disability advocacy, about how the patronizing solutions were often harmful, or competing for resources with the right answers.

I’ve had my glimpses of this myself. Back in 2021, I had Lyme disease. I didn’t get it as bad as some, but it did leave me physically and mentally unable to function as I had been used to, for several months. I had some frame of reference for physical weakness; I could roughly compare it to a bad illness like the flu, even if it wasn’t exactly the same. But a diminished mental capacity was unlike anything I had ever experienced before, and was profoundly unsettling, deeply challenging my sense of self. After the incident I’d described in 2022, I had a series of things to recover from physically and mentally that also presented a significant challenge, but were especially tough because so much of people’s willingness to accommodate others is based on any disability being visible. Anything that’s not immediately perceived at a superficial level, or legible to a stranger in a way that’s familiar to them, is generally dismissed or seen as invalid for support.

I point all of this out not to claim that I fully understand the experience of those who live with truly serious disabilities, or to act as if I know what it’s been like for those who have genuinely worked to advocate for disabled people. Instead, I think it can often be useful to show how porous the boundary is between people who don’t think of themselves as disabled and those who already know that they are. And of course this does not mean that people who aren't currently disabled can speak on behalf of those who are — that's the whole point of "nothing about us without us"! — but rather to point out that the time to begin building your empathy and solidarity is now, not when you suddenly have the realization that you're part of the community.

Everything about us

There’s a righteous rage that underlies the cry of “nothing about us without us”, stemming from so many attempts to address the needs of disabled people having come from those outside the community, arriving with plans that ranged from inept to evil. We’re in a moment when the authoritarians in charge in so much of the world are pushing openly-eugenicist agendas that will target disabled people first amongst the many vulnerable populations that they’ll attempt to attack. Challenging economic times like the one we’re in affect disabled people significantly harder as the job market disproportionately shrinks in opportunities for the disabled first.

So it’s going to take all of us standing in solidarity to ensure that the necessary advocacy and support are in place for what promises to be an extraordinarily difficult moment. But I take some solace and inspiration from the fact that there are so many disabled people who have provided us with the clear guidance and leadership we need to navigate this moment. And there is simple guidance we can follow when doing so to ensure that we’re centering the right leaders, by listening to those who said, “nothing about us without us.”

Monday assorted links

1. New Yorker best films of 2025.  The full panoply (not always available to actual viewers in advance) is in fact quite good, once the full list is out.

2. Individual home NIMBY in Fairfax.

3. Jerry Muller Substack and intellectual biography.

4. Mick West sides with a guy with 286 Twitter followers over classified intelligence, based on multiple sensor readings.  Russian drone incursions are not seriously doubted…except by him (no doubt some are mistaken observations ofc).

5. Podcast with Steve Levitt.

6. Scott Alexander on the vibecession.

7. The New School will cut or pause 25 different programs.

8. On new, for-profit cities (FT).

9. Are global aesthetics flattening India’s fashion imagination?

The post Monday assorted links appeared first on Marginal REVOLUTION.

       

Comments

 

December ICE Mortgage Monitor: Home Prices "Firmed" in November, Up 0.8% Year-over-year

Today, in the Real Estate Newsletter: December ICE Mortgage Monitor: Home Prices "Firmed" in November, Up 0.8% Year-over-year

Brief excerpt:
Inventory Impacts Prices

• About one-third of markets are seeing annual home price declines, while two-thirds are posting gains

• The Northeast and Midwest dominate growth, with 24 of the top 25 markets for annual price gains located there, while all 36 markets with annual declines are in the South and Westbr /> ...
ICE Home Price Index• New Haven, Conn., leads with prices up +7.3% year-over-year, followed by Syracuse, N. Y. (+7.2%), and Scranton, Pa. (+6.9%). The largest declines are in parts of Florida, Texas, Colorado and California

• Markets are showing signs of rebalancing, with inventory improving in the Northeast and tightening in the South and West

• The 10 hottest markets saw monthly gains below their 12-month averages, hinting at cooler growth ahead, while 27 of 36 markets with annual declines posted adjusted price increases from October to November, signaling modest firming in late 2025
emphasis added
There is much more in the article.

What if bigger models, like bigger stars, fail faster?

The current debate over whether OpenAI has become “too big to fail,” triggered by the viral Wall Street Journal article, tends to frame the risk in familiar economic terms: over-concentration, interlocking commitments, trillion-dollar infrastructure buildouts, and the emergence of a firm whose collapse could destabilize a sector that now props up a sluggish U.S. economy. That argument is correct but incomplete. The deeper structural fragility lies not in the financing of AI infrastructure but in the epistemic dynamics of the models themselves. As we worked through the numbers, it became clear that OpenAI’s infrastructure roadmap—petawatts of compute, trillion-parameter systems, multi-trillion-dollar capital requirements spread across cloud providers, chip manufacturers, and sovereign backers—was constructed on an essentially theological belief in seamless exponential model improvement, a belief that assumed scaling could continue indefinitely toward “AGI.” That faith was not grounded in empirical availability of training data or in any theoretical understanding of how learning actually behaves at frontier scale. The infrastructure has been sized for stars that burn hotter and hotter, without regard for the fuel supply.


Sloptraptions is an AI-assisted opt-in section of the Contraptions Newsletter. If you only want my hand-crafted writing, you can unsubscribe from this section.


The real fuel, of course, is training data: the cultural, linguistic, computational, and behavioral traces that models attempt to fit. And here the numbers are uncompromising. The growth of high-quality data is slow and diminishing. The world’s stock of usable text, code, imagery, and speech grows incrementally, not exponentially. Meanwhile model sizes, compute budgets, and context windows have expanded by orders of magnitude. That mismatch means that newer, larger models are trained on datasets that are only marginally larger than those that fed their predecessors. The result is not graceful scaling but increasing epistemic brittleness. These larger systems learn the training distribution with greater and greater precision, pushing well past the semantic “signal” of an era and into its high-frequency cultural noise. They fit not only the stable structures of human knowledge but its accidents, its transient biases, its stylistic detritus. Shear’s observation—that frontier models are barely regularized and therefore massively overfit—captures this dynamic in accessible language.

But the deeper point is that overfitting to a static cultural snapshot becomes more catastrophic the larger the model grows. Culture is non-stationary; code ecosystems evolve; APIs change; institutions churn; slang mutates; the factual substrate of the world drifts each month. A small model trained on yesterday’s world degrades slowly. A large model trained on yesterday’s world degrades quickly and fails sharply.

This leads to a paradox at the heart of current AI economics. The trillion-dollar infrastructure wave justified by OpenAI’s ambitions has been built to support the next generation of massive models, but those massive models become obsolete faster than smaller ones. Like large stars, they burn brighter but collapse sooner. They present answers with greater surface coherence and tighter epistemic compression, giving users the illusion of deeper insight when they are actually reproducing the micro-structure of an outdated distribution. People will rely on this increased apparent precision—mistaking fluency for truth—and take correspondingly larger risks, operational, financial, political, and scientific. Precision becomes a kind of leverage: as confidence grows faster than correctness, the system tilts toward a bubble of over-trusted, under-verified automated reasoning. When the model slips outside of its training-era manifold, it does so abruptly, invisibly, and in ways that propagate errors with unprecedented speed across the organizations that depend on it. This is a new kind of systemic fragility: epistemic over-leverage driven by model scale rather than financial leverage driven by debt.

Against this background, the “too big to fail” scenario acquires a different meaning. The infrastructure ecosystem—Oracle’s data centers, Microsoft’s GPU clusters, Broadcom’s networking pipelines, Nvidia’s supply chain—was scaled for frontier models that may offer shrinking marginal returns and increasing temporal instability. If model quality plateaus or degrades because data does not keep pace, the economic justification for the infrastructure may collapse even as the infrastructure itself remains technically capable and commercially underutilized. The danger is not that OpenAI fails outright, but that the sector pivots into a phase where the largest models have the shortest useful lifespans, while the capital commitments they require stretch across decades. This is a structural misalignment between epistemic time and financial time.

Yet the story need not end in collapse. There is a way out, and it comes from expanding the data manifold itself rather than merely scaling the model against a static corpus. The next major frontier is likely not text or code but 4D video—continuous, high-bandwidth, spatiotemporal sensory data that more closely matches the real structure of the physical world. Unlike textual culture, which is finite and saturating, the spatiotemporal world generates unbounded data streams. High-fidelity 4D capture, simulation, and reconstruction offer an escape from the bottleneck that is slowly strangling language-model scaling. Models trained on rich physical dynamics rather than frozen cultural snapshots would not merely grow larger; they would grow deeper, anchored to a data distribution that evolves with reality instead of drifting away from it. If the industry moves decisively toward 4D multimodal modeling—robotics, embodied agents, physical reasoning, simulation feedback loops—then the present overfitting trap can be broken. The fuel supply becomes effectively renewable, and the models’ lifespans lengthen rather than shrink. In that sense, the most optimistic path is not to keep scaling cultural predictors but to graduate beyond them, giving the infrastructure something real to learn from and restoring coherence between model scale, data scale, and the world itself.

Causal Why vs Teleological Why

Asked ChatGPT a question that has always bugged me about English as well as all the Indian languages I know. Sharing the one-shot answer with no further processing. This kinda explains why German is a better language for philosophy than English. Possibly Russian too.


Political Organization in Pre-Colonial Africa

We provide an overview of the explanations for the relative lack of state formation historically in Africa. In doing so we systematically document for the first time the extent to which Africa was politically decentralized, calculating that in 1880 there were probably 45,000 independent polities which were rarely organized on ethnic lines. At most 2% of these could be classified as states. [emphasis added by TC] We advance a new argument for this extreme political decentralization positing that African societies were deliberately organized to stop centralization emerging. In this they were successful. We point out some key aspects of African societies that helped them to manage this equilibrium. We also emphasize how the organization of the economy was subservient to these political goals.

That is from a new NBER working paper by Soeren J. Henn and James A. Robinson.

The post Political Organization in Pre-Colonial Africa appeared first on Marginal REVOLUTION.

Trump Administration to End Affirmative Action for Male College Applicants

While the bias in acceptance rates towards men in college admissions isn’t news to those who follow this stuff and actually know what they’re talking about, a bunch of people are about to be in for a very rude awakening (boldface mine):

Brown University, one of the most selective institutions in America, attracted nearly 50,000 applicants who vied for just 1,700 freshman seats last year.

The university accepted nearly equal numbers of male and female prospects, though, like some other schools, it got nearly twice as many female applicants. That math meant it was easier for male students to get in — 7 percent of male applicants were admitted, compared with 4.4 percent of female applicants, university data shows.

The Trump administration’s policies may soon put an end to that advantage enjoyed by men at some colleges, admissions and higher-education experts say.

While much of the president’s recent scrutiny of college admissions practices has focused on race, these experts say his ban on diversity, equity and inclusion is likely to hit another underrepresented group of applicants: men, and particularly White men — the largest subset of male college applicants.

“This drips with irony,” said Ted Mitchell, president of the American Council on Education, or ACE, the nation’s largest association of universities and colleges, who said he expects that colleges and universities will end any consideration of gender in admission. “The idea of males, including White males, being at the short end of the stick all of a sudden would be a truly ironic outcome.”

Universities are looking at the administration’s edicts “and they’re saying, ‘Well, we’d rather be cautious than stick our neck out’” by continuing to give advantages to male applicants, said ACE’s Mitchell, who was undersecretary of education under President Barack Obama. “I think we will see people dropping gender preferences, even though it is still within the law.”

…Private institutions are allowed to consider gender in admission under Title IX, the federal law otherwise banning discrimination by universities and colleges that get federal funding. That’s due to a loophole dating from when the law was passed, in 1972.

It would be fitting for our times if this somehow was brought before the Supreme Court, and they determine that, of course, one can discriminate against women. No doubt, this also will serve as ‘culture war’ fodder.

Housing December 8th Weekly Update: Inventory Down 2.7% Week-over-week

Altos reports that active single-family inventory was down 2.7% week-over-week.  Inventory usually starts to decline in the fall and then declines sharply during the holiday season.

The first graph shows the seasonal pattern for active single-family inventory since 2015.

[Graph: Altos year-over-year home inventory]

The red line is for 2025.  The black line is for 2019.  

Inventory was up 15.3% compared to the same week in 2024 (last week it was up 15.6%), and down 4.1% compared to the same week in 2019 (last week it was down 4.3%). 

Inventory started 2025 down 22% compared to 2019.  Inventory has closed most of that gap, but it appears inventory will still be below 2019 levels at the end of 2025.

[Graph: Altos home inventory] This second inventory graph is courtesy of Altos Research.

As of December 5th, inventory was at 795 thousand (7-day average), compared to 817 thousand the prior week.  

Mike Simonsen discusses this data and much more regularly on YouTube.

Disagreement in Science: Missing Women by David Klinowski

Here's a study of women in science that explores a novel angle.

David Klinowski; Voicing Disagreement in Science: Missing Women. The Review of Economics and Statistics 2025; 107 (6): 1743–1753. doi: https://doi.org/10.1162/rest_a_01322 

Abstract: This paper examines the authorship of post-publication criticisms in the scientific literature, with a focus on gender differences. Bibliometrics from journals in the natural and social sciences show that comments that criticize or correct a published study are 20% to 40% less likely than regular papers to have a female author. In preprints in the life sciences, prior to peer review, women are missing by 20% to 40% in failed replications compared to regular papers, but they are not missing in successful replications. In an experiment, I then find large gender differences in willingness to point out and penalize a mistake in someone's work.

Guetlein defends Golden Dome secrecy, says industry is ‘well informed’ despite criticism

Guetlein said his office has held extensive private engagements with industry.

The post Guetlein defends Golden Dome secrecy, says industry is ‘well informed’ despite criticism appeared first on SpaceNews.

China hearing focuses on U.S. policy shortfalls


A House hearing on the rise of China’s space program turned into a broader critique of U.S. space policy, including NASA’s current approach to returning astronauts to the moon.

The post China hearing focuses on U.S. policy shortfalls appeared first on SpaceNews.

Russia Blocks FaceTime and Snapchat

Dasha Litvinova, reporting for the AP:

Russian authorities said Thursday they have imposed restrictions on Apple’s video calling service FaceTime, the latest step in an effort to tighten control over the internet and communications online. State internet regulator Roskomnadzor alleged in a statement that the service is being “used to organize and conduct terrorist activities on the territory of the country, to recruit perpetrators (and) commit fraud and other crimes against our citizens.” Apple did not respond to an emailed request for comment.

The Russian regulator also announced that it has blocked Snapchat, a messaging app for sharing photos, videos and text messages, citing the same grounds it gave for restricting FaceTime. It said that it took the action Oct. 10 even though it only reported the move on Thursday.

I’m sure the crime rate in Russia will soon plummet. (I’m curious why iMessage isn’t blocked too.)


★ Meta Says Fuck That Metaverse Shit

Mike Isaac, reporting for The New York Times, “Meta Weighs Cuts to Its Metaverse Unit” (gift link):

Meta is considering making cuts to a division in its Reality Labs unit that works on the so-called metaverse, said three employees with knowledge of the matter.

The cuts could come as soon as next month and amount to 10 to 30 percent of employees in the Metaverse unit, which works on virtual reality headsets and a V.R.-based social network, the people said. The numbers of potential layoffs are still in flux, they said. Other parts of the Reality Labs division develop smart glasses, wristbands and other wearable devices. The total number of employees in Reality Labs could not be learned.

Meta does not plan to abandon building the metaverse, the people said. Instead, executives expect to shift the savings from the cuts into investments in its augmented reality glasses, the people said.

Meta confirmed the cuts to the Wall Street Journal, and Bloomberg’s Kurt Wagner broke the news Thursday.

I’m so old that I remember ... checks notes ... four years ago, when Facebook renamed itself Meta in late 2021 with this statement: “Meta’s focus will be to bring the metaverse to life and help people connect, find communities and grow businesses.” And Mark Zuckerberg, announcing the change, wrote:

But all of our products, including our apps, now share a new vision: to help bring the metaverse to life. And now we have a name that reflects the breadth of what we do.

From now on, we will be metaverse-first, not Facebook-first. That means that over time you won’t need a Facebook account to use our other services. As our new brand starts showing up in our products, I hope people around the world come to know the Meta brand and the future we stand for.

Many of us never fell for this metaverse nonsense. For example, I’m also old enough to remember just one year later, near the end of Joanna Stern’s on-stage interview with Craig Federighi and Greg Joswiak at a 2022 WSJ event, seven months before Vision Pro was announced (at the 29:30 mark):

Stern: You have to finish this sentence, both of you. The metaverse is...

Joz: A word I’ll never use.

He might want to use the word now, just to make jokes.

Om Malik, writing in April this year:

Some of us are old enough to remember that the reason Mark renamed the company is because the Facebook brand was becoming toxic, and associated with misinformation and global-scale crap. It was viewed as a tired, last-generation company. Meta allowed the company to rebrand itself as something amazing and fresh.

Lastly, yours truly, linking to Malik’s post:

And so while “Meta” will never be remembered as the company that spearheaded the metaverse — because the metaverse never was or will be an actual thing — it’s in truth the perfect name for a company that believes in nothing other than its own success.

Quoting Cory Doctorow

Now I want to talk about how they're selling AI. The growth narrative of AI is that AI will disrupt labor markets. I use "disrupt" here in its most disreputable, tech bro sense.

The promise of AI – the promise AI companies make to investors – is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company.

That's it.

That's the $13T growth story that Morgan Stanley is telling. It's why big investors and institutionals are giving AI companies hundreds of billions of dollars. And because they are piling in, normies are also getting sucked in, risking their retirement savings and their family's financial security.

Cory Doctorow, The Reverse Centaur’s Guide to Criticizing AI

Tags: cory-doctorow, ai-ethics, ai

Using LLMs at Oxide

Thoughtful guidance from Bryan Cantrill, who evaluates applications of LLMs against Oxide's core values of responsibility, rigor, empathy, teamwork, and urgency.

Via Lobste.rs

Tags: ai, generative-ai, llms, oxide, bryan-cantrill

Quoting David Crespo

What to try first?

Run Claude Code in a repo (whether you know it well or not) and ask a question about how something works. You'll see how it looks through the files to find the answer.

The next thing to try is a code change where you know exactly what you want but it's tedious to type. Describe it in detail and let Claude figure it out. If there is similar code that it should follow, tell it so. From there, you can build intuition about more complex changes that it might be good at. [...]

As conversation length grows, each message gets more expensive while Claude gets dumber. That's a bad trade! [...] Run /reset (or just quit and restart) to start over from scratch. Tell Claude to summarize the conversation so far to give you something to paste into the next chat if you want to save some of the context.

David Crespo, Oxide's internal tips on LLM use

Tags: coding-agents, ai-assisted-programming, oxide, claude-code, generative-ai, llms

What has gone wrong with tourism to Las Vegas?

Agitators in the city have attempted to document the deterioration by posting ominous images of barren casinos, conjuring the perception of a place hollowed out by economic armageddon. The reality is more nuanced, but it is true that practically every conceivable indicator tracking tourism to Las Vegas is flashing warning signs. Hotel occupancy has cratered. Rooms were only 66.7 percent full in July, down by 16.8 percent from the previous year. The number of travelers passing through Harry Reid International Airport also declined by 4.5 percent in 2025 during an ongoing ebb of foreign tourists, for familiar reasons. Canadians, historically one of the city’s most reliable sources of degenerates, have effectively vanished. Ticket sales for Air Canada jets flying to Las Vegas have slipped by 33 percent, while the Edmonton-based low-cost carrier Flair has reported a 62 percent drop-off.

Here is the full story, which shows it is by no means an exclusively Canadian phenomenon.  Overall, I am happy to see a shift away from gambling, drinking, and “shows for wealthy old people”?

The post What has gone wrong with tourism to Las Vegas? appeared first on Marginal REVOLUTION.

Affordability, Part II

Using MOOCs To Help The Unemployed Back Into Work

In last week’s primer I showed that the media’s usual story — that Americans have been impoverished by the surge in inflation that began in 2021 — isn’t right. In fact, according to the conventional measure that economists use to gauge purchasing power – real income – the purchasing power of most Americans is higher today than it was before the 2020 pandemic. But I also argued that looking only at income divided by the Consumer Price Index (CPI) means that we miss some important ways in which the current economy is worse than the conventional measures indicate. In particular, I emphasized the adverse effects of high borrowing costs and low hiring, which aren’t included in the CPI.

Beyond that, I also argued our general sense of affordability encompasses more than just purchasing power. We also care about economic inclusion, security, and fairness.

Beyond the paywall I’ll explain these concepts and how they help explain Americans’ economic dissatisfaction. Specifically, I’ll address the following:

1. Why life doesn’t feel affordable when people aren’t able to buy those goods and services that make them feel that they are full members of society.

2. Why life doesn’t feel affordable unless people feel assured that a stretch of bad luck won’t lead to financial disaster.

3. Why it’s important to people that prices reflect their sense of fair play, and that they don’t see themselves being taken advantage of by those in positions of privilege and power.


Niche Museums: The Museum of Jurassic Technology

I finally got to check off the museum that's been top of my want-to-go list since I first started documenting niche museums I've been to back in 2019.

The Museum of Jurassic Technology opened in Culver City, Los Angeles in 1988 and has been leaving visitors confused as to what's real and what isn't for nearly forty years.

Tags: museums

Colors of growth

This looks pretty tremendous:

We develop a novel approach to measuring long-run economic growth by exploiting systematic variation in the use of color in European paintings. Drawing inspiration from the literature on nighttime lights as a proxy for income, we extract hue, saturation, and brightness from millions of pixels to construct annual indices for Great Britain, Holland, France, Italy, and Germany between 1600 and 1820. These indices track broad trends in existing GDP reconstructions while revealing higher frequency fluctuations – such as those associated with wars, political instability, and climatic shocks – that traditional series smooth over. Our findings demonstrate that light, decomposed into color and brightness components, provides a credible and independent source of information on early modern economic activity.

That is new research by Lars Boerner, Tim Reinicke, Samad Sarferaz, and Battista Severgnini.  Via Ethan Mollick.
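
As a toy illustration of the kind of pixel-level extraction the abstract describes (my sketch, not the authors' code; the input file name is hypothetical), computing a painting's mean brightness could look like this in Go:

package main

import (
  "fmt"
  "image"
  _ "image/jpeg" // register the JPEG decoder
  "os"
)

// averageBrightness computes the mean HSV-style brightness (the maximum
// of the R, G, and B channels per pixel), scaled to 0..1.
func averageBrightness(img image.Image) float64 {
  bounds := img.Bounds()
  var sum, n float64
  for y := bounds.Min.Y; y < bounds.Max.Y; y++ {
    for x := bounds.Min.X; x < bounds.Max.X; x++ {
      r, g, b, _ := img.At(x, y).RGBA() // 16-bit channels
      max := r
      if g > max {
        max = g
      }
      if b > max {
        max = b
      }
      sum += float64(max) / 65535
      n++
    }
  }
  return sum / n
}

func main() {
  f, err := os.Open("painting.jpg") // hypothetical input file
  if err != nil {
    panic(err)
  }
  defer f.Close()
  img, _, err := image.Decode(f)
  if err != nil {
    panic(err)
  }
  fmt.Printf("mean brightness: %.3f\n", averageBrightness(img))
}

Averaging such per-pixel statistics across a year's paintings is the flavor of index the paper constructs; the actual methodology is of course more involved.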

The post Colors of growth appeared first on Marginal REVOLUTION.

Which economy did best in 2025?

Our annual ranking returns

Endless expanse

This view of the seemingly endless expanses of the Chilean Atacama Desert definitely deserves to be today’s Picture of the Week. The silver full Moon shines bright in the beautiful gradient evening sky. Below it, to the right, the giant dome of ESO’s Extremely Large Telescope (ELT) glows with the golden sunset light.

The ELT is perched atop Cerro Armazones, at an altitude of 3046 m. The dome might look small in the image, but the full 30-minute walk up the stairs from the dome’s entrance to its top hints at its gigantic size: 80 m high and 93 m wide. Weighing about 6100 tonnes, the dome is designed to protect the telescope and its mirrors, including the 39-m wide primary mirror — the biggest eye on the sky.

To the left of Cerro Armazones the last sunbeams of the evening cast a dark triangular shadow: Cerro Paranal, home to ESO’s Very Large Telescope (VLT), from where this picture was taken by Luca Sbordone, ESO staff astronomer. It’s no wonder that this site hosts so many professional telescopes, as it boasts the darkest skies on Earth. Chile is in fact home to all of ESO’s observatories, thanks to a long-lasting partnership that goes back more than 60 years — may it be as timeless and inspiring as this view. 

Compressing embedded files in Go

Go’s embed feature lets you bundle static assets into an executable, but it stores them uncompressed. This wastes space: a web interface with documentation can bloat your binary by dozens of megabytes. A proposal to optionally enable compression was declined because it is difficult to handle all use cases. One solution? Put all the assets into a ZIP archive! 🗜️

Code

The Go standard library includes the archive/zip package to read and write ZIP archives. It provides a function that turns a ZIP archive into an io/fs.FS value that can replace an embed.FS in most contexts.¹

package embed

import (
  "archive/zip"
  "bytes"
  _ "embed" // required for the //go:embed directive below
  "fmt"
  "io/fs"
  "sync"
)

// The ZIP archive is embedded as a raw byte slice.
//go:embed data/embed.zip
var embeddedZip []byte

// dataOnce parses the embedded archive lazily, exactly once.
var dataOnce = sync.OnceValue(func() *zip.Reader {
  r, err := zip.NewReader(bytes.NewReader(embeddedZip), int64(len(embeddedZip)))
  if err != nil {
    // The archive is produced at build time, so a parse failure is a bug.
    panic(fmt.Sprintf("cannot read embedded archive: %s", err))
  }
  return r
})

// Data exposes the archive contents as an io/fs.FS.
func Data() fs.FS {
  return dataOnce()
}
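
As a usage sketch (mine, not from the original post), the returned fs.FS works with the standard io/fs helpers; the module path and file name below are hypothetical:

package main

import (
  "fmt"
  "io/fs"

  "example.com/app/common/embed" // hypothetical module path
)

func main() {
  // fs.ReadFile works with any fs.FS, even though *zip.Reader does not
  // provide a ReadFile method of its own (see footnote 1).
  content, err := fs.ReadFile(embed.Data(), "docs/usage.md") // hypothetical path
  if err != nil {
    panic(err)
  }
  fmt.Printf("read %d bytes\n", len(content))
}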

We can build the embed.zip archive with a rule in a Makefile. We specify the files to embed as dependencies to ensure changes are detected.

common/embed/data/embed.zip: console/data/frontend console/data/docs
common/embed/data/embed.zip: orchestrator/clickhouse/data/protocols.csv 
common/embed/data/embed.zip: orchestrator/clickhouse/data/icmp.csv
common/embed/data/embed.zip: orchestrator/clickhouse/data/asns.csv
common/embed/data/embed.zip:
    mkdir -p common/embed/data && zip --quiet --recurse-paths --filesync $@ $^

The automatic variable $@ is the rule target, while $^ expands to all the dependencies, modified or not.

Space gain

Akvorado, a flow collector written in Go, embeds several static assets:

  • CSV files to translate port numbers, protocols or AS numbers, and
  • HTML, CSS, JS, and image files for the web interface, and
  • the documentation.
[Treemap: breakdown of the space used by each component before (left) and after (right) the introduction of embed.zip. Many embedded files are replaced by a single bigger one.]

Embedding these assets into a ZIP archive reduced the size of the Akvorado executable by more than 4 MiB:

$ unzip -p common/embed/data/embed.zip | wc -c | numfmt --to=iec
7.3M
$ ll common/embed/data/embed.zip
-rw-r--r-- 1 bernat users 2.9M Dec  7 17:17 common/embed/data/embed.zip

Performance loss

Reading from a compressed archive is not as fast as reading a flat file. A simple benchmark shows it is more than 4× slower. It also allocates some memory.²

goos: linux
goarch: amd64
pkg: akvorado/common/embed
cpu: AMD Ryzen 5 5600X 6-Core Processor
BenchmarkData/compressed-12     2262   526553 ns/op   610 B/op   10 allocs/op
BenchmarkData/uncompressed-12   9482   123175 ns/op     0 B/op    0 allocs/op
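
The post doesn’t include the benchmark source. A minimal sketch of how such a comparison could be written (my reconstruction under stated assumptions: hypothetical asset paths and a baseline copy of the assets embedded uncompressed; the real benchmark may differ):

package embed

import (
  stdembed "embed"
  "io"
  "io/fs"
  "testing"
)

// Baseline: the same assets embedded without compression. Hypothetical path.
//go:embed data/docs
var flatFS stdembed.FS

// benchRead opens a file and streams it to io.Discard.
func benchRead(b *testing.B, fsys fs.FS, name string) {
  for i := 0; i < b.N; i++ {
    f, err := fsys.Open(name)
    if err != nil {
      b.Fatal(err)
    }
    if _, err := io.Copy(io.Discard, f); err != nil {
      b.Fatal(err)
    }
    f.Close()
  }
}

func BenchmarkData(b *testing.B) {
  b.Run("compressed", func(b *testing.B) {
    benchRead(b, Data(), "docs/usage.md") // read through the ZIP archive
  })
  b.Run("uncompressed", func(b *testing.B) {
    benchRead(b, flatFS, "data/docs/usage.md") // read the flat embedded copy
  })
}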

Each access to an asset requires a decompression step, as seen in this flame graph:

[CPU flame graph comparing the time spent on CPU when reading data from embed.zip (left) versus reading data directly (right). Because the Go testing framework executes the benchmark for uncompressed data 4 times more often, it uses the same horizontal space as the benchmark for compressed data.]

While a ZIP archive has an index to quickly find the requested file, seeking inside a compressed file is currently not possible.³ Therefore, the files from a compressed archive do not implement the io.ReaderAt or io.Seeker interfaces, unlike directly embedded files. This prevents some features, like serving partial files or detecting MIME types when serving files over HTTP.
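
One possible workaround, sketched below (my suggestion, not from the post): copy an asset into memory when a seekable reader is needed, for example before calling http.ServeContent. The helper name is hypothetical:

package embed

import (
  "bytes"
  "io/fs"
)

// SeekableAsset loads a file from the embedded archive into memory and
// wraps it in a bytes.Reader, which implements io.ReadSeeker and
// io.ReaderAt. This trades memory for compatibility with APIs like
// http.ServeContent. Hypothetical helper, not part of the original post.
func SeekableAsset(name string) (*bytes.Reader, error) {
  data, err := fs.ReadFile(Data(), name)
  if err != nil {
    return nil, err
  }
  return bytes.NewReader(data), nil
}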


For Akvorado, this is an acceptable compromise to save a few mebibytes from an executable of almost 100 MiB. Next week, I will continue this futile adventure by explaining how I prevented Go from disabling dead code elimination! 🦥


  1. You can safely read multiple files concurrently. However, it does not implement ReadDir() and ReadFile() methods. ↩︎

  2. You could keep frequently accessed assets in memory. This reduces CPU usage and trades cached memory for resident memory. ↩︎

  3. SOZip is a profile that enables fast random access in a compressed file. However, Go’s archive/zip module does not support it. ↩︎

The chess culture that is India

Sarwagya Singh Kushwaha has become the youngest player in chess history to earn an official FIDE rating at the age of three years, seven months and 20 days.

Born in 2022, Sarwagya — from Sagar in the central Indian state of Madhya Pradesh — has been rated by FIDE, the international governing body of chess, which requires a player to score points against at least five rated opponents in official events.

The toddler’s first rating of 1572 is considerably above the minimum rating of 1,400, having won five of his eight rated matches. As detailed by chess.com, Sarwagya’s victories have come against opponents including 22-year-old Abhijeet Awasthi (FIDE-rated 1542), 29-year-old Shubham Chourasiya (1559) and 20-year-old Yogesh Namdev (1696).

Sarwagya has broken the record held by another Indian child, Anish Sarkar, who set it at three years, eight months and 19 days old, in November 2024.

Here is more from the NYT, via the excellent Samir Varma.

The post The chess culture that is India appeared first on Marginal REVOLUTION.

Sunday Night Futures

Weekend:
Schedule for Week of December 7, 2025

Monday:
• No major economic releases scheduled.

From CNBC: Pre-Market Data and Bloomberg futures: S&P 500 and DOW futures are little changed (fair value).

Oil prices were up over the last week with WTI futures at $60.11 per barrel and Brent at $63.76 per barrel. A year ago, WTI was at $69, and Brent was at $74 - so WTI oil prices are down about 13% year-over-year.

Here is a graph from Gasbuddy.com for nationwide gasoline prices. Nationally prices are at $2.90 per gallon. A year ago, prices were at $2.97 per gallon, so gasoline prices are down $0.07 year-over-year.

Sunday 7 December 1662

(Lord’s day). A great snow, and so to church this morning with my wife, which is the first time she hath been at church since her going to Brampton, and Gosnell attending her, which was very gracefull. So home, and we dined above in our dining room, the first time since it was new done, and in the afternoon I thought to go to the French church; but finding the Dutch congregation there, and then finding the French congregation’s sermon begun in the Dutch, I returned home, and up to our gallery, where I found my wife and Gosnell, and after a drowsy sermon, we all three to my aunt Wight’s, where great store of her usuall company, and here we staid a pretty while talking, I differing from my aunt, as I commonly do, in our opinion of the handsomeness of the Queen, which I oppose mightily, saying that if my nose be handsome, then is her’s, and such like. After much discourse, seeing the room full, and being unwilling to stay all three, I took leave, and so with my wife only to see Sir W. Pen, who is now got out of his bed, and sits by the fireside. And after some talk, home and to supper, and after prayers to bed. This night came in my wife’s brother and talked to my wife and Gosnell about his wife, which they told me afterwards of, and I do smell that he I doubt is overreached in thinking that he has got a rich wife, and I fear she will prove otherwise. So to bed.


Links 12/7/25

Links for you. Science:

Bird flu patient dies, marking second U.S. fatality in 2025
It’s the ‘most important fish in the sea.’ And it’s disappearing.
Figurine of Gander Covering Woman May Show 12,000-year-old Myth in Israel
A Lost Planet Created the Moon. Now, We Know Where It Came From.
Vaccines Do Not Cause Autism, No Matter What The CDC Website Now Says
Mystery creature found in ‘forbidden cloud forest’ of Peru is new species of marsupial

Other:

Border Patrol is taking the powers they want. The US Border Patrol’s desire to be THE national police force takes shape.
Social media isn’t driving the teenage “loneliness epidemic”: Teenagers’ loneliness was the same in the 1970s, 1980s, and 1990s, long before anyone heard of TikTok or smartphones. (excellent)
America Is Becoming Dallas. Part One: The Lord of Plano.
America Is Becoming Dallas. Part Two: Sprawling to Freedom
How D.C. developers made big money on a taxpayer-funded housing project. Developers with political ties to Mayor Muriel E. Bowser stand to collect millions of dollars more than housing experts say is normal for an affordable housing project. (Bowser’s Green Team for the win! Lolsob)
MD housing secretary: Trump cuts will cause MD homelessness to surge 25%
Trump threatened him with death. Now the Pentagon will probe him.
Federal judge rules Trump’s deployment of National Guard in D.C. is ‘unlawful’
Marjorie Taylor Greene’s departure is a canary in a coal mine for MAGA
The Skeletons in Summers’s Closet
MAGA has a foreigner invader problem
The CPA and the Lawyer Who Served Jeffrey Epstein—and Control His Fortune and Secrets
Nation’s Largest Landlord Is Encouraged to Break the Law With Measly Fine for Price Fixing Scheme That Kept Rents Artificially High and Worsened Homelessness Crisis
Woman deported from Maryland shown on video being dragged in Ghana
The realities of being a pop star.
The Outrageous False Equivalences That Prop Up President Trump
Larry Summers and the Hunger Games: Who remembers the food shock of 2005-2008? Just another global policy disaster from then Treasury Secretary Summers.
Is Marc Andreessen just flat-out dumb?
‘Is the price of doing this worth it?’: North Carolina Republicans worry about Trump immigration raids
The doctor who falsely tied the MMR vaccine to autism takes his victory lap
The MAGA Influencers Rehabilitating Hitler: A growing constituency on the right wants America to unlearn the lessons of World War II.
The Censorship Crybabies Are Now The Censors: FDA’s Vinay Prasad Uses Copyright Claims To Silence Critic
ACFD rescues TikTok-famous toucan from behind Pentagon City dishwasher
Stop Asking How Democrats Will Fight Trump. Start Asking Republicans
Bland, easy to follow, for fans of everything: what has the Netflix algorithm done to our films?
White House to pitch a Trump Obamacare extension with limits
A world of ratfuckers
Game Theory Explains How Algorithms Can Drive Up Prices
One Small Guardrail Finally Held Up Against Trump
Musk’s AI supercomputer, used by U.S. military, secretly relies on Chinese hardware

In Case You Missed It…

…a week of Mad Biologist posts:

The D.C. Occupation: Compounding Tragedy with Farce

LLMs Are an Upgrade to Mediocrity: the Occupation of Chicago Edition

Panicked by a Close Election in Tennessee, Trump Attempts to Bribe Democratic Rep. Cuellar

“…The Laws Are Designed Specifically to Prevent That from Being OK.”

SpaceX launches 3,000th Starlink satellite in 2025 on record-setting 32nd flight of Falcon 9 booster

A SpaceX Falcon 9 rocket lifts off from Launch Complex 39A (LC-39A) at NASA’s Kennedy Space Center on Dec. 8, 2025, to begin the Starlink 6-92 mission. SpaceX used Falcon 9 booster B1067, which made its record-breaking 32nd flight. Image: Adam Bernstein / Spaceflight Now

Update Dec. 8, 6:31 p.m. EST (2331 UTC): SpaceX confirms deployment of the 29 Starlink satellites.

Update Dec. 7, 6:32 p.m. EST (2332 UTC): SpaceX scrubbed the launch.

SpaceX achieved a couple notable milestones with its Falcon 9 rocket launch from NASA’s Kennedy Space Center Monday evening.

The mission, dubbed Starlink 6-92, featured the use of the company’s most flown Falcon booster, tail number B1067. On its 32nd flight, it delivered SpaceX’s 3,000th Starlink satellite of the year to low Earth orbit.

Liftoff from historic Launch Complex 39A happened at 5:26 p.m. EST (2226 UTC), following a weather-related scrub on Sunday. The rocket flew on a south-easterly trajectory upon leaving Florida’s Space Coast.

Meteorologists with the 45th Weather Squadron forecast a 90 percent chance of favorable weather for Monday’s launch, with liftoff winds being a potential concern. Teams also cited a low to moderate risk of impacts from upper-level wind shear and booster recovery weather.

The use of B1067 on this mission brings SpaceX one step closer to its current goal of certifying its Falcon boosters for up to 40 missions apiece. The ultimate number of missions a booster flies will partially depend on the types of missions for which it is used and whether it is needed on an expendable flight.

SpaceX is looking to achieve the same level of reuse for the payload fairings on a Falcon rocket’s upper stage, but typically only provides updates on those during the launches of customer missions for the government or from other companies.

Under a resilient ridge, prolonged tule fog episode brings cold and damp weather to the Central Valley but anomalously warm/dry weather elsewhere

An increasingly resilient ridge keeps California dry, but with markedly different daily weather in dense tule fog vs non-fog zones, following a very wet and relatively warm autumn Well, the final numbers are now in and they reflect what everyone has been talking about in Southern California: it genuinely was historically wet this fall in […]

The post Under a resilient ridge, prolonged tule fog episode brings cold and damp weather to the Central Valley but anomalously warm/dry weather elsewhere first appeared on Weather West.

Trump Selling Ukraine for Cash

Wall St Journal Investigation Published the Details

President Trump is enabling an aggressive effort for U.S. companies, and some of his friends and business associates, to start massive new business ventures with Russia. Business that could go into the hundreds of billions of dollars. Business that abandons sanctions on Russia, such as “a senior Exxon Mobil executive discussed returning to the massive Sakhalin [oil] project if the two governments gave the green light as part of a Ukraine peace process” and “a college friend of Donald Trump Jr. and campaign donor to his father, has been in talks to acquire a stake in a Russian Arctic gas project if it is released from sanctions”. Business that thwarts a more open and productive global economy by bypassing European entities who might bring healthy, cost-effective competition to these ventures and instead locks in exclusive U.S./Russian deals. Business that is not about a system more open to all businesses, big and small, to engage between our countries but is rather all about the big players, the wealthy, the well-connected, the ones funneling huge amounts into buying those connections. In other words, business that steers the U.S. and global economy ever more toward being an exclusive playground of those big players.

All of this hinging on Putin getting what he wants in Ukraine.

The WSJ reported on this in two pieces, “Make Money Not War” and “What Does Putin Want? far more than just the conquest of eastern Ukraine”. They analyzed a lot of publicly known information but also dug underneath what is known with “dozens of officials, diplomats, and former and current intelligence officers from the U.S., Russia and Europe, and American lobbyists and investors close to the administration.” They first published this on Friday the 28th. I expected that by Sunday major news sources would be jumping all over this. It’s huge news in itself. The idea is insulting to anyone who cares about Ukraine or about national sovereignty or stopping Putin from warring on neighbors and Europe.

It would also seem to be a huge factor in how Trump will be perceived going forward. Among the many things he has done that would seem to be insults either to his base, like limiting Medicaid that is essential to many red state rural areas, or to almost everyone, like reducing education grants for training nurses, this may be the biggest. Willingness to negotiate on Putin getting what he wants in Ukraine as long as big business players get big deals, some of which will no doubt benefit Trump. I was shocked that there was hardly any coverage of this. The major news sources frequently cover what each other has discovered while giving credit to which reported it first, but looking through a list of the major sources as of Sunday showed none even noting it, much less giving it the top exposure it needs.

There is history and irony in this emphasis on business. The idea of the U.S. engaging heavily in economic give-and-take with adversaries has been a good idea for decades. It’s what was behind economic engagement with China starting with Nixon. The same with Russia after the collapse of the U.S.S.R. If we’re heavily dependent on each other’s economies then we’re less likely to be at war. But those past examples did not involve blackmail. Did not involve an adversary warring on a neighbor and then saying they’d stop there if we gave them lots of mutually profitable business.

The irony is Putin could have had this without war. To make this Ukraine-territory-for-business notion more palatable, ideas have been floated of ways it could help Ukraine. That they could have huge data centers to provide A.I. services to the U.S. That they could have big, profitable trade exchanges with Russia. That there could be a whole industry around rebuilding devastated parts of Ukraine. But Putin could have had all that and better circumstances without war. If he had pursued such economics without war there could be a thriving Ukraine economy heavily engaged, not just with the west, but with Russia as well. He could have a Ukrainian populace happy to be a neighbor of, and on good terms with, Russia. All the wealth and life that Russia has lost to this war could have been avoided. He wouldn’t have the ego thrill of putting a pin in his wall map marking part of Ukraine as his, but he and Russia would have bigger benefits than what they’re trying to get now.

Note that Trump and his apologists will have plenty of plausible deniability to spin this. The business transactions can be profitable to the U.S., though in ways, as noted, that are all about the big, well-connected making deals among themselves. The idea of large trade interactions lessening the odds of larger wars is true, but not done this way. The territory issue will probably be framed as settled: Russia now holds certain territories, and Trump may present that as something that can’t be expected to change, even though tougher negotiations and greater U.S. and European support could change that. It was bad when Obama relented on Russia taking Crimea, and it’s much worse with their relentless destructive war on Ukraine.

Whether this approach of “give in to Putin and get business out of it” continues is as much of a guessing game as anything Trump does, being so erratic. He has flipped back and forth from seeming to want Ukraine to give in to talk of arming them so well they could drive Russia out. If he ultimately gives up on, or just doesn’t get, this “give in but get business” approach it makes it no less terrible and wrong that this is the current effort.

A big factor in all of this is that Putin is a liar. After some Ukrainian territory is designated permanently Russian, sanctions are lifted, and profitable business deals are running, there is nothing but a paper promise that Putin won’t just start up again: provoking conflict with Ukraine, nibbling at the edges, and setting up further expansion into their territory or any others he thinks he can get, just as he will have gotten out of a deal like this.

What the Trump apologists can’t make disappear are the massive conflicts of interest, Trump’s negotiators being well positioned to make huge amounts themselves out of all this, as the WSJ piece lays out. And they can’t erase Trump’s history of making U.S. interests entangle with his own financial interests, as with his personal business interests with Middle-East countries and with crypto-business players around the globe.

Trump is selling Ukraine for cash. That ought to be a huge story, and a huge blow to his ability to hold onto his voters and to hold all his Republican underlings in line.



The post Trump Selling Ukraine for Cash appeared first on DCReport.org.

w/e 2025-12-07

The last complete week of being here alone, with only Pippa the cat and the builders in the garage for company. The garage roof is now nearly finished, looking good, and is watertight. What a novelty.

Pippa and I have our regular routine throughout the day: our respective feeding times, the her-on-my-lap times (morning coffee, afternoon tea, evening watching TV), the heading to bed time.

I can’t decide if caring for an animal like this is good for one’s mental health – the routine, thinking of a creature other than yourself, the probably false belief that when she’s chosen to sit on your lap it’s because you’re a caring person who’s at one with nature – or whether the obsession with satisfying their irrational whims will slowly drive you mad.

I had been relieved we hadn’t had many mice – whole or part – left on the floor recently. But on Friday evening I was heading up to bed, with only a light behind me casting a shadow up the stairs. As I neared the top one of our nifty motion-sensitive lights came on just in time to illuminate a dead rat on the top step, moments before I stepped on it. At least six inches long, tail aside.

Given the skill and effort it must take for a cat to catch and kill a rat, then get it through the cat flap, then carry it upstairs, you’d think Pippa would appear more pleased with herself. But she trotted past, on her way to bed, without even a glance at it.


§ I’ve continued to avoid some more important tasks by fiddling away at a redesign of this site. I do enjoy this kind of tinkering, especially design tinkering, with no client and no deadline. There’s no rush, I just do a little every so often, with time for it to percolate, and then look at it again with fresh eyes a day or two later.

Having something on the go is always good at night too – if I ever find myself wide awake, with my mind inevitably circling towards all the worst thoughts and worries, then thinking about my current design and/or coding project is the most reliable way to get my brain focused on something else.


§ I had a blood test and basic health check at the GP this week, the first I’ve had in a few years, and everything was fine. They didn’t say I would live forever but then they didn’t not say that either. So who can tell.


§ I forgot to mention last week, and maybe the week before, that I started watching The Studio with high hopes, having seen person after person enthusing about it. It was pretty unbearable and, having forced myself to keep trying, I ended up turning it off mid-way through episode 4.

I find it hard to put my finger on why it wasn’t good. It looks good but then so many shows are visually rich and impressive these days, so what. There was something smug about it. And although much of the humour relies on awkwardness and incompetence – which I’d usually love – it all manages to be so over the top as to be unbelievable, while the situations feel uninspired and predictable.


§ I watched more films this week:

  • Jour de Fête (Jacques Tati, 1949). I’d never seen any Tati films so thought I should fill a gap. It was fine! Much slower than I expected for something slapstick, which usually makes me think of Chaplin and Lloyd. A couple of days later I started on Monsieur Hulot’s Holiday but after 20 minutes or so I couldn’t face any more of its ponderousness so that’s probably me and Tati done for now.
  • The Graduate (Mike Nichols, 1967). I’d also never seen this and it was, perhaps unsurprisingly, great! A lot of fun, really good. Currently on Mubi and iPlayer.
  • Perfect Days (Wim Wenders, 2023). Nice, good. It’s a bit, “Ahhh, you see, we should all aspire to be as content with our lot as a quiet man who cleans toilets and smiles when he looks at trees, ahhh.” And I’d have liked it more if the guy’s musical taste didn’t feel so “music Wim Wenders liked when he was young”.
  • Grand Theft Hamlet (Sam Crane, Pinny Grylls, 2024). I wasn’t quite sure what to expect but liked this more than whatever I expected. It’s quite silly and was fun to watch two posh-ish English guys, as GTA characters, attempt to put on a Shakespeare production in Los Santos while, of course, everyone else wants to shoot everyone. I actually laughed out loud at one point, which is rare. There’s a nagging part of me that wonders/worries that some of the more sad and touching scenes were scripted / set up, which would be a shame.
  • Fingernails (Christos Nikou, 2023). I gave up on this about 35 minutes in. It wasn’t bad, and the performances were fine, but the script was pretty clunky. Quite a bit of, “As I explained earlier, [goes on to explain things]”. And a lot of, “Have you taken… The Test?” accompanied by glances at bandaged fingers. Ooh, whatever could this oh-so-mysterious test involve… in a film called Fingernails?!
  • The Apartment (Billy Wilder, 1960). I must confess that I didn’t really enjoy Wilder’s Some Like it Hot but I liked this a lot. It is Too Long but otherwise, top marks all round, funny and touching and thoughtful. Bonus: Sheldrake’s suits looked so good.

§ Every week I think, “Nothing’s happened, there’s nothing to write about,” and yet here I am and, apparently, here you are.



FOMC Preview: 25bps Rate Cut Expected

Most analysts expect the FOMC to reduce the Fed Funds rate by 25bps at the meeting this week to a target range of 3-1/2 to 3-3/4 percent.    Market participants currently expect two additional rate cuts in 2026.

Analysis suggests rates are currently slightly restrictive (Cleveland Fed) or perhaps already accommodative (even before this rate cut).  So, to cut rates in this environment, FOMC members are clearly expecting inflation to decline quickly, an employment recession, or both.  This outlook should show up in the projections (lower inflation, higher unemployment rate).

From Goldman Sachs:
The FOMC is widely expected to deliver a third consecutive 25bp interest rate cut to 3.5-3.75% at what will likely be a contentious December meeting next week. ... The case for a cut is solid, in our view. Job growth remains too low to keep up with labor supply growth, the unemployment rate has risen for three months in a row to 4.4%, other measures of labor market tightness have weakened more on average, and some alternative data measures of layoffs have begun to rise recently, presenting a new and potentially more serious downside risk.
From BofA:
The Fed has signaled that it will cut rates by 25bp to 3.5-3.75% at its Dec meeting. We look for two or three substantive changes in the FOMC statement. The description of labor market conditions is likely to omit the language that the u-rate “remained low”, to reflect the 32bp uptick over the last three months.
...
The SEP is likely to show upgrades to growth in 2025 and 2026. ... However, as a mark-to-market based on the latest data, we think the u-rate for 4Q 2025 will be taken up by a tenth to 4.6%. ... These changes would provide some cover for cutting rates despite the expected upgrades to the growth outlook.
emphasis added
Projections will be released at this meeting. Here are the September projections.  

The BEA's estimate for first half 2025 GDP showed real growth at 1.6% annualized. Most estimates for Q3 GDP are around 3.5%.  That would put the real growth for the first three quarters at 2.2% annualized - well above the top end of the September projections.   So GDP for 2025 will likely be increased.

GDP projections of Federal Reserve Governors and Reserve Bank presidents, Change in Real GDP¹

Projection Date | 2025       | 2026       | 2027
Sept 2025       | 1.4 to 1.7 | 1.7 to 2.1 | 1.8 to 2.0
Jun 2025        | 1.2 to 1.5 | 1.5 to 1.8 | 1.7 to 2.0

¹ Projections of change in real GDP and inflation are from the fourth quarter of the previous year to the fourth quarter of the year indicated.

The unemployment rate was at 4.4% in September.  The unemployment rate will likely increase further this year. There was no data for October due to the government shutdown, and the November report will be released on December 16th - the week after the FOMC meeting - so the FOMC is flying blind this week on the unemployment rate.  However, they will probably increase the 2025 projection (and possibly 2026) as justification for the rate cut.  An unemployment rate of 4.6% over the next few months might be recessionary (according to the Sahm rule).
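
For reference, the Sahm rule fires when the three-month moving average of the national unemployment rate rises 0.50 percentage points or more above its minimum over the previous twelve months. A minimal sketch of the computation (my illustration; the numbers are made up, not BLS data):

package main

import "fmt"

// sahmSignal reports whether the Sahm rule fires for the latest month:
// the 3-month average unemployment rate is at least 0.5 percentage
// points above the lowest 3-month average of the previous 12 months.
func sahmSignal(rates []float64) bool {
  avg3 := func(i int) float64 { // 3-month average ending at index i
    return (rates[i] + rates[i-1] + rates[i-2]) / 3
  }
  last := len(rates) - 1
  current := avg3(last)
  lowest := current
  for i := last - 1; i >= 2 && i >= last-12; i-- {
    if v := avg3(i); v < lowest {
      lowest = v
    }
  }
  return current-lowest >= 0.5
}

func main() {
  // Illustrative monthly unemployment rates, oldest first.
  rates := []float64{4.0, 4.0, 4.0, 4.0, 4.0, 4.0, 4.0, 4.0, 4.1, 4.2, 4.3, 4.4, 4.4, 4.5, 4.6}
  fmt.Println(sahmSignal(rates)) // true: the 3-month average is 0.5 points above its 12-month low
}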

Unemployment projections of Federal Reserve Governors and Reserve Bank presidents, Unemployment Rate²

Projection Date | 2025       | 2026       | 2027
Sept 2025       | 4.4 to 4.5 | 4.4 to 4.5 | 4.2 to 4.4
Jun 2025        | 4.4 to 4.5 | 4.3 to 4.6 | 4.2 to 4.6

² Projections for the unemployment rate are for the average civilian unemployment rate in the fourth quarter of the year indicated.

As of September 2025, PCE inflation increased 2.8 percent year-over-year (YoY), up from 2.7 percent YoY in August.  Projections for PCE inflation will probably remain unchanged or be lowered slightly.

Inflation projections of Federal Reserve Governors and Reserve Bank presidents, PCE Inflation¹

Projection Date | 2025       | 2026       | 2027
Sept 2025       | 2.9 to 3.0 | 2.4 to 2.7 | 2.0 to 2.2
Jun 2025        | 2.8 to 3.2 | 2.3 to 2.6 | 2.0 to 2.2

PCE core inflation increased 2.8 percent YoY, down from 2.9 percent in August.   Projections for 2025 core PCE inflation will likely be decreased.

Core inflation projections of Federal Reserve Governors and Reserve Bank presidents, Core Inflation¹

Projection Date | 2025       | 2026       | 2027
Sept 2025       | 3.0 to 3.2 | 2.5 to 2.7 | 2.0 to 2.2
Jun 2025        | 2.9 to 3.4 | 2.3 to 2.6 | 2.0 to 2.2

Sunday assorted links

1. The fight over Romansh (New Yorker).

2. How well can LLMs grade?

3. Kelsey Piper responds on Mississippi.  She is probably correct.

4. Future VP?

5. Is the political allocation of gay representatives skewed?

6. If Arnold Kling taught conservative thought.

7. Noah is right.

8. Kennedy Center update.

9. The quiet surrender of Fed independence.

The post Sunday assorted links appeared first on Marginal REVOLUTION.

2025 Year in Review

Well, nearly another year in the books. How did it go?

[Chart: books per year]

It’s genuinely surprising to me how consistent my reading stays year-over-year. I don’t set reading goals or have a predictable pace, and every year has at least a month in which I’m reading nothing, my momentum grinding to a halt. But at the end of every year, the total is range-bound between 19 and 22, as it has been for the past five years.

The count is 19 at the moment, but I’m reading a fast-paced sci-fi so it’ll probably be 20 by the end of the year.

A Confederacy of Dunces was easily the most entertaining book this year - an absolute riot, funny and unique. Things Become Other Things, from Craig Mod, was the most affecting book of the year, the only one that made me cry a little. And The Fort Bragg Cartel was the most engaging, can’t-put-it-down book. If you can stomach the difficult material - detailed descriptions of war crimes and domestic abuse - I highly recommend reading it.

[Chart: race times]

This was a decent year for running, too. I ran five 5ks and two half-marathons in 2025, and achieved my simple goal of running sub-20 in the 5k. The thing about running a mediocre 19:13 PR in high school and then running mid-19s twenty years later is that now I’m in the top 10% of my age group! On a relative basis, I keep getting faster.

Mostly I blame the extreme summer heat for some of the slower times: many of the races came with warnings about high humidity, high heat, and bad air quality, cautioning people against overexertion. A sample from one pre-race email:

The weather forecast is for temperatures in the low 90s. Please dress and hydrate properly, and avoid overexertion. The Air Quality Index is predicted to be over 100 at race start, members of sensitive groups may experience health effects. Limit outdoor exposure if you are sensitive to ozone. This might be a great night to run easy or tempo effort, please adjust your pace expectations!

That said, I think I could still do better. Running low-19-minute times would be lovely and, I think, within my abilities. I’ve been following an ‘intuitive training plan’ this whole time, which in other words means not having a plan. In 2026 I plan to have a plan, and the cornerstone of that plan is probably logging many more easy miles.


How’s work been going? I can point to the Val Town Retrospective that I wrote for most of the answer to that question. 2025 for Val Town was a year of big ups and downs. The job became more demanding and, simultaneously, I became more adjusted to it: it’s remarkable how adaptable people and organizations can be.

On a day-to-day level, as an engineer, the codebase has grown to the point where it’s a bit difficult to keep all in my head, and there are important components that I shamefully haven’t directly worked on. For a ‘CTO’, needing to have the system memorized might feel like a no-no, but for an organization of this size my job is really to be a general-purpose builder, fixer, and understander.


I was really into rhythmic instrumental music: SML’s take on jazz in Small Medium Large and How Have You Been are amazing, the kind of music that works for focused coding, a dinner party, or a long drive.

I loved John Carroll Kirby’s alternative take on jazz too - which can sound cheesy, like elevator music, until you get a minute in.

Slow Mass’s Low on Foot was probably my album of the year: almost every song is marked with five stars in my music library in Swinsian.


I feel unsatisfied with my productive output in 2025. But this is a permanent condition I think.

[Photo: first bike bag]

Sewing was the big new thing. I sewed about five bags, including three for my bicycle, and rode almost 1,000 miles with them.

[Photo: bag 2]

It’s a fantastic hobby. Designing the bags exercises my brain in just the right ways, it’s tactile and low-tech. My sewing machine was manufactured around 1970, and works great. I love the learning process involved: my first attempt at sewing a bag for the front rack of my bike yielded clear lessons for bag 2, things like using stiffer fabric where the bag needs support and trying to minimize seams in areas that are on the top, to preserve waterproofing.

Pending another bike, I’m pretty much done with bike bags, but there are plenty more projects on the horizon for the sewing machine.

Besides the flashy bags-from-scratch, it’s been useful for simpler things like:

  • Restuffing my couch cushions and sewing them back closed
  • Repairing the pocket in some running shorts that had developed a hole
  • Hemming some jeans that were too long, and an oversized shirt

It’s been really rewarding, and sewing goes really well with instrumental jazz.


That said, my free-time coding projects have been fewer. I implemented indiepixel, a pixel-art rendering layer in Python for my Tidbyt display. And I maintained Placemark, putting time into simplifying it and adding a handful of new features, like drawing lines with automatic routing.

But that’s about it? The coding I’ve done on weekends has mostly been work-related, and not much of that either. I still have fun coding, but I have to say that it’s changed for me. The tech industry just feels bad in so many ways, from its open embrace of fascism to the nihilistic startups that advertise via rage-bait. LLMs have changed things a lot too: it’s hard to tell what people value anymore, and how people have fun. I’ve written a lot about LLMs, so won’t repeat it all. See: Would LLMs democratizing coding be a pyrrhic victory?, Hallucination City, LLMs pivot to the aesthetics of thinking, and more.

I’ve long aimed to diversify my joys: part of finding a love of music, art, sewing, running, and so on is that they can serve as backup ways to feel happy when the world’s tough. I see some of what’s happening now - people using computers to do art, automating the skillful work they used to do - and I wonder what that leaves them time to do: in the excess time, where do you find joy?

I’ve been finding most of that joy away from the keyboard, this year. I hope I rediscover some of that spark in 2026. I have been having fun learning Effect and writing some Rust, and there are plenty of ideas left.


Brooklyn continues to be good to me. Living here delivers on my priorities in life: things like never drive and live near friends. By those metrics, it does great, and always surprises me with just how much of the world is packed into the 97 square miles of the borough, and Manhattan and Queens nearby.

And yeah - the election of Zohran Mamdani makes it even better. This year was the first time that I knocked on doors for a mayoral candidate, and so did a majority of my friends. It’s pretty exciting. I think that the next few years will be great for the city, and though it’ll be really tough to deliver on all of his promises, even just having a mayor in office who shows up to the job and wants the best for his constituents will be a welcome change from the previous administration.


I started this blog in 2011 with a vague photo of San Jose and some non-committal prose. So 2026 will be the 15th anniversary of the blog.

Blogging has been, for me, an unalloyed success. It has connected me to people, given me a place to develop my thoughts, and made some of my work on the internet - a place always decaying and forgetting - a little more permanent. I absolutely recommend everyone do it.

I know why most people don’t do it: not enough time and too much fear of publishing ‘bad writing.’ Maybe ‘nothing to write about,’ too, though this never seems that real to me, given how the average person I meet has interesting thoughts and ideas to share.

I forget exactly when I removed analytics from the blog, but it was a long time ago. Since then I don’t know what ‘takes off’ or ‘goes viral’ and it’s mostly fine with me. Lately though, I have been discovering other indie blogs with articles that reference or respond to mine, and I really want a way for this to be slightly more social. Not fully social of course - no comments and this is not part of any network - but I want to know about link-backs. That’s probably the focus for 2026.

I think this idea has been going around - my friend Waldo was discussing it the other day, and webmentions came up as an option. I’ve tried webmentions in the past with little success - not many blogs supported them and I got a lot of spam - but it’s worth another shot. It’s hard not to get a little discouraged off the jump: webmentions have spam, their predecessor pingbacks were rife with abuse, trackbacks had even more spam, and even if I try to find backlinks with ahrefs.io, there are plenty of spam domains and SEO schemes there too. The internet is an adversarial place.
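To give a sense of how small the protocol itself is, here’s a minimal sketch of the sending half of a webmention in Python. It’s a sketch under assumptions of mine: it only checks for an endpoint advertised in the HTTP Link header, while real receivers often advertise it in the HTML instead, and the function names are made up.

```python
# Minimal webmention-sending sketch: discover the target's endpoint,
# then POST source/target to it as a form. Standard library only.
import re
import urllib.request
from urllib.parse import urlencode

def discover_endpoint(target: str) -> str | None:
    """Find a webmention endpoint advertised in the Link header."""
    with urllib.request.urlopen(target) as resp:
        link_header = resp.headers.get("Link", "")
    match = re.search(r'<([^>]+)>;\s*rel="?webmention"?', link_header)
    return match.group(1) if match else None

def send_webmention(source: str, target: str) -> int:
    """Notify target that source links to it; return the HTTP status."""
    endpoint = discover_endpoint(target)
    if endpoint is None:
        raise ValueError("no webmention endpoint advertised")
    data = urlencode({"source": source, "target": target}).encode()
    with urllib.request.urlopen(endpoint, data=data) as resp:
        return resp.status
```

Receiving is the hard part, of course: that’s where the spam-fighting lives.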

In meta-blog news, this blog has been hosted on Netlify since 2017 and I can’t find a strong reason to switch. It’s been rock-solid. I’ve been using Jekyll since I started in 2011 and it continues to work great, though if I started from scratch I’d probably use 11ty. It would be nice to have a little more power over server-rendering and to deploy on Hetzner, but that seems like it’d be a step up in complexity.


Riding the C&O Canal

Photo from riding the GAP trail + C&O Canal this year

every strange thing you’ve ever been into, every failed hobby or forgotten instrument, everything you have ever learned will come back to you, will serve you when you need it. No love, however brief, is wasted. - Louise Miller

SpaceX launches 28 Starlink satellites on Falcon 9 rocket from Vandenberg SFB

A SpaceX Falcon 9 rocket lifts off on the Starlink 11-15 mission from Space Launch Complex 4 East at Vandenberg Space Force Base on Dec. 7, 2025. Image: SpaceX

Update Dec. 7, 3 p.m. EST (2000 UTC): SpaceX confirms deployment of the 28 Starlink satellites.

SpaceX closed out the weekend with a mid-morning Falcon 9 rocket launch from Vandenberg Space Force Base in California.

The Starlink 11-15 mission added another 28 broadband internet satellites to SpaceX’s massive low Earth orbit constellation. It was the company’s 115th launch of Starlink satellites so far in 2025.

Liftoff from Space Launch Complex 4 East happened at 9:58 a.m. PST (12:58 p.m. EST / 1758 UTC). The rocket flew on a south-easterly trajectory upon leaving the launch pad.

SpaceX launched the mission using the Falcon 9 booster with the tail number 1088. This was its 12th flight following the launches of NASA’s SPHEREx, Transporter-12 and two missions for the National Reconnaissance Office (NROL-57 and NROL-126).

About 8.5 minutes after liftoff, B1088 performed an autonomous landing on the drone ship, ‘Of Course I Still Love You.’ This marked the 168th booster landing on this vessel and the 545th booster landing to date for SpaceX.

SpaceX has another launch scheduled for later in the day on Sunday from NASA’s Kennedy Space Center. That mission will feature the 3,000th Starlink satellite launched in 2025.

Tom Stoppard (1937–2025)

Tom Stoppard, the great English playwright, passed away last week. I saw many of his plays, including his last one, about his apparently late-in-life discovery that he was Jewish, and that his immediate family had fled Czechoslovakia ahead of the Nazis while most of the rest, with a few exceptions, perished.

The play tells the story of three generations of assimilated Jews. You, the audience, of course know how it will end, but they don't, and they are optimistic that their current troubles will soon pass.  It's an eerie feeling to watch that play amidst the world's current uncertainties. 

The NYT tells his story through that final play.

When Tom Stoppard Confronted His Background in His Final Play
The playwright, who learned about his Jewish heritage late in life, addressed it in the Tony Award-winning drama “Leopoldstadt.”
   By Marc Tracy

"Stoppard’s final play, too, contained characters whose fates were tragically preordained. The rest is silence." 

My 2011 Review of Contagion

I happened to come across my 2011 review of the Steven Soderbergh movie Contagion, and was surprised at how much I was thinking about pandemics prior to COVID. In the review, I was too optimistic about the CDC but got the sequencing gains right. I continue to like the conclusion even if it is a bit too clever by half. Here’s the review (no indent):

Contagion, the Steven Soderbergh film about a lethal virus that goes pandemic, succeeds well as a movie and very well as a warning. The movie is particularly good at explaining the science of contagion: how a virus can spread from hand to cup to lip, from Kowloon to Minneapolis to Calcutta, within a matter of days.

One of the few silver linings from the 9/11 and anthrax attacks is that we have invested some $50 billion in preparing for bio-terrorism. The headline project, Project Bioshield, was supposed to produce vaccines and treatments for anthrax, botulinum toxin, Ebola, and plague but that has not gone well. An unintended consequence of greater fear of bio-terrorism, however, has been a significant improvement in our ability to deal with natural attacks. In Contagion a U.S. general asks Dr. Ellis Cheever (Laurence Fishburne) of the CDC whether they could be looking at a weaponized agent. Cheever responds:

Someone doesn’t have to weaponize the bird flu. The birds are doing that.

That is exactly right. Fortunately, under the umbrella of bio-terrorism, we have invested in the public health system by building more bio-safety level 3 and 4 laboratories, including the latest BSL3 at George Mason University; we have expanded the CDC and built up epidemic centers at the WHO and elsewhere; and we have improved some local public health centers. Most importantly, a network of experts at the Department of Defense, the CDC, universities and private firms has been created. All of this has increased the speed at which we can respond to a natural or unnatural pandemic.

Avian flu virus, from 3DScience.com.

In 2009, as H1N1 was spreading rapidly, the Pentagon’s Defense Threat Reduction Agency asked Professor Ian Lipkin, the director of the Center for Infection and Immunity at Columbia University’s Mailman School of Public Health, to sequence the virus. Working non-stop and updating other geneticists hourly, Lipkin and his team were able to sequence the virus in 31 hours. (Professor Ian Sussman, played in the movie by Elliott Gould, is based on Lipkin.) As the movie explains, however, sequencing a virus is only the first step to developing a drug or vaccine and the latter steps are more difficult and more filled with paperwork and delay. In the case of H1N1 it took months to even get going on animal studies, in part because of the massive amount of paperwork that is required to work on animals. (Contagion also hints at the problems of bureaucracy which are notably solved in the movie by bravely ignoring the law.)

It’s common to hear today that the dangers of avian flu were exaggerated. I think that is a mistake. Keep in mind that H1N1 infected 15 to 30 percent of the U.S. population (including one of my sons). Fortunately, the death rate for H1N1 was much lower than feared. In contrast, H5N1 has killed more than half the people who have contracted it. Fortunately, the transmission rate for H5N1 was much lower than feared. In other words, we have been lucky, not virtuous.

We are not wired to rationally prepare for small probability events, even when such events can be devastating on a world-wide scale. Contagion reminds us, visually and emotionally, that the most dangerous bird may be the black swan.


Europe is under siege

The map above is a depiction of The Deluge, a historical event in which the Polish-Lithuanian Commonwealth — which had been a major European power — was defeated and destroyed under the combined assaults of Russia and Sweden in the 1600s. After having its power broken, Poland was carved up in the 1700s and subjugated by Russia, Prussia, and Austria. It took more than two centuries, until the fall of communism in 1991, for Poland to reemerge as a strong, truly independent country.

The Deluge shows that power and independence are not permanent. If you are surrounded by hostile powers, and if you don’t have the ability to guard yourself against those powers, no amount of historical greatness can save you from being subjugated. This is an important lesson for Europeans to remember right now, as they find their region under siege from Russia, China, and the United States all at once.

The United States no longer cares about the European project

Why would America care about Europe at all? For most of our history, we didn’t. In the 19th century, the U.S. viewed European countries as dangerous rivals. In the early 20th century, Americans prided themselves on not getting involved in European affairs, and were incensed at their government for dragging them into World War 1. Only after World War 2 did Americans start caring about Europe, and we did so for three reasons:

  1. Western Europe was a bulwark against Soviet communism.

  2. Europe was a key trading partner.

  3. Many Americans came to value their ancestral ties to Europe.

The first of these reasons vanished in 1991. Europe is still a bulwark against Russia, but Americans no longer feel threatened by Russia. Russian power is far less than what it once was, and Russia’s rightist ideology does not threaten the rightists who now rule America.

As for communism, many (most?) Americans now believe that European countries are socialist. When American conservatives ask where in the world socialism has succeeded, American progressives will always reply “Europe” or “Scandinavia”. Whether Europe or Scandinavia is actually socialist is irrelevant; Americans have come to see it that way.

Europe is still an important trading partner. But Trump and the other people now in charge of the U.S. do not understand trade at all. They think about trade entirely in terms of the net trade balance, rather than in terms of total U.S. exports. Trump & co. don’t care that America sells $650 billion a year to Europe; the fact that Europe sells $800 billion a year to America means that Trump & co. think America is “losing” and would benefit from a cutoff of trade.

Remember that the U.S. is an unusually closed-off, self-sufficient economy, so Americans in general don’t think too hard about trade or try to understand why it’s valuable. Also, the people now running the country are especially ignorant about economic matters.

As for civilizational ties, this is the reason Trump and the MAGA movement have turned so strongly against Europe. The American right values Europe because they think of it as a White Christian homeland — the source and font of Western civilization. Here’s a post I wrote about that earlier this year:

I wrote:

in the American mind, Europe stood across the sea as a place of timeless homogeneity, where the native white population had always been and would always remain…In the mind of many Americans, Europe thus stood as both a refuge and a reservoir. America itself was a rough, contested frontier, but Europe would always be white and Christian. If you ever felt the need to live around a bunch of white people of Christian heritage, you could always go “back”, but for most that wasn’t necessary — just knowing that the Old World was somewhere out there was enough.

I think Europeans may underestimate how much this perception motivated America’s participation in the Transatlantic Alliance during the Cold War…[T]o conservative Americans in the 20th century — the type of people who joined the John Birch Society — the Cold War was about preserving Christendom from the threat of godless communism.

Anyway, in the 2010s, it dawned on those Americans that this hallowed image of Europe was no longer accurate. With their working population dwindling, European countries took in millions of Muslim refugees and other immigrants from the Middle East and Central and South Asia — many of whom didn’t assimilate nearly as well as their peers in the U.S. You’d hear people say things like “Paris isn’t Paris anymore.”…At the same time, Europe had long since abandoned its traditional Christian values…

To Americans who valued the idea of America and Europe as part of a single Western civilization, this realization was catastrophic. Suddenly European countries — and the Anglosphere countries of Canada, Australia, and New Zealand — felt like they had left the club…

America’s rightists…want to know that someone, somewhere, is out there preserving an indigenous homeland for their identity groups. And that “someone” has to be Europe and the Anglosphere.

This isn’t a new attitude, either. Remember that in order to persuade a reluctant America to join World War 1, the U.S. government had to depict Germany as an ape abducting a white woman!

If you understand this, then nothing in America’s new National Security Strategy is mysterious, surprising, or confusing. Here’s how War on the Rocks summarizes the Trump administration’s attitude toward Europe:

[I]mmigration is elevated to the central national security problem. The text declares, bluntly, that “the era of mass migration must end,” and that “border security is the primary element of national security.” It frames mass migration as a driver of crime, social breakdown, and economic distortion, and calls for a world where sovereign states cooperate to “stop rather than facilitate destabilizing population flows” and tightly control whom they admit…

[P]rotecting American culture, “spiritual health,” and “traditional families” are framed as core national security requirements…The document insists that “restoration and reinvigoration of American spiritual and cultural health” are prerequisites for long-term security and links this to an America that “cherishes its past glories and its heroes” and is sustained by “growing numbers of strong, traditional families” raising “healthy children.” America is thus cast as defender of so-called traditional values, while Europe lacks “civilizational self-confidence and Western identity.”…

[T]he strategy elevates the culture wars into a governing logic for national security, and it does so through rhetoric that treats ideological and cultural disputes as matters of strategic consequence…This is clearest in the European section…The text…speculates about demographic and cultural shifts in Europe as a way to question whether future governments will share American views of their alliances. The strategy [implies] that cultural alignment is essential to strategic partnership.

The American right sees the “mad brute” in the ape cartoon as the dark-skinned Muslim immigrants who have entered Europe in large numbers in recent years. And they see themselves as needing to save the woman — representing their view of Europe as the traditional font of White Christian civilization — from that mad brute.

This tweet by Elon Musk pretty much sums up the American right’s attitude toward Europe:

This is why no amount of European shaming or moral persuasion can have any effect on the Trump administration — or on any Republican administration in the decades to come. This kind of appeal to friendship is totally useless:

And this kind of bitter, angry hectoring is worse than useless:

The American right — i.e., the people now in charge of the country — do not care intrinsically about democracy, or about allyship, or about NATO, or about the European project. They care about “Western Civilization”. Unless Europe expels Muslim immigrants en masse and starts talking about its Christian heritage, the Republican Party is unlikely to lift a hand to help Europe with any of its problems. Democrats will want to help Europe, but they will only be in power intermittently, and helping Europe will not be high on their priority list.1

Thus, America is not riding to the rescue this time, or for the foreseeable future. I wish things were different, but my wishes count for nothing; this is the reality with which the Europeans must now deal.

Russia and China together are the real menace to Europe

Europeans do not need me to tell them that Putin’s Russia threatens not just Ukraine, but all of Europe. They are well aware of this fact. Russia now regularly flies its drones into Europe, and is probably behind a wave of sabotage attacks on European infrastructure.

How can Russia, a country of just 144 million people and $7 trillion in GDP (PPP), hope to overcome Europe, which has 520 million people and $33 trillion in GDP (including the UK), especially after Russia has expended so many of its young men and materiel in its war with Ukraine already? There are three answers here. The first is gray-zone warfare, including sabotage and political influence campaigns. But that’s only the beginning.

Russia’s second method for fighting Europe is what I call a “Ponzi empire” strategy. Russia has enslaved vast numbers of Ukrainians from the occupied regions of Ukraine to fight against the rest of their country. If Russia conquers the rest of Ukraine, it will similarly enslave the rest of the country’s population, and send them to fight against Poland, the Baltics, and Moldova. If they then defeat Poland, they will enslave the Poles and send them to fight against the next European target, and so on.

This is a very traditional Russian strategy. Enslaved Ukrainians were used to attack Poland in 1939. Enslaved Poles were forced to fight Russia’s wars in the days of the old Tsarist empire, and would have been forced to do so again as part of the Warsaw Pact. Just like zombies turn humans against their own, each slice of Europe that Russia can chop off ends up being turned against the rest.2

Russia’s final strategy for fighting Europe is to rely on Chinese assistance. Russia’s own industrial base is very weak, and relied heavily on imported European parts and machinery that have now been partially cut off. But Chinese tech has largely plugged that hole, as the Carnegie Endowment reports:

Since mid-2025, Chinese components have been detected in Russian drones and missiles, often shipped via front companies disguised as suppliers of industrial cooling equipment…Chinese machinery, including precision optics, lasers, and dual-use machine tools, now dominates Russia’s defense-related manufacturing. In August 2025 alone, China exported a record 328,000 miles of fiber-optic cable and nearly $50 million worth of lithium-ion batteries to Russia, reinforcing its role as the Kremlin’s primary wartime supplier of dual-use materials. Chinese engineers working at Russian drone facilities are adapting civilian quadcopters, such as the Autel Max 4T, for combat use.

China is a far bigger manufacturer than Europe, and can pour essentially infinite war production into Russia if it wants to. And China is now assisting Russia’s gray-zone warfare against Europe:

Since 2024, Chinese ships have been involved in incidents of targeting subsea infrastructure, particularly cutting subsea cables in the Baltic Sea…The country increasingly deploys ambitious espionage and cyber attacks against government networks and critical infrastructure across Europe. These attacks seem to overlap with—or even be actively coordinated with—Russia’s espionage and influence operations across Europe…Increasingly, Russia and China also cooperate in disinformation operations: Chinese campaigns such as “Spamouflage” are amplified by Russian media outlets and diplomatic channels. Both countries employ what look to be synchronized narratives accusing the West of being responsible for the war in Ukraine.

China even provides the Russians with battlefield intelligence, helping them strike and destroy Ukrainian targets in real time. In sum, China is supporting Russia’s war against Ukraine, and will likely support Russia in any further wars it undertakes against the rest of Europe.

With Chinese technology and production, and slave soldiers from East Europe, and with America withdrawing from the Transatlantic Alliance, Russia could conceivably overmatch Europe.

But that’s not the only threat that China poses. On the economic front, China’s new economic strategy — a combination of shutting out European products, sending out a massive wave of subsidized exports, and putting export controls on rare earths — threatens to forcibly deindustrialize Europe. Here’s what The Economist, normally a staunch defender of free trade, recently wrote:

China is not just dumping exports and subsidising its companies, it is also out-competing and out-innovating big European industries, including carmaking. Last year Germany’s trade deficit with China stood at €66bn ($76bn); this year it could widen to over €85bn, around 2% of GDP. Alarmingly, China is exploiting Europe’s dependence, weaponising embargoes or the threat of them in chips and rare earths.

Germany, traditionally Europe’s strongest manufacturing and exporting nation, is already the hardest hit:

China, many European manufacturers have concluded, is threatening to put them out of business, by both fair means and foul…The wails are loudest in Germany, which is Europe’s biggest exporter to China and its biggest investor in it by far…For the Mittelstand, the small manufacturers that constitute a big slice of German industry, China used to be a source not of angst but of profit. Their precision-engineered machine tools were an exquisite fit for its rapid industrialisation. Chinese consumers raced to buy German cars…

Times have changed…Once-stellar growth inside China has, for many foreign firms, slowed to a crawl as competition with local rivals intensifies. In addition, Germany’s previously small trade deficit with China has ballooned…Last year it reached €66bn ($76bn), or around 1.5% of GDP, driven by a collapse in German exports to China and a rush of imports, notably of cars, chemicals and machinery—hitherto German specialities.

Germany’s trade deficit with China this year is expected to surge again, to around €87bn…German cars command only 17% of the Chinese market, down from a peak of 27% in 2020…Worse, Chinese competition also jeopardises sales in other markets. China’s net exports of cars have risen from zero in 2020 to 5m units last year. Germany’s have halved over the same period, to 1.2m units…Such figures have triggered fears in Germany of a wave of deindustrialisation.

The Financial Times has a good article about this as well, and Brad Setser has a good writeup of that article.

This is all on top of the existing headwinds facing European manufacturing — the energy crisis from the cutoff of Russian gas and self-inflicted “green” policies, Trump’s tariffs, and so on.

So Europe finds itself in an extraordinarily perilous position right now. Its main protector has suddenly withdrawn. It has a ravenous, brutal empire attacking its borders, supported by the world’s most powerful nation. Its main export markets are shriveling, and its manufacturing industries are under dire threat from waves of subsidized foreign competition. What can it do to fight back?

How Europe can resist the siege

The most important thing Europeans need is to panic. Europe is facing its own Deluge — a sudden pincer movement by hostile great powers that threatens to reduce it to a collection of small vassal states. This is a true crisis, and it will not be solved by social media rhetoric, or by brave declarations by EU leaders. It cannot be regulated away by eurocrats in Brussels. It will require bold policies that change Europe’s economic, political, and social models. Only a strong sense of urgency and purpose can motivate Europe to do what needs to be done.

What needs to be done? One important step is for Europe to act more as a single whole than as a collection of small countries. In the military realm, this means coordinating European militaries and defense industries much more. Matthew C. Klein writes:

From a properly European perspective, the security interests of each country should be shared across all countries, just as, for example, most Americans in Michigan or Maine would view an attack on California or Florida as an attack on them…The first step is to give the Ukrainians, who are already fighting the Russians, as much material and financial support as they need. From the perspective of European security, French, German, and British weapons are far more valuable in Ukraine than in their home countries. If the Ukrainians were subjugated, defending the rest of Europe would become much harder, with the effective EU-Russia border lengthening dramatically…

Europe’s national militaries have had a tendency to favor their home country’s producers, with the result that the continent is filled with subscale defense companies that are often slow and unproductive. Common defense procurement for a continental army should lead to higher output and lower costs—a few large companies handling large orders should have better unit economics than hundreds of artisanal manufacturers—but it would require Europe’s national defense elites to change their perspective. Philipp Hildebrand, Hélène Rey, and Moritz Schularick recently published a useful proposal for how to make this work.

And economically, Europeans can partially compensate for the loss of Chinese (and American) export markets by selling more to each other. The Economist writes:

A second task is for European countries to make better use of the power they have, by integrating their economies…By failing to integrate, the EU is leaving a vast sum of money on the table. A single market that was designed for goods is failing to help economies dominated by services.

And in his famous report on European competitiveness, Mario Draghi wrote:

We have also left our Single Market fragmented for decades, which has a cascading effect on our competitiveness. It drives high-growth companies overseas, in turn reducing the pool of projects to be financed and hindering the development of Europe’s capital markets…The EU’s new industrial strategy rests on a series of building blocks, the first of which is full implementation of the Single Market. The Single Market is critical for all aspects of the strategy: for enabling scale for young, innovative companies and large industrials that compete on global markets; for creating a deep and diversified common energy market, an integrated multimodal transport market and strong demand for decarbonisation solutions; for negotiating preferential trade deals and building more resilient supply chains; for mobilising greater volumes of private finance; and as a result, for unlocking higher domestic demand and investment. Remaining trade frictions in the EU mean that Europe is leaving around 10% of potential GDP on the table, according to one estimate.

And ideally, Europe should form a fiscal union — the EU itself should be able to borrow and spend, not just the member countries. As Klein writes, this needs to be accompanied by a greater tolerance for fiscal deficits — after all, countries borrow in emergencies.

In other words, Europe’s first step in resisting its siege is to act more like a country and less like a zone. It would also help to find some way to bring the UK back into the fold, especially because polls consistently find that British people regret Brexit.

Europe’s other top priority is to provide for the common defense. That means spending more money on the military, of course, and it also means greatly increasing the size of Europe’s nuclear deterrent. But it also means building a defense industrial base capable of resisting a China-backed Russia.

Europe’s current defense-industrial base was built for the Cold War, when battles were decided by heavy vehicles like tanks and ships and planes. Those are still somewhat important, but drones have risen very quickly to dominate the modern battlefield. Right now, drone manufacturing, as well as almost the entire supply chain for battery-powered drones, is overwhelmingly concentrated in China.

Europe needs to be able to build not just drones, but every single thing that goes into making a drone — batteries, motors, various types of computer chips, and so on. European industrial policy should therefore focus on onshoring these industries. In other words, Europe needs to master the entire Electric Tech Stack. (This will also help Europe get back in the EV race.) And it needs to master the AI software — computer vision, swarming tech, and so on — that will soon be needed in order to make drones a truly modern force.

The question of the proper policy instrument to accomplish this goal — tariffs, subsidies, fiscal borrowing, regulatory changes, and so on — is irrelevant. All of these policies should be done as necessary, and it’s better to do too much than too little. Policy procedure needs to be subordinated to the overriding goal of making Europe capable of defending itself. In fact, every European institution needs to be reformed and reverse-engineered in order to enable this.

Europe is also going to have to change its political mindset. Lavish pensions and other elements of Europe’s social model are going to have to be temporarily curbed to help give Europe the fiscal space and physical resources to fight off its enemies. All nuclear plants need to be restarted, and Europe should build more nuclear, ignoring “green” parties and environmental activists who irrationally hate nuclear power. Europe needs to reform its land-use regulation to require greater construction of solar and wind power. And Europe is going to have to back off of its aggressive regulation of AI software, in order to produce cutting-edge autonomous weaponry.

Finally, Europe needs to look for friends and allies — and export markets — other than America. India is an obvious choice. Although India is friendly with Russia, the country would undoubtedly welcome Germany’s help industrializing — and this would allow German companies to sell machines to India, as they once did to China. The EU should open its markets to Indian goods in exchange for India doing the same, recognizing that trade balances are less important than total export demand. Japan and South Korea, along with big developing countries like Indonesia, Vietnam, and Brazil, are other good potential trading partners.

If Europe manages to unify more and to build up its military power, it will increase the number of great powers in the world by one. A planet with a strong Europe, America, China, Russia, and India is a better planet than one where only the last four of those are strong. If Europe shows it can act with unity and purpose, and that it has military power to be reckoned with, America and China — both countries whose leaders tend to respect raw power — may lose their disdain for the region, and return to a more diplomatic, conciliatory posture.

Ultimately, European weakness and division are the reasons the region is getting bullied by so many other powers. Reversing that weakness and division would make the bullies go away. But Europe’s people, and especially Europe’s elites, have to want it.



1. And of course if Europe does expel the Muslim immigrants and starts talking up its Christian heritage, as the MAGA folks want, Democrats will conclude that Europe is fascist and be reluctant to help it out when they get back in power. Essentially, Europe is caught in America’s internal culture wars, and there’s no good way out; the only solution is to realize that the U.S. will not be a reliable partner for decades to come.

2. Would Russia actually try to conquer and rule all of Europe directly, as the Nazis tried to do? Unlikely. But would it try to dominate all of Europe the way the USSR dominated the Warsaw Pact? Yes, definitely. And this sort of domination would be very bad for Europeans, as the Poles could tell you.

Planning sentences to ponder

Planning assistance caused municipalities to build 20% fewer housing units per decade over the 50 years that followed.

Here is the full abstract:

We study how the federal Urban Planning Assistance Program, which subsidized growing communities in the 1960s to hire urban planners to draft land-use plans, affected housing supply. Using newly digitized records merged with panel data across municipalities on housing and zoning outcomes, we exploit eligibility thresholds and capacity to approve funds across state agencies to identify effects. Planning assistance caused municipalities to build 20% fewer housing units per decade over the 50 years that followed. Regulatory innovation steered construction in assisted areas away from apartments and toward larger single-family homes. Textual evidence related to zoning and development politics further shows that, since the 1980s, assisted communities have disincentivized housing supply by passing on development costs to developers. These findings suggest that federal intervention in planning helped institutionalize practices that complicate community growth, with subsequent consequences for national housing affordability.

Hail Martin Anderson!  The above paper is by Tom Cui and Beau Bressler, via Brad, and also Yonah Freemark.


The See-Through 747

December 8, 2025

In the first grade, my two favorite toys were both 747s.

The first was an inflatable replica, similar to those novelty balloons you buy at parades, with rubbery wings that drooped in such violation of the real thing that I’d tape them into proper position. To a six-year-old it seemed enormous, like my own personal Macy’s float. The second toy was a plastic model about twelve inches long. Like the balloon, it was decked out in the livery of Pan Am. One side of the fuselage was made of clear polystyrene, through which the entire interior, row by row, could be viewed. I can still picture exactly the blue and red pastels of the tiny chairs.

Also visible, in perfect miniature near the toy plane’s nose, was a blue spiral staircase. Early 747s were outfitted with a set of spiral stairs connecting the main and upper decks – a touch that gave the entranceway a special look and feel. Stepping onto a 747 was like stepping into the lobby of a fancy hotel, or into the grand vestibule of a cruise ship. In 1982, on my inaugural trip on a 747, I beamed at my first real-life glimpse of that winding column. Those stairs are in my blood — a genetic helix twisting upward to a kind of pilot Nirvana.

That’s a passage found in chapter two of my book.

It’s that second toy, the one with the transparent fuselage, that I bring to your attention. As it happens, I discovered a photograph, buried in an old family album, in which you can see it. While I’ve always remembered the toy, I had no idea that a picture of it existed.

That’s me holding the plane, of course, with my sister and my mother in front. It’s Christmas morning, 1972.

Look closely and you can see the rows of seats, sectioned into different colors. The first class seats look red. On the left wing it says “Pan Am.” You can’t see the spiral stairs, but they’re in there, in the middle of that blue part. It appears the entire fuselage was see-through, not just half of it, as I’d written.

One wonders what sorts of shitty toys are available these days for first-grade airplane buffs.

That plastic plane is long gone, sadly. I’m not saying you should save all of your childhood toys, but be careful. This one, surely, deserved to be set aside. Even so young, I already had aspirations of becoming a pilot. It would’ve made a meaningful keepsake.

The picture, at least, remains.

Last Thursday, by the way, marked the 34th anniversary of the demise of Pan American World Airways. The company ceased operations on December 4th, 1991. I remember watching it on the news, in a hotel room in Burlington, Vermont.

I was fortunate enough to fly twice on an actual Pan Am 747. From Rio de Janeiro to New York, in 1982, and from Frankfurt to New York in the fall of 1991, shortly before the end.


Six New Tips for Better Coding With Agents

I’m hanging out in Sydney with my esteemed co-author and co-conspirator Gene Kim today; we flew in to conduct Vibe Coding workshops and talks this week for the Commonwealth Bank of Australia, some of their partner companies, and the general engineering public. Very cool of CBA to sponsor this training, and Gene and I are super excited for it.

We noticed that we’ve pushed into new territory since our Vibe Coding book was published. The book is all about how to work with coding agents, and all the advice and techniques in it are still incredibly relevant; I use it all daily. But there’s even more to learn, and we continue to uncover new tips and strategies.

I thought I’d share some of the new themes we’ve noticed, in no particular order, hot off the presses. Let’s see which ones resonate with you.

1. Software is now throwaway — expect < 1 year shelf life

This is probably the most obvious one. Anthropic has already begun embracing this idea internally, which is how I first heard about it, from friends there.

26 years ago Joel Spolsky wrote one of the most useful pieces of software advice anyone has ever given, in Things You Should Never Do, Part 1, where he says, in a nutshell, DON’T REWRITE YOUR SOFTWARE!

In this classic essay, well worth a read, Joel gives powerful examples of companies and projects that decided their code base was too old and crufty, so they chose to rewrite it all from scratch. And the results were, predictably, awful. Joel says:

The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive. Au contraire, baby! Is software supposed to be like an old Dodge Dart, that rusts just sitting in the garage? Is software like a teddy bear that’s kind of gross if it’s not made out of all new material?

And he was right! Outstanding essay. But unfortunately, not so timeless as we thought. It proved to have a shelf life of only about a quarter century. We are entering a surprising new phase of software development, in which rewriting things is often easier (and smarter) than trying to fix them.

I first noticed this with unit tests. You’ll use agents to make a giant refactoring of your system, and then all the tests will be broken. The agents inevitably struggle to fix them. So one day I said, screw it, delete all the tests and make me new ones. And the agent got through that exercise SO much faster. The new tests were great, had great coverage, and importantly, the LLM was able to generate them very quickly, compared to trying to reason through the old system behavior vs the new expected behavior. With new tests, it can focus just on the new behavior, which is a much cleaner cognitive problem.

This generalizes beyond tests: generating almost any code is easier (for AIs) than rewriting it. Hence, recreating software stacks from scratch is starting to become the new normal. We’re seeing it more and more, e.g. companies with mainframes concluding that a small team of engineers and biz people could recreate the entire experience with the same API, but with modern architecture and maintainable code, in just a few months. And they’re doing it.

The upshot is that for all the code I write, I now expect to throw it away in about a year, to be replaced by something better. Maybe mine, maybe someone else’s. Doesn’t matter. It’s all just stepping-stones to higher velocity.

This spells trouble for third-party SaaS vendors. Companies are also discovering that they can build bespoke business-automation software so easily that they don’t need to re-up their vendor contracts. SaaS vendors are going to have to work harder to provide value that’s too expensive to recreate. It can be done — Graphite is one example; they now have years of learnings about the nuances of AI code review. I don’t think you would necessarily want to retrace those years of steps yourself, on your company dime. Sourcegraph is another example; they have a code search engine with 10 years of enterprise bug fixes, and even with modern agents, you almost certainly wouldn’t want to try to clone that yourself.

But many SaaS vendors who’ve found niches building business automation software are going to be in real trouble. Because businesses are automating their own processes now, with vibe coding!

2. Agent UX matters at least as much as Human UX

One of the interesting themes I heard at the AI Engineering Conference in NYC a couple weeks ago was that although many people are building tools for AIs, they are finding it very hard to get the AIs to use those tools.

It’s tricky to get AIs to use a tool they’re not trained on. They have certain ways of thinking and working, and they tend to reach for familiar tools (e.g. grep instead of a fancier search). I’ve talked with many people who wanted to build a tool for their agents to use, and they’d work with the frontier models to design the perfect agent-friendly interface — one the models swore up and down they would use.

And then haha, no, the agents don’t use it. You prompt and prompt, they ignore and ignore. So what do you do? How do you get them to use your tools?

My Beads issue tracker for agents has been an interesting case study here. It’s only maybe 2 months old and it already has 250+ forks and 5000+ stars. It’s a successful project. But I’ve never looked at the code. It’s fully vibe-coded by agents. Despite that, Beads managed to capture lightning in a bottle — it’s a tool that AIs use, and not only that, they like it. Agents use Beads eagerly and enthusiastically with very little prompting. They make smart decisions, such as filing Beads when they are low on context, instead of doing the work directly. Things you would normally have to prompt them to do, they just do!

I’m no magician. I’ve built plenty of tools that the AIs refused to use; I’ll talk about one of them below. And I’ve built plenty of prompts that the AIs choose to ignore or overlook. It’s not like capturing lightning in a bottle is super reproducible at this point. But I can share some of the things I did with Beads that I think helped.

First, I asked Claude to help me design a new lightweight issue tracker backed by git, with a few other constraints, and then Claude came up with about half of the rest of the design: the SQLite database caching layer, the discovered_by graph link that the models feel is very important for gathering context on issues, the hash IDs, deletion tombstoning, etc.

During the Beads design phase, I mostly argued with Claude, telling it I didn’t like certain choices it was making from a Human UX perspective. Eventually we negotiated our way to something we both liked, something that had good agent UX and also good human UX.

For the agent side, once we had the initial structure in place (the issue tracker itself), the primary UX issue became tooling ergonomics. My agents were trying to use Beads, but they kept giving it the wrong arguments. For example, they’d use --body instead of --description when filing an issue, which would fail. Why? Because they were trained on GH Issues, and GHI’s CLI tool uses --body for filing issues. Reaching for the familiar again!

So in that particular case, I told it to add --body as an alias for --description, which it did, and that bit of Agent UX friction went away forever. I’ve done this many, many times in Beads. As the agent works, I watch how it’s using the tool, and whenever it encounters an error, I ask it: how did you want it to work there? How can we change it to make the behavior more easily guessable?
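To make the ergonomics concrete, here’s a minimal sketch of the alias pattern using Python’s argparse as a stand-in. This isn’t Beads’ actual implementation; it just illustrates letting the flag agents guess coexist with the canonical one.

```python
# Hypothetical sketch: accept the flag agents reach for (--body) as an
# alias for the canonical flag (--description). argparse lets a single
# argument carry multiple option strings.
import argparse

parser = argparse.ArgumentParser(prog="bd", description="File an issue.")
parser.add_argument(
    "--description", "--body",  # --body is the agent-friendly alias
    dest="description",
    help="issue description (--body accepted for agents trained on gh)",
)

# An agent trained on GitHub's CLI will pass --body, and it just works:
args = parser.parse_args(["--body", "Fix the login timeout"])
print(args.description)  # -> Fix the login timeout
```

The design choice is the interesting part: instead of correcting the agent, you widen the interface to match its guess.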

Over the past few months we’ve made dozens of tweaks, adding flags and commands, and the agents now rarely have trouble using Beads fluently.

I can’t claim to have cracked the agent-UX problem, not by a long shot. But the role of “Agent UX Designer” feels ready to emerge as a first-class career for humans. As just one example, I’m working on my third agent orchestrator this year. And even though the architecture is sound, I haven’t found the magic UX formula yet, where any agent automatically figures out what to do and does the right thing most of the time. I’ll get there! In fact, as soon as I solve this problem with my orchestrator, I’m launching it. I’m aiming for Christmas Day. We’ll see.

Once you do find that secret incantation that makes your tool truly agent-friendly, you should get it out there as fast as you can, because it will grow like crazy.

And if you try to launch a tool that agents don’t choose to use of their own volition, with minimal prompting, then you need to go back to the drawing board and fix the agent UX.

The best way to do this is to leverage the Optionality from FAAFO, from our Vibe Coding book. Generate a whole bunch of interfaces, and then experiment with each one, to see which one the agents like best. It’s very much a trial-and-error search problem at this point, until either the agents get better at using new tools, or we get better at learning what they like.

3. Spend 40% of your time on code health, or else you’ll wind up spending >60%.

Gene was curious how I could be so confident in Beads if I’ve never looked at the code. My answer to him was one of the easiest I’ve ever given. If you are vibe coding, i.e., having the AI write all your code for you, then you need to spend at least 30–40% of your time, queries, and money on code health. That’s how you make sure your code is OK. You have the AI conduct regular code inspections. Tons of them.

It’s pretty easy in principle: every now and then, you pause your regular work and tell your agents: go find code smells of all shapes and sizes. Have them file Beads for anything that needs follow-up. Tell the agent to look for large files that need refactoring, areas with low test coverage, duplicated/redundant systems, legacy code, dead code, poorly-documented code, etc. etc. etc. I don’t have a good prompt for this step yet; I’d appreciate it if anyone has crafted one. But you can also just ask your agent to help craft it.
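For what it’s worth, here’s a rough sketch of the kind of checklist prompt I mean, expressed as a small Python builder. The wording is mine and untested; treat the smell list as a starting point, not a vetted prompt.

```python
# A rough, untested sketch of a code-health inspection prompt.
# The smell list mirrors the checks described above.
SMELLS = [
    "large files that need refactoring",
    "areas with low test coverage",
    "duplicated or redundant systems",
    "legacy code and dead code",
    "poorly-documented code",
]

def code_health_prompt() -> str:
    checks = "\n".join(f"- {smell}" for smell in SMELLS)
    return (
        "Do a code-health inspection of this repository. Look for:\n"
        f"{checks}\n"
        "File a Beads issue for everything that needs follow-up. "
        "Do not fix anything during this pass; just find and file."
    )

print(code_health_prompt())
```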

You’ll also want to ask your agent to do cleanups during the code-health passes. Have it look for files that are in the wrong place, or have misleading names, or need better homes. Have it clean up debug cruft, ancient plans, build artifacts, old docs, anything you don’t need. This is all part of the regular hygiene and maintenance of a vibe-coded code base.

It helps to be creative, and also to ask the agent to be creative, thinking outside the box. After the first round or two of regular code reviews, start having it look for over-engineered subsystems (YAGNI), opportunities where your code could have used a third-party library, and other broad, system-level concerns.

Basically the agent will always find problems, often shocking ones, e.g. where you discover you have two or even three completely redundant systems (databases, logging, telemetry, whatever) that need consolidating. And since agents tend to accrete code without automatic refactoring, your vibe-coded source files will tend to grow to thousands of lines, which makes them harder for agents (and humans) to reason about. So you should tell it regularly to break things up, and then run dedicated sessions to implement the refactoring!

During each code review, have your agent file Beads for everything it discovers. Then have it review the epics and issues (up to 5 times; see below) to ensure the implementation will go smoothly.

Then swarm to fix it all! Do all this at least weekly. For me, I’d estimate I spend about 25–30% of my time and money on code health, and I don’t think it’s enough. As long as I continue to find serious problems with reviews, I need to do more reviews. My current guidance is that you should expect nearly half of your work to be code-health related.

What happens if you don’t follow this rule? You gradually (but rapidly) accumulate invisible technical debt that weighs down your agents in various ways — too much code, conflicting code, obsolete docs, etc. Your agents will begin to work more slowly and you’ll see more bugs in their outputs.

Stay on top of code health, and you’ll keep your vibe-coded code base sprightly.

4. You might be too early: Some projects are ahead of their time.

AI cognition takes a hit every time it crosses a boundary in the code. Every RPC, IPC, FFI call, database call, client/server call, every eval, every single time the AI has to reason cognitively across a boundary or threshold… it gets a little dumber.

I noticed this when working on Efrit, my native-elisp coding agent, which lives inside Emacs. Over the summer I was trying to get Claude and other models to build it for me, and they struggled. Hard. Efrit lives in Emacs, which is a separate process from your coding agent, so already there’s one boundary.

For that particular IPC boundary, there are multiple channels for the agent to talk to Efrit, all of them quite unsatisfying. There’s emacs --batch, which has limitations, and the emacs-server client/server mode, which is also limited for the kind of heavy reflective introspection the agent needs to do for this kind of code base.

So what did I do? I spent a week working with Claude to build a better agent-Emacs bridge. Claude built me the “Agent-Efrit bridge”, a simple and elegant system which uses a polling file channel as a message queue to and from Efrit. It’s beautiful. A tool made for agents, by agents! When it does work, it’s amazing.
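If you’re curious what a polling file channel looks like in miniature, here’s a sketch under assumptions of mine: JSON messages, one file per message, a shared inbox directory. The real bridge’s format and paths aren’t shown here, and the names are invented.

```python
# Minimal sketch of a polling file-channel message queue, loosely in
# the spirit of the bridge described above. Each message is a
# uniquely-named JSON file; the consumer polls, reads, and deletes.
import json
import time
from pathlib import Path

INBOX = Path("/tmp/agent-bridge/inbox")  # hypothetical location
INBOX.mkdir(parents=True, exist_ok=True)

def send(msg: dict) -> None:
    """Write one message as a uniquely named JSON file."""
    (INBOX / f"{time.time_ns()}.json").write_text(json.dumps(msg))

def poll() -> list[dict]:
    """Consume pending messages in arrival order."""
    msgs = []
    for path in sorted(INBOX.glob("*.json")):
        msgs.append(json.loads(path.read_text()))
        path.unlink()  # each message is processed exactly once
    return msgs

send({"op": "eval", "code": "(buffer-name)"})
print(poll())
```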

Naturally, Claude never uses our fuckin’ bridge we built together. I’ve given up even asking. This is an example of a tool I tried to build, but the AI just refuses to use it.

With Efrit, after that initial bridge there are still other RPCs — the API call to the frontier model, the parsing of its response, and the eval of the elisp code to execute the response. All of these were piling up to make the models dumber. And ultimately, the August 2025 crop of frontier models couldn’t solve this problem. Or at any rate, the returns diminished so much that I gave up.

So I paused the project! There was plenty of other work to do. A few months went by, a few model releases happened (notably Sonnet 4 and Sonnet 4.5). Efrit sat idle. And then about 2 weeks ago, someone asked to be an Efrit maintainer, since people wanted to use it. But wait, Efrit was still crap! So I thought, what the heck, let’s have Claude 4.5 peek at it.

Claude 4.5 took one look and said, “great idea, awful execution, but we can modernize this.” It produced an incredibly detailed plan to take Efrit to the next level, and I’ve spent the past 2 weeks letting it grind through this plan (serially, no swarming, since swarming on elisp sounds like a bad idea today.) And now Efrit is getting to be approximately on par with modern coding agents.

All I had to do, in order to crack this nut, was wait 3 months (i.e., 2 model releases). Claude is finding Efrit quite easy now, compared to this summer. I cite this as one of many examples of how the models and tools are indeed getting exponentially better. I have a set of projects they can’t do today. Efrit is (well, was) one of them. If you keep a menagerie of “too hard for AI” projects, then you will be able to watch and measure their cognitive progress increasing month by month.

I often bake this philosophy into my project planning. I will deliberately build something that’s just slightly too hard for the agents, knowing that in the next model release, they’re almost certainly going to find it straightforward. I plan for the models to get smarter, by building tools that don’t work that well with today’s models. This is how you get that little bit of extra shelf life out of your software — plan for it to be useful when smarter agents arrive.

If you read this section and concluded, “well, obviously AI isn’t ready to handle my project work; I tried it, it was confused, so I’m just going to wait for smarter models,” then I wouldn’t blame you. But be careful! You might not need to wait as long as you think. If you’re just using this as an excuse to procrastinate until the models are smarter, then you’re missing out on honing a massive set of skills you need in order to work with models effectively — even as they do get smarter.

In the next section, we’ll talk about a way you can get even more cognition out of today’s models, without needing to wait. You’ll have them solve even harder problems than you thought they were capable of, all because you didn’t give them enough of a chance before. Let’s take a look!

5. The Rule of Five: When in doubt, have the agent review its own work 5 times.

Jeffrey Emanuel discovered this powerful and unintuitive rule. He found that he gets the best designs, the best plans, and the best implementations, all by forcing agents to review their proposals (and then their work) 4–5 times, at which point it “converges”. It typically takes 4 to 5 iterations before the agent declares that it’s as good as it can get.

Jeffrey described a long, complex series of prompts for this process; I’m sure we’d all be appreciative if he published them. But the way he described it to me, you first have the agent do a task, then you do a series of focused reviews. Each review should be slightly broader and more outlandish than the previous one, or you can go in the opposite order. But you need a mixture of in-the-small and in-the-large reviews. You’re having it look for bad code (or designs), but also bad architecture.

To be slightly more concrete, Jeffrey first asks it to do a couple of regular code reviews, which find all the usual stuff. And you’ll notice right away that even on the second review it will often find things it missed in the first review. But I think most of us stop there, if we even ask at all. It definitely feels weird to ask for the 3rd code review, which is the agent’s 4th pass over the code, counting the generation step. But the 3rd review, especially during the Design phase, is where you start asking it existential questions about whether you’re doing the Right Thing throughout the project.

I tried it, and sure enough, it does take 4–5 iterations, just as Jeffrey described, before the agent will say something like, “I think this is about as good as we can make it.” At that point it has converged. And that, folks, is the first point at which you can begin to moderately trust the output the agent has produced. If you always take the first thing it generates, with no review at all, you’re bound to be disappointed.

I asked Claude what it thought of this Rule of Five, and Claude was enthusiastically supportive. Claude claims that this process matches their own cognition model, which is breadth-first: they solve each problem first in very broad strokes. And then they almost always need more passes for proofreading, refining, and polishing — much like humans do.

At first you’re going to want to do this purely with prompting. Maybe Jeffrey Emanuel will share some of his fancy review prompts. But over time, you’re going to want to automate it (see the sketch after this list), since you’re applying the Rule of Five at every single step in the process, which at a bare minimum, for any nontrivial hunk of work, would be:

- 5 passes over the design

- 5 passes over the Beads implementation plan (this results in far better issues and dependencies, and better execution)

- 5 passes over the implementation (code + 4 reviews)

- 5 passes over the tests

- 5 passes for code health (might as well build it into your dev middle loop)

Yes, this is slower. Yes, this is more expensive (though probably less so than all the rework you’ll be stuck with if you skip these steps). Yes, it’s awkward to tell an AI to keep reviewing work it just reviewed.

But you should make sure you do it. Rule of thumb: demand at least 2–3 passes on small tasks, and 4–5 passes on big tasks. If you’re not super familiar with the language, the stack, or the domain, then you should err on the side of more reviews.
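To make the automation concrete, here is a minimal sketch of what a Rule of Five loop might look like. Everything in it is illustrative: `run_agent` is an assumed helper that sends a prompt to your coding agent and returns its reply, and the review prompts and convergence check are stand-ins, not Jeffrey’s actual prompts.

```python
# Hypothetical sketch of an automated Rule of Five loop.
# `run_agent` is an assumed callable: prompt in, agent reply out.

REVIEW_PROMPTS = [
    "Review your last output in detail. List every problem and fix it.",
    "Review it again: bugs, edge cases, and anything you missed last time.",
    "Step back and review the architecture: are we doing the Right Thing?",
    "Final pass: simplify, polish, and flag any remaining risks.",
]

def rule_of_five(run_agent, task_prompt: str) -> str:
    """Generate the work, then force repeated reviews until convergence."""
    output = run_agent(task_prompt)  # pass 1: generation
    for n, review in enumerate(REVIEW_PROMPTS, start=2):
        output = run_agent(review)
        # Crude convergence check; in practice you'd ask the agent
        # directly whether another review would still improve the work.
        if "as good as" in output.lower():
            print(f"Converged after {n} passes")
            break
    return output
```

The same loop applies to designs, plans, tests, and code-health passes; only the prompts change.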

Do this, and it’ll feel like you’re using a model from the future. They will do far better work than they’ve been doing for you. Try it!

6. Swarm where you can, but beware the Merge Wall

I’ve been focused on agent swarming the past few weeks, after several months chasing quality and reliability without much success. I’ve got a new (third!) orchestrator in the works, and wow. Swarming. Next year is going to be extraordinary.

I’ll share a quick example of how powerful swarming can be when it’s working right. I had a disaster the other day where 30 Beads issues went missing. It was three or four major epics, each with a bunch of child issues. I had put a ton of work into their design, following the Rule of Five, and they were all ready to implement.

But I couldn’t find them.

I wasn’t panicked, since it’s hard to truly lose issues in Beads (we do have some bugs here and there but they are getting closed fast). Beads is all backed by Git, so it’s almost always possible (for the AI) to reconstruct what really happened from the git history, and fix it.

But I was concerned, because, where the hell did my 30 issues go? They weren’t deleted. After a couple minutes of increasingly alarmed searching, I finally figured out where they all went: My swarm had implemented them all! WTF?

There was a minor miscommunication, I guess; I asked my orchestrator to start working on the bug backlog, and it assigned all 30 issues to the eight workers I had already spun up. Some of these were quite complex issues. But while I was busy with other stuff, and not watching, the worker agents implemented and closed all 30 issues.

I was equal parts delighted and flabbergasted when I realized what had happened. I went and checked, and sure enough, they’d done all the work. It was pretty decent work and needed very little touchup — likely because I had used the Rule of Five throughout, and the Beads were in very good shape when it came time to implement.

After my 30 issues were magically implemented, I was sold. I would never not swarm again!

And then, of course, I was utterly unable to reproduce that perfect swarm. Subsequent attempts all ran into merge issues and required a ton of hand-holding and infrastructure tweaks. It will be a couple more weeks before I can swarm reliably. But still, I am completely sold.

I’ll know that my swarm orchestrator is ready to launch when I can swarm the web UI, building it from scratch. My system doesn’t have a non-CLI UI yet; well actually it does, in Emacs, but I doubt you want that one, however cool it might be. (It has Efrit inside it, so it’s pretty damn cool.) But I’m going to build a UI with the swarm, and that’s when I’ll know it’s ready for prime time.

The thing you have to be prepared for when swarming is the Merge Queue problem. It’s like smacking into a wall. To illustrate, let’s say you have a simple swarm of 3 workers. One worker is redoing the logging system, another is changing the database API, and another is changing the client-server protocol. It’s likely that all three of these subsystems have some overlap, and changing one requires changing another. Their work will collide when they try to merge it all back together.

When you swarm a task, a key problem is that the workers all start from the same baseline (e.g. the same starting git commit), and they all do their work off that baseline. But each worker has the ability to change the baseline dramatically. Let’s say workers A, B, and C all complete and merge in their work. The system may now be completely different from the original baseline. When the fourth agent D finishes its work, a rebase may no longer be feasible. The system may have changed so much that D’s work needs to be completely redesigned and reimplemented on the new system baseline, which includes A, B, and C’s changes.

This is why you need the Merge Queue. You need to serialize the rebases, and give each worker enough context, and context-window space, to fully merge their work into the new baseline.
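Here’s a rough sketch of what that serialization might look like. It’s a toy, not my orchestrator’s implementation: the branch names are hypothetical, and the agent hand-off is a stub.

```python
# Toy merge queue: integrate one worker branch at a time, so each
# worker rebases against the latest baseline rather than the stale
# one it originally forked from. Branch names here are hypothetical.
import subprocess

def git(*args: str) -> int:
    """Run a git command, echoing it, and return the exit code."""
    print("git", *args)
    return subprocess.run(["git", *args]).returncode

def merge_queue(worker_branches: list[str]) -> None:
    for branch in worker_branches:
        git("checkout", branch)
        if git("rebase", "main") != 0:
            # The baseline moved too far out from under this worker.
            # Abort, and hand the branch to an agent (or a human) with
            # enough context to redo the work on the new baseline.
            git("rebase", "--abort")
            print(f"{branch}: conflicts, needs an agent/human re-merge")
            continue
        git("checkout", "main")
        git("merge", "--ff-only", branch)  # clean, because we serialized

merge_queue(["worker-a/logging", "worker-b/db-api", "worker-c/protocol"])
```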

Some work is inherently parallel, and some work is inherently serial — the latter because of irreducible complexity and task overlap. If you think you’re going to be stuck with an awful merge, then you should probably defer some tasks until the earlier ones complete. But it’s not always possible to tell in advance, so sometimes you’ll have tough merges.

I’ve noticed that projects tend to go through a cycle where they are swarmable for a while, but then you’ll suddenly need to pause and serialize all work for a time. This can happen, for instance, if you’re changing the directory layout of your project, e.g., to make it more accessible to AIs that are trying to guess their way around. You might need to experiment with a bunch of different layouts. But each new source layout changes all your package imports, scripts, and other inter-module references, which would totally break any existing workers. So you have to pause all other work while you do the big package restructuring.

You can think of swarming as a MapReduce-type operation. In the mapper phase, you can spin up virtually unlimited workers. But in the reducer phase you need to merge all their work back together. Unfortunately, as Gene observed, this isn’t really a MapReduce, because most MapReduces have a very simple reduce phase: the workstreams have a monoidal shape, so you can merge their results by doing things like summing counts.

But with agent swarming, the reduce phase is a nightmare; it’s the exact opposite, in fact: it can be arbitrarily complicated to merge the work of two agents. In the limit, what should we do if Worker A deleted an entire subsystem, and Worker B comes along with a bunch of changes to that (now-deleted) subsystem?
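To make the monoid point concrete, here’s a toy contrast. Summing is associative and order-insensitive, so a MapReduce can combine worker results in any order, even pairwise in parallel; merging agents’ code changes has no such structure.

```python
from functools import reduce

# Monoidal reduce: addition is associative with an identity (0), so
# worker results can be folded together in any order or grouping.
worker_counts = [42, 17, 99]
total = reduce(lambda a, b: a + b, worker_counts, 0)  # 158, always

# An agent swarm's "reduce" has none of these properties: merging two
# workers' diffs can conflict, the result depends on merge order, and
# some pairs (delete a subsystem / patch that same subsystem) have no
# well-defined merge at all.
```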

So the swarm merge step is often messy and not entirely automatable. Some cases require either human judgment, or else really good context for AIs to make the call.

I don’t know if we’re going to get a tool that hides the mess. I’ve been talking to investors, many of whom are keenly interested in the next generation of developer tools, and there is a prevailing belief that all we need are proper guardrails, and then these kinds of agentic coding and swarming tools will be accessible to “average” developers, which they certainly are NOT today.

And why is that? Well, as Joel Spolsky observed in Things You Should Never Do, Part I, it’s harder to read code than to write it. This is a well-known finding in the Dev Productivity world; they’ve done study after study. And with vibe coding, reading code is… pretty much all you do all day. It’s hard for most developers. The average dev probably thinks 5 paragraphs is an essay. Coding agents make you read enormous waterfalls of both text and code. This is absolutely draining and beyond the capabilities of most devs today.

However, I don’t see eye-to-eye with the investors on this one. I personally do NOT think we will get useful guardrails. If you try to build something with heavy guardrails, you’re going to wind up with Bolt or Lovable, and nobody will use it. Sorry! That’s just not the right model. Instead, I think we’re going to get orchestration tools that are every bit as powerful, messy, quirky, and frustrating as Claude Code and the current batch of terminal-based coding agents.

And the people who figure out how to use these tools, despite the lack of guardrails, will become super-engineers. I’ve been kicking around the idea of a new blog post, the Rise of the Superengineer. Dunno if it’s worth a whole post, but what’s going to happen in 2026 is that a new class of 100x (or maybe 1000x) engineer will emerge: people who have figured out how to wield coding-agent orchestrators effectively, handling the merge problem, planning, swarming, code health, and all the other stuff I’ve talked about here. And they will be able to run 100 coding agents at once, and get meaningful work done with them.

This will make them as productive as a team of 50+ regular engineers.

I think my own orchestrator will usefully peak at around 50–80 agents. Maybe I can get it up to 100. It’s not aimed at massive swarms; it’s aimed at leveling you up from manually managing a dozen ad-hoc agents in ad-hoc repo clones all around your filesystem, to managing swarms of well-behaved agents working 5–10 at a time on focused tasks. It will still require your full attention, your full engineering background, and every bit of design taste you can muster, to use these tools. In some ways it’s even harder and more dangerous than using a single coding agent, even with tooling support.

But some people are doing it already! By hand, to be sure, or by building their own homegrown orchestrators. Mark my words, though: next year, you’re going to have engineers who can build (and likely maintain) an entire company’s software on their own. You’ll have solo unicorns, sure, but also a marketplace of solo uber-contractors who can build things that companies would previously have paid someone like Accenture tens of millions of dollars for.

There will also be small teams of people who figure out how to maximize their velocity when multiple humans work with agent teams. And these small teams are going to change the world. Gene and I are actively wondering whether company size is going to decrease on average, because you will be able to get so much more done with so many fewer people.

But no matter what, the tools are going to be messy from now on. Working with AIs is a little messy and nondeterministic. And I think that’s here to stay.

Wrap-Up

Gene and I went through at least a baker’s dozen ideas this morning, and I’ve chosen the half that seemed the most baked. A few others are becoming clearer, but are still so vague that we don’t really have the right vocabulary to talk about them yet.

Change is coming. Agents are way more powerful than they were 3 months ago. I’ve talked with plenty of (good) engineers lately who still believe that agents have plateaued. Ignoring the 30 years of evidence showing that AI is following Moore’s Law, they feel it’s just going to stop getting better today, out of nowhere. And in their opinion, agents are not good enough yet.

But if you’ve been following and using agents since they landed in February, you’ll know just how much more powerful and capable they have become, even since summertime. It’s not plateauing; heck, it’s not even slowing down. And you can prove it using your backlog of projects that are too hard for AI. Every few months, another one will fall, until there are no more left.

If you’re one of the many engineers who still hasn’t made the switch to AI-first coding, now is a good time to try it again. If you haven’t used an agent in a few months, you’re going to be shocked at how smart and capable they have become. They are full concierges now, able to help you with any computing-related problem. People tell me they even use Beads for their personal TODO lists!

My orchestrator is right around the corner. I’m excited for it. It’s going to make a splash. Hopefully this Christmas!

But you’ll only be able to use it if you already use coding agents for literally everything. If you want to be a 100x super-engineer next year, you need to start learning vibe coding basics today, and make it work for you. Keep in mind all the advice I’ve given here, and read our Vibe Coding book, which just came out on Oct 21st. It’s fresh and relevant, and will help you get into the right mindset with the right techniques and practices.

More to come, soon.

This stuff is so fun!

The EU production function

The central puzzle of the EU is its extraordinary productivity. Grand coalitions, like the government recently formed in Germany, typically produce paralysis. The EU’s governing coalition is even grander, spanning the center-right EPP, the Socialists, the Liberals, and often the Greens, yet between 2019 and 2024, the EU passed around 13,000 acts, about seven per day. The U.S. Congress, over the same period, produced roughly 3,500 pieces of legislation and 2,000 resolutions.

Not only is the coalition broad, but it encompasses huge national and regional diversity. In Brussels, the Parliament has 705 members from roughly 200 national parties. The Council represents 27 sovereign governments with conflicting interests. A law faces a double hurdle: a qualified majority of member states and of members of parliament must support it. The system should produce gridlock, even more than the paralysis commonly associated with the American federal government. Yet it works fast and produces a lot, both good and bad. The reason lies in the incentives: every actor in the system is rewarded for producing legislation, and not for exercising their vetoes…

Formally, the EU is a multi-actor system with many veto points (Commission, Parliament, Council, national governments, etc.), which should require broad agreement and hence slow decision making. In practice, consensus is manufactured in advance rather than reached through deliberation.

By the time any proposal comes up for an official vote, most alternatives have been eliminated behind closed doors. A small team of rapporteurs agrees among themselves; the committee endorses their bargain; the plenary, in turn, ratifies the committee deal; and the Council Presidency, pressed for time, accepts the compromise (with both Council and Parliament influenced along the way by the Commission’s mediation and drafting). Each actor can thus claim a victory and no one’s incentive is to apply the brakes.

That is from an excellent piece by Luis Garicano. What would Buchanan and Tullock say?

A Ukrainian mathematician requests mathematical assistance

an expert in general relativity or a mathematical physicist familiar with PPN methods, weak-field gravitational tests, and variational principles…

For the two technical appendices (ψ-preconditioning and χ-flattening), I would need:
• a quantum algorithms researcher (QSP/QSVT/QLSA/QAE) to assess the correctness of the operator transformations and the potential complexity gains;
• a quantum control or pulse-level compilation engineer (pulse-level, virtual-Z) to evaluate whether the phase-drift compensation algorithm can be implemented realistically on actual hardware.

Please email me if you think you might be of assistance.

One Last Note on Tiimo: What’s the Deal With That Icon?

One small update I just appended to my piece Friday taking a look at the winning apps from this year’s App Store Awards:

Lastly, I have questions — some really hard questions — regarding Tiimo’s app icon. Such as, “What is that?”

Perhaps it got picked because it makes Apple’s new OS 26 icons look good by comparison?

WorkOS Radar

My thanks to WorkOS for sponsoring last week at DF. Does your app get fake signups, throwaway emails, or users abusing your free tier? Or worse, bot attacks and brute force attempts?

WorkOS Radar can block all this and more. A simple API gives you advanced device fingerprinting that can detect bad actors, bots, and suspicious behavior. Your users trust you. WorkOS Radar lets you keep it that way.

Many wonders are visible when flying over the Earth at night.