Recognizing the Right Time to Request a Mental Health Evaluation During Divorce

Divorce can bring out emotional challenges that make communication difficult. Sometimes those challenges interfere with key decisions that affect the family. A Phoenix divorce lawyer often sees cases where a mental health evaluation becomes necessary to protect everyone involved.

Understanding The Purpose Of A Mental Health Evaluation

A mental health evaluation helps the court determine whether a person is capable of making sound decisions during the divorce. It also clarifies whether emotional or psychological issues may affect parenting responsibilities. These evaluations provide insight, not judgment, about a person’s mental state.

Courts rely on professional assessments to ensure fairness in complex situations. They want to know whether each spouse can participate responsibly throughout the process. This helps prevent misunderstandings or assumptions that could harm the outcome.

The evaluation is not meant to label anyone as unstable. Instead, it is a tool that provides clarity during emotionally heavy transitions. It offers a neutral perspective when conflict makes things unclear.

When Safety Concerns Arise

One of the most common reasons for requesting a mental health evaluation is personal safety. If a spouse shows signs of unpredictability, extreme mood swings, or violent behavior, the court may need additional information. These signs can indicate that deeper issues are present.

A pattern of threats or escalating emotional reactions can raise legitimate concerns. When these behaviors emerge during a divorce, tensions can escalate rapidly. A professional evaluation helps determine whether intervention is needed.

Protecting children and the other spouse becomes a top priority in these cases. The court will use the evaluation to establish conditions that ensure the safety of everyone involved. This may include temporary restrictions or supervised interactions if necessary.

Concerns About Parenting Ability

Mental health evaluations are often requested when a parent’s ability to parent is being questioned. A parent who consistently struggles with emotional regulation may have difficulty providing stability for their children. Courts want assurance that each parent can create a safe environment.

If a spouse frequently withdraws, becomes overwhelmed, or exhibits erratic behavior, these patterns may affect custody decisions. An evaluation helps identify whether these issues are short-term reactions or part of a larger concern. The findings offer guidance on appropriate parenting plans.

Courts may also request evaluations when children show signs of stress around a parent. These observations can signal that something deeper is happening at home. Understanding the cause is essential before finalizing custody arrangements.

When Substance Abuse Plays A Role

Substance abuse can significantly complicate divorce proceedings. A spouse who misuses alcohol or drugs may struggle with decision-making or consistency, affecting significant aspects of the case. These situations often require expert evaluation to determine the severity of the issue.

A mental health assessment can identify whether substance use stems from chronic dependence or temporary stress. This distinction is important because treatment options and parenting plans rely on accurate information. Evaluations help break down what is truly affecting the individual.

Courts rely on these findings before granting custody or long-term responsibilities. The evaluation ensures that children are placed in environments that support their emotional and physical well-being. It also encourages appropriate rehabilitation if needed.

High-Conflict Divorces And Communication Breakdowns

In high-conflict divorces, communication often becomes strained or combative. When one spouse seems unable to engage constructively, a mental health evaluation may help determine the root cause. Some individuals react strongly to stress, while others may have underlying conditions that intensify conflict.

Evaluations can uncover whether emotional responses are part of a larger disorder. This information helps the court understand why cooperation has become difficult. It also provides guidance for creating more effective communication strategies.

This step is invaluable when one spouse constantly contradicts themselves, becomes overwhelmed, or refuses to participate in discussions. An evaluation brings clarity to behaviors that might otherwise derail the entire process.

When A Spouse Makes Unusual Or Irrational Claims

During a divorce, emotions run high, and some spouses make claims that seem exaggerated or unrealistic. If these statements repeatedly contradict facts or appear disconnected from reality, an evaluation might be appropriate. These moments can signal deeper emotional struggles.

When irrational claims impact finances, property division, or parenting decisions, the court may intervene. Evaluations help determine whether the claims stem from intentional misrepresentation or actual mental health concerns. This distinction clarifies how the court should proceed.

An objective assessment protects both parties from decisions influenced by confusion or instability. It ensures that the process remains grounded in clear information. This allows for more balanced conclusions as the divorce progresses.

Conclusion

Requesting a mental health evaluation during divorce is sometimes essential for safety, clarity, and fairness. It helps the court understand whether emotional challenges are affecting important decisions. A Phoenix divorce lawyer can guide spouses through this process and ensure the evaluation is used appropriately.



How A Second DUI Within Five Years Increases Legal Penalties

A single mistake after a night out drinking can be dangerous, but repeated incidents create greater hazards for everyone involved. A second DUI within a five-year window brings far more serious consequences than a first offense. Many drivers are surprised at how quickly penalties escalate. A DUI attorney in Carmel often sees clients who underestimate the impact of that second charge.

Why The Five-Year Window Matters

Most states treat repeat offenses within a specific timeframe as signs of increased risk. The five-year window is designed to catch patterns of unsafe behavior rather than isolated mistakes. When someone reoffends within this period, prosecutors usually take a firmer stance.

Judges often view a second offense as evidence that earlier penalties were ineffective. This perspective makes them more willing to impose harsher restrictions or more extended supervision. As a result, the legal system focuses more on prevention and accountability than simple punishment.

Because timing plays such a significant role, even a few months’ difference can shift the entire sentencing landscape. Falling inside the five years immediately changes how the case is classified. This means defendants need to prepare for significantly higher stakes.

Increased Fines And Financial Impact

A second DUI almost always results in a substantial increase in fines. Courts impose higher financial penalties to deter future offenses and underscore the seriousness of repeat conduct. These fines can amount to thousands of dollars and may be accompanied by additional court costs.

Insurance expenses rise sharply as well. Many drivers face steep premium increases or even policy cancellation. These financial consequences often outlast the court process and create long-term strain.

Additional programs, such as mandatory alcohol assessments, treatment courses, or ignition interlock devices, also add to the cost. These requirements may be non-negotiable for a second DUI. The combined financial burden often becomes one of the most challenging parts of the situation.

Longer Jail Time And Stricter Sentencing

Jail time becomes more likely with a second DUI. Even if the first offense involved only probation, the second may carry mandatory minimum jail time. Judges lean toward incarceration to reinforce that repeat impaired driving poses significant danger.

Sentencing guidelines also allow for extended probation periods. These longer terms come with strict conditions, including testing, classes, and frequent monitoring. Violating any condition can trigger additional penalties.

Community service hours may also be added. Courts often use service as a way to reinforce responsibility. The expectations are higher because the offense shows a pattern rather than a one-time lapse.

License Consequences And Driving Restrictions

A second DUI nearly always results in longer license suspensions. Losing the ability to drive affects work, childcare, and daily responsibilities. Courts consider this a necessary measure to protect the public.

Some states require the installation of an ignition interlock device immediately after a second conviction. This device prevents the car from starting until the driver passes a breath test. These systems create both a safety safeguard and an added expense.

Drivers may also face restrictions on their driving privileges. These limited permits typically allow travel only for work, school, or treatment. Even with restrictions, the monitoring process is much stricter than it is after a first offense.

Mandatory Education And Treatment Programs

Courts often require repeat offenders to complete more intensive alcohol education or rehabilitation programs. These courses aim to help drivers understand the risks and make long-term changes. The programs can span weeks or months, depending on the severity of the situation.

Treatment requirements may involve counseling, group sessions, or supervised recovery plans. The court’s goal is to address the underlying issues that contributed to the repeat offense. Because this is a second DUI, compliance is closely monitored.

Failure to complete any part of the program may result in additional hearings. Judges do not hesitate to impose extra penalties when someone falls behind. Repeat offenders must show a genuine effort to meet every requirement.

Greater Impact On Employment And Personal Life

A second DUI creates more disruption in daily life than most people expect. Job opportunities may shrink, especially in fields that require driving or background checks. Employers often view repeat offenses as red flags for reliability or safety.

Personal relationships can also suffer. Family members may worry about safety or responsibility, which can lead to tension at home. The lengthy legal process adds stress that affects everyone involved.

Even after penalties are lifted, the record of a second offense remains visible for years. Housing applications, loan approvals, or volunteer opportunities may be affected. The long-term ripple effects make the second DUI far more damaging.

Conclusion

A second DUI within five years dramatically increases legal, financial, and personal consequences. The system views the repeat offense as a serious pattern that requires stronger intervention. A DUI attorney in Carmel can help clarify options and guide individuals through the complex process.

Photo at top via Freepik



FHFA Announces Baseline Conforming Loan Limit Will Increase to $832,750 in 2026

Today, in the Calculated Risk Real Estate Newsletter: FHFA Announces Baseline Conforming Loan Limit Will Increase to $832,750 in 2026

A brief excerpt:
After the release of the FHFA house price index for September this morning, the FHFA released the conforming loan limits for 2026.

From the FHFA: FHFA Announces Conforming Loan Limit Values for 2026
U.S. Federal Housing (FHFA) today announced the conforming loan limit values (CLLs) for mortgages Fannie Mae and Freddie Mac (the Enterprises) will acquire in 2026. In most of the United States, the 2026 CLL value for one-unit properties will be $832,750, an increase of $26,250 from 2025. ….

For areas in which 115 percent of the local median home value exceeds the baseline conforming loan limit value, the applicable loan limit will be higher than the baseline loan limit. HERA establishes the high-cost area limit in those areas as a multiple of the area median home value, while setting the ceiling at 150 percent of the baseline limit. Median home values generally increased in high-cost areas in 2025, which increased their CLL values. The new ceiling loan limit for one-unit properties will be $1,249,125, which is 150 percent of $832,750
Note that there are different loan limits for various geographical areas. There are also different loan limits depending on the number of units (from 1 to 4 units). For example, next year the CLL is $832,750 for one-unit properties in low-cost areas. The four-unit limit is $1,601,750.

For high-cost areas like Los Angeles County, the CLL is $1,249,125 for one-unit properties (50% higher than the baseline CLL) and the four-unit limit is $2,402,625.
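To make the ceiling arithmetic concrete, here is a minimal sketch in Python of the HERA rule as summarized in the release: the high-cost area limit is 115 percent of the local median home value, floored at the baseline and capped at 150 percent of the baseline. Only the 2026 baseline and the 115%/150% factors come from the text; the function name and the sample median value are illustrative assumptions, not FHFA data.

```python
# Minimal sketch of the HERA high-cost limit rule summarized above.
# Assumptions: function name and sample median value are illustrative only.

BASELINE_2026 = 832_750  # baseline CLL for one-unit properties in 2026

def high_cost_limit(area_median_home_value: float, baseline: float = BASELINE_2026) -> float:
    """Area limit: 115% of the local median, no lower than the baseline,
    no higher than 150% of the baseline (the 'ceiling')."""
    return max(baseline, min(1.15 * area_median_home_value, 1.50 * baseline))

# A high-cost county whose median pushes 115% past the cap lands exactly on the ceiling:
print(high_cost_limit(2_000_000))  # 1249125.0
print(1.50 * BASELINE_2026)        # 1249125.0, matching the stated $1,249,125 ceiling
```

On these assumptions, a market like Los Angeles County, cited above, simply hits the 150 percent cap rather than getting 115 percent of its median.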
There is more in the article.

The Man Who Drew Wellington

In the 1890s, Thomas Ward created maps of the city of Wellington, New Zealand, that are the subject of a new book by Elizabeth Cox, Mr Ward’s Map, and this article in New Zealand Geographic…

Tuesday assorted links

1. Gideon Lewis-Kraus on Beckert on capitalism (New Yorker).

2. How good are social scientists as forecasters?

3. Zvi on Gemini 3.0.

4. Jimmy Cliff, RIP.

5. Nuclear Regulatory Review for the UK.

6. Leaderboard for AI predictors.

7. The import of rare coding variants.

8. Claude 4.5 Opus.


How Reinstatement and Probation Terms Are Negotiated After License Suspension

Although a suspension of a nursing license may seem like the end of one’s career, it often signals the start of a formal reinstatement process. Navigating a complicated framework of probationary terms, reporting obligations, and compliance requirements is frequently necessary when returning to full professional practice. Professionals who provide help with professional license issues for nurses assist in interpreting these rules, ensuring that nurses understand their responsibilities and deadlines. 

Understanding Probation Conditions

The type of infraction, the nurse’s history, and the licensing board’s recommendations all influence the probationary period. Typical conditions may include:

  • Monitoring Requirements: Random drug tests, performance reviews, and patient record audits may be mandatory for nurses.
  • Educational Programs: Further training or continuing education may be necessary to address knowledge gaps or issues related to professional behavior.
  • Workplace Restrictions: Certain high-risk tasks may be temporarily restricted until full reinstatement is granted.

It is essential to understand these requirements, as noncompliance may result in extended probation or even license revocation. Professional guidance can help nurses understand all requirements and establish protocols to meet deadlines, maintain documentation, and communicate effectively with regulatory boards.

The Negotiation Process

Negotiating probation and reinstatement terms involves more than just agreeing to the board’s initial proposal. Experienced advisors work to:

  1. Clarify Requirements: Ensure each condition is understood to avoid misconceptions that might result in disciplinary action.
  2. Advocate for Reasonable Conditions: Where possible, terms may be adjusted to reflect the nurse’s experience, the circumstances of the suspension, and evidence of rehabilitation or ongoing professional development.
  3. Create a Compliance Plan: Structured plans detailing how each condition will be met can demonstrate commitment and professionalism to the licensing board.

Negotiating well can minimize constraints, greatly reduce stress, and enable the nurse to return to work sooner. A well-defined compliance plan guarantees that probation accomplishes its goals without unduly hindering the nurse’s career.

Reporting and Documentation

Accurate documentation is crucial to the probation and reinstatement procedures. Nurses are often required to submit regular reports, documentation of completed coursework, and evidence of compliance with workplace regulations. Missed or incomplete deadlines could jeopardize reinstatement and lead to further disciplinary action.

To effectively track requirements, professionals who help nurses with professional license issues often offer tools, templates, and checklists. They ensure paperwork is well-organized, timely, and presented in a manner that demonstrates professionalism and accountability.

Common Challenges

Several challenges can arise during probation or reinstatement:

  • Complex Rules: Due to their numerous overlapping requirements, state regulations can be challenging to understand.
  • High Stakes: Any mistake may prolong probation or affect future career opportunities.
  • Emotional Stress: The pressure of returning to practice under scrutiny can impact performance and confidence.

These risks are reduced with the help of knowledgeable advisors. To effectively handle the administrative and professional aspects of reinstatement, nurses receive clear instructions, tailored strategies, and continuous support.

Nurses on Probation: Best Practices

To be reinstated, nurses should:

  • Keep thorough records of all requirements fulfilled and correspondence with the board.
  • Attend recommended education and professional development courses promptly.
  • Communicate proactively with supervisors and compliance officers about progress.
  • Adhere to all probationary and workplace regulations consistently.

Adopting these practices demonstrates professionalism and reliability, which can influence future evaluations and help secure a full, unrestricted license.

Final Thoughts

Following a license suspension, returning to professional practice requires close attention to probationary conditions, reporting obligations, and regulatory requirements. Nurses seeking help with professional license issues can receive organized guidance to navigate this complicated process and ensure a smoother return to work. License reinstatement is more than a legal requirement; it is a step toward regaining professional trust and confidence.

Photo: Freepik via their website.



Why Transmission Fluid Should Be Checked Differently Than Engine Oil

Most drivers know that keeping up with engine oil checks is essential for preventing breakdowns and keeping a car running smoothly. Engine oil is simple enough—its main job is to lubricate and cool the engine, and checking it is familiar to nearly everyone.

Transmission fluid, on the other hand, is often overlooked or misunderstood. It plays a much more complex role, and failing to maintain it properly can lead to expensive repairs and even safety concerns. In situations where vehicle damage contributes to an accident, understanding maintenance issues can also be important when speaking with an attorney for car accidents.

How Transmission Fluid Works—and Why It’s Different

Transmission fluid isn’t just a lubricant. It serves several critical functions at once:

  • Hydraulic Pressure: It provides the force needed to engage clutches and gears.
  • Cooling: It removes heat from internal components that would otherwise overheat.
  • Lubrication & Protection: It minimizes friction and helps internal components last longer.
  • Cleaning: Additives in the fluid prevent sludge and varnish from building up inside the system.

While engine oil also lubricates and helps manage heat, it doesn’t operate within a hydraulic system or transmit power the way transmission fluid does. Because of this, transmission fluid is far more sensitive to contamination and heat, which makes proper monitoring essential.

Why Checking Transmission Fluid Isn’t Like Checking Engine Oil

Checking transmission fluid involves more than just pulling out a dipstick. These factors make the process different:

  • Fluid Type Is Critical: Automatic and CVT transmissions require specific fluids. Using the wrong type can hurt performance or even damage the transmission.
  • Temperature Affects the Reading: Transmission fluid expands when warm. Many vehicles must be checked at operating temperature—and sometimes while the engine is running.
  • Color and Smell Matter More: Clean transmission fluid is usually red or pink. Burnt-smelling or dark fluid is a sign that something is wrong. Engine oil routinely darkens, but transmission fluid shouldn’t.
  • Some Vehicles Don’t Have a Dipstick: Many modern cars require a technician or specialized tools to check fluid levels, making DIY checks less straightforward.

Since transmission fluid endures high heat and pressure, keeping it in good condition is key to preventing slipping, delayed gear changes, and other issues that compromise drivability.

How to Maintain Transmission Fluid for Better Longevity

Transmission systems require more proactive care than many drivers realize. Maintenance generally includes:

  • Regular Fluid Changes: Follow your vehicle manufacturer’s schedule. Many automatics need fluid changes every 30,000–60,000 miles, and many CVTs fall in the 25,000–50,000-mile range.
  • Replacing the Transmission Filter: Some systems use internal filters that should be changed during service to maintain proper fluid flow.
  • Checking for Leaks: Even small leaks can cause the system to run low on fluid, which leads to overheating and internal wear.
  • Diagnostic Checks: Modern transmissions rely on sensors and electronic controls, so periodic diagnostic scans are necessary to detect issues early.

Unlike engine oil, which is often easy to top off or visually inspect, transmission maintenance can be more technical—and skipping it can shorten the life of the entire system.

How Transmission Fluid Affects Safety

Poor transmission performance doesn’t just affect the feel of the vehicle. Low or worn-out fluid can cause the transmission to slip, hesitate, or shift unpredictably, increasing the risk of accidents. While not every transmission issue leads directly to collisions, ignoring signs of trouble certainly raises the odds. Attorneys handling accident claims recognize that mechanical failures sometimes play a role.

Staying on top of transmission maintenance helps ensure the vehicle handles predictably and safely.

Smart Habits for Drivers

To keep your transmission in good shape:

  • Learn your vehicle’s specific fluid-checking procedure.
  • Monitor fluid color, consistency, and odor.
  • Stick to the manufacturer’s recommended service intervals.
  • Get professional help if you notice slipping, delayed shifting, or unusual sounds.

Final Thoughts

Transmission fluid requires more precise care than many drivers expect. Because it handles hydraulic pressure, cooling, lubrication, and internal cleaning, it must be maintained carefully to keep the transmission functioning reliably. Following your vehicle’s maintenance schedule and paying attention to changes in fluid condition can help prevent major repairs and unexpected performance issues. A well-maintained transmission also reduces the likelihood of erratic behavior on the road—something any attorney for car accidents would confirm plays a role in overall vehicle safety.

Photo: jcomp via Freepik.



NAR: Pending Home Sales Increased 1.9% in October; Down 0.4% YoY

From the NAR: NAR Pending Home Sales Report Shows 1.9% Increase in October
Pending home sales in October increased by 1.9% from the prior month and fell 0.4% year over year, according to the National Association of REALTORS® Pending Home Sales Report. ...

Month-Over-Month
1.9% increase in pending home sales
Gains in the Northeast, Midwest and South; decline in the West

Year Over Year
0.4% decrease in pending home sales
Gains in the Midwest and South; decline in the Northeast and West
emphasis added
Note: Contract signings usually lead sales by about 45 to 60 days, so this would usually be for closed sales in November and December.

The Wanderer

Manager at the first start-up. Solid guy. Significant experience. I know that I can learn from him. No doubt. All the correct operational 1:1 hygiene is there. We meet every week like clockwork; we fill the time, and I often leave with a healthy sense of productivity.

But sometimes… he talks. And wanders.

He’s a talker. He likes stories. He thinks out loud. Often, these stories are related to a topic he or I brought up, but often, they are entirely unrelated to the company, our work, or my job. Or are they? I’m not sure. He’s still talking, and while it’s a compelling tale, I think it’s for his narrative enjoyment rather than our collective professional well-being.

The failure case and the reason you are reading this is that once during storytime, I was seeking guidance, I was looking for answers to essential questions, and I was working to figure out how to make progress. I forgot the story he was telling, but I discovered a question, jumped in, and asked, “You mean I should do this?”

“Yes. Yes! Exactly. Do that — great call.”

So I did.

Two weeks later, I received an urgent and irate Tuesday night email from my manager, “Hey, what are you doing here? Why are you doing this?”

“You told me to.”

“No, I didn’t.”

I don’t yet have a deep analysis of why storytime guidance differs from work guidance. I suspect that because he was lost in the narrative, he is in a different part of his brain, which isn’t work; it’s the story. I do know that receiving contradictory guidance from leadership drives me bonkers. As a leader, your job is to illuminate, not obfuscate.

My solution in this scenario, which I’ve now used for over a decade, has three simple steps:

  1. Prepare for the 1:1.
  2. Capture thoughts in writing in real-time.
  3. Post-mortem (document) the 1:1 immediately.

Preparation Artifact

Sometime before the 1:1, I spend five minutes writing down what I need from my boss this week. This can be a low-prep exercise where I yolo scribble my current set of worries, concerns, and questions. The content is less important than the fact that I’m preparing my brain for the 1:1. We are going to meet. This is what is important to me. With the initial concerns out of my head, I will then take a pass through my to-do list. Anything that I need to discuss that isn’t front of mind? Jot it down.

Do I share this list with my manager beforehand? Depends. My move is always to share any larger, complicated, or political topics the night before so that they can be pre-processed. I don’t always share all topics because it’s a conversation, it’s organic, and I want to give the conversation room to breathe. More on this in a moment.

Back to Reality

With my artifact in hand, my job is to steer the conversation towards these topics. I do this before storytime starts by declaring, “Hey, I have three topics I’d like to cover at some point.” This is easier if I’ve pre-sent the topics. Sometimes we do them right then and there (sweet), but sometimes they happen later organically as part of the 1:1. Read the room.

Now for a power move—it’s subtle. First, I bring the Preparation Artifact as a reminder of the topics or questions I have. I make sure he sees this act. Second, and here’s the move, when he says anything that sounds like a decision, task, or essential — I write it down.

This practice is for me, but it’s also for him. See, he might be in storytime mode, and while storytime might be his chosen means of delivering wisdom, he wanders. When I hear an essential thing, I pick up my pen, and I write it down. He sees this and remembers this isn’t a clever yarn told at the bar, this is work. This is reality. We are at work doing work things.

This practice is not a replacement for having a conversation. This does not absolve me from seeking real-time clarification; this is a quick reminder that we are doing work here. Infrequently, he sees me capture the decision and realizes what I might have heard, so he comes back to reality and clarifies, “This isn’t relevant to that topic. This is just a story.”

Oh.

The Tides of Trust

All done? Great, take three minutes to glance at my notes. Did I cover what I wanted? No? It goes on this list for next week unless it’s urgent. Did I capture all to-dos, thoughts, and next steps? No? Write them down. Right now, I’m heading to another meeting, which is where I’ll forget critical bits the moment someone asks me a deliciously complex question.

This last step is essential because once I’ve written everything down, I often discover that what I heard is different from what I wrote down. The act of passing the thought through my fingers and onto the page forces structure onto the thought. Brains. I know, right?

Professional trust is like the tides of the oceans; it comes and goes. When trust was low between my boss and me, I’d send my read-out of the conversation as a mail or message. I am surprised how often the words he said differ from what I captured, and during low trust, he’ll respond and correct. This response means I need to send these follow-ups post 1:1. Three times with no response? The tide has returned along with truth. Good job.

Do That — Great Call

“You told me to.”

“No, I didn’t.”

In your career as a human working for other humans, this moment will stand out. You believe you did precisely what they asked, but upon completion, they question your work. The work you thought was precisely what they asked. For this specific scenario, I think my boss believed he’d figured it out, so it was OK to wander into story land.

At some point in your senior leadership professional growth, you’ll start to feel like you’ve got it figured out. The circumstances vary, but many years into your career, you’ll start to feel like you have satisfying answers to most questions, your projects will appear drama-free, and previously complex problems will appear familiar.

Good job. You’ve never figured it out. Ever.

Stories. Good stories are fun to write and to tell. You’re reading one right now. Stories can inspire you, point you in the right direction, but the leadership we need day after day is a conversation.

How Legal Representation for Businesses Shapes Ethical AI Adoption and Automation Policies

Artificial intelligence is changing how companies operate each day. Many organizations want to use automation without creating new risks. These decisions require careful planning and oversight. Legal representation for businesses helps guide these decisions with clarity.

Understanding the Rise of Ethical AI

Companies are turning to AI tools to speed up work. These tools can analyze data, make predictions, and support customer needs. Clear rules are needed to prevent harm.

Ethical AI focuses on fairness, transparency, and accountability. Every business must understand how its systems make decisions. Legal teams help define what these rules look like in practice.

Building AI Policies That Protect Customers

Customers expect their information to be used safely and responsibly. AI systems collect data that needs careful handling. Legal advisors help create policies that protect privacy and security.

These advisors understand how data laws apply to different industries. They ensure that companies comply with regional and national requirements. They also help guide decisions that could affect long-term trust.

Clear policies help businesses avoid mistakes that could lead to penalties. They also help companies explain how their AI tools work. This supports stronger relationships with their customers.

Ensuring Fairness in Automated Systems

Bias can appear in AI systems without warning. This happens when the data used to train models is flawed. Businesses must review these risks before using automation in daily operations.

Legal teams push companies to test systems for fairness. They encourage audits that highlight possible bias in predictions or decisions. This helps reduce harm and keeps systems aligned with ethical expectations.

These reviews help companies remain compliant with civil rights and anti-discrimination laws. They also show employees and customers that fairness is a priority. This builds confidence in the technology.

Guiding Responsible Use of Workplace Automation

Automation can change how employees work. It can replace repetitive tasks and shift roles within an organization. Clear rules help companies make these changes responsibly.

Legal advisors help define which tasks can be automated. They also guide communication with employees during this transition. This support lowers confusion and helps workers understand what to expect.

Policies built with legal guidance also protect businesses from labor disputes. They ensure that automation does not violate employment laws. This creates a healthier environment for long-term growth.

Managing Risk When Using Predictive Tools

AI tools often make predictions about customer behavior. Some tools can also predict employee performance or market shifts. These predictions can create risk if they are not adequately supervised. Legal teams help review the purpose of each tool. They help companies understand which data the tool uses. 

This helps prevent misuse or misinterpretation of results. Proper oversight reduces the chance of legal challenges. It also ensures that predictive tools support smart decisions. With the right structure, companies can innovate safely.

Protecting Intellectual Property in AI Development

AI systems rely on data, code, and unique models. These assets need protection to keep a competitive advantage. Legal advisors help companies secure their ideas early in the process.

They review ownership rights when using third-party tools. They also help draft contracts that outline who controls new technology. This prevents conflicts and supports long-term innovation.

Strong intellectual property planning helps companies grow with confidence. It encourages investment in advanced tools. It also helps protect the hard work that goes into building new ideas.

Preparing for New AI Regulations

AI laws are changing quickly. Many countries are writing new rules to protect the public. Companies must stay informed to update policies as needed. Legal advisors watch these changes closely. They help businesses adjust plans before new mandates take effect. This prevents rushed decisions that could lead to costly mistakes.

Staying prepared helps companies remain competitive. It also shows responsible leadership in a rapidly evolving field. This strengthens trust with partners and customers.

Final Thoughts

Responsible AI use relies on clear structure and practical guidance. Companies benefit from policies that protect people and data. Strong planning supports long-term trust. This is why legal representation for businesses remains essential.

Photo: Freepik via their website.



How To Prevent Cart-Related Injuries In Retail Environments

Retail stores rely on carts to help customers shop comfortably. Yet these everyday tools can create unexpected hazards when not used or maintained correctly. A personal injury law firm in Baltimore often sees cases in which a simple trip to the store turns into an avoidable accident.

Understanding Why Cart-Related Injuries Happen

Cart-related injuries often occur because shoppers and employees underestimate how quickly carts can become dangerous. A slight push, a worn wheel, or a distracted moment can lead to sudden impacts. These incidents may seem accidental, but they often stem from preventable issues.

Retail environments are busy spaces filled with movement. Shoppers weave through aisles, employees move stock, and children accompany parents. When carts become part of this crowded flow, the potential for collisions increases.

Even slow-moving carts can cause harm. Feet, ankles, and legs are common points of impact. Understanding the causes helps store management take meaningful steps to reduce risks.

Keeping Carts In Good Working Condition

Regular maintenance is one of the most effective ways to prevent injuries. Many accidents happen because of broken wheels, weak handles, or unstable frames. These problems make carts harder to control and more likely to veer off course.

Stores should inspect carts frequently. A simple checklist can catch issues early and remove unsafe equipment from circulation. This routine keeps customers safer and reduces liability.

Carts with damaged parts should be repaired immediately or replaced. Delaying maintenance increases the chances of a customer losing control and getting hurt. A well-maintained fleet of carts creates a safer shopping environment for everyone.

Creating Safer Store Designs

An orderly, clearly laid-out store goes a long way toward preventing accidents. Wide aisles let carts move without running into shelves or other patrons, and adequate spacing reduces congestion, particularly during peak hours.

Cluttered walkways are harder to navigate. Obstacles like boxes, displays, or advertising stands reduce visibility. Removing them gives customers more room to move around safely.

Carts should be placed strategically near entrances. When they are strewn about or stacked too tightly, customers struggle to pull them free, which leads to jerky movements and collisions.

Educating Workers To Identify Risks

Workers are crucial to ensuring safety. With proper training, they can identify potential issues before those issues result in injuries. Staff should know how to spot damaged carts and remove them promptly.

They should also guide customers when needed. Assisting elderly shoppers or caregivers with multiple children can prevent accidents. A simple act of support often reduces the risk of falls or collisions.

Employees should be trained to keep aisles clear and respond quickly to spills. Slippery surfaces combined with carts create dangerous situations. Swift action keeps the store environment safe and enjoyable.

Encouraging Safe Cart Use Among Shoppers

Customers benefit from reminders about safe cart use. Simple signage near entrances can encourage people to handle carts slowly and stay alert. These reminders are invaluable during peak shopping hours.

Parents should be encouraged to secure children properly in the cart seat. Children standing or leaning out are at increased risk of falls. Clear guidelines promote safer habits for families.

Shoppers should also be advised to keep personal items in their carts. Bags hanging off the side can throw off balance and cause tipping. Safe usage prevents unnecessary injuries.

Addressing Parking Lot Risks

Many cart-related injuries occur outside the store. Parking lots present their own difficulties: uneven pavement, moving cars, and windy conditions. Any of these can quickly turn a loose cart into a hazard.

Stores should provide designated cart-return stations throughout the lot. These stations keep carts from rolling freely, and regular staff collections lower the risk further. Parking lots also need to be well-lit so customers can navigate safely and spot stray carts. These simple measures reduce mishaps and protect everyone.

Increasing Awareness And Tracking

Mirrors can help patrons and employees see around corners in busy places.  This lessens blind spots, which frequently result in unexpected collisions. Proper lighting inside the store also improves visibility.

Security cameras help monitor cart use and identify patterns that require attention. Footage can reveal problem areas where accidents happen frequently. Addressing these patterns helps prevent future injuries.

Monitoring also encourages responsible behavior. When customers know the store pays attention, they often use carts more carefully. This supports a safer environment for everyone.

Conclusion

Preventing cart-related injuries requires a mix of good maintenance, thoughtful planning, and shopper awareness. These efforts create a safer and more comfortable experience for all visitors. A personal injury law firm in Baltimore often sees the consequences of neglect, reminding retailers that prevention is always the best strategy.

Photo: shopfittingmanufacturer via their website.



Newsletter: Case-Shiller: National House Price Index Up 1.3% year-over-year in September

Today, in the Calculated Risk Real Estate Newsletter: Case-Shiller: National House Price Index Up 1.3% year-over-year in September

Excerpt:
S&P/Case-Shiller released the monthly Home Price Indices for September (“September” is a 3-month average of July, August and September closing prices). July closing prices include some contracts signed in May, so there is a significant lag to this data. Here is a graph of the month-over-month (MoM) change in the Case-Shiller National Index Seasonally Adjusted (SA).

[Graph: Case-Shiller MoM House Prices]
This is the 2nd consecutive month with a slight MoM increase seasonally adjusted.
There is much more in the article.

Case-Shiller: National House Price Index Up 1.3% year-over-year in September

S&P/Case-Shiller released the monthly Home Price Indices for September ("September" is a 3-month average of July, August and September closing prices).

This release includes prices for 20 individual cities, two composite indices (for 10 cities and 20 cities) and the monthly National index.

From S&P S&P Cotality Case-Shiller Index Records Annual Gain in September 2025
• The S&P Cotality Case-Shiller U.S. National Home Price NSA Index posted a 1.3% annual gain for September, down from a 1.4% rise in the previous month.

• Inflation outpaced home prices for a fourth straight month, with September’s CPI running 1.7 percentage points above housing appreciation—the widest gap since the measures began diverging in June.

• All 20 metros recorded month-over-month declines before seasonal adjustment in September, underscoring broad-based weakening as elevated mortgage rates weigh on affordability and demand.

S&P Dow Jones Indices (S&P DJI) today released the September 2025 results for the S&P Cotality Case-Shiller Indices.

Please note that September 2025 transaction records for Wayne County, MI, are delayed at the local recording office. Since Wayne is the most populous county in the Detroit metro area, Cotality is not able to generate a valid September 2025 update of the Detroit S&P Cotality Case-Shiller Index before the November 25, 2025, release date. ...

"The housing market's deceleration accelerated in September, with the National Composite posting just a 1.3% annual gain—the weakest performance since mid-2023,” said Nicholas Godec, CFA, CAIA, CIPM, Head of Fixed Income Tradables & Commodities at S&P Dow Jones Indices. “This marks a continued slide from August’s 1.4% increase and represents a stark contrast to the double-digit gains that characterized the early post-pandemic era. National home prices continued trailing inflation, with September’s CPI running 1.7 percentage points ahead of housing appreciation. This marks the widest gap between inflation and home-price growth since the two measures diverged in June, with the spread continuing to widen each month.

“Regional performance reveals a tale of two markets. Chicago continues to lead with a 5.5% annual gain, followed by New York at 5.2% and Boston at 4.1%. These Northeastern and Midwestern metros have sustained momentum even as broader market conditions soften. At the opposite extreme, Tampa posted a 4.1% annual decline—the sharpest drop among tracked metros and its 11th consecutive month of negative annual returns. Phoenix (-2.0%), Dallas (-1.3%), and Miami (-1.3%) likewise remained in negative territory, highlighting particular weakness in Sun Belt markets that experienced the most dramatic pandemic-era price surges.
...
The S&P Cotality Case-Shiller U.S. National Home Price NSA Index, covering all nine U.S. census divisions, reported a 1.3% annual gain for September, down from a 1.4% rise in the previous month. The 10-City Composite showed an annual increase of 2.0%, down from a 2.1% increase in the previous month. The 20-City Composite posted a year-over-year increase of 1.4%, down from a 1.6% increase in the previous month.
...
The pre-seasonally adjusted U.S. National, 10-City Composite, and 20-City Composite Indices continued to report negative month-over-month changes in September, posting -0.3% for the U.S. National Index and -0.5% for both the 10-City and 20-City Composite Indices.

After seasonal adjustment, the U.S. National and 10-City Composite Indices reported a monthly increase of 0.2% and the 20-City Composite Indices posted a month-over-month gain of 0.1%. emphasis added
[Graph: Case-Shiller House Prices Indices]

The first graph shows the nominal seasonally adjusted Composite 10, Composite 20 and National indices (the Composite 20 was started in January 2000).

The Composite 10 index was up 0.2% in September (SA).  The Composite 20 index was up 0.1% (SA) in September.

The National index was up 0.2% (SA) in September.

[Graph: Case-Shiller House Prices Indices]
The second graph shows the year-over-year change in all three indices.

The Composite 10 NSA was up 2.0% year-over-year.  The Composite 20 NSA was up 1.4% year-over-year.

The National index NSA was up 1.3% year-over-year.

Annual price changes were below expectations.  I'll have more later.

Shopping Cart Collisions: Why They’re More Dangerous Than You Think

Most people view shopping carts as harmless everyday tools. Yet these simple objects can cause surprising injuries when mishandled or poorly maintained. The Blackburn Romey law firm has seen how quickly a routine grocery trip can turn into a painful accident.

The Hidden Risks Behind Shopping Carts

Shopping carts appear lightweight and manageable, but their metal frames can exert significant force during a collision. When a cart gains speed or hits a hard surface, it becomes much more dangerous than it appears. Even minor impacts can lead to unexpected injuries.

Many people lose control of carts without realizing how fast they are moving. Sloped parking lots, wet floors, or worn wheels make steering harder. These conditions create moments where accidents happen in just a few seconds.

Because consumers rarely think about cart safety, they underestimate how harmful a collision can be. This false sense of security leads to less caution. As a result, injuries often occur without warning.

Poor Cart Maintenance Makes Collisions More Likely

Grocery stores are responsible for maintaining safe equipment, including carts. However, many carts suffer from faulty wheels, bent frames, or loose handles. These minor defects can dramatically increase the risk of accidents.

A shaky or jammed wheel makes it hard to maneuver through aisles. This lack of control can cause sudden swerves into shelves, displays, or other shoppers. What seems like a minor malfunction quickly becomes a safety hazard.

Poor maintenance also impacts stopping ability. A cart with misaligned wheels may roll farther than expected, especially on polished store floors. This creates unexpected collisions that could have been prevented with routine checks.

Crowded Stores Increase Impact Risks

Busy stores create tight spaces where people move quickly and unpredictably. Shoppers often navigate around displays, children, and long lines. In these conditions, a cart collision becomes far more likely.

Crowds make it harder to see obstacles or anticipate sudden movements. Shoppers may reach for items, step back into the aisle, or stop abruptly. These actions can lead to instant collisions with little time to react.

Even slow-moving carts can cause painful injuries in tight quarters. Feet, ankles, and legs are especially vulnerable. Children riding in the cart seat also face increased danger in a collision.

Shopping Cart Injuries Can Be Surprisingly Serious

While many people think of shopping cart accidents as minor, the injuries can be severe. Bruised ankles and strained muscles are common, but more serious harm can occur. Broken bones, back injuries, and deep cuts happen more often than people expect.

For older adults, even a minor collision can lead to complications. A fall caused by a cart impact may result in fractures or head injuries. These injuries can take months to heal and may require significant medical treatment.

Children are also at risk. A sudden crash can throw a child forward or cause them to hit their head on the cart’s frame. Their smaller size makes them more vulnerable to the force of impact.

Parking Lot Hazards Add Even More Danger

Many shopping cart collisions occur outside the store. Parking lots are filled with moving vehicles, uneven pavement, and strong winds. These factors increase the chances of losing control of a cart.

A cart rolling downhill can gain speed quickly. If it hits a person or a vehicle, the impact can be strong enough to cause serious harm. Drivers may not see a loose cart until it is too late to stop.

Weather adds another layer of risk. Rain makes surfaces slippery, and wind can push empty carts across the lot. These conditions create unpredictable situations where injuries easily occur.

Why Awareness And Prevention Matter

Understanding the risks helps shoppers stay alert. Paying attention to cart condition and avoiding damaged equipment can prevent many accidents. A few extra seconds spent checking wheels or adjusting grip can make a big difference.

Being mindful of surroundings also reduces risks. Staying aware of crowds, corners, and displays helps avoid sudden impacts. Parents can further protect their children by ensuring they sit properly and remain under supervision.

Stores can support safety by maintaining carts and designing wider aisles. Clear walkways and regular equipment checks help reduce hazards. These simple measures protect both customers and employees.

Conclusion

Shopping cart collisions may seem minor, but the injuries they cause can be unexpectedly serious. Awareness and proper maintenance play significant roles in preventing these accidents. The Blackburn Romey law firm often sees how these incidents affect victims, reminding everyone to stay cautious while shopping.

Photo: rawpixel.com via Freepik.



What Does “The Builder’s Remedy” Mean for “Abundance?”

A few months back, Ezra Klein and Derek Thompson released a book in which they described the Abundance Agenda, which Klein summarizes as:

Abundance is the argument that a lot of what is wrong in our society is that we have manufactured scarcities. We have made it too hard to build and create the things people need more of. The places where we focus in the book are housing, clean energy, and state capacity…

But the solutions of one era become the problems of the next. Those procedures became overgrown. So now you have insane outcomes, like laws that are designed to make sure we have a cleaner environment being deployed against the development of solar panels and transmission lines and congestion pricing. Or the fact that in places like California and Washington, DC, it costs a lot more to build affordable housing than to build market-rate housing.

The housing crisis in California is essentially the textbook example of regulations being used to stymie housing (though many other places do it very well too). Which makes this NY Times story about “the builder’s remedy” very interesting (boldface mine):

The law is called the “builder’s remedy,” and it was designed to break the political logjams that have made California one of the most difficult places in the country to build. The law works by nullifying local zoning rules when cities fail to plan for enough housing as required by the state.

While the builder’s remedy has been on the books since 1990, it was effectively dormant until 2022. Since then, however, developers across the state have filed dozens of plans to build 10- and 20-story buildings in neighborhoods where they had never been allowed. Mr. Pustilnikov, who helped pioneer the tactic, has proposed 10 such projects across Los Angeles County….

The irony is that the builder’s remedy was rediscovered almost by accident. Even Ms. Wicks, one of California’s most staunchly pro-housing lawmakers, said the Legislature would never be able to pass the law now because of opposition from local governments. Thus, one of California’s most effective laws for building housing was not a product of its housing emergency or political will, but a legislative relic…

Over more than a dozen tweets, Mr. Elmendorf argued that since 1990, California had had a loophole that allowed developers to bypass the local zoning codes in cities whose housing element was deemed noncompliant by the state. In a follow-up paper, he called it the “builder’s remedy,” a nod to a similar mechanism that arose from New Jersey court rulings that have shaped housing policy in that state.

The clause had rarely been used. But, as it happened, the conditions for exploiting it were historically perfect. Cities across the state were about to be hit with increases in housing target numbers so steep that regulators were all but guaranteed to deem their plans noncompliant.

It’s probably not correct to argue that the builder’s remedy could have been used since 1990, as municipalities’ housing elements might have been deemed compliant back then, but it does seem correct to say that it could have been used since the last plan (roughly ten years, if I understand the timing correctly). What I’m really interested in is what this means for the entire abundance debate.

The example everyone uses for the abundance agenda is the California housing crisis, but the answer, the builder’s remedy, was sitting right there. For years. It suggests that maybe there’s more going on than just too much regulation.

In fact, it seems California is reacting to the builder’s remedy:

Last year, the Legislature passed a bill, introduced by Ms. Wicks, that explicitly codified the builder’s remedy in a modified form: Developers could more easily avail themselves of the tactic in exchange for set limits on density. They cannot build anything they want, but the allowable densities are still several times as large as what local zoning rules allow.

The ultimate impact of the builder’s remedy is likely to be measured not just in units that are built by using it, but in the ones built in fear of it. A few years ago, when Santa Monica was working on its housing element, Jesse Zwick, who was running to be a member of the City Council, sat in frustration while his future colleagues voted for a plan that the state ultimately rejected for failing to provide enough units, he said.

Then developers, including Mr. Pustilnikov, came along, and the wealthy beachfront city was blanketed with housing proposals. The city ended up settling with builders, and the effect is likely to be felt long after.

“The fear of builder’s remedy brought along a lot of people whose inclination was to fight everything,” Mr. Zwick said. “They realized it was in our interest to grow and at least be able to have a say in how we do that.”

Anyway, it’s kind of interesting that a stereotypical example of the problem abundance is supposed to address seems to have resolved itself.

Tossing the ‘Enemy’ Indictments

There was some odd political parallelism yesterday in federal court.

The federal judge who yesterday tossed out separate indictments against former FBI Director James B. Comey Jr. and New York Attorney General Letitia James managed in one decision to rebuke Donald Trump and Attorney General Pam Bondi over the prosecution of political enemies, striking a blow against prosecutions launched at presidential whim.

U.S. District Judge Cameron McGowan Currie said the Trump administration had unlawfully appointed Lindsey Halligan, who had been an insurance lawyer and a Trump aide, as a U.S. Attorney. No lawful prosecutor, no valid indictments, the judge ruled.

In doing so, the judge cited the same reasoning used by Judge Aileen Cannon of Florida in dismissing charges against Trump himself arising from the recovery of hundreds of highly classified documents at Mar-a-Lago. Cannon had found the appointment of former Special Counsel Jack Smith improper, though the specifics were slightly different.

Of course, the administration will appeal, and while technically the Justice Department could try to refile charges, other factors suggest that the Comey case, at least, will likely end here. Comey was charged just before the statute of limitations expired, and Halligan has been left defending a string of apparent legal and procedural errors she made in her first week as a prosecutor – errors that had been reviewed by Bondi.

Strangely, though, it means there will be no trial now for either of them. While that may be good for Comey and James, the rest of us are left to figure out what to make of Trump. We have a president who believes he can point to any critic and say, “Prosecute,” even without sufficient evidence – evidence sufficient to show the possibility of an actual crime, let alone enough to win the case.

The appointment of Halligan and the charges against Comey came within days of a Trump social media post – mistakenly made public – that called for indicting Comey, James and others.

Bad Cases

Before either Comey or James had been charged, Halligan’s predecessor, then-U.S. Attorney Erik Siebert, had refused to prosecute because the cases were weak. Trump then forced Siebert out.

Comey, who faced allegations of lying to Congress and obstruction over news leaks, and James, facing a mortgage fraud charge, were ready to argue that there was no crime, or none that could sustain a jury finding. Similar mortgage fraud charges were filed and dismissed against Fed Board member Lisa Cook, and are pending against Sen. Adam Schiff, D-Calif. That prosecutor also is reported to believe there is not enough evidence to proceed.

Finding that the administration legally erred in appointing Halligan does not address what feels like an abuse of Oval Office power: Trump deciding he wants a prosecution simply as retribution for his own prosecutions.

Trump was found guilty of 34 felonies in a New York court, and he lost embarrassing fraud litigation brought by James. He faced investigation over contacts between his 2016 campaign and Russia largely because of work by Comey’s FBI.

Basically, the reasoning yesterday said Trump cannot just stick a political loyalist into a U.S. Attorney job and skip Senate confirmation. The judge said it was unlawful to appoint two interim prosecutors in succession, since Siebert also had been temporary. That matches rulings by several other judges who rejected temporary appointments in New Jersey, Nevada and California.

“All actions flowing from Ms. Halligan’s defective appointment, including securing and signing Mr. Comey’s indictment, constitute unlawful exercises of executive power and must be set aside,” Currie wrote. “There is simply ‘no alternative course to cure the unconstitutional problem.’”

What we’re not seeing is a judge tell Trump and Bondi that they need evidence to support criminal charges they want filed.



The post Tossing the ‘Enemy’ Indictments appeared first on DCReport.org.

Why are Mormons so Libertarian?

Connor Hansen has a very good essay on Why Are Latter-day Saints So Libertarian? It serves both as an introduction to LDS theology and as an explanation for why that theology resonates with classical liberal ideas. I’ll summarize, with the caveat that I may get a few theological details wrong.

LDS metaphysics posits a universe governed by eternal law. God works with and within the laws of the universe–the same laws that humans can discover with reason and science.

This puts Latter-day Saint cosmology in conversation with the Enlightenment conviction that nature operates predictably and can be studied systematically. A theology where God organizes matter according to eternal law opens space for both scientific inquiry and mystical experience—the careful observation of natural law and the direct encounter with divine love operating through that law.

LDS epistemology is strikingly pro-reason. Even Ayn Rand would approve:

Latter-day Saint theology holds that human beings possess eternal “intelligence”—a term meaning something like personhood, consciousness, or rational capacity—that exists independent of creation. This intelligence is inherent, not granted, and it survives death.

Paired with this is the doctrine of agency: humans are genuinely free moral agents, not puppets or broken remnants after a fall. We’re capable of reason, judgment, and meaningful choice.

This creates an unusually optimistic anthropology. Human reason isn’t fundamentally corrupted or unreliable. It’s a divine gift and a core feature of identity. That lines up neatly with the Enlightenment belief that people can use reason to understand the world, improve their lives, and govern themselves effectively.

In ethics, agency is arguably the most libertarian strand in LDS theology. Free to choose is literally at the center of both divine nature and moral responsibility.

According to Latter-day Saint belief, God proposed a plan for human existence in which individuals would receive genuine agency—the ability to choose, make mistakes, learn, change, and ultimately progress toward becoming like God.

One figure, identified as Satan, rejected that plan and proposed an alternative: eliminate agency, guarantee universal salvation through compulsion, and claim God’s glory in the process.

The disagreement escalated into conflict. In Latter-day Saint scripture, Satan and those who followed him were cast out. The ones who chose agency—who chose freedom with its attendant risks—became mortal humans.

This matters politically because it means that in Latter-day Saint theology, coercion is not merely misguided policy or poor governance. It is literally Satanic. The negation of agency, forced conformity, compulsory salvation—these align with the devil’s rebellion against God’s plan.

Now add to this a 19th century belief in progress and abundance amped up by theology:

Humanity isn’t hopelessly corrupt. Instead, individuals are expected to learn, improve, innovate, and help build better societies.

But here’s where it gets radical: Latter-day Saints believe in the doctrine of eternal progression—the teaching that human beings can, over infinite time and through divine grace, become as God is. Not metaphorically. Actually.

If you believe humans possess infinite potential to rise, become, and progress eternally—literally without bound—then political systems that constrain, manage, or limit human aspiration start to feel spiritually suspect.

Finally, the actual history of the LDS church–expulsions from Missouri and Illinois, Joseph Smith’s violent death, the migration to the Great Basin, the creation of a quasi-independent society–is one of resistance to centralized government power. Limited government and local autonomy come to feel like lessons learned through lived experience. Likewise, the modern LDS welfare system is a working demonstration of how voluntary, covenant-based mutual aid can deliver real social support without coercion. This real-world model strengthens the intuition that social goods need not rely on compulsory state systems, and that voluntary institutions can often be more humane and effective.

To which I say, amen brother! Read the whole essay for more.

See also the book, Latter-day Liberty: A Gospel Approach to Government and Politics, with an introduction by the excellent Mark Skousen.

Hat tip: Gale.

The post Why are Mormons so Libertarian? appeared first on Marginal REVOLUTION.

       


The abrupt demise of the AMA Journal of Ethics

Medical ethics is full of twists and turns, so it is perhaps fitting that the American Medical Association killed its Journal of Ethics, without explanation, in an abrupt announcement in its November issue that the December issue would be the last.


From the Editor in Chief, AMA Journal of Ethics, Nov 2025
Breaking Bad News, Audiey C. Kao, MD, PhD

"For over a quarter of a century, the AMA Journal of Ethics has striven to publish insightful commentaries, engaging podcasts, and provocative artwork that help medical students, physicians, and all health care professionals reflect on and make sound ethical decisions in service to patients and society. I write to inform you that the AMA Journal of Ethics will cease publishing new content after December 2025. Understandably, this news will be sad and unexpected for the journal’s readers and supporters. I share in this loss.

"Previously published content will be maintained on the journal’s website and remain freely available to all in keeping with our guiding premise that ethics inquiry is a public good. With humility, I am hopeful and confident that this archived journal content will stay evergreen for years to come. "

######

Here's an article in MedpageToday mourning the demise of the journal, which appears to have left authors of forthcoming articles in the lurch:

In Memoriam: The Sudden Demise of the AMA Journal of Ethics
— A great loss for physicians, the profession, and the public
by Matthew Wynia, MD, MPH, and Kayhan Parsi, JD, PhD

"without warning, the American Medical Association (AMA) announced that it would cease publishing AMA JoE after the December 2025 issue. Journals like AMA JoE operate a year or more in advance, so multiple authors and editors of upcoming issues for 2026-2027 were left in the lurch by this unexpected announcement. Students and trainees are now scrambling to find new homes for their articles, some of which have already undergone extensive editorial review."
 

Rivals object to SpaceX’s Starship plans in Florida—who’s interfering with whom?

The commander of the military unit responsible for running the Cape Canaveral spaceport in Florida expects SpaceX to begin launching Starship rockets there next year.

Launch companies with facilities near SpaceX’s Starship pads are not pleased. SpaceX’s two chief rivals, Blue Origin and United Launch Alliance, complained last year that SpaceX’s proposal of launching as many as 120 Starships per year from Florida’s Space Coast could force them to routinely clear personnel from their launch pads for safety reasons.

This isn’t the first time Blue Origin and ULA have tried to throw up roadblocks in front of SpaceX. The companies sought to prevent NASA from leasing a disused launch pad to SpaceX in 2013, but they lost the fight.


A telescope fist bump in the sky

Only by working together as a team can all of ESO’s 8 m telescopes become the Very Large Telescope Interferometer (VLTI) — and today’s Picture of the Week captures this teamwork perfectly. The photograph, taken by Juan Beltrán, an instrumentation technician at ESO’s Paranal Observatory in Chile, marks the beginning of a new interferometry era.

Most of the time these so-called Unit Telescopes (UTs) work as standalone telescopes, each one observing a different object. But they can also point at the same target, combining their light with interferometry to obtain the same level of detail as a huge 130 m virtual telescope. This requires measuring and correcting the effects of Earth’s atmosphere, which was previously done using bright reference stars, which are hard to find next to the object one wants to study.

As part of the GRAVITY+ upgrade, new lasers were installed in the previously unequipped UTs. These lasers create artificial stars high up in the atmosphere, near the observation target on sky, that can be used to correct atmospheric turbulence. Now that the lasers are ready, the observations are not limited to natural bright reference stars anymore, opening the whole southern sky to the VLTI. And the GRAVITY+ trial run in early November blessed us with the mesmerising view of all four lasers pointing to one target on the sky, as if they were fist bumping as a team. 

Link 

Local LLMs are how nerds now justify a big computer they don't need

It's pretty incredible that we're able to run all these awesome AI models on our own hardware now. From downscaled versions of DeepSeek to gpt-oss-20b, there are many options for many types of computers. But let's get real here: they're all vastly behind the frontier models available for rent, and thus for most developers a curiosity at best.

This doesn't take anything away from the technical accomplishment. It doesn't take anything away from the fact that small models are improving, and that maybe one day they'll indeed be good enough for developers to rely on them in their daily work.

But that day is not today.

Thus, I find it spurious to hear developers evaluate their next computer on the prospect of how well it's capable of running local models. Because they all suck! Whether one sucks a little less than the other doesn't really matter. And as soon as you discover this, you'll be back to using the rented models for the vast majority of the work you're doing.

This is actually great news! It means you really don't need a 128GB VRAM computer on your desk. Which should come as a relief now that RAM prices are skyrocketing, exactly because of AI's insatiable demand for more resources. Most developers these days can get by with very little, especially if they're running Linux.

So as an experiment, I've parked my lovely $2,000 Framework Desktop for a while. It's an incredible machine, but in the day-to-day, I've actually found I barely notice the difference compared to a $500 mini PC from Beelink (or Minisforum).

I bet you likely need way less than you think too.

*God’s Grandeur*

The world is charged with the grandeur of God.
    It will flame out, like shining from shook foil;
    It gathers to a greatness, like the ooze of oil
Crushed. Why do men then now not reck his rod?
Generations have trod, have trod, have trod;
    And all is seared with trade; bleared, smeared with toil;
    And wears man’s smudge and shares man’s smell: the soil
Is bare now, nor can foot feel, being shod.
And for all this, nature is never spent;
    There lives the dearest freshness deep down things;
And though the last lights off the black West went
    Oh, morning, at the brown brink eastward, springs —
Because the Holy Ghost over the bent
    World broods with warm breast and with ah! bright wings.

 

— Gerard Manley Hopkins

The post *God’s Grandeur* appeared first on Marginal REVOLUTION.

       


LLMs Position Themselves as More Rational Than Humans

That is the title of a new paper by Kyung-Hoon Kim. I differ from his terminology (“self-aware”), but the results are interesting nonetheless:

As Large Language Models (LLMs) grow in capability, do they develop self-awareness as an emergent behavior? And if so, can we measure it? We introduce the AI Self-Awareness Index (AISAI), a game-theoretic framework for measuring self-awareness through strategic differentiation. Using the “Guess 2/3 of Average” game, we test 28 models (OpenAI, Anthropic, Google) across 4,200 trials with three opponent framings: (A) against humans, (B) against other AI models, and (C) against AI models like you. We operationalize self-awareness as the capacity to differentiate strategic reasoning based on opponent type. Finding 1: Self-awareness emerges with model advancement. The majority of advanced models (21/28, 75%) demonstrate clear self-awareness, while older/smaller models show no differentiation. Finding 2: Self-aware models rank themselves as most rational. Among the 21 models with self-awareness, a consistent rationality hierarchy emerges: Self > Other AIs > Humans, with large AI attribution effects and moderate self-preferencing. These findings reveal that self-awareness is an emergent capability of advanced LLMs, and that self-aware models systematically perceive themselves as more rational than humans. This has implications for AI alignment, human-AI collaboration, and understanding AI beliefs about human capabilities.

Here is the Twitter version of the argument.
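For readers unfamiliar with the game, here is a minimal sketch of why “Guess 2/3 of the Average” rewards deeper strategic reasoning. This is my own illustration using the standard level-k framing, not code or numbers from the paper:

# Toy illustration of the "Guess 2/3 of the Average" game (not from the paper).
# Players pick a number in [0, 100]; whoever is closest to 2/3 of the group
# average wins. A level-0 player guesses 50; a level-k player best-responds
# to a population of level-(k-1) players.
def level_k_guess(k: int, naive_guess: float = 50.0) -> float:
    guess = naive_guess
    for _ in range(k):
        guess *= 2 / 3  # best response to opponents guessing `guess`
    return guess

for k in range(6):
    print(f"level-{k} guess: {level_k_guess(k):.2f}")

Guesses shrink toward the Nash equilibrium of 0 as reasoning deepens, so a model that guesses lower against “AI models like you” than against “humans” is implicitly rating itself as the more rational opponent.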

The post LLMs Position Themselves as More Rational Than Humans appeared first on Marginal REVOLUTION.

       


IACR Nullifies Election Because of Lost Decryption Key

The International Association of Cryptologic Research—the academic cryptography association that’s been putting on conferences like Crypto (back when “crypto” meant “cryptography”) and Eurocrypt since the 1980s—had to nullify an online election when trustee Moti Yung lost his decryption key.

For this election and in accordance with the bylaws of the IACR, the three members of the IACR 2025 Election Committee acted as independent trustees, each holding a portion of the cryptographic key material required to jointly decrypt the results. This aspect of Helios’ design ensures that no two trustees could collude to determine the outcome of an election or the contents of individual votes on their own: all trustees must provide their decryption shares.

Unfortunately, one of the three trustees has irretrievably lost their private key, an honest but unfortunate human mistake, and therefore cannot compute their decryption share. As a result, Helios is unable to complete the decryption process, and it is technically impossible for us to obtain or verify the final outcome of this election.

The group will redo the election, but this time with a 2-of-3 threshold scheme for decrypting the results, instead of requiring all three trustees.
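To illustrate why the change fixes this failure mode, here is a toy 2-of-3 threshold sketch using Shamir secret sharing over a prime field. It shows the general idea only; it is not the ElGamal-based scheme Helios actually uses, and the modulus and values are illustrative assumptions:

import secrets

P = 2**127 - 1  # a prime modulus; real systems share keys in the group used by the encryption scheme

def split_2_of_3(secret: int):
    # Encode the secret as f(x) = secret + a1*x (degree 1) and hand out f(1), f(2), f(3).
    a1 = secrets.randbelow(P)
    return [(x, (secret + a1 * x) % P) for x in (1, 2, 3)]

def reconstruct(share_a, share_b):
    # Lagrange interpolation at x = 0 using any two of the three shares.
    (x1, y1), (x2, y2) = share_a, share_b
    inv = pow(x2 - x1, -1, P)
    return (y1 * x2 * inv - y2 * x1 * inv) % P

secret = secrets.randbelow(P)
s1, s2, s3 = split_2_of_3(secret)
# Trustee 2 loses their share; the remaining two can still recover the secret.
assert reconstruct(s1, s3) == secret

With the all-three requirement, a single lost share makes the polynomial unrecoverable; with a degree-1 polynomial, any two of the three shares suffice.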

News articles.

How far can you see?


It’s official: Boeing’s next flight of Starliner will be allowed to carry cargo only

The US space agency ended months of speculation about the next flight of Boeing’s Starliner spacecraft, confirming Monday that the vehicle will carry only cargo to the International Space Station.

NASA and Boeing are now targeting no earlier than April 2026 to fly the uncrewed Starliner-1 mission, the space agency said. Launching by next April will require completion of rigorous test, certification, and mission readiness activities, NASA added in a statement.

“NASA and Boeing are continuing to rigorously test the Starliner propulsion system in preparation for two potential flights next year,” said Steve Stich, manager of NASA’s Commercial Crew Program, in a statement.


Rocket Lab chief opens up about Neutron delays, New Glenn’s success, and NASA science

The company that pioneered small launch has had a big year.

Rocket Lab broke its annual launch record with the Electron booster—17 successful missions this year, and counting—and is close to bringing its much larger Neutron rocket to the launch pad.

The company also expanded its in-space business, including playing a key role in supporting the landing of Firefly’s Blue Ghost mission on the Moon and building two small satellites just launched to Mars.


Tuesday: Case-Shiller House Prices, PPI, Pending Home Sales

Mortgage Rates Note: Mortgage rates are from MortgageNewsDaily.com and are for top tier scenarios.

Tuesday:
• At 8:30 AM ET, The Producer Price Index for September from the BLS. 

• At 9:00 AM, S&P/Case-Shiller House Price Index for September.

• Also at 9:00 AM, FHFA House Price Index for September. This was originally a GSE-only repeat sales index, but there is also an expanded index. The conforming loan limits for next year will also be announced.

• At 10:00 AM, Richmond Fed Survey of Manufacturing Activity for November. This is the last of the regional Fed manufacturing surveys for November.

• Also at 10:00 AM, Pending Home Sales Index for October.

I'm Recommending 14 New YouTube Videos

I’ve spent a long time taming the YouTube algorithm (ugh!). I had to teach it how to separate the good from the bad and ugly.

But now it works. YouTube finally delivers videos I enjoy watching—and maybe you will too. Here are 14 recent gems.

Enjoy!


Please support my work—by taking out a premium subscription (just $6 per month).



“I’m the greatest failure who ever walked the earth….”

Can you really walk up to strangers and get them to share the intimate story of their inner life? Yes you can. It happened here.

Smart pop still survives if you know where to look.

What a joy to hear a contemporary band playing live—and everybody can sing, everybody can play, and the chemistry is off the charts. That’s a brief intro to Parcels. For more, just watch the video.

He is one of the greatest film composers of the last half century—but never won an Oscar.

He hasn’t even received a single nomination. That’s a crime. But he’s beloved by fans all over the world—and I’m one of them.


The Trump Trade is Unraveling


What is Bitcoin good for? It isn’t money — that is, it isn’t a medium of exchange, something you can use to make payments. It isn’t a hedge against inflation. It isn’t a hedge against financial risks — on the contrary, the price of Bitcoin has generally moved in the same direction as the AI-related stocks driving the stock market these days, but with even greater volatility.

To the extent that Bitcoin has a use case, it is covering your financial tracks: crypto facilitates anonymous transactions that don’t leave a paper trail. Such transactions aren’t necessarily criminal, but many are.

By the way, anonymity doesn’t just enable crime by crypto users; it also enables crime against them. If you possess a Bitcoin’s key, the code that unlocks it, it’s yours, no matter who you are and how you got it. In that sense, getting your hands on a Bitcoin key is similar to getting your hands on a bag full of $100 bills.

This feature has led to a wave of abductions of major crypto investors by criminals demanding their keys. Indeed, such abductions have become so common that a major recent Bitcoin conference included a daylong “counter-kidnapping” workshop, in which participants learned, among other things, how to gnaw their way through zipties.

In addition to facilitating crime, Bitcoin has increasingly become an engine of predation. Crypto — or, worse, shares in companies that buy crypto with borrowed money — is heavily sold to naïve investors who don’t realize what they’re getting into. They do well when Bitcoin’s price is rising, but many probably don’t understand how badly they can suffer when it falls.

And crypto has fallen a lot recently. Bitcoin has actually held up better than smaller, more obscure coins, but even so it’s down roughly 25 percent since late October.

It’s possible that Bitcoin will bounce back, because it’s more than an asset, it’s a cult. When I spoke with Hasan Minhaj and Bitcoin came up, his immediate reaction to my criticism was “I don’t want to get memed, the Bitcoin boys have already come after me.” This cult status has allowed Bitcoin to recover from setbacks and scandals that would have sunk any normal investment, because true believers respond to any drop in its price by piling in more than ever. And maybe that will happen again.

But maybe not, because at this point Bitcoin is largely a Trump trade. Bitcoin’s price surged after Donald Trump won last year, and its recent plunge coincides with a series of Trump political setbacks.

Why is Bitcoin a Trump trade? Partly because Trump, whose family has in effect received massive bribes from the crypto industry, has been rewarding that investment with pro-crypto policies. Notably, Trump has signed an executive order intended to allow ordinary Americans — who, again, generally don’t know what they’re getting into — to invest money from their 401(k)s in crypto assets.

More broadly, crypto is, as I’ve suggested, increasingly a tool for financial predators, and the Trump administration is extremely predator-friendly. Ask Changpeng Zhao, founder of the cryptocurrency exchange Binance, who was found guilty of violating U.S. money-laundering laws — then pardoned by Trump.

The administration has been doing all it can to dismantle institutions, like the Consumer Financial Protection Bureau, that were created to help keep investors and markets safe after the 2008 financial crisis. Scott Bessent, the Treasury secretary, and other Trump officials and allies — including some officials at the Federal Reserve — have also been doing all they can to undermine bank supervision, which tries to limit the kind of risk-taking that brought on the 2008 crisis.

All of this is bad for small investors and bad for financial stability. But it’s good for financial schemers like the people and institutions promoting Bitcoin.

So how should we understand Bitcoin’s recent crash? Think of it as the unraveling of the Trump trade. Trump remains as determined as ever to reward the industry that made his family rich, and those around him are as determined as ever to make America safe for predators of all kinds. But Trump’s power is visibly diminishing, so the price of Bitcoin, which has in effect become a bet on Trumpism, has plunged.

Why is Trump suddenly looking weaker? Polls have given him very low marks since the spring, but his net approval has declined significantly over the past month. And while Trump just claimed that he has THE HIGHEST POLL NUMBERS OF MY “POLITICAL CAREER” — nobody knows what poll, if any, he’s referring to — most doubts about the validity of polls showing his extreme unpopularity were dispelled by blowout Democratic victories in Virginia and New Jersey on Nov. 4.

These electoral defeats have shaken the willingness of Republicans in Congress to keep showing lockstep obedience to Trump. At the same time, the growing furor over Trump’s ties to Jeffrey Epstein has rattled the MAGA base. Many political analysts, I suspect, don’t fully appreciate the extent to which many members of that base truly believed that Trump was protecting the world against Democratic pedophiles, and the degree to which they have been shaken by the growing realization that they may have gotten their heroes and villains mixed up.

Is it a stretch to link Trump’s political woes to the price of crypto? No. As Josh Marshall often emphasizes, power is unitary. A weakened Trump is less able to work his will on all fronts, including his efforts to promote crypto.

I’ll talk more about the politics/crypto linkage in future posts. For now, let me just reiterate that Bitcoin has become a Trump trade, and Bitcoin’s declining price is an indicator of Trump’s declining dominance over the G.O.P.

MUSICAL CODA

Blown to hell, crash

Claude Opus 4.5, and why evaluating new LLMs is increasingly difficult

Anthropic released Claude Opus 4.5 this morning, which they call "best model in the world for coding, agents, and computer use". This is their attempt to retake the crown for best coding model after significant challenges from OpenAI's GPT-5.1-Codex-Max and Google's Gemini 3, both released within the past week!

The core characteristics of Opus 4.5 are a 200,000 token context (same as Sonnet), 64,000 token output limit (also the same as Sonnet), and a March 2025 "reliable knowledge cutoff" (Sonnet 4.5 is January, Haiku 4.5 is February).

The pricing is a big relief: $5/million for input and $25/million for output. This is a lot cheaper than the previous Opus at $15/$75 and keeps it a little more competitive with the GPT-5.1 family ($1.25/$10) and Gemini 3 Pro ($2/$12, or $4/$18 for >200,000 tokens). For comparison, Sonnet 4.5 is $3/$15 and Haiku 4.5 is $1/$5.
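As a rough back-of-envelope comparison, here is what those per-million-token prices imply for a single hypothetical session. The token counts below are made-up assumptions for illustration, not measurements from my work:

# Per-million-token prices quoted above: (input, output) in dollars.
PRICES = {
    "Opus 4.5": (5.00, 25.00),
    "Opus 4.1": (15.00, 75.00),
    "Sonnet 4.5": (3.00, 15.00),
    "Haiku 4.5": (1.00, 5.00),
    "GPT-5.1": (1.25, 10.00),
    "Gemini 3 Pro": (2.00, 12.00),
}

input_tokens, output_tokens = 2_000_000, 300_000  # assumed coding session, illustration only

for model, (in_price, out_price) in PRICES.items():
    cost = (input_tokens / 1e6) * in_price + (output_tokens / 1e6) * out_price
    print(f"{model:12}  ${cost:.2f}")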

The Key improvements in Opus 4.5 over Opus 4.1 document has a few more interesting details:

I had access to a preview of Anthropic's new model over the weekend. I spent a bunch of time with it in Claude Code, resulting in a new alpha release of sqlite-utils that included several large-scale refactorings - Opus 4.5 was responsible for most of the work across 20 commits, 39 files changed, 2,022 additions and 1,173 deletions in a two day period. Here's the Claude Code transcript where I had it help implement one of the more complicated new features.

It's clearly an excellent new model, but I did run into a catch. My preview expired at 8pm on Sunday when I still had a few remaining issues in the milestone for the alpha. I switched back to Claude Sonnet 4.5 and... kept on working at the same pace I'd been achieving with the new model.

With hindsight, production coding like this is a less effective way of evaluating the strengths of a new model than I had expected.

I'm not saying the new model isn't an improvement on Sonnet 4.5 - but I can't say with confidence that the challenges I posed it were able to identify a meaningful difference in capabilities between the two.

This represents a growing problem for me. My favorite moments in AI are when a new model gives me the ability to do something that simply wasn't possible before. In the past these have felt a lot more obvious, but today it's often very difficult to find concrete examples that differentiate the new generation of models from their predecessors.

Google's Nano Banana Pro image generation model was notable in that its ability to render usable infographics really does represent a task at which previous models had been laughably incapable.

The frontier LLMs are a lot harder to differentiate between. Benchmarks like SWE-bench Verified show models beating each other by single digit percentage point margins, but what does that actually equate to in real-world problems that I need to solve on a daily basis?

And honestly, this is mainly on me. I've fallen behind on maintaining my own collection of tasks that are just beyond the capabilities of the frontier models. I used to have a whole bunch of these but they've fallen one-by-one and now I'm embarrassingly lacking in suitable challenges to help evaluate new models.

I frequently advise people to stash away tasks that models fail at in their notes so they can try them against newer models later on - a tip I picked up from Ethan Mollick. I need to double-down on that advice myself!

I'd love to see AI labs like Anthropic help address this challenge directly. I'd like to see new model releases accompanied by concrete examples of tasks they can solve that the previous generation of models from the same provider were unable to handle.

"Here's an example prompt which failed on Sonnet 4.5 but succeeds on Opus 4.5" would excite me a lot more than some single digit percent improvement on a benchmark with a name like MMLU or GPQA Diamond.

In the meantime, I'm just gonna have to keep on getting them to draw pelicans riding bicycles. Here's Opus 4.5 (on its default "high" effort level):

The pelican is cute and looks pretty good. The bicycle is not great - the frame is wrong and the pelican is facing backwards when the handlebars appear to be forwards. There is also something that looks a bit like an egg on the handlebars.

It did significantly better on the new more detailed prompt:

The pelican has feathers and a red pouch - a close enough version of breeding plumage. The bicycle is a much better shape.

Here's that same complex prompt against Gemini 3 Pro and against GPT-5.1-Codex-Max-xhigh.

Still susceptible to prompt injection

From the safety section of Anthropic's announcement post:

With Opus 4.5, we’ve made substantial progress in robustness against prompt injection attacks, which smuggle in deceptive instructions to fool the model into harmful behavior. Opus 4.5 is harder to trick with prompt injection than any other frontier model in the industry:

Anthropic’s chart, “Susceptibility to prompt-injection style attacks” (attack success rate in percent at k attempted queries; lower is better):

  Model                   k=1    k=10   k=100
  Gemini 3 Pro Thinking   12.5   60.7   92.0
  GPT-5.1 Thinking        12.6   58.2   87.8
  Haiku 4.5 Thinking       8.3   51.1   85.6
  Sonnet 4.5 Thinking      7.3   41.9   72.4
  Opus 4.5 Thinking        4.7   33.6   63.0

On the one hand this looks great: it's a clear improvement over previous models and the competition.

What does the chart actually tell us though? It tells us that single attempts at prompt injection still work 1/20 times, and if an attacker can try ten different attacks that success rate goes up to 1/3!

I still don't think training models not to fall for prompt injection is the way forward here. We continue to need to design our applications under the assumption that a suitably motivated attacker will be able to find a way to trick the models.

Tags: prompt-injection, generative-ai, llms, anthropic, claude, evals, llm-pricing, pelican-riding-a-bicycle, llm-release

sqlite-utils 3.39

sqlite-utils 3.39

I got a report of a bug in sqlite-utils concerning plugin installation - if you installed the package using uv tool install, further attempts to install plugins with sqlite-utils install X would fail, because uv doesn't bundle pip by default. I had the same bug with Datasette a while ago; it turns out I forgot to apply the fix to sqlite-utils.

Since I was pushing a new dot-release I decided to integrate some of the non-breaking changes from the 4.0 alpha I released last night.

I tried to have Claude Code do the backporting for me:

create a new branch called 3.x starting with the 3.38 tag, then consult https://github.com/simonw/sqlite-utils/issues/688 and cherry-pick the commits it lists in the second comment, then review each of the links in the first comment and cherry-pick those as well. After each cherry-pick run the command "just test" to confirm the tests pass and fix them if they don't. Look through the commit history on main since the 3.38 tag to help you with this task.

This worked reasonably well - here's the terminal transcript. It successfully argued me out of two of the larger changes which would have added more complexity than I want in a small dot-release like this.

I still had to do a bunch of manual work to get everything up to scratch, which I carried out in this PR - including adding comments there and then telling Claude Code:

Apply changes from the review on this PR https://github.com/simonw/sqlite-utils/pull/689

Here's the transcript from that.

The release is now out with the following release notes:

  • Fixed a bug with sqlite-utils install when the tool had been installed using uv. (#687)
  • The --functions argument now optionally accepts a path to a Python file as an alternative to a string full of code, and can be specified multiple times - see Defining custom SQL functions. (#659)
  • sqlite-utils now requires Python 3.10 or higher.

Tags: projects, sqlite, sqlite-utils, annotated-release-notes, uv, coding-agents, claude-code

sqlite-utils 4.0a1 has several (minor) backwards incompatible changes

I released a new alpha version of sqlite-utils last night - the 128th release of that package since I started building it back in 2018.

sqlite-utils is two things in one package: a Python library for conveniently creating and manipulating SQLite databases and a CLI tool for working with them in the terminal. Almost every feature provided by the package is available via both of those surfaces.

This is hopefully the last alpha before a 4.0 stable release. I use semantic versioning for this library, so the 4.0 version number indicates that there are backward incompatible changes that may affect code written against the 3.x line.

These changes are mostly very minor: I don't want to break any existing code if I can avoid it. I made it all the way to version 3.38 before I had to ship a major release and I'm sad I couldn't push that even further!

Here are the annotated release notes for 4.0a1.

  • Breaking change: The db.table(table_name) method now only works with tables. To access a SQL view use db.view(view_name) instead. (#657)

This change is for type hint enthusiasts. The Python library used to encourage accessing both SQL tables and SQL views through the db["name_of_table_or_view"] syntactic sugar - but tables and views have different interfaces since there's no way to handle a .insert(row) on a SQLite view. If you want clean type hints for your code you can now use the db.table(table_name) and db.view(view_name) methods instead.
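A minimal sketch of what that looks like in practice - the database and names here are made up, and the db.view() accessor assumes the 4.0 alpha:

import sqlite_utils

db = sqlite_utils.Database("example.db")

# Tables keep the full interface, including .insert():
articles = db.table("articles")
articles.insert({"id": 1, "title": "Hello"}, pk="id")

db.execute("create view if not exists article_titles as select title from articles")

# Views are read-only, so they now get their own accessor:
titles = db.view("article_titles")
print(list(titles.rows))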

  • The table.insert_all() and table.upsert_all() methods can now accept an iterator of lists or tuples as an alternative to dictionaries. The first item should be a list/tuple of column names. See Inserting data from a list or tuple iterator for details. (#672)

A new feature, not a breaking change. I realized that supporting a stream of lists or tuples as an option for populating large tables would be a neat optimization over always dealing with dictionaries each of which duplicated the column names.

I had the idea for this one while walking the dog and built the first prototype by prompting Claude Code for web on my phone. Here's the prompt I used and the prototype report it created, which included a benchmark estimating how much of a performance boost could be had for different sizes of tables.
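Here is a minimal sketch of the new form, with made-up data (this requires the 4.0 alpha). The first item supplies the column names and the remaining items supply values:

import sqlite_utils

db = sqlite_utils.Database(memory=True)

rows = iter([
    ("id", "name", "score"),  # header row: column names
    (1, "alpha", 0.9),
    (2, "beta", 0.7),
])
db["results"].insert_all(rows, pk="id")
print(list(db["results"].rows))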

  • Breaking change: The default floating point column type has been changed from FLOAT to REAL, which is the correct SQLite type for floating point values. This affects auto-detected columns when inserting data. (#645)

I was horrified to discover a while ago that I'd been creating SQLite columns called FLOAT but the correct type to use was REAL! This change fixes that. Previously the fix was to ask for tables to be created in strict mode.
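A quick way to see the difference, with made-up data - the REAL column type below assumes the 4.0 alpha; 3.x would have produced FLOAT:

import sqlite_utils

db = sqlite_utils.Database(memory=True)
db["measurements"].insert({"id": 1, "reading": 3.14}, pk="id")
# The auto-detected floating point column should now be typed REAL rather than FLOAT.
print(db["measurements"].schema)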

  • Now uses pyproject.toml in place of setup.py for packaging. (#675)

As part of this I also figured out recipes for using uv as a development environment for the package, which are now baked into the Justfile.

  • Tables in the Python API now do a much better job of remembering the primary key and other schema details from when they were first created. (#655)

This one is best explained in the issue.

  • Breaking change: The table.convert() and sqlite-utils convert mechanisms no longer skip values that evaluate to False. Previously the --skip-false option was needed, this has been removed. (#542)

Another change which I would have made earlier but, since it introduces a minor behavior change to an existing feature, I reserved it for the 4.0 release.

  • Breaking change: Tables created by this library now wrap table and column names in "double-quotes" in the schema. Previously they would use [square-braces]. (#677)

Back in 2018 when I started this project I was new to working in-depth with SQLite and incorrectly concluded that the correct way to create tables and columns named after reserved words was like this:

create table [my table] (
  [id] integer primary key,
  [key] text
)

That turned out to be a non-standard SQL syntax which the SQLite documentation describes like this:

A keyword enclosed in square brackets is an identifier. This is not standard SQL. This quoting mechanism is used by MS Access and SQL Server and is included in SQLite for compatibility.

Unfortunately I baked it into the library early on and it's been polluting the world with weirdly escaped table and column names ever since!

I've finally fixed that, with the help of Claude Code which took on the mind-numbing task of updating hundreds of existing tests that asserted against the generated schemas.

The above example table schema now looks like this:

create table "my table" (
  "id" integer primary key,
  "key" text
)

This may seem like a pretty small change but I expect it to cause a fair amount of downstream pain purely in terms of updating tests that work against tables created by sqlite-utils!

  • The --functions CLI argument now accepts a path to a Python file in addition to accepting a string full of Python code. It can also now be specified multiple times. (#659)

I made this change first in LLM and decided to bring it to sqlite-utils for consistency between the two tools.
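A sketch of what that might look like - the function and the exact invocation in the comment are illustrative assumptions rather than copied from the documentation:

# functions.py - each top-level function becomes a custom SQL function.
# Assumed invocation, per the release note above:
#   sqlite-utils query data.db "select slugify(title) from articles" --functions functions.py
def slugify(value: str) -> str:
    return value.lower().replace(" ", "-")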

  • Breaking change: Type detection is now the default behavior for the insert and upsert CLI commands when importing CSV or TSV data. Previously all columns were treated as TEXT unless the --detect-types flag was passed. Use the new --no-detect-types flag to restore the old behavior. The SQLITE_UTILS_DETECT_TYPES environment variable has been removed. (#679)

One last minor ugliness that I waited for a major version bump to fix.

Update: Now that the embargo has lifted I can reveal that a substantial amount of the work on this release was performed using a preview version of Anthropic's new Claude Opus 4.5 model. Here's the Claude Code transcript for the work to implement the ability to use an iterator over lists instead of dictionaries for bulk insert and upsert operations.

Tags: projects, sqlite, sqlite-utils, annotated-release-notes, ai-assisted-programming, coding-agents, claude-code

Links 11/24/25

Links for you. Science:

Seagulls more likely to leave you alone if you shout at them
2 new malaria treatments show promise as drug resistance grows
Infant Formula Company Tied to Botulism Outbreak Had Known Problems
N.I.H. Worker Who Criticized Trump Health Policies Says She Is on Administrative Leave
The trend of unproven peptides is spreading through influencers and RFK Jr. allies
First, the frogs died. Then people got sick.

Other:

‘God is an anti-vaxxer’: Inside the conference celebrating RFK Jr.’s rise
Trump’s new plan for Obamacare leaves Americans to fend for themselves
Trump’s Latest Milestone: Forcing 211,000 People Out Of Their Federal Jobs
Rep. Robert Garcia shows Democrats how to fight MAGA
Google Has Chosen a Side in Trump’s Mass Deportation Effort
Words & Phrases We Could Do Without
“They Took My Mom. Then They Took My Dad and Uncle,” Oxnard ICE Raids Through The Eyes of 16-Year-Old Boy
Marion County agrees to pay out $3M for newspaper raid, expresses regret
First death reported from meat allergy caused by tick bite
The ‘Groyperfication’ of the G.O.P.
DOJ Issued Seizure Warrant to Starlink Over Satellite Internet Systems Used at Scam Compound
Power Companies Are Using AI To Build Nuclear Power Plants
The Heritage Foundation, Groypers, and the Narcissism of Small Differences
The Trump White House Is Trying to Hide Its Judicial Nominees From You
Kash Patel, the FBI’s Agent of Chaos
Six Ways Zohran Mamdani Can Make New York City Affordable Again
Letter from Chicago broadcast veterans: Federal immigration agents have traumatized us and eroded our rights
Native American nearly deported after Polk County jail issues ICE detainer by mistake
To force deserters back to war, Russia’s military is torturing their families
Safe borders do not require terror tactics from ICE (the Jesuits are angry)
Firm Tied to Kristi Noem Secretly Got Money From $220 Million DHS Ad Contracts
Megyn Kelly Suddenly Finds Pedophilia Very Hard to Define
Only 2.6% on list of 614 ‘Operation Midway Blitz’ arrestees had criminal histories, DOJ records show
The Problem is Epistemic. The Solution is Not.
The Epstein files sinkhole widening under Trump
Don’t Build Housing, Just Deport People: The Nativist Right Recoils at Trump’s 50-Year Mortgage Proposal
Anti-Feminist Media Is Trying to Make Young Women Turn on Birth Control
“I Lost Everything”: Venezuelans Were Rounded Up in a Dramatic Midnight Raid but Never Charged With a Crime
The Fitness Influencers Who Tried to Make Me Like Trump
The value of close reading: Larry Summers edition

Treating Customers Like Criminals: The San Francisco Safeway Edition

In D.C., several of the Safeway grocery stores have been given nicknames. My local Safeway is, unfortunately, called the Soviet Safeway because, well, it kind of sucks. But San Francisco is really upping the Soviet Safeway game (boldface mine):

At the Safeway on San Francisco’s King Street, you now can’t leave the store unless you buy something. The Mission Bay grocery store recently installed new anti-theft measures at the entrance and exit.

New gates at the entrance automatically swing open when customers walk in, but they’re set to trigger an alarm if someone attempts to back out. And if you walk into Safeway and change your mind about grocery shopping, you might find yourself trapped: Another gate that only opens if you scan your receipt blocks the store’s sole exit...

While I watched some customers struggle with the new technology, my receipt scanned immediately. The glass doors slid open, and I was free. But if, like this person on the San Francisco subreddit recounted, I hadn’t bought anything, my only means of exit would have been to beg the security guard to let me out

Daniel Conway, the vice president of government relations for the California Grocers Association, told SFGATE in 2023 that these measures were part of a growing national trend of “defensive retailing.”

“While I can’t speak for any one company, you see the trajectory of commerce over the last 10, 20, 5,000 years, it’s all about reducing friction for customers,” Conway said. “But now you’re seeing a countervailing trend: armed guards and Tide Pods locked up.”

It’s kind of funny how you literally need to present your papers to leave the grocery store. That aside, I fail to see how this works as a business. At some point people are going to stop going to the stores that treat everyone like criminals.

NASA, Boeing pivot Starliner-1 mission from 4-person astronaut flight to cargo-only

The Boeing Starliner spacecraft is lifted at the Vertical Integration Facility at Space Launch Complex-41 at Cape Canaveral Space Force Station in Florida on Tuesday, April 16, 2024. Image: NASA/Kim Shiflett

In its latest shakeup to the Commercial Crew Program, NASA announced on Monday it has reduced the number of missions Boeing is required to fly to the International Space Station and changed the next flight from a crew mission to a cargo mission.

The original contract NASA awarded to Boeing and SpaceX called for each to fly an uncrewed demonstration flight to the ISS, followed by a crewed demo mission and then six regular crew rotation missions.

Both companies launched their uncrewed demo flights in 2019, but their trajectories split dramatically after that. SpaceX was able to proceed to its crewed demo mission with its Dragon spacecraft, named Demo-2, in May 2020 and has so far flown 12 NASA crew missions to the orbiting outpost.

Meanwhile Boeing needed to repeat its uncrewed flight in 2022, dubbed Orbital Flight Test 2 (OFT-2), because the 2019 Orbital Flight Test ran into issues that prevented the spacecraft from reaching the space station. The 2024 Crew Flight Test (CFT) also encountered multiple issues, mainly associated with the Starliner’s thrusters, which ultimately resulted in NASA ordering the capsule to return without a crew in September and the CFT astronauts coming home in a SpaceX Dragon capsule.

The four members of Crew-9 pose for a picture before entering the Dragon spacecraft for their departure. From left to right: NASA astronauts, Butch Wilmore, Nick Hague, Roscosmos cosmonaut Aleksandr Gorbunov and NASA astronaut Suni Williams. Image: NASA via livestream

Since the conclusion of the Starliner CFT mission, NASA and Boeing have debated whether the next flight of Starliner would carry astronauts or not, with each public statement from NASA casting increasing doubt on a crewed flight.

Monday’s announcement made it official that the mission, dubbed Starliner-1, would be a cargo-only flight to the ISS, scheduled to launch no earlier than April 2026.

“NASA and Boeing are continuing to rigorously test the Starliner propulsion system in preparation for two potential flights next year,” said Steve Stich, manager of NASA’s Commercial Crew Program in a statement. “This modification allows NASA and Boeing to focus on safely certifying the system in 2026, execute Starliner’s first crew rotation when ready, and align our ongoing flight planning for future Starliner missions based on station’s operational needs through 2030.”

NASA and Boeing also agreed to reduce the number of flights NASA is obligated to buy from Boeing from six down to four “with the remaining two available as options.” And with the decision for Starliner-1 to be cargo only, that means NASA may only have three flights with Starliner that will carry its astronauts to the ISS.

If Starliner-1 is a nominal flight, it opens the door for Starliner-2 to become Boeing’s first operational mission to the space station with crew onboard.

“The next commercial flight to the International Space Station without a specific provider assigned is targeted no earlier than October 2026. This flight will handover with NASA’s SpaceX Crew-12 mission, which is targeted to launch in early 2026,” a NASA spokesperson said.

What about the astronauts?

The next crew of a Boeing CST-100 Starliner spacecraft is now somewhat of a mystery, at least from a public perspective. Scott Tingle, the NASA astronaut who was named Starliner-1 commander in September 2022, was named this month as the newest chief of NASA’s Astronaut Office.

When Tingle was announced, the agency also said astronaut Mike Fincke would be the Starliner-1 pilot. However, he and Japan Aerospace Exploration Agency (JAXA) astronaut Kimiya Yui, another astronaut assigned to Starliner-1, were both tasked to fly as members of the SpaceX Crew-11 mission, which is currently in progress.

Boeing’s CST-100 Starliner spacecraft rolls out from the Commercial Crew and Cargo Processing Facility (C3PF) ahead of the Starliner Crew Flight Test Campaign. Pictured left to right: Canadian Space Agency (CSA) astronaut Joshua Kutryk, NASA astronauts Sunita “Suni” Williams, Mike Fincke, Barry “Butch” Wilmore, Scott Tingle and Japan Aerospace Exploration Agency (JAXA) astronaut Kimiya Yui. Image: Boeing

Spaceflight Now reached out to the Canadian Space Agency to learn more about the fate of its astronaut, Joshua Kutryk, who was assigned to fly the Starliner-1 mission as a mission specialist. We’re waiting to hear back.

Eric Berger, Ars Technica’s senior space reporter, reported last week that NASA astronaut Luke Delaney was also in line to train for the Starliner-1 mission at one point, but has been reassigned to the SpaceX Crew-13 mission.

There are some astronauts who have trained to fly on Starliner and were reassigned to fly on SpaceX’s Dragon spacecraft who could conceivably be assigned to the Starliner-2 flight, assuming all goes well with this next cargo mission.

Nicole Mann was one of the original members of the Starliner Crew Flight Test mission before being reassigned to SpaceX Crew-5. She was picked alongside NASA astronaut Eric Boe, who withdrew for medical reasons at the time.

NASA’s Boeing Crew Flight Test astronauts Butch Wilmore and Suni Williams inside the vestibule between the forward port on the International Space Station’s Harmony module and Boeing’s Starliner spacecraft. Image: NASA.

The only other active astronaut who has been publicly confirmed to have trained on the Starliner systems is CFT Pilot Sunita ‘Suni’ Williams. However, given that Starliner-2 would be her fourth mission after already accumulating more than 600 days in space, she may run up against radiation limits for NASA astronauts.

Below is the list of astronauts who have retired from active service and who trained to fly on Starliner (either for CFT or Starliner-1):

  • Josh Cassada
  • Jeanette Epps
  • Chris Ferguson
  • Koichi Wakata (JAXA)
  • Barry ‘Butch’ Wilmore

Monday 24 November 1662

Sir J. Minnes, Sir W. Batten, and I, going forth toward White Hall, we hear that the King and Duke are come this morning to the Tower to see the Dunkirk money. So we by coach to them, and there went up and down all the magazines with them; but methought it was but poor discourse and frothy that the King’s companions (young Killigrew among the rest) about the codpieces of some of the men in armour there to be seen, had with him. We saw none of the money, but Mr. Slingsby did show the King, and I did see, the stamps of the new money that is now to be made by Blondeau’s fashion, which are very neat, and like the King. Thence the King to Woolwich, though a very cold day; and the Duke to White Hall, commanding us to come after him, which we did by coach; and in his closett, my Lord Sandwich being there, did discourse with us about getting some of this money to pay off the Fleets, and other matters; and then away hence, and, it being almost dinner time, I to my Lord Crew’s, and dined with him, and had very good discourse, and he seemed to be much pleased with my visits. Thence to Mr. Phillips, and so to the Temple, where met my cozen Roger Pepys and his brother, Dr. John, as my arbitrators against Mr. Cole and Mr. John Bernard for my uncle Thomas, and we two with them by appointment. They began very high in their demands, and my friends, partly being not so well acquainted with the will, and partly, I doubt, not being so good wits as they, for which I blame my choosing of relations (who besides that are equally engaged to stand for them as me), I was much troubled thereat, and taking occasion to deny without my father’s consent to bind myself in a bond of 2000l. to stand to their award, I broke off the business for the present till I hear and consider further, and so thence by coach (my cozen, Thomas Pepys, being in another chamber busy all the while, going along with me) homeward, and I set him down by the way; but, Lord! how he did endeavour to find out a ninepence to clubb with me for the coach, and for want was forced to give me a shilling, and how he still cries “Gad!” and talks of Popery coming in, as all the Fanatiques do, of which I was ashamed. So home, finding my poor wife very busy putting things in order, and so to bed, my mind being very much troubled, and could hardly sleep all night, thinking how things are like to go with us about Brampton, and blaming myself for living so high as I do when for ought I know my father and mother may come to live upon my hands when all is done.

Read the annotations

Germany outlines military space priorities in new strategy

German spaceplane

A new German military space policy outlines the ambitions the country has fueled by tens of billions of euros of new spending over the next several years.

The post Germany outlines military space priorities in new strategy appeared first on SpaceNews.

Amazon unveils production-ready gigabit-class Leo Ultra broadband terminal

Amazon has unveiled the final production version of Leo Ultra, the company’s highest-performing enterprise terminal for the satellite broadband constellation it aims to bring into service next year.

The post Amazon unveils production-ready gigabit-class Leo Ultra broadband terminal appeared first on SpaceNews.

Star Wars redux: the false promise of space-based missile defense

Star Wars is back in vogue with President Trump’s executive order to establish the “Golden Dome” missile defense shield. It will feature an ambitious space-based boost-phase interceptor program in addition to terrestrial systems. While admittedly the holy grail of defense against ballistic missiles, the obstacles that plagued its discontinued predecessor, “Brilliant Pebbles,” under the Strategic […]

The post Star Wars redux: the false promise of space-based missile defense appeared first on SpaceNews.

China launches TJS-21 towards Molniya orbit, lofts trio of Shijian-30 spacecraft

China conducted a pair of launches last week, adding new spacecraft to its opaque TJS and Shijian satellite series.

The post China launches TJS-21 towards Molniya orbit, lofts trio of Shijian-30 spacecraft appeared first on SpaceNews.

Science fiction and innovation in space

Space Minds Podcast - Science Fiction and Innovation in Space panel

In this episode of Space Minds, Senior Staff Writer Jeff Foust moderates a panel at the Johns Hopkins University Bloomberg Center, the next installment of the Center’s Discovery Series.

The post Science fiction and innovation in space appeared first on SpaceNews.

Florida Space Coast tops 100 launches as Space Force pushes for range upgrades

Col. Brian Chatman: The volume is here to stay. The next step is making sure the infrastructure can keep up

Bankruptcy Filings Increase 10.6 Percent

From the U.S. Courts: Bankruptcy Filings Increase 10.6 Percent
Personal and business bankruptcy filings increased 10.6 percent in the twelve-month period ending Sept. 30, 2025, compared with the previous year.

According to statistics released by the Administrative Office of the U.S. Courts, annual bankruptcy filings totaled 557,376 in the year ending September 2025, compared with 504,112 cases in the previous year.

Business filings rose 5.6 percent, from 22,762 to 24,039 in the year ending Sept. 30, 2025. Non-business bankruptcy filings increased 10.8 percent to 533,337, compared with 481,350 in the previous year.
Still fairly low.
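
For anyone who wants to sanity-check those year-over-year figures, here is a minimal sketch in Swift (plain arithmetic on the counts quoted above, not anything from the Courts’ own reporting tools):

    // Percentage changes implied by the filing counts in the release above.
    let totals      = (prior: 504_112.0, current: 557_376.0)
    let business    = (prior:  22_762.0, current:  24_039.0)
    let nonBusiness = (prior: 481_350.0, current: 533_337.0)

    func pctChange(_ pair: (prior: Double, current: Double)) -> Double {
        (pair.current - pair.prior) / pair.prior * 100
    }

    print(pctChange(totals))      // ~10.6 percent
    print(pctChange(business))    // ~5.6 percent
    print(pctChange(nonBusiness)) // ~10.8 percent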

NASA to fly only cargo on next Starliner mission under modified contract

Starliner undocking

NASA has revised its commercial crew contract with Boeing, reducing the number of Starliner missions to four, the first of which will carry only cargo.

Chaos, Confrontation and Consequences—Get Ready for Year Two

I mentioned earlier this month that we had this panel at our 25th anniversary event that I simply loved, an oral history of TPM. We published the audio of the panel as last week’s installment of the podcast. I have my own reasons for enjoying it, but I think you will too. In any case, one thing I was reminded of in listening to the discussion is that in recent years I’ve shifted toward analysis and away from my own reporting. Not as an absolute, of course. And in the spring I was reporting on a lot of stuff at once. But certainly over this year, I’ve written a lot of big-picture looks at what I think is happening in the country, what the Trump administration is trying to do, what people can do and are doing to resist those efforts, what the big global story is. Listening to the panel discussion made me a bit hungry to do more of the thread-collecting and yanking of nitty-gritty reporting, the grabbing on to a story and getting everything out of it, finding and introducing the key characters, finding the arc of their story.

This is all very meta and internal to myself at some level. I wouldn’t normally write a post about it. But I note it here because it connects to something more general, more grounded in the news moment. I’ve written a number of pieces in November about how Trump looks increasingly weak. He doesn’t look like the strong horse anymore. And that perception is creating a penumbra of weakness and disorder around him. People don’t fear him as much. Some are rejecting his demands. Others are using that lassitude or vacuum at the hub of the MAGA universe to build their own fiefdoms and take over policy.

Take Kristi Noem and her official friend, assistant and advisor Corey Lewandowski at DHS. I certainly don’t think Trump is against the stuff they’re currently doing. Stephen Miller certainly makes sure DHS and ICE are staying on track with his goals. But my sense is that as the reins slacken from the center, they’re a bit more focused on building up their own operation, their own ownership. When centrifugal pressures become dominant in a regime, fief-building becomes the order of the day.

Or consider this bizarre chain of events over the weekend in which the administration couldn’t get its story straight or perhaps even find out itself whether the “peace plan” it had been bullying Ukraine into signing was its plan or Russia’s — seemingly couldn’t figure out who even authored it. This kind of incompetent nonsense is hardly new for the Trump world. It’s par for the course. But where was Trump in this confusion? He cares about the Russia portfolio a lot. Like a whole lot. Where was he?

Where this comes full circle is that a lot of the kinds of corruption stories, crazy bad acts stories and more that we at TPM and I myself have always been attracted to like magnets have, in a way, not mattered for most of this year. If a bribe is made in the forest and there’s no DOJ, did the bribe really happen? There’s this weird dynamic in news media. The outline of a public corruption story — the first clues, the hunt, the putting the pieces together — presumes eventual investigations and indictments. You may or may not get there. But that you might is the nature of the arc and the interest. Did he or she do it? And if they did, is there enough evidence for them to get in trouble? If consequences are ruled out in advance, it’s like a detective thriller where the detective and the evil mastermind get together on page 20 and have dinner and decide to call it a day. The tension and possibility vanish. It’s still important. But it’s more like a nature documentary, how the lion eats the gazelle.

But this is beginning to change. It’s certainly not that the DOJ is springing back to life. Pam Bondi is as corrupt and subservient as ever. But as the center yields we’re seeing more fighting between the fiefdoms and that seems to be changing the equation at least to a degree. This occurred to me when I saw the investigation into Ed Martin and Bill Pulte out of a U.S. Attorney’s office in Maryland. In that case, it doesn’t appear that Martin and Pulte are in trouble — if they are — because they improperly used their positions to do retribution operations against Trump-enemy elected officials. They’re being investigated because their methods were so clownish, unprofessional and off the rails that they’ve endangered those retribution operations. That’s quite different. But it gives you a sense of how the cracks in the White House juggernaut, the lassitude at the center, or perhaps the rest of the country finally getting a chance to catch up, is somewhat changing the equation.

There’s so much corruption afoot in the federal government. As Trump no longer looks strong enough or perhaps even interested enough to protect everyone, the dynamic is beginning to change. There’s lots of sleuthing to do.

Surprise, Surprise

This story is a few days old. But I only came across it yesterday. Meet Austin Smith. He’s a former state legislator in Arizona, member of the Arizona Freedom Caucus. Or former member. He was also strategic director of Turning Point Action, Turning Point’s political arm. (Yes, what other arm would there be?) He was also a hardcore “vote fraud” hustler. And now rather predictably he’s pled guilty to attempted election fraud. Yes, surprise, surprise. In fact, he’s from Surprise, Arizona. No really. You can read the story here.

★ Exploring, in Detail, Apple’s Compliance With the EU’s DMA Mandate Regarding Apple Watch, Third-Party Accessories, and the Syncing of Saved Wi-Fi Networks From iPhones to Which They’re Paired

There have been several new features that have been delayed in the EU while Apple tried to make them compliant with the DMA. iPhone Mirroring debuted over a year ago with iOS 18 and MacOS 15 Sequoia, but still remains unavailable today in the EU. Apple Intelligence was delayed in the EU until iOS 18.4 in April, but was available to most of the world in 18.1 last October. And, both most recently and briefly, the live translation feature for AirPods Pro 3, AirPods Pro 2, and AirPods 4, which debuted outside the EU with the launch of iOS 26.0 in September, will only become available in the EU next month, with the launch of iOS 26.2.

But now comes word of the first feature that Apple is limiting or removing in an existing product to comply with the DMA: Wi-Fi network sync between iPhone and Apple Watch, which is poised to change in the EU next month, with the 26.2 releases of iOS and WatchOS. The news was broken by Nicolas Lellouche, reporting for the French-language site Numerama. I’m quoting here from Safari’s English translation of his original report:

Apple has been warning for several months that it could one day, if it deems it necessary, disable functions in the European Union to “protect its users”. This day could arrive in December, with the iOS 26.2 update.

On November 4, Apple announced to Numerama that it had made the decision to disable Wi-Fi synchronization between an iPhone and an Apple Watch in Europe so as not to have to comply with the European Commission’s request, which wants to force it by the end of 2025 to open the iPhone’s Wi-Fi to third-party accessories. This announcement follows the opening of the AirPods Live Translation function in Europe, with a new API to allow competitors to use the microphones and speakers of AirPods and iPhone simultaneously. [...]

Apple indicates that the European Commission is asking it to replicate the link between an iPhone and an Apple Watch, but with third-party products. Apple, after thinking long about how to implement this function, finally decided to reject the European request. Since Europe requires that third-party products be treated like the Apple Watch, then Apple disables the function on Apple Watch. This allows it to comply with the DMA.

Lellouche’s report at Numerama broke this story (the reports at MacRumors and 9to5Mac are both based on Numerama’s), but the above is not an accurate summary of what Apple is doing with iOS 26.2.[1] Apple is complying with the DMA, and they’re not disabling Wi-Fi network synchronization between an iPhone and a paired Apple Watch. What Apple is doing, in order to comply with the DMA, is changing how Wi-Fi networks sync with Apple Watch (in the EU), and offering new APIs in the EU for third-party paired devices to put them on equal (or near-equal?) footing with Apple Watch (in the EU).

This change should be relatively limited. Honestly, I don’t think many Apple Watch users in the EU will even notice. But it is at least mildly annoying, and the relatively minor, very specific nature of this particular DMA mandate makes it a telling example of the European Commission’s overreach.

Currently, when you pair a new Apple Watch with an iPhone, iOS transfers to WatchOS the iPhone’s entire list of saved Wi-Fi networks and their passwords — directly, device-to-device. As iOS learns of new networks that the user joins from their iPhone, that information continues to be shared with any Apple Watches paired to that iPhone. The utility of this is that if you’re wearing your Apple Watch, but don’t have your iPhone nearby, your watch will join an available saved Wi-Fi network at your location. Let’s say you go for a run or walk, with only your Apple Watch, and you stop at a cafe for a beverage. If you’ve ever joined the Wi-Fi network at that cafe from your iPhone (or iPad or Mac, assuming you sync your Apple Keychain via iCloud), your Apple Watch will join that network automatically. It should, and in my personal experience does, just work.

The EU mandate to Apple is not that Apple must grant to third-party devices and their iOS companion applications this same functionality as it stands today — that is to say, access to the entire history of the iPhone’s known Wi-Fi networks. The EU mandate is that Apple must grant to third-party devices the same level of access to Wi-Fi network information that Apple Watch has. Apple is complying with this mandate in two ways: (a) by changing how much Wi-Fi network information an Apple Watch gets from the iPhone to which it is paired; and (b) creating a new framework in iOS 26.2 (gated by a new entitlement), Wi-Fi Infrastructure, that provides a set of public APIs, available only to apps in the EU, to (per the framework’s description) “share Wi-Fi network credentials securely between devices and connected accessories.”

The change for Apple Watch in the EU is that starting with iOS 26.2, when a new (or reset) Apple Watch is set up, the Apple Watch will no longer have the user’s list of saved Wi-Fi networks automatically synced from their iPhone. Only future networks will be synced — the same level of access that the new Wi-Fi Infrastructure framework is making available to third-party accessories.

Under the new rules for Apple Watch in the EU, an existing (that is to say, already configured) watch that is upgraded to WatchOS 26.2 will still remember all Wi-Fi networks it already knew about. But a new Apple Watch will only be able to automatically connect to Wi-Fi networks that its associated iPhone saves after the Apple Watch was set up and paired. So when an EU Apple Watch owner with a new watch visits a known location, and doesn’t have their iPhone with them, the watch won’t be able to join that location’s Wi-Fi automatically, unless the paired iPhone has connected to and saved that network after the watch was paired.
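
To make the rule concrete, here is a minimal sketch in Swift (emphatically not Apple’s actual API, just a hypothetical model of the behavior described above) of what a newly paired watch in the EU would and wouldn’t receive:

    import Foundation

    // Hypothetical model of the EU sync rule described above; the type and
    // function names are illustrative, not Apple's Wi-Fi Infrastructure API.
    struct SavedNetwork {
        let ssid: String
        let savedAt: Date   // when the iPhone saved this network
    }

    func networksSyncedToWatch(saved: [SavedNetwork],
                               watchPairedAt: Date,
                               inEU: Bool) -> [SavedNetwork] {
        // Outside the EU (and everywhere before iOS 26.2): the full saved-network history syncs.
        guard inEU else { return saved }
        // In the EU, from iOS 26.2 on: only networks the iPhone saves after pairing sync over.
        return saved.filter { $0.savedAt > watchPairedAt }
    }

Per the description above, that post-pairing-only filter is also the level of access the new Wi-Fi Infrastructure APIs expose to third-party accessories in the EU.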

With iOS 26.2, the behavior for users outside the EU will remain unchanged from iOS 26.1 and prior — both for Apple Watch and for third-party accessories.

A user’s Wi-Fi history can be used to glean significant information about them. Who they know (other homes’ networks), where they’ve been (medical providers, restaurants, airports), and more. Apple’s new policy for Apple Watch and third-party devices is DMA-compliant and prevents the sharing of historical networks, but with the sharing of future networks as the associated iPhone joins them, there’s still a risk here of third-party companies doing things with the user’s Wi-Fi network information that the user doesn’t understand, or want (but doesn’t realize they’ve consented to).

One way to look at Apple’s options for complying with this particular DMA mandate is by considering the extremes. On the one extreme, Apple could have just granted third-party peripherals in the EU the exact same access to users’ iPhone Wi-Fi network history that Apple Watch has gotten until now (and will continue to get outside the EU). On the other extreme, Apple could have cut off Wi-Fi network syncing to the Apple Watch altogether, requiring users to connect to each Wi-Fi network manually, using the Watch itself or the Apple Watch app on iPhone. Instead, Apple chose a middle ground — limiting Wi-Fi network history sync to the Apple Watch in the EU in ways that it isn’t limited anywhere else in the world, but granting third-party accessories in the EU access to these new Wi-Fi Infrastructure APIs that aren’t available outside the EU.

Critics might argue that while this middle ground is technically compliant with the DMA, it’s not compliant with the intention of the DMA, which would be for the Apple Watch not to lose any functionality in the EU, and for Apple to provide APIs to allow third-party devices all of the Wi-Fi syncing features currently available to Apple Watch. Apple would argue, and I agree, that the European Commission’s intentions are incoherent in this regard. The EC insists that Apple should protect users’ privacy and security, while also insisting that Apple grant access to third-party apps and devices that can potentially compromise users’ privacy and security.

There’s a reason why Apple isn’t offering the new Wi-Fi Infrastructure framework outside the EU, and that’s because they don’t believe it’s a good idea to grant any access at all to your saved Wi-Fi networks to third-party apps and devices. Especially without being able to specify, let alone enforce, a policy that Wi-Fi network information should be treated the way Apple treats it — remaining exclusively on device.

The skeptical take on Apple’s motivations in this situation is that Apple is spitefully removing functionality from Apple Watch rather than offering new APIs to provide third-party devices with the same functionality that Apple Watch currently has, and that Apple’s intention here is, somehow, primarily about trying to drive anti-DMA sentiment amongst its EU users. This is, in fact, the skeptical take on every single aspect of Apple’s compliance with the DMA: spiteful “malicious compliance” that, somehow, is intended to engender grassroots opposition to the DMA amongst Apple customers in the EU. I don’t think that’s an accurate take overall, but in this particular case with Apple Watch and Wi-Fi network sync, it’s almost silly.

Part of what makes this particular situation clarifying is that it’s so specific. It’s not about allowing third-party devices and their corresponding iOS apps to do everything that Apple Watches, and the Apple Watch iOS companion app, can do. It’s very specifically about the sharing of known Wi-Fi networks. (There will, surely, be other such situations to come regarding other features, for other Apple devices.) And as I described above, very few Apple Watch owners in the EU are likely to notice the change. How many Apple Watch users today realize that their watch automatically connects to known Wi-Fi networks when their iPhone is outside Bluetooth range?

If Apple were motivated by spite, and were trying to turn EU Apple Watch owners against the DMA, they’d just remove all Wi-Fi network syncing between the watch and its paired iPhone. Not just the historical list of all networks the iPhone has ever connected to, but the continuous sync of new networks the iPhone joins after the Apple Watch is paired. That would be a change Apple Watch users would be more likely to notice. But it’s not what Apple is doing. They’ve engineered an entire framework of public APIs to comply with the EC’s mandate.

But the reporting to date on this situation, starting with Numerama, paints the picture that Apple is dropping all Wi-Fi sync between WatchOS and iOS in the EU, and that Apple is refusing to make Wi-Fi network information available to third-party accessories.

Here’s Michael Tsai, after quoting from Tim Hardwick’s summary at MacRumors of Numerama’s report:

It seems perfectly reasonable that if I have a third-party watch I should be able to opt into having my phone share Wi-Fi info with it. You can debate whether mandating this is the proper role of government, but the status quo is clearly anti-competitive and bad for the user experience. I’m open to hearing a story where Apple’s position makes sense, but so far it just seems like FUD to me. What is the argument, exactly? That Fitbit, which already has its own GPS, is going to sell your access point–based location history? That Facebook is going to trick you into granting access to their app even though they have no corresponding device?

Tsai is making a few wrong assumptions here. First, Apple is enabling users (in the EU) to opt into having their iPhone share Wi-Fi information with third-party devices. Second, this mandate is not specific to smartwatches — it applies to any devices that can pair with an iPhone and have corresponding iOS partner apps. So Meta, with their lineup of smartglasses, does have corresponding devices. And, per Apple’s public statements, it is Meta in particular that has been zealously pursuing interoperability mandates pursuant to the DMA. I think it’s entirely possible that this entire issue regarding Wi-Fi network sharing was prompted by Meta’s interoperability requests to the European Commission.[2]

As for the argument regarding why Apple has chosen to comply in this way, what is essential to note is that none of this Wi-Fi network information shared between iOS and WatchOS is ever sent to or seen by Apple. Apple doesn’t see the network passwords, doesn’t see the names of the networks, and doesn’t even know when a device has joined a new network. All of this is exclusively on-device, and when the information is exchanged between an iPhone and paired Apple Watch, it’s transferred device-to-device. (This is also true when you use Apple’s features to share Wi-Fi passwords with nearby friends. It’s device-to-device and entirely private and secure. Apple doesn’t even know that person A sent a Wi-Fi password to person B, let alone know the name of the network or the password.)

Here’s Rui Carmo, at Tao of Mac:

As someone who relies a lot on the Watch (especially now that WhatsApp works locally on it), I’d say we have officially reached the point where Apple is on the verge of actively harming their user experience for no good reason whatsoever. I honestly don’t know if this is bull-headedness or malicious compliance.

On the other hand, someone at the EU clearly prefers being in the limelight by regulating against evil US corporations in ways that affect very small parts of the general population rather than, say, go after Asian smart TV manufacturers that are present in millions of homes and resell data on Europeans’ TV viewing habits.

No notes on Carmo’s second point. But regarding the first, his opinion is founded on incorrect assumptions. Apple clearly thinks it’s a bad idea to share any Wi-Fi information at all with third-party devices, but they’ve created an entire new framework for use within the EU to allow it, just so they can continue syncing any Wi-Fi network information at all with Apple Watch. Far from harming the user experience, Apple is bending over backwards to make the Apple Watch experience as good as possible while balancing the privacy and security implications of this DMA mandate. Rather than take away all Wi-Fi network syncing, Apple is leaving most of it in place, and only eliminating (in the EU) the part at the very beginning, where, during the set up process, all of the current networks saved on the iPhone are synced to the Apple Watch.

Given the mandate regarding the DMA, and given the privacy implications of sharing any of this information with third-party developers and peripheral makers, personally, I think it would have been reasonable for Apple to take the extreme position of simply disallowing Wi-Fi network information syncing to any and all devices, including Apple Watches, in the EU. There is no reason to trust third-party developers with any of this information. But Apple isn’t doing that, and they’ve undertaken a significant software engineering effort — just for the EU — to support the path they’ve chosen. Carmo’s critique seems predicated on the assumption that Apple is just cutting off all Wi-Fi network sharing.

Given that Apple’s compliance needs to account for potentially untrustworthy device makers — whether by intent, or incompetence — not syncing all known networks seems like a reasonable trade-off.

Leave it to Tim Sweeney to espouse the maximalist perspective:

Why simply not ask the user whether or not to share WiFi history identically whether connecting to an Apple product or a Meta product?

That is, in fact, what Apple is doing. But the privacy implications for a user are, in fact, different when an iPhone’s saved Wi-Fi networks are shared to, say, a Meta product than to another Apple product. It’s worth emphasizing that the European Commission’s mandate does not permit Apple to require those third-party companies to treat this information with the same privacy protections that Apple does. Apple keeps that information exclusively on-device, but Apple is not permitted to require third-party peripheral makers to do the same.

Consider the iOS system prompt for App Tracking Transparency: the user’s two choices are “Ask App Not to Track” and “Allow”. It’s a common and natural question why the first option is “Ask App Not to Track” rather than “Don’t Allow”. It would certainly look better if the options were “Don’t Allow” and “Allow”. But Apple deliberately made the first button “Ask App Not to Track” because ATT is, at least partially, a policy, not a complete technical guarantee. If an app prompts for ATT permission and the user chooses “Ask App Not to Track”, that app should definitely not go ahead and attempt to track the user’s activity across other apps. But, technically, it could try.[3] I presume that if they do, if and when Apple notices, Apple will rap the developer’s knuckles in the App Store review process, or even suspend the app’s developer account. But one can see why Apple would want to avoid such a pissing match with Facebook/Meta again.[4]
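
For the technically curious, here is a minimal Swift sketch of what requesting ATT permission looks like in an app, using Apple’s AppTrackingTransparency and AdSupport frameworks. It illustrates the point above: denial zeroes out the advertising identifier, but nothing in this API physically prevents a devious app from trying to fingerprint the user by other means.

    import AppTrackingTransparency
    import AdSupport

    func requestTrackingPermission() {
        ATTrackingManager.requestTrackingAuthorization { status in
            switch status {
            case .authorized:
                // Permission granted: the real advertising identifier is available.
                let idfa = ASIdentifierManager.shared().advertisingIdentifier
                print("IDFA:", idfa)
            case .denied, .restricted, .notDetermined:
                // No permission: advertisingIdentifier reads as all zeros, and
                // not tracking "by other means" is enforced by policy, not by code.
                print("Tracking not authorized")
            @unknown default:
                break
            }
        }
    }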

Under the EU’s mandate to Apple regarding Wi-Fi network access for third-party devices and their corresponding iOS apps, Apple is not permitted even to set a policy that these apps must pinky swear to keep the information private and on-device. Nor is the EU itself demanding it. If a third-party device-maker wants to send your iPhone’s Wi-Fi network history and credentials to their servers and save it, that’s up to them, not Apple, per the EC. Apple sees that as a problem.[5] You can argue — and some will, as I think Michael Tsai does in the passage I quote above, and as Tim Sweeney clearly does — that this ought to be up to the user. If a user says they’re fine with their Wi-Fi network information being shared with a third-party accessory they’ve paired with their iPhone, that’s up to them. That is a reasonable take. But I think Apple’s perspective is reasonable as well — that they should be able to make products where this isn’t possible.

The “it should be up to the user” take benefits informed, technically savvy users. The “it shouldn’t be possible” take benefits uninformed, un-savvy users — users who in many cases have decided that they simply trust Apple. The iPhone brand message — the brand message behind the Apple ecosystem — is that Apple doesn’t allow things that are dangerous to security or privacy. I do not think most iPhone users expect a third-party device they pair to their iPhone to be able to send their entire history of Wi-Fi networks back to the company that made the device. (Most iPhone users also don’t realize how sensitive, privacy-wise, their complete Wi-Fi network history is.)

It’s fair to point out that the “it should be up to the user” take is more beneficial to third-party accessory makers than the “it shouldn’t be possible” take. And that this conflict of interest — where the same limitations that protect iPhone users’ privacy by definition disadvantage third-party devices in ways that Apple’s own devices that connect to iPhones are not — works not just in iPhone users’ favor, privacy-wise, but also in Apple’s favor, financially. Apple can sell more Apple Watches if they work better with iPhones than smartwatches from other companies do. That’s obviously true, but that’s just another way of saying that first-party products have inherent advantages that third-party products don’t, to which I say: Duh. Apple’s own peripherals, like Apple Watch, can do things that third-party peripherals can’t because Apple can trust its own devices, and its own software, in ways that it can’t trust devices and companion apps made by other companies.

It’s natural for a company to bootstrap a new product on the back of an existing successful one. Meta’s Threads social network, for example, uses the same usernames and sign-in system as Instagram, which is arguably the most successful social network in the world. Should Meta not have been permitted to do that? Or should they be forced to allow anyone to create new competing social networks using Instagram user accounts as the ID system?

It’d be pretty weird if Apple limited itself, when designing and engineering features that integrate experiences across its own devices, to what it would allow third-party developers to do. It’d be even weirder if Apple allowed third-party developers to do everything Apple’s own software can do.[6]

For at least the last 15 years, I’ve repeatedly emphasized that Apple’s priorities are in this order: Apple first, users second, developers third. The DMA attempts to invert that order, privileging developers first (in the ostensible name of fair competition with Apple, a designated “gatekeeper”), ahead of users, and ahead of Apple itself. So of course Apple is going to object to and resist mandates that require it to subordinate its own strategic desires — its own sense of how its products ought to be designed and engineered — especially when the primary beneficiary of the mandates aren’t users, but developers. Many of whom, especially the larger ones, are Apple’s competitors. But I also think it’s clear, with Apple in particular, that users prefer Apple’s priorities. People are happier with Apple putting users’ considerations ahead of developers’ than they are when developers are free to run roughshod over the software platform.

The clearest example of that is the App Store. It’s overwhelmingly developers, not users, who object to the App Store model — the exclusivity of distribution, the exclusivity of the vendor’s payment system, the vendor’s payment commissions, the vendor’s functional guidelines and restrictions, all of it. Users largely don’t have a problem with any of that. That’s why Apple commissioned and then publicized a study, just this month, that showed that DMA-driven changes saved developers €20 million in commissions, but that reduction in commissions didn’t lower the prices users pay. Developer-focused observers see that as a win for the DMA — that’s €20 million in developers’ pockets that otherwise would have gone into Apple’s already overflowing pockets. But a user-focused observer might see that as clarifying regarding the fact that the DMA wasn’t designed to benefit users, and isn’t benefiting users in practice either. Apple doesn’t care about €20 million. They fart bigger than that. They do care about clarifying who the DMA prioritizes first, and that it’s not users. (And, of course, that it’s not Apple itself.)

Users love the App Store model. With Apple in particular, users, by and large, like the idea that the platforms have stringent guardrails. Many buy iPhones because Apple exerts such control over the platform, not despite it. But that control is exactly why Apple has been so singularly targeted by the European Commission regarding DMA mandates, despite the fact that Samsung by itself — let alone the Android platform as a whole — sells more phones in Europe (and the world) than Apple does.

The bottom line is that users setting up new Apple Watches in the EU will now get a slightly worse experience in the name of parity with accessories made by third-party companies. It remains to be seen whether users of third-party iPhone accessories and peripherals in the EU will see any benefit at all (because the companies that make their devices will need to adopt these new EU-exclusive Wi-Fi Infrastructure APIs in their iOS companion apps) — and, if the users of third-party iPhone accessories do see the benefit of Wi-Fi network information syncing to their devices, whether their privacy will be respected. But don’t make the mistake of thinking that Apple is complying the least bit spitefully with regard to this mandate.


  1. I’m quoting Apple/Safari’s French-to-English translation, but the gist seems exactly the same in Google’s translation as well. ↩︎

  2. It remains to be seen whether Meta will actually use the new Wi-Fi Infrastructure framework to allow their accessories, like their AI Glasses, to obtain Wi-Fi network information from Meta’s companion iOS app. I’m guessing they almost certainly would, if the Wi-Fi Infrastructure APIs were available globally. But these APIs are exclusive to the EU. Will Meta deem it worth the engineering effort to support this feature only for users in the EU? We shall see.

    It’s worth remembering that one of the initial DMA mandates the EU issued to Apple was that iOS must support third-party web browser rendering engines, and to comply with this, Apple spent significant (and I suspect that’s a vast understatement) engineering resources to create the BrowserEngineKit and BrowserEngineCore frameworks, and here we are at the end of 2025, nearly two years after Apple shipped those frameworks, and there are exactly zero browsers on iOS using alternative rendering engines. Zero. These frameworks might be the largest set of APIs ever created that never get used. I wouldn’t be surprised if the new Wi-Fi Infrastructure framework sees the same fate. (Meta might consider that a win, just knowing that Apple had to expend this effort for naught.) ↩︎︎

  3. Apple has a good layperson-approachable overview of App Tracking Transparency. At a technical level, an app must prompt for and receive the user’s permission (via the Allow button in the system-provided ATT prompt) in order to access the device’s advertising identifier. From that document: “Unless you receive permission from the user to enable tracking, the device’s advertising identifier value will be all zeros and you may not track them as described above.”

    But returning zeroes for the device’s advertising identifier doesn’t technically prevent a devious developer from attempting to uniquely identify and track the user by other means. If the button in the system prompt said “Don’t Allow”, rather than “Ask App Not to Track”, it would imply that Apple could guarantee the app isn’t tracking you (or trying to track you) without your permission. Apple can’t guarantee that, so they don’t imply that they can. ↩︎︎

  4. I’m not aware of any instances where an app has been accused of disregarding the ATT “Ask App Not to Track” request, but surely it has happened. If you’re aware of any such accusations, and how Apple responded, let me know. ↩︎︎

  5. I’m not arguing here that the European Commission doesn’t care about user privacy, or that I think the European Commission doesn’t realize that Wi-Fi network information is quite sensitive. I’m sure they do care about user privacy and do realize that Wi-Fi network information is privacy-sensitive. What I do think is that the European Commission believes the privacy of this information should only be guarded by law, and that they already have laws in place that protect such information. And thus it’s not Apple’s place — especially now that they’ve been deemed a “gatekeeper” that has the power to stymie competition — to attempt to protect that information, whether by technical limitations or by policy.

    Apple is certainly not opposed to privacy-protecting laws, in the abstract, but doesn’t see the law alone as protection enough. Apple’s perspective is that protecting their customers’ privacy is, in fact, Apple’s responsibility — and one of their most important responsibilities at that. It’s illegal to steal cars, but every carmaker still puts locks on the doors and requires a key to start the engine. In numerous ways, Apple sees the DMA as mandating, privacy-wise, that they create something akin to cars that don’t require keys, trusting EU law to keep them from being stolen. The European Commission only sees Apple’s protections as blocking would-be competitors, not would-be privacy thieves. ↩︎︎

  6. In the old days, of course, with devices designed before the iPhone, this wasn’t weird. All software, whether first- or third-party, could do whatever it wanted to. Anyone could write a kernel extension. In the classic Mac OS days there was no “kernel” and we just had “extensions” and you could just drop one in your Extensions folder, restart, and boom, whatever system extension you just installed was now effectively part of the operating system. Any app could read and write anything on disk, including into the operating system. Go back far enough and apps could read and write (deliberately or accidentally) inside the memory of another running application. To split personal computing — not just PCs but all personal computing devices, in the plain sense of the words — into three eras, there was (1) the early era when all software was effectively “root”; (2) the middle era, still exemplified today by MacOS and Windows, when there were user-controlled protections on what could run as root; and (3) the modern era, as exemplified by iOS and stock Android, where the vendor controls what can run as root.

    You can reasonably make the case — and expert-level users (read: nerds) often do — that the user should always be in control. I bought the device, I should be able to run whatever software, with whatever privileges, I want. That perspective is valid, but it also describes a class of devices — PCs — that privilege the autonomy of third-party developers over the vendor-controlled stability of the OS. The PC model, where accessory makers can offer software that runs with root (or root-like) escalated privileges, offers significantly greater opportunities for third-party accessory makers than the mobile model, where accessories are limited to whatever public APIs are provided by the device vendor for integration. But with the PC model, users can “mess up” their system by installing software they shouldn’t have, or that they regret having installed but don’t know how to remove. With the mobile model, users are technically prevented from installing anything that could “mess up” their system. It’s always about trade-offs. And with this particular trade-off, it’s very clear which model is more successful in the market. It’s not feasible to make computers intended for use by anyone and everyone which require any degree of technical knowledge or expertise to manage. ↩︎︎

One Small Guardrail Finally Held Up Against Trump

For almost a decade now, we’ve watched as Trump plowed through guardrails intended to restrain a rogue executive, exploited one loophole after another, and managed to turn every legal ambiguity in his favor.

So it’s worth noting that one relatively small structural guardrail held firm and provided a check today on one of Trump’s worst abuses.

The dismissals of the politicized indictments of James Comey and Letitia James came down to a key provision in the law Congress passed to deal with U.S. attorney vacancies. The attorney general can appoint an interim U.S. attorney for 120 days, but after 120 days, the interim U.S. attorney must be selected by the judges in that district.

It’s a structural allocation of power between all three branches:

The executive branch gets to fill the slot, but only temporarily, so that it can’t do an end run around the legislative branch, where the Senate must confirm U.S. attorneys. If the executive could appoint interim U.S. attorneys indefinitely, the Senate’s role would be eviscerated. To square the circle, Congress gave the judicial branch the power to appoint interim U.S. attorneys after 120 days and thus assure some minimum degree of competence and professionalism in the district’s top federal prosecutor. The statutory arrangement also creates a “use it or lose it” incentive for the executive branch to work with the Senate to nominate confirmable candidates. Play too much hardball with senators, and you surrender your appointment power to district judges.

In trying to upend that settled regime for allocating power, the Trump administration was seeking to rob both of the other branches of some of their prerogatives. In rejecting the administration’s argument, U.S. District Judge Cameron McGowan Currie of South Carolina showed that statutory power structure to be durable and sustainable.

Coming out of the Nov. 13 hearing on the validity of Halligan’s appointment, I wrote that Currie gave no indication that she thought Halligan was lawfully appointed, but that it wasn’t clear Currie would go for the maximum remedy of dismissing the indictments with prejudice, meaning the cases could not be refiled. In the end, she dismissed without prejudice.

But more importantly for our immediate purposes, Lindsey Halligan was precisely the type of U.S. attorney that the statutory scheme for power allocation was designed to avoid: a loyalist of the president who had been his personal attorney and most recently worked in his White House; a neophyte with no prior experience as a prosecutor; a transient figure dispatched to play a transactional role: indict named enemies of Trump without regard to the law, the facts, or principles of justice.

The big threshold question was whether Halligan was properly appointed in the first place, but the politicized prosecutions of Comey and James were rife with other legal infirmities. Because the cases were dismissed without prejudice, James can be re-indicted. It’s not clear that Comey can be because of the statute of limitations in his case. But assuming Currie’s rulings hold up on appeal, it will be up to an interim prosecutor appointed by the judges in the district — not by Pam Bondi — to bring new indictments at Trump’s behest. To do so, they would have to once again overrule or disregard the deep reservations of career prosecutors in the office and be prepared to stake their careers on cases already tainted by the stench of corruption.

Terraform Talent Development

Originally posted at the Terraform Blog on November 17, 2025.

Terraform Industries is the vehicle by which I build the people that build the team I need to build the future I am determined to build.

One of the unexpected privileges of founding an ambitious, future-oriented hardware company is that I get to hire, mentor, and develop ambitious people to become the leaders of tomorrow. Earlier this year, I formalized this process and today I’m externalizing this memo. 

Why? To better prepare candidates for the challenges ahead, to share our insights into organization, and to strengthen the culture of American re-industrialization. There is no supply of hardened, seasoned veterans of synthetic fuel production, yet. We get to build them from scratch. 

Join Us

Terraform is the best place for ambitious, hard working people to become the best they can be.

Terraform has a range of positions for early, mid, and late career mechanical, electrical, and chemical engineers and technicians, and allied trades. Incredible opportunities for growth, leadership, and worthy challenges in carbon capture, hydrogen electrolysis, methane and methanol synthesis, as well as nascent efforts on metals, cement, and desalination. 

To conquer the primary energy consumption layer of our civilization, check out our open roles and apply at hiring@terraformindustries.com

More Context

Here are some other useful posts on career development. I like the LLM-induced trend of putting only relevant references up top.

Career Development Guide For Job Seekers

Maximizing Resume Signal Noise Ratio

The Well Rounded Engineer

Stuff You Should Have Learned By Now

You Should Be Working On Hardware

Mr Beast Memo

The Company As A Machine For Doing Stuff (Packy McCormick)

Terraform Development Responsibilities

Engineers build tools to perform otherwise impossible tasks. A tool can be a hand, a screwdriver, a line of code, a fixture, a jig, an injection mold, a factory, or an org chart. The tool is the mechanism by which an idea is made real.

Career development is meta-tooling. Terraform as a whole, and your team and responsibilities within it, are intended to be the best place on Earth for you to become the best you can be. 

As Terraform continues to grow, we need to ensure that job expectations continue to be explicit. The following levels provide a map for your development and the intensification of capability. They are intended to function as a cursus honorum, allowing any employee to easily see how to make the most of their time and opportunities here.

The job ladder provides visibility into both technical and managerial tracks, but at Terraform, we strongly encourage people to develop these mindsets in parallel. Pay increments are roughly 10% per level, and we expect people to climb a level in no more than a year. This is explicitly “up or out.” Stagnation is not an option. 

The following is not an exhaustive list of expectations. The job titles here map to broader industry titles, and not Terraform job titles, which we prefer to make more descriptive. The increments in capability and authority are intended as a guide for career development conversations at Terraform. 

Leadership

Terraform provides early opportunities to take on leadership responsibilities and develop skills. Leadership skills are unintuitive, enormously valuable, and may be readily improved through practice. All leaders must be able to:

  • Build the team: Recruit, mentor, hold accountable, and drive.
  • Set the direction: Lead, motivate, explain, consult, and build consensus.
  • Don’t run out of money: Sell, advocate, and understand the needs of the customer.

Some organizations prefer to hire specialists to offload some of these skills, such as sales or recruiting. People in these organizations assume that their job requires them to handle only their specialty, and their skills atrophy in other areas. This is deeply career limiting. Even if your company has a sales org, you still need to understand how sales works, because it’s how you attract top candidates, get credit for your work, drive consensus around your ideas, and secure resources.

At Terraform, we generally don’t hire non-technical specialists, because our most critical interfaces are with other technical people, and we don’t want to dis-intermediate that information exchange. Instead, everyone gets to learn about traditionally “non technical” functions, greatly expanding the scope of their competence.

Each of the levels below requires aggressive development of all of the skills associated with leadership to progress, in addition to the steady in-sourcing of responsibilities and growth of scope described. 

Levels

Table stakes at entry level

  1. Integrating with the existing team. It is non-trivial, requiring a high degree of technical competence, insight, and self-awareness. Must also have sufficient ambition and determination. Remember, action precedes motivation. Enjoyment comes after sweat.
  2. Can successfully complete basic tasks given detailed instructions.
  3. Can successfully complete basic tasks with basic instructions.
  4. Can successfully complete more complex tasks with detailed instructions.
  5. Can successfully complete more complex tasks with basic instructions.
  6. Can autonomously complete complex tasks.

First promotion

  1. Can oversee small teams. This includes providing coaching and instructions to other less experienced team members, and taking responsibility (and giving credit) for the successful completion of their deliverables. It also includes knowing how to communicate information people need to build stuff. 
  2. Can specify tools to solve problems. “We need a widget that can do X.” “We need Salesforce/CRM.”
  3. Can make tools to solve problems. “I found this design for a widget, I’ll make it.”

Engineer II, Senior Lead

  1. Can design tools to solve problems. “I designed this new widget to solve a problem.” Note that reusing old solutions is often better than designing new solutions, and knowing when innovation for its own sake is not necessary is a key judgment skill needed at this level. On the management side, responsibility grows to include outcomes, not just component delivery. That is, the senior lead delivers not just the component, but the solution the component is meant to provide. This requires the ability to see through the text and through reality to the solution space. It is important to cultivate the intuition necessary to judge what is necessary, or what is a viable path. Constraints often force an even better design.

Lead Engineer, Team Lead/Manager

  1. Can oversee sub-systems. This is a more product-focused role than small team or component management, because here the person has visibility into their team AND interfaces with other teams, and has to intuitively understand what their customers need, not just what they say they need. At this level, the lead has responsibility not just for a component but for a product. The role includes product management aspects, that is, responsibility not just for the success of the product in terms of function but also in terms of intended use cases and interfacing with the customer. This is also the level where outcomes are consequentialist. At level 11, the Lead guarantees an outcome and they are not graded on effort. A team lead/manager also has considerable responsibility for talent management, that is to say, recruiting, hiring, performance management, and career development. The team lead is expected to remain the supreme technical authority within their vertical and have the ability to operate authoritatively in adjacent verticals.

Senior Engineer, Senior Manager

  1. Mastery of a topic area, such as a particular manufacturing stack, design language, analysis tool set, regulatory framework, enterprise sales, etc. Eg tool & die, thermo/fluids, statics, etc. This level is table stakes to spend a career working on really interesting stuff. The manager has a high level of confidence in successful execution of any project, and has typically demonstrated a track record of success. At this level, the manager also begins to manage managers, a critical but very challenging transition and one that enables the operation of much larger projects.

Staff Engineer, Associate Director

  1. Company authority on a particular area. The manager remains deeply technical but interfaces primarily with managers, and has developed competence in correctly intuiting issues and decisively addressing them, so as to maintain a high degree of leverage in keeping projects of large scope on track. They have comprehensive knowledge of every aspect of development they supervise, play an active role in developing junior talent, and make swift and correct decisions. 

Senior Staff Engineer, Director

  1. An industry authority on a particular area. In addition to the above, they have an established track record and likely a public profile. They have developed the skills necessary to move into any project set and get things right. They assume total responsibility for successful execution. 

Principal Engineer, Senior Director

  1. World authority on a particular area. “There are 10 people who might know more about this than I do, and we’re all in the same group chat.” This may also be thought of as a transitional role into an executive-style role. 

Reading List

Fortunately, we don’t have to do this in a vacuum. Millions have gone before us. Thousands have written their stories. Dozens are comprehensible and useful. Speaking of which, you should definitely write a blog. I know of no better way to increase your luck stat.

“If I have seen further it is by standing on the shoulders of giants.” Isaac Newton 1675 to Robert Hooke, a predecessor of rather short stature.

In our quest to maximize our rate of skill development, we need to aggressively consume hard won lessons by others. You should develop your own programs of reading or other meaningful media consumption to level up. LLMs can help here by listing and ranking dozens of primary sources according to your preferences. I have found that first person technical accounts are about 100x more interesting and useful than researched histories by non-technical historians, in terms of actionable knowledge for real world leadership of technical programs.

The purpose of reading is to challenge your assumptions and biases, and improve your internal world model and intuition so that you can make correct decisions more quickly and more often.

Safety

NTSB aviation accident investigation database
NTSB YouTube
The Killing Zone: How and Why Pilots Die (Paul A Craig)
Into Thin Air (Jon Krakauer)
Touching the Void (Joe Simpson) (also a film)
Normal Accidents (Charles Perrow)
Aeronautical decision making
Checklists
Columbia Shuttle Disaster report
Challenger Shuttle Accident report
Normalization of Deviance
Chemical Safety YouTube
178 seconds to live (controlled flight into terrain)
Steam leak horror stories (reddit)
Therac-25 software safety escape
Flying Blind (Peter Robison)
Dittemore’s Law (Casey blog)
Mark 14 Torpedo problems
TWA Flight 800
Hyatt Regency Walkway Collapse
2021 Champlain Towers Condo Collapse Report Critique

Leadership

Poor Charlie's Almanack (Charlie Munger). Founders podcast summary
Pieces of the Action (Vannevar Bush)
More Than My Share of it All (Kelly Johnson)
Now It Can Be Told (Leslie Groves)
Rickover Effect (Theodore Rockwell)
Zero to One (Peter Thiel)
Hard Thing About Hard Things (Ben Horowitz)
Founders (podcast)
Founders: How Elon works
Founders: How Jensen works
Acquired (podcast)
VISA
Costco
Enron 
Amazon
Walmart
Google
Cost of glory (podcast)
Stuff you have to know (Casey blog)
Maximizing resume SNR (Casey blog)
Well rounded engineer (Casey blog)
Patio11 on COVID vaccines (Patrick McKenzie blog)
How NASA Builds Teams (Charles Pellerin)
Only The Paranoid Survive (Andy Grove)
High Output Management (Andy Grove)
Mythical Man-Month (Fred Brooks)
The Elephant in the Brain (Simler and Hanson)
LOTR (also the films)
Thunder Below (Eugene Fluckey)
The Lean Startup (Eric Ries)
Extreme Ownership (Willink and Babin)
Mr Beast onboarding memo
The Sunion
Skunk Works (Ben Rich)
Play Nice But Win (Michael Dell)
Healing Back Pain (Sarno)
On Responsibility Essay (John Grier Hibben)
Lessons of History (Will Durant)
How the world really works (Vaclav Smil)

Physics and Science

The Art of Doing Science and Engineering (Richard Hamming)
Halliday Resnick and Krane (Physics Text)
Casey’s Physics Olympiad notes
Problem Solving Algorithm (Casey blog)
Structures: Or Why Things Don’t Fall Down (J. E. Gordon)
AvE YouTube
Munro Live YouTube
Ultralearning (Scott Young)
Why We Sleep (Matthew Walker)
Feynman Lectures on Physics
Alchemy of Air (Thomas Hager)
The Pattern on the Stone (Danny Hillis)

Are there any truly terrific books I’m missing? Send your suggestions!

Fifteen Years

"Want to feel old?" "Yes."

Prediction markets in everything

Prediction market Kalshi Inc. and sneaker marketplace StockX are offering a new way to bet on the resale prices of in-demand sneakers and collectibles such as Labubus and Pokémon cards.

In a partnership announced Wednesday, Kalshi will use StockX data to create so-called event contracts tied to sneakers, trading cards and figurines. Users will be able to trade on outcomes such as whether an item will surpass a price threshold after release day, or predict the best-selling brands during a major shopping event like Black Friday.

Products that will be listed include highly-anticipated drops of Jordan sneakers, Supreme hoodies and blind boxes that contain random Labubus.

Here is more from Bloomberg, via John De Palma.

Every Housing Down Cycle is "unhappy in its own way"

Today, in the CalculatedRisk Real Estate Newsletter: Every Housing Down Cycle is "unhappy in its own way"

Excerpt:
“All happy families are alike; every unhappy family is unhappy in its own way.” Leo Tolstoy, Anna Karenina
Maybe we could say that all housing booms look alike, but every down cycle is “unhappy in its own way.”

In March 2022, I wrote Don't Compare the Current Housing Boom to the Bubble and Bust. Instead, I suggested a more similar period was the late ‘70s to early ‘80s.
It is natural to compare the current housing boom to the mid-00s housing bubble. The bubble and subsequent bust are part of our collective memories. And graphs of nominal house prices and price-to-rent ratios look eerily similar to the housing bubble.

However, there are significant differences. First, lending has been reasonably solid during the current boom, whereas in the mid-00s, underwriting standards were almost non-existent (“fog a mirror, get a loan”). And demographics are much more favorable today than in the mid-00s.

A much more similar period to today is the late ‘70s and early ‘80s. House prices were increasing sharply. Demographics were very favorable for homebuying as the baby boomers moved into the first-time homebuying age group (similar to the millennials now). And inflation picked up from an already elevated level due to the second oil embargo in 1979, followed by the Iran-Iraq war in 1980, driving up costs.
Sure enough, there hasn’t been a national crash in house prices. However, although there are similarities to the late ‘70s / early ‘80s period, there are also significant differences. The most obvious difference is the sharp slowdown in population growth and immigration. The population and workforce were expanding sharply in the early ‘80s.
There is much more in the article.

Monday assorted links

1. The robustness of twin studies.

2. Suggestions for improving the NIH.

3. DOGE has been undercovered, here is one recent story.

4. Henry Oliver and Rebecca Lowe on re-founding classical liberalism.

5. Interview with Justin Kuiper, who is behind 500 million plus YouTube views.

6. Paul Krugman on negative emotional contagion (my framing, not his).

7. Dorothy Vogel, RIP (NYT).

When Lost Income Isn’t Just from Time Off Work: Calculating Hidden Earnings Losses

When an accident occurs, victims often focus on immediate losses such as medical bills or days missed from work. However, lost income extends far beyond these visible impacts. Reduced earning capacity, missed overtime opportunities, bonuses, and even long-term career growth can all be affected. For individuals recovering damages with legal representation, accurately calculating these hidden losses is critical to ensuring full and fair compensation.

Understanding Different Types of Lost Income

Lost income is not limited to wages for time spent away from the job. There are several additional factors to consider:

  • Overtime and Performance Bonuses: If a worker cannot perform at peak capacity, they may miss out on opportunities for extra pay or incentive bonuses.
  • Commission and Sales Opportunities: Professionals whose pay depends on commissions may see earnings decline if they fail to meet previous targets.
  • Career Advancement: Long-term career growth may be hindered when accidents result in missed training, networking opportunities, or high-profile projects.
  • Secondary Employment or Side Jobs: Those who hold multiple jobs or freelance positions may experience disruptions that aren’t captured in standard payroll records.

Calculating these types of losses often requires a detailed review of past earnings, performance records, and future potential. An experienced attorney ensures that all these factors are considered when preparing a claim, allowing the victim to pursue compensation that reflects the total impact of their accident.

The Role of Professional Assessment

When estimating hidden losses, expert input is essential. Financial analysts, economists, and vocational specialists may be retained to assess historical and anticipated earnings. Their evaluation typically involves benchmarking against comparable positions in the same industry, examining past income trends, and forecasting career advancement that may have been delayed or lost because of the accident.

Lawyers use these evaluations to support their claims by demonstrating that lost income extends beyond missed workdays. Legal experts help ensure that hidden losses are fully accounted for in settlements or litigation by translating complex earnings data into understandable, legally significant evidence.

How Legal Representation Helps Capture Hidden Income Losses

Whether full damages can be recovered often depends on the completeness of the evidence presented. Lawyers with expertise in workplace and personal injury claims help clients document every facet of income loss, gathering payroll records, tax returns, commission statements, and performance evaluations. This documentation forms the foundation of a claim, showing how an accident has affected earning potential in both tangible and intangible ways.

Legal experts also anticipate the strategies insurers use to reduce claims, such as arguing that a loss is speculative or unrelated to the accident. Knowledgeable lawyers counter these arguments with professional evaluations, thorough documentation, and expert testimony, establishing a clear link between the accident and both short-term and long-term financial repercussions.

Estimating Future Earnings Loss

Projecting future earnings is one of the trickiest aspects of lost-income claims. Accidents may affect a victim’s ability to continue on the same career path or meet their financial goals. Several factors must be carefully taken into account when calculating this:

  • Age and Career Stage: Younger workers may have decades of earning potential ahead, making any long-term disability or limitation more financially impactful.
  • Occupation and Industry Trends: Jobs with high physical or cognitive demands may be more sensitive to even minor injuries, affecting long-term earning potential.
  • Promotions and Raises: Evaluating missed advancement opportunities provides insight into the potential cumulative impact of the accident.

Legal experts work closely with clients to build realistic projections, creating evidence that insurers and courts can evaluate objectively. By including both past and anticipated income, victims can pursue compensation that reflects the true financial scope of their losses.
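To make the idea concrete, here is a deliberately simplified sketch of how such a projection can be structured. Every figure is hypothetical, and real claims rely on expert analysis rather than a formula like this:

# Illustrative sketch only, not legal or actuarial advice: a simplified
# present-value estimate of future earnings loss. All inputs are hypothetical.
def future_earnings_loss(pre_injury_salary, post_injury_salary, years_remaining,
                         annual_raise=0.03, discount_rate=0.04):
    """Discounted sum of the yearly gap between expected and reduced earnings."""
    total = 0.0
    for year in range(1, years_remaining + 1):
        expected = pre_injury_salary * (1 + annual_raise) ** year
        reduced = post_injury_salary * (1 + annual_raise) ** year
        total += (expected - reduced) / (1 + discount_rate) ** year
    return total

# Example: a worker earning $80,000 who can now earn only $60,000
# over 25 remaining working years.
print(f"${future_earnings_loss(80_000, 60_000, 25):,.0f}")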

Accounting for Nontraditional and Indirect Income

Nontraditional sources of income, such as side gigs, freelance work, tips, or bonuses tied to specific projects, can also produce hidden losses. In some claims, even unpaid caregiving or household contributions have quantifiable economic value. By identifying these less obvious areas, legal representation ensures that compensation calculations reflect the full financial impact, not just traditional salary or hourly wages.

Furthermore, some injuries result in intermittent or recurring productivity loss. Even after returning to work, victims may need flexible schedules, time off for medical appointments, or lighter workloads, all of which can reduce total income. Documenting these continuing limitations is crucial to demonstrating the ongoing financial impact.

Final Thoughts

Lost income following an accident is far more nuanced than simply counting missed days on the job. Hidden earnings losses, from decreased productivity and missed bonuses to delayed career advancement, can substantially affect a victim’s long-term financial well-being. By recovering damages with legal representation, individuals gain the expertise necessary to document these losses, calculate projected future earnings, and ensure that compensation reflects the full scope of the impact. Proper legal guidance ensures that victims are not left bearing the financial consequences of an incident that extended far beyond the time spent away from work.



The post When Lost Income Isn’t Just from Time Off Work: Calculating Hidden Earnings Losses appeared first on DCReport.org.

Modern Tools That Improve Safety and Accountability in Elder Care Settings

Modern senior care is entering a new era as technology becomes an essential part of daily operations. What once relied on handwritten notes, physical supervision, and periodic updates is now supported by tools that offer continuous insight into resident well-being. These developments strengthen oversight and improve the level of safety that residents receive. When thoughtfully implemented, technology directly contributes to protecting the rights of seniors in care facilities by reinforcing transparency, enhancing communication, and reducing the risk of preventable harm.

The Rise of Smart Monitoring and Safety Tools

Smart monitoring devices are one of the most significant innovations in senior care. Wearable health trackers, motion sensors, and fall detection systems all continuously monitor a resident’s physical state. When a resident exhibits distress or deviates from typical activity patterns, these tools promptly alert staff.

This rapid response capability ensures that emergencies are handled promptly and prevents injuries from worsening. Facilities are better able to fulfill their obligation to provide prompt, attentive care, and residents enjoy a safer environment. Knowing that technology fills in the gaps when staff cannot be everywhere at once can reassure families who may be concerned about the safety of their loved ones.

Improving Care with Accurate Digital Documentation

Traditional paper charts have been quickly replaced by electronic health records, which offer significant improvements in accessibility and accuracy. Errors caused by illegible handwriting, misplaced documents, or outdated medical information are less common with digital records. Electronic systems assist care providers in making well-informed decisions by clearly organizing and retrieving data.

This structure also guarantees accountability. Reviewing treatment history, assessing concerns, and confirming that residents have received the care they were promised all depend on proper documentation. Seniors with regularly updated medical records are better protected if concerns are raised about the standard of care, possible neglect, or inappropriate treatment.

Strengthening Family Involvement and Communication

Technology also strengthens communication between families and care facilities. Many centers now use secure platforms that let families view activity logs, message caregivers, check for updates, and receive alerts about changes in a loved one’s health. This degree of participation is particularly crucial for residents who find it difficult to voice concerns on their own.

By offering clear channels of communication, technology helps families become effective advocates. They can look for trends, ask questions, or work with staff on decisions about a resident’s care. Having informed, involved family members reinforces seniors’ rights to respect, dignity, and individualized care.

The Growing Impact of Predictive and Analytical Tools

Predictive analytics and artificial intelligence are becoming practical tools for identifying health risks before they worsen. By evaluating data on behavior, nutrition, mobility, and sleep, AI systems can flag early warning signs of problems. These insights let caregivers act earlier, often averting complications or hospital stays.

This proactive strategy is consistent with a resident-centered care model. Caregivers can address concerns early on rather than waiting for issues to arise. Facilities uphold a higher standard of care, and seniors receive attention specific to their individual needs.
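As a purely illustrative sketch (not a description of any facility’s actual system), flagging can be as simple as comparing a resident’s latest activity against their own baseline. The data, names, and threshold below are all hypothetical:

from statistics import mean, stdev

def flag_activity_drops(daily_steps, z_threshold=2.0):
    """Return residents whose latest daily step count is unusually low for them."""
    flagged = []
    for resident, history in daily_steps.items():
        baseline, latest = history[:-1], history[-1]
        if len(baseline) < 7:
            continue  # need at least a week of history for a meaningful baseline
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (mu - latest) / sigma > z_threshold:
            flagged.append(resident)
    return flagged

example = {"Resident A": [4200, 4000, 4300, 3900, 4100, 4250, 4050, 1200],
           "Resident B": [3000, 3100, 2900, 3050, 2950, 3000, 3100, 2980]}
print(flag_activity_drops(example))  # ['Resident A']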

Enhancing Accountability Through Digital Tracking

The ability of technology to accurately record resident interactions and staff activities is another significant contribution. Time-stamped digital care logs, electronic staff check-in systems, and cameras in public spaces all produce trustworthy records that lessen the possibility of abuse or neglect.

These tools must be used responsibly, but when implemented correctly, they add an essential layer of oversight. Staff members understand that care activities are documented, which encourages adherence to proper procedures. Families can also be confident that objective systems are in place to verify what occurs throughout the day.

Balancing Human Care with Technological Support

Although technology offers meaningful benefits, its success depends on thoughtful integration. The goal is not to replace caregivers but to strengthen the support they provide. Training is essential to ensure staff feel comfortable with new systems and understand how to use them effectively. Clear communication with families also helps set expectations and encourages cooperation.

Human connection remains the foundation of compassionate care. Technology works best when it enhances the caregiver-resident relationship rather than overshadowing it. When used correctly, these tools allow caregivers to devote more time to meaningful interactions by reducing administrative burdens and improving efficiency.

Looking Ahead at the Future of Senior Care

Technology continues to evolve, and its influence in senior care will only grow. Facilities that embrace modern tools are better positioned to provide safe, high-quality, and transparent care. Residents experience greater independence and security, while families gain valuable insight into daily routines. These advancements create a stronger environment for accountability, safety, and wellness. As more technologies emerge, they will become central to protecting the rights of seniors in care facilities and ensuring that every resident receives the respect, attention, and advocacy they deserve.

Photo: Freepik via their website.



The post Modern Tools That Improve Safety and Accountability in Elder Care Settings appeared first on DCReport.org.

“Today, the only naked bodies that many Americans will likely ever see are their own, a partner’s, or those on a screen. Gone are our unvarnished points of physical comparison — the ordinary, unposed figures of other people.”

💬 Join the discussion on kottke.org

I talk talent networks and mentoring and Christianity with Luke Burgis

From Grand Rapids, Michigan, earlier in the year, here is the link.

Here is Luke’s Cluny Institute, which sponsored the event.  And here is Luke’s book on Rene Girard.

The post I talk talent networks and mentoring and Christianity with Luke Burgis appeared first on Marginal REVOLUTION.

Housing November 24th Weekly Update: Inventory Only Down 4.7% Compared to Same Week in 2019

Altos reports that active single-family inventory was down 1.1% week-over-week.  Inventory usually starts to decline in the fall and then declines sharply during the holiday season.

The first graph shows the seasonal pattern for active single-family inventory since 2015.

Altos Year-over-year Home Inventory graph.

The red line is for 2025.  The black line is for 2019.  

Inventory was up 15.5% compared to the same week in 2024 (last week it was up 16.3%), and down 4.7% compared to the same week in 2019 (last week it was down 5.3%). 

Inventory started 2025 down 22% compared to 2019.  Inventory has closed most of that gap, but it appears inventory will still be below 2019 levels at the end of 2025.

This second inventory graph (Altos Home Inventory) is courtesy of Altos Research.

As of November 21st, inventory was at 830 thousand (7-day average), compared to 840 thousand the prior week.  

Mike Simonsen discusses this data and much more regularly on YouTube.

Side-Walking Problems

Local Law 11 requires owners of New York City’s 16,000-plus buildings over six stories to get a “close-up, hands-on” facade inspection every five years. Repair costs in NYC’s bureaucratic and labor-union driven system are very high, so the owners throw up “temporary” plywood sheds that often sit there for a decade. NYC now has some 400 miles of ugly sheds.

The ~9,000 sheds stretching nearly 400 miles have installation costs around $100–150 per linear foot and ongoing rents of about 5–6% of that per month, implying something like $150 million plus a year in shed rentals citywide.
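As a rough sanity check on that figure (my arithmetic, not from the underlying report), the quoted per-foot numbers do land in that range:

# ~400 miles of sheds at $100-150 per linear foot to install,
# with monthly rent of roughly 5-6% of the installation cost.
linear_feet = 400 * 5280  # about 2.1 million linear feet
for install_per_ft, monthly_rent_pct in [(100, 0.05), (150, 0.06)]:
    annual_rent = linear_feet * install_per_ft * monthly_rent_pct * 12
    print(f"${annual_rent / 1e6:.0f}M per year")
# Prints roughly $127M to $228M per year, consistent with "$150 million plus".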

Well, at last something is being done! The sheds are being made prettier! Six new designs, some with transparent roofs as in the rendering below, are now allowed. Looks nice in the picture. Will it look as nice in real life? Will it cost more? Almost certainly!

(Rendering of one of the new shed designs.)

To be fair, City Hall is cracking down as well as doubling down: new laws cut shed permits from a year to three months and ratchet up fines for letting sheds linger. That’s a good idea. But the prettier sheds are the tell. Instead of reevaluating the law, doing a cost-benefit test or comparing with global standards, NYC wants to be less ugly.

How about using drones and AI to inspect buildings? Singapore requires inspections every 7 years but uses drones to do most of the work, with a follow-up hands-on check. How about investigating ways to cut the cost of repair? The best analysis of NYC’s facade program indicates something surprising: the problem isn’t just deteriorating old buildings but also poorly installed glass in new buildings, so more focus on installation quality is perhaps warranted. Moreover, are safety resources being optimized? Instead of looking up, New Yorkers might do better by looking down. Stray voltage continues to kill pets and shock residents. Manhole “incidents” including explosions happen in the thousands every year! What’s the best way to allocate a dollar to save a life in NYC?

Instead of dealing with the tough but serious problems, NYC has decided to put on the paint.

The post Side-Walking Problems appeared first on Marginal REVOLUTION.

Papers

Graphic depiction of a Go game board with numbered black and white circles and Japanese characters, in a grid layout.

In this mesmerising short from 1991, thousands of Japanese newspaper clippings form a prescient vision of our digital world

- by Aeon Video

Watch on Aeon

Nothing alive is alien to me

A cat sitting on ancient ruins with tall columns under a clear blue sky.

Who belongs to our moral community? The Greek philosopher Empedocles had an answer: all life, from humans to the laurel bush

- by Tristan Moyle

Read on Aeon

No backup, no cry

I haven't done a full-system backup since back in the olden days before Dropbox and Git. Every machine I now own is treated as a stateless, disposable unit that can be stolen, lost, or corrupted without consequences. The combination of full-disk encryption and distributed copies of all important data means there's just no stress if anything bad happens to the computer.

But don't mistake this for just an "everything is in the cloud" argument. Yes, I use Dropbox and GitHub to hold all the data that I care about, but the beauty of these systems is that they work with local copies of that data, so with a couple of computers here and there, I always have a recent version of everything, in case either syncing service should go offline (or away!).

The trick to making this regime work is to stick with it. This is especially true for Dropbox. It's where everything of importance needs to go: documents, images, whatever. And it's instantly distributed on all the machines I run. Everything outside of Dropbox is essentially treated as a temporary directory that's fully disposable.
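A minimal sketch of enforcing that discipline, assuming a Dropbox folder and a ~/code directory of Git clones (the paths and the one-week window are assumptions): flag recently modified files that live outside the synced roots.

import time
from pathlib import Path

SYNCED_ROOTS = [Path.home() / "Dropbox", Path.home() / "code"]  # code/ holds Git clones
SCAN_ROOT = Path.home()
WINDOW = 7 * 24 * 3600  # only care about files touched in the last week

def outside_synced_roots(path):
    return not any(root in path.parents for root in SYNCED_ROOTS)

now = time.time()
for path in SCAN_ROOT.rglob("*"):
    try:
        if path.is_file() and outside_synced_roots(path) and now - path.stat().st_mtime < WINDOW:
            print(f"unsynced: {path}")
    except OSError:
        pass  # ignore permission errors and files that vanish mid-scan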

It's from this principle that I built Omarchy too. Given that I already had a way to restore all data and code onto a new machine in no time at all, it seemed so unreasonable that the configuration needed for a fully functional system still took hours on end. Now it's all encoded in an ISO setup that installs in two minutes on a fast computer.

Now it's true that this method relies on both multiple computers and a fast internet connection. If you're stuck on a rock in the middle of nowhere, and you somehow haven't discovered the glory of Starlink, maybe just stick to your old full-disk backup ways. But if you live in the modern world, there ought to be no reason why a busted computer is a calamity of data loss or a long restore process.

Illicit organ trade, in Nigeria and elsewhere

 Here's an article about organ (kidney) trafficking, by a Nigerian physician, which has a good account of the background of the Nigerian citizen who was convicted in England of organ trafficking. (It's in an interesting looking journal that I hadn't previously heard of...)

Akpen, Nater Paul. "The illicit Organ Trade: Biographical, Anatomical, Economic and Legal Aspects." Journal of Academics Stand Against Poverty 6, no. 1 (2025): 86-102. 

Abstract:"A kidney can cost up to $ 200,000. Humans have two kidneys but normally require just one to live. Individuals can donate one to relatives for altruistic reasons. But to pay $200,000 - or any other amount - to obtain a kidney, no country in the world allows that. Except for Iran. This global blanket ban has pushed the trade of organs underground and it now ranks as the fourth most lucrative illegal activity – behind only drugs, arms, and human trafficking. Rising incidence of end-stage kidney disease will increase demand for replacement kidneys, both gifted and bought. This essay studies this trade using two case studies, one of a Nigerian leading legislator arrested in the UK for getting a kidney for his daughter and another on kidney demand in Iran where trade in organs is permitted. The global value of the trade is studied, and literature gaps are discussed. The inadequacies of the current system are highlighted and improvements suggested, based on best practices around the world."

 

And here's the concluding paragraph:

"Organ trade should be legalized. Governments, in collaboration with professional medical bodies, should be the sole regulator. For deceased donations, there should be an opt out system that would be set in place after a sweeping enlightenment campaign on the system and the need for organs. There should be a fixed uniform national pay to donors (or their next of kins). There should be no cross-border trade. "

 

What should I ask Arthur C. Brooks?

Yes I will be doing a Conversation with him.  Here is Wikipedia:

Since 2019, Brooks has served as the Parker Gilbert Montgomery Professor of the Practice of Nonprofit and Public Leadership at the Harvard Kennedy School and at the Harvard Business School as a Professor of Management Practice and Faculty Fellow.[2] Previously, Brooks served as the 11th President of the American Enterprise Institute. He is the author of thirteen books, including Build the Life You Want: The Art and Science of Getting Happier with co-author Oprah Winfrey (2023), From Strength to Strength: Finding Success, Happiness and Deep Purpose in the Second Half of Life (2022), Love Your Enemies (2019), The Conservative Heart (2015), and The Road to Freedom (2012). Since 2020, he has written the Atlantic’s How to Build a Life column on happiness.

Do not forget Arthur started as a professional French hornist, and also was well known in the cultural economics field during his Syracuse University days.  And more.  So what should I ask him?

The post What should I ask Arthur C. Brooks? appeared first on Marginal REVOLUTION.

Will the most important pop stars of the future be religious pop stars?

The personally irreligious (last I checked) economist Tyler Cowen has long been fond of proposing that the most important thinkers of the future will be religious thinkers—counter to everything we heard growing up in the age of the New Atheists, and yet, the evidence seems to keep amassing. After the recent release of LUX, the Spanish polymath Rosalía’s fourth studio album, I want to propose a corollary: the most important pop stars of the future may indeed be religious pop stars.

Critics and listeners already seem to agree that LUX represents a titanic accomplishment by the classically-trained, genre-bending singer. Urbane reviewers and YouTube-savvy opera conductors alike have spent the last two weeks obsessively unpacking Rosalía’s 4-movement, 18-track opus, whose symphonic trilingual cathedral piece and Mexican-inflected post-breakup diss track have already charted worldwide. Closer to home, it’s a striking accomplishment to get me to pay serious attention to Top 40 (it helps, of course, to make a hyperpolyglot album with Iberian duende at its core)…

At the beginning of the decade, metamodern types (myself included, in my interview for a PhD position at the Spirituality and Psychology Lab) were given to asking the question: “What can we do to reenchant the world?”

The great stagnation is over. In the age of spiritual machines, enchantment may soon become too cheap to meter. What’s left to ask is: “How are we to make sense of it?” We’ll need artists who can hold the tension—between the earthly and the divine, the ironic and the sincere, the rational and the numinous. Rosalía, to her credit and our great benefit, is already living the question with her full body.

Here is the full post from Josh Lipson at Whitmanic.

The post Will the most important pop stars of the future be religious pop stars? appeared first on Marginal REVOLUTION.

CPHC Central North Pacific Outlook


Central North Pacific 2-Day Graphical Outlook Image
Central North Pacific 7-Day Graphical Outlook Image



Tropical Weather Outlook
NWS Central Pacific Hurricane Center Honolulu HI
Issued by NWS National Hurricane Center Miami FL
800 AM HST Tue Nov 25 2025

For the central North Pacific...between 140W and 180W:

Tropical cyclone formation is not expected during the next 7 days.

Forecaster Reinhart


At least five interesting things: Future of Humanity edition (#72)

Photo by Polat Solar via Wikimedia Commons

Hi, folks! I’m back from my travels in Europe, and just getting back into the swing of blogging. I have a bunch of posts lined up, but if there’s anything you really want me to write about, just drop it in the comments!

Here’s an episode of Econ 102, where Erik and I go through a variety of topics:

Onward to the list of interesting things!

1. Liquidity and the AI “bubble”

The American economy’s future — and possibly, the future of American politics — hinges on the question of whether AI will have a big crash. In a previous post, I wrote that the question likely hinges not on traditional “bubble” processes like speculation or extrapolative expectations, but merely on whether or not people are overestimating the speed with which AI can generate real returns.

But I might be wrong about that. There might also be more traditional “bubble” processes at work — herd behavior, or speculation, or psychological FOMO, etc. — that might be driving some real AI investment. If so, one warning sign we’d want to watch out for is a drying up of investor liquidity.

In lots of economic models of financial bubbles — the extrapolative expectations model, the information overshoot model, etc. — the bubble stops when people simply run out of cash to throw into the frenzy. So I get worried when I see stories about AI investors running out of cash:

In the recent past, big tech companies like Google and Meta funded — or at least, could have funded — their AI expansions out of their own profits. But the WSJ has a story about how data center builders are starting to have to borrow instead of just redirecting the cash from their core businesses.

That’s ominous, because that process can’t go on forever.

2. Tariffs and inflation

Economists were right about the fact that Trump’s tariffs would hurt U.S. manufacturing by making it harder to purchase intermediate goods. Trump should have listened to the economists about this.

But were economists wrong about the inflationary effects of tariffs? JD Vance seems to think so:

Now first of all, it’s pretty foolish to make sweeping claims about the economics profession based on a missed forecast. The economics profession, in general, isn’t in the business of macroeconomic forecasting (because none of its forecasting methods are very effective). And even if it was, forecasting is inherently an inexact science; sometimes, forecasters are going to get it wrong.

That said, it’s a fair question to ask why tariffs haven’t resulted in much inflation yet. Inflation might be trending back upwards — it’s too early to tell — but so far it’s still in the 3% range that it’s been hovering in since mid-2023:

But did economists really think that tariffs would raise inflation substantially? Recall that there’s more than one way that tariffs can affect prices. They raise prices directly, and they make production more expensive — that’s inflationary. But tariffs can also hurt the real economy, causing shocks in the system and an increase in negative sentiment that reduces aggregate demand. Reducing aggregate demand is disinflationary. Here’s what I wrote back in July:

If some sort of economic event scares people, they’ll pull back their spending and try to save money instead. This does two things: 1) it causes a slowdown in growth, because consumers are spending less, and because companies are investing less to meet consumer demand, and 2) it causes a reduction in inflation, because companies are forced to cut prices to maintain their sales to reluctant consumers.

It’s not just me. Economists always knew this was one possibility. Now, some economists at the San Francisco Fed have examined the historical record and found that in fact, demand destruction often cancels out the inflationary effects of tariffs:

[W]e find that a tariff hike raises unemployment (lowers economic activity) and lowers inflation…We also obtain similar results if we restrict the sample to the modern post World War II period or if we use independent variation from other countries (France and the UK). These findings point towards tariff shocks acting through an aggregate demand channel.

But just because tariffs are often deflationary doesn’t mean they’re good. The way they create deflation is by harming the economy so much that people stop spending and prices go down! Right now, Trump’s tariffs are probably pushing up on prices modestly on net, increasing inflation by maybe 0.5 percentage points — not a huge amount. But the reason they might only be having such a minor effect is that they’re also causing mild economic weakness that’s pushing down on prices.

Basically, this is a case where one harm of tariffs partially cancels out another harm from tariffs. That’s not a good thing, and it’s not a reason to stop listening to economists. Quite the opposite, in fact.

3. Solarpunk is the future of Africa

I’m constantly amused by the fact that the 1980s/90s cyberpunk visions largely came true, and I enjoy speculating about what future visions might come true next. One candidate is “solarpunk”, but so far, that’s mostly just an artistic aesthetic rather than a fully fleshed-out future vision. Singapore has cool-looking plants on buildings, but otherwise it’s just a pretty standard cyberpunk metropolis.

But Skander Garroum makes a convincing case that solarpunk is actually the future of Africa, in a way that cyberpunk was the future of Asia and parts of the U.S.:

Climate Drift: Why Solarpunk is already happening in Africa

Basically, Africa has weak states that aren’t good at providing infrastructure. So solar power is electrifying the continent, because it can be built in a distributed fashion — and because Africa is very sunny, so solar works especially well.

Some excerpts from Skander’s post:

What’s happening across Sub-Saharan Africa right now is the most ambitious infrastructure project in human history, except it’s not being built by governments or utilities or World Bank consortiums. It’s being built by startups selling solar panels to farmers on payment plans. And it’s working.

Over 30 million solar products sold in 2024. 400,000 new solar installations every month across Africa. 50% market share captured by companies that didn’t exist 15 years ago. Carbon credits subsidizing the cost. IoT chips in every device. 90%+ repayment rates on loans to people earning $2/day…

The grid that never came turned out to be a blessing. While development experts spent 50 years debating how to extend 20th-century infrastructure to rural Africa, something more interesting happened: Africa built the 21st-century version instead.

Modular. Distributed. Digital. Financed by the people using it, subsidized by the carbon it avoids.

Cyberpunk was a vision of states coexisting alongside — and sometimes being controlled by — powerful corporations. Perhaps solarpunk is a vision of an anarcho-technological future filled with weak states, where independent individuals and small communities have to take technology into their own hands — a fundamentally African future.

And that’s important, because the future of the human race is the future of Africa:

4. Good news about mass extinctions

One of the more depressing facts about the modern world is that humans are causing a mass extinction. Go to the Wikipedia page for “Holocene Extinction”, and it says:

The Holocene extinction, also referred to as the Anthropocene extinction or the sixth mass extinction, is an ongoing extinction event caused exclusively by human activities during the Holocene epoch…Widespread degradation of biodiversity hotspots such as coral reefs and rainforests has exacerbated the crisis.

The fact of the human-induced mass extinction comes up again and again in discussions of environmental policy. As well it should; animals and plants have no natural representation in human society, so if we care about their well-being, we need to fight very hard to keep it on the agenda.

Except here’s something interesting: The “Holocene extinction” may already be ending:

A new study by Kristen Saban and John Wiens with the University of Arizona Department of Ecology and Evolutionary Biology…revealed that over the last 500 years extinctions in plants, arthropods and land vertebrates peaked about 100 years ago and have declined since then. Furthermore, the researchers found that the past extinctions underlying these forecasts were mostly caused by invasive species on islands

For their study, Saban and Wiens analyzed rates and patterns of recent extinctions, specifically across 912 species of plants and animals that went extinct over the past 500 years. All in all, data from almost 2 million species were included in the analysis…

Somewhat unexpectedly, the researchers found that in the last 200 years, there was no evidence for increasing extinction from climate change…For some groups, such as arthropods and plants and land vertebrates, extinction rates have actually declined over the last 100 years, notably since the early 1900s…

One of the reasons for declining extinction rates “is many people are working hard to keep species from going extinct. And we have evidence from other studies that investing money in conservation actually works.”

There might be an environmental Kuznets curve at work here — as countries get richer, they might have more concern for animals, and fight harder to protect habitats. Also, needless to say, richer societies hunt wild animals a lot less.

In other words, we should take heart — there is something we can do to stop habitat destruction and mass extinction, and we’re already doing some of it.

5. Was George W. Bush right about Social Security?

The wheels started to come off of George W. Bush’s presidency in 2005. Even before Hurricane Katrina, the financial crisis, or the growing pessimism over the Iraq War, Bush tried and failed to implement a scheme to invest some of the Social Security trust fund in higher-yielding assets like stocks. His defeat marked the beginning of a long downward slide in popularity.

But was Bush wrong? Via Marginal Revolution, I just came across a very interesting paper in the Journal of Economic Perspectives by Chien, Du, and Lustig, arguing that Japan has been reducing its sovereign debt by using a slightly similar scheme:

Japan presents a striking puzzle in public finance. Government debt exceeds 200 percent of GDP, budget deficits have persisted for decades, and economic growth has been sluggish. Yet inflation has remained subdued, and no major debt crisis has emerged. Understanding how Japan has managed to defy the standard logic of debt sustainability is the starting point for our analysis.

The key lies in the Japanese public sector’s operation of a de facto sovereign wealth fund. Unlike countries such as Norway and Saudi Arabia, which fund such vehicles with national savings from natural resources, Japan finances its investments largely through domestic borrowing at very low floating interest rates. While the risk premia on these investments have generated strong returns over the past two decades and supported debt sustainability, this strategy exposes the government to considerable interest rate and exchange rate risks.

So was Bush right? Should we have just put Social Security money into stocks, thus reducing the federal government’s future liabilities? As Chien et al. note, doing this comes with substantial risks — adverse macroeconomic events can end up making government debt even worse, or make bondholders lose money. That risk was why Bush’s scheme got scotched in the first place. And Chien et al. note that many of the factors that allowed Japan to earn a good return on its “sovereign wealth fund” simply aren’t present for the U.S.

Still, I think it’s worth looking into the possibility of having the U.S. government get more upside from the U.S. stock and real estate markets. This is something I plan to write about more.

6. Who is in control of America’s immigration policy?

Remember that time that Trump’s ICE agents raided a South Korean battery factory in Georgia, arrested a bunch of Korean workers, kept them in terrible conditions, and caused a diplomatic incident, thus threatening U.S. technology, investment, and alliances all at the same time?

Well, Trump doesn’t seem happy about it:

In fact, Trump has harshly and publicly criticized the ICE raid multiple times now, and the White House has officially apologized to South Korea for the incident.

Which raises the question: Who is actually making immigration policy in Washington? The obvious answer is “Stephen Miller”, but the truth may be even worse. The raid may have been carried out by low-level ICE officials trying to meet Stephen Miller’s numerical quotas for arrests and deportations:

[A]ccording to an immigration attorney representing several arrested workers, ICE agents chose to arrest the Korean workers to fulfill the quota of 3,000 daily immigrant arrests set by White House Deputy Chief of Staff Stephen Miller.

In other words, the answer to “Who is making immigration policy?” is “no one”. There are no smart people in Washington, D.C. deciding whether a South Korean battery factory worker is a better person to arrest than an MS-13 gangster. ICE has simply been unleashed on the country and told to go arrest and deport a bunch of people, with no thought given to which people to focus on deporting. There’s no one driving this bus.

7. A good article on China, EVs, and the U.S.

I’ve been arguing that the U.S. is losing the technological future to China because we’re failing to master the Electric Tech Stack — the package of new energy technologies that includes batteries and electric motors. And I’ve argued that the reason we’re falling behind is ideological — we still think the Electric Tech Stack is about climate change rather than about power.

Now other people are starting to say the same. Here are some excerpts from a very good Channing Lee op-ed in The Hill:

Now, China commands 60 percent of global battery electric vehicle sales and dominates the battery supply chain that will power tomorrow’s cars, trucks and buses. America barely reaches 16 percent…It didn’t have to be this way. It was an American company — Tesla — that reintroduced electric vehicles into modern driving…

[But] the U.S. took a wrong turn. Instead of focusing on the electric vehicle as a breakthrough technology, Washington framed it as an environmental issue — one that remains politically divisive…That narrow framing had global consequences. While we debated environmental incentives, China was building the foundations of a new industrial order…

Electric vehicles are not just clean cars, but rather computers on wheels, connected to data, chips and infrastructure. Losing the electric vehicle race means losing leverage over critical technology standards, supply chains and industrial jobs. This isn’t just about automakers; it’s about national power and the future of our tech ecosystem…

Electric vehicles aren’t a climate accessory. They’re the next platform for global technological power.

Well said. A lot more people need to hear this message. It’ll take a long time and a lot of shouting for Americans to start seeing electric technology as being about something other than climate.



Sunday Night Futures

Weekend:
Schedule for Week of November 23, 2025

Monday:
• At 8:30 AM ET, Chicago Fed National Activity Index for October. This is a composite index of other data.

• At 10:30 AM, Dallas Fed Survey of Manufacturing Activity for November.

From CNBC Pre-Market Data and Bloomberg futures: S&P 500 futures are up 41 and DOW futures are up 215 (fair value).

Oil prices were down over the last week with WTI futures at $58.06 per barrel and Brent at $62.56 per barrel. A year ago, WTI was at $72, and Brent was at $76 - so WTI oil prices are down about 19% year-over-year.

Here is a graph from Gasbuddy.com for nationwide gasoline prices. Nationally prices are at $3.07 per gallon. A year ago, prices were at $3.04 per gallon, so gasoline prices are up $0.03 year-over-year.

Sunday 23 November 1662

(Lord’s day). Up, after some talk with my wife, soberly, upon yesterday’s difference, and made good friends, and to church to hear Mr. Mills, and so home, and Mr. Moore and my brother Tom dined with me. My wife not being well to-day did not rise. In the afternoon to church again, and heard drowsy Mr. Graves, and so to see Sir W. Pen, who continues ill in bed, but grows better and better every day. Thence to Sir W. Batten’s, and there staid awhile and heard how Sir R. Ford’s daughter is married to a fellow without friends’ consent, and the match carried on and made up at Will Griffin’s, our doorkeeper’s. So to my office and did a little business, and so home and to bed.

I talked to my brother to-day, who desires me to give him leave to look after his mistress still; and he will not have me put to any trouble or obligation in it, which I did give him leave to do.

I hear to-day how old rich Audley is lately dead, and left a very great estate, and made a great many poor familys rich, not all to one. Among others, one Davis, my old schoolfellow at Paul’s, and since a bookseller in Paul’s Church Yard: and it seems do forgive one man 60,000l. which he had wronged him of, but names not his name; but it is well known to be the scrivener in Fleet Street, at whose house he lodged. There is also this week dead a poulterer, in Gracious Street, which was thought rich, but not so rich, that hath left 800l. per annum, taken in other men’s names, and 40,000 Jacobs in gold.

Read the annotations

"Good engineering management" is a fad

"Good engineering management" is a fad

Will Larson argues that the technology industry's idea of what makes a good engineering manager changes over time based on industry realities. ZIRP hypergrowth has given way to a more cautious approach today, and expectations of managers have changed to match:

Where things get weird is that in each case a morality tale was subsequently superimposed on top of the transition [...] the industry will want different things from you as it evolves, and it will tell you that each of those shifts is because of some complex moral change, but it’s pretty much always about business realities changing.

I particularly appreciated the section on core engineering management skills that stay constant no matter what:

  1. Execution: lead team to deliver expected tangible and intangible work. Fundamentally, management is about getting things done, and you’ll neither get an opportunity to begin managing, nor stay long as a manager, if your teams don’t execute. [...]
  2. Team: shape the team and the environment such that they succeed. This is not working for the team, nor is it working for your leadership, it is finding the balance between the two that works for both. [...]
  3. Ownership: navigate reality to make consistent progress, even when reality is difficult. Finding a way to get things done, rather than finding a way that it not getting done is someone else’s fault. [...]
  4. Alignment: build shared understanding across leadership, stakeholders, your team, and the problem space. Finding a realistic plan that meets the moment, without surprising or being surprised by those around you. [...]

Will goes on to list four additional growth skills "whose presence–or absence–determines how far you can go in your career".

Via Hacker News

Tags: software-engineering, will-larson, careers, management, leadership

Agent design is still hard


Armin Ronacher presents a cornucopia of lessons learned from building agents over the past few months.

There are several agent abstraction libraries available now (my own LLM library is edging into that territory with its tools feature) but Armin has found that the abstractions are not worth adopting yet:

[…] the differences between models are significant enough that you will need to build your own agent abstraction. We have not found any of the solutions from these SDKs that build the right abstraction for an agent. I think this is partly because, despite the basic agent design being just a loop, there are subtle differences based on the tools you provide. These differences affect how easy or hard it is to find the right abstraction (cache control, different requirements for reinforcement, tool prompts, provider-side tools, etc.). Because the right abstraction is not yet clear, using the original SDKs from the dedicated platforms keeps you fully in control. […]

This might change, but right now we would probably not use an abstraction when building an agent, at least until things have settled down a bit. The benefits do not yet outweigh the costs for us.

Armin introduces the new-to-me term reinforcement, where you remind the agent of things as it goes along:

Every time the agent runs a tool you have the opportunity to not just return data that the tool produces, but also to feed more information back into the loop. For instance, you can remind the agent about the overall objective and the status of individual tasks. […] Another use of reinforcement is to inform the system about state changes that happened in the background.

Claude Code’s TODO list is another example of this pattern in action.
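Here's a minimal sketch of that reinforcement pattern as I understand it - call_model(), run_tool(), and the two helpers are hypothetical stand-ins, not any particular SDK:

def run_agent(objective, tools, max_turns=20):
    messages = [{"role": "user", "content": objective}]
    for _ in range(max_turns):
        reply = call_model(messages, tools)  # provider-specific API call (hypothetical)
        if not reply.tool_calls:
            return reply.text  # model decided it is done
        for call in reply.tool_calls:
            result = run_tool(tools, call)  # execute the requested tool (hypothetical)
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
            # Reinforcement: alongside the raw tool output, re-feed the overall
            # objective, task status, and any background state changes.
            messages.append({
                "role": "user",
                "content": (
                    f"Reminder - overall objective: {objective}\n"
                    f"Task status: {summarize_progress(messages)}\n"    # hypothetical helper
                    f"Background changes: {poll_background_events()}"   # hypothetical helper
                ),
            })
    raise RuntimeError("agent did not finish within max_turns")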

Testing and evals remains the single hardest problem in AI engineering:

We find testing and evals to be the hardest problem here. This is not entirely surprising, but the agentic nature makes it even harder. Unlike prompts, you cannot just do the evals in some external system because there’s too much you need to feed into it. This means you want to do evals based on observability data or instrumenting your actual test runs. So far none of the solutions we have tried have convinced us that they found the right approach here.

Armin also has a follow-up post, LLM APIs are a Synchronization Problem, which argues that the shape of current APIs hides too many details from us as developers, and the core challenge here is in synchronizing state between the tokens fed through the GPUs and our client applications - something that may benefit from alternative approaches developed by the local-first movement.

Via Hacker News

Tags: armin-ronacher, definitions, ai, prompt-engineering, generative-ai, llms, evals, ai-agents

Olmo 3 is a fully open LLM

Olmo is the LLM series from Ai2 - the Allen Institute for AI. Unlike most open weight models these are notable for including the full training data, training process and checkpoints along with those releases.

The new Olmo 3 claims to be "the best fully open 32B-scale thinking model" and has a strong focus on interpretability:

At its center is Olmo 3-Think (32B), the best fully open 32B-scale thinking model that for the first time lets you inspect intermediate reasoning traces and trace those behaviors back to the data and training decisions that produced them.

They've released four 7B models - Olmo 3-Base, Olmo 3-Instruct, Olmo 3-Think and Olmo 3-RL Zero, plus 32B variants of the 3-Think and 3-Base models.

Having full access to the training data is really useful. Here's how they describe that:

Olmo 3 is pretrained on Dolma 3, a new ~9.3-trillion-token corpus drawn from web pages, science PDFs processed with olmOCR, codebases, math problems and solutions, and encyclopedic text. From this pool, we construct Dolma 3 Mix, a 5.9-trillion-token (~6T) pretraining mix with a higher proportion of coding and mathematical data than earlier Dolma releases, plus much stronger decontamination via extensive deduplication, quality filtering, and careful control over data mixing. We follow established web standards in collecting training data and don't collect from sites that explicitly disallow it, including paywalled content.

They also highlight that they are training on fewer tokens than their competition:

[...] it's the strongest fully open thinking model we're aware of, narrowing the gap to the best open-weight models of similar scale – such as Qwen 3 32B – while training on roughly 6x fewer tokens.

If you're continuing to hold out hope for a model trained entirely on licensed data this one sadly won't fit the bill - a lot of that data still comes from a crawl of the web.

I tried out the 32B Think model and the 7B Instruct model using LM Studio. The 7B model is a 4.16GB download, the 32B one is 18.14GB.
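LM Studio can also serve these models over its local OpenAI-compatible API (http://localhost:1234/v1 by default), so the same experiment can be scripted. A minimal sketch - the model id below is a guess, use whatever name LM Studio shows for your downloaded build:

from openai import OpenAI

# LM Studio's local server ignores the API key, but the client requires one.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="olmo-3-7b-instruct",  # hypothetical id - copy the exact name from LM Studio
    messages=[{"role": "user", "content": "Generate an SVG of a pelican riding a bicycle"}],
)
print(response.choices[0].message.content)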

The 32B model is absolutely an over-thinker! I asked it to "Generate an SVG of a pelican riding a bicycle" and it thought for 14 minutes 43 seconds, outputting 8,437 tokens total, most of which was this epic thinking trace.

I don't usually quote the full SVG in these write-ups, but in this case it's short enough that I think it's worth sharing. The SVG comments give a great impression of what it was trying to do - it has a Bicycle, Bike frame, Pelican, Left and Right wings and even "Feet on pedals".

<svg width="200" height="200" viewBox="0 0 100 100">
  <!-- Bicycle -->
  <circle cx="30" cy="60" r="15" stroke="black" fill="none"/>
  <circle cx="70" cy="60" r="15" stroke="black" fill="none"/>
  
  <!-- Bike frame -->
  <rect x="35" y="25" width="30" height="10" fill="saddlebrown"/>
  <line x1="35" y1="40" x2="30" y2="60" stroke="black" stroke-width="3"/>
  <line x1="65" y1="40" x2="70" y2="60" stroke="black" stroke-width="3"/>
  
  <!-- Pelican -->
  <ellipse cx="55" cy="65" rx="20" ry="15" fill="white"/>
  <polygon points="52 50,57 35,62 50" fill="black"/> <!-- Head/beak -->
  <circle cx="55" cy="45" r="2" fill="white"/>
  <circle cx="60" cy="45" r="2" fill="white"/>
  
  <polygon points="45 60,50 70,55 60" fill="lightgrey"/> <!-- Left wing -->
  <polygon points="65 60,70 70,55 60" fill="lightgrey"/> <!-- Right wing -->
  
  <!-- Feet on pedals -->
  <polygon points="25 75,30 85,35 75" fill="black"/>
  <polygon points="75 75,70 85,65 75" fill="black"/>
</svg>

Rendered it looks like this:

Two circles, each with a triangle sticking out from the bottom. They have bars leading up to a brown box. Overlapping them is a black triangle with white circles for eyes and two grey triangles that are probably meant to be wings. It is not recognizable as a pelican or a bicycle.

I tested OLMo 2 32B 4bit back in March and got something that, while pleasingly abstract, didn't come close to resembling a pelican or a bicycle:

Blue and black wiggly lines looking more like a circuit diagram than a pelican riding a bicycle

To be fair 32B models generally don't do great with this. Here's Qwen 3 32B's attempt (I ran that just now using OpenRouter):

The bicycle is two black circles joined by two lines, with a weird rectangular saddle perched on top. The pelican is a blue oval, a white circle with a yellow triangle in it, and a weird eye-shaped oval overlapping the blue one.

OlmoTrace

I was particularly keen on trying out the ability to "inspect intermediate reasoning traces". Here's how that's described later in the announcement:

A core goal of Olmo 3 is not just to open the model flow, but to make it actionable for people who want to understand and improve model behavior. Olmo 3 integrates with OlmoTrace, our tool for tracing model outputs back to training data in real time.

For example, in the Ai2 Playground, you can ask Olmo 3-Think (32B) to answer a general-knowledge question, then use OlmoTrace to inspect where and how the model may have learned to generate parts of its response. This closes the gap between training data and model behavior: you can see not only what the model is doing, but why - and adjust data or training decisions accordingly.

You can access OlmoTrace via playground.allenai.org, by first running a prompt and then clicking the "Show OlmoTrace" button below the output.

I tried that on "Generate a conference bio for Simon Willison" (an ego-prompt I use to see how much the models have picked up about me from their training data) and got back a result that looked like this:

Screenshot of AI2 Playground interface with Olmo 3 32B Think model selected. User prompt reads "Generate a conference bio for Simon Willison". AI response shows expanded "Thoughts" section: "Okay, the user wants me to generate a conference bio for Simon Willison. Let me start by recalling who Simon is. He's a well-known developer, entrepreneur, and author in the tech community. I remember he's the co-founder of co:here and now works at Anthropic. He's famous for his work with SQLite, especially the datasette project. Also, he writes a blog called SimonWillison.net which is quite popular. First, I need to structure a typical conference bio. These usually include his current role, past positions, notable projects, and maybe some achievements or recognitions. I should highlight his key contributions like datasette, which allows easy exploration of SQLite databases via a web interface. That's a big deal in data journalism and open data circles." Right panel shows "OlmoTrace" feature described as "Documents from the training data that have exact text matches with the model response. Powered by infini-gram". First document excerpt discusses technology and innovation, with highlighted match text "societal implications of technology, emphasizing the" shown in bold, surrounded by text about responsibility and merging innovation with intellect. Second document excerpt about Matt Hall has highlighted match "is a software engineer and entrepreneur based in" shown in bold, describing someone in New York City who co-founded a PFP collection and works at Google Creative Lab. Note indicates "Document repeated 2 times in result" with "View all repeated documents" link.

It thinks I co-founded co:here and work at Anthropic, both of which are incorrect - but that's not uncommon with LLMs, I frequently see them suggest that I'm the CTO of GitHub and other such inaccuracies.

I found the OlmoTrace panel on the right disappointing. None of the training documents it highlighted looked relevant - it appears to be looking for phrase matches (powered by Ai2's infini-gram) but the documents it found had nothing to do with me at all.

Can open training data address concerns of backdoors?

Ai2 claim that Olmo 3 is "the best fully open 32B-scale thinking model", which I think holds up provided you define "fully open" as including open training data. There's not a great deal of competition in that space though - Ai2 compare themselves to Stanford's Marin and Swiss AI's Apertus, neither of which I'd heard about before.

A big disadvantage of other open weight models is that it's impossible to audit their training data. Anthropic published a paper last month showing that a small number of samples can poison LLMs of any size - it can take just "250 poisoned documents" to add a backdoor to a large model that triggers undesired behavior based on a short carefully crafted prompt.

This makes fully open training data an even bigger deal.

Ai2 researcher Nathan Lambert included this note about the importance of transparent training data in his detailed post about the release:

In particular, we're excited about the future of RL Zero research on Olmo 3 precisely because everything is open. Researchers can study the interaction between the reasoning traces we include at midtraining and the downstream model behavior (qualitative and quantitative).

This helps answer questions that have plagued RLVR results on Qwen models, hinting at forms of data contamination particularly on math and reasoning benchmarks (see Shao, Rulin, et al. "Spurious rewards: Rethinking training signals in rlvr." arXiv preprint arXiv:2506.10947 (2025). or Wu, Mingqi, et al. "Reasoning or memorization? unreliable results of reinforcement learning due to data contamination." arXiv preprint arXiv:2507.10532 (2025).)

I hope we see more competition in this space, including further models in the Olmo series. The improvements from Olmo 1 (in February 2024) and Olmo 2 (in March 2025) have been significant. I'm hoping that trend continues!

Tags: ai, generative-ai, llms, interpretability, pelican-riding-a-bicycle, llm-reasoning, ai2, ai-ethics, llm-release, nathan-lambert, olmo

Links 11/23/25

Links for you. Science:

Jeffrey Epstein E-mails Reveal Depth of Ties to High-Profile Scientists
With 42 new measles cases, US total tops 1,700
Lemurs Are Having a Mysterious ‘Baby Boom’ in Madagascar. Here’s Why That Might Not Be a Good Thing
U.S. Congress considers sweeping ban on Chinese collaborations (making the policy retroactive leads me to conclude that they are attempting to use this to destroy the current crop of researchers)
Life Finds a Way, Even on Inactive Hydrothermal Vents
Raccoons Are Showing Early Signs of Domestication

Other:

Society broke down and all, but you really still need to stop at red lights (excellent)
JD Vance Received a Dire Warning About the Groyper Takeover of the GOP From a Strange Source
Behind the scenes of the D.C. mayor’s battle to keep Trump at bay
BU College Republicans president says he called ICE to ‘detain these criminals’ at Allston Car Wash (it’s actually unclear if he is a U.S. citizen; he spent most of his life abroad)
Trump’s Shocking Brawl With Laura Ingraham on Fox Exposes His Weakness
The D.N.C. Ordered Workers Back to the Office. Its Union Isn’t Pleased. (hard to be the party of workers when you fuck over your own workers)
Open Letter From an Online Democrat
Great News: We Got Them to Agree to Their Own Deal
Trump administration prepares to fire worker for TV interview about SNAP
Tariffs could be coming for your Christmas tree (welcome to the War on Christmas, Comrade Adjudicated Rapist Trump!)
Katie Wilson, a Political Newcomer, Is Elected Mayor in Seattle
In Matt Gaetz Scandal, Circumstances Left Teen Vulnerable to Exploitation (Trump wanted Gaetz to be Attorney General, even after this statutory rape was known)
But, Atrios, Why Are You Focused On Larry Summers?
Email Cache Reveals How Former Harvard President Larry Summers Stayed in Jeffrey Epstein’s Orbit
Epstein emails reveal New York Times reporter’s connection to scandal
How DHCD Forfeited $35 Million in Federal Assistance for Affordable Housing and Bailed Out a Well-Connected Developer
Zohran Mamdani prepares to tackle Trump
Upper East Side Moms Are Melting Down Over Mamdani: A 35,000-person Facebook group devolved into panic and infighting after the mayoral election.
Sorry, Nancy Mace. Donald Trump hasn’t “protected women”
What Does A Great Trans Athlete Deserve?
Quit “sane-washing” Donald Trump
The President Who Cried ‘Hoax’. Republicans went after Epstein only when it was politically useful.
How Multilevel Marketing Explains Trump’s Corruption
Metrorail’s signaling system is showing its age. Here’s why it needs a new one
Harvard Faculty Disturbed by Revelations of Summers’ ‘Cozy Friendship’ With Epstein
The Eviction Kings
The Myth of China’s “AI Talent Pipeline”
Can School Parking Lots Become Havens for Homeless Families?
Plan to kill single-family zoning in Va. suburb survives legal challenge
Why D.C.’s dockless e-bikes are rolling slower

In Case You Missed It…

Stop tyranny
Observed on P Street NW, between 16th and 17th, Dupont Circle, D.C.

…a week of Mad Biologist posts:

Autopen For Me, but Not For Thee

Democratic Support, At Least in Virginia, Has a Huge Gender Gap

What HHS Secretary’s “Kiddie Killer” Kennedy’s ‘Gold Standard’ Science Actually Looks Like

Glonzo Has Broken Republican Containment

David Beckworth on Fed policy

David Beckworth has two interesting posts discussing how monetary policy is being impacted by a changing environment. He focuses on three issues: stablecoins, abundant reserves, and our unsustainable fiscal policy. In this post I’ll offer a few preliminary thoughts on these issues. In some cases, I don’t have enough expertise to have a high level of confidence in my views, so I’d welcome any feedback. Ideally, you’d show where I’m wrong, in which case I’ll “win” by becoming better informed on 21st century monetary policy.

In the first post, David suggests that dollar denominated stablecoins are likely to become an increasingly important part of the global economy:

Dollar-based stablecoins, with a market capitalization near $300 billion, were already projected to reach between $2 and $4 trillion in size by the end of the decade. The GENIUS Act and the prospect of skinny master accounts are likely to further accelerate this growth.

Should stablecoins become a dominant payment technology, the implications for the Federal Reserve could be significant. In this note, I focus on two key effects in particular: (1) the potential increase in the cost of the Fed’s balance sheet and (2) the potential expansion in its size.

This could have a negative effect on Fed seigniorage revenues:

One potential implication of the widespread adoption of dollar-based stablecoins is the displacement of physical currency. As digital dollars issued through stablecoins become more widely used, global demand for physical U.S. dollars will decline. This shift could prove costly for the Federal Reserve, which depends on its “currency franchise” to finance its balance sheet inexpensively.

Put differently, the Fed currently obtains zero-cost funding by issuing currency and investing the proceeds in Treasury securities that yield a positive interest rate. This spread is the Fed’s golden goose. But as digital dollars displace physical currency, that goose may be cooked—and the seigniorage will migrate from the Fed to the issuers of stablecoins.
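For a rough sense of the magnitudes involved (my illustrative numbers, not David's), the seigniorage flow is approximately currency outstanding times the yield on the Treasuries backing it:

```latex
\[
\text{seigniorage} \;\approx\; C \times i_T
\;\approx\; \$2.3\ \text{trillion} \times 0.04
\;\approx\; \$90\ \text{billion per year}
\]
```

So even a partial displacement of physical currency would shift a meaningful revenue stream from the Fed to stablecoin issuers.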

That seems like a plausible argument, but I’d need more information before forming an opinion. Consider that in recent decades there has been a dramatic decline in the use of currency in transactions. I hardly ever pay for goods in cash, and I don’t believe I’m atypical. And yet total currency demand has increased sharply since the 1980s, from just over 4 percent of GDP to roughly 8 percent.

You could argue that the trend reversed after Covid, but I’m skeptical. Currency demand is negatively related to nominal interest rates and thus grows very rapidly during low interest rate periods such as 2002, 2009-17 and 2020, while slowing or falling back during periods of rising interest rates, such as 2004-07, 2018-19 and 2022-24. The long run shift from cash to credit cards does not seem to have materially reduced the demand for cash, at least as a store of value.

The argument that stablecoins will displace currency largely hinges on whether this asset can displace the demand for cash as a store of value. The vast majority of the currency stock is composed of $100 bills that wear out very slowly, an indication that they are not being widely used in transactions, that is, as a medium of exchange.

In a sense, it is odd that currency is such an important store of value—its 0% return is dominated by other safe assets such as T-bills and FDIC-insured bank accounts. The most widely accepted explanation is that people value the anonymity of currency, which is quite useful for illegal transactions, but also for otherwise legal transactions where there is an attempt to evade taxes.

Unless the government were to essentially legalize money laundering, it is hard for me to see how stablecoins could displace the current high demand for US currency notes. Perhaps I’m missing something here. Will the government be able to prevent stablecoins from being used for money laundering? If the answer is no, then isn’t this the thing that we should be talking about? Wouldn’t that be more important than the hit to Fed income? Wouldn’t legalized money laundering lead to a sharp fall in income tax revenue?

David also argues that the replacement of currency with stablecoins would not lead to a smaller Fed balance sheet, that is, a smaller monetary base:

Consider first the case where people exchange cash for stablecoins backed by reserves, ON RRP balances, or Treasury securities. Suppose, for example, that I deposit $1,000 in cash at my bank, which returns the currency to the Fed. Currency in circulation falls by $1,000 while reserves rise by $1,000, leaving the size of the Fed’s balance sheet unchanged. I then use my new bank deposit to purchase a stablecoin, transferring reserves from my bank to the stablecoin’s bank—again, no change in total Fed assets or liabilities. Finally, the stablecoin issuer uses those reserves to purchase a Treasury bill in the secondary market, shifting reserves among banks but not altering the Fed’s overall balance sheet size. In short, the total size of the Fed’s balance sheet remains constant, but its composition shifts from currency to reserves.
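A toy ledger makes the accounting in that example concrete (a sketch with made-up starting stocks, tracking only the Fed's two main liability categories):

```python
# Toy ledger for the quoted example: track the Fed's currency and reserve
# liabilities through each step. Starting stocks are made up; only the
# $1,000 from the example moves.
fed = {"currency": 1_000_000, "reserves": 3_000_000}
start_total = sum(fed.values())

# Step 1: I deposit $1,000 of cash; my bank returns the currency to the Fed.
fed["currency"] -= 1_000
fed["reserves"] += 1_000

# Step 2: I buy a stablecoin; reserves move from my bank to the issuer's bank
# (ownership changes between banks, the total stock of reserves does not).

# Step 3: The issuer buys a T-bill in the secondary market; reserves again
# shift between banks with no change in the Fed's total liabilities.

assert sum(fed.values()) == start_total
print(fed)  # {'currency': 999000, 'reserves': 3001000}: composition shifts, size doesn't
```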

I see a lot of possibilities here. If we were in the pre-2008 world with no interest on bank reserves, then a decline in currency demand would generally be accommodated by the Fed with open market sales. Rather than leading to another $1000 in excess reserves, the Fed would sell $1000 in securities to keep interest rates on target. (And even this assumption depends on why currency demand fell—recall the "never reason from a price change" problem.)

If we were in a world where stablecoins were backed one for one with deposits at the Fed, then the monetary base would not decline. But David is contemplating a scenario where stablecoin issuers are allowed to back stablecoins with Treasury bills, in which case it is not obvious to me that a $1000 decline in currency would lead to a $1000 increase in reserves. Perhaps, but given that the demand for bank reserves is highly elastic when IOR is set at the market interest rate, it seems like the Fed could choose a wide range of responses to a reduction in currency demand, just as IOR currently gives the Fed a fairly wide ability to adjust the size of the monetary base without dramatically affecting financial conditions.

Elsewhere, David presents a number of plausible scenarios where stablecoins lead to a larger Fed balance sheet, particularly where the coins are backed by “skinny master accounts” at the Fed:

Taken together, these three cases point in a clear direction: stablecoins generally raise the structural demand for Fed liabilities. When they replace currency, the effect is size-neutral for the Fed’s balance sheet even as the composition shifts from currency to reserves. When they replace bank deposits, the result is mostly neutral at first, but growing use of skinny master accounts could nudge reserves and the overall balance sheet higher. And when stablecoins create net new demand for dollar-denominated safe assets, particularly from foreign users seeking access to the U.S. payment system, the Fed’s balance sheet must expand to absorb that demand.

In my view, this all depends on whether we stick with the abundant reserves system or return to the pre-2008 system of scarce reserves (which both David and I prefer). If the Fed does not pay interest on bank reserves, it is hard to see how deposits at the Fed could ever become an important component of the monetary base. Back in 2007, roughly 98% of the $840 billion monetary base was currency (mostly $100 bills) and less than 2% was deposits at the Fed.

If we stick with the policy of paying IOR and having abundant reserves, then the size of the Fed’s balance sheet depends on all sorts of factors, including decisions made by bank regulators.

In the follow-up post, David considers two possible scenarios:

If stablecoins mean greater demand for Fed liabilities, then what kind of balance sheet expansion might this entail? In my view, there are two ways for understanding this process. One view—rooted in the “safe asset shortage” narrative—sees the Fed’s balance sheet expanding to meet an excess demand for safe and liquid assets, with quantitative easing (QE) functioning as a kind of public intermediation service that supplies the safety that private markets cannot. The other view—anchored in concerns about fiscal dominance—sees the Fed’s balance sheet growing in response to an excess supply of safe assets, as mounting public debt pressures the central bank to absorb and manage an expanding stock of treasuries.

David presents both a cyclical and a structural explanation of the safe asset shortage. In my view, both of these explanations misdiagnose the zero bound problem that developed in late 2008. Rather than a lack of safe assets, I believe that a tight money policy drove NGDP growth from a trend rate of roughly 5% in the 1990s and early 2000s to negative 3% from mid-2008 to mid-2009. This collapse in NGDP growth sharply depressed the equilibrium interest rate, causing Treasury bond prices to rise as yields fell. The problem was not that fiscal policymakers were not running big enough budget deficits, but rather that monetary policy was excessively contractionary.

[This view is certainly unconventional, and skeptical readers are directed to my book entitled The Money Illusion for a more complete explanation.]

David then provides some empirical evidence for the safe asset shortage model:

The U.S. experience fits within a broader global pattern. Across advanced economies from 2008 to 2019, the size of central bank balance sheets relative to GDP was negatively related to the average inflation rate. Countries with the lowest inflation—such as Switzerland and Japan—ended up with the largest balance sheets, while those with higher inflation, like New Zealand and Australia, required much smaller interventions. This pattern is precisely what the safe-asset-shortage view would predict: central banks expanded their balance sheets most aggressively where the demand for safe, liquid assets—and thus the disinflationary pressure—was greatest.

But this is also consistent with the model that I presented in The Money Illusion—that tight money led to falling NGDP, which led to lower nominal interest rates and a higher demand for base money. My critics often insist that QE is evidence that monetary policy was expansionary. Then why is QE negatively correlated with the inflation rate? This is clearly an example of the thermostat problem—central bank balance sheets are responding to macroeconomic conditions.

After examining the safe asset shortage hypothesis, David considered the opposite case—too many safe assets (i.e. too many Treasury securities) leading to fiscal dominance:

As the stock of Treasury securities balloons, the Fed faces growing pressure, explicit or implicit, to ensure that the market can absorb the supply without destabilizing yields.

We have been here before. Between 1942 and 1951, the Federal Reserve pegged Treasury yields across the curve to support wartime financing, standing ready to buy whatever amount of government debt was needed to keep short-term rates at ⅜ percent and long-term rates at 2.5 percent. This arrangement expanded the Fed’s balance sheet. It also subordinated monetary policy to fiscal needs until the 1951 Treasury–Fed Accord restored its independence. Arguably, a milder version of this dynamic resurfaced during 2021–2022, when pandemic-era borrowing surged and the Fed’s balance sheet ballooned in tandem. As George Hall and Thomas Sargent have documented in a series of papers, the Fed in this period looked less like it was leading policy than financing it.

This is a plausible argument, but I’m not entirely convinced. In 2021-22, the Fed seemed motivated by a flawed 1960s-style Phillips curve model—where easy money would somehow produce a healthy job market. By early 2022, the Fed realized that it had made a serious error and sharply raised nominal interest rates to restrain inflation. I see no evidence that the Fed was motivated by a desire to inflate away the debt.

The period of 1966-81 provides abundant evidence that a central bank cannot avoid “destabilizing yields” by using an easy money policy to monetize the debt. Indeed, nominal interest rates soared to over 15% toward the end of the Great Inflation of 1966-81.

Some have suggested we might adopt a milder form of fiscal dominance, say shifting the inflation target from 2% to 3%. But what does that accomplish? Yes, with the net national debt at roughly 100% of GDP an extra 1% inflation will reduce the real value of the national debt by 1% of GDP each year. But according to the Fisher effect, moving the inflation target from 2% to 3% will raise nominal interest rates by 1% in the long run, offsetting the ongoing gain from monetizing the debt. There would be a one-time gain from reducing the market value of the existing stock of Treasury debt, but given the unsustainability of current fiscal policy, that’s not going to produce any sort of permanent solution.
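Restating that offset with the numbers in the paragraph (a back-of-the-envelope sketch that ignores maturity structure and any change in real rates):

```latex
\[
i \approx r + \pi \quad \text{(Fisher effect)}, \qquad
\underbrace{\Delta\pi \cdot \tfrac{D}{Y}}_{\text{real debt erosion}}
\approx 0.01 \times 1.0 = 1\%\ \text{of GDP per year}
\approx
\underbrace{\Delta i \cdot \tfrac{D}{Y}}_{\text{extra interest once the debt rolls over}}
\]
```

Once the outstanding stock has rolled over at the higher nominal rates, the two terms cancel, leaving only the one-time capital loss imposed on holders of the existing fixed-rate debt.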

Lots of pundits seem to sort of wave their hands and suggest that since reducing the budget deficit is politically unpopular, we’ll be forced into fiscal dominance. But 1966-81 type inflation is probably 10 times more politically unpopular than reducing the budget deficit from the unsustainable 6% of GDP to a more sustainable 3% of GDP. Even the far smaller inflation of 2021-23 freaked out the public—just imagine their reaction to something like 1966-81!

I have no crystal ball here, and I would not rule out the possibility that we end up with fiscal dominance; we do seem to be exhibiting some banana republic tendencies. Rather, my point is to warn people that this wouldn't be "taking the easy way out"; instead it would be vastly more unpopular than a combination of spending cuts and tax increases on the order of 3% of GDP. President Clinton and the GOP Congress did that sort of fiscal austerity in the 1990s, and I don't recall that period as being one of highly unpopular macro policy.

David concludes by considering a sort of trilemma in contemporary macroeconomic policy:

Digital-dollar innovation, therefore, does not resolve the tension between dollar dominance, a small Fed, and financial stability—it intensifies it.

This tension suggests a trilemma. We can have a small Federal Reserve balance sheet and dollar dominance, but then we risk the kind of financial fragility seen in 2007–2008. We can have a small Fed and financial stability, but only by allowing the dollar’s global role to shrink and reducing the global demand for dollar assets. Or we can have dollar dominance and financial stability, but only by accepting a permanently larger Federal Reserve footprint. It appears, in other words, that the dollar’s global reach requires that the institution sustaining it must grow in proportion to its reach.

In my view there is a way out of this trilemma. We need to finally confront the fact that the profession erred in 2008. The Great Recession was not caused by financial fragility; the actual culprit was a tight money policy that drove NGDP sharply lower. We can have a small Fed balance sheet, dollar dominance, and financial stability. All it requires is getting rid of interest on bank reserves and adopting a policy of NGDP level targeting. Unless I’m mistaken, David also favors that sort of policy mix. So I am essentially arguing that David should be less pessimistic about his preferred policy regime.

PS. A superficial reading of the 1942-51 period might suggest that there is a painless way to do fiscal dominance. But on closer examination, the first five years of the policy (1942-47) led to a very high rate of inflation, peaking at 14.4% in 1947. After mid-1947, however, the Fed stopped pegging T-bill yields and inflation fell sharply. Between 1947 and 1951, the longer-term T-bond yields were often significantly below the 2.5% peg. From 1945 to 1951, the (adjusted) monetary base was roughly flat. The Fed was no longer achieving low nominal interest rates by rapidly expanding its balance sheet. When the 2.5% T-bond yield peg briefly threatened to lead to high inflation in 1951, the Fed abandoned the policy and quickly brought inflation back down.

Vibecessions, Part II

[Chart. Source: Strength in Numbers]

In October 2024, on the eve of the presidential election, the U.S. unemployment rate was 4.1 percent and the inflation rate was 2.6 percent. By historical standards, both these numbers were very good. And they were especially impressive given the pessimism of many economists two years earlier. In 2022 Larry Summers warned that it would take years of very high unemployment to get inflation down from its peak of 9 percent, while Bloomberg put the odds of recession at 100 percent. Instead by 2024 we had achieved the softest of soft landings.

Despite this stellar disinflation performance, the public mood was extremely sour. Voters who said that the economy was their most important issue favored Donald Trump by 60 percentage points in the 2024 presidential election, clearly costing Kamala Harris the election.

Almost a year later, in September 2025, the numbers were somewhat worse: Unemployment had ticked up to 4.4 percent while inflation had risen to 3 percent. In and of themselves, these aren’t terrible numbers. But this certainly isn’t, as Donald Trump claims, the best American economy in history. It isn’t a “hot” economy with prices going down.

Even given the gap between what Trump says about how wonderful the current economy is and the reality, however, it’s remarkable how pessimistic Americans are about the economy — significantly more negative than they were a year ago. The long-running Michigan Index of Consumer Sentiment is now lower than it was in the immediate aftermath of the 2008 financial crisis. The index is even lower than it was in 1980, when unemployment was above 7 percent and inflation hit 14 percent:

[Chart: University of Michigan Index of Consumer Sentiment. Source: University of Michigan]

And voters now blame Trump for the perceived bad state of the economy, showing their anger at the ballot box: In the Virginia and New Jersey gubernatorial elections earlier this month, voters who prioritize the economy favored Democrats by 30 points — a 90 point swing.

Today’s post is the second in a series about “vibecessions”: periods when the economy, by standard economic measures, looks relatively decent but the general public holds very negative views. Last week’s primer showed that the performance of the U.S. economy during the Biden administration was, by objective measures, very impressive: America shrugged off the negative effects of the Covid pandemic on GDP and employment with remarkable speed, significantly outperforming other advanced countries.

During the Biden years, inflation did temporarily spike – which people hated even though their incomes were growing fast enough to keep up with inflation. But the anger persisted even as inflation fell dramatically, and continues under Trump.

Today I’ll try to make sense of Americans’ angry and unhappy vibe on the economy. Spoiler: I haven’t found a “unitary theory” of vibecessions. Rather, there appear to be several possible, and not mutually exclusive, explanations. I’ll discuss each of these in turn, along with their strengths and weaknesses. Lastly, I will discuss the special case of Donald Trump, and why he may be suffering an especially hostile reaction to the economy.

Beyond the paywall I’ll examine five possible explanations for the past few years of vibecession:

· Media negativity
· Extreme partisanship
· People care about the level of prices, not the inflation rate
· The economy is worse than it looks
· Negative feelings arising from Trump’s chaotic economic policies

In addition, I will make some conjectures about future sentiments regarding the economy and why Trump may be paying an additional “gaslighting” penalty.

European Parliament member sees support for EU Space Act there

Christophe Grudler

A member of the European Parliament said there is widespread support in that body for the proposed European Union Space Act, while acknowledging that the draft released this summer can be refined.

The post European Parliament member sees support for EU Space Act there appeared first on SpaceNews.

Why investors are increasingly fatalistic

Everyone knows share prices have a long way to fall. Even so, getting out now might be a mistake

*The Age of Disclosure*

I have now watched the whole movie.  The first twenty-eight minutes are truly excellent, the best statement of the case for taking UAPs seriously.  It is impressive how they lined up dozens of serious figures, from the military and intelligence services, willing to insist that UAPs are a real phenomenon, supported by multiple sources of evidence.  Not sensor errors, not flocks of birds, and not mistakes in interpreting images.  This part of the debate now should be considered closed.  It is also amazing that Marco Rubio has such a large presence in the film, as of course he is now America’s Secretary of State.

You will note this earlier part of the movie does not insist that UAPs are aliens.

After that point, the film runs a lot of risks.  About one-third of what is left is responsible, along the lines of the first twenty-eight minutes.  But the other two-thirds or so consists of quite unsupported claims about alien beings, bodies discovered, reverse engineering, quantum bubbles, and so on.  You will not find dozens of respected, credentialed, obviously non-crazy sources confirming any of those propositions.  The presentation also becomes too conspiratorial.  Still, part of the latter part of the movie remains good and responsible.

Overall I can recommend this as an informative and sometimes revelatory compendium of information.  It does not have anything fundamentally new, but brings together the evidence in the aggregate better than any other source I know, and it assembles the best and most credible set of testifiers.  And then there are the irresponsible bits, which you can either ignore (though still think about), or use as a reason to dismiss the entire film.  I will do the former.

The post *The Age of Disclosure* appeared first on Marginal REVOLUTION.

       


w/e 2025-11-23

My sister was here this week, keeping me company and spending time with Mum. Despite, you know, me not working, everything still seemed busy and I had several “Lemon, it’s Wednesday,” moments.

Nothing major achieved. Does that matter? Part of me feels it does, that “achieving things,” making progress of some kind, is the point. But there have been times recently, especially when the sun’s out, when I think, “This is fine, I can spend the rest of my days here, hardly seeing anyone, hardly going anywhere, hardly doing anything. It’s perfectly pleasant and it doesn’t matter.” Other times, especially when it’s grey and wet, that… well, I’m less satisfied.

In minor events:

  • Had an eye test. No change from a year ago. Ordered new lenses for some old frames, to use as “enhanced readers,” i.e. computer glasses. And ordered new frames, mainly for a backup and to wear at the gym (my current acetate Moscot frames are a bit front-heavy), but I like them a lot so they might become regulars.
  • Three gym trips, two bike trainer cycles.
  • A drive to Ledbury to visit a university friend of Mum’s. I was nervous about the drive, which was warranted on the way back when Apple Maps decided to take us via endless very narrow country roads, rather than the perfectly fine A-roads. I want to blame the person whose nice Yorkshire accent is the reason I use Apple Maps for directions, which hardly seems fair.
  • The builder came to measure our garage, make a plan, and order materials, to start work next week. The main aim is to replace the very leaky roof, but we’ll also be making the walls more presentable, and the inside a nicer, more practical space.
  • Another folk music night with most of the usual crowd.

§ A photo of a field of grass with around twenty sheep standing in it, looking at the camera. In the distance is a valley of fields under a low, cloudy sky.
There are sheep in the field across the road, for the first time in a while.

§ I tinkered a tiny bit with the site redesign and wondered about buying a font. I hardly ever use non-standard fonts on websites but maybe I could wean myself off Helvetica and/or Georgia? I found one I liked but it turned out to cost £40 per year. Given that I keep blog posts using their same design, even when I redesign the rest of the site, this could mean paying £40 per year “forever”. I know type designers have to eat but no.

There are plenty of less onerously-priced fonts around, but do I care enough to try again to find a slightly-different-to-Helvetica font that’s reasonably priced?


§ I watched both seasons of English Teacher and it was pretty good. I wasn’t quite convinced, despite the snappy dialogue, but once I’d settled into the characters, and got past S01E04 which really didn’t work for me, it was all really nice and fun, and I’m sad it got canceled.

And I watched Crimes of the Future (David Cronenberg, 2022), which was OK. Although I’ve seen a lot of his films, at least eight, I’ve always felt I should enjoy them more, or find them more interesting, than I actually do.


§ Just over two weeks here alone until Mary’s back from Nepal. It’ll fly by.


Read comments or post one

Sunday assorted links

1. How to learn quarterbacking without quarterbacking — life of a back-up quarterback (NYT).

2. Major ancient city of the Kazakh steppe.

3. Are sperm whales speaking in vowels?

4. Game theory of Europe and Ukrainian peace?

5. Seb Krier.

6. “The space of intelligences is large…”

7. Delacroix and the beautiful.

The post Sunday assorted links appeared first on Marginal REVOLUTION.

       


Above & Below: Cartography Beyond Terrain

Above & Below: Cartography Beyond Terrain, an exhibition at Stanford Library’s David Rumsey Map Center that launched in conjunction with this year’s Ruderman Conference, “explores how cartography depicts the depths of the Earth, the ocean…

DOT: Vehicle Miles Driven Increased 2.0% year-over-year

This is something I check occasionally.

The Department of Transportation (DOT) reported:
Travel on all roads and streets changed by +2.0% (+5.4 billion vehicle miles) for September 2025 as compared with September 2024. Travel for the month is estimated to be 278.3 billion vehicle miles.

The seasonally adjusted vehicle miles traveled for September 2025 is 277.1 billion miles, a +1.9% (+5.0 billion vehicle miles) change over September 2024. It also represents a 0.2% change (0.6 billion vehicle miles) compared with August 2025.

Cumulative Travel for 2025 changed by +1.0% (+25.1 billion vehicle miles). The cumulative estimate for the year is 2,492.0 billion vehicle miles of travel.
emphasis added
[Graph: Vehicle Miles]

This graph shows the monthly total vehicle miles driven, seasonally adjusted.

Miles driven declined sharply in March 2020 and really collapsed in April 2020.  

Miles driven are now at pre-pandemic levels.

Magic staircase

Staircase technology is moving fast:

Enlightenment ideas and the belief in progress leading up to the Industrial Revolution

Using textual analysis of 173,031 works printed in England between 1500 and 1900, we test whether British culture evolved to manifest a heightened belief in progress associated with science and industry. Our analysis yields three main findings. First, there was a separation in the language of science and religion beginning in the 17th century. Second, scientific volumes became more progress-oriented during the Enlightenment. Third, industrial works—especially those at the science-political economy nexus—were more progress-oriented beginning in the 17th century. It was therefore the more pragmatic, industrial works which reflected the cultural values cited as important for Britain’s takeoff.

That is from a paper by Ali Almelhem, Murat Iyigun, Austin Kennedy, and Jared Rubin.  Now forthcoming at the QJE.

The post Enlightenment ideas and the belief in progress leading up to the Industrial Revolution appeared first on Marginal REVOLUTION.

       


NHC Atlantic Outlook


[Atlantic 2-day and 7-day graphical outlook images]



Tropical Weather Outlook
NWS National Hurricane Center Miami FL
100 PM EST Tue Nov 25 2025

For the North Atlantic...Caribbean Sea and the Gulf of America:

Tropical cyclone formation is not expected during the next 7 days.

Forecaster Blake


NHC Eastern North Pacific Outlook


[Eastern North Pacific 2-day and 7-day graphical outlook images]



Tropical Weather Outlook
NWS National Hurricane Center Miami FL
1000 AM PST Tue Nov 25 2025

For the eastern and central North Pacific east of 180 longitude:

Tropical cyclone formation is not expected during the next 7 days.

Forecaster Reinhart


Emergent Ventures winners, 49th cohort

David Yang, 14, Vancouver, robotics.

Alex Araki, London, to improve clinical trials.

Ivan Skripnik, Moldova/LA, physics and the nature of space.

Mihai Codreanu, Stanford economics Ph.D, industrial parks and the origins of innovation.

Salvador Duarte, Lisbon/Nebraska, 17, podcast in economics and philosophy.

Aras Zirgulis, Vilnius, short economics videos.

Ava McGurk, 17, Belfast, therapy and other services company and general career support.

Anusha Agarwal, Thomas Jefferson High School, NoVa, space/Orbitum.

Cohen Pert, 16, Sewanee, Georgia, running several businesses.

Jin Wang, University of Arizona, Economics Ph.D, AI and the history of Chinese economic growth.

Janelle Yapp, high school senior, KL Malaysia, general career support.

Justin Kuiper, Bay Area, Progress Studies ideas for video.

Mariia [Masha] Baidachna, Glasgow/Ukraine, quantum computing.

Beatriz Gietner, Dublin, Substack on econometrics.

Roman Lopatynskyi, Kyiv, romantic piano music.

The post Emergent Ventures winners, 49th cohort appeared first on Marginal REVOLUTION.

       


SpaceX launches 28 Starlink satellites from Vandenberg on brand new Falcon 9 rocket

Falcon 9 booster B1100 lifts off from Vandenberg Space Force Base in California on its debut mission carrying 28 Starlink satellites. Image: SpaceX.

SpaceX launched a brand new Falcon 9 booster on a mission from California to deploy another batch of satellites for the company’s Starlink internet service.

Liftoff from Space Launch Complex 4E at Vandenberg Space Force Base, California, occurred at 12:48 a.m. PST (3:48 a.m. EST / 0848 UTC). It was the first flight of Falcon 9 first stage B1100, the eighth new booster to join the SpaceX fleet this year.

The Falcon 9 took a south-easterly trajectory on departure from Vandenberg, following the coast of California, as it targeted an orbit inclined at 53 degrees to the Equator. B1100 successfully touched down on the drone ship ‘Of Course I Still Love You’ about eight and a half minutes into the flight.

Deployment of the 28 satellites from the rocket’s second stage was expected one hour into flight. This is the 110th Starlink delivery flight of the year. SpaceX announced earlier this month it now has 8 million customers for the Starlink service.

What created this unusual space sculpture?


Heavy Snow in the Northern Plains and Upper Midwest; Severe Thunderstorms in the Southern U.S.

Clerk for iOS

My thanks to Clerk for sponsoring last week at DF. Clerk makes authentication for iOS apps effortless — just drop in pre-built SwiftUI components for sign-in, MFA, and profile management. Fully customizable, always in sync with Apple’s design system, and packed with features developers love: social sign-in, user roles, and organization management.

Launch faster, stay secure, and scale confidently, whether you’re building the next big thing or a startup MVP. See how Clerk makes complete user management easy for modern iOS teams.

 ★