
This Day in Labor History: April 9, 1917


On April 9, 1917, the Supreme Court upheld Oregon’s new 10-hour day law for both men and women, which also provided for overtime pay. The Court departed from its usual position as a hard-core defender of contract doctrine, deciding that the limited nature of the law meant the state had not stretched its police powers too far and that workers could still choose to be exploited if they wanted. This was an important precedent, although it did not mean the Court was moving toward a more liberal position on workers’ rights.

Oregon had led the way on workers’ rights for some time by the 1910s. The Supreme Court was generally hostile to these laws. This was very much the Lochner era. In 1905, the Court had ruled in Lochner v. New York that a law regulating hours for bakeries was unconstitutional. But it made an exception in 1908. In Muller v. Oregon, the Court ruled that a law specifically limiting women’s working hours was constitutional because women played a special role in the body politic as mothers. A certain class of feminists saw this as discriminatory, but labor feminists lauded the decision, understanding that it not only protected women from exploitation but opened the door for further advancement in laws limiting working hours.

In 1913, Oregon passed a new law that created a 10-hour day for both men and women. It applied broadly to mills, factories, and manufacturing facilities. But it also included a pioneering time-and-a-half provision for overtime, up to 3 hours a day. This was critical, as it turns out, because it kept open the option of workers laboring more, which was a good way to get around the general atmosphere of the era, in which regulating the workplace was treated as a constitutional violation.

Franklin Bunting ran a flour mill in Lake County, presumably Lakeview since the rest of that county is mountains and scrubland. He hated every part of this new law. So he simply refused to comply and sued the state when it fined him $50 for violating it. The state supreme court upheld the law in 1915. Bunting appealed to the Supreme Court. There were a lot of folks invested in Oregon being able to pass such a law. Among them was Felix Frankfurter, the future Supreme Court justice, who would lead the defense of the law before the Court. But Bunting had his major supporters too, including former senator Charles Fulton, a classic Gilded Age Republican who had represented the state from 1903 to 1909.

This case was all about contract doctrine, the critical labor issue of the era for the courts. In short, going back to the 1830s and then especially after the Civil War, employment was seen as a contract between two willing individuals. So what right did the courts or the state have to adjudicate decisions made by two equal parties? Of course the idea of two equal parties when it came to employment was completely ridiculous, and even more so after the Civil War. To say that the millionaire and the starving immigrant were equal parties in a contract of choice is not just to ignore the reality of power dynamics, but to laugh in the face of common sense. And yet, the more unequal the nation became, the more the courts and other hacks for the millionaire elites clung to this idea like it came down on high from God. This got in the way of all sorts of efforts to make work slightly less exploitative. Workplace safety law? Violation of a worker’s right to labor for higher wages in an unsafe working environment if they wanted. Minimum wage law? Violation of a worker’s right to choose to sell their labor for less money. Child labor laws? Violation of a parent’s right to sell their children’s labor. Etc.

Added to this was the perversion of the Fourteenth Amendment. The same courts that decided cases such as Plessy v. Ferguson and threw out the Civil Rights Act of 1875 as unconstitutional–i.e., decided that the 14th Amendment did not actually protect black Americans–ruled that the 14th Amendment did in fact protect corporations. They basically wrote this contract ideology into the 14th Amendment. It’s true enough that Reconstruction Era Republicans did have contracts in mind with the 14th Amendment. These people loved contracts and believed the best way to solve the southern labor issue was for ex-slaves to sign contracts with planters that would regulate wages and conditions. And let’s not pretend that most of these guys were pro-labor either; they weren’t. But at the time of writing the 14th Amendment, their vision was very much not one of providing cover for corporations to exploit their workers, or of ensuring that any laws to protect workers would be ruled unconstitutional on these grounds. But then we should know by now that the courts are filled with hacks who just rewrite the language of the law to support their personal political preferences, with a few exceptions around ideas of principle, sometimes.

The specific argument Bunting and his lawyers made was that the law intervened in the labor market to compel employers to pay more for labor than market value and thus was a wage law rather than an hours law. Oregon countered that it was strictly an hours law and that the very mild penalties on the employer also demonstrated its limited police power. What both sides understood is that an actual law regulating wages would be tossed by the courts in this era.

To some surprise, the Court ruled 5-3 in favor of the state. Louis Brandeis sat it out, which is interesting since he was the lawyer whose pioneering use of sociological evidence had swayed the Court in Muller. Perhaps that is why he sat it out. In any case, Joseph McKenna wrote the majority opinion, with Holmes, Day, Pitney, and Clarke joining. White, Van Devanter, and McReynolds dissented. McKenna argued that the state had exercised appropriate police powers. The key here is that the law did nothing to set wages, outside of the overtime provision, and McKenna noted this specifically. Because the law was just about hours and not wages, it did not discriminate against employers and could be upheld without violating contract doctrine. Bunting’s lawyers also argued that the law did not even protect workers from unsafe conditions, an argument McKenna rejected outright.

This was a positive step for labor reformers. Alas, the Court would continue to rule for employers in most cases for another twenty years, and the throwing out of the Washington, D.C. minimum wage law for women in Adkins v. Children’s Hospital in 1923 demonstrated that only a revolution on the Court would lead to a scenario in which the law could protect American workers. That happened under Franklin Delano Roosevelt, leading to the Fair Labor Standards Act in 1938 and its upholding in U.S. v. Darby in 1941.

This is the 597th post in this series. Previous posts are archived here.

The post This Day in Labor History: April 9, 1917 appeared first on Lawyers, Guns & Money.


The AI Great Leap Forward


In 1958, Mao ordered every village in China to produce steel. Farmers melted down their cooking pots in backyard furnaces and reported spectacular numbers. The steel was useless. The crops rotted. Thirty million people starved.

In 2026, every other company is issuing top-down mandates on AI transformation.

Same energy.


Backyard Furnaces

The rallying cry of the Great Leap Forward was 超英趕美 — surpass England, catch up to America. Every province, every village, every household was expected to close the gap with industrialized Western nations by sheer force of will. Peasants who had never seen a factory were handed quotas for steel production. If enough people smelted enough iron, China would become an industrial power overnight. Expertise was irrelevant. Conviction was sufficient.

The mandate today is identical, just swap the nouns. Every company, every function, every individual contributor is expected to close the AI gap. Ship AI features. Build agents. Automate workflows. That nobody on the team has ever trained a model, designed an evaluation system, or debugged a retrieval system is beside the point. Conviction is sufficient.

So everyone builds. PMs build AI dashboards. Marketing builds AI content generators. Sales ops builds AI lead scorers. Software engineers are building AI and data solutions that look pixel-perfect and function terribly. The UI is clean. The API is RESTful. The architecture diagram is beautiful. The outputs are wrong. Nobody checks because nobody on the team knows what correct outputs look like. They’ve never looked at the data. They’ve never computed a baseline.


Entire departments are stitching together n8n workflows and calling it AI — dozens of automated chains firing prompts into models, zero evaluation on any of them. These tools are merchants of complexity: they sell visual simplicity while generating spaghetti underneath. A drag-and-drop canvas makes it trivially easy to chain ten LLM calls together and impossibly hard to debug why the eighth one hallucinates on Tuesdays. The people building these workflows have never designed an evaluation pipeline, never measured model drift, never A/B tested a prompt. They don’t need to — the canvas looks clean, the arrows point forward, the green checkmarks fire. The complexity isn’t avoided. It’s hidden behind a GUI where nobody with ML expertise will ever look.

The backyard steel of 1958 looked like steel. It was not steel. Today’s backyard AI looks like AI. It is not AI. A TypeScript workflow with hardcoded if-else branches is not an agent. A prompt template behind a REST endpoint is not a model. Calling these things AI is like calling pig iron from a backyard furnace high-grade steel. It satisfies the reporting requirement. It fails every real-world test.

But the most dangerous furnace is the one that produces something functional. Teams are building demoware — pretty interfaces, working endpoints, impressive walkthroughs — with zero validation underneath. Some are in-housing SaaS products by vibe coding some frontend with coding agents: it runs, it has a dashboard, it cost a fraction of the vendor. Klarna announced in 2024 that it would replace Salesforce and other SaaS providers with internal AI-built solutions. What these replacements don’t have is data infrastructure, error handling, monitoring, on-call support, security patching, or anyone who will maintain them after the builder gets promoted and moves on.

These apps will win awards at the next all-hands. In two years they’ll be unmaintainable tech debt some poor soul inherits and rewrites from scratch. The furnace produced pig iron. Someone stamped “steel” on it. Now it’s load-bearing.

Meanwhile, the actual product that customers pay for rots in the field. But hey, 超英趕美. The AI adoption dashboard is green.

Reporting Grain Production to the Central Committee

During the Great Leap Forward, provinces competed to report the most spectacular grain yields. Hubei reported 10,000 jin per mu. Guangdong said 50,000. Some counties claimed over 100,000 — physically impossible numbers, rice plants supposedly so dense that children could stand on top of them. Officials staged photographs. Everyone knew the numbers were fake. Everyone reported them anyway, because the alternative was being labeled a saboteur. The central government, delighted by the bounty, increased grain requisitions based on the reported yields. Farmers starved as the requisitions ate the difference between the real number and the fantasy.

You’ve seen this meeting.

One team reports their AI copilot “reduced development time by 40%.” The next team, not to be outdone, reports 60%. A third claims their AI agent “automated 80% of analyst workflows.” Nobody asks how these were measured. Nobody checks the methodology. Nobody points out that the team claiming 80% automation still has the same headcount doing the same work. The numbers go into a slide deck. The slide deck goes to the board. The board is delighted. The board increases investment.


Then someone — there’s always someone — builds a leaderboard tracking how many prompts you wrote this week, how much of your code is AI-generated, your ranking versus your team, versus your org, versus the entire company. One day your company announces: stop everything, it’s AI Week. Build something with AI. Show what you’ve got. You think you’re done after the hackathon? No no no. Now you have to promote it. Daily posts: look what I built, here’s how many agents I used, here’s how many skills I shipped. Pull in teammates. Pull in strangers. Ask for feedback. “Humbly.”

Your AI usage is now a KPI. You are being evaluated on how much grain you reported, not how much grain you grew. This is Goodhart’s Law at organizational scale: when a measure becomes a target, it ceases to be a good measure. The metric was supposed to track whether AI is making the company better. Instead, the entire company is now optimizing to make the metric look better. The beatings will continue until adoption improves.

Killing the Sparrows

The Great Leap Forward’s most tragicomic chapter was the 除四害运动 (Eliminate Four Pests Campaign). Mao declared sparrows an enemy of the state — they ate grain seeds, so killing them would increase harvests. The entire country mobilized. Citizens banged pots and pans to keep sparrows airborne until they dropped dead from exhaustion. Children climbed trees to smash nests. Villages competed for the highest kill count. It worked. They nearly eradicated sparrows.

Then the locusts came.

Sparrows ate locusts. Without sparrows, locust populations exploded. The swarms devoured far more grain than the sparrows ever did. The campaign to save the harvest destroyed it. Mao quietly replaced sparrows with bedbugs on the official pest list and never spoke of it again.

Every AI Great Leap Forward has its sparrow campaign.

Middle managers are the sparrows. They’re declared pests — too many layers, too slow, too expensive. Flatten the org! Move faster! Let AI handle coordination! So companies eliminate M1s, turn managers into tech leads running pods, and let the teams self-organize with AI tools.


Then the locusts come. Those middle managers held institutional knowledge — which customer had the weird integration, why the data model had that inexplicable column, the undocumented business rule that kept compliance from flagging every third transaction. That context lived in their heads. Now they’re gone, and the AI system they were replaced with needs exactly that context to function.

QA is a sparrow too. “AI writes the tests now.” So you cut QA. The AI writes tests that validate its own assumptions — a machine checking its own homework. Senior engineers who mentored juniors? Sparrows. Documentation writers? Sparrows. The ops team that knew how to restart the weird legacy service at 2 AM? Definitely sparrows.

Each elimination looks rational in isolation. The second-order effects arrive six months later, and by then nobody connects the locust swarm to the dead sparrows.

Let a Hundred Skills Bloom

In 1956, Mao launched the 百花运动 (Hundred Flowers Campaign): “Let a hundred flowers bloom, let a hundred schools of thought contend.” Speak freely. Share your honest criticisms. The Party wants to hear your real thoughts.

Intellectuals took the bait. They spoke openly.

Then came the 反右运动 (Anti-Rightist Campaign). Everyone who had spoken honestly was identified, labeled, and purged. The Hundred Flowers was a trap — an efficient mechanism for surfacing exactly who knew what, then eliminating them. The lesson every survivor internalized: never honestly reveal what you know, because it will be used against you.

Now Meta and a growing list of companies have launched their own Hundred Flowers. The mandate: every employee must build “agent skills” — distill your subject matter expertise into structured prompts and workflows that AI agents can execute. Or even worse, build “agents” using some drag and drop legacy tech that never worked and had already been given up by the leading edge labs back in 2024. Encode your judgment. Document your decision-making. Make yourself legible to the machine.


The stated goal is distilling your subject matter expertise. Turn the expert’s craft into the organization’s asset. What leadership actually wants is to convert individual human capital into organizational capital that survives any single employee’s departure.

Employees see the game immediately. If I distill my ten years of domain expertise into a skill that any junior can invoke with a prompt, I have just automated my own replacement. The knowledge that makes me the critical node — the person they call at 2 AM, the one who knows why the model does that weird thing for Brazilian entities — is my moat. You’re asking me to drain it.

So they adapt to build anti-distillation agent skills, just as the intellectuals adapted after the Anti-Rightist trap.

We are already seeing agent skills built specifically for job security. The performative skill looks comprehensive and demos well but omits the 20% of edge-case knowledge that makes it work in production — you are now more indispensable, not less. The poison pill encodes expertise faithfully but with subtle dependencies on context only you hold — internal wikis you maintain, terminology you coined, data pipelines you own — so removing you causes outputs to drift quietly until someone says “we need to bring them back on this.” The complexity moat makes the skill so architecturally entangled with your other work that extracting your knowledge is harder than keeping you around. You are now a load-bearing wall disguised as a decoration.

The campaign designed to reduce organizational dependence on individual experts has now created experts who are strategically indispensable — not because of what they know, but because of how they’ve booby-trapped the system to need them. The flowers bloomed. They’re full of thorns.

Meanwhile, the “everyone builds with AI” mandate has turned into a hunger game of scope creep. Engineers use AI to generate designs and ship prototypes without waiting for the design team. PMs use AI to write code and spin up dashboards without filing engineering tickets. Designers use AI to build product specs and run user research without looping in product. Everyone is expanding into everyone else’s territory — not because they’re better at it, but because AI makes it possible and the mandate makes it rewarded. The org chart says collaboration; the incentive structure says land grab. What looks like productivity gains is actually a war of all against all, where every function is simultaneously trying to prove it can absorb the others before the others absorb it.

Engineering, PM, and Design scope creep

The Famine Comes Later

The Great Leap Forward’s famine didn’t arrive immediately. For a while, the numbers looked spectacular. Every province reported record harvests. Leadership was pleased. The requisitions increased.

The famine came when the real grain ran out but the reported grain kept flowing upward.

We’re still in the reporting phase. The dashboards are green. Adoption is up and to the right. Every team reports productivity gains that, if summed across the company, would imply engineers are shipping at 300% efficiency while somehow still missing the same deadlines.

Underneath the metrics, it’s a race to the bottom. One person builds a skill, so someone else builds a better one. One person demos a prototype, so someone else benchmarks it. Everyone competing to prove, more thoroughly than the next person, that their own role is replaceable. All accelerating. All sinking.

The sparrows are dead. The locusts haven’t arrived yet. The flowers bloomed full of poison pills. The furnaces produced pig iron stamped as steel that’s now load-bearing. The grain numbers look fantastic.

But it’s fine. We’re surpassing and catching up.

Oh, and Klarna? The company that loudly announced it would replace Salesforce with internal AI solutions? They quietly replaced Salesforce with another SaaS vendor instead. The backyard furnace couldn’t produce real steel. They bought it from a different mill.

The question nobody’s asking: what did any of this actually produce?

The answer, when it arrives, will be awkward.

References

@article{
    leehanchung,
    author = {Lee, Hanchung},
    title = {The AI Great Leap Forward},
    year = {2026},
    month = {04},
    day = {05},
    howpublished = {\url{https://leehanchung.github.io}},
    url = {https://leehanchung.github.io/blogs/2026/04/05/the-ai-great-leap-forward/}
}

Tecmo Super Bowl returns in trading card form for upcoming Topps set

Bo Jackson and present-day stars get the Tecmo treatment for what is likely to be a highly coveted set of cards.

These Old Bike Frames Upcycled Into Armchairs Are The Coolest Thing You’ll See Today


Most upcycling projects ask you to forget what something used to be. Omri Piko Kahan’s bike frame chairs ask the opposite. The geometry is still unmistakably a bicycle frame: the head tube, the top tube, the triangulated rear triangle, all of it present and accounted for, just oriented sideways and asked to hold a person instead of propel one. Kahan, an industrial designer based in Israel, builds lounge chairs from pairs of retired frames, and the whole point is that the donor material remains fully readable, repurposed without being disguised.

Structurally, the approach is clean and considered. Each frame pair is positioned symmetrically, fork and chainstay ends touching the floor as legs, the top tube running horizontally as an armrest. A slung seat and backrest in leather or canvas complete the form. The result has the relaxed posture of a Barcelona chair and the material honesty of something that was clearly built, not styled.

Designer: Omri Piko Kahan

Bicycle frames are absurdly overbuilt for what Kahan is asking them to do. A modern aluminum road frame is engineered to survive repeated impact loads from a rider pushing 300 watts through rough tarmac, and it does that while weighing somewhere between 1,000 and 1,400 grams. The structural surplus in that kind of engineering is enormous, which is why two of them positioned as a chair frame and asked to support a seated adult is, from a load-bearing standpoint, almost comically within spec. The geometry does the rest. Bicycle frames already resolve forces through triangulated sections, and a lounge chair asks for exactly that kind of lateral and compressive stability.

What Kahan has figured out is the orientation problem. Flip a frame on its side and the existing tube angles don’t automatically produce a useful chair geometry. The fork legs and chainstay ends need to hit the floor at the right height relative to each other, the top tube needs to land at armrest height, and the whole thing needs to produce a seat rake that doesn’t pitch you forward or swallow you whole. The matched top tube angles across both frames in the Cube and Trek build suggest this took real iteration, because they align with a precision that reads as deliberate rather than lucky. Filed fillets at the junctions and a custom setback upper support holding the sling confirm someone was paying close attention to finish quality.

The two builds photographed so far, one pairing a blue Cube road frame with a Trek, another combining a GT Transeo 3.0 with what appears to be a Supreme-branded MTB frame, show how much the donor bikes drive the final character of each piece. The GT build in particular has a longer wheelbase geometry that gives the chair a wider, more reclined stance than the Cube version. Kahan is taking custom orders, with pricing worked out per commission, which makes sense given that no two donor frame combinations will produce the same structural or ergonomic outcome.

The post These Old Bike Frames Upcycled Into Armchairs Are The Coolest Thing You’ll See Today first appeared on Yanko Design.


A Cryptography Engineer’s Perspective on Quantum Computing Timelines


My position on the urgency of rolling out quantum-resistant cryptography has changed compared to just a few months ago. You might have heard this privately from me in the past weeks, but it’s time to signal and justify this change of mind publicly.

There had been rumors for a while of expected and unexpected progress towards cryptographically-relevant quantum computers, but over the last week we got two public instances of it.

First, Google published a paper dramatically revising down the estimated number of logical qubits and gates required to break 256-bit elliptic curves like NIST P-256 and secp256k1, which makes the attack doable in minutes on fast-clock architectures like superconducting qubits. They weirdly[1] frame it around cryptocurrencies and mempools and salvaged goods or something, but the far more important implication is practical WebPKI MitM attacks.

Shortly after, a different paper came out from Oratomic showing 256-bit elliptic curves can be broken in as few as 10,000 physical qubits if you have non-local connectivity, like neutral atoms seem to offer, thanks to better error correction. This attack would be slower, but even a single broken key per month can be catastrophic.

They have this excellent graph on page 2 (Babbush et al. is the Google paper, which they presumably had preview access to):

graph of physical qubit cost over time

Overall, it looks like everything is moving: the hardware is getting better, the algorithms are getting cheaper, the requirements for error correction are getting lower.

I’ll be honest, I don’t actually know what all the physics in those papers means. That’s not my job and not my expertise. My job includes risk assessment on behalf of the users that entrusted me with their safety. What I know is what at least some actual experts are telling us.

Heather Adkins and Sophie Schmieg are telling us that “quantum frontiers may be closer than they appear” and that 2029 is their deadline. That’s in 33 months, and no one had set such an aggressive timeline until this month.

Scott Aaronson tells us that the “clearest warning that [he] can offer in public right now about the urgency of migrating to post-quantum cryptosystems” is a vague parallel with how nuclear fission research stopped happening in public between 1939 and 1940.

The timelines presented at RWPQC 2026, just a few weeks ago, were much tighter than a couple years ago, and are already partially obsolete. The joke used to be that quantum computers have been 10 years out for 30 years now. Well, not true anymore, the timelines have started progressing.

If you are thinking “well, this could be bad, or it could be nothing!” I need you to recognize how immediately dispositive that is. The bet is not “are you 100% sure a CRQC will exist in 2030?”, the bet is “are you 100% sure a CRQC will NOT exist in 2030?” I simply don’t see how a non-expert can look at what the experts are saying, and decide “I know better, there is in fact < 1% chance.” Remember that you are betting with your users’ lives.[2]

Put another way, even if the most likely outcome was no CRQC in our lifetimes, that would be completely irrelevant, because our users don’t want just better-than-even odds[3] of being secure.

Sure, papers about an abacus and a dog are funny and can make you look smart and contrarian on forums. But that’s not the job, and those arguments betray a lack of expertise. As Scott Aaronson said:

Once you understand quantum fault-tolerance, asking “so when are you going to factor 35 with Shor’s algorithm?” becomes sort of like asking the Manhattan Project physicists in 1943, “so when are you going to produce at least a small nuclear explosion?”

The job is not to be skeptical of things we’re not experts in, the job is to mitigate credible threats, and there are credible experts that are telling us about an imminent threat.

In summary, it might be that in 10 years the predictions will turn out to be wrong, but at this point they might also be right soon, and that risk is now unacceptable.

Now what

Concretely, what does this mean? It means we need to ship.

Regrettably, we’ve got to roll out what we have.[4] That means large ML-DSA signatures shoved in places designed for small ECDSA signatures, like X.509, with the exception of Merkle Tree Certificates for the WebPKI, which is thankfully far enough along.
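For a sense of the size mismatch, here is a minimal Go sketch comparing the fixed ML-DSA-44 sizes from FIPS 204 against typical ECDSA P-256 and Ed25519 signature sizes; the constants are quoted from the specifications rather than computed, so treat it purely as an illustration of the gap that X.509 and similar formats now have to absorb.

```go
package main

import "fmt"

// Sizes in bytes. ML-DSA-44 values are from FIPS 204; the ECDSA figure is a
// typical DER-encoded P-256 signature; Ed25519 is from RFC 8032.
const (
	ecdsaP256Sig  = 72 // roughly 70-72 bytes DER-encoded (r, s)
	ed25519Sig    = 64
	mlDSA44PubKey = 1312
	mlDSA44Sig    = 2420
)

func main() {
	fmt.Printf("ECDSA P-256 signature: ~%d bytes\n", ecdsaP256Sig)
	fmt.Printf("Ed25519 signature:      %d bytes\n", ed25519Sig)
	fmt.Printf("ML-DSA-44 public key:   %d bytes\n", mlDSA44PubKey)
	fmt.Printf("ML-DSA-44 signature:    %d bytes (~%dx an ECDSA signature)\n",
		mlDSA44Sig, mlDSA44Sig/ecdsaP256Sig)
}
```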

This is not the article I wanted to write. I’ve had a pending draft for months now explaining we should ship PQ key exchange now, but take the time we still have to adapt protocols to larger signatures, because they were all designed with the assumption that signatures are cheap. That other article is now wrong, alas: we don’t have the time if we need to be finished by 2029 instead of 2035.

For key exchange, the migration to ML-KEM is going well enough but:

  1. Any non-PQ key exchange should now be considered a potential active compromise, worthy of warning the user like OpenSSH does, because it’s very hard to make sure all secrets transmitted over the connection or encrypted in the file have a shorter shelf life than three years.

  2. We need to forget about non-interactive key exchanges (NIKEs) for a while; we only have KEMs (which are only unidirectionally authenticated without interactivity) in the PQ toolkit. A minimal sketch of the KEM interface follows this list.
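To make the KEM-versus-NIKE point concrete, here is a minimal sketch of the KEM interface using Go’s crypto/mlkem package (in the standard library since Go 1.24). Encapsulation needs only the responder’s public key, so without signatures or an extra round trip only the responder, the party able to decapsulate, ends up implicitly authenticated; there is no way to combine two static public keys into a shared secret the way Diffie-Hellman allows.

```go
package main

import (
	"bytes"
	"crypto/mlkem"
	"fmt"
	"log"
)

func main() {
	// Responder: generate an ML-KEM-768 key pair and publish the
	// encapsulation (public) key.
	dk, err := mlkem.GenerateKey768()
	if err != nil {
		log.Fatal(err)
	}
	ek := dk.EncapsulationKey()

	// Initiator: derive a fresh shared key for the responder. This step needs
	// nothing from the initiator, which is why a KEM on its own cannot
	// authenticate the sending side.
	sharedInitiator, ciphertext := ek.Encapsulate()

	// Responder: recover the same shared key from the ciphertext.
	sharedResponder, err := dk.Decapsulate(ciphertext)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("shared keys match:", bytes.Equal(sharedInitiator, sharedResponder))
}
```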

It no longer makes sense to deploy new schemes that are not post-quantum. I know, pairings were nice. I know, everything PQ is annoyingly large. I know, we had basically just figured out how to do ECDSA over P-256 safely. I know, there might not be practical PQ equivalents for threshold signatures or identity-based encryption. Trust me, I know it stings. But it is what it is.

Hybrid classic + post-quantum authentication makes no sense to me anymore and will only slow us down; we should go straight to pure ML-DSA-44.[6] Hybrid key exchange is reasonably easy, with ephemeral keys that don’t even need a type or wire format for the composite private key, and a couple years ago it made sense to take the hedge. Authentication is not like that, and even with draft-ietf-lamps-pq-composite-sigs-15 with its 18 composite key types nearing publication, we’d waste precious time collectively figuring out how to treat these composite keys and how to expose them to users. It’s also been two years since Kyber hybrids and we’ve gained significant confidence in the Module-Lattice schemes. Hybrid signatures cost time and complexity budget,[5] and the only benefit is protection if ML-DSA is classically broken before the CRQCs come, which looks like the wrong tradeoff at this point.
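For the key exchange side, here is a minimal sketch of what an ephemeral hybrid X25519 + ML-KEM-768 agreement can look like using only the standard library (crypto/ecdh plus crypto/mlkem). The message flow and the hash-of-concatenation combiner are illustrative assumptions rather than any particular protocol; TLS’s X25519MLKEM768, for instance, feeds both secrets into its own key schedule.

```go
package main

import (
	"crypto/ecdh"
	"crypto/mlkem"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"log"
)

func main() {
	// Client hello: generate ephemeral keys for both components and send the
	// X25519 public key and the ML-KEM encapsulation key to the server.
	xClient, err := ecdh.X25519().GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	kemClient, err := mlkem.GenerateKey768()
	if err != nil {
		log.Fatal(err)
	}

	// Server reply: its own ephemeral X25519 share plus a KEM ciphertext
	// encapsulated to the client's ML-KEM key.
	xServer, err := ecdh.X25519().GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	ecdhSrv, err := xServer.ECDH(xClient.PublicKey())
	if err != nil {
		log.Fatal(err)
	}
	kemSrv, kemCiphertext := kemClient.EncapsulationKey().Encapsulate()

	// Client: complete both halves from the server's reply.
	ecdhCli, err := xClient.ECDH(xServer.PublicKey())
	if err != nil {
		log.Fatal(err)
	}
	kemCli, err := kemClient.Decapsulate(kemCiphertext)
	if err != nil {
		log.Fatal(err)
	}

	// Combine the two shared secrets; the session stays secure as long as
	// either component holds up.
	clientKey := sha256.Sum256(append(ecdhCli, kemCli...))
	serverKey := sha256.Sum256(append(ecdhSrv, kemSrv...))
	fmt.Println("keys match:", clientKey == serverKey)
}
```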

In symmetric encryption, we don’t need to do anything, thankfully. There is a common misconception that protection from Grover requires 256-bit keys, but that is based on an exceedingly simplified understanding of the algorithm. A more accurate characterization is that with a circuit depth of 2⁶⁴ logical gates (the approximate number of gates that current classical computing architectures can perform serially in a decade) running Grover on a 128-bit key space would require a circuit size of 2¹⁰⁶. There’s been no progress on this that I am aware of, and indeed there are old proofs that Grover is optimal and its quantum speedup doesn’t parallelize. Unnecessary 256-bit key requirements are harmful when bundled with the actually urgent PQ requirements, because they muddle the interoperability targets and they risk slowing down the rollout of asymmetric PQ cryptography.
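A back-of-the-envelope version of that figure, using the depth-limited Grover cost estimate from the NIST PQC call for proposals (the 2¹⁷⁰ constant is NIST’s estimate for an AES-128 key search, folding in the cost of the AES oracle circuit per iteration):

```latex
% Grover's quadratic speedup does not parallelize, so capping circuit depth at
% MAXDEPTH forces a search over 2^128 keys to cost roughly 2^170 / MAXDEPTH
% total gates (NIST's AES-128 estimate). With MAXDEPTH = 2^64:
\[
  \text{gates} \approx \frac{2^{170}}{\mathrm{MAXDEPTH}}
              = \frac{2^{170}}{2^{64}} = 2^{106}.
\]
```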

In my corner of the world, we’ll have to start thinking about what it means for half the cryptography packages in the Go standard library to be suddenly insecure, and how to balance the risk of downgrade attacks and backwards compatibility. It’s the first time in our careers we’ve faced anything like this: SHA-1 to SHA-256 was not nearly this disruptive,[7] and even that took forever with the occasional unexpected downgrade attack.

Trusted Execution Environments (TEEs) like Intel SGX and AMD SEV-SNP and in general hardware attestation are just f***d. All their keys and roots are not PQ and I heard of no progress in rolling out PQ ones, which at hardware speeds means we are forced to accept they might not make it, and can’t be relied upon. I had to reassess a whole project because of this, and I will probably downgrade them to barely “defense in depth” in my toolkit.

Ecosystems with cryptographic identities (like atproto and, yes, cryptocurrencies) need to start migrating very soon, because if the CRQCs come before they are done, they will have to make extremely hard decisions, picking between letting users be compromised and bricking them.

File encryption is especially vulnerable to store-now-decrypt-later attacks, so we’ll probably have to start warning and then erroring out on non-PQ age recipient types soon. It’s unfortunately only been a few months since we even added PQ recipients, in version 1.3.0.[8]

Finally, this week I started teaching a PhD course in cryptography at the University of Bologna, and I’m going to mention RSA, ECDSA, and ECDH only as legacy algorithms, because that’s how those students will encounter them in their careers. I know, it feels weird. But it is what it is.

For more willing-or-not PQ migration, follow me on Bluesky at @filippo.abyssdomain.expert or on Mastodon at @filippo@abyssdomain.expert.

The picture

Traveling back from an excellent AtmosphereConf 2026, I saw my first aurora, from the north-facing window of a Boeing 747.

Aurora borealis seen from an airplane window, with green vertical columns and curtains of light above a cloud layer, stars visible in the dark sky above.

My work is made possible by Geomys, an organization of professional Go maintainers, which is funded by Ava Labs, Teleport, Tailscale, and Sentry. Through our retainer contracts they ensure the sustainability and reliability of our open source maintenance work and get a direct line to my expertise and that of the other Geomys maintainers. (Learn more in the Geomys announcement.) Here are a few words from some of them!

Teleport — For the past five years, attacks and compromises have been shifting from traditional malware and security breaches to identifying and compromising valid user accounts and credentials with social engineering, credential theft, or phishing. Teleport Identity is designed to eliminate weak access patterns through access monitoring, minimize attack surface with access requests, and purge unused permissions via mandatory access reviews.

Ava Labs — We at Ava Labs, maintainer of AvalancheGo (the most widely used client for interacting with the Avalanche Network), believe the sustainable maintenance and development of open source cryptographic protocols is critical to the broad adoption of blockchain technology. We are proud to support this necessary and impactful work through our ongoing sponsorship of Filippo and his team.


  1. The whole paper is a bit goofy: it has a zero-knowledge proof for a quantum circuit that will certainly be rederived and improved upon before the actual hardware to run it on will exist. They seem to believe this is about responsible disclosure, so I assume this is just physicists not being experts in our field in the same way we are not experts in theirs. 

  2. “You” is doing a lot of work in this sentence, but the audience for this post is a bit unusual for me: I’m addressing my colleagues and the decision-makers that gate action on deployment of post-quantum cryptography. 

  3. I had a reviewer object to an attacker probability of success of 1/536,870,912 (0.0000002%, 2⁻²⁹) after 2⁶⁴ work, correctly so, because in cryptography we usually target 2⁻³². 

  4. Why trust the new stuff, though? There are two parts to it: the math and the implementation. The math is also not my job, so I again defer to experts like Sophie Schmieg, who tells us that she is very confident in lattices, and the NSA, who approved ML-KEM and ML-DSA at the Top Secret level for all national security purposes. It is also older than elliptic curve cryptography was when it first got deployed. (“Doesn’t the NSA lie to break our encryption?” No, the NSA has never intentionally jeopardized US national security with a non-NOBUS backdoor, and there is no way for ML-KEM and ML-DSA to hide a NOBUS backdoor.) On the implementation side, I am actually very qualified to have an opinion, having made cryptography implementation and testing my niche. ML-KEM and ML-DSA are a lot easier to implement securely than their classical alternatives, and with the better testing infrastructure we have now I expect to see exceedingly few bugs in their implementations. 

  5. One small exception: if you already have the ability to convey multiple signatures from multiple public keys in your protocol, it can make sense to do “poor man’s hybrid signatures” by just requiring 2-of-2 signatures from one classical public key and one pure PQ key. Some of the tlog ecosystem might pick this route, but that’s only because the cost is significantly lowered by the existing support for nested n-of-m signing groups. 

  6. Why ML-DSA-44 when we usually use ML-KEM-768 instead of ML-KEM-512? Because ML-KEM-512 is Level 1, while ML-DSA-44 is Level 2, so it already has a bit of margin against minor cryptanalytic improvements. 

  7. Because SHA-256 is a better plug-in replacement for SHA-1, because SHA-1 was a much smaller surface than all of RSA and ECC, and because SHA-1 was not that broken: it still retained preimage resistance and could still be used in HMAC and HKDF. 

  8. The delay was in large part due to my unfortunate decision of blocking on the availability of HPKE hybrid recipients, which blocked on the CFRG, which took almost two years to select a stable label string for X-Wing (January 2024) with ML-KEM (August 2024), despite making precisely no changes to the designs. The IETF should have an internal post-mortem on this, but I doubt we’ll see one. 


Linux kernel maintainers are following through on removing Intel 486 support


One point in favor of the sprawling Linux ecosystem is its broad hardware support—the kernel officially supports everything from '90s-era PC hardware to Arm-based Apple Silicon chips, thanks to decades of combined effort from hardware manufacturers and motivated community members.

But nothing can last forever, and for a few years now, Linux maintainers (including Linus Torvalds) have been pushing to drop kernel support for Intel's 80486 processor. This chip was originally introduced in 1989, was replaced by the first Intel Pentium in 1993, and was fully discontinued in 2007. Code commits suggest that Linux kernel version 7.1 will be the first to follow through, making it impossible to build a version of the kernel that will support the 486; Phoronix says that additional kernel changes to remove 486-related code will follow in subsequent kernel versions.

Although these chips haven't changed in decades, maintaining support for them in modern software isn't free.

"In the x86 architecture we have various complicated hardware emulation facilities on x86-32 to support ancient 32-bit CPUs that very, very few people are using with modern kernels," writes Linux kernel contributor Ingo Molnar in his initial patch removing 486 support from the kernel. "This compatibility glue is sometimes even causing problems that people spend time to resolve, which time could be spent on other things."

This echoes comments from Linus Torvalds in 2022, suggesting there was "zero real reason for anybody to waste one second of development effort" on 486-related problems. The removal of 486 support would also likely affect a handful of 486-compatible chips from other companies, including the Cyrix 5x86 and the Am5x86 from AMD. Molnar was also a driving force the last time Linux dropped support for an older Intel chip—support for the 80386 processor family was removed in kernel version 3.8 back in early 2013.

"Unfortunately there's a nostalgic cost: your old original 386 DX33 system from early 1991 won't be able to boot modern Linux kernels anymore," Molnar wrote. "Sniff."

A tree falling in a forest

The practical impact of the end of 486 support will be minimal, because vanishingly few modern Linux distributions still make use of the kernel’s 486 support.

Many of the consumer-focused Linux distros have more Windows-like minimum system requirements, an acknowledgment of how CPU and RAM-intensive modern web browsers and browser-based apps have become; Ubuntu raised its minimum RAM requirement from 4GB to 6GB for the 26.04 LTS release. Even lightweight distros like Xubuntu or AntiX recommend 512MB to 1GB of RAM, amounts far in excess of what any 486-based PC ever shipped with (or could reasonably work with, using actual hardware).

One of the few actively maintained distros that explicitly mentions 486 support is Tiny Core Linux (and its GUI-less counterpart, Micro Core Linux). These OSes can run on a 486DX chip as long as it's paired with at least 48MB or 28MB of RAM, respectively, though a Pentium 2 with at least 128MB of RAM is the recommended configuration. But even on the Tiny Core forums, few users are mourning the loss of 486 support.

"I get the nostalgia, like classic cars, but a car you've spent a year's worth of weekends fixing up isn't a daily driver," writes user andyj. "Some of the extensions I maintain, like rsyslog and mariadb, require that the CPU be set to i586 as they will no longer compile for i486. The end is already here."

Those still using a 486 for one reason or another will still be able to run older Linux kernels and vintage operating systems—running old software without emulation or virtualization is one of the few reasons to keep booting up hardware this old. If you demand an actively maintained OS, you still have options, though—the FreeDOS project isn't Linux, but it does still run on PCs going all the way back to the original IBM Personal Computer and its 16-bit Intel 8088.



