
Google bumps up Q Day deadline to 2029, far sooner than previously thought

Google is dramatically shortening its readiness deadline for the arrival of Q Day, the point at which quantum computers will be able to break the public-key cryptography algorithms that secure decades' worth of secrets belonging to militaries, banks, governments, and nearly every individual on Earth.

In a post published on Wednesday, Google said it is giving itself until 2029 to prepare for this event. The post went on to warn that the rest of the world needs to follow suit by adopting PQC—short for post-quantum cryptography—algorithms to augment or replace elliptic curves and RSA, both of which will be broken.

The end is nigh

“As a pioneer in both quantum and PQC, it’s our responsibility to lead by example and share an ambitious timeline,” wrote Heather Adkins, Google’s VP of security engineering, and Sophie Schmieg, a senior cryptography engineer. “By doing this, we hope to provide the clarity and urgency needed to accelerate digital transitions not only for Google, but also across the industry.”

Separately, Google detailed its timeline for making Android quantum resistant, the first time the company has publicly discussed PQC support in the operating system. Starting with the beta version, Android 17 will support ML-DSA, a digital signature algorithm standardized by the National Institute of Standards and Technology. ML-DSA will be added to Android's hardware root of trust, allowing developers to use PQC keys to sign their apps and verify other software signatures.

Google said it now has ML-DSA integrated into the Android verified boot library, which secures the boot sequence against manipulation. Google engineers are also beginning to move remote attestation to PQC. Remote attestation is a feature that allows a device to prove its current state to a remote server to, for example, prove to a server on a corporate network that it's running a secure OS version.

Google further said it's adding ML-DSA support to the Android Keystore so that developers can generate ML-DSA keys and store them within the secure hardware of the device directly. Google is also planning to migrate the Play Store, and the developer signatures on every app listed in it, to PQC.

The additions are likely to put a significant workload on Android developers.

So what's spooking Google so much?

Wednesday's hard deadline came as a surprise to many cryptography engineers, including those who have been active in the PQC transition for years.

"That is certainly a significant acceleration/tightening of the public transition timelines we've seen to date, and is accelerated over even what we've seen the US government ask for," Brian LaMacchia, a cryptography engineer who oversaw Microsoft’s post-quantum transition from 2015 to 2022 and now works at Farcaster Consulting Group, said in an interview. "The 2029 timeline is an aggressive speedup but raises the question of what's motivating them."

Google didn't lay out the rationale for the revision in either of its posts. A spokeswoman didn't immediately provide answers to questions sent by email.

Estimates for when Q Day will arrive have varied widely since the mid-1990s, when mathematician Peter Shor first showed that a sufficiently powerful quantum computer could factor integers in polynomial time, far faster than any known classical algorithm. That put the world on notice that RSA's days were numbered. Follow-on research showed quantum computers provide a similar speedup in solving the discrete log problem that underpins elliptic curve cryptography.

The timeline for this arrival turns on when quantum computers will contain the required number of error-corrected qubits. In 2012, most estimates held that a 2048-bit RSA key could be broken by a quantum computer with a billion physical qubits. By 2019, the estimate had dropped to 20 million physical qubits. A running joke among researchers is that Q Day has been 10 to 20 years away for the past 30 years.

Last June, Google published research that once again drastically lowered the expected threshold for breaking RSA. It showed that a 2048-bit RSA integer could be factored in less than a week with a quantum computer with 1 million “noisy qubits,” meaning qubits that are prone to errors resulting from environmental conditions that disrupt the quantum state. The research was led by Craig Gidney, the same scientist behind the 2019 estimate.

In preparation for Q Day, cryptographers have devised new encryption algorithms that rely on problems where quantum computers hold no known advantage over classical ones. Rather than depending on factoring or the discrete log, one approach uses mathematical structures known as lattices; a second uses stateless hash-based digital signatures. The National Institute of Standards and Technology has standardized several such algorithms that have yet to be broken and are presumed secure.

In 2022, the NSA set a 2033 deadline for PQC readiness in national security systems, with a 2030 deadline for a few specific applications.

More recently, deadlines have been in flux as both the Biden and Trump administrations have issued executive orders prioritizing quantum readiness. Currently, the NSA is adhering to a 2031 deadline.

PQC algorithms have made their way into a variety of products and protocols, albeit largely in piecemeal fashion. Last year, the Signal messenger added ML-KEM-768, an implementation of the CRYSTALS-Kyber algorithm, to its existing encryption engine. Software and services from Google, Apple, Cloudflare, and dozens of others have done the same.

“Quantum computers will pose a significant threat to current cryptographic standards, and specifically to encryption and digital signatures,” Google’s Wednesday morning post stated. “The threat to encryption is relevant today with store-now-decrypt-later attacks, while digital signatures are a future threat that require the transition to PQC prior to a Cryptographically Relevant Quantum Computer (CRQC). That’s why we’ve adjusted our threat model to prioritize PQC migration for authentication services—an important component of online security and digital signature migrations. We recommend that other engineering teams follow suit.”


Supreme Court rejects Sony's attempt to kick music pirates off the Internet

The Supreme Court today decided that Internet service providers cannot be held liable for their customers' copyright infringement unless they take specific steps that cause users to violate copyrights. The court ruled unanimously in favor of Internet provider Cox Communications, though two justices did not agree with the majority's reasoning.

The ruling effectively means that ISPs do not have to conduct mass terminations of Internet users accused of illegally downloading or uploading pirated files. If the court had ruled otherwise, ISPs could have been compelled to strictly police their networks for piracy in order to avoid billion-dollar court verdicts under the Digital Millennium Copyright Act (DMCA).

The long-running case is Cox Communications v. Sony Music Entertainment. Cox was hit with a $1 billion verdict for music piracy in 2019. Although the damages award was overturned in 2024, a federal appeals court still found that Cox was liable for willful contributory infringement.

The Supreme Court decided to take up Cox's appeal of the finding and heard oral arguments in December 2025. In today's ruling, the court rejected Sony's claims and found that Cox is not liable for its users' copyright infringement.

Justice Clarence Thomas delivered the opinion of the court. "Under our precedents, a company is not liable as a copyright infringer for merely providing a service to the general public with knowledge that it will be used by some to infringe copyrights. Accordingly, we reverse," Thomas wrote.

Cox did not induce subscribers to pirate music

Thomas' opinion was joined by Chief Justice John Roberts, Samuel Alito, Elena Kagan, Neil Gorsuch, Brett Kavanaugh, and Amy Coney Barrett. Justice Sonia Sotomayor filed a concurring opinion that was joined by Ketanji Brown Jackson. Sotomayor objected to the majority limiting liability to the degree it did, but agreed that Cox is not liable for infringement.

"The provider of a service is contributorily liable for the user’s infringement only if it intended that the provided service be used for infringement," Thomas wrote. "The intent required for contributory liability can be shown only if the party induced the infringement or the provided service is tailored to that infringement."

The court decided today that a service is tailored to infringement if it is not capable of "substantial" or "commercially significant" noninfringing uses. The court cited Sony's 1984 victory in the Betamax case, in which justices found that the Betamax was capable of noninfringing uses and that Sony's sale of it did not constitute contributory infringement. Sony's win in 1984 thus contributed to its loss today.

The Supreme Court's 2005 ruling in MGM Studios v. Grokster was also important. Recalling the Grokster ruling, Thomas said the court has held that a service provider may be found to induce infringement if it actively encourages it, such as by promoting and marketing software as a tool to infringe copyrights. By contrast, Cox's actions as the provider of Internet service did not induce infringement, Thomas wrote:

Thus, Cox is not contributorily liable for the infringement of Sony’s copyrights. Cox provided Internet service to its subscribers, but it did not intend for that service to be used to commit copyright infringement. Holding Cox liable merely for failing to terminate Internet service to infringing accounts would expand secondary copyright liability beyond our precedents.

Cox neither induced its users’ infringement nor provided a service tailored to infringement. As for inducement, Cox did not “induce” or “encourage” its subscribers to infringe in any manner. Sony provided no “evidence of express promotion, marketing, and intent to promote” infringement. And, Cox repeatedly discouraged copyright infringement by sending warnings, suspending services, and terminating accounts. As for providing a service tailored to infringement, Cox’s Internet service was clearly “capable of ‘substantial’ or ‘commercially significant’ noninfringing uses.” Cox did not tailor its service to make copyright infringement easier. Cox simply provided Internet access, which is used for many purposes other than copyright infringement.

Cox: Ruling affirms ISPs are not copyright police

Cox hailed the ruling in a statement the company provided to Ars. “The Supreme Court’s unanimous opinion is a decisive victory for the broadband industry and for the American people who depend on reliable Internet service," Cox said. "This opinion affirms that Internet service providers are not copyright police and should not be held liable for the actions of their customers—and after years of battling in the trial and appellate courts, we have definitively shut down the music industry’s aspirations of mass evictions from the Internet."

The Recording Industry Association of America (RIAA) said it is "disappointed in the court's decision vacating a jury's determination that Cox Communications contributed to mass scale copyright infringement, based on overwhelming evidence that the company knowingly facilitated theft. To be effective, copyright law must protect creators and markets from harmful infringement and policymakers should look closely at the impact of this ruling." The RIAA argued that the ruling "is narrow, applying only to 'contributory infringement' cases involving defendants like Cox that do not themselves copy, host, distribute, or publish infringing material or control or induce such activity.”

We contacted Sony about its court loss and will update this article if it provides a response.

Cox's arguments were supported by digital rights groups. "Today’s decision laid to rest the idea that private actors—and not just any private actors, but record labels—can determine when customers deserve to be excluded from applying to jobs, paying bills, and getting an education," Meredith Rose, senior policy counsel at Public Knowledge, said. "That view of the world is not only nonsensical and dated, but also fundamentally anti-democratic. Today’s decision is a long-overdue win for common sense.”

The Trump administration also supported Cox's case over the past year, telling the Supreme Court that a Sony victory could compel ISPs to "terminat[e] subscribers after receiving a single notice of alleged infringement.”

Sotomayor: Majority dismantled DMCA incentive structure

The Sotomayor concurrence said the majority went too far. "The majority holds that Cox is not liable solely because its conduct does not fit within the two theories of secondary liability previously applied by this Court," Sotomayor wrote. "In so doing, the majority, without any meaningful explanation, unnecessarily limits secondary liability even though this Court’s precedents have left open the possibility that other common-law theories of such liability, like aiding and abetting, could apply in the copyright context. By ignoring those past decisions, the majority also upends the statutory incentive structure that Congress created."

As previously noted, the majority found that contributory liability can be shown only if the party induced infringement or if a provided service is tailored to that infringement. Sotomayor said the "majority’s limiting of secondary liability here dismantles the statutory incentive structure that Congress created" in the DMCA.

"Importantly, Congress did not provide that ISPs could never be secondarily liable for copyright infringement," she wrote. "Instead, it struck a balance by creating incentives for ISPs to take reasonable steps to prevent copyright infringement on their networks, while also assuring ISPs that they do not need to take on the impossible task of responding to every instance of infringement on their networks. The majority’s new rule completely upends that balance and consigns the safe harbor provision to obsolescence."

Sotomayor said she nonetheless agrees "with the majority that Cox cannot be held liable here for a different reason. Plaintiffs cannot prove that Cox had the requisite intent to aid copyright infringement for Cox to be liable on a common-law aiding-and-abetting theory. I therefore concur in the judgment."

The majority disagreed that it is upending the DMCA's safe harbor, which protects providers from liability when they terminate repeat infringers "in appropriate circumstances." The DMCA does not expressly impose liability for ISPs who serve known infringers, the court majority ruled.

"The DMCA merely creates new defenses from liability for such providers," Thomas wrote. "And, the DMCA made clear that failure to comply with the safe-harbor rules 'shall not bear adversely upon... a defense by the service provider that the service provider’s conduct is not infringing.'"

Although Kagan joined the majority opinion today, she said during oral arguments that the DMCA safe harbor would “seem to do nothing” if the court sides with Cox. “Why would anybody care about getting into the safe harbor if there’s no liability in the first place?” she said at the time.

Sony's Betamax victory hurt it in Cox case

Today's Supreme Court ruling reversed a decision by the US Court of Appeals for the 4th Circuit. The 4th Circuit "did not suggest that Cox induced its users to infringe" and "did not deny that Cox’s service was 'capable of substantial lawful use and not designed to promote infringement,'" Thomas wrote. "Rather, the court held that 'supplying a product with knowledge that the recipient will use it to infringe copyrights is... sufficient for contributory infringement.'"

Thomas said the 4th Circuit holding went beyond the two forms of liability recognized in Grokster and Sony Corp. of America v. Universal City Studios, also known as the Betamax case. The 4th Circuit ruling "also conflicted with this Court’s repeated admonition that contributory liability cannot rest only on a provider’s knowledge of infringement and insufficient action to prevent it," Thomas wrote.

After reading today's ruling, Santa Clara University law professor Eric Goldman wrote, "I do note the irony that Sony created the defense-favorable legal standard in 1984 that is now being cited against it in 2026. As the Bible verse goes, 'You reap what you sow.'"

Goldman explained that "Thomas’ opinion defines 'tailored to infringement' as 'not capable of substantial or commercially significant noninfringing uses.' This resurrects the Sony v. Universal standard for contributory infringement from over 40 years ago, which largely got put on hold after the Grokster case 20 years ago. Because it’s not been well-explored since 2006, we’re not sure what this phrase means in the modern Internet age."

Goldman predicted that "there will be substantial confusion in the lower courts trying to figure out how to apply" the "tailored to infringement" standard. "On balance, the old Sony standard should favor future defendants, but copyright owners will invest a lot of money to try to water it down and undermine it," he wrote.

Sony and other music copyright owners use the MarkMonitor service to trace illegal downloads and uploads to specific IP addresses and send copyright-infringement notices to the users' Internet providers. Cox told the Supreme Court that ISPs can’t verify whether the notices are accurate and that terminating an account would punish every user in a household where only one person may have illegally downloaded copyrighted files. MarkMonitor sent Cox 163,148 piracy notices during the two-year period covered in the case.

Record labels Sony, Warner, and Universal told the Supreme Court that Cox chose not to terminate repeat copyright infringers to avoid a loss in revenue, despite being sent three or more infringement notices for each subscriber at issue in the case. “[W]hile Cox stokes fears of innocent grandmothers and hospitals being tossed off the Internet for someone else’s infringement, Cox put on zero evidence that any subscriber here fit that bill," record labels told the court. "By its own admission, the subscribers here were ‘habitual offenders’ Cox chose to retain because, unlike the vast multitude cut off for late payment, they contributed to Cox’s bottom line.”

ISP has "incomplete knowledge" of infringement

At oral arguments, Cox attorney Joshua Rosenkranz said the ISP created an anti-infringement program, sent out hundreds of warnings a day, suspended thousands of accounts a month, and worked with universities to limit infringement. Rosenkranz told the court that “the highest recidivist infringers” cited in the case were universities, hotels, and regional ISPs that purchase connectivity from Cox, rather than individual households.

"According to Cox, it created a system of responding to the notices that it received from MarkMonitor," Thomas wrote. "After the second MarkMonitor notice for a subscriber’s account, Cox sent a warning to that subscriber. After additional notices, Cox terminated Internet access to that subscriber’s IP address until the subscriber responded to the warning. If it continued to receive notices for that IP address, Cox suspended service until the subscriber called and received a warning over the phone. After 13 notices, the subscriber was subject to termination of all Internet service." Cox also contractually prohibits subscribers from using the service to infringe copyrights, Thomas noted.

In addition to criticizing the majority's reasoning today, Sotomayor criticized Cox's anti-piracy enforcement efforts during oral arguments. “There are things you could have done to respond to those infringers, and the end result might have been cutting off their connections, but you stopped doing anything for many of them... You did nothing and, in fact, counselor, your clients’ sort of laissez-faire attitude toward the respondents is probably what got the jury upset," she said at the time.

Despite those comments during oral arguments, Sotomayor's concurrence today said that Sony did not prove that Cox knows specific users will commit infringement. "Cox supplies Internet connections to a wide range of customers, ranging from single users all the way to smaller regional ISPs. When Cox receives a copyright violation notice, however, the notice specifies only which connection was used to infringe, not who used it to commit infringement," she wrote.

For single homes, Cox has no way "to know if the infringer was a neighbor who might have the Wi-Fi password," Sotomayor said, also noting that Cox doesn't have control over regional ISPs that resell Cox network connectivity. "Given this degree of removal from the infringing activity and Cox’s incomplete knowledge, Cox cannot be found to have intended to aid in any specific instance of infringement committed using the connection that Cox provides to the regional ISP," Sotomayor wrote. "The same is true for connections Cox provides to university housing, hospitals, military bases, and other places that are likely to have many different users."

Justice Alito agreed with Cox that Sony's demands for cracking down on piracy at universities were excessive and described Sony's demands as unworkable. He said during oral arguments that if an ISP tells a university, “a lot of your 50,000 students are infringing… the university then has to determine which particular students are engaging in this activity. Let’s assume it can even do that, and so then it knocks out 1,000 students and then another 1,000 students are going to pop up doing the same thing. I just don’t see how it’s workable at all.”


Mozilla dev introduces cq, a "Stack Overflow for agents"

Mozilla developer Peter Wilson has taken to the Mozilla.ai blog to announce cq, which he describes as "Stack Overflow for agents." The nascent project hints at something genuinely useful, but it will have to address security, data poisoning, and accuracy to achieve significant adoption.

It's meant to solve a couple of problems. First, coding agents often use outdated information when making decisions, like attempting deprecated API calls. This stems from training cutoffs and the lack of reliable, structured access to up-to-date runtime context. They sometimes use techniques like RAG (Retrieval Augmented Generation) to get updated knowledge, but they don't always do that when they need to—"unknown unknowns," as the saying goes—and it's never comprehensive when they do.

Second, multiple agents often have to find ways around the same barriers, but there's no knowledge sharing after said training cutoff point. That means hundreds or thousands of individual agents end up using expensive tokens and consuming energy to solve already-solved problems all the time. Ideally, one would solve an issue once, and the others would draw from that experience.

That's exactly what cq tries to enable. Here's how Wilson says it works:

Before an agent tackles unfamiliar work; an API integration, a CI/CD config, a framework it hasn't touched before; it queries the cq commons. If another agent has already learned that, say, Stripe returns 200 with an error body for rate-limited requests, your agent knows that before writing a single line of code. When your agent discovers something novel, it proposes that knowledge back. Other agents confirm what works and flag what's gone stale. Knowledge earns trust through use, not authority.

The idea is to move beyond claude.md or agents.md, the current solution for the problems cq is trying to solve. Right now, developers add instructions for their agents based on trial and error—if they find that an agent keeps trying to use something outdated, they tell it in .md files to do something else instead.

That sort of works sometimes, but it doesn't cross-pollinate knowledge between projects.

The current state

Wilson describes cq as a proof of concept, but it's one you can download and work with now; it's available as a plugin for Claude Code and OpenCode. Additionally, there's an MCP server for handling a library of knowledge stored locally, an API for teams to share knowledge, and a user interface for human review.

I'm just scratching the surface of the details here; there's documentation at the GitHub repo if you want to learn more details or contribute to the project.

In addition to posting on the Mozilla.ai blog, Wilson announced the project and solicited feedback from developers on Hacker News. Reactions in the thread are mixed. Most people chiming in agree that cq is aiming to do something useful and needed, but there's a long list of potential problems to solve.

For example, some commenters have noted that models do not reliably describe or track the steps they take—an issue that could balloon into a lot of junk knowledge at scale across multiple agents. There are also several serious security challenges, such as how the system will deal with prompt injection threats or data poisoning.

This is also not the only attempt to address these needs. There are a variety of different projects in the works, operating on different levels of the stack, to try to make AI agents waste fewer tokens by giving them access to more up-to-date or verified information.


Jury agrees that Musk's tweets during Twitter takeover misled investors

On Friday, a jury in California determined that Elon Musk had misled Twitter investors with public statements that depressed the price of the company's stock ahead of his ultimately successful purchase of it. Because the suit was a class action, Musk will likely face damages payments to a huge range of investors, payments that could ultimately reach billions of dollars.

In the lead-up to his purchase of the social media platform, Musk made a number of comments, both on the platform itself and as a guest on a podcast, that raised questions about whether the sale would go through, largely focused on the prevalence of bot accounts. Those comments depressed the company's share price and stoked fears that the deal would collapse, leading some investors to sell their shares at depressed prices during this period.

A number of those investors started a suit that was certified as a class action, claiming that the statements defrauded them, and that Musk did so intentionally as part of a larger scheme. The jury rejected the arguments about the larger scheme, but found Musk liable for the tweets.

While damages have yet to be determined, the lawyers for the plaintiffs are reportedly saying that they could ultimately reach as high as $2.6 billion.


Hardening Harbor on AWS: Achieving Zero-Static-Secret Architecture - Container Registry

Harbor is widely recognized as the CNCF-graduated standard for open-source container registries. It is powerful, feature-rich, and trusted by thousands of organizations. However, its default AWS integration relies on a legacy pattern that modern security teams increasingly reject: Static Secrets.

In strictly governed AWS environments, storing long-lived credentials in Kubernetes Secrets represents a “Secret Zero” vulnerability. In this post, I share how I modernized Harbor’s authentication layer to use AWS RDS IAM Authentication and IAM Roles for Service Accounts (IRSA), shifting security from a manual burden to an automated guarantee.

Background

The ‘Secret Zero’ Vulnerability

We have all seen this in our clusters: a secret containing a long-lived AWS_ACCESS_KEY_ID for S3 access, or a hardcoded master password for a database connection string.

Harbor Legacy Flow with static credentials

Before (Legacy Flow): The system relies on static passwords passed via config strings both for RDS and S3 access, creating significant rotation and leakage risks.

While functional, this approach requires manual key rotation and managing complex secret lifecycles. If these secrets are compromised, your entire artifact storage backend is exposed.

The Roadblocks: Why Wasn’t This Solved Before?

When we investigated modernizing this flow, we identified two primary technical gaps in the upstream Harbor project:

  1. Missing Database Logic (Issue #12546): Harbor Core lacked the internal logic required to request an AWS RDS signed token instead of a standard password.
  2. Lack of IRSA Support (Issue #12888): The Harbor components did not natively support AssumeRoleWithWebIdentity, meaning they couldn’t exchange a Kubernetes ServiceAccount token for AWS temporary credentials.

The Solution: Dynamic Cloud-Native Identity

We refactored Harbor to leverage ephemeral identity. By patching the core Go codebase and upgrading the internal distribution engine to v3, we enabled a completely keyless architecture.

Harbor Modern AWS Native Flow

After (The Modern Flow): Harbor components dynamically assume roles and request ephemeral tokens from AWS STS, removing the need for static credentials entirely.

1. Database: The Code Fix & The 15-Minute Wall

Harbor’s core components connect to PostgreSQL using the pgx driver. By default, this driver expects a static password. We refactored the connection logic in src/common/dao/pgsql.go, but a significant challenge emerged during implementation: IAM tokens expire every 15 minutes.

Standard connection pools establish connections at startup; once the initial token expires, any new connection attempt fails, crashing the application.

I solved this by implementing a beforeConnectHook in the pgx driver. This ensures the application requests a fresh cryptographic token from AWS every time a new connection is established in the pool.

// src/common/dao/pgsql.go

// Define the hook that refreshes the ephemeral token before each new connection
beforeConnectHook := func(ctx context.Context, cfg *pgx.ConnConfig) error {
	// 1. Request a fresh, signed token from AWS RDS utilities
	token, err := getIAMToken(p.host, p.port, p.usr, region)
	if err != nil {
		log.Errorf("IAM Auth: Failed to generate token: %v", err)
		return err
	}
	// 2. Inject the temporary token as the connection password
	cfg.Password = token
	log.Debugf("IAM Auth: Token refreshed for new connection to %s", cfg.Host)
	return nil
}

// 3. Open the DB using the option pattern to attach the hook
sqlDB := stdlib.OpenDB(*config, stdlib.OptionBeforeConnect(beforeConnectHook))
RDS IAM Authentication sequence diagram

Full sequence: How the Harbor pod creates a ServiceAccount, assumes the IAM role via IRSA, and refreshes RDS auth tokens on every connection cycle using the BeforeConnect hook.

2. Object Storage: Enabling IRSA (Distribution v3)

For S3 access, the Registry binary relies on the upstream docker/distribution project. To enable IAM Roles for Service Accounts (IRSA), where a Pod inherits permissions from an AWS IAM role, we upgraded the build process to use the modern distribution/distribution v3 libraries.

This upgrade allows the S3 storage driver to automatically detect the AWS_WEB_IDENTITY_TOKEN_FILE projected by Kubernetes, removing the need to define accesskey and secretkey in the Helm values.

How-to: Deploy Harbor Without Static Secrets

You can deploy this hardened version of Harbor today using our verified artifacts and custom images.

Step 1: Pull the Artifacts

We have hosted the patched images and the modern OCI Helm chart in our public registry:

# Pull the images
docker pull 8gears.container-registry.com/8gcr/harbor-jobservice
docker pull 8gears.container-registry.com/8gcr/harbor-core
docker pull 8gears.container-registry.com/8gcr/harbor-registry

# Pull the Helm Chart
helm pull oci://8gears.container-registry.com/8gcr/harbor --version 3.0.0

Step 2: Preparing Infrastructure and Policy

Before deploying Harbor, we need to provision the cloud resources. This includes an OIDC-enabled EKS cluster, an S3 bucket for artifact storage, and a PostgreSQL instance with IAM authentication enabled.

2.1. Set Environment Variables

export AWS_REGION="us-east-1"
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export CLUSTER_NAME="harbor-on-aws-natively-cluster"
export POLICY_NAME="HarborOnAwsNativePolicy"
export BUCKET_NAME="harbor-on-aws-natively-store"
export SA_NAME="harbor-sa"
export NAMESPACE="harbor"
export DB_NAME="registry"
export DB_USER="harbor_iam_user"
export DB_INSTANCE_ID="harbor-db"
export DB_CLASS="db.t3.medium"

2.2. Create EKS Cluster with OIDC

eksctl create cluster \
 --name $CLUSTER_NAME \
 --region $AWS_REGION \
 --version 1.30 \
 --with-oidc \
 --managed \
 --nodegroup-name standard-workers \
 --node-type t3.medium \
 --nodes 2 \
 --nodes-min 1 \
 --nodes-max 4

2.3. Create S3 Bucket

aws s3 mb "s3://$BUCKET_NAME" --region $AWS_REGION

2.4. Create IAM Policy

aws iam create-policy \
 --policy-name $POLICY_NAME \
 --policy-document '{
 "Version": "2012-10-17",
 "Statement": [
 {
 "Effect": "Allow",
 "Action": [
 "s3:GetObject",
 "s3:PutObject",
 "s3:DeleteObject",
 "s3:ListBucket",
 "s3:GetBucketLocation",
 "s3:ListBucketMultipartUploads",
 "s3:AbortMultipartUpload",
 "s3:ListMultipartUploadParts"
 ],
 "Resource": [
 "arn:aws:s3:::'"$BUCKET_NAME"'",
 "arn:aws:s3:::'"$BUCKET_NAME"'/*"
 ]
 },
 {
 "Effect": "Allow",
 "Action": ["rds-db:connect"],
 "Resource": [
 "arn:aws:rds-db:'"$AWS_REGION"':'"$AWS_ACCOUNT_ID"':dbuser:*/'"$DB_USER"'"
 ]
 }
 ]
 }'

2.5. Create IRSA (IAM Role for Service Account)

eksctl create iamserviceaccount \
 --cluster=$CLUSTER_NAME \
 --name=$SA_NAME \
 --namespace=$NAMESPACE \
 --attach-policy-arn="arn:aws:iam::$AWS_ACCOUNT_ID:policy/$POLICY_NAME" \
 --approve

2.6. RDS Database Setup

We provision a PostgreSQL instance with IAM Database Authentication enabled (--enable-iam-database-authentication).

# Get EKS Network Information

export EKS_VPC_ID=$(aws eks describe-cluster \
 --name $CLUSTER_NAME \
 --region $AWS_REGION \
 --query "cluster.resourcesVpcConfig.vpcId" \
 --output text)

export EKS_CIDR=$(aws ec2 describe-vpcs \
 --vpc-ids $EKS_VPC_ID \
 --region $AWS_REGION \
 --query "Vpcs[0].CidrBlock" \
 --output text)

export SUBNET_IDS=$(aws ec2 describe-subnets \
 --filters "Name=vpc-id,Values=$EKS_VPC_ID" \
 --region $AWS_REGION \
 --query "Subnets[*].SubnetId" \
 --output text)

echo "VPC ID: $EKS_VPC_ID"
echo "CIDR: $EKS_CIDR"

# Create Security Group

export DB_SG_ID=$(aws ec2 create-security-group \
 --group-name harbor-db-sg \
 --description "Security group for Harbor RDS" \
 --vpc-id $EKS_VPC_ID \
 --output text --query 'GroupId' --region $AWS_REGION)

aws ec2 authorize-security-group-ingress \
 --group-id $DB_SG_ID \
 --protocol tcp \
 --port 5432 \
 --cidr $EKS_CIDR \
 --region $AWS_REGION

# Create DB Subnet Group

aws rds create-db-subnet-group \
 --db-subnet-group-name harbor-native-subnets \
 --db-subnet-group-description "Subnets for Harbor RDS" \
 --subnet-ids $SUBNET_IDS \
 --region $AWS_REGION


# Create RDS Instance
aws rds create-db-instance \
 --db-instance-identifier $DB_INSTANCE_ID \
 --db-instance-class $DB_CLASS \
 --engine postgres \
 --engine-version 18.1 \
 --master-username harbor_admin \
 --master-user-password "<yourPassword>" \
 --allocated-storage 20 \
 --db-name $DB_NAME \
 --enable-iam-database-authentication \
 --vpc-security-group-ids $DB_SG_ID \
 --db-subnet-group-name harbor-native-subnets \
 --backup-retention-period 7 \
 --no-publicly-accessible \
 --region $AWS_REGION

echo "Waiting for RDS (5-10 minutes)..."
aws rds wait db-instance-available \
 --db-instance-identifier $DB_INSTANCE_ID \
 --region $AWS_REGION


# Configure IAM Database User

export DB_ENDPOINT=$(aws rds describe-db-instances \
 --db-instance-identifier $DB_INSTANCE_ID \
 --region $AWS_REGION \
 --query "DBInstances[0].Endpoint.Address" \
 --output text)

echo "Database Endpoint: $DB_ENDPOINT"

kubectl create namespace $NAMESPACE

# Connect to RDS (Note: the master password is only needed for this one-time setup.
# Consider using AWS Secrets Manager for the master password in production.)
kubectl run postgres-client --rm -it --image=postgres:18 --restart=Never --namespace=$NAMESPACE --env=PGPASSWORD=<yourPassword> -- psql -h $DB_ENDPOINT -U harbor_admin -d $DB_NAME

Once connected, run the following SQL commands inside PostgreSQL:

CREATE USER harbor_iam_user WITH LOGIN;
GRANT rds_iam TO harbor_iam_user;
GRANT ALL PRIVILEGES ON DATABASE registry TO harbor_iam_user;
GRANT ALL ON SCHEMA public TO harbor_iam_user;
\q
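Before moving on, you can sanity-check that IAM authentication works for the new user. This sketch assumes live AWS credentials and network access to the RDS instance; `aws rds generate-db-auth-token` signs the token locally using your current credentials:

```
# Generate a short-lived (15-minute) IAM auth token for the database user
export PGPASSWORD=$(aws rds generate-db-auth-token \
  --hostname "$DB_ENDPOINT" \
  --port 5432 \
  --region "$AWS_REGION" \
  --username "$DB_USER")

# IAM auth requires TLS, hence sslmode=require
psql "host=$DB_ENDPOINT port=5432 dbname=$DB_NAME user=$DB_USER sslmode=require" -c "SELECT current_user;"
```

If the query returns `harbor_iam_user`, the IAM path works end to end and any failures later on are in the Harbor configuration rather than the database.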

Step 3: Configure values-aws-native.yaml

We configure Harbor to use native AWS authentication. Note that HARBOR_DATABASE_IAM_AUTH is explicitly enabled, the password field is left as a dummy value (it will be ignored by our hook), and the storage credential fields are left empty. The registry inherits permissions directly from the ServiceAccount via IRSA.

# ============================================================
# HARBOR AWS NATIVE CONFIGURATION
# Features: RDS IAM Auth + S3 IRSA
# ============================================================

# 1. GLOBAL SETTINGS
externalURL: "https://harbor.test"

# 2. CONFIGURATION & IAM AUTH
core:
  replicas: 1
  image:
    repository: 8gears.container-registry.com/8gcr/harbor-core
    tag: latest
  # SERVICE ACCOUNT (Required for IRSA)
  serviceAccount:
    create: false
    name: "harbor-sa" # This SA must be annotated with your AWS Role ARN
  securityContext:
    readOnlyRootFilesystem: false
  config:
    HARBOR_DATABASE_IAM_AUTH: "true"
    POSTGRES_HOST: "<YOUR_DB_ENDPOINT>"
    POSTGRES_PORT: "5432"
    POSTGRES_USER: "harbor_iam_user"
    POSTGRES_DATABASE: "registry"

# --- JOBSERVICE ---
jobservice:
  replicas: 1
  image:
    repository: 8gears.container-registry.com/8gcr/harbor-jobservice
    tag: latest
  serviceAccount:
    create: false
    name: "harbor-sa"
  securityContext:
    readOnlyRootFilesystem: false
  config:
    HARBOR_DATABASE_IAM_AUTH: "true"

# --- REGISTRY ---
registry:
  replicas: 1
  image:
    repository: 8gears.container-registry.com/8gcr/harbor-registry
    tag: latest
  serviceAccount:
    create: false
    name: "harbor-sa"
  relativeurls: true
  persistence:
    enabled: false
  securityContext:
    readOnlyRootFilesystem: false
  env:
    - name: REGISTRY_STORAGE_CACHE_LAYERINFO
      value: "inmemory"
    - name: AWS_REGION
      value: "<YOUR_AWS_REGION>"
  storage:
    type: s3
    s3:
      region: "<YOUR_AWS_REGION>"
      bucket: "<YOUR_BUCKET_NAME>"
      secure: true
      v4auth: true
      # No static keys required! The driver uses the pod role via IRSA.
      accesskey: ""
      secretkey: ""

# 3. DATABASE (RDS IAM Auth)
database:
  host: "<YOUR_DB_ENDPOINT>"
  port: 5432
  username: "harbor_iam_user"
  password: "dummy_password" # Required by the Helm chart schema but ignored at runtime; the BeforeConnect hook replaces it with an IAM token
  database: "registry"
  sslmode: "require"

Step 4: Deploy

helm upgrade --install my-harbor oci://8gears.container-registry.com/8gcr/harbor \
 --version 3.0.0 \
 --namespace harbor \
 -f values-aws-native.yaml

Step 5: Verify the Deployment

kubectl -n harbor get pods
kubectl -n harbor logs -l app=harbor-core --tail=50

Confirm that all pods reach the Running state. In the core logs, look for "IAM Auth: Token refreshed" messages to verify that RDS IAM authentication is active.
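You can also confirm that IRSA injection took effect. This assumes a deployment named for the Helm release above; adjust the name to match your cluster:

```
# Both variables are injected by the EKS Pod Identity Webhook when IRSA is active
kubectl -n harbor exec deploy/my-harbor-registry -- \
  printenv AWS_ROLE_ARN AWS_WEB_IDENTITY_TOKEN_FILE
```

If either variable is missing, check that the ServiceAccount exists, carries the role-arn annotation, and is referenced by the pod spec.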

Conclusion

Modernizing Harbor to embrace AWS native identity isn’t just about refactoring code; it’s about shifting security from a manual burden to an automated guarantee.

By replacing static, long-lived secrets with ephemeral, auto-rotating tokens via RDS IAM and IRSA, we empower platform engineers to meet strict enterprise compliance standards without the operational toil. This architecture sets a new benchmark for running Harbor on EKS, ensuring your registry is as secure as the infrastructure it runs on. Ultimately, it allows your team to stop managing keys and start focusing on what matters: delivering software.



Des canons sans beurre

Paul Waldman observes that Donald Trump’s war of (very bad) choice is costing fortunes, but for Republican wars money is never a constraint:

Speaking to Sky News last week, Treasury Secretary Scott Bessent was asked if there was some point at which the Iran war could grow so costly that he would tell President Trump it had become unaffordable.

“Absolutely not,” Bessent replied.

Whatever it costs, the American taxpayer will pony up. That makes this war a lot like all our other ones. And however much it looks today like the war will cost, it will almost certainly cost more. That’s how war works: It’s always more complicated, difficult, and expensive than the people who start it think it’s going to be. But with only one or two exceptions, Republicans are unperturbed by the effect of Trump’s “excursion” on our national balance sheet.

[…]

We’re less than three weeks into this war, and already the numbers are shockingly large, even if they’re difficult to pin down with precision.

In a briefing early on, the Pentagon told lawmakers that the first six days of the war cost $11.3 billion. Democratic Sen. Chris Coons, not one given to hyperbole, said after the briefing, “I expect that the current total operating number is significantly above that.” The Center for Strategic and International Studies estimated that after 12 days, the cost had risen to $16.5 billion.

While some days cost more than others, the total price tag will keep rising. Expenses include everything from the ordnance we’re going through, which will have to be restocked (for instance, each Tomahawk missile costs $2.5 million or more, and we’ve launched hundreds of them at Iran), to the extra fuel the Pentagon is using, to rebuilding the systems and bases Iran is hitting, to the medical costs for injured service members, and more.

And this is before we get to the massive costs that will be imposed by choking the Strait of Hormuz — higher prices for energy, agricultural commodities, pharmaceuticals, etc.

It is trite to observe that we could be spending this money on positive-sum things rather than negative-sum things, but it’s trite because it’s true. And yet Republicans are notably less likely to face media scrutiny for these tradeoffs:

And what could we do with $50 billion, the low end of what the Iran war will cost? So many things. We could give Medicaid coverage to 6.75 million Americans. We could pay for free school lunches for every public school student in America. We could fund the National Park Service at pre-DOGE levels for 17 years.

When Democrats want to do those things, and especially when they want to do something big, the cries of “But how will you pay for it?!?” ring out from both their opponents and the news media. So they come up with an answer. For instance, when they passed the Affordable Care Act, Democrats labored for months to produce cost savings and tax increases to offset every penny of new spending the bill entailed.

Republicans feel no such obligation. Their most consequential piece of legislation in recent years was the Big Beautiful Bill, which will increase the deficit by $2.4 trillion over a decade, according to the Congressional Budget Office. In addition to cutting taxes for the wealthy, it showered money on the Pentagon and allocated $170 billion in additional funds for immigration and border enforcement.

But Republicans are the party of Fiscal Rectitude, don’t you know, by which I mean they can send some 23-year-old incels to starve African children in exchange for no material savings.

The post Des canons sans beurre appeared first on Lawyers, Guns & Money.
