Canada loses measles elimination status

[Image: Two signs on a counter warn about measles symptoms in English and a dialect. The large red stop signs emphasize not to enter and to call Alberta Health.]

fxer (Bend, Oregon): You’re stealing our bit!

Credible Charges


Today’s deluge of Donald Trump-Jeffrey Epstein revelations reminded me of a January 1994 Washington Post editorial in which the Post called for the appointment of an independent counsel to investigate a land deal on which the Clintons had lost money more than a decade earlier “even though – and this should be stressed – there has been no credible charge in this case that either the president or Mrs. Clinton did anything wrong.”
[Screenshot of the January 1994 Washington Post editorial]

I’ve thought about that editorial often during the Trump era. Countless examples of flagrant corruption, law-breaking, potential treason, and just generally antisocial behavior by Trump, all infinitely more troubling than losing $50,000 on a real estate deal, have come and gone without the Washington Post – or any comparable media, political, or civic institution – calling for an independent investigation. It’s a striking contrast to how eager institutions like the Post were to back investigations of previous Democratic presidents, even in the absence of so much as a credible allegation of wrongdoing.

There are a lot of reasons the corporate news media has been reluctant to push for the kinds of consequences – from investigations to resignation – for Trump misdeeds that it backed during Democratic administrations. One is the perverse reality that these news organizations and the people who lead them are both conservative-leaning and liberal-identifying, and their misconception of their own biases causes them to tilt even further to the Right. I won’t belabor that point today; I’ve written often and at great length about it.

Another factor is the belief that it’s pointless for news companies to call for something – whether an independent investigation, resignation, or impeachment – that they know Trump and the Republicans who control our government will reject. They (correctly) think they can influence the behavior of Democratic politicians, so they try to do so. They don’t think they can influence the behavior of Republican politicians, so they don’t press for the same kinds of accountability they demand from Democrats.

This way of thinking is deeply wrong. For one thing, there is value in saying clearly what should happen even if it is unlikely to pass. There is value in saying certain things are unacceptable, even if you likely cannot stop them. The refusal to do so helps us sink further into the abyss. For another: Autocrats like Donald Trump who lack popular support for their corrupt and damaging practices rely on the perception of inevitability. The assumption that nothing can ever change, that they cannot be held accountable, that they can get away with anything, is self-perpetuating. When news companies throw in the towel or outright align themselves with Trump, they signal to judges that they face no reproach for doing the same. Judges send the same signals to prosecutors, corporations to universities, universities to think tanks, and so on.

So: It is clear that neither the Trump administration nor his Republican allies in Congress can be trusted to investigate and tell the truth about his relationship with Jeffrey Epstein. A truly independent investigation is necessary. And we already know enough about Trump to know that resignation and impeachment are the least of the consequences he should face, for a wide variety of wrongdoing. There can be no accountability and no way out if we are unwilling to say these things. And institutions that don't understand that or aren't willing to act on it aren't worth saving. We're going to have to build new ones instead.


Stopped clock at midnight


The penny is being ended:

 The U.S. ended production of the penny Wednesday, abandoning the 1-cent coins that were embedded in American culture for more than 230 years but became nearly worthless.

When it was introduced in 1793, a penny could buy a biscuit, a candle or a piece of candy. Now most of them are cast aside to sit in jars or junk drawers, and each one costs nearly 4 cents to make.

“God bless America, and we’re going to save the taxpayers $56 million,” Treasurer Brandon Beach said at the U.S. Mint in Philadelphia before hitting a button to strike the final penny. The coins were then carefully placed on a tray for journalists to see. The last few pennies were to be auctioned off.

I’m not sure how Trump stumbled into this, but it is unambiguously the correct policy, as Caity Weaver explained in her classic story on the subject [gift link]:

Most pennies produced by the U.S. Mint are given out as change but never spent; this creates an incessant demand for new pennies to replace them, so that cash transactions that necessitate pennies (i.e., any concluding with a sum whose final digit is 1, 2, 3, 4, 6, 7, 8 or 9) can be settled. Because these replacement pennies will themselves not be spent, they will need to be replaced with new pennies that will also not be spent, and so will have to be replaced with new pennies that will not be spent, which will have to be replaced by new pennies (that will not be spent, and so will have to be replaced). In other words, we keep minting pennies because no one uses the pennies we mint.

A conservative estimate holds that there are 240 billion pennies lying around the United States — about 724 ($7.24) for every man, woman and child there residing, and enough to hand two pennies to every bewildered human born since the dawn of man. (To distribute them all, in fact, we’d have to double back to the beginning and give our first six billion ancestors a third American penny.) These are but a fraction of the several hundreds of billions of pennies issued since 1793, most of which have suffered a mysterious fate sometimes described in government records, with a hint of supernaturality generally undesirable in bookkeeping, as “disappearance.” As far as anyone knows, the American cent is the most produced coin in the history of civilization, its portrait of Lincoln the most reproduced piece of art on Earth. Although pennies are almost never used for their ostensible purpose (to make purchases), right now one out of every two circulating coins minted in the United States has a face value of 1 cent. A majority of the ones that have not yet disappeared are, according to a 2022 report, “sitting in consumers’ coin jars in their homes.”

It’s crucial that they remain there. Five years ago, Mint officials conceded that if even a modest portion of these dormant pennies were suddenly to return to circulation, the resulting flow-back would be “logistically unmanageable.” There would be so unbelievably many pennies that there most likely would not be enough room to contain them inside government vaults. Moving them from place to place would be time-consuming, cumbersome and costly. (Just $100 worth of pennies weighs a touch over 55 pounds.) With each new penny minted, this problem becomes slightly more of a problem.

The United States government has willfully ignored this nonsensical math problem for decades. Forty-eight years ago, in letters to Congress, William E. Simon, then the Treasury secretary, begged lawmakers to “give serious consideration” to abandoning 1-cent coins as soon as possible. The frantic tempo at which pennies were plummeting out of circulation, a Treasury report warned, would soon plunge the Mint into “a never-ending spiral” of “ever-increasing production” as it flailed to replace unused pennies with more pennies that would likewise remain unused — a bit like deploying a bucket to combat a dripping ceiling leak, and it turns out the leak is the ocean because the room was built under the sea, and the only way out of this anyone can think of is to engineer increasingly large buckets. The coin should be eradicated, the report reasoned, “no later than 1980.”

The only question is why this wasn’t done a lot sooner.
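
The quoted figures are easy to sanity-check. Here is a quick back-of-the-envelope script; the penny mass (2.5 g), the US population (~331 million), and the ~117 billion estimate of humans ever born are my assumed inputs, not numbers from the article:

    # Sanity-checking the quoted penny figures. Assumed inputs:
    # penny mass ~2.5 g, US population ~331 million, and ~117 billion
    # humans ever born (a common demographic estimate).
    PENNY_MASS_G = 2.5
    US_POPULATION = 331e6
    HUMANS_EVER_BORN = 117e9
    HOARD = 240e9  # conservative estimate of pennies lying around the US

    # "$100 worth of pennies weighs a touch over 55 pounds"
    pounds = 100 * 100 * PENNY_MASS_G / 453.6
    print(f"$100 in pennies: {pounds:.1f} lb")          # ~55.1

    # "about 724 ($7.24) for every man, woman and child there residing"
    per_capita = HOARD / US_POPULATION
    print(f"{per_capita:.0f} pennies per US resident")  # ~725 here; 724 implies a slightly larger population

    # "two pennies to every bewildered human born since the dawn of man,"
    # with ~6 billion third pennies left over
    leftover = HOARD - 2 * HUMANS_EVER_BORN
    print(f"leftover after two each: {leftover/1e9:.0f} billion")  # ~6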

The post Stopped clock at midnight appeared first on Lawyers, Guns & Money.


New quantum hardware puts the mechanics in quantum mechanics


Quantum computers based on ions or atoms have one major advantage: The qubits themselves aren’t manufactured, and there’s no device-to-device variability among atoms. Every atom is the same and should perform similarly every time. And since the qubits themselves can be moved around, it’s theoretically possible to entangle any atom or ion with any other in the system, allowing for a lot of flexibility in how algorithms and error correction are performed.

This combination of consistent, high-fidelity performance with all-to-all connectivity has led many key demonstrations of quantum computing to be done on trapped-ion hardware. Unfortunately, the hardware has been held back a bit by relatively low qubit counts—a few dozen compared to the hundred or more seen in other technologies. But on Wednesday, a company called Quantinuum announced a new version of its trapped-ion hardware that significantly boosts the qubit count and uses some interesting technology to manage their operation.

Trapped-ion computing

Both neutral atom and trapped-ion computers store their qubits in the spin of the nucleus. That spin is somewhat shielded from the environment by the cloud of electrons around the nucleus, giving these qubits a relatively long coherence time. While neutral atoms are held in place by a network of lasers, trapped ions are manipulated via electromagnetic control based on the ion’s charge. This means that key components of the hardware can be built using standard electronic manufacturing, although lasers are still needed for manipulations and readout.

While the electronics are static—they stay wherever they were manufactured—they can be used to move the ions around. That means that as long as the trackways the ions can move on enable it, any two ions can be brought into close proximity and entangled. This all-to-all connectivity can enable more efficient implementation of algorithms performed directly on the hardware qubits or the use of error-correction codes that require a complicated geometry of connections. That’s one reason why Microsoft used a Quantinuum machine to demonstrate an error-correction code based on a tesseract.

But arranging the trackways so that any two qubits can be next to each other can become increasingly complicated. Moving ions around is a relatively slow process, so retrieving two ions from the far ends of a chip too often can cause a system to start pushing up against the coherence time of the qubits. In the long term, Quantinuum plans to build chips with a square grid reminiscent of the street layout of many cities. But doing so will require a mastery of controlling the flow of ions through four-way intersections.

And that’s what Quantinuum is doing in part with its new chip, named Helios. It has a single intersection that couples two ion-storage areas, enabling operations as ions slosh from one end of the chip to the other. And it comes with significantly more qubits than its earlier hardware, moving from 56 to 96 qubits without sacrificing performance. “We’ve kept and actually even improved the two-qubit gate fidelity,” Quantinuum VP Jenni Strabley told Ars. “So we’re not seeing any degradation in the two-qubit gate fidelity as we go to larger and larger sizes.”

Doing the loop

The image below is taken using the fluorescence of the atoms in the hardware itself. As you can see, the layout is dominated by two features: A loop at the left and two legs extending to the right. They’re connected by a four-way intersection. The Quantinuum staff described this intersection as being central to the computer’s operation.

[Image: The ions themselves, visible as small blue dots, trace out the physical layout of the Helios system: a storage ring and two legs that contain dedicated operation sites, connected by an x-shaped junction. Credit: Quantinuum]

The system works by rotating the ions around the loop. As an ion reaches the intersection, the system chooses whether to kick it into one of the legs and, if so, which leg. “We spin that ring almost like a hard drive, really, and whenever the ion that we want to gate gets close to the junction, there’s a decision that happens: Either that ion goes [into the legs], or it kind of makes a little turn and goes back into the ring,” said David Hayes, Quantinuum’s director of Computational Design and Theory. “And you can make that decision with just a few electrodes that are right at that X there.”

Each leg has a region where operations can take place, so this system can ensure that the right qubits are present together in the operation zones for things like two-qubit gates. Once the operations are complete, the qubits can be moved into the leg storage regions, and new qubits can be shuffled in. When the legs fill up, the qubits can be sent back to the loop, and the process is restarted.
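
A toy simulation makes the scheduling idea concrete. This is a minimal sketch under assumptions of my own (a two-leg layout with a made-up leg capacity and a greedy routing rule); it illustrates the spin-and-divert pattern, not Quantinuum’s actual control logic:

    from collections import deque

    LEG_CAPACITY = 4  # ions each leg can hold (made-up figure)

    def route(ring, wanted):
        """One revolution of the ring: divert wanted ions into the legs."""
        legs = [[], []]
        for _ in range(len(ring)):
            ion = ring.popleft()
            leg = min(legs, key=len)          # keep the two legs balanced
            if ion in wanted and len(leg) < LEG_CAPACITY:
                leg.append(ion)               # kicked through the junction
            else:
                ring.append(ion)              # takes the turn back into the ring
        return legs

    ring = deque(range(12))                   # 12 ions circulating
    legs = route(ring, wanted={2, 3, 7, 8})
    print(legs)        # [[2, 7], [3, 8]] -> pairs meet in each leg's gate zone
    print(list(ring))  # [0, 1, 4, 5, 6, 9, 10, 11] keep circulating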

“You get less traffic jams if all the traffic is running one way going through the gate zones,” Hayes told Ars. “If you had to move them past each other, you would have to do kind of physical swaps, and you want to avoid that.”

Obviously, issuing all the commands to control the hardware will be quite challenging for anything but the simplest operations. That puts an increasing emphasis on the compilers that add a significant layer of abstraction between what you want a quantum computer to do and the actual hardware commands needed to implement it. Quantinuum has developed its own compiler to take user-generated code and produce something that the control system can convert into the sequence of commands needed.

The control system now incorporates a real-time engine that can read data from Helios and update the commands it issues based on the state of the qubits. Quantinuum has this portion of the system running on GPUs rather than requiring customized hardware.

Quantinuum’s SDK for users is called Guppy; it’s based on a modified version of Python that allows users to describe what they’d like the system to do. Helios is being accompanied by a new version of Guppy that includes some traditional programming tools like FOR loops and IF-based conditionals. These will be critical for the sorts of things we want to do as we move toward error-corrected qubits. This includes testing for errors, fixing them if they’re present, or repeatedly attempting initialization until it succeeds without error.
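
Quantinuum hasn’t published code in this announcement, so the following is a purely illustrative Python sketch of the repeat-until-success pattern that those FOR loops and IF conditionals enable; `initialize` here is a simulated stand-in, not a real Guppy or Quantinuum API call:

    import random

    def initialize():
        # Simulated stand-in: pretend state prep succeeds 80% of the time.
        return random.random() < 0.8

    def prepare_verified(max_attempts=5):
        # Classical FOR loop around a quantum operation, with an IF on the
        # mid-circuit check result: retry initialization until it succeeds.
        for attempt in range(1, max_attempts + 1):
            if initialize():
                return attempt
        return None  # flag failure so the rest of the circuit can react

    print(prepare_verified())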

Hayes said the new version is also moving toward error correction. Thanks to Guppy’s ability to dynamically reassign qubits, Helios will be able to operate as a machine with 94 qubits while detecting errors on any of them. Alternatively, the 96 hardware qubits can be configured as a single unit that hosts 48 error-corrected qubits. “It’s actually a concatenated code,” Hayes told Ars. “You take two error detection codes and weave them together… it’s a single code block, but it has 48 logical qubits housed inside of it.” (Hayes said it’s a distance-four code, meaning it can fix up to two errors that occur simultaneously.)

Tackling superconductivity

While Quantinuum hardware has always had low error rates relative to most of its competitors, there was only so much you could do with 56 qubits. With 96 now at their disposal, researchers at the company decided to build a quantum implementation of a model (called the Fermi-Hubbard model) that’s meant to help study the electron pairing that takes place during the transition to superconductivity.

“There are definitely terms that the model doesn’t capture,” Quantinuum’s Henrik Dreyer acknowledged. “They neglect the electric repulsion that [the electrons] still have—I mean, they’re still negatively charged; they are still repelling. On the other hand, I should say that this Fermi-Hubbard model—it has many of the features that a superconductor has.”

Superconductivity occurs when electrons join to form what are called Cooper pairs, overcoming their normal repulsion. And the model can tell that apart from normal conductivity in the same material.

“You ask the question ‘What’s the chance that one of the charged particles spontaneously disappears because of quantum fluctuations and goes over here?'” Dreyer said, describing what happens when simulating a conductor. “What people do in superconductivity is they take this concept, but instead of asking what’s the chance of a single-charge particle to tunnel over there spontaneously, they’re asking what is the chance of a pair to tunnel spontaneously?”
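
For reference, the standard Fermi-Hubbard Hamiltonian and the two correlators Dreyer is contrasting can be written compactly (these are textbook forms, not equations from Quantinuum’s announcement). Here t is the hopping amplitude between neighboring sites, U is the on-site repulsion, and superconducting order shows up as the pair correlator remaining finite at long distances:

    H = -t \sum_{\langle i,j \rangle, \sigma} ( c^\dagger_{i\sigma} c_{j\sigma} + \mathrm{h.c.} ) + U \sum_i n_{i\uparrow} n_{i\downarrow}

    Single-particle tunneling (ordinary conduction):
        G_{ij} = \langle c^\dagger_{i\sigma} c_{j\sigma} \rangle

    Pair tunneling (superconductivity), with \Delta_i = c_{i\downarrow} c_{i\uparrow}:
        P_{ij} = \langle \Delta^\dagger_i \Delta_j \rangle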

Even in its simplified form, however, it’s still a model of a quantum system, with all the computational complexity that comes with that. So the Quantinuum team modeled a few systems that classical computers struggle with. One was simply looking at a larger grid of atoms than most classical simulations have done; another expanded the grid in an additional dimension, modeling layers of a material. Perhaps the most complicated simulation involved what happens when a laser pulse of the right wavelength hits a superconductor at room temperature, an event that briefly induces a superconducting state.

And the system produced results, even without error correction. “It’s maybe a technical point, but I think it’s [a] very important technical point, which is [that] the circuits that we ran, they all had errors,” Dreyer told Ars. “Maybe on the average of three or so errors, and for some reason, that is not very fully understood for this application, it doesn’t matter. You still get almost the perfect result in some of these cases.”

That said, he also indicated that having higher-fidelity hardware would help the team do a better job of putting the system in a ground state or running the simulation for longer. But those will have to wait for future hardware.

What’s next

If you look at Quantinuum’s roadmap for that future hardware, Helios would appear to be the last of its kind. It and earlier versions of the processors have loops and large straight stretches; everything in the future features a grid of squares. But both Strabley and Hayes said that Helios has several key transitional features. “Those ions are moving through that junction many, many times over the course of a circuit,” Strabley told Ars. “And so it’s really enabled us to work on the reliability of the junction, and that will translate into the large-scale systems.”

[Image: Quantinuum’s product roadmap, with years from 2020 to 2029 across the top and five processors of increasingly complex geometry arrayed left to right. Helios sits at the pivot between the simple geometries of earlier Quantinuum processors and the grids of future designs. Credit: Quantinuum]

The collection of squares seen in future processors will also allow the same sorts of operations that are done with the loop-and-legs of Helios. Some squares can serve as the equivalent of a loop in terms of storage and sorting, while some of the straight lines nearby can be used for operations.

“What will be common to both of them is kind of the general concept that you can have a storage and sorting region and then gating regions on the side and they’re separated from one another,” Hayes said. “It’s not public yet, but that’s the direction we’re heading: a storage region where you can do really fast sorting in these 2D grids, and then gating regions that have parallelizable logical operations.”

In the meantime, we’re likely to see improvements made to Helios—ideas that didn’t quite make today’s release. “There’s always one more improvement that people want to make, and I’m the person that says, ‘No, we’re going to go now. Put this on the market, and people are going to go use it,'” Strabley said. “So there is a long list of things that we’re going to add to improve the performance. So expect that over the course of Helios, the performance is going to get better and better and better.”

That performance is likely to be used for the sort of initial work done on superconductivity or the algorithm recently described by Google, which is at or a bit beyond what classical computers can manage and may start providing some useful insights. But it will still be a generation or two before we start seeing quantum computing fulfill some of its promise.


Good Luck, Have Fun, Don’t Die trailer ushers in AI apocalypse


Director Gore Verbinski has racked up an impressive filmography over the years, from The Ring and the first three installments of the Pirates of the Caribbean franchise to the 2011 Oscar-nominated animated Western Rango. Granted, he’s had his share of failures (*cough* The Lone Ranger *cough*), but if this trailer is any indication, Verbinski has another winner on his hands with the absurdist sci-fi dark comedy Good Luck, Have Fun, Don’t Die.

Sam Rockwell stars as the otherwise unnamed “Man from the Future,” who shows up at a Los Angeles diner looking like a homeless person but claiming to be a time traveler from an apocalyptic future. He’s there to recruit the locals into his war against a rogue AI, although the diner patrons are understandably dubious about his sanity. (“I come from a nightmare apocalypse,” he assures the crowd about his grubby appearance. “This is the height of f*@ing fashion!”) Somehow, he convinces a handful of Angelenos to join his crusade, and judging by the remaining footage, all kinds of chaos breaks out.

In addition to the eminently watchable Rockwell, the cast includes Haley Lu Richardson as Ingrid, Michael Peña as Mark, Zazie Beetz as Janet, and Juno Temple as Susan. Dino Fetscher, Anna Acton, Asim Chaudhury, Daniel Barnett, and Dominique Maher also appear in as-yet-undisclosed roles. Matthew Robinson (The Invention of Lying, Love and Monsters) penned the script. This is Verbinski’s first indie film, and Tom Ortenberg, CEO of distributor Briarcliff Entertainment, praised it as “wildly original, endlessly entertaining, and unlike anything audiences have seen before.” Color us intrigued.

Good Luck, Have Fun, Don’t Die hits theaters on February 13, 2026.


Quantum computing tech keeps edging forward


The end of the year is usually a busy time in the quantum computing arena, as companies often try to announce that they’ve reached major milestones before the year wraps up. This year has been no exception. And while not all of these announcements involve interesting new architectures like the one we looked at recently, they’re a good way to mark progress in the field, and they often involve the sort of smaller, incremental steps needed to push it forward.

What follows is a quick look at a handful of announcements from the past few weeks that struck us as potentially interesting.

IBM follows through

IBM is one of the companies announcing a brand-new architecture this year. That’s not at all a surprise, given that the company promised to do so back in June; this week, it confirmed that it has built the two processors it described then. These include one called Loon, which is focused on the architecture that IBM will use to host error-corrected logical qubits. Loon represents two major changes for the company: a shift to nearest-neighbor connections and the addition of long-distance connections.

IBM had previously used what it termed the “heavy hex” architecture, in which alternating qubits were connected to either two or three of their neighbors, forming a set of overlapping hexagonal structures. In Loon, the company is using a square grid, with each qubit having connections to its four closest neighbors. This higher density of connections can enable more efficient use of the qubits during computations. But qubits in Loon have additional long-distance connections to other parts of the chip, which will be needed for the specific type of error correction that IBM has committed to. They’re there to allow users to test out a critical future feature.
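
To picture the connectivity change, here is a short illustration (my own sketch, not IBM code) that builds the coupling map of a square grid; interior qubits touch four neighbors, versus the alternating two- and three-neighbor pattern of heavy hex:

    # Illustration only: nearest-neighbor coupling map for a square grid
    # of qubits. Interior qubits have degree 4, vs. the degree-2/3 mix
    # of IBM's earlier heavy-hex layouts.

    def square_grid_couplings(rows, cols):
        idx = lambda r, c: r * cols + c
        pairs = []
        for r in range(rows):
            for c in range(cols):
                if c + 1 < cols:
                    pairs.append((idx(r, c), idx(r, c + 1)))  # horizontal
                if r + 1 < rows:
                    pairs.append((idx(r, c), idx(r + 1, c)))  # vertical
        return pairs

    pairs = square_grid_couplings(4, 4)
    degree = {}
    for a, b in pairs:
        for q in (a, b):
            degree[q] = degree.get(q, 0) + 1
    print(len(pairs))                    # 24 couplers on a 4x4 grid
    print(sorted(set(degree.values())))  # [2, 3, 4]: corners, edges, interior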

The second processor, Nighthawk, is focused on the now. It also has the nearest-neighbor connections and a square grid structure, but it lacks the long-distance connections. Instead, the focus with Nighthawk is to get error rates down so that researchers can start testing algorithms for quantum advantage—computations where quantum computers have a clear edge over classical algorithms.

In addition, the company is launching a GitHub repository that will allow the community to deposit code and performance data for both classical and quantum algorithms, enabling rigorous evaluations of relative performance. Right now, those are broken down into three categories of algorithms that IBM expects are most likely to demonstrate a verifiable quantum advantage.

This isn’t the only follow-up to IBM’s June announcement, which also saw the company describe the algorithm it would use to identify errors in its logical qubits and the corrections needed to fix them. In late October, the company said it had confirmed that the algorithm could work in real time when run on an FPGA made in collaboration with AMD.

Record lows

A few years back, we reported on a company called Oxford Ionics, which had just announced that it achieved a record-low error rate in some qubit operations using trapped ions. Most trapped-ion quantum computers move qubits by manipulating electromagnetic fields, but they perform computational operations using lasers. Oxford Ionics figured out how to perform operations using electromagnetic fields, meaning more of their processing benefited from our ability to precisely manufacture circuitry (lasers were still needed for tasks like producing a readout of the qubits). And as we noted, it could perform these computational operations extremely effectively.

But Oxford Ionics never made a major announcement that would give us a good excuse to describe its technology in more detail. The company was ultimately acquired by IonQ, a competitor in the trapped-ion space.

Now, IonQ is building on what it gained from Oxford Ionics, announcing a new record for two-qubit gates: greater than 99.99 percent fidelity (an error rate below 0.01 percent). That could be critical for the company, as a low error rate for hardware qubits means fewer are needed to get good performance from error-corrected qubits.

But the details of the two-qubit gates are perhaps more interesting than the error rate. Two-qubit gates involve bringing both qubits involved into close proximity, which often requires moving them. That motion pumps a bit of energy into the system, raising the ions’ temperature and leaving them slightly more prone to errors. As a result, any movement of the ions is generally followed by cooling, in which lasers are used to bleed energy back out of the qubits.

This process, which involves two distinct cooling steps, is slow. So slow that as much as two-thirds of the time spent in operations involves the hardware waiting around while recently moved ions are cooled back down. The new IonQ announcement includes a description of a method for performing two-qubit gates that doesn’t require the ions to be fully cooled. This allows one of the two cooling steps to be skipped entirely. In fact, coupled with earlier work involving one-qubit gates, it raises the possibility that the entire machine could operate with its ions at a still very cold but slightly elevated temperature, avoiding all need for one of the two cooling steps.

That would shorten operation times and let researchers do more before the limit of a quantum system’s coherence is reached.
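
A toy timing model shows the scale of the win. The only input taken from the article is that cooling can consume up to two-thirds of operation time; the even split between the two cooling steps is my assumption:

    # Toy timing model (assumed numbers; only the "up to two-thirds
    # cooling" figure comes from the article).
    useful_work = 1.0             # transport + gates, arbitrary units
    cooling = 2.0                 # two-thirds of total time is cooling
    before = useful_work + cooling

    cooling_after = cooling * 0.5  # assume skipping one of the two steps halves cooling
    after = useful_work + cooling_after

    print(f"time saved: {1 - after / before:.0%}")  # ~33% under these assumptions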

State of the art?

The last announcement comes from another trapped-ion company, Quantum Art. A couple of weeks back, it announced a collaboration with Nvidia that resulted in a more efficient compiler for operations on its hardware. On its own, this isn’t especially interesting. But it’s emblematic of a trend that’s worth noting, and it gives us an excuse to look at Quantum Art’s technology, which takes a distinct approach to boosting the efficiency of trapped-ion computation.

First, the trend: Nvidia’s interest in quantum computing. The company isn’t interested in the quantum aspects (at least not publicly); instead, it sees an opportunity to get further entrenched in high-performance computing. There are three areas where the computational capacity of GPUs can play a role here. One is small-scale modeling of quantum processors so that users can perform an initial testing of algorithms without committing to paying for access to the real thing. Another is what Quantum Art is announcing: using GPUs as part of a compiler chain to do all the computations needed to find more efficient ways of executing an algorithm on specific quantum hardware.

Finally, there’s a potential role in error correction. Error correction involves some indirect measurements of a handful of hardware qubits to determine the most likely state that a larger collection (called a logical qubit) is in. This requires modeling a quantum system in real time, which is quite difficult—hence the computational demands that Nvidia hopes to meet. Regardless of the precise role, there has been a steady flow of announcements much like Quantum Art’s: a partnership with Nvidia that will keep the company’s hardware involved if the quantum technology takes off.
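
To make the decoding task concrete, here is a deliberately tiny example (a three-qubit repetition code, far simpler than anything that would need a GPU): two parity measurements on neighboring qubits pick out the most likely single error without reading the data qubits directly:

    # Minimal illustration of decoding: two parity checks on a 3-qubit
    # repetition code map each syndrome to the most likely bit flip.
    SYNDROME_TO_CORRECTION = {
        (0, 0): None,  # no error detected
        (1, 0): 0,     # flip on qubit 0
        (1, 1): 1,     # flip on qubit 1
        (0, 1): 2,     # flip on qubit 2
    }

    def decode(bits):
        s = (bits[0] ^ bits[1], bits[1] ^ bits[2])  # indirect parity measurements
        flip = SYNDROME_TO_CORRECTION[s]
        if flip is not None:
            bits[flip] ^= 1                         # apply the correction
        return bits

    print(decode([0, 1, 0]))  # -> [0, 0, 0]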

In Quantum Art’s case, that technology is a bit unusual. The trapped-ion companies we’ve covered so far are all taking different routes to the same place: moving one or two ions into a location where operations can be performed and then executing one- or two-qubit gates. Quantum Art’s approach is to perform gates with much larger collections of ions. At the compiler level, it would be akin to figuring out which qubits need a specific operation performed, clustering them together, and doing it all at once. Obviously, there are potential efficiency gains here.

The challenge would normally be moving so many qubits around to create these clusters. But Quantum Art uses lasers to “pin” ions in a row so they act to isolate the ones to their right from the ones to their left. Each cluster can then be operated on separately. In between operations, the pins can be moved to new locations, creating different clusters for the next set of operations. (Quantum Art is calling each cluster of ions a “core” and presenting this as multicore quantum computing.)
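
A sketch of the bookkeeping (my own illustration, with made-up pin positions) shows how moving the pins re-partitions one chain into different “cores” between gate rounds:

    # Sketch only: lasers "pin" ions at chosen positions, splitting one
    # chain into independently addressable clusters ("cores").

    def cores(n_ions, pins):
        """Split ions 0..n-1 into clusters separated by pinned ions."""
        clusters, current = [], []
        for ion in range(n_ions):
            if ion in pins:          # a pinned ion acts as a barrier
                if current:
                    clusters.append(current)
                current = []
            else:
                current.append(ion)
        if current:
            clusters.append(current)
        return clusters

    print(cores(10, pins={3, 7}))  # [[0, 1, 2], [4, 5, 6], [8, 9]]
    # Between gate rounds the pins move, creating different clusters.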

At the moment, Quantum Art is behind some of its competitors in terms of qubit count and interesting demonstrations, and it’s not pledging to scale quite as fast. But the company’s founders are convinced that the complexity of doing so many individual operations and moving so many ions around will catch up with those competitors, while the added efficiency of multi-qubit gates will allow it to scale better.

This is just a small sampling of all the announcements from this fall, but it should give you a sense of how rapidly the field is progressing—from technology demonstrations to identifying cases where quantum hardware has a real edge and exploring ways to sustain progress beyond those first successes.
