They are growing the world's most expensive spice in Canada. Here's how

A farmer holds a green basket as he harvests purple saffron flowers from the field beneath him.

As golden hour settles over Avtar Dhillon’s farm in Abbotsford, B.C., rows of delicate purple flowers are in full bloom. Inside lies an ancient spice some Canadian farmers are beginning to get excited about.

Testing shows Apple N1 Wi-Fi chip improves on older Broadcom chips in every way

This year’s newest iPhones included one momentous change that marked a new phase in the evolution of Apple Silicon: the Apple N1, Apple’s first in-house chip made to handle local wireless connections. The N1 supports Wi-Fi 7, Bluetooth 6, and the Thread smart home communication protocol, and it replaces the third-party wireless chips (mostly made by Broadcom) that Apple used in older iPhones.

Apple claimed that the N1 would enable more reliable connectivity for local communication features like AirPlay and AirDrop, but it didn’t say anything about how users could expect the chip to perform. Now Ookla, the folks behind the Speedtest app and website, have analyzed about five weeks’ worth of users’ testing data to get an idea of how the iPhone 17 lineup stacks up against the iPhone 16, as well as against Android phones with Wi-Fi chips from Qualcomm, MediaTek, and others.

While the N1 isn’t at the top of the charts, Ookla says Apple’s Wi-Fi chip “delivered higher download and upload speeds on Wi-Fi compared to the iPhone 16 across every studied percentile and virtually every region.” The median download speed for the iPhone 17 series was 329.56Mbps, compared to 236.46Mbps for the iPhone 16; the upload speed also jumped from 73.68Mbps to 103.26Mbps.
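
Generation over generation, those medians work out to roughly a 39 percent gain in downloads and a 40 percent gain in uploads. A quick check of that arithmetic, using only the figures quoted above:

```python
# Back-of-the-envelope check of the median gains reported above. The speeds are
# Ookla's published medians; the percentage math is derived here, not Ookla's.
medians_mbps = {
    "download": {"iPhone 16": 236.46, "iPhone 17": 329.56},
    "upload": {"iPhone 16": 73.68, "iPhone 17": 103.26},
}

for direction, vals in medians_mbps.items():
    gain = (vals["iPhone 17"] / vals["iPhone 16"] - 1) * 100
    print(f"{direction}: {vals['iPhone 16']} -> {vals['iPhone 17']} Mbps (+{gain:.0f}%)")

# download: 236.46 -> 329.56 Mbps (+39%)
# upload: 73.68 -> 103.26 Mbps (+40%)
```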

Ookla noted that the N1 seemed to improve scores most of all in the bottom 10th percentile of performance tests, “implying Apple’s custom silicon lifts the floor more than the ceiling.” The iPhone 17 didn’t top Ookla’s global performance charts, either—Ookla found that the Pixel 10 Pro series slightly edges out the iPhone 17 in download speed, while the Xiaomi 15T Pro, with its MediaTek Wi-Fi silicon, posted better upload speeds.

Ookla’s testing data suggests Apple’s N1 Wi-Fi chip is more reliable when Wi-Fi connections are spottier. Credit: Ookla

Android phones also sometimes benefit from faster adoption of new technologies, such as support for 6 GHz Wi-Fi 7 with a 320 MHz channel width. While the N1’s lack of support for these features “does not materially affect performance in real world use for most people,” Android phones like the Pixel 10 series and Samsung’s Galaxy S25 can outrun the iPhone 17 in areas where those technologies are in use.

Note that Ookla’s approach can’t control for things like people’s distance from their Wi-Fi router, what kind of router they’re using, or the upload and download speeds set by their ISP. To account for this and minimize outliers, Ookla only publishes median numbers for the phones it’s tracking; it also lumps together phones from the same product families (the iPhone 17 results also include the 17 Pro and the iPhone Air, for example).
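
As a rough sketch of why that helps, here is a minimal example of grouping tests by product family and reporting only medians; the records and field names are invented for illustration and are not Ookla’s actual schema:

```python
# Illustrative only: group speed tests by product family and report the median,
# mirroring the methodology described above. Records and fields are invented.
from statistics import median

tests = [
    {"family": "iPhone 17", "model": "iPhone 17 Pro", "download_mbps": 612.4},
    {"family": "iPhone 17", "model": "iPhone Air", "download_mbps": 301.9},
    {"family": "iPhone 17", "model": "iPhone 17", "download_mbps": 18.2},  # distant router
    {"family": "iPhone 16", "model": "iPhone 16", "download_mbps": 240.1},
    {"family": "iPhone 16", "model": "iPhone 16 Pro", "download_mbps": 229.8},
]

by_family = {}
for t in tests:
    by_family.setdefault(t["family"], []).append(t["download_mbps"])

for family, speeds in by_family.items():
    # Unlike the mean, the median ignores how extreme the outliers are.
    print(f"{family}: median download {median(speeds):.2f} Mbps over {len(speeds)} tests")
```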

Ookla published similar data about the performance of Apple’s C1 cellular modem, also a first-generation chip design. As with the N1, Ookla’s main finding was that the C1 didn’t support the cutting-edge technologies it would need to top the performance charts, but that its speeds were mostly in the same ballpark as the Qualcomm modems in the iPhone 16 and that the C1 actually fared best in countries with less-robust cellular networks.

Since announcing the N1 in the iPhone 17 series in September, Apple has also launched a new Apple M5 iPad Pro with the N1 inside, though the chip was not included in the M5 MacBook Pro that Apple announced at the same time. The N1’s Thread support also makes it a good fit for Apple’s smart home and smart home-adjacent devices like the HomePod speaker or the Apple TV streaming box. The next time we see hardware refreshes for those devices—and updates are supposedly coming sooner rather than later—we expect to see the N1 included.

Massive Cloudflare outage was triggered by file that suddenly doubled in size

When a Cloudflare outage disrupted large numbers of websites and online services yesterday, the company initially thought it was hit by a “hyper-scale” DDoS (distributed denial-of-service) attack.

“I worry this is the big botnet flexing,” Cloudflare co-founder and CEO Matthew Prince wrote in an internal chat room yesterday, while he and others discussed whether Cloudflare was being hit by attacks from the prolific Aisuru botnet. But upon further investigation, Cloudflare staff realized the problem had an internal cause: an important file had unexpectedly doubled in size and propagated across the network.

This caused trouble for the software that reads the file to keep Cloudflare’s bot management system, which uses a machine learning model to protect against security threats, up to date. Cloudflare’s core CDN, security services, and several other services were affected.

“After we initially wrongly suspected the symptoms we were seeing were caused by a hyper-scale DDoS attack, we correctly identified the core issue and were able to stop the propagation of the larger-than-expected feature file and replace it with an earlier version of the file,” Prince wrote in a post-mortem of the outage.

Prince explained that the problem “was triggered by a change to one of our database systems’ permissions which caused the database to output multiple entries into a ‘feature file’ used by our Bot Management system. That feature file, in turn, doubled in size. The larger-than-expected feature file was then propagated to all the machines that make up our network.”

These machines run software that routes traffic across the Cloudflare network. The software “reads this feature file to keep our Bot Management system up to date with ever changing threats,” Prince wrote. “The software had a limit on the size of the feature file that was below its doubled size. That caused the software to fail.”

Sorry for the pain, Internet

After replacing the bloated feature file with an earlier version, the flow of core traffic “largely” returned to normal, Prince wrote. But it took another two-and-a-half hours “to mitigate increased load on various parts of our network as traffic rushed back online.”

Like Amazon Web Services, Cloudflare is relied upon by many online services and can take down much of the web when it has a technical problem. “On behalf of the entire team at Cloudflare, I would like to apologize for the pain we caused the Internet today,” Prince wrote, saying that any outage is unacceptable because of “Cloudflare’s importance in the Internet ecosystem.”

Cloudflare’s bot management system classifies bots as good or bad with “a machine learning model that we use to generate bot scores for every request traversing our network,” Prince wrote. “Our customers use bot scores to control which bots are allowed to access their sites—or not.”

Prince explained that the configuration file this system relies upon describes “features,” or individual traits “used by the machine learning model to make a prediction about whether the request was automated or not.” This file is updated every five minutes “and published to our entire network and allows us to react to variations in traffic flows across the Internet. It allows us to react to new types of bots and new bot attacks. So it’s critical that it is rolled out frequently and rapidly as bad actors change their tactics quickly.”

Unexpected query response

Each new version of the file is generated by a query running on a ClickHouse database cluster, Prince wrote. When Cloudflare made a change granting additional permissions to database users, the query response suddenly contained more metadata than it previously had.

Cloudflare staff assumed “that the list of columns returned by a query like this would only include the ‘default’ database.” But the query didn’t include a filter for the database name, causing it to return duplicates of columns, Prince wrote.

This is the type of query that Cloudflare’s bot management system uses “to construct each input ‘feature’ for the file,” he wrote. The extra metadata more than doubled the rows in the response, “ultimately affecting the number of rows (i.e. features) in the final file output,” Prince wrote.
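
As a toy illustration of that failure mode (the database, table, and column names below are invented, not Cloudflare’s actual query or schema): once a second database becomes visible to the querying user, a metadata lookup that never filters on database name returns every column twice, and the derived feature list roughly doubles.

```python
# Hypothetical illustration of the unfiltered-metadata bug described above.
# Rows mimic a (database, table, column) system catalog; names are invented.
catalog = [
    ("default", "request_features", "ua_entropy"),
    ("default", "request_features", "tls_fingerprint"),
    ("default", "request_features", "path_depth"),
    # After the permissions change, the same columns in an underlying database
    # became visible to the querying user as well:
    ("underlying", "request_features", "ua_entropy"),
    ("underlying", "request_features", "tls_fingerprint"),
    ("underlying", "request_features", "path_depth"),
]

def feature_columns(rows, database=None):
    """Return feature column names, optionally filtered to a single database."""
    return [col for db, table, col in rows
            if table == "request_features" and (database is None or db == database)]

print(len(feature_columns(catalog)))                      # 6: duplicates, the file doubles
print(len(feature_columns(catalog, database="default")))  # 3: the intended result
```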

Cloudflare’s proxy service has limits to prevent excessive memory consumption, with the bot management system having “a limit on the number of machine learning features that can be used at runtime.” This limit is 200, well above the actual number of features used.

“When the bad file with more than 200 features was propagated to our servers, this limit was hit—resulting in the system panicking” and outputting errors, Prince wrote.
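
A minimal sketch of that kind of hard runtime cap, with invented names and error handling (Cloudflare’s proxy is not written in Python, and only the 200-feature limit comes from the post-mortem):

```python
# Toy model of the preallocated feature limit described above. Numbers and
# names are illustrative; only the 200-feature cap is from the post-mortem.
MAX_FEATURES = 200

def load_feature_file(feature_names):
    if len(feature_names) > MAX_FEATURES:
        # In production this surfaced as the proxy panicking and serving 5xx
        # errors, not a tidy exception like this one.
        raise RuntimeError(f"{len(feature_names)} features exceeds cap of {MAX_FEATURES}")
    return {name: index for index, name in enumerate(feature_names)}

good_file = [f"feature_{i}" for i in range(120)]  # hypothetical typical size, under the cap
bad_file = good_file * 2                          # duplicated metadata: 240 features

load_feature_file(good_file)   # loads fine
try:
    load_feature_file(bad_file)
except RuntimeError as err:
    print("panic:", err)       # panic: 240 features exceeds cap of 200
```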

Worst Cloudflare outage since 2019

The number of 5xx error HTTP status codes served by the Cloudflare network is normally “very low” but soared after the bad file spread across the network. “The spike, and subsequent fluctuations, show our system failing due to loading the incorrect feature file,” Prince wrote. “What’s notable is that our system would then recover for a period. This was very unusual behavior for an internal error.”

This unusual behavior was explained by the fact “that the file was being generated every five minutes by a query running on a ClickHouse database cluster, which was being gradually updated to improve permissions management,” Prince wrote. “Bad data was only generated if the query ran on a part of the cluster which had been updated. As a result, every five minutes there was a chance of either a good or a bad set of configuration files being generated and rapidly propagated across the network.”

This fluctuation initially “led us to believe this might be caused by an attack. Eventually, every ClickHouse node was generating the bad configuration file and the fluctuation stabilized in the failing state,” he wrote.
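
A rough way to picture that flip-flopping, assuming, purely for illustration, that half the cluster had been updated partway through the rollout:

```python
# Toy model of the fluctuation Prince describes: every five minutes the feature
# file is regenerated, and whether it comes out good or bad depends on which
# part of the partially updated cluster handles the query. The 0.5 probability
# is an assumption for illustration, not Cloudflare's actual rollout state.
import random

random.seed(7)
updated_fraction = 0.5  # hypothetical share of ClickHouse nodes already updated

for cycle in range(8):  # eight five-minute generation cycles
    bad = random.random() < updated_fraction
    status = "bad file propagated -> 5xx errors spike" if bad else "good file -> traffic recovers"
    print(f"t+{cycle * 5:>2} min: {status}")

# Once the rollout finishes (updated_fraction approaches 1.0), every cycle
# yields a bad file and the outage stops fluctuating and stays in the failing state.
```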

Prince said that Cloudflare “solved the problem by stopping the generation and propagation of the bad feature file and manually inserting a known good file into the feature file distribution queue,” and then “forcing a restart of our core proxy.” The team then worked on “restarting remaining services that had entered a bad state” until the 5xx error code volume returned to normal later in the day.

Prince said the outage was Cloudflare’s worst since 2019 and that the firm is taking steps to protect against similar failures in the future. Cloudflare will work on “hardening ingestion of Cloudflare-generated configuration files in the same way we would for user-generated input; enabling more global kill switches for features; eliminating the ability for core dumps or other error reports to overwhelm system resources; [and] reviewing failure modes for error conditions across all core proxy modules,” according to Prince.

While Prince can’t promise that Cloudflare will never have another outage of the same scale, he said that previous outages have “always led to us building new, more resilient systems.”

Widespread Cloudflare outage blamed on mysterious traffic spike

A Cloudflare outage caused large chunks of the Internet to go dark Tuesday morning, temporarily impacting big platforms like X and ChatGPT.

“A fix has been implemented and we believe the incident is now resolved. We are continuing to monitor for errors to ensure all services are back to normal,” Cloudflare’s status page said. “Some customers may be still experiencing issues logging into or using the Cloudflare dashboard.”

The company initially attributed the widespread outages to “an internal service degradation” and provided updates as it sought a fix over the past two hours.

A Cloudflare spokesperson told Ars that the cloud services provider saw “a spike in unusual traffic to one of Cloudflare’s services,” which “caused some traffic passing through Cloudflare’s network to experience errors.”

After the company investigated the “spike in unusual traffic,” Cloudflare’s spokesperson provided a more detailed update, telling Ars, “the root cause of the outage was a configuration file that is automatically generated to manage threat traffic. The file grew beyond an expected size of entries and triggered a crash in the software system that handles traffic for a number of Cloudflare’s services.”

“To be clear, there is no evidence that this was the result of an attack or caused by malicious activity,” the spokesperson said. “We expect that some Cloudflare services will be briefly degraded as traffic naturally spikes post-incident, but we expect all services to return to normal in the next few hours.”

About 20 percent of the web relies on Cloudflare to manage and protect traffic, a Cloudflare blog noted in July. Some intermediate fixes have been made, Cloudflare’s status page said. But as of this writing, many sites remain down. According to DownDetector, Amazon, Spotify, Zoom, Uber, and Azure also experienced outages.

“Given the importance of Cloudflare’s services, any outage is unacceptable,” Cloudflare’s spokesperson said. “We apologize to our customers and the Internet in general for letting you down today. We will learn from today’s incident and improve.”

Cloudflare will continue to update the status page as fixes come in, and a blog will be posted later today discussing the issue, the spokesperson told Ars.

It’s the latest massive outage site owners have coped with after an Amazon Web Services outage took out half the web last month. Both the AWS outage and the chaotic CrowdStrike outage last year were estimated to cost affected parties billions.

Critics have suggested that outages like these make it clear how fragile the Internet really is, especially when everyone relies on the same service providers. During the AWS outage, some sites considered diversifying service providers to avoid losing business during future outages.

The outage may have caused some investors to panic; Cloudflare’s stock fell about 3 percent amid the disruption.

Ars will update this story when Cloudflare provides more information on the outage.

This story was updated on November 18 to add new information from Cloudflare.

Judge smacks down Texas AG’s request to immediately block Tylenol ads

A Texas judge has rejected a request from Texas Attorney General Ken Paxton to issue a temporary order barring Tylenol’s maker, Kenvue, from claiming amid litigation that the pain and fever medication is safe for pregnant women and children, according to court documents.

In records filed Friday, District Judge LeAnn Rafferty, in Panola County, also rejected Paxton’s unusual request to block Kenvue from distributing $400 million in dividends to shareholders later this month.

The denials are early losses for Paxton in a politically charged case that hinges on the unproven claim that Tylenol causes autism and other disorders—a claim first introduced by President Trump and his anti-vaccine health secretary, Robert F. Kennedy Jr.

In a bizarre press conference in September, Trump repeatedly implored Americans not to take the drug. But scientific studies have not shown that Tylenol (acetaminophen) causes autism or other neurologic disorders. Some studies have claimed to find an association between Tylenol use and autism, but those studies have significant flaws, and others have found no link. Moreover, Tylenol is considered the safest pain and fever drug for use during pregnancy, and untreated pain and fevers in pregnancy are known to cause harms, including an increased risk of autism.

Still, Paxton filed the lawsuit October 28, claiming that Kenvue and Tylenol’s former parent company, Johnson & Johnson, deceptively marketed Tylenol as safe while knowing of an increased risk of autism and other disorders. The lawsuit sought to force Kenvue to change the way it markets Tylenol and pay fines, among other requests.

As a first step, the attorney general—who is running to unseat U.S. Sen. John Cornyn in next year’s Republican primary—attempted to get the judge to temporarily bar some of Tylenol’s safety claims and stop Kenvue from paying the dividends. He failed on both counts.

Paxton made the request to stop the dividends under a state law that can keep companies on the brink of financial ruin from giving out funds that could otherwise be reserved for creditors, such as those suing the company over claims that Tylenol caused autism or other harms. Kenvue is facing a number of such lawsuits in the wake of Trump’s announcement. But even the state’s lawyers acknowledged that Paxton’s request to block dividends was “extraordinary,” according to The Texas Tribune.

According to Reuters, one of Kenvue’s lawyers, Kim Bueno, explained that the problem with the state of Texas making this request is that Kenvue is based in New Jersey and incorporated in Delaware. “There was no jurisdiction to challenge that,” she said.

Rafferty determined that she did not have jurisdiction over the dividend claim. She also denied the request to restrict Tylenol’s marketing, which rests on a causation claim that even the Trump administration is not standing by. The day after Paxton filed his lawsuit, Kennedy said that “the causative association… between Tylenol given in pregnancy and the perinatal periods is not sufficient to say it definitely causes autism.” He did, however, call some studies “very suggestive.”

Ancient Egyptians likely used opiates regularly

Scientists have found traces of ancient opiates in the residue lining an Egyptian alabaster vase, indicating that opiate use was woven into the fabric of the culture. And the Egyptians didn’t just indulge occasionally: according to a paper published in the Journal of Eastern Mediterranean Archaeology, opiate use may have been a fixture of daily life.

In recent years, archaeologists have been applying the tools of pharmacology to excavated artifacts in collections around the world. As previously reported, there is ample evidence that humans in many cultures throughout history used various hallucinogenic substances in religious ceremonies or shamanic rituals. That includes not just ancient Egypt but also ancient Greek, Vedic, Maya, Inca, and Aztec cultures. The Urarina people who live in the Peruvian Amazon Basin still use a psychoactive brew called ayahuasca in their rituals, and Westerners seeking their own brand of enlightenment have also been known to participate.

For instance, in 2023, David Tanasi, of the University of South Florida, posted a preprint on his preliminary analysis of a ceremonial mug decorated with the head of Bes, a popular deity believed to confer protection on households, especially mothers and children. After collecting sample residues from the vessel, Tanasi applied various techniques—including proteomic and genetic analyses and synchrotron radiation-based Fourier-transform infrared microspectroscopy—to characterize the residues.

Tanasi found traces of Syrian rue, whose seeds are known to have hallucinogenic properties that can induce dream-like visions, per the authors, thanks to its production of the alkaloids harmine and harmaline. There were also traces of blue water lily, which contains a psychoactive alkaloid that acts as a sedative, as well as a fermented alcoholic concoction containing yeasts, wheat, sesame seeds, fruit (possibly grapes), honey, and, um, “human fluids”: possibly breast milk, oral or vaginal mucus, and blood. A follow-up 2024 study confirmed those results and also found traces of pine nuts or Mediterranean pine oil; licorice; tartaric acid salts that were likely part of the aforementioned alcoholic concoction; and traces of spider flowers known to have medicinal properties.

Vessels of alabaster

Now we can add opiates to the list of pharmacological substances used by the ancient Egyptians. The authors of this latest paper focused on one alabaster vase in particular, housed in the Yale Peabody Museum’s Babylonian Collection. The vase is intact—a rare find—and is inscribed in four ancient languages and mentions Xerxes I, who reigned over the Achaemenid Empire from 486 to 465 BCE. The authors were particularly intrigued by the presence of a dark-brown residue inside the vase.

Papaver somnifera entry in a facsimile of the ca. 515 CE Anicia Juliana Codex of De materia medica by Dioscorides. Credit: C. Zollo/courtesy of Medical Historical Library, Harvey Cushing/John Hay Whitney Medical Library, Yale University
pXRF analysis of YBC alabastra. Credit: Andrew J. Koh
FTIR analysis of YBC alabastra. Credit: Andrew J. Koh

Past scholars had speculated the vases most likely held cosmetics or perfumes, or perhaps hidden messages between the king and his officials. Yet there are also several known pharmacopeia recipes contained in such works as the Anicia Juliana Codex of De materia medica by Dioscorides. The current authors analyzed residue samples with nondestructive techniques, namely portable X-ray fluorescence (pXRF) and passive Fourier-transform infrared (pFTIR) spectroscopy.

The result: distinct traces of several biomarkers for opium, such as noscapine, hydrocotarnine, morphine, thebaine, and papaverine. That’s consistent with an earlier identification of opiate residues found in several Egyptian alabaster vessels and Cypriot juglets excavated from a merchant’s tomb south of Cairo, dating back to the New Kingdom (16th to 11th century BCE).

The authors think these twin findings warrant a reassessment of prior assumptions about Egyptian alabaster vessels, many of which they believe could also have traces of ancient opiates. A good starting point, they suggest, is a set of vessels excavated from Tutankhamun’s tomb in 1922 by archaeologist Howard Carter. Many of those vessels have the same sticky dark brown organic residues. There was an early attempt to chemically analyze those residues in 1933 by Albert Lucas, who simply didn’t have the necessary technology to identify the compounds, although he was able to determine that the residues were not unguents or perfumes. Nobody has attempted to analyze the residues since.

Additional evidence of the residues’ value lies in the fact that, when it came to the alabaster vessels, looters didn’t engage in the usual “smash and grab” practices used to collect precious metals. Instead, they transferred the organic contents into portable bags; there are still finger marks inside many of the vessels, as well as remnants of the leather bags used to collect the organics.

“It remains imminently possible, if not probable, that at least some of the vast remaining bulk of calcite vessels… in fact contained opiates as part of a long-lived Egyptian tradition we are only beginning to understand,” the authors concluded. Looters missed a few of the vessels, which still contain their original organic contents, making them ideal candidates for future analysis.

“We now have found opiate chemical signatures in Egyptian alabaster vessels attached to elite societies in Mesopotamia and embedded in more ordinary cultural circumstances within ancient Egypt,” said co-author Andrew Koh of the Yale Peabody Museum. “It’s possible these vessels were easily recognizable cultural markers for opium use in ancient times, just as hookahs today are attached to shisha tobacco consumption. Analyzing the contents of the jars from King Tut’s tomb would further clarify the role of opium in these ancient societies.”

DOI: Journal of Eastern Mediterranean Archaeology, 2025. 10.5325/jeasmedarcherstu.13.3.0317  (About DOIs).
