
New HDR10+ Advanced standard will try to fix the soap opera effect


Motion smoothing has a bad reputation among most cinephiles, as well as many home theater enthusiasts and content creators. Also known as motion or video interpolation, motion smoothing is available in virtually every modern TV. It’s supposed to remove judder from films and TV shows that are shot at 24p (24 frames per second) or 25p and displayed on 60Hz or 120Hz TVs. But motion smoothing often results in the dreaded soap opera effect and unwanted visual artifacts.

Two upcoming HDR standards, HDR10+ Advanced and Dolby Vision 2, are looking to change how we perceive motion smoothing and more closely align motion interpolation with a creator’s vision. However, it’s unclear if these standards can pull that off.

HDR10+ Advanced’s Intelligent FRC

Today, Samsung provided details about the next version of the HDR10 format, which introduces six new features. Among HDR10+ Advanced’s most interesting features is HDR10+ Intelligent FRC (frame rate conversion), which is supposed to improve motion smoothing.

A TV using motion smoothing analyzes each video frame and tries to determine what additional frames would look like if the video were playing at a frame rate that matched the TV’s refresh rate. The TV then inserts those frames into the video. A 60Hz TV with motion smoothing on, for example, would attempt to remove judder from a 24p film by inserting frames so that the video plays as if it were shot at 60p. For some, this looks normal and can make motion, especially camera panning or zooming, appear smoother. To others, however, movies and shows end up looking more like soap operas, as if they were shot on higher-speed video cameras instead of film cameras. Critics, including some big names in Hollywood, argue that motion smoothing looks unnatural and deviates from the creator’s intended vision.
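The timing problem frame rate conversion has to solve can be sketched with simple arithmetic. The toy example below is only an illustration (it assumes plain blending between neighboring frames, where real TVs use motion-vector estimation); it shows which pair of 24p source frames each 60Hz output frame falls between:

```python
# Sketch only: real FRC estimates motion vectors and synthesizes new
# frames; this just shows the frame-timing math a 60Hz TV faces with
# a 24p source.

def frc_schedule(src_fps=24, dst_fps=60, n_out=6):
    """For each output frame, return the source frame it falls after
    and the blend weight toward the next source frame."""
    schedule = []
    for i in range(n_out):
        pos = i * src_fps / dst_fps       # position in source-frame units
        earlier = int(pos)                # source frame just before this instant
        weight = round(pos - earlier, 2)  # 0.0 = exactly on a source frame
        schedule.append((i, earlier, weight))
    return schedule

for out_frame, src_frame, weight in frc_schedule():
    print(out_frame, src_frame, weight)
```

With a 24-to-60 conversion, only output frames 0 and 5 land exactly on source frames (weight 0.0); the four frames in between must be synthesized, which is where interpolation artifacts originate.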

Intelligent FRC takes a more nuanced approach to motion smoothing by letting content creators dictate the level of motion smoothing used in each scene, Forbes reported. The feature is also designed to adjust the strength of motion interpolation based on ambient lighting.

Dolby Vision 2’s Authentic Motion

HDR10+ Advanced’s Intelligent FRC sounds awfully similar to the Authentic Motion feature that Dolby announced for its upcoming HDR standard, Dolby Vision 2, in September.

Dolby’s announcement described Authentic Motion as “the world’s first creative driven motion control tool to make scenes feel more authentically cinematic without unwanted judder on a shot-by-shot basis.” Authentic Motion will be available on TVs that adopt Dolby Vision 2’s most advanced tier, which is called Dolby Vision 2 Max, and will target high-end TVs.

TechRadar reported in September that Authentic Motion will have 10 levels of motion smoothing, citing a demo of the feature applied to a scene from the Amazon Prime Video series Paris Has Fallen, which was shot at 25p. In the demo, the video reportedly went from level 5 motion smoothing during a tracking shot to level 3 when “the camera switched to tilting down gently,” to level 1 “as the camera settled,” and then level 0 “when the still camera watched the woman talk.”
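If TechRadar’s description is accurate, the underlying mechanism is per-shot metadata: each shot carries a creator-chosen smoothing level that the TV reads instead of applying one global setting. A hypothetical sketch of that idea, where the field names and data layout are invented (Dolby has not published the actual format) and the levels come from the demo described above:

```python
# Hypothetical shape of per-shot motion metadata; names and structure
# here are invented, only the 0-10 level concept comes from reporting.

shots = [
    {"desc": "tracking shot",          "level": 5},
    {"desc": "gentle tilt down",       "level": 3},
    {"desc": "camera settling",        "level": 1},
    {"desc": "still camera, dialogue", "level": 0},
]

def smoothing_level(shot_index):
    """The TV would apply the authored level for the current shot
    rather than a single user-selected global setting."""
    return shots[shot_index]["level"]

print([smoothing_level(i) for i in range(len(shots))])  # → [5, 3, 1, 0]
```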

Will this work?

We don’t have sufficient information about either HDR standard to be convinced yet that the technologies will improve the appearance of videos using motion smoothing, especially to viewers who are already put off by motion smoothing.

Giving creators greater control over when exactly motion smoothing is implemented and how strong it is could mean that the soap opera effect isn’t applied to scenes unnecessarily. But neither standard has proven that motion smoothing will look natural when applied at different scales to specific shots.

Neither standard has mentioned addressing the visual artifacts associated with motion smoothing, such as halos. Artifacts appear when a TV struggles to determine what the frame in between two very different-looking frames should look like. Having creators set per-scene motion smoothing levels doesn’t obviously address that problem.

Samsung showed some publications a “simulation” of what it would like HDR10+ Advanced’s Intelligent FRC to look like, but a simulation is far different from the technology running in real time on a supported TV. If you’re curious, though, you can see images of the simulations from Forbes and Trusted Reviews.

Another question will be adoption and availability, not just by TV makers, but by creators. HDR10+ was announced in 2017 and is supported by 500 movies and 16 streaming services, per Forbes. Dolby Vision came out in 2014, and in 2020, Dolby said that 900 movie titles support the format [PDF]. We don’t know how much more of a burden mastering content for HDR10+ Advanced’s or Dolby Vision 2’s motion smoothing features could put on content creators compared to today’s HDR standards.

HDR10+ Advanced is supposed to debut on Samsung’s 2026 TVs and be supported by Prime Video. Dolby Vision 2 HDR doesn’t have a release date yet.

With many TVs having motion smoothing enabled by default, improvements to the technology’s performance could enhance the viewing experience for a large audience. But both upcoming HDR standards have a long way to go to make motion smoothing look natural and to win over some of the biggest names in cinema.


Flock haters cross political divides to remove error-prone cameras


Flock Safety—the surveillance company behind the country’s largest network of automated license plate readers (ALPRs)—currently faces attacks on multiple fronts seeking to tear down the invasive and error-prone cameras across the US.

This week, two lawmakers, Senator Ron Wyden (D-Ore.) and Representative Raja Krishnamoorthi (D-Ill.), called for a federal investigation, alleging that Flock has been “negligently handling Americans’ personal data” by failing to use cybersecurity best practices. The month prior, Wyden wrote a letter to Flock CEO Garrett Langley, alleging that Flock’s security failures mean that “abuse of Flock cameras is inevitable” and that they threaten to expose billions of people’s harvested data should a catastrophic breach occur.

“In my view, local elected officials can best protect their constituents from the inevitable abuses of Flock cameras by removing Flock from their communities,” Wyden wrote.

Several communities have already come to this conclusion, although their concerns go beyond fears of hackers or potential data breaches.

They’re also concerned that law enforcement will use the sweeping database for invasive tracking. For instance, Texas police scanned more than 80,000 ALPRs, allegedly to do a wellness check on a woman suspected of self-administering an abortion, 404 Media reported.

Immigration and Customs Enforcement (ICE) has also worked with local police to conduct “immigration”-related searches of Flock data, 404 Media reported. (Langley wrote in a blog that providing ICE access is a local decision, “not Flock’s decision.”)

Reaching across the political spectrum, people in seven states have won fights to remove Flock’s invasive cameras in their towns and cities, sharing templates for success that are inspiring even more opposition campaigns. These critics oppose Flock not only because cameras threaten to violate the privacy of anyone who drives past them but also because the cameras are error-prone and can lead to wrongful detentions, the Electronic Frontier Foundation (EFF) reported.

For years, the EFF has tracked cases where ALPRs misread license plates, with the software accidentally reading an “H” as an “M” or a “2” as a “7.” Other times, ALPRs confuse the state on the license plate, giving cops the completely wrong target. Several Americans have been accused of stealing cars because of these errors, some held at gunpoint and detained until the cops figured out the errors, the EFF reported.
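A toy example makes clear why a single-character misread is enough to produce a wrongful stop. This is not Flock’s actual matching logic; the plates and the hotlist below are hypothetical, and the substitution table is built only from the confusions EFF documented:

```python
# Hypothetical illustration of an ALPR OCR misread producing a false
# hotlist hit; substitutions mirror the "H"->"M" and "2"->"7" errors
# EFF has documented.

CONFUSIONS = {"H": "M", "2": "7"}

def worst_case_misread(plate):
    """Apply every documented confusion at once to simulate OCR error."""
    return "".join(CONFUSIONS.get(ch, ch) for ch in plate)

hotlist = {"M47 7XB"}           # invented stolen-vehicle plate
innocent_plate = "H42 2XB"      # invented innocent driver's plate

misread = worst_case_misread(innocent_plate)
print(misread)                  # → M47 7XB
print(misread in hotlist)       # → True: the innocent car matches the hotlist
```

Without an officer manually comparing the camera image to the database entry, that false match becomes an armed stop.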

And now, as Flock seeks to roll out a new product that would detect human threats by audio, another emerging threat is widening the range of possible errors: police mishandling correct data. As cops nationwide increasingly rely too heavily on ALPR camera feeds, disturbing cases suggest that departments big and small tend to skip the basic police work that could serve as a check on the tech and prevent baseless accusations.

Removing cameras may be easier than fighting Flock footage

A financial planner in her 40s living in a Denver neighborhood, Chrisanna Elser, learned the hard way that cops can’t always be trusted to vet Flock data.

Earlier this fall, a small-town cop with the Columbine Valley Police Department accused her of theft, apparently just because she happened to drive through his town at the time of a crime, Denverite reported.

Cop relying on Flock falsely accuses woman of package theft.

Back in September, Elser shared footage from her smart doorbell. It showed an off-putting confrontation with a cop, Sgt. Jamie Milliman, who patrols Bow Mar, a town outside of Denver with a population of under 900.

The officer was investigating the theft of a $25 package from a Bow Mar resident’s stoop and appeared completely confident that Flock footage he reviewed, which placed Elser’s car in town, was proof enough of her guilt.

In the video, Milliman warned Elser, “You know we have cameras in that jurisdiction and you can’t get a breath of fresh air, in or out of that place, without us knowing, correct?”

After Elser adamantly denied taking the package, Milliman then claimed that the theft victim provided footage that supposedly showed Elser in the act. However, he refused to show Elser that footage because she would not admit her guilt.

“If you’re going to deny it, I’m not going to give you any courtesy,” Milliman said. “If you’re going to lie to me, I’m not going to give you any courtesy.”

Given no choice but to accept a summons for a charge of petty theft, Elser told Denverite that she went “on a warpath” to prove her innocence. She retraced her steps through Bow Mar on the day of the theft, requesting surveillance images from the tailor she’d visited. She also gathered footage from her car and her husband’s car, which showed she never stopped in Bow Mar on either leg of the trip. At no point did her car get closer than a quarter-mile to the residence where the package was stolen, GPS data from her car and phone showed.

Even after she made this huge effort, though, nobody wanted to review the evidence, Elser told Denverite. The police department told her to expect a long wait to speak to anyone about her case, and local officials, including the mayor, declined to intervene. Seeking justice, she sent “a voluminous Google Drive folder” along with a seven-page affidavit to the Columbine Valley police chief, Bret Cottrell. In October, he dropped the charges, giving Elser kudos for doing her own police work, without offering any apologies.

“After reviewing the evidence you have provided (nicely done btw), we have voided the summons that was issued,” Cottrell told Elser, never mentioning the doorbell video Milliman claimed placed her at the scene.

Elser isn’t the only one who has been targeted by a cop seemingly just for being in the area of a crime.

The EFF documented a 2024 Detroit case where cops used ALPRs to search all Dodge Chargers in an area following a shooting. That broad search led cops to a car located two miles away. They ultimately detained the car’s owner, Isoke Robinson, and placed her 2-year-old son in their squad car. Robinson’s car was impounded for three weeks while cops failed to check obvious clues that should have ruled her out as a suspect—including that the shooter’s vehicle was missing a fog light, and Robinson’s was not, the EFF reported.

Cities installing cameras risk Flock lawsuits

Some wrongfully detained people have successfully sued after shocking ALPR arrests, earning awards as high as $1.9 million. Robinson got a $35,000 payout from Detroit for her traumatic experience, the Detroit Free Press reported.

Elser told Denverite that she’s considering suing and remains “unnerved by the prospect of being wrongfully accused on the basis of circumstantial evidence from a mass surveillance system.” As a financial planner, Elser feared an accusation of theft could have ruined her career, she noted.

Ashley White, a former public defender, told Denver news outlet 9News that likely “this small police department didn’t want to spend any resources or time looking into the actual facts of the case and just charged an offense and figured, ‘We’ll figure out what happens later.’ And that’s not how it’s supposed to work.”

Elser’s and Robinson’s cases, along with others where Flock data was used to accuse innocent people of crimes, should put communities on high alert about potential misuses of Flock cameras in their areas, White said.

“People should care because this could be you,” White said. “This is something that police agencies are now using to document and watch what you’re doing, where you’re going, without your consent.”

Haters cross political divides to fight Flock

Currently, Flock’s reach is broad, “providing services to 5,000 police departments, 1,000 businesses, and numerous homeowners associations across 49 states,” lawmakers noted. Additionally, in October, Flock partnered with Amazon, allowing police to request Ring camera footage and widening Flock’s lens further.

However, Flock’s reach notably doesn’t extend into certain cities and towns in Arizona, Colorado, New York, Oregon, Tennessee, Texas, and Virginia, following successful local bids to end Flock contracts. These local fights have only just started as groups learn from each other, Sarah Hamid, EFF’s director of strategic campaigns, told Ars.

“Several cities have active campaigns underway right now across the country—urban and rural, in blue states and red states,” Hamid said.

A Flock spokesperson told Ars that the growing effort to remove cameras “remains an extremely small percentage of communities that consider deploying Flock technology (low single digit percentages).” To keep Flock’s cameras on city streets, Flock attends “hundreds of local community meetings and City Council sessions each month, and the vast majority of those contracts are accepted,” Flock’s spokesperson said.

Hamid challenged Flock’s “characterization of camera removals as isolated incidents,” though, noting “that doesn’t reflect what we’re seeing.”

“The removals span multiple states and represent different organizing strategies—some community-led, some council-initiated, some driven by budget constraints,” Hamid said.

Most recently, city officials voted to remove Flock cameras this fall in Sedona, Arizona.

A 72-year-old retiree, Sandy Boyce, helped fuel the local movement there after learning that Sedona had “quietly” renewed its Flock contract, NBC News reported. She felt enraged as she imagined her tax dollars continuing to support a camera system tracking her movements without her consent, she told NBC News.

“I’d drive by them and flip them off and curse them,” Boyce told NBC News. “It was like we were building our own prisons, and we were paying for it.”

A conservative who voted for Donald Trump, Boyce became determined to take the cameras down. She soon realized that many people in her community who opposed Flock were liberal, people she “normally wouldn’t be having conversations with.” But it was worth being “open to having conversations with them,” she said, if it meant ending Flock’s invasive tracking.

Banding together, Boyce’s coalition against Flock won their fight, with Sedona’s City Council unanimously voting to end Flock’s contract in September.

A template to remove Flock in your town

After winning her fight, Boyce published a “template” that she believes can help people shut down Flock in their towns. In it, she suggested that a core group of three to four residents spearhead initiatives, starting with filing a public records request to determine whether a city or town has a Flock contract.

With the contract in hand, the group should next meet with a member of city council to schedule a community talk about Flock as an agenda item at an upcoming meeting.

“Once you have an agenda item, use all your resources to invite residents to attend and comment,” Boyce’s template said. That means reaching out to local press, as well as getting the word out by distributing postcards at local events or community recreation areas, like at farmers markets or on “pickleball courts.”

Postcards should be written to help residents understand the many reasons Flock cameras are concerning, Boyce’s website said. Recommended “talking points” seek to appeal to logical and conspiratorial-minded people while also disputing what Boyce described as the “exaggerated” safety impact that Flock touts when convincing cities to sign contracts.

If the contract isn’t canceled at the first scheduled meeting, “keep the pressure on,” Boyce’s site recommended. “They need to know the residents will not give up on this, that they are watching, and they will persist politely until the cameras are taken down,” her website said.

Hamid agreed that the “diversity of pathways to camera removal suggests the momentum isn’t dependent on any single narrative or tactic. And it’s change that [is] broad-based and across the political spectrum. This scale of location data surveillance is a threat to all of us, and more and more communities are recognizing that.”

She told Ars that EFF has learned that advocates can win when they target their town’s “specific procurement vulnerabilities—renewal dates, incomplete council votes, budget constraints.” Moving at these times can create “real openings for meaningful change at the municipal level” and give communities opportunities to ask “harder questions about Flock contracts.”

“The most effective campaigns combine three elements: clear technical documentation of Flock’s known failure modes and risks; concrete municipal procurement pathways that show how contracts can be terminated or not renewed; and broad coalition-building across impacted communities, civil rights organizations, and elected officials willing to act—united around the core demand that mass, profit-driven surveillance has no place in municipal governance,” Hamid said.

Flock accused of downplaying harms

It’s possible that a Virginia court could rule that Flock’s cameras violate the Fourth Amendment’s privacy protections, NBC News reported.

In that case, a retired veteran and co-plaintiff sued the City of Norfolk over Flock privacy concerns, filing a motion for summary judgment last month. In response, however, the city maintained that courts have agreed that ALPRs “are not a search,” citing precedent concluding that “a person travelling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”

But whether the court sides with residents won’t matter much to communities who already won peace of mind by getting cameras removed without legal interventions. Hamid told Ars that cases like the abortion tracking and ICE access to Flock “illustrate exactly what EFF has been warning about: Flock’s databases become tools for tracking protected activities and facilitating invasive federal enforcement actions.” Growing awareness of these harms, Hamid suggested, is “already shifting community perception.”

“When people understand that Flock data can be used to track them exercising reproductive freedom or to facilitate federal immigration enforcement, the calculus around ‘public safety’ technology changes entirely,” Hamid said. “The location surveillance itself becomes a risk to public safety.”

EFF is encouraged that lawmakers are pushing the FTC to probe Flock’s data handling, even if no investigation follows, Hamid said. “This represents a significant shift—federal policymakers are now directly confronting the fact that Flock’s infrastructure poses genuine risks to people’s privacy and safety,” she suggested.

Flock’s spokesperson told Ars that “Flock is relentlessly focused on data integrity and security” and “values the concerns raised” by lawmakers. The company defended its security practices but noted that it’s up to agencies to adhere to best practices that protect local travelers’ data.

Back in June, EFF accused Flock of blaming users, downplaying harms, and doubling down on “the very systems that enabled the violations in the first place” rather than taking steps to address citizens’ concerns. To have elected officials “publicly questioning Flock’s business practices,” Hamid suggested, “signals to local officials that these concerns are legitimate, and that contract rejections are defensible.”

More cameras could come down, Hamid said, as pressure intensifies and communities face risks posed by the cameras: that cops may take Flock shortcuts and that data breaches will reveal sensitive information, as well as other scary possibilities that come with increased access and ALPR errors. Opposition may ramp up as new kinds of errors emerge, with the latest threat being Flock’s plan to use microphones to detect gunshots in communities. That could be “a recipe for innocent people to get hurt,” EFF warned, if cops show up with guns drawn, expecting violence after cameras record common sounds like a car backfiring or kids setting off fireworks.

“Communities rejecting Flock aren’t choosing to be less safe; quite the opposite,” Hamid said. “They are coming to terms with how such a massive, sprawling, and ultimately ungovernable surveillance system puts themselves and their community members at risk.”


Secret Slide in Wellington, New Zealand


View from the bottom!

Located on Mount Victoria, one of the most scenic points in Wellington, Roseneath Park offers something for the whole family. In addition to its stunning lookout points and relaxing hikes, the Roseneath Park Playground is a perfect spot for kids (or adults) to play to their heart's content.

The playground’s most iconic feature is its colossal slide; riding it has become a Wellington rite of passage. The slide is famous for the speed riders can achieve, as well as for the view it offers at the top. Its open upper section lets visitors enjoy a view of Wellington city and the harbor on the way down, if you manage to catch a quick enough glimpse, that is!

Thrill seekers, be warned: the descent really is faster than you think. But if you're up for a ride, you won't be disappointed!


OpenAI signs massive AI compute deal with Amazon


On Monday, OpenAI announced it has signed a seven-year, $38 billion deal to buy cloud services from Amazon Web Services to power products like ChatGPT and Sora. It’s the company’s first big computing deal after a fundamental restructuring last week that gave OpenAI more operational and financial freedom from Microsoft.

The agreement gives OpenAI access to hundreds of thousands of Nvidia graphics processors to train and run its AI models. “Scaling frontier AI requires massive, reliable compute,” OpenAI CEO Sam Altman said in a statement. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”

OpenAI will reportedly use Amazon Web Services immediately, with all planned capacity set to come online by the end of 2026 and room to expand further in 2027 and beyond. Amazon plans to roll out hundreds of thousands of chips, including Nvidia’s GB200 and GB300 AI accelerators, in data clusters built to power ChatGPT’s responses, generate AI videos, and train OpenAI’s next wave of models.

Wall Street apparently liked the deal, because Amazon shares hit an all-time high on Monday morning. Meanwhile, shares for long-time OpenAI investor and partner Microsoft briefly dipped following the announcement.

Massive AI compute requirements

It’s no secret that running generative AI models for hundreds of millions of people currently requires a lot of computing power. Amid chip shortages over the past few years, finding sources of that computing muscle has been tricky. OpenAI is reportedly working on its own GPU hardware to help alleviate the strain.

But for now, the company needs to find new sources of Nvidia chips, which accelerate AI computations. Altman has previously said that the company plans to spend $1.4 trillion to develop 30 gigawatts of computing resources, roughly enough to power 25 million US homes, according to Reuters.

Altman has also said that eventually, he would like OpenAI to add 1 gigawatt of compute every week. That ambitious plan is complicated by the fact that one gigawatt of power is roughly equivalent to the output of one typical nuclear power plant, and Reuters reports that each gigawatt of compute build-out currently comes with a capital cost of over $40 billion.
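Those figures are easy to sanity-check. A quick back-of-the-envelope pass over the numbers cited above, using only Reuters’ per-gigawatt cost and homes-powered estimates:

```python
# Back-of-the-envelope check of the compute figures cited above.
gw_planned = 30                 # OpenAI's stated 30 GW target
cost_per_gw = 40e9              # Reuters: over $40 billion per gigawatt
homes_powered = 25_000_000      # Reuters: ~25 million US homes for 30 GW

build_cost = gw_planned * cost_per_gw           # lower bound on build-out cost
per_home_kw = gw_planned * 1e6 / homes_powered  # 1 GW = 1e6 kW

print(f"build-out: ${build_cost / 1e12:.1f} trillion")  # prints 1.2 trillion
print(f"average draw: {per_home_kw:.1f} kW per home")   # prints 1.2 kW
```

The roughly $1.2 trillion lower bound sits just under the $1.4 trillion plan Altman has described, so the Reuters per-gigawatt estimate is broadly consistent with OpenAI’s own number.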

These aspirational numbers are far beyond what long-time cloud partner Microsoft can provide, so OpenAI has been seeking further independence from its wealthy corporate benefactor. OpenAI’s restructuring last week moved the company further from its nonprofit roots and removed Microsoft’s right of first refusal to supply compute services in the new arrangement.

Even before last week’s restructuring deal with Microsoft, OpenAI had been forced to look elsewhere for computing power: The firm made a deal with Google in June to supply it with cloud services, and the company struck a deal in September with Oracle to buy $300 billion in computing power for about five years. But it’s worth noting that Microsoft’s compute power is still essential for the firm: Last week, OpenAI agreed to purchase $250 billion of Microsoft’s Azure services over time.

While these types of multi-billion-dollar deals seem to excite investors in the stock market, not everything is hunky dory in the world of AI at the moment. OpenAI’s annualized revenue run rate is expected to reach about $20 billion by year’s end, Reuters notes, while the company’s losses are also mounting. Surging valuations of AI companies, oddly circular investments, massive spending commitments (which total more than $1 trillion for OpenAI), and the potential that generative AI might not be as useful as promised have prompted ongoing speculation among critics and proponents alike that the AI boom is turning into a massive bubble.

Meanwhile, Reuters has reported that OpenAI is laying the groundwork for an initial public offering that could value the company at up to $1 trillion. Whether that prospective $1 trillion valuation makes sense for a company burning through cash faster than it can make it back is another matter entirely.


Trump on why he pardoned Binance CEO: “Are you ready? I don’t know who he is.”


President Trump says he still doesn’t know who Binance founder and former CEO Changpeng Zhao is, despite having pardoned Zhao last month.

CBS correspondent Norah O’Donnell asked Trump about the pardon in a 60 Minutes interview that aired yesterday, noting that Zhao pleaded guilty to violating anti-money laundering laws. “The government at the time said that C.Z. had caused ‘significant harm to US national security,’ essentially by allowing terrorist groups like Hamas to move millions of dollars around. Why did you pardon him?” O’Donnell asked.

“Okay, are you ready? I don’t know who he is. I know he got a four-month sentence or something like that. And I heard it was a Biden witch hunt,” answered Trump, who has criticized his predecessor for signing pardons with an autopen.

Zhao was charged with failing to maintain an adequate anti-money laundering program as required by the Bank Secrecy Act and pleaded guilty. He was sentenced to four months in prison in April 2024, and released in September 2024. The US government’s sentencing request asked for three years in prison.

Trump family wheeling and dealing

Trump pardoned Zhao on October 21 in a move that is likely to help Binance fully return to the US market. (Since 2019, Binance has operated a separate exchange for US customers.) Months before the pardon, the Trump family reportedly held talks with Binance about taking a financial stake in the crypto exchange’s US arm.

Binance facilitated a $2 billion purchase of the USD1 stablecoin offered by the Trump-backed World Liberty Financial and built the technology behind USD1, a Wall Street Journal report last week said. Trump sons Eric and Donald Jr. have played a leading role in making lucrative crypto deals for the Trump family business.

“My sons are involved in crypto much more than I—me,” Trump said on 60 Minutes. “I—I know very little about it, other than one thing. It’s a huge industry. And if we’re not gonna be the head of it, China, Japan, or someplace else is. So I am behind it 100 percent.”

Did Trump ever meet Zhao? Did he form his own opinion about Zhao’s conviction, or was he merely “told about it”? Trump doesn’t seem to know:

This man was treated really badly by the Biden administration. And he was given a jail term. He’s highly respected. He’s a very successful guy. They sent him to jail and they really set him up. That’s my opinion. I was told about it.

I said, “Eh, it may look bad if I do it. I have to do the right thing.” I don’t know the man at all. I don’t think I ever met him. Maybe I did. Or, you know, somebody shook my hand or something. But I don’t think I ever met him. I have no idea who he is. I was told that he was a victim, just like I was and just like many other people, of a vicious, horrible group of people in the Biden administration.

Trump: “A lot people say that he wasn’t guilty”

Pointing out that Trump’s pardon of Zhao came after Binance helped facilitate a $2 billion purchase of World Liberty’s stablecoin, O’Donnell asked Trump to address the appearance of a pay-to-play deal.

“Well, here’s the thing, I know nothing about it because I’m too busy doing the other… I can only tell you this. My sons are into it. I’m glad they are, because it’s probably a great industry, crypto. I think it’s good… I know nothing about the guy, other than I hear he was a victim of weaponization by government. When you say the government, you’re talking about the Biden government. It’s a corrupt government. Biden was the most corrupt president and he was the worst president we’ve ever had.”

Even though Zhao pleaded guilty, Trump said shortly after the pardon that he was told Zhao “wasn’t guilty of anything.” The statement came when CNN correspondent Kaitlin Collins asked Trump at a press conference why he pardoned Zhao and whether it had anything to do with the Trump family’s crypto business.

Trump answered, “I don’t know, he was recommended by a lot of people… are you talking about the crypto person? A lot people say that he wasn’t guilty of anything. He served four months in jail and they say that he was not guilty of anything.”

Trump told Collins, “you don’t [know] much about crypto, you know nothing about nothing, you fake news… I don’t know him, I don’t believe I’ve ever met him, but I’ve been told… he had a lot of support and they said that what he did is not even a crime, it wasn’t a crime, that he was persecuted by the Biden administration. And so I gave him a pardon at the request of a lot of very good people.”

Zhao is no longer CEO of Binance but maintains a controlling stake in the company and has an estimated net worth of $52.6 billion. After being pardoned, Zhao said in an X post that Binance “will do everything we can to help make America the Capital of Crypto and advance web3 worldwide.”

Planning its return to the US, Binance “is considering a range of options including consolidating Binance.US into its global operation or having its global exchange enter the US market,” Bloomberg reported.

Real humans don’t stream Drake songs 23 hours a day, rapper suing Spotify says

Spotify profits off fake Drake streams that rob other artists of perhaps hundreds of millions in revenue shares, a lawsuit filed Sunday alleged—hoping to force Spotify to reimburse every artist impacted.

The lawsuit was filed by an American rapper known as RBX, who may be best known for cameos on two of the 1990s’ biggest hip-hop records, Dr. Dre’s The Chronic and Snoop Dogg’s Doggystyle.

The problem goes beyond Drake, RBX’s lawsuit alleged. It claims Spotify ignores “billions of fraudulent streams” each month, selfishly benefiting from bot networks that artificially inflate user numbers to help Spotify attract significantly higher ad revenue.

Drake’s account is a prime example of the kinds of fake streams Spotify is inclined to overlook, RBX alleged, since Drake is “the most streamed artist of all time on the platform,” in September becoming “the first artist to nominally achieve 120 billion total streams.” As Drake hit this milestone, the platform chose to ignore a “substantial” amount of inauthentic activity that contributed to about 37 billion streams between January 2022 and September 2025, the lawsuit alleged.

This activity, RBX alleged, “appeared to be the work of a sprawling network of Bot Accounts” that Spotify reasonably should have detected.

Apparently, RBX noticed that while most artists see an “initial spike” in streams when a song or album is released, followed by a predictable drop-off as more time passes, the listening patterns of Drake’s fans weren’t as predictable. After releases, some of Drake’s music would see “significant and irregular uptick months” not just in the ensuing months but for years afterward, allegedly “with no reasonable explanations for those upticks other than streaming fraud.”

Most suspiciously, individual accounts would sometimes listen to Drake “exclusively” for “23 hours a day”—which seems like the sort of “staggering and irregular” streaming that Spotify should flag, the lawsuit alleged.
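The kind of screen the complaint implies is simple to sketch. The code below is a toy illustration (an assumption, not any platform’s actual detector): sum each account’s listening time per day and flag totals no human listener could plausibly produce.

```python
# Toy screen for the "23 hours a day" pattern the complaint cites:
# total each account's daily listening and flag inhuman totals.
# The 20-hour threshold and all data are invented for illustration.
from collections import defaultdict

def flag_inhuman_days(plays, max_hours=20):
    """plays: iterable of (account_id, day, seconds_streamed).
    Returns the set of (account_id, day) pairs whose total
    listening time exceeds max_hours."""
    totals = defaultdict(float)
    for account, day, seconds in plays:
        totals[(account, day)] += seconds / 3600
    return {key for key, hours in totals.items() if hours > max_hours}

plays = [
    ("bot_account", "2024-05-01", 23 * 3600),  # 23 hours in one day
    ("human_fan", "2024-05-01", 2 * 3600),     # 2 hours in one day
]
print(flag_inhuman_days(plays))  # {('bot_account', '2024-05-01')}
```

A real system would of course need to handle overlapping plays, time zones, and shared accounts before treating such a flag as fraud.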

It’s unclear how RBX’s legal team conducted this analysis. At this stage, they’ve told the court that claims are based on “information and belief” that discovery will reveal “there is voluminous information” to back up the rapper’s arguments.

Fake Drake streams may have robbed artists of millions

Spotify artists are supposed to get paid based on valid streams that represent their rightful portion of revenue pools. If RBX’s claims are true, based on the allegedly fake boosting of Drake’s streams alone, losses to all other artists in the revenue pool are “estimated to be in the hundreds of millions of dollars,” the complaint said. Actual damages, including punitive damages, are to be determined at trial, the lawsuit noted, and are likely much higher.

“Drake’s music streams are but one notable example of the rampant streaming fraud that Spotify has allowed to occur, across myriad artists, through negligence and/or willful blindness,” the lawsuit alleged.

If class-action status is granted, the class would cover more than 100,000 rights holders who collected royalties from music hosted on the platform from “January 1, 2018, through the present.” That class could be expanded, the lawsuit noted, depending on how discovery goes. Since Spotify allegedly “concealed” the fake streams, there can be no time limitations for how far the claims could go back, the lawsuit argued. Attorney Mark Pifko of Baron & Budd, who is representing RBX, suggested in a statement provided to Ars that even one bad actor on Spotify cheats countless artists out of rightful earnings.

“Given the way Spotify pays royalty holders, allocating a limited pool of money based on each song’s proportional share of streams for a particular period, if someone cheats the system, fraudulently inflating their streams, it takes from everyone else,” Pifko said. “Not everyone who makes a living in the music business is a household name like Taylor Swift—there are thousands of songwriters, performers, and producers who earn revenue from music streaming who you’ve never heard of. These people are the backbone of the music business and this case is about them.”
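The zero-sum arithmetic Pifko describes can be made concrete with a small sketch. All numbers below are invented for illustration; the pro-rata split is the publicly described model, not Spotify’s exact formula.

```python
# Pro-rata royalty pooling: each rights holder is paid their share of
# total counted streams from a fixed revenue pool, so inflating one
# artist's count directly dilutes every other artist's payout.

def pro_rata_payouts(pool_dollars, streams_by_artist):
    """Split a fixed revenue pool in proportion to stream counts."""
    total = sum(streams_by_artist.values())
    return {artist: pool_dollars * n / total
            for artist, n in streams_by_artist.items()}

pool = 1_000_000  # hypothetical monthly pool, in dollars
honest = {"artist_a": 600_000, "artist_b": 400_000}
# Bots add 1,000,000 fake streams to artist_a's count:
inflated = {"artist_a": 1_600_000, "artist_b": 400_000}

before = pro_rata_payouts(pool, honest)
after = pro_rata_payouts(pool, inflated)
print(before["artist_b"], after["artist_b"])  # 400000.0 200000.0
```

Here artist_b’s payout is halved even though their real listening never changed, which is the dilution the complaint says scales to “hundreds of millions of dollars” across the whole pool.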

Spotify did not immediately respond to Ars’ request for comment. However, a spokesperson told Rolling Stone that while the platform cannot comment on pending litigation, Spotify denies allegations that it profits from fake streams.

“Spotify in no way benefits from the industry-wide challenge of artificial streaming,” Spotify’s spokesperson said. “We heavily invest in always-improving, best-in-class systems to combat it and safeguard artist payouts with strong protections like removing fake streams, withholding royalties, and charging penalties.”

Fake fans appear to move hundreds of miles between plays

Spotify has publicly discussed ramping up efforts to detect and penalize streaming fraud. But RBX alleged that instead, Spotify “deliberately” “deploys insufficient measures to address fraudulent streaming,” allowing fraud to run “rampant.”

The platform appears least capable of handling so-called “Bot Vendors” that “typically design Bots to mimic human behavior and resemble real social media or streaming accounts in order to avoid detection,” the lawsuit alleged.

These vendors rely on virtual private networks (VPNs) to obscure locations of streams, but “with reasonable diligence,” Spotify could better detect them, RBX alleged—especially when streams are coming “from areas that lack the population to support a high volume of streams.”

For example, RBX again pointed to Drake’s streams. During a four-day period in 2024, “at least 250,000 streams of Drake’s song ‘No Face’ originated in Turkey but were falsely geomapped through the coordinated use of VPNs to the United Kingdom,” the lawsuit alleged, based on “information and belief.”

Additionally, “a large percentage of the accounts streaming Drake’s music were geographically concentrated around areas whose populations could not support the volume of streams emanating therefrom. In some cases, massive amounts of music streams, more than a hundred million streams, originated in areas with zero residential addresses,” the lawsuit alleged.

Just looking at how Drake’s fans move should raise a red flag, RBX alleged:

“Geohash data shows that nearly 10 percent of Drake’s streams come from users whose location data showed that they traveled a minimum of 15,000 kilometers in a month, moved unreasonable locations between songs (consecutive plays separated by mere seconds but spanning thousands of kilometers), including more than 500 kilometers between songs (roughly the distance from New York City to Pittsburgh).”
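The “impossible travel” check described in that passage is a standard fraud heuristic, and a rough version is easy to sketch. The code below is an assumption-laden illustration, not Spotify’s actual system: it computes great-circle distance between consecutive plays and flags any pair implying faster-than-airliner movement.

```python
# Flag consecutive plays whose implied travel speed between
# geo-coordinates is physically implausible (here, > 900 km/h,
# roughly airliner cruising speed -- an invented threshold).
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # Earth mean radius ~6371 km

def impossible_travel(plays, max_kmh=900):
    """plays: [(timestamp_seconds, lat, lon)], sorted by time.
    Returns indices of plays implying implausibly fast movement."""
    flagged = []
    for i in range(1, len(plays)):
        t0, la0, lo0 = plays[i - 1]
        t1, la1, lo1 = plays[i]
        hours = max((t1 - t0) / 3600, 1e-9)  # avoid division by zero
        if haversine_km(la0, lo0, la1, lo1) / hours > max_kmh:
            flagged.append(i)
    return flagged

# Two plays 30 seconds apart: New York City, then Pittsburgh (~500 km).
plays = [(0, 40.71, -74.01), (30, 40.44, -79.99)]
print(impossible_travel(plays))  # [1] -- the second play is flagged
```

The same two cities six hours apart would produce no flag, since ~85 km/h is ordinary ground travel; the complaint’s point is that seconds-apart plays at that distance cannot be one human listener.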

Spotify could cut off a lot of this activity, RBX alleged, by ending its practice of allowing free ad-supported accounts to sign up without a credit card. But supposedly it doesn’t, because “Spotify has an incentive for turning a blind eye to the blatant streaming fraud occurring on its service,” the lawsuit said.

Spotify has admitted fake streams impact revenue

RBX’s lawsuit pointed out that Spotify has told investors that, despite its best efforts, artificial streams “may contribute, from time to time, to an overstatement” in the number of reported monthly average users—a stat that helps drive ad revenue.

Spotify also somewhat tacitly acknowledges fears that the platform may be financially motivated to overlook when big artists pay for fake streams. In an FAQ, Spotify confirmed that “artificial streaming is something we take seriously at every level,” promising to withhold royalties, correct public streaming numbers, and take other steps, like possibly even removing tracks, no matter how big the artist is. Artists’ labels and distributors can also get hit with penalties if fake streams are detected, Spotify said. Spotify has defended its prevention methods as better than its rivals’ efforts.

“Our systems are working: In a case from last year, one bad actor was indicted for stealing $10 million from streaming services, only $60,000 of which came from Spotify, proving how effective we are at limiting the impact of artificial streaming on our platform,” Spotify’s spokesperson told Rolling Stone.

However, RBX alleged that Spotify is actually “one of the easiest platforms to defraud using Bots due to its negligent, lax, and/or non-existent Bot-related security measures.” And supposedly that’s by design, since “the higher the volume of individual streams, the more Spotify could charge for ads,” RBX alleged.

“By properly detecting and/or removing fraudulent streams from its service, Spotify would lose significant advertising revenue,” the theory goes, with RBX directly accusing Spotify of concealing “both the enormity of this problem, and its detrimental financial impact to legitimate Rights Holders.”

For RBX to succeed, it will likely matter what evidence was used to analyze Drake’s streaming numbers. Last month, a lawsuit that Drake filed was dismissed, ultimately failing to convince a judge that Kendrick Lamar’s record label artificially inflated Spotify streams of “Not Like Us.” Drake’s failure to show any evidence beyond some online comments and reports (which suggested that the label was at least aware that Lamar’s manager supposedly paid a bot network to “jumpstart” the song’s streams) was deemed insufficient to keep the case alive.

Industry group slowly preparing to fight streaming fraud

A loss could tarnish Spotify’s public image after the platform joined an industry coalition formed in 2023 to fight streaming fraud, the Music Fights Fraud Alliance (MFFA). This coalition is often cited as a major step that Spotify and the rest of the industry are taking; however, the group’s website does not indicate the progress made in the years since.

As of this writing, the website showed that task forces were formed, as well as a partnership with a nonprofit called the National Cyber-Forensics and Training Alliance, with a goal to “work closely together to identify and disrupt streaming fraud.” The partnership was also supposed to produce “intelligence reports and other actionable information in support of fraud prevention and mitigation.”

Ars reached out to MFFA to see if there are any updates to share on the group’s work over the past two years. MFFA’s executive director, Michael Lewan, told Ars that “admittedly MFFA is still relatively nascent and growing,” “not even formally incorporated until” he joined in February of this year.

“We have accomplished a lot, and are going to continue to grow as the industry is taking fraud seriously,” Lewan said.

Lewan can’t “shed too many details on our initiatives,” he said, suggesting that MFFA is “a bit different from other trade orgs that are much more public facing.” However, several initiatives have been launched, he confirmed, which will help “improve coordination and communication amongst member companies”—which include streamers like Spotify and Amazon, as well as distributors like CD Baby and social platforms like SoundCloud and Meta apps—“to identify and disrupt suspicious activity, including sharing of data.”

“We also have efforts to raise awareness on what fraud looks like and how to mitigate against fraudulent activity,” Lewan said. “And we’re in continuous communication with other partners (in and outside the industry) on data standards, artist education, enforcement and deterrence.”
