
How AI coding agents work—and what to remember if you use them


AI coding agents from OpenAI, Anthropic, and Google can now work on software projects for hours at a time, writing complete apps, running tests, and fixing bugs with human supervision. But these tools are not magic and can complicate rather than simplify a software project. Understanding how they work under the hood can help developers know when (and if) to use them, while avoiding common pitfalls.

We'll start with the basics: At the core of every AI coding agent is a technology called a large language model (LLM), which is a type of neural network trained on vast amounts of text data, including lots of programming code. It's a pattern-matching machine: given a prompt, it draws on compressed statistical representations of the data it saw during training and produces a plausible continuation of that pattern as output. In doing so, an LLM can interpolate across domains and concepts, which yields useful logical inferences when it goes well and confabulation errors when it goes poorly.
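
To make that concrete, here's a toy illustration of "plausible continuation": a bigram model that counts which word follows which in a scrap of training text, then extends a prompt with the statistically most likely next words. Real LLMs use neural networks over subword tokens rather than word counts, but the basic idea of continuing a statistical pattern is similar.

```python
# Toy "plausible continuation": count which word follows which in some
# training text, then extend a prompt with the most frequent next word.
from collections import Counter, defaultdict

training_text = (
    "the function returns a value and the function raises an error "
    "if the value is missing the function logs the error"
)

# Count word -> next-word frequencies seen "during training."
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def continue_prompt(prompt: str, length: int = 5) -> str:
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])  # most frequent continuation
    return " ".join(out)

print(continue_prompt("the function"))  # -> "the function returns a value and"
```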

These base models are then further refined through techniques like fine-tuning on curated examples and reinforcement learning from human feedback (RLHF), which shape the model to follow instructions, use tools, and produce more useful outputs.

A screenshot of the Claude Code command-line interface. Credit: Anthropic

Over the past few years, AI researchers have been probing LLMs' deficiencies and finding ways to work around them. One recent innovation was the simulated reasoning model, which generates context (extending the prompt) in the form of reasoning-style text that can help an LLM home in on a more accurate output. Another innovation was an application called an "agent" that links several LLMs together to perform tasks simultaneously and evaluate outputs.

How coding agents are structured

In that sense, each AI coding agent is a program wrapper that works with multiple LLMs. There is typically a "supervising" LLM that interprets tasks (prompts) from the human user and then assigns those tasks to parallel LLMs that can use software tools to execute the instructions. The supervising agent can interrupt tasks below it and evaluate the subtask results to see how a project is going. Anthropic's engineering documentation describes this pattern as "gather context, take action, verify work, repeat."
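
A minimal sketch of that loop might look like the following, with a scripted stand-in for the supervising LLM and stub tools; this is an illustration of the pattern, not any vendor's actual API.

```python
# Sketch of the "gather context, take action, verify work, repeat" loop.
# SCRIPT stands in for repeated supervising-LLM calls: it asks to read a
# file, then to run tests, then declares the task done.
SCRIPT = [
    {"tool": "read_file", "args": "app.py"},
    {"tool": "run_tests", "args": None},
    {"answer": "Bug fixed; tests pass."},
]

TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",  # stub tool
    "run_tests": lambda _: "2 passed, 0 failed",        # stub verifier
}

def agent_loop(task: str) -> str:
    context = [f"user task: {task}"]                    # gather context
    for step in SCRIPT:                                 # one step per "LLM call"
        if "tool" in step:
            result = TOOLS[step["tool"]](step["args"])  # take action
            context.append(f"{step['tool']} -> {result}")  # verify: result feeds back in
        else:
            return step["answer"]
    return "step limit reached"

print(agent_loop("fix the failing test"))
```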

When an agent runs locally through a command-line interface (CLI), the user gives it conditional permission to write files on the local machine (code or whatever else is needed), run exploratory commands (say, "ls" to list files in a directory), fetch websites (usually using "curl"), download software, or upload files to remote servers. There are lots of possibilities (and potential dangers) with this approach, so it needs to be used carefully.
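
In rough terms, that conditional-permission layer can be as simple as an allowlist plus a confirmation prompt. This is a sketch under assumptions, with a hypothetical set of pre-approved commands:

```python
# Permission gate for a local CLI agent: pre-approved commands run
# directly; anything else requires explicit user confirmation.
import shlex
import subprocess

PREAPPROVED = {"ls", "head", "tail", "curl"}  # hypothetical user-approved set

def run_agent_command(command: str) -> str:
    program = shlex.split(command)[0]
    if program not in PREAPPROVED:
        if input(f"Agent wants to run '{command}'. Allow? [y/N] ").lower() != "y":
            return "denied by user"
    result = subprocess.run(shlex.split(command), capture_output=True, text=True)
    return result.stdout or result.stderr

print(run_agent_command("ls"))
```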

In contrast, when a user starts a task in a web-based agent like the web versions of Codex and Claude Code, the system provisions a sandboxed cloud container preloaded with the user's code repository, where the agent can read and edit files, run commands (including test harnesses and linters), and execute code in isolation. Anthropic's Claude Code uses operating system-level features to create filesystem and network boundaries within which the agent can work more freely.

The context problem

Every LLM has a short-term memory, so to speak, that limits the amount of data it can process before it "forgets" what it's doing. This is called "context." Every time you submit a response to the supervising agent, you are amending one gigantic prompt that includes the entire history of the conversation so far (and all the code generated, plus the simulated reasoning tokens the model uses to "think" more about a problem). The AI model then evaluates this prompt and produces an output. It's a very computationally expensive process whose cost increases quadratically with prompt size because LLMs process every token (chunk of data) against every other token in the prompt.
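
A back-of-the-envelope calculation shows why this matters. Assuming each exchange adds roughly 2,000 tokens (a made-up round number), the number of token-pair comparisons grows with the square of the accumulated history:

```python
# Why long conversations get expensive: each turn resubmits the entire
# history, and attention compares every token with every other token,
# so per-turn work grows roughly quadratically with prompt length.
history_tokens = 0
for turn in range(1, 6):
    history_tokens += 2_000                  # each exchange adds ~2k tokens (assumed)
    attention_pairs = history_tokens ** 2    # every token attends to every other
    print(f"turn {turn}: {history_tokens:>6} tokens, "
          f"{attention_pairs:>13,} token-pair comparisons")
```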

Anthropic's engineering team describes context as a finite resource with diminishing returns. Studies have revealed what researchers call "context rot": As the number of tokens in the context window increases, the model's ability to accurately recall information decreases. Every new token depletes what the documentation calls an "attention budget."

This context limit naturally caps the size of a codebase an LLM can process at one time, and if you feed the AI model lots of huge code files (which have to be re-evaluated by the LLM every time you send another response), it can burn through token or usage limits pretty quickly.

Tricks of the trade

To get around these limits, the creators of coding agents use several tricks. For example, AI models are fine-tuned to outsource work to other software tools by writing code. They might write Python scripts to extract data from images or files rather than feeding the whole file through an LLM, which saves tokens and avoids inaccurate results.
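
The kind of throwaway script an agent might write looks something like this: instead of pasting a large CSV into its own context, it computes a small summary and hands only that back to the model. The file name and columns here are hypothetical.

```python
# Summarize a large CSV down to a few numbers so only this small dict
# re-enters the LLM's context, not the whole file.
import csv

def summarize(path: str) -> dict:
    total, rows = 0.0, 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += float(row["amount"])  # hypothetical column
            rows += 1
    return {"rows": rows, "total": total, "mean": total / rows if rows else 0.0}

print(summarize("sales.csv"))  # hypothetical file
```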

Anthropic's documentation notes that Claude Code also uses this approach to perform complex data analysis over large databases, writing targeted queries and using Bash commands like "head" and "tail" to analyze large volumes of data without ever loading the full data objects into context.
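A sketch of that sampling approach, wrapping the same "head" and "tail" commands from Python (the log file name is illustrative):

```python
# Peek at the start and end of a large file so the agent sees a sample
# of the data, never the whole thing.
import subprocess

def peek(path: str, n: int = 5) -> str:
    head = subprocess.run(["head", f"-{n}", path], capture_output=True, text=True).stdout
    tail = subprocess.run(["tail", f"-{n}", path], capture_output=True, text=True).stdout
    return f"First {n} lines:\n{head}\nLast {n} lines:\n{tail}"

print(peek("server.log"))  # illustrative file name
```
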

(In a way, these AI agents are guided but semi-autonomous tool-using programs that are a major extension of a concept we first saw in early 2023.)

Another major breakthrough in agents came from dynamic context management. Agents can do this in a few ways that are not fully disclosed in proprietary coding models, but we do know the most important technique they use: context compression.

The command-line version of OpenAI Codex running in a macOS terminal window. Credit: Benj Edwards

When a coding LLM nears its context limit, this technique compresses the context history by summarizing it, losing some information in the process but condensing the history to its key details. Anthropic's documentation describes this "compaction" as distilling context contents in a high-fidelity manner, preserving key details like architectural decisions and unresolved bugs while discarding redundant tool outputs.
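
A simplified sketch of compaction: when the running history nears an assumed token budget, older messages are replaced with a summary while recent turns are kept verbatim. The summarizer here is a stub standing in for a real LLM call.

```python
# Compact the history once it nears an assumed token budget.
CONTEXT_LIMIT = 10_000   # assumed token budget
KEEP_RECENT = 4          # recent messages kept verbatim

def count_tokens(text: str) -> int:
    return len(text.split())  # crude proxy for a real tokenizer

def summarize_with_llm(messages: list[str]) -> str:
    # Stand-in: a real system would prompt the model to preserve key
    # details (architectural decisions, unresolved bugs) and drop
    # redundant tool output.
    return f"[summary of {len(messages)} earlier messages]"

def maybe_compact(history: list[str]) -> list[str]:
    if sum(count_tokens(m) for m in history) < CONTEXT_LIMIT:
        return history
    old, recent = history[:-KEEP_RECENT], history[-KEEP_RECENT:]
    return [summarize_with_llm(old)] + recent

history = [f"message {i} " * 500 for i in range(10)]  # ~10k tokens total
print(len(maybe_compact(history)))  # -> 5 (one summary + 4 recent turns)
```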

This means AI coding agents "forget" a large portion of what they are doing each time this compression happens, but unlike older LLM-based systems, they aren't completely clueless about what has transpired and can rapidly re-orient themselves by reading existing code, written notes left in files, change logs, and so on.

Anthropic's documentation recommends using CLAUDE.md files to document common bash commands, core files, utility functions, code style guidelines, and testing instructions. AGENTS.md, now a multi-company standard, is another useful way of guiding agent actions in between context refreshes. These files act as external notes that let agents track progress across complex tasks while maintaining critical context that would otherwise be lost.
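
As a hypothetical example of what such a file might contain (the commands, paths, and project details below are invented for illustration):

```
# CLAUDE.md (hypothetical example)

## Common commands
- `make test` runs the unit tests; run it before every commit.

## Core files
- `src/app.py` is the entry point; `src/db.py` owns all SQL.

## Style
- Four-space indents; type hints on public functions.

## Current state
- Auth refactor in progress; see TODO comments in src/auth.py.
```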

For tasks requiring extended work, both companies employ multi-agent architectures. According to Anthropic's research documentation, its system uses an "orchestrator-worker pattern" in which a lead agent coordinates the process while delegating to specialized subagents that operate in parallel. When a user submits a query, the lead agent analyzes it, develops a strategy, and spawns subagents to explore different aspects simultaneously. The subagents act as intelligent filters, returning only relevant information rather than their full context to the lead agent.
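
A rough sketch of that orchestrator-worker pattern, with stub workers standing in for real subagent LLM calls:

```python
# Lead agent splits a query into subtasks, workers run in parallel, and
# each returns only a short filtered finding, not its full context.
from concurrent.futures import ThreadPoolExecutor

def worker(subtask: str) -> str:
    # A real subagent would explore files or docs here, then act as an
    # "intelligent filter," returning only what the lead agent needs.
    return f"key finding for: {subtask}"

def orchestrate(query: str) -> str:
    subtasks = [f"{query} (aspect {i})" for i in range(3)]  # lead agent's plan
    with ThreadPoolExecutor() as pool:
        findings = list(pool.map(worker, subtasks))         # parallel subagents
    return "\n".join(findings)  # lead agent synthesizes the filtered results

print(orchestrate("audit error handling"))
```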

The multi-agent approach burns through tokens rapidly. Anthropic's documentation notes that agents typically use about four times more tokens than chatbot interactions, and multi-agent systems use about 15 times more tokens than chats. For economic viability, these systems require tasks where the value is high enough to justify the increased cost.

Best practices for humans

Using these agents remains contentious in some programming circles, but if you use one to code a project, knowing good software development practices helps head off future problems. For example, it's good to know about version control, making incremental backups, implementing one feature at a time, and testing it before moving on.

What people call "vibe coding"—creating AI-generated code without understanding what it's doing—is clearly dangerous for production work. Shipping code you didn't write yourself in a production environment is risky because it could introduce security issues or other bugs, or accumulate technical debt that snowballs over time.

Independent AI researcher Simon Willison recently argued that developers using coding agents still bear responsibility for proving their code works. "Almost anyone can prompt an LLM to generate a thousand-line patch and submit it for code review," Willison wrote. "That's no longer valuable. What's valuable is contributing code that is proven to work."

In fact, human planning is key. Claude Code's best practices documentation recommends a specific workflow for complex problems: First, ask the agent to read relevant files and explicitly tell it not to write any code yet, then ask it to make a plan. Without these research and planning steps, the documentation warns, Claude's outputs tend to jump straight to coding a solution.

Without planning, LLMs sometimes reach for quick fixes that satisfy the immediate objective but break later as a project grows. Having some idea of what makes a good architecture for a modular, extensible program can help you guide the LLM to craft something more durable.

As mentioned above, these agents aren't perfect, and some people prefer not to use them at all. A randomized controlled trial published by the nonprofit research organization METR in July 2025 found that experienced open-source developers actually took 19 percent longer to complete tasks when using AI tools, despite believing they were working faster. The study's authors note several caveats: The developers were highly experienced with their codebases (averaging five years and 1,500 commits), the repositories were large and mature, and the models used (primarily Claude 3.5 and 3.7 Sonnet via Cursor) have since been superseded by more capable versions.

Whether newer models would produce different results remains an open question, but the study suggests that AI coding tools may not provide universal speed-ups, particularly for developers who already know their codebases well.

Given these potential hazards, coding proof-of-concept demos and internal tools is probably the ideal use of coding agents right now. Since AI models have no actual agency (despite being called agents) and are not people who can be held accountable for mistakes, human oversight is key.


“Yo what?” LimeWire re-emerges in online rush to share pulled “60 Minutes” segment


CBS cannot contain the online spread of a "60 Minutes" segment that its editor-in-chief, Bari Weiss, tried to block from airing.

The episode, "Inside CECOT," featured testimonies from US deportees who were tortured or suffered physical or sexual abuse at a notorious Salvadoran prison, the Center for the Confinement of Terrorism. "Welcome to hell," one former inmate was told upon arriving, the segment reported, while also highlighting a clip of Donald Trump praising CECOT and its leadership for “great facilities, very strong facilities, and they don’t play games."

Weiss controversially pulled the segment on Monday, claiming it could not air in the US because it lacked critical voices, as no Trump officials were interviewed. She claimed that the segment "did not advance the ball" and merely echoed others' reporting, NBC News reported. Her plan was to air the segment when it was "ready," insisting that holding stories "for whatever reason" happens "every day in every newsroom."

But Weiss apparently did not realize that the "Inside CECOT" segment would still stream in Canada, giving the public a chance to view it as reporters had intended.

Critics accusing CBS of censoring the story quickly shared the segment online Monday after discovering that it was available on the Global TV app. Using a VPN to connect to the app with a Canadian IP address was all it took to override Weiss' block in the US, and 404 Media reported the segment was uploaded "to a variety of file sharing sites and services, including iCloud, Mega, and as a torrent," including on the recently revived file-sharing service LimeWire. It's currently also available to stream on the Internet Archive, where one reviewer largely summed up the public's response so far, writing, "cannot believe this was pulled, not a dang thing wrong with this segment except it shows truth."

CBS did not immediately respond to Ars' request to comment. The network faces criticism from both outside and within its studios, as reporters and CBS viewers question the integrity of Weiss' decision now that the segment has aired. Weiss was recently appointed CBS editor-in-chief, and her prior experience as a contrarian opinion writer helming her own right-leaning platform, The Free Press, prompted early concerns that she might water down CBS's critical coverage of the Trump administration. The seeming censorship of the "60 Minutes" episode was perceived by some as a canary in the coal mine, confirming critics' fears.

CBS correspondent Sharyn Alfonsi, who anchored the segment, noted that the Trump administration had repeatedly declined to comment as the story came together. By delaying the segment solely because of Trump officials' silence, Weiss appeared to be giving the Trump administration a "kill switch" to block any story they don't want aired, Alfonsi suggested.

"Our story was screened five times and cleared by both CBS attorneys and Standards and Practices," Alfonsi wrote in a note to CBS colleagues that was widely shared online. “It is factually correct. In my view, pulling it now, after every rigorous internal check has been met, is not an editorial decision, it is a political one.”

Tim Richardson, journalism and disinformation program director at PEN America, told NBC News that Weiss risked damaging CBS's credibility by making a seemingly hasty decision to postpone a report that may have upset the Trump administration.

"CBS journalists, among the best in this country, appropriately made an outreach effort to get the government to weigh in on a deeply reported story out of El Salvador," Richardson said. "Pulling it back at the last minute because the government chose not to respond is an insult not only to the integrity of the journalists but to core principles of independent news gathering."

Early 2000s tool LimeWire used to pirate episode

As Americans scrambled to share the "Inside CECOT" story, assuming that CBS would be working in the background to pull down uploads, a once-blacklisted tool from the early 2000s became a reliable way to keep the broadcast online.

On Reddit, users shared links to a LimeWire torrent, prompting chuckles from people surprised to see the peer-to-peer service best known for infecting parents' computers with viruses in the 2000s suddenly revived in 2025 to skirt feared US government censorship.

"Yo what," one user joked, highlighting only the word "LimeWire." Another user, ironically using the LimeWire logo as a profile picture, responded, "man, who knew my nostalgia prof pic would become relevant again, WTF."

LimeWire was created in 2000 and quickly became one of the Internet's favorite services for pirating music until record labels won a 2010 injunction that blocked all file-sharing functionality. As the Reddit thread noted, some LimeWire users were personally targeted in lawsuits.

For a while after the injunction, a fraction of users kept the service alive by running older versions of the software that weren't immediately disabled. New owners took over LimeWire in 2022, officially relaunching the service. The service's about page currently notes that "millions of individuals and businesses" use the global file-sharing service today, but for some early Internet users, the name remains a blast from the past.

"Bringing back LimeWire to illegally rip copies of reporting suppressed by the government is definitely some cyberpunk shit," a Bluesky user wrote.

"We need a champion against the darkness," a Reddit commenter echoed. "I side with LimeWire."


Odyssey trailer brings the myth to vivid life


Director Christopher Nolan won two well-deserved Oscars for 2023's Oppenheimer, and Hollywood was soon buzzing about what his next project might be. A vampire period piece, perhaps? Or maybe a reboot of 1983's Blue Thunder or British 1960s spy series The Prisoner? Instead, Nolan chose to adapt one of the greatest epic sagas in history: Homer's Odyssey. At long last, Universal has released the first official trailer for Nolan's The Odyssey, starring Matt Damon as the wandering Ithacan king. Frankly, it looks appropriately epic.

Most of us read some version of The Odyssey in high school, so we're familiar with the story: Odysseus, legendary Greek king of Ithaca, begins the long journey home after 10 years of fighting in the Trojan War. (We actually catch a glimpse of the famous Trojan horse in the trailer.) But the journey does not go smoothly, as Odysseus and his men encounter the cyclops Polyphemus, the Sirens, and an enchantress named Circe, among other obstacles. Meanwhile, his long-suffering wife Penelope is warding off hundreds of suitors eager to usurp Odysseus' position.

It's difficult to overestimate the tremendous influence Homer's epic has had on global culture. Nolan himself recalled seeing the Odyssey performed as a school play when he was just 5 or 6 years old. "I remember the Sirens and him being strapped to the mast and things like that," he recently told Empire. "I think it's in all of us, really. And when you start to break down the text and adapt it, you find that all of these other films—and all the films I've worked on—you know, they're all from the Odyssey. It's foundational."

In addition to Damon, the cast includes Anne Hathaway as Penelope; Tom Holland as Odysseus' son, Telemachus; Robert Pattinson as Antinous, one of Penelope's many suitors; Jon Bernthal as the Spartan king, Menelaus; Benny Safdie as the Achaean commander during the Trojan War, Agamemnon; John Leguizamo as Odysseus' faithful servant, Eumaeus; Himesh Patel as his second-in-command, Eurylochus; Will Yun Lee and Jimmy Gonzales as crew members; and Mia Goth as Penelope's maid Melantho. We also have Zendaya as Athena, Charlize Theron as Circe, and Lupita Nyong'o in an as-yet-undisclosed role.

The Odyssey hits theaters on July 17, 2026.


World’s largest shadow library made a 300TB copy of Spotify’s most streamed songs


The world's largest shadow library—which is increasingly funded by AI developers—shocked the Internet this weekend by announcing it had "backed up Spotify" and started distributing 300 terabytes of metadata and music files in bulk torrents.

According to Anna's Archive, the data grab represents more than 99 percent of listens on Spotify, making it "the largest publicly available music metadata database with 256 million tracks." It's also "the world’s first 'preservation archive' for music which is fully open," with 86 million music files, the archive boasted.

The music files supposedly represent about 37 percent of songs available on Spotify as of July 2025. The scraped files were prioritized by popularity, with Anna's Archive weeding out many songs that are never streamed or are of poor quality, such as AI-generated songs.

On Monday, Spotify told Android Authority that it was investigating whether Anna's Archive had actually scraped its platform "at scale," as its blog claimed.

"An investigation into unauthorized access identified that a third party scraped public metadata and used illicit tactics to circumvent DRM to access some of the platform’s audio files," Spotify said. "We are actively investigating the incident."

It's unclear how much Spotify data was actually scraped, Android Authority noted, or if the company will possibly pursue legal action to take down the torrents. Asked for comment, a Spotify spokesperson told Ars that "Spotify has identified and disabled the nefarious user accounts that engaged in unlawful scraping."

For Anna's Archive, the temptation to scrape the data may have been too great after it stumbled upon "a way to scrape Spotify at scale," supposedly "a while ago."

"We saw a role for us here to build a music archive primarily aimed at preservation," the archive said. Scraping Spotify data was a "great start," they said, toward building an "authoritative list of torrents aiming to represent all music ever produced."

A list like that "does not exist for music," the archive said, and would be akin to LibGen—which tech giants like Meta and startups like Anthropic notoriously used to pirate book datasets to train AI.

Releasing the metadata torrents this December was the first step toward achieving this "preservation" mission, Anna's Archive said. Next, the archive will release torrents of music files, starting with the most popular streams, then eventually releasing torrents of less popular songs and album art. In the future, "if there is enough interest, we could add downloading of individual files to Anna’s Archive," the blog said.

Spotify told Ars that it's taking steps to avoid any future scraping.

"We’ve implemented new safeguards for these types of anti-copyright attacks and are actively monitoring for suspicious behavior," Spotify's spokesperson said. "Since day one, we have stood with the artist community against piracy, and we are actively working with our industry partners to protect creators and defend their rights."

"This is insane": Users fear data grab will doom archive

Anna's Archive claimed that the Spotify data was scraped to help preserve "humanity’s musical heritage," protecting it "forever" from "destruction by natural disasters, wars, budget cuts, and other catastrophes."

However, some Anna's Archive fans—who largely use the search engine to find books, academic papers, and magazine articles—were freaked out by the news that Spotify data was scraped. On Hacker News, some users questioned whether the data would be useful to anyone but AI researchers, since searching bulk torrents for individual songs seemed impractical for music fans.

One user pointed out that "there are already tools to automatically locate and stream pirated TV and movie content automatic and on demand"—suggesting that music fans could find a way to stream the data. But others worried Anna's Archive may have been baited into scraping Spotify, perhaps taking on legal risks that AI companies prone to obscuring their training data sources likely wish to avoid.

"This is insane," a top commenter wrote. "Definitely wondering if this was in response to desire from AI researchers/companies who wanted this stuff. Or if the major record labels already license their entire catalogs for training purposes cheaply enough, so this really is just solely intended as a preservation effort?"

But Anna's Archive is clearly working to support AI developers, another noted, pointing out that Anna's Archive promotes selling "high-speed access" to "enterprise-level" LLM data, including "unreleased collections." Anyone can donate "tens of thousands" to get such access, the archive suggests on its webpage, and any interested AI researchers can reach out to discuss "how we can work together."

"AI may not be their original/primary motivation, but they are evidently on board with facilitating AI labs piracy-maxxing," a third commenter suggested.

Meanwhile, on Reddit, some fretted that Anna's Archive may have doomed itself by scraping the data. To them, it seemed like the archive was "only making themselves a target" after watching the Internet Archive struggle to survive a legal attack from record labels that ended in a confidential settlement last year.

"I'm furious with AA for sticking this target on their own backs," a redditor wrote on a post declaring that "this Spotify hacking will just ruin the actual important literary archive."

As Anna's Archive fans spiraled, a conspiracy theory even circulated that the archive was only "doing it for the AI bros, who are the ones paying the bills behind the scenes" to keep the archive afloat.

Ars could not immediately reach Anna's Archive to comment on users' fears or Spotify's investigation.

On Reddit, one user took comfort in the fact that the archive is "designed to be resistant to being taken out," perhaps preventing legal action from ever really dooming the archive.

"The domain and such can be gone, sure, but the core software and its data can be resurfaced again and again," the user explained.

But not everyone was convinced that Anna's Archive could survive brazenly torrenting so much Spotify data.

"This is like saying the Titanic is unsinkable" that user warned, suggesting that Anna's Archive might lose donations if Spotify-fueled takedowns continually frustrate downloads over time. "Sure, in theory data can certainly resurface again and again, but doing so each time, it will take money and resources, which are finite. How many times are folks willing to do this before they just give up?"

This story was updated to include Spotify's statement. 


Does swearing make you stronger? Science says yes.


If you’re human, you’ve probably hollered a curse word or two (or three) when barking your shin on a table edge or hitting your thumb with a hammer. Perhaps you’ve noticed that this seems to lessen your pain. There’s a growing body of scientific evidence that this is indeed the case. The technical term is the “hypoalgesic effect of swearing.” Cursing can also improve physical strength and endurance, according to a new paper published in the journal American Psychologist.

As previously reported, co-author Richard Stephens, a psychologist at Keele University, became interested in studying the potential benefits of profanity after noting his wife’s “unsavory language” while giving birth and wondering if profanity really could help alleviate pain. “Swearing is such a common response to pain. There has to be an underlying reason why we do it,” Stephens told Scientific American after publishing a 2009 study that was awarded the 2010 Ig Nobel Peace Prize.

For that study, Stephens and his colleagues asked 67 study participants (college students) to immerse their hands in a bucket of ice water. They were then instructed to either swear repeatedly using the profanity of their choice or chant a neutral word. Lo and behold, the participants said they experienced less pain when they swore and were also able to leave their hands in the bucket about 40 seconds longer than when they weren’t swearing. It has been suggested that this is a primitive reflex that serves as a form of catharsis.

The team followed up with a 2011 study showing that the pain-relief effect works best for subjects who typically don’t swear that often, perhaps because they attach a higher emotional value to swears. They also found that subjects’ heart rates increased when they swore. But it might not be the only underlying mechanism. Other researchers have pointed out that profanity might be distracting, thereby taking one’s mind off the pain rather than serving as an actual analgesic.

So in 2020, the Stephens team conducted a follow-up study, using the same methodology as they had back in 2009, asking participants to either chant the F-word or the fake swears “fouch” and “twizpipe.” (Fun fact: the earliest known appearance of the F-word in the English language is “Roger F$#%-by-the-Navel” who appears in some court records from 1310-11. )

The result: Only the F-word had any effect on pain outcomes. The team also measured the subjects’ pain threshold, asking them to indicate when the ice water began to feel painful. Those who chanted the F-word waited longer before indicating they felt pain—in other words, the swearing increased their threshold for pain. Chanting “fouch” or “twizpipe” had no effect on either measure.

F@%*-ing go for it

For this latest study, Stephens was interested in investigating potential mechanisms for swearing as a possible form of disinhibition (usually viewed negatively), building on his team’s 2018 and 2022 papers showing that swearing can improve strength in a chair push-up task. “In many situations, people hold themselves back—consciously or unconsciously—from using their full strength,” said Stephens. “By swearing, we throw off social constraint and allow ourselves to push harder in different situations. Swearing is an easily available way to help yourself feel focused, confident and less distracted, and ‘go for it’ a little more.”

In two separate experiments, participants were asked to select a swear word they’d normally use after, say, bumping their head, and a more neutral word to describe an inanimate object like a table. They then performed the aforementioned chair push-up task: sitting on a sturdy chair and placing their hands under their thighs with the fingers pointed inward. Then they lifted their feet off the floor and straightened their arms to support their body weight for as long as possible, chanting either the swear word or the neutral word every two seconds. Afterward, subjects completed a questionnaire to assess various aspects of their mental state during the task.

The results: Subjects who swore during the task could support their body weight much longer than those who merely repeated the neutral word. This confirms the reported results of similar studies in the past. Furthermore, subjects reported increases in their sense of psychological “flow,” distraction, and self-confidence, all indicators of increased disinhibition.

“These findings help explain why swearing is so commonplace,” said Stephens. “Swearing is literally a calorie-neutral, drug-free, low-cost, readily available tool at our disposal for when we need a boost in performance.” The team next plans to explore the influence of swearing on public speaking and romantic behaviors, since these are situations where most people are more hesitant and less confident in themselves, and hence more likely to hold back.

DOI: American Psychologist, 2025. 10.1037/amp0001650  (About DOIs).


YouTube bans two popular channels that created fake AI movie trailers


Google is generally happy to see people using generative AI tools to create content, and it’s doubly happy when they publish it on its platforms. But there are limits to everything. Two YouTube channels that attracted millions of subscribers with AI-generated movie trailers have been shuttered.

Screen Culture and KH Studio flooded the site with fake but often believable trailers. The channels, which had a combined audience of more than 2 million subscribers, became a thorn in Google’s side in early 2025 when other YouTubers began griping about their sudden popularity in the age of AI. The channels produced videos with titles like “GTA: San Andreas (2025) Teaser Trailer” and “Malcolm In The Middle Reboot (2025) First Trailer.” Of course, neither of those projects exists, but that didn’t stop them from appearing in user feeds.

Google demonetized the channels in early 2025, forcing them to adopt language that made it clear they were not official trailers. The channels were able to monetize again, but the disclaimers were not consistently used. Indeed, many of the most popular videos from those channels in recent months included no “parody” or “concept trailer” disclosures. Now, visiting either channel’s page on YouTube produces an error reading, “This page isn’t available. Sorry about that. Try searching for something else.”

Deadline reports that the behavior of these creators ran afoul of YouTube’s spam and misleading-metadata policies. At the same time, Google loves generative AI—YouTube has added more ways for creators to use generative AI, and the company says more gen AI tools are coming in the future. It’s quite a tightrope for Google to walk.

A selection of videos from the now-defunct Screen Culture channel. Credit: Ryan Whitwam

While passing off AI videos as authentic movie trailers is definitely spammy conduct, the recent changes to the legal landscape could be a factor, too. Disney recently entered into a partnership with OpenAI, bringing its massive library of characters to the company’s Sora AI video app. At the same time, Disney sent a cease-and-desist letter to Google demanding the removal of Disney content from Google AI. The letter specifically cited AI content on YouTube as a concern.

Both the banned trailer channels made heavy use of Disney properties, sometimes even incorporating snippets of real trailers. For example, Screen Culture created 23 AI trailers for The Fantastic Four: First Steps, some of which outranked the official trailer in searches. It’s unclear if either account used Google’s Veo models to create the trailers, but Google’s AI will recreate Disney characters without issue.

While Screen Culture and KH Studio were the largest purveyors of AI movie trailers, they are far from alone. There are others with five- and six-digit subscriber counts, some of which include disclosures about fan-made content. Is that enough to save them from the ban hammer? Many YouTube viewers probably hope not.
