
SpaceX begins “significant reconfiguration” of Starlink satellite constellation


The year 2025 ended with more than 14,000 active satellites from all nations zooming around the Earth. One-third of them will soon move to lower altitudes.

The maneuvers will be undertaken by SpaceX, the owner of the largest satellite fleet in orbit. About 4,400 of the company's Starlink Internet satellites will move from an altitude of 341 miles (550 kilometers) to 298 miles (480 kilometers) over the course of 2026, according to Michael Nicolls, SpaceX's vice president of Starlink engineering.

"Starlink is beginning a significant reconfiguration of its satellite constellation focused on increasing space safety," Nicolls wrote Thursday in a post on X.

The maneuvers undertaken with the Starlink satellites' plasma engines will be gradual, but they will eventually bring a large fraction of orbital traffic closer together. The effect, perhaps counterintuitively, will be a reduced risk of collisions between satellites whizzing through near-Earth space at nearly 5 miles per second. Nicolls said the decision will "increase space safety in several ways."

Why now?

There are fewer debris objects at the lower altitude, and although the Starlink satellites will be packed more tightly, they follow choreographed paths distributed in dozens of orbital lanes. "The number of debris objects and planned satellite constellations is significantly lower below 500 km, reducing the aggregate likelihood of collision," Nicolls wrote.

The 4,400 satellites moving closer to Earth make up nearly half of SpaceX's Starlink fleet. At the end of 2025, SpaceX had nearly 9,400 working satellites in orbit, including more than 8,000 Starlinks in operational service and hundreds more undergoing tests and activation.

There's another natural reason for reconfiguring the Starlink constellation. The Sun is starting to quiet down after reaching the peak of the 11-year solar cycle in 2024. The decline in solar activity has the knock-on effect of reducing air density in the uppermost layers of the Earth's atmosphere, a meaningful factor in planning satellite operations in low-Earth orbit.

With the approaching solar minimum, Starlink satellites will encounter less aerodynamic drag at their current altitude. That matters because, in the rare event of a spacecraft failure, SpaceX relies on atmospheric resistance to drag Starlink satellites out of orbit toward a fiery demise on reentry. At solar minimum, it might take more than four years for drag to pull a dead satellite out of the current 550-kilometer orbit, according to Nicolls. At the lower altitude, the same satellite would naturally reenter the atmosphere and burn up within a few months.

The constellation shuffle will help ensure any Starlink satellites that become space junk will deorbit as quickly as possible. "These actions will further improve the safety of the constellation, particularly with difficult to control risks such as uncoordinated maneuvers and launches by other satellite operators," Nicolls wrote.

The passage of Starlink satellites is seen in the sky over southern Poland on November 1, 2024. Credit: Jakub Porzycki/NurPhoto

Performance boost

There are other important reasons for making the change that Nicolls did not mention in his social media post. Moving the satellites closer to the Earth—and closer to SpaceX's Starlink subscribers—should improve the network's performance.

Elon Musk, SpaceX's founder and CEO, suggested this is actually the "biggest advantage" of moving to a lower altitude. "Beam diameter is smaller for a given antenna size, allowing Starlink to serve a higher density of customers," he wrote on X, his social media platform.

Reducing the distance between Starlink satellites and SpaceX's 9 million Starlink customers will also provide a small improvement in latency, or the time it takes Internet signals to travel between a transmitter and receiver. The lower altitude may also make the Starlink satellites appear slightly brighter in the sky, although the precise effect hasn't been quantified.

Hundreds of Starlink satellites specially modified to beam connectivity directly to smartphones already fly in orbits as low as 223 miles (360 kilometers).

SpaceX launched 165 missions with its workhorse Falcon 9 rocket last year, and nearly three-quarters of them carried Starlink satellites into space. The company reported its assembly line in Redmond, Washington, churned out new Starlink satellites at a rate of more than 10 per day.

Aside from continuing Starlink network expansion with more Falcon 9 launches, SpaceX intends to debut the more powerful Starlink V3 satellite platform this year. Starlink V3 is too big to fit on a Falcon 9, so it must launch on SpaceX's super-heavy Starship rocket, which has not yet begun operational flights.


Stewart Cheifet, PBS host who chronicled the PC revolution, dies at 87


Stewart Cheifet, the television producer and host who documented the personal computer revolution for nearly two decades on PBS, died on December 28, 2025, at age 87 in Philadelphia. Cheifet created and hosted Computer Chronicles, which ran on the public television network from 1983 to 2002 and helped demystify personal computing for millions of American viewers.

Computer Chronicles covered everything from the earliest IBM PCs and Apple Macintosh models to the rise of the World Wide Web and the dot-com boom. Cheifet conducted interviews with computing industry figures, including Bill Gates, Steve Jobs, and Jeff Bezos, while demonstrating hardware and software for a general audience.

From 1983 to 1990, he co-hosted the show with Gary Kildall, the Digital Research founder who created the popular CP/M operating system that predated MS-DOS on early personal computer systems.

Video: Computer Chronicles 01x25, "Artificial Intelligence" (1984)

From 1996 to 2002, Cheifet also produced and hosted Net Cafe, a companion series that documented the early Internet boom and introduced viewers to then-new websites like Yahoo, Google, and eBay.

A legacy worth preserving

Computer Chronicles began as a local weekly series in 1981 when Cheifet served as station manager at KCSM-TV, the College of San Mateo's public television station. It became a national PBS series in 1983 and ran continuously until 2002, producing 433 episodes across 19 seasons. The format remained consistent throughout: product demonstrations, guest interviews, and a closing news segment called "Random Access" that covered industry developments.

After the show's run ended and Cheifet left television production, he worked to preserve the show's legacy as a consultant for the Internet Archive, helping to make episodes of Computer Chronicles and Net Cafe publicly available.

In a comment on Slashdot, Brewster Kahle, founder of the Internet Archive, remembered meeting Cheifet during a Net Cafe interview and later collaborating with him to bring the show's archives online: "After it I asked what he was doing with his archive, we kept talking and he founded the 'collections group' at the Internet Archive and helped us get all of Computer Chronicles on this new site and so much more. Wonderful man, and oh that voice!"

As a result of that collaboration, most episodes of the show remain freely available on the Internet Archive, where they serve as a historical record of the personal computing era. A re-digitization project that involves Cheifet's personal tapes is underway to recover episodes of Computer Chronicles that were missed in the original archiving effort.

Cheifet was born in Philadelphia on September 24, 1938, and earned degrees in mathematics and psychology from the University of Southern California in 1960. He later graduated from Harvard Law School. In 1967, while working at CBS News in Paris, he met Peta Kennedy, whom he married later that year.

In addition to his television work, Cheifet taught broadcast journalism at the Donald W. Reynolds School of Journalism at the University of Nevada, Reno. In a 2014 interview with the school, he explained why he pursued both law and journalism: "They are the two legal revolutionaries. They are the two professions that allow you to change the world without having to blow someone up."


Marvel rings in new year with Wonder Man trailer


Marvel Studios decided to ring in the new year with a fresh trailer for Wonder Man, its eight-episode miniseries premiering later this month on Disney+. Part of the MCU’s Phase Six, the miniseries was created by Destin Daniel Cretton (Shang-Chi and the Legend of the Ten Rings) and Andrew Guest (Hawkeye), with Guest serving as showrunner.

As previously reported, Yahya Abdul-Mateen II stars as Simon Williams, aka Wonder Man, an actor and stunt person with actual superpowers who decides to audition for the lead role in a superhero TV series—a reboot of an earlier Wonder Man incarnation. Demetrius Grosse plays Simon’s brother, Eric, aka Grim Reaper; Ed Harris plays Simon’s agent, Neal Saroyan; and Arian Moayed plays P. Cleary, an agent with the Department of Damage Control. Lauren Glazier, Josh Gad, Byron Bowers, Bechir Sylvain, and Manny McCord will also appear in as-yet-undisclosed roles.

Rounding out the cast is Ben Kingsley, reprising his MCU role as failed actor Trevor Slattery. You may recall Slattery from 2013’s Iron Man 3, hired by the villain of that film to pretend to be the leader of an international terrorist organization called the Ten Rings. Slattery showed up again in 2021’s Shang-Chi and the Legend of the Ten Rings, rehabilitated after a stint in prison; he helped the titular Shang-Chi (Simu Liu) on his journey to the mythical village of Ta Lo.

A one-minute teaser that leaned into the meta-humor was released just before New York Comic Con last fall, followed by a full trailer during the event itself, which mostly laid out the premise as Simon prepared to audition for his dream role. The new trailer repackages some of that footage, except Simon is asked to sign a form stating that he doesn't have superpowers. The problem is that he does, and the stress of the audition and the acting process itself brings those superpowers to the fore in explosive fashion. So the "Department of Damage Control" naturally declares Simon an "extraordinary threat."

Wonder Man premieres on Disney+ on January 27, 2026.

 


Northwestern hires Chip Kelly as offensive coordinator after stint with Raiders fizzled

The veteran coach is returning to college after a disappointing season with the Las Vegas Raiders.

Erik Visits an American Grave, Part 2,045


This is the grave of Sam Goody.

Born in 1904 in New York City, Sam Gutowitz grew up in the city and started running stores, part of the Jewish merchant class. His father was a tailor and his parents had migrated from Poland. A common enough trajectory. Gutowitz was a fine enough name for the Jewish community, but in the broader world, he didn’t want to be tainted with his heritage, so he went with Sam Goody. It really stuck too, as you can see from the grave. He didn’t even use Gutowitz on the tombstone, even though it’s a Jewish cemetery with Hebrew on the grave.

Well, Goody did one thing in American history that matters. In the 1940s, he opened a record store, right after the creation of long-playing records. He already had plenty of history running stores by this time. He worked in the discount world, where there was a lot of money if you were crafty and lucky enough to make it work. This happened because in 1938, when he was running a toy store, someone came in asking if he had any records. He said no, but the customer then said that if he had any, he'd probably buy them. So he started scouring basements and people's sales and buying records, and it turned out they sold. He bought 300 opera records from a family in Brooklyn for $60. He sold them for $1,100. This was good business.

Goody basically cornered the market on discounted LPs, for which there was a huge market among young music fans who didn't have a lot of money. What this meant was something no one had experienced before–an enormous record store that had a huge amount of stock and variety. So if you were in the know, you could visit his store on 49th Street and be in what must have been complete paradise. It's hard to imagine, in this day when nearly any music you can think of is available in some form or another, how amazing this must have been.

As you can see from my Music Notes posts, I can listen to some old country and then some new post-punk album and then some jazz from any era and then an African album from the 70s, and I can live this life of real musical enjoyment with enormous diversity. This hasn't quite been my whole life, but the idea of a really broad range of music has been with me since I was a kid. I remember going to House of Records in Eugene when I was in college and just picking random things that sounded interesting. In fact, I took a lot of ribbing from my friends when I bought Muzsikás' Máramaros: The Lost Jewish Music of Transylvania, just because I wanted to hear what that sounded like. I still have that album too, and while I don't listen to the whole thing much, I do enjoy it when it comes up on shuffle. Well, it was Goody who created the world in which such a thing was plausible.

Anyway, Goody's discount record store model worked like gangbusters. He made money hand over fist. When most stores sold a record for $3.98, he sold it for $3.25. Notably, he knew nothing about music and didn't care. But he did hire people who did know about music for his store. In 1951, he created the chain of record stores in his name that you know him for. Later, Goody had money problems, and creditors took over the chain in 1959, but he remained associated with it to some degree for a long time. He was still around in 1978 when the American Can Company bought the chain, and he made plenty of money on that sale. Why American Can? It was just a conglomerate by that point and had already purchased Musicland, the main rival to Goody's discount empire, so it was the kind of logical consolidation at the heart of capitalism. Goody died in 1991, at the age of 87. It was heart failure.

It's interesting that there's so little about the man on the internet. The chain has a Wikipedia page, but he doesn't. He did get a decent New York Times obit. But people across not one but at least three generations knew who he was. No more, though. There was talk that the last Sam Goody record store was going to close this year, though I am not 100% sure whether that happened.

Sam Goody is buried in New Montefiore Cemetery, West Babylon, New York.

If you would like this series to visit other people who gave their names to the store chains they founded, you can donate to cover the required expenses here. Richard Warren Sears is in Chicago and so is Aaron Montgomery Ward. Same cemetery in fact. Previous posts in this series are archived here and here.


fxer: "Mall fixture Sam Goody, across from Orange Julius and Claire's"

How AI coding agents work—and what to remember if you use them


AI coding agents from OpenAI, Anthropic, and Google can now work on software projects for hours at a time, writing complete apps, running tests, and fixing bugs with human supervision. But these tools are not magic and can complicate rather than simplify a software project. Understanding how they work under the hood can help developers know when (and if) to use them, while avoiding common pitfalls.

We'll start with the basics: At the core of every AI coding agent is a technology called a large language model (LLM), which is a type of neural network trained on vast amounts of text data, including lots of programming code. It's a pattern-matching machine that uses a prompt to "extract" compressed statistical representations of data it saw during training and provide a plausible continuation of that pattern as an output. In this extraction, an LLM can interpolate across domains and concepts, resulting in some useful logical inferences when done well and confabulation errors when done poorly.

These base models are then further refined through techniques like fine-tuning on curated examples and reinforcement learning from human feedback (RLHF), which shape the model to follow instructions, use tools, and produce more useful outputs.

A screenshot of the Claude Code command-line interface. Credit: Anthropic

Over the past few years, AI researchers have been probing LLMs' deficiencies and finding ways to work around them. One recent innovation was the simulated reasoning model, which generates context (extending the prompt) in the form of reasoning-style text that can help an LLM home in on a more accurate output. Another innovation was an application called an "agent" that links several LLMs together to perform tasks simultaneously and evaluate outputs.

How coding agents are structured

In that sense, each AI coding agent is a program wrapper that works with multiple LLMs. There is typically a "supervising" LLM that interprets tasks (prompts) from the human user and then assigns those tasks to parallel LLMs that can use software tools to execute the instructions. The supervising agent can interrupt tasks below it and evaluate the subtask results to see how a project is going. Anthropic's engineering documentation describes this pattern as "gather context, take action, verify work, repeat."
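As a rough illustration of that loop, here is a minimal Python sketch of how such a wrapper might work. It is not any vendor's actual code: call_llm and run_tool are hypothetical stand-ins for a real model API and a real tool layer.

    import subprocess

    def call_llm(prompt: str) -> dict:
        # Stand-in for a real model call: it "asks" to list files once,
        # then declares the task done after seeing the tool output.
        if "TOOL" in prompt:
            return {"type": "done", "summary": "Listed repository contents."}
        return {"type": "tool", "tool": "bash", "args": {"cmd": "ls"}}

    def run_tool(name: str, args: dict) -> str:
        # Stand-in for the agent's tool layer (file edits, tests, shell).
        if name == "bash":
            result = subprocess.run(args["cmd"], shell=True,
                                    capture_output=True, text=True)
            return result.stdout
        return ""

    def agent_loop(task: str, max_steps: int = 20) -> str:
        history = [f"TASK: {task}"]
        for _ in range(max_steps):
            decision = call_llm("\n".join(history))    # gather context
            if decision["type"] == "done":
                return decision["summary"]             # verify and finish
            output = run_tool(decision["tool"], decision["args"])   # take action
            history.append(f"TOOL {decision['tool']} -> {output}")  # feed results back
        return "Stopped: step budget exhausted"

    print(agent_loop("list the files in this project"))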

If run locally through a command-line interface (CLI), users give the agents conditional permission to write files on the local machine (code or whatever is needed), run exploratory commands (say, "ls" to list files in a directory), fetch websites (usually using "curl"), download software, or upload files to remote servers. There are lots of possibilities (and potential dangers) with this approach, so it needs to be used carefully.
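A tiny sketch of what that permission gating might look like in practice follows; the allowlist and the prompt wording are invented for illustration and are not taken from any particular agent.

    import shlex
    import subprocess

    SAFE_COMMANDS = {"ls", "cat", "head", "tail", "git"}   # read-mostly commands

    def run_with_permission(cmd: str) -> str:
        program = shlex.split(cmd)[0]
        if program not in SAFE_COMMANDS:
            answer = input(f"Agent wants to run '{cmd}'. Allow? [y/N] ")
            if answer.strip().lower() != "y":
                return "(denied by user)"
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return result.stdout + result.stderr

    print(run_with_permission("ls"))                          # runs immediately
    print(run_with_permission("curl https://example.com"))    # asks first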

In contrast, when a user starts a task in a web-based agent, like the web versions of Codex and Claude Code, the system provisions a sandboxed cloud container preloaded with the user's code repository, where Codex can read and edit files, run commands (including test harnesses and linters), and execute code in isolation. Anthropic's Claude Code uses operating system-level features to create filesystem and network boundaries within which the agent can work more freely.

The context problem

Every LLM has a short-term memory, so to speak, that limits the amount of data it can process before it "forgets" what it's doing. This is called "context." Every time you submit a response to the supervising agent, you are amending one gigantic prompt that includes the entire history of the conversation so far (and all the code generated, plus the simulated reasoning tokens the model uses to "think" more about a problem). The AI model then evaluates this prompt and produces an output. It's a very computationally expensive process that increases quadratically with prompt size because LLMs process every token (chunk of data) against every other token in the prompt.
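A back-of-the-envelope sketch shows why this adds up. Assuming a made-up figure of about 2,000 new tokens per turn, the token-to-token attention work grows far faster than the conversation itself:

    history_tokens = 0
    total_attention_work = 0
    for turn in range(1, 11):
        history_tokens += 2_000            # assumed new tokens added each turn
        total_attention_work += history_tokens ** 2   # every token vs. every other token
        print(f"turn {turn:2d}: prompt = {history_tokens:6,} tokens, "
              f"cumulative attention work ~ {total_attention_work:,}")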

Anthropic's engineering team describes context as a finite resource with diminishing returns. Studies have revealed what researchers call "context rot": As the number of tokens in the context window increases, the model's ability to accurately recall information decreases. Every new token depletes what the documentation calls an "attention budget."

This context limit naturally restricts the size of a codebase an LLM can process at one time, and if you feed the AI model lots of huge code files (which have to be re-evaluated by the LLM every time you send another response), it can burn through token or usage limits pretty quickly.

Tricks of the trade

To get around these limits, the creators of coding agents use several tricks. One is that AI models are fine-tuned to write code that outsources activities to other software tools. For example, they might write Python scripts to extract data from images or files rather than feeding the whole file through an LLM, which saves tokens and avoids inaccurate results.

Anthropic's documentation notes that Claude Code also uses this approach to perform complex data analysis over large databases, writing targeted queries and using Bash commands like "head" and "tail" to analyze large volumes of data without ever loading the full data objects into context.
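Here is a hypothetical example of that trick: rather than pasting a large CSV file into the prompt, an agent could write and run a small helper like this one and keep only its few lines of output in context. The file name and column below are invented.

    import csv
    from collections import Counter

    def summarize_csv(path: str, column: str, top_n: int = 5) -> str:
        # Count values in one column so the model sees a short summary, not the file.
        counts = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                counts[row[column]] += 1
        return "\n".join(f"{value}: {n}" for value, n in counts.most_common(top_n))

    # print(summarize_csv("orders.csv", "country"))   # hypothetical file and column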

(In a way, these AI agents are guided but semi-autonomous tool-using programs that are a major extension of a concept we first saw in early 2023.)

Another major breakthrough in agents came from dynamic context management. Agents can do this in a few ways that are not fully disclosed in proprietary coding models, but we do know the most important technique they use: context compression.

The command-line version of OpenAI Codex running in a macOS terminal window. Credit: Benj Edwards

When a coding LLM nears its context limit, this technique compresses the context history by summarizing it, losing details in the process but shortening the history to key details. Anthropic's documentation describes this "compaction" as distilling context contents in a high-fidelity manner, preserving key details like architectural decisions and unresolved bugs while discarding redundant tool outputs.

This means the AI coding agents periodically "forget" a large portion of what they are doing every time this compression happens, but unlike older LLM-based systems, they aren't completely clueless about what has transpired and can rapidly re-orient themselves by reading existing code, written notes left in files, change logs, and so on.
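A minimal Python sketch of the idea, with summarize() standing in for an LLM call and the thresholds invented for illustration rather than taken from any real agent:

    def count_tokens(text: str) -> int:
        return len(text.split())          # crude stand-in for a real tokenizer

    def summarize(turns: list[str]) -> str:
        # Stand-in for asking the model to distill key decisions and open bugs.
        return "SUMMARY OF EARLIER WORK: " + " / ".join(t[:40] for t in turns)

    def maybe_compact(history: list[str], limit: int = 8_000,
                      keep_recent: int = 5) -> list[str]:
        if count_tokens("\n".join(history)) < limit:
            return history                        # still fits; nothing to do
        old, recent = history[:-keep_recent], history[-keep_recent:]
        return [summarize(old)] + recent          # details lost, key points kept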

Anthropic's documentation recommends using CLAUDE.md files to document common bash commands, core files, utility functions, code style guidelines, and testing instructions. AGENTS.md, now a multi-company standard, is another useful way of guiding agent actions in between context refreshes. These files act as external notes that let agents track progress across complex tasks while maintaining critical context that would otherwise be lost.
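What goes into such a file varies by project, but a hypothetical CLAUDE.md might look something like this; every command and convention below is invented for illustration.

    # CLAUDE.md
    ## Commands
    - npm run test     # run the unit test suite before committing
    - npm run lint     # linting is required; CI rejects unlinted code
    ## Code style
    - Source lives in src/; tests mirror that layout in tests/
    - Prefer small, single-purpose modules; no default exports
    ## Current state
    - Auth refactor in progress; read TODO.md before touching src/auth/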

For tasks requiring extended work, both companies employ multi-agent architectures. According to Anthropic's research documentation, its system uses an "orchestrator-worker pattern" in which a lead agent coordinates the process while delegating to specialized subagents that operate in parallel. When a user submits a query, the lead agent analyzes it, develops a strategy, and spawns subagents to explore different aspects simultaneously. The subagents act as intelligent filters, returning only relevant information rather than their full context to the lead agent.
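A stripped-down sketch of that orchestrator-worker shape, with worker() standing in for a subagent that has its own context window and returns only a short finding:

    from concurrent.futures import ThreadPoolExecutor

    def worker(subtask: str) -> str:
        # Stand-in for a subagent call; it returns a filtered finding,
        # not everything it read while exploring.
        return f"finding for '{subtask}'"

    def orchestrate(query: str) -> str:
        subtasks = [f"{query}: part {i}" for i in range(1, 4)]   # planning step
        with ThreadPoolExecutor() as pool:
            findings = list(pool.map(worker, subtasks))          # run in parallel
        return "\n".join(findings)   # the lead agent synthesizes the final answer

    print(orchestrate("explore the codebase"))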

The multi-agent approach burns through tokens rapidly. Anthropic's documentation notes that agents typically use about four times more tokens than chatbot interactions, and multi-agent systems use about 15 times more tokens than chats. For economic viability, these systems require tasks where the value is high enough to justify the increased cost.

Best practices for humans

While using these agents is contentious in some programming circles, if you use one to code a project, knowing good software development practices helps to head off future problems. For example, it's good to know about version control, making incremental backups, implementing one feature at a time, and testing it before moving on.

What people call "vibe coding"—creating AI-generated code without understanding what it's doing—is clearly dangerous for production work. Shipping code you didn't write yourself in a production environment is risky because it could introduce security issues or other bugs, or accumulate technical debt that snowballs over time.

Independent AI researcher Simon Willison recently argued that developers using coding agents still bear responsibility for proving their code works. "Almost anyone can prompt an LLM to generate a thousand-line patch and submit it for code review," Willison wrote. "That's no longer valuable. What's valuable is contributing code that is proven to work."

In fact, human planning is key. Claude Code's best practices documentation recommends a specific workflow for complex problems: First, ask the agent to read relevant files and explicitly tell it not to write any code yet, then ask it to make a plan. Without these research and planning steps, the documentation warns, Claude's outputs tend to jump straight to coding a solution.
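In practice, that workflow amounts to a sequence of prompts along these lines; the wording here is an invented example, not text from Anthropic's documentation.

    1. "Read src/billing/ and its tests. Don't write any code yet."
    2. "Make a plan for adding proration support. List the files you'd change
       and the risks. Still no code."
    3. "Implement step 1 of the plan, run the tests, and show me the diff."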

Without planning, LLMs sometimes reach for quick solutions to satisfy a momentary objective that might break later if a project were expanded. So having some idea of what makes a good architecture for a modular program that can be expanded over time can help you guide the LLM to craft something more durable.

As mentioned above, these agents aren't perfect, and some people prefer not to use them at all. A randomized controlled trial published by the nonprofit research organization METR in July 2025 found that experienced open-source developers actually took 19 percent longer to complete tasks when using AI tools, despite believing they were working faster. The study's authors note several caveats: The developers were highly experienced with their codebases (averaging five years and 1,500 commits), the repositories were large and mature, and the models used (primarily Claude 3.5 and 3.7 Sonnet via Cursor) have since been superseded by more capable versions.

Whether newer models would produce different results remains an open question, but the study suggests that AI coding tools may not always provide universal speed-ups, particularly for developers who already know their codebases well.

Given these potential hazards, coding proof-of-concept demos and internal tools is probably the ideal use of coding agents right now. Since AI models have no actual agency (despite being called agents) and are not people who can be held accountable for mistakes, human oversight is key.
