
DNA identifies four more crew members of doomed Franklin expedition


Archaeologists continue to use DNA analysis to identify the recovered remains of the doomed crew members of Captain Sir John Franklin's 1845 Arctic expedition to cross the Northwest Passage. They can now add four more names to the list of previously identified crew members. The findings were reported in two papers, one published in the Journal of Archaeological Science and the other in the Polar Record.

As we've reported previously, Franklin’s two ships, the HMS Erebus and the HMS Terror, became icebound in the Victoria Strait, and all 129 crew members ultimately died. It has been an enduring mystery that has captured imaginations ever since. The expedition set sail on May 19, 1845, and was last seen in July 1845 in Baffin Bay by the captains of two whaling ships. Historians have compiled a reasonably credible account of what happened: The crew spent the winter of 1845–1846 on Beechey Island, where the graves of three crew members were found.

When the weather cleared, the expedition sailed into the Victoria Strait before getting trapped in the ice off King William Island in September 1846. Franklin died on June 11, 1847, according to a surviving note signed by Fitzjames and dated the following April. James Fitzjames, captain of HMS Erebus, had assumed overall command after Franklin’s death, leading 105 survivors from their ice-trapped ships. It’s believed that the rest of the crew died while encamped for the winter or while attempting to walk back to civilization.

There was no concrete news about the expedition’s fate until 1854, when local Inuit told Scottish explorer John Rae that they had seen about 40 people dragging a ship’s boat on a sledge along the south coast. The following year, several bodies were found near the mouth of the Back River. A second search in 1859 led to the discovery of a location some 80 kilometers to the south of that site, dubbed Erebus Bay, as well as several more bodies and one of the ships' boats still mounted on a sledge. In 1861, yet another site was found just two kilometers away with even more bodies. When those two sites were rediscovered in the 1990s, archaeologists designated them NgLj-3 and NgLj-2, respectively.

The actual shipwrecks of the HMS Erebus and the HMS Terror were not found until 2014 and 2016, respectively. Thanks to the cold water temperature, lack of natural light, and the layers of silt covering many of the artifacts, the ships and their contents were in remarkably good condition. Even some of the windowpanes were still intact. The first underwater images and footage showing the ships' exteriors and interiors were released in 2019.

It's in the DNA

2D forensic facial reconstruction of David Young, Boy 1st Class from the HMS Erebus, who died at Erebus Bay. Credit: Diana Trepkov

For several years, scientists have been conducting DNA research to identify the remains found at these sites by comparing DNA profiles of the remains with samples taken from descendants of the expedition members. Some 46 archaeological samples (bone, tooth, or hair) from Franklin expedition-related sites on King William Island have been genetically profiled and compared to cheek swab samples from 25 descendant donors. Most did not match, but in 2021, they identified one of those bodies as chief engineer John Gregory, who worked on the Erebus.

By 2024, the team had added four more descendant donors—one related to Fitzjames (technically a second cousin five times removed through the captain’s great-grandfather). That same year, DNA analysis revealed that a tooth recovered from a mandible at one of the relevant archaeological sites was that of Captain James Fitzjames of the HMS Erebus. His remains showed clear signs of cannibalism, confirming early Inuit reports of desperate crew members resorting to eating their dead.

We can now add three more crew members identified through their DNA. As before, to make the identifications, the team extracted DNA from archaeological samples and compared it with mitochondrial and Y-chromosome DNA from descendants. These included a molar and humerus shaft from NgLj-3; two molars, a premolar, and a temporal cranium bone from NgLj-2; and a sample taken from a left humerus found in 2018 at NgLj-1. The researchers were able to identify three individuals: William Orren, able seaman; David Young, boy 1st class; and John Bridgens, subordinate officers’ steward. All served on the HMS Erebus, and they all died at Erebus Bay.

Meanwhile, the Polar Record paper focused on identifying an unburied skeleton found in 1859 on the south shore of King William Island. The skeleton was found with a seaman's certificate and other papers in a leather pocketbook belonging to Petty Officer Harry Peglar of the HMS Terror. However, the clothing found scattered around the remains was not of the sort usually worn by seamen or officers. The items included a double-breasted waistcoat and a black silk neckerchief tied in a bowknot, more indicative of what would be worn by a steward or officer's servant, as well as a clothes brush.

For a long time, the consensus was that the remains were most likely those of a steward. There were four on each of the two ships in the Franklin expedition, with the best candidates being Thomas Armitage, gunroom steward, or William Gibson, subordinate officers' steward, both of whom served on the HMS Terror. The authors estimated the skeleton's height via osteological analysis and compared DNA samples taken from the skeleton to those of descendants of six of the eight stewards and Harry Peglar. The DNA revealed that the skeleton was, in fact, Peglar.

DOI: Journal of Archaeological Science, 2026. 10.1016/j.jasrep.2026.105739

DOI: Polar Record, 2026. 10.1017/S003224742610031X


Canvas Down!


ShinyHunters, a black-hat hacking group, has brought down Canvas across a significant portion of America’s higher education system:

Students were unable to access Canvas on Thursday afternoon after cybercrime group ShinyHunters shut down Penn’s access to the interface. 

The May 7 data breach comes after ShinyHunters — notorious in the hacking community for large-scale data breaches — claimed responsibility for breaching Instructure, the company that manages Canvas, last week. In the message posted on Penn’s Canvas page, the hackers wrote that any university that does not wish to have its data released should contact the group before May 12.

A request for comment was left with a University spokesperson. 

“ShinyHunters has breached Instructure (again),” the warning read. “Instead of contacting us to resolve it they ignored us and did some ‘security patches.’”

I am being told by professionals that ShinyHunters is angling for direct payouts from individual institutions. Good thinking to pull this in the midst of Finals Week. OMG we’re basically living in season two of The Pitt! Pay up, admins! Or don’t. I don’t really care, honestly. Also, “scheduled maintenance” is a nice cover story…

… actual footage from the Office of the President…

The post Canvas Down! appeared first on Lawyers, Guns & Money.


RIP social media. What comes next is messy.


Last fall, we featured an extensive interview with Petter Törnberg of the University of Amsterdam, who studies the underlying mechanisms of social media that give rise to its worst aspects: the partisan echo chambers, the concentration of influence among a small group of elite users (attention inequality), and the amplification of the most extreme divisive voices. He wasn't optimistic about social media's future.

Törnberg's research showed that, while numerous platform-level intervention strategies have been proposed to combat these issues, none are likely to be effective. And it’s not the fault of much-hated algorithms, non-chronological feeds, or our human proclivity for seeking out negativity. Rather, the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media. So we’re probably doomed to endless toxic feedback loops unless someone hits upon a brilliant fundamental redesign that manages to change those dynamics.

Törnberg has been very busy since then, producing two new papers and one new preprint building on this realization that social media is structured quite differently from the physical world, with unexpected downstream consequences. The first new paper, published in PLoS ONE, focused specifically on the echo chamber effect, using the same approach as before: combining standard agent-based modeling with large language models (LLMs)—essentially creating little AI personas to simulate online social media behavior.

Those simulated users were randomly assigned one of two opposing opinions and then interacted with randomly selected members of a simulated online community. If the proportion of community members who disagreed with an agent exceeded a given threshold, that agent was programmed to leave and join a different online community.
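
To make those mechanics concrete, here is a minimal sketch of that kind of agent-based model in Python, stripped of the LLM personas. It is not Törnberg's code, and everything in it (the two-valued opinions, the sample size, the disagreement threshold, the guaranteed share of like-minded peers) is an illustrative assumption rather than a parameter from the paper:

import random

def run_sim(n_agents=400, n_communities=8, threshold=0.4,
            sample_size=10, filter_bubble_share=0.0, steps=20_000, seed=1):
    """Toy echo chamber model: agents hold opinion +1 or -1 and leave a
    community when too large a share of sampled peers disagrees with them.
    Illustrative only; not the model from the PLoS ONE paper."""
    rng = random.Random(seed)
    opinion = [rng.choice((-1, 1)) for _ in range(n_agents)]
    community = [rng.randrange(n_communities) for _ in range(n_agents)]

    for _ in range(steps):
        a = rng.randrange(n_agents)
        peers = [p for p in range(n_agents) if p != a and community[p] == community[a]]
        if not peers:
            continue

        # A "filter bubble" guarantees some like-minded peers appear in the sample.
        like_minded = [p for p in peers if opinion[p] == opinion[a]]
        n_own = min(int(filter_bubble_share * sample_size), len(like_minded))
        sample = rng.sample(like_minded, n_own)
        sample += rng.choices(peers, k=sample_size - n_own)

        disagreeing = sum(opinion[p] != opinion[a] for p in sample) / len(sample)
        if disagreeing > threshold:
            community[a] = rng.randrange(n_communities)  # leave for another community

    # Report how one-sided each community ended up.
    for c in range(n_communities):
        members = [opinion[i] for i in range(n_agents) if community[i] == c]
        if members:
            share = members.count(1) / len(members)
            print(f"community {c}: {len(members):3d} members, {share:.0%} hold opinion +1")

if __name__ == "__main__":
    run_sim()                          # no filter bubble: communities tend to tip
    run_sim(filter_bubble_share=0.1)   # guarantee some like-minded peers in each sample

Run without a filter bubble, communities in this toy version tend to tip toward one opinion or the other; guaranteeing a small share of like-minded peers in each sample, the intervention discussed below, tends to keep them mixed.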

Filter bubbles: Not a culprit, but a cure

Consistent with last year's results, echo chambers emerge naturally from the basic architecture of social media platforms. "One surprising finding is the fact that we get echo chambers even without any filter bubbles, even if people really love being in diverse spaces," said Törnberg. "You don't need an algorithmic nudge. You can still get these highly segregated spaces. The other surprising finding is that filter bubbles, which have been blamed for homogeneity, can be a cure."

It doesn't take much to destabilize or stabilize the system, Törnberg found. Even if the threshold for disagreement was quite low, disagreements were amplified to the point that each random interaction was increasingly likely to exceed the threshold. More and more users were pushed to relocate until what was once a community with a solid diversity of opinion rapidly became polarized and/or overly homogenous.

Conversely, if just 10 percent of users in a given social media community largely agree with your stances, you will be more tolerant toward diverse opinions that contradict your own. "There's a certain chance that some users will end up in communities where it's very homogenous and 99 percent of users are disagreeing with them," said Törnberg. "That will cause them to leave, and you get this feedback effect just because of the structure of interaction. But if you have a filter bubble effect, where everyone is shown 10 percent of their own type, that creates a possibility for you to find the people who you agree with within the community. And that stabilizes the entire dynamics so it doesn't tip over to one side or the other and become extreme or overly homogenous."

Törnberg found some confirmation of those dynamics when he analyzed an actual online echo chamber: the subreddit r/MensRights. He found that members of the subreddit were more likely to leave if their posts diverged too far, linguistically, from the community's center of gravity.

"Who are the users leaving the community?" said Törnberg. "The users that are more ideologically distant are more likely to leave. So it captures the same mechanism of feedback dynamics, where the community becomes more homogenous and more extreme because users leave—[and they leave] because they feel it's becoming too homogenous and extreme. Eventually it tips over to one direction. And of course, as the community becomes more extreme, there's this boiling the frog effect where the users who stay are influenced by the community and become more extreme."

In principle, it could be possible to exploit these feedback effects to preserve viewpoint diversity—but there are caveats. "Ultimately, it's about changing the fundamental rules of what people are seeing and being mindful of the feedback effects that always play out in any complex system," said Törnberg. "That being said, do I want to tell [Mark] Zuckerberg to implement more filter bubbles on Facebook? I think I'd want a little bit more evidence before going that far. But it does highlight that we need to have a little more humility when it comes to our design of these systems and what the downstream consequences are. We tend to maybe think one step ahead, but miss the fact that these are highly complex systems, full of feedback effects that often do the exact opposite of what you intend."

The "botification" of social media

For his second new paper, published in the Journal of Quantitative Description: Digital Media (JQD:DM), Törnberg relied on nationally representative data from the 2020 and 2024 American National Election Studies surveys, covering US citizens from all 50 states and Washington, DC. The objective was to learn more about shifting trends in how people were using (or not using) social media across all platforms, demographics, and political affiliations.

Törnberg found that visits and posting activity on Facebook, YouTube, and Twitter/X—what one might consider legacy social media platforms—showed marked declines. However, "My sense is that the number of posts on Twitter and Facebook has probably not really declined despite the fact that the number of people posting—humans who are alive and have a pulse—has dropped by 50 percent, because of the rise of AI and LLMs and the botification of those platforms," said Törnberg.

Most social media platforms slightly shifted politically to the right, although they remained Democratic-leaning on balance—except for Twitter/X. In that case, "The engagement behavior was a 72 percentage point shift to the right, which is just insane," said Törnberg. "It used to be that the more you posted on Twitter, there was a slight correlation with how much you liked the Democrats and how much you disliked Republicans—how effectively polarized you were to the left. Now it's very strongly and very clearly correlated with hating Democrats and liking Republicans. So the graph appropriately becomes an X, which I guess is exactly what [Elon Musk] paid for."

Meanwhile, on Facebook, posting behavior correlates with partisanship on both sides of the divide and has more to do with how active the most partisan users are: casual users disengage, the louder voices dominate, and the platform becomes narrower and more ideologically extreme. "The more you're effectively polarized, the more you post on Facebook," said Törnberg. "That's the social media prism or the fun house mirror of social media in action, because the most extreme voices are the voices that tend to post, and also they tend to become more visible because of the engagement algorithms."

Reddit and TikTok were outliers, showing modest growth instead of decline. Törnberg thinks TikTok's growth, in particular, indicates another interesting shift. "I think that there is a general transition from the text-based, interaction-based social media to this more fully algorithmic video, short video form," he said. "So is it even a social media anymore? We tend to put TikTok and Instagram in the same basket as Twitter/X. I don't think that really makes sense because we're seeing a shift away from one form of social media to a new form of media platform that is fundamentally different."

Is it even "social media" anymore?

That shift is the focus of a new preprint that Törnberg co-authored with University of Amsterdam colleague Richard Rogers. "When we talk about social media, there are certain assumptions about what it is," said Törnberg. "It's user-generated, and there's a platform that organizes interaction, but the platform cannot produce content on its own. So instead the platform allows people to connect with each other, and it just provides infrastructure for that. The [terms] social network and social media is almost synonymous. Those describe pre-algorithm Twitter circa 2012 quite well."

Now that more and more users are disengaging and often leaving those platforms entirely, the AI bots are moving in, often at the instigation of the social media platforms themselves. "We don't need the users anymore," said Törnberg of the reasoning behind such decisions. "We don't need them to generate content. We can generate our own content and we can automate the users. So there's a splintering of what used to be social media."

Törnberg identified three new kinds of emerging online media platforms, starting with private or semi-private group chats like WhatsApp. "The social part has just moved into these private group chat features," he said. Then there are other protected communities like Substack, often organized around a certain influential leader, "where there are more boundaries to joining in such a way that bots doesn't make sense. The dynamic and logic of those places are very different from social media and much more driven by parasocial relationships."

The second category is what Törnberg calls algorithmic broadcasting media, like TikTok, Instagram, and even Facebook, to a certain degree, thanks to the Reels aspect. The third is users interacting with AI chatbots. "If you look at the data, it seems like about twice as many people are talking to a chatbot versus posting on social media," said Törnberg. "It's coming to replace a little bit of that function of sociality that social media provided."

While setting up smaller private spaces online might seem like a way to reproduce the local coffeehouse/public square dynamic that we all ideally wanted social media to be, Törnberg says it is not. "The local coffee shop model is geographically local," he said. "It becomes diverse because it is constrained by geographical distance. It forces a coming together of diverse groups because there's one coffeehouse. A WhatsApp group is a non-local space. It's precisely the example of a system that can tip over one side or another to become an echo chamber. Just because Meta doesn't have the platform control doesn't mean it's going to not turn horrible."

"Abandoning or fleeing responsibilities is not going to be the solution to the fact that digital technology is reshaping our society," Törnberg added. "It needs functional scaffolding and democratic systems for doing it responsibly and actually pursuing positive democratic prosocial values, which is not something that is seemingly on offer at the moment."

Törnberg does think it's possible to reorganize social media spaces in positive ways so that most users can find that 10 percent of other users who agree with them, thus making them more open to divergent views. And it helps that most users really do prefer more pleasant online communities, not platforms rife with toxic waste. "But then how do we shape the rules to produce those outcomes?" he said. "It's a much harder question. How do we create spaces that are both engaging and fun to use, but that don't go down to that dark place because of all of these feedback effects?"

Bluesky's highly effective blocking tools, and even Twitter/X's community notes feature, which often bridges cross-partisan divides, provide useful examples of possible solutions, if judiciously applied. "We can think of and construct similar systems," said Törnberg. "We just need to find ways of pushing those effects to a more positive place by finding the pivot points. This is what I'm studying right now. I just don't have an answer yet."

PLoS ONE, 2026. DOI: 10.1371/journal.pone.0347207

JQD:DM, 2026. DOI: 10.51685/jqd.2026.005


Mozilla says 271 vulnerabilities found by Mythos have "almost no false positives"


The disbelief was palpable when Mozilla’s CTO last month declared that AI-assisted vulnerability detection meant “zero-days are numbered” and “defenders finally have a chance to win, decisively.” After all, it looked like part of an all-too-familiar pattern: Cherry-pick a handful of impressive AI-achieved results, leave out any of the fine print that might paint a more nuanced picture, and let the hype train roll on.

Mindful of the skepticism, Mozilla on Thursday provided a behind-the-scenes look into its use of Anthropic Mythos—an AI model for identifying software vulnerabilities—to ferret out 271 Firefox security flaws over two months. In a post, Mozilla engineers said the breakthrough they achieved, finally ready for prime time, was primarily the result of two things: (1) improvements in the models themselves and (2) Mozilla’s development of a custom “harness” that supported Mythos as it analyzed Firefox source code.

"Almost no false positives"

The engineers said their earlier brushes with AI-assisted vulnerability detection were fraught with “unwanted slop.” Typically, someone would prompt a model to analyze a block of code. The model would then produce plausible-reading bug reports, often at unprecedented scale. Invariably, however, when human developers investigated further, they’d find that a large percentage of the details had been hallucinated. The humans would then need to invest significant work handling the vulnerability reports the old-fashioned way.

Mozilla’s work with Mythos was different, Mozilla Distinguished Engineer Brian Grinstead said in an interview. The biggest differentiating factor was the use of an agent harness, a piece of code that wraps around an LLM to guide it through a series of specific tasks. For such a harness to be useful, it requires significant resources to customize it to the project-specific semantics, tooling, and processes it will be used for.

Grinstead described the harness his team built as “the code that drives the LLM in order to accomplish a goal. It gives the model instructions (e.g., ‘find a bug in this file’), provides it tools (e.g., allowing it to read/write files and evaluate test cases), then runs it in a loop until completion." The harness gave Mythos access to the same tools and pipeline that human Mozilla developers use, including the special Firefox build they use for testing.

He elaborated:

With these harnesses, so long as you can define a deterministic and clear success signal or task verification signal, you can just keep telling it to keep working. In our case when we’re looking for memory safety issues we have our sanitizer build of Firefox and if you make it crash you win. We point that agent off to a source file and say: “we know there’s an issue in this file, please go find it.” It will craft test cases. We have our existing fuzzing systems and tools to be able to run those tests. It will say: “I think there’s an issue here if I craft the HTML exactly so.” It sends it off to a tool, the tool says yes or no. If the tool says yes then there’s some additional verification.

The additional verification comes in the form of a second LLM that grades the output from the first LLM. A high score gives developers the same confidence they have when viewing reports generated through more traditional discovery methods.

“In terms of the bugs coming out on the other side, there are almost no false positives,” he said.
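
For readers who want a feel for what such a harness actually does, here is a minimal sketch of the loop in Python. It is not Mozilla's harness, and none of these names are real APIs: the bug-finding model, the sanitizer build, and the grading model are stand-ins passed in as plain callables that the caller would have to supply.

def hunt_for_bug(source_file, ask_model, run_test_case, grade_report,
                 max_iterations=25, grade_threshold=0.9):
    """Sketch of an agent-harness loop for LLM-assisted bug hunting.

    ask_model(messages) -> str          : hypothetical call to the bug-finding LLM
    run_test_case(html) -> (bool, str)  : run HTML against a sanitizer build; returns (crashed?, log)
    grade_report(report) -> float       : hypothetical second-LLM grader, 0.0 to 1.0
    """
    messages = [
        {"role": "system", "content": "You are hunting memory-safety bugs in a browser."},
        {"role": "user", "content": f"We believe {source_file} contains a bug. "
                                    "Propose an HTML test case that crashes the sanitizer build."},
    ]
    for _ in range(max_iterations):
        reply = ask_model(messages)
        messages.append({"role": "assistant", "content": reply})
        test_case = extract_html(reply)
        if test_case is None:
            messages.append({"role": "user", "content": "Reply with a concrete HTML test case."})
            continue

        crashed, log = run_test_case(test_case)   # deterministic success signal
        if not crashed:
            messages.append({"role": "user",
                             "content": f"No crash. Sanitizer output:\n{log}\nRefine the test case."})
            continue

        # Additional verification: a second model grades the finding before humans see it.
        report = {"file": source_file, "test_case": test_case, "log": log}
        if grade_report(report) >= grade_threshold:
            return report                          # high-confidence bug for human triage
        messages.append({"role": "user",
                         "content": "The crash did not verify cleanly; try a different angle."})
    return None                                    # iteration budget exhausted


def extract_html(reply):
    """Naive helper: treat anything between <html> tags as the proposed test case."""
    start, end = reply.find("<html"), reply.find("</html>")
    return reply[start:end + len("</html>")] if start != -1 and end != -1 else None

The specifics are invented, but the shape matches what Grinstead describes: the model proposes a test case, a deterministic tool says yes or no, and nothing reaches a human until a separate check agrees.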

Thursday’s behind-the-scenes view includes the unhiding of full Bugzilla reports for 12 of the 271 vulnerabilities Mozilla discovered using Mythos and, to a lesser extent, Claude Opus 4.6. The test cases—meaning the HTML or other code that triggers an unsafe memory condition—are provided in each one and meet the same criteria Mozilla requires for all bugs to be considered security vulnerabilities in Firefox. At least one researcher said Thursday that a cursory look at the reports showed they were "pretty impressive."

Unlike previous vulnerability disclosure slop, Grinstead said, the details produced by the harness-guided Mythos analysis, confirmed by the second LLM, and ultimately included in the reports give his team a level of confidence it didn't have before.

“That’s the key thing that has unlocked our ability to operate at the scale we’ve been operating at now,” he said. “It gives the engineer a crank they can pull that says: ‘Yep, this has the problem,’ and then you can iterate on the code and know clearly when you’ve fixed it and eventually land the test case in the tree such that you don’t regress it.”

As noted earlier, Mozilla’s characterization of AI-assisted vulnerability discovery as a game changer has been met with massive, vocal skepticism in many quarters. Critics initially scoffed when Mozilla didn’t obtain CVE designations for any of the 271 vulnerabilities. Like many developers, however, Mozilla doesn’t obtain CVE listings for internally discovered security bugs. Instead, they are bundled into a single patch. Normally, Bugzilla reports detailing these "rollups" are hidden for several months after being fixed to protect those who are slow to patch. Now that Mozilla has revealed a dozen of them, the same critics will surely claim they too were cherry-picked and conceal less accurate results.

Of the 271 bugs found using Mythos, 180 were sec-high, Mozilla's highest designation for internally reported vulnerabilities. These types of vulnerabilities can be exploited through normal user behavior, such as browsing to a web page. (The only higher rating, sec-critical, is reserved for zero-days.) Another 80 were sec-moderate, and 11 were sec-low.

The critics are right to keep pushing back. Hype is a key method for inflating the already puffed-up valuations of AI companies. Given the extensive praise Mozilla has given to Mythos, it’s easy for even trusting observers to wonder: What is Mozilla getting in return? Far from settling the debate, Thursday’s elaborations are likely to only further stoke the controversy.

To hear Grinstead tell it, however, the details are clear evidence of the usefulness of AI-assisted discovery, and Mozilla's motivation is simple.

“People are a bit burned from the last year of these slop commits so we felt it was important to show some of our work, open up some of the bugs, and talk about it in a little more detail as a way to hopefully spur some action or continue the conversation,” he said. “There’s no sort of marketing angle here. Our team has completely bought in on this approach. We are trying to get a message out about this technique in general and not any specific model provider, company, or anything like that.”


Ted Turner RIP


I assume Loomis will have more to say about the passing of Ted Turner…

Ted Turner, the media mogul who cut a brash and vivid figure on the American scene of the late 20th century by dominating the cable television industry, creating the 24-hour news cycle with CNN, and extending his restless reach into professional sports, environmentalism and philanthropy, died on Wednesday at his home near Tallahassee, Fla. He was 87.

Phillip Evans, a spokesman for the family, confirmed the death. Mr. Turner announced in 2018 that he had Lewy body dementia, a progressive brain disorder.

Mr. Turner’s signature creation was CNN — the Cable News Network — which revolutionized television news in 1980 by presenting it all hours of the day and eventually inspiring other media operations to follow suit. But his portfolio of business ventures bulged with much more, and their impact on American culture was considerable.

The post Ted Turner RIP appeared first on Lawyers, Guns & Money.


Ars Asks: Share your shell and show us your tricked-out terminals!


I spend more time today than ever before interacting with terminal windows, which is something I don't think Past Me would have believed in the early '90s. Back then, poor MS-DOS was the staid whipping boy of the industry, and at least on the consumer side, graphical environments like Windows (and maybe even odder creatures like AmigaOS) seemed poised to stamp the command line into oblivion, leaving text interfaces behind as we all blasted into the ooey-GUI future.

As it turns out, though, the command line is still the best tool for some jobs—many jobs, in fact. I read a wise post some years ago (probably on Slashdot) arguing that a mouse-driven point-and-click interface essentially reduces the user to pointing at something on the screen and grunting, "DO! DO THAT!" at the computer. (The rise of right-click context menus adds the ability for the user to also grunt "MORE THINGS!" but doesn't otherwise add vocabulary.)

The command line, by contrast, gives the user the opportunity to precisely tell the computer what they want done, using words instead of one or two gestalts that the computer must interpret based on context.

Screenshot showing a multi-line curl command
It's not that you can't do this kind of thing with a GUI—but it does require changing one's approach a bit.

It sounds kind of silly to say it, but the command line is what finally dragged me off Windows as my daily driver back in 2007. At the time, I'd been forced into regular bash usage at work as I took over the day-to-day administration of Boeing Houston's fleet of then-brand-new EMC Celerra NSX enterprise NAS appliances, and while there were GUI management options available (I am perhaps triggering trauma in a small subset of older readers by saying the words "EMC Control Center"), the environment I'd inherited was firmly held together by bash scripts.

At first, I had turned up my nose at the Linux-ness of it all, but kind of like the fungus in The Last of Us, the shell's tendrils slowly infected my brain. I began to realize that sad old cmd.exe and MS-DOS batch files really were kind of terrible, and that maybe, just maybe, the Linux-y ravings of my angry graybeard sysadmin mentor were not as crazy as they seemed.

I didn't think I'd ever arrive at his method of only running manually compiled Slackware—and, indeed, 20 years on, I'm still not even close—but the guy had a point. The more I used a Unix-y shell at work, the more I began to miss it at home. Windows Vista and its early WDDM woes had reduced my previously badass main PC with two Nvidia 7900GT cards in SLI to a stuttering BSOD-spitting mess, and the future of Microsoft OSes looked bleak—Windows 7 wouldn't be along to change the situation for years.

Exposure therapy to the bash shell brought me to the tipping point, and I jumped ship to the Macintosh side of the house. It was a move calculated to give me the best of all possible worlds—a good graphical interface with the same bash shell under the hood that I'd come to depend on at work.

Photograph of Lee's desk showing a PC.
The before... Credit: Lee Hutchinson
Photograph of Lee's desk showing an iMac
...and the after. Credit: Lee Hutchinson

I haven't looked back. These days, I run three different operating systems at home. MacOS is still my daily driver on the desktop; Windows lives on the gaming PC in the corner; and Linux (in the form of Ubuntu server LTS) is headless in the closet, where it belongs. God is in his heaven, and all is right with my computing world—and still, as with every day since sometime in early 2007, I spend at least an hour or two with a terminal window doing things the old-fashioned, text-y way.

The fish shell long ago became my default on my Mac, in no small part because I like fish's colors and find them helpful (don't judge me!). When I'm logged into Linux, though, I stick with good old bash. I know zsh and other modern alternatives have their fans, but I've found my happy place, and I'm content to stay there.

Being a child of the BBS era, when ANSI graphics were the hotness, I have spent about as much time as any other terminal-enjoying admin customizing my environment and making it into a place where I feel comfy working.

... Oh God, I'm doing it. I'm doing the thing they do on recipe sites where all the reader really wants is directions for making pecan pie but instead gets a giant personal backstory. Forgive me. I'm old. Let's get to the pie, and by pie, I mean the screenshots and code!

My favorite thing: The terminal timer

It's incredibly handy, at least for me, to have an easy-to-see reference of how long the last command took to run. (You don't need that kind of thing until you need it, and then you often really need it.) To that end, I have some functions living in my .bashrc file that time each command and then append that time—and the last error code emitted—to the next bash prompt. In practice, it looks like this:

Screenshot of a prompt showing program timings
It's neat seeing how long each of these things took to execute. And sometimes it's even useful! Credit: Lee Hutchinson

I dig this. Coupled with printing the current time as part of the prompt, it gives you a good idea of not just how long the last few commands took to run but also when you were running them. That's very handy for absentminded admins (/me raises hand) who leave terminal sessions up for days at a time with important work sitting in them.

Here's the code to make this happen, which you should feel free to adapt to your needs. As noted, I keep this in .bashrc as part of my PS1 prompt statement:

color_prompt=yes

if [ "$color_prompt" = yes ]; then

# Record the current time as integer microseconds since the epoch
# (relies on bash 5's EPOCHREALTIME variable)
function timer_now_us {
    local seconds=${EPOCHREALTIME%.*}
    local micros=${EPOCHREALTIME#*.}
    micros="${micros}000000"
    REPLY="${seconds}${micros:0:6}"
}

# Work out how long the last command took, using the start time saved by PS0,
# and format it into the human-readable timer_show string
function timer_stop {
    if [[ ${timer_command_active:-0} -ne 1 ]] || [[ -z ${timer_started_at_us:-} ]]; then
        timer_show=0us
        return
    fi

    timer_now_us
    local delta_us=$((REPLY - timer_started_at_us))
    local us=$((delta_us % 1000))
    local ms=$(((delta_us / 1000) % 1000))
    local s=$(((delta_us / 1000000) % 60))
    local m=$(((delta_us / 60000000) % 60))
    local h=$((delta_us / 3600000000))
    # always show 3 digits of accuracy
    if ((h > 0)); then timer_show=${h}h${m}m
    elif ((m > 0)); then timer_show=${m}m${s}s
    elif ((s >= 10)); then timer_show=${s}.$((ms / 100))s
    elif ((s > 0)); then timer_show=${s}.$(printf %03d $ms)s
    elif ((ms >= 100)); then timer_show=${ms}ms
    elif ((ms > 0)); then timer_show=${ms}.$((us / 100))ms
    else timer_show=${us}us
    fi

    unset timer_started_at_us
    timer_command_active=0
}

#Prompt and prompt colors
function set_prompt {
  local Last_Command=${1:-$?}
  FancyX='\342\234\227'
  Checkmark='\342\234\223'
  export PS1="\n$WHITE[\t] "
  if [[ $Last_Command == 0 ]]; then
  	PS1+="\$? $GREEN$Checkmark "
  else
  	PS1+="\$? $RED$FancyX "
  fi
  timer_stop
  PS1+="$WHITE($timer_show)"
  PS1+="\n\[$HOSTCOLOR\]\u@\h\[\033[00m\]:\[\033[1;38;5;027m\]\w\[\033[00m\] \\$ "
}

function timer_prompt_command {
  local last_command=${1:-$?}
  set_prompt "$last_command"
}

# PS0 runs just before each command executes: record the start time.
# The ${ ...; } form runs in the current shell (no subshell), so the variables it
# sets survive; older bash versions would need a different approach (e.g., a DEBUG trap).
PS0='${ timer_now_us; timer_started_at_us=$REPLY; timer_command_active=1; }'
# PROMPT_COMMAND runs just before each prompt is drawn: stop the timer and rebuild PS1.
PROMPT_COMMAND='timer_prompt_command'

fi

This mess of functions will jam all that goodness into your prompt, complete with a fancy green "check" if the program exited with status 0 or a red "X" and the exit code if it exited with something else. The color definitions—$WHITE, $BLUEBOLD, $HOSTCOLOR, and others—are just plain ol' ANSI escape sequences defined elsewhere in .bashrc and not presented here to try to keep the code excerpts from being too long. You can and should replace them with whatever tickles your fancy.

The timer_stop function also has the job of converting the timer into a human-readable format, and it's probably messier than it needs to be. I'm no developer, though, so this is what Past Lee settled on after a few hours of searching through examples.

Doing it in fish for folks like me

That's for bash when I'm ssh'd into one of my Linux hosts, but I run fish on MacOS. I have a separate fish function for getting the same results there, complete with gross hacks for turning the measurement into human-readable form. I made this code, and I am unapologetic. Witness my cobbled-together StackOverflow-sourced kludge.

function fish_prompt --description 'Write out the prompt'
    # Save the last status
    set -l last_status $status

    # Calculate the command duration if available
    set -l cmd_duration ""
    if set -q CMD_DURATION
        # Convert milliseconds to microseconds for more precise comparison
        set -l duration_us (math "$CMD_DURATION * 1000")

        # Calculate different time units
        set -l us (math "$duration_us % 1000")
        set -l ms (math "floor($duration_us / 1000) % 1000")
        set -l s (math "floor($duration_us / 1000000) % 60")
        set -l m (math "floor($duration_us / 60000000) % 60")
        set -l h (math "floor($duration_us / 3600000000)")

        # Format duration string
        if test $h -gt 0
            set cmd_duration (string join '' "(" $h "h" $m "m)")
        else if test $m -gt 0
            set cmd_duration (string join '' "(" $m "m" $s "s)")
        else if test $s -ge 10
            set -l fraction (math "floor($ms / 100)")
            set cmd_duration (string join '' "(" $s "." $fraction "s)")
        else if test $s -gt 0
            set cmd_duration (string join '' "(" $s "." (printf "%03d" $ms) "s)")
        else if test $ms -ge 100
            set cmd_duration (string join '' "(" $ms "ms)")
        else if test $ms -gt 0
            set -l fraction (math "floor($us / 100)")
            set cmd_duration (string join '' "(" $ms "." $fraction "ms)")
        else
            set cmd_duration (string join '' "(" $us "us)")
        end
    end

    # Define unicode symbols for status
    set -l checkmark "✓"
    set -l cross "✗"

    # Colors
    set -l normal (set_color normal)
    set -l dark_gray (set_color 555555)
    set -l blue (set_color -o blue)
    set -l red (set_color red)
    set -l green (set_color green)
    set -l purple (set_color -o purple)

    # First line
    echo # New line
    echo -n -s $dark_gray "["(date +%T)"] $last_status " # Time in brackets and exit status

    # Status indicator with exit status
    if test $last_status -eq 0
        echo -n -s $green $checkmark
    else
        echo -n -s $red $cross
    end

    # Actually echo the duration
    echo -n -s $dark_gray " $cmd_duration"

    # Do the rest of the prompt
    echo
    set -l host_color $purple
    echo -n -s $host_color $USER "@" (prompt_hostname) $normal ":" $blue (prompt_pwd) $normal " \$ "
end

A splash of color

Spending my formative years immersed in ANSI BBS graphics has probably made me a little more fond of colorful text in my terminal than the average frumpy, button-downed admin. Look, I know some folks feel that syntax highlighting and colors in general kill comprehension and encourage skimming, but what can I say? I love them and rely on them. Perhaps I skim too much, but so be it. You can take my colorful shell tools from my cold, dead hands.

To that end, I lean on a little program called GRC (for Generic Colorizer) to add highlighting and coloration to other tools. It's broadly available and works without any additional configuration.

Image showing the before and after of using GRC with ping
Nothing wrong with a little color! Credit: Lee Hutchinson
Image showing the before and after of using GRC with ip a

There's a bit of aliasing (which I keep in .bash_aliases like a good citizen) to make colorful output the defaults on some common commands:

    alias ls='ls --color=auto'
    alias ll='ls -AlFh --group-directories-first'
    alias df='grc df -h'
    alias du='grc du -h'
    alias free='grc free -h'
    alias ping='grc ping'
    alias traceroute='grc traceroute'
    alias ip='grc ip'

I'm also a big fan of making my numbers human-readable, and the -h switch is therefore applied liberally.

(Do note that wrapping commands like ip in GRC can sometimes do weird things if you're piping its output into something else. Use caution. Or don't! It's your computer, knock yourself out!)

The terminal itself

Sharp-eyed readers will note from the screenshots that I'm using MacOS's Terminal.app for my terminal program, despite there being far better options. I suppose the excuse I have is that I'm comfy with Terminal.app and nothing has pulled me off of it. I've test-driven the usual suspects—Ghostty, Alacritty, the mighty iTerm2 with its awesome tmux windowing integration, and even fancy new reinterpretations of the terminal experience like Warp.

But I just can't find a reason to switch that sticks with me. Changing terminal applications inevitably means things look different—ANSI colors are reinterpreted or mapped oddly, highlighting uses different tones, or a blue I'm particularly fond of is suddenly a different blue, and I have to spend 20 minutes fiddling with ANSI escape sequences to try to make it match again. Life's too short for that.

Screenshot showing four different terminal applications tiled
This is iTerm at the upper left, Ghostty at the lower left, Warp at the upper right, and MacOS's Terminal.app at the lower right, showing different default color interpretations and layout conventions. Warp is... opinionated. Credit: Lee Hutchinson

If I were a smarter and more advanced terminal user—the kind who employs Vim mode to expertly pluck things from my command history via laser-focused artisanal regexes instead of piping history through grep like a caveman while frantically trying to figure out what the hell Past Lee was thinking—then maybe something like Ghostty would be a natural fit. Or iTerm, if I could ever actually make the mental leap, commit to tmux, and ascend to terminal godhood.

I can at least take some solace in knowing that I'm not alone down here in the mud, as Ars High Tech Priest Jason Marlin and I found out while discussing this article:

Screenshot of Lee and Jason commiserating over grepping history instead of using vim mode
Slack is a safe space to confess secrets. Credit: Lee Hutchinson
Second screenshot of Lee and Jason commiserating over grepping history instead of using vim mode
Too many secrets. Credit: Lee Hutchinson

Apotheosis, it seems, will have to wait.

A brief bonus section on Vim

The path that led me to the terminal also led me to my side in the editor wars, and as might be expected, I worship at the Church of vi (or at least at its Protestant-esque offshoot, the Church of Vim). As with Terminal.app, it's a relationship dependent primarily upon inertia rather than anything like love. Vim and I have reached an acceptable détente.

Screenshot of Lee's Vim environment
Good ol' Vim. Or maybe you hate Vim, in which case, I guess it's bad ol' Vim. Credit: Lee Hutchinson

But getting there required a lot of frustrated searching and yelling at old StackOverflow posts. One thing that makes Vim comfortable for me is the combination of Vim-Airline and Promptline, which together provide a nice status bar that helps highlight some useful info as one is editing a file.

I also prefer to tweak the color that the Ubuntu-flavored version of Vim uses to denote comments, as the maintainers changed it from a dark blue to a cyan-y light blue some years back, and that annoyed me. Targeting that specific color and making the changes stick actually turned into a legitimate adventure, with Ars alum Jim Salter finally cracking the code on exactly what file to edit (hint, it was not .vimrc!). The linked article should make for fun reading if you want to spelunk through Vim's coloration guts. (And shout out to Jim!)

Of course, now that I've mentioned .vimrc, we have to see what's in there, too:

    syntax on
    set hlsearch "Highlight search results
    set ignorecase "Ignore case while searching...
    set smartcase " ...unless search includes mixed case
    set gdefault "Substitutions are automatically global
    set colorcolumn=80 "Highlight column 80
    set linebreak "Wrap whole words
    imap <silent> <Down> <C-o>gj
    imap <silent> <Up> <C-o>gk
    nmap <silent> <Down> gj
    nmap <silent> <Up> gk
    filetype plugin indent on
    set tabstop=4
    set softtabstop=4
    set shiftwidth=4
    set expandtab
    highlight MatchParen ctermbg=black guibg=black
    highlight MatchParen cterm=underline gui=underline

    set guifont=Menlo\ for\ Powerline

    " air-line
    let g:airline_powerline_fonts = 1

    if !exists('g:airline_symbols')
        let g:airline_symbols = {}
    endif

    " unicode symbols
    let g:airline_left_sep = '▶'
    let g:airline_right_sep = '◀'
    let g:airline_symbols.linenr = '␊'
    let g:airline_symbols.branch = '⎇'
    let g:airline_symbols.paste = 'ρ'
    let g:airline_symbols.whitespace = 'Ξ'

    " airline symbols
    let g:airline_left_sep = ''
    let g:airline_left_alt_sep = ''
    let g:airline_right_sep = ''
    let g:airline_right_alt_sep = ''
    let g:airline_symbols.branch = ''
    let g:airline_symbols.readonly = ''
    let g:airline_symbols.linenr = ''

Briefly:

  • First, we enable syntax highlighting always.
  • The next three lines modify Vim's search behavior, making all-lowercase searches case-insensitive but keeping mixed-case searches case-sensitive and highlighting all the search results at once.
  • The gdefault line makes substitutions global by default, which works for me because that's almost always what I want when I'm doing substitutions.
  • Next, colorcolumn lights up column 80 so I can be mindful of width where I need to be.
  • The linebreak setting forces Vim to do line wrapping with whole words instead of just starting a new line in the middle of things.
  • The four imap and nmap lines make the arrow keys move the cursor up and down in both normal and insert mode via display lines rather than the actual file lines, which really helps with arrow key navigation with long wrapping lines. (I know, I know, the real fix is to ditch this crutch and get better at Vim, but eh.)
  • The filetype line makes Vim aware of file types and that some file types might have specific plugins in the plugins directory, which they do.
  • The four set lines enforce my preferred tab orthodoxy—four spaces, with tab and backspace both aware of this.
  • The last two highlight lines alter the way Vim highlights matching parentheses by changing the highlight method to an underline instead of a big full-height cursor-emulating block. The default behavior looks so much like the terminal default cursor that it often makes me lose visual track of the actual cursor, whereas the underlining does not.
  • Finally, the last half is all Airline/Powerline stuff. You can ignore those, especially the section with the broken glyphs (they're not broken when you have the right typeface installed!) unless you want to crib my specific Airline special character config.

A very short word on fonts

I'm a huge fan of the Monaspace family of typefaces for use in one's terminal, and I love and use Monaspace Neon—it features in all the terminal screenshots in this piece. I know terminal fonts are about as personal as picking a brand of underwear, but after literal decades of trying various options, Monaspace Neon is the closest I've ever come to finding a typeface that approximates my monospaced platonic ideal. Your mileage will vary, of course, but I like it, I use it, and I feel that stumbling on it has meant the end of a career's worth of searching.

Share your terminal tricks in the comments!

I can't claim to have thought up any of the customization in this article. The pieces have accreted over time, gobbled up from years and years of StackExchange posts and Reddit threads. Whenever I saw a neat thing, I'd copy the code and try it out. And now you can do the same if any of this is useful to you.

But this, finally, brings us to the whole point of the article—what are your cool terminal tricks and hacks? We'd love to see how you rock the command line, from carefully cultivated Neofetch login splashes to fully from-scratch terminal replacements, and all points in between. We'll promote the best stuff below the article for everyone to see.

So share! Share your terminals with us, and let us all rejoice, for we are here in the post-GUI era, and it's nowhere near as scary as I used to think it would be. At least the colors are nice.
