
How Fast Is Amp Really?


AMP has caused quite the stir from a philosophical perspective, but the technology hasn’t received as close a look. A few weeks ago, Ferdy Christant wrote about the unfair advantage given to AMP content through preloading. This got me wondering: how well does AMP really perform? I’ve seen folks, like Ferdy, analyze one or two pages, but I hadn’t seen anything looking at the broader picture…yet.

Evaluating the effectiveness of AMP from a performance standpoint is actually a little less straightforward than it sounds. You have to consider at least four different contexts:

  1. How well does AMP perform in the context of Google search?
  2. How well does the AMP library perform when used as a standalone framework?
  3. How well does AMP perform when the library is served using the AMP cache?
  4. How well does AMP perform compared to the canonical article?

How well does AMP perform in the context of Google search?

As Ferdy pointed out, when you click through to an AMP article from Google Search, it loads instantly—AMP’s little lightning bolt icon seems more than appropriate. But what you don’t see is that Google gets that instantaneous loading by actively preloading AMP documents in the background.

In the case of the search carousel, it’s literally an iframe that gets populated with the entirety of the AMP document. If you do end up clicking on that AMP page, it’s already been downloaded in the background and as a result, it displays right away.

In the context of Google search, then, AMP performs remarkably well. Then again, so would any page that was preloaded in the background before you navigated to it. The only performance benefit AMP has in this context is the headstart that Google gives it.

In other words, evaluating AMP’s performance based on how those pages load in search results tells us nothing about the effectiveness of AMP itself, but rather the effectiveness of preloading content.

How well does the AMP library perform when used as a standalone framework?

In Ferdy’s post, he analyzed a page from Scientas. He discovered that without the preloading, it’s far from instant. On a simulated 3G connection, the Scientas AMP article presents you with a blank white screen for 3.3 seconds.

Now, you might be thinking that’s just one single page. There’s a lot of variability, and it’s possible Scientas is a one-off example. Those are fair concerns, so let’s dig a little deeper.

The first thing I did was browse the news. I don’t recommend this to anyone, but there was no way around it.

Anytime I found an AMP article, I dropped the URL in a spreadsheet. It didn’t matter what the topic was or who the publisher was: if it was AMP, it got included. The only filtering I did was to ensure that I tested no more than two URLs from any one domain.

In the end, after that filtering, I came up with a list of 50 different AMP articles. I ran these through WebPageTest over a simulated 3G connection using a Nexus 5. Each page was built with AMP, and each was loaded from its origin server for this test.

AMP consists of three basic parts:

  • AMP HTML
  • AMP JS
  • AMP Cache

When we talk about the AMP library, we’re talking about AMP JS and AMP HTML combined. AMP HTML is both a subset of HTML (there are restrictions on what you can and can’t use) and an augmentation of it (AMP HTML includes a number of custom AMP components and properties). AMP JS is the library that provides those custom elements and handles a variety of optimizations for AMP-based documents. Since the foundation is HTML, CSS, and JS, you can absolutely build a document using the AMP library without using the Google AMP Cache.

The AMP library is supposed to help ensure a certain level of consistency with regards to performance. It does this job well, for the most part.

The bulk of the pages tested landed within a reasonable range of each other. There was, however, some deviation at both ends of the spectrum: the minimum values were pretty low and the maximum values frighteningly high.

Metric             Min       Max        Median    90th Percentile
Start Render       1,765ms   8,130ms    4,617ms   5,788ms
Visually Complete  4,604ms   35,096ms   7,475ms   21,432ms
Speed Index        3729      16230      6171      10144
Weight             273kb     10,385kb   905kb     1,553kb
Requests           14        308        61        151

Most of the time, AMP’s performance is relatively predictable. However, the numbers also showed that a page being a valid AMP document is no guarantee that the site will be fast or lightweight. As with pages built with any technology, it’s entirely possible to build an AMP document that is slow and heavy.

Any claim that AMP ensures a certain level of performance depends both on how forgiving you are of the extremes, and on what your definition of “performant” is. If you were to try to build your entire site using AMP, you should be aware that while it’s not likely to end up too bloated, it’s also not going to blow anyone’s mind with its speed straight out of the box. It’s still going to require some work.

At least that’s the case when we’re talking about the library itself. Perhaps the AMP cache will provide a bit of a boost.

How well does AMP perform when the library is served using the AMP cache?

The AMP library itself helps, but not to the degree we would think. Let’s see if the Google cache puts it over the top.

The Google AMP Cache is a CDN for delivering AMP documents. It caches AMP documents and—like most CDNs—applies a series of optimizations to the content. The cache also provides a validation system to ensure that the document is a valid AMP document. When you see AMP served, for example, through Google’s search carousel, it’s being served from the Google AMP Cache.

I ran the same 50 pages through WebPageTest again. This time, I loaded each page from the Google AMP CDN. Pat Meenan was kind enough to share a script for WebPageTest that would pre-warm the connections to the Google CDN so that the experience would more closely resemble what you would expect in the real world.

logdata	0
navigate	https://cdn.ampproject.org/c/www.webpagetest.org/amp.html
logdata	1
navigate	%URL%

When served from the AMP Cache, AMP pages get a noticeable boost in performance across all metrics.

Metric             Min       Max        Median    90th Percentile
Start Render       1,427ms   4,828ms    1,933ms   2,291ms
Visually Complete  2,036ms   36,001ms   4,924ms   19,626ms
Speed Index        1966      18677      3277      9004
Weight             177kb     10,749kb   775kb     2,079kb
Requests           13        305        53        218

Overall the benefits of the cache are pretty substantial. On the high end of things, the performance is still pretty miserable (the slightly higher maximums here mostly have to do with differences in the ads pulled in from one test to another). But that middle range where most of the AMP documents live becomes faster across the board.

The improvement is not surprising given the various performance optimizations the CDN automates, including:

  • Caching images and fonts
  • Restricting maximum image sizes
  • Compressing images on the fly, as well as creating additional sizes and adding srcset to serve those sizes
  • Using HTTP/2 and HTTPS
  • Stripping out HTML comments
  • Automating the inclusion of resource hints such as dns-prefetch and preconnect

Once again, it’s worth noting that none of these optimizations requires that you use AMP. Every last one of these can be done by most major CDN providers. You could even automate all of these optimizations yourself by using a build process.

I don’t say that to take away from Google’s cache in any way, just to note that you can, and should, be using these same practices regardless of if you use AMP or not. Nothing here is unique to AMP or even the AMP cache.

How well does AMP perform compared to the canonical article?

So far we’ve seen that the AMP library by itself ensures a moderate level of performance and that the cache takes it to another level with its optimizations.

One of the arguments put forward for AMP is that it makes it easier to have a performant site without the need to be “an expert”. While I’d quibble a bit with labeling many of the results I found “performant”, it does make sense to compare these AMP documents with their canonical equivalents.

For the next round of testing, I found the canonical version of each page and tested it under the same conditions. It turns out that while the AMP documents I tested were a mixed bag, they do outperform their non-AMP equivalents more often than not (hey publishers, call me).

Metric             Min       Max         Median    90th Percentile
Start Render       1,763ms   7,469ms     4,227ms   6,298ms
Visually Complete  4,231ms   108,006ms   20,418ms  54,546ms
Speed Index        3332      45362       8152      21495
Weight             251kb     11,013kb    2,762kb   5,229kb
Requests           24        1743        318       647

Let’s forget the Google cache for a moment and put the AMP library back on even footing with the canonical article page.

Metrics like start render and Speed Index didn’t see much of a benefit from the AMP library. In fact, Start Render times are consistently slower in AMP documents.

That’s not too much of a surprise. As mentioned above, AMP documents use the AMP JS library to handle a lot of the optimizations and resource loading. Anytime you rely on that much JavaScript for the display of your page, render metrics are going to take a hit. It isn’t until the AMP cache comes into play that AMP pulls back ahead for Start Render and Speed Index.

For the other metrics though, AMP is the clear winner over the canonical version.

Improving performance…but for whom?

The verdict on AMP’s effectiveness is a little mixed. On the one hand, on an even playing field, AMP documents don’t necessarily mean a page is performant. There’s no guarantee that an AMP document will not be slow and chew right through your data.

On the other hand, it does appear that AMP documents tend to be faster than their counterparts. AMP’s promise of improved distribution cuts a lot of red tape. Suddenly publishers who have a hard time saying no to third-party scripts on their canonical pages are more willing (or at least compelled) to reduce them dramatically for their AMP counterparts.

AMP’s biggest advantage isn’t the library—you can beat that on your own. It isn’t the AMP cache—you can get many of those optimizations through a good build script, and all of them through a decent CDN provider. That’s not to say there aren’t some really smart things happening in the AMP JS library or the cache—there are. It’s just not what makes the biggest difference from a performance perspective.

AMP’s biggest advantage is the restrictions it draws on how much stuff you can cram into a single page.

For example, here are the waterfalls showing all the requests for the same article page built to AMP requirements (on the right) versus the canonical version (on the left). Apologies to your scroll bar.

The 90th percentile weight for the canonical version is 5,229kb. The 90th percentile weight for AMP documents served from the same origin is 1,553kb, a savings of around 70% in page weight. The 90th percentile request count for the canonical version is 647; for AMP documents it’s 151. That’s a reduction of nearly 77%.

AMP’s restrictions mean less stuff. It’s a concession publishers are willing to make in exchange for the enhanced distribution Google provides, but that they hesitate to make for their canonical versions.

If we’re grading AMP on the goal of making the web faster, the evidence isn’t particularly compelling. Every single one of these publishers has an AMP version of these articles in addition to a non-AMP version.

Every. Single. One.

And more often than not, these non-AMP versions are heavy and slow. If you’re reading news on these sites and you didn’t click through specifically to the AMP version, then AMP hasn’t done a single thing to improve your experience. AMP hasn’t solved the core problem; it has merely hidden it a little bit.

Time will tell if this will change. Perhaps, like the original move from m-dot sites to responsive sites, publishers are still kicking the tires on a slow rollout. But right now, the incentives being placed on AMP content seem to be accomplishing exactly what you would think: they’re incentivizing AMP, not performance.

Two reader comments:

“basically if i click an AMP link on mobile I know I'll be able to read the article. if I click a non-AMP link I don't even know if the page will load before I just give up.”

“AMP is Google's attempt to get some Facebook-style lockin over the news industry. If they cared about performance, they'd weight that higher and stop telling you that performance comes from more render-blocking JavaScript.”

Managing database schema changes without downtime


How we manage schema changes at Discourse while minimizing downtime

At Discourse we have always been huge fans of continuous deployment. Every commit we make heads to our continuous integration test suite. If all the tests pass (UI, unit, integration, smoke) we automatically deploy the latest version of our code to https://meta.discourse.org.

This pattern and practice allows the thousands of self-installers out there to safely upgrade to the tests-passed version whenever they feel like it.

Because we deploy so often we need to take extra care not to have any outages during deployments. One of the most common reasons for outages during application deployment is database schema changes.

The problem with schema changes

Our current deployment mechanism roughly goes as follows:

  • Migrate database to new schema
  • Bundle up application into a single docker image
  • Push to registry
  • Spin down old instance, pull new instance, spin up new instance (and repeat)

If we ever create an incompatible database schema we risk breaking all the old application instances running older versions of our code. In practice, this can lead to tens of minutes of outage! :boom:

In ActiveRecord the situation is particularly dire because in production the database schema is cached, so any schema change that drops or renames a column very quickly risks breaking every query to the affected model, raising invalid-schema exceptions.

Over the years we have introduced various patterns to overcome this problem and enable us to deploy schema changes safely, minimizing outages.

Tracking rich information about migrations

ActiveRecord has a table called schema_migrations where it stores information about migrations that have run.

Unfortunately, the amount of data stored in this table is extremely limited; in fact, it boils down to:

connection.create_table(table_name, id: false) do |t|
  t.string :version, version_options
end

The table has a lonely column storing the “version” of migrations that ran.

  1. It does not store when the migration ran
  2. It does not store how long it took the migration to run
  3. It has nothing about the version of Rails that was running when the migration ran

This lack of information, especially not knowing when stuff ran, makes it hard to build clean systems for dealing with schema changes. Additionally, debugging strange and wonderful issues with migrations is very hard without rich information.

Discourse monkey patches Rails to log rich information about migrations:

module FreedomPatches
  module SchemaMigrationDetails
    def exec_migration(conn, direction)
      rval = nil

      time = Benchmark.measure do
        rval = super
      end

      # (The file is truncated in the original. The patch goes on to INSERT a
      # row into a schema_migration_details table, recording details such as
      # when the migration ran and how long it took.)
      sql = <<~SQL
        INSERT INTO schema_migration_details(
      SQL

      rval
    end
  end
end

Our patch gives us very rich detail about the circumstances of every migration. This really should be in Rails.

Defer dropping columns

Since we “know” when all previous migrations ran due to our rich migration logging, we are able to “defer drop” columns.

What this means is that we can guarantee we perform dangerous schema changes after we know that the new code is in place to handle the schema change.

In practice if we wish to drop a column we do not use migrations for it. Instead our db/seed takes care of defer dropping.

Migration::ColumnDropper.drop(
  table: 'users',
  after_migration: 'DropEmailFromUsers',
  columns: %w[
    email
    email_always
    mailing_list_mode
    email_digests
    email_direct
    email_private_messages
    external_links_in_new_tab
    enable_quoting
    dynamic_favicon
    disable_jump_reply
    edit_history_public
    automatically_unpin_topics
    digest_after_days
    auto_track_topics_after_msecs
    new_topic_duration_minutes
    last_redirected_to_top_at
  ]
)
# (the file is truncated in the original)

These defer drops will happen at least 30 minutes after the particular migration referenced ran (in the next migration cycle), giving us peace of mind that the new application code is in place.
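The timing rule is easy to sketch in plain Ruby. Everything below is a hypothetical sketch of the idea, not Discourse’s actual code (which lives in Migration::ColumnDropper): a drop is only eligible once the migration named in after_migration is recorded as having run at least 30 minutes ago.

```ruby
# Hypothetical sketch of the defer-drop timing rule. `migration_log` stands in
# for the rich schema_migration_details table (migration name => finished_at).
class DeferDrop
  DELAY = 30 * 60 # seconds

  # A drop is eligible only if the referenced migration has run, and ran
  # at least DELAY seconds ago (so new application code is in place).
  def self.eligible?(migration_log, after_migration:, now: Time.now)
    ran_at = migration_log[after_migration]
    return false unless ran_at # referenced migration has not run yet
    now - ran_at >= DELAY
  end
end
```

In practice the check runs in the next migration cycle, which is why the rich “when did this migration run?” logging matters so much.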

If we wish to rename a column we will create a new column, duplicate the value into the new column, mark the old column readonly using a trigger and defer drop old column.
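The rename dance can be sketched as SQL generation. This is a hypothetical sketch of the shape of the pattern, not Discourse’s real helper; the trigger names and statements are made up:

```ruby
# Hypothetical sketch of the rename-a-column pattern as raw Postgres SQL.
# The helper only builds the statements; running them is left to a migration.
# The old column itself is defer-dropped in a later migration cycle.
def rename_column_sql(table, old_name, new_name, type)
  [
    # 1. create the new column alongside the old one
    "ALTER TABLE #{table} ADD COLUMN #{new_name} #{type}",
    # 2. backfill existing rows
    "UPDATE #{table} SET #{new_name} = #{old_name}",
    # 3. make the old column readonly via a trigger that rejects writes
    <<~SQL
      CREATE FUNCTION #{table}_#{old_name}_readonly() RETURNS trigger AS $$
      BEGIN
        RAISE EXCEPTION 'column #{old_name} is readonly';
      END
      $$ LANGUAGE plpgsql;
      CREATE TRIGGER #{old_name}_readonly BEFORE INSERT OR UPDATE OF #{old_name}
        ON #{table} FOR EACH ROW EXECUTE PROCEDURE #{table}_#{old_name}_readonly();
    SQL
  ]
end
```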

If we wish to drop or rename a table we follow a similar pattern.

The logic for defer dropping lives in ColumnDropper and TableDropper.

Not trusting ourselves

A big problem with spectacular special snowflake per-application practices is enforcement.

We have great patterns for ensuring safety, however sometimes people forget that we should never drop a column or a table the ActiveRecord migration way.

To ensure we never make the mistake of committing dangerous schema changes into our migrations, we patch the PG gem to disallow certain statements when we run them in the context of a migration.

Want to DROP TABLE? Sorry, an exception will be raised. Want to DROP a column? An exception will be raised.

This makes it impractical to commit highly risky schema changes without following our best practices:

== 20180321015226 DropRandomColumnFromUser: migrating =========================
-- remove_column(:categories, :name)

An attempt was made to drop or rename a column in a migration
SQL used was: 'ALTER TABLE "categories" DROP "name"'
Please use the deferred pattern using Migration::ColumnDropper in db/seeds to drop
or rename columns.

Note, to minimize disruption use self.ignored_columns = ["column name"] on your
ActiveRecord model, this can be removed 6 months or so later.

This protection is in place to protect us against dropping columns that are currently
in use by live applications.
rake aborted!
StandardError: An error has occurred, this and all later migrations canceled:

Attempt was made to rename or delete column
/home/sam/Source/discourse/db/migrate/20180321015226_drop_random_column_from_user.rb:3:in `up'
Tasks: TOP => db:migrate
(See full trace by running task with --trace)

This logic lives in safe_migrate.rb. Since this is a recent pattern we only enforce it for migrations after a certain date.
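A minimal sketch of that kind of guard, assuming a hook that sees each SQL statement before it executes (the real enforcement patches the PG gem itself; the regexes here are illustrative):

```ruby
# Hypothetical sketch of a migration-time SQL guard: raise on statements that
# drop tables or drop/rename columns, so destructive changes can't sneak into
# an ordinary migration.
class UnsafeMigration < StandardError; end

DANGEROUS_SQL = [
  /\bDROP\s+TABLE\b/i,
  /\bALTER\s+TABLE\s+\S+\s+(DROP|RENAME)\b/i
].freeze

def guard_sql!(sql)
  if DANGEROUS_SQL.any? { |re| sql.match?(re) }
    raise UnsafeMigration, "Attempt was made to rename or delete: #{sql}"
  end
  sql
end
```

Real enforcement would hook the database driver’s exec methods so that even hand-written SQL in a migration gets caught.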


Some of what we do is available in gem form and some is not:

Strong Migrations offers enforcement. It also takes care of a bunch of interesting conditions, like nudging you to create indexes concurrently in Postgres. Enforcement is done by patching the ActiveRecord migrator, meaning that if anyone does stuff with direct SQL it will not be caught.

Zero Downtime Migrations is very similar to Strong Migrations.

Outrigger allows you to tag migrations. This enables you to amend your deploy process so some migrations run pre-deploy and some run post-deploy. This is the simplest technique for managing migrations in such a way that you can avoid downtimes during deploy.

Handcuffs is very similar to Outrigger: it lets you define phases for your migrations.

What should you do?

Our current pattern for defer dropping columns and tables works for us, but is not yet ideal. Code that is in charge of “seeding” data is now also in charge of amending the schema, and the timing of column drops is not as tightly controlled as it should be.

On the upside, rake db:migrate is all you need to run, and it works magically all the time, regardless of how you are hosted and what version your schema is at.

My recommendation, though, for what I would consider best practice here is a mixture of a bunch of ideas. All of it belongs in Rails.

Enforcement of best practices belongs in Rails

I think enforcement of safe schema changes should be introduced into ActiveRecord. This is something everyone should be aware of. It is practical to do zero downtime deploys today with schema changes.

class RemoveColumn < ActiveRecord::Migration[7.0]
  def up
    # this should raise an error
    remove_column :posts, :name
  end
end

To make it work, everyone should be forced to add the after_deploy flag to the migration:

class RemoveColumn < ActiveRecord::Migration[7.0]
  after_deploy! # either this, or disable the option globally
  def up
    # this should still raise if class Post has no ignored_columns: [:name]
    remove_column :posts, :name
  end
end

class RemoveColumn < ActiveRecord::Migration[7.0]
  after_deploy!(force: true)
  def up
    # this should work regardless of ignored_columns
    remove_column :posts, :name
  end
end

I also think the ideal enforcement is via SQL analysis, though it is possible that this is a bit of a can of worms at Rails scale. For us it is practical because we only support one database.

rake db:migrate should continue to work just as it always did.

For backwards compatibility rake db:migrate should run all migrations, including after_deploy migrations. Applications that do not care about “zero downtime deploys” should also be allowed to opt out of the safety.

New pre- and post-migrate rake tasks should be introduced

To run all the application code compatible migrations you would run:

rake db:migrate:pre
# runs all migrations without `after_deploy!`

To run all the destructive operations you would run:

rake db:migrate:post
# runs all migrations with `after_deploy!`
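The proposed split is easy to sketch: give migrations a class-level after_deploy! flag and have the pre and post tasks filter on it. The DSL below is hypothetical, mirroring the proposal above rather than any shipping Rails API:

```ruby
# Hypothetical sketch of the proposed pre/post split. A migration marks itself
# destructive with `after_deploy!`; the runner partitions on that flag.
class Migration
  def self.after_deploy!
    @after_deploy = true
  end

  def self.after_deploy?
    !!@after_deploy
  end
end

class AddEmailIndexToUsers < Migration; end    # safe: runs pre-deploy

class DropNameFromCategories < Migration       # destructive: runs post-deploy
  after_deploy!
end

# Returns [pre_deploy_migrations, post_deploy_migrations]
def partition_migrations(migrations)
  migrations.partition { |m| !m.after_deploy? }
end
```

rake db:migrate:pre would run the first group, rake db:migrate:post the second, and plain rake db:migrate both.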


If you are looking to start with “safe” zero downtime deploys today I would recommend:

  1. Amending your build process to run pre-deploy migrations and post-deploy migrations (via Outrigger or Handcuffs)

  2. Introducing an enforcement piece with Strong Migrations


Spirit Island review: Finally, an anti-colonialist board game


(Image: Get off my island.)

A side effect of Euro-style board games’ preoccupation with European history as a theme is that many such games hinge on colonialism. Most board games are not “pro-colonialist,” of course, but simulating a long history of European imperialism necessarily means that a lot of us sit around on game nights trying to figure out the most efficient way to exploit the resources (and sometimes, uncomfortably, the people) of a newly “discovered” land.

Spirit Island, a cooperative strategy game for one to four players, flips this well-worn script on its head. Instead of playing as settlers building out villages and roads in a new land, you and your friends take on the role of god-like elemental spirits charged with protecting the island's various landscapes from those pesky invaders, who are controlled by the game itself. It’s kind of like a complex, wildly asymmetric Pandemic—but here, people are the disease.

The island's natives are there to help you fight back when they can, but it's mostly up to you and your teammates to destroy the settlers' fledgling cities, remove the blight they introduce as they ravage your pristine lands, and gain more and better powers to help you on your way. Gameplay is driven by cards, and as the game progresses, you'll get more and better powers and strike more and more fear into the invaders' hearts. Drive them off to win.

Spirit Island follows the template of most co-op game engines, with each round allotting a phase to the players during which they can mount an attack against the game’s machinations.

Then, of course, the game gets a turn.


The invaders first enter the island as explorers, represented by spindly little miniatures armed with flags and conquistador helmets. They then build towns and cities, which bolster their strength in the map’s various segmented landscapes. Then, presumably due to underdeveloped carbon footprint reduction programs, they ravage the land. Ravaging causes blight, represented by plastic tokens taken from a supply that’s assembled at the beginning of the game. If the invaders cause enough blight to wipe out your supply, you instantly lose.

Of course, this being a co-op board game—a genre that prides itself on cultivating a sadomasochistic relationship with its players—there are actually three ways to lose. In addition to death-by-blight, you can also lose by letting the invaders wipe one of the player-controlled spirits off the island. Or you can simply run out of time—in this case by burning through the “invader deck.”

The invader deck is a wonderful bit of design that acts a bit like a programmed AI which controls the invaders’ actions. Every turn, cards move down a track, and the invaders on the landscape types shown on the cards will do the action of the slot they’re on. In the above example, the invaders will explore the jungle (put new explorer minis on the green spaces), then build towns and/or cities in the wetlands, then ravage in the mountains. The “explore” and “build” actions are essentially the game’s way of spawning new threats for you to deal with; the “ravage” action is the game attacking your land. If the minis on the ravaged land do more damage than you’re able to defend against, the landscape will be blighted.

Co-op game engines generally drive the action by starting fires each round that you need to put out as quickly as possible. There’s never enough time to do everything, of course, so you have to prioritize disaster response for the most pressing matters. Spirit Island keeps this general setup, and the game’s invader card track ensures you know what’s coming every round. The invaders are going to ravage in the mountains next turn, but they’re building in the wetlands, which are already being overrun. Do you take the blight hit to work on hampering their expansion?

You win by clearing every invader—including explorers, towns, and cities—off the board. At least, that’s your goal when the game begins. As you reduce the invaders’ cities to cinders and fertilize the land with their blood, the colonists start to get jittery. “Fear” is a resource, represented by tokens, that you generate as you terrorize your unwanted guests. Cause enough of it and you uncover Fear cards, which give you extra actions. Clear away enough Fear cards and your win conditions change, becoming easier as the invaders start to reconsider their surprisingly antagonistic new home. By the end of the game, you only have to clear out the invaders’ cities to win—the explorers and towns can stay (head canon: the leftovers are driven into the sea, shrieking regret and repentance as the game’s credits roll).

The game boasts complexity and depth rarely seen in one-and-done co-ops. Figuring out how to manage the ever-expanding mass of invaders requires you to constantly think several turns ahead. Deciding how to play your cards is a wonderfully thinky puzzle and easily the best part of the game, but again, it’s complex. Players prone to “analysis paralysis” may feel the gears in their heads grinding to a halt as they work out all the possible permutations of actions—and how to combo their actions with those of their teammates. Cooperation is essential here, and that cooperation can compound the complexity (and play time).


Co-op games are wildly popular in board gaming. They allow players to work together to overcome a common threat, which is perfect for players who don’t want to fight their friends in their free time. Personally, I’ve never been much of a fan.

Competitive Eurogames almost always allow you to build stuff, whether that’s something concrete like a kingdom made of tiles or something more abstract like an economic engine. At the end of a competitive Euro, win or lose, you have something that you've built. If you lose a co-op, which you often will (co-ops are hard by design), you… just kinda lose. If you win, it generally means you’ve successfully limped across the finish line after spending 90 minutes fighting for every breath.

Spirit Island bridges this gap for me by letting me build something: my spirit. Every single round, your spirit gets better. At the beginning of the round, each player gets to choose between three or four “growth” options for their spirit. Many of them allow you to move a disc or two off tracks on your player board and put them on a landscape on the central board. As you uncover spaces on those tracks, you gain access to more energy (think “mana”) and extra card plays.

By the end of the game, you feel like a literal god, playing cards to push invaders around the map with violent waves and explosive volcanoes, setting traps and watching as a teammate detonates a perfect combo that wipes out an entire metropolis. Later rounds see the invaders exploring and ravaging multiple lands every turn, which provides for a great sense of progression. You get stronger and stronger, and the invaders start frantically throwing more bodies at you as they get increasingly desperate. This is far from the only co-op game with an emphasis on character progression, but the narrative arc of each game feels especially satisfying here.

The cards that represent your powers allow you to push and pull the enemy around the board, deal damage, buff up allies, generate fear, and just generally cause mayhem. Elemental symbols on the left-hand side of each card let you trigger special, spirit-specific effects, which adds even more complexity to your calculations.

When you get new cards, you draw four from one of two decks and choose one. There are minor and major powers; the majors are naturally more powerful, but they’re also more expensive to play—and they force you to forget (discard) another card from your stock. One of the growth options lets you reclaim your played cards at the expense of slowing down your growth on the board, so choosing the right moment to take a tempo hit to get all your goodies back is essential.

The spirits themselves—there are eight included in the base game—all play completely differently from one another, with different starting cards, growth options, special powers, and even wildly different energy generation and card play potential. Play to your strengths, shore up your friends’ weaknesses, and draft cards that work well in your team composition.

Spirit Island also plays surprisingly well as a solo game, though playing with one spirit is about all I can handle—the cognitive load required when playing as two or more spirits is… heavy.

The variety of spirits, a bunch of randomized power cards, and a variable setup mean there's a ton of replayability in the box, while scenarios and country-specific adversaries introduce new rules and let you scale up the difficulty. The game is always hard, but you can make it very hard if you're really itching for punishment. And if you burn through the base game’s content and want even more options, the Branch and Claw expansion adds two new spirits, a new adversary, and new powers, cards, and scenarios.

The game's MSRP of $80 is a bit high, but if you're at all interested in co-ops with a cool theme and even better gameplay (and you're not afraid of doing a bit of thinking), Spirit Island is an easy recommendation. It was one of our favorite games of 2017, and we're still playing it obsessively.



The Mac gaming console that time forgot


(Image: Nope, that's not an Xbox, Playstation, or even a Dreamcast... Credit: Macgeek.org's Museum)

Apple in mid-1993 was reeling. Amidst declining Mac sales, Microsoft had gained a stranglehold over the PC industry. Worse, the previous year Apple had spent $600 million on research and development, on products such as laser printers, powered speakers, color monitors, and the Newton MessagePad system—the first device to be branded a "personal digital assistant," or PDA. But little return had yet come from it—or indeed looked likely to come from it.

The Newton's unreliable handwriting recognition was quickly becoming the butt of jokes. Adding to the turmoil, engineering and marketing teams were readying for a radical transition from the Motorola 68k (also known as the 680x0) family of microprocessors that had powered the Mac since 1984 to the PowerPC, a new, more powerful computer architecture that was jointly developed by Apple, Motorola, and IBM. Macs with 68k processors wouldn't be able to run software built for PowerPC. Similarly, software built for 68k Macs would need to be updated to take advantage of the superior PowerPC.

It was in this environment that COO Michael Spindler—a German engineer and strategist who'd climbed through the ranks of Apple in Europe to the very top layer of executive management—was elevated to CEO. (The previous CEO, John Sculley, was asked to resign.) Spindler spearheaded a radical and cost-heavy reorganisation of the company, which harmed morale and increased the chaos, and he developed a reputation for having horrendous people skills. He'd hold meetings in which he'd ramble incoherently, scribble illegible notes on a whiteboard, then leave before anybody could ask a question, and his office was usually closed.

Under Spindler's rule Apple became increasingly dysfunctional. The company lost focus and direction. One year the board decided to drop Mac prices to raise market share, the next they backflipped and chased profits. Innovation all but disappeared from their product line, and now they embraced an idea long abhorred internally: sanctioning Mac clones.

The Mac had reached 12 percent share of the personal computer market in 1993, only to immediately begin its decline as the PC, which was outselling the Mac ten-to-one, ticked over 90 percent the following year. Apple's board and senior executives theorised that allowing other companies to manufacture Macintosh hardware would somehow reverse this trend—that Apple could beat Microsoft at the licensing game and overturn their massive market share deficit.

Apple had licensed the Mac system before, but only for specialised uses in new markets—things that didn't compete with Apple's Mac sales. Eric Sirkin, director of Macintosh OEM products in the New Media Division, had brokered deals for Mac OS to be used in embedded systems—computers with dedicated, specific functions. (OEM, or original equipment manufacturer, is when a product is licensed to be resold as a part or subsystem in another company's product.) But when the clone program started, he wasn't interested. He doubted the value of other companies selling consumer Macs, so he stayed clear. Soon after, through indirect channels, Sirkin got wind of an approach by a large Japanese toy company called Bandai to make a Mac-based games console. It was in the territory of the newly formed Personal Interactive Electronics (PIE) division, run by former Philips Electronics vice-president Gaston Bastiaens. "They weren't able to capitalise on the opportunity," Sirkin recalls, which frustrated some of the people in the PIE group.

Sirkin was already managing a project (the FireWire communications interface) that involved regular travel to Japan, so he was happy to look into it. His PIE group colleagues connected him with Bandai, and off he went to Japan to discuss their idea.

Founded in 1950 by the son of a rice merchant, Bandai had grown into one of the largest toy manufacturers in the world. It had made popular toy cars in the 1960s and 1970s, and by the 1990s was the toy licensee for most of the popular Japanese children's manga and anime—including Ultraman, Super Robot, Gundam, Dragon Ball, and Digimon. The company had been making waves in the American market as the maker of the action figure toys for the hit new children's superhero TV show Mighty Morphin Power Rangers, which was based on a Japanese show called Super Sentai. In 1994, Bandai would generate $330 million in revenue from sales of Power Rangers merchandise in the US alone.

CEO Makoto Yamashina, the son of the founder, wanted Bandai to be more than an action-figure toy company, however. He saw their future as a global entertainment company like Disney or Nintendo. He had pushed for years for Bandai to produce its own animated films and television serials and to delve deeper into home electronics. In the process, he drastically diversified their product line. They made sweets, bathroom products, clothing, videos, dolls, robots, action figures, and video games. The older Yamashina once publicly lamented his son's business strategy of bringing out ten toys in the hope that three would become hits.

But Bandai had grown considerably in both stature and revenue since Yamashina had taken over in 1987. Now he had an idea that would allow the company to take on the giants of home entertainment. Bandai's idea centred around the CD-ROM, which was surging in popularity as CD drives dropped in price. The adventure game Myst was often the first CD-ROM people bought. And many of Bandai's licenses, including Dragon Ball Z, Power Rangers, and Sailor Moon, were perfect for the games market. Bandai saw an opportunity to leverage these properties and the CD format together, and to thereby conquer the living room. They admired Apple and the Mac, so they hoped to partner with the Cupertino company in developing and releasing a game console and multimedia machine. Better yet, if the system could be a low-cost, more specialised Mac then they could avoid the problem facing the similar 3DO system—which had limited software available.

It was complicated

(credit: Evan Amos / Wikimedia)

It fell to Eric Sirkin to explain that Apple, in its present state, would likely not be willing, or able, to launch it as an Apple-branded product. "My charter was to create opportunities for the Macintosh outside of its core market," he says. A stripped-down Mac packaged as a living room multimedia system could fit the charter, but only on the proviso that it was neither built nor sold by Apple. Sirkin explained that what Apple could do was lead the engineering and design of the product and then charge a per-system licence fee to Bandai. The manufacturing, marketing, and branding would all be Bandai's responsibility.

They liked that idea. So we went through a series of meetings, going back and forth, and started involving Satjiv [Chahil], my boss, who also raised it to the attention of Ian Diery [head of Apple's personal computer division], so we had all the visibility in what we were doing. It was seen as an activity not costing the company a lot of money and possibly having an opportunity to reposition the technology of the company in another market.

Apple and Bandai soon entered into an agreement. Sirkin returned to Cupertino and put a team of engineers onto the project to help him design the device internals. They codenamed the project Pippin, after the type of apple, because the name was already registered by Apple and it hadn't been used yet.

The core technology would come from the Macintosh—specifically the new PowerPC line. To keep costs down, they opted for the low-end PowerPC 603 rather than the more powerful but much more expensive 604 processor. The Pippin, then, would be a low-cost Macintosh designed for the living room. A clone by a different name, for a different purpose.

Immediately, things got complicated. Sirkin and his team were instructed by Apple management to make the system un-Mac-like. Pippin could not be allowed to cannibalise desktop Mac sales. It had to be so limited that people couldn't possibly use it as a primary personal computer.

This distancing from the Mac affected the Pippin in a number of ways. First, Apple deemed it important that the device be both manufactured and branded as a Bandai product. "The Bandai people would have loved to have Apple just go off and make it," recalls Richard Sprague, who acted as intermediary and interpreter between Apple and Bandai. "But they felt like manufacturing was the price that they had to pay to get an Apple-compatible media device."

Apple's business people believed that the real money in the computer business came from software. "The problem with software is that people copy it," they'd argue, "so we're going to put the best copy protection on it that humankind has ever known. We're going to make this thing so locked down that it'll be impossible for them to play anything other than the stuff we put out." This, Sprague says, led to some ill-advised mathematics that spurred ill-advised policies:

It would have been nice to have a $200 machine where you take a copy of Myst off the shelf that works on a PC, it works on a Mac, and just pop it into the Pippin and have it play. That would be kind of cool. But no, we had to make it so that the Myst developers would make a special version of their disc just for us. It was a whole bunch of things just like that that were about ensuring that nobody would ever mistake it for a Macintosh.

Sprague had been hired by Apple in 1991 to help recruit Japanese software companies to write software for the Mac. "In those days Apple was really growing quickly in Japan," he recalls. "From every perspective it looked like the Japanese were going to dominate the world in all kinds of things. So it was kind of a hot, special place to be." Sprague was also a fluent Japanese speaker, so he often had to play the role of interpreter for visiting Apple executives. One day he got dragged along to a "super top-secret meeting" between New Media Division head Satjiv Chahil and Bandai's top executives, including company president Makoto Yamashina.

"Somewhere in the middle of the meeting it turned out that Bandai was really mad at Apple," Sprague recalls. "[Yamashina] was like in his most polite but kind of mean Japanese talking about how Apple had screwed him over—how they had signed this agreement months ago and now Apple hasn't done a single thing." Apple was supposed to have put a full-time employee in Japan to work with Bandai.

Satjiv, without batting an eye, he says, "Well we did hire a full-time person. That's why I brought Richard Sprague." He told me to translate that. I'm like "Satjiv, I've already got a job. It's not this. I was just dragged along because you asked me." He goes, "Just play along. Just tell him this. I'll make it up later." A lot of Pippin was run exactly that way. Just kinda making things up as we go.

Apple's increasing managerial dysfunction took a more immediate toll on the Pippin project. "We went through all kinds of struggles in the engineering team," Sirkin recalls. At one point four key software engineers went on strike. "They said they couldn't deliver the product on the schedule committed and they'd decided they didn't want to work on it anymore after working on it for like six months," Sirkin continues. "I ended up having to fire them." In their place he assigned a group of other software engineers from his group who were willing to work overtime to get the project back on schedule.

Third party pitfalls

Presto Studios programmer Bob Bell remembers how stark this change was for third-party relations. He was working on the Mac and Pippin versions of The Journeyman Project: Pegasus Prime, a remake of the first Journeyman Project adventure game. "It felt like the old guard had been replaced by these young whippersnappers," he says. Where the old team didn't seem to care much, this new team were "go-getters" who showed a genuine interest in the project and responded quickly to queries.

That was great news for Bell, who had his hands full figuring out how to get Pegasus Prime to run fluidly off a CD-ROM (on Mac, the Journeyman Project games installed most of the content to a user's hard drive). Presto had pioneered an animation technology that would play micro-movies to transition from one fixed location to another. This involved opening and closing files over and over again, which was much, much slower on a CD. "So I decided I'm just gonna make one gigantic movie," Bell explains. Pegasus Prime on the Pippin would open the movie containing all of the walking animations for a level at launch, read in a data structure that specified what frames corresponded to what locations, then push the relevant frames to the foreground as they were needed. "I would never close the movie," Bell continues, "and that made it a lot faster."
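Bell's one-giant-movie approach can be sketched in a few lines. This is a hypothetical illustration (the class and data names are invented, not Presto's actual code): the whole level's transition footage is loaded once, an index maps each pair of locations to a frame range, and every transition becomes a cheap lookup rather than a slow CD-ROM file open/close.

```python
# A minimal sketch of the technique Bell describes (all names hypothetical):
# keep one large "movie" open for the whole level and index its walking
# animations by (from_location, to_location), so no per-transition file
# opens are ever needed.

class TransitionMovie:
    def __init__(self, frames, index):
        # frames: every frame of the level's combined movie, loaded once
        # index: maps (from_loc, to_loc) -> (first_frame, last_frame)
        self.frames = frames
        self.index = index

    def transition(self, from_loc, to_loc):
        # Look up the frame range for this walk and return those frames.
        # The "movie" is never closed or reopened between transitions.
        first, last = self.index[(from_loc, to_loc)]
        return self.frames[first:last + 1]

# Usage: each transition is just a slice into already-resident data.
movie = TransitionMovie(
    frames=[f"frame{i}" for i in range(10)],
    index={("lobby", "lab"): (2, 5)},
)
print(movie.transition("lobby", "lab"))  # frames 2 through 5
```

The real implementation dealt with QuickTime movie data rather than Python lists, but the core idea is the same: pay the open cost once, then treat playback as indexed access.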

Bell was excited by the Pippin. He thought it seemed to be on the cutting edge—a cool device that could really get big. "I never thought of it as underpowered, or like the reputation it has today of 'the worst console in history' or whatever," he says. Instead, he thought it was fun and challenging. Sure, it had low memory and no hard drive, but then so did the other game consoles of the time. And he appreciated that, unlike his colleague working on the PlayStation version of Pegasus Prime, he could leverage Apple's QuickTime library.

Bell wasn't the only one who thought Pippin had promise. "Whenever we said the words 'Dragon Ball' and 'Power Rangers' and whatnot," Sprague recalls, "people got really excited because they thought, 'Wow, this is going to be a real powerhouse.'" The people inside Bandai were totally behind the project, too. Sprague remembers that they'd studied the games industry inside-out and believed that Pippin was their future.

That made things frustrating. "Yamashina, the top guy—this was his number one priority," he recalls. "Anything we needed from Bandai was a phone call away. There was no problem with money or resources. All the best people were working on it. Whereas with the Apple side it was the exact opposite."

"Yamashina would come to Cupertino because there would be some problem happening that needs to be addressed and nobody would meet with him," Sprague continues. Top-ranking executives had little time for the project. "They thought it was a dumb idea," he says.

Sirkin recalls how Pippin's pre-launch buzz briefly earned the attention of Apple's senior management—for better or worse:

Pippin was so successful prior to launch that the executives decided it was such a valuable project and Eric—being me—did such a great job we were split in half. They put the marketing group in this division and the engineering team in another division, and that was going to help the project—which of course was totally idiotic and had the opposite effect because one of the reasons we were successful was that we had all the resources in one organisation.

The decline: "You can't make money on the Internet"

The Pippin team struggled on, pulled this way and that by territorial executives. "I remember being in a heated argument with people inside the Pippin team," Sprague recalls, "where it was the smartest people in the room—they're all these MBAs and they're like 'that Netscape thing is kinda cool.' But it's not great, they argued. There are lots of things it can't do. And furthermore, this Internet thing is never going to happen 'because you can't make money on the Internet.'"
They insisted that the way to make money online was with walled-garden online services like AOL and Apple's own eWorld—which had email, news, chat, and community features that required a subscription fee to access. Sprague continues:

Bandai [then] came to us and said, "Have you guys seen this Internet thing? This Pippin would be the perfect way for people to get onto the Internet. You could make a set-top box ... so that now people can experience the Internet from their couch in the living room, and people who don't know how to use a keyboard—who aren't natural customers for a PC—everybody's going to go crazy when they see this thing." So we told them, "No, you can't put the Internet on this thing because it's a media player and the Internet's not gonna make it. So just trust us. You don't want to waste time and effort on something that connects to the Internet."

Bandai, to their credit, didn't believe Apple. Many of the people they had on Pippin were young and hip, at the cutting edge of Japanese society, so they recognised the potential of the Internet. They insisted that Pippin be distributed with a modem and the necessary software to get online. "That was their marquee title," Sprague says. "Actually getting on the Internet."

In the meantime, more outsiders had started to note the problems inherent in Pippin's design. One response in particular stood out to Sirkin. "We had a developer conference in California, which we hosted and Bandai co-sponsored," he says. Bandai flew lots of Japanese developers over while Apple subsidised the travel costs of a number of American studios.

In the Q&A session at the end of Sirkin's introductory speech, one developer stood up and asked, "Well, what is this game console going to do better than anything else? Is it going to be an Internet device? Is it gonna be a communications device? Is it going to be a great game console?" Sirkin couldn't answer. Pippin wasn't great at anything. It couldn't do games as well as a PlayStation or Sega Saturn, even though it was technically more powerful than them, nor could it do general computing tasks as well as a desktop Mac or PC. It was a low-end Macintosh with the hard drive pulled out, a new graphics engine added in (so that it could look decent on a TV screen), and a heavily stripped-down operating system (so that it could be loaded from a CD-ROM on startup) that could only run one application at a time.

There had already been an undercurrent of internal doubt about Pippin's short-term commercial potential, but this magnified Sirkin's own uncertainties. "It was like in no man's land," he says. "It was more expensive than a video game console and it wasn't as powerful as a PC, so how do you explain it to the market? How do you position it?"

Existing Mac game and multimedia developers were courted to produce Pippin versions of their work. Marathon creators Bungie, the Mac's premier game developers, were among those who signed on with hopes of reaching a new audience and strengthening relationships with Apple and Bandai.

Super Marathon, a Pippin port of Marathon and Marathon 2: Durandal bundled into one package, was a nightmare project for sole programmer Jason Regier. The game's keyboard and mouse controls had to be shoehorned into working on the Pippin's strange boomerang-shaped AppleJack controller, which had four action buttons on its front, two trigger buttons on the top, three smaller buttons on the bottom, a directional pad, and a trackball at its centre. He had to redo text rendering in its many in-game computer terminals, because it needed to be readable from a greater distance. Memory constraints meant music and other features had to be cut. And any technical problems he ran into had to be routed through his colleague Alex Rosenberg's friends at Apple—because Bandai US had taken over technical support and they weren't answering his queries.

The Game Technology Group in Apple's Mac division later modified the GameSprockets technology—new development libraries to help Mac games perform better—for Pippin to alleviate some of these issues. Still, as Pippin neared release, they were called on for assistance by their friends at a few major Mac game companies who wanted to publish Pippin versions of their big titles. "None of them deemed it worthwhile to finish the work," recalls Chris De Salvo, one of the engineers in the group, because performance was so poor.

Only three Mac games ever completed the transition from Mac to Pippin: Bungie's Marathon/Marathon 2, 3D racing game Racing Days (which had some of the best graphics yet seen in its genre), and Presto's Journeyman Project: Pegasus Prime. The rest of the Pippin catalogue consisted of several dozen multimedia and edutainment titles, such as Dragon Ball Z: Anime Designer and Compton's Interactive Encyclopedia, and Japan-only games like Tunin'Glue and L-Zone.

Bandai expected its two Pippin models to sell 500,000 units worldwide in a year. They prepared to spend $100 million on promoting it heavily in Japan and the US as a hip new thing with huge cool factor. But the system's identity crisis would harm consumer reception. For a few hundred dollars more, they could buy a much more powerful Internet-capable computer; for a few hundred less, they could get a dedicated games console with a large library of great games like the PlayStation.

Apple's Senior Director of New Media Operations, Steve Franzese, told the Los Angeles Daily News that he expected Pippin to sell three million units—across multiple manufacturing partners, not just Bandai—in three years (at a royalty of less than $15 per sale), with each software sale bringing in a $3 royalty to Apple.

Pippin achieved neither of these projections. Estimates range from just 5,000 to 42,000 @WORLD units sold in North America. Sales in Japan fared only slightly better, while in Europe, as the short-lived Katz Media KMP 2000, it sold considerably worse. (American electronics company DayStar Digital would later acquire Bandai's leftover inventory in 1998, whereupon they reportedly sold as many as 2,000 systems.)

The Pippin project was shut down in March 1997, when Apple laid off 4,100 employees in a bid to stop the bleeding from its still-failing Mac business. Sirkin offered it for the chop, since he knew it was as good as dead, even though his team had been diligently working to fix Pippin's problems with a second-generation model. "We had a true graphics engine which could compete with the games consoles, and we had a better graphics engine than we had in the first generation," Sirkin says. "And we did this on our own, hoping that we would market this second version."

Part of the Pippin team left before the project was officially cancelled to start their own company and build a set-top box with Internet capabilities. They eventually got bought by Sun Microsystems. Sirkin had no interest in waiting around for Steve Jobs to come back (which at that point wasn't a sure thing, despite Apple having just bought Jobs' company, NeXT), so he went off to start a company around the FireWire technology. The rest of the Pippin team also left Apple.

In hindsight, Sirkin believes that Pippin was always destined for failure. It could never have survived in the Apple of its day:

It was schizophrenic. It was not sure what it was going to be. And there was no patience in the company. No interest in investing in something and iterating on it until you got a market adopted for it. I really believed that if we had been given the opportunity to complete the second generation and move onto the third then it would have been a very successful product. But that wasn't on the cards. It wasn't going to happen.

Richard Moss is a writer and technology/games historian based in Melbourne, Australia. In addition to his new book, The Secret History of Mac Gaming, he produces Ludiphilia, a storytelling podcast about how and why we play. You can follow him on Twitter @MossRC and read his past work on everything from FireWire to gaming genres to GTA V Easter eggs on Ars.


Winterraum cabin at Watzmannhaus in Berchtesgadener Land

Winterraum cabin at Watzmannhaus in Berchtesgadener Land

Submitted by Wouter Reininga / @wouterreininga

A long hike to this cabin at minus 23 Celsius. Luckily there was a nice stove inside and plenty of blankets.


‘Black Panther’ just became North America’s highest-grossing superhero movie

After the remarkable string of weekends it’s been having, it was really just a matter of time before Black Panther wrested the top spot from his fellow Avengers. As of today, the Ryan Coogler-directed film is North America’s highest-grossing superhero film of all time (not adjusted for inflation, mind).

The news comes via The Hollywood Reporter, which notes that the film has pulled in $624 million on the continent, roaring past 2012’s The Avengers, which made $623.4 million. The film, which cost around $200 million, is now one of only seven movies to make more than $600 million, domestically.

Black Panther has been pulling in money at a steady clip since opening, with a record-setting $218 million opening weekend. Worldwide, it’s pulled in $1.2 billion and is on track to become the third highest-grossing superhero film internationally, behind the first two Avengers films. The third Avengers film, Infinity War, is due out on April 27.

Even more so than those films, however, Black Panther has become a cultural touchstone for moviegoers, and, hopefully, a wake-up call for Hollywood, which has traditionally shied away from diversity in film. Those who still haven’t seen it can check out Anthony’s review here — or just take my word that it’s awesome.
