Kubernetes best practices: How and why to build small container images


Editor’s note: Today marks the first installment in a seven-part video and blog series from Google Developer Advocate Sandeep Dinesh on how to get the most out of your Kubernetes environment. Today he tackles the theory and practicalities of keeping your container images as small as possible.

Docker makes building containers a breeze. Just put a standard Dockerfile into your folder, run the “docker build” command, and shazam! Your container image is built!

The downside of this simplicity is that it’s easy to build huge containers full of things you don’t need—including potential security holes.

In this episode of “Kubernetes Best Practices,” let’s explore how to create production-ready container images using Alpine Linux and the Docker builder pattern, and then run some benchmarks that can determine how these containers perform inside your Kubernetes cluster.

The process for creating container images differs depending on whether you are using an interpreted language or a compiled language. Let’s dive in!

Containerizing interpreted languages


Interpreted languages such as Ruby, Python, Node.js, and PHP send source code through an interpreter that runs it. This gives you the benefit of skipping the compilation step, but has the downside of requiring you to ship the interpreter along with the code.

Luckily, most of these languages offer pre-built Docker containers that include a lightweight environment that allows you to run much smaller containers.

Let’s take a Node.js application and containerize it. First, let’s use the “node:onbuild” Docker image as the base. The “onbuild” version of a Docker container pre-packages everything you need to run so you don’t need to perform a lot of configuration to get things working. This means the Dockerfile is very simple (only two lines!). But you pay the price in terms of disk size—almost 700MB!

FROM node:onbuild
EXPOSE 8080
By using a smaller base image such as Alpine, you can significantly cut down on the size of your container. Alpine Linux is a small and lightweight Linux distribution that is very popular with Docker users because it’s compatible with a lot of apps, while still keeping containers small.

Luckily, there is an official Alpine image for Node.js (as well as other popular languages) that has everything you need. Unlike the default “node” Docker image, “node:alpine” removes many files and programs, leaving only enough to run your app.

The Alpine Linux-based Dockerfile is a bit more complicated to create as you have to run a few commands that the onbuild image otherwise does for you.

FROM node:alpine
WORKDIR /app
COPY package.json /app/package.json
RUN npm install --production
COPY server.js /app/server.js
EXPOSE 8080
CMD npm start
But it’s worth it: the resulting image is much smaller, at only 65MB!
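One related habit that keeps builds like this lean is a .dockerignore file next to the Dockerfile, so the build context sent to the Docker daemon (and anything a broad COPY might pick up) excludes local artifacts. A minimal sketch, with illustrative entries that are not from the original post:

```
# .dockerignore: keep the build context lean; none of these belong in the image
node_modules
npm-debug.log
.git
```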

Containerizing compiled languages


Compiled languages such as Go, C, C++, Rust, and Haskell produce binaries that can run without many external dependencies. This means you can build the binary ahead of time and ship it to production without also shipping the tools used to create it, such as the compiler.

With Docker’s support for multi-stage builds, you can easily ship just the binary and a minimal amount of scaffolding. Let’s learn how.

Let’s take a Go application and containerize it using this pattern. First, let’s use the “golang:onbuild” Docker image as the base. As before, the Dockerfile is only two lines, but again you pay the price in terms of disk size—over 700MB!

FROM golang:onbuild
EXPOSE 8080
The next step is to use a slimmer base image, in this case the “golang:alpine” image. So far, this is the same process we followed for an interpreted language.

Again, creating the Dockerfile with an Alpine base image is a bit more complicated as you have to run a few commands that the onbuild image did for you.

FROM golang:alpine
WORKDIR /app
ADD . /app
RUN cd /app && go build -o goapp
EXPOSE 8080
ENTRYPOINT ./goapp

But again, the resulting image is much smaller, weighing in at only 256MB!
However, we can make the image even smaller: You don’t need any of the compilers or other build and debug tools that Go comes with, so you can remove them from the final container.

Let’s use a multi-stage build to take the binary created by the golang:alpine container and package it by itself.

FROM golang:alpine AS build-env
WORKDIR /app
ADD . /app
RUN cd /app && go build -o goapp

FROM alpine
RUN apk update && \
   apk add ca-certificates && \
   update-ca-certificates && \
   rm -rf /var/cache/apk/*
WORKDIR /app
COPY --from=build-env /app/goapp /app
EXPOSE 8080
ENTRYPOINT ./goapp

Would you look at that! This container is only 12MB in size!
While building this container, you may notice that the Dockerfile does strange things such as manually installing HTTPS certificates into the container. This is because the Alpine Linux base image ships with almost nothing pre-installed. So while you need to manually install any and all dependencies, the end result is super small containers!

Note: If you want to save even more space, you could statically compile your app and use the “scratch” container. Using “scratch” as a base container means you are literally starting from scratch with no base layer at all. However, I recommend using Alpine as your base image rather than “scratch” because the few extra MBs in the Alpine image make it much easier to use standard tools and install dependencies.
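As a sketch of what the “scratch” approach would look like (not from the original post): the binary has to be statically linked, since “scratch” contains no libc, and the ENTRYPOINT must use the exec form, since there is no shell either.

```dockerfile
FROM golang:alpine AS build-env
WORKDIR /app
ADD . /app
# CGO_ENABLED=0 forces a fully static binary with no libc dependency
RUN cd /app && CGO_ENABLED=0 go build -o goapp

FROM scratch
WORKDIR /app
COPY --from=build-env /app/goapp /app
EXPOSE 8080
# Exec form is required: "scratch" has no /bin/sh to run the shell form
ENTRYPOINT ["/app/goapp"]
```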

Where to build and store your containers


In order to build and store the images, I highly recommend the combination of Google Container Builder and Google Container Registry. Container Builder is very fast and automatically pushes images to Container Registry. Most developers should easily get everything done in the free tier, and Container Registry is the same price as raw Google Cloud Storage (cheap!).
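As one concrete illustration: at the time of writing, a build could be submitted to Container Builder and the result pushed to Container Registry with a single command. The project and image names below are hypothetical, and the gcloud command surface may have changed since this was written.

```shell
# Build the Dockerfile in the current directory on Google's infrastructure,
# then push the resulting image to Container Registry under this tag.
gcloud container builds submit --tag gcr.io/my-project/my-app .
```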

Platforms like Google Kubernetes Engine can securely pull images from Google Container Registry without any additional configuration, making things easy for you!

In addition, Container Registry gives you vulnerability scanning tools and IAM support out of the box. These tools can make it easier for you to secure and lock down your containers.

Evaluating performance of smaller containers


People claim that small containers’ big advantage is reduced time—both time-to-build and time-to-pull. Let’s test this, using containers created with onbuild, and ones created with Alpine in a multistage process!

TL;DR: There is no significant difference for powerful machines or Container Builder, but a significant difference for smaller machines and shared systems (like many CI/CD systems). Smaller images are always better in terms of absolute performance.

Building images on a large machine


For the first test, I am going to build using a pretty beefy laptop. I’m using our office WiFi, so the download speeds are pretty fast!


For each build, I remove all Docker images in my cache.
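The cache-clearing step can be sketched like this (the image tag is a placeholder; `docker system prune -a` is one way to remove all local images and build cache):

```shell
# Start each run cold: remove all local images, containers and build cache
docker system prune -a -f

# Then time each phase separately
time docker build -t gcr.io/my-project/go-onbuild .
time docker push gcr.io/my-project/go-onbuild
```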

Build:
Go Onbuild: 35 Seconds
Go Multistage: 23 Seconds
The build takes about 12 seconds longer for the larger container. While this penalty is only paid on the initial build, your Continuous Integration system could pay this price with every build.

The next test is to push the containers to a remote registry. For this test, I used Container Registry to store the images.

Push:
Go Onbuild: 15 Seconds
Go Multistage: 14 Seconds
Well, this was interesting! Why does it take almost the same amount of time to push a 12MB image and a 700MB image? Turns out that Container Registry uses a lot of tricks under the covers, including a global cache for many popular base images.

Finally, I want to test how long it takes to pull the image from the registry to my local machine.

Pull:
Go Onbuild: 26 Seconds
Go Multistage: 6 Seconds
At 20 seconds, this is the biggest difference between using the two different container images. You can start to see the advantage of using a smaller image, especially if you pull images often.

You can also build the containers in the cloud using Container Builder, which has the added benefit of automatically storing them in Container Registry.

Build + Push:
Go Onbuild: 25 Seconds
Go Multistage: 20 Seconds
So again, there is a small advantage to using the smaller image, but not as dramatic as I would have expected.

Building images on small machines


So is there an advantage for using smaller containers? If you have a powerful laptop with a fast internet connection and/or Container Builder, not really. However, the story changes if you’re using less powerful machines. To simulate this, I used a modest Google Compute Engine f1-micro VM to build, push and pull these images, and the results are staggering!

Pull:
Go Onbuild: 52 seconds
Go Multistage: 6 seconds
Build:
Go Onbuild: 54 seconds
Go Multistage: 28 seconds
Push:
Go Onbuild: 48 Seconds
Go Multistage: 16 seconds
In this case, using smaller containers really helps!

Pulling on Kubernetes


While you might not care about the time it takes to build and push the container, you should really care about the time it takes to pull the container. When it comes to Kubernetes, this is probably the most important metric for your production cluster.

For example, let’s say you have a three-node cluster and one of the nodes crashes. If you are using a managed system like Kubernetes Engine, the system automatically spins up a new node to take its place.

However, this new node will be completely fresh, and will have to pull all your containers before it can start working. The longer it takes to pull the containers, the longer your cluster isn’t performing as well as it should!

This can occur when you increase your cluster size (for example, using Kubernetes Engine Autoscaling), or upgrade your nodes to a new version of Kubernetes (stay tuned for a future episode on this).

We can see that the pull performance of multiple containers from multiple deployments can really add up here, and using small containers can potentially shave minutes from your deployment times!
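To put rough numbers on that, take the f1-micro pull times measured above and scale them by the number of distinct images a fresh node has to pull (the image count here is a made-up illustration):

```python
# Pull times measured on the f1-micro VM (seconds)
onbuild_pull_s = 52
multistage_pull_s = 6

# Hypothetical: a fresh node that must pull 10 distinct images
images_per_node = 10

saved_s = (onbuild_pull_s - multistage_pull_s) * images_per_node
print(f"{saved_s} seconds (~{saved_s / 60:.1f} minutes) saved per new node")
# prints "460 seconds (~7.7 minutes) saved per new node"
```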

Security and vulnerabilities


Aside from performance, there are significant security benefits from using smaller containers. Small containers usually have a smaller attack surface as compared to containers that use large base images.

I built the Go “onbuild” and “multistage” containers a few months ago, so they probably contain some vulnerabilities that have since been discovered. Using Container Registry’s built-in Vulnerability Scanning, it’s easy to scan your containers for known vulnerabilities. Let’s see what we find.

Wow, that’s a big difference between the two! Only three “medium” vulnerabilities in the smaller container, compared with 16 critical and over 300 other vulnerabilities in the larger container.

Let’s drill down and see which issues the larger container has.

You can see that most of the issues have nothing to do with our app, but rather programs that we are not even using! Because the multistage image is using a much smaller base image, there are just fewer things that can be compromised.

Conclusion

The performance and security advantages of using small containers speak for themselves. Using a small base image and the “builder pattern” can make it easier to build small images, and there are many other techniques for individual stacks and programming languages to minimize container size as well. Whatever you do, you can be sure that your efforts to keep your containers small are well worth it!

Check in next week when we’ll talk about using Kubernetes namespaces to isolate clusters from one another. And don’t forget to subscribe to our YouTube channel and Twitter for the latest updates.

If you haven’t tried GCP and our various container services before, you can quickly get started with our $300 free credits.


Why a Cryptocurrency Mining Giant Is Burning Money in a ‘Black Hole’


Learning about cryptocurrency economics can be a bit like biting into an oatmeal cookie and finding raisins when you thought they were chocolate chips: Entirely unexpected, and frankly unsettling.

Consider the case of AntPool—one of the largest cryptocurrency mining pools in the world. According to announcements on social media on Friday, the pool is “burning” 12 percent of the money it makes mining the Bitcoin Cash blockchain and is encouraging other miners to also burn a portion of their earnings. This appears counterintuitive, since for profit-seeking businesses there is normally an imperative not to light the money you make on fire. Very interesting! But why?

Coin burning, if you’re not familiar, is a well-trod path to inflating the value of a cryptocurrency with a fixed supply, like Bitcoin Cash. The value of coins with a fixed supply is based on increasing demand and steadily decreasing supply. If you can accelerate the diminishment of available coin stock, then theoretically that should increase demand for the remaining supply and, in turn, the coin’s value. Burning involves sending coins to an irrecoverable address, which AntPool calls a “black hole.” The end goal is to further enrich people already hoarding Bitcoin Cash tokens.

“While having active users spending BCH is very important for the ecosystem, having investors who hold BCH is also a fundamental requirement for maintaining a strong economy,” the announcement stated. “Without these holders, BCH’s exchange value loses significant support. We believe that they too should profit from the growth of BCH by their continued stake in the Bitcoin Cash ecosystem.”

Bitmain, the multi-billion dollar Chinese firm behind AntPool, didn’t immediately respond to Motherboard’s request for comment.

AntPool combines people’s computing power to guess the correct number that verifies a block of cryptocurrency transaction data, for which AntPool receives an automatic reward that is split between pool contributors, and the company holds on to the voluntary transaction fees. AntPool accounts for 14 percent of the Bitcoin network’s mining power, and 12 percent of Bitcoin Cash, a recent fork of Bitcoin that is now in competition with its predecessor. Bitcoin Cash has lost half of its value since the beginning of the year (to be fair, so has Bitcoin), which is bad for business. What’s a miner to do? If you’re AntPool, you burn coins.

But since AntPool isn’t the biggest Bitcoin Cash mining concern out there, it needs some help to push the value of Bitcoin Cash up. “We call for other miners to join us in burning 12 percent of the transaction fees collected,” AntPool’s announcement stated.

You can position this practice as “sharing revenue” with the network, as AntPool has, or you could just as easily see the move as artificially inflating the value of an asset that AntPool itself hoards. As one Twitter user put it in response, “Sounds desperate,” and another: “Rekttttttt.”

glenn: Somehow they make fiat money seem less insane

German Supreme Court dismisses Axel Springer lawsuit, says ad blocking is legal


Germany’s Supreme Court dismissed a lawsuit yesterday from Axel Springer against Eyeo, the company behind AdBlock Plus.

The European publishing giant (which acquired Business Insider in 2015) argued that ad blocking, as well as the business model where advertisers pay to be added to a whitelist of permitted ads, violated Germany’s competition law. Axel Springer won a partial victory in 2016, when a lower court ruled that it shouldn’t have to pay for whitelisting.

However, the Supreme Court has now overturned that decision. In the process, it declared that ad blocking and Eyeo’s whitelist are both legal. (German speakers can read the court’s press release.)

After the ruling, Eyeo sent me the following statement from Ben Williams, its head of operations and communications:

Today, we are extremely pleased with the ruling from Germany’s Supreme Court in favor of Adblock Plus/eyeo and against the German media publishing company Axel Springer. This ruling confirms — just as the regional courts in Munich and Hamburg stated previously — that people have the right in Germany to block ads. This case had already been tried in the Cologne Regional Court, then in the Regional Court of Appeals, also in Cologne — with similar results. It also confirms that Adblock Plus can use a whitelist to allow certain acceptable ads through. Today’s Supreme Court decision puts an end to Axel Springer’s claim that they be treated differently for the whitelisting portion of Adblock Plus’ business model.

Axel Springer, meanwhile, described ad blocking as “an attack on the heart of the free media” and said it would appeal to the country’s Constitutional Court.


Smugmug Acquires Flickr


Google shuttering domain fronting, Signal moving to souqcdn.com


Setup alternate domain front.

In preparation for Google shutting down domain fronting.

Closes #7584


Why Nuclear Clocks Will Be the Most Accurate Clocks on Earth


“What time is it?”

It’s a question that’s heard less and less as the proliferation of smartphones puts some of the most advanced timekeeping devices in history into the hands of everyday people. The clocks on smartphones are based on signals from space, sent from the 24 GPS satellites, each of which keeps track of time using four onboard atomic clocks. These clocks measure time based on the frequency at which electrons transition between energy levels within an atom.

Since they were first created in the mid-twentieth century, atomic clocks have been the gold standard of timekeeping. Indeed, the most accurate clock in the world, which is run by the National Institute of Standards and Technology in Colorado, is a ytterbium atomic clock. But now researchers around the world are working on building a better model based not on the electrons of an atom, but the nucleus.

Since protons and neutrons are densely packed in the nucleus and are thus less likely to be disturbed by outside influences, researchers think the nucleus could serve as the basis for an ultra-precise atomic clock in the future. In a new paper published today in Nature, researchers in Germany describe experiments that showed for the first time what this clock would actually look like, a major milestone toward making a nuclear clock a reality.

In 1955, English physicists Louis Essen and Jack Parry built the first accurate atomic clock at the National Physical Laboratory in the UK. It worked by exposing cesium-133 atoms in a vacuum to microwave energy and then measuring how well the atoms absorbed this microwave radiation.

Electrons orbit the nucleus of an atom at certain stable energy levels that depend on the electrical properties of the nucleus itself. These orbits can be changed by adding energy to the system, which causes the electrons to temporarily get bumped up to a higher energy level and emit electromagnetic radiation during the transition. Different types of atoms are able to absorb energy at different wavelengths.

Louis Essen and Jack Parry stand next to the world’s first cesium-133 atomic clock built in 1955 in the UK. Image: Wikimedia Commons

In the case of cesium-133, that wavelength is about 3.2 cm, which means the wave oscillates at a frequency of 9,192,631,770 cycles per second. When cesium-133 atoms are hit with microwaves at this frequency, it causes the atom’s single outermost electron to transition between energy states at the same rate, and it is this rapid transition that was used to formally define the length of a second in 1967. (Another way to think about this is that the cesium atom is a clock and the electron is its pendulum. In this analogy, microwave energy sets the pendulum in motion and every 9,192,631,770 swings is marked as one second on the clock face).
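That wavelength follows directly from the frequency via λ = c/f, which is easy to check:

```python
c = 299_792_458        # speed of light in m/s
f_cs = 9_192_631_770   # cesium-133 hyperfine transition frequency in Hz

wavelength_cm = c / f_cs * 100
print(f"{wavelength_cm:.2f} cm")
# prints "3.26 cm", i.e. the "about 3.2 cm" quoted above
```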

This method of keeping time with atomic clocks has been refined over the last fifty years to the point that the most accurate clock in the world will only deviate by a second over 200 million years, but the basic principles have remained the same. Nevertheless, physicists have wondered if still more precise atomic clocks were possible by using a transition frequency in the nucleus of an atom, rather than in its electrons. The advantage of using a nucleus is that its energy transitions occur at much higher frequencies than electron transitions, which would allow for even more precise measurements of time.

The difficulty with nuclear excitations, however, is that they require much higher energies than electron transitions, because the protons and neutrons of the nucleus are so densely packed. Whereas electrons in cesium-133 can be bumped to a higher energy state at a frequency of approximately 9.2 gigahertz (on the low end of the microwave range of the electromagnetic spectrum), exciting an atom’s nucleus typically requires energy in the x-ray range, where frequencies run from 30 petahertz to 30 exahertz (read: 30 quadrillion to 30 quintillion cycles per second). This high energy requirement was thought to make atomic clocks based on nuclear transitions infeasible.
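To get a feel for the size of that jump, compare the photon energies (E = hf) at the frequencies quoted above, expressed in electronvolts:

```python
H_EV = 4.135667696e-15  # Planck constant in eV*s

e_cesium = H_EV * 9.192_631_770e9  # cesium's ~9.2 GHz microwave transition
e_xray_low = H_EV * 30e15          # 30 petahertz, low end of the x-ray range
e_xray_high = H_EV * 30e18         # 30 exahertz, high end

# The nuclear regime starts millions of times higher in energy
print(f"{e_cesium:.1e} eV vs {e_xray_low:.0f} to {e_xray_high:.0f} eV")
```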

The NIST-F1, one of the most precise atomic clocks in the world. Image: NIST

Yet according to a paper published today in Nature, a team of researchers at the Physikalisch-Technische Bundesanstalt (PTB) in Germany may have found an exception to this rule. According to the paper, it should be possible to excite the nucleus of thorium-229 using ultraviolet light, a far more manageable energy requirement; similar lasers are already used in laser-based atomic clocks today. Moreover, the PTB team performed the first measurements that may pave the way for an atomic clock based on the energy transitions of a thorium-229 nucleus.

Although ten different teams around the globe are exploring the possibility of a thorium-229 nuclear clock, progress has been slow because the energy required to excite the thorium nucleus is still known only approximately. In order to make an ultra-precise nuclear clock, physicists have to dial in on the correct UV frequency. As PTB physicist Ekkehard Peik put it in a statement, finding the correct frequency “resembles the proverbial search for a needle in a haystack.”

Although Peik and his colleagues haven’t yet dialed in the correct frequency to transition a thorium-229 nucleus from a ground state to an excited state, as would be the case in a future nuclear clock, their most recent paper did manage to characterize what this excited state would look like.

Read More: The Physicist Building the World’s Most Precise Clock

To do this, they obtained thorium-229 nuclei already in their excited state as a result of the alpha decay of uranium-233. These ions were then caught and stored in an ion trap, where the scientists could precisely measure the transition frequencies of the electrons in the thorium ion. Since electron energies are directly influenced by the nucleus of the atom, this gave Peik and his colleagues insight into how the nucleus of the thorium-229 atom will behave during a transition provoked by an ultraviolet laser.

So while the needle in the haystack—that is, the correct UV frequency to excite a thorium-229 nucleus—hasn’t yet been found, Peik and his colleagues now have a picture of what the needle looks like. That way, they’ll know it when they see it.


