The DevOps FAUNCast

By FAUN

We made a march towards continuous development, and we changed the way we develop, build, deploy, secure, and monitor software.
Do you think you missed this march? The good news is that it's actually happening continuously. Join us! We're part of it too. We observe it, we document it, and we tell it.

In each episode of The DevOps FAUNCast, we'll treat you to an in-depth talk about a topic related to DevOps, SRE, distributed computing, Kubernetes, cloud computing, containers, and other related concerns.

You'll listen to the stories behind the stories and learn new things. www.faun.dev/podcast

GitOps: This is What You Need to Know (1/2)

This episode is sponsored by Linode:

Simplify your infrastructure and cut your cloud bills in half with Linode’s Linux virtual machines. Develop, deploy, and scale your modern applications faster and easier. Whether you’re developing a personal project or managing larger workloads, you deserve simple, affordable, and accessible cloud computing solutions.

Get started on Linode today with $100 in free credit for listeners of the FAUNCast. You can find all the details at linode.com/faun.

Linode has data centers around the world with the same simple and consistent pricing regardless of location. Choose the data center nearest to you.

You also receive 24/7/365 human support with no tiers or hand-offs regardless of your plan size. You can choose shared and dedicated compute instances or you can use your $100 in credit on S3-compatible object storage, Managed Kubernetes, and more. 

If it runs on Linux, it runs on Linode.

Visit linode.com/faun and click on the “Create Free Account” button to get started.

---


On a chilly November morning in 2020, at GitOps Days, we heard Alexis Richardson speak over video conference. Git enabled cloud-native development: it gave us the tooling for distributed source control, continuous integration, container image distribution, and more.

The rate of development improved with Git. However, the one thing Git did not give us is better operations. Git focuses on collaboration between developers and on versioning; it was never intended to help with operations. At least, that was the case before GitOps.

GitOps gives you a mechanical, programmatic, automated way to operate. But why? Why would you want this new way to operate? Why does this matter?

Ok, before we dig deep into any answers, let's start with another question: How do you know your systems are in a correct state now?

If auditors walked into your office tomorrow morning and asked whether all the applications in your Kubernetes cluster are in a correct state right now, how would you prove it?
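
To make that question concrete, here is a minimal sketch of the core GitOps idea (a toy resource model of our own, not any particular tool such as Flux or Argo CD): the desired state is declared in Git, and a reconciler diffs it against what the cluster actually runs.

```python
# A minimal GitOps-style drift check (hypothetical resource model).

def diff_states(desired: dict, live: dict) -> dict:
    """Return every resource whose observed spec differs from the declared one."""
    drift = {}
    for name, spec in desired.items():
        if live.get(name) != spec:
            drift[name] = {"declared": spec, "observed": live.get(name)}
    return drift

# Desired state, as committed to the Git repository.
desired = {"web": {"image": "web:1.4", "replicas": 3}}
# Live state, as reported by the cluster.
live = {"web": {"image": "web:1.3", "replicas": 3}}

print(diff_states(desired, live))
# A non-empty diff is drift to reconcile; an empty diff is an auditable
# answer to "are we in a correct state right now?"
```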

In this episode, we will dive deep into GitOps and its raison d'être.

We will answer questions you may have asked about GitOps, like the advantages and disadvantages of adopting it. We will also walk through the important patterns and security considerations in adopting GitOps.

Jan 26, 2021 · 11:18
Securing Kubernetes: The Paranoid Guide

This episode is sponsored by The Chief I/O, an online publication where you can read and share stories about cloud native, DevOps, Kubernetes, AIOps, and many other topics. You can subscribe to The Chief I/O newsletter to receive our best stories and the latest cloud native news and trends twice a week. Visit thechief.io/newsletter.

It's a sunny May afternoon at KubeCon in Barcelona. Liz Rice is on the stage discussing penetration testing in Kubernetes.

She says that one of the reasons you might want to do penetration testing is stories like this one.

In 2018, Tesla left its Kubernetes Dashboard open to the internet, and the Dashboard had cluster-admin privileges.

They were hacked, and the end result was that their infrastructure was used to run cryptocurrency-mining malware.

"The hackers had infiltrated Tesla's Kubernetes console, which was not password-protected," RedLock researchers wrote. "Within one Kubernetes pod, access credentials were exposed to Tesla's AWS environment, which contained an Amazon S3 (Amazon Simple Storage Service) bucket that had sensitive data such as telemetry."

It was a big headline and one that prompted the larger Kubernetes industry to focus more on security.

But why?

How did one of the biggest tech companies in Silicon Valley get hacked?

Is it simply a human issue? Or is there more to Security in Kubernetes?

I'm your host Kassandra Russel, and today we are going to talk about Security in Kubernetes.

We will examine the differences between securing a traditional environment and a container-based environment.

Next, we will discuss industry standards and emerging thought patterns around security.

And finally, we will go through some of the best security practices and general security advice for production workloads in Kubernetes.

Before diving into all of this: we've been busy over the last few weeks working on a new project. If you like this podcast, you will certainly like it. It's a surprise, and we'll say more about it in the future. In the meantime, you can subscribe to the podcast announcement list; we'll announce it there soon.

Back to the subject at hand, remember the two generals' problem from one of our previous episodes?

It's a classic thought experiment exposing an unsolvable problem, demonstrating the design challenges of distributed systems and the pitfalls of reaching consensus over a lossy network.
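
As a quick illustration (a toy simulation, not a proof), the sketch below models the exchange over a channel that drops each message with some probability; however many acknowledgements are exchanged, whoever sent the last one is left uncertain.

```python
import random

def send(drop_prob: float = 0.3) -> bool:
    """A lossy channel: the message arrives with probability 1 - drop_prob."""
    return random.random() > drop_prob

def attempt_agreement(max_rounds: int = 10):
    """Two generals keep acknowledging each other's last message. Whoever
    sent the most recent message can never know it arrived, so certainty
    is never reached, no matter how many rounds are exchanged."""
    for round_no in range(1, max_rounds + 1):
        if not send():
            return round_no, False  # message lost; sender is left uncertain
    return max_rounds, False  # even if all arrived, the final ack is unconfirmed

random.seed(7)
print(attempt_agreement())  # always (n, False): agreement is never certain
```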

If you are interested in knowing more about this, we recommend you listen to our 5th episode “The Ubiquity of Kubernetes”.

Oct 23, 2020 · 12:38
Diving Deep Into Serverless Architectures (2/2)

This episode is sponsored by The Chief I/O.

The Chief I/O serves Cloud-Native professionals with the knowledge and insights they need to build resilient and scalable systems and teams. Visit The Chief I/O, read our publication, and subscribe to our newsletter and RSS feed. You can also apply to become a writer.

Visit www.thechief.io.

The global serverless architecture market size is projected to grow from USD 7.6 billion in 2020 to USD 21.1 billion by 2025, at a Compound Annual Growth Rate of 22.7% during the forecast period.

The major factors driving the growth of the serverless architecture market include the rising need to shift from Capital Expenditure (CapEx) to Operating Expenditure (OpEx) by removing the need to manage servers, thereby reducing the infrastructure cost.

This is what the research company MarketsAndMarkets states in one of its reports about serverless.

The expected rise of Kubernetes may make some of us think that serverless is just hype that will disappear with the emergence of more robust frameworks and architectures, but industry trends show that this is wrong.

Serverless has figured out how to adapt to competition from distributed systems such as Kubernetes. Instead of disappearing and ceding ground to these technologies, serverless rode the wave and found its niche. The examples of AWS Fargate, Google Cloud Run, and Knative make that clear.

It is possible to run serverless in public or private clouds, using a micro-VM technology like Firecracker or a containerization technology like Docker running on top of a Kubernetes-based cluster.

In short, serverless made it through the storm and gained wide recognition.

This is part 2 of our series about Serverless. In part 1, we discussed technical details about Serverless use cases, best practices, and productization. Today, we are going to continue in the same direction but in a different way, so stay tuned.

Wisdom and experience dictate that before taking any application to production, you must ensure that it is fully observable, both at a component level and end to end.

This practice applies to serverless too. However, the abstraction and complexity of serverless architectures make monitoring, observability, and debugging a real challenge. Most notably, you don't have a full overview of every part of your system.

This gets even worse when you run multiple serverless functions that work together.
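
One common mitigation (a sketch of our own, with hypothetical event fields, not any vendor's implementation) is to propagate a correlation ID through every function and emit structured logs, so a single request can be traced across the whole chain.

```python
import json
import uuid

def handler(event, context):
    """An AWS-Lambda-style handler that tags every log line with a
    correlation ID, so one request can be followed across several
    serverless functions working together."""
    # Reuse the caller's ID when present; otherwise this is the trace root.
    correlation_id = event.get("correlation_id") or str(uuid.uuid4())

    def log(message, **fields):
        # Structured JSON logs are easy to filter by correlation_id later.
        print(json.dumps({"correlation_id": correlation_id,
                          "message": message, **fields}))

    log("request received", function="checkout")
    # Pass the ID along so the next function joins the same trace.
    downstream_event = {"correlation_id": correlation_id,
                        "order": event.get("order")}
    log("invoking next function", target="payments")
    return downstream_event

# Local smoke test; in production the platform supplies event and context.
print(handler({"order": "A-1001"}, None))
```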

For the same reasons, the Serverless ecosystem has seen the birth of different Serverless monitoring solutions like Dashbird.

That was Taavi Rehemägi, CEO and co-founder of Dashbird. As the founder of a serverless monitoring startup, he has a front-row view of these challenges, and we're excited to have him as our main guest today.

We wanted to learn about his vision of serverless architectures and the challenges of using them. We would also like to understand the use cases, best practices, and his experience as an entrepreneur in the DevOps and cloud-native space.

Sep 24, 2020 · 38:03
Diving Deep Into Serverless Architectures (1/2)

This episode is sponsored by The Chief I/O.

The Chief I/O serves Cloud-Native professionals with the knowledge and insights they need to build resilient and scalable systems and teams. Visit The Chief I/O, read our publication, and subscribe to our newsletter and RSS feed. You can also apply to become a writer.

Visit www.thechief.io.

In November 2017, The Register published an article titled 'Lambda and serverless is one of the worst forms of proprietary lock-in we've ever seen in the history of humanity'.

The article goes on to elaborate: "It's code that is tied not just to hardware – which we've seen before – but to a data center, you can't even get the hardware yourself. And that hardware is now custom fabbed for the cloud providers with dark fiber that runs all around the world, just for them. So literally, the application you write will never get the performance or responsiveness or the ability to be ported somewhere else without having the deployment footprint of Amazon."

What happened next was nothing short of spectacular. Well-known figures in the cloud computing space, such as John Arundel, Forrest Brazeal, and Yan Cui, started voicing diverging opinions.

Yan Cui is known for his serverless articles on Medium and on his blog. In an article published on lumigo.com titled “You are wrong about vendor lock-in”, he wrote:

The biggest misconception about serverless vendor lock-in arguments is that technology choices are never lock-ins. Being “locked in” implies that there is no escape, but that’s not the case with technology choices. Not with serverless, not with databases, not with frameworks, not with programming languages.

Instead, technology choices create coupling, and no matter the choices you make, your solution will always be coupled to something. Moving away from those technologies requires time and effort, but there is always a way out.
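
One practical way to keep that coupling cheap to undo, sketched below under hypothetical names, is the ports-and-adapters pattern: hide the provider behind a small interface so that moving away means rewriting one adapter, not the application.

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """The only storage interface the application code is allowed to see."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

class S3Store(BlobStore):
    """Provider-specific adapter: the coupling lives here, and only here."""
    def put(self, key: str, data: bytes) -> None:
        # A real implementation would call the provider SDK here
        # (e.g. boto3's put_object); stubbed out for the sketch.
        print(f"S3 put {key} ({len(data)} bytes)")

class LocalStore(BlobStore):
    """Drop-in replacement for tests, or for leaving the provider."""
    def put(self, key: str, data: bytes) -> None:
        with open(key, "wb") as f:
            f.write(data)

def archive_report(store: BlobStore) -> None:
    # Application logic depends on the interface, never on the provider.
    store.put("report.txt", b"quarterly numbers")

archive_report(S3Store())  # switching providers means switching one adapter
```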

I'm your host Kassandra Russel, and today we are going to discuss serverless architectures.

We will examine arguments for and against this technology.

Next, we will discuss architectures, triggers, and use cases for serverless.

Most importantly, we will discuss how to get your serverless functions productionized.

This episode is the first part of a series about Serverless; more topics will be discussed in the next episodes.

If you are thinking about adopting serverless or if you are already using it, this episode will give you useful insights, so stay tuned.


Computing started with bare-metal servers, then moved to virtual machines, and later to containers and distributed systems. In 2006, a product called Zimki offered the first Functions as a Service, enabling a “pay as you go” model for code execution.

Zimki was not commercially successful, but it paved the way for a new business model for computing services. Because of Zimki, Functions as a Service, or FaaS, became a new category in the cloud space.

In 2008, Google launched Google App Engine, which allowed “metered billing” for applications. This new offering let developers create functions using a custom Python framework. The limitation was glaringly obvious: developers were not able to execute arbitrary code.

In November 2014, AWS officially announced AWS Lambda, a fully fledged Functions as a Service platform that allowed the execution of arbitrary code.
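
For readers who haven't seen one, this is roughly what “arbitrary code” on a FaaS platform looks like: a plain function the platform invokes once per event. A minimal AWS-Lambda-style sketch (the event shape is our assumption):

```python
def handler(event, context):
    """The entry point a FaaS platform invokes once per incoming event.
    You write the function; the provider owns the servers, the scaling,
    and the pay-per-invocation metering."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

# Local smoke test; in production the platform supplies event and context.
print(handler({"name": "FAUN"}, None))
```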

In our weekly DevOps newsletter, Shipped, we curate must-read serverless tutorials, news, and stories. Each week, tons of articles are published; we read them for you, choose the best ones, and share them with you. You can subscribe to Shipped by visiting faun.dev/join.


Sep 08, 2020 · 13:57
The Four Golden Signals, SLO, SLI, and Kubernetes

This episode is sponsored by The Chief I/O.

The Chief I/O is the IT leaders' source for insights about DevOps, Cloud-Native, and other related topics. It’s also a place where companies can share their stories and experience with the community. Visit www.thechief.io to read insightful stories from cloud-native companies or to submit yours.

It's 2018 at KubeCon North America: a loud echo in the microphone, and then Ben Sigelman is on the stage.

There is conventional wisdom that observing microservices is hard. Google and Facebook solved this problem, right? They solved it in a way that let observability scale across multiple orders of magnitude to suit their use cases.

The prevailing assumption that we needed to sacrifice features in order to scale is wrong. In other words, the notion that people need to solve scalability problems as a tradeoff for having a powerful set of features is incorrect.

People assume that once you have the three pillars of observability, metrics, logging, and tracing, everything is suddenly solved. However, more often than not, this is not the case.

I'm Kassandra Russel, and today we are going to discuss Observability and why this is a critical day-2 operation in Kubernetes. Next, we will discuss the problems with Observability and leverage its three pillars to dive deep into some concepts like service level objectives, service level indicators, and finally, service level agreements.
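
Before the deep dive, a minimal sketch of how those terms fit together (illustrative numbers only): an SLI is a measurement, an SLO is a target for that measurement, and the slack between them is your error budget.

```python
def availability_sli(successes: int, total: int) -> float:
    """SLI: the measured fraction of successful requests."""
    return successes / total

def error_budget_remaining(sli: float, slo: float) -> float:
    """Fraction of the allowed failure rate (1 - SLO) still unspent."""
    allowed_failures = 1.0 - slo
    observed_failures = 1.0 - sli
    return 1.0 - observed_failures / allowed_failures

sli = availability_sli(successes=999_500, total=1_000_000)  # 99.95% measured
slo = 0.999                                                  # 99.9% target
print(f"SLI = {sli:.4%}, error budget remaining = "
      f"{error_budget_remaining(sli, slo):.0%}")
# -> SLI = 99.9500%, error budget remaining = 50%
```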

Welcome to episode 6!

Moving from a monolithic world to a microservices world solved a lot of problems, for the scalability of machines but also of the teams working on them. Kubernetes largely empowered us to migrate these monolithic applications to microservices. However, it made our applications distributed in nature.

The nature of distributed computing added complexity to how microservices interact; the multiple dependencies of each service produce a higher monitoring overhead.

Observability became more critical in this context.

According to some, observability is another soundbite without much meaning. However, not everyone sees it this way. Charity Majors, a proponent of observability, defines it as the power to answer any question about what's happening on the inside of the system just by observing the outside of the system, without having to ship new code to answer new questions. It's truly what we need our tools to deliver now that system complexity is outpacing our ability to predict what's going to break.

According to Charity, you need observability because it lets you “completely own” your system: you can make changes based on data observed from the system. This makes observability a powerful tool in highly complex systems like microservices and distributed architectures.

Imagine you are sleeping one night and suddenly your phone rings.

Jul 31, 2020 · 12:38
The Ubiquity of Kubernetes

This episode is sponsored by The Chief I/O.

The Chief I/O is the IT leaders' source for news and insights about DevOps, Cloud Computing, Kubernetes, Observability, Cloud Native, AIOps, and other interesting topics. 

The Chief I/O makes cutting-edge topics accessible to decision-makers and software engineering professionals. It's a place where companies can share their expertise while showcasing their products and services.

Visit www.thechief.io today and apply to become a writer.


It is 2017, and Kelsey Hightower is on the KubeCon stage. The sound of the microphone starts echoing... “Raise your hands if you think installing Kubernetes is easy.” This is how the well-known Kubernetes advocate started his presentation.

Explaining abstract concepts, as we all know, is complicated. How do you wrap your head around a concept such as a cluster?

The cluster is the core concept of Borg and, later on, Kubernetes. But what is it, and why is it important?

Kelsey Hightower made his name by turning concepts like these into understandable metaphors.

In the same year, 2017, at the O’Reilly Software conference, he explained why you would use something like Kubernetes, using the game Tetris. Imagine your machine is a Tetris board: everything is automated but with no awareness of CPU and memory. The blocks all fall straight down without any adjustment; very soon the game is over, and your machines are overrun.

Now imagine you use Kubernetes to schedule these “blocks” of workloads, fitting them into the machines’ spare resources.

When blocks are moved to the best possible places, Tetris becomes a never-ending game, and so does your cluster's capacity. Knowing where a workload fits best is one of the greatest advantages of using Kubernetes: it schedules each workload based on available CPU and memory.
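
As a toy version of that Tetris-style placement (a drastic simplification of the real Kubernetes scheduler, with made-up numbers), the sketch below puts each workload on the node whose spare CPU and memory fit it most tightly.

```python
def schedule(pod: dict, nodes: list):
    """Place a workload on a node with enough spare CPU and memory,
    preferring the tightest fit -- the 'Tetris' move."""
    fitting = [n for n in nodes
               if n["cpu_free"] >= pod["cpu"] and n["mem_free"] >= pod["mem"]]
    if not fitting:
        return None  # game over: no spare capacity anywhere
    best = min(fitting, key=lambda n: (n["cpu_free"], n["mem_free"]))
    best["cpu_free"] -= pod["cpu"]  # reserve the slot, like a placed block
    best["mem_free"] -= pod["mem"]
    return best["name"]

nodes = [{"name": "node-a", "cpu_free": 2.0, "mem_free": 4.0},
         {"name": "node-b", "cpu_free": 0.5, "mem_free": 1.0}]
print(schedule({"cpu": 0.5, "mem": 1.0}, nodes))  # -> node-b (tightest fit)
```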

Kubernetes solved a lot of problems. It created a whole ecosystem and community around it. The rate of adoption and its ubiquity became a rallying point to many software engineers.

Docker helped make containers mainstream; however, it introduced a new problem: how do you run containers in production? Kubernetes solved that problem.

Today, we will walk you through Kubernetes' adoption journey and how it differentiated itself from other container orchestration systems.

We’re going to discuss the early growing pains Kubernetes had. Finally, we’ll talk about Kubernetes adoption best practices. Should you self-host your own clusters or use a fully managed service? When is a single cluster a good setup, and when do you need multiple clusters?

Jul 10, 2020 · 18:14
The Missing Introduction to Containerization and the Origins of Kubernetes

It is 2016, and Karl Isenberg is on the center stage. "Container Orchestration Wars," he said. The stage was set for the orchestration race. Armed to the teeth, the contenders at the time were DC/OS (Mesos), Kubernetes, Nomad, and Docker Swarm, among others. Behind each technology stood a company or organization: Google, HashiCorp, Docker Inc., and Apache. Each one was eager to win the day and add a feather to its cap. The global application container market was expected to grow from USD 1.2 billion in 2018 to USD 4.98 billion by 2023, at a compound annual growth rate of 32.9% during the forecast period.

Which orchestration system will win the war?
Who's going to have the lion's share?
Will the winner take all?
Or will there be multiple winners?

It is all obvious now: Elvis has left the building, and some of those technologies didn't survive.

Today, you will get your free ticket to travel back to the '50s to discover the first containers, then move forward to the '70s, and so on until the present day.

We are going to go through the interesting history of containerization and discover how it has evolved. We will talk about container orchestration systems, Docker, and the problems it solved. We are going to understand why Docker and containers became a big deal, and finally, we'll wrap up with the history of Kubernetes.


Jun 22, 2020 · 15:54
From Abacus to Containers - A Brief History of Computing

We live in a world built by our collective ingenuity and imagination. We see further and more than our predecessors, not because of keener vision or greater height, but because we are standing on the shoulders of the giants who came before us.

The Japanese word "sensei" literally means the person who came before.

Do you remember the first time you touched a computer keyboard?

Do you remember the clanging sound of a typewriter?

Do you remember your first HTML rendered on the world wide web or your first "hello world" application?

Maybe you were a gamer, and you blew into the cartridges of your family console?

No doubt, your first lines of code rely on the combined outcome of thousands of years of accumulated knowledge and wisdom.

Today, we will navigate through history to discover how our ancestors made knowledge out of information. We'll talk about the technologies that have marked contemporary and modern history. Then we'll return to modern times to talk about the first web server, virtualization, cloud computing, Docker, and Kubernetes. This is the tale of computing, in which humans are the heroes and their greatest weapon is imagination.

May 27, 2020 · 13:37
The first crisis in software engineering, the roots of Agile and the short story of DevOps

For some of us, our interest in software engineering started when we played our first video game as children; for others, it started with a coding bootcamp, or later, in college.

Whichever way you started, there's one thing most of us share: we initially set out to build visible code, whether a game, a website, or a mobile application.

At this early stage of our understanding of software engineering, you very rarely hear of people who want to develop APIs, manage networks, or maintain production systems... In the beginning, we were charmed by the light and colors shining out of our screens, but with time, driven by wonder and curiosity, we started asking questions about what's really running inside, how it's running, and, more importantly, how to create something similar ourselves.

We started seeking the answers, and that is when we discovered a fascinating new world of possibilities: the backend that powers the frontend, the technologies that everybody uses but no one sees, the foundation beneath it all.

From there, choices and possibilities diverged. Many chose to stay in development, as backend developers did. Others went into infrastructure and networking, like system and network engineers. Each took a different track, but these tracks met once again with DevOps.

Today, we're going to tell the short story of DevOps. But before that, we're going to travel back in time to the first crisis in software engineering, passing through agile methodologies, and arriving at DevOps.


May 03, 2020 · 11:38
We Made a March Towards Continuous Development: Introducing The DevOps FaunCast
Apr 22, 2020 · 08:14