DevOps and Docker Talk: Cloud Native Interviews and Tooling
By Bret Fisher
Rated 4.5 (5,151 ratings)
The podcast currently has 173 episodes available.
Bret and Nirmal Mehta are joined by Ken Collins to dig into using AI for more than coding, and whether we can build an AI assistant that knows us.
They touch on a lot of tools and platforms. "We're a bit all over the place on this one, from talking about AI features in our favorite note-taking apps like Notion, to my journey of making an OpenAI assistant with all of the Q&A from my courses, thousands of questions and answers, to coding agents and more."
Ken, a local friend in Virginia Beach, was on the show last year talking about AWS Lambda, and we've both been trying to find value in all of these AI tools in our day-to-day work.
Be sure to check out the live recording of the complete show from October 24, 2024 on YouTube (Stream 279).
★Topics★
The Lifestyle Copilot Blog Post
Serverless AI Inference with Gemma 2 Blog Post
Creators & Guests
You can also support my free material by subscribing to my YouTube channel and my weekly newsletter at bret.news!
Grab the best coupons for my Docker and Kubernetes courses.
Join my cloud native DevOps community on Discord.
Grab some merch at Bret's Loot Box
Homepage bretfisher.com
Bret explores the spectrum of user interfaces and tools available for managing Kubernetes clusters as of Autumn 2024.
This solo episode touches on both paid and open-source options, looking at their features, benefits, and drawbacks. Key tools covered include Lens, Aptakube, K8Studio, Visual Studio Code's Kubernetes extension, K9S, Portainer, and Meshery.
Bret also discusses specialized tools like Headlamp and the Argo CD dashboard, and their specific use cases and advantages.
Bret and Nirmal are joined by Chris Kühl and Jose Blanquicet, the maintainers of Inspektor Gadget, the new eBPF-focused multitool, to see what it's all about.
Inspektor Gadget aims to solve some serious problems with managing Linux kernel-level tools via Kubernetes. Each security, troubleshooting, or observability utility is packaged as an OCI image and deployed to Kubernetes (and now Linux hosts directly) via the Inspektor Gadget CLI and framework.
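As a rough sketch of that workflow (subcommand names follow the Inspektor Gadget docs and vary by version; the gadget image tag here is illustrative):

```shell
# Install the kubectl plugin via krew, then deploy the agent to the cluster
kubectl krew install gadget
kubectl gadget deploy

# Run a gadget that ships as an OCI image
kubectl gadget run ghcr.io/inspektor-gadget/gadget/trace_open:latest

# On a plain Linux host, the standalone `ig` binary runs the same image
sudo ig run ghcr.io/inspektor-gadget/gadget/trace_open:latest
```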
Be sure to check out the live recording of the complete show from September 12, 2024 on YouTube (Stream 277).
★Topics★
Inspektor Gadget website
Inspektor Gadget Docs
GitHub Repository
Bret and Nirmal are joined by Maria Vechtomova, an MLOps Tech Lead and co-founder of Marvelous MLOps, to discuss the obvious and not-so-obvious differences between an MLOps Engineer role and traditional DevOps jobs.
Maria is here to discuss how DevOps engineers can adopt and operate machine learning workloads, also known as MLOps. With her expertise, we'll explore the challenges and best practices for implementing ML in a DevOps environment, including some hot takes on using Kubernetes.
Be sure to check out the live recording of the complete show from June 20, 2024 on YouTube (Stream 271).
★Topics★
Marvelous MLOps on LinkedIn
Marvelous MLOps Substack
Marvelous MLOps YouTube Channel
Bret and Nirmal were joined by Emile Vauge, CTO of Traefik Labs, to talk all about Traefik 3.0.
We talk about what's new in Traefik 3, 2.x-to-3.0 migrations, Kubernetes Gateway API, WebAssembly (cloud native Wasm), HTTP/3, Tailscale, OpenTelemetry, and much more!
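Since the Kubernetes Gateway API comes up here, a minimal HTTPRoute attaching to a Traefik-managed Gateway might look like this (the Gateway, hostname, and Service names are illustrative):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: whoami
spec:
  parentRefs:
    - name: traefik-gateway   # an existing Gateway handled by Traefik
  hostnames:
    - "whoami.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: whoami        # backing Service
          port: 80
```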
Be sure to check out the live recording of the complete show from June 6, 2024 on YouTube (Stream 269). Includes demos.
★Topics★
Traefik Website
Traefik Labs Community Forum
Traefik's YouTube Channel
Gateway API helper CLI
ingress2gateway migration tool
Bret is joined by DockerSlim (now mintoolkit) founder Kyle Quest to show off how to slim down your existing images with various options.
The slimming options include distroless bases like Chainguard Images and Nix. We also look at using the new "mint debug" feature to exec into existing images and containers on Kubernetes, Docker, Podman, and containerd. Kyle joined us for a two-hour livestream to discuss mint's evolution.
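As a sketch of the two features discussed (subcommand names follow the mintoolkit README; exact flags vary by version, and the image and container names are placeholders):

```shell
# Analyze a target image and build a slimmed-down variant of it
mint slim --target nginx:latest

# Attach a debugging shell to a running container that has no shell of its own
mint debug my-distroless-container
```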
Be sure to check out the live recording of the complete show from May 30, 2024 on YouTube (Stream 268). Includes demos.
★Topics★
Mint repository in GitHub
Bret is joined by Shahar Azulay, Groundcover CEO and Co-Founder, to discuss their new approach to fully observe K8s and its workloads with a "hybrid observability architecture."
Groundcover is a new, cloud-native, eBPF-based platform that rethinks how observability solutions are architected and priced. It can drastically reduce your monitoring, logging, and tracing costs and complexity: it stores all its data in your clusters and needs only one agent per host for full observability and APM.
We dig into the deployment, architecture, and how it all works under the hood.
Be sure to check out the live recording of the complete show from June 27, 2024 on YouTube (Stream 272). Includes demos.
★Topics★
Groundcover Discord Channel
Groundcover Repository in GitHub
Groundcover YouTube Channel
Join the Groundcover Slack
Bret and Nirmal are joined by Continue.dev co-founder Nate Sesti to walk through an open source replacement for GitHub Copilot.
Continue lets you use a set of open source and closed source LLMs in JetBrains and VSCode IDEs for adding AI to your coding workflow without leaving the editor.
You've probably heard about GitHub Copilot and other AI code assistants. The Continue team has created a completely open source alternative, or maybe a superset, of these existing tools. Along with being open source, it's very configurable and lets you choose multiple models for code completion and chat in VSCode and JetBrains, with more editors coming soon.
This show builds on our recent Ollama episode: Continue can use Ollama in the background to run a local LLM for you, if that's what you want Continue to do, rather than calling internet-hosted LLMs.
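For reference, pointing Continue at a local Ollama model was done through its JSON config file (~/.continue/config.json at the time of the show; the model name here is an example):

```json
{
  "models": [
    {
      "title": "Llama 3 (local)",
      "provider": "ollama",
      "model": "llama3"
    }
  ]
}
```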
Be sure to check out the live recording of the complete show from May 16, 2024 on YouTube (Ep. 266). Includes demos.
★Topics★
Continue.dev Website
Bret and Nirmal are joined by Michael Fischer of AWS to discuss why we should use Graviton, their arm64 compute with AWS-designed CPUs.
Graviton is AWS' term for their custom ARM-based EC2 instances. We now have all major clouds offering an ARM-based option for their server instances, but AWS was first, way back in 2018. Fast forward 6 years and AWS is releasing their 4th generation Graviton instances, and they deliver all the CPU, networking, memory and storage performance that you'd expect from their x86 instances and beyond.
I'm a big fan of ARM-based servers and the price points that AWS gives us. They have been my default EC2 instance type for years now, and I recommend it for all projects I'm working on with companies.
We get into the history of Graviton and how easy it is to build and deploy containers and Kubernetes clusters on Graviton, even mixing two different platform types in the same cluster. We also cover how to build multi-platform images using Docker BuildKit.
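The multi-platform build itself is short with BuildKit's buildx (the image name is a placeholder):

```shell
# Create and select a builder that can target multiple platforms
docker buildx create --name multi --use

# Build for x86 and Graviton (arm64) in one pass and push a single multi-arch tag
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest \
  --push .
```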
Be sure to check out the live recording of the complete show from May 9, 2024 on YouTube (Ep. 265). Includes demos.
★Topics★
Graviton + GitLab + EKS
Porting Advisor for Graviton
Graviton Getting Started
Bret and Nirmal are joined by friend of the show, Matt Williams, to learn how to run your own local ChatGPT clone and GitHub Copilot clone with Ollama and Docker's "GenAI Stack," to build apps on top of open source LLMs.
We've designed this conversation for tech people like myself, who are no strangers to using LLMs in web products like ChatGPT, but are curious about running open source generative AI models locally and how to set up a Docker environment to develop on top of these open source LLMs.
Matt walks us through all the parts of this solution and, with detailed explanations, shows us how Ollama makes it easier on Mac, Windows, and Linux to set up LLM stacks.
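As a sketch of the local workflow this episode covers (the model name is an example; Ollama listens on localhost:11434 by default):

```shell
# Pull a model and chat with it locally
ollama pull llama3
ollama run llama3 "Explain containers in one sentence."

# The same model is available over a local REST API for building apps on top
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello", "stream": false}'
```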
Be sure to check out the video version of this episode for any demos.
This episode is from our YouTube Live show on April 18, 2024 (Stream 262).